survey_title | section_num | references | section_outline
---|---|---|---|
A Survey on State-of-the-Art Drowsiness Detection Techniques | 8 |
---
paper_title: Driver’s Face Detection Using Space-time Restrained Adaboost Method
paper_content:
Face detection is the first step of vision-based driver fatigue detection methods. Traditional face detection methods have problems of high false-detection rates and long detection times. A space-time restrained Adaboost method is presented in this paper that resolves these problems. Firstly, the possible position of a driver’s face in a video frame is measured relative to the previous frame. Secondly, a space-time restriction strategy is designed to restrain the detection window and scale of the Adaboost method, reducing the time consumption and false detections of face detection. Finally, a face knowledge restriction strategy is designed to confirm the faces detected by this Adaboost method. Experiments compare the methods and confirm that a driver’s face can be detected rapidly and precisely.
---
paper_title: Real time drowsiness detection using eye blink monitoring
paper_content:
According to analysis reports on road accidents of recent years, it is well established that the main cause of road accidents resulting in deaths, severe injuries and monetary losses is a drowsy or sleepy driver. A drowsy state may be caused by lack of sleep, medication, drugs or driving continuously for a long time period. The increasing rate of road accidents caused by drowsiness during driving indicates the need for a system that detects such a state of the driver and alerts him prior to the occurrence of any accident. During recent years, many researchers have shown interest in drowsiness detection. Their approaches basically monitor either physiological or behavioral characteristics related to the driver or measures related to the vehicle being used. A literature survey summarizing some of the recent techniques proposed in this area is provided. To deal with this problem we propose an eye blink monitoring algorithm that uses eye feature points to determine the open or closed state of the eye and activate an alarm if the driver is drowsy. Detailed experimental findings are also presented to highlight the strengths and weaknesses of our technique. An accuracy of 94% has been recorded for the proposed methodology.
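The algorithm above decides the eye's open or closed state from eye feature points and raises an alarm on prolonged closure. The snippet below is a minimal, hypothetical sketch of that idea using the eye aspect ratio (EAR) computed from six eye landmarks; the landmark source, the 0.2 EAR threshold and the 15-frame alarm count are illustrative assumptions, not values from the cited paper.

```python
import numpy as np

def eye_aspect_ratio(eye):
    """eye: (6, 2) array of landmarks ordered around the eye contour."""
    eye = np.asarray(eye, dtype=float)
    v1 = np.linalg.norm(eye[1] - eye[5])   # first vertical distance
    v2 = np.linalg.norm(eye[2] - eye[4])   # second vertical distance
    h = np.linalg.norm(eye[0] - eye[3])    # horizontal distance
    return (v1 + v2) / (2.0 * h)

class BlinkMonitor:
    def __init__(self, ear_threshold=0.2, closed_frames_for_alarm=15):
        self.ear_threshold = ear_threshold                  # assumed closure threshold
        self.closed_frames_for_alarm = closed_frames_for_alarm
        self.closed_run = 0

    def update(self, left_eye, right_eye):
        """Return True when the closed-eye run is long enough to raise a drowsiness alarm."""
        ear = (eye_aspect_ratio(left_eye) + eye_aspect_ratio(right_eye)) / 2.0
        self.closed_run = self.closed_run + 1 if ear < self.ear_threshold else 0
        return self.closed_run >= self.closed_frames_for_alarm

# Toy usage with synthetic landmarks shaped like an open eye.
open_eye = [(0, 0), (1, 1), (2, 1), (3, 0), (2, -1), (1, -1)]
monitor = BlinkMonitor()
print(monitor.update(open_eye, open_eye))    # False: the eyes are open
```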
---
paper_title: Driver Drowsiness Detection Using Eye-Closeness Detection
paper_content:
The purpose of this paper was to devise a way to alert drowsy drivers in the act of driving. One of the causes of car accidents comes from drowsiness of the driver. Therefore, this study attempted to address the issue by creating an experiment in order to calculate the level of drowsiness. A requirement for this paper was the utilisation of a Raspberry Pi Camera and Raspberry Pi 3 module, which were able to calculate the level of drowsiness in drivers. The frequency of head tilting and blinking of the eyes was used to determine whether or not a driver felt drowsy. With an evaluation on ten volunteers, the accuracy of face and eye detection was up to 99.59 percent.
---
paper_title: Performing systematic literature reviews in software engineering
paper_content:
Context: Making best use of the growing number of empirical studies in Software Engineering, for making decisions and formulating research questions, requires the ability to construct an objective summary of available research evidence. Adopting a systematic approach to assessing and aggregating the outcomes from a set of empirical studies is also particularly important in Software Engineering, given that such studies may employ very different experimental forms and be undertaken in very different experimental contexts.Objectives: To provide an introduction to the role, form and processes involved in performing Systematic Literature Reviews. After the tutorial, participants should be able to read and use such reviews, and have gained the knowledge needed to conduct systematic reviews of their own.Method: We will use a blend of information presentation (including some experiences of the problems that can arise in the Software Engineering domain), and also of interactive working, using review material prepared in advance.
---
paper_title: Subjective sleepiness, simulated driving performance and blink duration: examining individual differences.
paper_content:
The present study aimed to provide subject-specific estimates of the relation between subjective sleepiness measured with the Karolinska Sleepiness Scale (KSS) and blink duration (BLINKD) and lane drifting calculated as the standard deviation of the lateral position (SDLAT) in a high-fidelity moving base driving simulator. Five male and five female shift workers were recruited to participate in a 2-h drive (08:00-10:00 hours) after a normal night's sleep and after working a night shift. Subjective sleepiness was rated on the KSS in 5-min intervals during the drive, the electro-oculogram (EOG) was measured continuously to calculate BLINKD, and SDLAT was collected from the simulator. A mixed model ANOVA showed a significant (P …
---
paper_title: An Evaluation of Emerging Driver Fatigue Detection Measures and Technologies
paper_content:
Operator fatigue and sleep deprivation have been widely recognized as critical safety issues that cut across all modes in the transportation industry. FMCSA, the trucking industry, highway safety advocates, and transportation researchers have all identified driver fatigue as a high priority commercial vehicle safety issue. Fatigue affects mental alertness, decreasing an individual’s ability to operate a vehicle safely and increasing the risk of human error that could lead to fatalities and injuries. Sleepiness slows reaction time, decreases awareness, and impairs judgment. Fatigue and sleep deprivation impact all transportation operators (airline pilots, truck drivers, and railroad engineers, for example). Adding to the difficulty of understanding the fatigue problem and developing effective countermeasures to address operator fatigue is the fact that the incidence of fatigue is underestimated because it is so hard to quantify and measure. Obtaining reliable data on fatigue-related crashes is challenging because it is difficult to determine the degree to which fatigue plays a role in crashes. Fatigue, however, can be managed, and effectively managing fatigue will result in a significant reduction in related risk and improved safety. This study focuses on recent developments in mathematical models and vehicle-based operator alertness monitoring technologies. The major objective of this paper is to review and discuss many of the activities currently underway to develop unobtrusive, in-vehicle, real-time drowsy driver detection and fatigue-monitoring/alerting systems.
---
paper_title: Driver Fatigue Detection Using Mouth and Yawning Analysis
paper_content:
Driver fatigue is an important factor in a large number of accidents. There has been much work done on driver fatigue detection. This paper presents driver fatigue detection based on tracking the mouth and on monitoring and recognizing yawning. The authors propose a method to locate and track the driver’s mouth using the cascade of classifiers proposed by Viola-Jones for faces. An SVM is trained on mouth and yawning images. During fatigue detection, the mouth is detected from face images using the cascade of classifiers. Then, the SVM is used to classify the mouth state and detect yawning, and a fatigue alert is raised.
---
paper_title: Driver drowsiness detection using face expression recognition
paper_content:
Driver drowsiness detection is an important application of machine vision and image processing. In recent years there have been many research projects reported in the literature in this field. In this paper, unlike conventional drowsiness detection methods, which are based on the eye states alone, we used facial expressions to detect drowsiness. There are many challenges involved in drowsiness detection systems. Among the important aspects are changes of intensity due to lighting conditions and the presence of glasses and a beard on the face of the person. In this project, we propose and implement a hardware system which is based on infrared light and can be used to resolve these problems. In the proposed method, following the face detection step, the facial components that are more important and considered the most effective for drowsiness are extracted and tracked in video sequence frames. The system has been tested and implemented in a real environment.
---
paper_title: Video-Based Classification of Driving Behavior Using a Hierarchical Classification System with Multiple Features
paper_content:
Driver fatigue and inattention have long been recognized as one of the main contributing factors in traffic accidents. Therefore, the development of intelligent driver assistance systems, which provides automatic monitoring of driver's vigilance, is an urgent and challenging task. This paper presents a novel system for video-based driving behavior recognition. The fundamental idea is to monitor driver's hand movements and to use these as predictors for safe/unsafe driving behavior. In comparison to previous work, the proposed method utilizes hierarchical classification and treats driving behavior in terms of a spatio-temporal reference framework as opposed to a static image. The approach was verified using the Southeast University Driving-Posture Dataset, a dataset comprised of video clips covering aspects of driving such as: normal driving, responding to a cell phone call, eating and smoking. After pre-processing for illumination variations and motion sequence segmentation, eight classes of behavior were...
---
paper_title: A novel approach for drowsy driver detection using head posture estimation and eyes recognition system based on wavelet network
paper_content:
Fatigue and drowsiness are a part of everyday life for millions of people. They reduce reaction time, vigilance, alertness and concentration; therefore, the ability to perform attention-based activities (such as driving) is impaired. In this paper, a drowsy driver detection system has been developed, using video processing to analyze eye blinking for measuring eye-closure duration, combined with head posture estimation to verify the driver's vigilance state. There are two contributions in this paper: the first resides in combining two parameters of drowsiness analysis; the second is the creation of a new method of head posture estimation.
---
paper_title: Eye tracking system to detect driver drowsiness
paper_content:
This paper describes an eye tracking system for drowsiness detection of a driver. It is based on application of Viola Jones algorithm and Percentage of Eyelid Closure (PERCLOS). The system alerts the driver if the drowsiness index exceeds a pre-specified level.
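PERCLOS, the drowsiness index referenced above, is the percentage of time the eyes are (nearly) closed over a recent window. Below is a hedged sketch of that bookkeeping over a sliding window of per-frame closed-eye flags; the window length and the alarm level are illustrative assumptions, not values from the cited system.

```python
from collections import deque

class Perclos:
    """Fraction of frames with (near-)closed eyes over a sliding window."""
    def __init__(self, window_frames=900, alarm_level=0.15):
        # 900 frames is roughly 30 s at 30 fps; 0.15 is an assumed alarm level.
        self.window = deque(maxlen=window_frames)
        self.alarm_level = alarm_level

    def update(self, eye_closed):
        """eye_closed: bool flag for the current frame (e.g. eyelid at least 80% closed)."""
        self.window.append(1 if eye_closed else 0)
        perclos = sum(self.window) / len(self.window)
        return perclos, perclos > self.alarm_level

# Toy usage on a short synthetic flag sequence.
p = Perclos(window_frames=10, alarm_level=0.3)
for flag in [0, 0, 1, 1, 1, 0, 1, 0, 0, 1]:
    value, alarm = p.update(bool(flag))
print(round(value, 2), alarm)   # 0.5 True after the last frame
```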
---
paper_title: Information data flow in AWAKE multi-sensor driver monitoring system
paper_content:
Hypovigilance detection and warning systems are currently based on stand-alone sensor approaches. This paper presents a multisensor system that allows the information fusion of different sources (vehicle, driver and environmental sensing parameters) and contributes to the decrease of false alarms and misses of the hypovigilance detection system. A hybrid scheme (centralized communication and data flow management of integrated stand-alone systems) is adopted, which in turn allows the real-time application to monitor the driver and provide imminent and informational messages according to his/her state, adapted to the external traffic and environmental scenario. The data flow between all systems, sensors and modules is described to synthesize the functional architecture. The system development is funded by the European AWAKE project.
---
paper_title: Driver Drowsiness Detection Based on Time Series Analysis of Steering Wheel Angular Velocity
paper_content:
A novel driver drowsiness detection method based on time series analysis of the steering wheel angular velocity is proposed in this paper. Firstly, the steering behavior under the fatigue state is analyzed, followed by the determination of the temporal detection window; then, the data series of the steering wheel angular velocity in the temporal detection window is selected as the detection feature. If the detection feature satisfies the extent constraint and the variability constraint in the temporal window, a drowsiness state is detected accordingly. Finally, experimental tests validate that our method has good performance and can be readily used in the real world.
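As a rough illustration of the windowed detection rule described above, the sketch below flags a window as drowsy when the steering wheel angular velocity stays within a narrow range (an "extent" check) and shows little variation (a "variability" check). The thresholds and the exact form of both constraints are assumptions; the paper does not specify them here.

```python
import numpy as np

def drowsy_in_window(angular_velocity, max_extent=0.5, max_std=0.1):
    """Hypothetical check: within the detection window the steering wheel angular
    velocity spans only a narrow band (extent constraint) and shows little
    variation (variability constraint), which is treated as a drowsiness cue."""
    w = np.asarray(angular_velocity, dtype=float)
    extent_ok = (w.max() - w.min()) <= max_extent      # extent constraint
    variability_ok = w.std() <= max_std                # variability constraint
    return extent_ok and variability_ok

# 60 samples of synthetic, nearly idle steering activity.
window = np.random.normal(0.0, 0.02, size=60)
print(drowsy_in_window(window))                        # likely True for this window
```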
---
paper_title: Online Detection of Driver Fatigue Using Steering Wheel Angles for Real Driving Conditions
paper_content:
This paper presents a drowsiness on-line detection system for monitoring driver fatigue level under real driving conditions, based on the data of steering wheel angles (SWA) collected from sensors mounted on the steering lever. The proposed system firstly extracts approximate entropy (ApEn) features from fixed sliding windows on real-time steering wheel angles time series. After that, this system linearizes the ApEn features series through an adaptive piecewise linear fitting using a given deviation. Then, the detection system calculates the warping distance between the linear features series of the sample data. Finally, this system uses the warping distance to determine the drowsiness state of the driver according to a designed binary decision classifier. The experimental data were collected from 14.68 h driving under real road conditions, including two fatigue levels: “wake” and “drowsy”. The results show that the proposed system is capable of working online with an average 78.01% accuracy, 29.35% false detections of the “awake” state, and 15.15% false detections of the “drowsy” state. The results also confirm that the proposed method based on SWA signal is valuable for applications in preventing traffic accidents caused by driver fatigue.
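Approximate entropy (ApEn) is the core feature of this system. A compact reference implementation for one sliding window is sketched below (Pincus' formulation with self-matches included); the embedding dimension m=2 and tolerance r=0.2·std are common defaults, not parameters taken from the paper.

```python
import numpy as np

def approximate_entropy(x, m=2, r=None):
    """ApEn(m, r) of a 1-D series x; r defaults to 0.2 * std(x)."""
    x = np.asarray(x, dtype=float)
    n = x.size
    if r is None:
        r = 0.2 * x.std()

    def phi(m):
        # All overlapping templates of length m.
        templates = np.array([x[i:i + m] for i in range(n - m + 1)])
        # Chebyshev distance between every pair of templates.
        dist = np.abs(templates[:, None, :] - templates[None, :, :]).max(axis=2)
        # Fraction of templates within tolerance r (self-matches counted).
        c = (dist <= r).mean(axis=1)
        return np.log(c).mean()

    return phi(m) - phi(m + 1)

rng = np.random.default_rng(0)
regular = np.sin(np.linspace(0, 20 * np.pi, 500))   # predictable series -> low ApEn
noisy = rng.standard_normal(500)                    # irregular series -> higher ApEn
print(approximate_entropy(regular) < approximate_entropy(noisy))   # True
```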
---
paper_title: Automatic Detection of Driver Fatigue Using Driving Operation Information for Transportation Safety
paper_content:
Fatigued driving is a major cause of road accidents. For this reason, the method in this paper uses steering wheel angle (SWA) and yaw angle (YA) information under real driving conditions to detect drivers' fatigue levels. It analyzes the operation features of SWA and YA under different fatigue statuses, then calculates approximate entropy (ApEn) features over a short sliding window on the time series. Using the nonlinear feature construction theory of dynamic time series, with the fatigue features as input, a "2-6-6-3" multi-level back-propagation (BP) neural network classifier is designed to realize the fatigue detection. An approximately 15-h experiment is carried out on a real road, and the data retrieved are segmented and labeled with three fatigue levels after expert evaluation, namely "awake", "drowsy" and "very drowsy". An average accuracy of 88.02% in fatigue identification was achieved in the experiment, endorsing the value of the proposed method for engineering applications.
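The "2-6-6-3" topology above maps two ApEn features (from SWA and YA) through two hidden layers of six neurons to three fatigue classes. The sketch below reproduces only that topology with scikit-learn on synthetic placeholder data; it is not the paper's trained model or its road data.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
# Synthetic stand-in features: [ApEn(SWA), ApEn(YA)] per window, three fatigue labels.
X = rng.random((300, 2))
y = rng.integers(0, 3, size=300)        # 0 = awake, 1 = drowsy, 2 = very drowsy

# "2-6-6-3": 2 inputs, two hidden layers of 6 units, 3 output classes.
clf = MLPClassifier(hidden_layer_sizes=(6, 6), max_iter=2000, random_state=1)
clf.fit(X, y)
print(clf.predict(X[:5]))               # predictions on a few placeholder windows
```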
---
paper_title: EEG-based Driver Fatigue Detection
paper_content:
Fatigue is a gradual process that leads to a slower reaction time. Fatigue is the major cause of road accidents around the globe. This paper proposes, implements and tests a system to detect fatigue based on the electroencephalogram (EEG) signal. The system produces a fatigue index that reflects the level of the subject's drowsiness. The input to the system is an EEG signal measured by an inexpensive single-electrode neuro-signal acquisition device. The system was tested on a locally collected dataset of a car-simulator driver at different drowsiness levels. The system was able to detect the fatigue level for all subjects at different levels of tiredness.
---
paper_title: Detection of Driver Drowsiness Using Wavelet Analysis of Heart Rate Variability and a Support Vector Machine Classifier
paper_content:
Driving while fatigued is just as dangerous as drunk driving and may result in car accidents. Heart rate variability (HRV) analysis has been studied recently for the detection of driver drowsiness. However, the detection reliability has been lower than anticipated, because the HRV signals of drivers were always regarded as stationary signals. The wavelet transform method is a method for analyzing non-stationary signals. The aim of this study is to classify alert and drowsy driving events using the wavelet transform of HRV signals over short time periods and to compare the classification performance of this method with the conventional method that uses fast Fourier transform (FFT)-based features. Based on the standard shortest duration for FFT-based short-term HRV evaluation, the wavelet decomposition is performed on 2-min HRV samples, as well as 1-min and 3-min samples for reference purposes. A receiver operation curve (ROC) analysis and a support vector machine (SVM) classifier are used for feature selection and classification, respectively. The ROC analysis results show that the wavelet-based method performs better than the FFT-based method regardless of the duration of the HRV sample that is used. Finally, based on the real-time requirements for driver drowsiness detection, the SVM classifier is trained using eighty FFT and wavelet-based features that are extracted from 1-min HRV signals from four subjects. The averaged leave-one-out (LOO) classification performance using wavelet-based feature is 95% accuracy, 95% sensitivity, and 95% specificity. This is better than the FFT-based results that have 68.8% accuracy, 62.5% sensitivity, and 75% specificity. In addition, the proposed hardware platform is inexpensive and easy-to-use.
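To make the wavelet-plus-SVM pipeline above concrete, the sketch below computes relative wavelet energies of short RR-interval (HRV) segments and trains an SVM on them. The 'db4' wavelet, the 4 decomposition levels, and the synthetic segments are assumptions for illustration, not the paper's settings.

```python
import numpy as np
import pywt
from sklearn.svm import SVC

def wavelet_energy_features(rr_intervals, wavelet="db4", level=4):
    """Relative energy per wavelet decomposition level of one HRV (RR-interval) segment."""
    coeffs = pywt.wavedec(np.asarray(rr_intervals, dtype=float), wavelet, level=level)
    energies = np.array([np.sum(c ** 2) for c in coeffs])
    return energies / energies.sum()

rng = np.random.default_rng(2)
t = np.arange(128)
# Synthetic ~2-minute RR segments for two hypothetical states: noisy "alert" HRV
# versus "drowsy" HRV dominated by a slow oscillation.
alert = [wavelet_energy_features(0.8 + 0.05 * rng.standard_normal(128)) for _ in range(40)]
drowsy = [wavelet_energy_features(0.9 + 0.05 * np.sin(2 * np.pi * t / 32)
                                  + 0.01 * rng.standard_normal(128)) for _ in range(40)]
X = np.vstack(alert + drowsy)
y = np.array([0] * 40 + [1] * 40)       # 0 = alert, 1 = drowsy

clf = SVC(kernel="rbf").fit(X, y)
print(clf.score(X, y))                  # training accuracy on the synthetic data
```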
---
paper_title: Detecting Drowsy Driver Using Pulse Sensor
paper_content:
The driver’s condition, which involves staying focused on the road, is the most important aspect to consider whenever one is driving. Ignoring this could result in severe physical injuries, deaths and economic losses. Previous research focused mainly on the physical conditions of the driver, e.g. head movement and drowsiness. However, this research focuses on the driver’s heart rate, using an infrared heart-rate sensor or pulse sensor. This sensor non-intrusively measures the pulse wave from the driver’s heart. The experimental results show that a clear pulse wave signal can be obtained, and the low-to-high-frequency (LF/HF) ratio is calculated from the frequency-domain HRV of the driver’s heart-rate time series. The LF/HF ratio shows a decreasing trend as drivers go from the state of being awake and alert to the state of drowsiness. Therefore, accidents can be avoided if there is an alert system to keep drivers alert and focused on the road.
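The LF/HF ratio mentioned above is a standard frequency-domain HRV measure. A hedged sketch of its computation is shown below: resample the RR-interval series onto a uniform grid, estimate the spectrum with Welch's method, and integrate the conventional LF (0.04-0.15 Hz) and HF (0.15-0.4 Hz) bands. The 4 Hz resampling rate and the band limits are common HRV conventions, not values stated in the cited work.

```python
import numpy as np
from scipy.signal import welch

def lf_hf_ratio(rr_intervals_s, fs=4.0):
    """LF/HF ratio of an RR-interval series (in seconds) using Welch's PSD estimate."""
    rr = np.asarray(rr_intervals_s, dtype=float)
    beat_times = np.cumsum(rr)
    # Resample the irregular tachogram onto a uniform 4 Hz grid.
    grid = np.arange(beat_times[0], beat_times[-1], 1.0 / fs)
    tachogram = np.interp(grid, beat_times, rr)
    freqs, psd = welch(tachogram - tachogram.mean(), fs=fs, nperseg=min(256, len(grid)))
    df = freqs[1] - freqs[0]
    lf = psd[(freqs >= 0.04) & (freqs < 0.15)].sum() * df   # low-frequency band power
    hf = psd[(freqs >= 0.15) & (freqs < 0.40)].sum() * df   # high-frequency band power
    return lf / hf

rng = np.random.default_rng(3)
rr = 0.8 + 0.05 * rng.standard_normal(300)   # ~4 minutes of synthetic beats
print(round(lf_hf_ratio(rr), 2))
```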
---
paper_title: Wearable driver drowsiness detection system based on biomedical and motion sensors
paper_content:
Driver drowsiness detection systems have been developed as mobile device applications, such as Percentage of Eye Closure (PERCLOS) measured using the mobile device camera. Nevertheless, the mobile device has the potential risk of distracting the driver's attention, causing accidents. Thus, a wearable-type drowsiness detection system is proposed to overcome this issue. The proposed system uses a self-designed wristband consisting of a photoplethysmogram sensor and a galvanic skin response sensor. The sensor data are sent to the mobile device, which serves as the main analysis and processing unit. Those data are analyzed along with the motion sensors, namely the mobile device's built-in accelerometer and gyroscope. Five features are extracted from the received raw sensor data, including heart rate, pulse rate variability, respiratory rate, stress level, and an adjustment counter. Those features further serve as inputs to a support vector machine to derive the driver drowsiness state. The testing results indicated that the accuracy of the system with the SVM model reached up to 98.3%. In addition, the driver is alerted using graphical and vibration alarms generated by the mobile device. The integration of driver physical behavior and physiological signals proves to be an effective solution to detect driver drowsiness in a safer, more flexible and more portable way.
---
paper_title: Detecting Driver Drowsiness Using Wireless Wearables
paper_content:
The National Highway Traffic Safety Administration data show that drowsy driving causes more than 100,000 crashes a year. In order to prevent these devastating accidents, it is necessary to build a reliable driver drowsiness detection system which could alert the driver before a mishap happens. In the literature, the drowsiness of a driver can be measured by vehicle-based, behavior-based, and physiology-based approaches. Comparing with the vehicle-based and behavior-based measurements, the physiological measurement of drowsiness is more accurate. With the latest release of wireless wearable devices such as biosensors that can measure people's physiological data, we aim to explore the possibility of designing a user-friendly and accurate driver drowsiness detection system using wireless wearables. In this paper, we use a wearable biosensor called Bio Harness 3 produced by Zephyr Technology to measure a driver's physiological data. We present our overall design idea of the driver drowsiness detection system and the preliminary experimental results using the biosensor. The detection system will be designed in two phases: The main task of the first phase is to collect a driver's physiological data by the biosensor and analyze the measured data to find the key parameters related to the drowsiness. In the second phase, we will design a drowsiness detection algorithm and develop a mobile app to alert drowsy drivers. The results from this project can lead to the development of real products which can save many lives and avoid many accidents on the road. Furthermore, our results can be widely applied to any situation where people should not fall asleep: from the applications in mission-critical fields to the applications in everyday life.
---
paper_title: Driver fatigue detection system
paper_content:
The research aims to detect the onset of drowsiness in drivers while the vehicle is in motion. Detection is done by continuously looking out for symptoms of drowsiness, considering both physiological and physical signs. Physiological factors include core body temperature and pulse rate; both of these parameters decrease during the onset of drowsiness and are monitored using somatic sensors. Physical cues include yawning, drooping eyelids, closed eyes and increased blink durations. Once the system detects that the driver is drowsy by using a combination of these factors, it alerts the driver across multiple stages depending on the severity of the symptoms. The system also becomes attuned to the driver's unique characteristics over time, thus reducing the margin of false positives.
---
paper_title: A Hybrid Approach to Detect Driver Drowsiness Utilizing Physiological Signals to Improve System Performance and Wearability
paper_content:
Driver drowsiness is a major cause of fatal accidents, injury, and property damage, and has become an area of substantial research attention in recent years. The present study proposes a method to detect drowsiness in drivers which integrates features of electrocardiography (ECG) and electroencephalography (EEG) to improve detection performance. The study measures differences between the alert and drowsy states from physiological data collected from 22 healthy subjects in a driving simulator-based study. A monotonous driving environment is used to induce drowsiness in the participants. Various time and frequency domain features were extracted from the EEG, including time-domain statistical descriptors, complexity measures and power spectral measures. Features extracted from the ECG signal included heart rate (HR) and heart rate variability (HRV), including low frequency (LF), high frequency (HF) and the LF/HF ratio. Furthermore, a subjective sleepiness scale is also assessed to study its relationship with drowsiness. We used paired t-tests to select only statistically significant features (p < 0.05) that can differentiate between the alert and drowsy states effectively. Significant features of both modalities (EEG and ECG) are then combined to investigate the improvement in performance using a support vector machine (SVM) classifier. The other main contribution of this paper is the study of channel reduction and its impact on detection performance. The proposed method demonstrated that combining EEG and ECG improved the system’s performance in discriminating between alert and drowsy states, compared with using them alone. Our channel reduction analysis revealed that an acceptable level of accuracy (80%) could be achieved by combining just two electrodes (one EEG and one ECG), indicating the feasibility of a system with improved wearability compared with existing systems involving many electrodes. Overall, our results demonstrate that the proposed method can be a viable solution for a practical driver drowsiness system that is both accurate and comfortable to wear.
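The feature-selection-plus-fusion step described above can be illustrated as follows: keep only features whose alert-vs-drowsy difference passes a paired t-test (p < 0.05), then train an SVM on the fused EEG+ECG features. The sketch uses synthetic per-subject matrices; the shapes, effect sizes and RBF kernel are placeholders, not the study's data or tuning.

```python
import numpy as np
from scipy.stats import ttest_rel
from sklearn.svm import SVC

rng = np.random.default_rng(4)
n_subjects, n_features = 22, 12
# Per-subject feature vectors (EEG + ECG descriptors) in the alert and drowsy states;
# only the first four features are given a real state-dependent shift.
alert = rng.standard_normal((n_subjects, n_features))
shift = np.where(np.arange(n_features) < 4, 1.0, 0.0)
drowsy = alert + shift + 0.3 * rng.standard_normal((n_subjects, n_features))

# Paired t-test per feature; keep only the significant ones (p < 0.05).
_, p_values = ttest_rel(alert, drowsy, axis=0)
selected = p_values < 0.05
print("selected features:", np.flatnonzero(selected))

# Train an SVM on the selected, fused features.
X = np.vstack([alert[:, selected], drowsy[:, selected]])
y = np.array([0] * n_subjects + [1] * n_subjects)   # 0 = alert, 1 = drowsy
clf = SVC(kernel="rbf").fit(X, y)
print("training accuracy:", clf.score(X, y))
```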
---
paper_title: A new system for driver drowsiness and distraction detection
paper_content:
Drowsiness, especially on long-distance journeys, is a key factor in traffic accidents. In this paper, a new module for automatic driver drowsiness detection based on visual information and artificial intelligence is presented. The aim of this system is to locate, track and analyze both the driver's face and eyes to compute a drowsiness index and prevent accidents. Both face and eye detection are performed with Haar-like features and AdaBoost classifiers. In order to achieve better accuracy in face tracking, we propose a new method that combines detection and object tracking. The proposed face tracking method also has a self-correction capability. After the eye region is found, Local Binary Patterns (LBP) are employed to extract eye characteristics. Using these features, an SVM classifier was trained to perform eye-state analysis. To evaluate the effectiveness of the proposed method, a drowsy person was recorded while his EEG signals were taken. In this video we were able to track the face with an accuracy of 100% and detect eye blinks with an accuracy of 98.4%. We can also calculate face orientation and tilt using the eye positions, which is valuable knowledge about driver concentration. Finally, we can make a decision about the drowsiness and distraction of the driver. Experimental results show high accuracy in each section, which makes this system reliable for driver drowsiness detection.
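The eye-state analysis step above (LBP features plus an SVM) is sketched below with scikit-image and scikit-learn; the uniform LBP parameters (P=8, R=1), the patch size and the synthetic open/closed patches are assumptions for illustration only, and the Haar/AdaBoost detection and tracking stages are omitted.

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

def lbp_histogram(gray_patch, p=8, r=1):
    """Uniform LBP histogram of a grayscale eye patch, used as the eye-state descriptor."""
    lbp = local_binary_pattern(gray_patch, p, r, method="uniform")
    hist, _ = np.histogram(lbp, bins=p + 2, range=(0, p + 2), density=True)
    return hist

rng = np.random.default_rng(5)
# Synthetic stand-ins: textured patches for open eyes, nearly flat ones for closed eyes.
open_eyes = [rng.integers(0, 256, (24, 24), dtype=np.uint8) for _ in range(30)]
closed_eyes = [np.clip(120 + rng.normal(0, 2, (24, 24)), 0, 255).astype(np.uint8)
               for _ in range(30)]

X = np.array([lbp_histogram(patch) for patch in open_eyes + closed_eyes])
y = np.array([1] * 30 + [0] * 30)          # 1 = open, 0 = closed
clf = SVC(kernel="linear").fit(X, y)
print("training accuracy:", clf.score(X, y))
```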
---
|
Title: A Survey on State-of-the-Art Drowsiness Detection Techniques
Section 1: INTRODUCTION
Description 1: Introduce the importance of drowsiness detection, its impact on road safety, and the motivation behind the review.
Section 2: RESEARCH METHODOLOGY
Description 2: Describe the systematic approach used to gather and evaluate research papers on drowsiness detection techniques.
Section 3: DROWSINESS DETECTION TECHNIQUES
Description 3: Provide a detailed review of various drowsiness detection techniques, categorized by their approach, and discuss their pros and cons.
Section 4: A COMPARATIVE STUDY OF DDT
Description 4: Conduct a comparative analysis of different drowsiness detection techniques and discuss their advantages and disadvantages.
Section 5: HYBRID APPROACHES OF DDT
Description 5: Discuss the combination of different drowsiness detection techniques to enhance detection accuracy and reliability.
Section 6: CLASSIFICATION METHODS USED FOR DDT
Description 6: Detail the various classification methods used in drowsiness detection systems, including their pros and cons, and comparative performance.
Section 7: COMPARATIVE STUDY OF CLASSIFICATION METHODS
Description 7: Present a comparative study of different classification methods, discussing their error rates and suitability under various conditions.
Section 8: CONCLUSION
Description 8: Summarize the main findings of the survey, highlighting the most effective drowsiness detection techniques and future research directions.
|
Survey of Down Link Data Allocation Algorithms in IEEE 802.16 WiMAX | 10 |
---
paper_title: Fundamentals of WiMAX: Understanding Broadband Wireless Networking
paper_content:
This is the eBook version of the printed book. Praise for Fundamentals of WiMAX "This book is one of the most comprehensive books I have reviewed ... it is a must-read for engineers and students planning to remain current or who plan to pursue a career in telecommunications. I have reviewed other publications on WiMAX and have been disappointed. This book is refreshing in that it is clear that the authors have the in-depth technical knowledge and communications skills to deliver a logically laid out publication that has substance to it." Ron Resnick, President, WiMAX Forum "This is the first book with a great introductory treatment of WiMAX technology. It should be essential reading for all engineers involved in WiMAX. The high-level overview is very useful for those with non-technical background. The introductory sections for OFDM and MIMO technologies are very useful for those with implementation background and some knowledge of communication theory. The chapters covering physical and MAC layers are at the appropriate level of detail. In short, I recommend this book to systems engineers and designers at different layers of the protocol, deployment engineers, and even students who are interested in practical applications of communication theory." Siavash M. Alamouti, Chief Technology Officer, Mobility Group, Intel "This is a very well-written, easy-to-follow, and comprehensive treatment of WiMAX. It should be of great interest." Dr. Reinaldo Valenzuela, Director of Wireless Research, Bell Labs "Fundamentals of WiMAX is a comprehensive guide to WiMAX from both industry and academic viewpoints, which is an unusual accomplishment. I recommend it to anyone who is curious about this exciting new standard." Dr. Teresa Meng, Professor, Stanford University, Founder and Director, Atheros Communications "Andrews, Ghosh, and Muhamed have provided a clear, concise, and well-written text on 802.16e/WiMAX. The book provides both the breadth and depth to make sense of the highly complicated 802.16e standard. I would recommend this book to both development engineers and technical managers who want an understating of WiMAX and insight into 4G modems in general." Paul Struhsaker, VP of Engineering, Chipset platforms, Motorola Mobile Device Business Unit, former vice chair of IEEE 802.16 working group "Fundamentals of WiMAX is written in an easy-to-understand tutorial fashion. The chapter on multiple antenna techniques is a very clear summary of this important technology and nicely organizes the vast number of different proposed techniques into a simple-to-understand framework." Dr. Ender Ayanoglu, Professor, University of California, Irvine, Editor-in-Chief, IEEE Transactions on Communications "Fundamentals of WiMAX is a comprehensive examination of the 802.16/WiMAX standard and discusses how to design, develop, and deploy equipment for this wireless communication standard. It provides both insightful overviews for those wanting to know what WiMAX is about and comprehensive, in-depth chapters on technical details of the standard, including the coding and modulation, signal processing methods, Multiple-Input Multiple-Output (MIMO) channels, medium access control, mobility issues, link-layer performance, and system-level performance." Dr. Mark C. Reed, Principal Researcher, National ICT Australia, Adjunct Associate Professor, Australian National University "This book is an excellent resourc...
---
paper_title: eOCSA: An algorithm for burst mapping with strict QoS requirements in IEEE 802.16e Mobile WiMAX networks
paper_content:
Mobile WiMAX systems based on the IEEE 802.16e standard require all downlink allocations to be mapped to a rectangular region in the two-dimensional subcarrier-time map. Many published resource allocation schemes ignore this requirement. It is possible that the allocations when mapped to rectangular regions may exceed the capacity of the downlink frame, and the QoS of some flows may be violated. The rectangle mapping problem is a variation of the bin or strip packing problem, which is known to be NP-complete. In a previous paper, an algorithm called OCSA (One Column Striping with non-increasing Area first mapping) for rectangular mapping was introduced. In this paper, we propose an enhanced version of the algorithm. Similar to OCSA, the enhanced algorithm is also simple and fast to implement; however, eOCSA considers the allocation of an additional resource to ensure the QoS. eOCSA also avoids an enumeration process and so lowers the complexity to O(n²).
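For intuition, the sketch below shows only the rectangle-dimensioning step that an OCSA/eOCSA-style mapper performs for each allocation: given a demand in slots and the frame height (number of subchannels), pick a width of ceil(slots/height) columns and a height of ceil(slots/width) subchannels, and report the padding introduced by rounding up. Demands are processed in non-increasing order as in the paper, but this is a simplified illustration, not the full eOCSA mapping; the frame height and the example demands are illustrative.

```python
import math

def rectangularize(slot_demands, frame_height):
    """For each slot demand, choose a rectangle no taller than the frame and report
    the padding slots introduced by rounding up to a full rectangle."""
    shapes = []
    for slots in sorted(slot_demands, reverse=True):   # non-increasing order
        width = math.ceil(slots / frame_height)        # time-direction columns needed
        height = math.ceil(slots / width)              # subchannels actually used
        shapes.append({"slots": slots, "width": width,
                       "height": height, "padding": width * height - slots})
    return shapes

# Example: a frame 30 subchannels high and three hypothetical allocations.
for shape in rectangularize([77, 42, 13], frame_height=30):
    print(shape)
# {'slots': 77, 'width': 3, 'height': 26, 'padding': 1}
# {'slots': 42, 'width': 2, 'height': 21, 'padding': 0}
# {'slots': 13, 'width': 1, 'height': 13, 'padding': 0}
```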
---
paper_title: WiMAX Technology and Network Evolution
paper_content:
Written and edited by experts who have developed WiMAX technology and standards. WiMAX, the Worldwide Interoperability for Microwave Access, represents a paradigm shift in telecommunications technology. It offers the promise of cheaper, smaller, and simpler technology compared to existing broadband options such as DSL, cable, fiber, and 3G wireless. WiMAX Technology and Network Evolution is the first publication to present an accurate, complete, and objective description of mobile WiMAX technology. Each chapter was written and edited by experts, all of whom have been directly engaged in and led the development of WiMAX either through the IEEE 802.16 Working Group or the WiMAX Forum. As a result, the book addresses not only key technical concepts and design principles, but also a wide range of practical issues concerning this new wireless technology, including: a detailed description of WiMAX technology features and capabilities from both radio and network perspectives; WiMAX technology evolution in the near and long term; emerging broadband services enabled by the WiMAX networks; regulatory issues affecting WiMAX deployment and global adoption; and WiMAX accounting, roaming, and network management. Each chapter ends with a summary and a list of references to facilitate further research. Wireless engineers, service designers, product managers, telecommunications professionals, network operators, and academics will all gain new insights into the key issues surrounding the development and implementation of mobile WiMAX. Moreover, the book will help them make informed management and business decisions in devising their own WiMAX strategies.
---
paper_title: WiMAX Networks: Techno-Economic Vision and Challenges
paper_content:
Ignited by the mobile phone's huge success at the end of last century, the demand for wireless services is constantly growing. To face this demand, wireless systems have been and are deployed at a large scale. These include mobility-oriented technologies such as GPRS, CDMA or UMTS, and Local Area Network-oriented technologies such as WiFi. WiMAX Networks covers aspects of WiMAX quality of service (QoS), security, mobility, radio resource management, multiple input multiple output antenna, planning, cost/revenue optimization, physical layer, medium access control (MAC) layer, network layer, and so on.
---
paper_title: Wimax Technology for Broadband Wireless Access
paper_content:
WiMAX Broadband Wireless Access Technology, based on the IEEE 802.16 standard, is at the origin of great promises for many different markets covering fixed wireless Internet Access, Backhauling and Mobile cellular networks. WiMAX technology is designed for the transmission of multimedia services (voice, Internet, email, games and others) at high data rates (of the order of Mb/s per user). It is a very powerful but sometimes complicated technique. The WiMAX System is described in thousands of pages of IEEE 802.16 standard and amendments documents and WiMAX Forum documents. WiMAX: Technology for Broadband Wireless Access provides a global picture of WiMAX and a large number of details that makes access to WiMAX documents much easier. All the aspects of WIMAX are covered. Illustrations and clear explanations for all the main procedures of WiMAX are pedagogically presented in a succession of relatively short chapters. Topics covered include WiMAX genesis and framework, WiMAX topologies, protocol layers, MAC layer, MAC frames, WiMAX multiple access, the physical layer, QoS Management, Radio Resource Management, Bandwidth allocation, Network Architecture, Mobility and Security. Features a glossary of abbreviations and their definitions, and a wealth of explanatory tables and figures. Highlights the most recent changes, including the 802.16e amendment of the standard, needed for Mobile WiMAX. Includes technical comparisons of WiMAX vs. 802.11 (WiFi) and cellular 3G technologies. This technical introduction to WiMAX, explaining the rather complex standards (IEEE 802.16-2004 and 802.16e), is a must read for engineers, decision-makers and students interested in WiMAX, as well as other researchers and scientists from this evolving field.
---
paper_title: Burst Construction and Packet Mapping Scheme for OFDMA Downlinks in IEEE 802.16 Systems
paper_content:
In this paper, we propose a burst construction and packet mapping scheme in the orthogonal frequency-division multiple access (OFDMA) downlinks of IEEE 802.16 systems. In the standard of the systems, there are some restrictions on the usage of downlink radio resources and they are defined in the physical layer (PHY) specification. One of the main restrictions is that a rectangular region, which is called Burst, is defined on a two-dimensional domain of time and frequency, and packets must be allocated within the region. However, how to define the burst and allocate data packets within the burst is left implementation dependent. Therefore, we consider a burst construction and packet mapping scheme that is easy to realize on systems and attains efficient usage of radio resources. By computer simulation, it has been confirmed that the proposed scheme can decrease not only the control data ratio within the rectangles, but also control data that must be transmitted at the head of every frame, which can result in higher throughput.
---
paper_title: WiMAX Downlink OFDMA Burst Placement for Optimized Receiver Duty-Cycling
paper_content:
Mobile wireless broadband access networks are now becoming a reality, thanks to the emerging IEEE 802.16e standard. This kind of network offers different challenges when compared to the fixed ones, as power consumption becomes a major concern. In this standard, a strict organization of the downlink bursts is not guaranteed in the OFDMA frame and this may lead to extra power consumption for the receiver, decreasing the device's lifetime. In the present paper, we introduce an optimization algorithm capable of reducing the activity of each receiver in the system for decoding its addressed bursts, thanks to a better time-frequency organization of the bursts. We first work on fitting bursts within the smallest frame, and show that the minimal number of OFDM symbols is enough in 70 to 80% of the cases, while one extra is needed otherwise. Using a binary tree implementation of an exhaustive burst placement search, we also show that we can gain 20 to 30% in duty-cycling of the receivers by selecting the best configuration, hence gaining the corresponding energy. This holds for receivers decoding either their bursts only or all the bursts from the beginning of the frame up to their own bursts before sleeping, depending on the scenario. The full search is sustainable for up to 8 user bursts per frame.
---
paper_title: Two-Dimensional Resource Allocation for OFDMA System
paper_content:
The resource allocation problem in OFDMA systems is how to assign two-dimensional blocks covering time and frequency to multiple users. Resource allocation is crucial to system performance but is not defined in the IEEE 802.16 standards. We proved that the problem of resource allocation in the IEEE 802.16 standard is NP-complete. To approximately solve the problem in real time, we present a fast heuristic algorithm with O(n²) computational complexity based on topology evaluation. Simulation results show that the allocation result is competitive.
---
paper_title: A Cross-Layer Framework for Overhead Reduction, Traffic Scheduling, and Burst Allocation in IEEE 802.16 OFDMA Networks
paper_content:
IEEE 802.16 orthogonal frequency-division multiple access (OFDMA) downlink subframes have a special 2-D channel-time structure. Allocating resources from such a 2-D structure incurs extra control overheads that hurt network performance. Existing solutions try to improve network performance by designing either the scheduler in the medium access control layer or the burst allocator in the physical layer, but the efficiency of overhead reduction is limited. In this paper, we point out the necessity of “codesigning” both the scheduler and the burst allocator to efficiently reduce overheads and improve network performance. Under the partial-usage-of-subcarriers model, we propose a cross-layer framework that covers overhead reduction, real-time and non-real-time traffic scheduling, and burst allocation. The framework includes a two-tier priority-based scheduler and a bucket-based burst allocator, which is more complete and efficient than prior studies. Both the scheduler and the burst allocator are tightly coupled together to solve the problem of arranging resources to data traffic. Given available space and bucket design from the burst allocator, the scheduler can well utilize the frame resource, reduce real-time traffic delays, and maintain fairness. On the other hand, with priority knowledge and resource assignment from the scheduler, the burst allocator can efficiently arrange downlink bursts to satisfy traffic requirements with low complexity. Through analysis, the cross-layer framework is validated to give an upper bound to overheads and achieve high network performance. Extensive simulation results verify that the cross-layer framework significantly increases network throughput, maintains long-term fairness, alleviates real-time traffic delays, and enhances frame utilization.
---
paper_title: Optimal WiMAX frame packing for minimum energy consumption
paper_content:
Minimizing energy consumption is an urgent and challenging problem. As in any communication system, high energy efficiency in WiMAX systems should be maintained by increasing resource efficiency. Thus, WiMAX resources should be properly utilized by optimizing the construction of downlink (DL) bursts. This paper proposes an energy-efficient scheme that maximizes the use of resources at the base station (BS) by reducing the energy waste caused by sending padding bits instead of useful data. The problem was formulated as a nonlinear integer programming model. Due to the complexity of the problem, this paper first presents the formulation of the base model for the optimal DL burst construction problem, assuming each packet is represented by one burst. Then, the formulation is expanded to allow the representation of packets by several bursts. The results show an improvement in data packing that maximizes the utilization of frames and minimizes energy wastage.
---
paper_title: A Linear-Complexity Burst Packing Scheme for IEEE 802.16e OFDMA Downlink Frames
paper_content:
The problem of efficiently shaping downlink data bursts into rectangles and packing them into the OFDMA subframe is not addressed by the IEEE 802.16 standard and is left as an implementation issue. In this paper, we propose a linear-complexity burst packing algorithm to maximize radio resource usage on the OFDMA downlink. Our scheme shapes the bursts into identical widths, places them in the column direction, and maximally fills the columns by shifting the bursts across different columns. By simulation, we show that the scheme can push radio resource usage over 95%, which is on average a 20% improvement over simple 2D fixed-bin strip packing and an 8% improvement over exhaustive search. Moreover, the proposed scheme is also power-efficient, because subscriber stations (SSs) do not need to receive a large aggregated burst created solely to reduce the complexity of the packing problem.
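The column-oriented packing idea described above can be illustrated with a short sketch. The frame geometry (30 subchannels by 24 slot columns), the fixed burst width, and the simple strip-shifting rule used here are illustrative assumptions, not the paper's exact algorithm (which also fills leftover space more aggressively).

```python
import math

def pack_bursts_by_column(demands, n_subchannels=30, n_columns=24, burst_width=2):
    """Greedy column-direction packing sketch: every burst is shaped to the same
    width (burst_width columns) and bursts are stacked down a column strip; when
    a strip cannot hold the next burst, packing moves to the next strip."""
    placements = []                     # (demand_id, col, row, width, height)
    col, row = 0, 0
    for i, slots in enumerate(demands):
        height = math.ceil(slots / burst_width)       # rows needed at this width
        if height > n_subchannels:
            raise ValueError(f"burst {i} does not fit a single column strip")
        if row + height > n_subchannels:              # strip full: shift right
            col += burst_width
            row = 0
        if col + burst_width > n_columns:
            break                                     # frame exhausted
        placements.append((i, col, row, burst_width, height))
        row += height
    used_slots = sum(w * h for _, _, _, w, h in placements)
    return placements, used_slots / (n_subchannels * n_columns)

# Example: ten bursts with varying slot demands (all fit a 30-row strip at width 2)
layout, utilization = pack_bursts_by_column([40, 12, 25, 8, 30, 16, 5, 22, 18, 9])
print(layout)
print(f"frame utilization: {utilization:.1%}")
```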
---
paper_title: Greedy scheduling algorithm (GSA) - Design and evaluation of an efficient and flexible WiMAX OFDMA scheduling solution
paper_content:
WiMAX is one of the most promising technologies to provide broadband wireless access in the near future. In this paper we focus on the study of the combined performance of a WiMAX Base Station MAC downlink scheduler and OFDMA packing algorithm which mainly determine the usage efficiency of the available radio resources. We design and analyze an efficient and flexible solution, greedy scheduling algorithm (GSA), and evaluate its performance as compared to several relevant alternative solutions. Specifically, we analyze performance differences with respect to efficiency, flexibility to provide per subscriber station burst shape preferences, interference mitigation and computational load. Our results show that GSA achieves a performance similar to the one of the competing approaches considered in terms of efficiency, even better in some cases, and significantly outperforms them in flexibility to provide per subscriber station burst shape preferences, interference mitigation and computational load. As a conclusion, the proposed GSA solution is a promising candidate to maximize the utilization of the available WiMAX radio resources at a low computational cost while at the same time being able to fulfill a wide range of requirements based on operators' preferences and/or network environment specifics.
---
paper_title: An energy-efficient scheme for WiMAX downlink bursts construction
paper_content:
One of the challenges in WiMAX networks is to increase resource and power efficiency by optimizing the downlink (DL) burst construction. This paper proposes an energy-efficient scheme that maximizes the use of resources at the base station (BS) while reducing the energy wastage caused by sending padding bits instead of useful data. This paper presents the derivation of the general optimization problem, which is formulated as a nonlinear integer programming model. A simple case formulation is used to illustrate the general application. The results show an improvement in data packing that maximizes the utilization of frames and minimizes energy wastage.
---
paper_title: Condensed Downlink MAP Structures for IEEE 802.16e Wireless Metropolitan Area Networks (MANs)
paper_content:
The new mobile wireless metropolitan area network (WMAN) architecture imposes a demanding performance requirement on the radio resource to provide broadband internet access. The radio resource is partitioned as bursts in time and frequency domains and used by mobile stations (MS) in an exclusive manner. The base station (BS) functionally serves as a resource controller for traffic to and from the MSs associated with it, and thus naturally generates the proper downlink (DL) and uplink (UL) MAPs for active MSs based on service and traffic requirements. However, the DL-MAP construction scheme in the IEEE 802.16e OFDMA standard, which was designed to handle irregular traffic patterns of MSs, often produces a large DL-MAP: distributing even small amounts of data to many MSs inflates the information element (IE) overhead in the DL-MAP and limits the overall capacity. Moreover, the robustness requirement on MAP broadcasting further causes severe system overhead. As a solution, we propose two exclusive condensed DL-MAP structures, which carry only partial information about each rectangular burst in order to reduce the IE size in the DL-MAP. For each condensed DL-MAP structure, an algorithm for the BS to produce the condensed DL-MAP and a scheme for the MS to precisely reconstruct the original DL-MAP are provided. As confirmed by the analytical results, the proposed condensed DL-MAPs achieve a significant DL-MAP size reduction compared with the standard DL-MAP structure.
---
paper_title: Efficient downlink scheduling with power boosting in mobile IEEE 802.16 networks
paper_content:
In current mobile broadband wireless access (BWA) technologies, which are based on orthogonal frequency division multiple access (OFDMA), terminals close to the cell edge experience poor channel quality due to severe path loss and high interference from concurrent transmissions in nearby cells. To mitigate this problem we propose (a) partitioning the set of sub-channels into chunks that are assigned different power levels, and (b) a data scheduling and allocation algorithm, positioned in the medium access control (MAC) layer of the base station (BS), that exploits this partitioning. The framework is analyzed in a multi-cell IEEE 802.16 network by means of system-level packet-based simulations, with detailed MAC and physical layer abstractions in combination with realistic models of the wireless channel and interference.
---
paper_title: OBBP: An Efficient Burst Packing Algorithm for IEEE802.16e Systems
paper_content:
Mobile communications have witnessed a phenomenal increase in the number of users, services, and applications. Orthogonal Frequency Division Multiple Access (OFDMA) aims to provide broadband connectivity with wide area coverage in mobile environments for Next Generation Networks (NGNs), which results in significant design challenges in the MAC (Medium Access Control) layer to provide efficient resource allocation in a cost-effective manner. This paper proposes a two-dimensional (2D) Burst Packing (BP) algorithm for the OFDMA downlink (DL) subframe that provides service providers with an efficient, fast, flexible, and spectrally efficient method to allocate system resources among the Mobile Stations (MSs). The proposed Orientation-Based Burst Packing (OBBP) algorithm uses the Orientation Factors (OFs) of the bursts as the criterion for addressing the challenging issues of the BP problem. The simulation results show that the OBBP algorithm can achieve a packing efficiency of up to 99.2% when the burst size ratio (BSR) is 50%.
---
paper_title: Cross-layer design for radio resource allocation based on priority scheduling in OFDMA wireless access network
paper_content:
The orthogonal frequency-division multiple access (OFDMA) system has the advantages of flexible subcarrier allocation and adaptive modulation with respect to channel conditions. However, transmission overhead is required in each frame to broadcast the arrangement of radio resources to all mobile stations within the coverage of the same base station. This overhead greatly affects the utilization of valuable radio resources. In this paper, a cross layer scheme is proposed to reduce the number of traffic bursts at the downlink of an OFDMA wireless access network so that the overhead of the media access protocol (MAP) field can be minimized. The proposed scheme considers the priorities and the channel conditions of quality of service (QoS) traffic streams to arrange for them to be sent with minimum bursts in a heuristic manner. In addition, the trade-off between the degradation of the modulation level and the reduction of traffic bursts is investigated. Simulation results show that the proposed scheme can effectively reduce the traffic bursts and, therefore, increase resource utilization.
---
paper_title: eOCSA: An algorithm for burst mapping with strict QoS requirements in IEEE 802.16e Mobile WiMAX networks
paper_content:
Mobile WiMAX systems based on the IEEE 802.16e standard require all downlink allocations to be mapped to a rectangular region in the two-dimensional subcarrier-time map. Many published resource allocation schemes ignore this requirement. It is possible that the allocations when mapped to rectangular regions may exceed the capacity of the downlink frame, and the QoS of some flows may be violated. The rectangle mapping problem is a variation of the bin or strip packing problem, which is known to be NP-complete. In a previous paper, an algorithm called OCSA (One Column Striping with non-increasing Area first mapping) for rectangular mapping was introduced. In this paper, we propose an enhanced version of the algorithm. Similar to OCSA, the enhanced algorithm is also simple and fast to implement; however, eOCSA considers the allocation of an additional resource to ensure the QoS. eOCSA also avoids an enumeration process and so lowers the complexity to O(n^2).
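A rough sketch of the mapping strategy described above (serve allocations in non-increasing order of area, shape each one as a rectangle no taller than the frame, and stripe the rectangles column-by-column) is given below. The frame dimensions and the right-to-left column order are assumptions for illustration, and the sketch omits eOCSA's filling of leftover space above short bursts; it is not a faithful reimplementation.

```python
import math

def map_allocations(demands, n_rows=30, n_columns=24):
    """eOCSA-flavoured sketch: serve the largest allocation first, shape each one
    as a rectangle whose height is at most the frame height, and place rectangles
    column-by-column starting from the right edge of the downlink subframe."""
    order = sorted(range(len(demands)), key=lambda i: demands[i], reverse=True)
    right_edge = n_columns           # next free column (packing right to left)
    bursts = []                      # (demand_id, first_col, width, height, padding)
    for i in order:
        slots = demands[i]
        width = math.ceil(slots / n_rows)          # minimum width to fit the frame
        height = math.ceil(slots / width)          # then the minimum height for it
        if right_edge - width < 0:
            break                                  # no space left in this frame
        first_col = right_edge - width
        bursts.append((i, first_col, width, height, width * height - slots))
        right_edge = first_col
    return bursts

for burst in map_allocations([250, 90, 60, 33, 17, 8]):
    print(burst)
```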
---
paper_title: A downlink data region allocation algorithm for IEEE 802.16e OFDMA
paper_content:
IEEE 802.16e specifies a connection-oriented centralized medium access control (MAC) protocol, based on time division multiple access (TDMA), which adds mobility support to the MAC protocol defined by the IEEE 802.16 standard for fixed broadband wireless access. To this end, orthogonal frequency division multiple access (OFDMA) is specified as the air interface. In OFDMA, the MAC frame extends over two dimensions: time, in units of OFDMA symbols, and frequency, in units of logical sub-channels. The base station (BS) is responsible for allocating data into the MAC frames so as to meet the quality of service (QoS) guarantees of the admitted connections of the mobile stations (MSs). This is done on a frame-by-frame basis by defining the content of map messages, which advertise the position and shape of data regions reserved for transmission to/from MSs. We refer to the latter operation as data region allocation. In this paper, we propose a sample data region allocation algorithm (SDRA), and we evaluate its performance by means of Monte Carlo analysis. The effectiveness of SDRA is assessed in several scenarios, involving mixed voice over IP (VoIP) and best effort MSs, different modulations, and frequency re-use plans.
---
paper_title: Two-dimensional downlink burst construction in IEEE 802.16 networks
paper_content:
Several burst construction algorithms for orthogonal frequency division multiple access have been proposed. However, these algorithms do not meet the downlink burst characteristics specified in the IEEE 802.16 standard. This article therefore proposes the best corner-oriented algorithm (BCO). BCO not only complies with the downlink burst characteristics, but also addresses three issues to obtain high throughput, as follows: BCO maintains all free slots as a continuous area by constructing each burst in a corner of the available bandwidth area, minimizing external fragmentation; BCO shrinks the burst area to minimize internal fragmentation once the requested bandwidth has been satisfied; and, to exploit contiguous subchannels with good channel quality, BCO ensures that each burst adopts an optimal modulation and coding scheme by selecting the corner that generates the maximal throughput. The simulation results indicate that BCO achieves 2-9 times the throughput achieved by the previous algorithms under a heavy load.
---
paper_title: An Efficient Downlink Data Mapping Algorithm for IEEE802.16e OFDMA Systems
paper_content:
In the IEEE 802.16e OFDMA systems, the data mapping algorithm maps the data to the appropriate rectangular regions in the two-dimensional matrix of time and frequency domain. Each region is described by an Information Element (IE) which is used for signaling and occupies a slot. The IEs as well as vacant slots in the allocated rectangular region result in a substantial amount of overhead. In order to minimize the overhead so as to increase system throughput, the paper proposes a "Mapping with Appropriate Truncation and Sort" (MATS) algorithm. Extensive simulations are conducted in terms of mapping efficiency, mapping cost and system throughput to evaluate the performance of MATS. The results show that compared with Raster, MATS can increase the mapping efficiency by up to 2.4% and reduce the mapping cost by up to 80% and 37% for constant bit rate traffic and variable bit rate traffic, respectively. Moreover, system throughput is increased by more than 3% in the 10 MHz bandwidth network. Consequently, MATS can substantially reduce the overhead and achieve high system throughput.
---
paper_title: Piggybacking Scheme of MAP IE for Minimizing MAC Overhead in the IEEE 802.16e OFDMA Systems
paper_content:
This paper analyzes the Media Access Control (MAC) overhead of IEEE 802.16e systems and shows that it critically degrades system performance. MAP, a control message about resource allocation, is broadcast with high robustness and uses a great amount of radio resource. This paper also proposes an advanced scheme which transmits the MAP IE, a component of the MAP, piggybacked on data packets, and uses fast feedback to preserve the transmission reliability of the MAP IE. MAP IEs can then be transmitted at a high data rate, and the amount of radio resource needed for transmitting MAP IEs becomes extremely small. Numerical analysis and simulation results show that the proposed scheme can significantly reduce the MAC overhead.
---
|
Title: Survey of Down Link Data Allocation Algorithms in IEEE 802.16 WiMAX
Section 1: INTRODUCTION
Description 1: Provide an overview of WiMAX technology, its advantages, and the challenges associated with resource allocation in WiMAX systems. Introduce the importance of scheduling and data packing algorithms, and highlight the paper's focus on downlink packing algorithms.
Section 2: WIMAX OVERVIEW
Description 2: Offer a general description of WiMAX as a wireless broadband solution, its standards, and key features.
Section 3: PHY layer overview
Description 3: Detail the physical layer specifications of WiMAX including the operating bands, OFDMA, modulation schemes, and frame structures.
Section 4: MAC layer overview
Description 4: Discuss the MAC layer, including its sublayers, functions such as PDU construction, QoS scheduling, call admission control, and bandwidth allocation.
Section 5: WIMAX FRAME ALLOCATION
Description 5: Explain the frame allocation process in TDD WiMAX, including the use of DL-MAP and UL-MAP messages, and the packing of data into downlink and uplink frames.
Section 6: DL-Map and its overhead
Description 6: Illustrate how WiMAX assigns slots to users in downlink, the structure and impact of DL-MAP on the overall system, and an example scenario to highlight DL-MAP overhead.
Section 7: Wasted slots in the downlink frame
Description 7: Provide a scenario to show how the packing process can lead to wasted slots in the downlink frame and discuss the implications for system performance.
Section 8: SURVEY OF DOWNLINK PACKING ALGORITHMS
Description 8: Conduct a comprehensive survey of various downlink packing algorithms and discuss their focus and efficiency with respect to factors such as fragmentation, power consumption, DL-MAP overhead, and cross-layer design considerations.
Section 9: Some considerations and design factors for downlink packing algorithm
Description 9: Discuss the considerations and design factors essential for developing an efficient downlink packing algorithm, emphasizing the trade-offs among different performance aspects.
Section 10: CONCLUSION
Description 10: Summarize the study, reiterate the importance of optimizing trade-offs in downlink data packing algorithms, and point out the need for comprehensive comparative analysis of existing algorithms under varied traffic classes.
|
Multivariate Statistical Process Control Charts and the Problem of Interpretation: A Short Overview and Some Applications in Industry
| 5 |
---
paper_title: RESEARCH ISSUES AND IDEAS IN STATISTICAL PROCESS CONTROL
paper_content:
An overview is given of current research on control charting methods for process monitoring and improvement. A historical perspective and ideas for future research are also given. Research topics include variable sample size and sampling interval methods ..
---
paper_title: Multivariate SPC Methods for Process and Product Monitoring
paper_content:
Statistical process control methods for monitoring processes with multivariate measurements in both the product quality variable space and the process variable space are considered. Traditional multivariate control charts based on X2 and T2 statistics ..
---
paper_title: Generalized contribution plots in multivariate statistical process monitoring
paper_content:
This paper discusses contribution plots for both the D-statistic and the Q-statistic in multivariate statistical process control of batch processes. Contributions of process variables to the D-statistic are generalized to any type of latent variable model with or without orthogonality constraints. The calculation of contributions to the Q-statistic is discussed. Control limits for both types of contributions are introduced to show the relative importance of a contribution compared to the contributions of the corresponding process variables in the batches obtained under normal operating conditions. The contributions are introduced for off-line monitoring of batch processes, but can easily be extended to on-line monitoring and to continuous processes, as is shown in this paper.
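As a concrete illustration of variable contributions, the sketch below fits a principal-component model on in-control data and then splits the Q-statistic (squared prediction error) of a new observation into per-variable contributions. The two-component model, the simulated data, and the injected fault are arbitrary assumptions used only to show the mechanics.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))                 # stand-in for in-control training data

# Fit a PCA model on standardized in-control data
mu, sigma = X.mean(axis=0), X.std(axis=0, ddof=1)
Z = (X - mu) / sigma
_, _, Vt = np.linalg.svd(Z, full_matrices=False)
P = Vt[:2].T                                  # loadings of the first two components

def q_contributions(x_new):
    """Residual of the PCA model and per-variable contributions to Q = SPE."""
    z = (x_new - mu) / sigma
    residual = z - P @ (P.T @ z)              # part of z not explained by the model
    contrib = residual ** 2                   # contribution of each variable to Q
    return contrib.sum(), contrib

# A disturbed observation: variable 3 is shifted far from its in-control level
x = X[0].copy()
x[3] += 4 * sigma[3]
Q, contrib = q_contributions(x)
print(f"Q = {Q:.2f}")
print("per-variable contributions:", np.round(contrib, 2))
```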
---
paper_title: Comparisons of Multivariate CUSUM Charts
paper_content:
We consider several distinct approaches for controlling the mean of a multivariate normal process including two new and distinct multivariate CUSUM charts, several multiple univariate CUSUM charts, and a Shewhart x2 control chart. The performances of th..
---
paper_title: A review of multivariate control charts
paper_content:
A review of the literature on control charts for multivariate quality control (MQC) is given, with a concentration on developments occurring since the mid-1980s. Multivariate cumulative sum (CUSUM) control procedures and a multivariate exponentially weighted moving average (EWMA) control chart are reviewed and recommendations are made regarding their use. Several recent articles that give methods for interpreting an out-of-control signal on a multivariate control chart are analyzed and discussed. Other topics such as the use of principal components and regression adjustment of variables in MQC, as well as frequently used approximations in MQC, are discussed.
---
paper_title: Multivariate generalizations of cumulative sum quality-control schemes
paper_content:
This article presents the design procedures and average run lengths for two multivariate cumulative sum (CUSUM) quality-control procedures. The first CUSUM procedure reduces each multivariate observation to a scalar and then forms a CUSUM of the scalars. The second CUSUM procedure forms a CUSUM vector directly from the observations. These two procedures are compared with each other and with the multivariate Shewhart chart. Other multivariate quality-control procedures are mentioned. Robustness, the fast initial response feature for CUSUM schemes, and combined Shewhart-CUSUM schemes are discussed.
---
paper_title: Multivariate statistical process control—recent results and directions for future research
paper_content:
The performance of a product often depends on several quality characteristics. These characteristics may have interactions. In answering the question “Is the process in control?”, multivariate statistical process control methods take these interactions into account. In this paper, we review several of these multivariate methods and point out where to fill up gaps in the theory. The review includes multivariate control charts, multivariate CUSUM charts, a multivariate MMA chart, and multivariate process capability indices. The most important open question from a practical point of view is how to detect the variables that caused an out-of-control signal. Theoretically, the statistical properties of the methods should be investigated more profoundly.
---
paper_title: Multivariate CUSUM Quality-Control Procedures
paper_content:
It is a common practice to use, simultaneously, several one-sided or two-sided CUSUM procedures of the type proposed by Page (1954). In this article, this method of control is considered to be a single multivariate CUSUM (MCUSUM) procedure. Methods are given for approximating parameters of the distribution of the minimum of the run lengths of the univariate CUSUM charts. Using a new method of comparing multivariate control charts, it is shown that an MCUSUM procedure is often preferable to Hotelling's T2 procedure for the case in which the quality characteristics are bivariate normal random variables.
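One widely cited way to form a single multivariate CUSUM vector directly from the observations is Crosier's recursion, and the sketch below implements that recursion. The reference value k, the decision limit h, and the simulated in-control and shifted data are arbitrary choices for illustration, not values recommended by the paper.

```python
import numpy as np

def mcusum(X, mean0, cov, k=0.5, h=5.0):
    """Crosier-style multivariate CUSUM: shrink the accumulated deviation vector
    toward zero by the reference value k and signal when its statistical distance
    from the in-control mean exceeds h. Returns the run length (None if no signal)."""
    cov_inv = np.linalg.inv(cov)
    s = np.zeros_like(mean0, dtype=float)
    for t, x in enumerate(X, start=1):
        d = s + (x - mean0)
        c = np.sqrt(d @ cov_inv @ d)
        s = np.zeros_like(s) if c <= k else d * (1.0 - k / c)
        y = np.sqrt(s @ cov_inv @ s)
        if y > h:
            return t
    return None

rng = np.random.default_rng(1)
p, cov = 3, np.eye(3)
in_control = rng.multivariate_normal(np.zeros(p), cov, size=50)
shifted = rng.multivariate_normal(np.array([1.0, 0.0, 0.0]), cov, size=100)
print("signal at observation:", mcusum(np.vstack([in_control, shifted]), np.zeros(p), cov))
```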
---
paper_title: Multivariate Statistical Process Control with Industrial Applications
paper_content:
This applied, self-contained text provides detailed coverage of the practical aspects of multivariate statistical process control (MVSPC) based on the application of Hotelling's T2 statistic. MVSPC is the application of multivariate statistical techniques to improve the quality and productivity of an industrial process. The authors, leading researchers in this area who have developed major software for this type of charting procedure, provide valuable insight into the T2 statistic. Intentionally including only a minimal amount of theory, they lead readers through the construction and monitoring phases of the T2 control statistic using numerous industrial examples taken primarily from the chemical and power industries. These examples are applied to the construction of historical data sets to serve as a point of reference for the control procedure and are also applied to the monitoring phase, where emphasis is placed on signal location and interpretation in terms of the process variables. Specifically devoted to the T2 methodology, Multivariate Statistical Process Control with Industrial Applications is the only book available that concisely and thoroughly presents such topics as how to construct a historical data set; how to check the necessary assumptions used with this procedure; how to chart the T2 statistic; how to interpret its signals; how to use the chart in the presence of autocorrelated data; and how to apply the procedure to batch processes. The book comes with a CD-ROM containing a 90-day demonstration version of the QualStat multivariate SPC software specifically designed for the application of T2 control procedures. The CD-ROM is compatible with Windows 95, Windows 98, Windows Me Millennium Edition, and Windows NT operating systems.
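A minimal Phase II T2 monitoring routine in the spirit of the procedure outlined above is sketched below. The simulated historical data, the alpha level, and the F-distribution-based control limit for individual observations are standard textbook choices assumed here for illustration, not values taken from the book.

```python
import numpy as np
from scipy.stats import f

rng = np.random.default_rng(2)
historical = rng.multivariate_normal([0, 0, 0], np.eye(3), size=100)   # Phase I data

xbar = historical.mean(axis=0)
S_inv = np.linalg.inv(np.cov(historical, rowvar=False))
m, p = historical.shape

# Phase II control limit for individual observations (F-distribution based)
alpha = 0.005
ucl = (p * (m + 1) * (m - 1)) / (m * (m - p)) * f.ppf(1 - alpha, p, m - p)

def t2(x):
    d = x - xbar
    return float(d @ S_inv @ d)

new_obs = rng.multivariate_normal([1.5, 0, 0], np.eye(3), size=5)       # shifted process
for i, x in enumerate(new_obs, start=1):
    stat = t2(x)
    print(f"obs {i}: T2 = {stat:5.2f}  {'SIGNAL' if stat > ucl else 'in control'} (UCL = {ucl:.2f})")
```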
---
paper_title: Improving the Performance of the Chi-square Control Chart via Runs Rules
paper_content:
The most popular multivariate process monitoring and control procedure used in industry is the chi-square control chart. As with most Shewhart-type control charts, the major disadvantage of the chi-square control chart is that it only uses the information contained in the most recently inspected sample; as a consequence, it is not very efficient in detecting gradual or small shifts in the process mean vector. During the last decades, the performance improvement of the chi-square control chart has attracted continuous research interest. In this paper we introduce a simple modification of the chi-square control chart which makes use of the notion of runs to improve the sensitivity of the chart in the case of small and moderate process mean vector shifts.
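To make the idea concrete, the sketch below charts the chi-square statistic against its usual upper control limit and adds one simple supplementary runs rule (signal when r consecutive points exceed a warning limit). The particular warning limit, the value of r, and the simulated shift are illustrative assumptions, not the specific rule proposed in the paper.

```python
import numpy as np
from scipy.stats import chi2

def chi_square_chart(X, mean0, cov0, alpha=0.005, r=3, warn_q=0.90):
    """Chi-square control chart with a supplementary runs rule: signal either when
    one point exceeds the UCL or when r consecutive points exceed a warning limit."""
    cov_inv = np.linalg.inv(cov0)
    p = len(mean0)
    ucl = chi2.ppf(1 - alpha, p)
    warn = chi2.ppf(warn_q, p)
    above_warn = 0
    for t, x in enumerate(X, start=1):
        d = x - mean0
        stat = d @ cov_inv @ d
        above_warn = above_warn + 1 if stat > warn else 0
        if stat > ucl or above_warn >= r:
            return t, stat
    return None, None

rng = np.random.default_rng(3)
mean0, cov0 = np.zeros(2), np.eye(2)
data = np.vstack([rng.multivariate_normal(mean0, cov0, size=30),
                  rng.multivariate_normal([0.8, 0.8], cov0, size=60)])   # small shift
print("signal at observation:", chi_square_chart(data, mean0, cov0))
```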
---
paper_title: Multivariate Profile Charts for Statistical Process Control
paper_content:
The multivariate profile (MP) chart is a new control chart for simultaneous display of univariate and multivariate statistics. It is designed to analyze and display extended structures of statistical process control data for various cases of grouping, reference distribution, and use of nominal specifications. For each group of observations, the scaled deviations from reference values are portrayed together as a modified profile plot symbol. The vertical location of the symbol is determined by the multivariate distance of the vector of means from the reference values. The graphical display in the MP chart enjoys improved visual characteristics as compared with previously suggested methods. Moreover, the perceptual tasks required by the use of the MP chart provide higher accuracy in retrieving the quantitative information. This graphical display is used to display other combined univariate and multivariate statistics, such as measures of dispersion, principal components, and cumulative sums
---
paper_title: Multivariate Quality Control Using Finite Intersection Tests
paper_content:
Multivariate quality control problems involve the evaluation of a process based on the simultaneous behavior of p variables. Most multivariate quality control procedures evaluate the in-control or out-of-control condition based upon an overall statistic..
---
paper_title: Decomposition of T2 for Multivariate Control Chart Interpretation
paper_content:
Cumulative sum (CUSUM) control charts have been widely used for monitoring the process mean. Relatively little attention has been given to the use of CUSUM charts for monitoring the process variance. The properties of CUSUM charts based on the logarithm..
---
paper_title: A Practical Approach for Interpreting Multivariate T2 Control Chart Signals
paper_content:
A persistent problem in multivariate control chart procedures is the interpretation of a signal. Determining which variable or group of variables is contributing to the signal can be a difficult task for the practitioner. However, a procedure for decomp..
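The decomposition idea behind this line of work can be sketched numerically: for each variable, compare the full T2 value with the T2 value computed after dropping that variable; the difference is the conditional term measuring how much that variable, adjusted for the others, adds to the signal. The data, covariance structure, and injected fault below are placeholders for illustration only.

```python
import numpy as np

def t2_stat(x, mean, cov, idx):
    """T2 computed on the subset of variables given by idx."""
    d = (x - mean)[idx]
    return float(d @ np.linalg.inv(cov[np.ix_(idx, idx)]) @ d)

def conditional_terms(x, mean, cov):
    """MYT-style conditional terms: T2(all) - T2(all but j) for each variable j."""
    p = len(mean)
    full = t2_stat(x, mean, cov, list(range(p)))
    terms = {}
    for j in range(p):
        rest = [i for i in range(p) if i != j]
        terms[j] = full - t2_stat(x, mean, cov, rest)
    return full, terms

rng = np.random.default_rng(4)
historical = rng.multivariate_normal(np.zeros(3), [[1, .6, .2], [.6, 1, .3], [.2, .3, 1]], 200)
mean, cov = historical.mean(axis=0), np.cov(historical, rowvar=False)

x = historical[0].copy()
x[1] += 3.0                       # disturb variable 1
full, terms = conditional_terms(x, mean, cov)
print(f"T2 = {full:.2f}")
for j, val in terms.items():
    print(f"variable {j}: conditional contribution {val:.2f}")
```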
---
paper_title: Investigation and characterization of a control scheme for multivariate quality control
paper_content:
A new scheme for multivariate statistical quality control is investigated and characterized. The control scheme consists of three steps and it will identify any out-of-control samples, select the subset of variables that are out of control, and diagnose the out-of-control variables. A new control variable selection algorithm, the backward selection algorithm, and a new control variable diagnosis method, the hyperplane methods, are proposed. It is shown by simulation that the control scheme is useful in cases where the process variables are correlated and where they are uncorrelated.
---
paper_title: Identification and Quantification in Multivariate Quality Control Problems
paper_content:
Many quality control problems are multivariate in character since the quality of a given product or object consists simultaneously of more than one variable. A good multivariate quality control procedure should possess three important properties, namely, the control of the overall error rate, the easy identification of errant variables, and the easy quantification of any changes in the variable means. In this paper a procedure is suggested based on the construction of exact simultaneous confidence intervals for each of the variable means that meets each of these three goals. Both parametric and nonparametric procedures are considered, and critical point evaluation through tables, numerical integration, and simulation is discussed. Various examples of the implementation of the procedure are given.
---
paper_title: IDENTIFYING THE OUT OF CONTROL VARIABLE IN A MULTIVARIATE CONTROL CHART
paper_content:
The identification of the out-of-control variable, or variables, after a multivariate control chart signals has been an appealing subject for many researchers in recent years. In this paper we propose a new method for approaching this problem based on principal components analysis. Theoretical control limits are derived and a detailed investigation of the properties and the limitations of the new method is given. A graphical technique which can be applied in some of these limiting situations is also provided.
---
paper_title: Improving the Sensitivity of the T2 Statistic in Multivariate Process Control
paper_content:
The T2 statistic in multivariate process control is a function of the residuals taken from a set of linear regressions of the process variables. These residuals are contained in the conditional T2 terms of the orthogonal decomposition of the statistic...
---
paper_title: Monitoring a Multivariate Step Process
paper_content:
The productivity of an industrial processing unit often depends on equipment that changes over time. These changes may not be consistent and, in many cases, may appear to occur in stages. Although changes in the process levels within each stage may ap..
---
paper_title: Multivariate Control Charts for Individual Observations
paper_content:
When p correlated process characteristics are being measured simultaneously, often individual observations are initially collected. The process data are monitored and special causes of variation are identified in order to establish control and to obtain..
---
paper_title: Visualization of multivariate data with radial plots using SAS
paper_content:
Data visualization tools can provide very powerful information and insight when performing data analysis. In many situations, a set of data can be adequately analyzed through data visualization methods alone. In other situations, data visualization can be used for preliminary data analysis. In this paper, radial plots are developed as a SAS-based data visualization tool that can improve one's ability to monitor, analyze and control a process. Using the program developed in this research, we present two examples of data analysis using radial plots; the first example is based on data from a particle board manufacturing process and the second example is a business process for monitoring the time-varying level of stock return data.
---
paper_title: Multivariate Process Monitoring Using the Dynamic Biplot
paper_content:
In this article, we present a method for monitoring multivariate process data based on the Gabriel biplot. In contrast to existing methods that are based on some form of dimension reduction, we use reduction to two dimensions for displaying the state of the process but all the data for determining whether it is in a state of statistical control. This approach allows us to detect changes in location, variation, and correlational structure accurately yet display a large amount of information concisely. We illustrate the use of the biplot on an example of industrial data and also discuss some of the issues related to a practical implementation of the method.
---
|
Title: Multivariate Statistical Process Control Charts and the Problem of Interpretation: A Short Overview and Some Applications in Industry
Section 1: INTRODUCTION
Description 1: Introduce the basic concept of Statistical Process Control (SPC), highlight the limitations of univariate SPC in process industries, and provide an overview of multivariate SPC techniques and their historical development.
Section 2: CONTROLLING AND MONITORING MULTIVARIATE PROCESSES USING CONTROL CHARTS
Description 2: Describe the implementation of multivariate statistical process control using control charts, including detailed explanations of Phase I and Phase II control charting, and discuss different types of multivariate control charts such as Shewhart, CUSUM, and EWMA.
Section 3: IDENTIFYING THE OUT-OF-CONTROL VARIABLE
Description 3: Present various methods for detecting which specific variables are out of control when an out-of-control signal is identified in a multivariate control chart, including Bonferroni limits, principal component analysis, and contribution plots.
Section 4: APPLICATIONS OF MULTIVARIATE SPC TECHNIQUES IN THE INDUSTRIAL ENVIRONMENT
Description 4: Discuss real-world applications of multivariate SPC techniques in industry, focusing on a three-variable case study in a chemical process, and describe Phase I and Phase II analysis procedures along with identification methods for out-of-control variables.
Section 5: COMMENTS
Description 5: Offer concluding remarks and discuss potential areas for further research in the domain of multivariate SPC, such as robust design, nonparametric control charts, and improved methods for interpreting out-of-control signals.
|
A Review of Closed-Loop Algorithms for Glycemic Control in the Treatment of Type 1 Diabetes
| 6 |
---
paper_title: Closing the Loop: The Adicol Experience
paper_content:
The objective of the project Advanced Insulin Infusion using a Control Loop (ADICOL) was to develop a treatment system that continuously measures and controls the glucose concentration in subjects with type 1 diabetes. The modular concept of the ADICOL's extracorporeal artificial pancreas consisted of a minimally invasive subcutaneous glucose system, a handheld PocketPC computer, and an insulin pump (D-Tron, Disetronic, Burgdorf, Switzerland) delivering subcutaneously insulin lispro. The present paper describes a subset of ADICOL activities focusing on the development of a glucose controller for semi-closed-loop control, an in silico testing environment, clinical testing, and system integration. An incremental approach was adopted to evaluate experimentally a model predictive glucose controller. A feasibility study was followed by efficacy studies of increasing complexity. The ADICOL project demonstrated feasibility of a semi-closed-loop glucose control during fasting and fed conditions with a wearable, m...
---
paper_title: The future of open- and closed-loop insulin delivery systems.
paper_content:
We have analysed several aspects of insulin-dependent diabetes mellitus, including the glucose metabolic system, diabetes complications, and previous and ongoing research aimed at controlling glucose in diabetic patients. An expert review of various models and control algorithms developed for the glucose homeostasis system is presented, along with an analysis of research towards the development of a polymeric insulin infusion system. Recommendations for future directions in creating a true closed-loop glucose control system are presented, including the development of multivariable models and control systems to more accurately describe and control the multi-metabolite, multi-hormonal system, as well as in-vivo assessments of implicit closed-loop control systems.
---
paper_title: Hypoglycaemia: The limiting factor in the glycaemic management of Type I and Type II Diabetes*
paper_content:
Hypoglycaemia is the limiting factor in the glycaemic management of diabetes. Iatrogenic hypoglycaemia is typically the result of the interplay of insulin excess and compromised glucose counterregulation in Type I (insulin-dependent) diabetes mellitus. Insulin concentrations do not decrease and glucagon and epinephrine concentrations do not increase normally as glucose concentrations decrease. The concept of hypoglycaemia-associated autonomic failure (HAAF) in Type I diabetes posits that recent antecedent iatrogenic hypoglycaemia causes both defective glucose counterregulation (by reducing the epinephrine response in the setting of an absent glucagon response) and hypoglycaemia unawareness (by reducing the autonomic and the resulting neurogenic symptom responses). Perhaps the most compelling support for HAAF is the finding that as little as 2 to 3 weeks of scrupulous avoidance of hypoglycaemia reverses hypoglycaemia unawareness and improves the reduced epinephrine component of defective glucose counterregulation in most affected patients. The mediator and mechanism of HAAF are not known but are under active investigation. The glucagon response to hypoglycaemia is also reduced in patients approaching the insulin deficient end of the spectrum of Type II (non-insulin-dependent) diabetes mellitus, and glycaemic thresholds for autonomic (including epinephrine) and symptomatic responses to hypoglycaemia are shifted to lower plasma glucose concentrations after hypoglycaemia in Type II diabetes. Thus, patients with advanced Type II diabetes are also at risk for HAAF. While it is possible to minimise the risk of hypoglycaemia by reducing risks ‐ including a 2 to 3 week period of scrupulous avoidance of hypoglycaemia in patients with hypoglycaemia unawareness ‐ methods that provide glucose-regulated insulin replacement or secretion are needed to eliminate hypoglycaemia and maintain euglycaemia over a lifetime of diabetes. [Diabetologia (2002) 45:937‐948]
---
paper_title: Physiologic evaluation of factors controlling glucose tolerance in man: measurement of insulin sensitivity and beta-cell glucose sensitivity from the response to intravenous glucose.
paper_content:
The quantitative contributions of pancreatic responsiveness and insulin sensitivity to glucose tolerance were measured using the "minimal modeling technique" in 18 lean and obese subjects (88-206% ideal body wt). The individual contributions of insulin secretion and action were measured by interpreting the dynamics of plasma glucose and insulin during the intravenous glucose tolerance test in terms of two mathematical models. One, the insulin kinetics model, yields parameters of first-phase (phi 1) and second-phase (phi 2) responsivity of the beta-cells to glucose. The other glucose kinetics model yields the insulin sensitivity parameters, SI. Lean and obese subjects were subdivided into good (KG greater than 1.5) and lower (KG less than 1.5) glucose tolerance groups. The etiology of lower glucose tolerance was entirely different in lean and obese subjects. Lean, lower tolerance was related to pancreatic insufficiency (phi 2 77% lower than in good tolerance controls [P less than 0.03]), but insulin sensitivity was normal (P greater than 0.5). In contrast, obese lower tolerance was entirely due to insulin resistance (SI diminished 60% [P less than 0.01]); pancreatic responsiveness was not different from lean, good tolerance controls (phi 1: P greater than 0.06; phi 2: P greater than 0.40). Subjects (regardless of weight) could be segregated into good and lower tolerance by the product of second-phase beta-cell responsivity and insulin sensitivity (phi 2 . SI). Thus, these two factors were primarily responsible for overall determination of glucose tolerance. The effect of phi 1 was to modulate the KG value within those groups whose overall tolerance was determined by phi 2 . SI. This phi 1 modulating influence was more pronounced among insulin sensitive (phi 1 vs. KG, r = 0.79) than insulin resistant (obese, low tolerance; phi 1 vs. KG, r = 0.91) subjects. This study demonstrates the feasibility of the minimal model technique to determine the etiology of impaired glucose tolerance.
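The "minimal model" referred to above describes glucose disappearance with two differential equations, dG/dt = -(p1 + X)G + p1*Gb and dX/dt = -p2*X + p3*(I - Ib), where X is insulin action in a remote compartment. The sketch below integrates these equations for an assumed plasma insulin profile; the parameter values and basal levels are typical literature magnitudes chosen only for illustration, not fitted values from this study.

```python
import numpy as np
from scipy.integrate import odeint

# Illustrative minimal-model parameters (per-minute units) and basal levels
p1, p2, p3 = 0.03, 0.02, 1.0e-5
Gb, Ib = 90.0, 10.0           # basal glucose (mg/dL) and insulin (uU/mL)

def insulin(t):
    """Assumed plasma insulin profile: a transient excursion above basal."""
    return Ib + 60.0 * np.exp(-t / 30.0)

def minimal_model(y, t):
    G, X = y
    dG = -(p1 + X) * G + p1 * Gb
    dX = -p2 * X + p3 * (insulin(t) - Ib)
    return [dG, dX]

t = np.linspace(0, 180, 181)                       # three hours, 1-min grid
G, X = odeint(minimal_model, [250.0, 0.0], t).T    # start from post-bolus glucose
print(f"glucose after 60 min: {G[60]:.1f} mg/dL, after 180 min: {G[180]:.1f} mg/dL")
```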
---
paper_title: Reduced beta-cell compensation to the insulin resistance associated with obesity in members of caucasian familial type 2 diabetic kindreds.
paper_content:
OBJECTIVE: Both obesity and a family history of diabetes reduce insulin sensitivity, but the impact of obesity on insulin secretion among individuals predisposed to diabetes is uncertain. We used a pedigree-based approach to test the hypothesis that beta-cell compensation to the insulin resistance associated with obesity is defective among individuals predisposed to diabetes by virtue of a strong family history of type 2 diabetes before the development of diabetes or glucose intolerance. RESEARCH DESIGN AND METHODS: A total of 126 members of 26 families ascertained for at least a sib pair with type 2 diabetes with onset before age 65 years underwent a tolbutamide-modified frequently sampled intravenous glucose tolerance test (FSIGT). Family members included 26 individuals with impaired glucose tolerance and 100 individuals with normal glucose tolerance (NGT). The acute insulin response to glucose (AIRglucose) was determined and insulin sensitivity (S(I)) estimated by minimal model analysis of FSIGT data. The beta-cell compensation for insulin sensitivity was estimated from the disposition index (DI), calculated as the product of S(I) and AIRglucose. Obesity was measured by BMI. RESULTS: Among all individuals, BMI was a significant predictor of both S(I) and AIRglucose, as expected. However, BMI also significantly predicted DI (P = 0.002) after correcting for age, sex, family membership, and glucose tolerance status. The relationship of BMI and DI was confirmed in 85 individuals with NGT who were aged 30), S(I) decreased progressively and significantly with obesity whereas AIRglucose rose significantly from lean to most obese classes. In contrast to the expectation of complete beta-cell compensation with obesity D1 fell significantly (P = 0.004) among obese family members. This relationship was not observed in control subjects. CONCLUSIONS: Individuals with a genetic predisposition to diabetes show a reduced beta-cell compensatory response to the reduced insulin sensitivity associated with obesity. We propose that this impaired compensation may be one manifestation of the underlying genetic defect in susceptible individuals. This finding helps explain the multiplicative effects of family history and obesity on risk of type 2 diabetes.
---
paper_title: Continuous Glucose Monitoring and Intensive Treatment of Type 1 Diabetes
paper_content:
BACKGROUND: The value of continuous glucose monitoring in the management of type 1 diabetes mellitus has not been determined. METHODS: In a multicenter clinical trial, we randomly assigned 322 adults and children who were already receiving intensive therapy for type 1 diabetes to a group with continuous glucose monitoring or to a control group performing home monitoring with a blood glucose meter. All the patients were stratified into three groups according to age and had a glycated hemoglobin level of 7.0 to 10.0%. The primary outcome was the change in the glycated hemoglobin level at 26 weeks. RESULTS: The changes in glycated hemoglobin levels in the two study groups varied markedly according to age group (P=0.003), with a significant difference among patients 25 years of age or older that favored the continuous-monitoring group (mean difference in change, -0.53%; 95% confidence interval [CI], -0.71 to -0.35; P<0.001). The between-group difference was not significant among those who were 15 to 24 years of age (mean difference, 0.08; 95% CI, -0.17 to 0.33; P=0.52) or among those who were 8 to 14 years of age (mean difference, -0.13; 95% CI, -0.38 to 0.11; P=0.29). Secondary glycated hemoglobin outcomes were better in the continuous-monitoring group than in the control group among the oldest and youngest patients but not among those who were 15 to 24 years of age. The use of continuous glucose monitoring averaged 6.0 or more days per week for 83% of patients 25 years of age or older, 30% of those 15 to 24 years of age, and 50% of those 8 to 14 years of age. The rate of severe hypoglycemia was low and did not differ between the two study groups; however, the trial was not powered to detect such a difference. CONCLUSIONS: Continuous glucose monitoring can be associated with improved glycemic control in adults with type 1 diabetes. Further work is needed to identify barriers to effectiveness of continuous monitoring in children and adolescents. (ClinicalTrials.gov number, NCT00406133.)
---
paper_title: An overview of pancreatic beta-cell defects in human type 2 diabetes: Implications for treatment
paper_content:
Type 2 diabetes is the most common form of diabetes in humans. It results from a combination of factors that impair beta-cell function and tissue insulin sensitivity. However, growing evidence is showing that the beta-cell is central to the development and progression of this form of diabetes. Reduced islet and/or insulin-containing cell mass or volume in Type 2 diabetes has been reported by several authors. Furthermore, studies with isolated Type 2 diabetic islets have consistently shown both quantitative and qualitative defects of glucose-stimulated insulin secretion. The impact of genotype in affecting beta-cell function and survival is a very fast growing field of research, and several gene polymorphisms have been associated with this form of diabetes. Among acquired factors, glucotoxicity, lipotoxicity and altered IAPP processing are likely to play an important role. Interestingly, however, pharmacological intervention can improve several defects of Type 2 diabetes islet cells in vitro, suggesting that progression of the disease might not be relentless.
---
paper_title: The hot IVGTT two-compartment minimal model: indexes of glucose effectiveness and insulin sensitivity
paper_content:
A two-compartment minimal model (2CMM) has been proposed [A. Caumo and C. Cobelli. Am. J. Physiol. 264 (Endocrinol. Metab. 27): E829–E841, 1993] to describe intravenous glucose tolerance test (IVGTT) labeled (hereafter hot) glucose kinetics. This model, at variance with the one-compartment minimal model (1CMM), allows the estimation of a plausible profile of glucose production. The aim of this study is to show that the 2CMM also allows the assessment of insulin sensitivity (SI), glucose effectiveness (SG), and plasma clearance rate (PCR). The 2CMM was identified on stable-isotope IVGTTs performed in normal subjects (n = 14). Results were (means ± SE) SG = 0.85 ± 0.14 ml ⋅ kg−1 ⋅ min−1, PCR = 2.02 ± 0.14 ml ⋅ kg−1 ⋅ min−1, and SI = 13.83 ± 2.54 × 10−2 ml ⋅ kg−1 ⋅ min−1 ⋅ μU−1 ⋅ ml. The 1CMM was also identified; its glucose effectiveness and insulin sensitivity indexes were SG ⋅ V = 1.36 ± 0.08 ml ⋅ kg−1 ⋅ min−1 and SI ⋅ V = 12.98 ± 2.21 × 10−2 ml ⋅ kg−1 ⋅ min−1 ⋅ μU−1 ⋅ ml, respectively, where V is the 1CMM glucose distribution volume. SG ⋅ V was lower than PCR and higher than the 2CMM SG and did not correlate with either [r = 0.45 (NS) and r = 0.50 (NS), respectively], whereas SI ⋅ V was not different from and was correlated with the 2CMM SI (r = 0.95). The 2CMM SG compares well (r = 0.78; P < 0.001) with PCR normalized by the 2CMM total glucose distribution volume. In conclusion, the 2CMM is a powerful tool to assess glucose metabolism in vivo.
---
paper_title: Coefficients of normal blood glucose regulation.
paper_content:
A previously formulated glucose-insulin feedback theory was simplified with appropriate assumptions for the purpose of determining which physiological sensitivity coefficients dominate the mathematical characteristics of the normal insulin and glucose tolerance curves. It was found from experimental data that these physiological coefficients approximate the well-known critical damping criteria of servomechanism theory. Correlations between theoretical and experimental results were made with some particular solutions of the necessary differential equations, obtained with the aid of an electronic analogue computer. Using a distribution volume of 17.5 liters for the 70-kg adult in three different methods of approach, it was found that the average coefficients of the insulin and glucose responses of the liver, pancreas, and peripheral tissues are approximately α = 0.780 unit/hr/unit, β = 0.208 unit/hr/g, γ = 4.34 g/hr/unit, and δ = 2.92 g/hr/g.
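Read as a linear feedback model in deviations from basal values, the coefficients above correspond, under one common interpretation assumed here, to dI/dt = -alpha*I + beta*G and dG/dt = -gamma*I - delta*G + q(t), where q is an external glucose input. The short simulation below uses the reported coefficients; the sign conventions, initial condition, and units per distribution volume are assumptions for illustration.

```python
import numpy as np
from scipy.integrate import odeint

alpha, beta = 0.780, 0.208      # insulin removal, insulin response to glucose (per hour)
gamma, delta = 4.34, 2.92       # glucose removal by insulin and by glucose itself

def bolie(y, t, q=0.0):
    G, I = y                    # deviations from basal levels
    dG = -delta * G - gamma * I + q
    dI = -alpha * I + beta * G
    return [dG, dI]

t = np.linspace(0, 6, 601)                       # six hours after a glucose impulse
G, I = odeint(bolie, [1.0, 0.0], t).T            # 1 unit excess glucose, basal insulin
print(f"peak insulin deviation: {I.max():.3f} at t = {t[I.argmax()]:.2f} h")
print(f"glucose deviation at 2 h: {G[200]:.4f}")
```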
---
paper_title: A Mathematical Model for the Glucose Induced Insulin Release in Man
paper_content:
The dynamics of the insulin response to intravenously administered glucose were studied in man. It was shown that (a) the insulin response to prolonged stimulation is biphasic; (b) if the glucose stimulus is repeated at short intervals, inhibition of the second response occurs; (c) if longer time-intervals are used, enhancement of the response is noted at the second stimulation. These findings suggest that when the pancreatic islets are exposed to hyperglycaemia, three kinetically distinct phenomena are initiated. Glucose induces almost instantaneous initiation of insulin release. Shortly thereafter, the pancreas enters a refractory phase. Thirdly, and at a later stage, a state of potentiation is built up in the islets. The effect of glucose on insulin synthesis is not considered here. Against this background, and based on an earlier model, a mathematical model for the analysis of the glucose-insulin interplay during glucose infusions was constructed. The model describes the eventual occurrence of glucosuria, changes in the concentration of glucose in its pool, and mimics the effects of regulatory hormones when hypoglycemia appears. Insulin secretion is assumed to be controlled, in a multiplicative manner, by an immediate glucose function, a hypothetical potentiator that is slowly generated by glucose, and a negative factor with a shorter time-course which corresponds to the refractory phase of the pancreas. A three-compartment model is used in the simulation of the metabolism and distribution of insulin after its release. Finally, glucose utilization is described as a multiplicative function related to the prevailing concentrations of glucose in blood and insulin in the extracellular space. This model is able to simulate all the experimental situations described in this report, both in normal man and in the diabetic syndrome, in which insulin secretion shows varying degrees of impairment. The results of the simulation of individual experiments are given either as a set of theoretical parameter values, or described as the insulin response of the model to a standard, hypothetical glucose stimulus. The latter alternative is an attractive method for objectively evaluating the insulin response to a standard glucose load in clinical materials.
---
paper_title: PID controller tuning for the first-order-plus-dead-time process model via Hermite-Biehler theorem.
paper_content:
This paper discusses PID stabilization of a first-order-plus-dead-time (FOPDT) process model using the stability framework of the Hermite-Biehler theorem. The FOPDT model approximates many processes in the chemical and petroleum industries. Using a PID controller and first-order Padé approximation for the transport delay, the Hermite-Biehler theorem allows one to analytically study the stability of the closed-loop system. We derive necessary and sufficient conditions for stability and develop an algorithm for selection of stabilizing feedback gains. The results are given in terms of stability bounds that are functions of plant parameters. Sensitivity and disturbance rejection characteristics of the proposed PID controller are studied. The results are compared with established tuning methods such as Ziegler-Nichols, Cohen-Coon, and internal model control.
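The closed-loop setting analysed in the paper can be reproduced numerically: approximate the dead time with a first-order Pade term, form the closed-loop characteristic polynomial for a PID controller, and check whether all of its roots lie in the left half plane. The plant parameters and controller gains below are arbitrary illustrative values, and this numerical root check merely stands in for the analytical Hermite-Biehler conditions derived in the paper.

```python
import numpy as np

def pid_fopdt_stable(K, tau, L, kp, ki, kd):
    """Check closed-loop stability of a PID controller on a first-order-plus-dead-time
    plant, with the dead time replaced by a first-order Pade approximation.
    Characteristic equation: s*(tau*s+1)*(1+L*s/2) + K*(kd*s^2+kp*s+ki)*(1-L*s/2) = 0."""
    open_loop_den = np.polymul([1, 0], np.polymul([tau, 1], [L / 2, 1]))
    open_loop_num = K * np.polymul([kd, kp, ki], [-L / 2, 1])
    char_poly = np.polyadd(open_loop_den, open_loop_num)
    roots = np.roots(char_poly)
    return bool(np.all(roots.real < 0)), roots

# Illustrative FOPDT process (gain 2, time constant 4, dead time 1) and two gain sets
for gains in [(1.0, 0.5, 0.5), (6.0, 4.0, 0.5)]:
    stable, roots = pid_fopdt_stable(2.0, 4.0, 1.0, *gains)
    print(gains, "stable" if stable else "unstable", np.round(roots, 3))
```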
---
paper_title: Modeling β-Cell Insulin Secretion - Implications for Closed-Loop Glucose Homeostasis
paper_content:
Glucose sensing and insulin delivery technology can potentially be linked to form a closed-loop insulin delivery system. Ideally, such a system would establish normal physiologic glucose profiles. To this end, a model of β-cell secretion can potentially provide insight into the preferred structure of the insulin delivery algorithm. Two secretion models were evaluated for their ability to describe plasma insulin dynamics during hyperglycemic clamps (humans; n = 7), and for their ability to establish and maintain fasting euglycemia under conditions simulated by the minimal model. The first β-cell model (SD) characterized insulin secretion as a static component that had a delayed response to glucose, and a dynamic component that responded to the rate of increase of glucose. The second model (PID) described the response in terms of a proportional component without delay, an integral component that adjusted basal delivery in proportion to hyper/hypoglycemia, and a derivative component that responded to the rate of change of glucose.
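The PID formulation described above maps naturally onto a discrete-time dosing rule: insulin delivery is a basal term plus terms proportional to the glucose error, its accumulated integral, and its rate of change. The gains, sampling interval, target, and glucose trace in the sketch below are invented purely for illustration and are not clinically tuned values.

```python
class PIDInsulinController:
    """Discrete PID dosing sketch: delivery = basal + Kp*e + Ki*sum(e)*dt + Kd*de/dt,
    where e is the deviation of sensed glucose from the target."""

    def __init__(self, target=110.0, basal=1.0, kp=0.02, ki=0.0005, kd=0.3, dt=5.0):
        self.target, self.basal = target, basal
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral, self.prev_error = 0.0, None

    def dose(self, glucose):
        error = glucose - self.target
        self.integral += error * self.dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / self.dt
        self.prev_error = error
        rate = self.basal + self.kp * error + self.ki * self.integral + self.kd * derivative
        return max(rate, 0.0)                      # insulin delivery cannot be negative

controller = PIDInsulinController()
glucose_trace = [110, 120, 150, 190, 230, 250, 240, 210, 180, 150, 130, 115]  # mg/dL, 5-min samples
for g in glucose_trace:
    print(f"glucose {g:3d} mg/dL -> insulin {controller.dose(g):.2f} U/h")
```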
---
paper_title: Cascade control strategy for external carbon dosage in predenitrifying process
paper_content:
We propose a cascade control strategy composed of two Proportional-Integral (PI) controllers to regulate the nitrate concentration in the predenitrifying process by manipulating the external carbon dosage. It controls the nitrate concentrations in the effluent as well as in the final anoxic reactor simultaneously to strictly satisfy the quality of the effluent as well as to remove the effects of disturbances more quickly. The design of two PI controllers in the cascade control loop can be completed with the Ziegler–Nichols (Z–N) tuning rule together with a simple relay feedback identification method. Results from the Benchmark simulation confirm that both good set point tracking and satisfactory disturbance rejection can be guaranteed due to the structural advantages of the proposed cascade control strategy. Also, compared with a previous work, the fluctuation of the nitrate concentration in the effluent has been decreased significantly.
---
paper_title: Process Control in Municipal Solid Waste Incinerators: Survey and Assessment
paper_content:
As there is only rare and scattered published information about process control in industrial incineration facilities for municipal solid waste (MSW), a survey of the literature has been supplemented by a number of waste incineration site visits in Belgium and the Netherlands, in order to make a realistic assessment of the current status of technology in the area. Owing to the commercial character, and therefore the confidentiality restrictions imposed by plant builders and many of the operators, much of the information collected has to be presented in a generalized manner and, in any case, anonymously. The survey was focused on four major issues: process control strategy, process control systems, monitors used for process control and finally the correlation between the 850°C/2 s rule in the European waste incineration directive and integrated process control. The process control strategies range from reaching good and stable emissions at the stack to stabilizing and maximizing the energy output...
---
paper_title: Nonlinear model predictive control of glucose concentration in subjects with type 1 diabetes
paper_content:
A nonlinear model predictive controller has been developed to maintain normoglycemia in subjects with type 1 diabetes during fasting conditions such as during overnight fast. The controller employs a compartment model, which represents the glucoregulatory system and includes submodels representing absorption of subcutaneously administered short-acting insulin Lispro and gut absorption. The controller uses Bayesian parameter estimation to determine time-varying model parameters. Moving target trajectory facilitates slow, controlled normalization of elevated glucose levels and faster normalization of low glucose values. The predictive capabilities of the model have been evaluated using data from 15 clinical experiments in subjects with type 1 diabetes. The experiments employed intravenous glucose sampling (every 15 min) and subcutaneous infusion of insulin Lispro by insulin pump (modified also every 15 min). The model gave glucose predictions with a mean square error proportionally related to the prediction horizon with the value of 0.2 mmol L−1 per 15 min. The assessment of clinical utility of model-based glucose predictions using Clarke error grid analysis gave 95% of values in zone A and the remaining 5% of values in zone B for glucose predictions up to 60 min (n = 1674). In conclusion, adaptive nonlinear model predictive control is promising for the control of glucose concentration during fasting conditions in subjects with type 1 diabetes.
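The receding-horizon idea behind model predictive control can be conveyed with a deliberately simple sketch: predict glucose over a short horizon with a model, score candidate insulin sequences with a cost, and apply only the first move. The toy linear model, parameters and cost weights below are invented and bear no relation to the compartment model or Bayesian estimation used in the paper.

```python
import itertools

# Receding-horizon sketch: a toy linear glucose model, a brute-force search over
# candidate insulin sequences, and application of only the first move.
def predict(g0, insulin_seq, k_endo=2.0, k_ins=4.0):
    """Toy model: per 15-min step glucose rises by k_endo (mg/dL) endogenously
    and falls by k_ins (mg/dL per U/h) times the insulin rate."""
    g, traj = g0, []
    for u in insulin_seq:
        g = g + k_endo - k_ins * u
        traj.append(g)
    return traj

def mpc_step(g0, target=100.0, horizon=4, candidates=(0.0, 0.5, 1.0, 1.5, 2.0)):
    best_u, best_cost = 0.0, float("inf")
    for seq in itertools.product(candidates, repeat=horizon):
        traj = predict(g0, seq)
        cost = sum((g - target) ** 2 for g in traj) + 10.0 * sum(u * u for u in seq)
        if cost < best_cost:
            best_cost, best_u = cost, seq[0]   # receding horizon: apply only the first move
    return best_u

print("suggested insulin rate:", mpc_step(180.0), "U/h")
```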
---
paper_title: Blood Glucose Control by a Model Predictive Control Algorithm with Variable Sampling Rate Versus a Routine Glucose Management Protocol in Cardiac Surgery Patients: A Randomized Controlled Trial
paper_content:
Context: Elevated blood glucose levels occur frequently in the critically ill. Tight glucose control by intensive insulin treatment markedly improves clinical outcome. Objective and Design: This is a randomized controlled trial comparing blood glucose control by a laptop-based model predictive control algorithm with a variable sampling rate [enhanced model predictive control (eMPC); version 1.04.03] against a routine glucose management protocol (RMP) during the peri- and postoperative periods. Setting: The study was performed at the Department of Cardiac Surgery, University Hospital. Patients: A total of 60 elective cardiac surgery patients were included in the study. Interventions: Elective cardiac surgery and treatment with continuous insulin infusion (eMPC) or continuous insulin infusion combined with iv insulin boluses (RMP) to maintain euglycemia (target range 4.4–6.1 mmol/liter) were performed. There were 30 patients randomized for eMPC and 30 for RMP treatment. Blood glucose was measured in 1- to 4...
---
paper_title: Closing the Loop: The Adicol Experience
paper_content:
The objective of the project Advanced Insulin Infusion using a Control Loop (ADICOL) was to develop a treatment system that continuously measures and controls the glucose concentration in subjects with type 1 diabetes. The modular concept of the ADICOL's extracorporeal artificial pancreas consisted of a minimally invasive subcutaneous glucose system, a handheld PocketPC computer, and an insulin pump (D-Tron, Disetronic, Burgdorf, Switzerland) delivering subcutaneously insulin lispro. The present paper describes a subset of ADICOL activities focusing on the development of a glucose controller for semi-closed-loop control, an in silico testing environment, clinical testing, and system integration. An incremental approach was adopted to evaluate experimentally a model predictive glucose controller. A feasibility study was followed by efficacy studies of increasing complexity. The ADICOL project demonstrated feasibility of a semi-closed-loop glucose control during fasting and fed conditions with a wearable, m...
---
paper_title: Tight glycaemic control by an automated algorithm with time-variant sampling in medical ICU patients
paper_content:
OBJECTIVE: Tight glycaemic control (TGC) in critically ill patients improves clinical outcome, but is difficult to establish. The primary objective of the present study was to compare glucose control in medical ICU patients applying a computer-based enhanced model predictive control algorithm (eMPC) extended to include time-variant sampling against an implemented glucose management protocol. DESIGN: Open randomised controlled trial. SETTING: Nine-bed medical intensive care unit (ICU) in a tertiary teaching hospital. PATIENTS AND PARTICIPANTS: Fifty mechanically ventilated medical ICU patients. INTERVENTIONS: Patients were included for a study period of up to 72 h. Patients were randomised to the control group (n = 25), treated by an implemented insulin algorithm, or to the eMPC group (n = 25), using the laptop-based algorithm. Target range for blood glucose (BG) was 4.4-6.1 mM. Efficacy was assessed by mean BG, hyperglycaemic index (HGI) and BG sampling interval. Safety was assessed by the number of hypoglycaemic episodes < 2.2 mM. Each participating nurse filled in a questionnaire regarding the usability of the algorithm. MEASUREMENTS AND MAIN RESULTS: BG and HGI were significantly lower in the eMPC group [BG 5.9 mM (5.5-6.3), median (IQR); HGI 0.4 mM (0.2-0.9)] than in control patients [BG 7.4 mM (6.9-8.6), p < 0.001; HGI 1.6 mM (1.1-2.4), p < 0.001]. One hypoglycaemic episode was detected in the eMPC group; no such episodes in the control group. Sampling interval was significantly shorter in the eMPC group [eMPC 117 min (+/- 34), mean (+/- SD), vs 174 min (+/- 27); p < 0.001]. Thirty-four nurses filled in the questionnaire. Thirty answered the question of whether the algorithm could be applied in daily routine in the affirmative. CONCLUSIONS: The eMPC algorithm was effective in maintaining tight glycaemic control in severely ill medical ICU patients.
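The hyperglycaemic index used as an efficacy measure above is commonly defined as the area of the blood-glucose curve above the upper limit of the target range divided by the total observation time; the sketch below assumes that definition and uses trapezoidal integration on illustrative values.

```python
# Hyperglycaemic index sketch: area of the BG curve above the upper target limit
# divided by total time (trapezoidal rule on the clipped excursions; approximate
# when the curve crosses the limit between samples). Values are illustrative.
def hyperglycemic_index(times_h, glucose_mmol, upper=6.1):
    area = 0.0
    for (t0, g0), (t1, g1) in zip(zip(times_h, glucose_mmol),
                                  zip(times_h[1:], glucose_mmol[1:])):
        e0, e1 = max(g0 - upper, 0.0), max(g1 - upper, 0.0)
        area += 0.5 * (e0 + e1) * (t1 - t0)      # trapezoid of the excursion above the limit
    return area / (times_h[-1] - times_h[0])

times = [0, 2, 4, 6, 8]                          # hours
bg = [7.4, 6.8, 5.9, 6.5, 5.5]                   # mmol/L
print(f"HGI = {hyperglycemic_index(times, bg):.2f} mmol/L")
```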
---
paper_title: Model Predictive Control of Type 1 Diabetes: An in Silico Trial
paper_content:
BACKGROUND: The development of the artificial pancreas has received a new impulse from recent technological advancements in subcutaneous continuous glucose monitoring and subcutaneous insulin pump delivery systems. However, the availability of innovative sensors and actuators, although essential, does not guarantee optimal glycemic regulation. Closed-loop control of blood glucose levels still poses technological challenges to the automatic control expert, most notable of which are the inevitable time delays between glucose sensing and insulin actuation. METHODS: A new in silico model is exploited for both design and validation of a linear model predictive control (MPC) glucose control system. The starting point is a recently developed meal glucose-insulin model in health, which is modified to describe the metabolic dynamics of a person with type 1 diabetes mellitus. The population distribution of the model parameters originally obtained in 204 healthy subjects is modified to describe diabetic patients. Individual models of virtual patients are extracted from this distribution. A discrete-time MPC is designed for all the virtual patients from a unique input-output-linearized approximation of the full model based on the average population values of the parameters. The in silico trial simulates 4 consecutive days, during which the patient receives breakfast, lunch, and dinner each day. RESULTS: Provided that the regulator undergoes some individual tuning, satisfactory results are obtained even if the control design relies solely on the average patient model. Only the weight on the glucose concentration error needs to be tuned in a quite straightforward and intuitive way. The ability of the MPC to take advantage of meal announcement information is demonstrated. Imperfect knowledge of the amount of ingested glucose causes only marginal deterioration of performance. In general, MPC results in better regulation than proportional integral derivative control, limiting significantly the oscillation of glucose levels. CONCLUSIONS: The proposed in silico trial shows the potential of MPC for artificial pancreas design. The main features are a capability to consider meal announcement information, delay compensation, and simplicity of tuning and implementation.
---
paper_title: An implantable subcutaneous glucose sensor array in ketosis-prone rats: closed loop glycemic control.
paper_content:
A closed loop system of diabetes control would minimize hyperglycemia and hypoglycemia. We therefore implanted and tested a subcutaneous amperometric glucose sensor array in alloxan-diabetic rats. Each array employed four sensing units, the outputs of which were processed in real time to yield a unified signal. We utilized a gain-scheduled insulin control algorithm which rapidly reduced insulin delivery as glucose concentration declined. Such a system was generally effective in controlling glycemia and the degree of lag between blood glucose and the sensor signal was usually 3-8 min. After prolonged implantation, this lag was sometimes longer, which led to impairment of sensor accuracy. Using a prospective two-point calibration method, sensor accuracy and closed loop control were good. A revised algorithm yielded better glycemic control than the initial algorithm did. Future research needs to further improve calibration methods and reduce foreign body fibrosis in order to avoid a time-related increase in lag duration.
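A generic two-point linear calibration, mapping a raw sensor signal to glucose from two reference blood-glucose measurements, can be sketched as follows; the numbers are illustrative and this is not the prospective calibration procedure of the paper.

```python
# Generic two-point linear calibration sketch: map a raw sensor signal to glucose
# using two reference blood glucose measurements (all numbers illustrative).
def two_point_calibration(signal1, glucose1, signal2, glucose2):
    slope = (glucose2 - glucose1) / (signal2 - signal1)   # sensitivity (mg/dL per nA)
    offset = glucose1 - slope * signal1
    return lambda raw: slope * raw + offset

to_glucose = two_point_calibration(signal1=4.0, glucose1=90.0,
                                   signal2=9.0, glucose2=240.0)
for raw in (5.0, 6.5, 8.0):
    print(f"raw {raw:.1f} nA -> {to_glucose(raw):.0f} mg/dL")
```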
---
paper_title: The Benefit of Subcutaneous Glucagon During Closed-Loop Glycemic Control in Rats With Type 1 Diabetes
paper_content:
Because of its prolonged action, subcutaneously administered insulin has a potential for overcorrection hypoglycemia during closed-loop glucose control. For this reason, we hypothesized that subcutaneous glucagon, whose action is faster, could lessen the risk for hypoglycemia during closed-loop control. We therefore compared insulin alone versus insulin plus glucagon in diabetic rats in a controlled closed-loop study. Both hormones were delivered by algorithms based on proportional error, derivative error, and the glucose history. Based on this algorithm, glucagon was delivered when glucose was declining and approaching a hypoglycemic level. The delivery of glucagon was largely reciprocal with the delivery of insulin. With the addition of glucagon, there was less hypoglycemia at the glucose nadir, less hyperglycemia later in the study, and lower absolute error values during these periods. We also found that for 7 days after glucagon reconstitution, commercially available glucagon retained its original ability to quickly raise glucose level. We conclude that when subcutaneous insulin delivery is accompanied by subcutaneous glucagon, glycemic control during closed-loop treatment is improved. Since its action is faster than that of insulin, glucagon may prove useful during closed-loop diabetes control.
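The dosing rule described above, glucagon delivered only when glucose is declining and approaching a hypoglycemic level, can be caricatured in a few lines; thresholds, gains and units are invented for illustration.

```python
# Illustrative rule for reciprocal glucagon dosing: give glucagon only when
# glucose is both declining and approaching a hypoglycemic threshold.
def glucagon_dose(glucose, slope_per_min, threshold=90.0, kp=0.3, kd=4.0):
    """glucose in mg/dL, slope in mg/dL/min; returns micrograms (toy units)."""
    if slope_per_min < 0 and glucose < threshold:
        dose = kp * (threshold - glucose) + kd * (-slope_per_min)
        return round(dose, 1)
    return 0.0

for g, s in [(150, -1.0), (85, -1.5), (80, 0.5), (70, -2.0)]:
    print(f"glucose {g}, slope {s:+.1f} -> glucagon {glucagon_dose(g, s)} ug")
```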
---
paper_title: Pharmacodynamics And Stability of Subcutaneously Infused Glucagon in A Type 1 Diabetic Swine Model In Vivo
paper_content:
Background: The objective of this study was to determine the in vivo pharmacodynamics of glucagon and to test its glycemic effect over days by assessing its time course of activity and potency in a...
---
paper_title: Diminished B cell secretory capacity in patients with noninsulin-dependent diabetes mellitus.
paper_content:
In order to assess whether patients with noninsulin-dependent diabetes mellitus (NIDDM) possess normal insulin secretory capacity, maximal B cell responsiveness to the potentiating effects of glucose was estimated in eight untreated patients with NIDDM and in eight nondiabetic controls. The acute insulin response to 5 g intravenous arginine was measured at five matched plasma glucose levels that ranged from approximately 100-615 mg/dl. The upper asymptote approached by acute insulin responses (AIRmax) and the plasma glucose concentration at half-maximal responsiveness (PG50) were estimated using nonlinear regression to fit a modification of the Michaelis-Menten equation. In addition, glucagon responses to arginine were measured at these same glucose levels to compare maximal A cell suppression by hyperglycemia in diabetics and controls. Insulin responses to arginine were lower in diabetics than in controls at all matched glucose levels (P less than 0.001 at all levels). In addition, estimated AIRmax was much lower in diabetics than in controls (83 +/- 21 vs. 450 +/- 93 microU/ml, P less than 0.01). In contrast, PG50 was similar in diabetics and controls (234 +/- 28 vs. 197 +/- 20 mg/dl, P equals NS) and insulin responses in both groups approached or attained maxima at a glucose level of approximately 460 mg/dl. Acute glucagon responses to arginine in patients with NIDDM were significantly higher than responses in controls at all glucose levels. In addition, although glucagon responses in control subjects reached a minimum at a glucose level of approximately 460 mg/dl, responses in diabetics declined continuously throughout the glucose range and did not reach a minimum. Thus, A cell sensitivity to changes in glucose level may be diminished in patients with NIDDM. In summary, patients with NIDDM possess markedly decreased maximal insulin responsiveness to the potentiating effects of glucose. Such a defect indicates the presence of a reduced B cell secretory capacity and suggests a marked generalized impairment of B cell function in patients with NIDDM.
---
paper_title: Autonomic mechanism and defects in the glucagon response to insulin-induced hypoglycaemia.
paper_content:
In summary, this article briefly reviews the evidence that three separate autonomic inputs to the islet are capable of stimulating glucagon secretion and that each is activated during IIH. We have reviewed our evidence that these autonomic inputs mediate the glucagon response to IIH, both in non-diabetic animals and humans. Finally, we outline our new preliminary data suggesting an eSIN in an autoimmune animal model of T1DM. We conclude that the glucagon response to IIH is autonomically mediated in non-diabetic animals and humans. We further suggest that at least one of these autonomic inputs, the sympathetic innervation of the islet, is diminished in autoimmune T1DM. These data raise the novel possibility that an autonomic defect contributes to the loss of the glucagon response to IIH in T1DM.
---
paper_title: A Novel Insulin Delivery Algorithm in Rats With Type 1 Diabetes: The Fading Memory Proportional-Derivative Method
paper_content:
An algorithm designed to automatically control insulin delivery was tested in rats with Type 1 diabetes. This nonlinear algorithm included a fading memory component of proportional and derivative errors in order to simulate normal insulin secretion. Error-weighting functions for the proportional and derivative terms were used with a performance index designed for error adaptation. In the first version of the algorithm, the proportional gain was adaptively varied. In the second version, a low rate of basal insulin delivery was adaptively varied. Six 6-h studies with each version were conducted using frequent blood sampling and intravenous insulin delivery. In Version 2 studies, blood glucose levels during the last two hours were well-controlled and significantly lower than in Version 1 (118 +/- 2.0 vs. 130 +/- 2.9 mg/dL). Neither version produced hypoglycemia. Future research using this algorithm needs to focus on automated glucose sensing in combination with insulin delivery.
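A fading-memory proportional-derivative law can be sketched as exponentially weighted sums of past glucose errors and their rates of change; the weights, gains and basal rate below are placeholders, not the published error-weighting functions.

```python
# Fading-memory proportional-derivative sketch: insulin rate from exponentially
# weighted histories of the glucose error and its rate of change.
def fmpd_rate(glucose_hist, target=100.0, dt=5.0,
              lam_p=0.9, lam_d=0.8, kp=0.01, kd=0.1, basal=0.5):
    prop, deriv, wp, wd = 0.0, 0.0, 1.0, 1.0
    # iterate from the most recent sample backwards, fading older samples
    for i in range(len(glucose_hist) - 1, 0, -1):
        prop += wp * (glucose_hist[i] - target)
        deriv += wd * (glucose_hist[i] - glucose_hist[i - 1]) / dt
        wp *= lam_p
        wd *= lam_d
    return max(basal + kp * prop + kd * deriv, 0.0)

history = [110, 120, 135, 150, 170]      # mg/dL, oldest to newest
print(f"insulin rate = {fmpd_rate(history):.2f} U/h")
```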
---
paper_title: Fuzzy-Based Controller for Glucose Regulation in Type-1 Diabetic Patients by Subcutaneous Route
paper_content:
This paper presents an advisory/control algorithm for a type-1 diabetes mellitus (TIDM) patient under an intensive insulin treatment based on a multiple daily injections regimen (MDIR). The advisory/control algorithm incorporates expert knowledge about the treatment of this disease by using Mamdani-type fuzzy logic controllers to regulate the blood glucose level (BGL). The overall control strategy is based on a two-loop feedback strategy to overcome the variability in the glucose-insulin dynamics from patient to patient. An inner-loop provides the amount of both rapid/short and intermediate/long acting insulin (RSAI and ILAI) formulations that are programmed in a three-shots daily basis before meals. The combined preparation is then injected by the patient through a subcutaneous route. Meanwhile, an outer-loop adjusts the maximum amounts of insulin provided to the patient in a time-scale of days. The outer-loop controller aims to work as a supervisor of the inner-loop controller. Extensive closed-loop simulations are illustrated, using a detailed compartmental model of the insulin-glucose dynamics in a TIDM patient with meal intake
---
paper_title: Neural network modeling and control of type 1 diabetes mellitus.
paper_content:
This paper presents a developed and validated dynamic simulation model of type 1 diabetes that simulates the progression of the disease, together with the two-term controller responsible for the insulin released to stabilize the glucose level. The modeling and simulation of type 1 diabetes mellitus is based on an artificial neural network approach. The methodology builds upon an existing rich database on the progression of type 1 diabetes for a group of diabetic patients. The model was found to perform well at estimating the next glucose level over time without control. A neural controller that mimics the pancreatic secretion of insulin into the body was also developed. This controller is of the two-term type: one stage is responsible for short-term and the other for mid-term insulin delivery. It was found that the designed controller predicts an adequate amount of insulin to be delivered into the body to obtain normalization of the elevated glucose level. This helps to achieve the main objective of insulin therapy: to obtain an accurate estimate of the amount of insulin to be delivered in order to compensate for the increase in glucose concentration.
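A one-step-ahead glucose predictor based on a small feed-forward network can be sketched on synthetic data; this only illustrates the idea and is unrelated to the authors' network, database or training procedure.

```python
import numpy as np

# Tiny one-hidden-layer network trained on a synthetic glucose trace to predict
# the next sample from the previous three -- purely illustrative.
rng = np.random.default_rng(0)
t = np.arange(600)
glucose = 120 + 30 * np.sin(2 * np.pi * t / 96) + rng.normal(0, 2, t.size)

mean, std = glucose.mean(), glucose.std()
g = (glucose - mean) / std                             # normalise for stable training
X = np.stack([g[i:i + 3] for i in range(len(g) - 3)])  # inputs: last three samples
y = g[3:]                                              # target: next sample

W1 = rng.normal(0, 0.5, (3, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)
lr = 0.05
for _ in range(3000):                                  # plain batch gradient descent on squared error
    h = np.tanh(X @ W1 + b1)
    pred = (h @ W2 + b2).ravel()
    err = pred - y
    gW2 = h.T @ err[:, None] / len(y); gb2 = np.array([err.mean()])
    gh = (err[:, None] @ W2.T) * (1 - h ** 2)
    gW1 = X.T @ gh / len(y); gb1 = gh.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

print(f"predicted {pred[-1] * std + mean:.1f} mg/dL vs actual {y[-1] * std + mean:.1f} mg/dL")
```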
---
paper_title: Fuzzy filter for state estimation of a glucoregulatory system.
paper_content:
A filter based on fuzzy logic for state estimation of a glucoregulatory system is presented. A published non-linear model for the dynamics of glucose and its hormonal control including a single glucose compartment, five insulin compartments and a glucagon compartment was used for simulation. The simulated data were corrupted by an additive white noise with zero mean and a coefficient of variation (CV) of between 2 and 20% and then submitted to the state estimation procedure using a fuzzy filter (FF). The performance of the FF was compared with an extended Kalman filter (EKF) for state estimation. Both the FF and the EKF were evaluated in the following cases: (a) five state variables are measurable; three plasma variables are measurable; only plasma glucose is measurable; (b) for different measurement noise levels (CV of 2-20%); and (c) a mismatch between the glucoregulatory system and the given mathematical model (uncertain or approximate model). In contrast to the FF, in the case of approximate model of the glucose system, the EKF failed to achieve useful state estimation. Moreover, the performance of the FF was independent of the noise level. In conclusion, the FF approach is a viable alternative for state estimation in a noisy environment and with an uncertain mathematical model of the glucoregulatory system.
---
paper_title: A robust controller for insulin pumps based on H-infinity theory
paper_content:
The feedback control of insulin pumps for diabetic patients is discussed. Because the parameters in the mathematical model of the blood glucose dynamics present a considerable amount of uncertainty, the H-infinity framework is well suited for the design of controllers that achieve a good compromise between robust closed-loop regulation of a constant set point and performance expressed in terms of peak values of the plasma glucose concentration.
---
|
Title: A Review of Closed-Loop Algorithms for Glycemic Control in the Treatment of Type 1 Diabetes
Section 1: Introduction
Description 1: Introduce the significance of insulin in diabetes management, challenges in treating Type 1 diabetes, and the concept of closed-loop glycemic control.
Section 2: History of Closed Loop Systems
Description 2: Detail the historical development of closed-loop systems, components involved, and significant advancements in glucose sensing and insulin delivery technology.
Section 3: Proportional-Integral-Derivative (PID) Algorithms
Description 3: Explain the fundamentals and application of PID algorithms in closed-loop glycemic control, including successful implementations and inherent challenges.
Section 4: Model Predictive Control Algorithms
Description 4: Describe the principles of Model Predictive Control (MPC) algorithms and their application in managing insulin delivery for tight glycemic control.
Section 5: Fading Memory Proportional Derivative Algorithm
Description 5: Introduce the Fading Memory Proportional Derivative (FMPD) algorithm, its components, and its advantages over traditional PID approaches in diabetes management.
Section 6: Other Algorithms
Description 6: Review additional closed-loop control algorithms such as H-infinity loops, fuzzy logic systems, and neural networks, with their respective advantages and application in glycemic control.
|
A Survey on Gas Sensing Technology
| 15 |
---
paper_title: Differential Absorption Lidar to Measure Subhourly Variation of Tropospheric Ozone Profiles
paper_content:
A tropospheric ozone Differential Absorption Lidar system, developed jointly by The University of Alabama in Huntsville and the National Aeronautics and Space Administration, is making regular observations of ozone vertical distributions between 1 and 8 km with two receivers under both daytime and nighttime conditions using lasers at 285 and 291 nm. This paper describes the lidar system and analysis technique with some measurement examples. An iterative aerosol correction procedure reduces the retrieval error arising from differential aerosol backscatter in the lower troposphere. Lidar observations with coincident ozonesonde flights demonstrate that the retrieval accuracy ranges from better than 10% below 4 km to better than 20% below 8 km with 750-m vertical resolution and 10-min temporal integration.
---
paper_title: Design and Construction of an Odour Sensor for Various Biomedical Applications
paper_content:
This project deals with the construction of an electronic device that senses odour and its application in telesurgery. Of the body's five senses, the sense of smell is the most mysterious. In the past, gas chromatography and mass spectrometry have been used as sensing systems, although these are usually expensive and time consuming. Today, the development of odour sensors makes it possible to analyse odours, and they are therefore applied in various medical applications such as telesurgery. The odour sensor is generally made of two main parts: a sensing system and a pattern recognition system. The sensing system of the odour sensor consists of individual thin-film carbon-black polymer composites in which carbon black is homogeneously mingled throughout a non-conducting polymer. When the composite is exposed to a vapor-phase analyte, the resistance changes in a way that is unique to each odour. The change in resistance is converted into a voltage and passed to a computer through a data acquisition card. The analysis of the sensor output is performed using MATLAB code called Smellware. The response of Smellware is given to an artificial neural network, which acts as the pattern recognition system, and the odour is thereby determined. The main application of the developed odour sensor is telesurgery, where the sensor would identify odours in the remote surgical environment. These identified odours would then be electronically transmitted to another site where an odour generation system would recreate them. The developed odour sensor was successfully tested and was submitted as a project work.
---
paper_title: Photoacoustic Spectroscopy with Quantum Cascade Lasers for Trace Gas Detection
paper_content:
Various applications, such as pollution monitoring, toxic-gas detection, noninvasive medical diagnostics and industrial process control, require sensitive and selective detection of gas traces with concentrations in the parts in 10^9 (ppb) and sub-ppb range. The recent development of quantum-cascade lasers (QCLs) has given a new aspect to infrared laser-based trace gas sensors. In particular, single mode distributed feedback QCLs are attractive spectroscopic sources because of their excellent properties in terms of narrow linewidth, average power and room temperature operation. In combination with these laser sources, photoacoustic spectroscopy offers the advantage of high sensitivity and selectivity, compact sensor platform, fast time-response and user friendly operation. This paper reports recent developments on quantum cascade laser-based photoacoustic spectroscopy for trace gas detection. In particular, different applications of a photoacoustic trace gas sensor employing a longitudinal resonant cell with a detection limit on the order of hundred ppb of ozone and ammonia are discussed. We also report two QC laser-based photoacoustic sensors for the detection of nitric oxide, for environmental pollution monitoring and medical diagnostics, and hexamethyldisilazane, for applications in semiconductor manufacturing process.
---
paper_title: Trace amount formaldehyde gas detection for indoor air quality monitoring
paper_content:
Formaldehyde is not only a carcinogenic chemical, but also causes sick building syndrome. Very small amounts of formaldehyde, such as those emitted from building materials and furniture, pose great concerns for human health. A Health Canada guideline, proposed in 2005, set the maximum formaldehyde concentration for long term exposure (8-hours averaged) as 40 ppb (50 μg/m3). This is a low concentration that commercially available formaldehyde sensors have great difficulty to detect both accurately and continuously. In this paper, we report a formaldehyde gas detection system which is capable of pre-concentrating formaldehyde gas using absorbent, and subsequently thermally desorbing the concentrated gas for detection by the electrochemical sensor. Initial results show that the system is able to detect formaldehyde gas at the ppb level, thus making it feasible to detect trace amount of formaldehyde in indoor environments.
---
paper_title: Response Speed Optimization Of Thermal Gas-flow Sensors For Medical Application
paper_content:
Gas-flow sensors for medical applications (respiration monitoring) require a relatively large bandwidth (10-300 Hz); fast flow changes occur, e.g., at the start of inspiration. Thermal gas-flow sensors are very favourable for medical applications: due to their small size, they have only a small influence on the total system (a small obstruction in a tube). On the other hand, the response speed of these sensors is not automatically high enough, because temperature changes take a relatively long time. Optimization of the response speed is therefore necessary.
---
paper_title: A study on NDIR-based CO2 sensor to apply remote air quality monitoring system
paper_content:
Recently, various CO2 sensors reported in the literature may be classified into two major categories depending on their measuring principle. First, chemical CO2 gas sensors have the principal advantage of very low energy consumption and small size. On the negative side, it is difficult to apply them to a variety of practical fields because they have a short lifetime as well as low durability. Second, NDIR-based CO2 sensors are commonly used in monitoring indoor air quality due to their relatively high accuracy compared with that of chemical CO2 gas sensors. In this paper, therefore, we evaluate NDIR-based CO2 sensors to verify their applicability in a remote air quality monitoring system. In addition, the principle and structure of the NDIR CO2 sensor are discussed, and we analyze CO2 concentrations measured on a real subway station platform for a month. Finally, this paper shows that the accuracy of the values measured by the NDIR CO2 sensor is sufficient for indoor air quality monitoring applications.
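NDIR sensing infers concentration from infrared absorption via the Beer-Lambert law, I = I0*exp(-eps*C*L). A worked sketch with placeholder values for the absorption coefficient and path length (not calibrated CO2 band data) follows.

```python
import math

# Beer-Lambert sketch for an NDIR channel: I = I0 * exp(-eps * C * L), so the
# concentration follows from the measured/reference intensity ratio. The
# absorption coefficient eps and path length L are placeholders, not calibrated
# CO2 band values.
def ndir_concentration_ppm(I, I0, eps=5.0e-7, L=10.0):
    """eps in 1/(ppm*cm), L in cm; returns concentration in ppm."""
    return -math.log(I / I0) / (eps * L)

I0 = 1.000                        # zero-gas (reference) detector signal
for I in (0.999, 0.995, 0.990):   # attenuated signals with gas present
    print(f"I/I0 = {I:.3f} -> {ndir_concentration_ppm(I, I0):.0f} ppm CO2")
```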
---
paper_title: Workflow for High Throughput Screening of Gas Sensing Materials
paper_content:
The workflow of a high throughput screening setup for the rapid identification of new and improved sensor materials is presented. The polyol method was applied to prepare nanoparticular metal oxides as base materials, which were functionalised by surface doping. Using multi-electrode substrates and high throughput impedance spectroscopy (HT-IS) a wide range of materials could be screened in a short time. Applying HT-IS in search of new selective gas sensing materials a NO2-tolerant NO sensing material with reduced sensitivities towards other test gases was identified based on iridium doped zinc oxide. Analogous behaviour was observed for iridium doped indium oxide.
---
paper_title: Overview of automotive sensors
paper_content:
An up-to-date review paper on automotive sensors is presented. Attention is focused on sensors used in production automotive systems. The primary sensor technologies in use today are reviewed and are classified according to their three major areas of automotive systems application: powertrain, chassis, and body. This subject is extensive. As described in this paper, for use in automotive systems, there are six types of rotational motion sensors, four types of pressure sensors, five types of position sensors, and three types of temperature sensors. Additionally, two types of mass air flow sensors, five types of exhaust gas oxygen sensors, one type of engine knock sensor, four types of linear acceleration sensors, four types of angular-rate sensors, four types of occupant comfort/convenience sensors, two types of near-distance obstacle detection sensors, four types of far-distance obstacle detection sensors, and ten types of emerging, state-of-the-art sensor technologies are identified.
---
paper_title: The Effects of the Location of Au Additives on Combustion-generated SnO2 Nanopowders for CO Gas Sensing
paper_content:
The current work presents the results of an experimental study of the effects of the location of gold additives on the performance of combustion-generated tin dioxide (SnO(2)) nanopowders in solid state gas sensors. The time response and sensor response to 500 ppm carbon monoxide is reported for a range of gold additive/SnO(2) film architectures including the use of colloidal, sputtered, and combustion-generated Au additives. The opportunities afforded by combustion synthesis to affect the SnO(2)/additive morphology are demonstrated. The best sensor performance in terms of sensor response (S) and time response (τ) was observed when the Au additives were restricted to the outermost layer of the gas-sensing film. Further improvement was observed in the sensor response and time response when the Au additives were dispersed throughout the outermost layer of the film, where S = 11.3 and τ = 51 s, as opposed to Au localized at the surface, where S = 6.1 and τ = 60 s.
---
paper_title: Highly sensitive and selective ammonia gas sensor
paper_content:
We have fabricated and examined an ammonia gas sensor with high sensitivity using thick-film technology. The sensing material of the gas sensor is a FeOx-WO3-SnO2 oxide semiconductor. The sensor exhibits a resistance increase upon exposure to low concentrations of ammonia gas. The resistance of the sensor is decreased, on the other hand, upon exposure to reducing gases such as ethyl alcohol, methane, propane and carbon monoxide. We have proposed and investigated a novel method for detecting ammonia gas quite selectively by using a sensor array with two sensing elements, which contains an ammonia gas sensor and a compensation element. The compensation element is a Pt-doped WO3-SnO2 gas sensor which shows an opposite direction of resistance change in comparison with the ammonia gas sensor upon exposure to ammonia gas. Excellent selectivity has been achieved using the sensor array with two sensing elements.
---
paper_title: Airborne Chemical Sensing with Mobile Robots
paper_content:
Airborne chemical sensing with mobile robots has been an active research area since the beginning of the 1990s. This article presents a review of research work in this field, including gas distribution mapping, trail guidance, and the different subtasks of gas source localisation. Due to the difficulty of modelling gas distribution in a real world environment with currently available simulation techniques, we focus largely on experimental work and do not consider publications that are purely based on simulations.
---
paper_title: Odour measurement using conducting polymer gas sensors and an artificial neural network decision system
paper_content:
The conventional way of assessing the magnitude of nuisance odours using an olfactometer and a sensory panel is costly. This paper describes experiments that have been conducted into matching the results from trained sensory panellists to those from a conducting polymer-based electronic nose. By taking the data from the electronic nose and applying them to a trained neural network, it has been shown that the data can be manipulated to give rise to results that are within a few percent of those from the sensory panellists. This is the first time that an electronic nose has been calibrated in terms of odour intensity measurements and it points the way forward to more objective measurements of nuisance odours.
---
paper_title: Compact Raman lidar for hydrogen gas leak detection
paper_content:
A compact Raman lidar system for hydrogen gas leak detection was constructed. Laser-induced fluorescence at a distance of 70 m and Raman scattering light from N2 gas at short range could be detected.
---
paper_title: Planar Zeolite Film-Based Potentiometric Gas Sensors Manufactured by a Combined Thick-Film and Electroplating Technique
paper_content:
Zeolites are promising materials in the field of gas sensors. In this technology-oriented paper, a planar setup for potentiometric hydrocarbon and hydrogen gas sensors using zeolites as ionic sodium conductors is presented, in which the Pt-loaded Na-ZSM-5 zeolite is applied using a thick-film technique between two interdigitated gold electrodes and one of them is selectively covered for the first time by an electroplated chromium oxide film. The influence of the sensor temperature, the type of hydrocarbons, the zeolite film thickness, and the chromium oxide film thickness is investigated. The influence of the zeolite on the sensor response is briefly discussed in the light of studies dealing with zeolites as selectivity-enhancing cover layers.
---
paper_title: Semiconducting Metal Oxide Based Sensors for Selective Gas Pollutant Detection
paper_content:
A review of some papers published in the last fifty years that focus on the semiconducting metal oxide (SMO) based sensors for the selective and sensitive detection of various environmental pollutants is presented.
---
paper_title: Measurement of CH4 by differential infrared optical absorption spectroscopy
paper_content:
To address the problems of current optical gas-detection methods, namely overly complicated sensors and signal processing, limited detection sensitivity and the ability to detect only a single type of gas, we applied an optical phase-shifted grating laser window to tune differential infrared absorption spectrometry and studied the spectrum-tuning and detection-signal-processing problems. CH4 in the mainstream smoke of cigarettes of different brands was measured continuously by differential optical absorption spectroscopy (DOAS). Using an RM200 rotating-disk smoking machine with 20 smoking flues, 20 kinds of gas were inserted into the rotating disk and ignited every 3 s. With the DOAS technique, the measurement was completed within 6-7 min after the gas was transported into a White cell with a 31.5 m path length connected to the smoking machine. This technique improves the temporal resolution of the measurement compared with the traditional method. At the beginning of the measurement the pressure in the White cell was 5.2×10^4 Pa, and it increased to 1.03×10^5 Pa by the time the measurement finished. The results show that the concentration of CH4 in the mainstream smoke is between 0.89 mg/m3 and 1.54 mg/m3. A wide-wavelength infrared LED light source is placed at the entrance of the White cell; after passing through a large-diameter phase-shifted fiber grating, its light enters the White cell. A piezoelectric ceramic (PZT) controls the phase-shifted fiber Bragg grating so that the infrared LED output is turned into continuously tunable, narrow-band modulated light whose wavelength corresponds to the absorption spectra of gases such as CH4. The output signal of the White cell enters a photoelectric detector and an ADC, and digital correlation demodulation is performed under computer control. This achieves the function of a high-sensitivity digital lock-in amplifier and is suitable for high-resolution detection of trace gases. Simulation studies and practical tests show that spectral scanning of the phase-shifted fiber Bragg grating controlled by the PZT can effectively suppress background optical interference and eliminate Rayleigh and Raman scattering. The combination of phase-shifted fiber Bragg grating modulation and digital lock-in detection improves the sensitivity by three orders of magnitude. The method makes it possible to detect a wide range of gases with a single instrument. The device is small, low cost and easy to carry, can measure a wide range of gas concentrations, and has important application value for environmental monitoring. If the White cell is used under water in the ocean, trace gases can also be detected, which is of great significance for environmental and resource monitoring of rivers and the sea.
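The digital lock-in detection mentioned above can be sketched on synthetic data: multiply the measured signal by in-phase and quadrature references at the modulation frequency and low-pass filter (here, plain averaging) to recover the weak modulated amplitude. All signal parameters are invented.

```python
import numpy as np

# Digital lock-in sketch: recover the amplitude of a weak modulated absorption
# signal buried in broadband noise by multiplying with in-phase and quadrature
# references and low-pass filtering (here, plain averaging). Synthetic data.
fs, f_mod, duration = 50_000.0, 1_000.0, 2.0           # sample rate (Hz), modulation (Hz), time (s)
t = np.arange(0.0, duration, 1.0 / fs)
rng = np.random.default_rng(1)
weak = 1e-3 * np.sin(2 * np.pi * f_mod * t + 0.4)      # weak absorption signal, unknown phase
x = weak + rng.normal(0.0, 0.02, t.size)               # buried in much larger noise

X = np.mean(x * np.sin(2 * np.pi * f_mod * t))         # in-phase channel (averaging = low-pass)
Y = np.mean(x * np.cos(2 * np.pi * f_mod * t))         # quadrature channel
amplitude = 2.0 * np.hypot(X, Y)
print(f"recovered amplitude {amplitude:.2e} (true 1.00e-03)")
```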
---
paper_title: Membrane Based Measurement Technology for in situ Monitoring of Gases in Soil
paper_content:
The representative measurement of gas concentration and fluxes in heterogeneous soils is one of the current challenges when analyzing the interactions of biogeochemical processes in soils and global change. Furthermore, recent research projects on CO(2)-sequestration have an urgent need of CO(2)-monitoring networks. Therefore, a measurement method based on selective permeation of gases through tubular membranes has been developed. Combining the specific permeation rates of gas components for a membrane and Dalton's principle, the gas concentration (or partial pressure) can be determined by the measurement of physical quantities (pressure or volume) only. Due to the comparatively small permeation constants of membranes, the influence of the sensor on its surrounding area can be neglected. The design of the sensor membranes can be adapted to the spatial scale from the bench scale to the field scale. The sensitive area for the measurement can be optimized to obtain representative results. Furthermore, a continuous time-averaged measurement is possible where the time for averaging is simply controlled by the wall-thickness of the membrane used. The measuring method is demonstrated for continuous monitoring of O(2) and CO(2) inside of a sand filled Lysimeter. Using three sensor planes inside the sand pack, which were installed normal to the gas flow direction and a reference measurement system, we demonstrate the accuracy of the gas-detection for different flux-based boundary conditions.
---
paper_title: A wireless home safety gas leakage detection system
paper_content:
A wireless safety device for gas leakage detection is proposed. The device is intended for use in household safety where appliances and heaters that use natural gas and liquid petroleum gas (LPG) may be a source of risk. The system also can be used for other applications in the industry or plants that depend on LPG and natural gas in their operations. The system design consists of two main modules: the detection and transmission module, and the receiving module. The detection and transmitting module detects the change of gas concentration using a special sensing circuit built for this purpose. This module checks if a change in concentration of gas(es) has exceeded a certain pre-determined threshold. If the sensor detects a change in gas concentration, it activates and audiovisual alarm and sends a signal to the receiver module. The receiver module acts as a mobile alarm device to allow the mobility within the house premises. The system was tested using LPG and the alarm was activated as a result of change in concentration.
---
paper_title: Catalytic Fiber Bragg Grating Sensor for Hydrogen Leak Detection in Air
paper_content:
The explosion risk linked to the use of hydrogen as fuel requires low-cost and efficient sensors. We present here a multipoint in-fiber sensor capable of hydrogen leak detection in air as low as 1% concentration with a response time smaller than a few seconds. Our solution makes use of fiber Bragg gratings (FBGs) covered by a catalytic sensitive layer made of a ceramic doped with noble metal which, in turn, induces a temperature elevation around the FBGs in the presence of hydrogen in air.
---
paper_title: A wireless, passive carbon nanotube-based gas sensor
paper_content:
A gas sensor, comprised of a gas-responsive multiwall carbon nanotube (MWNT)-silicon dioxide (SiO2) composite layer deposited on a planar inductor-capacitor resonant circuit is presented here for the monitoring of carbon dioxide (CO2), oxygen (O2), and ammonia (NH3). The absorption of different gases in the MWNT-SiO2 layer changes the permittivity and conductivity of the material and consequently alters the resonant frequency of the sensor. By tracking the frequency spectrum of the sensor with a loop antenna, humidity, temperature, as well as CO2, O2 and NH3 concentrations can be determined, enabling applications such as remotely monitoring conditions inside opaque, sealed containers. Experimental results show the sensor response to CO2 and O2 is both linear and reversible. Both irreversible and reversible responses are observed in response to NH3, indicating both physisorption and chemisorption of NH3 by the carbon nanotubes. A sensor array, comprised of an uncoated, SiO2 coated, and MWNT-SiO2 coated sensor, enables CO2 measurement to be automatically calibrated for operation in a variable humidity and temperature environment.
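The transduction principle is the shift of the resonant frequency f = 1/(2*pi*sqrt(L*C)) as gas absorption changes the capacitance of the sensing layer. A worked sketch with assumed component values (not those of the reported sensor) follows.

```python
import math

# Resonant frequency of a passive LC sensor: f = 1 / (2*pi*sqrt(L*C)).
# A change in the permittivity of the MWNT-SiO2 layer changes C and therefore f.
# Component values below are assumed for illustration.
def resonant_frequency(L_henry, C_farad):
    return 1.0 / (2 * math.pi * math.sqrt(L_henry * C_farad))

L = 2.0e-6                         # 2 uH planar inductor (assumed)
C0 = 10.0e-12                      # 10 pF baseline capacitance (assumed)
f0 = resonant_frequency(L, C0)
for rel_dC in (0.00, 0.01, 0.05):  # relative capacitance change from gas absorption
    f = resonant_frequency(L, C0 * (1 + rel_dC))
    print(f"dC/C = {rel_dC:4.2f}: f = {f/1e6:6.2f} MHz (shift {f - f0:+.0f} Hz)")
```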
---
paper_title: Development of a protected gas sensor for exhaust automotive applications
paper_content:
A β-alumina-based gas sensor for automotive exhaust application (hydrocarbon, CO, NO2 detection in 10-1000 ppm concentration range) has been developed by thick film technology (screen-printing) in the frame of a European project. The sensing device consists of a solid electrolyte (β-alumina) and of two metallic electrodes having different catalytic properties, the whole system being in contact with the surrounding atmosphere to be analyzed. The detection principle is based on the chemisorption of oxygen which leads to a capacitance effect at the metal-electrolyte interface, resulting in a measurable difference of potential depending on nature and concentration of pollutants and on the sensor temperature. For application in exhaust pipe, a porous protective layer based on α-alumina for preserving the sensing material and the metal electrodes from contamination and deterioration was screen-printed on the sensing element. For limiting the possible interface interactions between the overlapped layers, a new concept of screen-printable ink was set up based on mixing the oxide powder and its gelly precursor without any inorganic binder addition. The performances of the sensor were tested both on laboratory and engine bench. The sensitivity is relevant for exhaust application, and the long-term stability is improved by the protective layer.
---
paper_title: A MEMS-based Benzene Gas Sensor with a Self-heating WO3 Sensing Layer
paper_content:
In the study, a MEMS-based benzene gas sensor is presented, consisting of a quartz substrate, a thin-film WO3 sensing layer, an integrated Pt micro-heater, and Pt interdigitated electrodes (IDEs). When benzene is present in the atmosphere, oxidation occurs on the heated WO3 sensing layer. This causes a change in the electrical conductivity of the WO3 film, and hence changes the resistance between the IDEs. The benzene concentration is then computed from the change in the measured resistance. A specific orientation of the WO3 layer is obtained by optimizing the sputtering process parameters. It is found that the sensitivity of the gas sensor is optimized at a working temperature of 300 °C. At the optimal working temperature, the experimental results show that the sensor has a high degree of sensitivity (1.0 kΩ ppm−1), a low detection limit (0.2 ppm) and a rapid response time (35 s).
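With the reported sensitivity of roughly 1.0 kOhm per ppm, a measured resistance change maps directly to a benzene concentration if a linear response is assumed; that linearity is an assumption, since metal-oxide responses are generally nonlinear.

```python
# Sketch: convert a measured resistance change of the WO3 film to a benzene
# concentration, assuming a linear response with the reported sensitivity of
# roughly 1.0 kOhm per ppm (real metal-oxide responses are generally nonlinear).
SENSITIVITY_KOHM_PER_PPM = 1.0
DETECTION_LIMIT_PPM = 0.2

def benzene_ppm(baseline_kohm, measured_kohm):
    ppm = abs(measured_kohm - baseline_kohm) / SENSITIVITY_KOHM_PER_PPM
    return ppm if ppm >= DETECTION_LIMIT_PPM else 0.0

for r in (350.2, 351.5, 358.0):                  # measured resistances in kOhm
    print(f"R = {r:.1f} kOhm -> {benzene_ppm(350.0, r):.1f} ppm benzene")
```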
---
paper_title: Circuit and Noise Analysis of Odorant Gas Sensors in an E-Nose
paper_content:
In this paper, the relationship between typical circuit structures of gas sensor circuits and their output noise is analyzed. By using averaged segmented periodogram and improved histogram estimation methods, we estimated their noise power spectra and optimal probability distribution functions (pdf). The results were confirmed through experimental studies.
---
paper_title: Novel nano-hybrid gas sensor based on n-TiO2 functionalized by phthalocyanines via supersonic beam co-deposition: Performance and application to automotive air quality
paper_content:
Supersonic beams of TiO2 clusters and metal phthalocyanines have been developed for the synthesis of hybrid materials for gas-sensing applications. This approach allows a high degree of control over the properties of the synthesized materials and over the interface between clusters and organic molecules, so that new functional materials with novel and promising sensing properties are obtained. These materials can be synthesized in different architectures, shapes and phases, and consist of nanocrystalline TiO2 clusters functionalized during growth by the co-deposited molecules. The outcome is a porous nanostructured material characterized by a diffuse organic-inorganic interface at the nanoscale. The properties of the co-deposited interfaces, where the functionalization of the hybrid material plays a crucial role, can be tailored by acting directly on the beam parameters of the clusters (mass-size distribution, phase) and of the organic molecules (kinetic energy, deposition rate).
---
paper_title: Thermopile sensor array for an electronic nose integrated non-selective NDIR gas detection system
paper_content:
The fabrication and characterization of a thermopile sensor array is described in this work. The device is intended to be part of a non-dispersive infrared (NDIR) gas detection system integrated in an electronic nose where each sensor is not oriented for the detection of a particular substance. Different designs were considered changing the size of the array (9 or 16 elements) and improving some specific detector performance parameters: responsivity and noise equivalent power (NEP). After device fabrication, the measurements of these parameters show a good agreement with the thermal one-dimensional model used for the design.
---
paper_title: A Real-Time De-Noising Algorithm for E-Noses in a Wireless Sensor Network
paper_content:
A wireless e-nose network system is developed for the special purpose of monitoring odorant gases and accurately estimating odor strength in and around livestock farms. This system is designed to simultaneously acquire accurate odor strength values remotely at various locations, where each node is an e-nose that includes four metal-oxide semiconductor (MOS) gas sensors. A modified Kalman filtering technique is proposed for collecting raw data and de-noising based on the output noise characteristics of those gas sensors. The measurement noise variance is obtained in real time by data analysis using the proposed sliding-window average method. The optimal system noise variance of the filter is obtained by using the experimental data. The Kalman filter theory on how to acquire MOS gas sensor data is discussed. Simulation results demonstrate that the proposed method can adjust the Kalman filter parameters and significantly reduce the noise from the gas sensors.
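A minimal scalar Kalman filter with a sliding-window estimate of the measurement-noise variance conveys the flavour of the approach; the random-walk process model, constants and synthetic signal below are illustrative and not the authors' modified filter.

```python
import numpy as np

# Minimal scalar Kalman filter for de-noising a MOS gas-sensor reading. The
# measurement-noise variance R is re-estimated from a sliding window of recent
# samples (difference-based estimate). All constants and the synthetic signal
# are illustrative.
rng = np.random.default_rng(2)
true_signal = np.concatenate([np.full(100, 0.8),
                              np.linspace(0.8, 2.0, 100),
                              np.full(100, 2.0)])                  # sensor voltage, a.u.
measured = true_signal + rng.normal(0.0, 0.08, true_signal.size)   # noisy readings

Q = 5e-4                   # assumed process-noise variance (random-walk model)
x, P = measured[0], 1.0    # state estimate and its variance
window, filtered = [], []
for z in measured:
    window.append(z)
    if len(window) > 20:
        window.pop(0)
    # estimate R from consecutive differences: var(diff) ~ 2R for a slowly varying signal
    R = max(np.var(np.diff(window)) / 2.0, 1e-6) if len(window) > 2 else 0.01
    P += Q                       # predict
    K = P / (P + R)              # Kalman gain
    x += K * (z - x)             # update with the new measurement
    P *= (1.0 - K)
    filtered.append(x)

err_raw = np.std(measured - true_signal)
err_filt = np.std(np.array(filtered) - true_signal)
print(f"measurement error std {err_raw:.3f} -> filtered error std {err_filt:.3f}")
```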
---
paper_title: Quartz Crystal Microbalance Coated with Sol-gel-derived Thin Films as Gas Sensor for NO Detection
paper_content:
This paper presents the possibilities and properties of indium tin oxide (ITO)-coated quartz crystals as NOx toxic gas sensors. The starting sol-gel solution was prepared by mixing indium chloride dissolved in acetylacetone and tin chloride dissolved in ethanol (0-20% by weight). The ITO thin films were deposited on the gold electrodes of the quartz crystal by a spin-coating technique, and standard photolithography was subsequently used to pattern the derived films to ensure that all sensors had the same sensing area. All heat treatment processes were kept below 500°C in order to avoid degradation of the piezoelectric characteristics of the quartz crystal (quartz loses its piezoelectricity at ~573°C due to the phase change from α to β). The electrical and structural properties of the ITO thin films were characterized with a Hall analysis system, TG/DTA, XRD, XPS, SEM, etc. The gas sensor featured an ITO thin film of ~100 nm as the receptor to sense the toxic gas NO, and a quartz crystal with a frequency of 10 MHz as the transducer to convert the surface reactions (mass loading, etc.) into a frequency shift. A homemade setup was employed to measure the sensor response in static mode. The experimental results indicated that the ITO-coated QCM has a good sensitivity for NO gas, ~12 Hz/100 ppm within 5 min. These results show that ITO-coated quartz crystals are usable as a gas sensor and as an analytical device.
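QCM transduction is commonly described by the Sauerbrey relation, df = -2*f0^2*dm/(A*sqrt(rho_q*mu_q)). A worked sketch using standard quartz constants and an assumed electrode area follows.

```python
import math

# Sauerbrey relation linking the QCM frequency shift to adsorbed mass:
#   df = -2 * f0^2 * dm / (A * sqrt(rho_q * mu_q))
# Standard quartz constants are used; the electrode area is an assumed value.
RHO_Q = 2.648          # g/cm^3, density of quartz
MU_Q = 2.947e11        # g/(cm*s^2), shear modulus of AT-cut quartz

def sauerbrey_df(f0_hz, dm_g, area_cm2):
    return -2.0 * f0_hz ** 2 * dm_g / (area_cm2 * math.sqrt(RHO_Q * MU_Q))

f0 = 10e6              # 10 MHz crystal, as in the abstract
area = 0.2             # cm^2 electrode area (assumed)
for dm_ng in (1, 5, 10):
    df = sauerbrey_df(f0, dm_ng * 1e-9, area)
    print(f"mass load {dm_ng:2d} ng -> frequency shift {df:6.1f} Hz")
```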
---
paper_title: Temperature Gradient Effect on Gas Discrimination Power of a Metal-Oxide Thin-Film Sensor Microarray
paper_content:
The paper presents results concerning the effect of a spatially inhomogeneous operating temperature on the gas discrimination power of a gas-sensor microarray, with the latter based on a thin SnO2 film employed in the KAMINA electronic nose. Three different temperature distributions over the substrate are discussed: a nearly homogeneous one and two temperature gradients, equal to approx. 3.3 °C/mm and 6.7 °C/mm, applied across the sensor elements (segments) of the array. The gas discrimination power of the microarray is judged by using the Mahalanobis distance in the LDA (Linear Discrimination Analysis) coordinate system between the data clusters obtained from the response of the microarray to four target vapors: ethanol, acetone, propanol and ammonia. It is shown that the application of a temperature gradient increases the gas discrimination power of the microarray by up to 35 %.
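The Mahalanobis distance between response clusters, used above to judge discrimination power, can be computed as follows on synthetic two-dimensional features (for example the first two LDA components); the clusters are invented.

```python
import numpy as np

# Mahalanobis distance between two response clusters in a 2-D feature space
# (e.g. the first two LDA components); the clusters below are synthetic.
rng = np.random.default_rng(3)
ethanol = rng.multivariate_normal([0.0, 0.0], [[0.2, 0.05], [0.05, 0.1]], 50)
acetone = rng.multivariate_normal([1.5, 0.8], [[0.2, 0.05], [0.05, 0.1]], 50)

def mahalanobis_between(a, b):
    pooled_cov = (np.cov(a, rowvar=False) + np.cov(b, rowvar=False)) / 2.0
    diff = a.mean(axis=0) - b.mean(axis=0)
    return float(np.sqrt(diff @ np.linalg.inv(pooled_cov) @ diff))

print(f"Mahalanobis distance ethanol-acetone: {mahalanobis_between(ethanol, acetone):.2f}")
```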
---
paper_title: Development of laser based spectroscopic trace-gas sensors for environmental sensor networks and medical exposure monitors
paper_content:
We report wavelength-modulated TDLAS/QEPAS trace gas sensors with reduced size, efficiency, and cost for use in environmental sensor networks and medical exposure monitors. CO2 measurements with a 2 μm diode laser dissipate <1 W.
---
paper_title: Vinegar Classification Based on Feature Extraction and Selection From Tin Oxide Gas Sensor Array Data
paper_content:
Tin oxide gas sensor array based devices are often cited in publications dealing with food products. However, when a tin oxide gas sensor array is used to analyze and identify different gases, the most important and difficult task is extracting useful parameters from the sensors and optimizing those parameters so that the array can identify gases rapidly and accurately; no convenient method for this existed. For this reason, we developed a device in which the gas sensor array interacts with the gas from vinegar. Feature parameters were extracted after acquiring the data of the whole sensing process. In order to assess whether a feature parameter is optimal, a new measure called the "distinguish index" (DI) is proposed, which confirms that the selected feature parameters are useful in the subsequent pattern recognition step. Principal component analysis (PCA) and an artificial neural network (ANN) were used to combine the optimal feature parameters. Good separation among the gases from different vinegars is obtained using principal component analysis, and the recognition probability of the ANN is 98%. The new method can also be applied to other pattern recognition problems.
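The abstract names PCA and an ANN as the pattern-recognition stages; a minimal scikit-learn sketch of such a two-stage pipeline is given below, where the synthetic feature data, the number of principal components and the network size are all assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

# Stand-in data: rows are measurements, columns are feature parameters
# extracted from each sensor's response curve (peak value, rise slope, ...).
# In practice X and y would come from the vinegar measurement campaign.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(loc=m, scale=0.3, size=(40, 8)) for m in (0.0, 1.0, 2.0)])
y = np.repeat(["vinegar_A", "vinegar_B", "vinegar_C"], 40)

model = make_pipeline(
    StandardScaler(),
    PCA(n_components=3),                      # combine the optimal feature parameters
    MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0),
)
print("cross-validated accuracy:", cross_val_score(model, X, y, cv=5).mean())
```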
---
paper_title: Flame-Spray-Made Undoped Zinc Oxide Films for Gas Sensing Applications
paper_content:
Using zinc naphthenate dissolved in xylene as a precursor, undoped ZnO nanopowders were synthesized by the flame spray pyrolysis technique. The average diameter and length of the ZnO spherical and hexagonal particles were in the range of 5 to 20 nm, while the ZnO nanorods were found to be 5–20 nm wide and 20–40 nm long, under 5/5 (precursor/oxygen) flame conditions. The gas sensitivities of the undoped ZnO nanopowders towards 50 ppm of NO2, C2H5OH and SO2 were found to be 33, 7 and 3, respectively. The sensors showed great selectivity towards NO2 at a high working temperature (300 °C), while only small resistance variations were observed for C2H5OH and SO2.
---
paper_title: Application research of laser gas detection technology in the analysis of sulphur hexafluoride
paper_content:
Latent faults in SF6 gas electrical equipment can be identified more promptly and accurately through online monitoring of changes in the content of HF, one of the SF6 decomposition products. Tunable diode laser absorption spectroscopy (TDLAS) exploits the wavelength tunability of diode lasers to obtain the absorption spectrum of a selected absorption line of the target gas for its qualitative or quantitative analysis. Because of its high sensitivity, high selectivity, short response time and freedom from cross-interference, online monitoring can be achieved and the detection of HF made more direct, accurate and timely by installing the instrument at the SF6 electrical equipment monitoring site. The analysis shows that TDLAS technology is an effective and reliable method for online SF6 monitoring.
---
paper_title: Gas Sensors Based on Electrospun Nanofibers
paper_content:
Nanofibers fabricated via electrospinning have specific surface areas approximately one to two orders of magnitude larger than those of flat films, making them excellent candidates for potential applications in sensors. This review gives an overview of gas sensors using electrospun nanofibers comprising polyelectrolytes, conducting polymer composites, and semiconductors, based on various sensing techniques such as acoustic wave, resistive, photoelectric, and optical techniques. The results of sensing experiments indicate that nanofiber-based sensors show much higher sensitivity and quicker responses to target gases than sensors based on flat films.
---
paper_title: Cytochrome C Biosensor—A Model for Gas Sensing
paper_content:
This work is about gas biosensing with a cytochrome c biosensor. Emphasis is put on the analysis of the sensing process and a mathematical model to make predictions about the biosensor response. Reliable predictions about biosensor responses can provide valuable information and facilitate biosensor development, particularly at an early development stage. The sensing process comprises several individual steps, such as phase partition equilibrium, intermediate reactions, mass-transport, and reaction kinetics, which take place in and between the gas and liquid phases. A quantitative description of each step was worked out and finally combined into a mathematical model. The applicability of the model was demonstrated for a particular example of methanethiol gas detection by a cytochrome c biosensor. The model allowed us to predict the optical readout response of the biosensor from tabulated data and data obtained in simple liquid phase experiments. The prediction was experimentally verified with a planar three-electrode electro-optical cytochrome c biosensor in contact with methanethiol gas in a gas tight spectroelectrochemical measurement cell.
---
paper_title: The Multi-Chamber Electronic Nose—An Improved Olfaction Sensor for Mobile Robotics
paper_content:
One of the major disadvantages of the use of Metal Oxide Semiconductor (MOS) technology as a transducer for electronic gas sensing devices (e-noses) is the long recovery period needed after each gas exposure. This severely restricts its usage in applications where the gas concentrations may change rapidly, as in mobile robotic olfaction, where allowing for sensor recovery forces the robot to move at a very low speed, almost incompatible with any practical robot operation. This paper describes the design of a new e-nose which overcomes, to a great extent, such a limitation. The proposed e-nose, called Multi-Chamber Electronic Nose (MCE-nose), comprises several identical sets of MOS sensors accommodated in separate chambers (four in our current prototype), which alternate between sensing and recovery states, providing, as a whole, a device capable of sensing changes in chemical concentrations faster. The utility and performance of the MCE-nose in mobile robotic olfaction is shown through several experiments involving rapid sensing of gas concentration and mobile robot gas mapping.
---
paper_title: Gas-sensing properties of catalytically modified WO/sub 3/ with copper and vanadium for NH/sub 3/ detection
paper_content:
Ammonia gas detection by pure and catalytically modified WO3-based gas sensors was analyzed. The sensor response of pure tungsten oxide to NH3 was unsatisfactory, probably due to the unselective oxidation of ammonia into NOx. Copper and vanadium were introduced in different concentrations and the resulting material was annealed at different temperatures in order to improve the sensing properties for NH3 detection. The introduction of Cu and V as catalytic additives improved the sensor response to NH3. Possible reaction mechanisms of NH3 over these materials are discussed. Sensor responses to other gases like NO2 or CO and the interference of humidity on ammonia detection were also analyzed so as to choose the best sensing element.
---
paper_title: Remote Moisture Sensing utilizing Ordinary RFID Tags
paper_content:
The paper presents a concept where pairs of ordinary RFID tags are exploited for use as remotely read moisture sensors. The pair of tags is incorporated into one label, where one of the tags is embedded in a moisture-absorbent material and the other is left open. In a humid environment the moisture concentration is higher in the absorbent material than in the surrounding environment, which degrades the embedded tag's antenna in terms of dielectric losses and change of input impedance. The level of relative humidity or the amount of water in the absorbent material is determined for a passive RFID system by comparing the difference in RFID reader output power required to power up, respectively, the open and the embedded tag. It is similarly shown how the backscattered signal strength of a semi-active RFID system is proportional to the relative humidity and the amount of water in the absorbent material. Typical applications include moisture detection in buildings, especially from leaking water-pipe connections hidden behind walls. The presented solution has a cost comparable to ordinary RFID tags, and the passive system also has an infinite lifetime since no internal power supply is needed. The concept is characterized for two commercial RFID systems, one passive operating at 868 MHz and one semi-active operating at 2.45 GHz.
---
paper_title: A Nanopore Structured High Performance Toluene Gas Sensor Made by Nanoimprinting Method
paper_content:
Toluene gas was successfully measured at room temperature using a device microfabricated by a nanoimprinting method. A highly uniform nanoporous thin film was produced with a dense array of titania (TiO2) pores with a diameter of 70~80 nm using this method. This thin film had a Pd/TiO2 nanoporous/SiO2/Si MIS layered structure with Pd-TiO2 as the catalytic sensing layer. The nanoimprinting method was useful in expanding the TiO2 surface area by about 30%, as confirmed using AFM and SEM imaging. The measured toluene concentrations ranged from 50 ppm to 200 ppm. The toluene was easily detected by changing the Pd/TiO2 interface work function, resulting in a change in the I-V characteristics.
---
paper_title: Dynamic thermal conductivity sensor for gas detection
paper_content:
A dynamic thermal conductivity sensor for gas detection based on the transient thermal response of a SiC microplate slightly heated by a screen-printed Pt resistance is described. This sensor is developed for specific applications such as determining the carbon monoxide content in hydrogen for fuel cells, or the methane content in biogas applications. In contrast to existing devices, the apparatus developed here does not need any reference cell; it operates in transient mode near room temperature (ΔT ≈ 5 K in air), so it has very low power requirements (≈5 mW) and keeps the gas near thermal equilibrium, which simplifies the mathematical model and eases data processing. In test gas mixtures (N2 + He), absolute and precise measurements of the gas thermal conductivity have been achieved, leading to the exact molar fraction of the gas to be detected with good reproducibility.
---
paper_title: On the Electrooxidation and Amperometric Detection of NO Gas at the Pt/Nafion® Electrode
paper_content:
The electrochemical oxidation of nitric oxide (NO) gas at the Pt/Nafion® electrode has been studied at a concentration of 500 ppm. The electrooxidation of NO taking place over a wide potential range can be described by a transcendental equation, from which the half-wave potential of the reaction can be determined. For NO oxidation with appreciable overpotentials but negligible mass-transfer effects, Tafel kinetics applies. The obtained charge transfer coefficient (α) and exchange current density (i0) are 0.77 and 14 mA/cm2, respectively. An amperometric NO gas sensor based on the Pt/Nafion® electrode has been fabricated and tested over the NO concentration range from 0 to 500 ppm. The Pt/Nafion® electrode was used as an anode at a fixed potential, preferably 1.15 V (vs. Ag/AgCl/sat. KCl), which ensures that the current is limited by diffusion only. The sensitivity of the electrochemical sensor was found to be 1.86 mA/ppm/cm2. The potential interference from other gases, such as nitrogen dioxide (NO2), is also discussed.
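For illustration, the Tafel relation mentioned in the abstract can be evaluated numerically with the quoted parameter values (α = 0.77, i0 = 14 mA/cm2, sensitivity 1.86 mA/ppm/cm2); the single-electron anodic form used here is a standard textbook expression and an assumption on our part, not a statement of the paper's exact equation.

```python
import math

F = 96485.0      # Faraday constant, C/mol
R = 8.314        # gas constant, J/(mol K)
T = 298.15       # temperature, K

def tafel_current_density(eta_v, i0=14e-3, alpha=0.77):
    """Anodic Tafel approximation i = i0 * exp(alpha*F*eta/(R*T)).

    i0 (A/cm^2) and alpha are the values quoted in the abstract;
    eta_v is the anodic overpotential in volts.
    """
    return i0 * math.exp(alpha * F * eta_v / (R * T))

def no_concentration_ppm(i_limited, sensitivity=1.86e-3):
    """Convert a diffusion-limited current density (A/cm^2) into ppm NO
    using the reported sensitivity of 1.86 mA per ppm per cm^2."""
    return i_limited / sensitivity

print(tafel_current_density(0.05))   # current density at 50 mV overpotential
print(no_concentration_ppm(0.093))   # ~50 ppm NO for this limiting current
```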
---
paper_title: Metal Oxide Semi-Conductor Gas Sensors in Environmental Monitoring
paper_content:
Metal oxide semiconductor gas sensors are utilised in a variety of different roles and industries. They are relatively inexpensive compared to other sensing technologies, robust, lightweight, long lasting and benefit from high material sensitivity and quick response times. They have been used extensively to measure and monitor trace amounts of environmentally important gases such as carbon monoxide and nitrogen dioxide. In this review the nature of the gas response and how it is fundamentally linked to surface structure is explored. Synthetic routes to metal oxide semiconductor gas sensors are also discussed and related to their affect on surface structure. An overview of important contributions and recent advances are discussed for the use of metal oxide semiconductor sensors for the detection of a variety of gases—CO, NOx, NH3 and the particularly challenging case of CO2. Finally a description of recent advances in work completed at University College London is presented including the use of selective zeolites layers, new perovskite type materials and an innovative chemical vapour deposition approach to film deposition.
---
paper_title: A thin-film SnO2 sensor system for simultaneous detection of CO and NO2 with neural signal evaluation
paper_content:
Simultaneous CO and NO2 measurements are of importance for the ventilation control of automobiles and other applications. For this purpose semiconducting SnO2 sensors are often used. A well known disadvantage of SnO2 sensors is the concurrent reaction of the oxidizing NO2 and the reducing CO on the sensor surface, which causes a near zero sensor signal in the presence of both gases in a certain range of mixtures. A second disadvantage of SnO2 sensors is their long rise and decay times. The combination of different SnO2 sensors, operated at different temperatures and combined with a signal evaluation system based on a specially trained neural forward network (artificial neural net, ANN), solves this problem. The runtime version of the neural net is a small program, compatible with customary microcontrollers. These signal evaluation techniques are applicable to similar problems using sensor arrays or single sensors in a non-stationary operating mode.
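The abstract stresses that the runtime version of the trained network is a small program suitable for ordinary microcontrollers; a minimal sketch of such a runtime forward pass is shown below, with the layer sizes and the random placeholder weights being assumptions, since the real weights come from offline training.

```python
import numpy as np

# Placeholder weights for a small feed-forward net that maps the signals of
# four SnO2 elements (operated at different temperatures) to CO and NO2
# estimates. A real deployment would hard-code weights from offline training.
rng = np.random.default_rng(42)
W1, b1 = rng.normal(0, 0.1, (6, 4)), np.zeros(6)
W2, b2 = rng.normal(0, 0.1, (2, 6)), np.zeros(2)

def evaluate(sensor_signals):
    """Runtime forward pass: one hidden tanh layer, linear output layer."""
    h = np.tanh(W1 @ np.asarray(sensor_signals, dtype=float) + b1)
    return W2 @ h + b2                     # [CO estimate, NO2 estimate]

co_est, no2_est = evaluate([0.8, 0.5, 0.3, 0.9])
print(co_est, no2_est)
```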
---
paper_title: Electronic nose: A toxic gas sensor by polyaniline thin film conducting polymer
paper_content:
This paper presents a study of the toxic gas sensing properties of a polyaniline thin-film conducting polymer for ammonia gas. In this work, the conducting polymer precipitate was synthesized by a chemical method. The conducting polymer film, which acts as the active layer of the sensor, was prepared from this precipitate by a spin-coating technique. The deposited films were characterized by FTIR spectroscopy and the four-probe method. The study indicates that the resistance of the film decreases with increasing ammonia concentration at a known volume.
---
paper_title: Conductive polymer‐carbon black composites‐based sensor arrays for use in an electronic nose
paper_content:
Polymer‐carbon black composites are a new class of chemical detecting sensors used in electronic noses. These composites are prepared by mixing carbon black and polymer in an appropriate solvent. The mixture is deposited on a substrate between two metal electrodes, whereby the solvent evaporates leaving a composite film. Arrays of these chemiresistors, made from a chemically diverse number of polymers and carbon black, swell reversibly, inducing a resistance change on exposure to chemical vapors. These arrays generate a pattern that is a unique fingerprint for the vapor being detected. With the aid of algorithms these patterns are processed and recognized. These arrays can detect and discriminate between a large number of chemical vapors.
---
paper_title: Gas Sensor Based on Photonic Crystal Fibres in the 2ν3 and ν2 + 2ν3 Vibrational Bands of Methane
paper_content:
In this work, methane detection is performed on the 2ν3 and ν2 + 2ν3 absorption bands in the Near-Infrared (NIR) wavelength region using an all-fibre optical sensor. Hollow-core photonic bandgap fibres (HC-PBFs) are employed as gas cells due to their compactness, good integrability in optical systems and feasibility of long interaction lengths with gases. Sensing in the 2ν3 band of methane is demonstrated to achieve a detection limit one order of magnitude better than that of the ν2 + 2ν3 band. Finally, the filling time of a HC-PBF is demonstrated to depend on the fibre length and geometry.
---
paper_title: ZnO:Al Thin Film Gas Sensor for Detection of Ethanol Vapor
paper_content:
The ZnO:Al thin films were prepared by RF magnetron sputtering on Si substrate using Pt as interdigitated electrodes. The structure was characterized by XRD and SEM analyses, and the ethanol vapor gas sensing as well as electrical properties have been investigated and discussed. The gas sensing results show that the sensitivity for detecting 400 ppm ethanol vapor was ~20 at an operating temperature of 250°C. The high sensitivity, fast recovery, and reliability suggest that ZnO:Al thin film prepared by RF magnetron sputtering can be used for ethanol vapor gas sensing.
---
paper_title: Application of ultrasonic to a hydrogen sensor
paper_content:
A fast-response hydrogen sensor using ultrasound was developed and demonstrated. It exploits the difference in sound velocity between hydrogen and air: hydrogen concentration can be measured from the change in sound velocity when hydrogen is mixed into the air. The ultrasonic hydrogen sensor in this study has a very fast response time of less than 84 ms. We have also shown that it is possible to detect hydrogen concentrations as low as 100 ppm with a distance between the ultrasonic probes as small as 20 mm. A temperature variation test between −10 and 50°C was also carried out and confirmed that the measurements agree with calculated results.
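A simplified model of the measurement principle (an assumption on our part, not the paper's calibration) treats the H2/air mixture as an ideal gas with a mole-fraction-weighted molar mass, so the hydrogen fraction can be inverted from the measured sound speed:

```python
import math

R = 8.314            # gas constant, J/(mol K)
M_AIR = 0.02897      # molar mass of dry air, kg/mol
M_H2 = 0.002016      # molar mass of hydrogen, kg/mol
GAMMA = 1.40         # heat-capacity ratio, nearly the same for air and H2

def h2_mole_fraction(sound_speed_ms, temperature_k=293.15):
    """Invert c = sqrt(gamma*R*T/M_mix) with M_mix = x*M_H2 + (1-x)*M_AIR."""
    m_mix = GAMMA * R * temperature_k / sound_speed_ms**2
    x = (M_AIR - m_mix) / (M_AIR - M_H2)
    return min(1.0, max(0.0, x))

# Pure air at 20 degC gives c of about 343 m/s; a small increase in the
# measured speed indicates a hydrogen admixture.
print(h2_mole_fraction(343.2))   # ~0, plain air
print(h2_mole_fraction(345.0))   # ~0.011, i.e. roughly 1% hydrogen
```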
---
paper_title: Real-Time Gas Identification by Analyzing the Transient Response of Capillary-Attached Conductive Gas Sensor
paper_content:
In this study, the ability of the capillary-attached conductive gas sensor (CGS) to perform real-time gas identification was investigated. The structure of the prototype CGS is presented. Portions of the early CGS transient response, ranging from the first 11 samples to the first 100 samples, were selected, and different feature extraction and classification methods were applied to these portions. The validation of the methods was evaluated to study the ability of an early portion of the CGS transient response to identify the target gas (TG). Experimental results proved that applying features extracted from an early part of the CGS transient response, along with a classifier, can distinguish short-chain alcohols from each other perfectly. Decreasing the exposure time in the interaction between the target gas and the sensing element improved the reliability of the sensor; the classification rate was also improved and the identification time was decreased. Moreover, the results indicated the optimum interval of the early CGS transient response for selecting portions so as to achieve the best classification rates.
---
paper_title: Thermal and flow analysis of SiC-based gas sensors for automotive applications
paper_content:
Different block and tube mounting alternatives for SiC-based gas sensors were studied by means of temperature measurements and simulation of heat transfer and gas flow for steady state conditions. The most preferable tube mounting design was determined. Simulation-based guidelines were developed for designing tube-mounted gas sensors in the exhaust pipes of diesel and petrol engines, taking into account thermal constraints and flow conditions.
---
paper_title: Bromocresol Green/Mesoporous Silica Adsorbent for Ammonia Gas Sensing via an Optical Sensing Instrument
paper_content:
A meso-structured Al-MCM-41 material was impregnated with bromocresol green (BG) dye and then incorporated into a UV-Vis DRA spectroscopic instrument for the online detection of ammonia gas. The absorption response of the Al-MCM-41/BG ammonia sensing material was very sensitive at the optical absorption wavelength of 630 nm. A high linear correlation was achieved for ppmv and sub-ppmv levels of ammonia gas. The response time for the quantitative detection of ammonia gas concentrations ranging from 0.25 to 2.0 ppmv was only a few minutes. The lower detection limit achieved was 0.185 ppmv. The color change process was fully reversible during tens of cycling tests. These features together make this mesoporous Al-MCM-41 material very promising for optical sensing applications.
---
paper_title: Theory of power laws for semiconductor gas sensors
paper_content:
It has long been known empirically that the electric resistance of a semiconductor gas sensor under exposure to a target gas (partial pressure P) is proportional to P^n, where n is a constant fairly specific to the kind of target gas (power law). This paper aims at providing a theoretical basis for such power laws. It is shown that the laws can be derived by combining a depletion theory of the semiconductor, which deals with the distribution of electrons between surface states (surface charge) and bulk, with the dynamics of adsorption and/or reactions of gases on the surface, which is responsible for the accumulation or reduction of surface charges. The resulting laws describe well the sensor response behavior to oxygen, reducing gases and oxidizing gases.
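The power law R ∝ P^n can be recovered from measured data with a simple log-log fit; the sketch below uses synthetic data with n = −0.5 purely as an illustration.

```python
import numpy as np

def fit_power_law(partial_pressure, resistance):
    """Fit R = A * P**n on a log-log scale and return (n, A).

    n is the gas-specific exponent discussed in the abstract; the input
    arrays are assumed to be measured partial pressures and resistances.
    """
    logP = np.log(np.asarray(partial_pressure, dtype=float))
    logR = np.log(np.asarray(resistance, dtype=float))
    n, logA = np.polyfit(logP, logR, 1)
    return n, np.exp(logA)

# synthetic check: R = 2 * P**-0.5 (typical of a reducing gas)
P = np.array([10.0, 20.0, 50.0, 100.0, 200.0])
n, A = fit_power_law(P, 2.0 * P**-0.5)
print(n, A)   # ~ -0.5, ~2.0
```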
---
paper_title: Trace amount formaldehyde gas detection for indoor air quality monitoring
paper_content:
Formaldehyde is not only a carcinogenic chemical, but also causes sick building syndrome. Very small amounts of formaldehyde, such as those emitted from building materials and furniture, pose great concerns for human health. A Health Canada guideline, proposed in 2005, set the maximum formaldehyde concentration for long term exposure (8-hours averaged) as 40 ppb (50 μg/m3). This is a low concentration that commercially available formaldehyde sensors have great difficulty to detect both accurately and continuously. In this paper, we report a formaldehyde gas detection system which is capable of pre-concentrating formaldehyde gas using absorbent, and subsequently thermally desorbing the concentrated gas for detection by the electrochemical sensor. Initial results show that the system is able to detect formaldehyde gas at the ppb level, thus making it feasible to detect trace amount of formaldehyde in indoor environments.
---
paper_title: MICROMACHINED THIN FILM SNO2 GAS SENSORS IN TEMPERATURE-PULSED OPERATION MODE
paper_content:
Gas detection measurements based on a micromachined SnO2 gas sensor with a periodically pulsed heater voltage are presented. Additionally, the field-effect-induced changes in resistivity of the sensitive layer caused by the heater voltage were investigated. The combination of both results leads to an improved design for low-power SnO2 gas sensors. In temperature-pulsed mode, the sensor resistances were measured at constant delays after the pulse edges. The measurements were carried out with the common test gases carbon monoxide and nitrogen dioxide in synthetic air with 50% humidity. In the cold pulse phase, the CO sensor response is higher and shows only a slow decrease with increasing pulse duration. The sensor sensitivity in the pulsed-heating mode is compared with that in the continuously heated mode. The comparison of the measurement results reveals that the temperature-pulsed operation mode (TPOM) yields a significant reduction of power consumption and higher sensitivity.
---
paper_title: A CMOS Single-Chip Gas Recognition Circuit for Metal Oxide Gas Sensor Arrays
paper_content:
This paper presents a CMOS single-chip gas recognition circuit, which encodes sensor array outputs into a unique sequence of spikes with the firing delay mapping the strength of the stimulation across the array. The proposed gas recognition circuit examines the generated spike pattern of relative excitations across the population of sensors and looks for a match within a library of 2-D spatio-temporal spike signatures. Each signature is drift insensitive, concentration invariant and is also a unique characteristic of the target gas. This VLSI friendly approach relies on a simple spatio-temporal code matching instead of existing computationally expensive pattern matching statistical techniques. In addition, it relies on a novel sensor calibration technique that does not require control or prior knowledge of the gas concentration. The proposed gas recognition circuit was implemented in a 0.35 μm CMOS process and characterized using an in-house fabricated 4 × 4 tin oxide gas sensor array. Experimental results show a correct detection rate of 94.9% when the gas sensor array is exposed to propane, ethanol and carbon monoxide.
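A much-simplified software analogue of this rank-order idea is sketched below: each exposure is encoded as the firing order of the sensors (the strongest response fires first) and matched against a small library of stored orders; the library entries and the 4-sensor array size are made-up examples, not the chip's actual signatures.

```python
import numpy as np

def spike_order(responses):
    """Rank-order code: sensors with stronger (normalized) responses fire
    earlier, so the signature is simply the firing order across the array."""
    r = np.asarray(responses, dtype=float)
    r = (r - r.min()) / (np.ptp(r) + 1e-12)    # crude per-exposure normalization
    return tuple(np.argsort(-r))               # indices from first to last spike

# library of stored firing-order signatures, one per target gas
# (these example orders are invented for a 4-sensor array)
LIBRARY = {
    "propane":         (0, 2, 1, 3),
    "ethanol":         (2, 0, 3, 1),
    "carbon monoxide": (3, 1, 0, 2),
}

def classify(responses):
    order = spike_order(responses)
    # score = number of sensors whose firing position matches the signature
    scores = {gas: sum(a == b for a, b in zip(order, sig))
              for gas, sig in LIBRARY.items()}
    return max(scores, key=scores.get)

print(classify([0.9, 0.4, 0.7, 0.1]))   # -> "propane" with the orders above
```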
---
paper_title: Gas sensing properties of CNT-SnO2 Nanocomposite Thin Film Prepared by E-beam Evaporation
paper_content:
This work presents new results on the gas sensing behaviour of CNT-SnO2 nanocomposite thin films prepared by an e-beam evaporation process. SnO2 gas sensing layers were produced with varying CNT concentration. Structural and morphological characterization by means of SEM, TEM and XRD indicates that the CNTs are incorporated in the form of small fragments uniformly dispersed within the SnO2 crystal, and that CNT inclusion does not modify the surface morphology and only slightly alters the crystal structure. Electrical characterization highlights a very peculiar and interesting behavior for the tested layers. The SnO2 film's conductance decreases by more than two orders of magnitude at 1% CNT concentration. As the CNT content increases to 1%, the CO, ethanol, and particularly NO2 responses are improved and the working temperature tends to decrease. In addition, 1% CNT-SnO2 exhibits a maximized response to NO2, with a sensitivity of ~10 at a very low NO2 concentration of 250 ppb at 200 °C.
---
paper_title: Highly sensitive and selective ammonia gas sensor
paper_content:
We have fabricated and examined an ammonia gas sensor with high sensitivity using thick-film technology. The sensing material of the gas sensor is an FeOx-WO3-SnO2 oxide semiconductor. The sensor exhibits a resistance increase upon exposure to low concentrations of ammonia gas. Its resistance decreases, on the other hand, upon exposure to reducing gases such as ethyl alcohol, methane, propane and carbon monoxide. We have proposed and investigated a novel method for detecting ammonia gas quite selectively by using a sensor array with two sensing elements, comprising an ammonia gas sensor and a compensation element. The compensation element is a Pt-doped WO3-SnO2 gas sensor which shows the opposite direction of resistance change compared with the ammonia gas sensor upon exposure to ammonia gas. Excellent selectivity has been achieved using the sensor array with two sensing elements.
---
paper_title: Enhancement of MOS gas sensor selectivity by 'on-chip' catalytic filtering
paper_content:
Abstract A novel approach to enhancing the selectivity of a thick-film metal-oxide-semiconductor gas sensor is presented. The approach complements the efforts of many investigators who have tried to improve metal-oxide-semiconductor gas sensor selectivity by altering the sensing material alone. The discussion and data presented will focus on how 'on-chip' catalytic filtering is employed to build a selective fuel gas sensor without any adverse effects on the sensor's sensitivity or speed of response. Data are also presented showing the sensor's stable response to methane over an extended concentration range for over three years.
---
paper_title: Semiconducting Metal Oxide Based Sensors for Selective Gas Pollutant Detection
paper_content:
A review of some papers published in the last fifty years that focus on the semiconducting metal oxide (SMO) based sensors for the selective and sensitive detection of various environmental pollutants is presented.
---
paper_title: A wireless home safety gas leakage detection system
paper_content:
A wireless safety device for gas leakage detection is proposed. The device is intended for household safety, where appliances and heaters that use natural gas and liquid petroleum gas (LPG) may be a source of risk. The system can also be used for other applications in industry or plants that depend on LPG and natural gas in their operations. The system design consists of two main modules: the detection and transmission module, and the receiving module. The detection and transmitting module detects the change in gas concentration using a special sensing circuit built for this purpose, and checks whether the concentration of the gas(es) has exceeded a certain predetermined threshold. If the sensor detects a change in gas concentration, it activates an audiovisual alarm and sends a signal to the receiver module. The receiver module acts as a mobile alarm device to allow mobility within the house premises. The system was tested using LPG and the alarm was activated as a result of the change in concentration.
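The detection logic amounts to threshold comparison plus alarm signalling; a small sketch with simple hysteresis is given below, where the threshold values, function names and simulated sensor readout are all assumptions rather than details from the paper.

```python
import random
import time

ALARM_ON_PPM = 1000     # assumed trip level
ALARM_OFF_PPM = 800     # lower release level gives simple hysteresis

def read_gas_sensor():
    # stand-in for the real sensing circuit / ADC readout
    return random.gauss(200, 50)

def signal_alarm(active):
    # stand-in for the audiovisual alarm and the wireless link to the receiver
    print("ALARM ON" if active else "alarm cleared")

def monitor(n_polls=20, poll_s=0.05):
    alarm = False
    for _ in range(n_polls):
        ppm = read_gas_sensor()
        if not alarm and ppm >= ALARM_ON_PPM:
            alarm = True
            signal_alarm(True)
        elif alarm and ppm <= ALARM_OFF_PPM:
            alarm = False
            signal_alarm(False)
        time.sleep(poll_s)

monitor()
```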
---
paper_title: Advances in SAW Gas Sensors Based on the Condensate-Adsorption Effect
paper_content:
A surface-acoustic-wave (SAW) gas sensor with a low detection limit and fast response for volatile organic compounds (VOCs), based on detection of the condensate-adsorption effect, is developed. In this sensor a gas chromatography (GC) column acts as the separator element and a dual-resonator oscillator acts as the detector element. Using the surface effective permittivity method, an analysis of the response mechanism related to the condensate-adsorption effect is performed, allowing the sensor performance to be predicted prior to fabrication. New designs of SAW resonators, which act as the feedback element of the oscillator, are devised in order to decrease the insertion loss and to achieve single-mode control, resulting in superior frequency stability of the oscillator. Based on a new phase modulation approach, excellent short-term frequency stability (±3 Hz/s) is achieved with the SAW oscillator by using the 500 MHz dual-port resonator as the feedback element. In a sensor experiment investigating formaldehyde detection, the implemented SAW gas sensor exhibits an excellent detection limit as low as 0.38 pg.
---
paper_title: Thin-film SnO2 sensor arrays controlled by variation of contact potential—a suitable tool for chemometric gas mixture analysis in the TLV range
paper_content:
The selectivity of SnO2 thin-film gas sensors can be modulated by changing the contact areas of the contact electrodes. The investigated SnO2 arrays have been structured with two mask steps only. Gas mixture analysis in the German threshold limit value (TLV) range has been performed with an integrated thin-film sensor array using principal component regression (PCR) analysis. As an example, NO2, CO, CH4 and H2O mixtures in air have been analyzed. In this case additional doping materials, catalysts, temperature gradients, etc., to enhance selectivity are not necessary.
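Principal component regression itself is straightforward to reproduce; the sketch below builds a PCR model with scikit-learn on synthetic calibration data, since the paper's own calibration set is not available, and the array size and component count are assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
# Synthetic calibration data: Y holds the NO2, CO, CH4 and H2O contents of
# each mixture, X holds the responses of an 8-element SnO2 array to it.
Y = rng.uniform(0.0, 1.0, size=(60, 4))
mixing = rng.normal(size=(4, 8))
X = Y @ mixing + 0.01 * rng.normal(size=(60, 8))

pcr = make_pipeline(StandardScaler(), PCA(n_components=4), LinearRegression())
pcr.fit(X, Y)
print(pcr.predict(X[:1]))     # estimated [NO2, CO, CH4, H2O] for one sample
```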
---
paper_title: Atomic layer deposition of tin dioxide sensing film in microhotplate gas sensors
paper_content:
We report the use of atomic layer deposition (ALD) to produce the gas-sensitive tin dioxide film in a microhotplate gas sensor. The performance of the device was demonstrated using ethanol, acetone and acrylonitrile as model analytes. Fast response times and low drift rates of the output signal were measured, indicating a structurally stable tin dioxide film and reflecting the capabilities of ALD in gas sensor applications. Fabrication of the microhotplate using tungsten metallization and plasma deposited silicon dioxide dielectrics is also detailed.
---
paper_title: Methane and Carbon Monoxide Gas Detection system based on semiconductor sensor
paper_content:
One of the most important current problems in the gas detection field is the strong demand for methane leak detection and CO (carbon monoxide) detection to prevent explosions or CO poisoning accidents. In this context, the present paper describes the technical characteristics, test results, and a concluding application of methane and carbon monoxide gas detection using a sensor that can detect both CO and methane with a single sensing element. The paper presents the detection method as well as a functional sketch of the apparatus.
---
paper_title: Metal Oxide Gas Sensors: Sensitivity and Influencing Factors
paper_content:
Conductometric semiconducting metal oxide gas sensors have been widely used and investigated in the detection of gases. Investigations have indicated that the gas sensing process is strongly related to surface reactions, so one of the important parameters of gas sensors, the sensitivity of the metal oxide based materials, will change with the factors influencing the surface reactions, such as chemical components, surface-modification and microstructures of sensing layers, temperature and humidity. In this brief review, attention will be focused on changes of sensitivity of conductometric semiconducting metal oxide gas sensors due to the five factors mentioned above.
---
paper_title: Multi-layered thick-film gas sensor array for selective sensing by catalytic filtering technology
paper_content:
A C3H8 gas sensor array with high sensitivity and good selectivity by the combination of the catalytic filtering and the gas diffusion control has been achieved. The gas sensor array consists of a couple of catalytic filter layers, one with a Pd and the other with a Pt, a SiO2 insulating layer and two sensing layers on an alumina substrate. The sensor array shows high sensitivity to 500 ppm of C3H8 at temperature above 400°C and good selectivity for the interfering gases such as CO and C2H5OH by using a simple signal processing technique.
---
paper_title: Low-power micro gas sensor
paper_content:
The stable and low-power heating characteristics of a microheater are very important for the micro gas sensor. Membrane-type gas sensors have been fabricated by silicon IC technology. Steady-state thermal analysis by the finite-element method is performed to optimize the thermal properties of the gas sensor. From the analysis, the desirable size of the microheater for low power consumption is determined. The heating properties of fabricated poly-Si and Pt microheaters have been tested. The sensing characteristics of the packaged microsensor are also examined.
---
paper_title: Metal oxide nano-crystals for gas sensing
paper_content:
Abstract This review article is focused on the description of metal oxide single crystalline nanostructures used for gas sensing. Metal oxide nano-wires are crystalline structures with precise chemical composition, surface terminations, and dislocation-defect free. Their nanosized dimension generate properties that can be significantly different from their coarse-grained polycrystalline counterpart. Surface effects appear because of the magnification in the specific surface of nanostructures, leading to an enhancement of the properties related to that, such as catalytic activity or surface adsorption. Properties that are basic phenomenon underlying solid-state gas sensors. Their use as gas-sensing materials should reduce instabilities, suffered from their polycrystalline counterpart, associated with grain coalescence and drift in electrical properties. High degree of crystallinity and atomic sharp terminations make them very promising for better understanding of sensing principles and for development of a new generation of gas sensors. These sensing nano-crystals can be used as resistors, in FET based or optical based gas sensors. The gas experiments presented confirm good sensing properties, the possibility to use dopants and catalyser such in thin film gas sensors and the real integration in low power consumption transducers of single crystalline nanobelts prove the feasibility of large scale manufacturing of well-organized sensor arrays based on different nanostructures. Nevertheless, a greater control in the growth is required for an application in commercial systems, together with a thorough understanding of the growth mechanism that can lead to a control in nano-wires size and size distributions, shape, crystal structure and atomic termination.
---
paper_title: The surface and materials science of tin oxide
paper_content:
Abstract The study of tin oxide is motivated by its applications as a solid state gas sensor material, oxidation catalyst, and transparent conductor. This review describes the physical and chemical properties that make tin oxide a suitable material for these purposes. The emphasis is on surface science studies of single crystal surfaces, but selected studies on powder and polycrystalline films are also incorporated in order to provide connecting points between surface science studies with the broader field of materials science of tin oxide. The key for understanding many aspects of SnO 2 surface properties is the dual valency of Sn. The dual valency facilitates a reversible transformation of the surface composition from stoichiometric surfaces with Sn 4+ surface cations into a reduced surface with Sn 2+ surface cations depending on the oxygen chemical potential of the system. Reduction of the surface modifies the surface electronic structure by formation of Sn 5s derived surface states that lie deep within the band gap and also cause a lowering of the work function. The gas sensing mechanism appears, however, only to be indirectly influenced by the surface composition of SnO 2 . Critical for triggering a gas response are not the lattice oxygen concentration but chemisorbed (or ionosorbed) oxygen and other molecules with a net electric charge. Band bending induced by charged molecules cause the increase or decrease in surface conductivity responsible for the gas response signal. In most applications tin oxide is modified by additives to either increase the charge carrier concentration by donor atoms, or to increase the gas sensitivity or the catalytic activity by metal additives. Some of the basic concepts by which additives modify the gas sensing and catalytic properties of SnO 2 are discussed and the few surface science studies of doped SnO 2 are reviewed. Epitaxial SnO 2 films may facilitate the surface science studies of doped films in the future. To this end film growth on titania, alumina, and Pt(1 1 1) is reviewed. Thin films on alumina also make promising test systems for probing gas sensing behavior. Molecular adsorption and reaction studies on SnO 2 surfaces have been hampered by the challenges of preparing well-characterized surfaces. Nevertheless some experimental and theoretical studies have been performed and are reviewed. Of particular interest in these studies was the influence of the surface composition on its chemical properties. Finally, the variety of recently synthesized tin oxide nanoscopic materials is summarized.
---
paper_title: Gas sensors based on anodic tungsten oxide
paper_content:
Nanostructured porous tungsten oxide materials were synthesized by the means of electrochemical etching (anodization) of tungsten foils in aqueous NaF electrolyte. Formation of the sub-micrometer size mesoporous particles has been achieved by infiltrating the pores with water. The obtained colloidal anodic tungsten oxide dispersions have been used to fabricate resistive WO3 gas sensors by drop casting the sub-micrometer size mesoporous particles between Pt electrodes on Si/SiO2 substrate followed by calcination at 400 °C in air for 2 h. The synthesized WO3 films show slightly nonlinear current–voltage characteristics with strong thermally activated carrier transport behavior measured at temperatures between −20 °C and 280 °C. Gas response measurements carried out in CO, H2, NO and O2 analytes (concentration from 1 to 640 ppm) in air as well as in Ar buffers (O2 only in Ar) exhibited a rapid change of sensor conductance for each gas and showed pronounced response towards H2 and NO in Ar and air, respectively. The response of the sensors was dependent on temperature and yielded highest values between 170 °C and 220 °C.
---
paper_title: Feasibility of wireless gas detection with an FMCW RADAR interrogation of passive RF gas sensor
paper_content:
The feasibility of the remote measurement of gas detection from an RF gas sensor has been experimentally investigated. It consists of a Frequency-Modulated Continuous-Wave (FMCW) RADAR interrogation of an antenna loaded by the passive sensor. The frequency band of the RADAR [28.8–31GHz] allows the detection of the resonant frequencies of Whispering Gallery Modes that are sensitive to gas concentration. Reported experimental results provide the proof-of-concept of remote measurement of gas concentration fluctuation from RADAR interrogation of this new generation of passive gas sensors.
---
paper_title: Detection of hydrogen fluoride using SnO2-based gas sensors : Understanding of the reactional mechanism
paper_content:
Tin dioxide-based gas sensors are very efficient devices for the detection of hydrogen fluoride at trace levels, since amounts lower than 50 ppb can be detected. Considering the working temperatures of tin dioxide-based gas sensors, which lie between 25 and 500 °C, the best sensitivity was obtained when the sensor temperature was maintained at 380 °C. In order to explain why high sensitivity is obtained at this temperature and to understand the reaction mechanism between HF molecules and tin dioxide surfaces, X-ray photoelectron spectroscopy investigations were performed on tin dioxide samples treated with HF vapors at temperatures ranging from 200 to 500 °C. For this temperature range, the comparison between the electrical response curves and the XPS characterization led to the consideration of two separate temperature ranges in which the interaction mechanism between HF and SnO2 can be explained. For temperatures lower than 380 °C, the adsorption of HF induces the formation of surface hydroxyl groups and SnF4 species; in that case, the electrical conductance of the sensitive material gradually increases. Beyond this temperature, water vapor desorbs from the tin dioxide surface and the electrical conductance is lowered. Finally, for both temperature ranges, the interaction mechanism occurring at the gas/detector interface is proposed.
---
paper_title: Gas Sensors Based on Conducting Polymers
paper_content:
The gas sensors fabricated by using conducting polymers such as polyaniline (PAni), polypyrrole (PPy) and poly (3,4-ethylenedioxythiophene) (PEDOT) as the active layers have been reviewed. This review discusses the sensing mechanism and configurations of the sensors. The factors that affect the performances of the gas sensors are also addressed. The disadvantages of the sensors and a brief prospect in this research field are discussed at the end of the review.
---
paper_title: CMOS single-chip gas detection system comprising capacitive, calorimetric and mass-sensitive microsensors
paper_content:
A single-chip gas detection system fabricated in industrial CMOS technology combined with post-CMOS micro-machining is presented. The sensors rely on a chemo-sensitive polymer layer, which absorbs predominantly volatile organic compounds (VOCs). A mass-sensitive resonant-beam oscillator, a capacitive sensor incorporated into a second-order /spl Sigma//spl Delta/-modulator, a calorimetric sensor with low-noise signal conditioning circuitry and a temperature sensor are monolithically integrated on a single chip along with all necessary driving and signal conditioning circuitry. The preprocessed sensor signals are converted to the digital domain on chip. An additional integrated controller sets the sensor parameters and transmits the sensor values to an off-chip data recording unit via a standard serial interface. A 6-chip-array has been flip-chip packaged on a ceramic substrate, which forms part of a handheld VOC gas detection unit. Limits of detection (LOD) of 1-5 ppm n-octane, toluene or propan-1-ol have been achieved.
---
paper_title: Design of conducting polymer gas sensors : modelling and experiment
paper_content:
The use of conducting polymers as active materials in chemical sensors is growing rapidly; for example, they have been used in place of metal oxides to sense gases and vapours such as NH3, NO2 and alcohols. Here we model a polymer gas sensor in terms of homogeneous diffusion coupled to simple adsorption within a bounded layer. From the model we present analytical expressions for the adsorbate profiles in the diffusion-rate limited, reaction-rate limited and intermediate cases in terms of fundamental dimensionless parameters. The model is then used to calculate the conductance of a typical chemiresistor, which consists of a pair of co-planar electrodes beneath an electropolymerised thin polymer film on an impermeable substrate. The analytical expression for the electric field is combined with the diffusion-reaction equations by assuming a single-carrier conduction model. Finally, the theoretical chemiresistor response is calculated in six limiting cases and compared with experimental data on pyrrole-based conducting polymers. In practice the gas-polymer interaction is likely to be much more complex, and so we are extending our model to consider the conduction principles in more detail.
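The kind of bounded-layer diffusion-plus-adsorption model described here can also be explored numerically; the explicit finite-difference sketch below uses a reversible first-order trapping term and illustrative parameter values that are not taken from the paper.

```python
import numpy as np

def polymer_film_uptake(D=1e-12, ka=0.5, kd=0.1, K=10.0, c_gas=1.0,
                        L=1e-6, nx=50, t_end=5.0):
    """Explicit finite-difference sketch of diffusion plus reversible
    adsorption in a polymer film of thickness L on an impermeable substrate.

    c: mobile analyte concentration, s: adsorbed analyte. The gas-facing
    surface is held at the partition-equilibrium value K*c_gas; all
    parameter values are illustrative only.
    """
    dx = L / (nx - 1)
    dt = 0.2 * dx**2 / D              # satisfies the explicit stability limit
    c = np.zeros(nx)
    s = np.zeros(nx)
    for _ in range(int(t_end / dt)):
        c[0] = K * c_gas              # partition equilibrium at the surface
        lap = np.zeros(nx)
        lap[1:-1] = (c[2:] - 2 * c[1:-1] + c[:-2]) / dx**2
        lap[-1] = 2 * (c[-2] - c[-1]) / dx**2      # zero flux at the substrate
        dc = D * lap - ka * c + kd * s
        ds = ka * c - kd * s
        c, s = c + dt * dc, s + dt * ds
    c[0] = K * c_gas
    return c, s    # concentration profiles across the film

c_profile, s_profile = polymer_film_uptake()
print(c_profile[-1], s_profile[-1])   # uptake at the substrate side
```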
---
paper_title: Comparative studies on polymer coated SAW and STW resonators for chemical gas sensor applications
paper_content:
This paper presents and compares experimental data from performance tests on polymer coated 433 MHz surface acoustic wave (SAW) and 1 GHz surface transverse wave (STW) based two-port resonators for chemical gas sensor applications. The acoustic devices were coated with gas sensitive polymer films of different thickness' and viscoelastic properties as parylene C, poly-(2-hydroxyethylmethacrylate) (PHEMA) and poly-(n-butyl-methacrylate) (PBMA). Then they were gas probed using perchloroethylene and water. The SAW versus STW sensor sensitivities, insertion loss, loaded Q and distortion of the frequency and phase responses during gas probing were evaluated and compared. It was found that STW devices, when coated with thin sensitive polymer layers, retain low loss, high Q and low noise and feature substantially higher relative gas sensitivities compared to their SAW counterparts. Coated SAW sensors can stand substantially thicker soft polymer films than STW ones but at the expense of highly degraded electrical and noise performance and moderate increase in relative sensitivity.
---
paper_title: Study on selectivity enhancement of tin dioxide gas sensor using non-conducting polymer membrane
paper_content:
A non-conducting polymer membrane can be used as a molecular sieve to enhance the selectivity of a tin dioxide gas sensor. A commercial polymer, polyimide XU218 from the Ciba-Geigy company, is coated on the surface of a tin dioxide gas sensor, FIGARO TGS842, which was designed to detect methane. The polymer-coated sensor is calibrated in a high-accuracy testing system controlled by a PC with GPIB and mass flow controllers (MFCs). Three gases, hydrogen, methane, and ammonia, are used to investigate the response of the sensor. It is found that the sensor shows a characteristic change in response to ammonia and an almost negligible change in response to hydrogen and methane.
---
paper_title: A wireless, passive carbon nanotube-based gas sensor
paper_content:
A gas sensor, comprising a gas-responsive multiwall carbon nanotube (MWNT)-silicon dioxide (SiO2) composite layer deposited on a planar inductor-capacitor resonant circuit, is presented here for the monitoring of carbon dioxide (CO2), oxygen (O2), and ammonia (NH3). The absorption of different gases in the MWNT-SiO2 layer changes the permittivity and conductivity of the material and consequently alters the resonant frequency of the sensor. By tracking the frequency spectrum of the sensor with a loop antenna, humidity, temperature, as well as CO2, O2 and NH3 concentrations can be determined, enabling applications such as remotely monitoring conditions inside opaque, sealed containers. Experimental results show the sensor response to CO2 and O2 is both linear and reversible. Both irreversible and reversible responses are observed in response to NH3, indicating both physisorption and chemisorption of NH3 by the carbon nanotubes. A sensor array, comprising an uncoated, a SiO2-coated, and a MWNT-SiO2-coated sensor, enables the CO2 measurement to be automatically calibrated for operation in a variable humidity and temperature environment.
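The readout principle reduces to tracking the LC resonance; a small numerical illustration is shown below, where the inductance and baseline capacitance values are assumptions and C is taken to scale linearly with the coating's effective permittivity.

```python
import math

L_H = 2.0e-6        # inductance of the planar coil (assumed value)
C0_F = 5.0e-12      # capacitance with the unexposed MWNT-SiO2 coating (assumed)

def resonant_frequency(c_farad, l_henry=L_H):
    return 1.0 / (2.0 * math.pi * math.sqrt(l_henry * c_farad))

def relative_permittivity_change(f_measured, f_baseline=resonant_frequency(C0_F)):
    """Gas absorption changes the coating permittivity and hence C.

    Assuming C scales linearly with the effective permittivity,
    C/C0 = eps/eps0 = (f0/f)^2, so the relative change is (f0/f)^2 - 1.
    """
    return (f_baseline / f_measured) ** 2 - 1.0

f0 = resonant_frequency(C0_F)                    # ~50 MHz with the values above
print(relative_permittivity_change(0.999 * f0))  # ~0.2% permittivity increase
```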
---
paper_title: Dynamic Control of Adsorption Sensitivity for Photo-EMF-Based Ammonia Gas Sensors Using a Wireless Network
paper_content:
This paper proposes an adsorption sensitivity control method that uses a wireless network and illumination light intensity in a photo-electromagnetic field (EMF)-based gas sensor for measurements in real time of a wide range of ammonia concentrations. The minimum measurement error for a range of ammonia concentration from 3 to 800 ppm occurs when the gas concentration magnitude corresponds with the optimal intensity of the illumination light. A simulation with LabView-engineered modules for automatic control of a new intelligent computer system was conducted to improve measurement precision over a wide range of gas concentrations. This gas sensor computer system with wireless network technology could be useful in the chemical industry for automatic detection and measurement of hazardous ammonia gas levels in real time.
---
paper_title: Observation of dynamic behavior of PD-generated SF/sub 6/ decompositions using carbon nanotube gas sensor
paper_content:
The authors have proposed a new detection method for partial discharge (PD) occurring in sulfur hexafluoride (SF6) gas using a carbon nanotube (CNT) gas sensor. In a previous study, we investigated the dependency of the gas sensor response on the applied voltage and the gas sensor position. In this paper, a series of experiments was performed to observe the dynamic behavior of the decomposition gas using the CNT gas sensor. The gas sensor responses to PD under several SF6 gas pressures were measured. It was found that the sensor response normalized by PD power increased almost linearly with the gas pressure. The result is useful for understanding how the gas pressure influences the generation and diffusion of the decomposition gas. The effects of an absorbent placed inside the discharge chamber were also investigated. Finally, the gas sensor responses were measured at time intervals after PD was extinguished to check whether offline diagnosis is possible. Although the sensor response decreased with elapsed time after PD was extinguished, it was still possible to detect the residual decomposition gas using the CNT gas sensor.
---
paper_title: Detection of individual gas molecules adsorbed on graphene
paper_content:
The ultimate aim of any detection method is to achieve such a level of sensitivity that individual quanta of a measured entity can be resolved. In the case of chemical sensors, the quantum is one atom or molecule. Such resolution has so far been beyond the reach of any detection technique, including solid-state gas sensors hailed for their exceptional sensitivity. The fundamental reason limiting the resolution of such sensors is fluctuations due to thermal motion of charges and defects, which lead to intrinsic noise exceeding the sought-after signal from individual molecules, usually by many orders of magnitude. Here, we show that micrometre-size sensors made from graphene are capable of detecting individual events when a gas molecule attaches to or detaches from graphene's surface. The adsorbed molecules change the local carrier concentration in graphene one by one electron, which leads to step-like changes in resistance. The achieved sensitivity is due to the fact that graphene is an exceptionally low-noise material electronically, which makes it a promising candidate not only for chemical detectors but also for other applications where local probes sensitive to external charge, magnetic field or mechanical strain are required.
---
paper_title: The effect of sequence length on DNA decorated CNT gas sensors
paper_content:
This article reports the effect of deoxyribonucleic acid (DNA) sequence length on the sensing characteristics of DNA-decorated single-walled carbon nanotube (SWNT) devices. First, SWNTs were assembled on microelectrodes via a versatile, solution-based dielectrophoresis process. Then four single-stranded poly-G oligomers with lengths of 8, 16, 24 and 32 were decorated on the SWNTs, and the response of the SWNT sensors to methanol and IPA vapors was measured. We found that the optimum DNA sequence length for sensing applications was 24. The sequence length of the DNA had a dramatic impact on the response of the DNA-SWNT sensors. This phenomenon can be explained by the difference in binding affinities of nucleotides on SWNTs and the conformations that DNA forms on SWNTs. These experimental results have significant implications for the interactions between DNA and SWNTs. They can facilitate the development of DNA-functionalized SWNT sensors in analytical chemistry, biochemistry and environmental monitoring applications.
---
paper_title: Remote Moisture Sensing utilizing Ordinary RFID Tags
paper_content:
The paper presents a concept where pairs of ordinary RFID tags are exploited for use as remotely read moisture sensors. The pair of tags is incorporated into one label where one of the tags is embedded in a moisture absorbent material and the other is left open. In a humid environment the moisture concentration is higher in the absorbent material than the surrounding environment which causes degradation to the embedded tag's antenna in terms of dielectric losses and change of input impedance. The level of relative humidity or the amount of water in the absorbent material is determined for a passive RFID system by comparing the difference in RFID reader output power required to power up respectively the open and embedded tag. It is similarly shown how the backscattered signal strength of a semi-active RFID system is proportional to the relative humidity and amount of water in the absorbent material. Typical applications include moisture detection in buildings, especially from leaking water pipe connections hidden beyond walls. Presented solution has a cost comparable to ordinary RFID tags, and the passive system also has infinite life time since no internal power supply is needed. The concept is characterized for two commercial RFID systems, one passive operating at 868 MHz and one semi-active operating at 2.45 GHz.
---
paper_title: Differential Absorption Lidar to Measure Subhourly Variation of Tropospheric Ozone Profiles
paper_content:
A tropospheric ozone Differential Absorption Lidar system, developed jointly by The University of Alabama in Huntsville and the National Aeronautics and Space Administration, is making regular observations of ozone vertical distributions between 1 and 8 km with two receivers under both daytime and nighttime conditions using lasers at 285 and 291 nm. This paper describes the lidar system and analysis technique with some measurement examples. An iterative aerosol correction procedure reduces the retrieval error arising from differential aerosol backscatter in the lower troposphere. Lidar observations with coincident ozonesonde flights demonstrate that the retrieval accuracy ranges from better than 10% below 4 km to better than 20% below 8 km with 750-m vertical resolution and 10-min temporal integration.
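For orientation, the standard DIAL retrieval (a textbook relation, not a result specific to this system) obtains the ozone number density n(z) from the range-resolved returns at the "on" (285 nm) and "off" (291 nm) wavelengths as

n(z) = \frac{1}{2\,\Delta\sigma}\,\frac{d}{dz}\ln\!\left(\frac{P_{\mathrm{off}}(z)}{P_{\mathrm{on}}(z)}\right),

where \Delta\sigma is the differential ozone absorption cross-section between the two wavelengths; the aerosol correction mentioned above accounts for the differential backscatter and extinction terms that this simple form neglects.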
---
paper_title: Photoacoustic Spectroscopy with Quantum Cascade Lasers for Trace Gas Detection
paper_content:
Abstract: Various applications, such as pollution monitoring, toxic-gas detection, noninvasive medical diagnostics and industrial process control, require sensitive and selective detection of gas traces with concentrations in the parts in 10^9 (ppb) and sub-ppb range. The recent development of quantum-cascade lasers (QCLs) has given a new aspect to infrared laser-based trace gas sensors. In particular, single mode distributed feedback QCLs are attractive spectroscopic sources because of their excellent properties in terms of narrow linewidth, average power and room temperature operation. In combination with these laser sources, photoacoustic spectroscopy offers the advantages of high sensitivity and selectivity, a compact sensor platform, fast time-response and user friendly operation. This paper reports recent developments on quantum cascade laser-based photoacoustic spectroscopy for trace gas detection. In particular, different applications of a photoacoustic trace gas sensor employing a longitudinal resonant cell with a detection limit on the order of a hundred ppb of ozone and ammonia are discussed. We also report two QC laser-based photoacoustic sensors for the detection of nitric oxide, for environmental pollution monitoring and medical diagnostics, and hexamethyldisilazane, for applications in semiconductor manufacturing processes.
---
paper_title: A study on NDIR-based CO2 sensor to apply remote air quality monitoring system
paper_content:
Recently, various CO2 sensors reported in the literature may be classified into two major categories depending on the measuring principle. First, chemical CO2 gas sensors have the principal advantages of very low energy consumption and small size. On the negative side, they are difficult to apply in a variety of practical fields because they have a short lifetime as well as low durability. Second, NDIR-based CO2 sensors are commonly used for monitoring indoor air quality due to their relatively high accuracy compared with that of chemical CO2 gas sensors. In this paper, therefore, we evaluate NDIR-based CO2 sensors to verify their applicability in a remote air quality monitoring system. In addition, the principle and structure of the NDIR CO2 sensor are discussed, and we analyze CO2 concentrations measured in a real subway station platform over one month. Finally, this paper concludes that the accuracy of the values measured by the NDIR CO2 sensor is sufficient for indoor air quality monitoring applications.
---
paper_title: Detection of gases by correlation spectroscopy
paper_content:
This paper describes the detection of various common gases by means of Correlation Spectroscopy, employing a Complementary-Source-Modulation (CoSM) approach based on compact light-emitting diode (LED) sources. Theoretical results for the quantitative detection of O in air are presented, with the use of practical low cost LED sources.
---
paper_title: Compact Raman lidar for hydrogen gas leak detection
paper_content:
A compact Raman lidar system for hydrogen gas leak detection was constructed. Laser-induced fluorescence at a distance of 70 m and Raman scattered light from N2 gas at short range could be detected.
---
paper_title: Measurement of CH4 by differential infrared optical absorption spectroscopy
paper_content:
To address the limitations of current optical gas detection methods (complicated sensors and signal processing, low detection sensitivity, and the ability to detect only a single type of gas), we used a phase-shifted fiber grating to tune the laser window for differential infrared absorption spectrometry and studied the associated spectrum adjustment and detection signal processing problems. CH4 in the mainstream smoke of different cigarette brands was measured continuously by differential optical absorption spectroscopy (DOAS). Using an RM200 rotating-disk smoking machine with 20 smoking channels, the samples were loaded into the rotating disk and ignited every 3 s. With the DOAS technique, the measurement was completed within 6-7 min after the gas was transported into a White cell with a 31.5 m path length connected to the smoking machine. This technique improves the temporal resolution of the measurement compared with traditional methods. At the beginning, the pressure in the White cell was 5.2×10^4 Pa, and it increased to 1.03×10^5 Pa by the time the measurement finished. The results show that the CH4 concentration in the mainstream smoke is between 0.89 mg/m^3 and 1.54 mg/m^3. A wide-band infrared LED light source is placed at the entrance of the White cell, and its light passes through a large-diameter phase-shifted fiber grating into the cell. A piezoelectric ceramic (PZT) controls the phase-shifted fiber Bragg grating so that the infrared LED output is modulated into narrowband, continuously tunable light whose wavelength corresponds to the absorption spectra of various gases such as CH4. The output signal of the White cell enters a photoelectric detector and an ADC, and coherent demodulation is performed digitally under computer control. This realizes a high-sensitivity digital lock-in amplifier function suitable for high-resolution detection of trace gases. Simulation studies and practical tests show that the PZT-controlled phase-shifted fiber Bragg grating spectral scan can effectively suppress background optical interference and eliminate Rayleigh and Raman scattering. The combination of phase-shifted fiber Bragg grating modulation and digital lock-in amplifier detection improves the sensitivity by three orders of magnitude. The method allows a wide range of gases to be detected with a single piece of equipment. The device is small, low cost and easy to carry, can measure a wide range of gas concentrations, and has important application value for environmental monitoring. If the White cell is used underwater, trace gases can also be detected, which is of great significance for environmental and resource monitoring of rivers and the ocean.
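For context, the DOAS retrieval used here rests on the Lambert-Beer law; in its standard form (a generic relation, not taken verbatim from this paper) the differential optical density over a path of length L is

D'(\lambda) = \ln\!\frac{I_0'(\lambda)}{I(\lambda)} = L \sum_i \sigma_i'(\lambda)\, c_i,

where \sigma_i'(\lambda) is the differential (narrow-band) absorption cross-section of species i and c_i its concentration; a least-squares fit of the measured D'(\lambda) against reference cross-sections yields the CH4 concentration.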
---
paper_title: Tunable erbium-doped fiber ring laser for applications of infrared absorption spectroscopy
paper_content:
Abstract We fabricate a low-noise erbium-doped fiber ring laser that can be continuously tuned over 102 nm by insertion of a fiber Fabry-Perot tunable filter (FFP-TF) in the ring cavity with a novel cavity structure and an optimal gain medium length. As an application of this fiber ring laser, we performed absorption spectroscopy of acetylene (13C2H2) and hydrogen cyanide (H13C14N) and measured the absorption spectra of more than 50 transition lines of these gases with an excellent signal to noise ratio (SNR). The pressure broadening coefficients of four acetylene transition lines were obtained using this fiber ring laser and an external cavity laser diode.
---
paper_title: Investigation of Wavelength Modulation and Wavelength Sweep Techniques in Intracavity Fiber Laser for Gas Detection
paper_content:
Wavelength modulation technique (WMT) and wavelength sweep technique (WST) are introduced into an intracavity fiber laser for both gas concentration sensing and absorption wavelength detection in this paper. The principle of gas sensing and spectral analysis using WMT and WST was studied. A polynomial fit was adopted to model the nonlinear characteristic of the system, based on which the absorption wavelength can be detected. System optimization and acetylene gas sensing were both realized, and the absolute detection error was kept below 75 ppm. The absorption wavelengths of the detected gas were calculated from the polynomial fit of the system nonlinearity. The absorption wavelengths of acetylene were detected using this method, with an absolute error of no more than 0.445 nm. The system is thus able to realize both concentration sensing and gas-type recognition.
---
paper_title: In situ FTIR investigations of reverse water gas shift reaction activity at supercritical conditions
paper_content:
Abstract In situ Fourier transform infrared spectroscopy was employed to investigate the formation of CO and other adsorbed species on Al2O3-supported catalysts exposed to a supercritical mixture of CO2 and H2 (P = 138 bar; T = 342 K; molar CO2/H2 = 19). On just the Al2O3 support, surface carbonates are observed consistent with literature reports, with some evidence of surface formates. On Pd/Al2O3, however, several surface species including carbonates, formates, and CO are unambiguously observed. On Pd/Al2O3, a peak at 1900-1920 cm^-1 corresponds to adsorbed CO. Based on the presence of water peaks at reaction conditions, the reverse water gas shift reaction is the most plausible mechanism for CO formation. The CO peak evolves with time on stream, gradually increasing in intensity from ~20 min to 5 h. This suggests that short-residence-time continuous reactors are preferred over batch reactors to minimize the effects of possible catalyst deactivation by CO. Interestingly, the CO peak was not observed on either Ru/Al2O3 or Ni/Al2O3 catalysts, suggesting that either of these catalysts may be better suited than Pt- or Pd-based catalysts for hydrogenation in supercritical CO2.
---
paper_title: Wavelength Sweep of Intracavity Fiber Laser for Low Concentration Gas Detection
paper_content:
Wavelength sweep technique (WST) is introduced into intracavity fiber laser (ICFL) for low concentration gas detection. The limitation induced by noise can be eliminated using this method, and the performance of the system is improved. The sensitivity of the system is reduced to less than 200 ppm. With WST, the sweeping characteristic of the ICFL can be described according to known gas absorption spectra.
---
paper_title: Photonic MEMS for NIR in-situ Gas Detection and Identification
paper_content:
We report on a novel sensing technique combining photonics and microelectromechanical systems (MEMS) for the detection and monitoring of gas emissions in environmental, medical, and industrial applications. We discuss how MEMS-tunable vertical-cavity surface-emitting lasers (VCSELs) can be exploited for in-situ detection and NIR spectroscopy of several gases, such as O2, N2O, CH4, HF, HCl, etc., with estimated sensitivities between 0.1 and 20 ppm on footprints << 10^-3 mm^3. The VCSELs can be electrostatically tuned with a continuous wavelength shift up to 20 nm, allowing for unambiguous NIR signature determination. Selective concentration analysis in heterogeneous gas compositions is enabled, thus paving the way to an integrated optical platform for multiplexed gas identification by bandgap and device engineering. We will discuss here, in particular, our efforts on the development of a 760 nm AlGaAs-based tunable VCSEL for O2 detection.
---
paper_title: Mobile robots with active IR-optical sensing for remote gas detection and source localization
paper_content:
While other robots use in-situ measurements for gas leak detection and localization, we propose to apply remote sensing. It is easier and safer to conduct, permits rapid scans and is applicable to leak sources high up. An IR-optical sensor is used, exploiting spectral absorption effects of gases. Tailored leak detection and localization strategies are proposed. A simulation environment with a 3D model of the gas concentration field is used for developing and testing the detection and localization strategies. The system performance is demonstrated in a case study with a chemical plant.
---
paper_title: Detection of trace concentrations of helium and argon in gas mixtures by laser-induced breakdown spectroscopy
paper_content:
We report what we believe to be the first demonstration of the detection of trace quantities of helium and argon in binary and ternary gas mixtures with nitrogen by laser-induced breakdown spectroscopy (LIBS). Although significant quenching of helium transitions due to collisional deactivation of excited species was observed, it was found that losses in analytical sensitivity could be minimized by increasing the laser irradiance and decreasing the pressure at which the analyses were performed. In consequence, limits of detection of parts-per-million and tens of parts-per-million and linear dynamic ranges of several orders of magnitude in analyte concentration were obtained. The results of this study suggest that LIBS may have potential applications in the detection of other noble gases at trace concentrations.
---
paper_title: Application research of laser gas detection technology in the analysis of sulphur hexafluoride
paper_content:
Latent faults in SF6 gas-insulated electrical equipment can be identified more timely and accurately through online monitoring of changes in the content of HF, one of the SF6 decomposition products. Tunable diode laser absorption spectroscopy (TDLAS) exploits the wavelength tunability of diode lasers to record the absorption spectrum of a selected absorption line of the target gas for its qualitative or quantitative analysis. Because of its high sensitivity, high selectivity, short response time and freedom from cross-interference, online monitoring can be achieved by installing the equipment at the SF6 electrical equipment monitoring site, making the detection of HF more direct, accurate and timely. The analysis shows that TDLAS technology is an effective and reliable method for online SF6 monitoring.
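To make the quantitative step concrete, here is a minimal Python sketch of the Beer-Lambert inversion that underlies TDLAS concentration retrieval. It is a generic illustration rather than the cited system: the effective absorption coefficient and path length are placeholder values, and a real instrument would additionally use wavelength modulation and calibrated line parameters.

import math

def mole_fraction_from_transmission(i_over_i0, absorb_coeff_per_cm_per_frac, path_cm):
    """Invert the Beer-Lambert law I/I0 = exp(-alpha * x * L).

    absorb_coeff_per_cm_per_frac: effective peak absorption coefficient per unit
        mole fraction (cm^-1), lumping line strength, line shape and gas density.
    path_cm: optical path length in cm.
    Returns the estimated mole fraction x of the target gas.
    """
    absorbance = -math.log(i_over_i0)
    return absorbance / (absorb_coeff_per_cm_per_frac * path_cm)

# Illustrative numbers only: 2% absorption over a 50 cm path.
x = mole_fraction_from_transmission(0.98, absorb_coeff_per_cm_per_frac=4.0, path_cm=50.0)
print(f"estimated mole fraction ~ {x:.2e}")  # roughly 1.0e-4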
---
paper_title: Non-selective NDIR array for gas detection
paper_content:
A micro component for a non-selective NDIR (non-dispersive infrared) gas detection system is presented in this work. The device consists of an IR detection module composed of a thermopile and a thin-film filter array. The thermopile arrays (up to 4x4) are built on a silicon substrate by bulk micromachining processes. The whole matrix is built on a thin freestanding silicon oxide/silicon nitride membrane of 2100x2100 μm^2 defined by anisotropic wet etching. To ensure the existence of hot and cold junctions for each detector, we define absorbers and ribs, 6 μm thick, on the insulating membrane by heavy boron doping of the silicon underneath. The ribs crisscross the membrane, contacting the silicon bulk, which acts as a heat sink. Absorbers are located in the centre of each individual pseudo-membrane defined by the rib intersections. Incident radiation heats up the absorber, creating a temperature difference that is measured by the thermocouples placed between the absorber and the ribs. On a second chip, the elements of the filter array are fabricated in a matching configuration. The filters are built on a silicon substrate by alternating thin films of different refractive index, acting like a Fabry-Perot structure with 2-8 μm silicon oxide cores. The transmission peaks of the filters are not tuned for the detection of any specific substance: they configure a non-selective, general-purpose filter array (400-4000 cm^-1), making signal processing and pattern recognition techniques necessary. Both dies have been fabricated and characterized and have been successfully attached using flip-chip techniques. The measurements on these devices have been used to build an optical simulation tool that allows the assessment of the behaviour of the whole NDIR system under operating conditions.
---
paper_title: Midinfrared sensors meet nanotechnology: Trace gas sensing with quantum cascade lasers inside photonic band-gap hollow waveguides
paper_content:
An integrated midinfrared sensing system for trace level (ppb) gas analysis combining a quantum cascade laser with an emission frequency of 10.3 μm with a frequency-matched photonic band-gap hollow core waveguide has been developed, demonstrating the sensing application of photonic band-gap fibers. The photonic band-gap fiber simultaneously acts as a wavelength selective waveguide and miniaturized gas cell. The laser emission wavelength corresponds to the vibrational C-H stretch band of ethyl chloride gas. This sensing system enabled the detection of ethyl chloride at concentration levels of 30 ppb (v/v) with a response time of 8 s probing a sample volume of only 1.5 mL in a transmission absorption measurement within the photonic band-gap hollow core waveguide, which corresponds to a sensitivity improvement by three orders of magnitude compared to previously reported results obtained with conventional hollow waveguides.
---
paper_title: Comparison of infrared sources for a differential photoacoustic gas detection system
paper_content:
Abstract A prototype of a differential photoacoustic measurement system with an optical cantilever microphone has been developed. The system is based on the gas filter correlation method. The proposed system allows real-time measurement of various IR-absorbing gases from a flowing sample or in the open air. Three setups with different kinds of infrared sources were built to study the selectivity and sensitivity of the prototype and the applicability of the source types with the differential method. The sources were a mechanically chopped blackbody radiator, an electrically chopped blackbody radiator and a mechanically chopped CO2 laser. A detection limit for C2H4 was estimated with all three infrared sources. The cross-sensitivity and detection limits of the gases CH4, C2H4 and CO2 were measured with the mechanically chopped blackbody radiator. This cross-interference matrix was also modeled using the HITRAN database and completed with CO and H2O. The measurements indicate that at least ppb-level detection of ethylene using the CO2 laser, sub-ppm level with the mechanically chopped blackbody and ppm level with the electrically modulated blackbody is possible with the proposed differential system.
---
paper_title: A CMOS Single-Chip Gas Recognition Circuit for Metal Oxide Gas Sensor Arrays
paper_content:
This paper presents a CMOS single-chip gas recognition circuit, which encodes sensor array outputs into a unique sequence of spikes with the firing delay mapping the strength of the stimulation across the array. The proposed gas recognition circuit examines the generated spike pattern of relative excitations across the population of sensors and looks for a match within a library of 2-D spatio-temporal spike signatures. Each signature is drift insensitive, concentration invariant and is also a unique characteristic of the target gas. This VLSI friendly approach relies on a simple spatio-temporal code matching instead of existing computationally expensive pattern matching statistical techniques. In addition, it relies on a novel sensor calibration technique that does not require control or prior knowledge of the gas concentration. The proposed gas recognition circuit was implemented in a 0.35 μm CMOS process and characterized using an in-house fabricated 4 × 4 tin oxide gas sensor array. Experimental results show a correct detection rate of 94.9% when the gas sensor array is exposed to propane, ethanol and carbon monoxide.
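To illustrate the encoding idea, the short Python sketch below is one possible software analogue of the circuit's behaviour (an interpretation, not the chip's implementation): the firing order across the array stands in for the firing delays, which makes the code concentration-invariant, and classification reduces to matching the observed rank order against a stored library of signatures.

import numpy as np

def rank_signature(responses):
    """Return the order in which sensors would fire (strongest response fires first)."""
    return tuple(np.argsort(-np.asarray(responses, dtype=float)))

def classify(responses, library):
    """Match the observed rank-order spike pattern against stored gas signatures."""
    observed = rank_signature(responses)
    for gas, signature in library.items():
        if signature == observed:
            return gas
    return "unknown"

# Hypothetical 4-sensor array signatures (rank orders are illustrative, not measured).
library = {
    "propane":         (0, 2, 1, 3),
    "ethanol":         (2, 0, 3, 1),
    "carbon monoxide": (3, 1, 0, 2),
}
print(classify([0.9, 0.4, 0.7, 0.1], library))  # -> "propane"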
---
paper_title: Catalytic Fiber Bragg Grating Sensor for Hydrogen Leak Detection in Air
paper_content:
The explosion risk linked to the use of hydrogen as fuel requires low-cost and efficient sensors. We present here a multipoint in-fiber sensor capable of hydrogen leak detection in air as low as 1% concentration with a response time smaller than a few seconds. Our solution makes use of fiber Bragg gratings (FBGs) covered by a catalytic sensitive layer made of a ceramic doped with noble metal which, in turn, induces a temperature elevation around the FBGs in the presence of hydrogen in air.
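The transduction mechanism can be summarised with the standard fibre Bragg grating relations (textbook expressions rather than formulas from the paper): the reflected Bragg wavelength is \lambda_B = 2 n_{\mathrm{eff}} \Lambda, and its thermal shift obeys

\frac{\Delta\lambda_B}{\lambda_B} \approx (\alpha_\Lambda + \alpha_n)\,\Delta T,

where \alpha_\Lambda is the thermal expansion coefficient and \alpha_n the thermo-optic coefficient of the fibre; the exothermic catalytic reaction of hydrogen on the doped ceramic coating supplies the local \Delta T that the FBG converts into a measurable wavelength shift.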
---
paper_title: Olfactory detection of methane, propane, butane and hexane using conventional transmitter norms
paper_content:
Abstract A Si based four element integrated calorimetric sensor was constructed for olfactory detection of hydrocarbon mixtures up to their lower explosion limit (LEL) concentration. In order to construct a four-wire, inherently explosion-proof transmitter, the power consumption of each sensor was reduced below 50 mW and a sequential read-out scheme was adopted. Olfactory detection is based on the principal component analysis (PCA) and partial least square (PLS) fitting.
---
paper_title: An Analytic Model Of Thermal Drift In Piezoresistive Microcantilever Sensors
paper_content:
A closed-form semiempirical model has been developed to understand the physical origins of thermal drift in piezoresistive microcantilever sensors. The two-component model describes both the effects of temperature-related bending and heat dissipation on the piezoresistance. The temperature-related bending component is based on the Euler–Bernoulli theory of elastic deformation applied to a multilayer cantilever. The heat dissipation component is based on energy conservation per unit time for a piezoresistive cantilever in a Wheatstone bridge circuit, representing a balance between electrical power input and heat dissipation into the environment. Conduction and convection are found to be the primary mechanisms of heat transfer, and the dependence of these effects on the thermal conductivity, temperature, and flow rate of the gaseous environment is described. The thermal boundary layer value that defines the length scale of the heat dissipation phenomenon is treated as an empirical fitting parameter. Using t...
---
paper_title: Dynamic thermal conductivity sensor for gas detection
paper_content:
A dynamic thermal conductivity sensor for gas detection based on the transient thermal response of a SiC microplate slightly heated by a screen-printed Pt resistance is described. This sensor is developed for specific applications such as the determination of the carbon monoxide content in hydrogen for fuel cells, or that of methane in biogas applications. In contrast to existing devices, the apparatus developed here does not need any reference cell; it operates in transient mode near room temperature (ΔT ≈ 5 K in air), so it has very low power requirements (≈5 mW) and keeps the gas near thermal equilibrium, which simplifies the mathematical model and eases data processing. In test gas mixtures (N2 + He), absolute and precise measurements of the gas thermal conductivity have been achieved, leading to the exact molar fraction of the gas to be detected with good reproducibility.
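As a simplified illustration of how such a conductivity measurement yields composition, the Python sketch below inverts a crude linear mixing rule for an N2 + He mixture; this rule and the room-temperature conductivity values are stated assumptions (real mixtures are better described by, e.g., the Wassiljewa relation), so the sketch shows the principle rather than the authors' model.

def molar_fraction_from_conductivity(k_mix, k_gas_a, k_gas_b):
    """Estimate the molar fraction of gas A in a binary mixture.

    Assumes the crude linear mixing rule k_mix = x * k_a + (1 - x) * k_b.
    All conductivities in W/(m*K) at the same temperature.
    """
    return (k_mix - k_gas_b) / (k_gas_a - k_gas_b)

# Approximate room-temperature values: helium ~0.15, nitrogen ~0.026 W/(m*K).
x_he = molar_fraction_from_conductivity(k_mix=0.050, k_gas_a=0.15, k_gas_b=0.026)
print(f"estimated He fraction ~ {x_he:.2f}")  # roughly 0.19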
---
paper_title: Evaluation of two gas chromatography-olfactometry methods: the detection frequency and perceived intensity method.
paper_content:
Two gas chromatography-olfactometry methods were evaluated in terms of repeatability, range of sensitivity and discriminating properties. Six volatile flavour compounds at various concentration levels were analysed by a panel of eight assessors using the detection frequency method and the perceived intensity method. The coefficient of variance, averaged over the individual compounds for three replicate samples, was 16% for the detection frequency method and 28% for the intensity method. The average correlation coefficient of the individual compounds with concentration was 0.93 (range 0.88-0.99) for the intensities. They were slightly higher than those for the detection frequencies (0.91, range 0.81-0.97). The detection frequency method was more accurate in terms of repeatability, and the intensity method was more accurate with regard to discrimination between concentration levels. The range of sensitivity was similar for both methods.
---
paper_title: Comparison of various detection limit estimates for volatile sulphur compounds by gas chromatography with pulsed flame photometric detection.
paper_content:
This paper addresses the variations that presently exist regarding the definition, determination, and reporting of detection limits for volatile sulphur compounds by gas chromatography with pulsed flame photometric detection (GC-PFPD). Gas standards containing hydrogen sulphide (H(2)S), carbonyl sulphide (COS), sulphur dioxide (SO(2)), methyl mercaptan (CH(3)SH), dimethyl sulphide (DMS), carbon disulphide (CS(2)), and dimethyl disulphide (DMDS) in concentrations varying from 0.36ppb (v/v) up to 1.5ppm (v/v) in nitrogen were prepared with permeation tubes and introduced in the gas chromatograph using a 0.25-ml gas sampling loop. After measuring the PFPD response versus concentration, the method detection limit (MDL), the Hubaux-Vos detection limit (x(D)), the absolute instrument sensitivity (AIS), and the sulphur detectivity (D(s)) were determined for each sulphur compound. The results show that the MDL determined by the US Environmental Protection Agency procedure consistently underestimates the minimum concentrations of volatile sulphur compounds that can be practically distinguished from the background noise with the PFPD. The Hubaux-Vos detection limits and the AIS values are several times higher than the MDL, and provide more conservative estimates of the lowest concentrations that can be reliably detected. Sulphur detectivities are well correlated with AIS values but only poorly correlated with MDL values. The AIS is recommended as a reliable and cost-effective measure of detection limit for volatile sulphur compounds by GC-PFPD, since the AIS is easier and faster to determine than the MDL and the Hubaux-Vos detection limit. In addition, this study confirmed that the PFPD response is nearly quadratic with respect to concentration for all volatile sulphur compounds.
---
paper_title: A prototype acoustic gas sensor based on attenuation
paper_content:
Acoustic attenuation provides the potential to identify and quantify gases in a mixture. We present results for a prototype attenuation gas sensor for binary gas mixtures. Tests are performed in a pressurized test cell between 0.2 and 32atm to accommodate the main molecular relaxation processes. Attenuation measurements using the 215-kHz sensor and a multiseparation, multifrequency research system both generally match theoretical predictions for mixtures of CO2 and CH4 with 2% air. As the pressure in the test cell increases, the standard deviation of sensor measurements typically decreases as a result of the larger gas acoustic impedance.
---
paper_title: Ultrasonic Nondestructive Evaluation Systems: Models and Measurements
paper_content:
Ultrasonic Nondestructive Evaluation Systems: Models and Measurements provides the latest information and techniques available for ultrasonic nondestructive evaluation (NDE) inspections. Using a systems level approach, this book employs aspects of Fourier analysis, linear system theory, and wave propagation and scattering theory to develop a comprehensive model of an entire ultrasonic measurement system. The book also describes in detail the measurements needed to obtain all the system model parameters. This integrated approach leads to a new model-based engineering technology for designing, using and optimizing ultrasonic nondestructive evaluation inspections. Practicing engineers, teachers, and students alike will learn about the latest developments in NDE technology, including a recently developed pulse-echo method for measuring the sensitivity of an ultrasonic transducer, and the use of Gaussian beam theory in simulating the wave fields generated by ultrasonic transducers. In addition, this unique book incorporates MATLAB examples and exercises which allow readers to conduct simulated inspections and implement the latest modeling technology. Written by recognized experts in NDE research, Ultrasonic Nondestructive Evaluation Systems: Models and Measurements is designed to combine well-developed techniques with the latest advances in technology.
---
paper_title: Gas density metering in ultrasonic gas flowmeters using impedance measurements and chemometrics
paper_content:
Measurements using ultrasonic transducers confirm that the density of the gaseous medium can be predicted from impedance measurements on the same transducers. Tests were performed with the following gases under different pressure: SF6, N2, He, and air. These gases were selected to achieve a large span of densities. Using chemometric techniques can circumvent any possible dual sensitivity of the transducers on temperature and density. Sensitivity analysis of the transducers as a function of density is also performed and compared with experimental results. The densitometer under discussion uses impedance variation as an intermediate variable. Impedance variations can be converted to variations in frequency. Finally, the scenario of ultrasonic mass flowmetering is discussed with plausible models for realising computer integrated mass measurements using ultrasonic transducers in dual mode, viz. Contra-propagating transit time and impedance modes.
---
paper_title: Gas concentration detection using ultrasonic based on wireless sensor networks
paper_content:
In this article, a low power consumption wireless sensor for real-time quantitative analysis of binary gas mixtures is presented. This small wireless gas sensor, developed on the basis of an improved time-of-flight (TOF) method, can evaluate the target gas in gas mixtures with high resolution. The wireless sensor network system reduces power consumption using a power-controlled IEEE 802.15.4 standard, which provides an effective mechanism for improving energy efficiency by directly decreasing the transmitting power. In contrast with some other gas analysis techniques, it avoids the secondary pollution and short sensor life associated with chemical sensors, as well as the high power consumption of traditional TOF methods, making ultrasonic wireless gas sensing practical. Currently, this quantitative gas analysis system consists of several Wireless Ultrasonic Gas Sensor nodes (WUGS) and a master node. The WUGS hardware consists of a microcontroller for obtaining measurement data from the ultrasonic channel and a ZigBee transceiver for transmitting the data sets to a master sensor node. Furthermore, an environmental monitoring chamber was designed for testing the WUGS using sulfur hexafluoride (SF6) and hydrogen (H2) mixed with air, respectively. The resolutions of the tests were within 30 µV/V and 500 µV/V, respectively. The power consumption of the WUGS during the detection period is lower than 90 mW. The main advantages of this gas concentration measurement are high accuracy for gases whose molecular weight differs largely from that of air, low power consumption and ease of network setup. The sensors have shown good stability for more than three months in a high-voltage substation, in a test of an SF6 leak alarm system.
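The underlying time-of-flight principle can be sketched as follows in Python; it is an idealised illustration with placeholder numbers, assuming ideal-gas behaviour and a constant heat-capacity ratio (a rough approximation for SF6-rich mixtures), rather than the algorithm used in the WUGS firmware.

import math

R = 8.314          # J/(mol*K)
GAMMA = 1.4        # assumed constant heat-capacity ratio (crude for SF6-rich mixtures)
M_AIR = 0.0290     # kg/mol
M_SF6 = 0.1460     # kg/mol

def sf6_fraction_from_tof(path_m, tof_s, temp_k=293.0):
    """Estimate the SF6 mole fraction in air from an ultrasonic time of flight."""
    c = path_m / tof_s                          # measured sound speed
    m_mix = GAMMA * R * temp_k / c**2           # from c^2 = gamma*R*T/M
    return (m_mix - M_AIR) / (M_SF6 - M_AIR)    # effective molar mass is linear in mole fraction

# Illustrative: 10 cm path, 300 us flight time at 20 degC.
print(f"SF6 fraction ~ {sf6_fraction_from_tof(0.10, 300e-6):.3f}")  # roughly 0.014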
---
paper_title: A Sonar-Based Technique for the Ratiometric Determination of Binary Gas Mixtures
paper_content:
Abstract We have developed an inexpensive sonar-based instrument to provide a routine on-line monitor of the composition and stability of several gas mixtures having application in a Cherenkov Ring Imaging Detector. The instrument is capable of detecting small (
---
paper_title: Application of ultrasonic to a hydrogen sensor
paper_content:
A fast-response hydrogen sensor using ultrasonics was developed and demonstrated. It uses the difference in sound velocity between hydrogen and air: hydrogen concentration can be measured from the change in sound velocity when hydrogen is mixed into the air. The ultrasonic hydrogen sensor in this study has a very fast response time of less than 84 ms. We have also shown that it is possible to detect hydrogen concentrations as low as 100 ppm with a distance between the ultrasonic probes as small as 20 mm. A temperature variation test was also carried out between -10 and 50°C, confirming that the measurements match the calculated results.
---
paper_title: Whispering gallery dielectric resonator modes for W-band devices
paper_content:
The utilization of planar millimeter-wavelength whispering gallery dielectric resonator modes for the design of W-band directional filters and power combiners is studied. The device presented combines the output power of several millimeter-wavelength devices in a single step by means of whispering gallery dielectric resonator (DR) modes. At millimeter wavelengths, the cylindrical DRs used in their conventional TE, TM, or hybrid modes are impractically small. When used in their whispering gallery modes (WGMs), these cylindrical DRs have dimensions larger than normal for millimeter wavelengths. After a description of the WGM phenomena, both the electromagnetic and circuit parameters of these resonators are defined when they are coupled with transmission lines. The analysis, based on the ring resonator model, makes it possible to predict the theoretical responses of such devices. Experimental results obtained for directional filters and for power combiners in W-band are given.
---
paper_title: Hand-Held Miniature Chemical Analysis System (μChemlab) for Detection of Trace Concentrations of Gas Phase Analytes
paper_content:
A miniature, integrated chemical laboratory (μChemLab) is being developed that utilizes microfabrication to provide faster response, smaller size, lower power operation, and an ability to utilize multiple analysis channels for enhanced versatility and chemical discrimination. Improved sensitivity and selectivity are achieved with three cascaded components: (1) a sample collector/concentrator, (2) a gas chromatographic (GC) separator, and (3) a chemically selective surface acoustic wave (SAW) array detector. Prototypes of all three components have been developed and demonstrated both individually and when integrated on a novel electrical and fluidic printed circuit board. A hand-held autonomous system containing two analysis channels and all supporting electronics and user interfaces is currently being assembled and tested.
---
paper_title: Trace amount formaldehyde gas detection for indoor air quality monitoring
paper_content:
Formaldehyde is not only a carcinogenic chemical, but also causes sick building syndrome. Very small amounts of formaldehyde, such as those emitted from building materials and furniture, pose great concerns for human health. A Health Canada guideline, proposed in 2005, set the maximum formaldehyde concentration for long term exposure (8-hours averaged) as 40 ppb (50 μg/m3). This is a low concentration that commercially available formaldehyde sensors have great difficulty to detect both accurately and continuously. In this paper, we report a formaldehyde gas detection system which is capable of pre-concentrating formaldehyde gas using absorbent, and subsequently thermally desorbing the concentrated gas for detection by the electrochemical sensor. Initial results show that the system is able to detect formaldehyde gas at the ppb level, thus making it feasible to detect trace amount of formaldehyde in indoor environments.
---
paper_title: Enhanced toxic gas detection using a MEMS preconcentrator coated with the metal organic framework absorber
paper_content:
Widespread and timely sensing of explosives, toxic chemicals and industrial compounds needs fast, sensitive detection technology that is affordable and portable. Many gas detectors developed for portable applications are based on sensing a change in resistivity or other non-selective material properties, often leading to low sensitivity and selectivity. In this paper, we report a new generation of the UIUC MEMS gas preconcentrator (μGPC) and its integration into a microfluidic M-8 (μM8) detector to demonstrate an enhanced overall detection limit and selectivity in detecting a toxic gas simulant. The integration creates a portable sensor to sniff an analyte of interest at concentrations of 10 ppb or below.
---
paper_title: Microfabricated preconcentrator-focuser for a microscale gas chromatograph
paper_content:
The design, fabrication, and testing of a preconcentrator-focuser (PCF), consisting of a thick micromachined Si heater packed with a small quantity of a granular adsorbent material, are described. The PCF is developed to capture and concentrate vapors for subsequent focused thermal desorption and analysis in a micro gas chromatograph. The microheater contains an array of high-aspect-ratio, etched-Si heating elements, 520 μm (h) × 50 μm (w) × 3000 μm (l), bounded by an annulus of Si and thermally isolated from the remaining substrate by an air gap. This structure is sandwiched between Pyrex glass plates with inlet/outlet ports that accept capillary tubes for sample flow and is sealed by anodic bonding (bottom) and rapidly annealed glass/metal/Si solder bonding (top). The large microheater surface area allows for high adsorption capacity and efficient, uniform thermal desorption of vapors captured on the adsorbent within the structure. The adsorbent consists of roughly spherical granules, ~200 μm in diameter, of a high-surface-area, graphitized carbon. Key design considerations, fabrication technologies, and results of performance tests are presented with an emphasis on the thermal desorption characteristics of several representative volatile organic compounds as a function of volumetric flow rates and heating rates. Preconcentration factors as high as 5600 and desorbed peak widths as narrow as 0.8 s are achieved from 0.25-L samples of benzene at modest heating rates. The effects of operating variables on sensitivity, chromatographic resolution, and detection limits are assessed. Testing of this PCF with a micromachined separation column and integrated sensor array is discussed briefly.
---
paper_title: Methane and Carbon Monoxide Gas Detection system based on semiconductor sensor
paper_content:
One of the most important current problems in the gas detection field is the strong demand for methane leak detection and CO (carbon monoxide) detection to prevent explosions or CO poisoning accidents. In this context, the present paper describes the technical characteristics, test results, and a concluding application of methane and carbon monoxide detection using a sensor that can detect both CO and methane with a single sensing element. The paper presents the detection method as well as the functional sketch of the apparatus.
---
paper_title: Metal Oxide Gas Sensors: Sensitivity and Influencing Factors
paper_content:
Conductometric semiconducting metal oxide gas sensors have been widely used and investigated in the detection of gases. Investigations have indicated that the gas sensing process is strongly related to surface reactions, so one of the important parameters of gas sensors, the sensitivity of the metal oxide based materials, will change with the factors influencing the surface reactions, such as chemical components, surface-modification and microstructures of sensing layers, temperature and humidity. In this brief review, attention will be focused on changes of sensitivity of conductometric semiconducting metal oxide gas sensors due to the five factors mentioned above.
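As a reference point for the sensitivity discussed in this review, the response of a conductometric metal oxide sensor is conventionally defined from its resistance in air R_a and in the target gas R_g, and empirically follows a power law in concentration (standard relations, not findings unique to this paper):

S = \frac{R_a}{R_g} \ \text{(reducing gas)}, \qquad S = \frac{R_g}{R_a} \ \text{(oxidising gas)}, \qquad R_g = A\,[C]^{-\beta},

where A and \beta depend on the sensing material, its microstructure and the operating temperature, i.e. precisely the influencing factors surveyed above.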
---
paper_title: Feasibility of wireless gas detection with an FMCW RADAR interrogation of passive RF gas sensor
paper_content:
The feasibility of the remote measurement of gas detection from an RF gas sensor has been experimentally investigated. It consists of a Frequency-Modulated Continuous-Wave (FMCW) RADAR interrogation of an antenna loaded by the passive sensor. The frequency band of the RADAR [28.8–31GHz] allows the detection of the resonant frequencies of Whispering Gallery Modes that are sensitive to gas concentration. Reported experimental results provide the proof-of-concept of remote measurement of gas concentration fluctuation from RADAR interrogation of this new generation of passive gas sensors.
---
paper_title: Design and Characterization of Micropost-Filled Reactor to Minimize Pressure Drop While Maximizing Surface-Area-to-Volume-Ratio
paper_content:
Micropost-filled reactors are commonly found in many micro total analysis system applications because of their high surface area for the surrounding volume. Design rules for micropost-filled reactors are presented here to optimize the performance of the micro-preconcentrator, which is a component of a micro gas chromatography system. A dimensionless figure of merit is proposed to minimize the pressure drop while maximizing the surface-area-to-volume ratio for a given overall channel geometry of the micropost-filled preconcentrator. Two independent models from the literature are used to predict the pressure drop across the micropost-filled channels for low Reynolds number flows. The pressure drop can be expressed solely as a function of a design parameter, β = a/s, the ratio of the radius of each post to the half-spacing between two adjacent posts. Pressure drop measurements are performed to experimentally corroborate the pressure drop model and the optimization using the dimensionless figure of merit. As the number of microposts for a given β increases in a given channel size, a greater surface-area-to-volume ratio is obtained for a fixed pressure drop. Therefore, increasing the arrays of posts with smaller diameters and spacing will optimize the microreactor for higher surface area at a given flow resistance, at least until Knudsen flow begins to dominate.
---
paper_title: Vertically Aligned ZnO Nanorod Arrays Coated with SnO2/Noble Metal Nanoparticles for Highly Sensitive and Selective Gas Detection
paper_content:
Mimicking the biological olfactory receptor array that possesses a large surface area for molecule capture, vertically aligned ZnO nanowire arrays, used as structural templates, were coated with SnO2/noble metal nanoparticles as the active materials for the fabrication of 3-D gas sensors. The gas sensors showed room-temperature responses to environmental toxic gases, such as NO2 and H2S, down to the ppb level, which can be attributed to the large surface area of their 3-D structure and the catalytic behavior of the noble metals. A sensor array composed of three sensors with different noble metal decorations (Pd, Pt, and Au) has shown the capability to discriminate five different gases (H2S, NO2, NH3, H2, and CO) when using principal component analysis (PCA) that incorporates the response speed as a discrimination factor. This study demonstrates a rational strategy to prepare sensing devices with 3-D structures for selective detection, which can be readily extended to other sensing materials that can hardly be grown as 3-D nanowire arrays.
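A numpy-only sketch of the PCA projection used for this kind of discrimination is shown below; the three-column synthetic responses stand in for the Pd-, Pt- and Au-decorated sensors and are purely illustrative, not measured data.

import numpy as np

def pca_project(features, n_components=2):
    """Project feature rows onto their leading principal components via SVD."""
    x = np.asarray(features, dtype=float)
    x_centered = x - x.mean(axis=0)
    _, _, vt = np.linalg.svd(x_centered, full_matrices=False)
    return x_centered @ vt[:n_components].T

# Hypothetical responses of a 3-sensor (Pd, Pt, Au) array to repeated gas exposures.
responses = np.array([
    [5.1, 1.2, 0.9],   # gas A, repeat 1
    [5.0, 1.3, 1.0],   # gas A, repeat 2
    [1.1, 4.8, 0.8],   # gas B, repeat 1
    [1.0, 4.9, 0.9],   # gas B, repeat 2
])
print(pca_project(responses))  # the two gases separate along the first component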
---
paper_title: Evaluation of Multitransducer Arrays for the Determination of Organic Vapor Mixtures
paper_content:
A study of vapor recognition and quantification by polymer-coated multitransducer (MT) arrays is described. The primary data set consists of experimentally derived sensitivities for 11 organic vapors obtained from 15 microsensors comprising five cantilever, capacitor, and calorimeter devices coated with five different sorptive−polymer films. These are used in Monte Carlo simulations coupled with principal component regression models to assess expected performance. Recognition rates for individual vapors and for vapor mixtures of up to four components are estimated for single-transducer (ST) arrays of up to five sensors and MT arrays of up to 15 sensors. Recognition rates are not significantly improved by including more than five sensors in an MT array for any specific analysis, regardless of difficulty. Optimal MT arrays consistently outperform optimal ST arrays of similar size, and with judiciously selected 5-sensor MT arrays, one-third of all possible ternary vapor mixtures are reliably discriminated fr...
---
paper_title: Enhanced gas sensing by individual SnO2 nanowires and nanobelts functionalized with Pd catalyst particles.
paper_content:
The sensing ability of individual SnO2 nanowires and nanobelts configured as gas sensors was measured before and after functionalization with Pd catalyst particles. In situ deposition of Pd in the same reaction chamber in which the sensing measurements were carried out ensured that the observed modification in behavior was due to the Pd functionalization rather than the variation in properties from one nanowire to another. Changes in the conductance in the early stages of metal deposition (i.e., before metal percolation) indicated that the Pd nanoparticles on the nanowire surface created Schottky barrier-type junctions resulting in the formation of electron depletion regions within the nanowire, constricting the effective conduction channel and reducing the conductance. Pd-functionalized nanostructures exhibited a dramatic improvement in sensitivity toward oxygen and hydrogen due to the enhanced catalytic dissociation of the molecular adsorbate on the Pd nanoparticle surfaces and the subsequent diffusion ...
---
paper_title: Detection of gases with arrays of micromachined tin oxide gas sensors
paper_content:
Abstract A good detection of NO 2 , CO and toluene at low concentrations has been carried out by using a micromachined gas sensor array composed of three devices working at different temperatures. The structure is fabricated using standard microelectronic technologies and tin oxide layers as sensitive material. The total power consumption of the array is in the range of 150 mW and a good uniformity of temperature is achieved, thanks to a silicon plug placed under the active area of each sensor. With this device type, it is possible to discriminate gases in a mixture when each array microsensor is heated at a proper temperature.
---
paper_title: A Gradient Microarray Electronic Nose Based on Percolating SnO2 Nanowire Sensing Elements
paper_content:
Fabrication, characterization, and tests of the practical gradient microarray electronic nose with SnO2 nanowire gas-sensing elements are reported. This novel device has demonstrated an excellent performance as a gas sensor and e-nose system capable of promptly detecting and reliably discriminating between several reducing gases in air at a ppb level of concentration. It has been found that, in addition to the temperature gradient across the nanowire layer, the density and morphological inhomogeneities of nanowire mats define the discriminating power of the electronic nose.
---
paper_title: CMOS single-chip gas detection system comprising capacitive, calorimetric and mass-sensitive microsensors
paper_content:
A single-chip gas detection system fabricated in industrial CMOS technology combined with post-CMOS micro-machining is presented. The sensors rely on a chemo-sensitive polymer layer, which absorbs predominantly volatile organic compounds (VOCs). A mass-sensitive resonant-beam oscillator, a capacitive sensor incorporated into a second-order ΣΔ-modulator, a calorimetric sensor with low-noise signal conditioning circuitry and a temperature sensor are monolithically integrated on a single chip along with all necessary driving and signal conditioning circuitry. The preprocessed sensor signals are converted to the digital domain on chip. An additional integrated controller sets the sensor parameters and transmits the sensor values to an off-chip data recording unit via a standard serial interface. A 6-chip array has been flip-chip packaged on a ceramic substrate, which forms part of a handheld VOC gas detection unit. Limits of detection (LOD) of 1-5 ppm n-octane, toluene or propan-1-ol have been achieved.
---
paper_title: Application of electronic nose technology in detection of combustible gas
paper_content:
Based on a study of the theory and constituents of electronic nose systems, a system combining a gas sensor array, a microcontroller and a PC for the detection of gas mixtures is designed and constructed. Four kinds of gas (hydrogen, methane, acetylene, propane) are tested by the system. Feature parameters are extracted from each gas sensor response curve. The experimental samples are analyzed by principal component analysis (PCA) and a BP neural network (BPNN). The results show that the four kinds of gas could not be distinguished well by PCA alone, while the BPNN can cope with sensor cross-sensitivity, so the four kinds of gas can be identified qualitatively; the recognition rate of the BPNN is more than 90%.
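As a rough illustration of the BPNN classification stage (not the authors' network or data), the sketch below trains a small multilayer perceptron from scikit-learn on synthetic four-dimensional feature vectors, one prototype per gas plus noise, and then classifies a new measurement.

import numpy as np
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in features extracted from sensor-array response curves
# (e.g. steady-state value per sensor); labels are the four gases from the paper.
rng = np.random.default_rng(1)
prototypes = {
    "hydrogen":  [0.8, 0.2, 0.1, 0.4],
    "methane":   [0.3, 0.9, 0.2, 0.1],
    "acetylene": [0.2, 0.3, 0.8, 0.2],
    "propane":   [0.1, 0.2, 0.3, 0.9],
}
X, y = [], []
for gas, proto in prototypes.items():
    for _ in range(20):                      # 20 noisy replicates per gas
        X.append(np.array(proto) + 0.05 * rng.standard_normal(4))
        y.append(gas)

clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
clf.fit(np.array(X), y)
print(clf.predict([[0.78, 0.22, 0.12, 0.41]]))  # expected: ['hydrogen']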
---
paper_title: A wireless, passive carbon nanotube-based gas sensor
paper_content:
A gas sensor, comprised of a gas-responsive multiwall carbon nanotube (MWNT)-silicon dioxide (SiO2) composite layer deposited on a planar inductor-capacitor resonant circuit, is presented here for the monitoring of carbon dioxide (CO2), oxygen (O2), and ammonia (NH3). The absorption of different gases in the MWNT-SiO2 layer changes the permittivity and conductivity of the material and consequently alters the resonant frequency of the sensor. By tracking the frequency spectrum of the sensor with a loop antenna, humidity, temperature, as well as CO2, O2 and NH3 concentrations can be determined, enabling applications such as remotely monitoring conditions inside opaque, sealed containers. Experimental results show the sensor response to CO2 and O2 is both linear and reversible. Both irreversible and reversible responses are observed in response to NH3, indicating both physisorption and chemisorption of NH3 by the carbon nanotubes. A sensor array, comprised of an uncoated, a SiO2-coated, and a MWNT-SiO2-coated sensor, enables CO2 measurement to be automatically calibrated for operation in a variable humidity and temperature environment.
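The readout principle can be summarised with the usual LC-resonance relation (a generic expression, not a model fitted by the authors): the sensing layer forms part of the capacitance, so

f_r = \frac{1}{2\pi\sqrt{L\,C(\varepsilon_r)}}, \qquad \frac{\Delta f_r}{f_r} \approx -\frac{1}{2}\,\frac{\Delta C}{C},

i.e. gas absorption that increases the permittivity of the MWNT-SiO2 layer lowers the resonant frequency tracked by the loop antenna.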
---
paper_title: Detection and discrimination of pure gases and binary mixtures using a dual-modality microcantilever sensor
paper_content:
A new method for detecting and discriminating pure gases and binary mixtures has been investigated. This approach is based on two distinct physical mechanisms which can be simultaneously employed within a single microcantilever: heat dissipation and resonant damping in the viscous regime. An experimental study of the heat dissipation mechanism indicates that the sensor response is directly correlated to the thermal conductivity of the gaseous analyte. A theoretical data set of resonant damping was generated corresponding to the gas mixtures examined in the thermal response experiments. The combination of the thermal and resonant response data yields more distinct analyte signatures that cannot otherwise be obtained from the detection modes individually.
---
|
Title: A Survey on Gas Sensing Technology
Section 1: Introduction
Description 1: This section introduces the growing importance of gas sensing technology, its applications, and the main areas of research and development in the field.
Section 2: Classification of Gas Sensing Methods
Description 2: This section classifies the various gas sensing technologies based on different principles and properties, providing a foundational overview for further discussion.
Section 3: Performance Indicators and Gas Sensor's Stability
Description 3: This section discusses the key performance indicators used to evaluate gas sensors, such as sensitivity and selectivity, and explores factors that affect sensor stability.
Section 4: Metal Oxide Semiconductor
Description 4: This section covers sensors based on metal oxide semiconductors, detailing their operating principles, advantages, challenges, and applications.
Section 5: Polymers
Description 5: This section describes polymer-based gas sensors, explaining their working principles, advantages, limitations, and usage in detecting specific gases.
Section 6: Carbon Nanotubes
Description 6: This section explores the utilization of carbon nanotubes (CNTs) in gas sensing, highlighting their properties, performance, and potential applications.
Section 7: Moisture Absorbing Material
Description 7: This section details how moisture-absorbing materials are used in gas sensing, particularly in RFID-based systems for humidity detection.
Section 8: Classification Based on Sensors' Operating Modes
Description 8: This section classifies sensors based on their operating modes, such as direct contact and wireless transducers, and discusses their integration.
Section 9: Optical Methods
Description 9: This section gives an overview of optical methods used in gas sensing, discussing their principles, advantages, and limitations.
Section 10: Calorimetric Methods
Description 10: This section explains calorimetric methods for gas sensing, including catalytic and thermal conductivity sensors, and discusses their mechanisms and applications.
Section 11: Gas Chromatograph
Description 11: This section covers gas chromatography as a method for gas sensing, focusing on its analytical capabilities, sensitivity, and selectivity.
Section 12: Acoustic Methods
Description 12: This section describes gas sensing methods based on acoustic principles, such as speed of sound and attenuation, outlining their operational parameters and applications.
Section 13: Components of Wireless Sensor Networks
Description 13: This section introduces the components and functionalities of wireless sensor networks used in gas sensing technologies.
Section 14: Some Approaches to Improve Sensitivity and Selectivity
Description 14: This section discusses various approaches to enhance the sensitivity and selectivity of gas sensors, including modifying sensor materials and employing pre-concentration techniques.
Section 15: Conclusions
Description 15: This section summarizes the survey findings, compares different gas sensing technologies, discusses the factors affecting performance, and provides insights into future development trends in the field.
|
Overview of Spintronic Sensors, Internet of Things, and Smart Living
| 5 |
---
paper_title: Sensing as a Service Model for Smart Cities Supported by Internet of Things
paper_content:
The world population is growing at a rapid pace. Towns and cities are accommodating half of the world's population thereby creating tremendous pressure on every aspect of urban living. Cities are known to have large concentration of resources and facilities. Such environments attract people from rural areas. However, unprecedented attraction has now become an overwhelming issue for city governance and politics. The enormous pressure towards efficient city management has triggered various Smart City initiatives by both government and private sector businesses to invest in ICT to find sustainable solutions to the growing issues. The Internet of Things (IoT) has also gained significant attention over the past decade. IoT envisions to connect billions of sensors to the Internet and expects to use them for efficient and effective resource management in Smart Cities. Today infrastructure, platforms, and software applications are offered as services using cloud technologies. In this paper, we explore the concept of sensing as a service and how it fits with the Internet of Things. Our objective is to investigate the concept of sensing as a service model in technological, economical, and social perspectives and identify the major open challenges and issues.
---
paper_title: Towards the trillion sensors market
paper_content:
Purpose – This article aims to provide an insight into recent deliberations on the possibility of a global sensor market reaching one trillion units per annum within the next decade. Design/methodology/approach – Following an introduction, which includes details of the TSensors Summit, this article discusses existing high volume sensor applications with multi-billion unit growth prospects. It then considers certain new and emerging applications, including the Internet of Things. This is followed by technological considerations and a brief discussion. Findings – The possibility of a global sensor market reaching one trillion units per annum within the next decade is the topic of serious debate. Several applications representing multi-billion levels have been identified and the ongoing TSensors Summit activities seek to identify further high volume, high growth uses and the factors that will stimulate them. While MEMS will play a central role, other, often new sensor technologies will be vital to achieving the trillion unit level. Originality/value – This article provides a timely review of recent deliberations surrounding the feasibility of achieving a global, trillion sensor market.
---
paper_title: Challenges in healthcare and welfare intercloud
paper_content:
In this paper, we introduce healthcare and welfare provisioning intercloud on an example of Aging in Place platform. We relate it to earlier work both in healthcare and intercloud domain. We describe goals and architecture of the platform and demonstrate benefits of applying Cloud Computing solutions to previously identified challenges. Subsequently, we list and analyze challenges that seem to be unique to such setting and identify risk factors.
---
paper_title: Wireless sensor networks in intelligent transportation systems
paper_content:
Wireless sensor networks (WSNs) offer the potential to significantly improve the efficiency of existing transportation systems. Currently, collecting traffic data for traffic planning and management is achieved mostly through wired sensors. The equipment and maintenance cost and time-consuming installations of existing sensing systems prevent large-scale deployment of real-time traffic monitoring and control. Small wireless sensors with integrated sensing, computing, and wireless communication capabilities offer tremendous advantages in low cost and easy installation. In this paper, we first survey existing WSN technologies for intelligent transportation systems (ITSs), including sensor technologies, energy-efficient networking protocols, and applications of sensor networks for parking lot monitoring, traffic monitoring, and traffic control. Then, we present new methods on applying WSNs in traffic modeling and estimation and traffic control, and show their improved performance over existing solutions. Copyright © 2008 John Wiley & Sons, Ltd.
---
paper_title: New challenges to power system planning and operation of smart grid development in China
paper_content:
The future development trend of the electric power grid is the smart grid, which has features such as being secure and reliable, efficient and economical, clean and green, flexible and compatible, open and interactive, and integrated. The concept and characteristics of the smart grid are introduced in this paper. On the basis of the practical national situation, development plans for a smart grid in China with Chinese characteristics are proposed. Smart grid development in China is based on information technology, communication technology and computer technology, highly integrated with the generation, transmission and distribution infrastructure of the power system. In addition, smart grid development in China brings forward many new challenges and requirements for power system planning and operation in the following 9 key technologies: 1. Planning and construction of a strong ultra high voltage (UHV) power grid 2. Integration of large-scale thermal power, hydropower and nuclear power bases into the power grid 3. Integration of large-scale renewable energy sources into the power grid 4. Distributed generation and coordinated development of the grids of various voltage ratings 5. Study of smart grid planning and development strategy 6. Improving the controllability of the power grid based on power electronics technology 7. Superconductivity, energy storage and other new technologies widely used in the power system 8. Power system security monitoring, fast simulation, intelligent decision-making and comprehensive defense technology 9. The application of emergency and restoration control technology in the power system. In response to these challenges, this paper presents the main research contents, detailed implementation plans and anticipated goals of the above 9 key technologies. Some measures and suggestions for power system planning and operation in smart grid development in China are given in this paper.
---
paper_title: Magnetic Field Sensors Based on Giant Magnetoresistance (GMR) Technology: Applications in Electrical Current Sensing
paper_content:
The 2007 Nobel Prize in Physics can be understood as a global recognition of the rapid development of giant magnetoresistance (GMR), from both the physics and engineering points of view. Beyond the utilization of GMR structures as read heads for massive-storage magnetic hard disks, important applications as solid-state magnetic sensors have emerged. Low cost, compatibility with standard CMOS technologies and high sensitivity are common advantages of these sensors, and they have consequently been applied successfully in many different environments. In this work, we collect the Spanish contributions to the progress of research on GMR-based sensors, covering, among other subjects, the applications, sensor design, modelling and electronic interfaces, with a focus on electrical current sensing applications.
---
paper_title: An Internet of Things Framework for Smart Energy in Buildings: Designs, Prototype, and Experiments
paper_content:
Smart energy in buildings is an important research area of Internet of Things (IoT). As important parts of the smart grids, the energy efficiency of buildings is vital for the environment and global sustainability. Using a LEED-gold-certificated green office building, we built a unique IoT experimental testbed for our energy efficiency and building intelligence research. We first monitor and collect 1-year-long building energy usage data and then systematically evaluate and analyze them. The results show that due to the centralized and static building controls, the actual running of green buildings may not be energy efficient even though they may be “green” by design. Inspired by “energy proportional computing” in modern computers, we propose an IoT framework with smart location-based automated and networked energy control, which uses smartphone platform and cloud-computing technologies to enable multiscale energy proportionality including building-, user-, and organizational-level energy proportionality. We further build a proof-of-concept IoT network and control system prototype and carried out real-world experiments, which demonstrate the effectiveness of the proposed solution. We envision that the broad application of the proposed solution has not only led to significant economic benefits in term of energy saving, improving home/office network intelligence, but also bought in a huge social implication in terms of global sustainability.
---
paper_title: Internet of Things for Smart Cities
paper_content:
The Internet of Things (IoT) shall be able to incorporate transparently and seamlessly a large number of different and heterogeneous end systems, while providing open access to selected subsets of data for the development of a plethora of digital services. Building a general architecture for the IoT is hence a very complex task, mainly because of the extremely large variety of devices, link layer technologies, and services that may be involved in such a system. In this paper, we focus specifically on urban IoT systems, which, while still being quite a broad category, are characterized by their specific application domain. Urban IoTs, in fact, are designed to support the Smart City vision, which aims at exploiting the most advanced communication technologies to support added-value services for the administration of the city and for the citizens. This paper hence provides a comprehensive survey of the enabling technologies, protocols, and architecture for an urban IoT. Furthermore, the paper will present and discuss the technical solutions and best-practice guidelines adopted in the Padova Smart City project, a proof-of-concept deployment of an IoT island in the city of Padova, Italy, performed in collaboration with the city municipality.
---
paper_title: Internet of Things: A Survey on Enabling Technologies, Protocols, and Applications
paper_content:
This paper provides an overview of the Internet of Things (IoT) with emphasis on enabling technologies, protocols, and application issues. The IoT is enabled by the latest developments in RFID, smart sensors, communication technologies, and Internet protocols. The basic premise is to have smart sensors collaborate directly without human involvement to deliver a new class of applications. The current revolution in Internet, mobile, and machine-to-machine (M2M) technologies can be seen as the first phase of the IoT. In the coming years, the IoT is expected to bridge diverse technologies to enable new applications by connecting physical objects together in support of intelligent decision making. This paper starts by providing a horizontal overview of the IoT. Then, we give an overview of some technical details that pertain to the IoT enabling technologies, protocols, and applications. Compared to other survey papers in the field, our objective is to provide a more thorough summary of the most relevant protocols and application issues to enable researchers and application developers to get up to speed quickly on how the different protocols fit together to deliver desired functionalities without having to go through RFCs and the standards specifications. We also provide an overview of some of the key IoT challenges presented in the recent literature and provide a summary of related research work. Moreover, we explore the relation between the IoT and other emerging technologies including big data analytics and cloud and fog computing. We also present the need for better horizontal integration among IoT services. Finally, we present detailed service use-cases to illustrate how the different protocols presented in the paper fit together to deliver desired IoT services.
---
paper_title: A Study on the Sensitivity of a Spin Valve with Conetic-Based Free Layers
paper_content:
An exchange-biased spin valve with Conetic-based free layers of Co90Fe10, Co90Fe10/Conetic and Conetic was investigated. The spin valve with the Co90Fe10 free layer showed the highest giant magnetoresistance (GMR) ratio of 4% but showed the lowest normalized sensitivity of 0.02 Oe⁻¹. A GMR ratio of 3% and a normalized sensitivity of 0.07 Oe⁻¹ were obtained for the spin valve with the Co90Fe10/Conetic free layer after annealing. The spin valve having the Conetic free layer showed softer magnetic properties and a well-defined smaller anisotropy than the other spin valves. Though this spin valve showed the lowest GMR of 0.4% after annealing, it showed the highest normalized sensitivity of 0.14 Oe⁻¹. Our study shows that further improvement in the MR response of spin valves with Conetic-based free layers can make a spin valve sensor promising for detecting extremely low fields.
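The figure of merit quoted above is commonly defined as the normalized sensitivity S = (1/R)(dR/dH), roughly the MR ratio divided by the field span of the linear region, so a small GMR ratio can still yield a high sensitivity if the free layer is very soft. In the sketch below the field spans are back-calculated assumptions chosen only to illustrate this trade-off.

    # Normalized sensitivity ~ MR_ratio / field_span_of_linear_region (Oe^-1).
    # The field spans below are assumptions, chosen only to illustrate the trade-off.
    def normalized_sensitivity(mr_ratio, field_span_oe):
        return mr_ratio / field_span_oe

    print(normalized_sensitivity(0.04, 2.0))    # 4% GMR over ~2 Oe   -> 0.02 Oe^-1
    print(normalized_sensitivity(0.004, 0.03))  # 0.4% GMR over ~0.03 Oe -> ~0.13 Oe^-1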
---
paper_title: Nobel Lecture: Origin, development, and future of spintronics
paper_content:
Electrons have a charge and a spin, but until recently, charges and spins have been considered separately. In conventional electronics, the charges are manipulated by electric fields but the spins are ignored. Other classical technologies, magnetic recording, for example, are using the spin but only through its macroscopic manifestation, the magnetization of a ferromagnet. This picture started to change in 1988 when the discovery (Baibich et al., 1988; Binasch et al., 1989) of the giant magnetoresistance (GMR) of magnetic multilayers opened the way to an efficient control of the motion of the electrons by acting on their spin through the orientation of a magnetization. This rapidly triggered the development of a new field of research and technology, today called spintronics and, like the GMR, exploiting the influence of the spin on the mobility of the electrons in ferromagnetic materials. Actually, the influence of the spin on the mobility of the electrons in ferromagnetic metals, first suggested by Mott (1936), had been experimentally demonstrated and theoretically described in my Ph.D. thesis almost 20 years before the discovery of 1988. The GMR was the first step on the road of the exploitation of this influence to control an electrical current. Its application to the read heads of hard disks greatly contributed to the fast rise in the density of stored information and led to the extension of the hard disk technology to consumer's electronics. Then, the development of spintronics revealed many other phenomena related to the control and manipulation of spin currents. Today this field of research is expanding considerably, with very promising new axes like the phenomena of spin transfer, spintronics with semiconductors, molecular spintronics, or single-electron spintronics.
---
paper_title: Magnetoresistance of FeCo Nanocontacts With Current-Perpendicular-to-Plane Spin-Valve Structure
paper_content:
We have achieved a magnetoresistance (MR) ratio of 7%-10% at a resistance area product (RA) of 0.5-1.5 Ωμm² with ferromagnetic FeCo nanocontacts in an Al nano-oxide-layer (NOL) with a current-perpendicular-to-plane spin-valve (CPP-SV) structure. Conductive atomic force microscopy shows clear current-path regions of a few nanometers in size surrounded by the Al-NOL. The MR dependence on RA is well explained by the current-confined-path model assuming that the spin-dependent scattering has an FeCo nanocontact origin, different from tunnel magnetoresistance (TMR). The resistance increases with increasing bias voltage, indicating Joule heating by the high current density in the nanocontacts, in contrast to TMR. The MR origin is mainly interpreted as spin-dependent scattering due to domain walls formed at the ferromagnetic nanocontacts.
---
paper_title: MR ratio enhancement by NOL current-confined-path structures in CPP spin valves
paper_content:
We have compared the magnetoresistance (MR) performance of current-confined-path (CCP) current-perpendicular-to-plane giant magnetoresistance (CPP-GMR) spin valve films with a nano-oxide-layer (NOL) made by either natural oxidation (NO) or ion-assisted oxidation (IAO). For the NO, the MR ratio was only 1.5% at an RA of 370 mΩμm², whereas for the IAO, the MR ratio was greatly increased to 5.4% at an RA of 500 mΩμm². Fits to the Valet-Fert model show that the larger MR enhancement for the IAO is explained by the improved metal purity of the Cu inside the CCP structure. With further improvement of the metal purity of the Cu, a large MR ratio of more than 30% can be expected at a small RA of 300 mΩμm². The CCP-CPP spin valve film is a promising candidate for realizing high-density recording heads for 200 to 400-Gbpsi recording.
---
paper_title: Enhancement of giant magnetoresistance by L21 ordering in Co2Fe(Ge0.5Ga0.5) Heusler alloy current-perpendicular-to-plane pseudo spin valves
paper_content:
We report a large magnetoresistance (MR) output in fully epitaxial Co2Fe(Ge0.5Ga0.5)/Ag/Co2Fe(Ge0.5Ga0.5) current-perpendicular-to-plane pseudo spin valves. A resistance-area product change (ΔRA) of 12 mΩμm² at room temperature (RT), equivalent to an MR ratio of 57%, and ΔRA = 33 mΩμm² at 10 K, equivalent to an MR ratio of 183%, were obtained by using L21-ordered Co2Fe(Ge0.5Ga0.5) ferromagnetic electrodes. The bulk spin scattering asymmetry (β) was estimated by the Valet-Fert model to be ∼0.83 at RT and ∼0.93 at 10 K for the L21-ordered Co2Fe(Ge0.5Ga0.5) films, indicating that the L21-ordered Co2Fe(Ge0.5Ga0.5) Heusler alloy is virtually half-metallic at 10 K, but its half-metallicity is degraded at RT.
---
paper_title: Nobel Lecture: From spin waves to giant magnetoresistance and beyond
paper_content:
The “Institute for Magnetism” within the department for solid-state physics at the research center in Julich, Germany, which I joined in 1972, was founded in 1971 by Professor W. Zinn. The main research topic was the exploration of the model magnetic semiconductors EuO and EuS with Curie temperatures Tc=60 K and Tc =17 K, respectively. As I had been working with light scattering LS techniques before I came to Julich, I was very much interested in the observation of spin waves in magnetic materials by means of LS. LS can be performed with grating spectrometers, which is called Raman spectroscopy, and alternatively by Brillouin light scattering BLS spectroscopy. In the latter case, a Fabry-Perot FP interferometer is used for the frequency analysis of the scattered light see righthand side of Fig. 1 . The central part consists of two FP mirrors whose distance is scanned during operation. BLS spectroscopy is used when the frequency shift of the scattered light is small below 100 GHz , as expected for spin waves in ferromagnets. In the early 1970s, an interesting instrumental development took place in BLS, namely, the invention of the multipass operation, and later, the combination of two multipass interferometers in tandem. The inventor was Dr. J. A. Sandercock in Zurich. Since we had the opportunity to install a new laboratory, we decided in favor of BLS, initially using a single three-pass instrument as displayed on the righthand side of Fig. 1. With this, we started investigating spin waves in EuO. We indeed were able to find and identify the expected spin waves as shown by the peaks in Fig. 1 marked green . Different intensities on the Stokes S and antiStokes aS side were known from other work to be due to the magneto-optic interaction of light with the spin waves. The peaks marked red remained a puzzle for some time until good luck came to help us. Good luck in this case was a breakdown of the system, a repair and unintentional interchange of the leads when reconnecting the magnet to the power supply. To our surprise S and aS side were now reversed. To understand what this means, one has to know that classically S and aS scattering is related to the propagation direction of the observed mode, which is opposite for the two cases. This can be understood from the corresponding Doppler shift, which is to higher frequencies when the wave travels towards the observer and down when away from him. The position of the observer here would be the same as of the viewer in Fig. 1. The appearance of the red peak in the spectra on only either the S or the aS side can be explained by an unidirectional propagation of the corresponding spin wave along the surface of the sample. It can be reversed by reversing B0 and M. The unidirectional behavior of the wave can be understood on the basis of symmetry. For this, one has to know that axial vectors which appear in nature, such as B and M on the left-hand side of Fig. 1, reverse their sign under time inversion and so does the sense of the propagation of the surface wave as indicated. The upper and lower parts of Fig. 1, on the left-hand side therefore are linked by time inversion symmetry, which is valid without damping. Hence, the unidirectional behavior reflects the symmetry of the underlying system. Finally, the observed wave could be identified as the DamonEshbach DE surface mode known from theory and from microwave experiments. 
From the magnetic parameters of EuO, one predicts in the present case that the penetration depth of the DE mode will be a few 100 Å. The sample thickness d is of the order of mm. Therefore, for the present purpose, EuO is opaque. In this case, the wave traveling on the backside of the sample in the opposite direction to the wave on the front side cannot be seen in this experiment. BLS is then either S or aS but not both at the same time. Due to all of these unique features, the results of Fig. 1 have also been chosen as examples for current research in magnetism in a textbook on “Solid State Physics” (Ibach and Lüth, 1995, p. 186).
---
paper_title: Spin Valve Devices With Synthetic-Ferrimagnet Free-Layer Displaying Enhanced Sensitivity for Nanometric Sensors
paper_content:
Spin valves (SVs) with synthetic-antiferromagnet (SAF) pinned layers and synthetic-ferrimagnet (SF) free-layers deposited by ion beam deposition are optimized for incorporation in nanometric sensors. The results on combined SAF-SF structures indicate a reduced saturation and offset fields when compared with the simple top-pinned or SAF structure. Therefore, SAF-SF SV display sensitivities of ~0.025%/Oe (200 nm sensor), ~0.1%/Oe (500 nm sensor), and ~0.2%/Oe (1 μm sensor), which correspond to an improvement of 2x, 4.5x, and 7x, respectively, when compared with all other SV stacks tested. The results are relevant for geometries where nanometric SVs are incorporated within very narrow gaps of magnetic flux concentrators leading to superior and competitive gains in sensitivity. These geometries have the unique feature of submicrometric spatial resolution, and have high impact on surface scanning applications.
---
paper_title: Comparison of the soft magnetic properties of permalloy and conetic thin films
paper_content:
The soft magnetic properties of substrate/[no buffer or Ta buffer]/[permalloy (Ni80Fe20) or conetic (Ni77Fe14Cu5Mo4)]/Ta stacks prepared by ion beam sputter deposition are investigated. The surface resistance of the conetic film is twice as high as that of the permalloy film. Relative to the permalloy film, the coercivity of the conetic film decreased by 25% and its magnetic susceptibility doubled. The coercivity of 0.12 Oe and the magnetic susceptibility of 1.2×10⁴ of the conetic film are suitable for soft magnetic biosensor applications.
---
paper_title: Fabrication of Fully-Epitaxial Co$_{{{2}}}$MnSi/Ag/Co$_{{{2}}}$MnSi Giant Magnetoresistive Devices by Elevated Temperature Deposition
paper_content:
(001)-oriented epitaxial Co2MnSi (CMS) films were grown by elevating the substrate temperature during deposition instead of a conventional post-annealing process. The CMS film deposited at 250°C showed a very flat surface morphology, large saturation magnetization, and a highly L21-ordered crystal structure. A CMS/Ag/CMS giant-magnetoresistive device with CMS layers grown at 250°C showed the high magnetoresistance ratio (33%) at room temperature. This ratio was close to that realized in the case of samples fabricated using post-annealing at 500-550°C. The elevated temperature deposition for the Heusler electrodes is a promising method to fabricate a practical magnetic read sensor for next generation hard disc drives without high temperature annealing process over 300°C.
---
paper_title: Giant tunneling magnetoresistance up to 330% at room temperature in sputter deposited Co2FeAl/MgO/CoFe magnetic tunnel junctions
paper_content:
Magnetoresistance ratio up to 330% at room temperature (700% at 10 K) has been obtained in a spin-valve-type magnetic tunnel junction (MTJ) consisting of a full-Heusler alloy Co2FeAl electrode and a MgO tunnel barrier fabricated on a single crystal MgO (001) substrate by sputtering method. The output voltage of the MTJ at one-half of the zero-bias value was found to be as high as 425 mV, which is the largest reported to date in MTJs using Heusler alloy electrodes. The present finding suggests that Co2FeAl may be one of the most promising candidates for future spintronics devices applications.
---
paper_title: Improvement of the low-frequency sensitivity of MgO-based magnetic tunnel junctions by annealing
paper_content:
Magnetic tunnel junctions can serve as ultrasensitive low-frequency magnetic sensors; however, their low-frequency performance is limited by low-frequency noise, i.e., 1/f noise. In this paper, we investigate the 1/f noise in MgO magnetic tunnel junctions (MTJs) with a tunneling magnetoresistance (TMR) of 160%, and examine the influence of annealing and MTJ size. The results show that the annealing process can not only dramatically improve the TMR, but can also strongly decrease the MTJ noise. The effect is discussed in terms of the structure of the MgO barriers and the tunneling probabilities. Increasing the MTJ area to 6400 μm² yields a voltage spectral density as low as 11 nV/Hz^1/2 at 1000 Hz. The possible reasons for the area dependence are discussed.
---
paper_title: Effect of electrode composition on the tunnel magnetoresistance of pseudo-spin-valve magnetic tunnel junction with a MgO tunnel barrier
paper_content:
The authors investigate the effect of electrode composition on the tunnel magnetoresistance (TMR) ratio of (CoxFe100−x)80B20∕MgO∕(CoxFe100−x)80B20 pseudo-spin-valve magnetic tunnel junctions (MTJs). TMR ratio is found to strongly depend on the composition and thicknesses of CoFeB. High resolution transmission electron microscopy shows that the crystallization process of CoFeB during annealing depends on the composition and the thicknesses of the CoFeB film, resulting in different TMR ratios. A TMR ratio of 500% at room temperature and of 1010% at 5K are observed in a MTJ having 4.3nm and 4-nm-thick (Co25Fe75)80B20 electrodes with a 2.1-nm-thick MgO barrier annealed at 475°C.
---
paper_title: Giant tunnelling magnetoresistance at room temperature with MgO (100) tunnel barriers
paper_content:
Magnetically engineered magnetic tunnel junctions (MTJs) show promise as non-volatile storage cells in high-performance solid-state magnetic random access memories (MRAM)1. The performance of these devices is currently limited by the modest (<∼70%) room-temperature tunnelling magnetoresistance (TMR) of technologically relevant MTJs. Much higher TMR values have been theoretically predicted for perfectly ordered (100) oriented single-crystalline Fe/MgO/Fe MTJs. Here we show that sputter-deposited polycrystalline MTJs grown on an amorphous underlayer, but with highly oriented (100) MgO tunnel barriers and CoFe electrodes, exhibit TMR values of up to ∼220% at room temperature and ∼300% at low temperatures. Consistent with these high TMR values, superconducting tunnelling spectroscopy experiments indicate that the tunnelling current has a very high spin polarization of ∼85%, which rivals that previously observed only using half-metallic ferromagnets2. Such high values of spin polarization and TMR in readily manufacturable and highly thermally stable devices (up to 400 °C) will accelerate the development of new families of spintronic devices.
---
paper_title: Improved tunnel magnetoresistance of magnetic tunnel junctions with Heusler Co2FeAl0.5Si0.5 electrodes fabricated by molecular beam epitaxy
paper_content:
The authors have developed a magnetic tunnel junction of Co2FeAl0.5Si0.5 electrodes and a MgO barrier fabricated by molecular beam epitaxy and observed that this device had a tunnel magnetoresistance ratio of 386% at approximately 300 K and 832% at 9 K. The lower Co2FeAl0.5Si0.5 electrode was annealed during and after deposition resulting in a highly ordered structure with small roughness. This highly ordered structure could be obtained by annealing treatment even at low temperatures. Furthermore, a weak temperature dependence of the tunnel magnetoresistance ratio was observed for the developed magnetic tunnel junction.
---
paper_title: Influence of growth and annealing conditions on low-frequency magnetic 1/f noise in MgO magnetic tunnel junctions
paper_content:
Magnetic 1/f noise is compared in magnetic tunnel junctions with electron-beam evaporated and sputtered MgO tunnel barriers in the annealing temperature range 350 - 425 °C. The variation of the magnetic noise parameter (αmag) of the reference layer with annealing temperature mainly reflects the variation of the pinning effect of the exchange-bias layer. A reduction in αmag with bias is associated with the bias dependence of the tunneling magnetoresistance. The related magnetic losses are parameterized by a phase lag e, which is nearly independent of bias especially below 100 mV. The similar changes in magnetic noise with annealing temperature and barrier thickness for two types of MgO magnetic tunnel junctions indicate that the barrier layer quality does not affect the magnetic losses in the reference layer.
---
paper_title: Giant magnetic tunneling effect in Fe/Al2O3/Fe junction
paper_content:
A giant magnetoresistance ratio of 30% at 4.2 K and 18% at 300 K was observed for the first time in an Fe/Al2O3/Fe junction. The conductance at room temperature was well expressed by G = 96.2(1 + 0.09 cos θ) Ω⁻¹, where θ is the angle between the magnetizations of the two iron electrodes. The dependence of the magnetoresistance ratio, the saturated resistance and the tunneling current on temperature was measured in the range 4.2–300 K. The results support the claim that the giant magnetoresistance is due to the magnetic tunneling of electrons between the electrodes through the thin Al2O3 insulator.
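As a quick consistency check on the quoted conductance expression G(θ) = G0(1 + ε cos θ) with ε = 0.09, the implied magnetoresistance ratio (G_P − G_AP)/G_AP = 2ε/(1 − ε) is close to the 18% reported at 300 K:

    # Magnetoresistance implied by G(theta) = G0 * (1 + eps*cos(theta)) with eps = 0.09.
    eps = 0.09
    mr = 2 * eps / (1 - eps)   # (G_P - G_AP) / G_AP
    print("MR ratio = %.1f %%" % (100 * mr))   # ~19.8%, close to the reported 18% at 300 K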
---
paper_title: Shot Noise in Magnetic Tunnel Junctions: Evidence for Sequential Tunneling
paper_content:
We report the experimental observation of sub-Poissonian shot noise in single magnetic tunnel junctions, indicating the importance of tunneling via impurity levels inside the tunnel barrier. For junctions with weak zero-bias anomaly in conductance, the Fano factor (normalized shot noise) depends on the magnetic configuration being enhanced for antiparallel alignment of the ferromagnetic electrodes. We propose a model of sequential tunneling through nonmagnetic and paramagnetic impurity levels inside the tunnel barrier to qualitatively explain the observations.
---
paper_title: Breakdown mechanisms in MgO based magnetic tunnel junctions and correlation with low frequency noise
paper_content:
Magnetic tunnel junctions (MTJs) are very attractive for magnetic random access memories (MRAMs), thanks to their combination of non-volatility, speed, low power and endurance. In particular spin transfer torque (STT) RAMs based on STT writing show a very good downsize scalability. However, an issue is that at each write event, the MTJ is submitted to an important electrical stress due to write voltage of the order of half of the electrical breakdown voltage. Here we present a study of breakdown mechanisms in MgO based MTJ performed under pulsed conditions. We developed a model of charge trapping/detrapping on barrier defects to explain and predict device endurance. We also show that endurance is correlated to low frequency 1/f noise and that such noise measurement could thus be used as a non destructive and predictive tool for the reliability of the devices.
---
paper_title: Sub-Poissonian shot noise in CoFeB/MgO/CoFeB-based magnetic tunneling junctions
paper_content:
We measured the shot noise in CoFeB/MgO/CoFeB-based magnetic tunneling junctions with a high tunneling magnetoresistance ratio (over 200% at 3 K). Although the Fano factor in the anti-parallel configuration is close to unity, it is observed to be typically 0.91 ± 0.01 in the parallel configuration. This indicates a sub-Poissonian electron tunneling process in the parallel configuration due to the relevance of spin-dependent coherent transport in the low-bias regime.
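For orientation, the shot-noise spectral density of a tunnel junction is commonly written S_I = 2eIF, where F is the Fano factor (F = 1 for uncorrelated Poissonian tunneling, F < 1 for sub-Poissonian transport). The sketch below evaluates this expression; the bias current is an assumed value, not one taken from the paper.

    # Shot noise S_I = 2*e*I*F; F = 1 is Poissonian, F < 1 is sub-Poissonian.
    e = 1.602e-19          # elementary charge, C
    I = 10e-6              # assumed bias current, 10 uA (illustrative)

    for label, fano in [("Poissonian (F=1)", 1.0), ("parallel state (F=0.91)", 0.91)]:
        S_I = 2 * e * I * fano          # current noise PSD, A^2/Hz
        print("%s: S_I = %.2e A^2/Hz" % (label, S_I))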
---
paper_title: 60% magnetoresistance at room temperature in Co–Fe/Al–O/Co–Fe tunnel junctions oxidized with Kr–O2 plasma
paper_content:
The influence of the inert gas species used in the plasma oxidation of a metallic Al layer on the tunnel magnetoresistance (TMR) was investigated for a magnetic tunnel junction (MTJ) of Ta 50 Å/Cu 200 Å/Ta 200 Å/Ni–Fe 50 Å/Cu 50 Å/Mn75Ir25 100 Å/Co70Fe30 25 Å/Al–O/Co70Fe30 25 Å/Ni–Fe 100 Å/Cu 200 Å/Ta 50 Å. Using Kr–O2 plasma, a TMR ratio of 58.8% was obtained at room temperature after annealing the junction at 300 °C, while the TMR ratio of the MTJ fabricated with the usual Ar–O2 plasma remained at 48.6%. The faster oxidation rate of the Al layer with Kr–O2 plasma is a possible reason why over-oxidation of the Al layer, which depolarizes the surface of the underlying ferromagnetic electrode, is prevented and a large magnetoresistance is realized.
---
paper_title: Dilute ferromagnetic semiconductors: Physics and spintronic structures
paper_content:
This review compiles results of experimental and theoretical studies on thin films and quantum structures of semiconductors with randomly distributed Mn ions, which exhibit spintronic functionalities associated with collective ferromagnetic spin ordering. Properties of p-type Mn-containing III-V as well as II-VI, V_2-VI_3, IV-VI, I-II-V, and elemental group IV semiconductors are described paying particular attention to the most thoroughly investigated system (Ga,Mn)As that supports the hole-mediated ferromagnetic order up to 190 K for the net concentration of Mn spins below 10%. Multilayer structures showing efficient spin injection and spin-related magnetotransport properties as well as enabling magnetization manipulation by strain, light, electric fields, and spin currents are presented together with their impact on metal spintronics. The challenging interplay between magnetic and electronic properties in topologically trivial and non-trivial systems is described, emphasizing the entangled roles of disorder and correlation at the carrier localization boundary. Finally, the case of dilute magnetic insulators is considered, such as (Ga,Mn)N, where low temperature spin ordering is driven by short-ranged superexchange that is ferromagnetic for certain charge states of magnetic impurities.
---
paper_title: Epitaxial Co2Cr0.6Fe0.4Al thin films and magnetic tunnelling junctions
paper_content:
Epitaxial thin films of the theoretically predicted half metal Co2Cr0.6Fe0.4Al were deposited by dc magnetron sputtering on different substrates and buffer layers. The samples were characterized by x-ray and electron beam diffraction (RHEED) demonstrating the B2 order of the Heusler compound with only a small fraction of disorder on the Co sites. Magnetic tunnelling junctions with Co2Cr0.6Fe0.4Al electrode, AlOx barrier and Co counter electrode were prepared. From the Julliere model a spin polarization of Co2Cr0.6Fe0.4Al of 54% at T = 4 K was deduced. The relation between the annealing temperature of the Heusler electrodes and the magnitude of the tunnelling magnetoresistance effect was investigated and the results are discussed in the framework of morphology and surface order based on in situ scanning tunnelling microscopy (STM) and RHEED investigations.
---
paper_title: Experimental investigations of SiO2 based ferrite magnetic tunnel junction
paper_content:
We report experimental results for a ferrite-based magnetic tunnel junction. Ferrite junctions and spin transport through SiO2 are interesting since they can readily replace conventional electronics. We fabricated a cobalt ferrite/SiO2/cobalt nickel ferrite magnetic tunnel junction on a copper-coated n-silicon substrate using RF/DC magnetron sputtering. The tunneling magnetoresistance shows a very good response to the applied field, and we achieved a TMR of about 16%. Although an infinite TMR is theoretically predicted for a half-metallic ferromagnetic junction, the deviation was explained on the basis of incoherent scattering at the interfaces.
---
paper_title: Tunnel magnetoresistance of 604% at 300K by suppression of Ta diffusion in CoFeB∕MgO∕CoFeB pseudo-spin-valves annealed at high temperature
paper_content:
The authors observed tunnel magnetoresistance (TMR) ratio of 604% at 300K in Ta∕Co20Fe60B20∕MgO∕Co20Fe60B20∕Ta pseudo-spin-valve magnetic tunnel junction annealed at 525°C. To obtain high TMR ratio, it was found critical to anneal the structure at high temperature above 500°C, while suppressing the Ta diffusion into CoFeB electrodes and in particular to the CoFeB∕MgO interface. X-ray diffraction measurement of MgO on SiO2 or Co20Fe60B20 shows that an improvement of MgO barrier quality, in terms of the degree of the (001) orientation and stress relaxation, takes place at annealing temperatures above 450°C. The highest TMR ratio observed at 5K was 1144%.
---
paper_title: 230% room temperature magnetoresistance in CoFeB/MgO/CoFeB magnetic tunnel junctions
paper_content:
A magnetoresistance ratio of 230% at room temperature is reported. This was achieved in spin-valve-type magnetic tunnel junctions using an MgO barrier layer and amorphous CoFeB ferromagnetic electrodes fabricated on thermally oxidized Si substrates. The amorphous CoFeB electrodes offer a great advantage over polycrystalline FeCo electrodes in achieving high homogeneity in small 100 nm-sized MTJs.
---
paper_title: Giant room-temperature magnetoresistance in single-crystal Fe/MgO/Fe magnetic tunnel junctions
paper_content:
The tunnel magnetoresistance (TMR) effect in magnetic tunnel junctions (MTJs)1,2 is the key to developing magnetoresistive random-access-memory (MRAM), magnetic sensors and novel programmable logic devices3,4,5. Conventional MTJs with an amorphous aluminium oxide tunnel barrier, which have been extensively studied for device applications, exhibit a magnetoresistance ratio up to 70% at room temperature6. This low magnetoresistance seriously limits the feasibility of spintronics devices. Here, we report a giant MR ratio up to 180% at room temperature in single-crystal Fe/MgO/Fe MTJs. The origin of this enormous TMR effect is coherent spin-polarized tunnelling, where the symmetry of electron wave functions plays an important role. Moreover, we observed that their tunnel magnetoresistance oscillates as a function of tunnel barrier thickness, indicating that coherency of wave functions is conserved across the tunnel barrier. The coherent TMR effect is a key to making spintronic devices with novel quantum-mechanical functions, and to developing gigabit-scale MRAM.
---
paper_title: Band overlap via chemical pressure control in double perovskite (Sr2−xCax)FeMoO6 (0 ⩽ x ⩽ 2.0) with TMR effect
paper_content:
The chemical pressure control in (Sr2−xCax)FeMoO6 (0 ⩽ x ⩽ 2.0) with the double perovskite structure has been investigated systematically. We have performed first-principles total energy and electronic structure calculations for x = 0 and x = 2.0. The increasing Ca content in (Sr2−xCax)FeMoO6 samples increases the magnetic moment close to the theoretical value due to reduction of Fe/Mo anti-site disorder. An increasing Ca content results in increasing (Fe2+ + Mo6+)/(Fe3+ + Mo5+) band overlap rather than bandwidth changes. This is explained from simple ionic size arguments and is supported by X-ray absorption near edge structure (XANES) spectra and band structure calculations.
---
paper_title: Dependence of magnetoresistance on temperature and applied voltage in a 82Ni-Fe/Al-Al2O3/Co tunneling junction
paper_content:
The dependence of magnetoresistance on temperature T and on the applied voltage V in an 82Ni-Fe/Al-Al2O3/Co tunneling junction has been studied. The magnetoresistance ΔR/R increased rapidly at T ⩽ 30 K and V ⩽ 2.5 mV. The result is discussed by taking into account the tunneling conductance due to ferromagnetic tunneling and nonferromagnetic tunneling causing the zero-bias anomaly.
---
paper_title: 70% TMR at room temperature for SDT sandwich junctions with CoFeB as free and reference Layers
paper_content:
Spin dependent tunneling (SDT) wafers were deposited using dc magnetron sputtering. SDT junctions were patterned and connected with one layer of metal lines using photolithography techniques. These junctions have a typical stack structure of Si(100)-Si3N4-Ru-CoFeB-Al2O3-CoFeB-Ru-FeCo-CrMnPt with the antiferromagnet CrMnPt layers for pinning at the top. High-resolution transmission electron microscopy (HRTEM) reveals that the CoFeB has an amorphous structure and a smooth interface with the Al2O3 tunnel barrier. Although it is difficult to pin the amorphous CoFeB directly from the top, the use of a synthetic antiferromagnet (SAF) pinned layer structure allows sufficient rigidity of the reference CoFeB layer. The tunnel junctions were annealed at 250 °C for 1 h and tested for magneto-transport properties with tunnel magnetoresistive (TMR) values as high as 70.4% at room temperature, which is the highest value ever reported for such a sandwich structure. This TMR value translates to a spin polarization of 51% for CoFeB, which is likely to be higher at lower temperatures. These junctions also have a low coercivity (Hc) and a low parallel coupling field (Hcoupl). The combination of a high TMR, a low Hc, and a low Hcoupl is ideal for magnetic field sensor applications.
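The quoted conversion between TMR and spin polarization follows the Julliere model, TMR = 2P1P2/(1 − P1P2), which for identical electrodes inverts to P = √(TMR/(TMR + 2)). A quick check, assuming (as the sentence implies) that both CoFeB electrodes have the same polarization:

    # Julliere model: TMR = 2*P1*P2 / (1 - P1*P2); for P1 = P2 = P, P = sqrt(TMR/(TMR+2)).
    import math

    tmr = 0.704                       # 70.4% TMR at room temperature
    P = math.sqrt(tmr / (tmr + 2))    # ~0.51, i.e. ~51% spin polarization
    print("P = %.2f" % P)
    print("TMR back-calculated = %.1f %%" % (100 * 2 * P * P / (1 - P * P)))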
---
paper_title: Spin-tunnel-junction thermal stability and interface interdiffusion above 300 °C
paper_content:
Spin tunnel junctions (CoFe/Al2O3/CoFe/MnIr) were fabricated with tunneling magnetoresistance (TMR) of 39%–41% after anneal at 300 °C, decreasing to 4%–6% after anneal at 410 °C. Junction resistance decreases from (0.8–1.6) to (0.5–0.8) MΩμm² during anneal. The pinned-layer moment decreases by 44% after anneal at 435 °C, but the free-layer moment does not change. The current–voltage characteristics change significantly and become asymmetric above 300 °C. Rutherford backscattering analysis (RBS) shows that above 300 °C, strong interdiffusion starts at the CoFe/MnIr interface with Mn moving into CoFe, causing the electrode moment to decrease. Mn eventually reaches the Al2O3/CoFe interface contributing to the TMR decrease. RBS analysis of a separate CoFe/Al2O3/CoFe structure shows only minor structural changes at the CoFe/Al2O3 interfaces after anneal at 435 °C, possibly leading to a second mechanism for the loss of interface polarization and TMR.
---
paper_title: Spin dependent electron tunneling between ferromagnetic films
paper_content:
The hysteresis of the tunneling resistance in Gd/GdOx/Fe and Fe/GdOx/Fe junctions in an external magnetic field has been observed. The relative changes of the tunneling resistance were of the order of 2.5–7.7%. The domain structures and magnetization reversal processes in the area of the tunneling junctions were investigated by defocused electron microscopy. We found that the ferromagnetic coupling between the electrodes of the junction strongly modified the domain structures and magnetization reversal processes in the junction area. The hysteresis of the tunneling resistance was closely connected with the magnetization processes in the junction area. We estimated the polarization of the tunneling densities of states for the Fe electrodes to be not lower than 17% and the change of barrier height corresponding to the barrier spin-filtering effect to be of the order of 1 meV.
---
paper_title: Giant tunneling magnetoresistance up to 410% at room temperature in fully epitaxial Co∕MgO∕Co magnetic tunnel junctions with bcc Co(001) electrodes
paper_content:
Fully epitaxial Co(001)∕MgO(001)∕Co(001) magnetic tunnel junctions (MTJs) with metastable bcc Co(001) electrodes were fabricated with molecular beam epitaxy. The MTJs exhibited giant magnetoresistance (MR) ratios up to 410% at room temperature, the highest value reported to date. Temperature dependence of the MR ratio was observed to be very small compared with fully epitaxial Fe∕MgO∕Fe and textured CoFeB∕MgO∕CoFeB MTJs. The MR ratio of the Co∕MgO∕Co MTJ showed larger bias voltage dependence than that of the epitaxial Fe∕MgO∕Fe MTJs, which probably reflects the band structures of bcc Co and Fe for the k‖=0 direction.
---
paper_title: Fabrication of fully epitaxial Co2MnSi∕MgO∕Co2MnSi magnetic tunnel junctions
paper_content:
Fully epitaxial magnetic tunnel junctions (MTJs) were fabricated with full-Heusler alloy Co2MnSi thin films as both lower and upper electrodes and with a MgO tunnel barrier. The fabricated MTJs showed clear exchange-biased tunnel magnetoresistance (TMR) characteristics with high TMR ratios of 179% at room temperature (RT) and 683% at 4.2 K. In addition, the TMR ratio exhibited oscillations as a function of the MgO tunnel barrier thickness (tMgO) at RT, having a period of 0.28 nm, for tMgO ranging from 1.8 to 3.0 nm.
---
paper_title: Low frequency noise peak near magnon emission energy in magnetic tunnel junctions
paper_content:
We report on the low frequency (LF) noise measurements in magnetic tunnel junctions (MTJs) below 4 K and at low bias, where the transport is strongly affected by scattering with magnons emitted by hot tunnelling electrons, as thermal activation of magnons from the environment is suppressed. For both CoFeB/MgO/CoFeB and CoFeB/AlOx/CoFeB MTJs, enhanced LF noise is observed at bias voltage around magnon emission energy, forming a peak in the bias dependence of noise power spectra density, independent of magnetic configurations. The noise peak is much higher and broader for unannealed AlOx-based MTJ, and besides Lorentzian shape noise spectra in the frequency domain, random telegraph noise (RTN) is visible in the time traces. During repeated measurements the noise peak reduces and the RTN becomes difficult to resolve, suggesting defects being annealed. The Lorentzian shape noise spectra can be fitted with bias-dependent activation of RTN, with the attempt frequency in the MHz range, consistent with magnon dyn...
---
paper_title: Co2FeAl based magnetic tunnel junctions with BaO and MgO/BaO barriers
paper_content:
We succeeded in integrating BaO as a tunneling barrier into Co2FeAl-based magnetic tunnel junctions (MTJs). By means of Auger electron spectroscopy it could be proven that the annealing temperatures applied during and after BaO deposition do not cause any diffusion of Ba either into the lower Heusler-compound lead or into the upper Fe counter electrode. Nevertheless, a negative tunnel magnetoresistance (TMR) ratio of -10% is found for Co2FeAl (24 nm)/BaO (5 nm)/Fe (7 nm) MTJs, which can be attributed to the preparation procedure and, by comparison with theory, can be explained by the formation of Co and Fe oxides at the interfaces between the Heusler compound and the crystalline BaO barrier. Although an amorphous structure of the BaO barrier seems to be confirmed by high-resolution transmission electron microscopy (TEM), it cannot entirely be ruled out that this is an artifact of TEM sample preparation due to the sensitivity of BaO to moisture. By replacing the BaO tunneling barrier with an MgO/BaO double lay...
---
paper_title: Very low 1/f barrier noise in sputtered MgO magnetic tunnel junctions with high tunneling magnetoresistance
paper_content:
Low frequency 1/f barrier noise has been investigated in sputtered MgO magnetic tunnel junctions (MTJs) with a tunneling magnetoresistance ratio of up to 330% at room temperature. The lowest normalized noise parameter α of the tunnel barrier reaches 2.5×10⁻¹²–2.1×10⁻¹¹ μm², which is comparable to that found in MTJs with the MgO barrier grown by MBE or electron-beam evaporation. This normalized barrier noise is almost bias independent in the voltage range of up to ±1.2 V. The low noise level and high voltage stability may reflect the high quality of the sputtered MgO with a uniform distribution of defects in the MgO layer.
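The area-normalized 1/f noise parameter quoted here is commonly defined through S_V(f) = αV²/(Af), so α = S_V·A·f/V² and carries units of area, hence the μm². The helper below only illustrates the arithmetic; the operating point in the example is assumed, not taken from the paper.

    # 1/f noise magnitude parameter: alpha = S_V * A * f / V^2 (units of area, e.g. um^2).
    def alpha_noise(S_V, area_um2, freq_hz, v_bias):
        return S_V * area_um2 * freq_hz / v_bias**2

    # Assumed example: S_V = 1e-15 V^2/Hz at 10 Hz, 10 um^2 junction, 100 mV bias.
    print("alpha = %.1e um^2" % alpha_noise(1e-15, 10.0, 10.0, 0.1))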
---
paper_title: 1/f noise in MgO double-barrier magnetic tunnel junctions
paper_content:
Low frequency noise has been investigated in MgO double-barrier magnetic tunnel junctions (DMTJs) with tunneling magnetoresistance (TMR) ratios up to 250% at room temperature. The noise shows a 1/f frequency spectrum and the minimum of the noise magnitude parameter is 1.2×10⁻¹⁰ μm² in the parallel state for DMTJs annealed at 375 °C. The bias dependence of noise and TMR suggests that DMTJs with MgO barriers can be useful for magnetic field sensor applications.
---
paper_title: Optimized thickness of superconducting aluminum electrodes for measurement of spin polarization with MgO tunnel barriers
paper_content:
Superconducting tunneling spectroscopy (STS) is one of the most useful techniques for measuring the tunneling spin polarization of magnetic materials, typically carried out using aluminum electrodes. Recent studies using MgO barriers have shown the extreme sensitivity of the spin polarization to annealing at temperatures up to ∼400°C. Here the authors show that by optimizing the thickness of aluminum superconducting electrodes, STS measurements can be carried out even for such high annealing temperatures.
---
paper_title: Linearization and Field Detectivity in Magnetic Tunnel Junction Sensors Connected in Series Incorporating 16 nm-Thick NiFe Free Layers
paper_content:
In this work, arrays of MgO-based magnetic tunnel junction elements (5×20 μm²) connected in series are studied for sensor applications. Linearization is obtained by combining shape anisotropy with a longitudinal hard-bias field set by CoCrPt permanent magnets integrated on the sides of the array. We show that this architecture has the drawback of a large footprint on the chip, but that this is largely compensated by the weak bias-voltage dependence and high electrical robustness when compared with individual magnetic tunnel junctions.
---
paper_title: MEMS approach for making a low cost, high sensitivity magnetic sensor
paper_content:
An approach based on microelectromechanical systems (MEMS) technology will be presented that essentially eliminates the problem of 1/f noise in small magnetic sensors. The sensor based on this approach, the MEMS flux concentrator sensor, should outperform all other high-sensitivity vector magnetometers in terms of cost, power consumption, and sensitivity. The method for achieving this improvement is to employ flux concentrators on MEMS structures that are near the magnetic sense element. The motion of the flux concentrators modulates the magnetic field and shifts the operating frequency of the sensor to higher frequencies where 1/f noise is unimportant. The data presented here provides our first evidence that the MEMS flux concentrator improves the signal to noise ratio at low frequencies.
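The noise-avoidance idea can be pictured with a toy amplitude-modulation model: oscillating flux concentrators multiply the quasi-static field seen by the sense element by a periodic gain, moving the signal from DC to the modulation frequency, where 1/f noise is much weaker. Every number in the sketch below (field, gain, modulation frequency) is an assumption for illustration only.

    # Toy model: MEMS flux concentrators modulate a DC field to f_mod,
    # shifting the signal away from the 1/f-dominated band. All values are assumed.
    import numpy as np

    fs, T = 100_000, 1.0
    t = np.arange(0, T, 1 / fs)
    B_dc = 1e-9                                  # quasi-static field to detect, T (assumed)
    f_mod = 10_000                               # flux-concentrator modulation frequency, Hz (assumed)
    gain = 1.0 + 0.5 * np.sin(2 * np.pi * f_mod * t)   # modulated concentration factor (assumed)

    signal = gain * B_dc                         # field seen by the sense element
    spectrum = np.abs(np.fft.rfft(signal)) / len(t)
    freqs = np.fft.rfftfreq(len(t), 1 / fs)
    print("tone near f_mod:", freqs[np.argmax(spectrum[1:]) + 1], "Hz")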
---
paper_title: MgO based picotesla field sensors
paper_content:
MgO magnetic tunnel junctions with RA = 150 Ω·μm² and tunnel magnetoresistance (TMR) = 100% were patterned into pillars with different geometries with areas up to 2000 μm². Sensors were incorporated in 500 nm thick Co94Zr3Nb4 flux guides with different shapes, and free layer stabilization was achieved through internal (Co66Cr16Pt18 pads) and external longitudinal bias fields (3.5 mT). Sensitivities of 870%/mT were obtained in the center of the transfer curve. Noise levels of 97 pT/√Hz at 10 Hz, 51 pT/√Hz at 30 Hz, and 2 pT/√Hz at 500 kHz were measured in the linear region of the transfer curve.
---
paper_title: Low frequency picotesla field detection using hybrid MgO based tunnel sensors
paper_content:
Low frequency ultrasensitive magnetic sensors are required for magnetocardiography applications (1 pT at 10 Hz). MgO based magnetic tunnel junctions with RA = 100–150 Ω·μm² and tunnel magnetoresistance (TMR) = 100% were patterned into rectangular pillars with sides up to 50 μm. Sensors were incorporated in 500 nm thick Co94Zr3Nb4 flux guides. Sensor linearization was achieved through internal (Co66Cr16Pt18 pads) and external (3.5 mT) longitudinal bias fields, resulting in sensitivities of 720%/mT. Noise levels of 97 pT/√Hz at 10 Hz, 51 pT/√Hz at 30 Hz, and 2 pT/√Hz at 500 kHz were measured in the linear region of the transfer curve.
---
paper_title: Magnetic tunnel junction sensors with pTesla sensitivity for biomedical imaging
paper_content:
Ultrasensitive magnetic field sensors at low frequencies are necessary for several biomedical applications. Suitable devices can be achieved by using large area magnetic tunnel junction sensors combined with permanent magnets to stabilize the magnetic configuration of the free layer and improve linearity. However, further increases in sensitivity, and consequently in detectivity, are achieved by also incorporating soft ferromagnetic flux guides. A detailed study of tunnel junction sensors with variable areas and aspect ratios is presented in this work. In addition, the effect of incorporating permanent magnets and flux guides on the sensors' transfer curves, namely on their coercivity and sensitivity, is also thoroughly discussed. Using sensors with a tunnel magnetoresistance of ~200% incorporating both permanent magnets and flux guides, sensitivities of 220–260%/mT were obtained for high aspect ratio sensors, increasing to values larger than ~2000%/mT for large area, low aspect ratio sensors. Measured noise levels of the final device at 10 Hz yield 3.9×10⁻¹⁷ V²/Hz, leading to an improved lowest detectable field of ~94 pT/√Hz.
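Illustrative note (not from the paper): the quoted detectivity follows from the usual conversion D(f) = sqrt(S_V(f)) / (dV/dB); the sensitivity value below, expressed in V/T, is a hypothetical assumption chosen only to reproduce the order of magnitude.

import math

S_V = 3.9e-17                 # V^2/Hz at 10 Hz, as quoted in the abstract
sensitivity_V_per_T = 66.0    # hypothetical slope (depends on bias voltage and the %/mT figure)
detectivity = math.sqrt(S_V) / sensitivity_V_per_T
print(f"lowest detectable field ~ {detectivity * 1e12:.0f} pT/sqrt(Hz)")  # ~95 pT/sqrt(Hz)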
---
paper_title: Linearization strategies for high sensitivity magnetoresistive sensors
paper_content:
Ultrasensitive magnetic field sensors envisaged for applications in biomedical imaging require the detection of low-intensity and low-frequency signals. Therefore, linear magnetic sensors with enhanced sensitivity, low noise levels and improved field detection at low operating frequencies are necessary. Suitable devices can be designed using magnetoresistive sensors, with room temperature operation, adjustable detected field range, CMOS compatibility and cost-effective production. The advent of spintronics set the path to the technological revolution boosted by the storage industry, in particular by the development of read heads using magnetoresistive devices. New multilayered structures were engineered to yield devices with linear output. We present a detailed study of the key factors influencing MR sensor performance (materials, geometries and layout strategies), with a focus on the different linearization strategies available. Furthermore, strategies to improve sensor detection levels are also addressed, with best reported values of ∼40 pT/√Hz at 30 Hz, representing a step forward in low-field detection at room temperature.
---
paper_title: Magnetic Field Sensors Based on Giant Magnetoresistance (GMR) Technology: Applications in Electrical Current Sensing
paper_content:
The 2007 Nobel Prize in Physics can be understood as global recognition of the rapid development of giant magnetoresistance (GMR), from both the physics and engineering points of view. In the wake of the use of GMR structures as read heads for high-capacity magnetic hard disks, important applications as solid-state magnetic sensors have emerged. Low cost, compatibility with standard CMOS technologies and high sensitivity are common advantages of these sensors, and they have therefore been successfully applied in many different environments. In this work, we collect the Spanish contributions to the progress of research on GMR-based sensors, covering, among other subjects, applications, sensor design, modelling and electronic interfaces, with a focus on electrical current sensing applications.
---
paper_title: Iddq testing for CMOS VLSI
paper_content:
It is little more than 15 years since the idea of Iddq testing was first proposed. Many semiconductor companies now consider Iddq testing an integral part of the overall testing of all ICs. This paper describes the present status of Iddq testing along with the essential concepts and data related to it. As part of the introduction, a historical background is given and it is discussed why this test method has drawn attention. A section on physical defects, with in-depth discussion and examples, is used to illustrate why a test method outside the voltage environment is required. Data with additional information from case studies are used to explain the effectiveness of Iddq testing. In Section IV, design issues, design styles, Iddq test vector generation and simulation methods are discussed. The concern of whether Iddq testing will remain useful in deep submicron technologies is addressed (Section V). The use of Iddq testing for reliability screening is described (Section VI). The current measurement methods for Iddq testing are given (Section VII), followed by comments on the economics of Iddq testing (Section VIII). In Section IX, pointers to some recent research are given and, finally, concluding remarks are given in Section X.
---
paper_title: Power pin testing: making the test coverage complete
paper_content:
Most modern ICs have multiple power and ground pins, and it must be verified that all of these pins are indeed connected to the board. In this paper a structural test method is presented for testing the connections of power and ground pins to the board. This on-chip test method is based on supply current detection between a bonding pad and its connection to the IC power distribution network. The method has been implemented and evaluated on a CMOS IC, using IEEE Std. 1149.1 to control and observe the embedded current sensors.
---
paper_title: Non-contact electric-coupling-based and magnetic-field-sensing-assisted technique for monitoring voltage of overhead power transmission lines
paper_content:
Adopting non-contact electric coupling for voltage monitoring avoids electrical connection to high-voltage transmission lines, making installation of the sensing platforms easier and enabling wide-area deployment. However, the coupling transformation matrix is difficult to determine in practice, as the exact spatial positions of the transmission lines are typically unknown and dynamic. This problem can be overcome by integrating magnetic field sensing with a stochastic optimization algorithm. In this technique, magnetic signals emanating from the overhead transmission lines are measured by magnetic sensor arrays to determine the coupling coefficients, so that the high voltage of the transmission lines can be deduced from the voltage induced in copper bars on the ground. This proposed technique can potentially be implemented with low-cost copper induction bars and magnetoresistive (MR) sensors. Sectional and wide-area deployment may thus become possible, and the Ferranti effect and travelling lightning waves can then be monitored. Its enhanced sensing ability over potential transformers (PTs) can largely improve transient-fault identification and save costs for transmission-network inspection.
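Illustrative note (a rough stand-in, not the authors' algorithm): the underlying signal model is that the induced bar voltages are a linear mixture of the line voltages, v_ind = C · v_line. The sketch below identifies C from a synthetic calibration record with ordinary least squares (purely as a placeholder for the stochastic optimization assisted by magnetic-field measurements) and then inverts it to deduce the line voltages; every number is invented.

import numpy as np

rng = np.random.default_rng(0)
C_true = np.array([[2.0e-4, 0.6e-4, 0.3e-4],
                   [0.6e-4, 2.1e-4, 0.6e-4],
                   [0.3e-4, 0.6e-4, 1.9e-4]])             # unknown coupling matrix

V_line_cal = 110e3 * rng.standard_normal((3, 200))         # synthetic calibration phase
V_ind_cal = C_true @ V_line_cal + 1e-2 * rng.standard_normal((3, 200))
C_est = V_ind_cal @ np.linalg.pinv(V_line_cal)             # least-squares estimate of C

V_ind_new = C_true @ np.array([[90e3], [95e3], [88e3]])    # new induced-voltage readings
V_line_deduced = np.linalg.solve(C_est, V_ind_new)         # deduced line voltages
print(np.round(V_line_deduced.ravel()))                    # ~[90000, 95000, 88000]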
---
paper_title: Nonintrusive Current Sensor for the Two-Wire Power Cords
paper_content:
This paper presents a nonintrusive current sensor for the two-wire power cords connected to household and commercial appliances. The sensor uses a gapped magnetic core surrounding the power cord to constrain the distribution of the magnetic field induced by the current. The magnetic fields in four air gaps of the magnetic core are respectively measured by four magnetoresistive (MR) sensors. The current in the power cord and the rotation angle of the power cord around its central axis are calculated together from the output voltages of the four MR sensors. The current sensor encircles the power cord without any alignment process and can detect the current nonintrusively in two-wire power cords of any type, such as zip-cord or nonmetallic sheathed cable. The experimental results show that the nonlinearity of the sensor is ±1.5% for the current range from 0 to 20 A, with the maximum error limited to ±1.8%; the sensor is therefore well suited for measuring the end use of electric power in residential and commercial areas.
---
paper_title: Design, installation, and field experience with an overhead transmission dynamic line rating system
paper_content:
Dynamic line rating (DLR) systems for high voltage overhead transmission lines have been installed by three utilities over the past five years. These DLR systems utilize the Power Donut™ to measure load and conductor temperature at several locations over the length of the circuit. The effective wind acting on the conductor at each site is determined in real time by a dynamic heat balance and used to compute the normal, LTE and STE ratings each minute. The lowest ratings of all locations define the circuit's ratings and are sent to SCADA as analog signals. This method is known as the conductor temperature model (CTM), as opposed to the weather model (WM), which calculates ratings using weather data only. These first DLR installations have also measured weather parameters at each ground station location to allow a comparison of the two methods. Data from the latest of these systems is presented and the behavior of the real-time ratings is discussed.
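Illustrative note (a heavily simplified sketch, not the Power Donut algorithm): a conductor-temperature-model rating can be pictured with a steady-state balance I²·R = h·(Tc − Ta), where h lumps the effective, wind-dependent cooling. h is identified from the real-time load and temperature measurements and then reused to find the current that would drive the conductor to its temperature limit; all values below are assumed.

import math

R = 7.0e-5                                              # ohm per metre (assumed AC resistance)
I_meas, Tc_meas, Ta, T_max = 600.0, 55.0, 30.0, 100.0   # measured load [A], temperatures and limit [°C]

h = I_meas ** 2 * R / (Tc_meas - Ta)          # effective cooling coefficient, W/(m*K)
I_rating = math.sqrt(h * (T_max - Ta) / R)    # dynamic rating under the same cooling conditions
print(f"h = {h:.2f} W/(m*K), dynamic rating ~ {I_rating:.0f} A")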
---
paper_title: A vehicle detection algorithm based on wireless magnetic sensor networks
paper_content:
At present, research on parking management systems based on Wireless Sensor Networks (WSN) and Anisotropic Magnetoresistive (AMR) sensors has made great progress. However, due to the diversity of vehicle magnetic signals and the interference caused by adjacent parking spaces, vehicle detection using wireless magnetic sensor networks is still immature. To accurately detect a parked vehicle in a parking lot, we propose a detection algorithm named the Relative Extremum Algorithm (REA). On the parking lot at the Shenzhen Institutes of Advanced Technology (SIAT), 82 sensor devices were deployed to evaluate the performance of the REA. By running the system for more than half a year, we observe that the vehicle detection accuracy of the REA is above 98.8%.
---
paper_title: A Parking Occupancy Detection Algorithm Based on AMR Sensor
paper_content:
Recently, with the explosive increase of automobiles in cities, parking problems have become serious and are even worsening in many cities. This paper proposes a new algorithm for parking occupancy detection based on the use of anisotropic magnetoresistive sensors. Parking occupancy detection is abstracted as a binary pattern recognition problem. According to the status of the parking space, the recognition result contains two categories: vacant and occupied. A feature extraction method for the parking magnetic signal is proposed. In addition, the classification criteria are derived based on the distance discriminant analysis method. Eighty-two sensor nodes were deployed on roadside parking spaces. By running the system for six months, we observed that the accuracy rate of the proposed parking occupancy detection algorithm is better than 98%.
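Illustrative note (a minimal sketch; the feature choice, class statistics and Euclidean distance used here are assumptions, not the paper's): distance-discriminant occupancy detection reduces each reading to a feature vector and assigns it to whichever class mean, vacant or occupied, is closer.

import numpy as np

def features(samples, baseline):
    dev = np.abs(np.asarray(samples, dtype=float) - baseline)   # deviation from the earth-field baseline
    return np.array([dev.mean(), dev.var()])

# Hypothetical class means obtained from labelled training data.
mu_vacant = np.array([2.0, 5.0])
mu_occupied = np.array([30.0, 400.0])

def classify(samples, baseline):
    f = features(samples, baseline)
    return "occupied" if np.linalg.norm(f - mu_occupied) < np.linalg.norm(f - mu_vacant) else "vacant"

rng = np.random.default_rng(1)
occupied_reading = 250 + 35 * rng.standard_normal(100)   # strongly disturbed AMR samples (a.u.)
vacant_reading = 250 + 2 * rng.standard_normal(100)      # near-baseline samples
print(classify(occupied_reading, 250.0), classify(vacant_reading, 250.0))   # occupied vacant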
---
paper_title: Traffic measurement and vehicle classification with a single magnetic sensor
paper_content:
Wireless magnetic sensor networks offer a very attractive, low-cost alternative to inductive loops for traffic measurement in freeways and at intersections. In addition to vehicle count, occupancy and speed, the sensors yield traffic information (such as vehicle classification) that cannot be obtained from loop data. Because such networks can be deployed in a very short time, they can also be used (and reused) for temporary traffic measurement. This paper reports the detection capabilities of magnetic sensors, based on two field experiments. The first experiment collected a two-hour trace of measurements on Hearst Avenue in Berkeley. The vehicle detection rate is better than 99 percent (100 percent for vehicles other than motorcycles); and estimates of vehicle length and speed appear to be better than 90 percent. Moreover, the measurements also give inter-vehicle spacing or headways, which reveal such interesting phenomena as platoon formation downstream of a traffic signal. Results of the second experiment are preliminary. Sensor data from 37 passing vehicles at the same site are processed and classified into 6 types. Sixty percent of the vehicles are classified correctly, when length is not used as a feature. The classification algorithm can be implemented in real time by the sensor node itself, in contrast to other methods based on high scan-rate inductive loop signals, which require extensive offline computation. We believe that when length is used as a feature, 80-90 percent of vehicles will be correctly classified.
---
paper_title: A Cross-Correlation Technique for Vehicle Detections in Wireless Magnetic Sensor Network
paper_content:
Vehicle detections are an important research field and attract many researchers. Most research efforts have been focused on vehicle parking detection (VPD) in indoor parking lots. For on-street parking, strong noise disturbances affect detection accuracy. To deal with vehicle detections in an on-street environment, we propose two vehicle detection algorithms based on a cross-correlation technique in wireless magnetic sensor networks. One is for VPD, and the other one is for vehicle speed estimation (VSE). The proposed VPD algorithm combines state-machine detection with cross-correlation detection. In the VSE, speed estimation is based on the calculation of the normalized cross-correlation between the signals of two sensors along the road with a certain spacing. Experimental results show that the VPD has an accuracy of 99.65% for arrival and 99.44% for departure, while the VSE has an accuracy of 92%.
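Illustrative note (synthetic signals and assumed parameters, not the paper's data): for the speed-estimation part, the lag that maximizes the normalized cross-correlation between the two sensors' signatures gives the travel time over the known spacing.

import numpy as np

fs, spacing_m, true_delay_s = 500.0, 2.0, 0.10             # assumed setup (a 20 m/s vehicle)
t = np.arange(0.0, 2.0, 1.0 / fs)
signature = np.exp(-((t - 0.5) / 0.05) ** 2)               # synthetic magnetic signature
rng = np.random.default_rng(0)
s1 = signature + 0.02 * rng.standard_normal(t.size)
s2 = np.interp(t - true_delay_s, t, signature) + 0.02 * rng.standard_normal(t.size)

a = (s1 - s1.mean()) / s1.std()
b = (s2 - s2.mean()) / s2.std()
xcorr = np.correlate(b, a, mode="full")                    # peak lag = delay of s2 w.r.t. s1
lag_s = (np.argmax(xcorr) - (a.size - 1)) / fs
print(f"estimated speed ~ {spacing_m / lag_s:.1f} m/s")     # ~20 m/s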
---
paper_title: The Development of Vehicle Position Estimation Algorithms Based on the Use of AMR Sensors
paper_content:
This paper focuses on the use of anisotropic magnetoresistive (AMR) sensors for imminent crash detection in cars. The AMR sensors are used to measure the magnetic field from another vehicle in close proximity to estimate relative position and velocity from the measurement. An analytical formulation for the relationship between magnetic field and vehicle position is developed. The challenges in the use of the AMR sensors include their nonlinear behavior, limited range, and magnetic signature levels that vary with each type of car. An adaptive filter based on the iterated extended Kalman filter (IEKF) is developed to automatically tune filter parameters for each encountered car and to reliably estimate relative car position. The use of an additional sonar sensor during the initial detection of the encountered vehicle is shown to greatly speed up the parameter convergence of the filter. Experimental results are presented from a number of tests with various vehicles to show that the proposed sensor system is viable.
---
paper_title: Vehicle detection using a magnetic field sensor
paper_content:
The measurement of vehicle magnetic moments and the results from the use of a fluxgate magnetic sensor to actuate a lighting system from the magnetic fields of passing vehicles are reported. A typical U.S. automobile has a magnetic moment of about 200 A·m² (ampere-meters squared), while for a school bus it is about 2000 A·m². When the vehicle is modeled as an ideal magnetic dipole with a moment of 200 A·m², the predicted results from an analysis of the sensor-vehicle geometry agree closely with observations of the system response to automobiles.
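Illustrative note (an on-axis point-dipole approximation, not the paper's full sensor-geometry analysis): the field scale produced by a 200 A·m² moment at a few metres is easily checked.

import math

mu0, m = 4e-7 * math.pi, 200.0             # vacuum permeability and the quoted moment in A*m^2
for r in (2.0, 3.0, 5.0):                   # distances in metres
    B = mu0 * m / (2 * math.pi * r ** 3)    # on-axis dipole field
    print(f"r = {r:.0f} m: B ~ {B * 1e6:.2f} uT")
# ~5 uT at 2 m and ~0.3 uT at 5 m, i.e. small but resolvable disturbances of the
# ~50 uT geomagnetic background for a fluxgate or magnetoresistive sensor.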
---
paper_title: VEHICLE DETECTION AND COMPASS APPLICATIONS USING AMR MAGNETIC SENSORS
paper_content:
This paper presents a review of magnetic sensing and discusses applications for magnetic sensors. Focus is on magnetic sensing in vehicle detection and navigation that is based on magnetic fields.
---
paper_title: Portable Roadside Sensors for Vehicle Counting, Classification, and Speed Measurement
paper_content:
This paper focuses on the development of a portable roadside magnetic sensor system for vehicle counting, classification, and speed measurement. The sensor system consists of wireless anisotropic magnetic devices that do not need to be embedded in the roadway; the devices are placed next to the roadway and measure traffic in the immediately adjacent lane. An algorithm based on a magnetic field model is proposed to make the system robust to the errors created by larger vehicles driving in the nonadjacent lane. These false calls cause an 8% error if uncorrected. The use of the proposed algorithm reduces this error to only 1%. Speed measurement is based on the calculation of the cross-correlation between longitudinally spaced sensors. Fast computation of the cross-correlation is enabled by using frequency-domain signal processing techniques. An algorithm for automatically correcting for any small misalignment of the sensors is utilized. A high-accuracy differential Global Positioning System is used as a reference to measure vehicle speeds to evaluate the accuracy of the speed measurement from the new sensor system. The results show that the maximum error of the speed estimates is less than 2.5% over the entire range of 5-27 m/s (11-60 mi/h). Vehicle classification is done based on the magnetic length and an estimate of the average vertical magnetic height of the vehicle. Vehicle length is estimated from the product of occupancy and estimated speed. The average vertical magnetic height is estimated using two magnetic sensors that are vertically spaced by 0.25 m. Finally, it is shown that the sensor system can be used to reliably count the number of right turns at an intersection, with an accuracy of 95%. The developed sensor system is compact, portable, wireless, and inexpensive. Data are presented from a large number of vehicles on a regular busy urban road in the Twin Cities, MN, USA.
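Illustrative note (synthetic data and assumed parameters, not the deployed system): the two computations described above can be sketched as a frequency-domain cross-correlation for speed, followed by magnetic length = estimated speed × occupancy time.

import numpy as np

fs, spacing_m = 1000.0, 0.9
t = np.arange(0.0, 1.0, 1.0 / fs)
pulse = ((t > 0.30) & (t < 0.55)).astype(float)            # vehicle over sensor 1 for ~0.25 s
rng = np.random.default_rng(0)
s1 = pulse + 0.01 * rng.standard_normal(t.size)
s2 = np.roll(pulse, int(0.05 * fs)) + 0.01 * rng.standard_normal(t.size)   # 50 ms later at sensor 2

c = np.fft.irfft(np.fft.rfft(s2) * np.conj(np.fft.rfft(s1)), n=t.size)     # circular cross-correlation
k = int(np.argmax(c))
if k > t.size // 2:
    k -= t.size                                            # map large indices to negative lags
speed = spacing_m / (k / fs)

occupancy_s = np.count_nonzero(s1 > 0.5) / fs              # time the signature exceeds a threshold
print(f"speed ~ {speed:.1f} m/s, magnetic length ~ {speed * occupancy_s:.2f} m")   # ~18 m/s, ~4.5 m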
---
paper_title: Vehicle Detection and Classification for Low-Speed Congested Traffic With Anisotropic Magnetoresistive Sensor
paper_content:
A vehicle detection and classification system has been developed based on a low-cost triaxial anisotropic magnetoresistive sensor. Considering the characteristics of vehicle magnetic detection signals, especially the signals for low-speed congested traffic in large cities, a novel fixed threshold state machine algorithm based on signal variance is proposed to detect vehicles within a single lane and segment the vehicle signals effectively according to the time information of vehicles entering and leaving the sensor monitoring area. In our experiments, five signal features are extracted, including the signal duration, signal energy, average energy of the signal, ratio of positive and negative energy of x-axis signal, and ratio of positive and negative energy of y-axis signal. Furthermore, the detected vehicles are classified into motorcycles, two-box cars, saloon cars, buses, and Sport Utility Vehicle commercial vehicles based on a classification tree model. The experimental results have shown that the detection accuracy of the proposed algorithm can reach up to 99.05% and the average classification accuracy is 93.66%, which verify the effectiveness of our algorithm for low-speed congested traffic.
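Illustrative note (window length, thresholds and data are assumed, not the paper's values): the detection step can be sketched as a fixed-threshold state machine driven by a sliding-window signal variance, which marks the entry and exit times used to segment each vehicle signature.

import numpy as np

def detect_vehicles(signal, fs, win=25, enter_thr=4.0, leave_thr=1.0):
    """Return (t_enter, t_leave) pairs; thresholds here are purely illustrative."""
    sig = np.asarray(signal, dtype=float)
    var = np.array([sig[max(0, i - win):i + 1].var() for i in range(sig.size)])
    events, state, t_enter = [], "IDLE", None
    for i, v in enumerate(var):
        if state == "IDLE" and v > enter_thr:       # variance jump: a vehicle is entering
            state, t_enter = "VEHICLE", i / fs
        elif state == "VEHICLE" and v < leave_thr:  # variance settles: the vehicle has left
            events.append((t_enter, i / fs))
            state = "IDLE"
    return events

fs = 100.0
t = np.arange(0.0, 10.0, 1.0 / fs)
x = 0.2 * np.random.default_rng(2).standard_normal(t.size)        # quiet background
mask = (t > 3.0) & (t < 4.5)
x[mask] += 8.0 * np.sin(2 * np.pi * 3.0 * t[mask])                # one passing vehicle
print(detect_vehicles(x, fs))        # roughly one event starting near t = 3 s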
---
paper_title: Wireless magnetic sensors for traffic surveillance
paper_content:
Sensys Networks' VDS240 vehicle detection system is a wireless sensor network composed of a collection of 3'' by 3'' by 2'' sensor nodes put in the center of a lane and a 6'' by 4'' by 4'' access point (AP) box placed 15' high on the side of the road. A node measures changes in the earth's magnetic field induced by a vehicle, processes the measurements to detect the vehicle, and transfers the processed data via radio to the AP. The AP combines data from the nodes into information for the local controller or the Traffic Management Center (TMC). An AP communicates via radio directly with up to 96 nodes within a range of 150'; a Repeater extends the range to 1000'. This range makes it suitable to deploy VDS240 networks for traffic counts, stop-bar and advance detection, and measurement of queue lengths on ramps and at intersections, as well as parking guidance and enforcement. VDS240 is self-calibrating, IP-addressable and remotely monitored. Data are not lost because unacknowledged data packets are retransmitted. The accuracy of VDS240 for vehicle counts, speed and occupancy is comparable to that of well-tuned loops. Because the nodes report individual vehicle events, the AP also calculates individual vehicle lengths, speeds and inter-vehicle headways--measurements that can be used for new traffic applications. In July 2007, VDS240 systems were deployed in arterials and freeways in several cities and states, and 30 customer trials were underway in the US, Australia, Europe and South Africa.
---
paper_title: Experimental study of a vehicle detector with an AMR sensor
paper_content:
This paper proposes a vehicle detector with an anisotropic magnetoresistive (AMR) sensor and presents an experimental study carried out to show the detector's characteristics and performance. The detector consists of an AMR sensor and mechanical and electronic apparatuses. Composed of four magnetoresistors, the AMR sensor senses the disturbance of the earth's magnetic field caused by a moving vehicle over the sensor itself and then produces an output indicative of the moving vehicle. Experiments have been carried out in three stages. In the first stage, the outputs of the sensor have been analyzed to show the validity of the detector's circuit and the detecting method. In the second stage, the detector has been tested on a local highway in Korea. Through the field tests, the outputs of the detector in response to various kinds of moving vehicles have been collected and analyzed. In the final stage, to verify the performance of the detector, traffic volumes on the highway have been measured with the detector and compared with the exact traffic volumes in highly congested traffic.
---
paper_title: Wireless sensor networks in intelligent transportation systems
paper_content:
Wireless sensor networks (WSNs) offer the potential to significantly improve the efficiency of existing transportation systems. Currently, collecting traffic data for traffic planning and management is achieved mostly through wired sensors. The equipment and maintenance cost and time-consuming installations of existing sensing systems prevent large-scale deployment of real-time traffic monitoring and control. Small wireless sensors with integrated sensing, computing, and wireless communication capabilities offer tremendous advantages in low cost and easy installation. In this paper, we first survey existing WSN technologies for intelligent transportation systems (ITSs), including sensor technologies, energy-efficient networking protocols, and applications of sensor networks for parking lot monitoring, traffic monitoring, and traffic control. Then, we present new methods on applying WSNs in traffic modeling and estimation and traffic control, and show their improved performance over existing solutions.
---
paper_title: Automatic vehicle classification using wireless magnetic sensor
paper_content:
This paper proposes an extension to our previous work on automatic, low-computation vehicle classification using an embedded wireless magnetic sensor. A realization of our vehicle classification on an embedded wireless magnetic sensor is studied in this work. The implementation allows real-time vehicle classification based on vehicle magnetic length, averaged energy, and Hill-pattern peaks. The system automatically detects vehicles, extracts features, and classifies them. The three features are computationally inexpensive. We classify vehicles into 4 types: motorcycle, car, pickup and van. The classification shows a promising result. It can classify motorcycles with 95% accuracy. Classification rates between 70% and 80% are achieved for car, pickup and van due to their similarity in these extracted features. The results obtained are comparable to our implementation using a PC in our previous work and demonstrate that the algorithm can be realized on the embedded wireless magnetic sensor.
---
paper_title: Magnetoresistive-based biosensors and biochips.
paper_content:
Over the past five years, magnetoelectronics has emerged as a promising new platform technology for biosensor and biochip development. The techniques are based on the detection of the magnetic fringe field of a magnetically labeled biomolecule interacting with a complementary biomolecule bound to a magnetic-field sensor. Magnetoresistive-based sensors, conventionally used as read heads in hard disk drives, have been used in combination with biologically functionalized magnetic labels to demonstrate the detection of molecular recognition. Real-world bio-applications are now being investigated, enabling tailored device design, based on sensor and label characteristics. This detection platform provides a robust, inexpensive sensing technique with high sensitivity and considerable scope for quantitative signal data, enabling magnetoresistive biochips to meet specific diagnostic needs that are not met by existing technologies.
---
paper_title: Detection of 10-nm Superparamagnetic Iron Oxide Nanoparticles Using Exchange-Biased GMR Sensors in Wheatstone Bridge
paper_content:
We demonstrated the use of exchange-biased giant magnetoresistance (GMR) sensors in Wheatstone bridge for the detection of 10-nm superparamagnetic iron oxide nanoparticles (SPIONs). The SPIONs were synthesized via coprecipitation method, exhibiting a superparamagnetic behavior with saturation magnetization of 57 emu/g. The output voltage signal of the Wheatstone bridge exhibits log-linear function of the concentration of SPIONs (from 10 ng/ml to 0.1 mg/ml), making the sensors suitable for use as a SPION concentration detector. Thus the combination of 10 nm SPIONs and the exchange-biased GMR sensors has potential to be used in the bio-detection applications where ultra-small bio-labels are needed.
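Illustrative note (the coefficients below are hypothetical, not fitted to the reported data): a log-linear response of the kind described lets the bridge output be inverted back to a SPION concentration within the stated range.

import math

a_uV, b_uV_per_decade = 250.0, 55.0        # hypothetical offset and slope of V = a + b*log10(c)

def voltage_uV(conc_ng_per_ml):
    return a_uV + b_uV_per_decade * math.log10(conc_ng_per_ml)

def concentration_ng_per_ml(v_uV):
    return 10 ** ((v_uV - a_uV) / b_uV_per_decade)

v = voltage_uV(1000.0)                      # a 1 ug/ml sample
print(f"{v:.0f} uV -> {concentration_ng_per_ml(v):.0f} ng/ml")   # round-trips to 1000 ng/ml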
---
paper_title: Matrix-insensitive protein assays push the limits of biosensors in medicine
paper_content:
Advances in biosensor technologies for in vitro diagnostics have the potential to transform the practice of medicine. Despite considerable work in the biosensor field, there is still no general sensing platform that can be ubiquitously applied to detect the constellation of biomolecules in diverse clinical samples (for example, serum, urine, cell lysates or saliva) with high sensitivity and large linear dynamic range. A major limitation confounding other technologies is signal distortion that occurs in various matrices due to heterogeneity in ionic strength, pH, temperature and autofluorescence. Here we present a magnetic nanosensor technology that is matrix insensitive yet still capable of rapid, multiplex protein detection with resolution down to attomolar concentrations and extensive linear dynamic range. The matrix insensitivity of our platform to various media demonstrates that our magnetic nanosensor technology can be directly applied to a variety of settings such as molecular biology, clinical diagnostics and biodefense.
---
paper_title: Magnetic Nanoparticle Sensors
paper_content:
Many types of biosensors employ magnetic nanoparticles (diameter = 5–300 nm) or magnetic particles (diameter = 300–5,000 nm) which have been surface functionalized to recognize specific molecular targets. Here we cover three types of biosensors that employ different biosensing principles, magnetic materials, and instrumentation. The first type consists of magnetic relaxation switch assay-sensors, which are based on the effects magnetic particles exert on water proton relaxation rates. The second type consists of magnetic particle relaxation sensors, which determine the relaxation of the magnetic moment within the magnetic particle. The third type is magnetoresistive sensors, which detect the presence of magnetic particles on the surface of electronic devices that are sensitive to changes in magnetic fields on their surface. Recent improvements in the design of magnetic nanoparticles (and magnetic particles), together with improvements in instrumentation, suggest that magnetic material-based biosensors may become widely used in the future.
---
paper_title: Integration of Magnetoresistive Biochips on a CMOS Circuit
paper_content:
Since 2006, fully scalable matrix-based magnetoresistive biochips have been proposed. This integration was initially achieved with thin film switching devices and moved to complementary metal-oxide-semiconductor (CMOS) switching devices and electronics. In this paper, a new microfabrication process is proposed to integrate magnetoresistive sensors on a small CMOS chip (4 mm²). This chip includes a current generator, multiplexers, and a diode in series with a spin valve as the matrix element. In this configuration, it is shown that the fabricated spin valves have similar magnetic characteristics when compared to standalone spin valves. This validates the success of the developed microfabrication process. The noise of each matrix element is further characterized and compared to the noise of a standalone spin valve and of a portable electronic platform designed to perform biological assays. Although the noise is still higher, the spin valve integrated on the CMOS chip enables an increase in density and compactness of the measuring electronics.
---
paper_title: Magnetic Nanoparticles for Magnetoresistance-Based Biodetection
paper_content:
Magnetic nanoparticles (MNPs) have been studied widely as a powerful diagnostic probe and therapeutic agent for biomedical applications. In recent years, they have also been found to be readily detectable by magnetoresistive (MR) devices, and MNP-MR biochips are predicted to be more affordable, portable and sensitive than conventional optical detection methods. In this MNP-MR biochip design, MNP probes are required to have a high magnetic moment and high susceptibility, and to be target-specific. This review summarizes recent advances in the chemical synthesis and functionalization of MNPs with controlled magnetic properties for sensitive MR detection and for bio-sensing applications.
---
paper_title: Biological applications of magnetic nanoparticles.
paper_content:
In this review an overview about biological applications of magnetic colloidal nanoparticles will be given, which comprises their synthesis, characterization, and in vitro and in vivo applications. The potential future role of magnetic nanoparticles compared to other functional nanoparticles will be discussed by highlighting the possibility of integration with other nanostructures and with existing biotechnology as well as by pointing out the specific properties of magnetic colloids. Current limitations in the fabrication process and issues related with the outcome of the particles in the body will be also pointed out in order to address the remaining challenges for an extended application of magnetic nanoparticles in medicine.
---
paper_title: Detection of Magnetically Labelled Microcarriers for Suspension Based Bioassay Technologies
paper_content:
Microarrays and suspension-based assay technologies have attracted significant interest over the past decade with applications ranging from medical diagnostics to high throughput molecular biology. The throughput and sensitivity of a microarray will always be limited by the array density and slow reaction kinetics. Suspension (or bead) based technologies offer a conceptually different approach, improving detection by substituting a fixed plane of operation with millions of microcarriers. However, these technologies are currently limited by the number of unique labels that can be generated in order to identify the molecular probes on the surface. We have proposed a novel suspension-based technology that utilizes patterned magnetic films for the purpose of generating a writable label. The microcarriers consist of an SU-8 substrate that can be functionalized with various chemical or biological probes and magnetic elements, which are individually addressable by a magnetic sensor. The magnetization of each element is aligned in one of two stable directions, thereby acting as a magnetic bit. In order to detect the stray field and identify the magnetic labels, we have developed a microfluidic device with an integrated tunneling magnetoresistive (TMR) sensor, sourced from Micro Magnetics Inc. We present the TMR embedding architecture as well as detection results demonstrating the feasibility of magnetic labeling for lab-on-a-chip applications.
---
paper_title: Giant magnetoresistive biochip for DNA detection and HPV genotyping.
paper_content:
A giant magnetoresistive (GMR) biochip based on spin valve sensor array and magnetic nanoparticle labels was developed for inexpensive, sensitive and reliable DNA detection. The DNA targets detected in this experiment were PCR products amplified from Human Papillomavirus (HPV) plasmids. The concentrations of the target DNA after PCR were around 10 nM in most cases, but concentrations of 10 pM were also detectable, which is demonstrated by experiments with synthetic DNA samples. A mild but highly specific surface chemistry was used for probe oligonucleotide immobilization. Double modulation technique was used for signal detection in order to reduce the 1/f noise in the sensor. Twelve assays were performed with an accuracy of approximately 90%. Magnetic signals were consistent with particle coverage data measured with Scanning Electron Microscopy (SEM). More recent research on microfluidics showed the potential of reducing the assay time below one hour. This is the first demonstration of magnetic DNA detection using plasmid-derived samples. This study provides a direct proof that GMR sensors can be used for biomedical applications.
---
paper_title: Magnetoresistive performance and comparison of supermagnetic nanoparticles on giant magnetoresistive sensor-based detection system
paper_content:
Giant magnetoresistive (GMR) biosensors have emerged as powerful tools for ultrasensitive, multiplexed, real-time electrical readout and rapid biological/chemical detection when combined with magnetic particles. Finding appropriate magnetic nanoparticles (MNPs) and understanding their influence on the detection signal is a vital aspect of GMR bio-sensing technology. Here, we report a GMR sensor-based detection system capable of stable and convenient connection and real-time measurement. Five different types of MNPs with sizes ranging from 10 to 100 nm were investigated for GMR biosensing. The experiments were accomplished with the aid of a DNA hybridization and detection architecture on the GMR sensor surface. We found that different MNPs markedly affected the final detection signal, depending on characteristics such as their magnetic moment, size, and surface-based binding ability. This work may provide useful guidance in selecting or preparing MNPs to enhance the sensitivity of GMR biosensors, and eventually lead to a versatile and portable device for molecular diagnostics.
---
paper_title: Quantitative detection of DNA labeled with magnetic nanoparticles using arrays of MgO-based magnetic tunnel junction sensors
paper_content:
We have demonstrated the detection of 2.5 μM target DNA labeled with 16 nm Fe3O4 nanoparticles (NPs) using arrays of magnetic tunnel junction sensors with (001)-oriented MgO barrier layers. An MTJ sensor bridge was designed to detect the presence of magnetic NPs bonded with target DNA. A raw signal of 72 μV was obtained using complementary target DNA, as compared with a nonspecific bonding signal of 25 μV from noncomplementary control DNA. Our results indicate that the current system's detection limit for analyte DNA is better than 100 nM.
---
paper_title: Advances in Giant Magnetoresistance Biosensors With Magnetic Nanoparticle Tags: Review and Outlook
paper_content:
We present a review of giant magnetoresistance (GMR) spin valve sensors designed for detection of magnetic nanoparticles as biomolecular labels (nanotags) in magneto-nano biodetection technology. We discuss the intricacy of magneto-nano biosensor design and show that as few as approximately 14 monodisperse 16-nm superparamagnetic nanoparticles can be detected by submicron spin valve sensors at room temperature without resorting to lock-in (narrow band) detection. GMR biosensors and biochips have been successfully applied to the detection of biological events in the form of both protein and DNA assays with great speed, sensitivity, selectivity, and economy. The limit of molecular detection is well below 10 pM in concentration, and the protein or DNA assay time can be under two hours. The technology is highly scalable to deep multiplex detection of biomarkers in a complex disease, and amenable to integration of microfluidics and CMOS electronics for portable applications. On-chip CMOS circuitry makes a sensor density of 0.1-1 million sensors per square centimeter feasible and affordable. The theoretical and experimental results thus far suggest that magneto-nano biochip-based GMR sensor arrays and nanotags hold great promise in biomedicine, particularly for point-of-care molecular diagnostics of cancer, infectious diseases, radiation injury, cardiac diseases, and other diseases.
---
paper_title: Internet of Things (IoT): A Vision, Architectural Elements, and Future Directions
paper_content:
Ubiquitous sensing enabled by Wireless Sensor Network (WSN) technologies cuts across many areas of modern day living. This offers the ability to measure, infer and understand environmental indicators, from delicate ecologies and natural resources to urban environments. The proliferation of these devices in a communicating-actuating network creates the Internet of Things (IoT), wherein sensors and actuators blend seamlessly with the environment around us, and the information is shared across platforms in order to develop a common operating picture (COP). Fueled by the recent adaptation of a variety of enabling wireless technologies such as RFID tags and embedded sensor and actuator nodes, the IoT has stepped out of its infancy and is the next revolutionary technology in transforming the Internet into a fully integrated Future Internet. As we move from www (static pages web) to web2 (social networking web) to web3 (ubiquitous computing web), the need for data-on-demand using sophisticated intuitive queries increases significantly. This paper presents a Cloud centric vision for worldwide implementation of Internet of Things. The key enabling technologies and application domains that are likely to drive IoT research in the near future are discussed. A Cloud implementation using Aneka, which is based on interaction of private and public Clouds is presented. We conclude our IoT vision by expanding on the need for convergence of WSN, the Internet and distributed computing directed at technological research community.
---
paper_title: The History of WiMAX: A Complete Survey of the Evolution in Certification and Standardization for IEEE 802.16 and WiMAX
paper_content:
Most researchers are familiar with the technical features of WiMAX technology but the evolution that WiMAX went through, in terms of standardization and certification, is missing and unknown to most people. Knowledge of this historical process would however aid to understand how WiMAX has become the widespread technology that it is today. Furthermore, it would give insight in the steps to undertake for anyone aiming at introducing a new wireless technology on a worldwide scale. Therefore, this article presents a survey on all relevant activities that took place within three important organizations: the 802.16 Working Group of the IEEE (Institute of Electrical and Electronics Engineers) for technology development and standardization, the WiMAX Forum for product certification and the ITU (International Telecommunication Union) for international recognition. An elaborated and comprehensive overview of all those activities is given, which reveals the importance of the willingness to innovate and to continuously incorporate new ideas in the IEEE standardization process and the importance of the WiMAX Forum certification label granting process to ensure interoperability. We also emphasize the steps that were taken in cooperating with the ITU to improve the international esteem of the technology. Finally, a WiMAX trend analysis is made. We showed how industry interest has fluctuated over time and quantified the evolution in WiMAX product certification and deployments. It is shown that most interest went to the 2.5 GHz and 3.5 GHz frequencies, that most deployments are in geographic regions with a lot of developing countries and that the highest people coverage is achieved in Asia Pacific. This elaborated description of all standardization and certification activities, from the very start up to now, will make the reader comprehend how past and future steps are taken in the development process of new WiMAX features.
---
paper_title: Future internet: The Internet of Things
paper_content:
Nowadays, the main form of communication on the Internet is human-to-human. However, it is foreseeable that in the near future any object will have a unique means of identification and can be addressed, so that every object can be connected. The Internet will evolve into the Internet of Things. The forms of communication will expand from human-human to human-human, human-thing and thing-thing (also called M2M). This will bring a new era of ubiquitous computing and communication and profoundly change people's lives. Radio Frequency Identification (RFID) and related identification technologies will be the cornerstones of the upcoming Internet of Things (IoT). This paper aims to sketch a skeleton of the Internet of Things and to address some of its essential issues, such as its architecture and interoperability. We begin with an overview of the Internet of Things. We then give our proposed architecture design for the Internet of Things and design a specific Internet of Things application model that can be applied to automatic facilities management on a smart campus. Finally, we discuss some open questions about the Internet of Things.
---
paper_title: Research on the architecture of Internet of Things
paper_content:
The Internet of Things is a technological revolution that represents the future of computing and communications. It is not a simple extension of the Internet or of the telecommunications network: it has the features of both the Internet and the telecommunications network, and also has its own distinguishing features. By analysing the currently accepted three-layer structure of the Internet of Things, we suggest that this three-layer structure cannot express the full features and connotations of the Internet of Things. After reanalysing the technical framework of the Internet and the Logical Layered Architecture of the Telecommunication Management Network, we establish a new five-layer architecture for the Internet of Things. We believe this architecture is more helpful for understanding the essence of the Internet of Things, and we hope it will aid the development of the Internet of Things.
---
paper_title: The Internet of Things: A survey
paper_content:
This paper addresses the Internet of Things. The main enabling factor of this promising paradigm is the integration of several technologies and communications solutions. Identification and tracking technologies, wired and wireless sensor and actuator networks, enhanced communication protocols (shared with the Next Generation Internet), and distributed intelligence for smart objects are just the most relevant. As one can easily imagine, any serious contribution to the advance of the Internet of Things must necessarily be the result of synergetic activities conducted in different fields of knowledge, such as telecommunications, informatics, electronics and social science. In such a complex scenario, this survey is directed to those who want to approach this complex discipline and contribute to its development. Different visions of this Internet of Things paradigm are reported and enabling technologies reviewed. What emerges is that major issues must still be faced by the research community. The most relevant among them are addressed in detail.
---
paper_title: Wireless sensor networks: a survey
paper_content:
This paper describes the concept of sensor networks which has been made viable by the convergence of micro-electro-mechanical systems technology, wireless communications and digital electronics. First, the sensing tasks and the potential sensor networks applications are explored, and a review of factors influencing the design of sensor networks is provided. Then, the communication architecture for sensor networks is outlined, and the algorithms and protocols developed for each layer in the literature are explored. Open research issues for the realization of sensor networks are also discussed.
---
paper_title: Future Internet: The Internet of Things Architecture, Possible Applications and Key Challenges
paper_content:
The Internet is continuously changing and evolving. The main form of communication on the present Internet is human-to-human. The Internet of Things (IoT) can be considered the future evolution of the Internet that realizes machine-to-machine (M2M) communication. Thus, the IoT provides connectivity for everyone and everything. The IoT embeds some intelligence in Internet-connected objects to communicate, exchange information, take decisions, invoke actions and provide amazing services. This paper addresses the existing development trends, the generic architecture of the IoT, its distinguishing features and possible future applications. This paper also forecasts the key challenges associated with the development of the IoT. The IoT is gaining increasing popularity in academia, industry and government, and has the potential to bring significant personal, professional and economic benefits.
---
paper_title: Study and application on the architecture and key technologies for IOT
paper_content:
The IoT represents the third scientific and economic wave in the global information industry, after the computer and the Internet; it has attracted great attention from governments, enterprises and academia, and has opened a huge new market for the communication industry. At present, major global operators and equipment suppliers have begun to provide M2M services and solutions. This paper introduces the concept of the IoT and analyses its structure: the perception layer, the network layer and the application layer. It sets forth the key technologies of the IoT, such as RFID and network communication. Finally, it discusses the future development and reform trends of the IoT.
---
paper_title: Perpetual and low-cost power meter for monitoring residential and industrial appliances
paper_content:
The recent research efforts in smart grids and residential power management are oriented towards pervasively monitoring the power consumption of appliances in domestic and non-domestic buildings. Knowing the status of a residential grid is fundamental to keeping reliability levels high, while real-time monitoring of electric appliances is important to minimize power waste in buildings and to lower the overall energy cost. Wireless Sensor Networks (WSNs) are a key enabling technology for this application field because they consist of low-power, non-invasive and cost-effective intelligent sensor devices. We present a wireless current sensor node (WCSN) for measuring the current drawn by single appliances. The node can self-sustain its operation by harvesting energy from the monitored current. Two AAA batteries are used only as a secondary power supply to guarantee a fast start-up of the system. An active ORing subsystem automatically selects the suitable power source, minimizing the power losses typical of the classic diode configuration. The node harvests energy when the power consumed by the device under measurement is in the range 10 W–10 kW, which corresponds to a current range of 50 mA–50 A drawn directly from the mains. Finally, the node features a low-power, 32-bit microcontroller for data processing and a wireless transceiver to send data via the IEEE 802.15.4 standard protocol.
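Illustrative note (assuming a ~220 V RMS single-phase mains, which is this note's assumption rather than a figure stated above): the quoted current window maps onto the quoted power window.

V_mains = 220.0                     # assumed RMS mains voltage
for I in (0.05, 50.0):              # the 50 mA and 50 A limits quoted in the abstract
    print(f"I = {I:g} A -> P ~ {V_mains * I:,.0f} W")   # ~11 W and ~11,000 W, i.e. roughly the 10 W-10 kW range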
---
paper_title: Wireless sensor networks: a survey
paper_content:
This paper describes the concept of sensor networks which has been made viable by the convergence of micro-electro-mechanical systems technology, wireless communications and digital electronics. First, the sensing tasks and the potential sensor networks applications are explored, and a review of factors influencing the design of sensor networks is provided. Then, the communication architecture for sensor networks is outlined, and the algorithms and protocols developed for each layer in the literature are explored. Open research issues for the realization of sensor networks are also discussed.
---
paper_title: Energy conservation in wireless sensor networks: A survey
paper_content:
In recent years, wireless sensor networks (WSNs) have gained increasing attention from both the research community and actual users. As sensor nodes are generally battery-powered devices, the critical aspect to address is how to reduce the energy consumption of nodes, so that the network lifetime can be extended to reasonable times. In this paper we first break down the energy consumption for the components of a typical sensor node, and discuss the main directions to energy conservation in WSNs. Then, we present a systematic and comprehensive taxonomy of the energy conservation schemes, which are subsequently discussed in depth. Special attention has been devoted to promising solutions which have not yet received wide attention in the literature, such as techniques for energy-efficient data acquisition. Finally, we conclude the paper with insights for research directions about energy conservation in WSNs.
---
paper_title: A Nonintrusive Power Supply Design for Self-Powered Sensor Networks in the Smart Grid by Scavenging Energy From AC Power Line
paper_content:
An advanced metering and monitoring system based on autonomous, ubiquitous and maintenance-free wireless sensor networks is of great significance to the smart grid. However, the power supply for sensor nodes (especially those installed on the high-voltage side) remains one of the most challenging issues. To date, miniaturized, reliable, low-cost and flexible designs catering to the massive application of self-powered sensor nodes in the smart grid are still limited. This paper presents a nonintrusive design of power supply to support the sensor network applied in the smart grid. Using a cantilever-structured magnetoelectric (ME) composite, the energy harvester is able to scavenge energy from the power-frequency (50 Hz) magnetic field distributed around the transmission line. Design considerations for this specific type of scavenger have been discussed, and optimized energy harvester prototypes have been fabricated, which are further tested on a power line platform. Experimental results show that the single-cell and double-cell energy harvesters are capable of producing 0.62 mW and 1.12 mW at 10 A, respectively, while the corresponding power outputs are enhanced to 4.11 mW and 9.40 mW at 40 A. The good energy harvesting ability of this particular ME composite indicates its great potential to make a nonintrusive, miniaturized, flexible and cost-effective power supply, which possesses great application prospects in the smart grid.
---
paper_title: Routing Techniques in Wireless Sensor Networks: A Survey
paper_content:
Wireless sensor networks consist of small nodes with sensing, computation, and wireless communications capabilities. Many routing, power management, and data dissemination protocols have been specifically designed for WSNs where energy awareness is an essential design issue. Routing protocols in WSNs might differ depending on the application and network architecture. In this article we present a survey of state-of-the-art routing techniques in WSNs. We first outline the design challenges for routing protocols in WSNs followed by a comprehensive survey of routing techniques. Overall, the routing techniques are classified into three categories based on the underlying network structure: flat, hierarchical, and location-based routing. Furthermore, these protocols can be classified into multipath-based, query-based, negotiation-based, QoS-based, and coherent-based depending on the protocol operation. We study the design trade-offs between energy and communication overhead savings in every routing paradigm. We also highlight the advantages and performance issues of each routing technique. The article concludes with possible future research areas.
---
paper_title: Minimum-energy mobile wireless networks revisited
paper_content:
We propose a protocol that, given a communication network, computes a subnetwork such that, for every pair $(u,v)$ of nodes connected in the original network, there is a minimum-energy path between $u$ and $v$ in the subnetwork (where a minimum-energy path is one that allows messages to be transmitted with a minimum use of energy). The network computed by our protocol is in general a subnetwork of the one computed by the protocol given in [13]. Moreover, our protocol is computationally simpler. We demonstrate the performance improvements obtained by using the subnetwork computed by our protocol through simulation.
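The protocol itself is not reproduced in this abstract. As a rough illustration of the underlying idea (keeping, for every node, a path that minimizes total transmission energy), the sketch below runs Dijkstra's algorithm over per-hop energy costs; the topology, node names and energy values are invented for the example and are not taken from the paper.

```python
import heapq

def min_energy_paths(graph, source):
    """Dijkstra over per-hop energy costs: minimum total transmission
    energy from `source` to every reachable node."""
    energy = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        e, u = heapq.heappop(heap)
        if e > energy.get(u, float("inf")):
            continue  # stale heap entry
        for v, hop_energy in graph.get(u, []):
            cand = e + hop_energy
            if cand < energy.get(v, float("inf")):
                energy[v] = cand
                heapq.heappush(heap, (cand, v))
    return energy

# Hypothetical topology: node -> list of (neighbour, per-hop transmission energy in mJ)
graph = {
    "A": [("B", 2.0), ("C", 5.0)],
    "B": [("A", 2.0), ("C", 1.5), ("D", 4.0)],
    "C": [("A", 5.0), ("B", 1.5), ("D", 1.0)],
    "D": [("B", 4.0), ("C", 1.0)],
}
print(min_energy_paths(graph, "A"))  # A->C costs 3.5 mJ via B, cheaper than the 5.0 mJ direct link
```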
---
paper_title: Integration of plug-in hybrid electric vehicles into energy and comfort management for smart building
paper_content:
The smart building and plug-in hybrid electric vehicle (PHEV) are two promising technologies. The integration of these two emerging technologies holds great promises in improving the power supply reliability and the flexibility of building energy and comfort management. The overall control goal of the smart building is to maximize the customer comfort with minimum power consumption. In this study, multi-agent technology coupled with particle swarm optimization (PSO) is proposed to address the control challenge. The proper aggregation of a number of PHEVs turns out to be able to provide both capacity and energy to make the building more economical and more reliable by impacting the building energy flow. Case studies and simulation results are presented and discussed in the paper.
---
paper_title: Multi-port topology for composite energy storage and its control strategy in micro-grid
paper_content:
The energy storage system (ESS) plays a significant role in the micro-grid, but a single energy storage technology cannot meet the comprehensive requirements of micro-grid applications. For technical and economic reasons, composite energy storage is currently a better choice, since it can combine the suitable storage technologies and exploit the strengths of each to meet the customer's demand. The topology for composite energy storage is therefore very important. Typical topologies that can be used for composite energy storage are compared, upon which a novel multi-port topology based on DC-link interfacing and a magnetic coupling method is proposed for composite energy storage applications. The multi-port topology allows different kinds of energy storage and/or micro-sources to be connected, with energy controlled to flow bi-directionally, into and out of the storage. An economical inverter based on a three-phase four-switch topology is also introduced for the micro-grid interface of the composite energy storage, featuring low cost, easy control and suitability for energy storage applications. A double-layer control strategy is designed for the composite energy storage: the upper layer is responsible for energy management, where a knowledge-based energy management method is used to coordinate the different energy storages, while in the lower layer a phase-shift PWM method is used to control energy flow between the different ports. Simulation experiments verified the feasibility of the proposed approach.
---
paper_title: Noncontact Power Meter
paper_content:
Energy metering is increasingly important in today's power grid. With real-time power meters, utilities can efficiently incorporate renewables and consumers can tailor their demand accordingly. Several high-profile attempts at delivering realtime energy analytics to users, including Google Power Meter and Microsoft Hohm, have essentially failed because of a lack of sufficient richness and access to data at adequate bandwidth for reasonable cost. High performance meters can provide adequate data, but require custom installation at prohibitive expense, e.g., requiring an electrician for installation. This paper presents hardware and signal processing algorithms that enable high bandwidth measurements of voltage, current, harmonics, and power on an aggregate electrical service (such as a residential powerline) for nonintrusive analysis with hardware that requires no special skill or safety considerations for installation.
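The signal-processing details are not given in this abstract; as a minimal sketch of the standard quantities such a meter reports, the code below computes RMS voltage and current, real and apparent power, and power factor from synchronously sampled waveforms. The sampling rate and the synthetic 60 Hz waveforms are assumptions made for the example.

```python
import numpy as np

def power_metrics(v, i):
    """Basic AC metrics from synchronously sampled voltage and current arrays."""
    v_rms = np.sqrt(np.mean(v ** 2))
    i_rms = np.sqrt(np.mean(i ** 2))
    p_real = np.mean(v * i)              # average instantaneous power (W)
    s_apparent = v_rms * i_rms           # apparent power (VA)
    pf = p_real / s_apparent if s_apparent else 0.0
    return v_rms, i_rms, p_real, s_apparent, pf

# Synthetic example: 120 V, 60 Hz line feeding a 10 A load lagging by 30 degrees
fs, f = 10_000, 60.0
t = np.arange(0.0, 0.5, 1.0 / fs)
v = 120 * np.sqrt(2) * np.sin(2 * np.pi * f * t)
i = 10 * np.sqrt(2) * np.sin(2 * np.pi * f * t - np.pi / 6)
print(power_metrics(v, i))               # real power close to 120*10*cos(30 deg), about 1039 W
```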
---
paper_title: Photovoltaic and wind energy systems monitoring and building/home energy management using ZigBee devices within a smart grid
paper_content:
The existing electric grid was developed to deliver electricity to customers from centralized generation, so with large-scale distributed renewable generation there is an urgent need for a more flexible, reliable and smarter grid. Wireless technologies are becoming an important asset in the smart grid, particularly ZigBee devices. These smart devices are gaining increased acceptance, not only for building and home automation, but also for energy management, efficiency optimization and metering services, being able to operate for long periods of time without maintenance needs. In this context, this paper provides new comprehensive field tests using open source tools with ZigBee technologies for monitoring photovoltaic and wind energy systems, and also for building and home energy management. Our experimental results demonstrate the proficiency of ZigBee devices applied in distributed renewable generation and smart metering systems.
---
paper_title: A review on optimized control systems for building energy and comfort management of smart sustainable buildings
paper_content:
Buildings all around the world consume a significant amount of energy, which is more or less one-third of the total primary energy resources. This has raised concerns over energy supplies, rapid energy resource depletion, rising building service demands, improved comfort life styles along with the increased time spent in buildings; consequently, this has shown a rising energy demand in the near future. However, contemporary buildings’ energy efficiency has been fast tracked solution to cope/limit the rising energy demand of this sector. Building energy efficiency has turned out to be a multi-faceted problem, when provided with the limitation for the satisfaction of the indoor comfort index. However, the comfort level for occupants and their behavior have a significant effect on the energy consumption pattern. It is generally perceived that energy unaware activities can also add one-third to the building’s energy performance. Researchers and investigators have been working with this issue for over a decade; yet it remains a challenge. This review paper presents a comprehensive and significant research conducted on state-of-the-art intelligent control systems for energy and comfort management in smart energy buildings (SEB’s). It also aims at providing a building research community for better understanding and up-to-date knowledge for energy and comfort related trends and future directions. The main table summarizes 121 works closely related to the mentioned issue. Key areas focused on include comfort parameters, control systems, intelligent computational methods, simulation tools, occupants’ behavior and preferences, building types, supply source considerations and countries research interest in this sector. Trends for future developments and existing research in this area have been broadly studied and depicted in a graphical layout. In addition, prospective future advancements and gaps have also been discussed comprehensively.
---
paper_title: A novel current sensor for home energy use monitoring
paper_content:
This paper presents a novel magnetic sensor array based technique for measuring currents in a group of enclosed conductors. It is designed specifically for monitoring the real-time power consumption of North American homes. The technique consists of three key components: magnetic field sensors deployed in close proximity to the power conductors to be measured; algorithms to compute the conductor currents based on the magnetic fields measured; and a simple, integrated sensor calibration and communication scheme. Prototype devices have been developed based on the technique. Extensive lab and field tests have demonstrated that the technique can provide adequate current measurements for residential homes. Combined with non-intrusive load monitoring methods, the proposed measurement technique represents an attractive platform for creating a complete home energy use tracking system.
---
paper_title: Demand Side Management: Demand Response, Intelligent Energy Systems, and Smart Loads
paper_content:
Energy management means to optimize one of the most complex and important technical creations that we know: the energy system. While there is plenty of experience in optimizing energy generation and distribution, it is the demand side that receives increasing attention by research and industry. Demand Side Management (DSM) is a portfolio of measures to improve the energy system at the side of consumption. It ranges from improving energy efficiency by using better materials, over smart energy tariffs with incentives for certain consumption patterns, up to sophisticated real-time control of distributed energy resources. This paper gives an overview and a taxonomy for DSM, analyzes the various types of DSM, and gives an outlook on the latest demonstration projects in this domain.
---
paper_title: Getting to green: understanding resource consumption in the home
paper_content:
Rising global energy demands, increasing costs and limited natural resources mean that householders are more conscious about managing their domestic resource consumption. Yet, the question of what tools Ubicomp researchers can create for residential resource management remains open. To begin to address this omission, we present a qualitative study of 15 households and their current management practices around the water, electricity and natural gas systems in the home. We find that in-the-moment resource consumption is mostly invisible to householders and that they desire more real-time information to help them save money, keep their homes comfortable and be environmentally friendly. Designing for domestic sustainability therefore turns on improving the visibility of resource production and consumption costs as well as supporting both individuals and collectives in behavior change. Domestic sustainability also highlights the caveat of potentially creating a green divide by making resource management available only to those who can afford the technologies to support being green. Finally, we suggest that the Ubicomp community can contribute to the domestic and broader sustainability agenda by incorporating green values in designs and highlight the challenge of collecting data on being green.
---
paper_title: Towards a zero-configuration wireless sensor network architecture for smart buildings
paper_content:
Today's buildings account for a large fraction of our energy consumption. In an effort to economize scarce fossil fuels on earth, sensor networks are a valuable tool to increase the energy efficiency of buildings without severely reducing our quality of life. Within a smart building many sensors and actuators are interconnected to form a control system. Nowadays, the deployment of a building control system is complicated because of different communication standards. In this paper, we present a web services-based approach to integrate resource constrained sensor and actuator nodes into IP-based networks. A key feature of our approach is its capability for automatic service discovery. For this purpose, we implemented an API to access services on sensor nodes following the architectural style of representational state transfer (REST). We implemented a prototype application based on TinyOS 2.1 on a custom sensor node platform with 8 Kbytes of RAM and an IEEE 802.15.4 compliant radio transceiver.
---
paper_title: Development of an energy monitoring system for large public buildings
paper_content:
Building energy consumption is an important component of the total social energy consumption, especially for large public buildings in China. Building energy conservation is one valid method for increasing energy efficiency. In order to characterize the status of energy consumption in large public buildings, such as supermarkets, government office buildings, hospitals and campus buildings, an Internet-based energy monitoring system was developed. The implementation of this system is introduced in detail, including the principle of selecting monitoring points at the bottom layer, the design of a data collector with a storage function that avoids data loss caused by network faults, and the development of the database and application software at the top layer. The monitoring platform releases the energy consumption data on the web, so that authorized users can access the data at any time and from anywhere through the Internet. In addition, the data display system presents information in the form of graphs or tables, so users can choose any kind of data they want to see, either instant values uploaded every 5 min or historic data uploaded every hour, day and month. The platform has been used in some large public buildings in Liaoning Province in China over two years, and the results confirm its feasibility and validity.
---
paper_title: Intelligent Management Systems for Energy Efficiency in Buildings: A Survey
paper_content:
In recent years, reduction of energy consumption in buildings has increasingly gained interest among researchers mainly due to practical reasons, such as economic advantages and long-term environmental sustainability. Many solutions have been proposed in the literature to address this important issue from complementary perspectives, which are often hard to capture in a comprehensive manner. This survey article aims at providing a structured and unifying treatment of the existing literature on intelligent energy management systems in buildings, with a distinct focus on available architectures and methodology supporting a vision transcending the well-established smart home vision, in favor of the novel Ambient Intelligence paradigm. Our exposition will cover the main architectural components of such systems, beginning with the basic sensory infrastructure, moving on to the data processing engine where energy-saving strategies may be enacted, to the user interaction interface subsystem, and finally to the actuation infrastructure necessary to transfer the planned modifications to the environment. For each component, we will analyze different solutions, and we will provide qualitative comparisons, also highlighting the impact that a single design choice can have on the rest of the system.
---
paper_title: Sensors in Intelligent Buildings
paper_content:
We've all heard of `sick-building' syndrome and the misery this can inflict in the workplace in terms of poor health and lost production. The notion of the Intelligent Building is the modern civil engineer's Big Idea in tackling these and other such deficiencies. The intelligent building can adapt itself to maintain an optimized environment. This ability relies on sensors as a front-line technology - the subject of this volume. Gassmann and Meixner tackle the subject matter by using five categories of intelligent building technology: energy and HVAC (heating, ventilation and air conditioning), information and transportation, safety and security, maintenance, and facility management. These categories of home and building technology are intended to encompass domestic as well as workplace and public environments, but as the introduction states, the breakthrough into the domestic market is not quite here yet. They have targeted, successfully in this reviewer's opinion, the researcher and designer in the field who has his or her own specific interest in sensor issues. Each section of the book contains a number of articles contributed from predominantly European, particularly German and Swiss, research institutions. They cover subjects as diverse as inflatable buildings and a combination of a sensing chair and sensing floor that allows the exact interaction of a human user with his/her immediate surroundings to be integrated within an information system. A fascinating item on biometric access security systems brings to life the `spy-thriller' world of automatic iris and fingerprint scanners as a means of secure access to key-less buildings. The discussion extends to threats to such systems whereby `replay' attacks, for example presenting a face recognition system with a photograph of an individual, reveal the necessary further steps if the technology is to be considered safe. Inevitably though, it is the massive strides in communications and processing technology that are visible in the contributions to this volume that point to where most of the advances will come. The massive interconnection issue of numerous sensors, the `nervous system' of the building is approached in items covering the complexities of fieldbus systems and the use of `wireless' in-building networks. This well presented volume with good quality illustration ends in an investigation of system technologies for the automated home of the near future. Presence-controlled heating and air-conditioning, telemonitoring of the sick or elderly, gas and fire detection and of course a high data rate communication backbone to link everything: all these could feature in the future household. We shall see. Peter Foote
---
paper_title: Current and Voltage Reconstruction From Non-Contact Field Measurements
paper_content:
Non-contact electromagnetic field sensors can monitor voltage and current in multiple-conductor cables from a distance. Knowledge of the cable and sensor geometry is generally required to determine the transformation that recovers voltages and currents from the sensed electromagnetic fields. This paper presents a new calibration technique that enables the use of non-contact sensors without the prior knowledge of conductor geometry. Calibration of the sensors is accomplished with a reference load or through observation of in situ loads.
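The calibration procedure is only described qualitatively here. The sketch below illustrates the general idea under simplifying assumptions: a linear map from non-contact field readings to conductor currents is learned from a known reference load by least squares and then applied to new readings. The sensor count, mixing coefficients and currents are all fabricated for illustration and are not the paper's data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Unknown geometry-dependent mixing: 4 field sensors each see both conductor currents.
true_mix = np.array([[1.2, 0.3],
                     [0.4, 1.1],
                     [0.9, 0.8],
                     [0.2, 1.5]])                      # (sensors x conductors)

# Calibration phase: drive known reference currents and record the sensor outputs.
i_ref = rng.uniform(-10, 10, size=(200, 2))            # known currents (samples x conductors)
fields = i_ref @ true_mix.T + 0.01 * rng.standard_normal((200, 4))

# Recover the field-to-current transformation by least squares.
recover, *_ = np.linalg.lstsq(fields, i_ref, rcond=None)   # (sensors x conductors)

# Operating phase: estimate unknown currents from new field measurements only.
i_new = np.array([[3.0, -7.0]])
new_fields = i_new @ true_mix.T
print(new_fields @ recover)                            # approximately [3.0, -7.0]
```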
---
paper_title: An integrated system for buildings’ energy-efficient automation: Application in the tertiary sector
paper_content:
Although integrated building automation systems have become increasingly popular, an integrated system which includes remote control technology to enable real-time monitoring of the energy consumption by energy end-users, as well as optimization functions is required. To respond to this common interest, the main aim of the paper is to present an integrated system for buildings’ energy-efficient automation. The proposed system is based on a prototype software tool for the simulation and optimization of energy consumption in the building sector, enhancing the interactivity of building automation systems. The system can incorporate energy-efficient automation functions for heating, cooling and/or lighting based on recent guidance and decisions of the National Law, energy efficiency requirements of EN 15232 and ISO 50001 Energy Management Standard among others. The presented system was applied to a supermarket building in Greece and focused on the remote control of active systems.
---
paper_title: A WSN-based testbed for energy efficiency in buildings
paper_content:
Residential and business buildings account for a large fraction of the overall world energy consumption. Despite the high energy costs and the raising awareness about the impact on climate changes, a significant part of energy consumption in buildings is still due to an improper use of electrical appliances. In this paper we propose GreenBuilding, a sensor-based system for automated power management of electrical appliances in a building. We implemented GreenBuilding as a prototype system and deployed it in a real household scenario to perform a prolonged experimental analysis. The obtained results show that GreenBuilding is able to provide significant energy savings by using appropriate energy conservation strategies tailored to specific appliances.
---
paper_title: A Novel Approach for Fault Location of Overhead Transmission Line With Noncontact Magnetic-Field Measurement
paper_content:
Prompt and accurate location of faults in a large-scale transmission system can accelerate system restoration, reduce outage time, and improve system reliability. Traditional approaches are categorized into traveling-wave-based and impedance-based measurement techniques. The traveling-wave-based approach requires detection devices to connect to the high-voltage transmission line, making the solution complex and costly. And the impedance-measurement-based approach is highly dependent on the quality of the signal and affected by fault resistance, ground resistance and non-homogeneity in line configuration. Hence, these approaches may cause a location error that is unacceptable in certain operation cases. In this paper, a novel approach based on noncontact magnetic-field measurement is proposed. With the magnetic field measured along the transmission line by using highly sensitive, broadband, and a low-cost magnetoresistive magnetic sensor, the fault span can be located. The collected data can be further used for identifying the fault type and location within the fault span. The overall system was designed and numerical simulations were performed on typical tower configurations. The simulated results verify the validity of the proposed scheme.
---
paper_title: Challenges in reliability, security, efficiency, and resilience of energy infrastructure: Toward smart self-healing electric power grid
paper_content:
This article deals with the challenges in reliability, security, efficiency, and resilience of energy infrastructure for a smart self-healing electric power grid. The electricity grid faces three looming challenges: its organization, its technical ability to meet 25-year and 50-year electricity needs, and its ability to increase its efficiency without diminishing its reliability and security. These three are not unrelated, as the grid's present organization reflects an earlier time when electrification was developing, objectives and needs were simpler, and today's technology was still over the horizon. Given economic, societal, and quality-of-life issues and the ever-increasing interdependencies among infrastructures, a key challenge before us is whether the electricity infrastructure will evolve to become the primary support for the 21st century's digital society, a smart grid with self-healing capabilities, or be left behind as a 20th-century industrial relic.
---
paper_title: Effect of sag on transmission line
paper_content:
This paper looks into the factors affecting sag in conductors: while erecting an overhead line, it is very important that conductors are under safe tension. If the conductors are stretched too tightly between supports, the stress in the conductor may reach an unsafe value and in certain cases the conductors may break due to excessive tension. The conductor sag should be kept to a minimum in order to reduce the conductor material required and to avoid extra pole height for sufficient clearance above ground level. It is also desirable that tension in the conductor should be low to avoid mechanical failure of the conductor and to permit the use of less strong supports. The effects of sag on electrical construction are examined, solutions to the effects of sag are discussed, and the meaning of sag is explained. The paper also suggests ways to reduce the effect of sag on overhead line conductors.
---
paper_title: Design, installation, and field experience with an overhead transmission dynamic line rating system
paper_content:
Dynamic line rating (DLR) systems for high voltage overhead transmission lines have been installed by three utilities over the past five years. These DLR systems utilize the Power Donut™ to measure load and conductor temperature at several locations over the length of the circuit. The effective wind acting on the conductor at each site is determined in real-time by a dynamic heat balance and used to compute the normal, LTE and STE ratings each minute. The lowest ratings of all locations define the circuit's ratings and are sent to SCADA as analog signals. This method is known as the conductor temperature model (CTM) as opposed to the weather model (WM) which calculates ratings using weather data only. These first DLR installations have also measured weather parameters at each ground station location to allow a comparison of the two methods. Data from the latest of these systems is presented and the behavior of the real-time ratings is discussed.
---
paper_title: A vehicle detection algorithm based on wireless magnetic sensor networks
paper_content:
At present, research on parking management systems based on Wireless Sensor Networks (WSN) and Anisotropic Magneto Resistive (AMR) sensors has made great progress. However, due to the diversity of vehicle magnetic signals and the interference caused by adjacent parking spaces, vehicle detection techniques using wireless magnetic sensor networks are still immature. To accurately detect a parked vehicle in a parking lot, we propose a detection algorithm named the Relative Extremum Algorithm (REA). In the parking lot at the Shenzhen Institutes of Advanced Technology (SIAT), 82 sensor devices are deployed to evaluate the performance of REA. By running the system for more than half a year, we observe that the vehicle detection accuracy of REA is above 98.8%.
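The Relative Extremum Algorithm itself is not described in this abstract; purely as an illustration of the detection problem, the sketch below flags occupancy when the magnetometer reading deviates from a vehicle-free baseline by more than a few standard deviations. The threshold, window length and synthetic signal are assumptions, not the REA.

```python
import numpy as np

def detect_vehicle(signal_ut, warmup=50, k=5.0):
    """Flag samples whose deviation from the ambient-field baseline exceeds k sigma.
    Baseline and noise level are estimated from an initial vehicle-free window."""
    baseline = np.mean(signal_ut[:warmup])
    noise = np.std(signal_ut[:warmup]) + 1e-9
    return np.abs(signal_ut - baseline) > k * noise

# Synthetic AMR reading: ambient earth field plus a perturbation while a car is parked
rng = np.random.default_rng(2)
ambient = 48.0 + 0.05 * rng.standard_normal(300)       # microtesla
ambient[150:250] += 1.5                                 # car present over these samples
occupied = detect_vehicle(ambient)
print(bool(occupied[:150].any()), bool(occupied[160:240].all()))   # expected: False True
```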
---
paper_title: Reducing emergency services response time in smart cities: An advanced adaptive and fuzzy approach
paper_content:
Nowadays, the unprecedented increase in road traffic congestion has led to severe consequences on individuals, economy and environment, especially in urban areas in most of big cities worldwide. The most critical among the above consequences is the delay of emergency vehicles, such as ambulances and police cars, leading to increased deaths on roads and substantial financial losses. To alleviate the impact of this problem, we design an advanced adaptive traffic control system that enables faster emergency services response in smart cities while maintaining a minimal increase in congestion level around the route of the emergency vehicle. This can be achieved with a Traffic Management System (TMS) capable of implementing changes to the road network's control and driving policies following an appropriate and well-tuned adaptation strategy. This latter is determined based on the severity of the emergency situation and current traffic conditions estimated using a fuzzy logic-based scheme. The obtained simulation results, using a set of typical road networks, have demonstrated the effectiveness of our approach in terms of the significant reduction of emergency vehicles' response time and the negligible disruption caused to the non-emergency vehicles travelling on the same road network.
---
paper_title: Green Transport System: A Technology Demonstration of Adaptive Road Lighting with Giant Magnetoresistive Sensor Network for Energy Efficiency and Reducing Light Pollution
paper_content:
To enhance energy efficiency and reduce light pollution of overnight road lighting in suburban traffic, we propose a novel green transport system based on giant magnetoresistive (GMR) sensors. The basic principle is to detect the perturbation to the earth magnetic field by a ferrous vehicle with GMR sensors. This system can switch on the road lighting to full illumination gradually before the motor vehicle arrives and dim it out after the vehicle leaves without the driver noticing. Based on a sparse suburban road in the countryside of Hong Kong, a demonstration model was constructed to illustrate its feasibility. GMR sensors and the associated electrical energy control components including signal processors, relays, and dimmers were integrated into a complete system. The experimental result indicates that the sensing principle is feasible and the whole system can function together coherently to achieve over 90% energy saving. Such system can be scaled up to be implemented in real road conditions.
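No control logic is given in the abstract; the sketch below shows one plausible structure for such a system: hold the lamp at full brightness while a GMR reading exceeds a perturbation threshold (and for a short time afterwards), then dim it back out. The threshold, hold time and the sensor/dimmer driver functions are all assumptions, not details of the demonstration system.

```python
import time

DETECT_THRESHOLD_UT = 2.0   # assumed perturbation threshold (microtesla), not from the paper
HOLD_SECONDS = 20.0         # assumed full-brightness hold time after the last detection

def lighting_controller(read_perturbation_ut, set_brightness, poll_s=0.1, iterations=None):
    """Raise the lamp to full brightness on detection, then dim it back out.
    `read_perturbation_ut` and `set_brightness` stand in for the real GMR sensor
    and dimmer drivers; pass `iterations` to bound the loop (e.g. for a dry run)."""
    last_detect = -float("inf")
    n = 0
    while iterations is None or n < iterations:
        if abs(read_perturbation_ut()) > DETECT_THRESHOLD_UT:
            last_detect = time.monotonic()
        elapsed = time.monotonic() - last_detect
        if elapsed < HOLD_SECONDS:
            set_brightness(1.0)                          # vehicle approaching or just passed
        else:
            fade = max(0.0, 1.0 - (elapsed - HOLD_SECONDS) / 10.0)
            set_brightness(fade)                         # dim out over ~10 s, then stay off
        time.sleep(poll_s)
        n += 1

# Bounded dry run with dummy drivers (no hardware attached)
lighting_controller(lambda: 3.0, lambda b: print("brightness", b), poll_s=0.01, iterations=3)
```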
---
paper_title: Train Detection by Magnetic Field Measurement with Giant Magnetoresistive Sensors for High-Speed Railway
paper_content:
Train detection, as part of the railway signaling system, is important for safe operation of high-speed railway. The recent flourishing development of high-speed railway stimulates the research need of train detection technology to enhance the safety and reliability of train operation. This paper proposes a new technique for train detection through magnetic field measurement by giant magnetoresistive sensors. This technology was studied by the analysis of magnetic field distribution in the high-speed rail system obtained from modeling and simulation. The results verify the feasibility for detection of train presence, number of rolling stocks, speed, and length. It can overcome the limitations of track circuits and provide additional measurement capabilities to the signaling system. This detection system can be built with low cost and minimal maintenance load as well as compacted construction. Therefore, it may serve as a new train detection system to help improve the current systems, enhancing and promoting the safety and reliability of high-speed rail system.
---
paper_title: Wireless sensor networks for traffic management and road safety
paper_content:
Wireless sensor networks (WSN) employ self-powered sensing devices that are mutually interconnected through wireless ad-hoc technologies. This study illustrates the basics of WSN-based traffic monitoring and summarises the possible benefits in Intelligent Transport Systems (ITS) applications for the improvement of quality and safety of mobility. Compared with conventional infrastructure-based monitoring systems, this technology facilitates a denser deployment of sensors along the road, resulting in a higher spatial resolution of traffic parameter sampling. An experimental data analysis reported in this study shows how the high spatial resolution can enhance the reliability of traffic modelling as well as the accuracy of short-term traffic state prediction. The analysis uses the data published by the freeway performance measurement system of the University of California-Berkeley and the California Department of Transportation. A microscopic cellular automata model is used to estimate traffic flow and occupancy over time on a road segment in which a relevant traffic-flow anomaly is detected. The analysis shows that the estimate accuracy improves for increasing number of active sensors, as feasible in the case of WSN-based monitoring systems.
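The abstract does not spell out the cellular automaton rules used; as an illustration of the kind of microscopic CA traffic model referred to, the sketch below implements the classic Nagel-Schreckenberg update (accelerate, brake to the gap, random slowdown, move) on a circular road. Road length, car count and parameters are illustrative only.

```python
import random

def nasch_step(pos, vel, road_len, v_max=5, p_slow=0.3):
    """One Nagel-Schreckenberg update on a ring road.
    pos/vel are parallel lists of car positions (cells) and speeds (cells per step)."""
    order = sorted(range(len(pos)), key=lambda k: pos[k])
    new_vel = vel[:]
    for idx, k in enumerate(order):
        ahead = order[(idx + 1) % len(order)]
        gap = (pos[ahead] - pos[k] - 1) % road_len         # empty cells to the next car
        v = min(vel[k] + 1, v_max)                         # 1) accelerate
        v = min(v, gap)                                    # 2) brake to avoid collision
        if v > 0 and random.random() < p_slow:             # 3) random slowdown
            v -= 1
        new_vel[k] = v
    new_pos = [(p + v) % road_len for p, v in zip(pos, new_vel)]   # 4) move
    return new_pos, new_vel

# Toy run: 20 cars on a 100-cell ring; mean speed is a crude proxy for traffic state
pos, vel = random.sample(range(100), 20), [0] * 20
for _ in range(50):
    pos, vel = nasch_step(pos, vel, 100)
print("mean speed:", sum(vel) / len(vel))
```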
---
paper_title: A Communications-Oriented Perspective on Traffic Management Systems for Smart Cities: Challenges and Innovative Approaches
paper_content:
The growing size of cities and increasing population mobility have determined a rapid increase in the number of vehicles on the roads, which has resulted in many challenges for road traffic management authorities in relation to traffic congestion, accidents, and air pollution. Over the recent years, researchers from both industry and academia have been focusing their efforts on exploiting the advances in sensing, communication, and dynamic adaptive technologies to make the existing road traffic management systems (TMSs) more efficient to cope with the aforementioned issues in future smart cities. However, these efforts are still insufficient to build a reliable and secure TMS that can handle the foreseeable rise of population and vehicles in smart cities. In this survey, we present an up-to-date review of the different technologies used in the different phases involved in a TMS and discuss the potential use of smart cars and social media to enable fast and more accurate traffic congestion detection and mitigation. We also provide a thorough study of the security threats that may jeopardize the efficiency of the TMS and endanger drivers' lives. Furthermore, the most significant and recent European and worldwide projects dealing with traffic congestion issues are briefly discussed to highlight their contribution to the advancement of smart transportation. Finally, we discuss some open challenges and present our own vision to develop robust TMSs for future smart cities.
---
paper_title: Wireless sensor networks for healthcare: A survey
paper_content:
Having become mature enough to be used for improving the quality of life, wireless sensor network technologies are considered one of the key research areas in computer science and the healthcare application industry. Pervasive healthcare systems provide rich contextual information and alerting mechanisms against abnormal conditions through continuous monitoring. This minimizes the need for caregivers, helps the chronically ill and elderly to live an independent life, and also provides quality care for babies and young children whose parents both have to work. Although it has significant benefits, the area still faces major challenges, which are investigated in this paper. We provide several state-of-the-art examples together with design considerations such as unobtrusiveness, scalability, energy efficiency and security, and also provide a comprehensive analysis of the benefits and challenges of these systems.
---
paper_title: Cloud-enabled wireless body area networks for pervasive healthcare
paper_content:
With the support of mobile cloud computing, wireless body area networks can be significantly enhanced for massive deployment of pervasive healthcare applications. However, several technical issues and challenges are associated with the integration of WBANs and MCC. In this article, we study a cloud-enabled WBAN architecture and its applications in pervasive healthcare systems. We highlight the methodologies for transmitting vital sign data to the cloud by using energy-efficient routing, cloud resource allocation, semantic interactions, and data security mechanisms.
---
paper_title: nanoLAB: An ultraportable, handheld diagnostic laboratory for global health
paper_content:
Driven by scientific progress and economic stimulus, medical diagnostics will move to a stage in which straightforward medical diagnoses are independent of physician visits and large centralized laboratories. The future of basic diagnostic medicine will lie in the hands of private individuals. We have taken significant strides towards achieving this goal by developing an autoassembly assay for disease biomarker detection which obviates the need for washing steps and is run on a handheld sensing platform. By coupling magnetic nanotechnology with an array of magnetically responsive nanosensors, we demonstrate a rapid, multiplex immunoassay that eliminates the need for trained technicians to run molecular diagnostic tests. Furthermore, the platform is battery-powered and ultraportable, allowing the assay to be run anywhere in the world by any individual.
---
paper_title: Point of Care Diagnostics: Status and Future
paper_content:
The published summary for this review consists of its table of contents, which covers: an introduction to why point-of-care (POC) diagnostics matter (time, patient responsibility and compliance, cost); diagnostic targets (proteins, metabolites and other small molecules, nucleic acids, human cells, microbes/pathogens, drugs and food safety); the current context of POC assays (POC glucose assays, lateral flow assays, and the limitations of traditional approaches); enabling technologies (printing and laminating, microfluidic unit operations for POC devices, pumping and valving, mixing, separation, reagent storage, sample preparation, surface chemistry and device substrates including physical adsorption, bioaffinity and covalent attachment, substrate materials, and detection by electrochemical, optical, magnetic and label-free methods, as well as multiplexed assays); recent innovation (lateral flow assay technologies, proteins, antibodies, protein expression and purification, nucleic acids, aptamers, infectious diseases and food/water safety, blood chemistry, coagulation markers, whole cells); and trends, unmet needs and perspectives (glucose, global health and the developing world, personalized medicine and home testing, technology trends, multiplexing).
---
paper_title: Improving Healthcare Accessibility through Point-of-Care Technologies
paper_content:
Background: The NIH is committed to improving healthcare quality in the US and has set up initiatives to address problems such as the fragmented nature of healthcare provision. A hypothesis has been developed that testing closer to the point at which care is delivered may reduce fragmentation of care and improve outcomes. Methods: The National Institute of Biomedical Imaging and Bioengineering (NIBIB), the NIH’s National Heart, Lung, and Blood Institute, and the National Science Foundation sponsored a workshop, “Improving Health Care Accessibility through Point-of-Care Technologies,” in April 2006. The workshop assessed the clinical needs and opportunities for point-of-care (POC) technologies in primary care, the home, and emergency medical services and reviewed minimally invasive and noninvasive testing, including imaging, and conventional testing based on sensor and lab-on-a-chip technologies. Emerging needs of informatics and telehealth and healthcare systems engineering were considered in the POC testing context. Additionally, implications of evidence-based decision-making were reviewed, particularly as it related to the challenges in producing reliable evidence, undertaking regulation, implementing evidence responsibly, and integrating evidence into health policy. Results: Many testing procedures were considered to be valuable in the clinical settings discussed. Technological solutions were proposed to meet these needs, as well as the practical requirements around clinical process change and regulation. From these considerations, a series of recommendations was formulated for development of POC technologies based on input from the symposium attendees. Conclusion: NIBIB has developed a funding initiative to establish a Point-of-Care Technologies Research Network that will work to bridge the technology/clinical gap and provide the partnerships necessary for the application of technologies to pressing clinical needs in POC testing.
---
|
<format>
Title: Overview of Spintronic Sensors, Internet of Things, and Smart Living
Section 1: Introduction
Description 1: Introduce the concept of smart living and the role of IoT and spintronic sensors in achieving it.
Section 2: Spintronic sensors
Description 2: Explain the basic principles and different types of spintronic sensors, along with their fabrication and sensing capabilities.
Section 3: Winning combination: IoT and spintronic sensors
Description 3: Provide a brief introduction to the concept and architecture of IoT, and discuss how wireless spintronic sensor networks (WSSNs) can be integrated into the IoT platform.
Section 4: IoT and spintronic sensors enabling smart living
Description 4: Discuss how WSSN-based solutions can support smart building, smart grid, smart transport, and smart healthcare applications.
Section 5: Conclusion and outlook
Description 5: Summarize the paper and provide future directions for the development and integration of spintronic sensors and IoT for smart living.
</format>
|
Security Threats and Privacy Issues in Vehicular Ad-Hoc Network (VANET): Survey and Perspective
| 9 |
---
paper_title: AN INSIGHT OVERVIEW OF ISSUES AND CHALLENGES IN VEHICULAR ADHOC NETWORK
paper_content:
Vehicular Ad hoc Networks are a special kind of wireless ad hoc network, characterized by high node mobility and fast topology changes. Vehicular networks can provide a wide variety of services, ranging from safety-related warning systems to improved navigation mechanisms as well as information and entertainment applications. These additional features make routing and other services more challenging and introduce vulnerabilities in network services. The problems include network architecture, VANET protocols and routing algorithms, as well as security issues. In this paper, we provide a review of the research related to Vehicular Ad Hoc Networks and also propose solutions for related issues and challenges.
---
paper_title: Minimization of Denial of services attacks in Vehicular Adhoc networking by applying different constraints
paper_content:
The security of vehicular ad hoc networking is of great importance, as failures can pose serious threats to life. Thus, to provide secure communication amongst vehicles on the road, the conventional security system is not enough. It is necessary to protect network resources from wastage and from malicious nodes so as to ensure data bandwidth availability for the legitimate nodes of the network. This work provides a non-conventional security system by introducing constraints to minimize denial-of-service (DoS) attacks, especially on data and bandwidth. The data packets received by a node in the network pass through a number of tests, and if any of the tests fails, the node drops those data packets and does not forward them any further. Also, if a node claims to be the nearest node for forwarding emergency messages, the sender can effectively identify the true or false status of the claim by using these constraints. Consequently, the DoS (denial of service) attack is minimized.
---
paper_title: Vehicular Ad Hoc Networks
paper_content:
The Vehicular Ad Hoc Network (VANET) is an emerging technology integrating ad hoc networking, cellular technology and wireless LAN (WLAN) to achieve vehicle-to-vehicle and vehicle-to-infrastructure communication for intelligent transportation systems (ITS). VANETs are distinguished from other kinds of ad hoc networks by their node movement characteristics, hybrid network architectures and new application scenarios. The vehicular network provides a wide variety of services, ranging from safety-related warning systems to improved navigation mechanisms as well as information and entertainment applications. VANETs therefore pose many unique networking research challenges, and the design of efficient routing protocols that not only forward packets with good end-to-end delay but also take into consideration reliability and progress in data packet forwarding is particularly demanding. In this paper, we provide a review of VANET architecture, characteristics, applications, various routing protocols and challenges.
---
paper_title: Security Attacks and Solutions in Vehicular Ad Hoc Networks: A Survey
paper_content:
Vehicular Ad hoc Networks (VANETs) have emerged recently as one of the most attractive topics for researchers and automotive industries due to their tremendous potential to improve traffic safety, efficiency and other added services. However, VANETs are themselves vulnerable to attacks that can directly lead to the corruption of networks and then possibly provoke big losses of time, money, and even lives. This paper presents a survey of VANET attacks and solutions, carefully considering other similar works as well as covering new attacks and categorizing them into different classes.
---
paper_title: An overview of mobile ad hoc networks: applications and challenges
paper_content:
In the past few years, we have seen a rapid expansion in the field of mobile computing due to the proliferation of inexpensive, widely available wireless devices. However, current devices, applications and protocols are solely focused on cellular or wireless local area networks (WLANs), not taking into account the great potential offered by mobile ad hoc networking. A mobile ad hoc network is an autonomous collection of mobile devices (laptops, smart phones, sensors, etc.) that communicate with each other over wireless links and cooperate in a distributed manner in order to provide the necessary network functionality in the absence of a fixed infrastructure. This type of network, operating as a stand-alone network or with one or multiple points of attachment to cellular networks or the Internet, paves the way for numerous new and exciting applications. Application scenarios include, but are not limited to: emergency and rescue operations, conference or campus settings, car networks, personal networking, etc. This paper provides insight into the potential applications of ad hoc networks and discusses the technological challenges that protocol designers and network developers are faced with. These challenges include routing, service and resource discovery, Internet connectivity, billing and security.
---
paper_title: DSRC-Type Communication System for Realizing Telematics Services
paper_content:
The decisive factor for the popularization of telematics services in the future is the ability to produce advanced services that match the needs of drivers and passengers. The Dedicated Short Range Communications (DSRC) type of wireless communication system, which is being developed for Intelligent Transport Systems (ITS), is one of the broadband wireless systems contributing to meeting driver and passenger needs such as convenience, entertainment, comfort and safety. In this paper, the possibility of applying this system to telematics services is explored. Further, a description of the future prospects for ubiquitous networking is provided, in the knowledge that vehicles mounted with information and communication equipment will continue to exist.
---
paper_title: Mobile Ad Hoc Networking: Imperatives and Challenges
paper_content:
A mobile ad hoc network (MANET), sometimes called a mobile mesh network, is a self-configuring network of mobile devices connected by wireless links. Ad hoc networks are a new wireless networking paradigm for mobile hosts. Unlike traditional mobile wireless networks, ad hoc networks do not rely on any fixed infrastructure; instead, hosts rely on each other to keep the network connected. They represent complex distributed systems comprising wireless mobile nodes that can freely and dynamically self-organize into arbitrary and temporary "ad hoc" network topologies, allowing people and devices to seamlessly internetwork in areas with no pre-existing communication infrastructure. The ad hoc networking concept is not a new one, having been around in various forms for over 20 years. Traditionally, tactical networks have been the only communication networking application that followed the ad hoc paradigm. Recently, the introduction of new technologies such as Bluetooth, IEEE 802.11 and HiperLAN is helping enable eventual commercial MANET deployments outside the military domain. These recent evolutions have been generating a renewed and growing interest in the research and development of MANETs. We therefore present the challenges and imperatives that MANETs are facing.
---
paper_title: On Data-Centric Trust Establishment in Ephemeral Ad Hoc Networks
paper_content:
We argue that the traditional notion of trust as a relation among entities, while useful, becomes insufficient for emerging data-centric mobile ad hoc networks. In these systems, setting the data trust level equal to the trust level of the data- providing entity would ignore system salient features, rendering applications ineffective and systems inflexible. This would be even more so if their operation is ephemeral, i.e., characterized by short-lived associations in volatile environments. In this paper, we address this challenge by extending the traditional notion of trust to data-centric trust: trustworthiness attributed to node-reported data per se. We propose a framework for data-centric trust establishment: First, trust in each individual piece of data is computed; then multiple, related but possibly contradictory, data are combined; finally, their validity is inferred by a decision component based on one of several evidence evaluation techniques. We consider and evaluate an instantiation of our framework in vehicular networks as a case study. Our simulation results show that our scheme is highly resilient to attackers and converges stably to the correct decision.
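The evidence-evaluation techniques studied in the paper are not reproduced here. As one very simple instantiation of the data-centric idea, the sketch below weights each report by a per-report trust score and accepts a statement only when its accumulated weight clearly dominates contradictory reports; the trust values, margin and decision rule are assumptions for illustration.

```python
from collections import defaultdict

def decide(reports, accept_margin=0.25):
    """reports: list of (statement, trust) pairs with trust in [0, 1].
    Accept the statement whose accumulated trust clearly dominates the alternatives."""
    weight = defaultdict(float)
    for statement, trust in reports:
        weight[statement] += trust
    total = sum(weight.values()) or 1.0
    best = max(weight, key=weight.get)
    confidence = weight[best] / total
    return (best, confidence) if confidence >= 0.5 + accept_margin else (None, confidence)

# Three consistent higher-trust reports outweigh one contradictory low-trust report
reports = [("accident_ahead", 0.8), ("accident_ahead", 0.7),
           ("road_clear", 0.3), ("accident_ahead", 0.6)]
print(decide(reports))   # ('accident_ahead', 0.875)
```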
---
paper_title: Vehicular Ad-Hoc Networks (VANETs) - An Overview and Challenges
paper_content:
Vehicular ad-hoc network (VANET) technology has emerged as an important research area over the last few years. Being ad-hoc in nature, a VANET is a type of network created from the concept of establishing a network of cars for a specific need or situation. VANETs have now been established as reliable networks that vehicles use for communication purposes on highways or in urban environments. Along with the benefits, a large number of challenges arise in VANETs, such as provisioning of QoS, high connectivity and bandwidth, and security for vehicles and individual privacy. This article presents the state of the art of VANETs and discusses the related issues. Network architecture, signal modeling and propagation mechanisms, mobility modeling, routing protocols and network security are discussed in detail. The main finding of this paper is that an efficient and robust VANET is one which satisfies all design parameters such as QoS, minimum latency, low BER and high PDR. Some key research areas and challenges in VANETs are presented at the end of the paper.
---
|
<format>
Title: Security Threats and Privacy Issues in Vehicular Ad-Hoc Network (VANET): Survey and Perspective
Section 1: Introduction
Description 1: Introduce the concept of Vehicular Ad-Hoc Networks (VANETs) and its significance. Mention the purpose and scope of the paper.
Section 2: Overview of Vehicular Ad-Hoc Network (VANET)
Description 2: Provide a detailed description of VANET, including its definition, characteristics, and how it functions.
Section 3: Security Threats in VANET
Description 3: Discuss various security threats faced by VANETs, such as eavesdropping, spoofing, denial-of-service attacks, etc.
Section 4: Privacy Issues in VANET
Description 4: Explain the privacy concerns associated with VANETs, including potential misuse of data and challenges in maintaining user privacy.
Section 5: Impact of Mobility on Security
Description 5: Analyze how the mobility of nodes affects the security and reliability of VANETs, and discuss the resulting challenges.
Section 6: Fault Detection and Network Management
Description 6: Address the issues of fault detection and network management in VANETs, highlighting why they are more complex compared to other networks.
Section 7: Solutions and Mitigations
Description 7: Outline potential solutions and mitigation strategies to address the security threats and privacy issues in VANETs.
Section 8: Future Research Directions
Description 8: Suggest areas for future research, identifying gaps in current knowledge and proposing directions for further study.
Section 9: Conclusion
Description 9: Summarize the key points discussed in the paper and reinforce the importance of addressing security and privacy issues in VANETs.
</format>
|
An Overview on Gripping Force Measurement at the Micro and Nano-Scales Using Two-Fingered Microrobotic Systems
| 16 |
---
paper_title: Manipulation at the NanoNewton level: Micrograsping for mechanical characterization of biomaterials
paper_content:
This paper presents the use of a monolithic, force-feedback MEMS (microelectromechanical systems) microgripper for characterizing both elastic and viscoelastic properties of highly deformable hydrogel microcapsules (15–25 µm) in the wet state during micromanipulation. The single-chip microgripper integrates an electrothermal microactuator and two capacitive force sensors, one for contact detection (force resolution: 38.5 nN) and the other for gripping force measurements (force resolution: 19.9 nN). Through nanonewton force measurements, closed-loop force control, and visual tracking, the system quantified Young's modulus values and viscoelastic parameters of alginate microcapsules, demonstrating an easy-to-operate, accurate compression testing technique for characterizing soft, micrometer-sized biomaterials.
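The contact model used for the modulus extraction is not given in this abstract. Purely as a generic illustration, the sketch below fits a Hertzian sphere-on-flat law F = (4/3) E* sqrt(R) d^(3/2) to synthetic force-indentation data by least squares; the capsule radius, modulus, noise level and the assumption of a rigid flat probe are all invented for the example and do not reproduce the authors' analysis of alginate microcapsules.

```python
import numpy as np

def fit_hertz_modulus(delta_m, force_n, radius_m, poisson=0.5):
    """Least-squares fit of F = (4/3) * E_star * sqrt(R) * delta**1.5 through the origin.
    Returns an effective Young's modulus (Pa), assuming a rigid flat probe compressing
    an elastic sphere of radius R."""
    x = delta_m ** 1.5
    slope = float(np.dot(x, force_n) / np.dot(x, x))
    e_star = slope / ((4.0 / 3.0) * np.sqrt(radius_m))
    return e_star * (1.0 - poisson ** 2)

# Synthetic data for a 10-um-radius capsule with E = 50 kPa (values are made up)
R, E_true, nu = 10e-6, 50e3, 0.5
delta = np.linspace(0.1e-6, 2e-6, 30)                    # indentation depth (m)
force = (4 / 3) * (E_true / (1 - nu ** 2)) * np.sqrt(R) * delta ** 1.5
force = force + 5e-9 * np.random.default_rng(1).standard_normal(force.size)   # ~5 nN noise
print(fit_hertz_modulus(delta, force, R))                # close to 5.0e4 Pa
```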
---
paper_title: Autonomous micromanipulation using a new strategy of accurate release by rolling
paper_content:
This paper presents our work in developing an autonomous micromanipulation system. The originality of our system is that it takes advantage of adhesion to grip micro-objects by using a single fingered gripper. This is in fact a tipless cantilever previously designed for atomic force microscopy applications. We describe vision techniques employed to process images provided by an optical microscope, allowing to position accurately the end-effector for a gripping task. A theoretical study of the direct force measurement device and an experimental validation show how we can improve the measurement of impact and contact forces. Then we explain the strategy used to bring the gripper into contact with the object, based on force control and kinematic redundancy. Finally, a simplified model of the release task is proposed in order to determine conditions that allow to roll the object, and then to place it with precision.
---
paper_title: Quadrilateral Modelling and Robust Control of a Nonlinear Piezoelectric Cantilever
paper_content:
Piezocantilevers are commonly used for the actuation of micromechatronic systems. These systems are generally used to perform micromanipulation tasks which require high positioning accuracy. However, the nonlinearities, i.e., the hysteresis and the creep, of piezoelectric materials and the influence of the environment (vibrations, temperature change, etc.) create difficulties for such a performance to be achieved. Various models have been used to take into account the nonlinearities but they are often complex. In this paper, we study a one degree of freedom piezoelectric cantilever. For that, we propose a simple new model where the hysteresis curve is approximated by a quadrilateral and the creep is considered to be a disturbance. To facilitate the modelling, we first demonstrate that the dynamic hysteresis of the piezocantilever is equivalent to a static hysteresis, i.e., a varying gain, in series with a linear dynamic part. The obtained model is used to synthesize a linear robust controller, making it possible to achieve the performances required in micromanipulation tasks. The experimental results show the relevance of the combination of the developed model and the synthesized robust H∞ controller.
---
paper_title: Optical tweezers: the next generation
paper_content:
JOHANNES Kepler is famous for discovering the laws of planetary motion, but he is less well known for writing what may have been the first science-fiction story to involve space travel. During his observations, the German astronomer noticed that tails of comets always point away from the Sun, which suggested that the Sun was exerting a sort of radiant pressure. This led him in 1609 – the year in which he published the first of his laws – to propose sailing from the Earth to the Moon on light itself. Of course, that was and still is the stuff of science fiction, but 400 years later Kepler's initial ideas about moving matter with light are very much a reality.
---
paper_title: Force feedback-based microinstrument for measuring tissue properties and pulse in microsurgery
paper_content:
Miniaturized and "smart" instruments capable of characterizing the mechanical properties of tiny biological tissues are needed for research in biology, physiology and biomechanics, and can find very important clinical applications for diagnostics and minimally invasive surgery (MIS). We are developing a set of robotic microinstruments designed to augment the performance of the surgeon during MIS. These microtools are intended to restore (or even enhance) the finger palpation capabilities that the surgeon exploits to characterize tissue hardness and to measure pulsating vessels in traditional surgery, but that are substantially reduced in MIS. The paper describes the main features and the performance of a prototype miniature robotic instrument consisting of a microfabricated microgripper, instrumented with semiconductor strain-gauges as force sensors. For the (in vitro) experiments reported in the paper, the microgripper is mounted on a workstation and teleoperated. A haptic interface provides force feedback to the operator. We have demonstrated that the system can discriminate tiny skin samples based on their different elastic properties, and feel microvessels based on pulsating fluid flowing through them.
---
paper_title: Magnetic separation techniques: their application to medicine
paper_content:
Whilst separation techniques relying on gravitational forces have become relatively sophisticated in their application to biology the same is not true for magnetic separation procedures. The use of the latter has been limited to the few cells which contain paramagnetic iron. However with the development of several different types of magnetic particles and selective delivery system (e.g. monoclonal antibodies) the use of magnetic separation techniques is growing rapidly. This review describes the different types of particles currently available, the magnetic separation technique applied to the different magnetic compounds and illustrates major uses to which magnetic separation procedures are currently applied in the area of biology and medicine.
---
paper_title: Vacuum tool for handling microobjects with a NanoRobot
paper_content:
One of the basic tasks a robot has to perform is to manipulate objects. For macroscopic applications mechanical grippers usually grasp the workpiece. Because of the different scaling of gravity and adhesion such tools are no longer suitable in micromanipulation. A better strategy is to use adhesion forces or vacuum. Therefore, after a brief introduction to adhesion phenomena, this paper focuses on the investigation of a vacuum gripping tool consisting of a glass pipette and a computer controlled vacuum supply. Special emphasis is laid on the optimization of the tool's parameters in order to improve its pick and place capability. The tool has been integrated into the ETHZ NanoRobot and tested on dedicated benchmark tests. It was possible to grip 100 µm sized diamond crystals and deposit them at arbitrary positions. Emergency routines that make it possible to reliably get rid of sticking particles are also presented. Finally, the new setup of the multi-tool NanoRobot containing both the vacuum pipette and a microfabricated gripper is presented. It enables handling of micro-parts with more flexibility by working hand in hand.
---
paper_title: Modeling the trajectory of a microparticle in a dielectrophoresis device
paper_content:
Micro- and nanoparticles can be trapped by a nonuniform electric field through the effect of the dielectrophoretic principle. Dielectrophoresis (DEP) is used to separate, manipulate, and detect microparticles in several domains, such as in biological or carbon nanotube manipulations. Current methods to simulate the trajectory of microparticles under a DEP force field are based on finite element model (FEM), which requires new simulations when electrode potential is changed, or on analytic equations limited to very simple geometries. In this paper, we propose a hybrid method, between analytic and numeric calculations and able to simulate complex geometries and to easily change the electrode potential along the trajectory. A small number of FEM simulations are used to create a database, which enables online calculation of the object trajectory as a function of electrode potentials.
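For reference, the dielectrophoretic force that drives such trajectories is usually written with the standard time-averaged expression for a spherical particle of radius r in a medium of permittivity ε_m (a textbook relation, not the hybrid simulation method proposed in the paper):

    \[
    \mathbf{F}_{\mathrm{DEP}} = 2\pi \varepsilon_m r^{3}\, \mathrm{Re}\!\left[K(\omega)\right]\, \nabla \lvert \mathbf{E}_{\mathrm{rms}} \rvert^{2},
    \qquad
    K(\omega) = \frac{\varepsilon_p^{*} - \varepsilon_m^{*}}{\varepsilon_p^{*} + 2\,\varepsilon_m^{*}},
    \]

where K(ω) is the Clausius–Mossotti factor and ε* = ε − jσ/ω denotes the complex permittivities of the particle (p) and the medium (m); the particle trajectory is then obtained by integrating this force together with Stokes drag.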
---
paper_title: Overview of Microgrippers and Design of a Micromanipulation Station Based on a MMOC Microgripper
paper_content:
This paper gives an overview of recent microgrippers. As the end-effectors of micromanipulation systems, microgrippers are a crucial point of such systems for their efficiency and their reliability. The performances of current microgrippers are presented: they offer a stroke extending from 50 µm to approximately 2 mm and maximum forces varying from 0.1 mN to 600 mN. Then, a micromanipulation system based on a piezoelectric microgripper and a SCARA robot is presented.
---
paper_title: Resource Letter: LBOT-1: Laser-based optical tweezers.
paper_content:
This Resource Letter provides a guide to the literature on optical tweezers, also known as laser-based, gradient-force optical traps. Journal articles and books are cited for the following main topics: general papers on optical tweezers, trapping instrument design, optical detection methods, optical trapping theory, mechanical measurements, single molecule studies, and sections on biological motors, cellular measurements and additional applications of optical tweezers.
---
paper_title: Improvement of Strain Gauges Microforces Measurement using Kalman Optimal Filtering
paper_content:
Manipulation of small components and assembly of microsystems require force measurement. In the microworld (the world of very small components), signal/noise ratio is very low due to the weak amplitude of the signals. To be used in feedback control or in a micromanipulation system, a force sensor must allow static and dynamic measurements. In this paper, we present a micro-force measurement system based on the use of strain gauges and a Kalman optimal filter. Using a model of the measurement system and a statistical description of the noise, the optimal filter allows filtering the noise without loss of dynamic measurement. The performances of the measurement system are improved and fast force variations can be measured.
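A minimal sketch of the underlying idea, i.e. Kalman filtering of a noisy strain-gauge force signal (here with a generic scalar random-walk force model and Gaussian measurement noise; the variable names and numerical values are illustrative assumptions, not the model identified in the paper):

    import numpy as np

    def kalman_filter_force(measurements, q=1e-4, r=1e-2, x0=0.0, p0=1.0):
        """Scalar Kalman filter: estimate a slowly varying force from noisy gauge readings.
        q: process noise variance (how fast the true force may change)
        r: measurement noise variance (sensor noise level)"""
        x, p = x0, p0                 # state estimate and its variance
        estimates = []
        for z in measurements:
            p = p + q                 # predict: force modelled as a random walk
            k = p / (p + r)           # Kalman gain
            x = x + k * (z - x)       # update with the new strain-gauge reading z
            p = (1.0 - k) * p
            estimates.append(x)
        return np.array(estimates)

    # usage: filter a simulated noisy force step (arbitrary units)
    true_force = np.concatenate([np.zeros(200), 0.5 * np.ones(300)])
    noisy = true_force + 0.1 * np.random.randn(true_force.size)
    filtered = kalman_filter_force(noisy)

The trade-off between dynamics and noise rejection is set by the ratio q/r: a larger q tracks fast force variations, while a smaller q gives stronger smoothing.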
---
paper_title: Squeeze film air bearings using piezoelectric bending elements
paper_content:
Reference LSA-CONF-2000-016. Conference web site: http://server.woomeranet.com.au/~movic2000/default.html
---
paper_title: A micro-particle positioning technique combining an ultrasonic manipulator and a microgripper
paper_content:
The acoustic radiation force acts on particles suspended in a fluid in which acoustic waves are present. It can be used to establish a force field throughout the fluid volume capable of positioning the particles in predictable locations. Here, a device is developed which positions the particles in a single line by the sequential use of two excitation frequencies which have been identified by a finite element model of the system. The device is designed such that at one end there is an opening which allows the fingers of a microgripper to enter the fluid chamber. Hence the gripper can be used to remove the last particle in the line. The high accuracy of the positioning of the particles prior to gripping means that the microgripper needs just to return to a fixed position in order to remove subsequent particles. Furthermore, the effects of the microgripper fingers entering the fluid volume whilst the ultrasound field is excited are examined. One result being the release of a particle stuck to a gripper finger. It is believed that this combination of techniques allows for considerable scope in the automation of microgripping procedures.
---
paper_title: Design, fabrication, and testing of a 3-DOF HARM micromanipulator on (1 1 1) silicon substrate
paper_content:
In this study, a novel HARM (high aspect ratio micromachining) micromanipulator fabricated on (1 1 1) silicon wafer is reported. The micromanipulator consists of a positioning stage, a robot arm, supporting platforms, conducting wires, and bonding pads. These components are monolithically integrated on a chip through the presented processes. The three-degrees-of-freedom (3-DOF) positioning of the micromanipulator is realized by using the integration of two linear comb actuators and a vertical comb actuator. The robot arm is used to manipulate samples with dimension in the order of several microns to several hundred microns, for instance, optical fibers and biological samples. The robot arm could be a gripper, a needle, a probe, or even a pipette. Since the micromanipulator is made of single crystal silicon, it has superior mechanical properties. A micro gripper has also been successfully designed and fabricated. © 2005 Published by Elsevier B.V.
---
paper_title: Development of a tactile low-cost microgripper with integrated force sensor
paper_content:
This paper describes recent results of the development of a novel tactile force-sensing microgripper based on a flexure hinge fabricated in stainless steel by wired electro discharge machining (EDM). The gripper was equipped with a commercial semiconductor strain-gauge and a piezo stack. The microgripper is an end-effector of a microrobot developed to grasp and manipulate tiny objects. Acquiring force-information with the microgripper is of fundamental importance in order to achieve the dexterity and sensing capabilities required to perform micromanipulation or assembly tasks.
---
paper_title: Capillary self-alignment assisted hybrid robotic handling for ultra-thin die stacking
paper_content:
Ultra-thin dies are difficult to package because of their fragility and flexibility. Current ultra-thin die integration technology for 3D microsystems relies on robotic pick-and-place machines and machine vision, which has rather limited throughput for high-accuracy assembly of fragile ultra-thin dies. In this paper, we report a hybrid assembly strategy that consists of robotic pick-and-place using a vacuum micro-gripper, and droplet self-alignment by capillary force. Ultra-thin dies with breakable links are chosen as part of the assembly strategy. Experimental results show that we can align ultra-thin (10μm) dies with sub-micron accuracy without machine vision. A fully automatic sequence of stacking several of these dies is demonstrated. Up to 12 ultra-thin dies have been stacked. These early results show that die-to-die integration of ultra-thin dies with higher throughput than the current industry robot is possible by applying both robotic handling and droplet self-alignment to ultra-thin die assembly.
---
paper_title: A hybrid-type electrostatically driven microgripper with an integrated vacuum tool
paper_content:
Abstract Reliable and accurate pick and release of microobjects is a long-standing challenge at the microscale manipulation field. This paper presents a new hybrid type of microelectromechanical systems (MEMS) microgripper integrated with an electrostatic mechanism and vacuum technology to achieve a reliable and accurate manipulation of microobjects. The microgripper is fabricated by a surface and bulk micromachining technology. Vacuum pipelines are constructed by bonding technique. The pick and release micromanipulation of microobjects is accomplished by electrostatic driving force caused by comb structure and an auxiliary air pressure force from air pump. A deflection of 25 μm at the tip of the gripper is achieved with the applied voltage of 80 V. The performance of this new hybrid type of microgripper was experimentally demonstrated through the manipulation of 100–200 μm polystyrene balls. Experimental results show that this microgripper can successfully fulfil the pick and release micromanipulation. Theoretical analyses were conducted to have a more comprehensive understanding of the release principle. Based on this paper, the new kind of hybrid microgripper will possibly provide an effective solution to the manipulation of submicrometer-sized objects.
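As background for the electrostatic actuation side, the lateral force delivered by a comb drive is commonly estimated with the standard fringing-field relation (a generic textbook formula given as an assumption, not the sizing reported in the paper):

    \[
    F \;\approx\; \frac{n\, \varepsilon_0\, t\, V^{2}}{g},
    \]

where n is the number of comb-finger pairs, t the thickness of the structural layer, g the lateral gap and V the applied voltage; the jaw displacement then follows from F divided by the stiffness of the suspension flexures, which explains the quadratic dependence of deflection on the drive voltage.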
---
paper_title: Haptic Teleoperation for 3-D Microassembly of Spherical Objects
paper_content:
In this paper, teleoperated 3-D microassembly of spherical objects with haptic feedback is presented. A dual-tip gripper controlled through a haptic interface is used to pick-and-place microspheres (diameter: 4-6 μm). The proposed approach to align the gripper with the spheres is based on a user-driven exploration of the object to be manipulated. The haptic feedback is based on amplitude measurements from cantilevers in dynamic mode. That is, the operator perceives the contact while freely exploring the manipulation area. The data recorded during this exploration are processed online and generate a virtual guide to pull the user to the optimum contact point, allowing correct positioning of the dual tips. A preliminary scan is not necessary to compute the haptic feedback, which increases the intuitiveness of our system. For the pick-and-place operation, two haptic feedback schemes are proposed to either provide users with information about microscale interactions occurring during the operation, or to assist them while performing the task. As experimental validation, a two-layer pyramid composed of four microspheres is built in ambient conditions.
---
paper_title: Noise characterization in millimeter sized micromanipulation systems
paper_content:
Abstract Efficient and dexterous manipulation of very small (micrometer and millimeter sized) objects require the use of high precision micromanipulation systems. The accuracy of the positioning is nevertheless limited by the noise due to vibrations of the end effectors making it difficult to achieve precise micrometer and nanometer displacements to grip small objects. The purpose of this paper is to analyze the sources of noise and to take it into account in dynamic models of micromanipulation systems. Environmental noise is studied considering the following sources of noise: ground motion and acoustic noises. Each source of noise is characterized in different environmental conditions and a separate description of their effects is investigated on micromanipulation systems using millimeter sized cantilevers as end effectors. Then, using the finite difference method (FDM), a dynamic model taking into account studied noises is used. Ground motion is described as a disturbance transmitted by the clamping to the tip of the cantilever and acoustic noises as external uniform and orthogonal waves. For model validation, an experimental setup including cantilevers of different lengths is designed and a high resolution laser interferometer is used for vibration measurements. Results show that the model allows a physical interpretation about the sources of noise and vibrations in millimeter sized micromanipulation systems leading to new perspectives for high positioning accuracy in robotics micromanipulation through active noise control.
---
paper_title: Manipulation at the NanoNewton level: Micrograsping for mechanical characterization of biomaterials
paper_content:
This paper presents the use of a monolithic, force-feedback MEMS (microelectromechanical systems) microgripper for characterizing both elastic and viscoelastic properties of highly deformable hydrogel microcapsules (15–25 µm) at wet state during micromanipulation. The single-chip microgripper integrates an electrothermal microactuator and two capacitive force sensors, one for contact detection (force resolution: 38.5 nN) and the other for gripping force measurements (force resolution: 19.9 nN). Through nanoNewton force measurements, closed-loop force control, and visual tracking, the system quantified Young's modulus values and viscoelastic parameters of alginate microcapsules, demonstrating an easy-to-operate, accurate compression testing technique for characterizing soft, micrometer-sized biomaterials.
---
paper_title: Automated nanohandling by microrobots
paper_content:
The rapid development of nanotechnology has created a need for advanced nanohandling tools and techniques. One active branch of research in this area focuses on the use of microrobots for automated handling of micro- and nanoscale objects. Automated Nanohandling by Microrobots presents work on the development of a versatile microrobot-based nanohandling robot station inside a scanning electron microscope (SEM). The SEM serves as a powerful vision sensor, providing a high resolution and a high depth of focus, allowing different fields of application to be opened up. The pre-conditions for using a SEM are high-precision, user-friendly microrobots which can be integrated into the SEM chamber and equipped with application-specific tools. Automated Nanohandling by Microrobots introduces an actuation principle for such microrobots and presents a new robot design. Different aspects of this research field regarding the hardware and software implementation of the system components, including the sensory feedback for automated nanohandling, are discussed in detail. Extensive applications of the microrobot station for nanohandling, nano-characterization and nanostructuring are provided, together with the experimental results. Based upon the Microrobotics course for students of computer sciences and physics at the University of Oldenburg, Automated Nanohandling by Microrobots provides the practicing engineer and the engineering student with an introduction to the design and applications of robot-based nanohandling devices. Those unfamiliar with the subject will find the text, which is complemented throughout by the extensive use of illustrations, clear and simple to understand.
---
paper_title: Microrobotic cell injection
paper_content:
Advances in microbiology demonstrate the need for manipulating individual biological cells, such as for cell injection which includes pronuclei injection and intracytoplasmic injection. Conventionally, cell injection has been conducted manually. In this paper, we present a microrobotic system capable of performing automatic embryo pronuclei DNA injection autonomously and semi-autonomously through a hybrid visual servoing control scheme. After injection, the DNA injected embryos were transferred into a pseudopregnant foster female mouse to reproduce transgenic mice for cancer studies. Experimental results show that the injection success rate was 100%. The system setup, hybrid control scheme and other important issues in this application, such as automatic focusing, are discussed.
---
paper_title: NanoLab: A nanorobotic system for automated pick-and-place handling and characterization of CNTs
paper_content:
Carbon nanotubes (CNTs) are one of the most promising materials for nanoelectronic applications. Before bringing CNTs into large-scale production, a reliable nanorobotic system for automated handling and characterization as well as prototyping of CNT-based components is essential. This paper presents the NanoLab setup, a nanorobotic system that combines specially developed key components such as electrothermal microgrippers and mobile microrobots inside a scanning electron microscope. The working principle and fabrication of the mobile microrobots and the electrothermal microgripper, as well as their interaction and integration, are described. Furthermore, the NanoLab is used to explore novel key strategies such as automated locating of CNTs for pick-and-place handling and methods for electrical characterization of CNTs. The results have been achieved within the framework of a European research project where the scientific knowledge will be transferred into an industrial system that will be commercially available for potential customers.
---
paper_title: Improvement of Strain Gauges Microforces Measurement using Kalman Optimal Filtering
paper_content:
Manipulation of small components and assembly of microsystems require force measurement. In the microworld (the world of very small components), signal/noise ratio is very low due to the weak amplitude of the signals. To be used in feedback control or in a micromanipulation system, a force sensor must allow static and dynamic measurements. In this paper, we present a micro-force measurement system based on the use of strain gauges and a Kalman optimal filter. Using a model of the measurement system and a statistical description of the noise, the optimal filter allows filtering the noise without loss of dynamic measurement. The performances of the measurement system are improved and fast force variations can be measured.
---
paper_title: Automated nanohandling by microrobots
paper_content:
The rapid development of nanotechnology has created a need for advanced nanohandling tools and techniques. One active branch of research in this area focuses on the use of microrobots for automated handling of micro- and nanoscale objects. Automated Nanohandling by Microrobots presents work on the development of a versatile microrobot-based nanohandling robot station inside a scanning electron microscope (SEM). The SEM serves as a powerful vision sensor, providing a high resolution and a high depth of focus, allowing different fields of application to be opened up. The pre-conditions for using a SEM are high-precision, user-friendly microrobots which can be integrated into the SEM chamber and equipped with application-specific tools. Automated Nanohandling by Microrobots introduces an actuation principle for such microrobots and presents a new robot design. Different aspects of this research field regarding the hardware and software implementation of the system components, including the sensory feedback for automated nanohandling, are discussed in detail. Extensive applications of the microrobot station for nanohandling, nano-characterization and nanostructuring are provided, together with the experimental results. Based upon the Microrobotics course for students of computer sciences and physics at the University of Oldenburg, Automated Nanohandling by Microrobots provides the practicing engineer and the engineering student with an introduction to the design and applications of robot-based nanohandling devices. Those unfamiliar with the subject will find the text, which is complemented throughout by the extensive use of illustrations, clear and simple to understand.
---
paper_title: Development of a piezoelectric polymer-based sensorized microgripper for microassembly and micromanipulation
paper_content:
This paper presents the design, fabrication, and calibration of a piezoelectric polymer-based sensorized microgripper. Electro discharge machining technology is employed to fabricate the superelastic alloy-based microgripper. It was experimentally tested to show the improvement of mechanical performance. For integration of force sensor in the microgripper, the sensor design based on the piezoelectric polymer PVDF film and fabrication process are presented. The calibration and performance test of the force sensor-integrated microgripper are experimentally carried out. The force sensor-integrated microgripper is applied to fine alignment tasks of micro opto-electrical components. Experimental results show that it can successfully provide force feedback to the operator through the haptic device and play a main role in preventing damage of assembly parts by adjusting the teaching command.
---
paper_title: Manipulation at the NanoNewton level: Micrograsping for mechanical characterization of biomaterials
paper_content:
This paper presents the use of a monolithic, force-feedback MEMS (microelectromechanical systems) microgripper for characterizing both elastic and viscoelastic properties of highly deformable hydrogel microcapsules (15–25 µm) at wet state during micromanipulation. The single-chip microgripper integrates an electrothermal microactuator and two capacitive force sensors, one for contact detection (force resolution: 38.5 nN) and the other for gripping force measurements (force resolution: 19.9 nN). Through nanoNewton force measurements, closed-loop force control, and visual tracking, the system quantified Young's modulus values and viscoelastic parameters of alginate microcapsules, demonstrating an easy-to-operate, accurate compression testing technique for characterizing soft, micrometer-sized biomaterials.
---
paper_title: Three-dimensional automated micromanipulation using a nanotip gripper with multi-feedback
paper_content:
In this paper, three-dimensional (3D) automated micromanipulation at the scale of several micrometers using a nanotip gripper with multi-feedback is presented. The gripper is constructed from protrudent tips of two individually actuated atomic force microscope cantilevers; each cantilever is equipped with an optical lever. A manipulation protocol allows these two cantilevers to form a gripper to pick and place micro-objects without adhesive-force obstacles in air. For grasping, amplitude feedback from the dithering cantilever with its normal resonant frequency is used to search a grasping point by laterally scanning the side of the microspheres. Real-time force sensing is available for monitoring the whole pick-and-place process with pick-up, transport and release steps. For trajectory planning, an algorithm based on the shortest path solution is used to obtained 3D micropatterns with high levels of efficiency. In experiments, 20 microspheres with diameters from 3 µm to 4 µm were manipulated and 5 3D micropyramids with two layers were built. Three-dimensional micromanipulation and microassembly at the scale of several microns to the submicron scale could become feasible through the newly developed 3D micromanipulation system with a nanotip gripper.
---
paper_title: Vision-based force measurement
paper_content:
This paper demonstrates a method to visually measure the force distribution applied to a linearly elastic object using the contour data in an image. The force measurement is accomplished by making use of the result from linear elasticity that the displacement field of the contour of a linearly elastic object is sufficient to completely recover the force distribution applied to the object. This result leads naturally to a deformable template matching approach where the template is deformed according to the governing equations of linear elasticity. An energy minimization method is used to match the template to the contour data in the image. This technique of visually measuring forces we refer to as vision-based force measurement (VBFM). VBFM has the potential to increase the robustness and reliability of micromanipulation and biomanipulation tasks where force sensing is essential for success. The effectiveness of VBFM is demonstrated for both a microcantilever beam and a microgripper. A sensor resolution of less than +/-3 nN for the microcantilever and +/-3 mN for the microgripper was achieved using VBFM. Performance optimizations for the energy minimization problem are also discussed that make this algorithm feasible for real-time applications.
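For the simplest case treated (a microcantilever beam), the visually measured tip deflection maps to force through the beam stiffness; a minimal sketch under the usual Euler–Bernoulli end-load assumption (the dimensions and material values below are illustrative, not those of the paper):

    def cantilever_force_from_deflection(deflection_m, E, w, t, L):
        """End-loaded cantilever: F = k * delta, with k = 3*E*I/L^3 and I = w*t^3/12."""
        I = w * t**3 / 12.0           # second moment of area, rectangular cross-section
        k = 3.0 * E * I / L**3        # tip stiffness in N/m
        return k * deflection_m

    # usage: a silicon beam (E ~ 169 GPa), 300 um long, 30 um wide, 1 um thick,
    # with a 0.5 um tip deflection recovered from the image contour
    F = cantilever_force_from_deflection(0.5e-6, 169e9, 30e-6, 1e-6, 300e-6)
    print(f"estimated tip force: {F*1e9:.1f} nN")   # about 23 nN

The full method in the paper goes further, recovering a distributed force from the whole contour via deformable template matching, but the single-beam case above shows why nanonewton-level visual resolution is plausible when sub-pixel deflections can be tracked.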
---
paper_title: Force Sensing and Control in Micromanipulation
paper_content:
In micromanipulation, the size of the manipulated object is usually much less than 1 mm in a single dimension, in which case gravitational and inertial forces are no longer dominant. This leads to problems (for manipulation through force) that are not evident in the macroworld, and for which the macroworld techniques alone may not be adequate to provide solutions. This paper surveys critical issues and their available solutions related to force control in micromanipulation. It focuses on: 1) techniques for dealing with adhesion forces and 2) methods for force sensing and control
---
paper_title: Improvement of Strain Gauges Microforces Measurement using Kalman Optimal Filtering
paper_content:
Manipulation of small components and assembly of microsystems require force measurement. In the microworld (the world of very small components), signal/noise ratio is very low due to the weak amplitude of the signals. To be used in feedback control or in a micromanipulation system, a force sensor must allow static and dynamic measurements. In this paper, we present a micro-force measurement system based on the use of strain gauges and a Kalman optimal filter. Using a model of the measurement system and a statistical description of the noise, the optimal filter allows filtering the noise without loss of dynamic measurement. The performances of the measurement system are improved and fast force variations can be measured.
---
paper_title: PI force control of a microgripper for assembling biomedical microdevices
paper_content:
Recent work is presented on the control of a microgripper based on flexure joints, fabricated by LIGA and instrumented with force sensors. The force sensors are semiconductor strain gauges which have been integrated in the microgripper and experimentally characterised. The microgripper is the core component of a workstation developed to grasp and manipulate tiny objects, such as components of biomedical microdevices. A proportional integral (PI) force control of the microgripper has been implemented. The results of the tracking experiments prove that this control algorithm can assure performance suitable for the intended applications.
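A minimal sketch of such a loop, i.e. a discrete-time PI controller driving the gripper actuator from the strain-gauge force error (the gains, sample time and saturation limits are placeholder assumptions, not the values identified in the paper):

    class PIForceController:
        """Discrete PI controller with simple anti-windup clamping on the output."""
        def __init__(self, kp, ki, dt, u_min=0.0, u_max=100.0):
            self.kp, self.ki, self.dt = kp, ki, dt
            self.u_min, self.u_max = u_min, u_max
            self.integral = 0.0

        def update(self, force_ref, force_meas):
            e = force_ref - force_meas
            self.integral += e * self.dt
            u = self.kp * e + self.ki * self.integral
            if u > self.u_max or u < self.u_min:   # anti-windup: undo the last accumulation
                self.integral -= e * self.dt
                u = min(max(u, self.u_min), self.u_max)
            return u                               # e.g. actuator voltage command

    # usage: track a 5 mN grasping-force reference at a 1 kHz control rate
    ctrl = PIForceController(kp=2.0, ki=50.0, dt=1e-3)
    u = ctrl.update(force_ref=5e-3, force_meas=4.2e-3)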
---
paper_title: Force feedback-based microinstrument for measuring tissue properties and pulse in microsurgery
paper_content:
Miniaturized and "smart" instruments capable of characterizing the mechanical properties of tiny biological tissues are needed for research in biology, physiology and biomechanics, and can find very important clinical applications for diagnostics and minimally invasive surgery (MIS). We are developing a set of robotic microinstruments designed to augment the performance of the surgeon during MIS. These microtools are intended to restore (or even enhance) the finger palpation capabilities that the surgeon exploits to characterize tissue hardness and to measure pulsating vessels in traditional surgery, but that are substantially reduced in MIS. The paper describes the main features and the performance of a prototype miniature robotic instrument consisting of a microfabricated microgripper, instrumented with semiconductor strain-gauges as force sensors. For the (in vitro) experiments reported in the paper, the microgripper is mounted on a workstation and teleoperated. A haptic interface provides force feedback to the operator. We have demonstrated that the system can discriminate tiny skin samples based on their different elastic properties, and feel microvessels based on pulsating fluid flowing through them.
---
paper_title: Integrated microendeffector for micromanipulation
paper_content:
Micromanipulation is needed for assembly and maintenance of micromachines and their parts. If the handled objects are miniaturized, interactive forces, such as the van der Waals force, surface tension force, and electrostatic force between microobjects and gripper surface become dominant in the air, and they act as adhesive forces. We cannot neglect such adhesive forces in micromanipulation. Considering the physical phenomena in the microworld, we propose reduction methods for adhesive forces. Surface roughness of the endeffector surface is effective to reduce the van der Waals force. We propose making the micropyramids on the endeffector surface by micromachining techniques. We designed and made a prototype of the microgripper with a microendeffector, the gripping surface of which is formed to have several micropyramids. Experimental results show effectiveness of the micropyramids for reduction of the adhesive force. We also made a semiconductor strain gauge at the end of the microendeffector for grasping force measurement. Both micropyramids and integrated piezoresistive force sensor are fabricated on the microendeffector by the micromachining techniques. Performance of this force sensor is shown.
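The rationale for the micropyramids can be illustrated with the standard sphere-plane approximation of the van der Waals force (a textbook expression, not the specific adhesion model used by the authors):

    \[
    F_{\mathrm{vdW}} \;\approx\; \frac{A\,R}{6\,z^{2}},
    \]

where A is the Hamaker constant, R the effective radius of the contacting asperity and z the separation distance; replacing a flat gripping surface by sharp micropyramids reduces the effective contact radius R and increases the mean separation z over most of the surface, so the adhesive force drops sharply.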
---
paper_title: Towards a force-controlled microgripper for assembling biomedical microdevices
paper_content:
This paper presents recent results on the development and control of a microgripper based on flexure joints, fabricated by LIGA and instrumented with semiconductor strain-gauge force sensors. The microgripper is the end-effector of a workstation developed to grasp and manipulate tiny objects such as the components of a typical biomedical microdevice. The development of the force control in the microgripper is of fundamental importance in order to achieve the dexterity and sensing capabilities required to perform assembly tasks for biomedical microdevices. As a step towards the definition of the force control strategy, system identification techniques have been used to model the microgripper. Results indicate that a proportional integral (PI) controller could be used to assure, at the same time, closed-loop stability of the system, and a bandwidth suitable for the intended applications. The force control is based on strain-gauge sensors which have been integrated in the microgripper and experimentally characterized. Sensor response in the idling condition and during grasp showed that they can provide useful information for force control of the microgripper.
---
paper_title: A flexure-based gripper for small-scale manipulation
paper_content:
A small-scale flexure-based gripper was designed for manipulation tasks requiring precision position and force control. The gripper is actuated by a piezoelectric ceramic stack actuator and utilizes strain gages to measure both the gripping force and displacement. The position and force bandwidths were designed for ten Hertz and one hundred Hertz, respectively, in order to afford human-based teleoperative transparency. The gripper serves effectively as a one degree-of-freedom investigation of compliant mechanism design for position and force controlled micromanipulation. Data is presented that characterizes the microgripper performance under both pure position and pure force control, followed by a discussion of the attributes and limitations of flexure-based design.
---
paper_title: Vision-based force measurement
paper_content:
This paper demonstrates a method to visually measure the force distribution applied to a linearly elastic object using the contour data in an image. The force measurement is accomplished by making use of the result from linear elasticity that the displacement field of the contour of a linearly elastic object is sufficient to completely recover the force distribution applied to the object. This result leads naturally to a deformable template matching approach where the template is deformed according to the governing equations of linear elasticity. An energy minimization method is used to match the template to the contour data in the image. This technique of visually measuring forces we refer to as vision-based force measurement (VBFM). VBFM has the potential to increase the robustness and reliability of micromanipulation and biomanipulation tasks where force sensing is essential for success. The effectiveness of VBFM is demonstrated for both a microcantilever beam and a microgripper. A sensor resolution of less than +/-3 nN for the microcantilever and +/-3 mN for the microgripper was achieved using VBFM. Performance optimizations for the energy minimization problem are also discussed that make this algorithm feasible for real-time applications.
---
paper_title: The effect of material properties and gripping force on micrograsping
paper_content:
This paper presents our work in developing a force controlled microgripper and micrograsping strategies using optical beam deflection techniques. The optical beam deflection sensor is based on modified atomic force microscopy techniques and is able to resolve forces below a nano-Newton. A variety of gripper fingers made from materials with different conductivity and surface roughness is analyzed theoretically and experimentally using the force sensor. These results provide insight into the mechanics of micromanipulation, and the results are used to develop micrograsping strategies. A design of a microfabricated force controlled microgripper is presented along with initial experimental results in applying various gripping forces to microparts. The results demonstrate the important role gripping force plays in the grasping and releasing of microparts.
---
paper_title: Vision-based sensing of forces in elastic objects
paper_content:
Abstract A minimally intrusive, vision-based, computational force sensor for elastically deformable objects is proposed in this paper. Estimating forces from the visually measured displacements is straightforward in the case of the linear problem of small displacements, but not in the case of the large displacements where geometric non-linearities must be taken into account. From the images of the object taken before and after the deformation, we compute the deformation gradients and logarithmic strains. Using the stress–strain relationships for the material, we compute the Cauchy’s stresses and from this we estimate the locations and magnitudes of the external forces that caused the deformation. A sensitivity analysis is performed to examine the effect of small deviations in the experimentally captured displacements on the estimated external forces. This analysis showed that the small-strain case is more sensitive and prone to numerical errors than the large-strain case. Additionally, a related method that is indirect and iterative is also presented in which we assume that we know the locations of the external forces. Numerical and experimental studies are presented for both micro- and macro-scale objects. The main conclusion of this work is that the vision-based force estimation is viable if the displacements of the deforming object can be captured accurately.
---
paper_title: Current integration force and displacement self-sensing method for cantilevered piezoelectric actuators
paper_content:
This paper presents a new method of self-sensing both of the displacement and the external applied force at the tip of piezoelectric cantilevers. Integrated electric current across piezoelectric actuators is compensated against material nonlinearities (creep, hysteresis) to provide reliable information. We propose to compensate the hysteresis by using the Prandtl-Ishlinskii static approach while an auto regressive and moving average exogenous (ARMAX) model is used to minimize the creep influence. The quasistatic estimation, electronic circuit, and aspects related to long-term charge preservations are described or referenced. As an experiment, we tested the actuator entering in contact with a fixed force sensor. An input signal of 20 V peak-to-peak (10% of maximum range) led to force self-sensing errors inferior to +/-8%. A final discussion about method accuracy and its limitations is made.
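A minimal sketch of the hysteresis-compensation ingredient, i.e. a Prandtl-Ishlinskii model assembled from weighted play (backlash) operators (the thresholds and weights below are placeholders; in practice they are identified from measured voltage/displacement data, and the compensator is obtained by inverting the identified model):

    import numpy as np

    def play_operator(u, r, y0=0.0):
        """Discrete play (backlash) operator with threshold r applied to the input sequence u."""
        y = np.empty(len(u))
        prev = y0
        for i, ui in enumerate(u):
            prev = min(max(prev, ui - r), ui + r)
            y[i] = prev
        return y

    def prandtl_ishlinskii(u, thresholds, weights):
        """Rate-independent hysteresis model: weighted sum of play operators."""
        return sum(w * play_operator(u, r) for w, r in zip(weights, thresholds))

    # usage: hysteretic response of a piezocantilever to a triangular voltage sweep
    v = np.concatenate([np.linspace(0, 20, 200),
                        np.linspace(20, -20, 400),
                        np.linspace(-20, 0, 200)])
    thresholds = [0.0, 2.0, 5.0, 10.0]      # placeholder thresholds (V)
    weights    = [0.50, 0.20, 0.15, 0.10]   # placeholder weights (um/V)
    displacement = prandtl_ishlinskii(v, thresholds, weights)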
---
paper_title: Towards a force-controlled microgripper for assembling biomedical microdevices
paper_content:
This paper presents recent results on the development and control of a microgripper based on flexure joints, fabricated by LIGA and instrumented with semiconductor strain-gauge force sensors. The microgripper is the end-effector of a workstation developed to grasp and manipulate tiny objects such as the components of a typical biomedical microdevice. The development of the force control in the microgripper is of fundamental importance in order to achieve the dexterity and sensing capabilities required to perform assembly tasks for biomedical microdevices. As a step towards the definition of the force control strategy, system identification techniques have been used to model the microgripper. Results indicate that a proportional integral (PI) controller could be used to assure, at the same time, closed-loop stability of the system, and a bandwidth suitable for the intended applications. The force control is based on strain-gauge sensors which have been integrated in the microgripper and experimentally characterized. Sensor response in the idling condition and during grasp showed that they can provide useful information for force control of the microgripper.
---
paper_title: Automatic dextrous microhandling based on a 6-DOF microgripper
paper_content:
The demand for automatic quality inspection of individual micro-components has increased strongly in the micro-optoelectronics industry. In many cases, quality inspection of those micro-components needs dexterous microhandling. However, automatic dexterous handling of microcomponents is a very challenging issue which is barely explored. This paper presents a high-DOF (degrees of freedom) automatic dexterous handling and inspection system for micro-optoelectronic components using a novel 6-DOF piezoelectric microgripper. The control system includes three hierarchical layers: an actuator control layer, a motion planning layer and a mission control layer. Machine vision is applied in automatic manipulation. The performance of the system is demonstrated in a vision-based automatic inspection task for 300 × 300 × 100 μm-sized optoelectronics components and reached a cycle time of 7.1 ± 0.3 s.
---
paper_title: Development of a tactile low-cost microgripper with integrated force sensor
paper_content:
This paper describes recent results of the development of a novel tactile force-sensing microgripper based on a flexure hinge fabricated in stainless steel by wired electro discharge machining (EDM). The gripper was equipped with a commercial semiconductor strain-gauge and a piezo stack. The microgripper is an end-effector of a microrobot developed to grasp and manipulate tiny objects. Acquiring force-information with the microgripper is of fundamental importance in order to achieve the dexterity and sensing capabilities required to perform micromanipulation or assembly tasks.
---
paper_title: Development of a piezoelectric polymer-based sensorized microgripper for microassembly and micromanipulation
paper_content:
This paper presents the design, fabrication, and calibration of a piezoelectric polymer-based sensorized microgripper. Electro discharge machining technology is employed to fabricate the superelastic alloy-based microgripper. It was experimentally tested to show the improvement of mechanical performance. For integration of force sensor in the microgripper, the sensor design based on the piezoelectric polymer PVDF film and fabrication process are presented. The calibration and performance test of the force sensor-integrated microgripper are experimentally carried out. The force sensor-integrated microgripper is applied to fine alignment tasks of micro opto-electrical components. Experimental results show that it can successfully provide force feedback to the operator through the haptic device and play a main role in preventing damage of assembly parts by adjusting the teaching command.
---
paper_title: Manipulation at the NanoNewton level: Micrograsping for mechanical characterization of biomaterials
paper_content:
This paper presents the use of a monolithic, force-feedback MEMS (microelectromechanical systems) microgripper for characterizing both elastic and viscoelastic properties of highly deformable hydrogel microcapsules (15–25 µm) at wet state during micromanipulation. The single-chip microgripper integrates an electrothermal microactuator and two capacitive force sensors, one for contact detection (force resolution: 38.5 nN) and the other for gripping force measurements (force resolution: 19.9 nN). Through nanoNewton force measurements, closed-loop force control, and visual tracking, the system quantified Young's modulus values and viscoelastic parameters of alginate microcapsules, demonstrating an easy-to-operate, accurate compression testing technique for characterizing soft, micrometer-sized biomaterials.
---
paper_title: Design, simulation and testing of electrostatic SOI MUMPs based microgripper integrated with capacitive contact sensor
paper_content:
In this paper, detailed modeling, simulation and testing of a novel electrostatically actuated microgripper integrated with a capacitive contact sensor are presented. The microgripper is actuated with a lateral comb drive system, and a transverse comb system is used to sense contact between the micro-object and the microgripper jaws. The design is optimized in the standard SOI-MUMPs micromachining process using L-Edit of MEMS-Pro. Finite element analysis of the microgripper is performed in CoventorWare and shows a total displacement of 15.5 μm at the tip of the jaws when a voltage of 50 Vdc is applied at the actuator. Finite element analysis of the sensor part is performed and the results are compared with the analytical model. Modal analysis is performed to investigate the mode shapes and natural frequencies of the microgripper. The microgripper is tested experimentally and a total displacement of 17 μm is achieved at the tip of the jaws. The slight difference between the finite element analysis and the experimental results is due to small variations in the material properties introduced during the fabrication process. The change in capacitance of the contact sensor is linearly calibrated against the change in displacement. The sensitivity of the contact sensor is 90 fF/μm. The total size of the microgripper is 5.03 mm × 6.5 mm.
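For a gap-closing (transverse) sensing cell, the first-order sensitivity follows from the parallel-plate approximation (a generic relation, not the exact electrode layout of this design):

    \[
    C = \frac{N\,\varepsilon_0\, A}{d},
    \qquad
    \frac{\partial C}{\partial d} = -\frac{N\,\varepsilon_0\, A}{d^{2}},
    \]

where N is the number of sensing gaps, A the overlap area and d the gap; with the quoted sensitivity of 90 fF/μm, a readout able to resolve, say, 10 fF (an assumed figure) would correspond to roughly 0.1 μm of detectable jaw displacement.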
---
paper_title: Monolithically Integrated Two-Axis Microtensile Tester for the Mechanical Characterization of Microscopic Samples
paper_content:
This paper describes the first monolithically integrated two-axis microtensile tester and its application to the automated stiffness measurement of single epidermal plant cells. The tensile tester consists of a two-axis electrostatic actuator with integrated capacitive position sensors and a two-axis capacitive microforce sensor. It is fabricated using a bulk silicon microfabrication process. The actuation range is +/-16 µm along both axes with a position resolution of 20 nm. The force sensor is capable of measuring forces up to +/-60 µN with a resolution down to 60 nN. The position-feedback sensors as well as the force sensor are calibrated by direct comparison with reference standards. A complete uncertainty analysis through the entire calibration chain based on the Monte Carlo method is presented. The functionality of the tensile tester is demonstrated by the automated stiffness measurement of the elongated cells in plant hairs (trichomes) as a function of their size. This enables a quantitative understanding and a model-based simulation of plant growth based on actual measurement data.
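A minimal sketch of the Monte Carlo idea behind such an uncertainty analysis, i.e. drawing the calibrated quantities from their uncertainty distributions and propagating them to the reported force (the distributions below are illustrative assumptions, not the paper's calibration data):

    import numpy as np

    rng = np.random.default_rng(0)
    N = 100_000

    # assumed calibrated quantities with their standard uncertainties (illustrative values)
    stiffness  = rng.normal(50.0, 1.5, N)      # sensor stiffness in N/m, ~3 % uncertainty
    deflection = rng.normal(1.2e-6, 20e-9, N)  # measured deflection in m, u = 20 nm

    force = stiffness * deflection             # F = k * x, evaluated for every draw

    print(f"F = {force.mean()*1e6:.2f} uN +/- {force.std()*1e6:.2f} uN (1 sigma)")

The same draw-and-propagate pattern extends through an arbitrarily long calibration chain, which is why the Monte Carlo method is convenient when the chain mixes non-linear relations and correlated inputs.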
---
paper_title: Design of a Micro-Gripper and an Ultrasonic Manipulator for Handling Micron Sized Objects
paper_content:
This work reports on a system consisting of a MEMS (microelectromechanical system) gripper and an ultrasonic manipulator. The gripper is electrostatically actuated and includes an integrated force sensor measuring the gripping force. The device is monolithically fabricated using a silicon-on-insulator (SOI) fabrication process. The resolution of the force sensor is in the sub-micronewton range and, therefore, provides feedback of the forces that dominate the micromanipulation processes. A MEMS ultrasonic device is described which aligns small objects such as biological cells prior to manipulation with the gripper. The concept is demonstrated with polymer spheres, glass spheres and HeLa cancer cells, thus providing a useful tool in micro-robotics and biological research.
---
paper_title: Automatic dextrous microhandling based on a 6-DOF microgripper
paper_content:
The demand for automatic quality inspection of individual micro-components has increased strongly in the micro-optoelectronics industry. In many cases, quality inspection of those micro-components needs dexterous microhandling. However, automatic dexterous handling of microcomponents is a very challenging issue which is barely explored. This paper presents a high-DOF (degrees of freedom) automatic dexterous handling and inspection system for micro-optoelectronic components using a novel 6-DOF piezoelectric microgripper. The control system includes three hierarchical layers: an actuator control layer, a motion planning layer and a mission control layer. Machine vision is applied in automatic manipulation. The performance of the system is demonstrated in a vision-based automatic inspection task for 300 × 300 × 100 μm-sized optoelectronics components and reached a cycle time of 7.1 ± 0.3 s.
---
paper_title: Development of a tactile low-cost microgripper with integrated force sensor
paper_content:
This paper describes recent results of the development of a novel tactile force-sensing microgripper based on a flexure hinge fabricated in stainless steel by wired electro discharge machining (EDM). The gripper was equipped with a commercial semiconductor strain-gauge and a piezo stack. The microgripper is an end-effector of a microrobot developed to grasp and manipulate tiny objects. Acquiring force-information with the microgripper is of fundamental importance in order to achieve the dexterity and sensing capabilities required to perform micromanipulation or assembly tasks.
---
paper_title: Noise characterization in millimeter sized micromanipulation systems
paper_content:
Abstract Efficient and dexterous manipulation of very small (micrometer and millimeter sized) objects require the use of high precision micromanipulation systems. The accuracy of the positioning is nevertheless limited by the noise due to vibrations of the end effectors making it difficult to achieve precise micrometer and nanometer displacements to grip small objects. The purpose of this paper is to analyze the sources of noise and to take it into account in dynamic models of micromanipulation systems. Environmental noise is studied considering the following sources of noise: ground motion and acoustic noises. Each source of noise is characterized in different environmental conditions and a separate description of their effects is investigated on micromanipulation systems using millimeter sized cantilevers as end effectors. Then, using the finite difference method (FDM), a dynamic model taking into account studied noises is used. Ground motion is described as a disturbance transmitted by the clamping to the tip of the cantilever and acoustic noises as external uniform and orthogonal waves. For model validation, an experimental setup including cantilevers of different lengths is designed and a high resolution laser interferometer is used for vibration measurements. Results show that the model allows a physical interpretation about the sources of noise and vibrations in millimeter sized micromanipulation systems leading to new perspectives for high positioning accuracy in robotics micromanipulation through active noise control.
---
paper_title: Monolithically Integrated Two-Axis Microtensile Tester for the Mechanical Characterization of Microscopic Samples
paper_content:
This paper describes the first monolithically integrated two-axis microtensile tester and its application to the automated stiffness measurement of single epidermal plant cells. The tensile tester consists of a two-axis electrostatic actuator with integrated capacitive position sensors and a two-axis capacitive microforce sensor. It is fabricated using a bulk silicon microfabrication process. The actuation range is +/-16 µm along both axes with a position resolution of 20 nm. The force sensor is capable of measuring forces up to +/-60 µN with a resolution down to 60 nN. The position-feedback sensors as well as the force sensor are calibrated by direct comparison with reference standards. A complete uncertainty analysis through the entire calibration chain based on the Monte Carlo method is presented. The functionality of the tensile tester is demonstrated by the automated stiffness measurement of the elongated cells in plant hairs (trichomes) as a function of their size. This enables a quantitative understanding and a model-based simulation of plant growth based on actual measurement data.
---
paper_title: The effect of material properties and gripping force on micrograsping
paper_content:
This paper presents our work in developing a force controlled microgripper and micrograsping strategies using optical beam deflection techniques. The optical beam deflection sensor is based on modified atomic force microscopy techniques and is able to resolve forces below a nano-Newton. A variety of gripper fingers made from materials with different conductivity and surface roughness is analyzed theoretically and experimentally using the force sensor. These results provide insight into the mechanics of micromanipulation, and the results are used to develop micrograsping strategies. A design of a microfabricated force controlled microgripper is presented along with initial experimental results in applying various gripping forces to microparts. The results demonstrate the important role gripping force plays in the grasping and releasing of microparts.
---
paper_title: Development of a tactile low-cost microgripper with integrated force sensor
paper_content:
This paper describes recent results of the development of a novel tactile force-sensing microgripper based on a flexure hinge fabricated in stainless steel by wired electro discharge machining (EDM). The gripper was equipped with a commercial semiconductor strain-gauge and a piezo stack. The microgripper is an end-effector of a microrobot developed to grasp and manipulate tiny objects. Acquiring force-information with the microgripper is of fundamental importance in order to achieve the dexterity and sensing capabilities required to perform micromanipulation or assembly tasks.
---
paper_title: Modeling the trajectory of a microparticle in a dielectrophoresis device
paper_content:
Micro- and nanoparticles can be trapped by a nonuniform electric field through the effect of the dielectrophoretic principle. Dielectrophoresis (DEP) is used to separate, manipulate, and detect microparticles in several domains, such as in biological or carbon nanotube manipulations. Current methods to simulate the trajectory of microparticles under a DEP force field are based on finite element model (FEM), which requires new simulations when electrode potential is changed, or on analytic equations limited to very simple geometries. In this paper, we propose a hybrid method, between analytic and numeric calculations and able to simulate complex geometries and to easily change the electrode potential along the trajectory. A small number of FEM simulations are used to create a database, which enables online calculation of the object trajectory as a function of electrode potentials.
---
paper_title: Fusing force and vision feedback for micromanipulation
paper_content:
We present experimental results that investigate the integration of two disparate sensing modalities, force and vision, for sensor-based microassembly. By integrating these sensing modes, we are able to provide feedback in a task-oriented frame of reference over a broad range of motion with an extremely high precision. An optical microscope is used to provide visual feedback down to micron resolutions. We have developed an optical beam deflection sensor to provide nanonewton level force feedback or nanometric level position feedback. The value of integrating these two disparate sensing modalities is demonstrated during controlled micropart impact experiments. Using force feedback alone to control micropart contact transitions, impact forces of over 140 nN were generated before the desired contact force of 2 nN was achieved. When visual servoing is integrated with the force control framework, micropart impact forces of only 9 nN and final contact forces of 2 nN were easily achieved.
---
paper_title: Automatic dextrous microhandling based on a 6-DOF microgripper
paper_content:
The demand for automatic quality inspection of individual micro-components has increased strongly in the micro-optoelectronics industry. In many cases, quality inspection of those micro-components needs dexterous microhandling. However, automatic dexterous handling of microcomponents is a very challenging issue which is barely explored. This paper presents a high-DOF (degrees of freedom) automatic dexterous handling and inspection system for micro-optoelectronic components using a novel 6-DOF piezoelectric microgripper. The control system includes three hierarchical layers: an actuator control layer, a motion planning layer and a mission control layer. Machine vision is applied in automatic manipulation. The performance of the system is demonstrated in a vision-based automatic inspection task for 300 × 300 × 100-μm-sized optoelectronics components and reached a cycle time of 7.1 ± 0.3 s.
---
|
Title: An Overview on Gripping Force Measurement at the Micro and Nano-Scales Using Two-Fingered Microrobotic Systems
Section 1: Introduction
Description 1: Introduce the topic of microrobotics and the importance of gripping force measurement at micro and nano scales using two-fingered microrobotic systems.
Section 2: Force measurement
Description 2: Discuss the range of gripping forces required for different applications and the importance of force measurement in microrobotics.
Section 3: Piezoresistive force sensors
Description 3: Explain the principle, advantages, and limitations of piezoresistive force sensors used in microrobotic applications.
Section 4: Piezoelectric force sensors
Description 4: Describe the piezoelectric force sensors, their working principle, and their suitability for dynamic force measurements.
Section 5: Capacitive force sensors
Description 5: Detail the capacitive force sensors, their construction, and the benefits they offer over other types of sensors.
Section 6: Optical force sensors
Description 6: Cover the use of optical force sensors in microrobotics, including the advantages and challenges associated with these sensors.
Section 7: Force measurement by vision
Description 7: Discuss the techniques of force measurement by vision, its high-resolution capabilities, and its applications in microrobotic manipulation tasks.
Section 8: Discussion about force sensor resolution
Description 8: Provide a discussion on the resolutions of different force sensors and the impact of environmental factors on these resolutions.
Section 9: Two-fingered micromanipulation systems and force measurement at the micro-scale
Description 9: Present a chronological overview of the development and application of two-fingered micromanipulation systems with embedded force sensors.
Section 10: From 1992: the extensive use of strain gauges
Description 10: Summarize early developments in microrobotics focusing on the use of strain gauges for force measurement in micromanipulation.
Section 11: From 2000: Toward the measurement of forces at the nano-Newton scale by the use of external measurement systems
Description 11: Explore advancements made in force measurement at the nano-Newton scale through the use of external systems before the development of embedded sensors.
Section 12: From 2003: Toward embedded and high resolution force sensors
Description 12: Discuss the drive towards embedding high-resolution force sensors within microrobots and the associated challenges.
Section 13: From 2006: The use of MEMS-based microgrippers with embedded capacitive force sensors
Description 13: Highlight significant advances made with MEMS-based microgrippers and capacitive force sensors for precise force measurement and control.
Section 14: Discussion
Description 14: Summarize the main characteristics, performance measures, and challenges of gripping force sensors in current microrobotic systems.
Section 15: Conclusion
Description 15: Recap the developments in gripping force measurement technologies and their applications in microrobotics.
Section 16: Future trends
Description 16: Outline potential future directions in the field of gripping force measurement at the micro and nano scales.
|
A survey of methods for time series change point detection
| 14 |
---
paper_title: Contrast and change mining
paper_content:
Because the world with its markets, innovations, and customers is changing faster than ever before, the key to survival for businesses is the ability to detect, assess, and respond to changing conditions timely and intelligently. Understanding changes and reacting to or acting upon them therefore become a strategic issue not only for companies but also in many other domains. The corresponding need for knowledge has been answered by data mining research by proposing a multitude of methods for analyzing different aspects of change. This article provides an overview of recent works on methods for change analysis, thereby focusing on contrast mining and change mining, the two emerging subfields of contemporary data mining research. © 2011 John Wiley & Sons, Inc. WIREs Data Mining Knowl Discov 2011 1 215–230 DOI: 10.1002/widm.27
---
paper_title: Boosting Classifiers for Drifting Concepts
paper_content:
In many real-world classification tasks, data arrives over time and the target concept to be learned from the data stream may change over time. Boosting methods are well-suited for learning from data streams, but do not address this concept drift problem. This paper proposes a boosting-like method to train a classifier ensemble from data streams that naturally adapts to concept drift. Moreover, it allows to quantify the drift in terms of its base learners. Similar as in regular boosting, examples are re-weighted to induce a diverse ensemble of base models. In order to handle drift, the proposed method continuously re-weights the ensemble members based on their performance on the most recent examples only. The proposed strategy adapts quickly to different kinds of concept drift. The algorithm is empirically shown to outperform learning algorithms that ignore concept drift. It performs no worse than advanced adaptive time window and example selection strategies that store all the data and are thus not suited for mining massive streams. The proposed algorithm has low computational costs.
---
paper_title: Inertial hidden markov models: modeling change in multivariate time series
paper_content:
Faced with the problem of characterizing systematic changes in multivariate time series in an unsupervised manner, we derive and test two methods of regularizing hidden Markov models for this task. Regularization on state transitions provides smooth transitioning among states, such that the sequences are split into broad, contiguous segments. Our methods are compared with a recent hierarchical Dirichlet process hidden Markov model (HDP-HMM) and a baseline standard hidden Markov model, of which the former suffers from poor performance on moderate-dimensional data and sensitivity to parameter settings, while the latter suffers from rapid state transitioning, over-segmentation and poor performance on a segmentation task involving human activity accelerometer data from the UCI Repository. The regularized methods developed here are able to perfectly characterize change of behavior in the human activity data for roughly half of the real-data test cases, with accuracy of 94% and low variation of information. In contrast to the HDP-HMM, our methods provide simple, drop-in replacements for standard hidden Markov model update rules, allowing standard expectation maximization (EM) algorithms to be used for learning.
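The regularized EM updates of the paper are not reproduced here, but the "inertial" idea, biasing the transition matrix toward self-transitions so that the decoded state sequence forms broad contiguous segments, can be sketched with a plain Viterbi decoder. The sketch below assumes a two-state Gaussian HMM with known emission means and a hand-set self-transition probability; these are illustrative choices, not the paper's learned parameters.

```python
import numpy as np
from scipy import stats

def sticky_viterbi_segments(x, means, sigma=1.0, self_prob=0.99):
    """Viterbi decoding of a Gaussian HMM whose transition matrix is biased
    toward self-transitions ('inertia'), yielding broad contiguous segments.
    Change points are the time steps where the decoded state switches."""
    K = len(means)
    A = np.full((K, K), (1 - self_prob) / (K - 1))
    np.fill_diagonal(A, self_prob)
    logA = np.log(A)
    logB = stats.norm.logpdf(np.asarray(x)[:, None], loc=means, scale=sigma)
    T = len(x)
    delta = np.zeros((T, K))
    psi = np.zeros((T, K), int)
    delta[0] = np.log(1.0 / K) + logB[0]
    for t in range(1, T):
        scores = delta[t - 1][:, None] + logA      # scores[i, j]: from state i to j
        psi[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) + logB[t]
    states = np.zeros(T, int)
    states[-1] = delta[-1].argmax()
    for t in range(T - 2, -1, -1):
        states[t] = psi[t + 1][states[t + 1]]
    return states, np.where(np.diff(states) != 0)[0] + 1

# toy accelerometer-like magnitude with two behaviour regimes
rng = np.random.default_rng(6)
x = np.r_[rng.normal(1.0, 0.5, 120), rng.normal(3.0, 0.5, 80)]
states, cps = sticky_viterbi_segments(x, means=np.array([1.0, 3.0]), sigma=0.5)
print("detected change points:", cps)
```

Lowering `self_prob` weakens the inertia and produces more, shorter segments, which mirrors the role of the transition regularizer described in the abstract.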
---
paper_title: Unsupervised change analysis using supervised learning
paper_content:
We propose a formulation of a new problem, which we call change analysis, and a novel method for solving the problem. In contrast to the existing methods of change (or outlier) detection, the goal of change analysis goes beyond detecting whether or not any changes exist. Its ultimate goal is to find the explanation of the changes.While change analysis falls in the category of unsupervised learning in nature, we propose a novel approach based on supervised learning to achieve the goal. The key idea is to use a supervised classifier for interpreting the changes. A classifier should be able to discriminate between the two data sets if they actually come from two different data sources. In other words, we use a hypothetical label to train the supervised learner, and exploit the learner for interpreting the change. Experimental results using real data show the proposed approach is promising in change analysis as well as concept drift analysis.
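A minimal sketch of the classifier-based idea described above: label the two data windows with hypothetical class labels, train a discriminative model, and read change evidence from its cross-validated accuracy and a change explanation from its feature weights. The choice of scikit-learn's logistic regression and the toy data are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def change_evidence(X_before, X_after):
    """Score how well a classifier separates two data windows.

    Accuracy near 0.5 -> the windows look alike (no change);
    accuracy near 1.0 -> the distributions differ (change), and the
    fitted coefficients indicate which features explain it.
    """
    X = np.vstack([X_before, X_after])
    y = np.r_[np.zeros(len(X_before)), np.ones(len(X_after))]  # hypothetical labels
    clf = LogisticRegression(max_iter=1000)
    acc = cross_val_score(clf, X, y, cv=5).mean()
    clf.fit(X, y)
    return acc, clf.coef_.ravel()   # accuracy + per-feature weights ("explanation")

# toy example: a shift in the second feature only
rng = np.random.default_rng(0)
A = rng.normal(0.0, 1.0, size=(200, 3))
B = rng.normal(0.0, 1.0, size=(200, 3))
B[:, 1] += 2.0
acc, coef = change_evidence(A, B)
print(f"cv accuracy = {acc:.2f}, feature weights = {np.round(coef, 2)}")
```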
---
paper_title: Adaptive Change Detection in Heart Rate Trend Monitoring in Anesthetized Children
paper_content:
The proposed algorithm is designed to detect changes in the heart rate trend signal which fits the dynamic linear model description. Based on this model, the interpatient and intraoperative variations are handled by estimating the noise covariances via an adaptive Kalman filter. An exponentially weighted moving average predictor switches between two different forgetting coefficients to allow the historical data to have a varying influence in prediction. The cumulative sum testing of the residuals identifies the change points online. The algorithm was tested on a substantial volume of real clinical data. Comparison of the proposed algorithm with Trigg's approach revealed that the algorithm performs more favorably with a shorter delay. The receiver operating characteristic curve analysis indicates that the algorithm outperformed the change detection by clinicians in real time
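A simplified sketch of the residual-plus-CUSUM structure described above: one-step-ahead predictions come from a single exponentially weighted moving average (the paper switches between two forgetting coefficients and uses an adaptive Kalman filter to estimate noise covariances), and a two-sided CUSUM on the standardized residuals raises the alarms. The threshold, allowance, and forgetting coefficient below are illustrative.

```python
import numpy as np

def ewma_cusum_changepoints(x, alpha=0.2, k=0.5, h=5.0):
    """Two-sided CUSUM on standardized one-step-ahead EWMA residuals.

    alpha: EWMA forgetting coefficient; k: allowance (in std units);
    h: decision threshold (in std units). Returns indices of alarms."""
    x = np.asarray(x, float)
    pred = x[0]                 # EWMA one-step-ahead prediction
    resid = []
    gp = gm = 0.0               # upper / lower CUSUM statistics
    alarms = []
    for t in range(1, len(x)):
        r = x[t] - pred
        resid.append(r)
        sigma = np.std(resid) if len(resid) > 10 else 1.0
        z = r / max(sigma, 1e-8)
        gp = max(0.0, gp + z - k)
        gm = max(0.0, gm - z - k)
        if gp > h or gm > h:
            alarms.append(t)
            gp = gm = 0.0       # restart the test after an alarm
        pred = alpha * x[t] + (1 - alpha) * pred
    return alarms

# toy heart-rate-like trend with a level shift at t = 150
rng = np.random.default_rng(1)
hr = np.r_[90 + rng.normal(0, 1.5, 150), 75 + rng.normal(0, 1.5, 150)]
print(ewma_cusum_changepoints(hr))
```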
---
paper_title: Automatic change detection in multimodal serial MRI: application to multiple sclerosis lesion evolution
paper_content:
The automatic analysis of subtle changes between MRI scans is an important tool for assessing disease evolution over time. Manual labeling of evolutions in 3D data sets is tedious and error prone. Automatic change detection, however, remains a challenging image processing problem. A variety of MRI artifacts introduce a wide range of unrepresentative changes between images, making standard change detection methods unreliable. In this study we describe an automatic image processing system that addresses these issues. Registration errors and undesired anatomical deformations are compensated using a versatile multiresolution deformable image matching method that preserves significant changes at a given scale. A nonlinear intensity normalization method is associated with statistical hypothesis test methods to provide reliable change detection. Multimodal data is optionally exploited to reduce the false detection rate. The performance of the system was evaluated on a large database of 3D multimodal, MR images of patients suffering from relapsing remitting multiple sclerosis (MS). The method was assessed using receiver operating characteristics (ROC) analysis, and validated in a protocol involving two neurologists. The automatic system outperforms the human expert, detecting many lesion evolutions that are missed by the expert, including small, subtle changes.
---
paper_title: Online Bayesian change point detection algorithms for segmentation of epileptic activity
paper_content:
Epilepsy is a dynamic disease in which the brain transitions between different states. In this paper, we focus on the problem of identifying the time points, referred to as change points, where the transitions between these different states happen. A Bayesian change point detection algorithm that does not require the knowledge of the total number of states or the parameters of the probability distribution modeling the activity of epileptic brain in each of these states is developed in this paper. This algorithm works in online mode making it amenable for real-time monitoring. To reduce the quadratic complexity of this algorithm, an approximate algorithm with linear complexity in the number of data points is also developed. Finally, we use these algorithms on ECoG recordings of an epileptic patient to locate the change points and determine segments corresponding to different brain states.
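The run-length recursion underlying online Bayesian change-point detection can be sketched for the simplest conjugate case: a Gaussian observation model with known variance, a Normal prior on the mean, and a constant hazard rate. The paper's algorithm targets ECoG segmentation and does not assume known parameters, so the code below is only a structural illustration of the online update.

```python
import numpy as np
from scipy import stats

def bocpd_gaussian(x, hazard=1/100, mu0=0.0, var0=4.0, obs_var=1.0):
    """Run-length posterior for online Bayesian change-point detection.

    Gaussian likelihood with known obs_var, Normal(mu0, var0) prior on the
    mean, constant hazard rate. Returns, for each t, the most probable
    run length (time elapsed since the last change point)."""
    T = len(x)
    R = np.zeros((T + 1, T + 1))        # R[t, r] = P(run length = r | first t obs)
    R[0, 0] = 1.0
    mu = np.array([mu0])
    var = np.array([var0])              # per-run-length posterior over the mean
    map_run = np.zeros(T, dtype=int)
    for t, xt in enumerate(x):
        # predictive probability of x_t under each current run length
        pred = stats.norm.pdf(xt, mu, np.sqrt(var + obs_var))
        growth = R[t, :t + 1] * pred * (1 - hazard)     # run keeps growing
        cp = (R[t, :t + 1] * pred * hazard).sum()       # a change point occurs
        R[t + 1, 1:t + 2] = growth
        R[t + 1, 0] = cp
        R[t + 1] /= R[t + 1].sum()
        map_run[t] = R[t + 1].argmax()
        # conjugate update of the per-run-length mean posteriors
        new_var = 1.0 / (1.0 / var + 1.0 / obs_var)
        new_mu = new_var * (mu / var + xt / obs_var)
        mu = np.r_[mu0, new_mu]
        var = np.r_[var0, new_var]
    return map_run

# toy signal: the mean shifts at t = 100
rng = np.random.default_rng(2)
x = np.r_[rng.normal(0, 1, 100), rng.normal(3, 1, 100)]
run = bocpd_gaussian(x)
print("run length collapses near:", np.where(np.diff(run) < -20)[0])
```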
---
paper_title: Change-Point Detection of Climate Time Series by Nonparametric Method
paper_content:
In one of the data mining techniques, change-point detection is of importance in evaluating time series measured in real world. For decades this technique has been developed as a nonlinear dynamics. We apply the method for detecting the change points, Singular Spectrum Transformation (SST), to the climate time series. To know where the structures of climate data sets change can reveal a climate background. In this paper we discuss the structures of precipitation data in Kenya and Wrangel Island (Arctic land) by using the SST.
---
paper_title: Comparison of techniques for detection of discontinuities in temperature series
paper_content:
Several techniques for the detection of discontinuities in temperature series are evaluated. Eight homogenization techniques were compared using simulated datasets reproducing a vast range of possible situations. The simulated data represent homogeneous series and series having one or more steps. Although the majority of the techniques considered in this study perform very well, two methods seem to work slightly better than the others: the standard normal homogeneity test without trend, and the multiple linear regression technique. Both methods are distinctive because of their sensitivity concerning homogeneous series and their ability to detect one or several steps properly within an inhomogeneous series. Copyright © 2003 Environment Canada. Published by John Wiley & Sons, Ltd.
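One of the two methods singled out above, the standard normal homogeneity test, reduces to a maximized statistic over candidate break positions. The sketch below assumes the series has already been turned into standardized anomalies (for example, differences to a homogeneous reference) and uses a Monte Carlo null instead of tabulated critical values, which are not reproduced here.

```python
import numpy as np

def snht_statistic(x):
    """Standard normal homogeneity test statistic and most likely break.

    x is standardized first; T(k) = k*mean(z[:k])^2 + (n-k)*mean(z[k:])^2.
    Returns (T0, k_hat) with the break placed after index k_hat - 1."""
    x = np.asarray(x, float)
    n = len(x)
    z = (x - x.mean()) / x.std(ddof=1)
    ks = np.arange(1, n)
    c1 = np.cumsum(z)[:-1]                 # sums of the first k standardized values
    m1 = c1 / ks
    m2 = (z.sum() - c1) / (n - ks)
    T = ks * m1**2 + (n - ks) * m2**2
    return T.max(), int(ks[T.argmax()])

def snht_pvalue(x, n_sim=2000, seed=0):
    """Monte Carlo p-value under the 'no break' Gaussian null."""
    rng = np.random.default_rng(seed)
    t0, _ = snht_statistic(x)
    null = [snht_statistic(rng.normal(size=len(x)))[0] for _ in range(n_sim)]
    return (np.sum(np.array(null) >= t0) + 1) / (n_sim + 1)

# toy annual temperature anomalies with a 0.8-degree step after year 60
rng = np.random.default_rng(3)
temps = np.r_[rng.normal(0.0, 0.5, 60), rng.normal(0.8, 0.5, 40)]
t0, k = snht_statistic(temps)
print(f"T0 = {t0:.1f}, break after index {k - 1}, p ~ {snht_pvalue(temps):.3f}")
```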
---
paper_title: Audio segmentation for speech recognition using segment features
paper_content:
Audio segmentation is an essential preprocessing step in several audio processing applications with a significant impact e.g. on speech recognition performance. We introduce a novel framework which combines the advantages of different well known segmentation methods. An automatically estimated log-linear segment model is used to determine the segmentation of an audio stream in a holistic way by a maximum a posteriori decoding strategy, instead of classifying change points locally. A comparison to other segmentation techniques in terms of speech recognition performance is presented, showing a promising segmentation quality of our approach.
---
paper_title: Bayesian on-line spectral change point detection: a soft computing approach for on-line ASR
paper_content:
Current automatic speech recognition (ASR) works in off-line mode and needs prior knowledge of the stationary or quasi-stationary test conditions for expected word recognition accuracy. These requirements limit the application of ASR for real-world applications where test conditions are highly non-stationary and are not known a priori. This paper presents an innovative frame dynamic rapid adaptation and noise compensation technique for tracking highly non-stationary noises and its application for on-line ASR. The proposed algorithm is based on a soft computing model using Bayesian on-line inference for spectral change point detection (BOSCPD) in unknown non-stationary noises. BOSCPD is tested with the MCRA noise tracking technique for on-line rapid environmental change learning in different non-stationary noise scenarios. The test results show that the proposed BOSCPD technique reduces the delay in spectral change point detection significantly compared to the baseline MCRA and its derivatives. The proposed BOSCPD soft computing model is tested for joint additive and channel distortions compensation (JAC)-based on-line ASR in unknown test conditions using non-stationary noisy speech samples from the Aurora 2 speech database. The simulation results for the on-line AR show significant improvement in recognition accuracy compared to the baseline Aurora 2 distributed speech recognition (DSR) in batch-mode.
---
paper_title: Image change detection algorithms: a systematic survey
paper_content:
Detecting regions of change in multiple images of the same scene taken at different times is of widespread interest due to a large number of applications in diverse disciplines, including remote sensing, surveillance, medical diagnosis and treatment, civil infrastructure, and underwater sensing. This paper presents a systematic survey of the common processing steps and core decision rules in modern change detection algorithms, including significance and hypothesis testing, predictive models, the shading model, and background modeling. We also discuss important preprocessing methods, approaches to enforcing the consistency of the change mask, and principles for evaluating and comparing the performance of change detection algorithms. It is hoped that our classification of algorithms into a relatively small number of categories will provide useful guidance to the algorithm designer.
---
paper_title: Scalable Time Series Change Detection for Biomass Monitoring Using Gaussian Process.
paper_content:
Biomass monitoring, specifically, detecting changes in the biomass or vegetation of a geographical region, is vital for studying the carbon cycle of the system and has significant implications in the context of understanding climate change and its impacts. Recently, several time series change detection methods have been proposed to identify land cover changes in temporal profiles (time series) of vegetation collected using remote sensing instruments. In this paper, we adapt Gaussian process regression to detect changes in such time series in an online fashion. While Gaussian process (GP) has been widely used as a kernel based learning method for regression and classification, their applicability to massive spatio-temporal data sets, such as remote sensing data, has been limited owing to the high computational costs involved. In our previous work we proposed an efficient Toeplitz matrix based solution for scalable GP parameter estimation. In this paper we apply these solutions to a GP based change detection algorithm. The proposed change detection algorithm requires a memory footprint which is linear in the length of the input time series and runs in time which is quadratic to the length of the input time series. Experimental results show that both serial and parallel implementations of our proposed method achieve significant speedups over the serial implementation. Finally, we demonstrate the effectiveness of the proposed change detection method in identifying changes in Normalized Difference Vegetation Index (NDVI) data. Increasing availability of high resolution remote sensing data has encouraged researchers to extract knowledge from these massive spatio-temporal data sets in order to solve different problems pertaining to our ecosystem. Land use land cover (LULC) monitoring, specifically identifying changes in land cover, is one such problem that has significant applications in detecting deforestation, crop rotation, urbanization, forest fires, and other such phenomenon. The knowledge about the land cover changes can then be used by policy makers to take important decisions regarding urban planning, natural resource management, water source management, etc. In this paper we focus on the problem of identifying changes in the biomass or vegetation in a geographical region. Biomass is defined as the mass of living biological organisms in a unit area. In the context of this study, we restrict our monitoring to plant (specifically crop) biomass over large geographic regions. In recent years biomass monitoring is increasingly becoming important, as biomass is a great source of renewable energy. Moreover, biomass monitoring is also important from the changing climate perspective, as changes in climate are reflected in the change in biomass, and vice versa. The knowledge about biomass changes over time across a geographical region can be used to estimate quantitative biophysical parameters which can be incorporated into global climate models. The launch of NASA's Terra satellite in December of 1999, with the Moderate Resolution Imaging Spectroradiometer (MODIS) instrument aboard, introduced a new opportunity for terrestrial remote sensing. MODIS data sets represent a new and improved capability for terrestrial satellite remote sensing aimed at meeting the needs of global change research. With thirty-six spectral bands, seven designed for use in terrestrial application, MODIS provides daily coverage, of moderate spatial resolution, of most areas on the earth.
Land cover products are available in 250m, 500m, or 1000m resolutions (17). MODIS land products are generally available within weeks or even days
---
paper_title: Graph-Based Change-Point Detection
paper_content:
We consider the testing and estimation of change-points -- locations where the distribution abruptly changes -- in a data sequence. A new approach, based on scan statistics utilizing graphs representing the similarity between observations, is proposed. The graph-based approach is non-parametric, and can be applied to any data set as long as an informative similarity measure on the sample space can be defined. Accurate analytic approximations to the significance of graph-based scan statistics for both the single change-point and the changed interval alternatives are provided. Simulations reveal that the new approach has better power than existing approaches when the dimension of the data is moderate to high. The new approach is illustrated on two applications: The determination of authorship of a classic novel, and the detection of change in a network over time.
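A rough sketch of the graph-based scan statistic: build a similarity graph over the observations (here a Euclidean minimum spanning tree), count the edges that bridge each candidate split, and flag the split where that count is unusually small. The paper derives analytic approximations to the null distribution; the sketch below standardizes by random permutations instead, and the toy data and window sizes are illustrative.

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.spatial.distance import pdist, squareform

def mst_edges(X):
    """Edges of the Euclidean minimum spanning tree over the observations
    (exact duplicate points would need a small jitter, because zero
    distances are treated as missing edges by the sparse-graph routine)."""
    D = squareform(pdist(X))
    T = minimum_spanning_tree(D).tocoo()
    return list(zip(T.row, T.col))

def cross_counts(edges, order, n):
    """R(t): number of graph edges joining positions <= t to positions > t."""
    R = np.zeros(n - 1, int)
    for i, j in edges:
        lo, hi = sorted((order[i], order[j]))   # positions run from 1 to n
        R[lo - 1:hi - 1] += 1                   # the edge bridges every split lo <= t < hi
    return R

def graph_based_changepoint(X, n_perm=500, seed=0):
    """Standardize -R(t) by permutation and pick the most extreme split."""
    rng = np.random.default_rng(seed)
    n = len(X)
    edges = mst_edges(X)
    base = np.arange(1, n + 1)
    R_obs = cross_counts(edges, base, n)
    R_null = np.array([cross_counts(edges, rng.permutation(base), n)
                       for _ in range(n_perm)])
    Z = (R_null.mean(axis=0) - R_obs) / (R_null.std(axis=0) + 1e-12)
    return int(Z.argmax()) + 1, Z               # split after this many observations

# toy multivariate sequence with a distribution change after 80 observations
rng = np.random.default_rng(4)
X = np.vstack([rng.normal(0, 1, (80, 5)), rng.normal(1.5, 1, (70, 5))])
t_hat, Z = graph_based_changepoint(X)
print("estimated change after observation", t_hat, "max Z =", round(Z.max(), 2))
```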
---
paper_title: Multiple-change-point detection for high dimensional time series via sparsified binary segmentation
paper_content:
Time series segmentation, which is also known as multiple-change-point detection, is a well-established problem. However, few solutions have been designed specifically for high dimensional situations. Our interest is in segmenting the second-order structure of a high dimensional time series. In a generic step of a binary segmentation algorithm for multivariate time series, one natural solution is to combine cumulative sum statistics obtained from local periodograms and cross-periodograms of the components of the input time series. However, the standard 'maximum' and 'average' methods for doing so often fail in high dimensions when, for example, the change points are sparse across the panel or the cumulative sum statistics are spuriously large. We propose the sparsified binary segmentation algorithm which aggregates the cumulative sum statistics by adding only those that pass a certain threshold. This 'sparsifying' step reduces the influence of irrelevant noisy contributions, which is particularly beneficial in high dimensions. To show the consistency of sparsified binary segmentation, we introduce the multivariate locally stationary wavelet model for time series, which is a separate contribution of this work.
---
paper_title: Gaussian process for nonstationary time series prediction
paper_content:
In this paper, the problem of time series prediction is studied. A Bayesian procedure based on Gaussian process models using a nonstationary covariance function is proposed. Experiments proved the approach's effectiveness, with excellent prediction and good tracking. The conceptual simplicity and good performance of Gaussian process models should make them very attractive for a wide range of problems.
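A small illustration of Gaussian-process time-series prediction with a nonstationary covariance, here a dot-product (linear) term added to an RBF term plus observation noise in scikit-learn; the specific covariance function and the toy data are assumptions for the sketch, not those used in the paper.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, DotProduct, WhiteKernel

# training data: a trend plus seasonality observed at irregular times
rng = np.random.default_rng(5)
t_train = np.sort(rng.uniform(0, 10, 80))[:, None]
y_train = 0.5 * t_train.ravel() + np.sin(2 * t_train.ravel()) + rng.normal(0, 0.2, 80)

# nonstationary covariance: linear (DotProduct) trend + RBF local structure + noise
kernel = DotProduct() + RBF(length_scale=1.0) + WhiteKernel(noise_level=0.05)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(t_train, y_train)

# predict ahead with uncertainty; wide intervals flag unreliable extrapolation
t_test = np.linspace(0, 12, 25)[:, None]
mean, std = gp.predict(t_test, return_std=True)
for tt, m, s in zip(t_test.ravel()[::6], mean[::6], std[::6]):
    print(f"t={tt:5.2f}  prediction={m:6.2f} +/- {2*s:.2f}")
```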
---
paper_title: Estimation and comparison of multiple change-point models
paper_content:
This paper provides a new Bayesian approach for models with multiple change points. The centerpiece of the approach is a formulation of the change-point model in terms of a latent discrete state variable that indicates the regime from which a particular observation has been drawn. This state variable is specified to evolve according to a discrete-time discrete-state Markov process with the transition probabilities constrained so that the state variable can either stay at the current value or jump to the next higher value. This parameterization exactly reproduces the change point model. The model is estimated by Markov chain Monte Carlo methods using an approach that is based on Chib (1996). This methodology is quite valuable since it allows for the fitting of more complex change point models than was possible before. Methods for the computation of Bayes factors are also developed. All the techniques are illustrated using simulated and real data sets.
---
paper_title: Long-term temperature trends and variability on Spitsbergen: the extended Svalbard Airport temperature series, 1898–2012
paper_content:
One of the few long instrumental records available for the Arctic is the Svalbard Airport composite series that hitherto began in 1911, with observations made on Spitsbergen, the largest island in the Svalbard Archipelago. This record has now been extended to 1898 with the inclusion of observations made by hunting and scientific expeditions. Temperature has been observed almost continuously in Svalbard since 1898, although at different sites. It has therefore been possible to create one composite series for Svalbard Airport covering the period 1898–2012, and this valuable new record is presented here. The series reveals large temperature variability on Spitsbergen, with the early 20th century warming as one striking feature: an abrupt change from the cold 1910s to the local maxima of the 1930s and 1950s. With the inclusion of the new data it is possible to show that the 1910s were colder than the years at the start of the series. From the 1960s, temperatures have increased, so the present temperature level is significantly higher than at any earlier period in the instrumental history. For the entire period, and for all seasons, there are positive, statistically significant trends. Regarding the annual mean, the total trend is 2.6°C/century, whereas the largest trend is in spring, at 3.9°C/century. In Europe, it is the Svalbard Archipelago that has experienced the greatest temperature increase during the latest three decades. The composite series may be downloaded from the home page of the Norwegian Meteorological Institute and should be used with reference to the present article. Keywords: Arctic; homogenization; Spitsbergen; Svalbard; temperature records; temperature trends. (Published: 22 January 2014) Citation: Polar Research 2014, 33, 21349, http://dx.doi.org/10.3402/polar.v33.21349
---
paper_title: Time Series Analysis and Its Applications
paper_content:
Characteristics of time series; Time series regression and exploratory data analysis; ARIMA models; Spectral analysis and filtering; Additional time domain topics; State-space models; Statistical methods in the frequency domain.
---
paper_title: Semi-supervised time series classification
paper_content:
The problem of time series classification has attracted great interest in the last decade. However current research assumes the existence of large amounts of labeled training data. In reality, such data may be very difficult or expensive to obtain. For example, it may require the time and expertise of cardiologists, space launch technicians, or other domain specialists. As in many other domains, there are often copious amounts of unlabeled data available. For example, the PhysioBank archive contains gigabytes of ECG data. In this work we propose a semi-supervised technique for building time series classifiers. While such algorithms are well known in text domains, we will show that special considerations must be made to make them both efficient and effective for the time series domain. We evaluate our work with a comprehensive set of experiments on diverse data sources including electrocardiograms, handwritten documents, and video datasets. The experimental results demonstrate that our approach requires only a handful of labeled examples to construct accurate classifiers.
---
paper_title: Clustering of time-series subsequences is meaningless: implications for previous and future research
paper_content:
Time series data is perhaps the most frequently encountered type of data examined by the data mining community. Clustering is perhaps the most frequently used data mining algorithm, being useful in its own right as an exploratory technique, and also as a subroutine in more complex data mining algorithms such as rule discovery, indexing, summarization, anomaly detection, and classification. Given these two facts, it is hardly surprising that time series clustering has attracted much attention. The data to be clustered can be in one of two formats: many individual time series, or a single time series, from which individual time series are extracted with a sliding window. Given the recent explosion of interest in streaming data and online algorithms, the latter case has received much attention. We make an amazing claim. Clustering of streaming time series is completely meaningless. More concretely, clusters extracted from streaming time series are forced to obey a certain constraint that is pathologically unlikely to be satisfied by any dataset, and because of this, the clusters extracted by any clustering algorithm are essentially random. While this constraint can be intuitively demonstrated with a simple illustration and is simple to prove, it has never appeared in the literature. We can justify calling our claim surprising, since it invalidates the contribution of dozens of previously published papers. We will justify our claim with a theorem, illustrative examples, and a comprehensive set of experiments on reimplementations of previous work.
---
paper_title: Change-Point Detection in Time-Series Data by Relative Density-Ratio Estimation
paper_content:
The objective of change-point detection is to discover abrupt property changes lying behind time-series data. In this paper, we present a novel statistical change-point detection algorithm that is based on non-parametric divergence estimation between two retrospective segments. Our method uses the relative Pearson divergence as a divergence measure, and it is accurately and efficiently estimated by a method of direct density-ratio estimation. Through experiments on real-world human-activity sensing, speech, and Twitter datasets, we demonstrate the usefulness of the proposed method.
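A compact sketch of the retrospective two-segment scoring described above: subsequences from the windows before and after each candidate point are compared with an α-relative Pearson divergence estimated in closed form by RuLSIF. Hyper-parameters are simplified here (median-heuristic kernel width, fixed regularizer, fixed window sizes), whereas the method would normally select them by cross-validation.

```python
import numpy as np

def rulsif_divergence(X_ref, X_test, alpha=0.1, lam=0.1):
    """alpha-relative Pearson divergence estimate via RuLSIF (one direction).

    Kernel centers are the test-window samples; the Gaussian kernel width is
    set by the median-distance heuristic and lam is the ridge regularizer."""
    C = X_test                                          # kernel centers
    d2_rc = ((X_ref[:, None, :] - C[None, :, :]) ** 2).sum(-1)
    sigma = np.sqrt(np.median(d2_rc))                   # median heuristic
    def gram(A):
        d2 = ((A[:, None, :] - C[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * sigma ** 2))
    K_te, K_re = gram(X_test), gram(X_ref)
    n_te, n_re = len(X_test), len(X_ref)
    H = alpha * K_te.T @ K_te / n_te + (1 - alpha) * K_re.T @ K_re / n_re
    h = K_te.mean(axis=0)
    theta = np.linalg.solve(H + lam * np.eye(len(C)), h)
    g_te, g_re = K_te @ theta, K_re @ theta             # estimated relative ratios
    return (-alpha * (g_te ** 2).mean() / 2
            - (1 - alpha) * (g_re ** 2).mean() / 2
            + g_te.mean() - 0.5)

def change_scores(y, k=10, n=30, alpha=0.1):
    """Symmetrized divergence between retrospective and prospective segments,
    each represented by n overlapping subsequences of length k."""
    hankel = lambda s: np.lib.stride_tricks.sliding_window_view(s, k)
    scores = []
    for t in range(n + k, len(y) - n - k):
        past = hankel(y[t - n - k:t])[:n]
        future = hankel(y[t:t + n + k])[:n]
        scores.append(rulsif_divergence(past, future, alpha)
                      + rulsif_divergence(future, past, alpha))
    return np.array(scores)

# toy series whose mean shifts at t = 150; the score peaks near the change
rng = np.random.default_rng(7)
y = np.r_[rng.normal(0, 1, 150), rng.normal(2, 1, 150)]
s = change_scores(y)
print("peak change score at t =", s.argmax() + 40)   # offset n + k = 40
```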
---
paper_title: A regularized kernel-based approach to unsupervised audio segmentation
paper_content:
We introduce a regularized kernel-based rule for unsupervised change detection based on a simpler version of the recently proposed kernel Fisher discriminant ratio. Compared to other kernel-based change detectors found in the literature, the proposed test statistic is easier to compute and has a known asymptotic distribution which can effectively be used to set the false alarm rate a priori. This technique is applied for segmenting tracks from TV shows, both for segmentation into semantically homogeneous sections (applause, movie, music, etc.) and for speaker diarization within the speech sections. On these tasks, the proposed approach outperforms other kernel-based tests and is competitive with a standard HMM-based supervised alternative.
---
paper_title: A novel changepoint detection algorithm
paper_content:
We propose an algorithm for simultaneously detecting and locating changepoints in a time series, and a framework for predicting the distribution of the next point in the series. The kernel of the algorithm is a system of equations that computes, for each index i, the probability that the last (most recent) change point occurred at i. We evaluate this algorithm by applying it to the change point detection problem and comparing it to the generalized likelihood ratio (GLR) algorithm. We find that our algorithm is as good as GLR, or better, over a wide range of scenarios, and that the advantage increases as the signal-to-noise ratio decreases.
---
paper_title: Optimal Composition of Real-Time Systems
paper_content:
Abstract Real-time systems are designed for environments in which the utility of actions is strongly time-dependent. Recent work by Dean, Horvitz and others has shown that anytime algorithms are a useful tool for real-time system design, since they allow computation time to be traded for decision quality. In order to construct complex systems, however, we need to be able to compose larger systems from smaller, reusable anytime modules. This paper addresses two basic problems associated with composition: how to ensure the interruptibility of the composed system; and how to allocate computation time optimally among the components. The first problem is solved by a simple and general construction that incurs only a small, constant penalty. The second is solved by an off-line compilation process. We show that the general compilation problem is NP-complete. However, efficient local compilation techniques, working on a single program structure at a time, yield globally optimal allocations for a large class of programs. We illustrate these results with two simple applications.
---
paper_title: Polishing the Right Apple: Anytime Classification Also Benefits Data Streams with Constant Arrival Times
paper_content:
Classification of items taken from data streams requires algorithms that operate in time sensitive and computationally constrained environments. Often, the available time for classification is not known a priori and may change as a consequence of external circumstances. Many traditional algorithms are unable to provide satisfactory performance while supporting the highly variable response times that exemplify such applications. In such contexts, anytime algorithms, which are amenable to trading time for accuracy, have been found to be exceptionally useful and constitute an area of increasing research activity. Previous techniques for improving anytime classification have generally been concerned with optimizing the probability of correctly classifying individual objects. However, as we shall see, serially optimizing the probability of correctly classifying individual objects K times, generally gives inferior results to batch optimizing the probability of correctly classifying K objects. In this work, we show that this simple observation can be exploited to improve overall classification performance by using an anytime framework to allocate resources among a set of objects buffered from a fast arriving stream. Our ideas are independent of object arrival behavior, and, perhaps unintuitively, even in data streams with constant arrival rates our technique exhibits a marked improvement in performance. The utility of our approach is demonstrated with extensive experimental evaluations conducted on a wide range of diverse datasets.
---
paper_title: Scalable machine learning for massive datasets: Fast summation algorithms
paper_content:
Huge data sets containing millions of training examples with a large number of attributes are relatively easy to gather. However one of the bottlenecks for successful inference is the computational complexity of machine learning algorithms. Most state-of-the-art nonparametric machine learning algorithms have a computational complexity of either O(N^2) or O(N^3), where N is the number of training examples. This has seriously restricted the use of massive data sets. The bottleneck computational primitive at the heart of various algorithms is the multiplication of a structured matrix with a vector, which we refer to as matrix-vector product (MVP) primitive. The goal of my thesis is to speed up some of these MVP primitives by fast approximate algorithms that scale as O(N) and also provide high accuracy guarantees. I use ideas from computational physics, scientific computing, and computational geometry to design these algorithms. The proposed algorithms have been applied to speed up kernel density estimation, optimal bandwidth estimation, projection pursuit, Gaussian process regression, implicit surface fitting, and ranking.
---
paper_title: An Algorithm Based on Singular Spectrum Analysis for Change-Point Detection
paper_content:
This paper is devoted to application of the singular-spectrum analysis to sequential detection of changes in time series. An algorithm of change-point detection in time series, based on sequential application of the singular-spectrum analysis is developed and studied. The algorithm is applied to different data sets and extensively studied numerically. For specific models, several numerical approximations to the error probabilities and the power function of the algorithm are obtained. Numerical comparisons with other methods are given.
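The sequential SSA idea can be sketched with a common singular-spectrum-transformation style score: compare the leading left singular subspaces of Hankel (trajectory) matrices built from a past window and a test window. The exact statistic and stopping rule of the paper differ from this sketch, and the window lengths and subspace dimension below are illustrative assumptions.

```python
import numpy as np

def sst_scores(y, w=30, k=10, r=3):
    """Singular-spectrum-transformation style change score.

    At each time t, Hankel (trajectory) matrices are built from a past window
    and a test window of length w using lagged vectors of length k; the score
    is 1 minus the squared largest singular value of the product of their
    leading r left singular subspaces (1 = orthogonal, 0 = identical)."""
    def hankel(seg):
        return np.column_stack([seg[i:i + k] for i in range(len(seg) - k + 1)])
    scores = np.zeros(len(y))
    for t in range(w, len(y) - w):
        U_past, _, _ = np.linalg.svd(hankel(y[t - w:t]), full_matrices=False)
        U_test, _, _ = np.linalg.svd(hankel(y[t:t + w]), full_matrices=False)
        overlap = np.linalg.svd(U_past[:, :r].T @ U_test[:, :r], compute_uv=False)
        scores[t] = 1.0 - overlap[0] ** 2
    return scores

# toy series: a frequency change at t = 200 produces a spike in the score
t = np.arange(400)
y = np.where(t < 200, np.sin(2 * np.pi * t / 25), np.sin(2 * np.pi * t / 10))
y = y + np.random.default_rng(8).normal(0, 0.1, len(t))
s = sst_scores(y)
print("score peaks near t =", int(s.argmax()))
```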
---
paper_title: Learning transportation mode from raw gps data for geographic applications on the web
paper_content:
Geographic information has spawned many novel Web applications where global positioning system (GPS) plays important roles in bridging the applications and end users. Learning knowledge from users' raw GPS data can provide rich context information for both geographic and mobile applications. However, so far, raw GPS data are still used directly without much understanding. In this paper, an approach based on supervised learning is proposed to automatically infer transportation mode from raw GPS data. The transportation mode, such as walking, driving, etc., implied in a user's GPS data can provide us valuable knowledge to understand the user. It also enables context-aware computing based on user's present transportation mode and design of an innovative user interface for Web users. Our approach consists of three parts: a change point-based segmentation method, an inference model and a post-processing algorithm based on conditional probability. The change point-based segmentation method was compared with two baselines including uniform duration based and uniform length based methods. Meanwhile, four different inference models including Decision Tree, Bayesian Net, Support Vector Machine (SVM) and Conditional Random Field (CRF) are studied in the experiments. We evaluated the approach using the GPS data collected by 45 users over six months period. As a result, beyond other two segmentation methods, the change point based method achieved a higher degree of accuracy in predicting transportation modes and detecting transitions between them. Decision Tree outperformed other inference models over the change point based segmentation method.
---
paper_title: Time Series Epenthesis: Clustering Time Series Streams Requires Ignoring Some Data
paper_content:
Given the pervasiveness of time series data in all human endeavors, and the ubiquity of clustering as a data mining application, it is somewhat surprising that the problem of time series clustering from a single stream remains largely unsolved. Most work on time series clustering considers the clustering of individual time series, e.g., gene expression profiles, individual heartbeats or individual gait cycles. The few attempts at clustering time series streams have been shown to be objectively incorrect in some cases, and in other cases shown to work only on the most contrived datasets by carefully adjusting a large set of parameters. In this work, we make two fundamental contributions. First, we show that the problem definition for time series clustering from streams currently used is inherently flawed, and a new definition is necessary. Second, we show that the Minimum Description Length (MDL) framework offers an efficient, effective and essentially parameter-free method for time series clustering. We show that our method produces objectively correct results on a wide variety of datasets from medicine, zoology and industrial process analyses.
---
paper_title: Using mobile phones to determine transportation modes
paper_content:
As mobile phones advance in functionality and capability, they are being used for more than just communication. Increasingly, these devices are being employed as instruments for introspection into habits and situations of individuals and communities. Many of the applications enabled by this new use of mobile phones rely on contextual information. The focus of this work is on one dimension of context, the transportation mode of an individual when outside. We create a convenient (no specific position and orientation setting) classification system that uses a mobile phone with a built-in GPS receiver and an accelerometer. The transportation modes identified include whether an individual is stationary, walking, running, biking, or in motorized transport. The overall classification system consists of a decision tree followed by a first-order discrete Hidden Markov Model and achieves an accuracy level of 93.6% when tested on a dataset obtained from sixteen individuals.
---
paper_title: An online kernel change detection algorithm
paper_content:
A number of abrupt change detection methods have been proposed in the past, among which are efficient model-based techniques such as the Generalized Likelihood Ratio (GLR) test. We consider the case where no accurate nor tractable model can be found, using a model-free approach, called Kernel change detection (KCD). KCD compares two sets of descriptors extracted online from the signal at each time instant: The immediate past set and the immediate future set. Based on the soft margin single-class Support Vector Machine (SVM), we build a dissimilarity measure in feature space between those sets, without estimating densities as an intermediary step. This dissimilarity measure is shown to be asymptotically equivalent to the Fisher ratio in the Gaussian case. Implementation issues are addressed; in particular, the dissimilarity measure can be computed online in input space. Simulation results on both synthetic signals and real music signals show the efficiency of KCD.
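The paper's KCD dissimilarity is built from two single-class SVMs in feature space; as a simpler kernel-based stand-in with the same before/after-window structure, the sketch below scores each instant with a (biased) maximum mean discrepancy between the immediate-past and immediate-future descriptor sets. The kernel width, window size, and toy descriptors are assumptions, not the paper's setup.

```python
import numpy as np

def mmd2(X, Y, sigma=1.0):
    """Biased squared maximum mean discrepancy with a Gaussian kernel."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * sigma ** 2))
    return k(X, X).mean() + k(Y, Y).mean() - 2 * k(X, Y).mean()

def kernel_change_scores(features, m=20, sigma=1.0):
    """Dissimilarity between the m descriptors before and after each instant."""
    scores = np.zeros(len(features))
    for t in range(m, len(features) - m):
        scores[t] = mmd2(features[t - m:t], features[t:t + m], sigma)
    return scores

# toy "descriptors": 4-D feature vectors whose mean shifts at t = 120
rng = np.random.default_rng(9)
F = np.vstack([rng.normal(0, 1, (120, 4)), rng.normal(2, 1, (80, 4))])
s = kernel_change_scores(F)
print("maximum dissimilarity at t =", int(s.argmax()))
```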
---
paper_title: Understanding mobility based on GPS data
paper_content:
Both recognizing human behavior and understanding a user's mobility from sensor data are critical issues in ubiquitous computing systems. As a kind of user behavior, the transportation modes, such as walking, driving, etc., that a user takes, can enrich the user's mobility with informative knowledge and provide pervasive computing systems with more context information. In this paper, we propose an approach based on supervised learning to infer people's motion modes from their GPS logs. The contribution of this work lies in the following two aspects. On one hand, we identify a set of sophisticated features, which are more robust to traffic condition than those other researchers ever used. On the other hand, we propose a graph-based post-processing algorithm to further improve the inference performance. This algorithm considers both the commonsense constraint of real world and typical user behavior based on location in a probabilistic manner. Using the GPS logs collected by 65 people over a period of 10 months, we evaluated our approach via a set of experiments. As a result, based on the change point-based segmentation method and Decision Tree-based inference model, the new features brought an eight percent improvement in inference accuracy over previous result, and the graph-based post-processing achieve a further four percent enhancement.
---
paper_title: Comprehensive Context Recognizer Based on Multimodal Sensors in a Smartphone
paper_content:
Recent developments in smartphones have increased the processing capabilities and equipped these devices with a number of built-in multimodal sensors, including accelerometers, gyroscopes, GPS interfaces, Wi-Fi access, and proximity sensors. Despite the fact that numerous studies have investigated the development of user-context aware applications using smartphones, these applications are currently only able to recognize simple contexts using a single type of sensor. Therefore, in this work, we introduce a comprehensive approach for context aware applications that utilizes the multimodal sensors in smartphones. The proposed system is not only able to recognize different kinds of contexts with high accuracy, but it is also able to optimize the power consumption since power-hungry sensors can be activated or deactivated at appropriate times. Additionally, the system is able to recognize activities wherever the smartphone is on a human's body, even when the user is using the phone to make a phone call, manipulate applications, play games, or listen to music. Furthermore, we also present a novel feature selection algorithm for the accelerometer classification module. The proposed feature selection algorithm helps select good features and eliminates bad features, thereby improving the overall accuracy of the accelerometer classifier. Experimental results show that the proposed system can classify eight activities with an accuracy of 92.43%.
---
paper_title: Unsupervised change analysis using supervised learning
paper_content:
We propose a formulation of a new problem, which we call change analysis, and a novel method for solving the problem. In contrast to the existing methods of change (or outlier) detection, the goal of change analysis goes beyond detecting whether or not any changes exist. Its ultimate goal is to find the explanation of the changes.While change analysis falls in the category of unsupervised learning in nature, we propose a novel approach based on supervised learning to achieve the goal. The key idea is to use a supervised classifier for interpreting the changes. A classifier should be able to discriminate between the two data sets if they actually come from two different data sources. In other words, we use a hypothetical label to train the supervised learner, and exploit the learner for interpreting the change. Experimental results using real data show the proposed approach is promising in change analysis as well as concept drift analysis.
---
paper_title: Learning transportation mode from raw gps data for geographic applications on the web
paper_content:
Geographic information has spawned many novel Web applications where global positioning system (GPS) plays important roles in bridging the applications and end users. Learning knowledge from users' raw GPS data can provide rich context information for both geographic and mobile applications. However, so far, raw GPS data are still used directly without much understanding. In this paper, an approach based on supervised learning is proposed to automatically infer transportation mode from raw GPS data. The transportation mode, such as walking, driving, etc., implied in a user's GPS data can provide us valuable knowledge to understand the user. It also enables context-aware computing based on user's present transportation mode and design of an innovative user interface for Web users. Our approach consists of three parts: a change point-based segmentation method, an inference model and a post-processing algorithm based on conditional probability. The change point-based segmentation method was compared with two baselines including uniform duration based and uniform length based methods. Meanwhile, four different inference models including Decision Tree, Bayesian Net, Support Vector Machine (SVM) and Conditional Random Field (CRF) are studied in the experiments. We evaluated the approach using the GPS data collected by 45 users over six months period. As a result, beyond other two segmentation methods, the change point based method achieved a higher degree of accuracy in predicting transportation modes and detecting transitions between them. Decision Tree outperformed other inference models over the change point based segmentation method.
---
paper_title: Using mobile phones to determine transportation modes
paper_content:
As mobile phones advance in functionality and capability, they are being used for more than just communication. Increasingly, these devices are being employed as instruments for introspection into habits and situations of individuals and communities. Many of the applications enabled by this new use of mobile phones rely on contextual information. The focus of this work is on one dimension of context, the transportation mode of an individual when outside. We create a convenient (no specific position and orientation setting) classification system that uses a mobile phone with a built-in GPS receiver and an accelerometer. The transportation modes identified include whether an individual is stationary, walking, running, biking, or in motorized transport. The overall classification system consists of a decision tree followed by a first-order discrete Hidden Markov Model and achieves an accuracy level of 93.6% when tested on a dataset obtained from sixteen individuals.
---
paper_title: Change Detection in Streaming Multivariate Data Using Likelihood Detectors
paper_content:
Change detection in streaming data relies on a fast estimation of the probability that the data in two consecutive windows come from different distributions. Choosing the criterion is one of the multitude of questions that need to be addressed when designing a change detection procedure. This paper gives a log-likelihood justification for two well-known criteria for detecting change in streaming multidimensional data: Kullback-Leibler (K-L) distance and Hotelling's T-square test for equal means (H). We propose a semiparametric log-likelihood criterion (SPLL) for change detection. Compared to the existing log-likelihood change detectors, SPLL trades some theoretical rigor for computational simplicity. We examine SPLL together with K-L and H on detecting induced change on 30 real data sets. The criteria were compared using the area under the respective Receiver Operating Characteristic (ROC) curve (AUC). SPLL was found to be on a par with H and better than K-L for the nonnormalized data, and better than both on the normalized data.
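A minimal sketch of the Hotelling-type criterion (the H statistic mentioned above) applied to two consecutive windows; the SPLL and K-L criteria are not implemented here, and the window contents are synthetic.

import numpy as np
from scipy import stats

def hotelling_t2(w1, w2):
    """Two-sample Hotelling T^2 statistic and the corresponding F-test p-value."""
    n1, d = w1.shape
    n2 = w2.shape[0]
    diff = w1.mean(axis=0) - w2.mean(axis=0)
    pooled = ((n1 - 1) * np.cov(w1, rowvar=False) +
              (n2 - 1) * np.cov(w2, rowvar=False)) / (n1 + n2 - 2)
    t2 = n1 * n2 / (n1 + n2) * diff @ np.linalg.solve(pooled, diff)
    f = t2 * (n1 + n2 - d - 1) / (d * (n1 + n2 - 2))
    return t2, stats.f.sf(f, d, n1 + n2 - d - 1)

rng = np.random.default_rng(1)
w1 = rng.normal(0.0, 1.0, size=(50, 3))
w2 = rng.normal(0.8, 1.0, size=(50, 3))         # mean shift between the windows
t2, p = hotelling_t2(w1, w2)
print(round(t2, 1), p)                          # a small p-value signals a change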
---
paper_title: Change-Point Detection in Time-Series Data by Relative Density-Ratio Estimation
paper_content:
The objective of change-point detection is to discover abrupt property changes lying behind time-series data. In this paper, we present a novel statistical change-point detection algorithm that is based on non-parametric divergence estimation between two retrospective segments. Our method uses the relative Pearson divergence as a divergence measure, and it is accurately and efficiently estimated by a method of direct density-ratio estimation. Through experiments on real-world human-activity sensing, speech, and Twitter datasets, we demonstrate the usefulness of the proposed method.
---
paper_title: Cusum techniques for timeslot sequences with applications to network surveillance
paper_content:
We develop two cusum change-point detection algorithms for data network monitoring applications where numerous and various performance and reliability metrics are available to aid with the early identification of realized or impending failures. We confront three significant challenges with our cusum algorithms: (1) the need for nonparametric techniques so that a wide variety of metrics can be included in the monitoring process, (2) the need to handle time varying distributions for the metrics that reflect natural cycles in work load and traffic patterns, and (3) the need to be computationally efficient with the massive amounts of data that are available for processing. The only critical assumption we make when developing the algorithms is that suitably transformed observations within a defined timeslot structure are independent and identically distributed under normal operating conditions. To facilitate practical implementations of the algorithms, we present asymptotically valid thresholds. Our research was motivated by a real-world application and we use that context to guide the design of a simulation study that examines the sensitivity of the cusum algorithms.
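For contrast with the nonparametric timeslot variant developed in the paper, the following is only the textbook one-sided CUSUM for an upward mean shift; the allowance k and threshold h are assumed tuning parameters.

import numpy as np

def cusum_alarm(x, target_mean=0.0, k=0.5, h=5.0):
    """Return the index of the first alarm of a one-sided CUSUM, or None if no alarm."""
    s = 0.0
    for t, xt in enumerate(x):
        s = max(0.0, s + (xt - target_mean - k))   # accumulate evidence of an upward shift
        if s > h:
            return t
    return None

rng = np.random.default_rng(2)
x = np.concatenate([rng.normal(0.0, 1.0, 200), rng.normal(1.5, 1.0, 200)])
print(cusum_alarm(x))    # alarm shortly after the shift at index 200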
---
paper_title: Multiple-change-point detection for high dimensional time series via sparsified binary segmentation
paper_content:
Time series segmentation, which is also known as multiple-change-point detection, is a well-established problem. However, few solutions have been designed specifically for high dimensional situations. Our interest is in segmenting the second-order structure of a high dimensional time series. In a generic step of a binary segmentation algorithm for multivariate time series, one natural solution is to combine cumulative sum statistics obtained from local periodograms and cross-periodograms of the components of the input time series. However, the standard 'maximum' and 'average' methods for doing so often fail in high dimensions when, for example, the change points are sparse across the panel or the cumulative sum statistics are spuriously large. We propose the sparsified binary segmentation algorithm which aggregates the cumulative sum statistics by adding only those that pass a certain threshold. This 'sparsifying' step reduces the influence of irrelevant noisy contributions, which is particularly beneficial in high dimensions. To show the consistency of sparsified binary segmentation, we introduce the multivariate locally stationary wavelet model for time series, which is a separate contribution of this work.
---
paper_title: A unifying framework for detecting outliers and change points from non-stationary time series data
paper_content:
We are concerned with the issues of outlier detection and change point detection from a data stream. In the area of data mining, there has been increased interest in these issues since the former is related to fraud detection, rare event discovery, etc., while the latter is related to event/trend change detection, activity monitoring, etc. Specifically, it is important to consider the situation where the data source is non-stationary, since the nature of the data source may change over time in real applications. Although in most previous work outlier detection and change point detection have not been related explicitly, this paper presents a unifying framework for dealing with both of them on the basis of the theory of on-line learning of non-stationary time series. In this framework a probabilistic model of the data source is incrementally learned using an on-line discounting learning algorithm, which can track the changing data source adaptively by forgetting the effect of past data gradually. Then the score for any given data is calculated to measure its deviation from the learned model, with a higher score indicating a high possibility of being an outlier. Further, change points in a data stream are detected by applying this scoring method to a time series of moving-averaged prediction losses under the learned model. Specifically, we develop efficient algorithms for on-line discounting learning of auto-regression models from time series data, and demonstrate the validity of our framework through simulation and experimental applications to stock market data analysis.
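A toy univariate version of the discounting idea: a Gaussian model whose mean and variance are updated with a forgetting factor, with the negative log-likelihood of each new point as its outlier score. The auto-regression machinery of the paper is not reproduced, and the forgetting factor r is an assumed parameter.

import numpy as np

def discounting_scores(x, r=0.02):
    """Outlier scores under a Gaussian model updated with forgetting factor r."""
    mu, var = float(x[0]), 1.0
    scores = [0.0]
    for xt in x[1:]:
        scores.append(0.5 * (np.log(2 * np.pi * var) + (xt - mu) ** 2 / var))
        mu = (1 - r) * mu + r * xt                 # discounted mean
        var = (1 - r) * var + r * (xt - mu) ** 2   # discounted variance
    return np.array(scores)

rng = np.random.default_rng(3)
x = np.concatenate([rng.normal(0.0, 1.0, 300), rng.normal(4.0, 1.0, 300)])
print(int(np.argmax(discounting_scores(x))))       # the largest scores appear right after t = 300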
---
paper_title: Break detection in the covariance structure of multivariate time series models
paper_content:
In this paper, we introduce an asymptotic test procedure to assess the stability of volatilities and cross-volatilities of linear and nonlinear multivariate time series models. The test is very flexible as it can be applied, for example, to many of the multivariate GARCH models established in the literature, and also works well in the case of high dimensionality of the underlying data. Since it is nonparametric, the procedure avoids the difficulties associated with parametric model selection, model fitting and parameter estimation. We provide the theoretical foundation for the test and demonstrate its applicability via a simulation study and an analysis of financial data. Extensions to multiple changes and the case of infinite fourth moments are also discussed.
---
paper_title: Change-Point Detection with Feature Selection in High-Dimensional Time-Series Data
paper_content:
Change-point detection is the problem of finding abrupt changes in time series, and it is attracting a lot of attention in the artificial intelligence and data mining communities. In this paper, we present a supervised learning based change-point detection approach in which we use the separability of past and future data at time t (they are labeled as +1 and -1) as the plausibility of change-points. Based on this framework, we propose a detection measure called the additive Hilbert-Schmidt Independence Criterion (aHSIC), which is defined as the weighted sum of the HSIC scores between features and their corresponding binary labels. Here, the HSIC is a kernel-based independence measure. A novel aspect of the aHSIC score is that it can incorporate feature selection during its detection measure estimation. More specifically, we first select features that are responsible for an abrupt change by using a supervised approach, and then compute the aHSIC score by employing the selected features. Thus, compared with traditional detection measures, our approach tends to be robust to noise features, and so the aHSIC is suitable for use with high-dimensional time-series change-point detection problems. We demonstrate that the proposed change-point detection method is promising through extensive experiments on synthetic data sets and a real-world human activity data set.
---
paper_title: PCA Feature Extraction for Change Detection in Multidimensional Unlabeled Data
paper_content:
When classifiers are deployed in real-world applications, it is assumed that the distribution of the incoming data matches the distribution of the data used to train the classifier. This assumption is often incorrect, which necessitates some form of change detection or adaptive classification. While there has been a lot of work on change detection based on the classification error monitored over the course of the operation of the classifier, finding changes in multidimensional unlabeled data is still a challenge. Here, we propose to apply principal component analysis (PCA) for feature extraction prior to the change detection. Supported by a theoretical example, we argue that the components with the lowest variance should be retained as the extracted features because they are more likely to be affected by a change. We chose a recently proposed semiparametric log-likelihood change detection criterion that is sensitive to changes in both mean and variance of the multidimensional distribution. An experiment with 35 datasets and an illustration with a simple video segmentation demonstrate the advantage of using extracted features compared to raw data. Further analysis shows that feature extraction through PCA is beneficial, specifically for data with multiple balanced classes.
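A sketch of the proposed pre-processing: fit PCA on a change-free reference window, keep the components with the smallest variance, and monitor the projections. Here a simple standardized mean-shift score replaces the SPLL criterion used in the paper, and the data are synthetic.

import numpy as np

def lowest_variance_projector(reference, k):
    """Mean and the k principal directions of smallest variance of the reference window."""
    cov = np.cov(reference - reference.mean(axis=0), rowvar=False)
    _, eigvecs = np.linalg.eigh(cov)               # eigenvalues come out in ascending order
    return reference.mean(axis=0), eigvecs[:, :k]

rng = np.random.default_rng(4)
scales = np.arange(1, 11, dtype=float)
reference = rng.normal(size=(500, 10)) * scales
window = rng.normal(size=(200, 10)) * scales
window[:, 0] += 2.0                                # shift along a low-variance coordinate

mean, proj = lowest_variance_projector(reference, k=3)
ref_feat = (reference - mean) @ proj
win_feat = (window - mean) @ proj
z = (win_feat.mean(axis=0) - ref_feat.mean(axis=0)) / (ref_feat.std(axis=0) / np.sqrt(len(win_feat)))
print(np.round(np.abs(z), 1))                      # a large |z| on one extracted feature flags the change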
---
paper_title: Change Detection in Multivariate Datastreams: Likelihood and Detectability Loss
paper_content:
We address the problem of detecting changes in multivariate datastreams, and we investigate the intrinsic difficulty that change-detection methods have to face when the data dimension scales. In particular, we consider a general approach where changes are detected by comparing the distribution of the log-likelihood of the datastream over different time windows. Despite the fact that this approach constitutes the frame of several change-detection methods, its effectiveness when the data dimension scales has never been investigated, which is indeed the goal of our paper. We show that the magnitude of the change can be naturally measured by the symmetric Kullback-Leibler divergence between the pre- and post-change distributions, and that the detectability of a change of a given magnitude worsens when the data dimension increases. This problem, which we refer to as detectability loss, is due to the linear relationship between the variance of the log-likelihood and the data dimension. We analytically derive the detectability loss on Gaussian-distributed datastreams, and empirically demonstrate that this problem also holds on real-world datasets and that it can be harmful even at low data dimensions (say, 10).
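The change magnitude mentioned above can be estimated by fitting Gaussians to the pre- and post-change windows and computing their symmetric Kullback-Leibler divergence; the sketch below does only this measurement and does not reproduce the paper's detectability analysis.

import numpy as np

def gaussian_kl(mu0, cov0, mu1, cov1):
    """KL( N(mu0, cov0) || N(mu1, cov1) )."""
    d = len(mu0)
    cov1_inv = np.linalg.inv(cov1)
    diff = mu1 - mu0
    _, logdet0 = np.linalg.slogdet(cov0)
    _, logdet1 = np.linalg.slogdet(cov1)
    return 0.5 * (np.trace(cov1_inv @ cov0) + diff @ cov1_inv @ diff - d + logdet1 - logdet0)

def symmetric_kl(window0, window1):
    mu0, cov0 = window0.mean(axis=0), np.cov(window0, rowvar=False)
    mu1, cov1 = window1.mean(axis=0), np.cov(window1, rowvar=False)
    return gaussian_kl(mu0, cov0, mu1, cov1) + gaussian_kl(mu1, cov1, mu0, cov0)

rng = np.random.default_rng(5)
w0 = rng.normal(size=(500, 5))
w1 = rng.normal(size=(500, 5))
w1[:, 0] += 1.0                                    # mean shift in one coordinate
print(round(symmetric_kl(w0, w1), 2))              # roughly 1.0 for a unit shift in one coordinate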
---
paper_title: Change-Point Detection in Time-Series Data Based on Subspace Identification
paper_content:
In this paper, we propose a series of algorithms for detecting change points in time-series data based on subspace identification, a geometric approach for estimating linear state-space models behind time-series data. Our algorithms are derived from the principle that the subspace spanned by the columns of an observability matrix and the one spanned by the subsequences of time-series data are approximately equivalent. In this paper, we derive a batch-type algorithm applicable to ordinary time-series data, i.e. consisting of only output series, and then introduce the online version of the algorithm and an extension that handles input-output time-series data. We illustrate the effectiveness of our algorithms with comparative experiments using some artificial and real datasets.
---
paper_title: Change-Point Detection of Climate Time Series by Nonparametric Method
paper_content:
As one of the data mining techniques, change-point detection is important for evaluating time series measured in the real world. For decades this technique has been developed in the field of nonlinear dynamics. We apply a method for detecting change points, the Singular Spectrum Transformation (SST), to climate time series. Knowing where the structures of climate data sets change can reveal the underlying climate background. In this paper we discuss the structures of precipitation data in Kenya and Wrangel Island (Arctic land) by using the SST.
---
paper_title: An Algorithm Based on Singular Spectrum Analysis for Change-Point Detection
paper_content:
This paper is devoted to application of the singular-spectrum analysis to sequential detection of changes in time series. An algorithm of change-point detection in time series, based on sequential application of the singular-spectrum analysis is developed and studied. The algorithm is applied to different data sets and extensively studied numerically. For specific models, several numerical approximations to the error probabilities and the power function of the algorithm are obtained. Numerical comparisons with other methods are given.
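A compact version of the singular-spectrum change score: compare the leading left singular subspace of a trajectory (Hankel) matrix built before time t with the leading direction of the one built after t. The window width, number of columns and subspace rank below are assumed parameters, not values from the papers.

import numpy as np

def trajectory(x, start, w, n):
    """Hankel-type matrix whose n columns are lagged subsequences of length w."""
    return np.column_stack([x[start + i: start + i + w] for i in range(n)])

def sst_score(x, t, w=20, n=20, r=3):
    """1 - squared projection of the leading 'future' direction onto the past r-dimensional subspace."""
    past = trajectory(x, t - w - n + 1, w, n)
    future = trajectory(x, t, w, n)
    u_past, _, _ = np.linalg.svd(past, full_matrices=False)
    u_future, _, _ = np.linalg.svd(future, full_matrices=False)
    proj = u_past[:, :r].T @ u_future[:, 0]
    return 1.0 - float(proj @ proj)

rng = np.random.default_rng(6)
t_axis = np.arange(600)
x = np.sin(0.2 * t_axis) + 0.1 * rng.normal(size=600)
x[300:] = np.sin(0.05 * t_axis[300:]) + 0.1 * rng.normal(size=300)   # change in frequency

scores = [sst_score(x, t) for t in range(60, 520)]
print(60 + int(np.argmax(scores)))                 # the score peaks close to the change at t = 300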
---
paper_title: Scalable Time Series Change Detection for Biomass Monitoring Using Gaussian Process.
paper_content:
Biomass monitoring, specifically, detecting changes in the biomass or vegetation of a geographical region, is vital for studying the carbon cycle of the system and has significant implications in the context of understanding climate change and its impacts. Recently, several time series change detection methods have been proposed to identify land cover changes in temporal profiles (time series) of vegetation collected using remote sensing instruments. In this paper, we adapt Gaussian process regression to detect changes in such time series in an online fashion. While Gaussian processes (GPs) have been widely used as kernel-based learning methods for regression and classification, their applicability to massive spatio-temporal data sets, such as remote sensing data, has been limited owing to the high computational costs involved. In our previous work we proposed an efficient Toeplitz matrix based solution for scalable GP parameter estimation. In this paper we apply these solutions to a GP based change detection algorithm. The proposed change detection algorithm requires a memory footprint which is linear in the length of the input time series and runs in time which is quadratic in the length of the input time series. Experimental results show that both serial and parallel implementations of our proposed method achieve significant speedups over the serial implementation. Finally, we demonstrate the effectiveness of the proposed change detection method in identifying changes in Normalized Difference Vegetation Index (NDVI) data. Increasing availability of high resolution remote sensing data has encouraged researchers to extract knowledge from these massive spatio-temporal data sets in order to solve different problems pertaining to our ecosystem. Land use land cover (LULC) monitoring, specifically identifying changes in land cover, is one such problem that has significant applications in detecting deforestation, crop rotation, urbanization, forest fires, and other such phenomena. The knowledge about the land cover changes can then be used by policy makers to take important decisions regarding urban planning, natural resource management, water source management, etc. In this paper we focus on the problem of identifying changes in the biomass or vegetation in a geographical region. Biomass is defined as the mass of living biological organisms in a unit area. In the context of this study, we restrict our monitoring to plant (specifically crop) biomass over large geographic regions. In recent years biomass monitoring has become increasingly important, as biomass is a great source of renewable energy. Moreover, biomass monitoring is also important from the changing climate perspective, as changes in climate are reflected in the change in biomass, and vice versa. The knowledge about biomass changes over time across a geographical region can be used to estimate quantitative biophysical parameters which can be incorporated into global climate models. The launch of NASA's Terra satellite in December of 1999, with the Moderate Resolution Imaging Spectroradiometer (MODIS) instrument aboard, introduced a new opportunity for terrestrial remote sensing. MODIS data sets represent a new and improved capability for terrestrial satellite remote sensing aimed at meeting the needs of global change research. With thirty-six spectral bands, seven designed for use in terrestrial application, MODIS provides daily coverage, of moderate spatial resolution, of most areas on the earth. Land cover products are available in 250m, 500m, or 1000m resolutions (17). MODIS land products are generally available within weeks or even days
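A strongly simplified sketch of GP-based change detection on an NDVI-like seasonal series using scikit-learn (the scalable Toeplitz-based estimation that is the point of the paper is not reproduced); the periodic kernel, its fixed periodicity and the 3-sigma rule are assumptions of the example.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import ExpSineSquared, WhiteKernel

rng = np.random.default_rng(7)
t = np.arange(200, dtype=float)
y = np.sin(2 * np.pi * t / 23) + 0.1 * rng.normal(size=200)   # NDVI-like seasonal cycle
y[150:] -= 0.8                                                # abrupt drop, e.g. a land cover change

train = 120                                                   # assumed change-free history
kernel = (1.0 * ExpSineSquared(length_scale=1.0, periodicity=23.0, periodicity_bounds="fixed")
          + WhiteKernel(noise_level=0.01))
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(t[:train, None], y[:train])

mean, std = gp.predict(t[:, None], return_std=True)
z = np.abs(y - mean) / std
print(np.where(z > 3)[0][:5])        # flagged indices concentrate after the drop at t = 150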
---
paper_title: Change-point detection for recursive Bayesian geoacoustic inversions
paper_content:
In order to carry out geoacoustic inversion in low signal-to-noise ratio (SNR) conditions, extended duration observations coupled with source and/or receiver motion may be necessary. As a result, change in the underlying model parameters due to time or space is anticipated. In this paper, an inversion method is proposed for cases when the model parameters change abruptly or slowly. A model parameter change-point detection method is developed to detect the change in the model parameters using the importance samples and corresponding weights that are already available from the recursive Bayesian inversion. If the model parameters change abruptly, a change-point will be detected and the inversion will restart with the pulse measurement after the change-point. If the model parameters change gradually, the inversion (based on constant model parameters) may proceed until the accumulated model parameter mismatch is significant and triggers the detection of a change-point. These change-point detections form the heu...
---
paper_title: A Bayesian Analysis for Change Point Problems
paper_content:
A sequence of observations undergoes sudden changes at unknown times. We model the process by supposing that there is an underlying sequence of parameters partitioned into contiguous blocks of equal parameter values; the beginning of each block is said to be a change point. Observations are then assumed to be independent in different blocks given the sequence of parameters. In a Bayesian analysis it is necessary to give probability distributions to both the change points and the parameters. We use product partition models (Barry and Hartigan 1992), which assume that the probability of any partition is proportional to a product of prior cohesions, one for each block in the partition, and that given the blocks the parameters in different blocks have independent prior distributions. Given the observations a new product partition model holds, with posterior cohesions for the blocks and new independent block posterior distributions for parameters. The product model thus provides a convenient machinery for allo...
---
paper_title: Gaussian process for nonstationary time series prediction
paper_content:
In this paper, the problem of time series prediction is studied. A Bayesian procedure based on Gaussian process models using a nonstationary covariance function is proposed. Experiments demonstrate the effectiveness of the approach, with excellent prediction and good tracking. The conceptual simplicity and good performance of Gaussian process models should make them very attractive for a wide range of problems.
---
paper_title: Gaussian Process Change Point Models
paper_content:
We combine Bayesian online change point detection with Gaussian processes to create a nonparametric time series model which can handle change points. The model can be used to locate change points in an online manner; and, unlike other Bayesian online change point detection algorithms, is applicable when temporal correlations in a regime are expected. We show three variations on how to apply Gaussian processes in the change point context, each with their own advantages. We present methods to reduce the computational burden of these models and demonstrate it on several real world data sets.
---
paper_title: Bayesian online changepoint detection to improve transparency in human-machine interaction systems
paper_content:
This paper discusses a way to improve transparency in human-machine interaction systems when no force sensors are available for either the human or the machine. In most cases, position-error based control with fixed proportional-derivative (PD) controllers provides poor transparency. We resolve this issue by utilizing a gain switching method, switching the gains to high or low values in response to estimated force changes at the slave environment. Since the slave-environment forces change abruptly in real time, it is difficult to set the precise value of the threshold for these gain switching decisions. Moreover, the threshold value has to be observed and tuned in advance to utilize the gain switching approach. Thus, we adopt Bayesian online changepoint detection to detect the abrupt slave environment change. This changepoint detection is based on Bayes' theorem, which is typically used in probability and statistics applications to generate the posterior distribution of unknown parameters given both data and a prior distribution. We then show experimental results which demonstrate that Bayesian online changepoint detection can discriminate between free motion and hard contact. Additionally, we incorporate the online changepoint detection in our proposed gain switching controller and show the superiority of our proposed controller via experiments.
---
paper_title: Estimation and comparison of multiple change-point models
paper_content:
This paper provides a new Bayesian approach for models with multiple change points. The centerpiece of the approach is a formulation of the change-point model in terms of a latent discrete state variable that indicates the regime from which a particular observation has been drawn. This state variable is specified to evolve according to a discrete-time discrete-state Markov process with the transition probabilities constrained so that the state variable can either stay at the current value or jump to the next higher value. This parameterization exactly reproduces the change point model. The model is estimated by Markov chain Monte Carlo methods using an approach that is based on Chib (1996). This methodology is quite valuable since it allows for the fitting of more complex change point models than was possible before. Methods for the computation of Bayes factors are also developed. All the techniques are illustrated using simulated and real data sets.
---
paper_title: Online Bayesian change point detection algorithms for segmentation of epileptic activity
paper_content:
Epilepsy is a dynamic disease in which the brain transitions between different states. In this paper, we focus on the problem of identifying the time points, referred to as change points, where the transitions between these different states happen. A Bayesian change point detection algorithm that does not require the knowledge of the total number of states or the parameters of the probability distribution modeling the activity of epileptic brain in each of these states is developed in this paper. This algorithm works in online mode making it amenable for real-time monitoring. To reduce the quadratic complexity of this algorithm, an approximate algorithm with linear complexity in the number of data points is also developed. Finally, we use these algorithms on ECoG recordings of an epileptic patient to locate the change points and determine segments corresponding to different brain states.
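A minimal Bayesian online change point detector in the style of Adams and MacKay, with Gaussian observations of known variance, a conjugate Gaussian prior on the segment mean and a constant hazard rate; the run-length pruning, the approximate linear-time variant and the ECoG-specific modelling discussed above are omitted, and all hyperparameters are assumptions.

import numpy as np

def bocpd_gaussian(x, hazard=1 / 100, mu0=0.0, var0=4.0, obs_var=1.0):
    """Most likely run length (time since the last change) after each observation."""
    probs = np.array([1.0])          # run-length distribution, starts at r = 0
    means = np.array([mu0])          # posterior mean of the segment mean, per run length
    variances = np.array([var0])     # posterior variance of the segment mean, per run length
    map_runs = []
    for xt in x:
        pred_var = variances + obs_var
        pred = np.exp(-0.5 * (xt - means) ** 2 / pred_var) / np.sqrt(2 * np.pi * pred_var)
        growth = probs * pred * (1 - hazard)        # the current run length grows by one
        cp = np.sum(probs * pred * hazard)          # a change resets the run length to zero
        probs = np.concatenate(([cp], growth))
        probs /= probs.sum()
        # Conjugate Gaussian update of the posterior over the segment mean.
        post_var = 1.0 / (1.0 / variances + 1.0 / obs_var)
        post_mean = post_var * (means / variances + xt / obs_var)
        means = np.concatenate(([mu0], post_mean))
        variances = np.concatenate(([var0], post_var))
        map_runs.append(int(np.argmax(probs)))
    return map_runs

rng = np.random.default_rng(8)
x = np.concatenate([rng.normal(0.0, 1.0, 150), rng.normal(3.0, 1.0, 150)])
print(bocpd_gaussian(x)[145:160])    # the MAP run length drops back towards zero around t = 150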
---
paper_title: Graph-Based Change-Point Detection
paper_content:
We consider the testing and estimation of change-points -- locations where the distribution abruptly changes -- in a data sequence. A new approach, based on scan statistics utilizing graphs representing the similarity between observations, is proposed. The graph-based approach is non-parametric, and can be applied to any data set as long as an informative similarity measure on the sample space can be defined. Accurate analytic approximations to the significance of graph-based scan statistics for both the single change-point and the changed interval alternatives are provided. Simulations reveal that the new approach has better power than existing approaches when the dimension of the data is moderate to high. The new approach is illustrated on two applications: The determination of authorship of a classic novel, and the detection of change in a network over time.
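A rough sketch of the graph-based idea using a k-nearest-neighbour similarity graph: for a candidate split point, count the edges joining observations on opposite sides; an unusually small count hints at a change. The scan statistic, its standardization and the analytic significance approximations from the paper are not reproduced.

import numpy as np

def knn_edges(data, k=5):
    """Undirected edge set of a k-nearest-neighbour graph under Euclidean distance."""
    dist = np.linalg.norm(data[:, None, :] - data[None, :, :], axis=-1)
    np.fill_diagonal(dist, np.inf)
    edges = set()
    for i, row in enumerate(dist):
        for j in np.argsort(row)[:k]:
            edges.add((min(i, int(j)), max(i, int(j))))
    return edges

def cross_edges(edges, tau):
    """Number of edges joining an observation before index tau to one at or after tau."""
    return sum(1 for a, b in edges if (a < tau) != (b < tau))

rng = np.random.default_rng(9)
data = np.vstack([rng.normal(0.0, 1.0, size=(100, 4)),
                  rng.normal(1.2, 1.0, size=(100, 4))])
edges = knn_edges(data)
counts = {tau: cross_edges(edges, tau) for tau in range(50, 151, 10)}
print(min(counts, key=counts.get))        # the count is smallest near the true split at 100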
---
paper_title: A regularized kernel-based approach to unsupervised audio segmentation
paper_content:
We introduce a regularized kernel-based rule for unsupervised change detection based on a simpler version of the recently proposed kernel Fisher discriminant ratio. Compared to other kernel-based change detectors found in the literature, the proposed test statistic is easier to compute and has a known asymptotic distribution which can effectively be used to set the false alarm rate a priori. This technique is applied for segmenting tracks from TV shows, both for segmentation into semantically homogeneous sections (applause, movie, music, etc.) and for speaker diarization within the speech sections. On these tasks, the proposed approach outperforms other kernel-based tests and is competitive with a standard HMM-based supervised alternative.
---
paper_title: Kernel change-point analysis
paper_content:
We introduce a kernel-based method for change-point analysis within a sequence of temporal observations. Change-point analysis of an unlabelled sample of observations consists in, first, testing whether a change in the distribution occurs within the sample, and second, if a change occurs, estimating the change-point instant after which the distribution of the observations switches from one distribution to a different one. We propose a test statistic based upon the maximum kernel Fisher discriminant ratio as a measure of homogeneity between segments. We derive its limiting distribution under the null hypothesis (no change occurs), and establish its consistency under the alternative hypothesis (a change occurs). This allows us to build a statistical hypothesis testing procedure for testing the presence of a change-point, with a prescribed false-alarm probability and detection probability tending to one in the large-sample setting. If a change actually occurs, the test statistic also yields an estimator of the change-point location. Promising experimental results in temporal segmentation of mental tasks from BCI data and pop song indexation are presented.
---
paper_title: Complex network from pseudoperiodic time series: topology versus dynamics.
paper_content:
We construct complex networks from pseudoperiodic time series, with each cycle represented by a single node in the network. We investigate the statistical properties of these networks for various time series and find that time series with different dynamics exhibit distinct topological structures. Specifically, noisy periodic signals correspond to random networks, and chaotic time series generate networks that exhibit small world and scale free features. We show that this distinction in topological structure results from the hierarchy of unstable periodic orbits embedded in the chaotic attractor. Standard measures of structure in complex networks can therefore be applied to distinguish different dynamic regimes in time series. Application to human electrocardiograms shows that such statistical properties are able to differentiate between the sinus rhythm cardiograms of healthy volunteers and those of coronary care patients.
---
paper_title: An exact distribution-free test comparing two multivariate distributions based on adjacency
paper_content:
A new test is proposed comparing two multivariate distributions by using distances between observations. Unlike earlier tests using interpoint distances, the new test statistic has a known exact distribution and is exactly distribution free. The interpoint distances are used to construct an optimal non-bipartite matching, i.e. a matching of the observations into disjoint pairs to minimize the total distance within pairs. The cross-match statistic is the number of pairs containing one observation from the first distribution and one from the second. Distributions that are very different will exhibit few cross-matches. When comparing two discrete distributions with finite support, the test is consistent against all alternatives. The test is applied to a study of brain activation measured by functional magnetic resonance imaging during two linguistic tasks, comparing brains that are impaired by arteriovenous abnormalities with normal controls. A second exact distribution-free test is also discussed: it ranks the pairs and sums the ranks of the cross-matched pairs.
---
paper_title: From time series to complex networks: the visibility graph
paper_content:
In this work we present a simple and fast computational method, the visibility algorithm, that converts a time series into a graph. The constructed graph inherits several properties of the series in its structure. Thereby, periodic series convert into regular graphs, and random series do so into random graphs. Moreover, fractal series convert into scale-free networks, enhancing the fact that power law degree distributions are related to fractality, something highly discussed recently. Some remarkable examples and analytical tools are outlined in order to test the method's reliability. Many different measures, recently developed in the complex network theory, could by means of this new approach characterize time series from a new point of view.
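A naive implementation of the natural visibility criterion described above (two samples are linked when the straight line between them stays above every intermediate sample); degree statistics of the resulting graph can then be compared across dynamical regimes. The example series are illustrative choices, not the ones used in the paper.

import numpy as np

def visibility_graph(x):
    """Adjacency matrix of the natural visibility graph of a time series."""
    n = len(x)
    adj = np.zeros((n, n), dtype=bool)
    for a in range(n):
        for b in range(a + 1, n):
            # a and b see each other if every intermediate sample lies below the connecting line.
            visible = all(x[c] < x[b] + (x[a] - x[b]) * (b - c) / (b - a)
                          for c in range(a + 1, b))
            adj[a, b] = adj[b, a] = visible
    return adj

rng = np.random.default_rng(10)
periodic = np.sin(0.3 * np.arange(200))
irregular = rng.normal(size=200).cumsum()          # a rough stand-in for less regular dynamics
for name, series in (("periodic", periodic), ("random walk", irregular)):
    degrees = visibility_graph(series).sum(axis=0)
    print(name, "mean degree:", round(float(degrees.mean()), 2), "max degree:", int(degrees.max()))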
---
paper_title: An online algorithm for segmenting time series
paper_content:
In recent years, there has been an explosion of interest in mining time-series databases. As with most computer science problems, representation of the data is the key to efficient and effective solutions. One of the most commonly used representations is piecewise linear approximation. This representation has been used by various researchers to support clustering, classification, indexing and association rule mining of time-series data. A variety of algorithms have been proposed to obtain this representation, with several algorithms having been independently rediscovered several times. In this paper, we undertake the first extensive review and empirical comparison of all proposed techniques. We show that all these algorithms have fatal flaws from a data-mining perspective. We introduce a novel algorithm that we empirically show to be superior to all others in the literature.
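A bare-bones version of the classical sliding-window segmentation reviewed in the paper (not the improved algorithm it introduces): each segment grows one point at a time until the sum of squared residuals of its least-squares line would exceed an assumed threshold.

import numpy as np

def sliding_window_segments(x, max_error):
    """Greedy sliding-window piecewise linear segmentation; segments share endpoints."""
    segments, start, n = [], 0, len(x)
    while start < n - 1:
        end = start + 1                              # the segment currently covers [start, end]
        while end + 1 < n:
            t = np.arange(start, end + 2)
            coeffs = np.polyfit(t, x[start:end + 2], 1)
            sse = float(np.sum((np.polyval(coeffs, t) - x[start:end + 2]) ** 2))
            if sse > max_error:                      # adding the next point breaks the linear fit
                break
            end += 1
        segments.append((start, end))
        start = end
    return segments

rng = np.random.default_rng(11)
x = np.concatenate([np.linspace(0, 5, 50), np.linspace(5, 0, 50), np.zeros(50)])
x = x + 0.05 * rng.normal(size=150)
print(sliding_window_segments(x, max_error=0.5))     # three segments with breaks close to indices 50 and 100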
---
paper_title: Time Series Epenthesis: Clustering Time Series Streams Requires Ignoring Some Data
paper_content:
Given the pervasiveness of time series data in all human endeavors, and the ubiquity of clustering as a data mining application, it is somewhat surprising that the problem of time series clustering from a single stream remains largely unsolved. Most work on time series clustering considers the clustering of individual time series, e.g., gene expression profiles, individual heartbeats or individual gait cycles. The few attempts at clustering time series streams have been shown to be objectively incorrect in some cases, and in other cases shown to work only on the most contrived datasets by carefully adjusting a large set of parameters. In this work, we make two fundamental contributions. First, we show that the problem definition for time series clustering from streams currently used is inherently flawed, and a new definition is necessary. Second, we show that the Minimum Description Length (MDL) framework offers an efficient, effective and essentially parameter-free method for time series clustering. We show that our method produces objectively correct results on a wide variety of datasets from medicine, zoology and industrial process analyses.
---
paper_title: Clustering Time Series Using Unsupervised-Shapelets
paper_content:
Time series clustering has become an increasingly important research topic over the past decade. Most existing methods for time series clustering rely on distances calculated from the entire raw data using the Euclidean distance or Dynamic Time Warping distance as the distance measure. However, the presence of significant noise, dropouts, or extraneous data can greatly limit the accuracy of clustering in this domain. Moreover, for most real world problems, we cannot expect objects from the same class to be equal in length. As a consequence, most work on time series clustering only considers the clustering of individual time series "behaviors," e.g., individual heart beats or individual gait cycles, and contrives the time series in some way to make them all equal in length. However, contriving the data in such a way is often a harder problem than the clustering itself. In this work, we show that by using only some local patterns and deliberately ignoring the rest of the data, we can mitigate the above problems and cluster time series of different lengths, i.e., cluster one heartbeat with multiple heartbeats. To achieve this we exploit and extend a recently introduced concept in time series data mining called shapelets. Unlike existing work, our work demonstrates for the first time the unintuitive fact that shapelets can be learned from unlabeled time series. We show, with extensive empirical evaluation in diverse domains, that our method is more accurate than existing methods. Moreover, in addition to accurate clustering results, we show that our work also has the potential to give insights into the domains to which it is applied.
---
paper_title: Automated Change Detection and Reactive Clustering in Multivariate Streaming Data
paper_content:
Many automated systems need the capability of automatic change detection without a given detection threshold. This paper presents an automated change detection algorithm for streaming multivariate data. Two overlapping windows are used to quantify the changes. While one window is used as the reference window from which the clustering is created, the other, called the current window, captures the newly incoming data points. A newly incoming data point can be considered a change point if it is not a member of any cluster. As our clustering-based change detector does not require a detection threshold, it is an automated detector. Based on this change detector, we propose a reactive clustering algorithm for streaming data. Our empirical results show that our clustering-based change detector works well with multivariate streaming data. The detection accuracy depends on the number of clusters in the reference window and the window width.
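A sketch of the two-window scheme: cluster the reference window with k-means and flag an incoming point when it lies outside every cluster, i.e. farther from each centroid than that cluster's radius. The radius rule and the number of clusters are assumptions, and the reactive re-clustering step of the paper is omitted.

import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(12)
reference = np.vstack([rng.normal(-2.0, 0.5, size=(150, 2)),
                       rng.normal(2.0, 0.5, size=(150, 2))])
stream = np.vstack([rng.normal(2.0, 0.5, size=(50, 2)),     # consistent with the reference
                    rng.normal(6.0, 0.5, size=(50, 2))])    # new regime

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(reference)
radii = []
for c in range(km.n_clusters):
    member_dist = np.linalg.norm(reference[km.labels_ == c] - km.cluster_centers_[c], axis=1)
    radii.append(member_dist.mean() + 3.0 * member_dist.std())   # assumed cluster-radius rule
radii = np.array(radii)

dists = np.linalg.norm(stream[:, None, :] - km.cluster_centers_[None, :, :], axis=-1)
outside = (dists > radii).all(axis=1)                     # the point is not a member of any cluster
print(int(outside[:50].sum()), int(outside[50:].sum()))   # flags concentrate in the new regime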
---
paper_title: A Bayesian Approach to Concept Drift
paper_content:
To cope with concept drift, we placed a probability distribution over the location of the most-recent drift point. We used Bayesian model comparison to update this distribution from the predictions of models trained on blocks of consecutive observations and pruned potential drift points with low probability. We compare our approach to a non-probabilistic method for drift and a probabilistic method for change-point detection. In our experiments, our approach generally yielded improved accuracy and/or speed over these other methods.
---
paper_title: Concept Drift Detection Through Resampling
paper_content:
Detecting changes in data-streams is an important part of enhancing learning quality in dynamic environments. We devise a procedure for detecting concept drifts in data-streams that relies on analyzing the empirical loss of learning algorithms. Our method is based on obtaining statistics from the loss distribution by reusing the data multiple times via resampling. We present theoretical guarantees for the proposed procedure based on the stability of the underlying learning algorithms. Experimental results show that the method has high recall and precision, and performs well in the presence of noise.
---
|
Title: A Survey of Methods for Time Series Change Point Detection
Section 1: Introduction
Description 1: Introduce the significance of time series analysis and change point detection in various fields and provide motivating examples.
Section 2: Background
Description 2: Clarify the basic concepts related to change point detection, including definitions and problem formulation.
Section 3: Criteria
Description 3: Discuss the practical challenges in applying change point detection, such as online detection, scalability, algorithm constraints, and performance evaluation.
Section 4: Review
Description 4: Provide an overview of the main categories of change point detection algorithms, including both supervised and unsupervised methods.
Section 5: Supervised Methods
Description 5: Detail various supervised learning approaches used in change point detection, such as binary and multi-class classifiers.
Section 6: Unsupervised Methods
Description 6: Explore different unsupervised learning techniques applied to time series change point detection.
Section 7: Likelihood Ratio Methods
Description 7: Discuss methods that utilize likelihood ratios to compare probability distributions and identify change points.
Section 8: Subspace Model Methods
Description 8: Present approaches based on subspace modeling for analyzing changes in time series sequences.
Section 9: Probabilistic Methods
Description 9: Outline probabilistic techniques like Bayesian methods for change point detection.
Section 10: Kernel-Based Methods
Description 10: Describe kernel-based methods which employ test statistics to assess the homogeneity of data.
Section 11: Graph-Based Methods
Description 11: Introduce graph-based frameworks that apply two-sample tests on equivalent graphs to detect change points.
Section 12: Clustering Methods
Description 12: Explain clustering approaches that use techniques like sliding window and minimum description length for change point detection.
Section 13: Discussion and Comparison
Description 13: Compare and contrast the surveyed methods based on key criteria to help practitioners choose the most appropriate algorithm.
Section 14: Conclusions and Challenges for Future Work
Description 14: Summarize the state of change point detection research, identify challenges, and suggest directions for future research.
|
A survey of dual-feasible and superadditive functions
| 23 |
---
paper_title: A cutting-plane approach for the two-dimensional orthogonal non-guillotine cutting problem
paper_content:
The two-dimensional orthogonal non-guillotine cutting problem (NGCP) appears in many industries (like wood and steel industries) and consists in cutting a rectangular master surface into a number of rectangular pieces, each with a given size and value. The pieces must be cut with their edges always parallel or orthogonal to the edges of the master surface (orthogonal cuts). The objective is to maximize the total value of the pieces cut. In this paper, we propose a two-level approach for solving the NGCP, where, at the first level, we select the subset of pieces to be packed into the master surface without specifying the layout, while at a second level we check only if a feasible packing layout exists. This approach has already been proposed by Fekete and Schepers [S.P. Fekete, J. Schepers, A new exact algorithm for general orthogonal d-dimensional knapsack problems, ESA 97, Springer Lecture Notes in Computer Science 1284 (1997) 144–156; S.P. Fekete, J. Schepers, On more-dimensional packing III: Exact algorithms, Tech. Rep. 97.290, Universitat zu Koln, Germany, 2000; S.P. Fekete, J. Schepers, J.C. van der Veen, An exact algorithm for higher-dimensional orthogonal packing, Tech. Rep. Under revision on Operations Research, Braunschweig University of Technology, Germany, 2004] and Caprara and Monaci [A. Caprara, M. Monaci, On the two-dimensional knapsack problem, Operations Research Letters 32 (2004) 2–14]. We propose improved reduction tests for the NGCP and a cutting-plane approach to be used in the first level of the tree search to compute effective upper bounds. Computational tests on problems derived from the literature show the effectiveness of the proposed approach, which is able to reduce the number of nodes generated at the first level of the tree search and the number of times the existence of a feasible packing layout is tested.
---
paper_title: New reduction procedures and lower bounds for the two-dimensional bin packing problem with fixed orientation
paper_content:
The two-dimensional bin-packing problem (2BP) consists of minimizing the number of identical rectangles used to pack a set of smaller rectangles. In this paper, we propose new lower bounds for 2BP in the discrete case. They are based on the total area of the items after application of dual feasible functions (DFF). We also propose the new concept of data-dependent dual feasible functions (DDFF), which can also be applied to a 2BP instance. We propose two families of Discrete DFF and DDFF and show that they lead to bounds which strictly dominate those obtained previously. We also introduce two new reduction procedures and report computational experiments on our lower bounds. Our bounds improve on the previous best results and close 22 additional instances of a well-known established benchmark derived from literature.
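For illustration (using a generic threshold DFF rather than the specific DFF and DDFF classes proposed in the paper), the usual way such functions strengthen the area bound for 2D bin packing is to transform the normalized widths and heights and round up the transformed total area; the item list and parameter values below are assumptions of the example.

import math

def dff_threshold(x, eps):
    """Classical threshold DFF on [0,1]; dual feasible for any eps in (0, 1/2]."""
    if x > 1 - eps:
        return 1.0
    if x < eps:
        return 0.0
    return x

def transformed_area_bound(items, bin_w, bin_h, eps_w, eps_h):
    """Ceil of the total transformed area, a valid lower bound on the number of bins."""
    total = sum(dff_threshold(w / bin_w, eps_w) * dff_threshold(h / bin_h, eps_h)
                for w, h in items)
    return math.ceil(total - 1e-9)

items = [(6, 6), (6, 6), (6, 6), (4, 4), (4, 4)]          # (width, height) pairs
plain = math.ceil(sum(w * h for w, h in items) / (10 * 10))
print(plain, transformed_area_bound(items, 10, 10, eps_w=0.45, eps_h=0.45))
# 2 versus 3: the three 6x6 items are pairwise incompatible, so 3 is also the true optimum here.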
---
paper_title: Integer and Combinatorial Optimization
paper_content:
FOUNDATIONS. The Scope of Integer and Combinatorial Optimization. Linear Programming. Graphs and Networks. Polyhedral Theory. Computational Complexity. Polynomial-Time Algorithms for Linear Programming. Integer Lattices. GENERAL INTEGER PROGRAMMING. The Theory of Valid Inequalities. Strong Valid Inequalities and Facets for Structured Integer Programs. Duality and Relaxation. General Algorithms. Special-Purpose Algorithms. Applications of Special- Purpose Algorithms. COMBINATORIAL OPTIMIZATION. Integral Polyhedra. Matching. Matroid and Submodular Function Optimization. References. Indexes.
---
paper_title: A branch-and-price-and-cut algorithm for the pattern minimization problem
paper_content:
In cutting stock problems, after an optimal (minimal stock usage) cutting plan has been devised, one might want to further reduce the operational costs by minimizing the number of setups. A setup operation occurs each time a different cutting pattern begins to be produced. The related optimization problem is known as the Pattern Minimization Problem, and it is particularly hard to solve exactly. In this paper, we present different techniques to strengthen a formulation proposed in the literature. Dual feasible functions are used for the first time to derive valid inequalities from different constraints of the model, and from linear combinations of constraints. A new arc flow formulation is also proposed. This formulation is used to define the branching scheme of our branch-and-price-and-cut algorithm, and it allows the generation of even stronger cuts by combining the branching constraints with other constraints of the model. The computational experiments conducted on instances from the literature show that our algorithm finds optimal integer solutions faster than other approaches. A set of computational results on random instances is also reported.
---
paper_title: Bin packing with items uniformly distributed over intervals [a,b]
paper_content:
We consider the problem of packing n items which are drawn uniformly from intervals of the form [a,b], where 0 < a < b < 1. For a fairly large class of a and b, we determine a lower bound on the asymptotic expected number of bins used in an optimum packing. The method of proof is related to the dual of the linear programming problem corresponding to the bin packing problem. We demonstrate that these bounds are tight by providing simple packing strategies which achieve them.
---
paper_title: A General Framework for Bounds for Higher-Dimensional Orthogonal Packing Problems
paper_content:
Higher-dimensional orthogonal packing problems have a wide range of practical applications, including packing, cutting, and scheduling. In the context of a branch-and-bound framework for solving these packing problems to optimality, it is of crucial importance to have good and easy bounds for an optimal solution. Previous efforts have produced a number of special classes of such bounds. Unfortunately, some of these bounds are somewhat complicated and hard to generalize. We present a new approach for obtaining classes of lower bounds for higher-dimensional packing problems; our bounds improve and simplify several well-known bounds from previous literature. In addition, our approach provides an easy framework for proving correctness of new bounds.
---
paper_title: Computing redundant resources for the resource constrained project scheduling problem
paper_content:
Several efficient lower bounds and time-bound adjustment methods for the resource constrained project scheduling problem (RCPSP) have recently been proposed. Some of them are based on redundant resources. In this paper we define redundant functions which are very useful for computing redundant resources. We also describe an algorithm for computing all maximal redundant functions. Once all these redundant functions have been determined, we have to identify those that are useful for bounding. Surprisingly, their number is reasonable even for large resource capacities, so a representative subset of them can be tabulated to be used efficiently. Computational results on classical RCPSP instances confirm their usefulness.
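One simple way to verify that a candidate discrete function is redundant, in the sense that transformed requirements never exceed the transformed capacity on any feasible set, is an unbounded-knapsack dynamic program. This brute-force check is only a sketch (with the transformed capacity taken to be f applied to the original capacity), not the maximal-function enumeration algorithm described above.

def is_redundant(f, capacity):
    """Check that sum(f[a_i]) <= f[capacity] whenever sum(a_i) <= capacity.

    f is a list indexed by requirement (length capacity + 1, f[0] == 0, nondecreasing);
    the worst feasible multiset is found by an unbounded knapsack DP."""
    best = [0] * (capacity + 1)      # best[w] = max transformed sum with total requirement <= w
    for w in range(1, capacity + 1):
        best[w] = best[w - 1]
        for a in range(1, w + 1):
            best[w] = max(best[w], best[w - a] + f[a])
    return best[capacity] <= f[capacity]

C = 10
identity = list(range(C + 1))
rounding = [0, 0, 0, 0, 0, 5, 5, 5, 5, 5, 10]      # a "round around half the capacity" style function
too_big = [0] + [6] * 9 + [10]                     # overstates small requirements
print(is_redundant(identity, C), is_redundant(rounding, C), is_redundant(too_big, C))
# True True False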
---
paper_title: A Subadditive Approach to Solve Linear Integer Programs
paper_content:
A method is presented for solving pure integer programs by a subadditive method. This work extends to the integer linear problem a method for solving the group problem. It uses some elements of both enumeration and cutting plane theory in a unified setting. The method generates a subadditive function and solves the original integer linear program.
---
paper_title: An improved typology of cutting and packing problems
paper_content:
The number of publications in the area of Cutting and Packing (C&P) has increased considerably over the last two decades. The typology of C&P problems introduced by Dyckhoff [Dyckhoff, H., 1990. A typology of cutting and packing problems. European Journal of Operational Research 44, 145–159] initially provided an excellent instrument for the organisation and categorisation of existing and new literature. However, over the years some deficiencies of this typology also became evident, which created problems in dealing with recent developments and prevented it from being accepted more generally. In this paper, the authors present an improved typology, which is partially based on Dyckhoff’s original ideas, but introduces new categorisation criteria, which define problem categories different from those of Dyckhoff. Furthermore, a new, consistent system of names is suggested for these problem categories. Finally, the practicability of the new scheme is demonstrated by using it as a basis for a categorisation of the C&P literature from the years between 1995 and 2004.
---
paper_title: New classes of fast lower bounds for bin packing problems
paper_content:
The bin packing problem is one of the classical NP-hard optimization problems. In this paper, we present a simple generic approach for obtaining new fast lower bounds, based on dual feasible functions. Worst-case analysis as well as computational results show that one of our classes clearly outperforms the previous best economical lower bound for the bin packing problem by Martello and Toth, which can be understood as a special case. In particular, we prove an asymptotic worst-case performance of 3/4 for a bound that can be computed in linear time for items sorted by size. In addition, our approach provides a general framework for establishing new bounds.
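A sketch of the generic recipe: normalize the item sizes by the bin capacity, apply a dual-feasible function and round the sum up. It is shown with the classical threshold DFF and the staircase family u^(k) commonly attributed to Fekete and Schepers, not with the specific new classes introduced in the paper; in practice the bound is maximized over several parameter values.

import math

def dff_threshold(x, eps):
    """f(x) = 1 if x > 1 - eps, 0 if x < eps, x otherwise; dual feasible for eps in (0, 1/2]."""
    if x > 1 - eps:
        return 1.0
    if x < eps:
        return 0.0
    return x

def dff_staircase(x, k):
    """u^(k)(x) = x when (k + 1) x is integral, floor((k + 1) x) / k otherwise."""
    v = (k + 1) * x
    return x if abs(v - round(v)) < 1e-9 else math.floor(v) / k

def dff_bound(sizes, capacity, f):
    """Within any bin the normalized sizes sum to at most 1, hence so do their images under f."""
    return math.ceil(sum(f(s / capacity) for s in sizes) - 1e-9)

sizes, capacity = [6, 6, 6, 6, 2, 2], 10
print(math.ceil(sum(sizes) / capacity),                           # plain area bound: 3
      dff_bound(sizes, capacity, lambda x: dff_threshold(x, 0.45)),
      dff_bound(sizes, capacity, lambda x: dff_staircase(x, 1)))  # both DFF bounds give the tight value 4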
---
paper_title: A Linear Programming Approach to the Cutting Stock Problem---Part II
paper_content:
In this paper, the methods for stock cutting outlined in an earlier paper in this Journal [Opns Res 9, 849--859 1961] are extended and adapted to the specific full-scale paper trim problem. The paper describes a new and faster knapsack method, experiments, and formulation changes. The experiments include ones used to evaluate speed-up devices and to explore a connection with integer programming. Other experiments give waste as a function of stock length, examine the effect of multiple stock lengths on waste, and the effect of a cutting knife limitation. The formulation changes discussed are (i) limitation on the number of cutting knives available, (ii) balancing of multiple machine usage when orders are being filled from more than one machine, and (iii) introduction of a rational objective function when customers' orders are not for fixed amounts, but rather for a range of amounts. The methods developed are also applicable to a variety of cutting problems outside of the paper industry.
---
paper_title: A new LP-based lower bound for the cumulative scheduling problem
paper_content:
This paper deals with the computation of lower bounds for Cumulative Scheduling Problems. Based on a new linear programming formulation, these lower bounds take into account how resource requirements can be satisfied simultaneously for a given resource capacity. One of the main interests of this paper is that the solutions of the LP can be tabulated for a given value of the resource capacity. Thus, even if it is based on a linear programming formulation, the computation of the bounds is not very time consuming, as confirmed by our computational results on the Resource Constrained Project Scheduling Problem.
---
paper_title: A Linear Programming Approach to the Cutting-Stock Problem
paper_content:
The cutting-stock problem is the problem of filling an order at minimum cost for specified numbers of lengths of material to be cut from given stock lengths of given cost. When expressed as an integer programming problem the large number of variables involved generally makes computation infeasible. This same difficulty persists when only an approximate solution is being sought by linear programming. In this paper, a technique is described for overcoming the difficulty in the linear programming formulation of the problem. The technique enables one to compute always with a matrix which has no more columns than it has rows.
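The computational core of this column-generation scheme is the pricing step: given the dual values of the current restricted LP, an unbounded knapsack finds the cutting pattern of maximum total dual value, and the pattern enters the LP when that value exceeds 1. The sketch below shows only that step, with illustrative widths and dual prices; the surrounding LP iterations are omitted.

def best_pattern(widths, duals, stock_length):
    """Unbounded knapsack: pattern maximizing total dual value within the stock length.

    Returns (value, pattern) where pattern[i] is the number of pieces of widths[i];
    a value greater than 1 means the pattern prices out and should enter the LP."""
    dp = [0.0] * (stock_length + 1)
    take = [-1] * (stock_length + 1)       # index of the piece added at this capacity, -1 = none
    for cap in range(1, stock_length + 1):
        dp[cap] = dp[cap - 1]
        for i, w in enumerate(widths):
            if w <= cap and dp[cap - w] + duals[i] > dp[cap]:
                dp[cap] = dp[cap - w] + duals[i]
                take[cap] = i
    pattern = [0] * len(widths)            # reconstruct the best pattern by walking back
    cap = stock_length
    while cap > 0:
        if take[cap] == -1:
            cap -= 1
        else:
            pattern[take[cap]] += 1
            cap -= widths[take[cap]]
    return dp[stock_length], pattern

widths = [45, 36, 31, 14]                  # piece widths
duals = [0.5, 0.4, 0.35, 0.15]             # illustrative dual prices from a restricted master LP
value, pattern = best_pattern(widths, duals, stock_length=100)
print(round(value, 2), pattern)            # value > 1 means the pattern improves the current LP solution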
---
paper_title: Computing redundant resources for the resource constrained project scheduling problem
paper_content:
Several efficient lower bounds and time-bound adjustment methods for the resource constrained project scheduling problem (RCPSP) have recently been proposed. Some of them are based on redundant resources. In this paper we define redundant functions which are very useful for computing redundant resources. We also describe an algorithm for computing all maximal redundant functions. Once all these redundant functions have been determined, we have to identify those that are useful for bounding. Surprisingly, their number is reasonable even for large resource capacities, so a representative subset of them can be tabulated to be used efficiently. Computational results on classical RCPSP instances confirm their usefulness.
---
paper_title: Computing redundant resources for the resource constrained project scheduling problem
paper_content:
Several efficient lower bounds and time-bound adjustment methods for the resource constrained project scheduling problem (RCPSP) have recently been proposed. Some of them are based on redundant resources. In this paper we define redundant functions which are very useful for computing redundant resources. We also describe an algorithm for computing all maximal redundant functions. Once all these redundant functions have been determined, we have to identify those that are useful for bounding. Surprisingly, their number is reasonable even for large resource capacities, so a representative subset of them can be tabulated to be used efficiently. Computational results on classical RCPSP instances confirm their usefulness.
---
paper_title: A cutting-plane approach for the two-dimensional orthogonal non-guillotine cutting problem
paper_content:
Abstract The two-dimensional orthogonal non-guillotine cutting problem (NGCP) appears in many industries (like wood and steel industries) and consists in cutting a rectangular master surface into a number of rectangular pieces, each with a given size and value. The pieces must be cut with their edges always parallel or orthogonal to the edges of the master surface ( orthogonal cuts ). The objective is to maximize the total value of the pieces cut. In this paper, we propose a two-level approach for solving the NGCP, where, at the first level, we select the subset of pieces to be packed into the master surface without specifying the layout, while at a second level we check only if a feasible packing layout exists. This approach has been already proposed by Fekete and Schepers [S.P. Fekete, J. Schepers, A new exact algorithm for general orthogonal d -dimensional knapsack problems, ESA 97, Springer Lecture Notes in Computer Science 1284 (1997) 144–156; S.P. Fekete, J. Schepers, On more-dimensional packing III: Exact algorithms, Tech. Rep. 97.290, Universitat zu Koln, Germany, 2000; S.P. Fekete, J. Schepers, J.C. van der Veen, An exact algorithm for higher-dimensional orthogonal packing, Tech. Rep. Under revision on Operations Research, Braunschweig University of Technology, Germany, 2004] and Caprara and Monaci [A. Caprara, M. Monaci, On the two-dimensional knapsack problem, Operations Research Letters 32 (2004) 2–14]. We propose improved reduction tests for the NGCP and a cutting-plane approach to be used in the first level of the tree search to compute effective upper bounds. Computational tests on problems derived from the literature show the effectiveness of the proposed approach, that is able to reduce the number of nodes generated at the first level of the tree search and the number of times the existence of a feasible packing layout is tested.
---
paper_title: Integer and Combinatorial Optimization
paper_content:
FOUNDATIONS. The Scope of Integer and Combinatorial Optimization. Linear Programming. Graphs and Networks. Polyhedral Theory. Computational Complexity. Polynomial-Time Algorithms for Linear Programming. Integer Lattices. GENERAL INTEGER PROGRAMMING. The Theory of Valid Inequalities. Strong Valid Inequalities and Facets for Structured Integer Programs. Duality and Relaxation. General Algorithms. Special-Purpose Algorithms. Applications of Special-Purpose Algorithms. COMBINATORIAL OPTIMIZATION. Integral Polyhedra. Matching. Matroid and Submodular Function Optimization. References. Indexes.
---
paper_title: Exact Algorithm for Minimising the Number of Setups in the One-Dimensional Cutting Stock Problem
paper_content:
The cutting stock problem is that of finding a cutting of stock material to meet demands for small pieces of prescribed dimensions while minimising the amount of waste. Because changing over from one cutting pattern to another involves significant setups, an auxiliary problem is to minimise the number of different patterns that are used. The pattern minimisation problem is significantly more complex, but it is of great practical importance. In this paper, we propose an integer programming formulation for the problem that involves an exponential number of binary variables and associated columns, each of which corresponds to selecting a fixed number of copies of a specific cutting pattern. The integer program is solved using a column generation approach where the subproblem is a nonlinear integer program that can be decomposed into multiple bounded integer knapsack problems. At each node of the branch-and-bound tree, the linear programming relaxation of our formulation is made tighter by adding super-additive inequalities. Branching rules are presented that yield a balanced tree. Incumbent solutions are obtained using a rounding heuristic. The resulting branch-and-price-and-cut procedure is used to produce optimal or approximately optimal solutions for a set of real-life problems.
---
paper_title: A branch-and-price-and-cut algorithm for the pattern minimization problem
paper_content:
In cutting stock problems, after an optimal (minimal stock usage) cutting plan has been devised, one might want to further reduce the operational costs by minimizing the number of setups. A setup operation occurs each time a different cutting pattern begins to be produced. The related optimization problem is known as the Pattern Minimization Problem, and it is particularly hard to solve exactly. In this paper, we present different techniques to strengthen a formulation proposed in the literature. Dual feasible functions are used for the first time to derive valid inequalities from different constraints of the model, and from linear combinations of constraints. A new arc flow formulation is also proposed. This formulation is used to define the branching scheme of our branch-and-price-and-cut algorithm, and it allows the generation of even stronger cuts by combining the branching constraints with other constraints of the model. The computational experiments conducted on instances from the literature show that our algorithm finds optimal integer solutions faster than other approaches. A set of computational results on random instances is also reported.
---
paper_title: New classes of fast lower bounds for bin packing problems
paper_content:
The bin packing problem is one of the classical NP-hard optimization problems. In this paper, we present a simple generic approach for obtaining new fast lower bounds, based on dual feasible functions. Worst-case analysis as well as computational results show that one of our classes clearly outperforms the previous best economical lower bound for the bin packing problem by Martello and Toth, which can be understood as a special case. In particular, we prove an asymptotic worst-case performance of 3/4 for a bound that can be computed in linear time for items sorted by size. In addition, our approach provides a general framework for establishing new bounds.
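As a concrete illustration of how a dual-feasible function is turned into a fast lower bound, the sketch below applies the classical threshold function (items larger than 1-eps count as a full bin, items smaller than eps are dropped, the rest keep their size), which is dual-feasible for any eps <= 1/2; the function names and the grid of eps values are illustrative, and this is not the paper's own new class of bounds.

import math

def u_eps(x, eps):
    # Classical threshold dual-feasible function, valid for 0 < eps <= 1/2.
    if x > 1 - eps:
        return 1.0
    if x < eps:
        return 0.0
    return x

def dff_lower_bound(sizes, eps_grid=(0.5, 1/3, 0.25, 0.2)):
    # For any dual-feasible function u, ceil(sum u(s_i)) is a valid lower
    # bound on the number of unit-capacity bins, so we keep the best value
    # over a small grid of thresholds (u = identity gives the plain area bound).
    best = math.ceil(sum(sizes))
    for eps in eps_grid:
        best = max(best, math.ceil(sum(u_eps(s, eps) for s in sizes)))
    return best

if __name__ == "__main__":
    items = [0.51, 0.51, 0.51, 0.51, 0.26]
    # The area bound gives 3; the threshold function with eps = 0.5
    # certifies that 4 bins are needed, which is optimal here.
    print(dff_lower_bound(items))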
---
paper_title: New reduction procedures and lower bounds for the two-dimensional bin packing problem with fixed orientation
paper_content:
The two-dimensional bin-packing problem (2BP) consists of minimizing the number of identical rectangles used to pack a set of smaller rectangles. In this paper, we propose new lower bounds for 2BP in the discrete case. They are based on the total area of the items after application of dual feasible functions (DFF). We also propose the new concept of data-dependent dual feasible functions (DDFF), which can also be applied to a 2BP instance. We propose two families of Discrete DFF and DDFF and show that they lead to bounds which strictly dominate those obtained previously. We also introduce two new reduction procedures and report computational experiments on our lower bounds. Our bounds improve on the previous best results and close 22 additional instances of a well-known established benchmark derived from literature.
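The mechanism behind these area-based bounds can be stated compactly. Call f : [0,1] -> [0,1] dual-feasible if every finite set S with sum of its elements at most 1 also satisfies sum of f over S at most 1. Then, for any pair of dual-feasible functions f and g applied to the item widths w_i and heights h_i normalised by the bin width W and height H, the standard argument (which the bounds above refine) gives

\[
  \left\lceil \sum_{i=1}^{n} f\!\left(\tfrac{w_i}{W}\right) g\!\left(\tfrac{h_i}{H}\right) \right\rceil \;\le\; \mathrm{OPT}_{2\mathrm{BP}} .
\]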
---
paper_title: Valid inequalities based on simple mixed-integer sets
paper_content:
In this paper we use facets of simple mixed-integer sets with three variables to derive a parametric family of valid inequalities for general mixed-integer sets. We call these inequalities two-step MIR inequalities as they can be derived by applying the simple mixed-integer rounding (MIR) principle of Wolsey (1998) twice. The two-step MIR inequalities define facets of the master cyclic group polyhedron of Gomory (1969). In addition, they dominate the strong fractional cuts of Letchford and Lodi (2002).
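For reference, the single-step MIR inequality that the two-step construction iterates can be stated as follows (standard material rather than the paper's new family): for the simple mixed-integer set $X=\{(x,y)\in\mathbb{Z}\times\mathbb{R}_{+} : x+y\ge b\}$ with fractional part $r=b-\lfloor b\rfloor>0$, the inequality

\[
  y \;\ge\; r\,\bigl(\lceil b\rceil - x\bigr)
\]

is valid for $X$ and cuts off the fractional vertex $(b,0)$ of its linear relaxation.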
---
paper_title: Strengthening Chvatal-Gomory Cuts and Gomory fractional cuts
paper_content:
Chvatal-Gomory and Gomory fractional cuts are well-known cutting planes for pure integer programming problems. Various methods for strengthening them are known, for example based on subadditive functions or disjunctive techniques. We present a new and surprisingly simple strengthening procedure, discuss its properties, and present some computational results.
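As background for the strengthening procedures discussed here, the basic Chvátal-Gomory cut can be recalled in one line: for the pure integer set $P=\{x\in\mathbb{Z}^{n}_{+} : Ax\le b\}$ and any multiplier vector $\lambda\ge 0$,

\[
  \sum_{j=1}^{n} \bigl\lfloor \lambda^{\top} A_j \bigr\rfloor\, x_j \;\le\; \bigl\lfloor \lambda^{\top} b \bigr\rfloor ,
\]

which is valid because rounding the coefficients down can only decrease the left-hand side for $x\ge 0$, and the integrality of the left-hand side then allows the right-hand side to be rounded down as well.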
---
paper_title: A Subadditive Approach to Solve Linear Integer Programs
paper_content:
A method is presented for solving pure integer programs by a subadditive method. This work extends to the integer linear problem a method for solving the group problem. It uses some elements of both enumeration and cutting plane theory in a unified setting. The method generates a subadditive function and solves the original integer linear program.
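The weak-duality argument underlying this approach fits in one chain of inequalities (stated here as context; the paper's contribution is the constructive generation of such a function): for $\min\{c^{\top}x : Ax=b,\ x\in\mathbb{Z}^{n}_{+}\}$ with columns $a_j$, any subadditive $F$ with $F(0)=0$ and $F(a_j)\le c_j$ for all $j$ satisfies, for every feasible $x$,

\[
  F(b) \;=\; F\Bigl(\sum_j a_j x_j\Bigr) \;\le\; \sum_j F(a_j x_j) \;\le\; \sum_j x_j F(a_j) \;\le\; c^{\top} x ,
\]

so $F(b)$ is a lower bound on the optimal value.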
---
paper_title: Bin packing with items uniformly distributed over intervals [a,b]
paper_content:
We consider the problem of packing n items which are drawn uniformly from intervals of the form [a,b], where 0 ≪ a ≪ b ≪ 1. For a fairly large class of a and b, we determine a lower bound on the asymptotic expected number of bins used in an optimum packing. The method of proof is related to the dual of the linear programming problem corresponding to the bin packing problem. We demonstrate that these bounds are tight by providing simple packing strategies which achieve them.
---
paper_title: The Two-Dimensional Finite Bin Packing Problem. Part II: New lower and upper bounds
paper_content:
This paper is the second of a two-part series and describes new lower and upper bounds for a more general version of the Two-Dimensional Finite Bin Packing Problem (2BP) than the one considered in Part I (see Boschetti and Mingozzi 2002). With each item is associated an input parameter specifying whether it has a fixed orientation or can be rotated by 90°. This problem contains as special cases the oriented and non-oriented 2BP. The new lower bound is based on the one described in Part I for the oriented 2BP. The computational results on the test problems derived from the literature show the effectiveness of the new proposed lower and upper bounds.
---
paper_title: New lower bounds for the three-dimensional finite bin packing problem
paper_content:
The three-dimensional finite bin packing problem (3BP) consists of determining the minimum number of large identical three-dimensional rectangular boxes, bins, that are required for allocating without overlapping a given set of three-dimensional rectangular items. The items are allocated into a bin with their edges always parallel or orthogonal to the bin edges. The problem is strongly NP-hard and finds many practical applications. We propose new lower bounds for the problem where the items have a fixed orientation and then we extend these bounds to the more general problem where for each item the subset of rotations by 90° allowed is specified. The proposed lower bounds have been evaluated on different test problems derived from the literature. Computational results show the effectiveness of the new lower bounds.
---
paper_title: A Linear Programming Approach to the Cutting-Stock Problem
paper_content:
The cutting-stock problem is the problem of filling an order at minimum cost for specified numbers of lengths of material to be cut from given stock lengths of given cost. When expressed as an integer programming problem the large number of variables involved generally makes computation infeasible. This same difficulty persists when only an approximate solution is being sought by linear programming. In this paper, a technique is described for overcoming the difficulty in the linear programming formulation of the problem. The technique enables one to compute always with a matrix which has no more columns than it has rows.
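The computational heart of this approach is the pricing step: given dual values for the demand rows of the restricted master LP, a new cutting pattern is sought by an unbounded knapsack over the stock length, and the corresponding column is added whenever its value exceeds 1 (i.e. its reduced cost is negative). A minimal sketch of that step is given below; the master-LP solve is assumed to happen elsewhere, and the piece lengths and dual values are purely illustrative.

def price_pattern(lengths, duals, stock_len):
    # Pricing subproblem of Gilmore-Gomory column generation: maximise
    # sum(duals[i] * a[i]) over integer patterns a with
    # sum(lengths[i] * a[i]) <= stock_len, via an unbounded-knapsack DP.
    best = [0.0] * (stock_len + 1)   # best[c]: max dual value within capacity c
    take = [-1] * (stock_len + 1)    # take[c]: piece added to reach best[c]
    for c in range(1, stock_len + 1):
        best[c] = best[c - 1]        # leaving one unit of capacity unused is allowed
        for i, (l, pi) in enumerate(zip(lengths, duals)):
            if l <= c and best[c - l] + pi > best[c]:
                best[c], take[c] = best[c - l] + pi, i
    pattern, c = [0] * len(lengths), stock_len
    while c > 0:                     # recover the pattern from the back-pointers
        if take[c] == -1:
            c -= 1
        else:
            pattern[take[c]] += 1
            c -= lengths[take[c]]
    return best[stock_len], pattern

if __name__ == "__main__":
    lengths = [45, 36, 31, 14]            # piece lengths (illustrative)
    duals = [0.50, 0.40, 0.31, 0.15]      # duals of the demand rows (illustrative)
    value, pattern = price_pattern(lengths, duals, stock_len=100)
    if value > 1 + 1e-9:                  # negative reduced cost: the column enters
        print("add pattern", pattern, "with value", round(value, 2))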
---
paper_title: A new LP-based lower bound for the cumulative scheduling problem
paper_content:
This paper deals with the computation of lower bounds for Cumulative Scheduling Problems. Based on a new linear programming formulation, these lower bounds take into account how resource requirements can be satisfied simultaneously for a given resource capacity. One of the main interests of this paper is that the solutions of the LP can be tabulated for a given value of resource capacity. Thus, even if it is based on a linear programming formulation, the computation of the bounds is not very time consuming, as confirmed by our computational results on the Resource Constrained Project Scheduling Problem.
---
paper_title: CUTGEN1: a problem generator for the standard one-dimensional cutting stock problem
paper_content:
In this paper a problem generator for the Standard One-dimensional Cutting Stock Problem (1D-CSP) is developed. The problem is defined and its parameters are identified. Then it is shown what features have been included in the program in order to allow for the generation of easily reproducible random problem instances. Finally, by applying the generator a set of benchmark problems is identified.
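In the same spirit, a tiny seeded generator can illustrate the kind of parameterisation the paper describes (stock length, number of item types, relative size bounds, average demand); the sampling scheme below is only a sketch under those assumed conventions, not the actual CUTGEN1 procedure.

import random

def generate_instance(m=20, L=1000, v1=0.01, v2=0.8, dbar=10, seed=42):
    # Reproducible random 1D-CSP instance: m item types with lengths drawn
    # uniformly from [v1*L, v2*L] and integer demands averaging roughly dbar.
    rng = random.Random(seed)
    lengths = sorted((rng.randint(int(v1 * L), int(v2 * L)) for _ in range(m)),
                     reverse=True)
    demands = [rng.randint(1, 2 * dbar - 1) for _ in range(m)]
    return L, lengths, demands

if __name__ == "__main__":
    L, lengths, demands = generate_instance()
    print(L, lengths[:5], demands[:5])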
---
|
Title: A Survey of Dual-Feasible and Superadditive Functions
Section 1: Introduction
Description 1: This section introduces the concept of dual-feasible functions, their applications, and the objective of the survey.
Section 2: DFF and superadditive functions
Description 2: This section provides definitions and properties of dual-feasible and superadditive functions and explains their relationships.
Section 3: Dual-feasible functions
Description 3: This section elaborates on the concept of dual-feasible functions and presents alternative definitions and properties.
Section 4: Superadditive functions and maximal DFF
Description 4: This section discusses the role of superadditive nondecreasing functions, introduces the notion of maximal DFF, and defines their properties.
Section 5: DFF and valid inequalities
Description 5: This section highlights the use of dual-feasible functions in generating valid inequalities for integer programs.
Section 6: Frameworks for creating valid DFF
Description 6: This section presents frameworks for obtaining complex superadditive functions by combining simpler ones.
Section 7: Compositions and linear combinations
Description 7: This section explains how to combine dual-feasible functions using linear combinations and compositions.
Section 8: Finding a corresponding dominant function by symmetry
Description 8: This section demonstrates the process of creating a maximal dual-feasible function from a non-maximal one by utilizing symmetry.
Section 9: Improving a function by studying its limiting behavior
Description 9: This section describes a method for obtaining a maximal DFF from a non-maximal superadditive function by analyzing its limiting behavior.
Section 10: Using two different functions for x and r_x
Description 10: This section illustrates the approach of using two different DFFs for integer and fractional parts of values.
Section 11: Using the ceiling function
Description 11: This section discusses how to derive superadditive functions by leveraging the ceiling function.
Section 12: Comparative analysis of dual-feasible functions
Description 12: This section surveys various dual-feasible functions proposed in the literature, providing explicit definitions and determining their maximality.
Section 13: Fekete and Schepers
Description 13: This section examines the dual-feasible functions proposed by Fekete and Schepers and their properties.
Section 14: Boschetti and Mingozzi
Description 14: This section details the functions introduced by Boschetti and Mingozzi, highlighting their applications and effectiveness.
Section 15: Carlier et al.
Description 15: This section describes the contributions of Carlier et al. to dual-feasible functions and their improvements.
Section 16: Vanderbeck
Description 16: This section explores the dual-feasible functions used by Vanderbeck and their impact on pattern minimization problems.
Section 17: Burdett and Johnson
Description 17: This section outlines the function for rational values proposed by Burdett and Johnson and its application in deriving valid inequalities.
Section 18: Letchford and Lodi
Description 18: This section explains the dual-feasible function proposed by Letchford and Lodi to strengthen Chvátal-Gomory and Gomory fractional cuts.
Section 19: Dash and Günlük
Description 19: This section introduces a maximal dual-feasible function derived by Dash and Günlük from their extended mixed-integer rounding inequalities.
Section 20: Summary
Description 20: This section summarizes key findings, dominance relations, maximality properties, and applications of dual-feasible functions discussed in the survey.
Section 21: Bin-packing instances
Description 21: This section compares different dual-feasible functions for various bin-packing problem instances and evaluates their lower-bounding effectiveness.
Section 22: Composing functions
Description 22: This section analyzes the performance improvements obtained by composing different dual-feasible functions.
Section 23: Strengthening the column generation model for the PMP with DFF
Description 23: This section evaluates the effectiveness of dual-feasible functions in strengthening valid inequalities for the pattern minimization problem.
|
Continuous Opinion Dynamics under Bounded Confidence: A Survey
| 7 |
---
paper_title: A Discrete Nonlinear and Non-Autonomous Model of Consensus Formation
paper_content:
Consensus formation among n experts is modeled as a positive discrete dynamical system in n dimensions. The well-known linear but non-autonomous model is extended to a nonlinear one admitting also various kinds of averaging beside the weighted arithmetic mean. For this model a sufficient condition for reaching a consensus is presented. As a special case consensus formation under bounded confidence is analyzed.
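A worked instance of such averaging under bounded confidence is the Hegselmann-Krause update, where every agent moves to the arithmetic mean of all opinions within its confidence bound $\varepsilon$:

\[
  x_i(t+1) \;=\; \frac{1}{|I(i,t)|} \sum_{j \in I(i,t)} x_j(t),
  \qquad
  I(i,t) \;=\; \{\, j : |x_i(t) - x_j(t)| \le \varepsilon \,\} .
\]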
---
paper_title: Modelling collective opinion formation by means of active Brownian particles
paper_content:
The concept of active Brownian particles is used to model a collective opinion formation process. It is assumed that individuals in a community create a two-component communication field that influences the change of opinions of other persons and/or can induce their migration. The communication field is described by a reaction-diffusion equation, the opinion change of the individuals is given by a master equation, while the migration is described by a set of Langevin equations, coupled by the communication field. In the mean-field limit holding for fast communication we derive a critical population size, above which the community separates into a majority and a minority with opposite opinions. The existence of external support (e.g. from mass media) changes the ratio between minority and majority, until above a critical external support the supported subpopulation exists always as a majority. Spatial effects lead to two critical "social" temperatures, between which the community exists in a metastable state, thus fluctuations below a certain critical wave number may result in a spatial opinion separation. The range of metastability is particularly determined by a parameter characterizing the individual response to the communication field. In our discussion, we draw analogies to phase transitions in physical systems.
---
paper_title: A stabilization theorem for dynamics of continuous opinions
paper_content:
A stabilization theorem for processes of opinion dynamics is presented. The theorem is applicable to a wide class of models of continuous opinion dynamics based on averaging (like the models of Hegselmann-Krause and Weisbuch-Deffuant). The analysis detects self-confidence as a driving force of stabilization.
---
paper_title: Communication regimes in opinion dynamics: Changing the number of communicating agents
paper_content:
This article contributes in four ways to the research on time-discrete continuous opinion dynamics with compromising agents. First, communication regimes are introduced as an elementary concept of opinion dynamics models. Second, we develop a model that covers two major models of continuous opinion dynamics, i.e. the basic model of Deffuant and Weisbuch as well as the model of Krause and Hegselmann. To combine these models, which handle different numbers of communicating agents, we convert the convergence parameter of Deffuant and Weisbuch into a parameter called self-support. Third, we present simulation results that shed light on how both the number of communicating agents and the self-support affect opinion dynamics. The fourth contribution is a theoretically driven criterion for when to stop a simulation and how to extrapolate to infinitely many steps.
---
paper_title: How can extremism prevail? A study based on the relative agreement interaction model
paper_content:
We model opinion dynamics in populations of agents with continuous opinion and uncertainty. The opinions and uncertainties are modified by random pair interactions. We propose a new model of interactions, called the relative agreement model, which is a variant of the previously discussed bounded confidence. In this model, uncertainty as well as opinion can be modified by interactions. We introduce extremist agents by attributing a much lower uncertainty (and thus higher persuasion) to a small proportion of agents at the extremes of the opinion distribution. We study the evolution of the opinion distribution submitted to the relative agreement model. Depending upon the choice of parameters, the extremists can have a very local influence or attract the whole population. We propose a qualitative analysis of the convergence process based on a local field notion. The genericity of the observed results is tested on several variants of the bounded confidence model.
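As the interaction rule is usually quoted, agent $i$ (opinion $x_i$, uncertainty $u_i$) influences agent $j$ whenever the overlap of their opinion segments, $h_{ij} = \min(x_i+u_i,\, x_j+u_j) - \max(x_i-u_i,\, x_j-u_j)$, exceeds $u_i$; writing $\mu$ for a constant gain, the update reported for the model is

\[
  x_j \leftarrow x_j + \mu\Bigl(\tfrac{h_{ij}}{u_i}-1\Bigr)(x_i - x_j),
  \qquad
  u_j \leftarrow u_j + \mu\Bigl(\tfrac{h_{ij}}{u_i}-1\Bigr)(u_i - u_j),
\]

so that confident agents (small $u_i$) are more persuasive, which is how the extremists exert their pull. This restatement is included as a reader aid and should be checked against the original paper for the exact normalisation.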
---
paper_title: Consensus Strikes Back in the Hegselmann-Krause Model of Continuous Opinion Dynamics Under Bounded Confidence
paper_content:
The agent-based bounded confidence model of opinion dynamics of Hegselmann and Krause (2002) is reformulated as an interactive Markov chain. This abstracts from individual agents to a population model which gives a good view on the underlying attractive states of continuous opinion dynamics. We mutually analyse the agent-based model and the interactive Markov chain with a focus on the number of agents and one-sided dynamics. Finally, we compute animated bifurcation diagrams that give an overview of the dynamical behavior. They show an interesting phenomenon when we lower the bound of confidence: after the first bifurcation from consensus to polarisation, consensus strikes back for a while.
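For readers who want to reproduce the agent-based side of this comparison, a minimal synchronous Hegselmann-Krause simulation is sketched below; the parameter values are illustrative and no attempt is made to mirror the paper's interactive Markov chain experiments.

import random

def hk_step(x, eps):
    # One synchronous Hegselmann-Krause update: each agent adopts the mean
    # of all opinions (its own included) within distance eps of its opinion.
    return [sum(xj for xj in x if abs(xj - xi) <= eps) /
            sum(1 for xj in x if abs(xj - xi) <= eps)
            for xi in x]

def simulate_hk(n=100, eps=0.2, steps=200, seed=0):
    random.seed(seed)
    x = [random.random() for _ in range(n)]       # opinions start uniform on [0, 1]
    for _ in range(steps):
        new_x = hk_step(x, eps)
        if max(abs(a - b) for a, b in zip(new_x, x)) < 1e-9:
            break                                 # the profile has stabilised
        x = new_x
    return sorted(x)

if __name__ == "__main__":
    final = simulate_hk()
    # opinions separated by more than eps can no longer interact: count clusters
    clusters = 1 + sum(1 for a, b in zip(final, final[1:]) if b - a > 0.2)
    print("surviving clusters:", clusters)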
---
paper_title: Continuous Opinion Dynamics: Insights through Interactive Markov Chains
paper_content:
We reformulate the agent-based opinion dynamics models of Weisbuch-Deffuant and Hegselmann-Krause as interactive Markov chains. So we switch the scope from a finite number of n agents to a finite number of n opinion classes. Thus, we will look at an infinite population distributed over opinion classes instead of agents with real-number opinions. The interactive Markov chains show similar dynamical behavior as the agent-based models: stabilization and clustering. Our framework leads to a discrete bifurcation diagram for each model which gives a good view on the driving forces and the attractive states of the system. The analysis shows that the emergence of minor clusters in the Weisbuch-Deffuant model and of meta-stable states with very slow convergence to consensus in the Hegselmann-Krause model are intrinsic to the dynamical behavior.
---
paper_title: Better Being Third Than Second In A Search For A Majority Opinion
paper_content:
Monte Carlo simulations of a Sznajd model show that if a near-consensus is formed out of four initially equally widespread opinions, the one which at intermediate times is second in the number of adherents usually loses out against the third-placed opinion.
---
paper_title: Dynamics of structured attitudes and opinions
paper_content:
Opinion dynamics models frequently assume an opinion to be a one-dimensional concept, and only sometimes are vectors of opinions considered. In contrast, psychological attitude theory considers an attitude to be a multidimensional concept, where each dimension is composed of beliefs and evaluative components. In this article we extend classic basic models of opinion dynamics such that they can capture this multidimensionality. We present models where individuals talk about different issues, e.g. evaluations or beliefs about the relation between objects and attributes. We present reinterpretations of the results of previous simulations, and we present results of simulations of new dynamics. Those new dynamics are based on a hierarchical approach to bounded confidence.
---
paper_title: Compromise and Synchronization in Opinion Dynamics
paper_content:
We discuss two models of opinion dynamics. We first present a brief review of the Hegselmann and Krause (HK) compromise model in two dimensions, showing that it is possible to simulate the dynamics in the limit of an infinite number of agents by solving numerically a rate equation for a continuum distribution of opinions. Then, we discuss the Opinion Changing Rate (OCR) model, which allows one to study under which conditions a group of agents with different natural tendencies (rates) to change opinion can reach agreement. In the context of this model, consensus is viewed as a synchronization process.
---
paper_title: Modelling Group Opinion Shift to Extreme : the Smooth Bounded Confidence Model
paper_content:
We consider the phenomenon of opinion shift to the extreme reported in the social psychology literature. We argue that a good candidate to model this phenomenon can be a new variant of the bounded confidence (BC) model, the smooth BC model which we propose in this paper. This model considers individuals with a continuous opinion and an uncertainty. Individuals interact by random pairs, and attract each other's opinion proportionally to a Gaussian function of the distance between their opinions. We first show that this model presents a shift to the extreme when we introduce extremists (very convinced individuals with extreme opinions) in the population, even if there is the same number of extremists located at each extreme. This behaviour is similar to the one already identified with other versions of BC model. Then we propose a modification of the smooth BC model to account for the social psychology data and theories related to this phenomenon. The modification is based on the hypothesis of perspective taking (empathy) in the context of consensus seeking.
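One way to write the Gaussian-weighted attraction described above, with the interaction strength decaying smoothly rather than being cut off at a sharp bound, is

\[
  x_j \leftarrow x_j + \mu\, \exp\!\Bigl(-\tfrac{(x_i-x_j)^2}{2\sigma^2}\Bigr)\,(x_i - x_j),
\]

where $\mu$ is a gain and the width $\sigma$ plays the role of the uncertainty; the exact parameterisation used in the paper may differ, so this formula should be read only as an illustration of the smooth-kernel idea.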
---
paper_title: MONTE CARLO SIMULATION OF DEFFUANT OPINION DYNAMICS WITH QUALITY DIFFERENCES
paper_content:
In this work, the consequences of different opinion qualities in the Deffuant model were examined. If these qualities are randomly distributed, no different behavior was observed. In contrast to that, systematically assigned qualities had strong effects to the final opinion distribution. There was a high probability that the strongest opinion was one with a high quality. Furthermore, under the same conditions, this major opinion was much stronger than in the models without systematic differences. Finally, a society with systematic quality differences needed more tolerance to form a complete consensus than one without or with unsystematic ones.
---
paper_title: Truth and Cognitive Division of Labour First Steps towards a Computer Aided Social Epistemology
paper_content:
The paper analyzes the chances for the truth to be found and broadly accepted under conditions of cognitive division of labour combined with a social exchange process. Cognitive division of labour means that only some individuals are active truth seekers, possibly with different capacities. The social exchange process consists in an exchange of opinions between all individuals, whether truth seekers or not. We develop a model which is investigated by both mathematical tools and computer simulations. As an analytical result, the Funnel theorem states that under rather weak conditions on the social process a consensus on the truth will be reached if all individuals possess an arbitrarily small inclination for truth seeking. The "Leading the pack" theorem states that under certain conditions even a single truth seeker may lead all individuals to the truth. Systematic simulations analyze how close and how fast groups can get to the truth depending on the frequency of truth seekers, their capacities as truth seekers, the position of the truth (more to the extreme or more in the centre of an opinion space), and the willingness to take into account the opinions of others when exchanging and updating opinions. A tricky movie visualizes simulation results in a parameter space of higher dimensions.
---
paper_title: Minorities in a Model for Opinion Formation
paper_content:
We study a model for social influence in which the agents' opinion is a continuous variable [G. Weisbuch et al., Complexity 7(2), 55 (2002)]. The convergent opinion adjustment process takes place as a result of random binary encounters whenever the difference between agents' opinions is below a given threshold. The inhomogeneity in the dynamics gives rise to a complex steady state structure, which is also highly dependent on the threshold and on the convergence parameter of the model.
---
paper_title: The role of network topology on extremism propagation with the Relative Agreement opinion dynamics
paper_content:
In Deffuant et al. (J. Artif. Soc. Soc. Simulation 5 (2002) 4), we proposed a simple model of opinion dynamics, which we used to simulate the influence of extremists in a population. Simulations were run without any specific interaction structure and varying the simulation parameters, we observed different attractors such as predominance of centrism or of extremism. We even observed in certain conditions, that the whole population drifts to one extreme of the opinion, even if initially there are an equal number of extremists at each extreme of the opinion axis. In the present paper, we study the influence of the social networks on the presence of such a dynamical behavior. In particular, we use small-world networks with variable connectivity and randomness of the connections. We find that the drift to a single extreme appears only beyond a critical level of connectivity, which decreases when the randomness increases.
---
paper_title: Communication regimes in opinion dynamics: Changing the number of communicating agents
paper_content:
This article contributes in four ways to the research on time-discrete continuous opinion dynamics with compromising agents. First, communication regimes are introduced as an elementary concept of opinion dynamic models. Second, we develop a model that covers two major models of continuous opinion dynamics, i.e. the basic model of Deffuant and Weisbuch as well as the model of Krause and Hegselmann. To combine these models, which handle different numbers of communicating agents, we convert the convergence parameter of Deffuant and Weisbuch into a parameter called self-support. Third, we present simulation results that shed light on how the number of communicating agents but also how the self-support affect opinion dynamics. The fourth contribution is a theoretically driven criterion when to stop a simulation and how to extrapolate to infinite many steps.
---
paper_title: Attitude Dynamics With Limited Verbalisation Capabilities
paper_content:
This article offers a new perspective for research on opinion dynamics. It demonstrates the importance of the distinction between opinion and attitude, which has originally been discussed in the literature on consumer behaviour. As opinions are verbalised attitudes, not only biases in interpretation and adoption processes have to be considered, but also verbalisation biases should be addressed. Such biases can be caused by language deficits or social norms. The model presented in this article captures the basic features of common opinion dynamic models and, additionally, biases in the verbalisation process. Further, it gives a first analysis of this model and shows that precision as a bias in the verbalisation process can influence the dynamics significantly. Presenting and applying the concept of the area of influential attitudes, the impact of each parameter (selective attitude, selective interpretation, and precision) is analysed independently. Some preliminary results for combined effects are presented.
---
paper_title: How can extremism prevail? A study based on the relative agreement interaction model
paper_content:
We model opinion dynamics in populations of agents with continuous opinion and uncertainty. The opinions and uncertainties are modified by random pair interactions. We propose a new model of interactions, called relative agreement model, which is a variant of the previously discussed bounded confidence. In this model, uncertainty as well as opinion can be modified by interactions. We introduce extremist agents by attributing a much lower uncertainty (and thus higher persuasion) to a small proportion of agents at the extremes of the opinion distribution. We study the evolution of the opinion distribution submitted to the relative agreement model. Depending upon the choice of parameters, the extremists can have a very local influence or attract the whole population. We propose a qualitative analysis of the convergence process based on a local field notion. The genericity of the observed results is tested on several variants of the bounded confidence model.
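For reference, a commonly cited form of the relative agreement interaction is sketched below; the parameter name `mu` follows the usual presentation and is not quoted from this abstract.

```python
def relative_agreement(x_i, u_i, x_j, u_j, mu=0.2):
    """Influence of agent i on agent j: the overlap of the opinion segments
    [x - u, x + u], relative to i's uncertainty, scales how far j's opinion
    and uncertainty move toward i's."""
    overlap = min(x_i + u_i, x_j + u_j) - max(x_i - u_i, x_j - u_j)
    if overlap > u_i:                       # i is convincing enough
        ra = overlap / u_i - 1.0
        x_j += mu * ra * (x_i - x_j)
        u_j += mu * ra * (u_i - u_j)
    return x_j, u_j
```

In a simulation, the same rule is typically applied in both directions of each randomly drawn pair, and extremists are seeded with small initial uncertainties.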
---
paper_title: Opinion Dynamics Driven by Various Ways of Averaging
paper_content:
The paper treats opinion dynamics under bounded confidence when agents employ, besides an arithmetic mean, means like a geometric mean, a power mean or a random mean in aggregating opinions. The different kinds of collective dynamics resulting from these various ways of averaging are studied and compared by simulations. Particular attention is given to the random mean, which is a new concept introduced in this paper. All those concrete means are just particular cases of a partial abstract mean, which also is a new concept. This comprehensive concept of averaging opinions is investigated also analytically and it is shown, in particular, that the dynamics driven by it always stabilizes in a certain pattern of opinions.
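The different aggregation rules mentioned above can be sketched as follows; the `random` branch is only a stand-in (a random convex combination) for the paper's random mean, which is defined more carefully there.

```python
import numpy as np

def aggregate(opinions, kind="arithmetic", p=2.0, rng=None):
    """Aggregate the opinions inside an agent's confidence set with
    different means (a sketch of the idea, not the paper's definitions)."""
    v = np.asarray(opinions, dtype=float)
    if kind == "arithmetic":
        return float(v.mean())
    if kind == "geometric":              # assumes strictly positive opinions
        return float(np.exp(np.log(v).mean()))
    if kind == "power":
        return float(np.mean(v ** p) ** (1.0 / p))
    if kind == "random":                 # illustrative stand-in only
        w = (rng or np.random.default_rng()).dirichlet(np.ones(len(v)))
        return float(w @ v)
    raise ValueError(kind)
```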
---
|
Title: Continuous Opinion Dynamics under Bounded Confidence: A Survey
Section 1: Introduction
Description 1: Write an introductory overview discussing the emergence of opinion dynamics, the nature of continuous opinion dynamics, and relevant models and their historical contexts.
Section 2: The models
Description 2: Describe the two main models of continuous opinion dynamics (Deffuant-Weisbuch and Hegselmann-Krause) in both their agent-based and density-based formulations, including definitions and key processes.
Section 3: The Deffuant-Weisbuch model
Description 3: Provide a detailed explanation of the Deffuant-Weisbuch (DW) model, including its agent-based and density-based versions, mathematical formulations, and behavior under different conditions.
Section 4: The Hegselmann-Krause model
Description 4: Offer a thorough description of the Hegselmann-Krause (HK) model, outlining its agent-based and density-based versions, mathematical formulations, and unique characteristics.
Section 5: Bifurcation Diagrams
Description 5: Present and explain the bifurcation diagrams for both homogeneous density-based models, highlighting the location and transitions of clusters concerning the bound of confidence.
Section 6: Extensions
Description 6: Review various extensions to the basic continuous opinion dynamics models, including multidimensional opinions, heterogeneous bounds of confidence, social networks, different communication regimes, and other miscellaneous factors.
Section 7: Conclusions and open problems
Description 7: Summarize the key findings, highlight the importance of these models, and discuss the remaining open problems and future research directions in the field of continuous opinion dynamics.
|
Systematic Literature Review (SLR) of Resource Scheduling and Security in Cloud Computing
| 14 |
---
paper_title: SCHEDULING IN CLOUD COMPUTING
paper_content:
Cloud computing is an emerging technology. It processes huge amounts of data, so the scheduling mechanism plays a vital role in cloud computing. The proposed protocol is designed to minimize switching time, improve resource utilization, and improve server performance and throughput. The method is based on scheduling jobs in the cloud so as to overcome the drawbacks of existing protocols. Each job is assigned a priority, which yields better performance while minimizing waiting time and switching time. The goal is to manage job scheduling so as to resolve the drawbacks of existing protocols and improve the efficiency and throughput of the server.
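As a rough illustration of priority-based job scheduling of the kind described above (not the paper's actual protocol), the sketch below runs jobs in priority order, assuming they all arrive at time zero, and reports the average waiting time.

```python
import heapq

def priority_schedule(jobs):
    """Non-preemptive priority scheduling sketch: jobs = [(priority, burst)];
    a lower priority value runs first. Returns the average waiting time,
    assuming every job is available at time zero."""
    heap = list(jobs)
    heapq.heapify(heap)                 # orders by priority, then burst time
    clock, total_wait = 0, 0
    while heap:
        priority, burst = heapq.heappop(heap)
        total_wait += clock             # time this job spent waiting
        clock += burst
    return total_wait / len(jobs)

print(priority_schedule([(2, 5), (1, 3), (3, 8), (1, 2)]))
```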
---
paper_title: A heuristic resource scheduling scheme in time-constrained networks
paper_content:
Sensor devices are emerging as a promising enabler for the development of new solutions in a plethora of Internet of Things (IoT) applications. With the explosion of connected devices, it is essential for the conversion gateway between the Internet and sensor nodes to support end-to-end (e2e) interoperability, because the current Internet Protocol (IP) does not support end-to-end delay in IEEE 802.15.4e. As part of IoT, we propose a scheduling scheme of multiple channels and multiple timeslots to minimize the e2e delay in multi-hop environments. The proposed greedy heuristic approach is compared with meta-heuristics in terms of the given end-to-end delay bound. Although the meta-heuristics are more accurate in finding a global optimum or sub-optimal values than the greedy heuristic approach, this advantage comes at the expense of high complexity. The simulation results show that the proposed scheme reduces the complexity by obtaining suboptimal solutions that satisfy the e2e delay requirement.
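A greedy assignment of (timeslot, channel) pairs along a route, of the general flavour described above, might look like the sketch below; the data structures and the deadline check are illustrative assumptions, not the paper's heuristic.

```python
def greedy_slot_assignment(route_hops, num_channels, deadline):
    """Assign (timeslot, channel) pairs hop by hop along a route so that each
    hop is scheduled strictly after the previous one; returns None if the
    end-to-end deadline (in timeslots) cannot be met. A greedy sketch only."""
    busy = set()                       # (timeslot, channel) pairs already taken
    schedule, t = [], 0
    for hop in route_hops:
        placed = False
        while t < deadline and not placed:
            for ch in range(num_channels):
                if (t, ch) not in busy:
                    busy.add((t, ch))
                    schedule.append((hop, t, ch))
                    placed = True
                    break
            t += 1                     # the next hop must use a later timeslot
        if not placed:
            return None                # deadline violated
    return schedule

print(greedy_slot_assignment(["A->B", "B->C", "C->sink"], num_channels=2, deadline=5))
```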
---
paper_title: Above the Clouds: A Berkeley View of Cloud Computing
paper_content:
Cloud Computing, the long-held dream of computing as a utility, has the potential to transform a large part of the IT industry, making software even more attractive as a service and shaping the way IT hardware is designed and purchased. Developers with innovative ideas for new Internet services no longer require the large capital outlays in hardware to deploy their service or the human expense to operate it. They need not be concerned about overprovisioning for a service whose popularity does not meet their predictions, thus wasting costly resources, or underprovisioning for one that becomes wildly popular, thus missing potential customers and revenue. Moreover, companies with large batch-oriented tasks can get results as quickly as their programs can scale, since using 1000 servers for one hour costs no more than using one server for 1000 hours. This elasticity of resources, without paying a premium for large scale, is unprecedented in the history of IT. Cloud Computing refers to both the applications delivered as services over the Internet and the hardware and systems software in the datacenters that provide those services. The services themselves have long been referred to as Software as a Service (SaaS). The datacenter hardware and software is what we will call a Cloud. When a Cloud is made available in a pay-as-you-go manner to the general public, we call it a Public Cloud; the service being sold is Utility Computing. We use the term Private Cloud to refer to internal datacenters of a business or other organization, not made available to the general public. Thus, Cloud Computing is the sum of SaaS and Utility Computing, but does not include Private Clouds. People can be users or providers of SaaS, or users or providers of Utility Computing. We focus on SaaS Providers (Cloud Users) and Cloud Providers, which have received less attention than SaaS Users. From a hardware point of view, three aspects are new in Cloud Computing.
---
paper_title: Workflow scheduling in cloud: a survey
paper_content:
To program in distributed computing environments such as grids and clouds, workflow is adopted as an attractive paradigm for its powerful ability in expressing a wide range of applications, including scientific computing, multi-tier Web, and big data processing applications. With the development of cloud technology and extensive deployment of cloud platform, the problem of workflow scheduling in cloud becomes an important research topic. The challenges of the problem lie in: NP-hard nature of task-resource mapping; diverse QoS requirements; on-demand resource provisioning; performance fluctuation and failure handling; hybrid resource scheduling; data storage and transmission optimization. Consequently, a number of studies, focusing on different aspects, emerged in the literature. In this paper, we firstly conduct taxonomy and comparative review on workflow scheduling algorithms. Then, we make a comprehensive survey of workflow scheduling in cloud environment in a problem---solution manner. Based on the analysis, we also highlight some research directions for future investigation.
---
paper_title: A decentralized self-adaptation mechanism for service-based applications in the cloud
paper_content:
Cloud computing, with its promise of (almost) unlimited computation, storage, and bandwidth, is increasingly becoming the infrastructure of choice for many organizations. As cloud offerings mature, service-based applications need to dynamically recompose themselves to self-adapt to changing QoS requirements. In this paper, we present a decentralized mechanism for such self-adaptation, using market-based heuristics. We use a continuous double-auction to allow applications to decide which services to choose, among the many on offer. We view an application as a multi-agent system and the cloud as a marketplace where many such applications self-adapt. We show through a simulation study that our mechanism is effective for the individual application as well as from the collective perspective of all applications adapting at the same time.
---
paper_title: Scaling in Cloud Computing
paper_content:
Cloud computing is a resource that offers computer assets as services instead of a deliverable product, which allows the storage and sharing of files of multiple types like audio, video, software, data files and many more. The data is shared over internet cloud storage and can be accessed for free and also at an affordable price. Sharing information and technology effectively through real-world collaboration helps organisations gain competitive advantages. This paper gives a brief description of cloud computing and its scaling techniques; the main explanation is on vertical scaling and horizontal scaling, with examples.
---
paper_title: A survey on security issues in service delivery models of cloud computing
paper_content:
Cloud computing is a way to increase the capacity or add capabilities dynamically without investing in new infrastructure, training new personnel, or licensing new software. It extends Information Technology's (IT) existing capabilities. In the last few years, cloud computing has grown from being a promising business concept to one of the fast growing segments of the IT industry. But as more and more information on individuals and companies are placed in the cloud, concerns are beginning to grow about just how safe an environment it is. Despite of all the hype surrounding the cloud, enterprise customers are still reluctant to deploy their business in the cloud. Security is one of the major issues which reduces the growth of cloud computing and complications with data privacy and data protection continue to plague the market. The advent of an advanced model should not negotiate with the required functionalities and capabilities present in the current model. A new model targeting at improving features of an existing model must not risk or threaten other important features of the current model. The architecture of cloud poses such a threat to the security of the existing technologies when deployed in a cloud environment. Cloud service users need to be vigilant in understanding the risks of data breaches in this new environment. In this paper, a survey of the different security risks that pose a threat to the cloud is presented. This paper is a survey more specific to the different security issues that has emanated due to the nature of the service delivery models of a cloud computing system.
---
paper_title: The management of security in Cloud computing
paper_content:
Cloud computing has elevated IT to newer limits by offering the market environment data storage and capacity with flexible, scalable computing processing power to match elastic demand and supply, whilst reducing capital expenditure. However, the opportunity cost of the successful implementation of Cloud computing is to effectively manage the security in the cloud applications. Security consciousness and concerns arise as soon as one begins to run applications beyond the designated firewall and move closer towards the public domain. The purpose of the paper is to provide an overall security perspective of Cloud computing with the aim to highlight the security concerns that should be properly addressed and managed to realize the full potential of Cloud computing. Gartner's list on cloud security issues, as well as the findings from the International Data Corporation enterprise panel survey based on cloud threats, will be discussed in this paper.
---
paper_title: The Performance Evaluation of Proactive Fault Tolerant Scheme over Cloud using CloudSim Simulator
paper_content:
The main issues in a cloud-based environment are security, process failure rate and performance. Fault tolerance plays a key role in ensuring high serviceability and reliability in the cloud. Nowadays, demands for high fault tolerance, high serviceability and high reliability are becoming unprecedentedly strong, so building a highly fault-tolerant, serviceable and reliable cloud is a critical, challenging, and urgently required task. A lot of research is currently underway to analyze how clouds can provide fault tolerance for an application. When there are too many processes and a virtual machine is overloaded, processes fail, causing a lot of rework and annoyance for the users. The major causes of process failure at the virtual machine level are overloading of virtual machines, extra resource requirements of existing processes, etc. This paper introduces dynamic load balancing techniques for the cloud environment in which a RAM/Broker (resource awareness module) proactively decides whether a process can be run on an existing virtual machine or should be assigned to a freshly created or another existing virtual machine. In this way it can tackle the occurrence of faults. The paper also proposes a mechanism that proactively determines the load on virtual machines and, according to the requirements, either creates a new virtual machine or uses an existing one for assigning the process. Once a process completes, it updates the virtual machine status on the broker service so that other processes can be assigned to it.
---
paper_title: Service Level Agreement in Cloud Computing
paper_content:
Cloud computing that provides cheap and pay-as-you-go computing resources is rapidly gaining momentum as an alternative to traditional IT Infrastructure. As more and more consumers delegate their tasks to cloud providers, Service Level Agreements (SLA) between consumers and providers emerge as a key aspect. Due to the dynamic nature of the cloud, continuous monitoring of Quality of Service (QoS) attributes is necessary to enforce SLAs. Also numerous other factors such as trust (on the cloud provider) come into consideration, particularly for enterprise customers that may outsource their critical data. This complex nature of the cloud landscape warrants a sophisticated means of managing SLAs. This paper proposes a mechanism for managing SLAs in a cloud computing environment using the Web Service Level Agreement (WSLA) framework, developed for SLA monitoring and SLA enforcement in a Service Oriented Architecture (SOA). We use the third party support feature of WSLA to delegate monitoring and enforcement tasks to other entities in order to solve the trust issues. We also present a real world use case to validate our proposal.
---
paper_title: A Categorisation of Cloud Computing Business Models
paper_content:
This paper reviews current cloud computing business models and presents proposals on how organisations can achieve sustainability by adopting appropriate models. We classify cloud computing business models into eight types: (1) Service Provider and Service Orientation; (2) Support and Services Contracts; (3) In- House Private Clouds; (4) All-In-One Enterprise Cloud; (5) One-Stop Resources and Services; (6) Government funding; (7) Venture Capitals; and (8) Entertainment and Social Networking. Using the Jericho Forum’s ‘Cloud Cube Model’ (CCM), the paper presents a summary of the eight business models. We discuss how the CCM fits into each business model, and then based on this discuss each business model’s strengths and weaknesses. We hope adopting an appropriate cloud computing business model will help organisations investing in this technology to stand firm in the economic downturn.
---
paper_title: Systematic Reviews in the Social Sciences : A Practical Guide
paper_content:
Such diverse thinkers as Lao-Tze, Confucius, and U.S. Defense Secretary Donald Rumsfeld have all pointed out that we need to be able to tell the difference between real and assumed knowledge. The systematic review is a scientific tool that can help with this difficult task. It can help, for example, with appraising, summarising, and communicating the results and implications of otherwise unmanageable quantities of data. This is important because quite often there are so many studies, and their results are often so conflicting, that no policymaker or practitioner could possibly carry out this task themselves. Systematic review methods have been widely used in health care, and are becoming increasingly common in the social sciences (fostered, for example, by the work of the Campbell Collaboration). This book outlines the rationale and methods of systematic reviews, giving worked examples from social science and other fields. It requires no previous knowledge, but takes the reader through the process stage by stage. It draws on examples from such diverse fields as psychology, criminology, education, transport, social welfare, public health, and housing and urban policy, among others. The book includes detailed sections on assessing the quality of both quantitative and qualitative research; searching for evidence in the social sciences; meta-analytic and other methods of evidence synthesis; publication bias; heterogeneity; and approaches to dissemination.
---
paper_title: Cloud Computing and Grid Computing 360-Degree Compared
paper_content:
Cloud Computing has become another buzzword after Web 2.0. However, there are dozens of different definitions for Cloud Computing and there seems to be no consensus on what a Cloud is. On the other hand, Cloud Computing is not a completely new concept; it has intricate connection to the relatively new but thirteen-year established Grid Computing paradigm, and other relevant technologies such as utility computing, cluster computing, and distributed systems in general. This paper strives to compare and contrast Cloud Computing with Grid Computing from various angles and give insights into the essential characteristics of both.
---
paper_title: Utility functions in autonomic systems
paper_content:
Utility functions provide a natural and advantageous framework for achieving self-optimization in distributed autonomic computing systems. We present a distributed architecture, implemented in a realistic prototype data center, that demonstrates how utility functions can enable a collection of autonomic elements to continually optimize the use of computational resources in a dynamic, heterogeneous environment. Broadly, the architecture is a two-level structure of independent autonomic elements that supports flexibility, modularity, and self-management. Individual autonomic elements manage application resource usage to optimize local service-level utility functions, and a global arbiter allocates resources among application environments based on resource-level utility functions obtained from the managers of the applications. We present empirical data that demonstrate the effectiveness of our utility function scheme in handling realistic, fluctuating Web-based transactional workloads running on a Linux cluster.
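The global-arbiter idea can be illustrated with a brute-force sketch that splits a pool of identical servers between application environments so as to maximise the summed utility; the utility curves below are made-up examples, not those of the prototype data center.

```python
import itertools

def allocate_servers(total_servers, utilities):
    """Global arbiter sketch: try every split of identical servers across
    application environments and keep the split with the highest summed
    resource-level utility. `utilities` is a list of functions u_k(n)."""
    best_split, best_value = None, float("-inf")
    for split in itertools.product(range(total_servers + 1), repeat=len(utilities)):
        if sum(split) != total_servers:
            continue
        value = sum(u(n) for u, n in zip(utilities, split))
        if value > best_value:
            best_split, best_value = split, value
    return best_split, best_value

# illustrative concave utility curves (diminishing returns)
u_web   = lambda n: 100 * (1 - 0.7 ** n)
u_batch = lambda n: 40 * n ** 0.5
print(allocate_servers(10, [u_web, u_batch]))
```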
---
paper_title: DepSky: Dependable and Secure Storage in a Cloud-of-Clouds
paper_content:
The increasing popularity of cloud storage services has lead companies that handle critical data to think about using these services for their storage needs. Medical record databases, large biomedical datasets, historical information about power systems and financial data are some examples of critical data that could be moved to the cloud. However, the reliability and security of data stored in the cloud still remain major concerns. In this work we present DepSky, a system that improves the availability, integrity, and confidentiality of information stored in the cloud through the encryption, encoding, and replication of the data on diverse clouds that form a cloud-of-clouds. We deployed our system using four commercial clouds and used PlanetLab to run clients accessing the service from different countries. We observed that our protocols improved the perceived availability, and in most cases, the access latency, when compared with cloud providers individually. Moreover, the monetary costs of using DepSky in this scenario is at most twice the cost of using a single cloud, which is optimal and seems to be a reasonable cost, given the benefits.
---
paper_title: Logging Solutions to Mitigate Risks Associated with Threats in Infrastructure as a Service Cloud
paper_content:
Cloud computing offers computational resources such as processing, networking, and storage to customers. However, the cloud also brings with it security concerns which affect both cloud consumers and providers. The Cloud Security Alliance (CSA) define the security concerns as the seven main threats. This paper investigates how threat number one (malicious activities performed in consumers' virtual machines/VMs) can affect the security of both consumers and providers. It proposes logging solutions to mitigate risks associated with this threat. We systematically design and implement a prototype of the proposed logging solutions in an IaaS to record the history of customer VM's files. The proposed system can be modified in order to record VMs' process behaviour log files. These log files can assist in identifying malicious activities (spamming) performed in the VMs as an example of how the proposed solutions benefits the provider side. The proposed system can record the log files while having a smaller trusted computing base compared to previous work. Thus, the logging solutions in this paper can assist in mitigating risks associated with the CSA threats to benefit consumers and providers.
---
paper_title: Pricing Cloud Compute Commodities: A Novel Financial Economic Model
paper_content:
In this study, we design, develop, and simulate a cloud resources pricing model that satisfies two important constraints: (i) the dynamic ability of the model to provide a high satisfaction guarantee, measured as Quality of Service (QoS), from the users' perspective, and (ii) profitability constraints from the cloud service providers' perspective. We employ financial option theory and treat the cloud resources as underlying assets to capture the realistic value of the cloud compute commodities (C3). We then price the cloud resources using our model. We discuss the results for four different metrics that we introduce to guarantee the quality of service and price as follows: (a) Moore's law based depreciation of asset values, (b) new technology based volatility measures in capturing price changes, (c) a new financial option pricing based model combining the above two concepts, and (d) the effect of age of resources and depreciation of cloud resources on QoS. We show that the cloud parameters can be mapped to a financial economic model and we discuss the results of cloud compute commodity pricing for various parameters, such as the age of the resource, quality of service, and contract period.
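As background for the option-theoretic treatment, a plain Black-Scholes call price is sketched below; the paper's actual model additionally incorporates Moore's-law depreciation and technology-based volatility, which this sketch does not attempt to reproduce.

```python
from math import log, sqrt, exp, erf

def norm_cdf(z):
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def bs_call(S, K, T, r, sigma):
    """Plain Black-Scholes call price: S = current value of the resource,
    K = strike (agreed usage price), T = contract period in years,
    r = risk-free rate, sigma = volatility of the resource's value."""
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

print(round(bs_call(S=100.0, K=95.0, T=0.5, r=0.03, sigma=0.4), 2))
```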
---
paper_title: Dynamic Resource Allocation in Computing Clouds Using Distributed Multiple Criteria Decision Analysis
paper_content:
In computing clouds, it is desirable to avoid wasting resources as a result of under-utilization and to avoid lengthy response times as a result of over-utilization. In this paper, we propose a new approach for dynamic autonomous resource management in computing clouds. The main contribution of this work is two-fold. First, we adopt a distributed architecture where resource management is decomposed into independent tasks, each of which is performed by Autonomous Node Agents that are tightly coupled with the physical machines in a data center. Second, the Autonomous Node Agents carry out configurations in parallel through Multiple Criteria Decision Analysis using the PROMETHEE method. Simulation results show that the proposed approach is promising in terms of scalability, feasibility and flexibility.
---
paper_title: Adaptive Management of Virtualized Resources in Cloud Computing Using Feedback Control
paper_content:
Cloud computing, as a newly emergent computing environment, offers dynamic, flexible infrastructures and QoS-guaranteed services in a pay-as-you-go manner to the public. System virtualization technology, which renders flexible and scalable system services, is the base of cloud computing. How to provide a self-managing and autonomic infrastructure for cloud computing through virtualization becomes an important challenge. In this paper, using feedback control theory, we present a VM-based architecture for adaptive management of virtualized resources in cloud computing and model an adaptive controller that dynamically adjusts the utilization of multiple virtualized resources to achieve application Service Level Objectives (SLOs) in cloud computing. Compared with Xen, KVM is chosen as the virtual machine monitor (VMM) to implement the architecture. Evaluation of the proposed controller model showed that the model could allocate resources reasonably in response to the dynamically changing resource requirements of different applications which execute on different VMs in the virtual resource pool to achieve the applications' SLOs.
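The feedback-control idea can be illustrated with a minimal integral-style adjustment of a VM's CPU cap toward a response-time SLO; the gain `k_i`, the cap bounds and the single-resource scope are assumptions for illustration, not the controller designed in the paper.

```python
def adjust_cpu_cap(cap, measured_rt, target_rt, k_i=0.05,
                   cap_min=0.1, cap_max=1.0):
    """Integral-style adjustment: if the measured response time exceeds the
    SLO, grant the VM more CPU; if it is comfortably below, reclaim some."""
    error = (measured_rt - target_rt) / target_rt   # normalised SLO error
    cap = cap + k_i * error                         # more error -> more CPU
    return max(cap_min, min(cap_max, cap))

cap = 0.3
for rt in [450, 400, 320, 260, 210, 190]:           # ms, illustrative samples
    cap = adjust_cpu_cap(cap, measured_rt=rt, target_rt=200)
    print(round(cap, 3))
```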
---
paper_title: SPORC: Group Collaboration using Untrusted Cloud Resources
paper_content:
Cloud-based services are an attractive deployment model for user-facing applications like word processing and calendaring. Unlike desktop applications, cloud services allow multiple users to edit shared state concurrently and in real-time, while being scalable, highly available, and globally accessible. Unfortunately, these benefits come at the cost of fully trusting cloud providers with potentially sensitive and important data. ::: ::: To overcome this strict tradeoff, we present SPORC, a generic framework for building a wide variety of collaborative applications with untrusted servers. In SPORC, a server observes only encrypted data and cannot deviate from correct execution without being detected. SPORC allows concurrent, low-latency editing of shared state, permits disconnected operation, and supports dynamic access control even in the presence of concurrency. We demonstrate SPORC's flexibility through two prototype applications: a causally-consistent key-value store and a browser-based collaborative text editor. ::: ::: Conceptually, SPORC illustrates the complementary benefits of operational transformation (OT) and fork* consistency. The former allows SPORC clients to execute concurrent operations without locking and to resolve any resulting conflicts automatically. The latter prevents a misbehaving server from equivocating about the order of operations unless it is willing to fork clients into disjoint sets. Notably, unlike previous systems, SPORC can automatically recover from such malicious forks by leveraging OT's conflict resolution mechanism.
---
paper_title: Venus: verification for untrusted cloud storage
paper_content:
This paper presents Venus, a service for securing user interaction with untrusted cloud storage. Specifically, Venus guarantees integrity and consistency for applications accessing a key-based object store service, without requiring trusted components or changes to the storage provider. Venus completes all operations optimistically, guaranteeing data integrity. It then verifies operation consistency and notifies the application. Whenever either integrity or consistency is violated, Venus alerts the application. We implemented Venus and evaluated it with Amazon S3 commodity storage service. The evaluation shows that it adds no noticeable overhead to storage operations.
---
paper_title: Static and dynamic server allocation in systems with on/off sources
paper_content:
A system consisting of a number of servers, where demands of different types arrive in bursts (modelled by interrupted Poisson processes), is examined in the steady state. The problem is to decide how many servers to allocate to each job type, so as to minimize a cost function expressed in terms of average queue sizes. First, an exact analysis is provided for an isolated IPP/M/n queue. The results are used to compute the optimal static server allocation policy. The latter is then compared to four heuristic policies which employ dynamic switching of servers from one queue to another (such switches take time and hence incur costs).
---
paper_title: Depot: Cloud Storage with Minimal Trust
paper_content:
The paper describes the design, implementation, and evaluation of Depot, a cloud storage system that minimizes trust assumptions. Depot tolerates buggy or malicious behavior by any number of clients or servers, yet it provides safety and liveness guarantees to correct clients. Depot provides these guarantees using a two-layer architecture. First, Depot ensures that the updates observed by correct nodes are consistently ordered under Fork-Join-Causal consistency (FJC). FJC is a slight weakening of causal consistency that can be both safe and live despite faulty nodes. Second, Depot implements protocols that use this consistent ordering of updates to provide other desirable consistency, staleness, durability, and recovery properties. Our evaluation suggests that the costs of these guarantees are modest and that Depot can tolerate faults and maintain good availability, latency, overhead, and staleness even when significant faults occur.
---
paper_title: HAIL: a high-availability and integrity layer for cloud storage
paper_content:
We introduce HAIL (High-Availability and Integrity Layer), a distributed cryptographic system that allows a set of servers to prove to a client that a stored file is intact and retrievable. HAIL strengthens, formally unifies, and streamlines distinct approaches from the cryptographic and distributed-systems communities. Proofs in HAIL are efficiently computable by servers and highly compact---typically tens or hundreds of bytes, irrespective of file size. HAIL cryptographically verifies and reactively reallocates file shares. It is robust against an active, mobile adversary, i.e., one that may progressively corrupt the full set of servers. We propose a strong, formal adversarial model for HAIL, and rigorous analysis and parameter choices. We show how HAIL improves on the security and efficiency of existing tools, like Proofs of Retrievability (PORs) deployed on individual servers. We also report on a prototype implementation.
---
paper_title: A new model to ensure security in cloud computing services
paper_content:
In the commercial world, various computing needs are provided as a service. Service providers meet these computing needs in different ways, for example, by maintaining software or purchasing expensive hardware. Security is one of the most critical aspects in a cloud computing environment due to the sensitivity and importance of information stored in the cloud. The risk of malicious insiders in the cloud and the failure of cloud services have received a great deal of attention by companies. This paper focuses on issues related to data security and privacy in cloud computing and proposes a new model, called Multi-Cloud Databases (MCDB). The purpose of the proposed new model is to address security and privacy risks in the cloud computing environment. Three security issues will be examined in our proposed model: data integrity, data intrusion, and service availability.
---
paper_title: Data Storage Security Model for Cloud Computing
paper_content:
Data security is one of the biggest concerns in adopting Cloud computing. In Cloud environment, users remotely store their data and relieve themselves from the hassle of local storage and maintenance. However, in this process, they lose control over their data. Existing approaches do not take all the facets into consideration viz. dynamic nature of Cloud, computation & communication overhead etc. In this paper, we propose a Data Storage Security Model to achieve storage correctness incorporating Cloud’s dynamic nature while maintaining low computation and communication cost.
---
paper_title: Deployment models: Towards eliminating security concerns from cloud computing
paper_content:
Cloud computing has become a popular choice as an alternative to investing new IT systems. When making decisions on adopting cloud computing related solutions, security has always been a major concern. This article summarizes security concerns in cloud computing and proposes five service deployment models to ease these concerns. The proposed models provide different security related features to address different requirements and scenarios and can serve as reference models for deployment.
---
paper_title: BlueSky: A Cloud-Backed File System for the Enterprise
paper_content:
We present BlueSky, a network file system backed by cloud storage. BlueSky stores data persistently in a cloud storage provider such as Amazon S3 or Windows Azure, allowing users to take advantage of the reliability and large storage capacity of cloud providers and avoid the need for dedicated server hardware. Clients access the storage through a proxy running on-site, which caches data to provide lower-latency responses and additional opportunities for optimization. We describe some of the optimizations which are necessary to achieve good performance and low cost, including a log-structured design and a secure in-cloud log cleaner. BlueSky supports multiple protocols--both NFS and CIFS--and is portable to different providers.
---
|
Title: Systematic Literature Review (SLR) of Resource Scheduling and Security in Cloud Computing
Section 1: INTRODUCTION
Description 1: Introduce the importance of scheduling in cloud computing, the associated security issues, and the objectives of this paper. Outline the organization of the paper.
Section 2: BACKGROUND
Description 2: Provide a general view of cloud computing, cloud architecture, features, obstacles, research method, and research questions.
Section 3: Cloud Definition
Description 3: Discuss the various definitions of cloud computing, the Service Level Agreement (SLA), and the concerns of Quality of Service (QoS).
Section 4: Research Method
Description 4: Explain the Systematic Literature Review (SLR) method, its stages, and its necessity for this research, including identifying gaps and classifications.
Section 5: Research Questions
Description 5: List the specific research questions that guide this study.
Section 6: Scope
Description 6: Define the scope of the research using the PICOC model.
Section 7: SEARCH STRATEGY
Description 7: Discuss the choice of the search period, search strings, electronic resources, and manual search methods.
Section 8: Study Selection Criteria and Procedures
Description 8: Describe the inclusion/exclusion criteria and the procedures for performing the selection of studies.
Section 9: QUALITY ASSESSMENT
Description 9: Outline the decision-making process about the quality of existing work and the quality assessment questionnaires.
Section 10: DATA EXTRACTION
Description 10: Detail the proposed questions for data extraction from each paper, including the types of models, security issues, and outcomes.
Section 11: SYNTHESIS
Description 11: Explain the process for synthesizing data from the studies, including the synthesis strategy and threats to validity.
Section 12: LIMITATIONS
Description 12: Discuss the limitations anticipated in this research.
Section 13: DISCUSSION
Description 13: Discuss the findings from the SLR, including recent related approaches, models, and the proposed solution.
Section 14: CONCLUSION
Description 14: Summarize the paper’s findings and the importance of SLR in cloud computing scheduling and security.
|
Congestion Control Techniques in WSNs: A Review
| 15 |
---
paper_title: A Light-Weight Opportunistic Forwarding Protocol with Optimized Preamble Length for Low-Duty-Cycle Wireless Sensor Networks
paper_content:
In wireless sensor networks, sensed information is expected to be reliably and timely delivered to a sink in an ad-hoc way. However, it is challenging to achieve this goal because of the highly dynamic topology induced from asynchronous duty cycles and temporally and spatially varying link quality among nodes. Currently some opportunistic forwarding protocols have been proposed to address the challenge. However, they involve complicated mechanisms to determine the best forwarder at each hop, which incurs heavy overheads for the resource-constrained nodes. In this paper, we propose a light-weight opportunistic forwarding (LWOF) scheme. Different from other recently proposed opportunistic forwarding schemes, LWOF employs neither historical network information nor a contention process to select a forwarder prior to data transmissions. It confines forwarding candidates to an optimized area, and takes advantage of the preamble in low-power-listening (LPL) MAC protocols and dual-channel communication to forward a packet to a unique downstream node towards the sink with a high probability, without making a forwarding decision prior to data transmission. Under LWOF, we optimize LPL MAC protocol to have a shortened preamble (LWMAC), based on a theoretical analysis on the relationship among preamble length, delivery probability at each hop, node density and sleep duration. Simulation results show that LWOF, along with LWMAC, can achieve relatively good performance in terms of delivery reliability and latency, as a receiver-based opportunistic forwarding protocol, while reducing energy consumption per packet by at least twice.
---
paper_title: Secure Data Aggregation In Wireless Sensor Networks
paper_content:
Wireless sensor networks (WSNs) usually consist of a large number of sensors which have limited capability in terms of communication, computation and memory [1] (Akyildiz et al. IEEE Commun. Mag. 40 (8), 102–114, 2002), [2] (Yick et al. Comput. Networks, 52 (12), 2292–2330, 2008). These sensors are deployed in a remote and unsurveilled field and they autonomously form a network before they engage in a predefined sensing task. WSNs render possible solutions to many problems in both civilian and military applications, including temperature monitoring, wildfire detection, animal tracking, and battlefield surveillance. Therefore, it is challenging to come up with efficient ways to collect the desired data, given that the sensors only have simple hardware and software resources. For instance, most of the sensors only have a short lifetime due to the non-rechargeable battery, which is a bottleneck for designing WSN protocols.
---
paper_title: A Congestion-Aware Routing Algorithms Based on Traffic Priority in Wireless Sensor Networks
paper_content:
Wireless sensor networks allow the network manager to measure observed events within a short radio range and give them an appropriate response. In many applications of wireless sensor networks, due to the high volume of traffic, the probability of congestion and packet loss increases. Congestion in sensor networks has a direct effect on energy efficiency and the quality of service of applications. Congestion may cause buffer overflow, longer queuing times and higher packet loss. Packet loss not only reduces the reliability and quality of service of an application but also wastes energy. In this paper, a scheme for controlling congestion in wireless sensor networks is proposed. The aim of the proposed method is to reduce congestion by considering the priority of data. In the proposed algorithm, packets are classified according to their data priority. Depending on the type of packet, traffic is redirected to control congestion in the network. Finally, the proposed algorithm is simulated and the results show that it improves packet loss, energy consumption and average buffer size compared with a similar algorithm.
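A generic sketch of priority-aware redirection under congestion is shown below; the threshold, the drop policy for low-priority packets and the occupancy table are illustrative assumptions, not the algorithm proposed in the paper.

```python
def choose_next_hop(packet_priority, primary, alternates, occupancy,
                    congestion_threshold=0.8):
    """If the primary next hop's buffer occupancy exceeds the threshold,
    high-priority packets are redirected to the least-loaded alternate,
    while low-priority packets may simply be dropped (a sketch only)."""
    if occupancy[primary] <= congestion_threshold:
        return primary
    if packet_priority == "high" and alternates:
        return min(alternates, key=lambda n: occupancy[n])
    return None                                     # drop / defer low priority

occupancy = {"n1": 0.9, "n2": 0.4, "n3": 0.6}
print(choose_next_hop("high", "n1", ["n2", "n3"], occupancy))   # -> n2
print(choose_next_hop("low", "n1", ["n2", "n3"], occupancy))    # -> None
```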
---
paper_title: Hop‐by‐Hop Congestion Avoidance in wireless sensor networks based on genetic support vector machine
paper_content:
Abstract Congestion in wireless sensor networks causes packet loss, throughput reduction and low energy efficiency. To address this challenge, a transmission rate control method is presented in this article. The strategy calculates buffer occupancy ratio and estimates the congestion degree of the downstream node. Then, it sends this information to the current node. The current node adjusts the transmission rate to tackle the problem of congestion, improving the network throughput by using multi-classification obtained via Support Vector Machines (SVMs). SVM parameters are tuned, using genetic algorithm. Simulations showed that in most cases, the results of the SVM network match the actual data in training and testing phases. Also, simulation results demonstrated that the proposed method not only decreases energy consumption, packet loss and end to end delay in networks, but it also significantly improves throughput and network lifetime under different traffic conditions, especially in heavy traffic areas.
---
paper_title: Quality of Information for Wireless Body Area Networks
paper_content:
Wireless Body Area Networks (WBANs) are a specific group of Wireless Sensor Networks (WSNs) that are used to build patient monitoring systems which facilitate remote sensing of patients over a long period of time. In this type of system, there is a possibility that the information accessible to the health expert at the end point may deviate from the original information generated. In some cases, these variations may cause an expert to make a different decision from the one that would have been made given the original data. The proposed work contributes toward overcoming this difficulty by defining a quality of information (QoI) metric that helps to preserve the required information. In this paper, we analytically model the QoI as the reliability of data generation and the reliability of data transfer in a WBAN.
---
paper_title: Data Aggregation in Wireless Sensor Networks: Previous Research, Current Status and Future Directions
paper_content:
Wireless sensor networks (WSNs) consist of large number of small sized sensor nodes, whose main task is to sense the desired phenomena in a particular region of interest. These networks have large number of applications such as habitat monitoring, disaster management, security and military etc. Sensor nodes are very small in size and have limited processing capability as these nodes have very low battery power. WSNs are also prone to failure, due to low battery power constraint. Data aggregation is an energy efficient technique in WSNs. Due to high node density in sensor networks same data is sensed by many nodes, which results in redundancy. This redundancy can be eliminated by using data aggregation approach while routing packets from source nodes to base station. Researchers still face trouble to select an efficient and appropriate data aggregation technique from the existing literature of WSNs. This research work depicts a broad methodical literature analysis of data aggregation in the area of WSNs in specific. In this survey, standard methodical literature analysis technique is used based on a complete collection of 123 research papers out of large collection of 932 research papers published in 20 foremost workshops, symposiums, conferences and 17 prominent journals. The current status of data aggregation in WSNs is distributed into various categories. Methodical analysis of data aggregation in WSNs is presented which includes techniques, tools, methodology and challenges in data aggregation. The literature covered fifteen types of data aggregation techniques in WSNs. Detailed analysis of this research work will help researchers to find the important characteristics of data aggregation techniques and will also help to select the most suitable technique for data aggregation. Research issues and future research directions have also been suggested in this research literature.
---
paper_title: A Survey on Reliability Protocols in Wireless Sensor Networks
paper_content:
Wireless Sensor Network (WSN) applications have become more and more attractive with the miniaturization of circuits and the large variety of sensors. The different application domains, especially critical fields of WSN use, make the reliability of data acquisition and communication a hot research field that must be tackled efficiently. Indeed, the quality of the widely used, cheap wireless sensors and their scarce energy supply underlie these reliability challenges, which lead to data loss or corruption. To solve this problem, the design of a reliability mechanism that detects these shortcomings and recovers from them becomes necessary. In this article, we present a survey of existing reliability protocols conceived especially for WSNs due to their special features. The deep classification and discussion in this study allow for understanding the pros and cons of state-of-the-art works in order to enhance the existing schemes and fill the gaps. We have classified the works according to the required level of reliability, the manner of identifying the origins of the lack of reliability, and the control to recover this lack of reliability. Across the discussion in this study, we deduce that cross-layer design between the MAC, routing, and transport layers presents a good concept to efficiently overcome the different reliability holes.
---
paper_title: Quality of Information for Wireless Body Area Networks
paper_content:
Wireless Body Area Networks (WBANs) are a specific group of Wireless Sensor Networks (WSNs) that are used to establish patient monitoring systems which facilitate remote sensing of patients over a long period of time. In this type of system, there is a possibility that the information accessible to the health expert at the end point may diverge from the original information generated. In some cases, these variations may cause an expert to make a different decision from the one that would have been made given the original data. The proposed work contributes toward overcoming this foremost difficulty by defining a quality of information (QoI) metric that helps to preserve the required information. In this paper, we analytically model the QoI as the reliability of data generation and the reliability of data transfer in a WBAN.
---
paper_title: A Light-Weight Opportunistic Forwarding Protocol with Optimized Preamble Length for Low-Duty-Cycle Wireless Sensor Networks
paper_content:
In wireless sensor networks, sensed information is expected to be reliably and timely delivered to a sink in an ad-hoc way. However, it is challenging to achieve this goal because of the highly dynamic topology induced from asynchronous duty cycles and temporally and spatially varying link quality among nodes. Currently some opportunistic forwarding protocols have been proposed to address the challenge. However, they involve complicated mechanisms to determine the best forwarder at each hop, which incurs heavy overheads for the resource-constrained nodes. In this paper, we propose a light-weight opportunistic forwarding (LWOF) scheme. Different from other recently proposed opportunistic forwarding schemes, LWOF employs neither historical network information nor a contention process to select a forwarder prior to data transmissions. It confines forwarding candidates to an optimized area, and takes advantage of the preamble in low-power-listening (LPL) MAC protocols and dual-channel communication to forward a packet to a unique downstream node towards the sink with a high probability, without making a forwarding decision prior to data transmission. Under LWOF, we optimize LPL MAC protocol to have a shortened preamble (LWMAC), based on a theoretical analysis on the relationship among preamble length, delivery probability at each hop, node density and sleep duration. Simulation results show that LWOF, along with LWMAC, can achieve relatively good performance in terms of delivery reliability and latency, as a receiver-based opportunistic forwarding protocol, while reducing energy consumption per packet by at least twice.
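To make the preamble-length trade-off concrete, the sketch below uses a simple probabilistic model that is an assumption for illustration, not LWOF's actual derivation: if each forwarding candidate wakes uniformly at random within a sleep period T, a preamble of length L is heard with probability min(1, L/T), so the shortest preamble reaching a target forwarding probability over n candidates can be searched directly.

```python
# Hedged sketch: choosing a preamble length for a low-power-listening MAC.
# Assumes each of n forwarding candidates wakes uniformly at random once per
# sleep period T (a simplification, not LWOF/LWMAC's exact analysis).

def p_forward(preamble_len, sleep_period, n_candidates):
    p_hear = min(1.0, preamble_len / sleep_period)
    return 1.0 - (1.0 - p_hear) ** n_candidates

def shortest_preamble(sleep_period, n_candidates, target=0.95, step=1.0):
    length = step
    while p_forward(length, sleep_period, n_candidates) < target:
        length += step
        if length > sleep_period:
            break
    return length

# Example: 500 ms sleep period, 4 candidates, 95% forwarding target.
print(shortest_preamble(sleep_period=500.0, n_candidates=4))
```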
---
paper_title: RCRT: rate-controlled reliable transport for wireless sensor networks
paper_content:
Emerging high-rate applications (imaging, structural monitoring, acoustic localization) will need to transport large volumes of data concurrently from several sensors. These applications are also loss-intolerant. A key requirement for such applications, then, is a protocol that reliably transport sensor data from many sources to one or more sinks without incurring congestion collapse. In this paper, we discuss RCRT, a rate-controlled reliable transport protocol suitable for constrained sensor nodes. RCRT uses end-to-end explicit loss recovery, but places all the congestion detection and rate adaptation functionality in the sinks. This has two important advantages: efficiency and flexibility. Because sinks make rate allocation decisions, they are able to achieve greater efficiency since they have a more comprehensive view of network behavior. For the same reason, it is possible to alter the rate allocation decisions (for example, from one that ensures that all nodes get the same rate, to one that ensures that nodes get rates in proportion to their demands), without modifying sensor code at all. We evaluate RCRT extensively on a 40-node wireless sensor network testbed and show that RCRT achieves more than twice the rate achieved by a recently proposed interference-aware distributed rate-control protocol, IFRC [23].
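The sink-centric design can be sketched as follows; this is a generic sink-driven AIMD allocation under assumed constants and an equal-split policy, not RCRT's exact controller, and the source names are placeholders.

```python
# Hedged sketch of sink-driven rate allocation in the spirit of RCRT:
# congestion detection and rate adaptation live entirely at the sink.
# The AIMD constants and the equal-split policy are illustrative assumptions.

class SinkRateController:
    def __init__(self, n_sources, init_total_rate=10.0,
                 add_incr=0.5, mult_decr=0.5):
        self.n = n_sources
        self.total_rate = init_total_rate
        self.add_incr = add_incr
        self.mult_decr = mult_decr

    def update(self, congestion_detected):
        # Congestion could be inferred, e.g., from how slowly losses are repaired.
        if congestion_detected:
            self.total_rate *= self.mult_decr
        else:
            self.total_rate += self.add_incr
        return self.allocate()

    def allocate(self):
        # Equal split; a demand-proportional policy could be swapped in without
        # touching sensor code, which is the flexibility argued in the abstract.
        return {f"src{i}": self.total_rate / self.n for i in range(self.n)}

ctrl = SinkRateController(n_sources=4)
print(ctrl.update(congestion_detected=False))
print(ctrl.update(congestion_detected=True))
```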
---
paper_title: Traffic-Aware Dynamic Routing to Alleviate Congestion in Wireless Sensor Networks
paper_content:
Congestion is one of the problems encountered in Wireless Sensor Networks (WSNs). Many algorithms have been proposed, and most of them address the problem by reducing the number of packets sent by the sender node. This solution is not ideal, as it decreases the overall throughput of the WSN. This paper focuses on a dynamic routing algorithm that is aware of network traffic and of the probability of congestion, and that solves the congestion problem in WSNs through optimal use of idle nodes in the network. The algorithm sends fewer packets through congested areas and more packets along paths that contain idle or lightly loaded nodes. The proposed algorithm can overcome obstacles created by congestion and provide the best throughput performance in WSNs.
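A hedged sketch of the traffic-aware routing idea is shown below: next-hop selection combines progress toward the sink with neighbor queue occupancy, so packets are steered around congested regions. The linear weighting and the neighbor fields are illustrative assumptions, not the paper's exact potential-field formulation.

```python
# Hedged sketch of traffic-aware next-hop selection: prefer neighbors closer
# to the sink (depth) but steer around loaded queues. The linear cost and its
# weights are illustrative assumptions, not TADR's exact potential field.

def pick_next_hop(neighbors, alpha=1.0, beta=2.0):
    """neighbors: list of dicts with 'id', 'depth' (hops to sink),
    'queue' (buffer occupancy in [0, 1])."""
    def cost(nb):
        return alpha * nb["depth"] + beta * nb["queue"]
    return min(neighbors, key=cost)["id"]

neighbors = [
    {"id": "A", "depth": 2, "queue": 0.9},   # shorter path but congested
    {"id": "B", "depth": 3, "queue": 0.1},   # longer path through an idle node
]
print(pick_next_hop(neighbors))  # with queue weighted heavily, 'B' is chosen
```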
---
paper_title: Congestion Avoidance Based on Lightweight Buffer Management in Sensor Networks
paper_content:
A wireless sensor network is constrained by computation capability, memory space, communication bandwidth, and above all, energy supply. When a critical event triggers a surge of data generated by the sensors, congestion may occur as data packets converge toward a sink. Congestion causes energy waste, throughput reduction, and information loss. However, the important problem of congestion avoidance in sensor networks is largely open. This paper proposes a congestion-avoidance scheme based on lightweight buffer management. We describe simple yet effective approaches that prevent data packets from overflowing the buffer space of the intermediate sensors. These approaches automatically adapt the sensors' forwarding rates to nearly optimal without causing congestion. We discuss how to implement buffer-based congestion avoidance with different MAC protocols. In particular, for CSMA with implicit ACK, our 1/k-buffer solution prevents hidden terminals from causing congestion. We demonstrate how to maintain near-optimal throughput with a small buffer at each sensor and how to achieve congestion-free load balancing when there are multiple routing paths toward multiple sinks
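One loose reading of the buffer-based idea is sketched below: a sender forwards only when the downstream node has advertised enough free buffer space, and with k contending upstream senders each conservatively claims a 1/k share. This is a simplified, hedged interpretation; MAC details and the paper's exact 1/k-buffer mechanism are not modeled.

```python
# Hedged sketch of buffer-based congestion avoidance: forward a packet only
# if the downstream node has advertised enough free buffer space. With k
# contending upstream senders, each claims a 1/k share of the free slots
# (a simplified reading of the 1/k-buffer idea; no MAC details modeled).

def may_send(advertised_free_slots, contending_senders):
    share = advertised_free_slots / max(1, contending_senders)
    return share >= 1.0  # send only if at least one slot is "ours"

print(may_send(advertised_free_slots=4, contending_senders=3))  # True
print(may_send(advertised_free_slots=2, contending_senders=3))  # False
```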
---
paper_title: Explicit and precise rate control for wireless sensor networks
paper_content:
The state of the art congestion control algorithms for wireless sensor networks respond to coarse-grained feedback regarding available capacity in the network with an additive increase multiplicative decrease mechanism to set source rates. Providing precise feedback is challenging in wireless networks because link capacities vary with traffic on interfering links. We address this challenge by applying a receiver capacity model that associates capacities with nodes instead of links, and use it to develop and implement the first explicit and precise distributed rate-based congestion control protocol for wireless sensor networks --- the wireless rate control protocol (WRCP). Apart from congestion control, WRCP has been designed to achieve lexicographic max-min fairness. Through extensive experimental evaluation on the USC Tutornet wireless sensor network testbed, we show that WRCP offers substantial improvements over the state of the art in flow completion times as well as in end-to-end packet delays.
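The receiver capacity model can be sketched as follows: each node owns a fixed capacity that is split among the flows crossing it, and a flow runs at the smallest share it receives along its path. This is a hedged simplification; WRCP additionally iterates toward lexicographic max-min fairness, and the capacities and flows below are placeholders.

```python
# Hedged sketch of rate control with a receiver capacity model: every node
# has a fixed capacity split among flows crossing it; a flow runs at the
# smallest share along its path. A simplification of WRCP, which additionally
# works toward lexicographic max-min fairness in a distributed fashion.

def allocate_rates(flows, node_capacity):
    """flows: dict flow_id -> list of node ids on its path (excluding source)."""
    load = {}
    for path in flows.values():
        for node in path:
            load[node] = load.get(node, 0) + 1
    # Each flow gets the bottleneck share along its path.
    return {
        fid: min(node_capacity[n] / load[n] for n in path)
        for fid, path in flows.items()
    }

flows = {"f1": ["B", "SINK"], "f2": ["B", "SINK"], "f3": ["SINK"]}
capacity = {"B": 10.0, "SINK": 9.0}
print(allocate_rates(flows, capacity))  # each flow limited by its bottleneck node
```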
---
|
Title: Congestion Control Techniques in WSNs: A Review
Section 1: INTRODUCTION
Description 1: This section provides a detailed background on wireless sensor networks (WSNs), the significance of congestion control, and the phases involved in congestion control.
Section 2: LITERATURE REVIEW
Description 2: This section discusses various congestion control algorithms for WSNs, their types, and their operational procedures.
Section 3: RCRT
Description 3: This section describes the RCRT (Rate-Controlled Reliable Transport) protocol and its mechanism for resolving congestion.
Section 4: I2MR
Description 4: This section covers the I2MR (Interference and Interdependency Mitigated Routing) protocol, how it controls congestion, and its limitations.
Section 5: TADR
Description 5: This section elaborates on the TADR (Traffic-Aware Dynamic Routing) protocol, including its congestion control mechanism and drawbacks.
Section 6: Buffer-Based Congestion Avoidance Scheme
Description 6: This section outlines the buffer-based congestion avoidance scheme and its effectiveness in load balancing and buffer access.
Section 7: DAIPaS
Description 7: This section explains the DAIPaS (Dynamic Alternative Interference-aware Path Selection) protocol and its strategy for congestion control.
Section 8: Fusion
Description 8: This section describes the Fusion method for congestion control, including its reliance on queue length and hop-by-hop flow control.
Section 9: WRCP (Wireless Rate Control Protocol)
Description 9: This section details the WRCP protocol and its use of a receiver capacity model for fast convergence and fair rate allocation.
Section 10: TRCCIT (Tunable Reliability with Congestion Control for Information Transport)
Description 10: This section discusses the TRCCIT protocol, its hybrid acknowledgment approach, and multipath forwarding for congestion control.
Section 11: DPCC (Decentralized Predictive Congestion Control)
Description 11: This section focuses on the DPCC protocol, including its adaptive flow and back-off interval selection approaches.
Section 12: GMCAR (Grid-based Multipath with Congestion Avoidance Routing)
Description 12: This section introduces the GMCAR protocol, its grid-based methodology, and its effectiveness in reducing delay and increasing network output.
Section 13: TASA (Traffic Aware Scheduling Algorithm)
Description 13: This section elaborates on the TASA protocol, its scheduling approach, and its use of graph theory methods for traffic management.
Section 14: OTF (On-the-Fly Scheduling)
Description 14: This section describes the OTF scheduling approach, including its slot-based interference prevention and traffic load adaptation.
Section 15: PERFORMANCE COMPARISON
Description 15: This section presents a performance comparison of the discussed congestion control schemes, highlighting their operational strategies, strengths, and weaknesses.
|
Learning and the Unknown: Surveying Steps toward Open World Recognition
| 5 |
---
paper_title: Zero-Shot Learning—A Comprehensive Evaluation of the Good, the Bad and the Ugly
paper_content:
Due to the importance of zero-shot learning, i.e. classifying images where there is a lack of labeled training data, the number of proposed approaches has recently increased steadily. We argue that it is time to take a step back and to analyze the status quo of the area. The purpose of this paper is three-fold. First, given the fact that there is no agreed upon zero-shot learning benchmark, we first define a new benchmark by unifying both the evaluation protocols and data splits of publicly available datasets used for this task. This is an important contribution as published results are often not comparable and sometimes even flawed due to, e.g. pre-training on zero-shot test classes. Moreover, we propose a new zero-shot learning dataset, the Animals with Attributes 2 (AWA2) dataset which we make publicly available both in terms of image features and the images themselves. Second, we compare and analyze a significant number of the state-of-the-art methods in depth, both in the classic zero-shot setting but also in the more realistic generalized zero-shot setting. Finally, we discuss in detail the limitations of the current status of the area which can be taken as a basis for advancing it.
---
paper_title: Incremental Open Set Intrusion Recognition Using Extreme Value Machine
paper_content:
Typically, most network intrusion detection systems use supervised learning techniques to identify network anomalies. A problem exists when identifying the unknowns and automatically updating a classifier with new query classes. This is defined as an open set incremental learning problem and we propose to extend a recently introduced method, the Extreme Value Machine (EVM) to address the issue of identifying new classes during query time. The EVM is derived from the statistical extreme value theory and is the first classifier that can perform kernel-free, nonlinear, variable bandwidth outlier detection combined with incremental learning. In this paper, we utilize the EVM for intrusion detection and measure the open set recognition performance of identifying known and unknown classes. Additionally, we evaluate the performance on the KDDCUP’99 dataset and compare the results with the state-ofthe- art Weibull-SVM (W-SVM). Our findings demonstrate that the EVM mirrors the performance of the W-SVM classifier, while it supports incremental learning.
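The EVM's prediction step can be sketched as a Weibull-shaped inclusion probability around each stored extreme vector; a query falling outside every such model is flagged as unknown. In the sketch below, the Weibull parameters are assumed to have already been fit to margin distances during training, and the vectors, labels, and parameters are illustrative placeholders.

```python
import numpy as np

# Hedged sketch of Extreme Value Machine (EVM)-style prediction. Each stored
# extreme vector x_i carries Weibull parameters (lambda_i, kappa_i) fit to its
# margin distances during training (fitting not shown; values below are
# placeholders). Psi_i(x) = exp(-(||x - x_i|| / lambda_i) ** kappa_i).

def evm_predict(x, extreme_vectors, labels, lambdas, kappas, threshold=0.5):
    dists = np.linalg.norm(extreme_vectors - x, axis=1)
    psi = np.exp(-(dists / lambdas) ** kappas)   # inclusion probabilities
    best = int(np.argmax(psi))
    if psi[best] < threshold:
        return "unknown", float(psi[best])       # open set rejection
    return labels[best], float(psi[best])

evs = np.array([[0.0, 0.0], [5.0, 5.0]])
labels = ["normal", "dos"]
lambdas = np.array([1.5, 1.5])
kappas = np.array([2.0, 2.0])
print(evm_predict(np.array([0.2, 0.1]), evs, labels, lambdas, kappas))
print(evm_predict(np.array([20.0, -7.0]), evs, labels, lambdas, kappas))
```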
---
paper_title: Open Set Fingerprint Spoof Detection Across Novel Fabrication Materials
paper_content:
A fingerprint spoof detector is a pattern classifier that is used to distinguish a live finger from a fake (spoof) one in the context of an automated fingerprint recognition system. Most spoof detectors are learning-based and rely on a set of training images. Consequently, the performance of any such spoof detector significantly degrades when encountering spoofs fabricated using novel materials not found in the training set. In real-world applications, the problem of fingerprint spoof detection must be treated as an open set recognition problem where incomplete knowledge of the fabrication materials used to generate spoofs is present at training time, and novel materials may be encountered during system deployment. To mitigate the security risk posed by novel spoofs, this paper introduces: 1) the use of the Weibull-calibrated SVM (W-SVM), which is relatively robust for open set recognition, as a novel-material detector and a spoof detector and 2) a scheme for the automatic adaptation of the W-SVM-based spoof detector to new spoof materials that leverages interoperability across classifiers. Experiments conducted on new partitions of the LivDet 2011 database designed for open set evaluation suggest: 1) a 97% increase in the error rate of the existing spoof detectors when tested using new spoof materials and 2) up to 44% improvement in spoof detection performance across spoof materials when the proposed adaptive approach is used.
---
paper_title: On optimum recognition error and reject tradeoff
paper_content:
The performance of a pattern recognition system is characterized by its error and reject tradeoff. This paper describes an optimum rejection rule and presents a general relation between the error and reject probabilities and some simple properties of the tradeoff in the optimum recognition system. The error rate can be directly evaluated from the reject function. Some practical implications of the results are discussed. Examples in normal distributions and uniform distributions are given.
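The rejection rule described here can be stated directly: withhold a decision whenever the maximum posterior probability falls below 1 - t, and sweep t to trace the error-reject tradeoff. A minimal sketch follows, assuming the posteriors come from some already-trained probabilistic classifier.

```python
import numpy as np

# Minimal sketch of the reject rule: classify only when the maximum posterior
# probability is at least 1 - t; otherwise withhold a decision. Posteriors are
# assumed to come from an already-trained probabilistic model.

def classify_with_reject(posteriors, t=0.3):
    posteriors = np.asarray(posteriors)
    best = int(np.argmax(posteriors))
    if posteriors[best] < 1.0 - t:
        return None  # reject: too ambiguous to decide
    return best

print(classify_with_reject([0.90, 0.05, 0.05]))  # confident -> class 0
print(classify_with_reject([0.40, 0.35, 0.25]))  # ambiguous -> None (reject)
```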
---
paper_title: On the feasibility of classification-based product package authentication
paper_content:
Depending on the product category, the authenticity of a consumer good concerns economic, social and/or environmental issues. Counterfeited drugs are a threat to patient safety and cause significant economic losses. Different from physical-marking-based approaches, this work investigates authentication of drugs based on intrinsic texture features of the packaging material. It is assumed that the packaging material of a certain drug shows constant but discriminative textural features which enable authentication, i.e., proving whether the packaging material is genuine or not. This objective requires considering a binary classification problem with an open set of negative classes, i.e., unknown and unseen counterfeits. In order to investigate the feasibility, a novel drug packaging texture database was acquired. The experimental evaluation of two basic requirements in texture classification serves as evidence of the basic feasibility.
---
paper_title: Novelty detection: a review - part 1: statistical approaches
paper_content:
Novelty detection is the identification of new or unknown data or signals that a machine learning system is not aware of during training. It is one of the fundamental requirements of a good classification or identification system, since the test data sometimes contains information about objects that were not known at the time of training the model. In this paper we provide a state-of-the-art review of novelty detection based on statistical approaches. The second part of this review details novelty detection using neural networks. As discussed, there is a multitude of applications where novelty detection is extremely important, including signal processing, computer vision, pattern recognition, data mining, and robotics.
---
paper_title: Finding the Unknown: Novelty Detection with Extreme Value Signatures of Deep Neural Activations
paper_content:
Achieving or even surpassing human-level accuracy became recently possible in a variety of application scenarios due to the rise of convolutional neural networks (CNNs) trained from large datasets. However, solving supervised visual recognition tasks by discriminating among known categories is only one side of the coin. In contrast to this, novelty detection is still an unsolved task where instances of yet unknown categories need to be identified. Therefore, we propose to leverage the powerful discriminative nature of CNNs to novelty detection tasks by investigating class-specific activation patterns. More precisely, we assume that a semantic category can be described by its extreme value signature, that specifies which dimensions of deep neural activations have largest values. By following this intuition, we show that already a small number of high-valued dimensions allows to separate known from unknown categories. Our approach is simple, intuitive, and can be easily put on top of CNNs trained for vanilla classification tasks. We empirically validate the benefits of our approach in terms of accuracy and speed by comparing it against established methods in a variety of novelty detection tasks derived from ImageNet. Finally, we show that visualizing extreme value signatures allows to inspect class-specific patterns learned during training which may ultimately help to better understand CNN models.
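A hedged sketch of the signature idea: summarize each known class by the indices of its k largest mean activations, and score a query by how much its own top-k set overlaps the best-matching class signature; low overlap suggests novelty. The overlap measure, k, and the random activations below are illustrative assumptions, not the paper's exact scoring.

```python
import numpy as np

# Hedged sketch of class-specific extreme value signatures over deep
# activations: a class is summarized by the indices of its k largest mean
# activations; a query whose top-k indices overlap no known signature well
# is flagged as novel. Overlap metric, k, and the data are illustrative.

def signature(mean_activations, k=5):
    return set(np.argsort(mean_activations)[-k:])

def novelty_score(query_activations, class_signatures, k=5):
    q = signature(query_activations, k)
    best_overlap = max(len(q & sig) / k for sig in class_signatures.values())
    return 1.0 - best_overlap  # 0 = matches a known class, 1 = fully novel

rng = np.random.default_rng(0)
class_sigs = {"cat": signature(rng.random(64)), "dog": signature(rng.random(64))}
print(novelty_score(rng.random(64), class_sigs))
```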
---
paper_title: Data-Fusion Techniques for Open-Set Recognition Problems
paper_content:
Most pattern classification techniques are focused on solving closed-set problems in which a classifier is trained with samples of all classes that may appear during the testing phase. In many situations, however, samples of unknown classes, i.e., whose classes did not have any example during the training stage, need to be properly handled during testing. This specific setup is referred to in the literature as open-set recognition. Open-set problems are harder as they might be ill-sampled, not sampled at all, or even undefined. Differently from existing literature, here we aim at solving open-set recognition problems combining different classifiers and features while, at the same time, taking care of unknown classes. Researchers have greatly benefited from combining different methods in order to achieve more robust and reliable classifiers in daring recognition conditions, but those solutions have often focused on closed-set setups. In this paper, we propose the integration of a newly designed open-set graph-based optimum-path forest (OSOPF) classifier with genetic programming (GP) and majority voting fusion techniques. While OSOPF takes care of learning decision boundaries more resilient to unknown classes and outliers, GP combines different problem features to discover appropriate similarity functions and allows a more robust classification through early fusion. Finally, the majority-voting approach combines different classification evidence from different classifier outcomes and features through late-fusion techniques. Performed experiments show the proposed data-fusion approaches yield effective results for open-set recognition problems, significantly outperforming existing counterparts in the literature and paving the way for investigations in this field.
---
paper_title: Towards Open World Recognition
paper_content:
With the advent of rich classification models and high computational power, visual recognition systems have found many operational applications. Recognition in the real world poses multiple challenges that are not apparent in controlled lab environments. The datasets are dynamic and novel categories must be continuously detected and then added. At prediction time, a trained system has to deal with myriad unseen categories. Operational systems require minimal downtime, even to learn. To handle these operational issues, we present the problem of Open World Recognition and formally define it. We prove that thresholding sums of monotonically decreasing functions of distances in linearly transformed feature space can balance “open space risk” and empirical risk. Our theory extends existing algorithms for open world recognition. We present a protocol for evaluation of open world recognition systems. We present the Nearest Non-Outlier (NNO) algorithm that evolves the model efficiently, adding object categories incrementally while detecting outliers and managing open space risk. We perform experiments on the ImageNet dataset with 1.2M+ images to validate the effectiveness of our method on large scale visual recognition tasks. NNO consistently yields superior results on open world recognition.
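An NNO-flavored, heavily simplified sketch of open world recognition is shown below: classify by nearest class mean, reject as unknown when the distance exceeds a threshold, and add new class means incrementally as labeled novel samples arrive. The learned linear transform and calibrated abating function of the actual NNO algorithm are omitted; plain Euclidean distance and a fixed radius are assumptions for illustration.

```python
import numpy as np

# Hedged sketch in the spirit of Nearest Non-Outlier (NNO): nearest-class-mean
# classification with a rejection threshold, plus incremental class addition.
# NNO's learned metric and calibrated abating function are omitted here.

class OpenWorldNCM:
    def __init__(self, reject_radius=2.0):
        self.means, self.labels = [], []
        self.reject_radius = reject_radius

    def add_class(self, label, samples):          # incremental class addition
        self.means.append(np.mean(samples, axis=0))
        self.labels.append(label)

    def predict(self, x):
        if not self.means:
            return "unknown"
        d = np.linalg.norm(np.array(self.means) - x, axis=1)
        i = int(np.argmin(d))
        return self.labels[i] if d[i] <= self.reject_radius else "unknown"

m = OpenWorldNCM()
m.add_class("car", np.array([[0.0, 0.0], [0.5, 0.2]]))
print(m.predict(np.array([0.3, 0.1])))    # known class
print(m.predict(np.array([9.0, 9.0])))    # open space -> unknown
m.add_class("truck", np.array([[9.1, 8.9]]))
print(m.predict(np.array([9.0, 9.0])))    # now recognized
```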
---
paper_title: Using Visual Rhythms for Detecting Video-Based Facial Spoof Attacks
paper_content:
Spoofing attacks or impersonation can be easily accomplished in a facial biometric system wherein users without access privileges attempt to authenticate themselves as valid users, in which an impostor needs only a photograph or a video with facial information of a legitimate user. Even with recent advances in biometrics, information forensics and security, vulnerability of facial biometric systems against spoofing attacks is still an open problem. Even though several methods have been proposed for photo-based spoofing attack detection, attacks performed with videos have been vastly overlooked, which hinders the use of the facial biometric systems in modern applications. In this paper, we present an algorithm for video-based spoofing attack detection through the analysis of global information which is invariant to content, since we discard video contents and analyze content-independent noise signatures present in the video related to the unique acquisition processes. Our approach takes advantage of noise signatures generated by the recaptured video to distinguish between fake and valid access videos. For that, we use the Fourier spectrum followed by the computation of video visual rhythms and the extraction of different characterization methods. For evaluation, we consider the novel unicamp video-attack database, which comprises 17 076 videos composed of real access and spoofing attack videos. In addition, we evaluate the proposed method using the replay-attack database, which contains photo-based and video-based face spoofing attacks.
---
paper_title: A bounded neural network for open set recognition
paper_content:
Open set recognition is, more than an interesting research subject, a sometimes neglected component of various machine learning applications: it is not unusual for learning systems to be developed on top of closed-set assumptions, ignoring the error risk involved in a prediction. This risk is strictly related to the location in feature space where the prediction has to be made, compared to the location of the training data: the more distant the training observations are, the less is known and the higher the risk. Proper handling of this risk can be necessary in various situations where classification and its variants are employed. This paper presents an approach to open set recognition based on an elaborate distance-like computation provided by a weightless neural network model. The results obtained in the proposed test scenarios are quite interesting, placing the proposed method among the current best ones.
---
paper_title: Connecting the dots: Toward accountable machine-learning printer attribution methods
paper_content:
Digital forensics is rapidly evolving as a direct consequence of the adoption of machine-learning methods allied with ever-growing amounts of data. Despite the fact that these methods yield more consistent and accurate results, they may face adoption hindrances in practice if their produced results are absent in a human-interpretable form. In this paper, we exemplify how human-interpretable (a.k.a., accountable) extensions can enhance existing algorithms to aid human experts, by introducing a new method for the source printer attribution problem. We leverage the recently proposed Convolutional Texture Gradient Filter (CTGF) algorithm’s ability to capture local printing imperfections to introduce a new method that maps and highlights important attribution features directly onto the investigated printed document. Supported by Random Forest classifiers, we isolate and rank features that are pivotal for differentiating a printer from others, and back-project those features onto the investigated document, giving analysts further evidence about the attribution process.
---
paper_title: Novelty detection and multi-class classification in power distribution voltage waveforms
paper_content:
Highlights: accurate classification of events in waveforms from electrical distribution networks; novelty detection, i.e., dynamic identification of new classes of events; SVDD using negative examples and maximal margin separation for better generalization; experiments using real data showing significant improvements in classification accuracy; direct application as part of tools to assist mitigation processes in power utilities. The automatic analysis of electrical waveforms is a recurring subject in the power system sector worldwide. In this sense, the idea of this paper is to present an original approach for automatic classification of voltage waveforms in electrical distribution networks. It includes both the classification of the waveforms in multiple known classes, and the detection of new waveforms (novelties) that are not available during the training stage. The classification method, based on the Support Vector Data Description (SVDD), has a suitable formulation for this task, because it is capable of fitting a model on a relatively small set of examples, which may also include negative examples (patterns from other known classes or even novelties), with maximal margin separation. The results obtained on both simulated and real world data demonstrate the ability of the method to identify novelties and to classify known examples correctly. The method finds application in the mitigation process of emergencies normally performed by power utilities' maintenance and protection engineers, which requires fast and accurate event cause identification.
---
paper_title: Learning Person-Specific Representations From Faces in the Wild
paper_content:
Humans are natural face recognition experts, far out-performing current automated face recognition algorithms, especially in naturalistic, “in the wild” settings. However, a striking feature of human face recognition is that we are dramatically better at recognizing highly familiar faces, presumably because we can leverage large amounts of past experience with the appearance of an individual to aid future recognition. Meanwhile, the analogous situation in automated face recognition, where a large number of training examples of an individual are available, has been largely underexplored, in spite of the increasing relevance of this setting in the age of social media. Inspired by these observations, we propose to explicitly learn enhanced face representations on a per-individual basis, and we present two methods enabling this approach. By learning and operating within person-specific representations, we are able to significantly outperform the previous state-of-the-art on PubFig83, a challenging benchmark for familiar face recognition in the wild, using a novel method for learning representations in deep visual hierarchies. We suggest that such person-specific representations aid recognition by introducing an intermediate form of regularization to the problem.
---
paper_title: Open set intrusion recognition for fine-grained attack categorization
paper_content:
Confidently distinguishing a malicious intrusion over a network is an important challenge. Most intrusion detection system evaluations have been performed in a closed set protocol in which only classes seen during training are considered during classification. Thus far, there has been no realistic application in which novel types of behaviors unseen at training - unknown classes as it were - must be recognized for manual categorization. This paper comparatively evaluates malware classification using both closed set and open set protocols for intrusion recognition on the KDDCUP'99 dataset. In contrast to much of the previous work, we employ a fine-grained recognition protocol, in which the dataset is loosely open set - i.e., recognizing individual intrusion types - e.g., “sendmail”, “snmp_guess”, …, etc., rather than more general attack categories (e.g., “DoS”,“Probe”,“R2L”,“U2R”,“Normal”). We also employ two different classifier types - Gaussian RBF kernel SVMs, which are not theoretically guaranteed to bound open space risk, and W-SVMs, which are theoretically guaranteed to bound open space risk. We find that the W-SVM offers superior performance under the open set regime, particularly as the cost of misclassifying unknown classes at query time (i.e., classes not present in the training set) increases. Results of performance tradeoff with respect to cost of unknown as well as discussion of the ramifications of these findings in an operational setting are presented.
---
paper_title: Convolutional Neural Network approaches to granite tiles classification
paper_content:
The quality control process in stone industry is a challenging problem to deal with nowadays. Due to the similar visual appearance of different rocks with the same mineralogical content, economical losses can happen in industry if clients cannot recognize properly the rocks delivered as the ones initially purchased. In this paper, we go toward the automation of rock-quality assessment in different image resolutions by proposing the first data-driven technique applied to granite tiles classification. Our approach understands intrinsic patterns in small image patches through the use of Convolutional Neural Networks tailored for this problem. Experiments comparing the proposed approach to texture descriptors in a well-known dataset show the effectiveness of the proposed method and its suitability for applications in some uncontrolled conditions, such as classifying granite tiles under different image resolutions.
---
paper_title: iCaRL: Incremental Classifier and Representation Learning
paper_content:
A major open problem on the road to artificial intelligence is the development of incrementally learning systems that learn about more and more concepts over time from a stream of data. In this work, we introduce a new training strategy, iCaRL, that allows learning in such a class-incremental way: only the training data for a small number of classes has to be present at the same time and new classes can be added progressively. iCaRL learns strong classifiers and a data representation simultaneously. This distinguishes it from earlier works that were fundamentally limited to fixed data representations and therefore incompatible with deep learning architectures. We show by experiments on CIFAR-100 and ImageNet ILSVRC 2012 data that iCaRL can learn many classes incrementally over a long period of time where other strategies quickly fail.
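iCaRL classifies with a nearest-mean-of-exemplars rule; the sketch below shows only that step, with the deep feature extractor replaced by a placeholder normalization, and with herding-based exemplar selection and the distillation-based representation update omitted. The exemplar sets are illustrative placeholders.

```python
import numpy as np

# Hedged sketch of iCaRL's nearest-mean-of-exemplars classification step.
# phi() stands in for the learned deep feature extractor (an assumption);
# herding-based exemplar selection and the distillation loss used when
# updating the representation are not shown.

def phi(x):                       # placeholder feature extractor
    return x / (np.linalg.norm(x) + 1e-12)

def class_means(exemplars):       # exemplars: dict label -> array of samples
    return {c: np.mean([phi(e) for e in ex], axis=0) for c, ex in exemplars.items()}

def icarl_classify(x, means):
    f = phi(x)
    return min(means, key=lambda c: np.linalg.norm(f - means[c]))

exemplars = {
    "class_0": np.array([[1.0, 0.1], [0.9, 0.0]]),
    "class_1": np.array([[0.0, 1.0], [0.1, 0.9]]),
}
means = class_means(exemplars)
print(icarl_classify(np.array([0.8, 0.2]), means))   # -> class_0
```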
---
paper_title: Place categorization and semantic mapping on a mobile robot
paper_content:
In this paper we focus on the challenging problem of place categorization and semantic mapping on a robot without environment-specific training. Motivated by their ongoing success in various visual recognition tasks, we build our system upon a state-of-the-art convolutional network. We overcome its closed-set limitations by complementing the network with a series of one-vs-all classifiers that can learn to recognize new semantic classes online. Prior domain knowledge is incorporated by embedding the classification system into a Bayesian filter framework that also ensures temporal coherence. We evaluate the classification accuracy of the system on a robot that maps a variety of places on our campus in real-time. We show how semantic information can boost robotic object detection performance and how the semantic map can be used to modulate the robot's behaviour during navigation tasks. The system is made available to the community as a ROS module.
---
paper_title: Classification Under Streaming Emerging New Classes: A Solution Using Completely-Random Trees
paper_content:
This paper investigates an important problem in stream mining, i.e., classification under streaming emerging new classes or SENC . The SENC problem can be decomposed into three subproblems: detecting emerging new classes, classifying known classes, and updating models to integrate each new class as part of known classes. The common approach is to treat it as a classification problem and solve it using either a supervised learner or a semi-supervised learner. We propose an alternative approach by using unsupervised learning as the basis to solve this problem. The proposed method employs completely-random trees which have been shown to work well in unsupervised learning and supervised learning independently in the literature. The completely-random trees are used as a single common core to solve all three subproblems: unsupervised learning, supervised learning, and model update on data streams. We show that the proposed unsupervised-learning-focused method often achieves significantly better outcomes than existing classification-focused methods.
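A hedged sketch of the unsupervised-core idea, using scikit-learn's IsolationForest (an ensemble of randomly built trees) as a convenient stand-in for completely-random trees: points that the trees isolate easily are flagged as emerging-class candidates, and everything else is handed to a known-class classifier. The buffering and model-update logic of the actual method is omitted, and the synthetic data is a placeholder.

```python
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.neighbors import KNeighborsClassifier

# Hedged sketch for streaming emerging new classes (SENC): random trees flag
# likely new-class instances; remaining points go to a known-class classifier.
# IsolationForest is a stand-in; buffering and model updates are omitted.

rng = np.random.default_rng(0)
known_X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(3, 0.3, (50, 2))])
known_y = np.array([0] * 50 + [1] * 50)

detector = IsolationForest(random_state=0).fit(known_X)
classifier = KNeighborsClassifier(n_neighbors=3).fit(known_X, known_y)

def predict_stream(x):
    if detector.predict(x.reshape(1, -1))[0] == -1:   # isolated easily -> novel
        return "emerging-class candidate"
    return int(classifier.predict(x.reshape(1, -1))[0])

print(predict_stream(np.array([0.1, -0.2])))   # known class 0
print(predict_stream(np.array([10.0, 10.0])))  # flagged as emerging
```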
---
paper_title: Probability Models for Open Set Recognition
paper_content:
Real-world tasks in computer vision often touch upon open set recognition: multi-class recognition with incomplete knowledge of the world and many unknown inputs. Recent work on this problem has proposed a model incorporating an open space risk term to account for the space beyond the reasonable support of known classes. This paper extends the general idea of open space risk limiting classification to accommodate non-linear classifiers in a multi-class setting. We introduce a new open set recognition model called compact abating probability (CAP), where the probability of class membership decreases in value (abates) as points move from known data toward open space. We show that CAP models improve open set recognition for multiple algorithms. Leveraging the CAP formulation, we go on to describe the novel Weibull-calibrated SVM (W-SVM) algorithm, which combines the useful properties of statistical extreme value theory for score calibration with one-class and binary support vector machines. Our experiments show that the W-SVM is significantly better for open set object detection and OCR problems when compared to the state-of-the-art for the same tasks.
---
paper_title: Modular ensembles for one-class classification based on density analysis
paper_content:
One-Class Classification (OCC) is an important machine learning task. It studies a special classification problem in which training samples from only one class, named the target class, are available or reliable. Recently, various OCC algorithms have been proposed; however, many of them do not adequately deal with multi-modality, multi-density, the noise and arbitrarily shaped distributions of the target class. In this paper, we propose a novel Density Based Modular Ensemble One-class Classifier (DBM-EOC) algorithm which is motivated by density analysis, the divide-and-conquer method and ensemble learning. DBM-EOC first performs density analysis on training samples to obtain a minimal spanning tree using density characteristics of the target class. On this basis, DBM-EOC automatically identifies clusters, multi-density distributions and the noise in training samples using extreme value analysis. Then target samples are categorized into several groups called Local Dense Subsets (LDS). Samples in each LDS are close to each other and their local densities are similar. A simple base OCC model, e.g. the Gaussian estimator, is built for each LDS afterwards. Finally all the base classifiers are modularly aggregated to construct the DBM-EOC model. We experimentally evaluate DBM-EOC with 6 state-of-the-art OCC algorithms on 5 synthetic datasets, 18 UCI benchmark datasets and the MNIST dataset. The results show that DBM-EOC outperforms other competitors in the majority of cases, especially when the datasets are multi-modal, multi-density or noisy. We propose a modular ensemble OCC algorithm DBM-EOC based on density analysis. We analyze peculiarities of the target class which are crucial for OCC. DBM-EOC obtains a tree structure of the target class considering density. DBM-EOC can automatically detect clusters and remove noise samples. DBM-EOC solves OCC problems with the divide-and-conquer method.
---
paper_title: Challenges in detecting UAS with radar
paper_content:
The recent proliferation of drones has contributed to the emergence of new threats in security applications. Because of their great agility and small size, UAS can be used for numerous missions and are very challenging to detect. Radar technology with its all-weather capability can play an important role in detecting UAS-based threats and in protecting critical assets. However, to be successful, radars have to quickly scan large volumes with great sensitivity, eliminate nuisance alarms from birds and discriminate UAS from ground targets. Radar parameters, antenna scan techniques and target classification requirements for UAS detection are analyzed. A radar implementation is discussed and preliminary results presented. Overall, an X-band radar with electronic scanning capability can contribute to a reliable and affordable solution for detecting UAS-based threats.
---
paper_title: Local Novelty Detection in Multi-class Recognition Problems
paper_content:
In this paper, we propose using local learning for multiclass novelty detection, a framework that we call local novelty detection. Estimating the novelty of a new sample is an extremely challenging task due to the large variability of known object categories. The features used to judge on the novelty are often very specific for the object in the image and therefore we argue that individual novelty models for each test sample are important. Similar to human experts, it seems intuitive to first look for the most related images thus filtering out unrelated data. Afterwards, the system focuses on discovering similarities and differences to those images only. Therefore, we claim that it is beneficial to solely consider training images most similar to a test sample when deciding about its novelty. Following the principle of local learning, for each test sample a local novelty detection model is learned and evaluated. Our local novelty score turns out to be a valuable indicator for deciding whether the sample belongs to a known category from the training set or to a new, unseen one. With our local novelty detection approach, we achieve state-of-the-art performance in multi-class novelty detection on two popular visual object recognition datasets, Caltech-256 and Image Net. We further show that our framework: (i) can be successfully applied to unknown face detection using the Labeled-Faces-in-the-Wild dataset and (ii) outperforms recent work on attribute-based unfamiliar class detection in fine-grained recognition of bird species on the challenging CUB-200-2011 dataset.
---
paper_title: Audio Event Recognition in the Smart Home
paper_content:
After giving a brief overview of the relevance and value of deploying automatic audio event recognition (AER) in the smart home market, this chapter reviews three aspects of the productization of AER which are important to consider when developing pathways to impact between fundamental research and “real-world” applicative outlets. In the first section, it is shown that applications introduce a variety of practical constraints which elicit new research topics in the field: clarifying the definition of sound events, thus suggesting interest for the explicit modeling of temporal patterns and interruption; running and evaluating AER in 24/7 sound detection setups, which suggests to recast the problem as open-set recognition; and running AER applications on consumer devices with limited audio quality and computational power, thus triggering interest for scalability and robustness. The second section explores the definition of user experience for AER. After reporting field observations about the ways in which system errors affect user experience, it is proposed to introduce opinion scoring into AER evaluation methodology. Then, the link between standard AER performance metrics and subjective user experience metrics is being explored, and attention is being drawn to the fact that F-score metrics actually mash up the objective evaluation of acoustic discrimination with the subjective choice of an application-dependent operation point. Solutions to the separation of discrimination and calibration in system evaluation are introduced, thus allowing the more explicit separation of acoustic modeling optimization from that of application-dependent user experience. Finally, the last section analyses the ethical and legal issues involved in deploying AER systems which are “listening” at all times into the users’ private space. A review of the key notions underpinning European data and privacy protection laws, questioning if and when these apply to audio data, suggests a set of guidelines which summarize into empowering users to consent by fully informing them about the use of their data, as well as taking reasonable information security measures to protect users’ personal data.
---
paper_title: Authorship Attribution for Social Media Forensics
paper_content:
The veil of anonymity provided by smartphones with pre-paid SIM cards, public Wi-Fi hotspots, and distributed networks like Tor has drastically complicated the task of identifying users of social media during forensic investigations. In some cases, the text of a single posted message will be the only clue to an author’s identity. How can we accurately predict who that author might be when the message may never exceed 140 characters on a service like Twitter? For the past 50 years, linguists, computer scientists, and scholars of the humanities have been jointly developing automated methods to identify authors based on the style of their writing. All authors possess peculiarities of habit that influence the form and content of their written works. These characteristics can often be quantified and measured using machine learning algorithms. In this paper, we provide a comprehensive review of the methods of authorship attribution that can be applied to the problem of social media forensics. Furthermore, we examine emerging supervised learning-based methods that are effective for small sample sizes, and provide step-by-step explanations for several scalable approaches as instructional case studies for newcomers to the field. We argue that there is a significant need in forensics for new authorship attribution algorithms that can exploit context, can process multi-modal data, and are tolerant to incomplete knowledge of the space of all possible authors at training time.
---
paper_title: An Empirical Study and Analysis of Generalized Zero-Shot Learning for Object Recognition in the Wild
paper_content:
We investigate the problem of generalized zero-shot learning (GZSL). GZSL relaxes the unrealistic assumption in conventional zero-shot learning (ZSL) that test data belong only to unseen novel classes. In GZSL, test data might also come from seen classes and the labeling space is the union of both types of classes. We show empirically that a straightforward application of classifiers provided by existing ZSL approaches does not perform well in the setting of GZSL. Motivated by this, we propose a surprisingly simple but effective method to adapt ZSL approaches for GZSL. The main idea is to introduce a calibration factor to calibrate the classifiers for both seen and unseen classes so as to balance two conflicting forces: recognizing data from seen classes and those from unseen ones. We develop a new performance metric called the Area Under Seen-Unseen accuracy Curve to characterize this trade-off. We demonstrate the utility of this metric by analyzing existing ZSL approaches applied to the generalized setting. Extensive empirical studies reveal strengths and weaknesses of those approaches on three well-studied benchmark datasets, including the large-scale ImageNet with more than 20,000 unseen categories. We complement our comparative studies in learning methods by further establishing an upper bound on the performance limit of GZSL. In particular, our idea is to use class-representative visual features as the idealized semantic embeddings. We show that there is a large gap between the performance of existing approaches and the performance limit, suggesting that improving the quality of class semantic embeddings is vital to improving ZSL.
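The calibration idea described above can be sketched as subtracting a constant from seen-class scores before taking the argmax, so that unseen classes are not systematically crowded out; sweeping the constant traces the seen/unseen accuracy trade-off summarized by the proposed area-under-curve metric. The compatibility scores below are illustrative placeholders, not a trained zero-shot model.

```python
import numpy as np

# Hedged sketch of the calibration factor for generalized zero-shot learning:
# subtract gamma from seen-class scores before argmax, trading seen-class
# accuracy against unseen-class accuracy. Scores below are placeholders.

def gzsl_predict(scores, seen_mask, gamma):
    scores = np.asarray(scores, dtype=float).copy()
    scores[np.asarray(seen_mask)] -= gamma      # penalize seen classes
    return int(np.argmax(scores))

scores = [2.1, 1.9, 1.7]          # classes 0, 1 seen; class 2 unseen
seen_mask = [True, True, False]
print(gzsl_predict(scores, seen_mask, gamma=0.0))   # biased toward seen: 0
print(gzsl_predict(scores, seen_mask, gamma=0.5))   # calibrated: unseen class 2
```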
---
paper_title: Open Set Domain Adaptation
paper_content:
When the training and the test data belong to different domains, the accuracy of an object classifier is significantly reduced. Therefore, several algorithms have been proposed in the last years to diminish the so called domain shift between datasets. However, all available evaluation protocols for domain adaptation describe a closed set recognition task, where both domains, namely source and target, contain exactly the same object classes. In this work, we also explore the field of domain adaptation in open sets, which is a more realistic scenario where only a few categories of interest are shared between source and target data. Therefore, we propose a method that fits in both closed and open set scenarios. The approach learns a mapping from the source to the target domain by jointly solving an assignment problem that labels those target instances that potentially belong to the categories of interest present in the source dataset. A thorough evaluation shows that our approach outperforms the state-of-the-art.
---
paper_title: Zero-Shot Learning — The Good, the Bad and the Ugly
paper_content:
Due to the importance of zero-shot learning, the number of proposed approaches has increased steadily recently. We argue that it is time to take a step back and to analyze the status quo of the area. The purpose of this paper is three-fold. First, given the fact that there is no agreed upon zero-shot learning benchmark, we first define a new benchmark by unifying both the evaluation protocols and data splits. This is an important contribution as published results are often not comparable and sometimes even flawed due to, e.g. pre-training on zero-shot test classes. Second, we compare and analyze a significant number of the state-of-the-art methods in depth, both in the classic zero-shot setting but also in the more realistic generalized zero-shot setting. Finally, we discuss limitations of the current status of the area which can be taken as a basis for advancing it.
---
paper_title: Overcoming the challenge for text classification in the open world
paper_content:
Classification is often referred to as the task of discriminating one class from others in a given set of classes. Traditionally, classifiers work well under the assumption that a priori knowledge of all classes is given. Unfortunately, the presence of an unknown class during testing can lead to poor performance of even state-of-the-art classifiers, because its instances are incorrectly assigned to known classes. The recently proposed open world recognition framework provides a promising avenue for tackling this challenge. While the majority of work in this relatively new field is in computer vision, the rare work in Natural Language Processing has shown unstable performance and is not based on the open world recognition framework. To tackle this problem, we present our Nearest Centroid Class (NCC) model, which learns incrementally and is able to detect unknown classes during testing. Our model yields promising results on document classification in text classification domains when compared with current state-of-the-art models.
---
paper_title: Towards Open-Set Identity Preserving Face Synthesis
paper_content:
We propose a framework based on Generative Adversarial Networks to disentangle the identity and attributes of faces, such that we can conveniently recombine different identities and attributes for identity preserving face synthesis in open domains. Previous identity preserving face synthesis processes are largely confined to synthesizing faces with known identities that are already in the training dataset. To synthesize a face with identity outside the training dataset, our framework requires one input image of that subject to produce an identity vector, and any other input face image to extract an attribute vector capturing, e.g., pose, emotion, illumination, and even the background. We then recombine the identity vector and the attribute vector to synthesize a new face of the subject with the extracted attribute. Our proposed framework does not need to annotate the attributes of faces in any way. It is trained with an asymmetric loss function to better preserve the identity and stabilize the training process. It can also effectively leverage large amounts of unlabeled training face images to further improve the fidelity of the synthesized faces for subjects that are not presented in the labeled training face dataset. Our experiments demonstrate the efficacy of the proposed framework. We also present its usage in a much broader set of applications including face frontalization, face attribute morphing, and face adversarial example detection.
---
paper_title: Open Set Learning with Counterfactual Images
paper_content:
In open set recognition, a classifier must label instances of known classes while detecting instances of unknown classes not encountered during training. To detect unknown classes while still generalizing to new instances of existing classes, we introduce a dataset augmentation technique that we call counterfactual image generation. Our approach, based on generative adversarial networks, generates examples that are close to training set examples yet do not belong to any training category. By augmenting training with examples generated by this optimization, we can reformulate open set recognition as classification with one additional class, which includes the set of novel and unknown examples. Our approach outperforms existing open set recognition algorithms on a selection of image classification tasks.
---
paper_title: RO-SVM: Support Vector Machine with Reject Option for Image Categorization
paper_content:
When applying Multiple Instance Learning (MIL) for image categorization, an image is treated as a bag containing a number of instances, each representing a region inside the image. The categorization of this image is determined by the labels of these instances, which are not specified in the training data-set. Hence, these instance labels need to be estimated together with the classifier. To improve classification reliability, we propose in this paper a new Support Vector Machine approach incorporating a reject option, named RO-SVM, which determines the instance labels and the rejection region simultaneously during the training phase. Our approach can also be easily extended to solve multi-class classification problems. Experimental results demonstrate that higher categorization accuracy can be achieved with our RO-SVM method compared to approaches that do not exclude uninformative image patches. Our method is able to produce comparable results even with few training samples.
---
paper_title: Extreme Value Analysis for Mobile Active User Authentication
paper_content:
In this paper, we propose to improve the performance of mobile Active Authentication (AA) systems in the low false alarm region using the statistical Extreme Value Theory (EVT). The problem is studied under a Bayesian framework where extremal observations that contribute to mis-verification are given more prominence. We propose modeling the tail of the match distribution using a Generalized Pareto Distribution (GPD) in order to make better inferences about the extremal observations. A method based on the mean excess function is introduced for parameter estimation of the GPD. Effectiveness of the proposed framework is demonstrated using publicly available unconstrained mobile active authentication datasets. It is shown that the proposed EVT-based method can significantly enhance the performance of traditional AA systems in the low false alarm rate region.
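The tail modeling referenced here can be sketched with a generic peaks-over-threshold recipe using scipy: fit a Generalized Pareto Distribution to the exceedances of scores over a high threshold and use its survival function to estimate how extreme a new score is. This is a hedged, generic sketch, not the paper's Bayesian formulation or its mean-excess estimator; the synthetic scores are placeholders.

```python
import numpy as np
from scipy.stats import genpareto

# Hedged sketch of EVT tail modeling via peaks-over-threshold (POT): fit a
# Generalized Pareto Distribution (GPD) to exceedances over a high threshold
# and use its survival function to judge how extreme a new match score is.
# Generic POT recipe with synthetic placeholder scores.

rng = np.random.default_rng(0)
scores = rng.normal(loc=0.0, scale=1.0, size=5000)    # e.g. non-match scores

u = np.quantile(scores, 0.95)                         # tail threshold
exceedances = scores[scores > u] - u
shape, loc, scale = genpareto.fit(exceedances, floc=0)

def tail_prob(new_score):
    """Approximate P(score > new_score), using the GPD beyond the threshold."""
    if new_score <= u:
        return float(np.mean(scores > new_score))     # empirical below tail
    p_exceed_u = float(np.mean(scores > u))
    return p_exceed_u * float(genpareto.sf(new_score - u, shape, loc=0, scale=scale))

print(tail_prob(3.5))   # very small -> such a score is extreme
```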
---
paper_title: A Survey of Stealth Malware Attacks, Mitigation Measures, and Steps Toward Autonomous Open World Solutions
paper_content:
As our professional, social, and financial existences become increasingly digitized and as our government, healthcare, and military infrastructures rely more on computer technologies, they present larger and more lucrative targets for malware. Stealth malware in particular poses an increased threat because it is specifically designed to evade detection mechanisms, spreading dormant, in the wild for extended periods of time, gathering sensitive information or positioning itself for a high-impact zero-day attack. Policing the growing attack surface requires the development of efficient anti-malware solutions with improved generalization to detect novel types of malware and resolve these occurrences with as little burden on human experts as possible. In this paper, we survey malicious stealth technologies as well as existing solutions for detecting and categorizing these countermeasures autonomously. While machine learning offers promising potential for increasingly autonomous solutions with improved generalization to new malware types, both at the network level and at the host level, our findings suggest that several flawed assumptions inherent to most recognition algorithms prevent a direct mapping between the stealth malware recognition problem and a machine learning solution. The most notable of these flawed assumptions is the closed world assumption: that no sample belonging to a class outside of a static training set will appear at query time. We present a formalized adaptive open world framework for stealth malware recognition and relate it mathematically to research from other machine learning domains.
---
paper_title: Open set recognition for automatic target classification with rejection
paper_content:
Training sets for supervised classification tasks are usually limited in scope and only contain examples of a few classes. In practice, classes that were not seen in training are given labels that are always incorrect. Open set recognition (OSR) algorithms address this issue by providing classifiers with a rejection option for unknown samples. In this work, we introduce a new OSR algorithm and compare its performance to other current approaches for open set image classification.
---
paper_title: Open set source camera attribution and device linking
paper_content:
Camera attribution approaches in digital image forensics have most often been evaluated in a closed set context, whereby all devices are known during training and testing time. However, in a real investigation, we must assume that innocuous images from unknown devices will be recovered, which we would like to remove from the pool of evidence. In pattern recognition, this corresponds to what is known as the open set recognition problem. This article introduces new algorithms for open set modes of image source attribution (identifying whether or not an image was captured by a specific digital camera) and device linking (identifying whether or not a pair of images was acquired from the same digital camera without the need for physical access to the device). Both algorithms rely on a new multi-region feature generation strategy, which serves as a projection space for the class of interest and emphasizes its properties, and on decision boundary carving, a novel method that models the decision space of a trained SVM classifier by taking advantage of a few known cameras to adjust the decision boundaries to decrease false matches from unknown classes. Experiments including thousands of unconstrained images collected from the web show a significant advantage for our approaches over the most competitive prior work.
---
paper_title: Open set recognition of aircraft in aerial imagery using synthetic template models
paper_content:
Fast, accurate and robust automatic target recognition (ATR) in optical aerial imagery can provide game-changing advantages to military commanders and personnel. ATR algorithms must reject non-targets with a high degree of confidence in a world with an infinite number of possible input images. Furthermore, they must learn to recognize new targets without requiring massive data collections. Whereas most machine learning algorithms classify data in a closed set manner by mapping inputs to a fixed set of training classes, open set recognizers incorporate constraints that allow for inputs to be labelled as unknown. We have adapted two template-based open set recognizers to use computer generated synthetic images of military aircraft as training data, to provide a baseline for military-grade ATR: (1) a frequentist approach based on probabilistic fusion of extracted image features, and (2) an open set extension to the one-class support vector machine (SVM). These algorithms both use histograms of oriented gradients (HOG) as features as well as artificial augmentation of both real and synthetic image chips to take advantage of minimal training data. Our results show that open set recognizers trained with synthetic data and tested with real data can successfully discriminate real target inputs from non-targets. However, there is still a requirement for some knowledge of the real target in order to calibrate the relationship between synthetic template and target score distributions. We conclude by proposing algorithm modifications that may improve the ability of synthetic data to represent real data.
---
paper_title: Analyzing the Roles of Descriptions and Actions in Open Systems
paper_content:
This paper analyzes relationships between the roles of descriptions and actions in large scale, open ended, geographically distributed, concurrent systems. Rather than attempt to deal with the complexities and ambiguities of currently implemented descriptive languages, we concentrate our analysis on what can be expressed in the underlying frameworks such as the lambda calculus and first order logic. By this means we conclude that descriptions and actions complement one another; neither being sufficient unto itself. This paper provides a basis to begin the analysis of the very subtle relationships that hold between descriptions and actions in Open Systems.
---
paper_title: Metric learning for large scale image classification: generalizing to new classes at near-zero cost
paper_content:
We are interested in large-scale image classification and especially in the setting where images corresponding to new or existing classes are continuously added to the training set. Our goal is to devise classifiers which can incorporate such images and classes on-the-fly at (near) zero cost. We cast this problem into one of learning a metric which is shared across all classes and explore k-nearest neighbor (k-NN) and nearest class mean (NCM) classifiers. We learn metrics on the ImageNet 2010 challenge data set, which contains more than 1.2M training images of 1K classes. Surprisingly, the NCM classifier compares favorably to the more flexible k-NN classifier, and has comparable performance to linear SVMs. We also study the generalization performance, among others by using the learned metric on the ImageNet-10K dataset, and we obtain competitive performance. Finally, we explore zero-shot classification, and show how the zero-shot model can be combined very effectively with small training datasets.
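A minimal numpy sketch of nearest class mean (NCM) classification under a learned linear projection, as described above; the metric-learning step that produces W is omitted, and the class and variable names are illustrative.

```python
import numpy as np

class NCMClassifier:
    """Nearest class mean classifier in a projected space: d(x, mu) = ||W x - W mu||."""

    def __init__(self, W):
        self.W = W            # (d', d) learned projection
        self.means_ = {}      # class label -> projected class mean

    def fit(self, X, y):
        for c in np.unique(y):
            self.means_[c] = self.W @ X[y == c].mean(axis=0)
        return self

    def add_class(self, X_new, label):
        """New classes are added at near-zero cost: just store their projected mean."""
        self.means_[label] = self.W @ X_new.mean(axis=0)

    def predict(self, X):
        Z = X @ self.W.T                                     # project queries
        labels = list(self.means_.keys())
        M = np.stack([self.means_[c] for c in labels])       # (C, d')
        d2 = ((Z[:, None, :] - M[None, :, :]) ** 2).sum(-1)  # squared distances
        return np.asarray(labels)[d2.argmin(axis=1)]

# Usage with random stand-in data: 64-d features projected to 16-d (hypothetical)
rng = np.random.default_rng(0)
W = rng.normal(size=(16, 64))
X, y = rng.normal(size=(200, 64)), rng.integers(0, 5, size=200)
clf = NCMClassifier(W).fit(X, y)
print(clf.predict(X[:3]))
```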
---
paper_title: Domain Adaptation for Visual Recognition
paper_content:
Domain adaptation is an active, emerging research area that attempts to address the changes in data distribution across training and testing datasets. With the availability of a multitude of image acquisition sensors, variations due to illumination, and viewpoint among others, computer vision applications present a very natural test bed for evaluating domain adaptation methods. In this monograph, we provide a comprehensive overview of domain adaptation solutions for visual recognition problems. By starting with the problem description and illustrations, we discuss three adaptation scenarios namely, (i) unsupervised adaptation where the "source domain" training data is partially labeled and the "target domain" test data is unlabeled, (ii) semi-supervised adaptation where the target domain also has partial labels, and (iii) multi-domain heterogeneous adaptation which studies the previous two settings with the source and/or target having more than one domain, and accounts for cases where the features used to represent the data in each domain are different. For all these topics we discuss existing adaptation techniques in the literature, which are motivated by the principles of max-margin discriminative learning, manifold learning, sparse coding, as well as low-rank representations. These techniques have shown improved performance on a variety of applications such as object recognition, face recognition, activity analysis, concept classification, and person detection. We then conclude by analyzing the challenges posed by the realm of "big visual data", in terms of the generalization ability of adaptation algorithms to unconstrained data acquisition as well as issues related to their computational tractability, and draw parallels with the efforts from the vision community on image transformation models, and invariant descriptors so as to facilitate improved understanding of vision problems under uncertainty.
---
paper_title: Support Vector Machines with Embedded Reject Option
paper_content:
In this paper, the problem of implementing the reject option in support vector machines (SVMs) is addressed. We started by observing that methods proposed so far simply apply a reject threshold to the outputs of a trained SVM. We then showed that, under the framework of the structural risk minimisation principle, the rejection region must be determined during the training phase of a classifier. By applying this concept, and by following Vapnik's approach, we developed a maximum margin classifier with reject option. This led us to a SVM whose rejection region is determined during the training phase, that is, a SVM with embedded reject option. To implement such a SVM, we devised a novel formulation of the SVM training problem and developed a specific algorithm to solve it. Preliminary results on a character recognition problem show the advantages of the proposed SVM in terms of the achievable error-reject trade-off.
---
paper_title: Steps Toward Robust Artificial Intelligence
paper_content:
Recent advances in artificial intelligence are encouraging governments and corporations to deploy AI in high-stakes settings including driving cars autonomously, managing the power grid, trading on stock exchanges, and controlling autonomous weapons systems. Such applications require AI methods to be robust to both the known unknowns (those uncertain aspects of the world about which the computer can reason explicitly) and the unknown unknowns (those aspects of the world that are not captured by the system’s models). This article discusses recent progress in AI and then describes eight ideas related to robustness that are being pursued within the AI research community. While these ideas are a start, we need to devote more attention to the challenges of dealing with the known and unknown unknowns. These issues are fascinating, because they touch on the fundamental question of how finite systems can survive and thrive in a complex and dangerous world
---
paper_title: Attribute-Based Classification for Zero-Shot Visual Object Categorization
paper_content:
We study the problem of object recognition for categories for which we have no training examples, a task also called zero--data or zero-shot learning. This situation has hardly been studied in computer vision research, even though it occurs frequently; the world contains tens of thousands of different object classes, and image collections have been formed and suitably annotated for only a few of them. To tackle the problem, we introduce attribute-based classification: Objects are identified based on a high-level description that is phrased in terms of semantic attributes, such as the object's color or shape. Because the identification of each such property transcends the specific learning task at hand, the attribute classifiers can be prelearned independently, for example, from existing image data sets unrelated to the current task. Afterward, new classes can be detected based on their attribute representation, without the need for a new training phase. In this paper, we also introduce a new data set, Animals with Attributes, of over 30,000 images of 50 animal classes, annotated with 85 semantic attributes. Extensive experiments on this and two more data sets show that attribute-based classification indeed is able to categorize images without access to any training images of the target classes.
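A small sketch of attribute-based (DAP-style) zero-shot scoring as described above: per-attribute probabilities from pre-learned attribute classifiers are combined with a binary attribute signature per unseen class. The independence-based log-score and all variable names are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def zero_shot_scores(attr_probs, class_attr_matrix, eps=1e-6):
    """Score unseen classes from predicted attribute probabilities.

    attr_probs:        (N, A) p(attribute m present | image) from pre-trained classifiers
    class_attr_matrix: (C, A) binary attribute signature of each unseen class
    Returns log-scores of shape (N, C); higher is better.
    """
    p = np.clip(attr_probs, eps, 1.0 - eps)
    log_p1, log_p0 = np.log(p), np.log(1.0 - p)
    # Naive independence assumption over attributes
    return log_p1 @ class_attr_matrix.T + log_p0 @ (1.0 - class_attr_matrix).T

# Usage with hypothetical numbers: 2 images, 4 attributes, 3 unseen classes
probs = np.array([[0.9, 0.1, 0.8, 0.2],
                  [0.2, 0.7, 0.3, 0.9]])
signatures = np.array([[1, 0, 1, 0],
                       [0, 1, 0, 1],
                       [1, 1, 1, 1]])
print(zero_shot_scores(probs, signatures).argmax(axis=1))  # predicted unseen class per image
```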
---
paper_title: Lifelong Machine Learning
paper_content:
Lifelong Machine Learning (or Lifelong Learning) is an advanced machine learning paradigm that le...
---
paper_title: Unbounded cache model for online language modeling with open vocabulary
paper_content:
Recently, continuous cache models were proposed as extensions to recurrent neural network language models, to adapt their predictions to local changes in the data distribution. These models only capture the local context, of up to a few thousand tokens. In this paper, we propose an extension of continuous cache models, which can scale to larger contexts. In particular, we use a large scale non-parametric memory component that stores all the hidden activations seen in the past. We leverage recent advances in approximate nearest neighbor search and quantization algorithms to store millions of representations while searching them efficiently. We conduct extensive experiments showing that our approach significantly improves the perplexity of pre-trained language models on new distributions, and can scale efficiently to much larger contexts than previously proposed local cache models.
---
paper_title: Multi-class Fukunaga Koontz discriminant analysis for enhanced face recognition
paper_content:
Linear subspace learning methods such as Fisher's Linear Discriminant Analysis (LDA), Unsupervised Discriminant Projection (UDP), and Locality Preserving Projections (LPP) have been widely used in face recognition applications as a tool to capture low dimensional discriminant information. However, when these methods are applied in the context of face recognition, they often encounter the small-sample-size problem. In order to overcome this problem, a separate Principal Component Analysis (PCA) step is usually adopted to reduce the dimensionality of the data. However, such a step may discard dimensions that contain important discriminative information that can aid classification performance. In this work, we propose a new idea which we named Multi-class Fukunaga Koontz Discriminant Analysis (FKDA) by incorporating the Fukunaga Koontz Transform within the optimization for maximizing class separation criteria in LDA, UDP, and LPP. In contrast to traditional LDA, UDP, and LPP, our approach can work with very high dimensional data as input, without requiring a separate dimensionality reduction step to make the scatter matrices full rank. In addition, the FKDA formulation seeks optimal projection direction vectors that are orthogonal which the existing methods cannot guarantee, and it has the capability of finding the exact solutions to the "trace ratio" objective in discriminant analysis problems while traditional methods can only deal with a relaxed and inexact "ratio trace" objective. We have shown using six face databases, in the context of large scale unconstrained face recognition, face recognition with occlusions, and illumination invariant face recognition, under "closed set", "semi-open set", and "open set" recognition scenarios, that our proposed FKDA significantly outperforms traditional linear discriminant subspace learning methods as well as five other competing algorithms. Highlights: Solve small-sample-size problem in LDA, UDP, LPP using FKT formulation. Can work with high dimensional data without inverting any scatter matrices. Finds optimal projection direction vectors that are orthogonal. Finds exact solutions to the objective in the form of trace ratio. Improvement in unconstrained face recognition scenarios.
---
paper_title: The open-set problem in acoustic scene classification
paper_content:
Acoustic scene classification (ASC) has attracted growing research interest in recent years. Whereas the previous work has investigated closed-set classification scenarios, the predominant ASC application is open-set in nature. The contributions of the paper are (i) the first investigation of ASC in an open-set scenario, (ii) the formulation of open-set ASC as a detection problem, (iii) a classifier tailored to the open-set scenario and (iv) a new assessment protocol and metric. Experiments show that, despite the challenge of open-set ASC, reliable performance is achieved with the support vector data description classifier for varying levels of openness.
---
paper_title: Towards Open World Recognition
paper_content:
With the advent of rich classification models and high computational power, visual recognition systems have found many operational applications. Recognition in the real world poses multiple challenges that are not apparent in controlled lab environments. The datasets are dynamic and novel categories must be continuously detected and then added. At prediction time, a trained system has to deal with myriad unseen categories. Operational systems require minimal downtime, even to learn. To handle these operational issues, we present the problem of Open World Recognition and formally define it. We prove that thresholding sums of monotonically decreasing functions of distances in linearly transformed feature space can balance “open space risk” and empirical risk. Our theory extends existing algorithms for open world recognition. We present a protocol for evaluation of open world recognition systems. We present the Nearest Non-Outlier (NNO) algorithm that evolves the model efficiently, adding object categories incrementally while detecting outliers and managing open space risk. We perform experiments on the ImageNet dataset with 1.2M+ images to validate the effectiveness of our method on large scale visual recognition tasks. NNO consistently yields superior results on open world recognition.
---
paper_title: The Extreme Value Machine
paper_content:
It is often desirable to be able to recognize when inputs to a recognition function learned in a supervised manner correspond to classes unseen at training time. With this ability, new class labels could be assigned to these inputs by a human operator, allowing them to be incorporated into the recognition function—ideally under an efficient incremental update mechanism. While good algorithms that assume inputs from a fixed set of classes exist, e.g. , artificial neural networks and kernel machines, it is not immediately obvious how to extend them to perform incremental learning in the presence of unknown query classes. Existing algorithms take little to no distributional information into account when learning recognition functions and lack a strong theoretical foundation. We address this gap by formulating a novel, theoretically sound classifier—the Extreme Value Machine (EVM). The EVM has a well-grounded interpretation derived from statistical Extreme Value Theory (EVT), and is the first classifier to be able to perform nonlinear kernel-free variable bandwidth incremental learning. Compared to other classifiers in the same deep network derived feature space, the EVM is accurate and efficient on an established benchmark partition of the ImageNet dataset.
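A rough sketch of the EVM's per-point inclusion function, assuming Euclidean distances and scipy's weibull_min in place of the libMR fitting used by the authors; margin scaling details, model reduction via set cover, and incremental updates are omitted, so this is only an approximation of the published method.

```python
import numpy as np
from scipy.stats import weibull_min

def fit_psi(x_i, negatives, tail_size=20):
    """Fit a Weibull to the smallest half-distances from x_i to other-class points."""
    d = 0.5 * np.sort(np.linalg.norm(negatives - x_i, axis=1))[:tail_size]
    shape, loc, scale = weibull_min.fit(d, floc=0.0)
    return shape, scale

def psi(x_i, query, shape, scale):
    """Probability of sample inclusion: exp(-(||query - x_i|| / scale) ** shape)."""
    d = np.linalg.norm(query - x_i)
    return float(np.exp(-(d / scale) ** shape))

# Usage with two toy classes (hypothetical data): classify, or reject as unknown
rng = np.random.default_rng(4)
class_a, class_b = rng.normal(0, 1, (200, 5)), rng.normal(6, 1, (200, 5))
models = {"a": (class_a[0], *fit_psi(class_a[0], class_b)),
          "b": (class_b[0], *fit_psi(class_b[0], class_a))}
query = rng.normal(20, 1, size=5)                        # far from both classes
scores = {c: psi(x, query, k, lam) for c, (x, k, lam) in models.items()}
label = max(scores, key=scores.get) if max(scores.values()) > 0.01 else "unknown"
print(scores, label)
```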
---
paper_title: Specialized Support Vector Machines for Open-set Recognition
paper_content:
Often, when dealing with real-world recognition problems, we do not need, and often cannot have, knowledge of the entire set of possible classes that might appear during operational testing. In such cases, we need to think of robust classification methods able to deal with the "unknown" and properly reject samples belonging to classes never seen during training. Notwithstanding, almost all existing classifiers to date were mostly developed for the closed-set scenario, i.e., the classification setup in which it is assumed that all test samples belong to one of the classes with which the classifier was trained. In the open-set scenario, however, a test sample can belong to none of the known classes and the classifier must properly reject it by classifying it as unknown. In this work, we extend upon the well-known Support Vector Machines (SVM) classifier and introduce the Specialized Support Vector Machines (SSVM), which is suitable for recognition in open-set setups. SSVM balances the empirical risk and the risk of the unknown and ensures that the region of the feature space in which a test sample would be classified as known (one of the known classes) is always bounded, ensuring a finite risk of the unknown. In this work, we also highlight the properties of the SVM classifier related to the open-set scenario, and provide necessary and sufficient conditions for an RBF SVM to have bounded open-space risk.
---
paper_title: Incremental and Distributed Learning with Support Vector Machines
paper_content:
Due to the increase in the amount of data gathered every day in real-world problems (e.g., bioinformatics), there is a need for inductive learning algorithms that can incrementally process large amounts of data that is being accumulated over time in physically distributed, autonomous data repositories. In the incremental setting, the learner gradually refines a hypothesis (or a set of hypotheses) as new data become available. Because of the large volume of data involved, it may not be practical to store and access the entire dataset during learning. Thus, the learner does not have access to data that has been encountered at a previous time. Learning in the distributed setting can be defined in a similar fashion. An incremental or distributed learning algorithm is said to be exact if it gives the same results as those obtained by batch learning (i.e., when the entire dataset is accessible to the learning algorithm during learning). We explore exact distributed and incremental learning algorithms that are variants and extensions of the support vector machine (SVM) family of learning algorithms. For the sake of simplicity, suppose that we have two data sets, D1 and D2, and we want to learn from them in an incremental setting using SVM. A naive approach (Syed, Liu & Sung, 1999) works as follows: 1. Apply the SVM algorithm to D1 and generate a set of support vectors
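A runnable sketch of the naive incremental scheme outlined above (carry only the previous batch's support vectors forward and retrain with each new batch); sklearn's SVC stands in for a generic SVM trainer, and, as the entry notes, this naive variant is generally not exact.

```python
import numpy as np
from sklearn.svm import SVC

def incremental_svm(batches, **svc_kwargs):
    """Naive incremental SVM: keep support vectors as the summary of all data seen so far."""
    X_keep = np.empty((0, batches[0][0].shape[1]))
    y_keep = np.empty((0,))
    model = None
    for X_new, y_new in batches:
        X = np.vstack([X_keep, X_new])
        y = np.concatenate([y_keep, y_new])
        model = SVC(**svc_kwargs).fit(X, y)
        X_keep, y_keep = X[model.support_], y[model.support_]   # retain support vectors only
    return model

# Usage with two synthetic batches (hypothetical data)
rng = np.random.default_rng(1)
def make_batch(n):
    X = rng.normal(size=(n, 2))
    return X, (X[:, 0] + X[:, 1] > 0).astype(int)

model = incremental_svm([make_batch(300), make_batch(300)], kernel="rbf", C=1.0)
print(model.n_support_)
```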
---
paper_title: On optimum recognition error and reject tradeoff
paper_content:
The performance of a pattern recognition system is characterized by its error and reject tradeoff. This paper describes an optimum rejection rule and presents a general relation between the error and reject probabilities and some simple properties of the tradeoff in the optimum recognition system. The error rate can be directly evaluated from the reject function. Some practical implications of the results are discussed. Examples in normal distributions and uniform distributions are given.
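A small numpy illustration of the optimum reject rule described here (often called Chow's rule): reject when the maximum posterior falls below 1 - t, which directly trades error rate against reject rate. The data in the usage example are hypothetical.

```python
import numpy as np

def classify_with_reject(posteriors, reject_cost_t):
    """Chow-style decision: predict the argmax class, or reject (-1) if max posterior < 1 - t."""
    preds = posteriors.argmax(axis=1)
    preds[posteriors.max(axis=1) < 1.0 - reject_cost_t] = -1
    return preds

def error_reject_curve(posteriors, labels, thresholds):
    """Empirical error rate vs. reject rate as the rejection threshold varies."""
    curve = []
    for t in thresholds:
        preds = classify_with_reject(posteriors, t)
        accepted = preds != -1
        reject_rate = 1.0 - accepted.mean()
        error_rate = (preds[accepted] != labels[accepted]).mean() if accepted.any() else 0.0
        curve.append((reject_rate, error_rate))
    return curve

# Usage with hypothetical posteriors for 4 samples over 3 classes
P = np.array([[0.9, 0.05, 0.05], [0.4, 0.35, 0.25], [0.5, 0.3, 0.2], [0.34, 0.33, 0.33]])
y = np.array([0, 1, 0, 2])
print(error_reject_curve(P, y, thresholds=[0.1, 0.3, 0.5]))
```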
---
paper_title: Finding the Unknown: Novelty Detection with Extreme Value Signatures of Deep Neural Activations
paper_content:
Achieving or even surpassing human-level accuracy became recently possible in a variety of application scenarios due to the rise of convolutional neural networks (CNNs) trained from large datasets. However, solving supervised visual recognition tasks by discriminating among known categories is only one side of the coin. In contrast to this, novelty detection is still an unsolved task where instances of yet unknown categories need to be identified. Therefore, we propose to leverage the powerful discriminative nature of CNNs to novelty detection tasks by investigating class-specific activation patterns. More precisely, we assume that a semantic category can be described by its extreme value signature, that specifies which dimensions of deep neural activations have largest values. By following this intuition, we show that already a small number of high-valued dimensions allows to separate known from unknown categories. Our approach is simple, intuitive, and can be easily put on top of CNNs trained for vanilla classification tasks. We empirically validate the benefits of our approach in terms of accuracy and speed by comparing it against established methods in a variety of novelty detection tasks derived from ImageNet. Finally, we show that visualizing extreme value signatures allows to inspect class-specific patterns learned during training which may ultimately help to better understand CNN models.
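A simplified reading of the extreme value signature idea in code: describe each known class by the set of k activation dimensions with largest mean value, and flag a query as novel when its own top-k set overlaps poorly with every class signature. The overlap score and all names are illustrative, not the authors' exact formulation.

```python
import numpy as np

def class_signatures(activations_by_class, k=10):
    """Per known class: indices of the k activation dimensions with largest mean value."""
    return {c: set(np.argsort(A.mean(axis=0))[-k:].tolist())
            for c, A in activations_by_class.items()}

def novelty_score(activation, signatures, k=10):
    """1 minus the best top-k index overlap with any known class signature (1.0 = novel)."""
    top = set(np.argsort(activation)[-k:].tolist())
    best_overlap = max(len(top & sig) / k for sig in signatures.values())
    return 1.0 - best_overlap

# Usage with random stand-in activations (hypothetical): 3 known classes, 64-d activations
rng = np.random.default_rng(5)
acts = {c: rng.random((100, 64)) + 2.0 * (np.arange(64) % 3 == c) for c in range(3)}
sigs = class_signatures(acts, k=8)
print(novelty_score(rng.random(64), sigs, k=8))  # a large value suggests an unknown category
```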
---
paper_title: Growing a multi-class classifier with a reject option
paper_content:
In many classification problems objects should be rejected when the confidence in their classification is too low. An example is a face recognition problem where the faces of a selected group of people have to be classified, but where all other faces and non-faces should be rejected. These problems are typically solved by estimating the class densities and assigning an object to the class with the highest posterior probability. The total probability density is thresholded to detect the outliers. Unfortunately, this procedure does not easily allow for class-dependent thresholds, or for class models that are not based on probability densities but on distances. In this paper we propose a new heuristic to combine any type of one-class models for solving the multi-class classification problem with outlier rejection. It normalizes the average model output per class, instead of the more common non-linear transformation of the distances. It creates the possibility to adjust the rejection threshold per class, and also to combine class models that are not (all) based on probability densities and to add class models without affecting the boundaries of existing models. Experiments show that for several classification problems using class-specific models significantly improves the performance.
---
paper_title: Open set face recognition using transduction
paper_content:
This paper motivates and describes a novel realization of transductive inference that can address the open set face recognition task. Open set operates under the assumption that not all the test probes have mates in the gallery. It either detects the presence of some biometric signature within the gallery and finds its identity or rejects it, i.e., it provides for the "none of the above" answer. The main contribution of the paper is open set TCM-kNN (transduction confidence machine-k nearest neighbors), which is suitable for multiclass authentication operational scenarios that have to include a rejection option for classes never enrolled in the gallery. Open set TCM-kNN, driven by the relation between transduction and Kolmogorov complexity, provides a local estimation of the likelihood ratio needed for detection tasks. We provide extensive experimental data to show the feasibility, robustness, and comparative advantages of open set TCM-kNN on open set identification and watch list (surveillance) tasks using challenging FERET data. Last, we analyze the error structure driven by the fact that most of the errors in identification are due to a relatively small number of face patterns. Open set TCM-kNN is shown to be suitable for PSEI (pattern specific error inhomogeneities) error analysis in order to identify difficult to recognize faces. PSEI analysis improves biometric performance by removing a small number of those difficult to recognize faces responsible for much of the original error in performance and/or by using data fusion.
---
paper_title: Novelty detection and multi-class classification in power distribution voltage waveforms
paper_content:
Highlights: Accurate classification of events in waveforms from electrical distribution networks. Novelty detection: dynamic identification of new classes of events. SVDD using negative examples and maximal margin separation: better generalization. Experiments using real data: significant improvements in classification accuracy. Direct application as part of tools to assist mitigation processes in power utilities. The automatic analysis of electrical waveforms is a recurring subject in the power system sector worldwide. In this sense, the idea of this paper is to present an original approach for automatic classification of voltage waveforms in electrical distribution networks. It includes both the classification of the waveforms in multiple known classes, and the detection of new waveforms (novelties) that are not available during the training stage. The classification method, based on the Support Vector Data Description (SVDD), has a suitable formulation for this task, because it is capable of fitting a model on a relatively small set of examples, which may also include negative examples (patterns from other known classes or even novelties), with maximal margin separation. The results obtained on both simulated and real world data demonstrate the ability of the method to identify novelties and to classify known examples correctly. The method finds application in the mitigation process of emergencies normally performed by power utilities' maintenance and protection engineers, which requires fast and accurate event cause identification.
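A hedged sketch of the per-class one-class modelling step: sklearn's OneClassSVM with an RBF kernel (closely related to SVDD) is used as a stand-in, since the paper's SVDD-with-negative-examples formulation is not available in sklearn; class names and features are hypothetical.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import OneClassSVM

def train_class_models(features_by_class, nu=0.05, gamma="scale"):
    """One boundary per known event class."""
    return {label: make_pipeline(StandardScaler(), OneClassSVM(nu=nu, gamma=gamma)).fit(X)
            for label, X in features_by_class.items()}

def classify_or_novelty(models, X):
    """Assign the best-scoring known class, or 'novelty' if outside every class boundary."""
    labels = list(models.keys())
    scores = np.column_stack([models[c].decision_function(X) for c in labels])  # (N, C)
    preds = np.asarray(labels, dtype=object)[scores.argmax(axis=1)]
    preds[scores.max(axis=1) < 0] = "novelty"
    return preds

# Usage with synthetic 2-class waveform features (hypothetical)
rng = np.random.default_rng(2)
train = {"sag": rng.normal(0, 1, (200, 8)), "swell": rng.normal(5, 1, (200, 8))}
models = train_class_models(train)
print(classify_or_novelty(models, rng.normal(10, 1, (3, 8))))  # likely all 'novelty'
```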
---
paper_title: An optimum character recognition system using decision functions
paper_content:
The character recognition problem, usually resulting from characters being corrupted by printing deterioration and/or inherent noise of the devices, is considered from the viewpoint of statistical decision theory. The optimization consists of minimizing the expected risk for a weight function which is preassigned to measure the consequences of system decisions. As an alternative, minimization of the error rate for a given rejection rate is used as the criterion. The optimum recognition is thus obtained. The optimum system consists of a conditional-probability densities computer; character channels, one for each character; a rejection channel; and a comparison network. Its precise structure and ultimate performance depend essentially upon the signals and noise structure. Explicit examples for an additive Gaussian noise and a "cosine" noise are presented. Finally, an error-free recognition system and a possible criterion to measure the character style and deterioration are presented.
---
paper_title: Modular ensembles for one-class classification based on density analysis
paper_content:
One-Class Classification (OCC) is an important machine learning task. It studies a special classification problem that training samples from only one class, named target class, are available or reliable. Recently, various OCC algorithms have been proposed, however many of them do not adequately deal with multi-modality, multi-density, the noise and arbitrarily shaped distributions of the target class. In this paper, we propose a novel Density Based Modular Ensemble One-class Classifier (DBM-EOC) algorithm which is motivated by density analysis, divide-and-conquer method and ensemble learning. DBM-EOC first performs density analysis on training samples to obtain a minimal spanning tree using density characteristics of the target class. On this basis, DBM-EOC automatically identifies clusters, multi-density distributions and the noise in training samples using extreme value analysis. Then target samples are categorized into several groups called Local Dense Subset (LDS). Samples in each LDS are close to each other and their local densities are similar. A simple base OCC model e.g. the Gaussian estimator is built for each LDS afterwards. Finally all the base classifiers are modularly aggregated to construct the DBM-EOC model. We experimentally evaluate DBM-EOC with 6 state-of-art OCC algorithms on 5 synthetic datasets, 18 UCI benchmark datasets and the MNIST dataset. The results show that DBM-EOC outperforms other competitors in majority cases especially when the datasets are multi-modality, multi-density or noisy. We propose a modular ensemble OCC algorithm DBM-EOC based on density analysis. We analyze peculiarities of the target class which are crucial for OCC. DBM-EOC obtains a tree structure of the target class considering density. DBM-EOC can automatically detect clusters and remove noise samples. DBM-EOC solves OCC problems with the divide-and-conquer method.
---
paper_title: Local Novelty Detection in Multi-class Recognition Problems
paper_content:
In this paper, we propose using local learning for multiclass novelty detection, a framework that we call local novelty detection. Estimating the novelty of a new sample is an extremely challenging task due to the large variability of known object categories. The features used to judge on the novelty are often very specific for the object in the image and therefore we argue that individual novelty models for each test sample are important. Similar to human experts, it seems intuitive to first look for the most related images thus filtering out unrelated data. Afterwards, the system focuses on discovering similarities and differences to those images only. Therefore, we claim that it is beneficial to solely consider training images most similar to a test sample when deciding about its novelty. Following the principle of local learning, for each test sample a local novelty detection model is learned and evaluated. Our local novelty score turns out to be a valuable indicator for deciding whether the sample belongs to a known category from the training set or to a new, unseen one. With our local novelty detection approach, we achieve state-of-the-art performance in multi-class novelty detection on two popular visual object recognition datasets, Caltech-256 and ImageNet. We further show that our framework: (i) can be successfully applied to unknown face detection using the Labeled-Faces-in-the-Wild dataset and (ii) outperforms recent work on attribute-based unfamiliar class detection in fine-grained recognition of bird species on the challenging CUB-200-2011 dataset.
---
paper_title: Support vector machines with a reject option
paper_content:
This paper studies $\ell_1$ regularization with high-dimensional features for support vector machines with a built-in reject option (meaning that the decision of classifying an observation can be withheld at a cost lower than that of misclassification). The procedure can be conveniently implemented as a linear program and computed using standard software. We prove that the minimizer of the penalized population risk favors sparse solutions and show that the behavior of the empirical risk minimizer mimics that of the population risk minimizer. We also introduce a notion of classification complexity and prove that our minimizers adapt to the unknown complexity. Using a novel oracle inequality for the excess risk, we identify situations where fast rates of convergence occur.
---
paper_title: RO-SVM: Support Vector Machine with Reject Option for Image Categorization
paper_content:
When applying Multiple Instance Learning (MIL) to image categorization, an image is treated as a bag containing a number of instances, each representing a region inside the image. The categorization of this image is determined by the labels of these instances, which are not specified in the training data set. Hence, these instance labels need to be estimated together with the classifier. To improve classification reliability, we propose in this paper a new Support Vector Machine approach incorporating a reject option, named RO-SVM, which determines the instance labels and the rejection region simultaneously during the training phase. Our approach can also be easily extended to solve multi-class classification problems. Experimental results demonstrate that higher categorization accuracy can be achieved with our RO-SVM method compared to approaches that do not exclude uninformative image patches. Our method produces comparable results even with few training samples.
---
paper_title: Support Vector Machines with Embedded Reject Option
paper_content:
In this paper, the problem of implementing the reject option in support vector machines (SVMs) is addressed. We started by observing that methods proposed so far simply apply a reject threshold to the outputs of a trained SVM. We then showed that, under the framework of the structural risk minimisation principle, the rejection region must be determined during the training phase of a classifier. By applying this concept, and by following Vapnik's approach, we developed a maximum margin classifier with reject option. This led us to a SVM whose rejection region is determined during the training phase, that is, a SVM with embedded reject option. To implement such a SVM, we devised a novel formulation of the SVM training problem and developed a specific algorithm to solve it. Preliminary results on a character recognition problem show the advantages of the proposed SVM in terms of the achievable error-reject trade-off.
---
paper_title: Assumed density filtering methods for learning Bayesian neural networks
paper_content:
Buoyed by the success of deep multilayer neural networks, there is renewed interest in scalable learning of Bayesian neural networks. Here, we study algorithms that utilize recent advances in Bayesian inference to efficiently learn distributions over network weights. In particular, we focus on recently proposed assumed density filtering based methods for learning Bayesian neural networks – Expectation and Probabilistic backpropagation. Apart from scaling to large datasets, these techniques seamlessly deal with non-differentiable activation functions and provide parameter (learning rate, momentum) free learning. In this paper, we first rigorously compare the two algorithms and in the process develop several extensions, including a version of EBP for continuous regression problems and a PBP variant for binary classification. Next, we extend both algorithms to deal with multiclass classification and count regression problems. On a variety of diverse real world benchmarks, we find our extensions to be effective, achieving results competitive with the state-of-the-art.
---
paper_title: Reducing Network Agnostophobia
paper_content:
Agnostophobia, the fear of the unknown, can be experienced by deep learning engineers while applying their networks to real-world applications. Unfortunately, network behavior is not well defined for inputs far from a networks training set. In an uncontrolled environment, networks face many instances that are not of interest to them and have to be rejected in order to avoid a false positive. This problem has previously been tackled by researchers by either a) thresholding softmax, which by construction cannot return "none of the known classes", or b) using an additional background or garbage class. In this paper, we show that both of these approaches help, but are generally insufficient when previously unseen classes are encountered. We also introduce a new evaluation metric that focuses on comparing the performance of multiple approaches in scenarios where such unseen classes or unknowns are encountered. Our major contributions are simple yet effective Entropic Open-Set and Objectosphere losses that train networks using negative samples from some classes. These novel losses are designed to maximize entropy for unknown inputs while increasing separation in deep feature space by modifying magnitudes of known and unknown samples. Experiments on networks trained to classify classes from MNIST and CIFAR-10 show that our novel loss functions are significantly better at dealing with unknown inputs from datasets such as Devanagari, NotMNIST, CIFAR-100, and SVHN.
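A hedged PyTorch sketch of the Entropic Open-Set idea described above: known samples get the usual cross-entropy, while negative/unknown training samples are pushed toward a uniform softmax (maximum entropy). The Objectosphere magnitude term is omitted, and the label convention (-1 marks unknowns) is an assumption of this sketch.

```python
import torch
import torch.nn.functional as F

def entropic_openset_loss(logits, targets, unknown_label=-1):
    """Cross-entropy for knowns; uniform-target (max-entropy) loss for unknowns.

    logits:  (B, C) network outputs over the C known classes
    targets: (B,) class indices, with `unknown_label` marking negative samples
    """
    log_probs = F.log_softmax(logits, dim=1)
    known = targets != unknown_label
    loss = torch.zeros((), device=logits.device)
    if known.any():
        loss = loss + F.nll_loss(log_probs[known], targets[known], reduction="sum")
    if (~known).any():
        # cross-entropy to the uniform distribution: -(1/C) * sum_c log p_c per sample
        loss = loss - log_probs[~known].mean(dim=1).sum()
    return loss / logits.shape[0]

# Usage with a toy batch (hypothetical): 3 known samples plus 1 unknown
logits = torch.randn(4, 10, requires_grad=True)
targets = torch.tensor([2, 7, 0, -1])
entropic_openset_loss(logits, targets).backward()
```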
---
paper_title: SSD: Single Shot MultiBox Detector
paper_content:
We present a method for detecting objects in images using a single deep neural network. Our approach, named SSD, discretizes the output space of bounding boxes into a set of default boxes over different aspect ratios and scales per feature map location. At prediction time, the network generates scores for the presence of each object category in each default box and produces adjustments to the box to better match the object shape. Additionally, the network combines predictions from multiple feature maps with different resolutions to naturally handle objects of various sizes. SSD is simple relative to methods that require object proposals because it completely eliminates proposal generation and subsequent pixel or feature resampling stages and encapsulates all computation in a single network. This makes SSD easy to train and straightforward to integrate into systems that require a detection component. Experimental results on the PASCAL VOC, COCO, and ILSVRC datasets confirm that SSD has competitive accuracy to methods that utilize an additional object proposal step and is much faster, while providing a unified framework for both training and inference. For \(300 \times 300\) input, SSD achieves 74.3 % mAP on VOC2007 test at 59 FPS on a Nvidia Titan X and for \(512 \times 512\) input, SSD achieves 76.9 % mAP, outperforming a comparable state of the art Faster R-CNN model. Compared to other single stage methods, SSD has much better accuracy even with a smaller input image size. Code is available at https://github.com/weiliu89/caffe/tree/ssd.
---
paper_title: The Extreme Value Machine
paper_content:
It is often desirable to be able to recognize when inputs to a recognition function learned in a supervised manner correspond to classes unseen at training time. With this ability, new class labels could be assigned to these inputs by a human operator, allowing them to be incorporated into the recognition function—ideally under an efficient incremental update mechanism. While good algorithms that assume inputs from a fixed set of classes exist, e.g. , artificial neural networks and kernel machines, it is not immediately obvious how to extend them to perform incremental learning in the presence of unknown query classes. Existing algorithms take little to no distributional information into account when learning recognition functions and lack a strong theoretical foundation. We address this gap by formulating a novel, theoretically sound classifier—the Extreme Value Machine (EVM). The EVM has a well-grounded interpretation derived from statistical Extreme Value Theory (EVT), and is the first classifier to be able to perform nonlinear kernel-free variable bandwidth incremental learning. Compared to other classifiers in the same deep network derived feature space, the EVM is accurate and efficient on an established benchmark partition of the ImageNet dataset.
---
paper_title: Adversarial Robustness: Softmax versus Openmax
paper_content:
Deep neural networks (DNNs) provide state-of-the-art results on various tasks and are widely used in real world applications. However, it was discovered that machine learning models, including the best performing DNNs, suffer from a fundamental problem: they can unexpectedly and confidently misclassify examples formed by slightly perturbing otherwise correctly recognized inputs. Various approaches have been developed for efficiently generating these so-called adversarial examples, but those mostly rely on ascending the gradient of loss. In this paper, we introduce the novel logits optimized targeting system (LOTS) to directly manipulate deep features captured at the penultimate layer. Using LOTS, we analyze and compare the adversarial robustness of DNNs using the traditional Softmax layer with Openmax, which was designed to provide open set recognition by defining classes derived from deep representations, and is claimed to be more robust to adversarial perturbations. We demonstrate that Openmax provides less vulnerable systems than Softmax to traditional attacks, however, we show that it can be equally susceptible to more sophisticated adversarial generation techniques that directly work on deep representations.
---
paper_title: Towards Reaching Human Performance in Pedestrian Detection
paper_content:
Encouraged by the recent progress in pedestrian detection, we investigate the gap between current state-of-the-art methods and the “perfect single frame detector”. We enable our analysis by creating a human baseline for pedestrian detection (over the Caltech pedestrian dataset). After manually clustering the frequent errors of a top detector, we characterise both localisation and background-versus-foreground errors. To address localisation errors we study the impact of training annotation noise on the detector performance, and show that we can improve results even with a small portion of sanitised training data. To address background/foreground discrimination, we study convnets for pedestrian detection, and discuss which factors affect their performance. Other than our in-depth analysis, we report top performance on the Caltech pedestrian dataset, and provide a new sanitised set of training and test annotations.
---
paper_title: Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks
paper_content:
State-of-the-art object detection networks depend on region proposal algorithms to hypothesize object locations. Advances like SPPnet [1] and Fast R-CNN [2] have reduced the running time of these detection networks, exposing region proposal computation as a bottleneck. In this work, we introduce a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals. An RPN is a fully convolutional network that simultaneously predicts object bounds and objectness scores at each position. The RPN is trained end-to-end to generate high-quality region proposals, which are used by Fast R-CNN for detection. We further merge RPN and Fast R-CNN into a single network by sharing their convolutional features—using the recently popular terminology of neural networks with ’attention’ mechanisms, the RPN component tells the unified network where to look. For the very deep VGG-16 model [3] , our detection system has a frame rate of 5 fps ( including all steps ) on a GPU, while achieving state-of-the-art object detection accuracy on PASCAL VOC 2007, 2012, and MS COCO datasets with only 300 proposals per image. In ILSVRC and COCO 2015 competitions, Faster R-CNN and RPN are the foundations of the 1st-place winning entries in several tracks. Code has been made publicly available.
---
paper_title: Figure of Merit Training for Detection and Spotting
paper_content:
Spotting tasks require detection of target patterns from a background of richly varied non-target inputs. The performance measure of interest for these tasks, called the figure of merit (FOM), is the detection rate for target patterns when the false alarm rate is in an acceptable range. A new approach to training spotters is presented which computes the FOM gradient for each input pattern and then directly maximizes the FOM using backpropagation. This eliminates the need for thresholds during training. It also uses network resources to model Bayesian a posteriori probability functions accurately only for patterns which have a significant effect on the detection accuracy over the false alarm rate of interest. FOM training increased detection accuracy by 5 percentage points for a hybrid radial basis function (RBF) - hidden Markov model (HMM) wordspotter on the credit-card speech corpus.
---
paper_title: Intriguing properties of neural networks
paper_content:
Deep neural networks are highly expressive models that have recently achieved state of the art performance on speech and visual recognition tasks. While their expressiveness is the reason they succeed, it also causes them to learn uninterpretable solutions that could have counter-intuitive properties. In this paper we report two such properties. First, we find that there is no distinction between individual high level units and random linear combinations of high level units, according to various methods of unit analysis. It suggests that it is the space, rather than the individual units, that contains the semantic information in the high layers of neural networks. Second, we find that deep neural networks learn input-output mappings that are fairly discontinuous to a significant extent. We can cause the network to misclassify an image by applying a certain imperceptible perturbation, which is found by maximizing the network's prediction error. In addition, the specific nature of these perturbations is not a random artifact of learning: the same perturbation can cause a different network, that was trained on a different subset of the dataset, to misclassify the same input.
---
paper_title: The Pascal Visual Object Classes (VOC) Challenge
paper_content:
The Pascal Visual Object Classes (VOC) challenge is a benchmark in visual object category recognition and detection, providing the vision and machine learning communities with a standard dataset of images and annotation, and standard evaluation procedures. Organised annually from 2005 to present, the challenge and its associated dataset has become accepted as the benchmark for object detection.This paper describes the dataset and evaluation procedure. We review the state-of-the-art in evaluated methods for both classification and detection, analyse whether the methods are statistically different, what they are learning from the images (e.g. the object or its context), and what the methods find easy or confuse. The paper concludes with lessons learnt in the three year history of the challenge, and proposes directions for future improvement and extension.
---
paper_title: Microsoft COCO: Common Objects in Context
paper_content:
We present a new dataset with the goal of advancing the state-of-the-art in object recognition by placing the question of object recognition in the context of the broader question of scene understanding. This is achieved by gathering images of complex everyday scenes containing common objects in their natural context. Objects are labeled using per-instance segmentations to aid in precise object localization. Our dataset contains photos of 91 object types that would be easily recognizable by a 4-year-old. With a total of 2.5 million labeled instances in 328k images, the creation of our dataset drew upon extensive crowd worker involvement via novel user interfaces for category detection, instance spotting and instance segmentation. We present a detailed statistical analysis of the dataset in comparison to PASCAL, ImageNet, and SUN. Finally, we provide baseline performance analysis for bounding box and segmentation detection results using a Deformable Parts Model.
---
paper_title: Deep neural networks are easily fooled: High confidence predictions for unrecognizable images
paper_content:
Deep neural networks (DNNs) have recently been achieving state-of-the-art performance on a variety of pattern-recognition tasks, most notably visual classification problems. Given that DNNs are now able to classify objects in images with near-human-level performance, questions naturally arise as to what differences remain between computer and human vision. A recent study [30] revealed that changing an image (e.g. of a lion) in a way imperceptible to humans can cause a DNN to label the image as something else entirely (e.g. mislabeling a lion a library). Here we show a related result: it is easy to produce images that are completely unrecognizable to humans, but that state-of-the-art DNNs believe to be recognizable objects with 99.99% confidence (e.g. labeling with certainty that white noise static is a lion). Specifically, we take convolutional neural networks trained to perform well on either the ImageNet or MNIST datasets and then find images with evolutionary algorithms or gradient ascent that DNNs label with high confidence as belonging to each dataset class. It is possible to produce images totally unrecognizable to human eyes that DNNs believe with near certainty are familiar objects, which we call “fooling images” (more generally, fooling examples). Our results shed light on interesting differences between human vision and current DNNs, and raise questions about the generality of DNN computer vision.
---
paper_title: Confidence Prediction for Lexicon-Free OCR
paper_content:
Having a reliable accuracy score is crucial for real world applications of OCR, since such systems are judged by the number of false readings. Lexicon-based OCR systems, which deal with what is essentially a multi-class classification problem, often employ methods explicitly taking into account the lexicon, in order to improve accuracy. However, in lexicon-free scenarios, filtering errors requires an explicit confidence calculation. In this work we show two explicit confidence measurement techniques, and show that they are able to achieve a significant reduction in misreads on both standard benchmarks and a proprietary dataset.
---
paper_title: Fast R-CNN
paper_content:
This paper proposes a Fast Region-based Convolutional Network method (Fast R-CNN) for object detection. Fast R-CNN builds on previous work to efficiently classify object proposals using deep convolutional networks. Compared to previous work, Fast R-CNN employs several innovations to improve training and testing speed while also increasing detection accuracy. Fast R-CNN trains the very deep VGG16 network 9x faster than R-CNN, is 213x faster at test-time, and achieves a higher mAP on PASCAL VOC 2012. Compared to SPPnet, Fast R-CNN trains VGG16 3x faster, tests 10x faster, and is more accurate. Fast R-CNN is implemented in Python and C++ (using Caffe) and is available under the open-source MIT License at https://github.com/rbgirshick/fast-rcnn.
---
paper_title: Simple and Scalable Predictive Uncertainty Estimation using Deep Ensembles
paper_content:
Deep neural networks (NNs) are powerful black box predictors that have recently achieved impressive performance on a wide spectrum of tasks. Quantifying predictive uncertainty in NNs is a challenging and yet unsolved problem. Bayesian NNs, which learn a distribution over weights, are currently the state-of-the-art for estimating predictive uncertainty; however these require significant modifications to the training procedure and are computationally expensive compared to standard (non-Bayesian) NNs. We propose an alternative to Bayesian NNs that is simple to implement, readily parallelizable, requires very little hyperparameter tuning, and yields high quality predictive uncertainty estimates. Through a series of experiments on classification and regression benchmarks, we demonstrate that our method produces well-calibrated uncertainty estimates which are as good or better than approximate Bayesian NNs. To assess robustness to dataset shift, we evaluate the predictive uncertainty on test examples from known and unknown distributions, and show that our method is able to express higher uncertainty on out-of-distribution examples. We demonstrate the scalability of our method by evaluating predictive uncertainty estimates on ImageNet.
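A short sketch of the prediction side of deep ensembles: average the softmax outputs of M independently trained networks and use predictive entropy as the uncertainty score. Training the ensemble members (with random initialization and data shuffling) is assumed to have happened elsewhere, and the toy models below are placeholders.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def ensemble_predict(models, x):
    """Average member softmax outputs; return mean probabilities and predictive entropy."""
    probs = torch.stack([F.softmax(m(x), dim=1) for m in models])  # (M, B, C)
    mean_probs = probs.mean(dim=0)                                 # (B, C)
    entropy = -(mean_probs * mean_probs.clamp_min(1e-12).log()).sum(dim=1)
    return mean_probs, entropy

# Usage with hypothetical members: three small MLPs assumed to be trained independently
models = [torch.nn.Sequential(torch.nn.Linear(32, 64), torch.nn.ReLU(),
                              torch.nn.Linear(64, 10)) for _ in range(3)]
x = torch.randn(5, 32)
mean_probs, unc = ensemble_predict(models, x)
print(mean_probs.argmax(dim=1), unc)   # high entropy suggests an out-of-distribution input
```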
---
paper_title: Towards Open Set Deep Networks
paper_content:
Deep networks have produced significant gains for various visual recognition problems, leading to high impact academic and commercial applications. Recent work in deep networks highlighted that it is easy to generate images that humans would never classify as a particular object class, yet networks classify such images with high confidence as that given class; deep networks are easily fooled with images humans do not consider meaningful. The closed set nature of deep networks forces them to choose from one of the known classes leading to such artifacts. Recognition in the real world is open set, i.e. the recognition system should reject unknown/unseen classes at test time. We present a methodology to adapt deep networks for open set recognition, by introducing a new model layer, OpenMax, which estimates the probability of an input being from an unknown class. A key element of estimating the unknown probability is adapting Meta-Recognition concepts to the activation patterns in the penultimate layer of the network. OpenMax allows rejection of "fooling" and unrelated open set images presented to the system; OpenMax greatly reduces the number of obvious errors made by a deep network. We prove that the OpenMax concept provides bounded open space risk, thereby formally providing an open set recognition solution. We evaluate the resulting open set deep networks using pre-trained networks from the Caffe Model-zoo on ImageNet 2012 validation data, and thousands of fooling and open set images. The proposed OpenMax model significantly outperforms open set recognition accuracy of basic deep networks as well as deep networks with thresholding of SoftMax probabilities.
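A strongly simplified sketch of OpenMax-style recalibration: fit a Weibull to the tail of distances between training activations and each class's mean activation vector (MAV), then shift a test sample's top activations toward an extra "unknown" class according to the Weibull CDF of its distance. scipy.stats.weibull_min replaces the libMR fitting used in the paper, several details (per-class correct-classification filtering, distance metric choices) are simplified, and all data below are hypothetical.

```python
import numpy as np
from scipy.stats import weibull_min

def fit_class_weibulls(activations_by_class, tail_size=20):
    """Per class: mean activation vector plus a Weibull fit on the largest distances to it."""
    models = {}
    for c, A in activations_by_class.items():            # A: (n_c, C) activation vectors
        mav = A.mean(axis=0)
        tail = np.sort(np.linalg.norm(A - mav, axis=1))[-tail_size:]
        shape, loc, scale = weibull_min.fit(tail, floc=0.0)
        models[c] = (mav, shape, scale)
    return models

def openmax_probs(logits, models, alpha=3):
    """Recalibrate one logit vector; the last entry of the result is the 'unknown' class.
    Assumes the logit ordering matches the sorted class labels."""
    classes = sorted(models.keys())
    w = np.ones(len(classes))
    ranked = np.argsort(logits)[::-1][:alpha]             # top-alpha classes revised most
    for rank, idx in enumerate(ranked):
        mav, shape, scale = models[classes[idx]]
        cdf = weibull_min.cdf(np.linalg.norm(logits - mav), shape, loc=0.0, scale=scale)
        w[idx] = 1.0 - cdf * (alpha - rank) / alpha
    scores = np.append(logits * w, np.sum(logits * (1.0 - w)))
    e = np.exp(scores - scores.max())
    return e / e.sum()                                    # softmax over K known + 1 unknown

# Usage with random stand-in activations (hypothetical, 3 classes)
rng = np.random.default_rng(3)
acts = {c: rng.normal(loc=3 * np.eye(3)[c], scale=0.5, size=(100, 3)) for c in range(3)}
models = fit_class_weibulls(acts, tail_size=15)
print(openmax_probs(np.array([0.2, 0.1, 0.3]), models))   # likely puts most mass on 'unknown'
```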
---
paper_title: A Discriminative Feature Learning Approach for Deep Face Recognition
paper_content:
Convolutional neural networks (CNNs) have been widely used in the computer vision community, significantly improving the state-of-the-art. In most of the available CNNs, the softmax loss function is used as the supervision signal to train the deep model. In order to enhance the discriminative power of the deeply learned features, this paper proposes a new supervision signal, called center loss, for the face recognition task. Specifically, the center loss simultaneously learns a center for deep features of each class and penalizes the distances between the deep features and their corresponding class centers. More importantly, we prove that the proposed center loss function is trainable and easy to optimize in the CNNs. With the joint supervision of softmax loss and center loss, we can train robust CNNs to obtain the deep features with the two key learning objectives, inter-class dispersion and intra-class compactness as much as possible, which are very essential to face recognition. It is encouraging to see that our CNNs (with such joint supervision) achieve the state-of-the-art accuracy on several important face recognition benchmarks, Labeled Faces in the Wild (LFW), YouTube Faces (YTF), and MegaFace Challenge. Especially, our new approach achieves the best results on MegaFace (the largest public domain face benchmark) under the protocol of small training set (contains under 500000 images and under 20000 persons), significantly improving the previous results and setting new state-of-the-art for both face recognition and face verification tasks.
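A compact PyTorch sketch of the center loss term: one learnable center per class, penalizing the squared distance between deep features and their class center; in practice it is added to the softmax loss with a small weight. Letting the optimizer update the centers directly is a simplification of the paper's scaled center-update rule.

```python
import torch
import torch.nn as nn

class CenterLoss(nn.Module):
    """L_center = 0.5 * mean_i ||f_i - c_{y_i}||^2, with one learnable center per class."""

    def __init__(self, num_classes, feat_dim):
        super().__init__()
        self.centers = nn.Parameter(torch.randn(num_classes, feat_dim))

    def forward(self, features, labels):
        diffs = features - self.centers[labels]   # (B, D)
        return 0.5 * (diffs ** 2).sum(dim=1).mean()

# Usage (hypothetical 10-class setup with 128-d deep features)
center_loss = CenterLoss(num_classes=10, feat_dim=128)
features = torch.randn(32, 128, requires_grad=True)
labels = torch.randint(0, 10, (32,))
loss = center_loss(features, labels)   # combine as: total = cross_entropy + 0.003 * loss
loss.backward()
```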
---
paper_title: Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning
paper_content:
Deep learning tools have gained tremendous attention in applied machine learning. However such tools for regression and classification do not capture model uncertainty. In comparison, Bayesian models offer a mathematically grounded framework to reason about model uncertainty, but usually come with a prohibitive computational cost. In this paper we develop a new theoretical framework casting dropout training in deep neural networks (NNs) as approximate Bayesian inference in deep Gaussian processes. A direct result of this theory gives us tools to model uncertainty with dropout NNs -- extracting information from existing models that has been thrown away so far. This mitigates the problem of representing uncertainty in deep learning without sacrificing either computational complexity or test accuracy. We perform an extensive study of the properties of dropout's uncertainty. Various network architectures and non-linearities are assessed on tasks of regression and classification, using MNIST as an example. We show a considerable improvement in predictive log-likelihood and RMSE compared to existing state-of-the-art methods, and finish by using dropout's uncertainty in deep reinforcement learning.
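A small self-contained sketch of the test-time procedure implied above: dropout is kept active at prediction time, T stochastic forward passes are collected, and their mean and spread serve as the prediction and uncertainty estimate. The tiny two-layer regression network with random, untrained weights is purely illustrative.

import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(20, 64)), np.zeros(64)
W2, b2 = rng.normal(size=(64, 1)), np.zeros(1)

def stochastic_forward(x, p_drop=0.5):
    h = np.maximum(x @ W1 + b1, 0.0)
    mask = rng.random(h.shape) > p_drop          # dropout stays ON at test time
    h = h * mask / (1.0 - p_drop)                # inverted-dropout scaling
    return h @ W2 + b2

def mc_dropout_predict(x, T=100):
    samples = np.stack([stochastic_forward(x) for _ in range(T)])
    return samples.mean(axis=0), samples.std(axis=0)   # predictive mean and spread

x = rng.normal(size=(1, 20))
mean, std = mc_dropout_predict(x)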
---
|
Title: Learning and the Unknown: Surveying Steps toward Open World Recognition
Section 1: Introduction
Description 1: Introduce the concept of open world recognition and its importance in current machine learning research.
Section 2: Formalizing Open Set Recognition
Description 2: Discuss the formalization of open set recognition, including the definition, mathematical models, and key properties.
Section 3: Approaches that sometimes solve OSR
Description 3: Analyze various approaches and discuss why some algorithms may partially address OSR while others fall short.
Section 4: Open Set Deep Networks
Description 4: Examine the challenges and current solutions related to open set recognition in deep learning frameworks.
Section 5: Conclusion
Description 5: Summarize the key points, the progress made, and emphasize the ongoing challenges and future directions in the field of open world recognition.
|
Performance evaluation of component-based software systems: A survey
| 5 |
---
paper_title: Performance comparison of middleware threading strategies
paper_content:
The spectacular growth of E-business applications on the Internet has boosted the development of middleware technology. Middleware is software that manages interactions between applications distributed across a heterogeneous computing environment. In the competitive E-business market the ability to deliver a high and predictable performance of E-business applications is crucial to avoid customer churn, and thus loss of revenue. This raises the need for service providers to be able to predict and control performance. The performance of middleware-based applications depends strongly on the choice of the so-called threading strategy, describing how the middleware layer handles competing method invocation requests. The goal of this paper is to provide an understanding of the impact of threading strategies on the performance of middlewarebased applications. To this end, we (1) develop new quantitative models for the performance of middleware under different threading strategies, (2) perform extensive test lab experiments to compare the performance under different threading strategies, and (3) explain the experimental results by relating them to the quantitative models. As such, this paper provides new and fundamental insight in the impact of threading strategies on the performance of E-business applications.
---
paper_title: The Future of Software Performance Engineering
paper_content:
Performance is a pervasive quality of software systems; everything affects it, from the software itself to all underlying layers, such as operating system, middleware, hardware, communication networks, etc. Software Performance Engineering encompasses efforts to describe and improve performance, with two distinct approaches: an early-cycle predictive model-based approach, and a late-cycle measurement-based approach. Current progress and future trends within these two approaches are described, with a tendency (and a need) for them to converge, in order to cover the entire development cycle.
---
paper_title: Making the Business Case for Software Performance Engineering
paper_content:
Shrinking budgets and increased fiscal accountability mean that management needs a sound financial justification before committing funds to software process improvements such as Software Performance Engineering (SPE). Preparing a business case for SPE can demonstrate that the commitment is financially worthwhile and win support for an SPE initiative. This paper presents an introduction to the use of business case analysis to justify investing in SPE to reduce costs due to performance failures. A case study illustrates how to perform a financial analysis and calculate a projected return on investment.
---
paper_title: Process algebra for performance evaluation
paper_content:
This paper surveys the theoretical developments in the field of stochastic process algebras, process algebras where action occurrences may be subject to a delay that is determined by a random variable. A huge class of resource-sharing systems - like large-scale computers, client-server architectures, networks - can accurately be described using such stochastic specification formalisms. The main emphasis of this paper is the treatment of operational semantics, notions of equivalence, and (sound and complete) axiomatisations of these equivalences for different types of Markovian process algebras, where delays are governed by exponential distributions. Starting from a simple actionless algebra for describing time-homogeneous continuous-time Markov chains, we consider the integration of actions and random delays both as a single entity (like in known Markovian process algebras like TIPP, PEPA and EMPA) and as separate entities (like in the timed process algebras timed CSP and TCCS). In total we consider four related calculi and investigate their relationship to existing Markovian process algebras. We also briefly indicate how one can profit from the separation of time and actions when incorporating more general, non-Markovian distributions.
---
paper_title: Model-based performance prediction in software development: a survey
paper_content:
Over the last decade, a lot of research has been directed toward integrating performance analysis into the software development process. Traditional software development methods focus on software correctness, introducing performance issues later in the development process. This approach does not take into account the fact that performance problems may require considerable changes in design, for example, at the software architecture level, or even worse at the requirement analysis level. Several approaches were proposed in order to address early software performance analysis. Although some of them have been successfully applied, we are still far from seeing performance analysis integrated into ordinary software development. In this paper, we present a comprehensive review of recent research in the field of model-based performance prediction at software development time in order to assess the maturity of the field and point out promising research directions.
---
paper_title: Component Software: Beyond Object-Oriented Programming
paper_content:
Component Software: Beyond Object-Oriented Programming explains the technical foundations of this evolving technology and its importance in the software market place. It provides in-depth discussion of both the technical and the business issues to be considered, then moves on to suggest approaches for implementing component-oriented software production and the organizational requirements for success. The author draws on his own experience to offer tried-and-tested solutions to common problems and novel approaches to potential pitfalls. Anyone responsible for developing software strategy, evaluating new technologies, buying or building software will find Clemens Szyperski's objective and market-aware perspective of this new area invaluable.
---
paper_title: SAAM: a method for analyzing the properties of software architectures
paper_content:
While software architecture has become an increasingly important research topic in recent years, insufficient attention has been paid to methods for evaluation of these architectures. Evaluating architectures is difficult for two main reasons. First, there is no common language used to describe different architectures. Second, there is no clear way of understanding an architecture with respect to an organization's life cycle concerns - software quality concerns such as maintainability, portability, modularity, reusability, and so forth. We address these shortcomings by describing three perspectives by which we can understand the description of a software architecture and then proposing a five-step method for analyzing software architectures called SAAM (Software Architecture Analysis Method). We illustrate the method by analyzing three separate user interface architectures with respect to the quality of modifiability.
---
paper_title: Stochastic Petri Nets An Introduction To The Theory
paper_content:
(No publisher abstract available. The book introduces Petri nets and their stochastic and generalized stochastic extensions, and shows how the resulting models are analysed via their underlying Markov chains for quantitative performance evaluation.)
---
paper_title: ATAM: Method for Architecture Evaluation
paper_content:
If a software architecture is a key business asset for an organization, then architectural analysis must also be a key practice for that organization. Why? Because architectures are complex and involve many design tradeoffs. Without undertaking a formal analysis process, the organization cannot ensure that the architectural decisions made - particularly those which affect the achievement of quality attributes such as performance, availability, security, and modifiability - are advisable ones that appropriately mitigate risks. In this report, some of the technical and organizational foundations for performing architectural analysis are discussed, and the Architecture Tradeoff Analysis Method (ATAM) is presented. The ATAM is a technique for analyzing software architectures that has been developed and refined in practice over the past three years.
---
paper_title: Performance Modeling and Evaluation of Distributed Component-Based Systems Using Queueing Petri Nets
paper_content:
Performance models are used increasingly throughout the phases of the software engineering lifecycle of distributed component-based systems. However, as systems grow in size and complexity, building models that accurately capture the different aspects of their behavior becomes a more and more challenging task. In this paper, we present a novel case study of a realistic distributed component-based system, showing how queueing Petri net models can be exploited as a powerful performance prediction tool in the software engineering process. A detailed system model is built in a step-by-step fashion, validated, and then used to evaluate the system performance and scalability. Along with the case study, a practical performance modeling methodology is presented which helps to construct models that accurately reflect the system performance and scalability characteristics. Taking advantage of the modeling power and expressiveness of queueing Petri nets, our approach makes it possible to model the system at a higher degree of accuracy, providing a number of important benefits
---
paper_title: Performance prediction of component-based systems: A survey from an engineering perspective
paper_content:
Performance predictions of component assemblies and the ability of obtaining system-level performance properties from these predictions are a crucial success factor when building trustworthy component-based systems. In order to achieve this goal, a collection of methods and tools to capture and analyze the performance of software systems has been developed. These methods and tools aim at helping software engineers by providing them with the capability to understand design trade-offs, optimize their design by identifying performance inhibitors, or predict a system's performance within a specified deployment environment. In this paper, we analyze the applicability of various performance prediction methods for the development of component-based systems and contrast their inherent strengths and weaknesses in different engineering problem scenarios. In so doing, we establish a basis to select an appropriate prediction method and to provide recommendations for future research activities, which could significantly improve the performance prediction of component-based systems.
---
paper_title: Model-Driven Software Development: Technology, Engineering, Management
paper_content:
Part I: Introduction. 1. Introduction. 2. MDSD - Basic Ideas and Terminology. 3. Case Study: A Typical Web Application. 4. Concept Formation. 5. Classification. Part II: Domain Architectures. 6. Metamodeling. 7. MDSD-Capable Target Architectures. 8. Building Domain Architectures. 9. Code Generation Techniques. 10. Model Transformation Techniques. 11. MDSD Tools: Roles, Architecture, Selection Criteria, and Pointers. 12. The MDA Standard. Part III: Processes and Engineering. 13. MDSD Process Building Blocks and Best Practices. 14. Testing. 15. Versioning. 16. Case Study: Embedded Component Infrastructures. 17. Case Study: An Enterprise System. Part IV: Management. 18. Decision Support. 19. Organizational Aspects. 20. Adoption Strategies for MDSD. References. Index.
---
paper_title: Software component models
paper_content:
Component-based Development (CBD) is an important emerging topic in Software Engineering, promising long sought after benefits like increased reuse and reduced time-to-market (and hence software production cost). However, there are at present many obstacles to overcome before CBD can succeed. For one thing, CBD success is predicated on a standardised market place for software components, which does not yet exist. In fact currently CBD even lacks a universally accepted terminology. Existing component models adopt different component definitions and composition operators. Therefore much research remains to be done. We believe that the starting point for this endeavour should be a thorough study of current component models, identifying their key characteristics and comparing their strengths and weaknesses. A desirable side-effect would be clarifying and unifying the CBD terminology. In this tutorial, we present a clear and concise exposition of all the current major software component models, including a taxonomy. The purpose is to distill and present knowledge of current software component models, as well as to present an analysis of their properties with respect to commonly accepted criteria for CBD. The taxonomy also provides a starting point for a unified terminology.
---
paper_title: Performance Solutions A Practical Guide To Creating Responsive Scalable Software
paper_content:
(No publisher abstract available. The book by Smith and Williams presents Software Performance Engineering: constructing and solving performance models of software designs early in development so that responsiveness and scalability problems can be addressed before the system is implemented.)
---
paper_title: Towards Automatic Construction of Reusable Prediction Models for Component-Based Performance Engineering
paper_content:
Performance predictions for software architectures can reveal performance bottlenecks and quantitatively support design decisions for different architectural alternatives. As software architects aim at reusing existing software components, their performance properties should be included into performance predictions without the need for manual modelling. However, most prediction approaches do not include automated support for modelling implemented components. Therefore, we propose a new reverse engineering approach, which generates Palladio performance models from Java code. In this paper, we focus on the static analysis of Java code, which we have implemented as an Eclipse plugin called Java2PCM. We evaluated our approach on a larger component-based software architecture, and show that a similar prediction accuracy can be achieved with generated models compared to completely manually specified ones.
---
paper_title: UML-Based Performance Modeling Framework for Component-Based Distributed Systems
paper_content:
We describe a performance modeling framework that can be used in the development and maintenance of component-based distributed systems, such as those based on the CORBA, EJB, and COM+ platforms. The purpose of the framework is to produce predictive performance models that can be used for obtaining performance related information on the target system at all stages of its life cycle. The framework defines a UML-based notation for describing performance models, and a set of special techniques for modeling component-based distributed systems. In addition, we present a transformation for converting the resulting models into a format that can be solved approximately for a number of relevant performance metrics.
---
paper_title: Efficient Performance Models in Component-Based Software Engineering
paper_content:
Performance evaluation of Component-Based software systems should be performed as early as possible during the software development life cycle. Unfortunately, a detailed quantitative analysis is often not possible during such stages, as only the system outline is available, with very little quantitative knowledge. In this paper we propose an approach based on Queueing Network analysis for performance evaluation of component-based software systems at the software architectural level. Our approach provides performance bounds which can be efficiently computed. Starting from annotated UML diagrams we compute bounds on the system throughput and response time without explicitly deriving or solving the underlying multichain and multiclass Queueing Network model. We illustrate with an example how the technique can be applied to answer many performance-related questions which may arise during the software design phase.
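The flavour of bound referred to above can be illustrated with the classical asymptotic bounds for a single-class closed queueing network with N users, think time Z and per-resource service demands D_k; these are the textbook bounds, given here only to make the idea concrete, not the specific bounds derived in the paper:

X(N) \le \min\!\left(\frac{N}{D + Z},\; \frac{1}{D_{\max}}\right), \qquad
R(N) \ge \max\bigl(D,\; N \cdot D_{\max} - Z\bigr), \qquad
\text{where } D = \sum_k D_k \text{ and } D_{\max} = \max_k D_k .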
---
paper_title: CB-SPE Tool: Putting component-based performance engineering into practice
paper_content:
A crucial issue in the design of Component-Based (CB) applications is the ability to early guarantee that the system under development will satisfy its Quality of Service requirements. In particular, we need rigorous and easy-to-use techniques for predicting and analyzing the performance of the assembly based on the properties of the constituent components. To this purpose, we propose the CB-SPE framework: a compositional methodology for CB Software Performance Engineering (SPE) and its supporting tool. CB-SPE is based on, and adapts to a CB paradigm, the concepts and steps of the well-known SPE technology, using for input modeling the standard RT-UML PA profile. The methodology is compositional: it is first applied by the component developer at the component layer, achieving a parametric performance evaluation of the components in isolation; then, at the application layer, the system assembler is provided with a step-wise procedure for predicting the performance of the assembled components on the actual platform. We have developed the CB-SPE tool reusing as much as possible existing free tools. In this paper we present the realized framework, together with a simple application example.
---
paper_title: Performance modeling from software components
paper_content:
When software products are assembled from pre-defined components, performance prediction should be based on the components also. This supports rapid model-building, using previously calibrated sub-models or "performance components", in sync with the construction of the product. The specification of a performance component must be tied closely to the software component specification, but it also includes performance related parameters (describing workload characteristics and demands), and it abstracts the behaviour of the component in various ways (for reasons related to practical factors in performance analysis). A useful set of abstractions and parameters are already defined for layered performance modeling. This work extends them to accommodate software components, using a new XML-based language called Component-Based Modeling Language (CBML). With CBML, compatible components can be inserted into slots provided in a hierarchical component specification based on the UML component model.
---
paper_title: Model-Based performance prediction with the palladio component model
paper_content:
One aim of component-based software engineering (CBSE) is to enable the prediction of extra-functional properties, such as performance and reliability, utilising a well-defined composition theory. Nowadays, such theories and their accompanying prediction methods are still in a maturation stage. Several factors influencing extra-functional properties need additional research to be understood. A special problem in CBSE stems from its specific development process: Software components should be specified and implemented independent from their later context to enable reuse. Thus, extra-functional properties of components need to be specified in a parametric way to take different influence factors like the hardware platform or the usage profile into account. In our approach, we use the Palladio Component Model (PCM) to specify component-based software architectures in a parametric way. This model offers direct support of the CBSE development process by dividing the model creation among the developer roles. In this paper, we present our model and a simulation tool based on it, which is capable of making performance predictions. Within a case study, we show that the resulting prediction accuracy can be sufficient to support the evaluation of architectural design decisions.
---
paper_title: Performance modeling and prediction of enterprise JavaBeans with layered queuing network templates
paper_content:
Component technologies, such as Enterprise Java Beans (EJB) and .NET, are used in enterprise servers with requirements for high performance and scalability. This work considers performance prediction from the design of an EJB system, based on the modular structure of an application server and the application components. It uses layered queueing models, which are naturally structured around the software components. This paper describes a framework for constructing such models, based on layered queue templates for EJBs, and for their inclusion in the server. The resulting model is calibrated and validated by comparison with an actual system.
---
paper_title: Packaging Predictable Assembly
paper_content:
Significant economic and technical benefits accrue from the use of pre-existing and commercially available software components to develop new systems. However, challenges remain that, if not adequately addressed, will slow the adoption of software component technology. Chief among these are a lack of consumer trust in the quality of components, and a lack of trust in the quality of assemblies of components without extensive and expensive testing. This paper describes prediction-enabled component technology (PECT). A PECT results from integrating component technology with analysis models. An analysis model permits analysis and prediction of assembly-level properties prior to component composition, and, perhaps, prior to component acquisition. Analysis models also identify required component properties and their certifiable descriptions. Component technology supports and enforces the assumptions underlying analysis models; it also provides the medium for deploying PECT instances and PECT-compliant software components. This paper describes the structure of PECT. It discusses the means of establishing the predictive powers of a PECT so that consumers may obtain measurably bounded trust in both components and design-time predictions based on the use of these components. We demonstrate these ideas in a simple but illustrative model problem: predicting average endto-end latency of a 'soft' real time application built from off-the-shelf software components.
---
paper_title: Predicting the Behavior of a Highly Configurable Component Based Real-Time System
paper_content:
Software components and the technology supporting component based software engineering contribute greatly to the rapid development and configuration of systems for a variety of application domains. Such domains go beyond desktop office applications and information systems supporting e-commerce, but include systems having real-time performance requirements and critical functionality. Discussed in this paper are the results from an experiment that demonstrates the ability to predict deadline satisfaction of threads in a real-time system where the functionality performed is based on the configuration of the assembled software components. Presented is the method used to abstract the large, legacy code base of the system software and the application software components in the system; the model of those abstractions based on available architecture documentation and empirically-based, runtime observations; and the analysis of the predictions which yielded objective confidence in the observations and model created which formed the underlying basis for the predictions.
---
paper_title: From design to analysis models: a kernel language for performance and reliability analysis of component-based systems
paper_content:
To facilitate the use of non-functional analysis results in the selection and assembly of components for component-based systems, automatic prediction tools should be devised, to predict some overall quality attribute of the application without requiring extensive knowledge of analysis methodologies to the application designer. To achieve this goal, a key idea is to define a model transformation that takes as input some "design-oriented" model of the component assembly and produces as a result an "analysis-oriented" model that lends itself to the application of some analysis methodology. However, to actually devise such a transformation, we must face both the heterogeneous design level notations for component-based systems, and the variety of non-functional attributes and related analysis methodologies we could be interested in. In this perspective, we define a kernel language whose aim is to capture the relevant information for the analysis of non-functional attributes of component-based systems, with a focus on performance and reliability. Using this kernel language as a bridge between design-oriented and analysis-oriented notations we reduce the burden of defining a variety of direct transformations from the former to the latter to the less complex problem of defining transformations to/from the kernel language. The proposed kernel language is defined within the MOF (Meta-Object Facility) framework, to allow the exploitation of MOF-based model transformation facilities.
---
paper_title: Model-driven performance analysis
paper_content:
Model-Driven Engineering (MDE) is an approach to develop software systems by creating models and applying automated transformations to them to ultimately generate the implementation for a target platform. Although the main focus of MDE is on the generation of code, it is also necessary to support the analysis of the designs with respect to quality attributes such as performance. To complement the model-to-implementation path of MDE approaches, an MDE tool infrastructure should provide what we call model-driven analysis. This paper describes an approach to model-driven analysis based on reasoning frameworks. In particular, it describes a performance reasoning framework that can transform a design into a model suitable for analysis of real-time performance properties with different evaluation procedures including rate monotonic analysis and simulation. The concepts presented in this paper have been implemented in the PACC Starter Kit, a development environment that supports code generation and analysis from the same models.
---
paper_title: Performance prediction for black-box components using reengineered parametric behaviour models
paper_content:
In component-based software engineering, the response time of an entire application is often predicted from the execution durations of individual component services. However, these execution durations are specific for an execution platform (i.e. its resources such as CPU) and for a usage profile. Reusing an existing component on different execution platforms up to now required repeated measurements of the concerned components for each relevant combination of execution platform and usage profile, leading to high effort. This paper presents a novel integrated approach that overcomes these limitations by reconstructing behaviour models with platform-independent resource demands of bytecode components. The reconstructed models are parameterised over input parameter values. Using platform-specific results of bytecode benchmarking, our approach is able to translate the platform-independent resource demands into predictions for execution durations on a certain platform. We validate our approach by predicting the performance of a file sharing application.
---
paper_title: TESTEJB - A measurement framework for EJBs
paper_content:
Specification of Quality of Service (QoS) for components can only be done in relation to the QoS the components themselves are given by imported components. Developers as well as users need support in deriving valid data for such specifications and in checking whether a selected component complies with its specification. In this paper we introduce the architecture of a measurement framework for EJBs giving such support and discuss in detail the measurement of the well-understood property of response time.
---
paper_title: Snapshot of CCL : A Language for Predictable Assembly
paper_content:
Construction and composition language (CCL) plays several roles in our approach to achieving automated predictable assembly. CCL is used to produce specifications that contain structural, behavioral, and analysis-specific information about component technologies, as well as components and assemblies in such technologies. These specifications are translated to one or more reasoning frameworks that analyze and predict the runtime properties of assemblies. CCL processors can also be used to automate many of the constructive activities of component-based development through various forms of program generation. Using a common specification for prediction and construction improves confidence that analysis models match implementations. This report presents a snapshot of CCL by examining a small example CCL specification.
---
paper_title: The COMQUAD component model: enabling dynamic selection of implementations by weaving non-functional aspects
paper_content:
The reliability of non-functional contracts is crucial for many software applications, which adds to the increasing attention this issue has lately received in software engineering. Another development in software engineering is toward component-based systems. The interaction of the two, non-functional aspects and components, is a relatively new research area, which the COMQUAD project is focusing on. Our component model, presented in this paper, enables the specification and runtime support of non-functional aspects in component-based systems. At the same time, a clear separation of non-functional properties and functionally motivated issues is provided. We achieve this by extending the concepts of the existing component-based systems Enterprise JavaBeans (EJB) and CORBA Components (CCM). Non-functional aspects are described orthogonally to the application structure using descriptors, and are woven into the running application by the component container acting as a contract manager. The container implicitly instantiates component specifications and connects them according to the current requests. The selection of actual implementations depends on the particular client's non-functional requirements. This technique also enables adaptation based on the specific quantitative capabilities of the running system. In this paper we give a detailed description of the COMQUAD component model and the appropriate container support. We also provide a simple case study of a multimedia application for better understanding.
---
paper_title: Predicting real-time properties of component assemblies: a scenario-simulation approach
paper_content:
This work addresses the problem of predicting timing properties of multitasking component assemblies during the design phase. For real-time applications, it is of vital importance to guarantee that the timing requirements of an assembly are met. We propose a simulation-based approach for predicting the real-time behaviour of an assembly based on models of its constituent components. Our approach extends the scenario-based method in [J. Muskens et al. (2004)] by offering a system model that is tailored to the domain of real-time applications. Contributions of this work include the possibility to handle the following features: mutual exclusion, combinations of aperiodic and periodic tasks, and synchronization constraints. The analytical approach we used in previous work cannot handle these features; therefore, we introduce the simulation-based approach. Our simulator provides data about dynamic resource consumption and real-time properties like response time, blocking time and number of missed deadlines per task. We have validated our approach using a video-decoder application.
---
paper_title: Exploring performance trade-offs of a JPEG decoder using the deepcompass framework
paper_content:
Designing embedded systems for multiprocessor platforms requires early prediction and balancing of multiple system quality attributes. We present a design space exploration framework for component-based software systems that allows an architect to get insight into a space of possible design alternatives with further evaluation and comparison of these alternatives. The framework provides (a) tool-guided design of multiple alternatives of software and hardware architectures, (b) early design-time predictions of performance properties and identification of bottlenecks for each architectural alternative, and (c) evaluation of each alternative with respect to multi-objective trade-offs. The performance prediction technique employs modeling of individual components and composition of the models into a system model representing the system behaviour and resource usage. We illustrate the framework by a case study of a JPEG decoder application. For this system, we consider architectural alternatives, show their specification, and explore their trade-offs with respect to task latencies, resource utilization and system cost.
---
paper_title: The Method of Layers
paper_content:
Distributed applications are being developed that contain one or more layers of software servers. Software processes within such systems suffer contention delays both for shared hardware and at the software servers. The responsiveness of these systems is affected by the software design, the threading level and number of instances of software processes, and the allocation of processes to processors. The Method of Layers (MOL) is proposed to provide performance estimates for such systems. The MOL uses the mean value analysis (MVA) linearizer algorithm as a subprogram to assist in predicting model performance measures.
---
paper_title: Performance and scalability of EJB applications
paper_content:
We investigate the combined effect of application implementation method, container design, and efficiency of communication layers on the performance scalability of J2EE application servers by detailed measurement and profiling of an auction site server. We have implemented five versions of the auction site. The first version uses stateless session beans, making only minimal use of the services provided by the Enterprise JavaBeans (EJB) container. Two versions use entity beans, one with container-managed persistence and the other with bean-managed persistence. The fourth version applies the session facade pattern, using session beans as a facade to access entity beans. The last version uses EJB 2.0 local interfaces with the session facade pattern. We evaluate these different implementations on two popular open-source EJB containers with orthogonal designs. JBoss uses dynamic proxies to generate the container classes at run time, making an extensive use of reflection. JOnAS pre-compiles classes during deployment, minimizing the use of reflection at run time. We also evaluate the communication optimizations provided by each of these EJB containers. The most important factor in determining performance is the application implementation method. EJB applications with session beans perform as well as a Java servlets-only implementation and an order-of-magnitude better than most of the implementations based on entity beans. The fine-granularity access exposed by the entity beans limits scalability. Use of session facade beans improves performance for entity beans, but only if local communication is very efficient or EJB 2.0 local interfaces are used. Otherwise, session facade beans degrade performance. For the implementation using session beans, communication cost forms the major component of the execution time on the EJB server. The design of the container has little effect on performance. With entity beans, the design of the container becomes important. In particular, the cost of reflection affects performance. For implementations using session facade beans, local communication cost is critically important. EJB 2.0 local interfaces improve the performance by avoiding the communication layers for local communications.
---
paper_title: Revel8or: Model Driven Capacity Planning Tool Suite
paper_content:
Designing complex multi-tier applications that must meet strict performance requirements is a challenging software engineering problem. Ideally, the application architect could derive accurate performance predictions early in the project life-cycle, leveraging initial application design-level models and a description of the target software and hardware platforms. To this end, we have developed a capacity planning tool suite for component-based applications, called Revel8or. The tool adheres to the model driven development paradigm and supports benchmarking and performance prediction for J2EE, .Net and Web services platforms. The suite is composed of three different tools: MDAPerf, MDABench and DSLBench. MDAPerf allows annotation of design diagrams and derives performance analysis models. MDABench allows a customized benchmark application to be modeled in the UML 2.0 Testing Profile and automatically generates a deployable application, with measurement automatically conducted. DSLBench allows the same benchmark modeling and generation to be conducted using a simple performance engineering Domain Specific Language (DSL) in Microsoft Visual Studio. DSLBench integrates with Visual Studio and reuses its load testing infrastructure. Together, the tool suite can assist capacity planning across platforms in an automated fashion.
---
paper_title: Design-level performance prediction of component-based applications
paper_content:
Server-side component technologies such as Enterprise JavaBeans (EJBs), .NET, and CORBA are commonly used in enterprise applications that have requirements for high performance and scalability. When designing such applications, architects must select suitable component technology platform and application architecture to provide the required performance. This is challenging as no methods or tools exist to predict application performance without building a significant prototype version for subsequent benchmarking. In this paper, we present an approach to predict the performance of component-based server-side applications during the design phase of software development. The approach constructs a quantitative performance model for a proposed application. The model requires inputs from an application-independent performance profile of the underlying component technology platform, and a design description of the application. The results from the model allow the architect to make early decisions between alternative application architectures in terms of their performance and scalability. We demonstrate the method using an EJB application and validate predictions from the model by implementing two different application architectures and measuring their performance on two different implementations of the EJB platform.
---
paper_title: Mean-Value Analysis of Closed Multichain Queuing Networks
paper_content:
It is shown that mean queue sizes, mean waiting times, and throughputs in closed multiple-chain queuing networks which have product-form solution can be computed recursively without computing product terms and normalization constants. The resulting computational procedures have improved properties (avoidance of numerical problems and, in some cases, fewer operations) compared to previous algorithms. Furthermore, the new algorithms have a physically meaningful interpretation which provides the basis for heuristic extensions that allow the approximate solution of networks with a very large number of closed chains, and which is shown to be asymptotically valid for large chain populations.
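For the single-class case, the recursion referred to above can be written in a few lines. This is the textbook exact MVA for a closed network with queueing stations of demand D_k and a think-time (delay) term Z, not the multichain algorithm or the large-population approximation of the paper; the demand values in the usage line are made up for the example.

def exact_mva(demands, Z=0.0, N=10):
    """Exact single-class MVA: returns throughput X(N) and residence times R_k(N)."""
    Q = [0.0] * len(demands)                              # mean queue lengths at population 0
    for n in range(1, N + 1):
        R = [D * (1.0 + q) for D, q in zip(demands, Q)]   # arrival-theorem step
        X = n / (Z + sum(R))                              # response-time law for the cycle
        Q = [X * r for r in R]                            # Little's law per station
    return X, R

X, R = exact_mva([0.05, 0.02, 0.08], Z=1.0, N=20)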
---
paper_title: Early performance testing of distributed software applications
paper_content:
Performance characteristics, such as response time, throughput and scalability, are key quality attributes of distributed applications. Current practice, however, rarely applies systematic techniques to evaluate performance characteristics. We argue that evaluation of performance is particularly crucial in early development stages, when important architectural choices are made. At first glance, this contradicts the use of testing techniques, which are usually applied towards the end of a project. In this paper, we assume that many distributed systems are built with middleware technologies, such as the Java 2 Enterprise Edition (J2EE) or the Common Object Request Broker Architecture (CORBA). These provide services and facilities whose implementations are available when architectures are defined. We also note that it is the middleware functionality, such as transaction and persistence services, remote communication primitives and threading policy primitives, that dominates distributed system performance. Drawing on these observations, this paper presents a novel approach to performance testing of distributed applications. We propose to derive application-specific test cases from architecture designs so that performance of a distributed application can be tested using the middleware software at early stages of a development process. We report empirical results that support the viability of the approach.
---
paper_title: Performance prediction of component-based applications
paper_content:
One of the major problems in building large-scale enterprise systems is anticipating the performance of the eventual solution before it has been built. The fundamental software engineering problem becomes more difficult when the systems are built on component technology. This paper investigates the feasibility of providing a practical solution to this problem. An empirical approach is proposed to determine the performance characteristics of component-based applications by benchmarking and profiling. Based on observation, a model is constructed to act as a performance predictor for a class of applications based on the specific component technology. The performance model derived from empirical measures is necessary to make the problem tractable and the results relevant. A case study applies the performance model to an application prototype implemented by two component infrastructures: CORBA and J2EE.
---
paper_title: Theory of software reliability based on components
paper_content:
We present a foundational theory of software system reliability based on components. The theory describes how component developers can design and test their components to produce measurements that are later used by system designers to calculate composite system reliability — without implementation and test of the system being designed. The theory describes how to make component measurements that are independent of operational profiles, and how to incorporate the overall system-level operational profile into the system reliability calculations. In principle, the theory resolves the central problem of assessing a component, which is: a component developer cannot know how the component will be used and so cannot certify it for an arbitrary use; but if the component buyer must certify each component before using it, component-based development loses much of its appeal. This dilemma is resolved if the component developer does the certification and provides the results in such a way that the component buyer can factor in the usage information later, without repeating the certification. Our theory addresses the basic technical problems inherent in certifying components to be released for later use in an arbitrary system. Most component research has been directed at functional specification of software components; our theory addresses the other, equally important, side of the coin: component quality.
---
paper_title: Performance specification of software components
paper_content:
Component-based software engineering is concerned with predictability in both functional and performance behavior, though most formal techniques have typically focused their attention on the former. Reasoning about the (functional or performance) behavior of a component-based system must be compositional in order to be scalable. Compositional performance reasoning demands that components include performance specifications, in addition to descriptions of functional behavior. Unfortunately, as explained in this paper, classical techniques and notations for performance analysis are either unsuitable or unnatural to capture performance behaviors of generic software components. They fail to work in the presence of parameterization and layering. The paper introduces elements of a compositional approach to performance analysis using a detailed example. It explains that performance specification problems are so basic that there are unresolved research issues to be tackled even for the simplest reusable components. These issues must be tackled by any practical proposal for sound performance reasoning. Only then will software developers be able to engineer new systems by choosing and assembling components that best fit their performance (time and space) requirements.
---
paper_title: Tools and experiments supporting a testing-based theory of component composition
paper_content:
Development of software using off-the-shelf components seems to offer a chance for improving product quality and developer productivity. This article reviews a foundational testing-based theory of component composition, describes tools that implement the theory, and presents experiments with functional and nonfunctional component/system properties that validate the theory and illuminate issues in component composition. The context for this work is an ideal form of Component-Based Software Development (CBSD) supported by tools. Component developers describe their components by measuring approximations to functional and nonfunctional behavior on a finite collection of subdomains. Systems designers describe an application-system structure by the component connections that form it. From measured component descriptions and a system structure, a CAD tool synthesizes the system properties, predicting how the system will behave. The system is not built, nor are any test executions performed. Neither the component sources nor executables are needed by systems designers. From CAD calculations a designer can learn (approximately) anything that could be learned by testing an actual system implementation. The CAD tool is often more efficient than it would be to assemble and execute an actual system. Using tools that support an ideal separation between component- and system development, experiments were conducted to investigate two related questions: (1) To what extent can unit (that is, component) testing replace system testing? (2) What properties of software and subdomains influence the quality of subdomain testing?
---
paper_title: Software component composition: a subdomain-based testing-theory foundation
paper_content:
Composition of software elements into assemblies (systems) is a fundamental aspect of software development. It is an important strength of formal mathematical specification that the descriptions of elements can be precisely composed into the descriptions of assemblies. Testing, on the other hand, is usually thought to be ‘non-compositional.’ Testing provides information about any executable software element, but testing descriptions have not been combined to describe assemblies of elements. The underlying reason for the compositional deficiency of testing is that tests are samples. When two elements are composed, the input samples (test points) for the first lead to an output sample, but it does not match the input test points of the second, following element. The current interest in software components and component-based software development (CBSD) provides an ideal context for investigating elements and assemblies. In CBSD, the elements (components) are analysed without knowledge of the system(s) to be later assembled. A fundamental testing theory of component composition must use measured component properties (test results) to predict system properties. This paper proposes a testing-based theory of software component composition based on subdomains. It shows how to combine subdomain tests of components into testing predictions for arbitrarily complex assemblies formed by sequence, conditional, and iteration constructions. The basic construction of the theory applies to functional behaviour, but the theory can also predict the system's non-functional properties from component subdomain tests. Compared with the alternative of actually building and testing a system, theoretical predictions are computationally more efficient. The theory can also be described as an exercise in modelling. Components are replaced by abstractions derived from testing them, and these models are manipulated to model system behaviour.
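A toy illustration of the subdomain-based composition idea for a sequence of two components; the subdomain tables below are invented for the example and stand in for measured component descriptions. Each component is summarised, per input subdomain, by a representative output mapping and a measured run time, and the series composition A;B is predicted by mapping A's output into B's subdomains and adding the run times.

# Per-subdomain summaries: (membership predicate, representative output map, measured run time).
A = [(lambda x: x < 0,  lambda x: -x,    2.0),
     (lambda x: x >= 0, lambda x: x + 1, 1.0)]
B = [(lambda x: x < 10,  lambda x: x * 2, 3.0),
     (lambda x: x >= 10, lambda x: x,     5.0)]

def lookup(component, x):
    for pred, out, cost in component:
        if pred(x):
            return out(x), cost
    raise ValueError("input falls outside every subdomain")

def predict_sequence(A, B, x):
    """Predict output and run time of the series composition A;B from the tables alone."""
    y, t_a = lookup(A, x)        # which A-subdomain x falls into
    z, t_b = lookup(B, y)        # which B-subdomain A's output falls into
    return z, t_a + t_b          # predicted functional result and total run time

print(predict_sequence(A, B, -3))   # -> (6, 5.0)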
---
paper_title: Automating the performance management of component-based enterprise systems through the use of redundancy
paper_content:
Component technologies are increasingly being used for building enterprise systems, as they can address complex functionality and flexibility problems and reduce development and maintenance costs. Nonetheless, current component technologies provide little support for predicting and controlling the emerging performance of software systems that are assembled from distinct components. This paper presents a framework for automating the performance management of complex, component-based systems. The adopted approach is based on the alternate usage of multiple component variants with equivalent functional characteristics, each one optimized for a different running environment. A fully-automated framework prototype for J2EE is presented, along with results from managing a sample enterprise application on JBoss. A mechanism that uses monitoring data to learn and automatically improve the framework's management behaviour is proposed. The framework imposes no extra requirements on component providers, or on the component technologies.
---
paper_title: A framework for performance monitoring, modelling and prediction of component oriented distributed systems
paper_content:
We present a framework that can be used to identify performance issues in component-oriented distributed systems. The framework consists of a monitoring module, a modelling module and a prediction module, that are interrelated. The monitoring block extracts real-time performance data from a live or under development system. The modelling block generates UML models of the system showing where the performance problems are located and drives the monitoring process. The performance prediction block simulates different system-loads on the generated models and pinpoints possible performance issues. The technological focus is currently on Enterprise Java Beans systems.
---
paper_title: Performance Solutions A Practical Guide To Creating Responsive Scalable Software
paper_content:
This book presents the Software Performance Engineering (SPE) approach of Smith and Williams for building responsive, scalable software. It shows how to construct and solve quantitative performance models early in development, starting from software designs and usage scenarios, and it catalogues performance principles, patterns and antipatterns that help architects and developers meet performance objectives.
---
paper_title: TESTEJB - A measurement framework for EJBs
paper_content:
Specification of Quality of Service (QoS) for components can only be done in relation to the QoS the components themselves are given by imported components. Developers as well as users need support in order to derive valid data for specification respectively for checking whether a selected component complies with its specification. In this paper we introduce the architecture of a measurement framework for EJBs giving such support and discuss in detail the measurement of the well understood property of response time.
---
paper_title: Detecting Performance Antipatterns in Component Based Enterprise Systems
paper_content:
We introduce an approach for automatic detection of performance antipatterns. The approach is based on a number of advanced monitoring and analysis techniques. The advanced analysis is used to identify relationships and patterns in the monitored data. This information is subsequently used to reconstruct a design model of the underlying system, which is loaded into a rule engine in order to identify predefined antipatterns. We give results of applying this approach to identify a number of antipatterns in two JEE applications. Finally, this work also categorises JEE antipatterns into categories based on the data needed to detect them.
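The rule-based detection idea can be illustrated with a toy example; the actual framework, its monitoring infrastructure, rule engine and JEE antipattern catalogue are not reproduced here, and the metrics, rule and thresholds below are hypothetical.
```python
# Toy illustration of rule-based antipattern detection over monitored data.
# The metrics and thresholds are hypothetical; a real framework would feed a
# reconstructed design/runtime model into a rule engine.

from typing import Dict, List

# Hypothetical monitoring data: per component, average remote calls and average
# payload size (bytes) per user request.
monitoring_data: Dict[str, Dict[str, float]] = {
    "OrderService":   {"remote_calls_per_request": 42, "avg_payload_bytes": 64},
    "CatalogService": {"remote_calls_per_request": 3,  "avg_payload_bytes": 2048},
}

def detect_chatty_components(data: Dict[str, Dict[str, float]],
                             max_calls: float = 10,
                             min_payload: float = 256) -> List[str]:
    """Flag components that issue many fine-grained remote calls per request."""
    flagged = []
    for component, stats in data.items():
        if (stats["remote_calls_per_request"] > max_calls
                and stats["avg_payload_bytes"] < min_payload):
            flagged.append(component)
    return flagged

if __name__ == "__main__":
    for name in detect_chatty_components(monitoring_data):
        print(f"Possible fine-grained remote-call antipattern in: {name}")
```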
---
paper_title: Performance prediction for black-box components using reengineered parametric behaviour models
paper_content:
In component-based software engineering, the response time of an entire application is often predicted from the execution durations of individual component services. However, these execution durations are specific for an execution platform (i.e. its resources such as CPU) and for a usage profile. Reusing an existing component on different execution platforms up to now required repeated measurements of the concerned components for each relevant combination of execution platform and usage profile, leading to high effort. This paper presents a novel integrated approach that overcomes these limitations by reconstructing behaviour models with platform-independent resource demands of bytecode components. The reconstructed models are parameterised over input parameter values. Using platform-specific results of bytecode benchmarking, our approach is able to translate the platform-independent resource demands into predictions for execution durations on a certain platform. We validate our approach by predicting the performance of a file sharing application.
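The core prediction step described above can be paraphrased as a simple computation: a reconstructed behaviour model supplies platform-independent counts of abstract operations, parameterised over the input, and platform-specific benchmark results turn those counts into time. The operation names, count functions and timings below are invented for illustration and do not come from the paper.
```python
# Minimal sketch: translate platform-independent resource demands into a
# platform-specific execution-time prediction. All numbers are illustrative.

from typing import Callable, Dict

# Parameterised demands: abstract operation -> count as a function of input size n.
demands: Dict[str, Callable[[int], float]] = {
    "arith_op":    lambda n: 50.0 * n,        # e.g. arithmetic bytecode instructions
    "array_read":  lambda n: 10.0 * n,
    "method_call": lambda n: 2.0 * n + 5.0,
}

# Benchmarked cost per abstract operation on a concrete platform (microseconds).
platform_costs_us: Dict[str, float] = {
    "arith_op": 0.002,
    "array_read": 0.004,
    "method_call": 0.050,
}

def predict_execution_time_us(n: int) -> float:
    """Predicted execution time for input size n on the benchmarked platform."""
    return sum(count(n) * platform_costs_us[op] for op, count in demands.items())

if __name__ == "__main__":
    for n in (100, 1000, 10000):
        print(f"n={n}: predicted time ≈ {predict_execution_time_us(n):.1f} µs")
```
Porting the prediction to another platform then only requires re-running the benchmarks, not re-measuring the component.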
---
paper_title: A model transformation approach for the early performance and reliability analysis of component-based systems
paper_content:
The adoption of a “high level” perspective in the design of a component-based application, without considering the specific features of some underlying supporting platform, has the advantage of focusing on the relevant architectural aspects and reasoning about them in a platform independent way, omitting unnecessary details that could even not be known at the earliest development stages. On the other hand, many of the details that are typically neglected in this high-level perspective must necessarily be taken into account to obtain a meaningful evaluation of different architectural choices in terms of extra-functional quality attributes, like performance or reliability. Toward the reconciliation of these two contrasting needs, we propose a model-based approach whose goal is to support the derivation of sufficiently detailed prediction models from high level models of component-based systems, focusing on the prediction of performance and reliability. We exploit for this purpose a refinement mechanism based on the use of model transformation techniques.
---
paper_title: Automatic inclusion of middleware performance attributes into architectural UML software models
paper_content:
Distributed systems often use a form of communication middleware to cope with different forms of heterogeneity, including geographical spreading of the components, different programming languages and platform architectures, etc. The middleware, of course, impact the architecture and the performance of the system. This paper presents a model transformation framework to automatically include the architectural impact and the overhead incurred by using a middleware layer between several system components. Using this framework, architects can model the system in a middleware-independent fashion. Accurate, middleware-aware models can then be obtained automatically using a middleware model repository. The actual transformation algorithm is presented in more detail. The resulting models can be used to obtain performance models of the system. From those performance models, early indications of the system performance can be extracted.
---
paper_title: Performance prediction for component compositions
paper_content:
A stepwise approach is proposed to predict the performance of component compositions. The approach considers the major factors influencing the performance of component compositions in sequence: component operations, activities, and composition of activities. During each step, various models - analytical, statistical, simulation - can be constructed to specify the contribution of each relevant factor to the performance of the composition. The architects can flexibly choose which model they use at each step in order to trade prediction accuracy against prediction effort. The approach is illustrated with an example about the performance prediction for an Automobile Navigation System.
---
paper_title: Component Composition with Parametric Contracts
paper_content:
We discuss compositionality in terms of (a) component interoperability and contractual use of components, (b) component adaptation and (c) prediction of properties of composite components. In particular, we present parametric component contracts as a framework treating the above mentioned facets of compositionality in a unified way. Parametric contracts compute component interfaces in dependency of context properties, such as available external services or the profile how the component will be used by its clients. Under well-specified conditions, parametric contracts yield interfaces offering interoperability to the component context (as they are component-specifically generated). Therefore, parametric contracts can be considered as adaptation mechanism, adapting a components providesor requires-interface depending on connected components. If non-functional properties are specified in a component provides interface, parametric contracts compute these nonfunctional properties in dependency of the environment.
---
paper_title: Software component models
paper_content:
Component-based Development (CBD) is an important emerging topic in Software Engineering, promising long sought after benefits like increased reuse and reduced time-to-market (and hence software production cost). However, there are at present many obstacles to overcome before CBD can succeed. For one thing, CBD success is predicated on a standardised market place for software components, which does not yet exist. In fact currently CBD even lacks a universally accepted terminology. Existing component models adopt different component definitions and composition operators. Therefore much research remains to be done. We believe that the starting point for this endeavour should be a thorough study of current component models, identifying their key characteristics and comparing their strengths and weaknesses. A desirable side-effect would be clarifying and unifying the CBD terminology. In this tutorial, we present a clear and concise exposition of all the current major software component models, including a taxonomy. The purpose is to distill and present knowledge of current software component models, as well as to present an analysis of their properties with respect to commonly accepted criteria for CBD. The taxonomy also provides a starting point for a unified terminology.
---
paper_title: How far are we from the definition of a common software performance ontology?
paper_content:
The recent approaches to software performance modeling and validation share the idea of annotating software models with information related to performance (e.g. operational profile) and transforming the annotated model into a performance model (e.g. a Stochastic Petri Net). Up to date, no standard has been defined to represent the information related to performance in software artifacts, although clear advantages in tool interoperability and model transformations would stem from it. This paper is aimed at questioning whether a software performance ontology (i.e. a standard set of concepts and relations) is achievable or not. We consider three meta-models defined for software performance, that are the Schedulability, Performance and Time profile of UML, the Core Scenario Model and the Software Performance Engineering meta-model. We devise two approaches to the creation of an ontology: (i) bottom-up, that extracts common knowledge from the meta-models, (ii) top-down, that is driven from a set of requirements.
---
paper_title: An empirical investigation of the effort of creating reusable models for performance prediction
paper_content:
Model-based performance prediction methods aim at evaluating the expected response time, throughput, and resource utilisation of a software system at design time, before implementation. Existing performance prediction methods use monolithic, throw-away prediction models or component-based, reusable prediction models. While it is intuitively clear that the development of reusable models requires more effort, the actual higher amount of effort has not been quantified or analysed systematically yet. To study the effort, we conducted a controlled experiment with 19 computer science students who predicted the performance of two example systems applying an established, monolithic method (Software Performance Engineering) as well as our own component-based method (Palladio). The results show that the effort of model creation with Palladio is approximately 1.25 times higher than with SPE in our experimental setting, with the resulting models having comparable prediction accuracy. Therefore, in some cases, the creation of reusable prediction models can already be justified, if they are reused at least once.
---
paper_title: Predicting the Behavior of a Highly Configurable Component Based Real-Time System
paper_content:
Software components and the technology supporting component based software engineering contribute greatly to the rapid development and configuration of systems for a variety of application domains. Such domains go beyond desktop office applications and information systems supporting e-commerce, but include systems having real-time performance requirements and critical functionality. Discussed in this paper are the results from an experiment that demonstrates the ability to predict deadline satisfaction of threads in a real-time system where the functionality performed is based on the configuration of the assembled software components. Presented is the method used to abstract the large, legacy code base of the system software and the application software components in the system; the model of those abstractions based on available architecture documentation and empirically-based, runtime observations; and the analysis of the predictions which yielded objective confidence in the observations and model created which formed the underlying basis for the predictions.
---
paper_title: Rule-based automatic software performance diagnosis and improvement
paper_content:
There are many advantages to analyzing performance at the design level, rather than waiting until system testing. However the necessary expertise in making and interpreting performance models may not be available, and the time for the analysis may be prohibitive. This work addresses both these difficulties through automation. Starting from an annotated specification in UML, it is possible to automatically derive a performance model. This work goes further to automate the performance analysis, and to explore design changes using diagnostic and design-change rules. The rules generate improved performance models which can be transformed back to an improved design. They untangle the effects of the system configuration (such as the allocation of processors) from limitations of the design, and they recommend both configuration and design improvements. This paper describes a prototype called Performance Booster (PB), which incorporates several rules, and demonstrates its feasibility by applying PB to the design of several case studies (tutorial and industrial). It also addresses how the changes at the level of a performance model should be implemented in the software.
---
paper_title: Performance Solutions A Practical Guide To Creating Responsive Scalable Software
paper_content:
This book presents the Software Performance Engineering (SPE) approach of Smith and Williams for building responsive, scalable software. It shows how to construct and solve quantitative performance models early in development, starting from software designs and usage scenarios, and it catalogues performance principles, patterns and antipatterns that help architects and developers meet performance objectives.
---
paper_title: Exploring performance trade-offs of a JPEG decoder using the deepcompass framework
paper_content:
Designing embedded systems for multiprocessor platforms requires early prediction and balancing of multiple system quality attributes. We present a design space exploration framework for component-based software systems that allows an architect to get insight into a space of possible design alternatives with further evaluation and comparison of these alternatives. The framework provides (a) tool-guided design of multiple alternatives of software and hardware architectures, (b) early design-time predictions of performance properties and identification of bottlenecks for each architectural alternative, and (c) evaluation of each alternative with respect to multi-objective trade-offs. The performance prediction technique employs modeling of individual components and composition of the models into a system model representing the system behaviour and resource usage. We illustrate the framework by a case study of a JPEG decoder application. For this system, we consider architectural alternatives, show their specification, and explore their trade-offs with respect to task latencies, resource utilization and system cost.
---
paper_title: Performance prediction for black-box components using reengineered parametric behaviour models
paper_content:
In component-based software engineering, the response time of an entire application is often predicted from the execution durations of individual component services. However, these execution durations are specific for an execution platform (i.e. its resources such as CPU) and for a usage profile. Reusing an existing component on different execution platforms up to now required repeated measurements of the concerned components for each relevant combination of execution platform and usage profile, leading to high effort. This paper presents a novel integrated approach that overcomes these limitations by reconstructing behaviour models with platform-independent resource demands of bytecode components. The reconstructed models are parameterised over input parameter values. Using platform-specific results of bytecode benchmarking, our approach is able to translate the platform-independent resource demands into predictions for execution durations on a certain platform. We validate our approach by predicting the performance of a file sharing application.
---
paper_title: Performance prediction of component-based applications
paper_content:
One of the major problems in building large-scale enterprise systems is anticipating the performance of the eventual solution before it has been built. The fundamental software engineering problem becomes more difficult when the systems are built on component technology. This paper investigates the feasibility of providing a practical solution to this problem. An empirical approach is proposed to determine the performance characteristics of component-based applications by benchmarking and profiling. Based on observation, a model is constructed to act as a performance predictor for a class of applications based on the specific component technology. The performance model derived from empirical measures is necessary to make the problem tractable and the results relevant. A case study applies the performance model to an application prototype implemented by two component infrastructures: CORBA and J2EE.
---
paper_title: Predicting the performance of component-based software architectures with different usage profiles
paper_content:
Performance predictions aim at increasing the quality of software architectures during design time. To enable such predictions, specifications of the performance properties of individual components within the architecture are required. However, the response times of a component might depend on its configuration in a specific setting and the data send to or retrieved from it. Many existing prediction approaches for component-based systems neglect these influences. This paper introduces extensions to a performance specification language for components, the Palladio Component Model, to model these influences. The model enables to predict response times of different architectural alternatives. A case study on a component-based architecture for a web portal validates the approach and shows that it is capable of supporting a design decision in this scenario.
---
paper_title: Modelling of input-parameter dependency for performance predictions of component-based embedded systems
paper_content:
The guaranty of meeting the timing constraints during the design phase of real-time component-based embedded software has not been realized. To satisfy real-time requirements, we need to understand behaviour and resource usage of a system over time. In this paper, we address both aspects in detail by observing the influence of input data on the system behaviour and performance. We extend an existing scenario simulation approach that features the modelling of input parameter dependencies and simulating the execution of the models. The approach enables specification of the dependencies in the component models, as well as initialisation of the parameters in the application scenario model. This gives a component-based application designer an explorative possibility of going through all possible execution scenarios with different parameter initialisations, and finding the worst-case scenarios where the predicted performance does not satisfy the requirements. The identification of these scenarios is important because it avoids system redesign at the later stage. In addition, the conditional behaviour and resource usage modelling with respect to the input data provide more accurate prediction.
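A schematic version of the exploration step described above is sketched below: it enumerates input-parameter initialisations, evaluates a predicted latency for each scenario, and reports the scenarios that violate a deadline. The parameter ranges, the latency function (standing in for a scenario simulation run) and the deadline are all assumptions made for this illustration.
```python
# Sketch: exhaustive exploration of input-parameter initialisations to find
# worst-case predicted scenarios. The latency model and deadline are invented.

from itertools import product

# Hypothetical input parameters and their admissible values.
parameter_space = {
    "frame_size": [320, 640, 1280],
    "quality":    [1, 2, 3],
    "buffering":  [False, True],
}

DEADLINE_MS = 40.0

def predicted_latency_ms(frame_size: int, quality: int, buffering: bool) -> float:
    """Made-up parameter-dependent latency model standing in for a simulation run."""
    base = 0.00002 * frame_size * frame_size * quality
    return base + (5.0 if buffering else 12.0)

def explore():
    names = list(parameter_space)
    worst = None
    for values in product(*(parameter_space[n] for n in names)):
        scenario = dict(zip(names, values))
        latency = predicted_latency_ms(**scenario)
        if latency > DEADLINE_MS:
            print(f"deadline missed: {scenario} -> {latency:.1f} ms")
        if worst is None or latency > worst[1]:
            worst = (scenario, latency)
    print(f"worst case: {worst[0]} -> {worst[1]:.1f} ms")

if __name__ == "__main__":
    explore()
```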
---
paper_title: An empirical investigation of the effort of creating reusable models for performance prediction
paper_content:
Model-based performance prediction methods aim at evaluating the expected response time, throughput, and resource utilisation of a software system at design time, before implementation. Existing performance prediction methods use monolithic, throw-away prediction models or component-based, reusable prediction models. While it is intuitively clear that the development of reusable models requires more effort, the actual higher amount of effort has not been quantified or analysed systematically yet. To study the effort, we conducted a controlled experiment with 19 computer science students who predicted the performance of two example systems applying an established, monolithic method (Software Performance Engineering) as well as our own component-based method (Palladio). The results show that the effort of model creation with Palladio is approximately 1.25 times higher than with SPE in our experimental setting, with the resulting models having comparable prediction accuracy. Therefore, in some cases, the creation of reusable prediction models can already be justified, if they are reused at least once.
---
paper_title: Software performance in the real world: personal lessons from the performance trauma team
paper_content:
In the nine years that we have been involved in software performance engineering (SPE) and performance testing engagements we have learned several things. Across numerous eCommerce applications and an enterprise CRM product suite, our knowledge base about the field of Software Performance Engineering is constantly evolving. The focus of this paper is what we have learned in the areas of SPE project management, performance testing, defining the scope of SPE projects, ITIL, post production performance support, and exploration of the boundaries of applied SPE. Is it really just about performance?
---
paper_title: The Future of Software Performance Engineering
paper_content:
Performance is a pervasive quality of software systems; everything affects it, from the software itself to all underlying layers, such as operating system, middleware, hardware, communication networks, etc. Software Performance Engineering encompasses efforts to describe and improve performance, with two distinct approaches: an early-cycle predictive model-based approach, and a late-cycle measurement-based approach. Current progress and future trends within these two approaches are described, with a tendency (and a need) for them to converge, in order to cover the entire development cycle.
---
paper_title: Performance Model Estimation and Tracking Using Optimal Filters
paper_content:
To update a performance model, its parameter values must be updated, and in some applications (such as autonomic systems) tracked continuously over time. Direct measurement of many parameters during system operation requires instrumentation which is impractical. Kalman filter estimators can track such parameters using other data such as response times and utilizations, which are readily observable. This paper adapts Kalman filter estimators for performance model parameters, evaluates the approximations which must be made, and develops a systematic approach to setting up an estimator. The estimator converges under easily verified conditions. Different queueing-based models are considered here, and the extension for state-based models (such as stochastic Petri nets) is straightforward.
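To make the idea concrete, here is a minimal scalar example, not the estimator from the paper: a Kalman filter tracks a single hidden performance-model parameter, the CPU demand per request, from noisy utilization measurements via the relation utilization ≈ arrival rate × demand. The noise levels and the workload trace are invented.
```python
# Minimal scalar Kalman filter tracking a performance-model parameter (CPU demand
# per request, in seconds) from noisy utilization measurements U ≈ lambda * D.
# All numerical values are illustrative.

import random

def kalman_track(measurements, rates, q=1e-6, r=1e-4):
    """Return the sequence of demand estimates.

    measurements: observed utilizations U_k (0..1)
    rates:        known arrival rates lambda_k (requests/s)
    q, r:         process and measurement noise variances
    """
    x, p = 0.01, 1.0          # initial demand estimate and its variance
    estimates = []
    for u, lam in zip(measurements, rates):
        # Predict (random-walk model for the slowly drifting parameter).
        p = p + q
        # Update: measurement model U = lam * x, so H = lam.
        h = lam
        k = p * h / (h * p * h + r)      # Kalman gain
        x = x + k * (u - h * x)
        p = (1 - k * h) * p
        estimates.append(x)
    return estimates

if __name__ == "__main__":
    random.seed(0)
    true_demand = 0.020                   # 20 ms of CPU per request
    rates = [random.uniform(10, 40) for _ in range(50)]
    measurements = [min(1.0, lam * true_demand + random.gauss(0, 0.01)) for lam in rates]
    est = kalman_track(measurements, rates)
    print(f"final demand estimate: {est[-1]*1000:.1f} ms (true: {true_demand*1000:.1f} ms)")
```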
---
paper_title: Predicting the Behavior of a Highly Configurable Component Based Real-Time System
paper_content:
Software components and the technology supporting component based software engineering contribute greatly to the rapid development and configuration of systems for a variety of application domains. Such domains go beyond desktop office applications and information systems supporting e-commerce, but include systems having real-time performance requirements and critical functionality. Discussed in this paper are the results from an experiment that demonstrates the ability to predict deadline satisfaction of threads in a real-time system where the functionality performed is based on the configuration of the assembled software components. Presented is the method used to abstract the large, legacy code base of the system software and the application software components in the system; the model of those abstractions based on available architecture documentation and empirically-based, runtime observations; and the analysis of the predictions which yielded objective confidence in the observations and model created which formed the underlying basis for the predictions.
---
paper_title: Next generation data centers: trends and implications
paper_content:
In this talk we will discuss next generation data centers and the important impact they will have upon enterprise applications. Specifically, we will discuss the technical and economical trends motivating the move towards large scale distributed data centers consisting of tens of thousands of servers and hundreds of petabytes of storage. We will explain the roles and advantages of virtual machines and other virtualization technologies in these environments and also explore how they exacerbate the complexity of management and achieving predictable application behavior.To better illustrate issues emerging in such environments we will describe early experiments we conducted with a 1000-processor utility rendering service created for DreamWorks Animation that was used to render the films Shrek II and Madagascar. We will discuss the lessons learned from this experience.Next, we consider the trend towards service oriented architectures for enterprise application platforms. Service orientation provides for more flexible and agile information technology systems but further increases the complexity of management and behavior. We will explore the implications of composing services dynamically using an SOA approach.These trends for enterprise application platforms and the trends towards next generation data centers have helped to drive our current research agenda. Our goal is to enable the flexibility and agility offered by these new technologies while enabling cost effective management, predictable behavior and improved quality of service. We will give an overview of our research on model driven design for enterprise application infrastructure, automated deployment, and operations of distributed application services executing in a virtualized, shared resource pool within these next generation data centers.Finally, we will summarize the implications, challenges, and opportunities posed by these trends on academic and industrial research. In particular, we consider the impact on software performance and software performance engineering and pose important unanswered questions.
---
paper_title: Rule-based automatic software performance diagnosis and improvement
paper_content:
There are many advantages to analyzing performance at the design level, rather than waiting until system testing. However the necessary expertise in making and interpreting performance models may not be available, and the time for the analysis may be prohibitive. This work addresses both these difficulties through automation. Starting from an annotated specification in UML, it is possible to automatically derive a performance model. This work goes further to automate the performance analysis, and to explore design changes using diagnostic and design-change rules. The rules generate improved performance models which can be transformed back to an improved design. They untangle the effects of the system configuration (such as the allocation of processors) from limitations of the design, and they recommend both configuration and design improvements. This paper describes a prototype called Performance Booster (PB), which incorporates several rules, and demonstrates its feasibility by applying PB to the design of several case studies (tutorial and industrial). It also addresses how the changes at the level of a performance model should be implemented in the software.
---
paper_title: Performance Solutions A Practical Guide To Creating Responsive Scalable Software
paper_content:
This book presents the Software Performance Engineering (SPE) approach of Smith and Williams for building responsive, scalable software. It shows how to construct and solve quantitative performance models early in development, starting from software designs and usage scenarios, and it catalogues performance principles, patterns and antipatterns that help architects and developers meet performance objectives.
---
paper_title: A framework for automated generation of architectural feedback from software performance analysis
paper_content:
A rather complex task in the performance analysis of software architectures has always been the interpretation of the analysis results and the generation of feedback that may help developers to improve their architecture with alternative "better performing" solutions. This is due, on one side, to the fact that performance analysis results may be rather complex to interpret (e.g., they are often collections of different indices) and, on the other side, to the problem of coupling the "right" architectural alternatives to results, that are the alternatives that allow to improve the performance by resolving critical issues in the architecture. In this paper we propose a framework to interpret the performance analysis results and to propose alternatives to developers that improve their architectural designs. The interpretation of results is based on the ability to automatically recognize performance anti-patterns in the software architecture. The whole process of result interpretation and generation of architectural alternatives is supported by a tool based on the Layered Queueing Network notation.
---
paper_title: Experimentation in software engineering: an introduction
paper_content:
The purpose of Experimentation in Software Engineering: An Introduction is to introduce students, teachers, researchers, and practitioners to experimentation and experimental evaluation with a focus on software engineering. The objective is, in particular, to provide guidelines for performing experiments evaluating methods, techniques and tools in software engineering. The introduction is provided through a process perspective. The focus is on the steps that we go through to perform experiments and quasi-experiments. The process also includes other types of empirical studies. The motivation for the book emerged from the need for support we experienced when turning our software engineering research more experimental. Several books are available which either treat the subject in very general terms or focus on some specific part of experimentation; most focus on the statistical methods in experimentation. These are important, but there were few books elaborating on experimentation from a process perspective, none addressing experimentation in software engineering in particular. The scope of Experimentation in Software Engineering: An Introduction is primarily experiments in software engineering as a means for evaluating methods, techniques and tools. The book provides some information regarding empirical studies in general, including both case studies and surveys. The intention is to provide a brief understanding of these strategies and in particular to relate them to experimentation. Experimentation in Software Engineering: An Introduction is suitable for use as a textbook or a secondary text for graduate courses, and for researchers and practitioners interested in an empirical approach to software engineering.
---
paper_title: Performance prediction for black-box components using reengineered parametric behaviour models
paper_content:
In component-based software engineering, the response time of an entire application is often predicted from the execution durations of individual component services. However, these execution durations are specific for an execution platform (i.e. its resources such as CPU) and for a usage profile. Reusing an existing component on different execution platforms up to now required repeated measurements of the concerned components for each relevant combination of execution platform and usage profile, leading to high effort. This paper presents a novel integrated approach that overcomes these limitations by reconstructing behaviour models with platform-independent resource demands of bytecode components. The reconstructed models are parameterised over input parameter values. Using platform-specific results of bytecode benchmarking, our approach is able to translate the platform-independent resource demands into predictions for execution durations on a certain platform. We validate our approach by predicting the performance of a file sharing application.
---
paper_title: TESTEJB - A measurement framework for EJBs
paper_content:
Specification of Quality of Service (QoS) for components can only be done in relation to the QoS the components themselves are given by imported components. Developers as well as users need support in order to derive valid data for specification respectively for checking whether a selected component complies with its specification. In this paper we introduce the architecture of a measurement framework for EJBs giving such support and discuss in detail the measurement of the well understood property of response time.
---
paper_title: Automatic inclusion of middleware performance attributes into architectural UML software models
paper_content:
Distributed systems often use a form of communication middleware to cope with different forms of heterogeneity, including geographical spreading of the components, different programming languages and platform architectures, etc. The middleware, of course, impact the architecture and the performance of the system. This paper presents a model transformation framework to automatically include the architectural impact and the overhead incurred by using a middleware layer between several system components. Using this framework, architects can model the system in a middleware-independent fashion. Accurate, middleware-aware models can then be obtained automatically using a middleware model repository. The actual transformation algorithm is presented in more detail. The resulting models can be used to obtain performance models of the system. From those performance models, early indications of the system performance can be extracted.
---
paper_title: Detecting Performance Antipatterns in Component Based Enterprise Systems
paper_content:
We introduce an approach for automatic detection of performance antipatterns. The approach is based on a number of advanced monitoring and analysis techniques. The advanced analysis is used to identify relationships and patterns in the monitored data. This information is subsequently used to reconstruct a design model of the underlying system, which is loaded into a rule engine in order to identify predefined antipatterns. We give results of applying this approach to identify a number of antipatterns in two JEE applications. Finally, this work also categorises JEE antipatterns into categories based on the data needed to detect them.
---
paper_title: Architecture-Based Software Reliability Analysis: Overview and Limitations
paper_content:
With the growing size and complexity of software applications, research in the area of architecture-based software reliability analysis has gained prominence. The purpose of this paper is to provide an overview of the existing research in this area, critically examine its limitations, and suggest ways to address the identified limitations
---
paper_title: A model-driven approach to performability analysis of dynamically reconfigurable component-based systems
paper_content:
Dynamic reconfiguration techniques appear promising to build component-based (C-B) systems for application domains that have strong adaptability requirements, like the mobile and the service-oriented computing domains. However, introducing dynamic reconfiguration features into a C-B application makes even more challenging the design and verification of functional and non functional requirements. Our goal is to support the model-based analysis of the effectiveness of reconfigurable C-B applications, with a focus on the assessment of the non-functional performance and reliability attributes. As a first step towards this end, we address the issue of selecting suitable analysis models for reconfigurable systems, suggesting to this end the use of joint performance and reliability (performability) models. Furthermore, we propose a model-driven approach to automatically transform a design model into an analysis model. For this purpose, we build on the existence of intermediate languages that have been proposed to facilitate this transformation and we extend one of them, to capture the core features (from a performance/reliability viewpoint) of a dynamically reconfigurable C-B system. Finally, we illustrate by a simple application example the main steps of the proposed approach.
---
paper_title: Hybrid performance modeling approach for network intensive distributed software
paper_content:
When designing and evaluating software architectures and network facilities for hosting demanding distributed applications, taking performance considerations into account is essential. A key factor in assessing the performance of such a distributed system is the network latency and its relation to the application behaviour. In this respect, it is important to include the performance impact of the network into the performance models used during the entire design cycle of the system. A framework is proposed that allows to model both the software and the network components separately and extracts a single set of performance estimates for the entire system. This has the advantage of allowing the network and software aspects to be modeled separately using the modeling languages and tools most suited to those system aspects. A case study is presented to illustrate the use of the framework and its usefulness in predicting system performance.
---
paper_title: Modelling of input-parameter dependency for performance predictions of component-based embedded systems
paper_content:
The guaranty of meeting the timing constraints during the design phase of real-time component-based embedded software has not been realized. To satisfy real-time requirements, we need to understand behaviour and resource usage of a system over time. In this paper, we address both aspects in detail by observing the influence of input data on the system behaviour and performance. We extend an existing scenario simulation approach that features the modelling of input parameter dependencies and simulating the execution of the models. The approach enables specification of the dependencies in the component models, as well as initialisation of the parameters in the application scenario model. This gives a component-based application designer an explorative possibility of going through all possible execution scenarios with different parameter initialisations, and finding the worst-case scenarios where the predicted performance does not satisfy the requirements. The identification of these scenarios is important because it avoids system redesign at the later stage. In addition, the conditional behaviour and resource usage modelling with respect to the input data provide more accurate prediction.
---
|
Title: Performance evaluation of component-based software systems: A survey
Section 1: Introduction
Description 1: This section provides an overview of the paper, discussing the goal of classifying performance prediction and measurement approaches for component-based software systems proposed over the last ten years.
Section 2: Software Component Performance
Description 2: This section introduces the key terms and concepts needed to understand the challenges of performance evaluation methods for component-based systems.
Section 3: Performance Evaluation Methods
Description 3: This section summarizes the main approaches to performance evaluation for component-based software systems, including prediction approaches based on UML and proprietary meta-models, middleware-focused methods, and formal specification methods.
Section 4: Critical Reflection
Description 4: This section offers a critical analysis of the surveyed approaches, discussing their benefits and drawbacks, and details the factors impacting component performance and their descriptions.
Section 5: Future Directions
Description 5: This section outlines possible future research directions in the field of performance evaluation of component-based systems, including model expressiveness, support for runtime and dynamic architectures, and domain-specific methods.
Section 6: Conclusions
Description 6: This section concludes the survey, summarizing the current state of the field and the importance of developing mixed approaches that incorporate both measurements and models to handle the complexity of component-based systems.
|
A Survey of the Path Partition Conjecture
| 9 |
---
paper_title: On a cycle partition problem
paper_content:
Let G be any graph and let c(G) denote the circumference of G. We conjecture that for every pair c1, c2 of positive integers satisfying c1 + c2 = c(G), the vertex set of G admits a partition into two sets V1 and V2, such that Vi induces a graph of circumference at most ci, i = 1, 2. We establish various results in support of the conjecture; e.g. it is observed that planar graphs, claw-free graphs, certain important classes of perfect graphs, and graphs without too many intersecting long cycles, satisfy the conjecture. This work is inspired by a well-known, long-standing, analogous conjecture involving paths.
---
paper_title: Uniquely (m, k)τ-colourable graphs and k − τ-saturated graphs
paper_content:
Abstract For a graph G , the path number τ ( G ) is defined as the order of a longest path in G . An ( m , k ) τ -colouring of a graph H is a partition of the vertex set of H into m subsets such that each subset induces a subgraph of H for which τ is at most k . The k − τ -chromatic number χ k τ ( H ) is the least m for which H has an ( m , k ) τ -colouring. A graph H is uniquely ( m , k ) τ -colourable if χ k τ ( H ) = m and there is only one partition of the vertex set of H which is an ( m , k ) τ -colouring of H . A graph G is called k − τ -saturated if τ ( G ) ⩽ k and τ ( G + e ) ⩾ k + 1 for all e ϵ E ( G ). For k = 1, the graphs obtained by taking the join of k − τ -saturated graphs (which are empty graphs in this case) are known to be uniquely colourable graphs. In this paper we construct uniquely ( m , k ) τ -colourable graphs (for all positive integers m and k ) using k − τ -saturated graphs in a similar fashion. As a corollary we characterise those p for which there exists a uniquely ( m , k ) τ -colourable graph of order p .
---
paper_title: P_{m}-saturated graphs with minimum size
paper_content:
By \(P_m\) we denote a path of order \(m\). A graph \(G\) is said to be \(P_m\)-saturated if \(G\) has no subgraph isomorphic to \(P_m\) and adding any new edge to \(G\) creates a \(P_m\) in \(G\). In 1986 L. Kaszonyi and Zs. Tuza considered the following problem: for given \(m\) and \(n\) find the minimum size \(sat(n;P_m)\) of \(P_m\)-saturated graph and characterize the graphs of \(Sat(n;P_m)\) - the set of \(P_m\)-saturated graphs of minimum size. They have solved this problem for \(n\geq a_m\) where \(a_m=\begin{cases}3\cdot 2^{k-1}-2 &\quad\text{ if }\quad m=2k,\, k\gt 2\\ 2^{k+1}-2 &\quad\text{ if }\quad m=2k+1,\, k\geq 2\end{cases}\). We define \(b_m=\begin{cases}3\cdot 2^{k-2} &\quad\text{ if }\quad m=2k,\, k\geq 3\\ 3\cdot 2^{k-1}-1 &\quad\text{ if }\quad m=2k+1,\, k\geq 3\end{cases}\) and give \(sat(n;P_m)\) and \(Sat(n;P_m)\) for \(m\geq 6\) and \(b_m\leq n\leq a_m\).
---
paper_title: A note on a cycle partition problem
paper_content:
Abstract Let G be any graph, and let c(G) denote the circumference of G. If, for every pair c1, c2 of positive integers satisfying c1 + c2 = c(G), the vertex set of G admits a partition into two sets V1 and V2 such that Vi induces a graph of circumference at most ci, i = 1, 2, then G is said to be c-partitionable. In [M.H. Nielsen, On a cycle partition problem, Discrete Math. 308 (2008) 6339–6347], it is conjectured that every graph is c-partitionable. In this paper, we verify this conjecture for a graph with a longest cycle that is a dominating cycle. Moreover, we prove that G is c-partitionable if c(G) ≥ |V(G)| − 3.
---
paper_title: Saturated graphs with minimal number of edges
paper_content:
Let F = {F1,…} be a given class of forbidden graphs. A graph G is called F-saturated if no Fi ∈ F is a subgraph of G but the addition of an arbitrary new edge gives a forbidden subgraph. In this paper the minimal number of edges in F-saturated graphs is examined. General estimations are given and the structure of minimal graphs is described for some special forbidden graphs (stars, paths, m pairwise disjoint edges).
---
paper_title: On a tree-partition problem
paper_content:
Abstract If T = (V, E) is a tree then −T denotes the additive hereditary property consisting of all graphs that do not contain T as a subgraph. For an arbitrary vertex v of T we deal with a partition of T into two trees T1, T2, so that V(T1) ∩ V(T2) = {v}, V(T1) ∪ V(T2) = V(T), E(T1) ∩ E(T2) = ∅, E(T1) ∪ E(T2) = E(T), T[V(T1) \ {v}] ⊆ E(T1) and T[V(T2) \ {v}] ⊆ E(T2). We call such a partition a Tv-partition of T. We study the following problem: Given a graph G belonging to −T. Is it true that for any Tv-partition T1, T2 of T there exists a partition {V1, V2} of the vertices of G such that G[V1] ∈ −T1 and G[V2] ∈ −T2? This problem provides a natural generalization of the Δ-partition problem studied by L. Lovasz ([L. Lovasz, On decomposition of graphs. Studia Sci. Math. Hungar. 1 (1966) 237–238]) and the Path Partition Conjecture formulated by P. Mihok ([P. Mihok, Problem 4, in: M. Borowiecki, Z. Skupien (Eds.), Graphs, Hypergraphs and Matroids, Zielona Gora, 1985, p. 86]). We present some partial results and a contribution to the Path Kernel Conjecture that was formulated in connection with the Path Partition Conjecture.
---
paper_title: A note on the Path Kernel Conjecture
paper_content:
Let τ(G) denote the number of vertices in a longest path in a graph G=(V,E). A subset K of V is called a P_n-kernel of G if τ(G[K]) ≤ n−1 and every vertex v ∈ V−K is adjacent to an end-vertex of a path of order n−1 in G[K]. It is known that every graph has a P_n-kernel for every positive integer n ≤ 9. R. Aldred and C. Thomassen in [R.E.L. Aldred, C. Thomassen, Graphs with not all possible path-kernels, Discrete Math. 285 (2004) 297-300] proved that there exists a graph which contains no P_364-kernel. In this paper, we generalise this result. We construct a graph with no P_155-kernel and for each integer l ≥ 0 we provide a construction of a graph G containing no P_{τ(G)−l}-kernel.
---
paper_title: Graphs with not all possible path-kernels
paper_content:
Abstract The Path Partition Conjecture states that the vertices of a graph G with longest path of length c may be partitioned into two parts X and Y such that the longest path in the subgraph of G induced by X has length at most a and the longest path in the subgraph of G induced by Y has length at most b, where a + b = c. Moreover, for each pair a, b such that a + b = c there is a partition with this property. A stronger conjecture by Broere, Hajnal and Mihok, raised as a problem by Mihok in 1985, states the following: For every graph G and each integer k, c ⩾ k ⩾ 2, there is a partition of V(G) into two parts (K, K̄) such that the subgraph G[K] of G induced by K has no path on more than k−1 vertices and each vertex in K̄ is adjacent to an endvertex of a path on k−1 vertices in G[K]. In this paper we provide a counterexample to this conjecture.
---
paper_title: Graph coloring problems
paper_content:
Planar Graphs. Graphs on Higher Surfaces. Degrees. Critical Graphs. The Conjectures of Hadwiger and Hajos. Sparse Graphs. Perfect Graphs. Geometric and Combinatorial Graphs. Algorithms. Constructions. Edge Colorings. Orientations and Flows. Chromatic Polynomials. Hypergraphs. Infinite Chromatic Graphs. Miscellaneous Problems. Indexes.
---
paper_title: A survey of hereditary properties of graphs
paper_content:
In this paper we survey results and open problems on the structure of additive and hereditary properties of graphs. The important role of vertex partition problems, in particular the existence of uniquely partitionable graphs and reducible properties of graphs in this structure, is emphasized. Many related topics, including questions on the complexity of related problems, are investigated.
---
paper_title: Cycles in k-traceable oriented graphs
paper_content:
A digraph of order at least k is termed k-traceable if each of its subdigraphs of order k is traceable. It turns out that several properties of tournaments-i.e., the 2-traceable oriented graphs-extend to k-traceable oriented graphs for small values of k. For instance, the authors together with O. Oellermann have recently shown that for k=2,3,4,5,6, all k-traceable oriented graphs are traceable. Moon [J.W. Moon, On subtournaments of a tournament, Canad. Math. Bull. 9(3) (1966) 297-301] observed that every nontrivial strong tournament T is vertex-pancyclic-i.e., through each vertex there is a cycle of every length from 3 up to the order of T. The present paper reports results pertaining to various cycle properties of strong k-traceable oriented graphs and explores the extent to which pancyclicity is retained by strong k-traceable oriented graphs. For each k>=2 there are infinitely many k-traceable oriented graphs-e.g. tournaments. However, we establish an upper bound (linear in k) on the order of k-traceable oriented graphs having a strong component with girth greater than 3. As an application of our findings, we show that the Path Partition Conjecture holds for 1-deficient oriented graphs having a strong component with girth at least 6. (A digraph is 1-deficient if its order is exactly one more than the order of its longest paths.)
---
paper_title: A note on path kernels and partitions
paper_content:
The detour order of a graph G, denoted by τ(G), is the order of a longest path in G. A subset S of V(G) is called a P_n-kernel of G if τ(G[S]) ≤ n−1 and every vertex v ∈ V(G)−S is adjacent to an end-vertex of a path of order n−1 in G[S]. A partition of the vertex set of G into two sets, A and B, such that τ(G[A]) ≤ a and τ(G[B]) ≤ b is called an (a,b)-partition of G. In this paper we show that any graph with girth g has a P_{n+1}-kernel for every n < 3g/2 − 1. Furthermore, if τ(G) = a+b, 1 ≤ a ≤ b, and G has girth greater than (2/3)(a+1), then G has an (a,b)-partition.
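The definitions of detour order and (a,b)-partition translate directly into a small brute-force check, practical only for tiny graphs; the adjacency-set encoding and the 5-cycle example below are our own illustration, not part of the cited paper.
```python
# Brute-force illustrations of the detour order tau(G) and of an (a,b)-partition.
# Only practical for very small graphs.

from itertools import combinations

def detour_order(adj):
    """Order (number of vertices) of a longest path in the graph given as
    a dict: vertex -> set of neighbours."""
    best = 0

    def extend(path, visited):
        nonlocal best
        best = max(best, len(path))
        for w in adj[path[-1]]:
            if w not in visited:
                extend(path + [w], visited | {w})

    for v in adj:
        extend([v], {v})
    return best

def induced(adj, vertices):
    return {v: adj[v] & vertices for v in vertices}

def is_ab_partition(adj, part_a, a, b):
    """Check whether (A, V-A) is an (a,b)-partition of the graph."""
    part_b = set(adj) - part_a
    return (detour_order(induced(adj, part_a)) <= a and
            detour_order(induced(adj, part_b)) <= b)

if __name__ == "__main__":
    # A 5-cycle: tau(C5) = 5, so we look for a (2,3)-partition.
    c5 = {0: {1, 4}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3, 0}}
    assert detour_order(c5) == 5
    for size in range(len(c5) + 1):
        for part_a in map(set, combinations(c5, size)):
            if is_ab_partition(c5, part_a, 2, 3):
                print("(2,3)-partition found:", sorted(part_a), sorted(set(c5) - part_a))
                raise SystemExit
```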
---
paper_title: Longest path partitions in generalizations of tournaments
paper_content:
We consider the so-called Path Partition Conjecture for digraphs which states that for every digraph, D, and every choice of positive integers, λ1, λ2, such that λ1 + λ2 equals the order of a longest directed path in D, there exists a partition of D into two digraphs, D1 and D2, such that the order of a longest path in Di is at most λi, for i=1,2. We prove that certain classes of digraphs, which are generalizations of tournaments, satisfy the Path Partition Conjecture and that some of the classes even satisfy the conjecture with equality.
---
paper_title: Stable set meeting every longest path
paper_content:
Laborde, Payan and Xuong conjectured that every digraph has a stable set meeting every longest path. We prove that this conjecture holds for digraphs with stability number at most 2.
---
paper_title: A Traceability Conjecture for Oriented Graphs
paper_content:
A (di)graph $G$ of order $n$ is $k$-traceable (for some $k$, $1\leq k\leq n$) if every induced sub(di)graph of $G$ of order $k$ is traceable. It follows from Dirac's degree condition for hamiltonicity that for $k\geq2$ every $k$-traceable graph of order at least $2k-1$ is hamiltonian. The same is true for strong oriented graphs when $k=2,3,4,$ but not when $k\geq5$. However, we conjecture that for $k\geq2$ every $k$-traceable oriented graph of order at least $2k-1$ is traceable. The truth of this conjecture would imply the truth of an important special case of the Path Partition Conjecture for Oriented Graphs. In this paper we show the conjecture is true for $k \leq 5$ and for certain classes of graphs. In addition we show that every strong $k$-traceable oriented graph of order at least $6k-20$ is traceable. We also characterize those graphs for which all walkable orientations are $k$-traceable.
---
paper_title: Progress on the Traceability Conjecture for Oriented Graphs
paper_content:
A digraph is k -traceable if each of its induced subdigraphs of order k is traceable. The Traceability Conjecture is that for k ≥ 2 every k -traceable oriented graph of order at least 2k-1 is traceable. The conjecture has been proved for k ≤ 5 . We prove that it also holds for k=6 .
---
paper_title: Combinatorics and graph theory
paper_content:
Graph Theory: Introductory Concepts.- Trees.- Planarity.- Colorings.- Matchings.- Ramsey Theory.- References Combinatorics: Three Basic Problems.- Binomial Coefficients.- The Principle of Inclusion and Exclusion.- Generating Functions.- Polya's Theory of Counting.- More Numbers.- Stable Marriage.- References Infinite Combinatorics and Graph Theory: Pigeons and Trees.- Ramsey Revisited.- ZFC.- The Return of der Koenig.- Ordinals, Cardinals, and Many Pigeons.- Incompleteness and Cardinals.- Weakly Compact Cardinals.- Finite Combinatorics with Infinite Consequences.- Points of Departure.- References.
---
paper_title: The Path Partition Conjecture is true for some generalizations of tournaments
paper_content:
Abstract The Path Partition Conjecture for digraphs states that for every digraph D, and every choice of positive integers λ1, λ2 such that λ1 + λ2 equals the order of a longest directed path in D, there exists a partition of D in two subdigraphs D1, D2 such that the order of the longest path in Di is at most λi for i = 1, 2. We present sufficient conditions for a digraph to satisfy the Path Partition Conjecture. Using these results, we prove that strong path mergeable, arc-locally semicomplete, strong 3-quasi-transitive, strong arc-locally in-semicomplete and strong arc-locally out-semicomplete digraphs satisfy the Path Partition Conjecture. Some previous results are generalized.
---
paper_title: Traceability of k-traceable oriented graphs
paper_content:
A digraph of order at least k is k-traceable if each of its subdigraphs of order k is traceable. We note that 2-traceable oriented graphs are tournaments and for k>=3, k-traceable oriented graphs can be regarded as generalized tournaments. We show that for 2<=k<=6 every k-traceable oriented graph is traceable, thus extending the well-known fact that every tournament is traceable. This result does not extend to k=7. In fact, for every k>=7, except possibly for k=8 or 10, there exist k-traceable oriented graphs that are nontraceable. However, we show that for every k>=2 there exists a smallest integer t(k) such that every k-traceable oriented graph of order at least t(k) is traceable.
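For small examples, k-traceability can be checked by brute force straight from the definition; the out-neighbour dictionary encoding and the sample transitive tournament below are illustrative choices of ours, not taken from the paper.
```python
# Brute-force check of k-traceability of a digraph (dict: vertex -> set of
# out-neighbours). Feasible only for small orders.

from itertools import combinations, permutations

def is_traceable(adj):
    """True if the digraph has a (directed) Hamiltonian path."""
    vertices = list(adj)
    return any(all(order[i + 1] in adj[order[i]] for i in range(len(order) - 1))
               for order in permutations(vertices))

def induced(adj, subset):
    return {v: adj[v] & subset for v in subset}

def is_k_traceable(adj, k):
    """True if every induced subdigraph of order k is traceable."""
    if len(adj) < k:
        return False
    return all(is_traceable(induced(adj, set(sub)))
               for sub in combinations(adj, k))

if __name__ == "__main__":
    # A transitive tournament on 4 vertices: i -> j whenever i < j.
    t4 = {i: {j for j in range(4) if j > i} for i in range(4)}
    print(is_k_traceable(t4, 2))  # True: every tournament is 2-traceable
    print(is_k_traceable(t4, 3))  # True
```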
---
paper_title: Digraphs -- Theory, Algorithms and Applications
paper_content:
The theory of directed graphs has developed enormously over recent decades, yet this book (first published in 2000) remains the only book to cover more than a small fraction of the results. New research in the field has made a second edition a necessity. Substantially revised, reorganised and updated, the book now comprises eighteen chapters, carefully arranged in a straightforward and logical manner, with many new results and open problems. As well as covering the theoretical aspects of the subject, with detailed proofs of many important results, the authors present a number of algorithms, and whole chapters are devoted to topics such as branchings, feedback arc and vertex sets, connectivity augmentations, sparse subdigraphs with prescribed connectivity, and also packing, covering and decompositions of digraphs. Throughout the book, there is a strong focus on applications which include quantum mechanics, bioinformatics, embedded computing, and the travelling salesman problem. Detailed indices and topic-oriented chapters ease navigation, and more than 650 exercises, 170 figures and 150 open problems are included to help immerse the reader in all aspects of the subject. Digraphs is an essential, comprehensive reference for undergraduate and graduate students, and researchers in mathematics, operations research and computer science. It will also prove invaluable to specialists in related areas, such as meteorology, physics and computational biology.
---
paper_title: The Path Partition Conjecture is true for claw-free graphs
paper_content:
The detour order of a graph G, denoted by τ(G), is the order of a longest path in G. The Path Partition Conjecture (PPC) is the following: If G is any graph and (a,b) any pair of positive integers such that τ(G) = a + b, then the vertex set of G has a partition (A,B) such that τ(G[A]) ≤ a and τ(G[B]) ≤ b. It is shown that the PPC holds for claw-free graphs.
---
paper_title: A survey of hereditary properties of graphs
paper_content:
In this paper we survey results and open problems on the structure of additive and hereditary properties of graphs. The important role of vertex partition problems, in particular the existence of uniquely partitionable graphs and reducible properties of graphs in this structure, is emphasized. Many related topics, including questions on the complexity of related problems, are investigated.
---
paper_title: A note on path kernels and partitions
paper_content:
The detour order of a graph G, denoted by τ(G), is the order of a longest path in G. A subset S of V(G) is called a P_n-kernel of G if τ(G[S]) ≤ n-1 and every vertex v ∈ V(G)-S is adjacent to an end-vertex of a path of order n-1 in G[S]. A partition of the vertex set of G into two sets, A and B, such that τ(G[A]) ≤ a and τ(G[B]) ≤ b is called an (a,b)-partition of G. In this paper we show that any graph with girth g has a P_{n+1}-kernel for every n < 3g/2 - 1. Furthermore, if τ(G) = a + b, 1 ≤ a ≤ b, and G has girth greater than (2/3)(a+1), then G has an (a,b)-partition.
---
paper_title: On a Closure Concept in Claw-Free Graphs
paper_content:
If G is a claw-free graph, then there is a graph cl(G) such that (i) G is a spanning subgraph of cl(G), (ii) cl(G) is a line graph of a triangle-free graph, and (iii) the length of a longest cycle in G and in cl(G) is the same. A sufficient condition for hamiltonicity in claw-free graphs, the equivalence of some conjectures on hamiltonicity in 2-tough graphs and the hamiltonicity of 7-connected claw-free graphs are obtained as corollaries.
---
paper_title: Closure and stable Hamiltonian properties in claw-free graphs
paper_content:
In the class of k-connected claw-free graphs, we study the stability of some Hamiltonian properties under a closure operation introduced by the third author. We prove that (i) the properties of pancyclicity, vertex pancyclicity and cycle extendability are not stable for any k (i.e., for any of these properties there is an infinite family of graphs Gk of arbitrarily high connectivity k such that the closure of Gk has the property while the graph Gk does not); (ii) traceability is a stable property even for k = 1; (iii) homogeneous traceability is not stable for k = 2 (although it is stable for k = 7). The article is concluded with several open questions concerning stability of homogeneous traceability and Hamiltonian connectedness.
---
paper_title: Cycles in k-traceable oriented graphs
paper_content:
A digraph of order at least k is termed k-traceable if each of its subdigraphs of order k is traceable. It turns out that several properties of tournaments (i.e., the 2-traceable oriented graphs) extend to k-traceable oriented graphs for small values of k. For instance, the authors together with O. Oellermann have recently shown that for k=2,3,4,5,6, all k-traceable oriented graphs are traceable. Moon [J.W. Moon, On subtournaments of a tournament, Canad. Math. Bull. 9(3) (1966) 297-301] observed that every nontrivial strong tournament T is vertex-pancyclic, i.e., through each vertex there is a cycle of every length from 3 up to the order of T. The present paper reports results pertaining to various cycle properties of strong k-traceable oriented graphs and explores the extent to which pancyclicity is retained by strong k-traceable oriented graphs. For each k>=2 there are infinitely many k-traceable oriented graphs, e.g. tournaments. However, we establish an upper bound (linear in k) on the order of k-traceable oriented graphs having a strong component with girth greater than 3. As an application of our findings, we show that the Path Partition Conjecture holds for 1-deficient oriented graphs having a strong component with girth at least 6. (A digraph is 1-deficient if its order is exactly one more than the order of its longest paths.)
---
paper_title: Longest path partitions in generalizations of tournaments
paper_content:
We consider the so-called Path Partition Conjecture for digraphs which states that for every digraph, D, and every choice of positive integers, λ1, λ2, such that λ1 + λ2 equals the order of a longest directed path in D, there exists a partition of D into two digraphs, D1 and D2, such that the order of a longest path in Di is at most λi, for i=1,2. We prove that certain classes of digraphs, which are generalizations of tournaments, satisfy the Path Partition Conjecture and that some of the classes even satisfy the conjecture with equality.
---
paper_title: Stable set meeting every longest path
paper_content:
Laborde, Payan and Xuong conjectured that every digraph has a stable set meeting every longest path. We prove that this conjecture holds for digraphs with stability number at most 2.
---
paper_title: A Traceability Conjecture for Oriented Graphs
paper_content:
A (di)graph $G$ of order $n$ is $k$-traceable (for some $k$, $1\leq k\leq n$) if every induced sub(di)graph of $G$ of order $k$ is traceable. It follows from Dirac's degree condition for hamiltonicity that for $k\geq2$ every $k$-traceable graph of order at least $2k-1$ is hamiltonian. The same is true for strong oriented graphs when $k=2,3,4,$ but not when $k\geq5$. However, we conjecture that for $k\geq2$ every $k$-traceable oriented graph of order at least $2k-1$ is traceable. The truth of this conjecture would imply the truth of an important special case of the Path Partition Conjecture for Oriented Graphs. In this paper we show the conjecture is true for $k \leq 5$ and for certain classes of graphs. In addition we show that every strong $k$-traceable oriented graph of order at least $6k-20$ is traceable. We also characterize those graphs for which all walkable orientations are $k$-traceable.
---
paper_title: Progress on the Traceability Conjecture for Oriented Graphs
paper_content:
A digraph is k -traceable if each of its induced subdigraphs of order k is traceable. The Traceability Conjecture is that for k ≥ 2 every k -traceable oriented graph of order at least 2k-1 is traceable. The conjecture has been proved for k ≤ 5 . We prove that it also holds for k=6 .
---
|
Title: A Survey of the Path Partition Conjecture
Section 1: Introduction
Description 1: Introduce the Path Partition Conjecture (PPC), its significance, history, and main objectives of the survey.
Section 2: The PPC and the Lattice of Additive Hereditary Properties
Description 2: Define key notations and terminologies used in the study, and discuss the relevance of additive hereditary properties to the PPC.
Section 3: W_p-maximal Sets and P_p+2-kernels
Description 3: Explore the concept of W_p-maximal sets and P_p+2-kernels, including related conjectures and theorems.
Section 4: The PPC and Cycle Structure
Description 4: Present the relationship between cycle structures in graphs and the PPC, including important results and corollaries.
Section 5: The PPC Restricted to Special Classes of Graphs
Description 5: Discuss the PPC’s validity for special classes of graphs such as claw-free graphs and planar graphs, including theorems and significant results.
Section 6: The PPC for t-deficient Graphs
Description 6: Investigate the PPC’s implications for t-deficient graphs, providing key theorems and asymptotic results.
Section 7: Generalized Chromatic Numbers
Description 7: Examine the connection between the PPC and generalized chromatic numbers, as well as related conjectures and bounds.
Section 8: Directed and Oriented Analogues of the PPC
Description 8: Briefly discuss the directed and oriented versions of the PPC and their challenges and results achieved thus far.
Section 9: Conclusion
Description 9: Summarize the main findings of the survey, the current state of the PPC, and potential directions for future research.
|
Image Registration: A Review of Elastic Registration Methods Applied to Medical Imaging
| 6 |
---
paper_title: Image Registration using Fractional Fourier Transform
paper_content:
Image registration (IR) is a fundamental task in many image processing applications such as medical diagnosis, satellite imaging, and super-resolution image reconstruction. Fourier transform based methods have also been in use for IR purposes, for both shifted and rotated images, using the phase correlation method (Brown, 1992). As the fractional Fourier transform (FRFT) is a generalization of the conventional Fourier transform, it is natural to extend its use to IR applications. In this paper, the authors propose the use of the FRFT in IR problems involving either linear shifts or pure rotations in the given images to be registered. Simulation results of the proposed techniques are also presented.
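Since the paper extends the classical phase-correlation approach, a minimal sketch of that FFT-based baseline may help fix ideas (this is not the fractional-transform variant proposed in the paper; function and variable names are illustrative):

```python
import numpy as np

def phase_correlation_shift(img_a, img_b):
    """Estimate the integer (dy, dx) translation such that img_a ~ np.roll(img_b, (dy, dx))."""
    F_a, F_b = np.fft.fft2(img_a), np.fft.fft2(img_b)
    cross = F_a * np.conj(F_b)
    cross /= np.abs(cross) + 1e-12            # keep only the phase difference
    corr = np.fft.ifft2(cross).real           # correlation surface with a single sharp peak
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Peak positions beyond half the image size correspond to negative shifts.
    return tuple(int(p) if p <= s // 2 else int(p) - s for p, s in zip(peak, corr.shape))

if __name__ == "__main__":
    img = np.random.rand(128, 128)
    shifted = np.roll(img, shift=(5, -9), axis=(0, 1))
    print(phase_correlation_shift(shifted, img))   # -> (5, -9)
```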
---
paper_title: Intra-subject Elastic Registration of 3D Ultrasound Images
paper_content:
3D registration of ultrasound images is an important and fast-growing research area with various medical applications, such as image-guided radiotherapy and surgery. However, this registration process remains extremely challenging due to the deformation of soft tissue and the existence of speckles in these images. This paper presents a technique for intra-subject, intra-modality elastic registration of 3D ultrasound images. Using the general concept of attribute vectors, we define the corresponding voxels in the fixed and moving images. Our method does not require presegmentation and does not employ any numerical optimization procedure. As the computational requirements are minimal, the method has potential use in real-time applications. The technique is implemented and tested on 3D ultrasound images of human liver, captured by a 3D ultrasound transducer. The results show that the method is sufficiently accurate and robust even in cases where artifacts such as shadows exist in the ultrasound data.
---
paper_title: Nonrigid registration using free-form deformations: application to breast MR images
paper_content:
In this paper the authors present a new approach for the nonrigid registration of contrast-enhanced breast MRI. A hierarchical transformation model of the motion of the breast has been developed. The global motion of the breast is modeled by an affine transformation while the local breast motion is described by a free-form deformation (FFD) based on B-splines. Normalized mutual information is used as a voxel-based similarity measure which is insensitive to intensity changes as a result of the contrast enhancement. Registration is achieved by minimizing a cost function, which represents a combination of the cost associated with the smoothness of the transformation and the cost associated with the image similarity. The algorithm has been applied to the fully automated registration of three-dimensional (3-D) breast MRI in volunteers and patients. In particular, the authors have compared the results of the proposed nonrigid registration algorithm to those obtained using rigid and affine registration techniques. The results clearly indicate that the nonrigid registration algorithm is much better able to recover the motion and deformation of the breast than rigid or affine registration algorithms.
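Normalized mutual information, the similarity measure used in this abstract, can be estimated from a joint intensity histogram; a small illustrative sketch (using the common definition NMI = (H(A)+H(B))/H(A,B); the bin count is an arbitrary choice, not taken from the paper) is:

```python
import numpy as np

def normalized_mutual_information(img_a, img_b, bins=32):
    """NMI = (H(A) + H(B)) / H(A, B), estimated from a joint histogram."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)

    def entropy(p):
        p = p[p > 0]
        return -np.sum(p * np.log(p))

    return (entropy(px) + entropy(py)) / entropy(pxy.ravel())

if __name__ == "__main__":
    a = np.random.rand(64, 64)
    print(normalized_mutual_information(a, a))                       # ~2: identical images
    print(normalized_mutual_information(a, np.random.rand(64, 64)))  # ~1: unrelated images
```
In a registration loop this value would be maximized over the transformation parameters; here it is only evaluated, not optimized.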
---
paper_title: Segmentation of ultrasound images by using wavelet transform
paper_content:
This paper presents a new feature extraction method for the segmentation of ultrasound images. The wavelet transform is proposed for determination of the textures in the ultrasound images. Elements of the feature vectors are formed by the wavelet coefficients at several decomposition levels. In this study, the incremental self-organized neural network (INeN) is proposed as the classifier. The classification performance is increased by using the wavelet transform and the INeN together.
---
paper_title: A review of cardiac image registration methods
paper_content:
In this paper, the current status of cardiac image registration methods is reviewed. The combination of information from multiple cardiac image modalities, such as magnetic resonance imaging, computed tomography, positron emission tomography, single-photon emission computed tomography, and ultrasound, is of increasing interest in the medical community for physiologic understanding and diagnostic purposes. Registration of cardiac images is a more complex problem than brain image registration because the heart is a nonrigid moving organ inside a moving body. Moreover, as compared to the registration of brain images, the heart exhibits much fewer accurate anatomical landmarks. In a clinical context, physicians often mentally integrate image information from different modalities. Automatic registration, based on computer programs, might, however, offer better accuracy and repeatability and save time.
---
paper_title: Robust registration for computer-integrated orthopedic surgery: Laboratory validation and clinical experience
paper_content:
In order to provide navigational guidance during computer-integrated orthopedic surgery, the anatomy of the patient must first be registered to a medical image or model. A common registration approach is to digitize points from the surface of a bone and then find the rigid transformation that best matches the points to the model by constrained optimization. Many optimization criteria, including a least-squares objective function, perform poorly if the data include spurious data points (outliers). This paper describes a statistically robust, surface-based registration algorithm that we have developed for orthopedic surgery. To find an initial estimate, the user digitizes points from predefined regions of bone that are large enough to reliably locate even in the absence of anatomic landmarks. Outliers are automatically detected and managed by integrating a statistically robust M-estimator with the iterative-closest-point algorithm. Our in vitro validation method simulated the registration process by drawing registration data points from several sets of densely digitized surface points. The method has been used clinically in computer-integrated surgery for high tibial osteotomy, distal radius osteotomy, and excision of osteoid osteoma.
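A compact sketch of the general idea of combining iterative-closest-point matching with robust, M-estimator-style down-weighting of outliers is given below; it uses a Huber-type weight and a weighted Kabsch update as a generic illustration, not the authors' exact formulation:

```python
import numpy as np
from scipy.spatial import cKDTree

def weighted_rigid_fit(src, dst, w):
    """Weighted least-squares rigid transform (R, t) mapping src -> dst (Kabsch)."""
    w = w / w.sum()
    src_mean = (w[:, None] * src).sum(axis=0)
    dst_mean = (w[:, None] * dst).sum(axis=0)
    H = (w[:, None] * (src - src_mean)).T @ (dst - dst_mean)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, dst_mean - R @ src_mean

def robust_icp(points, model, iters=30, huber_k=2.0):
    """Align digitized surface points to a dense model point cloud."""
    R, t = np.eye(3), np.zeros(3)
    tree = cKDTree(model)
    for _ in range(iters):
        dist, idx = tree.query(points @ R.T + t)        # closest model point per sample
        scale = np.median(dist) + 1e-9                  # robust residual scale
        r = dist / scale
        w = np.where(r <= huber_k, 1.0, huber_k / r)    # Huber-style outlier down-weighting
        R, t = weighted_rigid_fit(points, model[idx], w)
    return R, t
```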
---
paper_title: A fast image registration technique for motion artifact reduction in DSA
paper_content:
In digital subtraction angiography (DSA), patient motion is the primary cause of image quality degradation. The motion correction algorithms developed so far were not sufficiently fast so as to be suitable for integration in a clinical setting. In this paper we describe a new image registration technique for motion artifact reduction in DSA which is fully automatic, effective, and computationally very efficient. Using an image content driven control point selection mechanism and modern graphics hardware for image warping, the algorithm requires less than one second per DSA image (on average). Preliminary experiments on cerebral DSA images illustrate the applicability of the technique.
---
paper_title: Image-processing technique for suppressing ribs in chest radiographs by means of massive training artificial neural network (MTANN)
paper_content:
When lung nodules overlap with ribs or clavicles in chest radiographs, it can be difficult for radiologists as well as computer-aided diagnostic (CAD) schemes to detect these nodules. In this paper, we developed an image-processing technique for suppressing the contrast of ribs and clavicles in chest radiographs by means of a multiresolution massive training artificial neural network (MTANN). An MTANN is a highly nonlinear filter that can be trained by use of input chest radiographs and the corresponding "teaching" images. We employed "bone" images obtained by use of a dual-energy subtraction technique as the teaching images. For effective suppression of ribs having various spatial frequencies, we developed a multiresolution MTANN consisting of multiresolution decomposition/composition techniques and three MTANNs for three different-resolution images. After training with input chest radiographs and the corresponding dual-energy bone images, the multiresolution MTANN was able to provide "bone-image-like" images which were similar to the teaching bone images. By subtracting the bone-image-like images from the corresponding chest radiographs, we were able to produce "soft-tissue-image-like" images where ribs and clavicles were substantially suppressed. We used a validation test database consisting of 118 chest radiographs with pulmonary nodules and an independent test database consisting of 136 digitized screen-film chest radiographs with 136 solitary pulmonary nodules collected from 14 medical institutions in this study. When our technique was applied to nontraining chest radiographs, ribs and clavicles in the chest radiographs were suppressed substantially, while the visibility of nodules and lung vessels was maintained. Thus, our image-processing technique for rib suppression by means of a multiresolution MTANN would be potentially useful for radiologists as well as for CAD schemes in detection of lung nodules on chest radiographs.
---
paper_title: A Survey of Medical Image Registration
paper_content:
The purpose of this paper is to present a survey of recent (published in 1993 or later) publications concerning medical image registration techniques. These publications will be classified according to a model based on nine salient criteria, the main dichotomy of which is extrinsic versus intrinsic methods. The statistics of the classification show definite trends in the evolving registration techniques, which will be discussed. At this moment, the bulk of interesting intrinsic methods is based on either segmented points or surfaces, or on techniques endeavouring to use the full information content of the images involved.
---
paper_title: Fast parametric elastic image registration
paper_content:
We present an algorithm for fast elastic multidimensional intensity-based image registration with a parametric model of the deformation. It is fully automatic in its default mode of operation. In the case of hard real-world problems, it is capable of accepting expert hints in the form of soft landmark constraints. Much fewer landmarks are needed and the results are far superior compared to pure landmark registration. Particular attention has been paid to the factors influencing the speed of this algorithm. The B-spline deformation model is shown to be computationally more efficient than other alternatives. The algorithm has been successfully used for several two-dimensional (2-D) and three-dimensional (3-D) registration tasks in the medical domain, involving MRI, SPECT, CT, and ultrasound image modalities. We also present experiments in a controlled environment, permitting an exact evaluation of the registration accuracy. Test deformations are generated automatically using a random hierarchical fractional wavelet-based generator.
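The B-spline deformation model referred to here expresses the displacement field as a tensor product of cubic B-splines over a sparse grid of control points; a minimal 2-D evaluation sketch follows (the uniform grid spacing, indexing convention and names are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def cubic_bspline_basis(u):
    """The four uniform cubic B-spline blending functions at local coordinate u in [0, 1)."""
    return np.array([(1 - u) ** 3 / 6.0,
                     (3 * u ** 3 - 6 * u ** 2 + 4) / 6.0,
                     (-3 * u ** 3 + 3 * u ** 2 + 3 * u + 1) / 6.0,
                     u ** 3 / 6.0])

def ffd_displacement(x, y, grid, spacing):
    """Displacement at (x, y) from a (ny, nx, 2) array of control-point displacements.
    Assumes the control grid extends far enough that the 4x4 neighbourhood exists."""
    i, u = divmod(x / spacing, 1.0)
    j, v = divmod(y / spacing, 1.0)
    i, j = int(i), int(j)
    Bu, Bv = cubic_bspline_basis(u), cubic_bspline_basis(v)
    d = np.zeros(2)
    for a in range(4):
        for b in range(4):
            d += Bu[a] * Bv[b] * grid[j + b, i + a]
    return d
```
During registration the control-point displacements in `grid` are the parameters being optimized against the similarity measure.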
---
paper_title: A hierarchical approach to elastic registration based on mutual information
paper_content:
A hierarchical approach to elastic registration based on mutual information, in which the images are progressively subdivided, locally registered, and elastically interpolated, is presented. To improve the registration, a combination of prior and floating information on the joint probability is proposed. It is shown that such a combination increases the registration speed at the coarser levels of the hierarchy, enables a registration of finer details, and provides additional guidance to the optimisation process. Besides, a threefold local registration consistency test and correction of shading were employed to increase the overall registration performance. The proposed hierarchical method for elastic registration was tested on an experimental database of 2D images of histochemically differently stained serial cross-sections of human skeletal muscle. The obtained results show that 95% of the images could be successfully registered. The inclusion of prior information is an important breakthrough that may enable routine use of the mutual information cost function in a variety of 2D and 3D image registration algorithms in the future.
---
paper_title: Elastic Medical Image Registration Based on Image Intensity
paper_content:
A two-step elastic medical image registration approach is proposed, which is based on image intensity. In the first step, global affine medical image registration is used to establish a one-to-one mapping between the two images to be registered. After this first step, the images are registered up to small local elastic deformations. The mapped images are then used as inputs in the second step, during which the study image is modeled as an elastic sheet by being divided into several subimages. By moving the individual subimages in the reference image, the local displacement vectors are found, and the global elastic transformation is achieved by assimilating all of the local transformations into a continuous transformation. This algorithm has been tested on both simulated and tomographic images.
---
paper_title: 3D Registration of Ultrasound Images Based on Morphology Skeleton
paper_content:
In order to eliminate displacement and elastic deformation between adjacent frames in the course of 3D ultrasound image reconstruction, elastic registration based on the morphological skeleton is adopted in this paper. Feature points of the connected skeleton are extracted automatically by repeatedly computing local curvature extrema. Initial registration is performed according to the barycenter of the skeleton; elastic registration based on radial basis functions is then performed according to the skeleton feature points. Example results demonstrate that, compared with traditional rigid registration, skeleton-feature-based elastic registration retains the natural shape differences between different parts of an organ while also eliminating the slight elastic deformation between frames introduced by the image acquisition process. This algorithm has a high practical value for image registration in the course of 3D ultrasound image reconstruction.
---
paper_title: Automatic hybrid registration for 2-dimensional CT abdominal images
paper_content:
Registration of abdominal images plays an important role in clinical practice; however, because of the complex deformations of organ structure and volume, abdominal image registration remains a challenge. In this paper, a hybrid registration approach is proposed, which consists of two procedures: an intensity-based registration procedure and a landmark-based registration procedure. In the intensity-based registration, in order to speed up registration convergence and to improve computational efficiency, a wavelet-based hierarchical method is proposed, in which the global displacements are corrected using a mutual information algorithm. In the landmark-based registration, the landmark points are first selected automatically, and the local non-linear deformations are then corrected using elastic thin-plate splines. By combining the advantages of the intensity-based method and the landmark-based method, the proposed approach can register the images accurately and efficiently. The registration performance of the proposed algorithm is validated by experiments on clinical computed tomography (CT) abdominal images.
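The thin-plate spline step mentioned here interpolates a smooth deformation from matched landmark pairs; a small 2-D sketch (kernel U(r) = r^2 log r; all names are illustrative) follows:

```python
import numpy as np

def tps_fit(src, dst):
    """Fit 2-D thin-plate spline coefficients mapping src landmarks onto dst landmarks."""
    n = len(src)
    r2 = np.sum((src[:, None, :] - src[None, :, :]) ** 2, axis=-1)
    K = 0.5 * r2 * np.log(r2 + 1e-20)                 # U(r) = r^2 log r, written via r^2
    P = np.hstack([np.ones((n, 1)), src])
    L = np.zeros((n + 3, n + 3))
    L[:n, :n], L[:n, n:], L[n:, :n] = K, P, P.T
    rhs = np.vstack([dst, np.zeros((3, 2))])
    return np.linalg.solve(L, rhs)                    # n kernel weights + 3 affine terms, per axis

def tps_transform(pts, src, coeff):
    """Apply a fitted thin-plate spline to arbitrary points."""
    r2 = np.sum((pts[:, None, :] - src[None, :, :]) ** 2, axis=-1)
    U = 0.5 * r2 * np.log(r2 + 1e-20)
    P = np.hstack([np.ones((len(pts), 1)), pts])
    return U @ coeff[: len(src)] + P @ coeff[len(src):]

if __name__ == "__main__":
    src = np.array([[0., 0.], [1., 0.], [0., 1.], [1., 1.], [0.5, 0.5]])
    dst = src + np.array([[0.1, 0.], [0., 0.1], [-0.1, 0.], [0., -0.1], [0.05, 0.05]])
    coeff = tps_fit(src, dst)
    print(np.allclose(tps_transform(src, src, coeff), dst))   # True: exact at the landmarks
```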
---
paper_title: Medical image registration
paper_content:
Radiological images are increasingly being used in healthcare and medical research. There is, consequently, widespread interest in accurately relating information in the different images for diagnosis, treatment and basic science. This article reviews registration techniques used to solve this problem, and describes the wide variety of applications to which these techniques are applied. Applications of image registration include combining images of the same subject from different modalities, aligning temporal sequences of images to compensate for motion of the subject between scans, image guidance during interventions and aligning images from multiple subjects in cohort studies. Current registration algorithms can, in many cases, automatically register images that are related by a rigid body transformation (i.e. where tissue deformation can be ignored). There has also been substantial progress in non-rigid registration algorithms that can compensate for tissue deformation, or align images from different subjects. Nevertheless many registration problems remain unsolved, and this is likely to continue to be an active field of research in the future.
---
paper_title: Evaluation of hierarchical elastic medical image registration method
paper_content:
The paper investigates the hierarchical approach to elastic medical image registration based on mutual information (MI) (Likar, B. and Pernus, F., Image and Vision Computing, vol.19, p.33-44, 2001), in which images are progressively subdivided, locally registered, and elastically interpolated using a thin-plate spline. The technique has been shown to be efficient and robust with small local transformations. However, problems do exist with this technique. First, MI is a statistical property of the two images, so the reduction in the number of samples due to the partitioning of the images into smaller sub-images reduces the statistical quality of the joint intensity histogram. Also, the partitioning scheme may lose some important information, such as edges, which lie exactly on the partition. The statistical problem of MI is resolved by resampling and combining with global MI. An overlapping scheme is implemented in which the image is subdivided into sub-images which overlap their neighbours. This helps to overcome the edge problems. Experiments show that these two methods can improve the registration results to some limited extent in terms of PSNR, and the visual results are much better, especially for the overlapping window scheme.
---
paper_title: Image registration using hierarchical B-splines
paper_content:
Hierarchical B-splines have been widely used for shape modeling since their discovery by Forsey and Bartels. We present an application of this concept, in the form of free-form deformation, to image registration by matching two images at increasing levels of detail. Results using MRI brain data are presented that demonstrate high degrees of matching while unnecessary distortions are avoided. We compare our results with the nonlinear ICP (iterative closest point) algorithm (used for landmark-based registration) and optical flow (used for intensity-based registration).
---
paper_title: Elastic registration of 2D abdominal CT images using hybrid feature point selection for liver lesions
paper_content:
Abdominal CT images have a distinct intensity distribution. This feature is used to correct local deformations in the image. The reference and study images are decomposed using wavelet decomposition. Global deformations are first corrected by applying rigid registration, using maximization of mutual information as the similarity measure at each level of the registration hierarchy. The initially registered image and the reference image are then further elastically registered using landmark-based elastic registration. Here landmarks, or feature points, are obtained by first intensity-thresholding the images, followed by boundary selection to obtain lesion boundaries, and finally obtaining the centroid and convex hull points of lesions within the images. Convex hull points that lie on the boundary of lesions, coupled with the centroids of lesions, are helpful in precisely identifying the lesions. An advantage of this is that lesions are enhanced to allow deformations to be precisely determined. This is useful in improving diagnostic accuracy. The performance of the algorithm is tested on a real case study of abdominal CT images with liver abscess. Considerable improvement in the correlation coefficient and signal-to-noise ratio of the two images is observed.
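The landmark-extraction pipeline described above (thresholding, then lesion centroids and convex-hull points) can be sketched generically as follows; the threshold handling and names are illustrative, and degenerate (e.g. collinear) regions would need extra guarding:

```python
import numpy as np
from scipy import ndimage
from scipy.spatial import ConvexHull

def lesion_landmarks(image, threshold):
    """Centroids and convex-hull boundary points of thresholded regions, as landmark candidates."""
    labels, n = ndimage.label(image > threshold)
    landmarks = []
    for lab in range(1, n + 1):
        pts = np.argwhere(labels == lab)              # (row, col) pixels of one region
        if len(pts) < 4:                              # skip tiny regions
            continue
        landmarks.append(pts.mean(axis=0))            # region centroid
        hull = ConvexHull(pts)
        landmarks.extend(pts[hull.vertices])          # hull points lying on the region boundary
    return np.array(landmarks)
```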
---
|
Title: Image Registration: A Review of Elastic Registration Methods Applied to Medical Imaging
Section 1: INTRODUCTION
Description 1: Provide an overview of image registration, its objectives, and applications in various fields including medical imaging.
Section 2: Image registration in medical imaging
Description 2: Explore the historical growth, significance, and different applications of image registration in the field of medical imaging.
Section 3: Rigid and Elastic registration
Description 3: Discuss the differences between rigid and elastic registration, and their specific applications within medical imaging.
Section 4: ELASTIC REGISTRATION APPLIED TO MEDICAL IMAGING
Description 4: Examine various methods and approaches used for elastic registration in medical imaging, including specific cases and studies.
Section 5: METHODOLOGY USED
Description 5: Detail the methodology adopted for hybrid image registration, highlighting key processes such as initial registration, segmentation, and elastic registration.
Section 6: EXPERIMENTAL RESULTS
Description 6: Present the results obtained from the experimental implementation of the discussed methodologies.
|
Survey on Routing Protocols for Under Water Sensor Networks
| 9 |
---
paper_title: Silent Positioning in Underwater Acoustic Sensor Networks
paper_content:
In this paper, we present a silent positioning scheme termed UPS for underwater acoustic sensor networks. UPS relies on the time difference of arrivals locally measured at a sensor to detect range differences from the sensor to four anchor nodes. These range differences are averaged over multiple beacon intervals before they are combined to estimate the 3-D sensor location through trilateration. UPS requires no time synchronization and provides location privacy at underwater vehicles/sensors whose locations need to be determined. To study the performance of UPS, we model the underwater acoustic channel as a modified ultrawideband Saleh-Valenzuela model: The arrival of each path cluster and the paths within each cluster follow double Poisson distributions, and the multipath channel gain follows a Rician distribution. Based on this channel model, we perform both theoretical analysis and simulation study on the position error of UPS under acoustic fading channels. The obtained results indicate that UPS is an effective scheme for underwater vehicle/sensor self-positioning.
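The final positioning step described above amounts to estimating a 3-D position from range differences to four anchors; a generic nonlinear least-squares sketch of that step is given below (this is not the UPS protocol itself; the anchor coordinates and names are made up):

```python
import numpy as np
from scipy.optimize import least_squares

def locate_from_range_differences(anchors, range_diffs, x0=None):
    """Estimate a 3-D position from range differences d_i = |x - a_i| - |x - a_0|, i = 1..3."""
    anchors = np.asarray(anchors, dtype=float)

    def residuals(x):
        d = np.linalg.norm(anchors - x, axis=1)
        return (d[1:] - d[0]) - range_diffs

    x0 = anchors.mean(axis=0) if x0 is None else np.asarray(x0, float)
    return least_squares(residuals, x0).x

if __name__ == "__main__":
    anchors = np.array([[0, 0, 0], [500, 0, 0], [0, 500, 0], [0, 0, 300]], dtype=float)
    true_pos = np.array([120.0, 240.0, 80.0])
    d = np.linalg.norm(anchors - true_pos, axis=1)
    print(locate_from_range_differences(anchors, d[1:] - d[0]))  # expected to be close to true_pos
```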
---
paper_title: DFR: Directional flooding-based routing protocol for underwater sensor networks
paper_content:
Unlike terrestrial sensor networks, underwater sensor networks (UWSNs) have different characteristics such as a long propagation delay, a narrow bandwidth and high packet loss. Hence, existing path setup-based routing protocols proposed for terrestrial sensor networks are not applicable in the underwater environment. For example, they take much time when establishing a path between source and destination nodes due to the long propagation delay. In addition, the path establishment requires much overhead of control messages. Moreover, the dynamic environment and high packet loss degrade reliability, which invokes more retransmissions. Even though existing routing protocols such as VBF were proposed to improve the reliability, they did not take into account the link quality. That is, there is no guarantee that packets reach the sink safely, especially when a link is error-prone. In this paper, we therefore propose a directional flooding-based routing protocol, called DFR. Basically, DFR relies on a packet flooding technique to increase the reliability. However, the number of nodes which flood a packet is controlled in order to prevent a packet from flooding over the whole network, and the nodes to forward the packet are decided according to the link quality. In addition, DFR also addresses the well-known void problem by allowing at least one node to participate in forwarding a packet. Our simulation study using ns-2 proves that DFR is more suitable for UWSNs, especially when links are prone to packet loss.
---
paper_title: Improving the Robustness of Location-Based Routing for Underwater Sensor Networks
paper_content:
This paper investigates a fundamental networking problem in underwater sensor networks: robust and energy-efficient routing. We present an adaptive location-based routing protocol, called hop-by-hop vector-based forwarding (HH-VBF). It uses the notion of a "routing vector" (a vector from the source to the sink) acting as the axis of the "routing pipe", similar to the vector-based forwarding (VBF) routing in the work of P. Xie, J.-H. Cui and L. Lao (VBF: Vector-Based Forwarding Protocol for Underwater Sensor Networks. Technical report, UCONN CSE Technical Report: UbiNet-TR05-03 (BECAT/CSE-TR-05-6), Feb. 2005). Unlike the original VBF approach, however, HH-VBF suggests the use of a routing vector for each individual forwarder in the network, instead of a single network-wide source-to-sink routing vector. By the creation of the hop-by-hop vectors, HH-VBF can overcome two major problems in VBF: (1) a data delivery ratio that is too small in sparse networks; (2) excessive sensitivity to the "routing pipe" radius threshold. We conduct simulations to evaluate HH-VBF, and the results show that HH-VBF yields much better performance than VBF in sparse networks. In addition, HH-VBF is less sensitive to the routing pipe radius threshold. Furthermore, we also analyze the behavior of HH-VBF and show that assuming proper redundancy and feedback techniques, HH-VBF can facilitate the avoidance of any "void" areas in the network.
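The per-hop forwarding decision sketched in this abstract reduces to checking how far a candidate node lies from the current forwarder-to-sink vector; a tiny geometric sketch of that test (names and the radius value are illustrative) is:

```python
import numpy as np

def distance_to_routing_vector(node, forwarder, sink):
    """Perpendicular distance from a candidate node to the forwarder->sink routing vector."""
    v = np.asarray(sink, float) - np.asarray(forwarder, float)
    w = np.asarray(node, float) - np.asarray(forwarder, float)
    return np.linalg.norm(np.cross(w, v)) / np.linalg.norm(v)

def inside_routing_pipe(node, forwarder, sink, radius=100.0):
    """A candidate forwards the packet only if it lies within the routing pipe."""
    return distance_to_routing_vector(node, forwarder, sink) <= radius

if __name__ == "__main__":
    print(inside_routing_pipe([50, 40, -10], forwarder=[0, 0, 0], sink=[500, 0, 0]))   # True
```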
---
paper_title: REBAR: A Reliable and Energy Balanced Routing Algorithm for UWSNs
paper_content:
In this paper, we propose a reliable and energy-efficient routing algorithm for underwater wireless sensor networks (UWSNs). We first use a sphere energy depletion model to analyze the energy consumption of nodes in UWSNs. We then extend the model by considering the node mobility in UWSNs. Our analysis is in accordance with the common view that although water flows might make the underwater environment more dynamic, this instability can actually improve energy-efficiency. However, the nodes closer to the sink still tend to die early, causing network partition around the sink. Accordingly, we propose a Reliable and Energy BAlanced Routing algorithm (REBAR). We explore the tradeoff between packet delivery ratio and energy efficiency. We design an adaptive scheme for setting the data propagation range in order to balance the energy consumption throughout the network. Multi-path routing is used to provide redundancy. We further extend REBAR to deal with possible routing voids in the network which could be common in UWSNs. Simulation results show that our protocol outperforms existing VBF-based approaches in terms of reliability and network lifetime.
---
paper_title: Sector-Based Routing with Destination Location Prediction for Underwater Mobile Networks
paper_content:
Unlike in terrestrial sensor networks where the locations of destination nodes are often assumed to be fixed and accurately known, such assumptions are usually not valid in underwater sensor networks where the destination nodes tend to be mobile inherently, either due to their self-propelling capability, or due to random motion caused by ocean currents. As a result, many existing location-based routing protocols do not work well in underwater environments. We propose a location-based routing protocol that is designed for mobile underwater acoustic sensor networks, called "Sector-based Routing with Destination Location Prediction (SBR-DLP)". While the SBR-DLP also assumes that a node knows its own location like many other location-based routing protocols, it predicts the location of the destination node, and therefore, relaxes the need for precise knowledge of the destination's location. Through simulations, the SBR-DLP is shown to enhance the packet delivery ratio significantly when all nodes are mobile.
---
|
Title: Survey on Routing Protocols for Under Water Sensor Networks
Section 1: Introduction
Description 1: Introduce underwater sensor networks (UWSNs), mention differences compared to terrestrial sensor networks, including energy consumption, node cost, deployment, power requirements, and mediums of communication. Discuss the challenges of routing in UWSNs and why specialized protocols are needed.
Section 2: Vector Based Forwarding (VBF)
Description 2: Describe the VBF protocol, which handles packet losses and node failures through redundant and interleaved paths, and emphasize its scalability and energy-saving characteristics.
Section 3: Hop by Hop Vector Based Forwarding (HH-VBF)
Description 3: Explain the HH-VBF protocol, focusing on its use of virtual pipes for per-hop forwarding, and the computations made by nodes to forward packets based on their location.
Section 4: Focused Beam Routing (FBR)
Description 4: Detail the FBR protocol, highlighting its prevention of unnecessary broadcast flooding, suitability for both static and mobile nodes, and location-awareness requirements for source and destination nodes.
Section 5: Reliable and Energy Balanced Routing Algorithm (REBAR)
Description 5: Cover the REBAR protocol, a location-based method that uses geographic information for data transfer, defines nodes by unique IDs, and utilizes specific assumptions about node locations and data transfer rates.
Section 6: Sector-based Routing with Destination Location Prediction (SBR-DLP)
Description 6: Describe the SBR-DLP protocol, which avoids data packet flooding by predicting and utilizing destination mobility information, and selecting the next hop using candidate nodes within a communication circle.
Section 7: Direction Flooding Based Routing (DFR)
Description 7: Discuss the DFR protocol, which enhances reliability through packet flooding, and requires nodes to know their location, one-hop neighbor locations, and the final destination.
Section 8: Location Aware Source Routing (LASR)
Description 8: Explain the LASR protocol, which uses link quality metrics and location awareness to handle high latency of acoustic channels, and includes all relevant network information in the protocol header.
Section 9: Conclusion
Description 9: Summarize the surveyed UWSN routing protocols and their comparative aspects, emphasizing energy efficiency and dynamic network handling abilities. Point out which protocols are best suited for specific applications based on the comparisons.
|
A Succinct Overview of Virtual Reality Technology Use in Alzheimer’s Disease
| 12 |
---
paper_title: Virtual Immersion for Post-Stroke Hand Rehabilitation Therapy
paper_content:
Stroke is the leading cause of serious, long-term disability in the United States. Impairment of upper extremity function is a common outcome following stroke, often to the detriment of lifestyle and employment opportunities. While the upper extremity is a natural target for therapy, treatment may be hampered by limitations in baseline capability, as lack of success may discourage arm and hand use. We developed a virtual reality (VR) system in order to encourage repetitive task practice. This system combined an assistive glove with a novel VR environment. A set of exercises for this system was developed to encourage specific movements. Six stroke survivors with chronic upper extremity hemiparesis volunteered to participate in a pilot study in which they completed 18 one-hour training sessions with the VR system. Performance with the system was recorded across the 18 training sessions. Clinical evaluations of motor control were conducted at three time points: prior to initiation of training, following the end of training, and 1 month later. Subjects displayed significant improvement on performance of the virtual tasks over the course of the training, although for the clinical outcome measures only lateral pinch showed significant improvement. Future expansion to multi-user virtual environments may extend the benefits of this system for stroke survivors with hemiparesis by furthering engagement in the rehabilitation exercises.
---
paper_title: Effects of Virtual Reality Simulator Training Method and Observational Learning on Surgical Performance
paper_content:
Background: Virtual reality (VR) simulators and Web-based instructional videos are valuable supplemental training resources in surgical programs, but it is unclear how to optimally integrate them into minimally invasive surgical training.
---
paper_title: Nu!RehaVR: virtual reality in neuro tele-rehabilitation of patients with traumatic brain injury and stroke
paper_content:
The availability of virtual environments on the Web is fostering new applications of virtual reality in several fields, including some therapeutic applications. We present an application of virtual reality to the tele-rehabilitation of patients with traumatic brain injury and stroke. Our system, based on X3D and Ajax3D technologies, enhances the possibility of performing tele-rehabilitation exercises aimed at recovery from the neurological disease. The system, called Nu!RehaVR, has been designed to integrate the activity carried out on a tele-rehabilitation system, Nu!Reha (Nu!Reha is a trademark of Pragma Engineering srl. See http://www.nureha.eu) desk, with the activities performed in the virtual worlds, through rehabilitation exercises set in contexts incompatible with the patients’ impairments (e.g., unable to move, or forced into static positions because of therapies). The architecture of Nu!RehaVR and the environments associated with two exercises, “Utilising an elevator to reach a given floor” and “Crossing a road using a traffic light”, are illustrated. These exercises can be considered as prototypes of a series of tele-rehabilitation exercises which help to stimulate the patients to perform actions in relatively dangerous scenarios. The system is designed to allow the remote monitoring and assessment of the patient’s activities by the medical staff at the hospital using the communication facilities of the tele-rehabilitation system.
---
paper_title: Virtual Reality Therapy for Adults Post-Stroke: A Systematic Review and Meta-Analysis Exploring Virtual Environments and Commercial Games in Therapy
paper_content:
BACKGROUND: The objective of this analysis was to systematically review the evidence for virtual reality (VR) therapy in an adult post-stroke population in both custom built virtual environments (VE) and commercially available gaming systems (CG). METHODS: MEDLINE, CINAHL, EMBASE, ERIC, PSYCInfo, DARE, PEDro, Cochrane Central Register of Controlled Trials, and Cochrane Database of Systematic Reviews were systematically searched from the earliest available date until April 4, 2013. Controlled trials that compared VR to conventional therapy were included. Population criteria included adults (>18) post-stroke, excluding children, cerebral palsy, and other neurological disorders. Included studies were reported in English. Quality of studies was assessed with the Physiotherapy Evidence Database Scale (PEDro). RESULTS: Twenty-six studies met the inclusion criteria. For body function outcomes, there was a significant benefit of VR therapy compared to conventional therapy controls, G = 0.48, 95% CI = [0.27, 0.70], and no significant difference between VE and CG interventions (P = 0.38). For activity outcomes, there was a significant benefit of VR therapy, G = 0.58, 95% CI = [0.32, 0.85], and no significant difference between VE and CG interventions (P = 0.66). For participation outcomes, the overall effect size was G = 0.56, 95% CI = [0.02, 1.10]. All participation outcomes came from VE studies. DISCUSSION: VR rehabilitation moderately improves outcomes compared to conventional therapy in adults post-stroke. Current CG interventions have been too few and too small to assess potential benefits of CG. Future research in this area should aim to clearly define conventional therapy, report on participation measures, consider motivational components of therapy, and investigate commercially available systems in larger RCTs. TRIAL REGISTRATION: Prospero CRD42013004338.
---
paper_title: A meta-analysis of the training effectiveness of virtual reality surgical simulators
paper_content:
The increasing use of virtual reality (VR) simulators in surgical training makes it imperative that definitive studies be performed to assess their training effectiveness. Indeed, in this paper we report the meta-analysis of the efficacy of virtual reality simulators in: 1) the transference of skills from the simulator training environment to the operating room, and 2) their ability to discriminate between the experience levels of their users. The task completion time and the error score were the two study outcomes collated and analyzed in this meta-analysis. Sixteen studies were identified from a computer-based literature search (1996-2004). The meta-analysis of the random effects model (because of the heterogeneity of the data) revealed that training on virtual reality simulators did lessen the time taken to complete a given surgical task as well as clearly differentiate between the experienced and the novice trainees. Meta-analytic studies such as the one reported here would be very helpful in the planning and setting up of surgical training programs and for the establishment of reference `learning curves' for a specific simulator and surgical task. If any such programs already exist, they can then indicate the improvements to be made in the simulator used, such as providing for more variety in their case scenarios based on the state and/or rate of learning of the trainee
---
paper_title: Modulation of cortical activity in 2D versus 3D virtual reality environments: An EEG study
paper_content:
There is growing empirical evidence that virtual reality (VR) is valuable for education, training, entertainment and medical rehabilitation due to its capacity to represent real-life events and situations. However, the neural mechanisms underlying behavioral confounds in VR environments are still poorly understood. In two experiments, we examined the effect of fully immersive 3D stereoscopic presentations and less immersive 2D VR environments on brain functions and behavioral outcomes. In Experiment 1 we examined behavioral and neural underpinnings of spatial navigation tasks using electroencephalography (EEG). In Experiment 2, we examined EEG correlates of postural stability and balance. Our major findings showed that fully immersive 3D VR induced a higher subjective sense of presence along with an enhanced success rate of spatial navigation compared to 2D. In Experiment 1, the power of frontal midline EEG theta (FM-theta) was significantly higher during the encoding phase of route presentation in the 3D VR. In Experiment 2, the 3D VR resulted in greater postural instability and modulation of EEG patterns as a function of 3D versus 2D environments. The findings support the inference that the fully immersive 3D enriched-environment requires allocation of more brain and sensory resources for cognitive/motor control during both tasks than 2D presentations. This is further evidence that 3D VR tasks using EEG may be a promising approach for performance enhancement and potential applications in clinical/rehabilitation settings.
---
paper_title: Egocentric and allocentric memory as assessed by virtual reality in individuals with amnestic mild cognitive impairment
paper_content:
Present evidence suggests that medial temporal cortices subserve allocentric representation and memory, whereas egocentric representation and memory also depend on parietal association cortices and the striatum. Virtual reality environments have a major advantage for the assessment of spatial navigation and memory formation, as computer-simulated first-person environments can simulate navigation in a large-scale space. Twenty-nine patients with amnestic MCI (aMCI) were compared with 29 healthy matched controls on two virtual reality tasks requiring them to learn a virtual park (allocentric memory) and a virtual maze (egocentric memory). Participants further received a neuropsychological investigation and MRI volumetry at the time of the assessment. Results indicate that aMCI patients had significantly reduced size of the hippocampus bilaterally and the right-sided precuneus and inferior parietal cortex. aMCI patients were severely impaired in learning the virtual park and the virtual maze. Smaller volumes of the right-sided precuneus were related to worse performance on the virtual maze. Participants with striatal lacunar lesions committed more errors than participants without such lesions on the virtual maze but not on the virtual park. aMCI patients later converting to dementia (n = 15) had significantly smaller hippocampal size when compared with non-converters (n = 14). However, the two groups did not differ on virtual reality task performance. Our study clearly demonstrates the feasibility of virtual reality technology to study spatial memory deficits of persons with aMCI. Future studies should try to design spatial virtual reality tasks specific enough to predict conversion from MCI to dementia and conversion from normal to MCI.
---
paper_title: Health and safety implications of virtual reality : A review of empirical evidence
paper_content:
For the last 10 years a number of papers have been written that discuss human factors issues associated with virtual reality (VR). The nature of these papers has gradually evolved from speculation and anecdotal report to empirical research. Despite developments in VR technology, some participants still experience health and safety problems associated with VR use, termed VR-induced symptoms and effects (VRISE). The key concern from the literature is VR-induced sickness, experienced by a large proportion of VR participants, but for the majority these effects are mild and subside quickly. This paper makes a number of recommendations regarding the future direction of research into the health and safety implications of VR, including the need to take into account the way in which VR is being used when conducting empirical research: first, to ensure that studies consider both effects and their consequences; second, to ensure that empirical trials reflect the actual likely context of VR use; third, to consider interactions between effects; and finally, to consider ways in which effects can be managed.
---
paper_title: Virtual Reality Rehabilitation from Social Cognitive and Motor Learning Theoretical Perspectives in Stroke Population
paper_content:
Objectives. To identify the virtual reality (VR) interventions used for the lower extremity rehabilitation in stroke population and to explain their underlying training mechanisms using Social Cognitive (SCT) and Motor Learning (MLT) theoretical frameworks. Methods. Medline, Embase, Cinahl, and Cochrane databases were searched up to July 11, 2013. Randomized controlled trials that included a VR intervention for lower extremity rehabilitation in stroke population were included. The Physiotherapy Evidence Database (PEDro) scale was used to assess the quality of the included studies. The underlying training mechanisms involved in each VR intervention were explained according to the principles of SCT (vicarious learning, performance accomplishment, and verbal persuasion) and MLT (focus of attention, order and predictability of practice, augmented feedback, and feedback fading). Results. Eleven studies were included. PEDro scores varied from 3 to 7/10. All studies but one showed significant improvement in outcomes in favour of the VR group (P < 0.05). Ten VR interventions followed the principle of performance accomplishment. All the eleven VR interventions directed subject's attention externally, whereas nine provided training in an unpredictable and variable fashion. Conclusions. The results of this review suggest that VR applications used for lower extremity rehabilitation in stroke population predominantly mediate learning through providing a task-oriented and graduated learning under a variable and unpredictable practice.
---
paper_title: Virtual reality induced symptoms and effects (VRISE) : Comparison of head mounted display (HMD), desktop and projection display systems
paper_content:
Abstract Virtual reality (VR) systems are used in a variety of applications within industry, education, public and domestic settings. Research assessing reported symptoms and side effects of using VR systems indicates that these factors combine to influence user experiences of virtual reality induced symptoms and effects (VRISE). Three experiments were conducted to assess prevalence and severity of sickness symptoms experienced in each of four VR display conditions; head mounted display (HMD), desktop, projection screen and reality theatre, with controlled examination of two additional aspects of viewing (active vs. passive viewing and light vs. dark conditions). Results indicate 60–70% participants experience an increase in symptoms pre–post exposure for HMD, projection screen and reality theatre viewing and found higher reported symptoms in HMD compared with desktop viewing (nausea symptoms) and in HMD compared with reality theatre viewing (nausea, oculomotor and disorientation symptoms). No effect of lighting condition was found. Higher levels of symptoms were reported in passive viewing compared to active control over movement in the VE. However, the most notable finding was that of high inter- and intra-participant variability. As this supports other findings of individual susceptibility to VRISE, recommendations are offered concerning design and use of VR systems in order to minimise VRISE.
---
paper_title: Virtual Reality as Assessment Tool in Psychology
paper_content:
Virtual environments (VEs), offering a new human-computer interaction paradigm, have attracted much attention in clinical psychology, especially in the treatment of phobias. However, a possible new application of VR in psychology is as an assessment tool: VEs can be considered a highly sophisticated form of adaptive testing. This chapter describes the context of current psychological assessment and underlines possible advantages of a VR-based assessment tool. The chapter also details the characteristics of BIVRS, Body Image Virtual Reality Scale, an assessment tool designed to assess cognitive and affective components of body image. It consists of a non-immersive 3D graphical interface through which the patient is able to choose between 9 figures whose size varies from underweight to overweight. The software was developed in two architectures, the first (A) running on a single user desktop computer equipped with standard virtual reality development software and the second (B) split into a server (B1) accessible via Internet and running the same virtual environment as in (A) and a VRML client (B2) so that anyone can access the application.
---
paper_title: Mild cognitive impairment and deficits in instrumental activities of daily living: a systematic review
paper_content:
Introduction: There is a growing body of evidence that subtle deficits in instrumental activities of daily living (IADL) may be present in mild cognitive impairment (MCI). However, it is not clear if there are IADL domains that are consistently affected across patients with MCI. In this systematic review, therefore, we aimed to summarize research results regarding the performance of MCI patients in specific IADL (sub)domains compared with persons who are cognitively normal and/or patients with dementia. Methods: The databases PsycINFO, PubMed and Web of Science were searched for relevant literature in December 2013. Publications from 1999 onward were considered for inclusion. Altogether, 497 articles were retrieved. Reference lists of selected articles were searched for potentially relevant articles. After screening the abstracts of these 497 articles, 37 articles were included in this review. Results: In 35 studies, IADL deficits (such as problems with medication intake, telephone use, keeping appointments, finding things at home and using everyday technology) were documented in patients with MCI. Financial capacity in patients with MCI was affected in the majority of studies. Effect sizes for group differences between patients with MCI and healthy controls were predominantly moderate to large. Performance-based instruments showed slight advantages (in terms of effect sizes) in detecting group differences in IADL functioning between patients with MCI, patients with Alzheimer’s disease and healthy controls. Conclusion: IADL requiring higher neuropsychological functioning seem to be most severely affected in patients with MCI. A reliable identification of such deficits is necessary, as patients with MCI with IADL deficits seem to have a higher risk of converting to dementia than patients with MCI without IADL deficits. The use of assessment tools specifically designed and validated for patients with MCI is therefore strongly recommended. Furthermore, the development of performance-based assessment instruments should be intensified, as they allow a valid and reliable assessment of subtle IADL deficits in MCI, even if a proxy is not available. Another important point to consider when designing new scales is the inclusion of technology-associated IADL. Novel instruments for clinical practice should be time-efficient and easy to administer.
---
paper_title: Virtual reality exposure therapy for the treatment of anxiety disorders: An evaluation of research quality
paper_content:
Abstract Randomized controlled trials (RCTs) support the effectiveness of virtual reality exposure therapy (VRET) for anxiety disorders; however, the overall quality of the VRET RCT literature base has yet to be evaluated. This study reviewed 27 VRET RCTs and the degree of adherence to 8 RCT research design criteria derived from existing standards. Adherence to the study quality criteria was generally low as the articles met an average 2.85 criteria (SD = 1.56). None of the studies met more than six quality criteria. Study quality did not predict effect size; however, a reduction in effect size magnitude was observed for studies with larger sample sizes when comparing VRET to non-active control groups. VRET may be an effective method of treatment but caution should be exercised in interpreting the existing body of literature supporting VRET relative to existing standards of care. The need for well-designed VRET research is discussed.
---
paper_title: Virtual reality in mental health : a review of the literature.
paper_content:
BACKGROUND: Several virtual reality (VR) applications for the understanding, assessment and treatment of mental health problems have been developed in the last 10 years. The purpose of this review is to outline the current state of virtual reality research in the treatment of mental health problems. METHODS: PubMed and PsycINFO were searched for all articles containing the words "virtual reality". In addition a manual search of the references contained in the papers resulting from this search was conducted and relevant periodicals were searched. Studies reporting the results of treatment utilizing VR in the mental health field and involving at least one patient were identified. RESULTS: More than 50 studies using VR were identified, the majority of which were case studies. Seventeen employed a between groups design: 4 involved patients with fear of flying; 3 involved patients with fear of heights; 3 involved patients with social phobia/public speaking anxiety; 2 involved people with spider phobia; 2 involved patients with agoraphobia; 2 involved patients with body image disturbance and 1 involved obese patients. There are both advantages in terms of delivery and disadvantages in terms of side effects to using VR. Although virtual reality based therapy appears to be superior to no treatment the effectiveness of VR therapy over traditional therapeutic approaches is not supported by the research currently available. CONCLUSIONS: There is a lack of good quality research on the effectiveness of VR therapy. Before clinicians will be able to make effective use of this emerging technology greater emphasis must be placed on controlled trials with clinically identified populations.
---
paper_title: Using Virtual Reality for Cognitive Training of the Elderly
paper_content:
There is a pressing demand for improving the quality and efficacy of health care and social support services needed by the world’s growing elderly population, especially by those affected by mild cognitive impairment (MCI) and Alzheimer’s disease (AD)-type early-stage dementia. Meeting that demand can significantly benefit from the deployment of innovative, computer-based applications capable of addressing specific needs, particularly in the area of cognitive impairment mitigation and rehabilitation. In that context, we present here our perspective viewpoint on the use of virtual reality (VR) tools for cognitive rehabilitation training, intended to assist medical personnel, health care workers, and other caregivers in improving the quality of daily life activities of people with MCI and AD. We discuss some effective design criteria and developmental strategies and suggest some possibly useful protocols and procedures. The particular innovative supportive advantages offered by the immersive interactive cha...
---
paper_title: How do people with persecutory delusions evaluate threat in a controlled social environment? A qualitative study using virtual reality.
paper_content:
20 participants with persecutory delusions and 20 controls entered a virtual underground train containing neutral characters. Under these circumstances, people with persecutory delusions reported similar levels of paranoia as non-clinical participants. The transcripts of a post-virtual reality interview of the first 10 participants in each group were analysed.
---
paper_title: Virtual reality in neuroscience research and therapy
paper_content:
The compatibility of virtual reality systems with brain imaging techniques and their use for animal research have aided the widespread adoption of virtual reality environments in both experimental and therapeutic domains. Here the authors review advances in virtual reality technology and its applications.
---
paper_title: Virtual Reality in Psychotherapy: Review
paper_content:
Virtual reality (VR) has recently emerged as a potentially effective way to provide general and specialty health care services, and appears poised to enter mainstream psychotherapy delivery. Because VR could be part of the future of clinical psychology, it is critical to all psychotherapists that it be defined broadly. To ensure appropriate development of VR applications, clinicians must have a clear understanding of the opportunities and challenges it will provide in professional practice. This review outlines the current state of clinical research relevant to the development of virtual environments for use in psychotherapy. In particular, the paper focuses its analysis on both actual applications of VR in clinical psychology and how different clinical perspectives can use this approach to improve the process of therapeutic change.
---
paper_title: Perspective taking abilities in amnestic mild cognitive impairment and Alzheimer's disease
paper_content:
Abstract Perspective taking is the ability to imagine what a scene looks like from a different viewpoint, which has been reported to be impaired in Alzheimer's disease (AD). This study compared overhead and first-person view perspective taking abilities in patients with mild cognitive impairment (MCI) and AD. A newly developed Arena Perspective Taking Task (APTT), using an environment of a circular arena, was used to compare 23 AD patients and 38 amnestic MCI patients with 18 healthy controls. The results were contrasted with a published perspective taking test (Standardized Road-Map Test of Direction Sense, RMTDS). The AD group was impaired in both overhead and first-person view APTT versions, but the impairment in the overhead view version applied specifically to women. Patients with aMCI were impaired in the first-person view but not in the overhead view version. Substantial sexual differences were found in the overhead but not in the first-person view APTT version. The RMTDS resembled both APTT versions: patients with aMCI were impaired in this test and also women in both patient groups were less accurate than men. Using the receiver operating characteristic analysis, the highest predictive power for MCI and AD patients diagnosis versus controls was observed for their success rate in the first-person view version. The results suggest distinction between overhead and first-person view perspective taking in the impairment of aMCI patients and the sex differences. The first-person view perspective taking is a potentially important candidate psychological marker for AD.
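The abstract above refers to receiver operating characteristic (ROC) analysis for estimating the predictive power of a task score. The minimal sketch below illustrates that kind of analysis with scikit-learn; the scores, group sizes, and threshold choice are synthetic placeholders and are not taken from the cited study.

```python
# Illustrative only: evaluating a task score (e.g., first-person view success
# rate) as a diagnostic marker with ROC analysis. All data are synthetic.
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(0)
# 1 = patient, 0 = healthy control; hypothetical success rates in [0, 1]
labels = np.concatenate([np.ones(40), np.zeros(20)])
scores = np.concatenate([rng.normal(0.45, 0.15, 40),   # patients: lower success
                         rng.normal(0.75, 0.10, 20)])  # controls: higher success

# Lower scores indicate impairment, so flip the sign for the ROC convention
auc = roc_auc_score(labels, -scores)
fpr, tpr, thresholds = roc_curve(labels, -scores)

# Pick the cut-off maximising Youden's J = sensitivity + specificity - 1
j = tpr - fpr
best = int(np.argmax(j))
print(f"AUC = {auc:.2f}, cut-off = {-thresholds[best]:.2f}, "
      f"sensitivity = {tpr[best]:.2f}, specificity = {1 - fpr[best]:.2f}")
```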
---
paper_title: Vection and visually induced motion sickness: how are they related?
paper_content:
The occurrence of visually induced motion sickness has been frequently linked to the sensation of illusory self-motion (so-called vection), however, the precise nature of this relationship is still not fully understood. To date, it is still a matter of debate whether or not vection is a necessary prerequisite for visually induced motion sickness (VIMS). That is, can there be visually induced motion sickness without any sensation of self-motion? In this paper, we will describe the possible nature of this relationship, review the literature that may speak to this relationship (including theoretical accounts of vection and VIMS), and offer suggestions with respect to operationally defining and reporting these phenomena in future.
---
paper_title: Virtual Reality and Serious Games in Healthcare
paper_content:
This chapter discusses the applications and solutions of emerging Virtual Reality (VR) and video games technologies in the healthcare sector, e.g. physical therapy for motor rehabilitation, exposure therapy for psychological phobias, and pain relief. Section 2 reviews state-of-the-art interactive devices used in current VR systems and high-end games such as sensor-based and camera-based tracking devices, data gloves, and haptic force feedback devices. Section 3 investigates recent advances and key concepts in games technology, including dynamic simulation, flow theory, adaptive games, and their possible implementation in serious games for healthcare. Various serious games are described in this section: some were designed and developed for specific healthcare purposes, e.g. BreakAway (2009)’s Free Dive, HopeLab (2006)’s Re-Mission, and Ma et al. (2007)’s VR game series, others were utilising off-the-shelf games such as Nintendo Wii sports for physiotherapy. A couple of experiments of using VR systems and games for stroke rehabilitation are highlighted in section 4 as examples to showcase the benefits and impacts of these technologies to conventional clinic practice. Finally, section 5 points some future directions of applying emerging games technologies in healthcare, such as augmented reality, Wii-mote motion control system, and even full body motion capture and controller free games technology demonstrated recently on E3 2009 which have great potentials to treat motor disorders, combat obesity, and other healthcare applications.
---
paper_title: Moving from Virtual Reality Exposure-Based Therapy to Augmented Reality Exposure-Based Therapy: A Review
paper_content:
This paper reviews the move from virtual reality exposure-based therapy (VRET) to augmented reality exposure-based therapy (ARET). Unlike virtual reality (VR), which entails a complete virtual environment (VE), augmented reality (AR) limits itself to producing certain virtual elements to then merge them into the view of the physical world. Although the general public may only have become aware of AR in the last few years, AR type applications have been around since beginning of the 20th century. Since, then, technological developments have enabled an ever increasing level of seamless integration of virtual and physical elements into one view. Like VR, AR allows the exposure to stimuli which, due to various reasons, may not be suitable for real-life scenarios. As such, AR has proven itself to be a medium through which individuals suffering from specific phobia can be exposed “safely” to the object(s) of their fear, without the costs associated with programming complete virtual environments. Thus, ARET can offer an efficacious alternative to some less advantageous exposure-based therapies. Above and beyond presenting what has been accomplished in ARET, this paper also raises some ARET related issues, and proposes potential avenues to be followed. These include the definition of an AR related term, the type of measures to be used to qualify the user’s experience in an augmented reality environment (ARE), the development of alternative geospatial referencing systems, as well as the potential use of ARET to treat social phobia. Overall, it may be said that the use of ARET, although promising, is still in its infancy but that, given a continued cooperation between clinical and technical teams, ARET has the potential of going well beyond the treatment of small animal phobia.
---
paper_title: Controlling Social Stress in Virtual Reality Environments
paper_content:
Virtual reality exposure therapy has been proposed as a viable alternative in the treatment of anxiety disorders, including social anxiety disorder. Therapists could benefit from extensive control of anxiety-eliciting stimuli during virtual exposure. Two stimulus controls are studied here: the social dialogue situation, and the dialogue feedback responses (negative or positive) between a human and a virtual character. In the first study, 16 participants were exposed in three virtual reality scenarios: a neutral virtual world, blind date scenario, and job interview scenario. Results showed a significant difference between the three virtual scenarios in the level of self-reported anxiety and heart rate. In the second study, 24 participants were exposed to a job interview scenario in a virtual environment where the ratio between negative and positive dialogue feedback responses of a virtual character was systematically varied on-the-fly. Results yielded that within a dialogue the more positive dialogue feedback resulted in less self-reported anxiety, lower heart rate, and longer answers, while more negative dialogue feedback of the virtual character resulted in the opposite. The correlations between on the one hand the dialogue stressor ratio and on the other hand the means of SUD score, heart rate and audio length in the eight dialogue conditions showed a strong relationship: r(6) = 0.91, p = 0.002; r(6) = 0.76, p = 0.028 and r(6) = -0.94, p = 0.001 respectively. Furthermore, more anticipatory anxiety reported before exposure was found to coincide with more self-reported anxiety, and shorter answers during the virtual exposure. These results demonstrate that social dialogues in a virtual environment can be effectively manipulated for therapeutic purposes.
---
paper_title: How we experience immersive virtual environments: the concept of presence and its measurement *
paper_content:
This paper reviews the concept of presence in immersive virtual environments, the sense of being there, signalled by people acting and responding realistically to virtual situations and events. We argue that presence is a unique phenomenon that must be distinguished from the degree of engagement and involvement in the portrayed environment. We argue that there are three necessary conditions for presence: (a) a consistent low-latency sensorimotor loop between sensory data and proprioception; (b) statistical plausibility: images must be statistically plausible in relation to the probability distribution of images over natural scenes. A constraint on this plausibility is the level of immersion; (c) behaviour-response correlations: presence may be enhanced and maintained over time by appropriate correlations between the state and behaviour of participants and responses within the environment, correlations that show appropriate responses to the activity of the participants. We conclude with a discussion of methods for assessing whether presence occurs, and in particular recommend the approach of comparison with ground truth and give some examples of this.
---
paper_title: Sense of Presence and Metacognition Enhancement in Virtual Reality Exposure Therapy in the Treatment of Social Phobias and the Fear of Flying
paper_content:
The aim of this research effort is to identify feeling-of-presence and metacognitive amplifiers over existing well-established VRET treatment methods. Real-time projection of the patient in virtual environments during stimulus exposure and electroencephalography (EEG) report sharing are among the techniques used to achieve the desired result. Starting from theoretical inferences, the work moves towards a proof-of-concept prototype, which has been developed as a realization of the proposed method. The evaluation of the prototype was carried out with an expert team of 28 therapists testing the fear of public speaking and fear of flying case studies.
---
paper_title: A Dual-Modal Virtual Reality Kitchen for (Re)Learning of Everyday Cooking Activities in Alzheimer's Disease
paper_content:
Everyday action impairment is one of the diagnostic criteria of Alzheimer's disease and is associated with many serious consequences, including loss of functional autonomy and independence. It has been shown that the (re)learning of everyday activities is possible in Alzheimer's disease by using error reduction teaching approaches in naturalistic clinical settings. The purpose of this study is to develop a dual-modal virtual reality platform for training in everyday cooking activities in Alzheimer's disease and to establish its value as a training tool for everyday activities in these patients. Two everyday tasks and two error reduction learning methods were implemented within a virtual kitchen. Two patients with Alzheimer's disease and two healthy elderly controls were tested. All subjects were trained in two learning sessions on two comparable cooking tasks. Within each group (i.e., patients and controls), the order of the training methods was counterbalanced. Repeated measure analysis before and after learning was performed. A questionnaire of presence and a verbal interview were used to obtain information about the subjective responses of the participants to the VR experience. The results in terms of errors, omissions, and perseverations (i.e., repetitive behaviors) indicate that the patients performed worse than the controls before learning, but that they reached a level of performance similar to that of the controls after a short learning session, regardless of the learning method employed. This finding provides preliminary support for the value of the dual-modal virtual reality platform for training in everyday cooking activities in Alzheimer's disease. However, further work is needed before it is ready for clinical application.
---
paper_title: Evaluation of a virtual reality‐based memory training programme for Hong Kong Chinese older adults with questionable dementia: a pilot study
paper_content:
Background ::: ::: Older adults with questionable dementia are at risk of progressing to dementia, and early intervention is considered important. The present study investigated the effectiveness of a virtual reality (VR)-based memory training for older adults with questionable dementia. ::: ::: ::: ::: Methods ::: ::: A pre-test and post-test design was adopted. Twenty and 24 older adults with questionable dementia were randomly assigned to a VR-based and a therapist-led memory training group, respectively. Primary outcome measures included the Multifactorial Memory Questionnaire and Fuld Object Memory Evaluation. ::: ::: ::: ::: Results ::: ::: Both groups demonstrated positive training effects, with the VR group showing greater improvement in objective memory performance and the non-VR group showing better subjective memory subtest results in the Multifactorial Memory Questionnaire. ::: ::: ::: ::: Conclusion ::: ::: The use of VR seems to be acceptable for older adults with questionable dementia. Further study on the effect of educational background and memory training modality (visual, auditory) is warranted. Copyright © 2011 John Wiley & Sons, Ltd.
---
paper_title: Virtual reality and neuropsychology: upgrading the current tools.
paper_content:
Background: Virtual reality (VR) is an evolving technology that has been applied in various aspects of medicine, including the treatment of phobia disorders, pain distraction interventions, surgical training, and medical education. These applications have served to demonstrate the various assets offered through the use of VR. Objective: To provide a background and rationale for the application of VR to neuropsychological assessment. Methods: A brief introduction to VR technology and a review of current ongoing neuropsychological research that integrates the use of this technology. Conclusions: VR offers numerous assets that may enhance current neuropsychological assessment protocols and address many of the
---
paper_title: Non-Pharmacological Intervention for Memory Decline
paper_content:
Non-pharmacological intervention for memory difficulties in healthy older adults, as well as those with brain damage and neurodegenerative disorders, has gained much attention in recent years. The two main reasons that explain this growing interest in memory rehabilitation are the limited efficacy of current drug therapies and the plasticity of the human central nervous system and the discovery that during aging, the connections in the brain are not fixed but retain the capacity to change with learning. Moreover, several studies have reported enhanced cognitive performance in patients with neurological disease, following non-invasive brain stimulation [i.e., repetitive transcranial magnetic stimulation and transcranial direct current stimulation to specific cortical areas]. The present review provides an overview of memory rehabilitation in individuals with mild cognitive impairment and in patients with Alzheimer’s disease with particular regard to cognitive rehabilitation interventions focused on memory and non-invasive brain stimulation. Reviewed data suggest that in patients with memory deficits, memory intervention therapy could lead to performance improvements in memory; nevertheless, further studies need to be conducted in order to establish the real value of this approach.
---
paper_title: NeuroVR 2--a free virtual reality platform for the assessment and treatment in behavioral health care.
paper_content:
At MMVR 2007 we presented NeuroVR (http://www.neurovr.org), a free virtual reality platform based on open-source software. The software allows non-expert users to adapt the content of 14 pre-designed virtual environments to the specific needs of the clinical or experimental setting. Following the feedback of the 2000 users who downloaded the first versions (1 and 1.5), we developed a new version--NeuroVR 2 (http://www.neurovr2.org)--that improves the possibility for the therapist to enhance the patient's feeling of familiarity and intimacy with the virtual scene, by using external sounds, photos or videos. Moreover, when running a simulation, the system offers a set of standard features that help increase the realism of the simulated scene. These include collision detection to control movements in the environment, realistic walk-style motion, advanced lighting techniques for enhanced image quality, and streaming of video textures using alpha channel for transparency.
---
paper_title: VREAD: A Virtual Simulation to Investigate Cognitive Function in the Elderly
paper_content:
Recent studies have shown that people with mild cognitive impairment (MCI) may convert to Alzheimer's disease (AD) over time, although not all MCI cases progress to dementia. The diagnosis of MCI is important to allow prompt treatment and disease management before the neurons degenerate to a stage beyond repair. Hence, the ability to obtain a method of identifying MCI is of great importance. VREAD is a quick, easy and friendly tool that was developed with the aim of investigating cognitive functioning in a group of healthy elderly and those with MCI. It focuses on the task of following a route, since Topographical Disorientation (TD) is common in AD. The results show that, using the proposed weighting function, this novel simulation was able to discriminate between MCI and healthy elderly participants with about 90% overall accuracy.
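The abstract does not specify the weighting function itself, so the following is a purely hypothetical sketch of how route-following measures could be combined into a single weighted score and thresholded to flag possible MCI. The feature names, weights, and cut-off are invented for illustration and are not taken from the VREAD paper.

```python
# Hypothetical weighted-score rule in the spirit of the "weighting function"
# mentioned above; all numbers are illustrative placeholders.
from dataclasses import dataclass

@dataclass
class RouteTrial:
    wrong_turns: int          # deviations from the target route
    completion_time_s: float  # total time to finish the route
    pauses: int               # hesitations longer than a few seconds

# Illustrative weights: a larger weighted sum suggests poorer performance
WEIGHTS = {"wrong_turns": 2.0, "completion_time_s": 0.01, "pauses": 1.0}
CUTOFF = 4.0  # hypothetical decision threshold

def risk_score(trial: RouteTrial) -> float:
    return (WEIGHTS["wrong_turns"] * trial.wrong_turns
            + WEIGHTS["completion_time_s"] * trial.completion_time_s
            + WEIGHTS["pauses"] * trial.pauses)

def flag_possible_mci(trial: RouteTrial) -> bool:
    return risk_score(trial) >= CUTOFF

if __name__ == "__main__":
    trial = RouteTrial(wrong_turns=3, completion_time_s=240.0, pauses=2)
    print(risk_score(trial), flag_possible_mci(trial))
```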
---
paper_title: Video game training enhances cognitive control in older adults
paper_content:
Cognitive control is defined by a set of neural processes that allow us to interact with our complex environment in a goal-directed manner. Humans regularly challenge these control processes when attempting to simultaneously accomplish multiple goals (multitasking), generating interference as the result of fundamental information processing limitations. It is clear that multitasking behaviour has become ubiquitous in today's technologically dense world, and substantial evidence has accrued regarding multitasking difficulties and cognitive control deficits in our ageing population. Here we show that multitasking performance, as assessed with a custom-designed three-dimensional video game (NeuroRacer), exhibits a linear age-related decline from 20 to 79 years of age. By playing an adaptive version of NeuroRacer in multitasking training mode, older adults (60 to 85 years old) reduced multitasking costs compared to both an active control group and a no-contact control group, attaining levels beyond those achieved by untrained 20-year-old participants, with gains persisting for 6 months. Furthermore, age-related deficits in neural signatures of cognitive control, as measured with electroencephalography, were remediated by multitasking training (enhanced midline frontal theta power and frontal-posterior theta coherence). Critically, this training resulted in performance benefits that extended to untrained cognitive control abilities (enhanced sustained attention and working memory), with an increase in midline frontal theta power predicting the training-induced boost in sustained attention and preservation of multitasking improvement 6 months later. These findings highlight the robust plasticity of the prefrontal cognitive control system in the ageing brain, and provide the first evidence, to our knowledge, of how a custom-designed video game can be used to assess cognitive abilities across the lifespan, evaluate underlying neural mechanisms, and serve as a powerful tool for cognitive enhancement.
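NeuroRacer's adaptive training mode is described above only at a high level. The sketch below shows a generic staircase-style difficulty adjustment of the kind commonly used in adaptive training games, keeping performance near a target accuracy; it is not the authors' published algorithm, and the target, step size, and example accuracies are assumptions.

```python
# Generic staircase-style difficulty adaptation (illustrative sketch only).
def adapt_difficulty(level: float, accuracy: float,
                     target: float = 0.80, step: float = 0.05,
                     lo: float = 0.0, hi: float = 1.0) -> float:
    """Raise difficulty when the trainee beats the target accuracy, lower it otherwise."""
    if accuracy > target:
        level = min(hi, level + step)
    elif accuracy < target:
        level = max(lo, level - step)
    return level

# Example: block-by-block training loop with made-up accuracies
level = 0.30
for block_accuracy in [0.95, 0.90, 0.70, 0.85, 0.60]:
    level = adapt_difficulty(level, block_accuracy)
    print(f"accuracy={block_accuracy:.2f} -> next difficulty={level:.2f}")
```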
---
paper_title: Detecting Everyday Action Deficits in Alzheimer’s Disease Using a Nonimmersive Virtual Reality Kitchen
paper_content:
Alzheimer’s disease (AD) causes impairments affecting instrumental activities of daily living (IADL). Transdisciplinary research in neuropsychology and virtual reality has fostered the development of ecologically valid virtual tools for the assessment of IADL, using simulations of real life activities. Few studies have examined the benefits of this approach in AD patients. Our aim was to examine the utility of a non-immersive virtual coffee task (NI-VCT) for assessment of IADL in these patients. We focus on the assessment results obtained from a group of 24 AD patients on a task designed to assess their ability to prepare a virtual cup of coffee, using a virtual coffee machine. We compared performance on the virtual task to an identical daily living task involving the actual preparation of a cup of coffee, as well as to global cognitive, executive, and caregiver-reported IADL functioning. Relative to 32 comparable, healthy elderly (HE) controls, AD patients performed worse than HE controls on all tasks. Correlation analyses revealed that NI-VCT measures were related to all other neuropsychological measures. Moreover, regression analyses demonstrated that performance on the NI-VCT predicted actual task performance and caregiver-reported IADL functioning. Our results provide initial support for the utility of our virtual kitchen for assessment of IADL in AD patients. ( JINS , 2014, 20 , 1–10)
---
paper_title: Interactive computer-training as a therapeutic tool in Alzheimer’s disease
paper_content:
Abstract The current study sought to evaluate a novel kind of interactive computer-based cognitive training (ICT) in Alzheimer’s disease (AD). AD patients (N = 9), age- and gender-matched patients with a major depressive episode (N = 9), and healthy control subjects (N = 10) were trained to use an ICT program that relates to activities of daily living (ADL). Digital photographs of a shopping route were implemented in a close-to-reality simulation on a computer touch-screen. The task was to find a predefined shopping route, to buy three items, and to answer correctly 10 multiple-choice questions addressing knowledge related to the virtual tasks. Training performance was rated using the number of mistakes (wrong way), time needed for the tasks, number of correct multiple-choice answers, and of repeat of instruction. Compared to normal controls and depressed patients, AD patients performed significantly worse with regard to all variables. Within a 4-week training period including 12 sessions, however, substantial training gains were observed, including a significant reduction of mistakes. Training effects were sustained until follow-up 3 weeks later. The performance of the depressed patients and the normal controls improved as well, with no difference between the two groups. Self-reported effects revealed that the training was well perceived. Thus, the task performance of AD patients improved substantially and subjects appeared to have liked this approach to ICT. New interactive media, therefore, may yield interesting opportunities for rehabilitation and (psycho)therapeutic interventions.
---
paper_title: Allothetic orientation and sequential ordering of places is impaired in early stages of Alzheimer's disease: corresponding results in real space tests and computer tests
paper_content:
Spatial disorientation and learning problems belong to the integral symptoms of Alzheimer's disease (AD). A circular arena for human subjects (2.9 m diameter, 3 m high) was equipped with a computerized tracking system, similar to that used in animals. We studied navigation in 11 subjects diagnosed with early stages of Alzheimer's disease (AD), 27 subjects with subjective problems with memory or concentration, and 10 controls. The task was to locate one or several unmarked goals using the arena geometry, starting position and/or cues on the arena wall. Navigation in a real version and a computer map view version of the tests yielded similar results. The AD group was severely impaired relative to controls in navigation to one hidden goal in eight rotated positions. The impairment was largest when only the cues on the wall could be used for orientation. Also, the AD group recalled worse than controls the order of six sequentially presented locations, though they recalled similarly to controls the positions of the locations. The group with subjective problems was not impaired in any of the tests. Our results document the spatial navigation and non-verbal episodic memory impairment in the AD. Similar results in real and map view computer tests support the use of computer tests in diagnosis of cognitive disturbances.
---
paper_title: Age and dementia related differences in spatial navigation within an immersive virtual environment.
paper_content:
Immersive virtual reality (VR) is an innovative tool that can allow study of human spatial navigation in a realistic but controlled environment. The purpose of this study was to examine age- and Alzheimer's disease-related differences in route learning and memory using VR. The spatial memory task took place in a VR environment set up on a computer workstation. Participants were immersed by putting video unit goggles over their eyes using a head-mounted display. Participants were shown a path within a virtual city, and then had to navigate it as quickly and accurately as possible. They were granted four learning trials on this path. An interference path was then presented before asking participants to re-navigate the first route at short and long delays. Finally, participants were tested for recognition of the city's buildings and objects. Young adults were consistently quicker and more accurate in their path navigation than older participants, whilst those patients with Alzheimer's Disease made more mistakes on the recognition task in particular, being more likely to mistakenly affirm having seen an element in the city when it was in fact a foil. Our study would suggest that spatial navigation is susceptible to the effects of aging and Alzheimer's Disease. The potential applications of VR to the study of spatial navigation are seemingly important in that they may help place the science of neuropsychology on firmer scientific grounds in terms of its validity to real-world function and dysfunction.
---
paper_title: An innovative virtual reality system for mild cognitive impairment: Diagnosis and evaluation
paper_content:
In advanced countries throughout the world, the population of Alzheimer's Disease (AD) patients has been gradually increasing with the aging of the society. As a result, how to diagnose AD early and give necessary treatment and training to AD patients has become an important research topic, especially for those with mild cognitive impairment (MCI), whose executive functions such as response inhibition, cognitive flexibility, attention switching and planning may display evident disorder and impairment. Unlike traditional paper tests and subjective assessments by the patient's relatives, this study adopts virtual reality (VR) technology to develop a novel diagnosis & assessment system, which uses a head mounted display (HMD), game technology and sensors to generate an interactive and panoramic scenario—a virtual convenience store—for assessment of executive functions and memory. A variety of tasks in a multi-layered difficulty hierarchy, such as memorizing a shopping list, looking for certain goods, and checking out, has been designed for customized and adaptive assessment, training, and treatment of MCI. In the meantime, the study also records test-takers' performance data (including path and central-vision movement) in the process of all tasks for the development of a novel diagnosis & assessment method. Moreover, test-takers' technology acceptance is measured for assessing the elderly's subjective perception of new technology and discussing the topic of human-machine interaction. In the study, tests on 2 healthy adults have been completed, the system's functionality has been preliminarily verified, and test-takers' subjective perception of the system has been investigated.
---
paper_title: Effects of Enactment in Episodic Memory: A Pilot Virtual Reality Study with Young and Elderly Adults
paper_content:
None of the previous studies on aging have tested the influence of action with respect to the degree of interaction with the environment (active or passive navigation) and the source of itinerary choice (self or externally imposed), on episodic memory encoding. The aim of this pilot study was to explore the influence of these factors on feature binding (the association between what, where and when) in episodic memory and on the subjective sense of remembering. Navigation in a virtual city was performed by 64 young and 64 older adults in one of four modes of exploration: (1) passive condition where participants were immersed as passengers of a virtual car (no interaction, no itinerary control), (2) itinerary control (the subject chose the itinerary, but did not drive the car), (3) low or (4) high navigation control (the subject just moved the car on rails or drove the car with a steering wheel and a gas pedal on a fixed itinerary, respectively). The task was to memorize as many events encountered in the virtual environment as possible along with their factual (what), spatial (where), and temporal (when) details, and then to perform immediate and delayed memory tests. An age-related decline was evidenced for immediate and delayed feature binding. Compared to passive and high navigation conditions, and regardless of age groups, feature binding was enhanced by low navigation and itinerary control conditions. The subjective sense of remembering was boosted by the itinerary control in older adults. Memory performance following high navigation was specifically linked to variability in executive functions. The present findings suggest that the decision of the itinerary is beneficial to boost episodic memory in aging, although it does not eliminate age-related deficits. Active navigation can also enhance episodic memory when it is not too demanding for subjects' cognitive resources.
---
paper_title: Can a novel computerized cognitive screening test provide additional information for early detection of Alzheimer's disease?
paper_content:
Background: Virtual reality testing of everyday activities is a novel type of computerized assessment that measures cognitive, executive, and motor performance as a screening tool for early dementia. This study used a virtual reality day-out task (VR-DOT) environment to evaluate its predictive value in patients with mild cognitive impairment (MCI). Methods: One hundred thirty-four patients with MCI were selected and compared with 75 healthy control subjects. Participants received an initial assessment that included VR-DOT, a neuropsychological evaluation, magnetic resonance imaging (MRI) scan, and event-related potentials (ERPs). After 12 months, participants were assessed again with MRI, ERP, VR-DOT, and neuropsychological tests. Results: At the end of the study, we differentiated two subgroups of patients with MCI according to their clinical evolution from baseline to follow-up: 56 MCI progressors and 78 MCI nonprogressors. VR-DOT performance profiles correlated strongly with existing predictive biomarkers, especially the ERP and MRI biomarkers of cortical thickness. Conclusions: Compared with ERP, MRI, or neuropsychological tests alone, the VR-DOT could provide additional predictive information in a low-cost, computerized, and noninvasive way.
---
paper_title: How different spatial representations interact in virtual environments: the role of mental frame syncing
paper_content:
This experiment is aimed at understanding how egocentric experiences, allocentric viewpoint-dependent representations, and allocentric viewpoint-independent representations interact when encoding and retrieving a spatial environment. Although several cognitive theories have highlighted the interaction between reference frames, it is less clear what role a real-time presentation of an allocentric viewpoint-dependent representation plays in the spatial organization of information. Sixty participants were asked to navigate in two virtual cities to memorize the position of one hidden object. Half of the participants had the possibility to visualize the virtual city with an interactive aerial view. Then, they were required to find the position of the object in three different experimental conditions ("retrieval with an interactive aerial view" vs. "retrieval on a map" vs. "retrieval without an interactive aerial view"). Results revealed that participants were significantly more precise in retrieving the position of the object when immersed in an egocentric experience with the interactive aerial view. The retrieval of spatial information is facilitated by the presence of the interactive aerial view of the city, since it provides a real-time allocentric viewpoint-dependent representation. More participants with a high preference for using cardinal points tend to be more accurate when asked to retrieve the position of the object on the map. As suggested by the mental frame syncing hypothesis, the presence of an allocentric …
---
paper_title: Spatial navigation impairment is proportional to right hippocampal volume
paper_content:
Cognitive deficits in older adults attributable to Alzheimer's disease (AD) pathology are featured early on by hippocampal impairment. Among these individuals, deterioration in spatial navigation, manifested by poor hippocampus-dependent allocentric navigation, may occur well before the clinical onset of dementia. Our aim was to determine whether allocentric spatial navigation impairment would be proportional to right hippocampal volume loss irrespective of general brain atrophy. We also contrasted the respective spatial navigation scores of the real-space human Morris water maze with its corresponding 2D computer version. We included 42 cognitively impaired patients with either amnestic mild cognitive impairment (n = 23) or mild and moderate AD (n = 19), and 14 cognitively intact older controls. All participants underwent 1.5T MRI brain scanning with subsequent automatic measurement of the total brain and hippocampal (right and left) volumes. Allocentric spatial navigation was tested in the real-space version of the human Morris water maze and in its corresponding computer version. Participants used two navigational cues to locate an invisible goal independent of the start position. We found that smaller right hippocampal volume was associated with poorer navigation performance in both the real-space (β = −0.62, P 0.59) subjects. The respective real-space and virtual scores strongly correlated with each other. Our findings indicate that the right hippocampus plays a critical role in allocentric navigation, particularly when cognitive impairment is present.
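As a point of reference for the standardized coefficient reported above (β = −0.62), the sketch below shows how such a beta is obtained by z-scoring both variables and fitting an ordinary least-squares slope. The data are synthetic and the variable names are assumptions; nothing here reproduces the cited study's analysis.

```python
# Illustrative sketch: computing a standardized regression coefficient (beta)
# between a predictor (hippocampal volume) and an outcome (navigation error).
import numpy as np

rng = np.random.default_rng(1)
hippocampal_volume = rng.normal(3.0, 0.4, 56)                      # e.g., cm^3 (synthetic)
navigation_error = 10 - 2.5 * hippocampal_volume + rng.normal(0, 1.0, 56)

def standardized_beta(x: np.ndarray, y: np.ndarray) -> float:
    zx = (x - x.mean()) / x.std(ddof=1)
    zy = (y - y.mean()) / y.std(ddof=1)
    # With z-scored variables, the OLS slope equals the Pearson correlation
    return float(np.polyfit(zx, zy, 1)[0])

print(f"standardized beta = {standardized_beta(hippocampal_volume, navigation_error):.2f}")
```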
---
paper_title: BrightArm™ therapy for patients with advanced dementia: A feasibility study
paper_content:
Virtual reality use in cognitive rehabilitation of advanced dementia has been sparse. Three residents of a Dementia Ward participated in a feasibility study of the BrightArm™ system. They played custom games targeting several cognitive domains including short-term and working memory. Clinician observation revealed a positive effect on emotive state, with technology well accepted by all participants.
---
paper_title: Spatial navigation deficit in amnestic mild cognitive impairment
paper_content:
Patients with Alzheimer's disease (AD) frequently have difficulties with spatial orientation in their day-to-day life. Although AD is typically preceded by amnestic mild cognitive impairment (MCI), spatial navigation has not yet been studied in MCI. Sixty-five patients were divided into five groups: probable AD (n = 21); MCI, further classified as amnestic MCI single domain (n = 11); amnestic MCI multiple domain (n = 18), or nonamnestic MCI (n = 7), and subjective memory complaints (n = 8). These patients, together with a group of healthy control subjects (n = 26), were tested by using a four-subtest task that required them to locate an invisible goal inside a circular arena. Each subtest began with an overhead view of the arena shown on a computer monitor. This was followed by real navigation inside the actual space, an enclosed arena 2.9 m in diameter. Depending on the subtest, the subjects could use the starting position and/or cues on the wall for navigation. The subtests thus were focused on allocentric and egocentric navigation. The AD group and amnestic MCI multiple-domain group were impaired in all subtests. The amnestic MCI single-domain group was impaired significantly in subtests focused on allocentric orientation and at the beginning of the real space egocentric subtest, suggesting impaired memory for allocentric and real space configurations. Our results suggest that spatial navigation impairment occurs early in the development of AD and can be used for monitoring of the disease progression or for evaluation of presymptomatic AD.
---
paper_title: Temporal Order Memory Assessed during Spatiotemporal Navigation As a Behavioral Cognitive Marker for Differential Alzheimer's Disease Diagnosis
paper_content:
Episodic memory impairment is a hallmark for early diagnosis of Alzheimer's disease. Most current tests used to diagnose Alzheimer's disease do not assess the spatiotemporal properties of episodic memory and lead to false-positive or -negative diagnosis. We used a newly developed, nonverbal navigation test for humans, based on the objective experimental testing of a spatiotemporal experience, to differentiate Alzheimer's disease at the mild stage (N = 16 patients) from frontotemporal lobar degeneration (N = 11 patients) and normal aging (N = 24 subjects). Comparing navigation parameters and standard neuropsychological tests, temporal order memory appeared to have the highest predictive power for mild Alzheimer's disease diagnosis versus frontotemporal lobar degeneration and normal aging. This test was also nonredundant with classical neuropsychological tests. In conclusion, our results suggest that temporal order memory tested in a spatial navigation task may provide a selective behavioral marker of Alzheimer's disease.
---
paper_title: Impaired Allocentric Spatial Memory Underlying Topographical Disorientation
paper_content:
The cognitive processes supporting spatial navigation are considered in the context of a patient (CF) with possible very early Alzheimer's disease who presents with topographical disorientation. Her verbal memory and her recognition memory for unknown buildings, landmarks and outdoor scenes was intact, although she showed an impairment in face processing. By contrast, her navigational ability, quantitatively assessed within a small virtual reality (VR) town, was significantly impaired. Interestingly, she showed a selective impairment in a VR object-location memory test whenever her viewpoint was shifted between presentation and test, but not when tested from the same viewpoint. We suggest that a specific impairment in locating objects relative to the environment rather than relative to the perceived viewpoint (i.e. allocentric rather than egocentric spatial memory) underlies her topographical disorientation. We discuss the likely neural bases of this deficit in the light of related studies in humans and animals, focusing on the hippocampus and related areas. The specificity of our test indicates a new way of assessing topographical disorientation, with possible application to the assessment of progressive dementias such as Alzheimer's disease.
---
paper_title: Spatial memory impairments in amnestic mild cognitive impairment in a virtual radial arm maze
paper_content:
Author contributions: Dr Jun-Young Lee conceived the study and acquired and interpreted the data. Ms Sooyeon Kho and Ms Hye Bin Yoo analyzed the data and edited the article. Ms Soowon Park made major revisions to the Introduction section. Dr Jung-Seok Choi and Jun Soo Kwon acquired the patients and their clinical reports. The corresponding authors, Dr Kyung Ryeol Cha and Dr Hee-Yeon Jung, designed the research and the experiments. All authors made substantial contributions to the conception and design of the paper, the acquisition of data, or the analysis and interpretation of data, and drafted the article or revised it for critically important content. Disclosure: The authors report no conflict of interest in this work. No competing financial interests exist.
---
paper_title: Involving Persons with Dementia in the Evaluation of Outdoor Environments
paper_content:
ABSTRACT Using virtual reality (VR), we examined the barriers to and facilitators of functioning outdoors in persons with dementia (PwD) and investigated the generalizability of findings in VR to the real world. An existing town center was modeled in VR. PwD took part in both real-world and VR walks. Based on the results, the model was redesigned and then tested again. Performance on the walks improved, and potentially beneficial adaptations to outdoor environments were identified, but limitations of VR as a representation of the real world were also identified. We conclude that VR models, together with a rigorous behavioral testing method, can be a useful tool for the evaluation of outdoor environments and for identifying improvements for PwD.
---
paper_title: Detecting navigational deficits in cognitive aging and Alzheimer disease using virtual reality
paper_content:
Background: Older adults get lost, in many cases because of recognized or incipient Alzheimer disease (AD). In either case, getting lost can be a threat to individual and public safety, as well as to personal autonomy and quality of life. Here we compare our previously described real-world navigation test with a virtual reality (VR) version simulating the same navigational environment. Methods: Quantifying real-world navigational performance is difficult and time-consuming. VR testing is a promising alternative, but it has not been compared with closely corresponding real-world testing in aging and AD. We have studied navigation using both real-world and virtual environments in the same subjects: young normal controls (YNCs, n = 35), older normal controls (ONCs, n = 26), patients with mild cognitive impairment (MCI, n = 12), and patients with early AD (EAD, n = 14). Results: We found close correlations between real-world and virtual navigational deficits that increased across groups from YNC to ONC, to MCI, and to EAD. Analyses of subtest performance showed similar profiles of impairment in real-world and virtual testing in all four subject groups. The ONC, MCI, and EAD subjects all showed greatest difficulty in self-orientation and scene localization tests. MCI and EAD patients also showed impaired verbal recall about both test environments. Conclusions: Virtual environment testing provides a valid assessment of navigational skills. Aging and Alzheimer disease (AD) share the same patterns of difficulty in associating visual scenes and locations, which is complicated in AD by the accompanying loss of verbally mediated navigational capacities. We conclude that virtual navigation testing reveals deficits in aging and AD that are associated with potentially grave risks to our patients and the community. Glossary: AD = Alzheimer disease; EAD = early Alzheimer disease; MCI = mild cognitive impairment; MMSE = Mini-Mental State Examination; ONC = older normal control; std. wt. = standardized weight; THSD = Tukey honestly significant difference; VR = virtual reality; YNC = young normal control.
---
paper_title: Controlling Memory Impairment in Elderly Adults Using Virtual Reality Memory Training: A Randomized Controlled Pilot Study
paper_content:
Background. Memory decline is a prevalent aspect of aging but may also be the first sign of cognitive pathology. Virtual reality (VR) using immersion and interaction may provide new approaches to the treatment of memory deficits in elderly individuals. Objective. The authors implemented a VR training intervention to try to lessen cognitive decline and improve memory functions. Methods. The authors randomly assigned 36 elderly residents of a rest care facility (median age 80 years) who were impaired on the Verbal Story Recall Test either to the experimental group (EG) or the control group (CG). The EG underwent 6 months of VR memory training (VRMT) that involved auditory stimulation and VR experiences in path finding. The initial training phase lasted 3 months (3 auditory and 3 VR sessions every 2 weeks), and there was a booster training phase during the following 3 months (1 auditory and 1 VR session per week). The CG underwent equivalent face-to-face training sessions using music therapy. Both groups par...
---
paper_title: Autobiographical Memory Deficits in Alzheimer's Disease
paper_content:
Autobiographical memory comprises memories of one's own past that are characterized by a sense of subjective time and autonoetic awareness. Although autobiographical memory deficits are among the major complaints of patients with dementia, they have rarely been systematically assessed in mild cognitive impairment and Alzheimer's disease. We therefore investigated semantic and episodic aspects of autobiographical memory for remote and recent life periods in a sample of 239 nursing home residents (165 in different stages of Alzheimer's disease, 33 with mild cognitive impairment, and 41 cognitively unimpaired) with respect to potential confounders. Episodic autobiographical memories, especially the richness of details, were impaired early in the course of Alzheimer's disease or even in the preclinical phase, while semantic memories were spared until moderate stages, indicating a dissociation between both memory systems. The examination of autobiographic memory loss can facilitate the clinical diagnosis of Alzheimer's disease.
---
paper_title: Augmented reality annotations to assist persons with Alzheimer's and their caregivers
paper_content:
Persons with Alzheimer's disease (AD) and their caregivers implement diverse strategies to cope with memory loss. A common strategy involves placing tags on drawers or removing cabinet doors to make their contents visible. This study describes the Ambient aNnotation System (ANS), aimed at assisting people suffering from AD and their caregivers with this task. The system has two main modules: The tagging subsystem allows caregivers to create and manage ambient annotations in order to assist people with memory problems. The second subsystem allows people with AD to use a mobile phone to recognize tags in the environment and to receive relevant information in the form of audio, text, or images. The identification of these tags is performed in real time by uploading images from the mobile phone to a server, which uses the SURF algorithm for object recognition. We describe the design and implementation of the system as well as results of the evaluation of its performance and efficiency. ANS can process query images approximately every 2 s and is able to locate users in their homes with a precision of 0.93. A usability study conducted with six subjects determined that audio notifications are more effective than vibrating notifications to alert the user about tags in the environment.
---
paper_title: Is it possible to use highly realistic virtual reality in the elderly? A feasibility study with image-based rendering
paper_content:
Background: Virtual reality (VR) opens up a vast number of possibilities in many domains of therapy. The primary objective of the present study was to evaluate the acceptability for elderly subjects of a VR experience using the image-based rendering virtual environment (IBVE) approach and secondly to test the hypothesis that visual cues using VR may enhance the generation of autobiographical memories. Methods: Eighteen healthy volunteers (mean age 68.2 years) presenting memory complaints with a Mini-Mental State Examination score higher than 27 and no history of neuropsychiatric disease were included. Participants were asked to perform an autobiographical fluency task in four conditions. The first condition was a baseline grey screen, the second was a photograph of a well-known location in the participant's home city (FamPhoto), and the last two conditions displayed VR, i.e., a familiar image-based virtual environment (FamIBVE) consisting of an image-based representation of a known landmark square in the center of the city of experimentation (Nice) and an unknown image-based virtual environment (UnknoIBVE), which was captured in a public housing neighborhood containing unrecognizable building fronts. After each of the four experimental conditions, participants filled in self-report questionnaires to assess the task acceptability (levels of emotion, motivation, security, fatigue, and familiarity). CyberSickness and Presence questionnaires were also assessed after the two VR conditions. Autobiographical memory was assessed using a verbal fluency task and quality of the recollection was assessed using the "remember/know" procedure. Results: All subjects completed the experiment. Sense of security and fatigue were not significantly different between the conditions with and without VR. The FamPhoto condition yielded a higher emotion score than the other conditions (P<0.05). The CyberSickness questionnaire showed that participants did not experience sickness during the experiment across the VR conditions. VR stimulates autobiographical memory, as demonstrated by the increased total number of responses on the autobiographical fluency task and the increased number of conscious recollections of memories for familiar versus unknown scenes (P<0.01). Conclusion: The study indicates that VR using the FamIBVE system is well tolerated by the elderly. VR can also stimulate recollections of autobiographical memory and convey familiarity of a given scene, which is an essential requirement for use of VR during reminiscence therapy.
---
paper_title: A Pilot Evaluation of a Therapeutic Game Applied to Small Animal Phobia Treatment
paper_content:
This study presents the Catch Me game, a therapeutic game for teaching patients how to confront their feared animals during spider and cockroach phobia treatment, and evaluates whether the inclusion of gaming elements into the therapeutic protocol adds value to the treatment. The Catch Me game was designed to help patients learn about their feared animals and to learn how to apply adapted confrontation strategies and techniques with the lowest possible anxiety level. In this study, the Catch Me game was evaluated in terms of knowledge acquisition, anxiety level, self-efficacy belief acquisition, and appeal to both participants and therapists. Data collection consisted of quantitative measures from a sample of 14 participants drawn from a non-clinical population. The results showed that the game significantly improved knowledge and self-efficacy belief regarding the feared animal, and significantly decreased the anxiety level of participants. Moreover, both participants and therapists were highly satisfied with the game.
---
paper_title: GenVirtual: An Augmented Reality Musical Game for Cognitive and Motor Rehabilitation
paper_content:
Electronic games have been used to stimulate cognitive functions such as attention, concentration and memory. This paper presents GenVirtual, an augmented reality musical game proposed to help people with learning disabilities. The intention is to help the patient with the following skills: creativity, attention, memory (storage and retrieval), planning, concentration, ready-response, hearing and visual perception, and motor coordination. The therapist has the flexibility to place the musical and visual elements, allowing him or her to create different scenarios for each patient. GenVirtual uses Augmented Reality technology to allow people with physical disorders to interact with the game. Patients with no fingers can also play this game. GenVirtual was evaluated by a music therapist who considered it a facilitating and motivating game for the learning process and judged that it has the potential to improve the lives of people with special needs.
---
paper_title: Mixing Realities? An Application of Augmented Reality for the Treatment of Cockroach Phobia
paper_content:
Augmented reality (AR) refers to the introduction of virtual elements in the real world. That is, the person is seeing an image composed of a visualization of the real world, and a series of virtual elements that, at that same moment, are super-imposed on the real world. The most important aspect of AR is that the virtual elements supply the person with relevant and useful information that is not contained in the real world. AR has notable potential, and has already been used in diverse fields, such as medicine, the army, coaching, engineering, design, and robotics. Until now, AR has never been used in the scope of psychological treatment. Nevertheless, AR presents various advantages. Just like in the classical systems of virtual reality, it is possible to have total control over the virtual elements that are super-imposed on the real world, and how one interacts with those elements. AR could involve additional advantages; on the one hand, it could be less expensive since it also uses the real world (this does not need to be modeled), and it could facilitate the feeling of presence (the sensation of being there), and reality judgment (the fact of judging the experience as real) of the person since the environment he or she is in, and what he or she is seeing is, in fact, the "reality." In this paper, we present the data of the first case study in which AR has been used for the treatment of a specific phobia, cockroach phobia. It addresses a system of AR that permits exposure to virtual cockroaches super-imposed on the real world. In order to carry out the exposure, the guidelines of Ost with respect to "one-session treatment" were followed. The results are promising. The participant demonstrated notable fear and avoidance in the behavioral avoidance test before the treatment, and not only was an important decrease in the scores of fear and avoidance observed after the treatment, but also the participant was capable of approaching, interacting, and killing live cockroaches immediately following the treatment. The results were maintained at a follow-up conducted 1 month after the termination of the treatment.
---
paper_title: Augmented reality cube game for cognitive training: an interaction study.
paper_content:
There is the potential that cognitive activity may delay cognitive decline in people with mild cognitive impairment. Games provide both cognitive challenge and motivation for repeated use, a prerequisite for long lasting effect. Recent advances in technology introduce several new interaction methods, potentially leading to more efficient, personalized cognitive gaming experiences. In this paper, we present an Augmented Reality (AR) cognitive training game, utilizing cubes as input tools, and we test the cube interaction with a pilot study. The results of the study revealed the marker occlusion problem, and that novice AR users can adjust to the developed AR environment after a small number of sessions.
---
paper_title: Moving from Virtual Reality Exposure-Based Therapy to Augmented Reality Exposure-Based Therapy: A Review
paper_content:
This paper reviews the move from virtual reality exposure-based therapy (VRET) to augmented reality exposure-based therapy (ARET). Unlike virtual reality (VR), which entails a complete virtual environment (VE), augmented reality (AR) limits itself to producing certain virtual elements to then merge them into the view of the physical world. Although the general public may only have become aware of AR in the last few years, AR-type applications have been around since the beginning of the 20th century. Since then, technological developments have enabled an ever-increasing level of seamless integration of virtual and physical elements into one view. Like VR, AR allows exposure to stimuli that, for various reasons, may not be suitable for real-life scenarios. As such, AR has proven itself to be a medium through which individuals suffering from specific phobia can be exposed "safely" to the object(s) of their fear, without the costs associated with programming complete virtual environments. Thus, ARET can offer an efficacious alternative to some less advantageous exposure-based therapies. Above and beyond presenting what has been accomplished in ARET, this paper also raises some ARET-related issues, and proposes potential avenues to be followed. These include the definition of an AR-related term, the type of measures to be used to qualify the user's experience in an augmented reality environment (ARE), the development of alternative geospatial referencing systems, as well as the potential use of ARET to treat social phobia. Overall, it may be said that the use of ARET, although promising, is still in its infancy but that, given a continued cooperation between clinical and technical teams, ARET has the potential of going well beyond the treatment of small animal phobia.
---
paper_title: ON THE CONVERGENCE OF AFFECTIVE AND PERSUASIVE TECHNOLOGIES IN COMPUTER-MEDIATED HEALTH-CARE SYSTEMS
paper_content:
This paper offers a portrayal of how affective computing and persuasive technologies can converge into an effective tool for interfacing biomedical engineering with behavioral sciences and medicine. We describe the characteristics, features, applications, present state of the art, perspectives, and trends of both streams of research. In particular, these streams are analyzed in light of the potential contribution of their convergence for improving computer-mediated health-care systems, by facilitating the modification of patients' attitudes and behaviors, such as engagement and compliance. We propose a framework for future research in this emerging area, highlighting how key constructs and intervening variables should be considered. Some specific implications and challenges posed by the convergence of these two technologies in health care, such as paradigm change, multimodality, patients' attitude improvement, and cost reduction, are also briefly addressed and discussed.
---
paper_title: Using Virtual Reality for Cognitive Training of the Elderly
paper_content:
There is a pressing demand for improving the quality and efficacy of health care and social support services needed by the world’s growing elderly population, especially by those affected by mild cognitive impairment (MCI) and Alzheimer’s disease (AD)-type early-stage dementia. Meeting that demand can significantly benefit from the deployment of innovative, computer-based applications capable of addressing specific needs, particularly in the area of cognitive impairment mitigation and rehabilitation. In that context, we present here our perspective viewpoint on the use of virtual reality (VR) tools for cognitive rehabilitation training, intended to assist medical personnel, health care workers, and other caregivers in improving the quality of daily life activities of people with MCI and AD. We discuss some effective design criteria and developmental strategies and suggest some possibly useful protocols and procedures. The particular innovative supportive advantages offered by the immersive interactive cha...
---
paper_title: Remote measurement of cognitive stress via heart rate variability
paper_content:
Remote detection of cognitive load has many powerful applications, such as measuring stress in the workplace. Cognitive tasks have an impact on breathing and heart rate variability (HRV). We show that changes in physiological parameters during cognitive stress can be captured remotely (at a distance of 3m) using a digital camera. A study (n=10) was conducted with participants at rest and under cognitive stress. A novel five band digital camera was used to capture videos of the face of the participant. Significantly higher normalized low frequency HRV components and breathing rates were measured in the stress condition when compared to the rest condition. Heart rates were not significantly different between the two conditions. We built a person-independent classifier to predict cognitive stress based on the remotely detected physiological parameters (heart rate, breathing rate and heart rate variability). The accuracy of the model was 85% (35% greater than chance).
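A minimal sketch of how the physiological features described above might be derived and classified, assuming the inter-beat intervals have already been recovered from the remote photoplethysmographic signal; the frequency bands, resampling rate, and SVM classifier are illustrative assumptions, not the authors' exact pipeline.

```python
# Sketch (illustrative, not the paper's implementation): estimate heart rate
# and normalized LF/HF HRV power from inter-beat intervals, then train a
# person-independent stress classifier on (HR, breathing rate, HRV) features.
import numpy as np
from scipy.signal import welch
from sklearn.svm import SVC

def hrv_features(ibi_s, fs_resample=4.0):
    """ibi_s: 1-D array of inter-beat intervals in seconds.
    Returns (heart_rate_bpm, lf_norm, hf_norm)."""
    t = np.cumsum(ibi_s)                               # beat times
    grid = np.arange(t[0], t[-1], 1.0 / fs_resample)
    ibi_even = np.interp(grid, t, ibi_s)               # evenly resampled IBIs
    f, pxx = welch(ibi_even - ibi_even.mean(), fs=fs_resample, nperseg=256)
    lf = pxx[(f >= 0.04) & (f < 0.15)].sum()           # low-frequency band
    hf = pxx[(f >= 0.15) & (f < 0.40)].sum()           # high-frequency band
    hr = 60.0 / ibi_s.mean()
    return hr, lf / (lf + hf), hf / (lf + hf)

def train_stress_classifier(X, y):
    """X: one row per session, e.g. (HR, breathing rate, LF norm, HF norm);
    y: 1 = cognitive stress, 0 = rest."""
    return SVC(kernel="rbf", C=1.0).fit(X, y)
```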
---
|
Title: A Succinct Overview of Virtual Reality Technology Use in Alzheimer’s Disease
Section 1: Introduction
Description 1: Provide an introduction to the topic, detailing the significance of VR technology in the context of Alzheimer’s disease (AD).
Section 2: Current Applications of VR in Psychotherapy
Description 2: Discuss the general application areas of VR in psychotherapy and list examples relevant to Alzheimer’s disease.
Section 3: VR Technologies and Devices
Description 3: Describe the types of VR technologies and devices used, including their levels of immersion and interaction.
Section 4: Importance of Immersion and Presence
Description 4: Explain the role of immersion and presence in VR applications, and how these factors affect the effectiveness of VR-based interventions for AD patients.
Section 5: Health and Safety Considerations
Description 5: Address the health and safety implications of using VR, particularly focusing on issues like cyber-sickness and VR-induced symptoms.
Section 6: Literature Review Methodology
Description 6: Outline the methodology followed for the literature review, including information on literature search, selection criteria, and categorization.
Section 7: Categorization of VR Applications Used in AD
Description 7: Present a detailed classification of VR systems based on their intended purpose, impairment focus, methodology, and type of interaction.
Section 8: Intended Purpose of VR Applications
Description 8: Discuss the primary goals of VR applications in the context of Alzheimer’s disease – assessment, diagnosis, cognitive training, and caregivers’ training.
Section 9: Focal Aspects in VR Research for AD
Description 9: Identify the specific cognitive impairment features considered most relevant for VR diagnostic and training purposes in AD research.
Section 10: Interaction Techniques in VR
Description 10: Describe the different techniques for user interaction in VR systems, including tasks, activities, and games, and their relevance to AD applications.
Section 11: Emerging Augmented Reality Applications
Description 11: Highlight the emerging use of augmented reality (AR) in addition to VR for cognitive training and rehabilitation of AD patients.
Section 12: Conclusion
Description 12: Summarize the key findings of the review, emphasize the need for more immersive and effective VR applications, and suggest future directions for research and development in this field.
|
A Survey on Recent Advances of Computer Vision Algorithms for Egocentric Video
| 8 |
---
paper_title: MILES: Multiple-Instance Learning via Embedded Instance Selection
paper_content:
Multiple-instance problems arise from the situations where training class labels are attached to sets of samples (named bags), instead of individual samples within each bag (called instances). Most previous multiple-instance learning (MIL) algorithms are developed based on the assumption that a bag is positive if and only if at least one of its instances is positive. Although the assumption works well in a drug activity prediction problem, it is rather restrictive for other applications, especially those in the computer vision area. We propose a learning method, MILES (multiple-instance learning via embedded instance selection), which converts the multiple-instance learning problem to a standard supervised learning problem that does not impose the assumption relating instance labels to bag labels. MILES maps each bag into a feature space defined by the instances in the training bags via an instance similarity measure. This feature mapping often provides a large number of redundant or irrelevant features. Hence, 1-norm SVM is applied to select important features as well as construct classifiers simultaneously. We have performed extensive experiments. In comparison with other methods, MILES demonstrates competitive classification accuracy, high computation efficiency, and robustness to labeling uncertainty
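A minimal sketch of the MILES-style embedding described above, assuming each bag is given as an array of instance feature vectors; the similarity bandwidth sigma and the use of scikit-learn's L1-penalized LinearSVC as a stand-in for the 1-norm SVM are illustrative assumptions.

```python
# Sketch of the MILES embedding: each bag is mapped to its maximal
# similarities to every training instance, then an L1-regularized linear
# classifier selects a sparse subset of instances while fitting the labels.
import numpy as np
from sklearn.svm import LinearSVC

def embed_bags(bags, instance_pool, sigma=1.0):
    """bags: list of (n_i, d) arrays; instance_pool: (m, d) array of all
    training instances. Returns an (n_bags, m) feature matrix."""
    feats = []
    for bag in bags:
        # squared distances between every instance in the bag and the pool
        d2 = ((bag[:, None, :] - instance_pool[None, :, :]) ** 2).sum(-1)
        feats.append(np.exp(-d2 / sigma ** 2).max(axis=0))  # s(x_k, B_i)
    return np.vstack(feats)

def train_miles(train_bags, bag_labels, sigma=1.0):
    pool = np.vstack(train_bags)
    X = embed_bags(train_bags, pool, sigma)
    # The L1 penalty plays the role of the 1-norm SVM: it zeroes out the
    # coordinates (i.e. pooled instances) irrelevant to classification.
    clf = LinearSVC(penalty="l1", dual=False, C=1.0).fit(X, bag_labels)
    return clf, pool
```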
---
paper_title: Egocentric recognition of handled objects: Benchmark and analysis
paper_content:
Recognizing objects being manipulated in hands can provide essential information about a person's activities and have far-reaching impacts on the application of vision in everyday life. The egocentric viewpoint from a wearable camera has unique advantages in recognizing handled objects, such as having a close view and seeing objects in their natural positions. We collect a comprehensive dataset and analyze the feasibilities and challenges of the egocentric recognition of handled objects. We use a lapel-worn camera and record uncompressed video streams as human subjects manipulate objects in daily activities. We use 42 day-to-day objects that vary in size, shape, color and textureness. 10 video sequences are shot for each object under different illuminations and backgrounds. We use this dataset and a SIFT-based recognition system to analyze and quantitatively characterize the main challenges in egocentric object recognition, such as motion blur and hand occlusion, along with its unique constraints, such as hand color, location prior and temporal consistency. SIFT-based recognition has an average recognition rate of 12%, and reaches 20% through enforcing temporal consistency. We use simulations to estimate the upper bound for SIFT-based recognition at 64%, the loss of accuracy due to background clutter at 20%, and that of hand occlusion at 13%. Our quantitative evaluations show that the egocentric recognition of handled objects is a challenging but feasible problem with many unique characteristics and many opportunities for future research.
---
paper_title: Figure-ground segmentation improves handled object recognition in egocentric video
paper_content:
Identifying handled objects, i.e. objects being manipulated by a user, is essential for recognizing the person's activities. An egocentric camera as worn on the body enjoys many advantages such as having a natural first-person view and not needing to instrument the environment. It is also a challenging setting, where background clutter is known to be a major source of problems and is difficult to handle with the camera constantly and arbitrarily moving. In this work we develop a bottom-up motion-based approach to robustly segment out foreground objects in egocentric video and show that it greatly improves object recognition accuracy. Our key insight is that egocentric video of object manipulation is a special domain and many domain-specific cues can readily help. We compute dense optical flow and fit it into multiple affine layers. We then use a max-margin classifier to combine motion with empirical knowledge of object location and background movement as well as temporal cues of support region and color appearance. We evaluate our segmentation algorithm on the large Intel Egocentric Object Recognition dataset with 42 objects and 100K frames. We show that, when combined with temporal integration, figure-ground segmentation improves the accuracy of a SIFT-based recognition system from 33% to 60%, and that of a latent-HOG system from 64% to 86%.
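A minimal sketch of one ingredient of the approach described above: fitting a single affine motion model to a dense optical-flow field by least squares. The full method fits multiple such layers and combines the residuals with location, temporal, and appearance cues, which this sketch omits.

```python
# Fit u(x,y) = a1 + a2*x + a3*y and v(x,y) = a4 + a5*x + a6*y to a dense
# flow field; large residuals indicate pixels not explained by this layer.
import numpy as np

def fit_affine_flow(flow):
    """flow: (H, W, 2) array of (u, v) displacements. Returns the 6 affine
    parameters and the per-pixel residual magnitude."""
    h, w, _ = flow.shape
    ys, xs = np.mgrid[0:h, 0:w]
    A = np.column_stack([np.ones(h * w), xs.ravel(), ys.ravel()])
    u, v = flow[..., 0].ravel(), flow[..., 1].ravel()
    au, *_ = np.linalg.lstsq(A, u, rcond=None)
    av, *_ = np.linalg.lstsq(A, v, rcond=None)
    residual = np.hypot(A @ au - u, A @ av - v).reshape(h, w)
    return np.concatenate([au, av]), residual
```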
---
paper_title: A discriminatively trained, multiscale, deformable part model
paper_content:
This paper describes a discriminatively trained, multiscale, deformable part model for object detection. Our system achieves a two-fold improvement in average precision over the best performance in the 2006 PASCAL person detection challenge. It also outperforms the best results in the 2007 challenge in ten out of twenty categories. The system relies heavily on deformable parts. While deformable part models have become quite popular, their value had not been demonstrated on difficult benchmarks such as the PASCAL challenge. Our system also relies heavily on new methods for discriminative training. We combine a margin-sensitive approach for data mining hard negative examples with a formalism we call latent SVM. A latent SVM, like a hidden CRF, leads to a non-convex training problem. However, a latent SVM is semi-convex and the training problem becomes convex once latent information is specified for the positive examples. We believe that our training methods will eventually make possible the effective use of more latent information such as hierarchical (grammar) models and models involving latent three dimensional pose.
---
paper_title: High Accuracy Optical Flow Estimation Based on a Theory for Warping
paper_content:
We study an energy functional for computing optical flow that combines three assumptions: a brightness constancy assumption, a gradient constancy assumption, and a discontinuity-preserving spatio-temporal smoothness constraint. In order to allow for large displacements, linearisations in the two data terms are strictly avoided. We present a consistent numerical scheme based on two nested fixed point iterations. By proving that this scheme implements a coarse-to-fine warping strategy, we give a theoretical foundation for warping which has been used on a mainly experimental basis so far. Our evaluation demonstrates that the novel method gives significantly smaller angular errors than previous techniques for optical flow estimation. We show that it is fairly insensitive to parameter variations, and we demonstrate its excellent robustness under noise.
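A reconstruction, in LaTeX, of the spatial form of the energy combining the three assumptions named above: gamma weights the gradient constancy term, alpha the smoothness term, and Psi is the robust penalizer; in the full spatio-temporal model the smoothness term uses the spatio-temporal gradient.

```latex
% w = (u, v)^T is the flow field over the image domain \Omega
E(u,v) = \int_{\Omega} \Psi\!\big( |I(\mathbf{x}+\mathbf{w}) - I(\mathbf{x})|^2
        + \gamma\, |\nabla I(\mathbf{x}+\mathbf{w}) - \nabla I(\mathbf{x})|^2 \big)\, d\mathbf{x}
      + \alpha \int_{\Omega} \Psi\!\big( |\nabla u|^2 + |\nabla v|^2 \big)\, d\mathbf{x},
\qquad \Psi(s^2) = \sqrt{s^2 + \epsilon^2}.
```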
---
paper_title: Learning to recognize objects in egocentric activities
paper_content:
This paper addresses the problem of learning object models from egocentric video of household activities, using extremely weak supervision. For each activity sequence, we know only the names of the objects which are present within it, and have no other knowledge regarding the appearance or location of objects. The key to our approach is a robust, unsupervised bottom up segmentation method, which exploits the structure of the egocentric domain to partition each frame into hand, object, and background categories. By using Multiple Instance Learning to match object instances across sequences, we discover and localize object occurrences. Object representations are refined through transduction and object-level classifiers are trained. We demonstrate encouraging results in detecting novel object instances using models produced by weakly-supervised learning.
---
paper_title: Modeling the Shape of the Scene: A Holistic Representation of the Spatial Envelope
paper_content:
In this paper, we propose a computational model of the recognition of real world scenes that bypasses the segmentation and the processing of individual objects or regions. The procedure is based on a very low dimensional representation of the scene, that we term the Spatial Envelope. We propose a set of perceptual dimensions (naturalness, openness, roughness, expansion, ruggedness) that represent the dominant spatial structure of a scene. Then, we show that these dimensions may be reliably estimated using spectral and coarsely localized information. The model generates a multidimensional space in which scenes sharing membership in semantic categories (e.g., streets, highways, coasts) are projected closed together. The performance of the spatial envelope model shows that specific information about object shape or identity is not a requirement for scene categorization and that modeling a holistic representation of the scene informs about its probable semantic category.
---
paper_title: Temporal segmentation and activity classification from first-person sensing
paper_content:
Temporal segmentation of human motion into actions is central to the understanding and building of computational models of human motion and activity recognition. Several issues contribute to the challenge of temporal segmentation and classification of human motion. These include the large variability in the temporal scale and periodicity of human actions, the complexity of representing articulated motion, and the exponential nature of all possible movement combinations. We provide initial results from investigating two distinct problems: classification of the overall task being performed, and the more difficult problem of classifying individual frames over time into specific actions. We explore first-person sensing through a wearable camera and inertial measurement units (IMUs) for temporally segmenting human motion into actions and performing activity classification in the context of cooking and recipe preparation in a natural environment. We present baseline results for supervised and unsupervised temporal segmentation, and recipe recognition in the CMU-multimodal activity database (CMU-MMAC).

---
paper_title: Object Detection with Discriminatively Trained Part-Based Models
paper_content:
We describe an object detection system based on mixtures of multiscale deformable part models. Our system is able to represent highly variable object classes and achieves state-of-the-art results in the PASCAL object detection challenges. While deformable part models have become quite popular, their value had not been demonstrated on difficult benchmarks such as the PASCAL data sets. Our system relies on new methods for discriminative training with partially labeled data. We combine a margin-sensitive approach for data-mining hard negative examples with a formalism we call latent SVM. A latent SVM is a reformulation of MI-SVM in terms of latent variables. A latent SVM is semiconvex, and the training problem becomes convex once latent information is specified for the positive examples. This leads to an iterative training algorithm that alternates between fixing latent values for positive examples and optimizing the latent SVM objective function.
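A compact restatement of the latent SVM scoring function and training objective referred to above; beta is the weight vector, Phi the joint feature map, and Z(x) the set of latent part placements for example x.

```latex
f_\beta(x) = \max_{z \in Z(x)} \beta \cdot \Phi(x, z),
\qquad
L_D(\beta) = \tfrac{1}{2}\|\beta\|^2
           + C \sum_{i=1}^{n} \max\big(0,\, 1 - y_i\, f_\beta(x_i)\big).
```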
---
paper_title: Learning to Recognize Daily Actions using Gaze
paper_content:
We present a probabilistic generative model for simultaneously recognizing daily actions and predicting gaze locations in videos recorded from an egocentric camera. We focus on activities requiring eye-hand coordination and model the spatio-temporal relationship between the gaze point, the scene objects, and the action label. Our model captures the fact that the distribution of both visual features and object occurrences in the vicinity of the gaze point is correlated with the verb-object pair describing the action. It explicitly incorporates known properties of gaze behavior from the psychology literature, such as the temporal delay between fixation and manipulation events. We present an inference method that can predict the best sequence of gaze locations and the associated action label from an input sequence of images. We demonstrate improvements in action recognition rates and gaze prediction accuracy relative to state-of-the-art methods, on two new datasets that contain egocentric videos of daily activities and gaze.
---
paper_title: Detecting activities of daily living in first-person camera views
paper_content:
We present a novel dataset and novel algorithms for the problem of detecting activities of daily living (ADL) in first-person camera views. We have collected a dataset of 1 million frames of dozens of people performing unscripted, everyday activities. The dataset is annotated with activities, object tracks, hand positions, and interaction events. ADLs differ from typical actions in that they can involve long-scale temporal structure (making tea can take a few minutes) and complex object interactions (a fridge looks different when its door is open). We develop novel representations including (1) temporal pyramids, which generalize the well-known spatial pyramid to approximate temporal correspondence when scoring a model and (2) composite object models that exploit the fact that objects look different when being interacted with. We perform an extensive empirical evaluation and demonstrate that our novel representations produce a two-fold improvement over traditional approaches. Our analysis suggests that real-world ADL recognition is “all about the objects,” and in particular, “all about the objects being interacted with.”
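A minimal sketch of the temporal pyramid idea mentioned above: per-frame feature vectors are pooled over the whole video and over progressively finer temporal cells, then concatenated. The number of levels and the use of mean pooling are illustrative assumptions.

```python
# Temporal pyramid pooling: average frame-level features over the whole
# video, then over halves, quarters, ..., and concatenate the results,
# which gives a coarse temporal correspondence when comparing videos.
import numpy as np

def temporal_pyramid(frame_feats, levels=3):
    """frame_feats: (T, D) array of per-frame features (e.g. detector
    scores). Returns a 1-D vector of length D * (2**levels - 1)."""
    T, _ = frame_feats.shape
    pooled = []
    for level in range(levels):
        n_cells = 2 ** level
        for c in range(n_cells):
            lo, hi = (c * T) // n_cells, ((c + 1) * T) // n_cells
            pooled.append(frame_feats[lo:max(hi, lo + 1)].mean(axis=0))
    return np.concatenate(pooled)
```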
---
paper_title: Understanding egocentric activities
paper_content:
We present a method to analyze daily activities, such as meal preparation, using video from an egocentric camera. Our method performs inference about activities, actions, hands, and objects. Daily activities are a challenging domain for activity recognition which are well-suited to an egocentric approach. In contrast to previous activity recognition methods, our approach does not require pre-trained detectors for objects and hands. Instead we demonstrate the ability to learn a hierarchical model of an activity by exploiting the consistent appearance of objects, hands, and actions that results from the egocentric context. We show that joint modeling of activities, actions, and objects leads to superior performance in comparison to the case where they are considered independently. We introduce a novel representation of actions based on object-hand interactions and experimentally demonstrate the superior performance of our representation in comparison to standard activity representations such as bag of words.
---
paper_title: Maximum likelihood estimation via the ECM algorithm : a general framework
paper_content:
Two major reasons for the popularity of the EM algorithm are that its maximization step (M-step) involves only complete-data maximum likelihood estimation, which is often computationally simple, and that its convergence is stable, with each iteration increasing the likelihood. When the associated complete-data maximum likelihood estimation itself is complicated, EM is less attractive because the M-step is computationally unattractive. In many cases, however, complete-data maximum likelihood estimation is relatively simple when conditional on some function of the parameters being estimated.
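A compact statement of one ECM iteration as described above: an E-step followed by S conditional maximization (CM) steps, each maximizing the expected complete-data log-likelihood subject to a constraint that fixes the remaining parameters at their current values (g_s denotes the constraint functions).

```latex
Q(\theta \mid \theta^{(t)}) =
  \mathrm{E}\big[\log L_c(\theta \mid Y_{\mathrm{com}}) \,\big|\, Y_{\mathrm{obs}}, \theta^{(t)}\big],
\qquad
\theta^{(t + s/S)} = \arg\max_{\theta}\; Q(\theta \mid \theta^{(t)})
\;\;\text{subject to}\;\; g_s(\theta) = g_s\!\big(\theta^{(t + (s-1)/S)}\big),
\quad s = 1,\dots,S.
```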
---
paper_title: Learning to recognize objects in egocentric activities
paper_content:
This paper addresses the problem of learning object models from egocentric video of household activities, using extremely weak supervision. For each activity sequence, we know only the names of the objects which are present within it, and have no other knowledge regarding the appearance or location of objects. The key to our approach is a robust, unsupervised bottom up segmentation method, which exploits the structure of the egocentric domain to partition each frame into hand, object, and background categories. By using Multiple Instance Learning to match object instances across sequences, we discover and localize object occurrences. Object representations are refined through transduction and object-level classifiers are trained. We demonstrate encouraging results in detecting novel object instances using models produced by weakly-supervised learning.
---
paper_title: In what ways do eye movements contribute to everyday activities?
paper_content:
Two recent studies have investigated the relations of eye and hand movements in extended food preparation tasks, and here the results are compared. The tasks could be divided into a series of actions performed on objects. The eyes usually reached the next object in the sequence before any sign of manipulative action, indicating that eye movements are planned into the motor pattern and lead each action. The eyes usually fixated the same object throughout the action upon it, although they often moved on to the next object in the sequence before completion of the preceding action. The specific roles of individual fixations could be identified as locating (establishing the locations of objects for future use), directing (establishing target direction prior to contact), guiding (supervising the relative movements of two or three objects) and checking (establishing whether some particular condition is met, prior to the termination of an action). It is argued that, at the beginning of each action, the oculomotor system is supplied with the identity of the required object, information about its location, and instructions about the nature of the monitoring required during the action. The eye movements during this kind of task are nearly all to task-relevant objects, and thus their control is seen as primarily ‘top-down’, and influenced very little by the ‘intrinsic salience’ of objects.
---
paper_title: Understanding egocentric activities
paper_content:
We present a method to analyze daily activities, such as meal preparation, using video from an egocentric camera. Our method performs inference about activities, actions, hands, and objects. Daily activities are a challenging domain for activity recognition which are well-suited to an egocentric approach. In contrast to previous activity recognition methods, our approach does not require pre-trained detectors for objects and hands. Instead we demonstrate the ability to learn a hierarchical model of an activity by exploiting the consistent appearance of objects, hands, and actions that results from the egocentric context. We show that joint modeling of activities, actions, and objects leads to superior performance in comparison to the case where they are considered independently. We introduce a novel representation of actions based on object-hand interactions and experimentally demonstrate the superior performance of our representation in comparison to standard activity representations such as bag of words.
---
paper_title: Modeling Actions through State Changes
paper_content:
In this paper we present a model of action based on the change in the state of the environment. Many actions involve similar dynamics and hand-object relationships, but differ in their purpose and meaning. The key to differentiating these actions is the ability to identify how they change the state of objects and materials in the environment. We propose a weakly supervised method for learning the object and material states that are necessary for recognizing daily actions. Once these state detectors are learned, we can apply them to input videos and pool their outputs to detect actions. We further demonstrate that our method can be used to segment discrete actions from a continuous video of an activity. Our results outperform state-of-the-art action recognition and activity segmentation results.
---
paper_title: Learning to recognize objects in egocentric activities
paper_content:
This paper addresses the problem of learning object models from egocentric video of household activities, using extremely weak supervision. For each activity sequence, we know only the names of the objects which are present within it, and have no other knowledge regarding the appearance or location of objects. The key to our approach is a robust, unsupervised bottom up segmentation method, which exploits the structure of the egocentric domain to partition each frame into hand, object, and background categories. By using Multiple Instance Learning to match object instances across sequences, we discover and localize object occurrences. Object representations are refined through transduction and object-level classifiers are trained. We demonstrate encouraging results in detecting novel object instances using models produced by weakly-supervised learning.
---
paper_title: Fast Bayesian Inference in Dirichlet Process Mixture Models
paper_content:
There has been increasing interest in applying Bayesian nonparametric methods in large samples and high dimensions. As Markov chain Monte Carlo (MCMC) algorithms are often infeasible, there is a pressing need for much faster algorithms. This article proposes a fast approach for inference in Dirichlet process mixture (DPM) models. Viewing the partitioning of subjects into clusters as a model selection problem, we propose a sequential greedy search algorithm for selecting the partition. Then, when conjugate priors are chosen, the resulting posterior conditionally on the selected partition is available in closed form. This approach allows testing of parametric models versus nonparametric alternatives based on Bayes factors. We evaluate the approach using simulation studies and compare it with four other fast nonparametric methods in the literature. We apply the proposed approach to three datasets including one from a large epidemiologic study. Matlab codes for the simulation and data analyses using the propo...
---
paper_title: Fast unsupervised ego-action learning for first-person sports videos
paper_content:
Portable high-quality sports cameras (e.g. head or helmet mounted) built for recording dynamic first-person video footage are becoming a common item among many sports enthusiasts. We address the novel task of discovering first-person action categories (which we call ego-actions) which can be useful for such tasks as video indexing and retrieval. In order to learn ego-action categories, we investigate the use of motion-based histograms and unsupervised learning algorithms to quickly cluster video content. Our approach assumes a completely unsupervised scenario, where labeled training videos are not available, videos are not pre-segmented and the number of ego-action categories are unknown. In our proposed framework we show that a stacked Dirichlet process mixture model can be used to automatically learn a motion histogram codebook and the set of ego-action categories. We quantitatively evaluate our approach on both in-house and public YouTube videos and demonstrate robust ego-action categorization across several sports genres. Comparative analysis shows that our approach outperforms other state-of-the-art topic models with respect to both classification accuracy and computational speed. Preliminary results indicate that on average, the categorical content of a 10 minute video sequence can be indexed in under 5 seconds.
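A minimal sketch of the kind of pipeline the abstract describes: per-frame motion histograms computed from optical flow and clustered with a truncated variational approximation to a Dirichlet-process Gaussian mixture. The flow parameters, bin count, and truncation level are illustrative assumptions and do not reproduce the authors' stacked model or learned codebook.

```python
# Sketch: unsupervised ego-action discovery from flow-direction histograms.
import cv2
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

def flow_histograms(frames, n_bins=8):
    """frames: list of grayscale images. Returns (T-1, n_bins) histograms
    of optical-flow directions weighted by flow magnitude."""
    hists = []
    for prev, nxt in zip(frames[:-1], frames[1:]):
        flow = cv2.calcOpticalFlowFarneback(prev, nxt, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
        h, _ = np.histogram(ang, bins=n_bins, range=(0, 2 * np.pi),
                            weights=mag)
        hists.append(h / (h.sum() + 1e-8))
    return np.vstack(hists)

def cluster_ego_actions(hists, max_components=20):
    # Truncated DP mixture: unused components get near-zero weight, so the
    # effective number of ego-action categories is inferred from the data.
    dp = BayesianGaussianMixture(
        n_components=max_components,
        weight_concentration_prior_type="dirichlet_process")
    return dp.fit_predict(hists)   # per-frame ego-action labels
```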
---
paper_title: First-Person Activity Recognition: What Are They Doing to Me?
paper_content:
This paper discusses the problem of recognizing interaction-level human activities from a first-person viewpoint. The goal is to enable an observer (e.g., a robot or a wearable camera) to understand 'what activity others are performing to it' from continuous video inputs. These include friendly interactions such as 'a person hugging the observer' as well as hostile interactions like 'punching the observer' or 'throwing objects to the observer', whose videos involve a large amount of camera ego-motion caused by physical interactions. The paper investigates multi-channel kernels to integrate global and local motion information, and presents a new activity learning/recognition methodology that explicitly considers temporal structures displayed in first-person activity videos. In our experiments, we not only show classification results with segmented videos, but also confirm that our new approach is able to detect activities from continuous videos reliably.
---
paper_title: SenseCam: A Retrospective Memory Aid
paper_content:
This paper presents a novel ubiquitous computing device, the SenseCam, a sensor augmented wearable stills camera. SenseCam is designed to capture a digital record of the wearer's day, by recording a series of images and capturing a log of sensor data. We believe that reviewing this information will help the wearer recollect aspects of earlier experiences that have subsequently been forgotten, and thereby form a powerful retrospective memory aid. In this paper we review existing work on memory aids and conclude that there is scope for an improved device. We then report on the design of SenseCam in some detail for the first time. We explain the details of a first in-depth user study of this device, a 12-month clinical trial with a patient suffering from amnesia. The results of this initial evaluation are extremely promising; periodic review of images of events recorded by SenseCam results in significant recall of those events by the patient, which was previously impossible. We end the paper with a discussion of future work, including the application of SenseCam to a wider audience, such as those with neurodegenerative conditions such as Alzheimer's disease.
---
paper_title: Rapid object detection using a boosted cascade of simple features
paper_content:
This paper describes a machine learning approach for visual object detection which is capable of processing images extremely rapidly and achieving high detection rates. This work is distinguished by three key contributions. The first is the introduction of a new image representation called the "integral image" which allows the features used by our detector to be computed very quickly. The second is a learning algorithm, based on AdaBoost, which selects a small number of critical visual features from a larger set and yields extremely efficient classifiers. The third contribution is a method for combining increasingly more complex classifiers in a "cascade" which allows background regions of the image to be quickly discarded while spending more computation on promising object-like regions. The cascade can be viewed as an object specific focus-of-attention mechanism which unlike previous approaches provides statistical guarantees that discarded regions are unlikely to contain the object of interest. In the domain of face detection the system yields detection rates comparable to the best previous systems. Used in real-time applications, the detector runs at 15 frames per second without resorting to image differencing or skin color detection.
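A minimal sketch of the integral image (summed-area table) that makes the Haar-like features above cheap to evaluate; the zero-padding convention is an implementation choice.

```python
# After one pass over the image, the sum of any rectangle can be read off
# with four array lookups, so rectangle-difference (Haar-like) features
# cost a constant number of operations regardless of their size.
import numpy as np

def integral_image(img):
    """img: (H, W) grayscale array. Returns an (H+1, W+1) summed-area table
    with a zero first row/column so rectangle sums need no bounds checks."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.float64)
    ii[1:, 1:] = img.cumsum(axis=0).cumsum(axis=1)
    return ii

def rect_sum(ii, r0, c0, r1, c1):
    """Sum of img[r0:r1, c0:c1] in O(1) using the integral image."""
    return ii[r1, c1] - ii[r0, c1] - ii[r1, c0] + ii[r0, c0]

# A two-rectangle feature is then a difference of two rect_sum calls, e.g.
# rect_sum(ii, 0, 0, 24, 12) - rect_sum(ii, 0, 12, 24, 24).
```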
---
paper_title: Constrained parametric min-cuts for automatic object segmentation
paper_content:
We present a novel framework for generating and ranking plausible object hypotheses in an image using bottom-up processes and mid-level cues. The object hypotheses are represented as figure-ground segmentations, and are extracted automatically, without prior knowledge about properties of individual object classes, by solving a sequence of constrained parametric min-cut problems (CPMC) on a regular image grid. We then learn to rank the object hypotheses by training a continuous model to predict how plausible the segments are, given their mid-level region properties. We show that this algorithm significantly outperforms the state of the art for low-level segmentation in the VOC09 segmentation dataset. It achieves the same average best segmentation covering as the best performing technique to date [2], 0.61 when using just the top 7 ranked segments, instead of the full hierarchy in [2]. Our method achieves 0.78 average best covering using 154 segments. In a companion paper [18], we also show that the algorithm achieves state-of-the-art results when used in a segmentation-based recognition pipeline.
---
paper_title: The blur effect: perception and estimation with a new no-reference perceptual blur metric
paper_content:
To achieve the best image quality, noise and artifacts are generally removed at the cost of a loss of details generating the blur effect. To control and quantify the emergence of the blur effect, blur metrics have already been proposed in the literature. By associating the blur effect with the edge spreading, these metrics are sensitive not only to the threshold choice to classify the edge, but also to the presence of noise which can mislead the edge detection. Based on the observation that we have difficulties to perceive differences between a blurred image and the same reblurred image, we propose a new approach which is not based on transient characteristics but on the discrimination between different levels of blur perceptible on the same picture. Using subjective tests and psychophysics functions, we validate our blur perception theory for a set of pictures which are naturally unsharp or more or less blurred through one or two-dimensional low-pass filters. Those tests show the robustness and the ability of the metric to evaluate not only the blur introduced by a restoration processing but also focal blur or motion blur. Requiring no reference and a low cost implementation, this new perceptual blur metric is applicable in a large domain from a simple metric to a means to fine-tune artifacts corrections.
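A simplified sketch of the reblur idea behind the metric: compare neighboring-pixel variations before and after a strong additional blur, so that already-blurred images change little. The kernel size, single-direction processing, and normalization are illustrative and do not reproduce the published metric exactly.

```python
# No-reference blur estimate via reblurring: if an extra low-pass filter
# barely changes local pixel variations, the image was already blurred.
import numpy as np
from scipy.ndimage import uniform_filter1d

def reblur_blur_metric(img):
    """img: (H, W) float array in [0, 1]. Returns a blur estimate in [0, 1]
    along the horizontal direction (higher = blurrier)."""
    blurred = uniform_filter1d(img, size=9, axis=1)   # strong 1-D low-pass
    d_orig = np.abs(np.diff(img, axis=1))             # neighbor variations
    d_blur = np.abs(np.diff(blurred, axis=1))
    lost = np.maximum(0.0, d_orig - d_blur)           # variation removed
    denom = d_orig.sum() + 1e-8
    return 1.0 - lost.sum() / denom   # ~1 when reblurring changes little
```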
---
paper_title: Detecting activities of daily living in first-person camera views
paper_content:
We present a novel dataset and novel algorithms for the problem of detecting activities of daily living (ADL) in first-person camera views. We have collected a dataset of 1 million frames of dozens of people performing unscripted, everyday activities. The dataset is annotated with activities, object tracks, hand positions, and interaction events. ADLs differ from typical actions in that they can involve long-scale temporal structure (making tea can take a few minutes) and complex object interactions (a fridge looks different when its door is open). We develop novel representations including (1) temporal pyramids, which generalize the well-known spatial pyramid to approximate temporal correspondence when scoring a model and (2) composite object models that exploit the fact that objects look different when being interacted with. We perform an extensive empirical evaluation and demonstrate that our novel representations produce a two-fold improvement over traditional approaches. Our analysis suggests that real-world ADL recognition is “all about the objects,” and in particular, “all about the objects being interacted with.”
---
paper_title: Investigating keyframe selection methods in the novel domain of passively captured visual lifelogs
paper_content:
The SenseCam is a passive capture wearable camera and when worn continuously it takes an average of 1,900 images per day. It can be used to create a personal lifelog or visual recording of a wearer's life which can be helpful as an aid to human memory. For such a large amount of visual information to be useful, it needs to be structured into "events", which can be achieved through automatic segmentation. An important component of this structuring process is the selection of keyframes to represent individual events. This work investigates a variety of techniques for the selection of a single representative keyframe image from each event, in order to provide the user with an instant visual summary of that event. In our experiments we use a large test set of 2,232 lifelog events collected by 5 users over a time period of one month each. We propose a novel keyframe selection technique which seeks to select the image with the highest "quality" as the keyframe. The inclusion of "quality" approaches in keyframe selection is demonstrated to be useful owing to the high variability in image visual quality within passively captured image collections.
---
paper_title: What is an object?
paper_content:
We present a generic objectness measure, quantifying how likely it is for an image window to contain an object of any class. We explicitly train it to distinguish objects with a well-defined boundary in space, such as cows and telephones, from amorphous background elements, such as grass and road. The measure combines in a Bayesian framework several image cues measuring characteristics of objects, such as appearing different from their surroundings and having a closed boundary. This includes an innovative cue measuring the closed boundary characteristic. In experiments on the challenging PASCAL VOC 07 dataset, we show this new cue to outperform a state-of-the-art saliency measure [17], and the combined measure to perform better than any cue alone. Finally, we show how to sample windows from an image according to their objectness distribution and give an algorithm to employ them as location priors for modern class-specific object detectors. In experiments on PASCAL VOC 07 we show this greatly reduces the number of windows evaluated by class-specific object detectors.
---
paper_title: Story-Driven Summarization for Egocentric Video
paper_content:
We present a video summarization approach that discovers the story of an egocentric video. Given a long input video, our method selects a short chain of video subshots depicting the essential events. Inspired by work in text analysis that links news articles over time, we define a random-walk based metric of influence between subshots that reflects how visual objects contribute to the progression of events. Using this influence metric, we define an objective for the optimal k-subshot summary. Whereas traditional methods optimize a summary's diversity or representativeness, ours explicitly accounts for how one sub-event "leads to" another, which, critically, captures event connectivity beyond simple object co-occurrence. As a result, our summaries provide a better sense of story. We apply our approach to over 12 hours of daily activity video taken from 23 unique camera wearers, and systematically evaluate its quality compared to multiple baselines with 34 human subjects.
---
paper_title: Connecting the dots between news articles
paper_content:
The process of extracting useful knowledge from large datasets has become one of the most pressing problems in today's society. The problem spans entire sectors, from scientists to intelligence analysts and web users, all of whom are constantly struggling to keep up with the larger and larger amounts of content published every day. With this much data, it is often easy to miss the big picture. In this paper, we investigate methods for automatically connecting the dots - providing a structured, easy way to navigate within a new topic and discover hidden connections. We focus on the news domain: given two news articles, our system automatically finds a coherent chain linking them together. For example, it can recover the chain of events leading from the decline of home prices (2007) to the health-care debate (2009). We formalize the characteristics of a good chain and provide efficient algorithms to connect two fixed endpoints. We incorporate user feedback into our framework, allowing the stories to be refined and personalized. Finally, we evaluate our algorithm over real news data. Our user studies demonstrate the algorithm's effectiveness in helping users understand the news.
---
paper_title: Discovering important people and objects for egocentric video summarization
paper_content:
We present a video summarization approach for egocentric or “wearable” camera data. Given hours of video, the proposed method produces a compact storyboard summary of the camera wearer's day. In contrast to traditional keyframe selection techniques, the resulting summary focuses on the most important objects and people with which the camera wearer interacts. To accomplish this, we develop region cues indicative of high-level saliency in egocentric video — such as the nearness to hands, gaze, and frequency of occurrence — and learn a regressor to predict the relative importance of any new region based on these cues. Using these predictions and a simple form of temporal event detection, our method selects frames for the storyboard that reflect the key object-driven happenings. Critically, the approach is neither camera-wearer-specific nor object-specific; that means the learned importance metric need not be trained for a given user or context, and it can predict the importance of objects and people that have never been seen previously. Our results with 17 hours of egocentric data show the method's promise relative to existing techniques for saliency and summarization.
---
paper_title: SenseCam: A Retrospective Memory Aid
paper_content:
This paper presents a novel ubiquitous computing device, the SenseCam, a sensor augmented wearable stills camera. SenseCam is designed to capture a digital record of the wearer's day, by recording a series of images and capturing a log of sensor data. We believe that reviewing this information will help the wearer recollect aspects of earlier experiences that have subsequently been forgotten, and thereby form a powerful retrospective memory aid. In this paper we review existing work on memory aids and conclude that there is scope for an improved device. We then report on the design of SenseCam in some detail for the first time. We explain the details of a first in-depth user study of this device, a 12-month clinical trial with a patient suffering from amnesia. The results of this initial evaluation are extremely promising; periodic review of images of events recorded by SenseCam results in significant recall of those events by the patient, which was previously impossible. We end the paper with a discussion of future work, including the application of SenseCam to a wider audience, such as those with neurodegenerative conditions such as Alzheimer's disease.
---
paper_title: Aggregating local descriptors into a compact image representation
paper_content:
We address the problem of image search on a very large scale, where three constraints have to be considered jointly: the accuracy of the search, its efficiency, and the memory usage of the representation. We first propose a simple yet efficient way of aggregating local image descriptors into a vector of limited dimension, which can be viewed as a simplification of the Fisher kernel representation. We then show how to jointly optimize the dimension reduction and the indexing algorithm, so that it best preserves the quality of vector comparison. The evaluation shows that our approach significantly outperforms the state of the art: the search accuracy is comparable to the bag-of-features approach for an image representation that fits in 20 bytes. Searching a 10 million image dataset takes about 50ms.
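As a rough illustration of the aggregation step described here, the sketch below computes a basic VLAD-style vector: each local descriptor is assigned to its nearest visual word, residuals are accumulated per word, and the concatenation is L2-normalised. The joint dimension reduction and indexing discussed in the abstract are not shown, and the normalisation choice is an assumption of the sketch.

```python
import numpy as np

def vlad(descriptors, centroids):
    """Aggregate local descriptors (N, D) against a codebook (K, D) into one vector."""
    K, D = centroids.shape
    # hard-assign each descriptor to its nearest centroid
    dists = ((descriptors[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    assignment = dists.argmin(axis=1)
    v = np.zeros((K, D))
    for k in range(K):
        members = descriptors[assignment == k]
        if len(members):
            v[k] = (members - centroids[k]).sum(axis=0)   # accumulate residuals
    v = v.ravel()
    norm = np.linalg.norm(v)
    return v / norm if norm > 0 else v

# 500 local descriptors of dimension 128, codebook of 16 visual words
rng = np.random.default_rng(0)
print(vlad(rng.random((500, 128)), rng.random((16, 128))).shape)   # (2048,)
```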
---
paper_title: Novelty detection from an ego-centric perspective
paper_content:
This paper demonstrates a system for the automatic extraction of novelty in images captured from a small video camera attached to a subject's chest, replicating his visual perspective, while performing activities which are repeated daily. Novelty is detected when a (sub)sequence cannot be registered to previously stored sequences captured while performing the same daily activity. Sequence registration is performed by measuring appearance and geometric similarity of individual frames and exploiting the invariant temporal order of the activity. Experimental results demonstrate that this is a robust way to detect novelties induced by variations in the wearer's ego-motion such as stopping and talking to a person. This is an essentially new and generic way of automatically extracting information of interest to the camera wearer and can be used as input to a system for life logging or memory support.
---
paper_title: Social interactions: A first-person perspective
paper_content:
This paper presents a method for the detection and recognition of social interactions in a day-long first-person video of a social event, like a trip to an amusement park. The location and orientation of faces are estimated and used to compute the line of sight for each face. The context provided by all the faces in a frame is used to convert the lines of sight into locations in space to which individuals attend. Further, individuals are assigned roles based on their patterns of attention. The roles and locations of individuals are analyzed over time to detect and recognize the types of social interactions. In addition to patterns of face locations and attention, the head movements of the first-person can provide additional useful cues as to their attentional focus. We demonstrate encouraging results on detection and recognition of social interactions in first-person videos captured from multiple days of experience in amusement parks.
---
paper_title: Egocentric recognition of handled objects: Benchmark and analysis
paper_content:
Recognizing objects being manipulated in hands can provide essential information about a person's activities and have far-reaching impacts on the application of vision in everyday life. The egocentric viewpoint from a wearable camera has unique advantages in recognizing handled objects, such as having a close view and seeing objects in their natural positions. We collect a comprehensive dataset and analyze the feasibilities and challenges of the egocentric recognition of handled objects. We use a lapel-worn camera and record uncompressed video streams as human subjects manipulate objects in daily activities. We use 42 day-to-day objects that vary in size, shape, color and textureness. 10 video sequences are shot for each object under different illuminations and backgrounds. We use this dataset and a SIFT-based recognition system to analyze and quantitatively characterize the main challenges in egocentric object recognition, such as motion blur and hand occlusion, along with its unique constraints, such as hand color, location prior and temporal consistency. SIFT-based recognition has an average recognition rate of 12%, and reaches 20% through enforcing temporal consistency. We use simulations to estimate the upper bound for SIFT-based recognition at 64%, the loss of accuracy due to background clutter at 20%, and that of hand occlusion at 13%. Our quantitative evaluations show that the egocentric recognition of handled objects is a challenging but feasible problem with many unique characteristics and many opportunities for future research.
---
paper_title: Hidden conditional random fields for phone classification
paper_content:
In this paper, we show the novel application of hidden conditional random fields (HCRFs) – conditional random fields with hidden state sequences – for modeling speech. Hidden state sequences are critical for modeling the non-stationarity of speech signals. We show that HCRFs can easily be trained using the simple direct optimization technique of stochastic gradient descent. We present results on the TIMIT phone classification task and show that HCRFs outperform comparable ML and CML/MMI trained HMMs. In fact, HCRF results on this task are the best single-classifier results known to us. We note that the HCRF framework is easily extensible to recognition since it is a state and label sequence modeling technique. We also note that HCRFs have the ability to handle complex features without any change in training procedure.
---
paper_title: Detecting activities of daily living in first-person camera views
paper_content:
We present a novel dataset and novel algorithms for the problem of detecting activities of daily living (ADL) in firstperson camera views. We have collected a dataset of 1 million frames of dozens of people performing unscripted, everyday activities. The dataset is annotated with activities, object tracks, hand positions, and interaction events. ADLs differ from typical actions in that they can involve long-scale temporal structure (making tea can take a few minutes) and complex object interactions (a fridge looks different when its door is open). We develop novel representations including (1) temporal pyramids, which generalize the well-known spatial pyramid to approximate temporal correspondence when scoring a model and (2) composite object models that exploit the fact that objects look different when being interacted with. We perform an extensive empirical evaluation and demonstrate that our novel representations produce a two-fold improvement over traditional approaches. Our analysis suggests that real-world ADL recognition is “all about the objects,” and in particular, “all about the objects being interacted with.”
---
paper_title: Learning to recognize objects in egocentric activities
paper_content:
This paper addresses the problem of learning object models from egocentric video of household activities, using extremely weak supervision. For each activity sequence, we know only the names of the objects which are present within it, and have no other knowledge regarding the appearance or location of objects. The key to our approach is a robust, unsupervised bottom up segmentation method, which exploits the structure of the egocentric domain to partition each frame into hand, object, and background categories. By using Multiple Instance Learning to match object instances across sequences, we discover and localize object occurrences. Object representations are refined through transduction and object-level classifiers are trained. We demonstrate encouraging results in detecting novel object instances using models produced by weakly-supervised learning.
---
paper_title: Story-Driven Summarization for Egocentric Video
paper_content:
We present a video summarization approach that discovers the story of an egocentric video. Given a long input video, our method selects a short chain of video subshots depicting the essential events. Inspired by work in text analysis that links news articles over time, we define a random-walk based metric of influence between subshots that reflects how visual objects contribute to the progression of events. Using this influence metric, we define an objective for the optimal k-subshot summary. Whereas traditional methods optimize a summary's diversity or representativeness, ours explicitly accounts for how one sub-event "leads to" another, which, critically, captures event connectivity beyond simple object co-occurrence. As a result, our summaries provide a better sense of story. We apply our approach to over 12 hours of daily activity video taken from 23 unique camera wearers, and systematically evaluate its quality compared to multiple baselines with 34 human subjects.
---
paper_title: Fast unsupervised ego-action learning for first-person sports videos
paper_content:
Portable high-quality sports cameras (e.g. head or helmet mounted) built for recording dynamic first-person video footage are becoming a common item among many sports enthusiasts. We address the novel task of discovering first-person action categories (which we call ego-actions) that can be useful for such tasks as video indexing and retrieval. In order to learn ego-action categories, we investigate the use of motion-based histograms and unsupervised learning algorithms to quickly cluster video content. Our approach assumes a completely unsupervised scenario, where labeled training videos are not available, videos are not pre-segmented and the number of ego-action categories is unknown. In our proposed framework we show that a stacked Dirichlet process mixture model can be used to automatically learn a motion histogram codebook and the set of ego-action categories. We quantitatively evaluate our approach on both in-house and public YouTube videos and demonstrate robust ego-action categorization across several sports genres. Comparative analysis shows that our approach outperforms other state-of-the-art topic models with respect to both classification accuracy and computational speed. Preliminary results indicate that on average, the categorical content of a 10 minute video sequence can be indexed in under 5 seconds.
---
paper_title: Egocentric recognition of handled objects: Benchmark and analysis
paper_content:
Recognizing objects being manipulated in hands can provide essential information about a person's activities and have far-reaching impacts on the application of vision in everyday life. The egocentric viewpoint from a wearable camera has unique advantages in recognizing handled objects, such as having a close view and seeing objects in their natural positions. We collect a comprehensive dataset and analyze the feasibilities and challenges of the egocentric recognition of handled objects. We use a lapel-worn camera and record uncompressed video streams as human subjects manipulate objects in daily activities. We use 42 day-to-day objects that vary in size, shape, color and textureness. 10 video sequences are shot for each object under different illuminations and backgrounds. We use this dataset and a SIFT-based recognition system to analyze and quantitatively characterize the main challenges in egocentric object recognition, such as motion blur and hand occlusion, along with its unique constraints, such as hand color, location prior and temporal consistency. SIFT-based recognition has an average recognition rate of 12%, and reaches 20% through enforcing temporal consistency. We use simulations to estimate the upper bound for SIFT-based recognition at 64%, the loss of accuracy due to background clutter at 20%, and that of hand occlusion at 13%. Our quantitative evaluations show that the egocentric recognition of handled objects is a challenging but feasible problem with many unique characteristics and many opportunities for future research.
---
paper_title: Figure-ground segmentation improves handled object recognition in egocentric video
paper_content:
Identifying handled objects, i.e. objects being manipulated by a user, is essential for recognizing the person's activities. An egocentric camera as worn on the body enjoys many advantages such as having a natural first-person view and not needing to instrument the environment. It is also a challenging setting, where background clutter is known to be a major source of problems and is difficult to handle with the camera constantly and arbitrarily moving. In this work we develop a bottom-up motion-based approach to robustly segment out foreground objects in egocentric video and show that it greatly improves object recognition accuracy. Our key insight is that egocentric video of object manipulation is a special domain and many domain-specific cues can readily help. We compute dense optical flow and fit it into multiple affine layers. We then use a max-margin classifier to combine motion with empirical knowledge of object location and background movement as well as temporal cues of support region and color appearance. We evaluate our segmentation algorithm on the large Intel Egocentric Object Recognition dataset with 42 objects and 100K frames. We show that, when combined with temporal integration, figure-ground segmentation improves the accuracy of a SIFT-based recognition system from 33% to 60%, and that of a latent-HOG system from 64% to 86%.
---
paper_title: Learning to Recognize Daily Actions using Gaze
paper_content:
We present a probabilistic generative model for simultaneously recognizing daily actions and predicting gaze locations in videos recorded from an egocentric camera. We focus on activities requiring eye-hand coordination and model the spatio-temporal relationship between the gaze point, the scene objects, and the action label. Our model captures the fact that the distribution of both visual features and object occurrences in the vicinity of the gaze point is correlated with the verb-object pair describing the action. It explicitly incorporates known properties of gaze behavior from the psychology literature, such as the temporal delay between fixation and manipulation events. We present an inference method that can predict the best sequence of gaze locations and the associated action label from an input sequence of images. We demonstrate improvements in action recognition rates and gaze prediction accuracy relative to state-of-the-art methods, on two new datasets that contain egocentric videos of daily activities and gaze.
---
paper_title: Detecting activities of daily living in first-person camera views
paper_content:
We present a novel dataset and novel algorithms for the problem of detecting activities of daily living (ADL) in firstperson camera views. We have collected a dataset of 1 million frames of dozens of people performing unscripted, everyday activities. The dataset is annotated with activities, object tracks, hand positions, and interaction events. ADLs differ from typical actions in that they can involve long-scale temporal structure (making tea can take a few minutes) and complex object interactions (a fridge looks different when its door is open). We develop novel representations including (1) temporal pyramids, which generalize the well-known spatial pyramid to approximate temporal correspondence when scoring a model and (2) composite object models that exploit the fact that objects look different when being interacted with. We perform an extensive empirical evaluation and demonstrate that our novel representations produce a two-fold improvement over traditional approaches. Our analysis suggests that real-world ADL recognition is “all about the objects,” and in particular, “all about the objects being interacted with.”
---
paper_title: Investigating keyframe selection methods in the novel domain of passively captured visual lifelogs
paper_content:
The SenseCam is a passive capture wearable camera and when worn continuously it takes an average of 1,900 images per day. It can be used to create a personal lifelog or visual recording of a wearer's life which can be helpful as an aid to human memory. For such a large amount of visual information to be useful, it needs to be structured into "events", which can be achieved through automatic segmentation. An important component of this structuring process is the selection of keyframes to represent individual events. This work investigates a variety of techniques for the selection of a single representative keyframe image from each event, in order to provide the user with an instant visual summary of that event. In our experiments we use a large test set of 2,232 lifelog events collected by 5 users over a time period of one month each. We propose a novel keyframe selection technique which seeks to select the image with the highest "quality" as the keyframe. The inclusion of "quality" approaches in keyframe selection is demonstrated to be useful owing to the high variability in image visual quality within passively captured image collections.
---
paper_title: Understanding egocentric activities
paper_content:
We present a method to analyze daily activities, such as meal preparation, using video from an egocentric camera. Our method performs inference about activities, actions, hands, and objects. Daily activities are a challenging domain for activity recognition and are well-suited to an egocentric approach. In contrast to previous activity recognition methods, our approach does not require pre-trained detectors for objects and hands. Instead we demonstrate the ability to learn a hierarchical model of an activity by exploiting the consistent appearance of objects, hands, and actions that results from the egocentric context. We show that joint modeling of activities, actions, and objects leads to superior performance in comparison to the case where they are considered independently. We introduce a novel representation of actions based on object-hand interactions and experimentally demonstrate the superior performance of our representation in comparison to standard activity representations such as bag of words.
---
paper_title: Modeling Actions through State Changes
paper_content:
In this paper we present a model of action based on the change in the state of the environment. Many actions involve similar dynamics and hand-object relationships, but differ in their purpose and meaning. The key to differentiating these actions is the ability to identify how they change the state of objects and materials in the environment. We propose a weakly supervised method for learning the object and material states that are necessary for recognizing daily actions. Once these state detectors are learned, we can apply them to input videos and pool their outputs to detect actions. We further demonstrate that our method can be used to segment discrete actions from a continuous video of an activity. Our results outperform state-of-the-art action recognition and activity segmentation results.
---
paper_title: Learning to recognize objects in egocentric activities
paper_content:
This paper addresses the problem of learning object models from egocentric video of household activities, using extremely weak supervision. For each activity sequence, we know only the names of the objects which are present within it, and have no other knowledge regarding the appearance or location of objects. The key to our approach is a robust, unsupervised bottom up segmentation method, which exploits the structure of the egocentric domain to partition each frame into hand, object, and background categories. By using Multiple Instance Learning to match object instances across sequences, we discover and localize object occurrences. Object representations are refined through transduction and object-level classifiers are trained. We demonstrate encouraging results in detecting novel object instances using models produced by weakly-supervised learning.
---
paper_title: First-Person Activity Recognition: What Are They Doing to Me?
paper_content:
This paper discusses the problem of recognizing interaction-level human activities from a first-person viewpoint. The goal is to enable an observer (e.g., a robot or a wearable camera) to understand 'what activity others are performing to it' from continuous video inputs. These include friendly interactions such as 'a person hugging the observer' as well as hostile interactions like 'punching the observer' or 'throwing objects to the observer', whose videos involve a large amount of camera ego-motion caused by physical interactions. The paper investigates multi-channel kernels to integrate global and local motion information, and presents a new activity learning/recognition methodology that explicitly considers temporal structures displayed in first-person activity videos. In our experiments, we not only show classification results with segmented videos, but also confirm that our new approach is able to detect activities from continuous videos reliably.
---
paper_title: Story-Driven Summarization for Egocentric Video
paper_content:
We present a video summarization approach that discovers the story of an egocentric video. Given a long input video, our method selects a short chain of video subshots depicting the essential events. Inspired by work in text analysis that links news articles over time, we define a random-walk based metric of influence between subshots that reflects how visual objects contribute to the progression of events. Using this influence metric, we define an objective for the optimal k-subshot summary. Whereas traditional methods optimize a summary's diversity or representativeness, ours explicitly accounts for how one sub-event "leads to" another, which, critically, captures event connectivity beyond simple object co-occurrence. As a result, our summaries provide a better sense of story. We apply our approach to over 12 hours of daily activity video taken from 23 unique camera wearers, and systematically evaluate its quality compared to multiple baselines with 34 human subjects.
---
paper_title: Discovering important people and objects for egocentric video summarization
paper_content:
We present a video summarization approach for egocentric or “wearable” camera data. Given hours of video, the proposed method produces a compact storyboard summary of the camera wearer's day. In contrast to traditional keyframe selection techniques, the resulting summary focuses on the most important objects and people with which the camera wearer interacts. To accomplish this, we develop region cues indicative of high-level saliency in egocentric video — such as the nearness to hands, gaze, and frequency of occurrence — and learn a regressor to predict the relative importance of any new region based on these cues. Using these predictions and a simple form of temporal event detection, our method selects frames for the storyboard that reflect the key object-driven happenings. Critically, the approach is neither camera-wearer-specific nor object-specific; that means the learned importance metric need not be trained for a given user or context, and it can predict the importance of objects and people that have never been seen previously. Our results with 17 hours of egocentric data show the method's promise relative to existing techniques for saliency and summarization.
---
paper_title: Egocentric recognition of handled objects: Benchmark and analysis
paper_content:
Recognizing objects being manipulated in hands can provide essential information about a person's activities and have far-reaching impacts on the application of vision in everyday life. The egocentric viewpoint from a wearable camera has unique advantages in recognizing handled objects, such as having a close view and seeing objects in their natural positions. We collect a comprehensive dataset and analyze the feasibilities and challenges of the egocentric recognition of handled objects. We use a lapel-worn camera and record uncompressed video streams as human subjects manipulate objects in daily activities. We use 42 day-to-day objects that vary in size, shape, color and textureness. 10 video sequences are shot for each object under different illuminations and backgrounds. We use this dataset and a SIFT-based recognition system to analyze and quantitatively characterize the main challenges in egocentric object recognition, such as motion blur and hand occlusion, along with its unique constraints, such as hand color, location prior and temporal consistency. SIFT-based recognition has an average recognition rate of 12%, and reaches 20% through enforcing temporal consistency. We use simulations to estimate the upper bound for SIFT-based recognition at 64%, the loss of accuracy due to background clutter at 20%, and that of hand occlusion at 13%. Our quantitative evaluations show that the egocentric recognition of handled objects is a challenging but feasible problem with many unique characteristics and many opportunities for future research.
---
paper_title: Temporal segmentation and activity classification from first-person sensing
paper_content:
Temporal segmentation of human motion into actions is central to the understanding and building of computational models of human motion and activity recognition. Several issues contribute to the challenge of temporal segmentation and classification of human motion. These include the large variability in the temporal scale and periodicity of human actions, the complexity of representing articulated motion, and the exponential nature of all possible movement combinations. We provide initial results from investigating two distinct problems: classification of the overall task being performed, and the more difficult problem of classifying individual frames over time into specific actions. We explore first-person sensing through a wearable camera and inertial measurement units (IMUs) for temporally segmenting human motion into actions and performing activity classification in the context of cooking and recipe preparation in a natural environment. We present baseline results for supervised and unsupervised temporal segmentation, and recipe recognition in the CMU-multimodal activity database (CMU-MMAC).
---
paper_title: Investigating keyframe selection methods in the novel domain of passively captured visual lifelogs
paper_content:
The SenseCam is a passive capture wearable camera and when worn continuously it takes an average of 1,900 images per day. It can be used to create a personal lifelog or visual recording of a wearer's life which can be helpful as an aid to human memory. For such a large amount of visual information to be useful, it needs to be structured into "events", which can be achieved through automatic segmentation. An important component of this structuring process is the selection of keyframes to represent individual events. This work investigates a variety of techniques for the selection of a single representative keyframe image from each event, in order to provide the user with an instant visual summary of that event. In our experiments we use a large test set of 2,232 lifelog events collected by 5 users over a time period of one month each. We propose a novel keyframe selection technique which seeks to select the image with the highest "quality" as the keyframe. The inclusion of "quality" approaches in keyframe selection is demonstrated to be useful owing to the high variability in image visual quality within passively captured image collections.
---
|
Title: A Survey on Recent Advances of Computer Vision Algorithms for Egocentric Video
Section 1: Introduction
Description 1: Provide an overview of the significance and challenges of egocentric video in computer vision, including the practical applications and benefits of wearable cameras.
Section 2: Recent Work
Description 2: Summarize recent advancements in the field of egocentric video, categorized into object recognition, activity and action detection, and life logging video.
Section 3: Object Recognition
Description 3: Discuss various approaches and challenges in recognizing objects from egocentric videos, including the analysis, datasets used, and results obtained by different researchers.
Section 4: Activity and Action Detection
Description 4: Explore methods and evaluations concerning the detection and classification of activities and actions in egocentric video, distinguishing between simple actions and complex activities.
Section 5: Life Logging Video
Description 5: Delve into techniques for summarizing life logging video data, focusing on keyframe selection, novelty detection, and social interaction recognition.
Section 6: Datasets
Description 6: Provide a comprehensive overview of publicly available datasets relevant to egocentric video, detailing the types of data and annotations included.
Section 7: Summary and Comparison
Description 7: Summarize key points from prior sections, comparing methodologies and findings, emphasizing major trends and common techniques in the domain.
Section 8: Conclusion
Description 8: Conclude with a reflection on the progress and current state of research in egocentric video, noting emerging patterns and potential future directions.
|
Episturmian words: a survey
| 29 |
---
paper_title: Sequences With Grouped Factors
paper_content:
A connector for making electrical connection to a flexible conductor cable is disclosed which includes a block member having a first surface and a plurality of resilient electrical contact members projecting from the block member. Each of the contact members includes a portion substantially parallel to the first surface of the block member which terminates in a free end, with the parallel portions of the contact members and the first surface of the block member defining a space for receiving a flexible conductor cable to which electrical contact is to be made. The connector further includes a cover member having an inner surface. The cover member and block member are affixed together in such a manner that the inner surface of the cover member depresses the contact members in the direction of the first surface of the block member, whereby the free ends of the contact members exert a spring force against any flexible conductor cable inserted in the defined cable receiving space to make electrical contact with the flexible conductor cable and to hold the flexible conductor cable securely in the electrical connector.
---
paper_title: Palindromic factors of billiard words
paper_content:
We study palindromic factors of billiard words, in any dimension. There are differences between the two-dimensional case and higher dimensions. Arbitrarily long palindromic factors exist in any dimension, but arbitrarily long palindromic prefixes exist in general only in dimension 2.
---
paper_title: Fraenkel's conjecture for six sequences
paper_content:
Abstract A striking conjecture of Fraenkel asserts that every decomposition of Z_{>0} into m ≥ 3 sets {⌊α_i n + β_i⌋}_{n ∈ Z_{>0}}, with α_i and β_i real, α_i > 1 and the α_i distinct for i = 1, …, m, satisfies {α_1, …, α_m} = {(2^m - 1)/2^k : 0 ≤ k < m}. Fraenkel's conjecture was proved by Morikawa if m = 3 and, under some condition, if m = 4. Proofs in terms of balanced sequences have been given for m = 3 by the author and for m = 4 by Altman, Gaujal and Hordijk. In the present paper we use the latter approach to establish Fraenkel's conjecture for m = 5 and for m = 6.
---
paper_title: Coding rotations on intervals
paper_content:
We show that the coding of rotation by $\alpha$ on $m$ intervals with rationally independent lengths can be recoded over $m$ Sturmian words of angle $\alpha$. More precisely, for a given $m$ a universal automaton is constructed such that the edge indexed by the vector of values of the $i$th letter on each Sturmian word gives the value of the $i$th letter of the coding of rotation.
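In the simplest two-interval case, the coding of a rotation by an irrational α is a Sturmian word; the sketch below illustrates only this classical special case, not the m-interval recoding of the paper. The half-open interval convention and the choice of intercept are assumptions of the sketch.

```python
from math import sqrt

def rotation_coding(alpha, rho=0.0, length=30):
    """Code the orbit of rho under x -> x + alpha (mod 1) over two intervals:
    emit '1' when the point lies in [1 - alpha, 1), and '0' otherwise."""
    return ''.join('1' if (rho + k * alpha) % 1.0 >= 1.0 - alpha else '0'
                   for k in range(length))

# alpha = 1/phi^2 with intercept alpha gives the characteristic (Fibonacci) word
alpha = (3 - sqrt(5)) / 2
print(rotation_coding(alpha, rho=alpha, length=20))   # 01001010010010100101
```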
---
paper_title: A generalization of Sturmian sequences; combinatorial structure and transcendence
paper_content:
We investigate a class of minimal sequences on a finite alphabet A_k = {1,2,...,k} having (k - 1)n + 1 distinct subwords of length n. These sequences, originally defined by P. Arnoux and G. Rauzy, are a natural generalization of binary Sturmian sequences. We describe two simple combinatorial algorithms for constructing characteristic Arnoux-Rauzy sequences (one of which is new even in the Sturmian case). Arnoux-Rauzy sequences arising from fixed points of primitive morphisms are characterized by an underlying periodic structure. We show that every Arnoux-Rauzy sequence contains arbitrarily large subwords of the form V^{2+ε} and, in the Sturmian case, arbitrarily large subwords of the form V^{3+ε}. Finally, we prove that an irrational number whose base-b digit expansion is an Arnoux-Rauzy sequence is transcendental.
---
paper_title: Episturmian words and episturmian morphisms
paper_content:
Infinite episturmian words are a generalization of Sturmian words which includes the Arnoux-Rauzy sequences. We continue their study and that of episturmian morphisms, begun previously, in relation with the action of the shift operator. Palindromic and periodic factors of these words are described. We consider, in particular, the case where these words are generated by morphisms and introduce then a notion of intercept generalizing that of Sturmian words. Finally, we prove that the frequencies of the factors in a strong sense do exist for all episturmian words.
---
paper_title: Three distance theorems and combinatorics on words
paper_content:
The aim of this paper is to investigate the connection between some generalizations of the three distance theorem and combinatorics on words for sequences defined as codings of irrational rotations on the unit circle. We also give some new results concerning the frequencies of factors for such sequences.
---
paper_title: Automatic Sequences: Theory, Applications, Generalizations
paper_content:
Preface 1. Stringology 2. Number theory and algebra 3. Numeration systems 4. Finite automata and other models of computation 5. Automatic sequences 6. Uniform morphisms and automatic sequences 7. Morphic sequences 8. Frequency of letters 9. Characteristic words 10. Subwords 11. Cobham's theorem 12. Formal power series 13. Automatic real numbers 14. Multidimensional automatic sequences 15. Automaticity 16. k-regular sequences 17. Physics Appendix. Hints, references and solutions for selected exercises Bibliography Index.
---
paper_title: Balances for fixed points of primitive substitutions
paper_content:
An infinite word defined over a finite alphabet A is balanced if for any pair (ω, ω') of factors of the same length and for any letter a in the alphabet, ||ω|_a - |ω'|_a| ≤ 1, where |ω|_a denotes the number of occurrences of the letter a in the word ω. In this paper, we generalize this notion and introduce a measure of balance for an infinite sequence. In the case of fixed points of primitive substitutions, we show that the asymptotic behaviour of this measure is in part ruled by the spectrum of the incidence matrix associated with the substitution. Connections with frequencies of letters and other balance properties are also discussed.
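A direct way to read the definition above is to compute, on a long prefix, the balance function B(n): the largest difference of letter counts over pairs of factors of length n; the word is balanced exactly when B(n) <= 1 for all n. The brute-force check and the Fibonacci-word example below are illustrative choices, not taken from the paper.

```python
def balance_function(word, n):
    """B(n): max over letters a and factors w, w' of length n of | |w|_a - |w'|_a |."""
    factors = {word[i:i + n] for i in range(len(word) - n + 1)}
    return max(max(f.count(a) for f in factors) - min(f.count(a) for f in factors)
               for a in set(word))

# prefix of the Fibonacci word, fixed point of the substitution 0 -> 01, 1 -> 0
fib = "0"
for _ in range(12):
    fib = ''.join({"0": "01", "1": "0"}[c] for c in fib)

print(max(balance_function(fib, n) for n in range(1, 20)))   # 1: the prefix is balanced
```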
---
paper_title: Episturmian words and some constructions of de Luca and Rauzy
paper_content:
Abstract In this paper we study infinite episturmian words which are a natural generalization of Sturmian words to an arbitrary alphabet. A characteristic property is: they are closed under reversal and have at most one right special factor of each length. They are first obtained by a construction due to de LUCA which utilizes the palindrome closure. They can also be obtained by the way of extended RAUZY rules.
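The palindrome-closure construction referred to here (iterated right palindromic closure, Pal(va) = (Pal(v)a)^(+)) can be written down directly. The length-based stopping rule in the sketch is an implementation convenience, not part of the construction.

```python
def pal_closure(w):
    """w^(+): the shortest palindrome having w as a prefix."""
    for i in range(len(w)):
        if w[i:] == w[i:][::-1]:            # longest palindromic suffix of w
            return w + w[:i][::-1]
    return w                                 # w is empty

def epistandard_prefix(directive, length=40):
    """Prefix of the epistandard word directed by `directive`, via iterated closure."""
    u = ""
    for a in directive:
        u = pal_closure(u + a)
        if len(u) >= length:
            break
    return u[:length]

print(epistandard_prefix("ab" * 10))    # abaababaabaab... (the Fibonacci word over {a, b})
print(epistandard_prefix("abc" * 7))    # abacabaabacaba... (the Tribonacci word)
```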
---
paper_title: Transcendence of Sturmian or morphic continued fractions
paper_content:
Abstract We prove, using a theorem of W. Schmidt, that if the sequence of partial quotients of the continued fraction expansion of a positive irrational real number takes only two values, and begins with arbitrarily long blocks which are “almost squares,” then this number is either quadratic or transcendental. This result applies in particular to real numbers whose partial quotients form a Sturmian (or quasi-Sturmian) sequence, or are given by the sequence (1 + (⌊nα⌋ mod 2))_{n≥0}, or are a “repetitive” fixed point of a binary morphism satisfying some technical conditions.
---
paper_title: Generalized balances in Sturmian words
paper_content:
One of the numerous characterizations of Sturmian words is based on the notion of balance. An infinite word X on the {0, 1} alphabet is balanced if, given two factors of X, w and w', having the same length, the difference between the number of 0's in w (denoted by |w|_0) and the number of 0's in w' is at most 1, i.e. ||w|_0 - |w'|_0| ≤ 1. It is well known that an aperiodic word is Sturmian if and only if it is balanced. In this paper, the balance notion is generalized by considering the number of occurrences of a word u in w (denoted by |w|_u) and in w'. The following is obtained. Theorem. Let x be a Sturmian word. Let u, w and w' be three factors of x. Then, |w| = |w'| ⇒ ||w|_u - |w'|_u| ≤ |u|. Another balance property, called equilibrium, is also given. This notion permits us to give a new characterization of Sturmian words. The main techniques used in the proofs are word graphs and return words.
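The theorem quoted above can be checked numerically on a long Sturmian prefix. The brute-force counting below, on a prefix of the Fibonacci word with the factor u = "aba", is purely illustrative; only the bound |u| itself comes from the statement.

```python
def occurrences(w, u):
    """Number of (possibly overlapping) occurrences of u in w."""
    return sum(1 for i in range(len(w) - len(u) + 1) if w[i:i + len(u)] == u)

def max_imbalance(word, u, n):
    """max | |w|_u - |w'|_u | over all factors w, w' of `word` having length n."""
    counts = [occurrences(word[i:i + n], u) for i in range(len(word) - n + 1)]
    return max(counts) - min(counts)

# Fibonacci word prefix (fixed point of a -> ab, b -> a) and the factor u = "aba":
# the theorem bounds the imbalance by |u| = 3 for every factor length n.
fib = "a"
for _ in range(12):
    fib = ''.join({"a": "ab", "b": "a"}[c] for c in fib)

print(max(max_imbalance(fib, "aba", n) for n in range(3, 25)))   # at most 3
```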
---
paper_title: Episturmian words and episturmian morphisms
paper_content:
Infinite episturmian words are a generalization of Sturmian words which includes the Arnoux-Rauzy sequences. We continue their study and that of episturmian morphisms, begun previously, in relation with the action of the shift operator. Palindromic and periodic factors of these words are described. We consider, in particular, the case where these words are generated by morphisms and introduce then a notion of intercept generalizing that of Sturmian words. Finally, we prove that the frequencies of the factors in a strong sense do exist for all episturmian words.
---
paper_title: Episturmian words and some constructions of de Luca and Rauzy
paper_content:
Abstract In this paper we study infinite episturmian words which are a natural generalization of Sturmian words to an arbitrary alphabet. A characteristic property is: they are closed under reversal and have at most one right special factor of each length. They are first obtained by a construction due to de LUCA which utilizes the palindrome closure. They can also be obtained by the way of extended RAUZY rules.
---
paper_title: Characterizations of finite and infinite episturmian words via lexicographic orderings
paper_content:
In this talk, I will present some new results arising from collaborative work with Jacques Justin (France) and Giuseppe Pirillo (Italy). This work, which extends previous results on extremal properties of infinite Sturmian and episturmian words, is purely combinatorial in nature. Specifically, we characterize by lexicographic order all finite Sturmian and episturmian words, i.e., all (finite) factors of such infinite words. Consequently, we obtain a characterization of infinite episturmian words in a wide sense (episturmian and episkew infinite words). That is, we characterize the set of all infinite words whose factors are (finite) episturmian. Similarly, we characterize by lexicographic order all balanced infinite words over a 2-letter alphabet; in other words, all Sturmian and skew infinite words, the factors of which are (finite) Sturmian.
---
paper_title: EPISTURMIAN WORDS: SHIFTS, MORPHISMS AND NUMERATION SYSTEMS
paper_content:
Episturmian words, which include the Arnoux-Rauzy sequences, are infinite words on a finite alphabet generalizing the Sturmian words and sharing many of their properties. This was studied in previous papers. Here we gain a deeper insight into these properties. This leads in particular to considering numeration systems similar to the Ostrowski ones and to giving a matrix formula for computing the number of representations of an integer in such a system. We also obtain a complete answer to the question: if an episturmian word is morphic, which shifts of it, if any, are also morphic?
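The numeration systems built in this paper generalize Ostrowski-type systems driven by a directive word; they are not reproduced here. As a purely classical reference point, the greedy Zeckendorf expansion below is the Ostrowski system attached to the golden ratio (and hence to the Fibonacci/Sturmian case); it is offered only as background, not as the paper's construction.

```python
def zeckendorf(n):
    """Greedy representation of n as a sum of non-consecutive Fibonacci numbers."""
    fibs = [1, 2]
    while fibs[-1] <= n:
        fibs.append(fibs[-1] + fibs[-2])
    parts = []
    for f in reversed(fibs):
        if f <= n:
            parts.append(f)
            n -= f
    return parts

print(zeckendorf(100))   # [89, 8, 3]
```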
---
paper_title: Sturmian words: structure, combinatorics, and their arithmetics
paper_content:
Abstract We prove some new results concerning the structure, the combinatorics and the arithmetics of the set PER of all the words w having two periods p and q, p ≤ q, which are coprime and such that |w| = p + q - 2. A basic theorem relating PER with the set of finite standard Sturmian words was proved in de Luca and Mignosi (1994). The main result of this paper is the following simple inductive definition of PER: the empty word belongs to PER; if w is an already constructed word of PER, then also (aw)^(-) and (bw)^(-) belong to PER, where (-) denotes the operator of palindrome left-closure, i.e. it associates to each word u the smallest palindrome word u^(-) having u as a suffix. We show that, by this result, one can construct in a simple way all finite and infinite standard Sturmian words. We prove also that, up to the automorphism which interchanges the letter a with the letter b, any element of PER can be codified by the irreducible fraction p/q. This allows us to construct for any n ≥ 0 a natural bijection, that we call Farey correspondence, of the set of the Farey series of order n + 1 and the set of special elements of length n of the set St of all finite Sturmian words. Finally, we introduce the concepts of Farey tree and Farey monoid. This latter is obtained by defining a suitable product operation on the developments in continued fractions of the set of all irreducible fractions p/q.
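The inductive definition of PER stated above translates directly into code. The breadth-first enumeration over {a, b} with a length bound is just one convenient way to run the two rules; it is an implementation choice, not part of the paper.

```python
def left_pal_closure(w):
    """w^(-): the shortest palindrome having w as a suffix."""
    for i in range(len(w), 0, -1):
        if w[:i] == w[:i][::-1]:             # longest palindromic prefix of w
            return w[i:][::-1] + w
    return w                                  # w is empty

def per_words(max_len, letters="ab"):
    """Elements of PER (central words) of length <= max_len, generated by the rules:
    the empty word is in PER, and w in PER implies (aw)^(-) and (bw)^(-) are in PER."""
    seen, frontier = {""}, [""]
    while frontier:
        w = frontier.pop()
        for x in letters:
            # (x w)^(-) is strictly longer than w, so pruning by max_len loses nothing
            v = left_pal_closure(x + w)
            if len(v) <= max_len and v not in seen:
                seen.add(v)
                frontier.append(v)
    return sorted(seen, key=lambda u: (len(u), u))

print(per_words(5))
# ['', 'a', 'b', 'aa', 'bb', 'aaa', 'aba', 'bab', 'bbb', 'aaaa', 'bbbb',
#  'aaaaa', 'aabaa', 'ababa', 'babab', 'bbabb', 'bbbbb']
```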
---
paper_title: Episturmian morphisms and a Galois theorem on continued fractions
paper_content:
We associate with a word w on a finite alphabet A an episturmian (or Arnoux-Rauzy) morphism and a palindrome. We study their relations with the similar ones for the reversal of w . Then when |A|=2 we deduce, using the Sturmian words that are the fixed points of the two morphisms, a proof of a Galois theorem on purely periodic continued fractions whose periods are the reversal of each other.
---
paper_title: Episturmian words and some constructions of de Luca and Rauzy
paper_content:
Abstract In this paper we study infinite episturmian words which are a natural generalization of Sturmian words to an arbitrary alphabet. A characteristic property is: they are closed under reversal and have at most one right special factor of each length. They are first obtained by a construction due to de LUCA which utilizes the palindrome closure. They can also be obtained by the way of extended RAUZY rules.
---
paper_title: Characterizations of finite and infinite episturmian words via lexicographic orderings
paper_content:
In this talk, I will present some new results arising from collaborative work with Jacques Justin (France) and Giuseppe Pirillo (Italy). This work, which extends previous results on extremal properties of infinite Sturmian and episturmian words, is purely combinatorial in nature. Specifically, we characterize by lexicographic order all finite Sturmian and episturmian words, i.e., all (finite) factors of such infinite words. Consequently, we obtain a characterization of infinite episturmian words in a wide sense (episturmian and episkew infinite words). That is, we characterize the set of all infinite words whose factors are (finite) episturmian. Similarly, we characterize by lexicographic order all balanced infinite words over a 2-letter alphabet; in other words, all Sturmian and skew infinite words, the factors of which are (finite) Sturmian.
---
paper_title: Quasiperiodic and Lyndon episturmian words
paper_content:
Recently the second two authors characterized quasiperiodic Sturmian words, proving that a Sturmian word is non-quasiperiodic if and only if, it is an infinite Lyndon word. Here we extend this study to episturmian words (a natural generalization of Sturmian words) by describing all the quasiperiods of an episturmian word, which yields a characterization of quasiperiodic episturmian words in terms of their directive words. Even further, we establish a complete characterization of all episturmian words that are Lyndon words. Our main results show that, unlike the Sturmian case, there is a much wider class of episturmian words that are non-quasiperiodic, besides those that are infinite Lyndon words. Our key tools are morphisms and directive words, in particular normalized directive words, which we introduced in an earlier paper. Also of importance is the use of return words to characterize quasiperiodic episturmian words, since such a method could be useful in other contexts.
---
paper_title: A local balance property of episturmian words
paper_content:
We prove that episturmian words and Arnoux-Rauzy sequences can be characterized using a local balance property. We also give a new characterization of epistandard words.
---
paper_title: On Sturmian and episturmian words, and related topics
paper_content:
Combinatorics on words plays a fundamental role in various fields of mathematics, not to mention its relevance in theoretical computer science and physics. Most renowned among its branches is the theory of infinite binary sequences called Sturmian words, which are fascinating in many respects, having been studied from combinatorial, algebraic, and geometric points of view. The most well-known example of a Sturmian word is the ubiquitous Fibonacci word, the importance of which lies in combinatorial pattern matching and the theory of words. Properties of the Fibonacci word and, more generally, Sturmian words have been extensively studied, not only because of their significance in discrete mathematics, but also due to their practical applications in computer imagery (digital straightness), theoretical physics (quasicrystal modelling) and molecular biology.
---
paper_title: Powers in a class of A-strict standard episturmian words
paper_content:
This paper concerns a specific class of strict standard episturmian words whose directive words resemble those of characteristic Sturmian words. In particular, we explicitly determine all integer powers occurring in such infinite words, extending recent results of Damanik and Lenz [D. Damanik, D. Lenz, Powers in Sturmian sequences, European J. Combin. 24 (2003) 377-390, doi:10.1016/S0195-6698(03)00026-X], who studied powers in Sturmian words. The key tools in our analysis are canonical decompositions and a generalization of singular words, which were originally defined for the ubiquitous Fibonacci word. Our main results are demonstrated via some examples, including the k-bonacci word, a generalization of the Fibonacci word to a k-letter alphabet (k>=2).
---
paper_title: Episturmian words and some constructions of de Luca and Rauzy
paper_content:
Abstract In this paper we study infinite episturmian words which are a natural generalization of Sturmian words to an arbitrary alphabet. A characteristic property is: they are closed under reversal and have at most one right special factor of each length. They are first obtained by a construction due to de LUCA which utilizes the palindrome closure. They can also be obtained by the way of extended RAUZY rules.
---
paper_title: Some properties of the Tribonacci sequence
paper_content:
In this paper, we consider the factor properties of the Tribonacci sequence. We define the singular words, and then give the singular factorization and the Lyndon factorization. As applications, we study the powers of the factors and the overlap of the factors. We also calculate the free index of the sequence.
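For concreteness, the Tribonacci word is the fixed point of the morphism a -> ab, b -> ac, c -> a. The factor count at the end is only a sanity check of its 2n + 1 factor complexity on a finite prefix, for lengths small relative to the prefix.

```python
def tribonacci_prefix(iterations=10):
    """Prefix of the Tribonacci word: iterate the morphism a -> ab, b -> ac, c -> a."""
    rules = {"a": "ab", "b": "ac", "c": "a"}
    w = "a"
    for _ in range(iterations):
        w = ''.join(rules[c] for c in w)
    return w

t = tribonacci_prefix()
print(t[:14])   # abacabaabacaba
# distinct factors of each length n, expected to equal 2n + 1 for these small n
print([len({t[i:i + n] for i in range(len(t) - n + 1)}) for n in range(1, 6)])   # [3, 5, 7, 9, 11]
```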
---
paper_title: Conjugacy and episturmian morphisms
paper_content:
Episturmian morphisms generalize Sturmian morphisms. Here, we study some intrinsic properties of these morphisms: invertibility, presentation, cancellativity, unitarity, characterization by conjugacy. Most of them are generalizations of known properties of Sturmian morphisms. But we present also some results on episturmian morphisms that have not already been stated in the particular case of Sturmian morphisms: characterization of the episturmian morphisms that preserve palindromes, new algorithms to compute conjugates.We also study the conjugation of morphisms in the general case and show that the monoid of invertible morphisms on an alphabet containing at least three letters is not finitely generated.
---
paper_title: Episturmian words and some constructions of de Luca and Rauzy
paper_content:
Abstract In this paper we study infinite episturmian words which are a natural generalization of Sturmian words to an arbitrary alphabet. A characteristic property is: they are closed under reversal and have at most one right special factor of each length. They are first obtained by a construction due to de LUCA which utilizes the palindrome closure. They can also be obtained by the way of extended RAUZY rules.
---
paper_title: Episturmian words and episturmian morphisms
paper_content:
Infinite episturmian words are a generalization of Sturmian words which includes the Arnoux-Rauzy sequences. We continue their study and that of episturmian morphisms, begun previously, in relation with the action of the shift operator. Palindromic and periodic factors of these words are described. We consider, in particular, the case where these words are generated by morphisms and introduce then a notion of intercept generalizing that of Sturmian words. Finally, we prove that the frequencies of the factors in a strong sense do exist for all episturmian words.
---
paper_title: A connection between palindromic and factor complexity using return words
paper_content:
In this paper we prove that for any infinite word W whose set of factors is closed under reversal, the following conditions are equivalent: (I) all complete returns to palindromes are palindromes; (II) P(n) + P(n+1) = C(n+1) - C(n) + 2 for all n, where P (resp. C) denotes the palindromic complexity (resp. factor complexity) function of W, which counts the number of distinct palindromic factors (resp. factors) of each length in W.
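The equivalence above can be sanity-checked on an episturmian example such as the Tribonacci word, whose factor set is closed under reversal. The check below uses a long finite prefix, so it is reliable only for lengths n that are small compared with the prefix; that restriction is an artifact of the sketch, not of the theorem.

```python
def factor_set(word, n):
    return {word[i:i + n] for i in range(len(word) - n + 1)}

def identity_holds(word, max_n):
    """Check P(n) + P(n+1) == C(n+1) - C(n) + 2 for n = 1, ..., max_n."""
    C = [len(factor_set(word, n)) for n in range(max_n + 2)]
    P = [sum(1 for f in factor_set(word, n) if f == f[::-1]) for n in range(max_n + 2)]
    return all(P[n] + P[n + 1] == C[n + 1] - C[n] + 2 for n in range(1, max_n + 1))

# long prefix of the Tribonacci word (fixed point of a -> ab, b -> ac, c -> a)
rules = {"a": "ab", "b": "ac", "c": "a"}
w = "a"
for _ in range(13):
    w = ''.join(rules[c] for c in w)

print(identity_holds(w, 10))   # True
```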
---
paper_title: EPISTURMIAN WORDS: SHIFTS, MORPHISMS AND NUMERATION SYSTEMS
paper_content:
Episturmian words, which include the Arnoux-Rauzy sequences, are infinite words on a finite alphabet generalizing the Sturmian words and sharing many of their properties. This was studied in previous papers. Here we gain a deeper insight into these properties. This leads in particular to considering numeration systems similar to the Ostrowski ones and to giving a matrix formula for computing the number of representations of an integer in such a system. We also obtain a complete answer to the question: if an episturmian word is morphic, which shifts of it, if any, are also morphic?
---
paper_title: A generalization of Sturmian sequences; combinatorial structure and transcendence
paper_content:
We investigate a class of minimal sequences on a finite alphabet A_k = {1,2,...,k} having (k - 1)n + 1 distinct subwords of length n. These sequences, originally defined by P. Arnoux and G. Rauzy, are a natural generalization of binary Sturmian sequences. We describe two simple combinatorial algorithms for constructing characteristic Arnoux-Rauzy sequences (one of which is new even in the Sturmian case). Arnoux-Rauzy sequences arising from fixed points of primitive morphisms are characterized by an underlying periodic structure. We show that every Arnoux-Rauzy sequence contains arbitrarily large subwords of the form V^{2+ε} and, in the Sturmian case, arbitrarily large subwords of the form V^{3+ε}. Finally, we prove that an irrational number whose base-b digit expansion is an Arnoux-Rauzy sequence is transcendental.
---
paper_title: Episturmian words and episturmian morphisms
paper_content:
Infinite episturmian words are a generalization of Sturmian words which includes the Arnoux-Rauzy sequences. We continue their study and that of episturmian morphisms, begun previously, in relation with the action of the shift operator. Palindromic and periodic factors of these words are described. We consider, in particular, the case where these words are generated by morphisms and introduce then a notion of intercept generalizing that of Sturmian words. Finally, we prove that the frequencies of the factors in a strong sense do exist for all episturmian words.
---
paper_title: On a characteristic property of ARNOUX–RAUZY sequences
paper_content:
Here we give a characterization of Arnoux-Rauzy sequences by way of the lexicographic orderings of their alphabet.
---
paper_title: Episturmian words and some constructions of de Luca and Rauzy
paper_content:
Abstract In this paper we study infinite episturmian words which are a natural generalization of Sturmian words to an arbitrary alphabet. A characteristic property is: they are closed under reversal and have at most one right special factor of each length. They are first obtained by a construction due to de LUCA which utilizes the palindrome closure. They can also be obtained by the way of extended RAUZY rules.
---
paper_title: A characterization of Sturmian morphisms
paper_content:
A morphism is called Sturmian if it preserves all Sturmian (infinite) words. It is weakly Sturmian if it preserves at least one Sturmian word. We prove that a morphism is Sturmian if and only if it keeps the word ba²ba²baba²bab balanced. As a consequence, weakly Sturmian morphisms are Sturmian. An application to infinite words associated to irrational numbers is given.
---
paper_title: Episturmian words and episturmian morphisms
paper_content:
Infinite episturmian words are a generalization of Sturmian words which includes the Arnoux-Rauzy sequences. We continue their study and that of episturmian morphisms, begun previously, in relation with the action of the shift operator. Palindromic and periodic factors of these words are described. We consider, in particular, the case where these words are generated by morphisms and introduce then a notion of intercept generalizing that of Sturmian words. Finally, we prove that the frequencies of the factors in a strong sense do exist for all episturmian words.
---
paper_title: Episturmian words and some constructions of de Luca and Rauzy
paper_content:
In this paper we study infinite episturmian words, which are a natural generalization of Sturmian words to an arbitrary alphabet. A characteristic property is that they are closed under reversal and have at most one right special factor of each length. They are first obtained by a construction due to de Luca which utilizes the palindrome closure. They can also be obtained by way of extended Rauzy rules.
---
paper_title: On Sturmian and episturmian words, and related topics
paper_content:
Combinatorics on words plays a fundamental role in various fields of mathematics, not to mention its relevance in theoretical computer science and physics. Most renowned among its branches is the theory of infinite binary sequences called Sturmian words, which are fascinating in many respects, having been studied from combinatorial, algebraic, and geometric points of view. The most well-known example of a Sturmian word is the ubiquitous Fibonacci word, the importance of which lies in combinatorial pattern matching and the theory of words. Properties of the Fibonacci word and, more generally, Sturmian words have been extensively studied, not only because of their significance in discrete mathematics, but also due to their practical applications in computer imagery (digital straightness), theoretical physics (quasicrystal modelling) and molecular biology.
---
paper_title: Episturmian words and episturmian morphisms
paper_content:
Infinite episturmian words are a generalization of Sturmian words which includes the Arnoux-Rauzy sequences. We continue their study and that of episturmian morphisms, begun previously, in relation with the action of the shift operator. Palindromic and periodic factors of these words are described. We consider, in particular, the case where these words are generated by morphisms and introduce then a notion of intercept generalizing that of Sturmian words. Finally, we prove that the frequencies of the factors in a strong sense do exist for all episturmian words.
---
paper_title: Some remarks on invertible substitutions on three letter alphabet
paper_content:
By introducing the notion of “prime substitution” it is shown that the set of invertible substitutions over an alphabet of three or more letters is not finitely generated. Some examples are given.
---
paper_title: On Morphisms Preserving Infinite Lyndon Words
paper_content:
In a previous paper, we characterized free monoid morphisms preserving finite Lyndon words. In particular, we proved that such a morphism preserves the order on finite words. Here we study morphisms preserving infinite Lyndon words and morphisms preserving the order on infinite words. We characterize them and show relations with morphisms preserving Lyndon words or the order on finite words. We also briefly study morphisms preserving border-free words and those preserving the radix order.
---
paper_title: Episturmian words and some constructions of de Luca and Rauzy
paper_content:
In this paper we study infinite episturmian words, which are a natural generalization of Sturmian words to an arbitrary alphabet. A characteristic property is that they are closed under reversal and have at most one right special factor of each length. They are first obtained by a construction due to de Luca which utilizes the palindrome closure. They can also be obtained by way of extended Rauzy rules.
---
paper_title: Conjugacy and episturmian morphisms
paper_content:
Episturmian morphisms generalize Sturmian morphisms. Here, we study some intrinsic properties of these morphisms: invertibility, presentation, cancellativity, unitarity, and characterization by conjugacy. Most of them are generalizations of known properties of Sturmian morphisms. But we also present some results on episturmian morphisms that have not already been stated in the particular case of Sturmian morphisms: a characterization of the episturmian morphisms that preserve palindromes, and new algorithms to compute conjugates. We also study the conjugation of morphisms in the general case and show that the monoid of invertible morphisms on an alphabet containing at least three letters is not finitely generated.
---
paper_title: Complexity of sequences and dynamical systems
paper_content:
This is a survey of recent results on the notion of symbolic complexity, which counts the number of factors of an infinite sequence, particularly in view of its relations with dynamical systems.
---
paper_title: A local balance property of episturmian words
paper_content:
We prove that episturmian words and Arnoux-Rauzy sequences can be characterized using a local balance property. We also give a new characterization of epistandard words.
---
paper_title: Conjugacy of morphisms and Lyndon decomposition of standard Sturmian words
paper_content:
Using the notions of conjugacy of morphisms and of morphisms preserving Lyndon words, we answer a question of G. Melancon. We characterize cases where the sequence of Lyndon words in the Lyndon factorization of a standard Sturmian word is morphic. In each possible case, the corresponding morphism is given.
---
paper_title: A generalization of Sturmian sequences; combinatorial structure and transcendence
paper_content:
We investigate a class of minimal sequences on a finite alphabet A_k = {1,2,...,k} having (k - 1)n + 1 distinct subwords of length n. These sequences, originally defined by P. Arnoux and G. Rauzy, are a natural generalization of binary Sturmian sequences. We describe two simple combinatorial algorithms for constructing characteristic Arnoux-Rauzy sequences (one of which is new even in the Sturmian case). Arnoux-Rauzy sequences arising from fixed points of primitive morphisms are characterized by an underlying periodic structure. We show that every Arnoux-Rauzy sequence contains arbitrarily large subwords of the form V^{2+ε} and, in the Sturmian case, arbitrarily large subwords of the form V^{3+ε}. Finally, we prove that an irrational number whose base-b digit expansion is an Arnoux-Rauzy sequence is transcendental.
---
paper_title: Initial powers of Sturmian sequences
paper_content:
In this paper we investigate powers of prefixes of Sturmian sequences. We give an explicit formula for ice(ω), the initial critical exponent of a Sturmian sequence ω, defined as the supremum of all real numbers p > 0 for which there exist arbitrarily long prefixes of ω of the form u^p, in terms of its S-adic representation. This formula is based on Ostrowski's numeration system. Furthermore, we characterize those irrational slopes α for which there exists a Sturmian sequence ω beginning in only finitely many (2 + ε)-powers, that is, for which ice(ω) = 2. In the process we recover the known results for the index (or critical exponent) of a Sturmian sequence. We also focus on the Fibonacci Sturmian shift and prove that the set of Sturmian sequences with ice strictly smaller than its everywhere value has Hausdorff dimension 1.
---
paper_title: Episturmian words and episturmian morphisms
paper_content:
Infinite episturmian words are a generalization of Sturmian words which includes the Arnoux-Rauzy sequences. We continue their study and that of episturmian morphisms, begun previously, in relation with the action of the shift operator. Palindromic and periodic factors of these words are described. We consider, in particular, the case where these words are generated by morphisms and introduce then a notion of intercept generalizing that of Sturmian words. Finally, we prove that the frequencies of the factors in a strong sense do exist for all episturmian words.
---
paper_title: Balances for fixed points of primitive substitutions
paper_content:
An infinite word defined over a finite alphabet A is balanced if for any pair (ω, ω') of factors of the same length and for any letter a in the alphabet, | |ω|_a - |ω'|_a | ≤ 1, where |ω|_a denotes the number of occurrences of the letter a in the word ω. In this paper, we generalize this notion and introduce a measure of balance for an infinite sequence. In the case of fixed points of primitive substitutions, we show that the asymptotic behaviour of this measure is in part ruled by the spectrum of the incidence matrix associated with the substitution. Connections with frequencies of letters and other balance properties are also discussed.
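A brute-force sketch (ours) of the 1-balance check on a classical example — the Fibonacci word, which is balanced — simply compares letter counts over all factors of a given length:

```python
# Check | |u|_a - |v|_a | <= 1 over all pairs of equal-length factors of a
# Fibonacci-word prefix. Illustrative sketch; not the paper's balance measure.

def fibonacci_word(length):
    w = "a"
    while len(w) < length:
        w = "".join("ab" if c == "a" else "a" for c in w)
    return w[:length]

def balance(w, n, letter="a"):
    """max | |u|_letter - |v|_letter | over all factors u, v of w of length n."""
    counts = {w[i:i + n].count(letter) for i in range(len(w) - n + 1)}
    return max(counts) - min(counts)

w = fibonacci_word(3000)
assert all(balance(w, n) <= 1 for n in range(1, 50))
print("the Fibonacci prefix is 1-balanced for factor lengths 1..49")
```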
---
paper_title: Quasiperiodic Sturmian words and morphisms
paper_content:
We characterize all quasiperiodic Sturmian words: a Sturmian word is not quasiperiodic if and only if it is a Lyndon word. Moreover, we study links between Sturmian morphisms and quasiperiodicity.
---
paper_title: A characterization of fine words over a finite alphabet
paper_content:
To any infinite word w over a finite alphabet A we can associate two infinite words min(w) and max(w) such that any prefix of min(w) (resp. max(w)) is the lexicographically smallest (resp. greatest) amongst the factors of w of the same length. We say that an infinite word w over A is "fine" if there exists an infinite word u such that, for any lexicographic order, min(w) = au where a = min(A). In this paper, we characterize fine words; specifically, we prove that an infinite word w is fine if and only if w is either a "strict episturmian word" or a strict "skew episturmian word". This characterization generalizes a recent result of G. Pirillo, who proved that a fine word over a 2-letter alphabet is either an (aperiodic) Sturmian word, or an ultimately periodic (but not periodic) infinite word, all of whose factors are (finite) Sturmian.
---
paper_title: EPISTURMIAN WORDS: SHIFTS, MORPHISMS AND NUMERATION SYSTEMS
paper_content:
Episturmian words, which include the Arnoux-Rauzy sequences, are infinite words on a finite alphabet generalizing the Sturmian words and sharing many of the same properties. This was studied in previous papers. Here we gain a deeper insight into these properties. This leads us, in particular, to consider numeration systems similar to the Ostrowski ones and to give a matrix formula for computing the number of representations of an integer in such a system. We also obtain a complete answer to the question: if an episturmian word is morphic, which shifts of it, if any, are also morphic?
---
paper_title: Episturmian words and episturmian morphisms
paper_content:
Infinite episturmian words are a generalization of Sturmian words which includes the Arnoux-Rauzy sequences. We continue their study and that of episturmian morphisms, begun previously, in relation with the action of the shift operator. Palindromic and periodic factors of these words are described. We consider, in particular, the case where these words are generated by morphisms and introduce then a notion of intercept generalizing that of Sturmian words. Finally, we prove that the frequencies of the factors in a strong sense do exist for all episturmian words.
---
paper_title: Balances for fixed points of primitive substitutions
paper_content:
An infinite word defined over a finite alphabet A is balanced if for any pair (ω, ω') of factors of the same length and for any letter a in the alphabet, | |ω|_a - |ω'|_a | ≤ 1, where |ω|_a denotes the number of occurrences of the letter a in the word ω. In this paper, we generalize this notion and introduce a measure of balance for an infinite sequence. In the case of fixed points of primitive substitutions, we show that the asymptotic behaviour of this measure is in part ruled by the spectrum of the incidence matrix associated with the substitution. Connections with frequencies of letters and other balance properties are also discussed.
---
paper_title: Conjugacy and episturmian morphisms
paper_content:
Episturmian morphisms generalize Sturmian morphisms. Here, we study some intrinsic properties of these morphisms: invertibility, presentation, cancellativity, unitarity, and characterization by conjugacy. Most of them are generalizations of known properties of Sturmian morphisms. But we also present some results on episturmian morphisms that have not already been stated in the particular case of Sturmian morphisms: a characterization of the episturmian morphisms that preserve palindromes, and new algorithms to compute conjugates. We also study the conjugation of morphisms in the general case and show that the monoid of invertible morphisms on an alphabet containing at least three letters is not finitely generated.
---
paper_title: EPISTURMIAN WORDS: SHIFTS, MORPHISMS AND NUMERATION SYSTEMS
paper_content:
Episturmian words, which include the Arnoux-Rauzy sequences, are infinite words on a finite alphabet generalizing the Sturmian words and sharing many of the same properties. This was studied in previous papers. Here we gain a deeper insight into these properties. This leads us, in particular, to consider numeration systems similar to the Ostrowski ones and to give a matrix formula for computing the number of representations of an integer in such a system. We also obtain a complete answer to the question: if an episturmian word is morphic, which shifts of it, if any, are also morphic?
---
paper_title: Directive words of episturmian words: equivalences and normalization
paper_content:
Episturmian morphisms constitute a powerful tool to study episturmian words. Indeed, any episturmian word can be infinitely decomposed over the set of pure episturmian morphisms. Thus, an episturmian word can be defined by one of its morphic decompositions or, equivalently, by a certain directive word. Here we characterize pairs of words directing the same episturmian word. We also propose a way to uniquely define any episturmian word through a normalization of its directive words. As a consequence of these results, we characterize episturmian words having a unique directive word.
---
paper_title: Episturmian words and episturmian morphisms
paper_content:
Infinite episturmian words are a generalization of Sturmian words which includes the Arnoux-Rauzy sequences. We continue their study and that of episturmian morphisms, begun previously, in relation with the action of the shift operator. Palindromic and periodic factors of these words are described. We consider, in particular, the case where these words are generated by morphisms and introduce then a notion of intercept generalizing that of Sturmian words. Finally, we prove that the frequencies of the factors in a strong sense do exist for all episturmian words.
---
paper_title: EPISTURMIAN WORDS: SHIFTS, MORPHISMS AND NUMERATION SYSTEMS
paper_content:
Episturmian words, which include the Arnoux-Rauzy sequences, are infinite words on a finite alphabet generalizing the Sturmian words and sharing many of the same properties. This was studied in previous papers. Here we gain a deeper insight into these properties. This leads us, in particular, to consider numeration systems similar to the Ostrowski ones and to give a matrix formula for computing the number of representations of an integer in such a system. We also obtain a complete answer to the question: if an episturmian word is morphic, which shifts of it, if any, are also morphic?
---
paper_title: Directive words of episturmian words: equivalences and normalization
paper_content:
Episturmian morphisms constitute a powerful tool to study episturmian words. Indeed, any episturmian word can be infinitely decomposed over the set of pure episturmian morphisms. Thus, an episturmian word can be defined by one of its morphic decompositions or, equivalently, by a certain directive word. Here we characterize pairs of words directing the same episturmian word. We also propose a way to uniquely define any episturmian word through a normalization of its directive words. As a consequence of these results, we characterize episturmian words having a unique directive word.
---
paper_title: Episturmian words and episturmian morphisms
paper_content:
Infinite episturmian words are a generalization of Sturmian words which includes the Arnoux-Rauzy sequences. We continue their study and that of episturmian morphisms, begun previously, in relation with the action of the shift operator. Palindromic and periodic factors of these words are described. We consider, in particular, the case where these words are generated by morphisms and introduce then a notion of intercept generalizing that of Sturmian words. Finally, we prove that the frequencies of the factors in a strong sense do exist for all episturmian words.
---
paper_title: Conjugacy and episturmian morphisms
paper_content:
Episturmian morphisms generalize Sturmian morphisms. Here, we study some intrinsic properties of these morphisms: invertibility, presentation, cancellativity, unitarity, and characterization by conjugacy. Most of them are generalizations of known properties of Sturmian morphisms. But we also present some results on episturmian morphisms that have not already been stated in the particular case of Sturmian morphisms: a characterization of the episturmian morphisms that preserve palindromes, and new algorithms to compute conjugates. We also study the conjugation of morphisms in the general case and show that the monoid of invertible morphisms on an alphabet containing at least three letters is not finitely generated.
---
paper_title: EPISTURMIAN WORDS: SHIFTS, MORPHISMS AND NUMERATION SYSTEMS
paper_content:
Episturmian words, which include the Arnoux-Rauzy sequences, are infinite words on a finite alphabet generalizing the Sturmian words and sharing many of the same properties. This was studied in previous papers. Here we gain a deeper insight into these properties. This leads us, in particular, to consider numeration systems similar to the Ostrowski ones and to give a matrix formula for computing the number of representations of an integer in such a system. We also obtain a complete answer to the question: if an episturmian word is morphic, which shifts of it, if any, are also morphic?
---
paper_title: Quasiperiodic and Lyndon episturmian words
paper_content:
Recently the second two authors characterized quasiperiodic Sturmian words, proving that a Sturmian word is non-quasiperiodic if and only if it is an infinite Lyndon word. Here we extend this study to episturmian words (a natural generalization of Sturmian words) by describing all the quasiperiods of an episturmian word, which yields a characterization of quasiperiodic episturmian words in terms of their directive words. Even further, we establish a complete characterization of all episturmian words that are Lyndon words. Our main results show that, unlike the Sturmian case, there is a much wider class of episturmian words that are non-quasiperiodic, besides those that are infinite Lyndon words. Our key tools are morphisms and directive words, in particular normalized directive words, which we introduced in an earlier paper. Also of importance is the use of return words to characterize quasiperiodic episturmian words, since such a method could be useful in other contexts.
---
paper_title: Directive words of episturmian words: equivalences and normalization
paper_content:
Episturmian morphisms constitute a powerful tool to study episturmian words. Indeed, any episturmian word can be infinitely decomposed over the set of pure episturmian morphisms. Thus, an episturmian word can be defined by one of its morphic decompositions or, equivalently, by a certain directive word. Here we characterize pairs of words directing the same episturmian word. We also propose a way to uniquely define any episturmian word through a normalization of its directive words. As a consequence of these results, we characterize episturmian words having a unique directive word.
---
paper_title: Inequalities characterizing standard Sturmian and episturmian words
paper_content:
Considering the smallest and the greatest factors with respect to the lexicographic order, we associate to each infinite word r two other infinite words min(r) and max(r). In this paper we prove that the inequalities as ≤ min(s) ≤ max(s) ≤ bs (where a and b denote the smallest and the greatest letter) characterize standard Sturmian words (proper ones and periodic ones), and that the condition "for any x ∈ A and lexicographic order < satisfying x = min(A) one has xs ≤ min(s)" characterizes standard episturmian words.
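These inequalities are easy to test numerically; the sketch below (our own, on finite prefixes only) checks a·s ≤ min(s) ≤ max(s) ≤ b·s, prefix by prefix, on the standard Fibonacci word, where min(s) and max(s) collect the lexicographically smallest and greatest factors of each length:

```python
# Prefix-by-prefix check of  a*s <= min(s) <= max(s) <= b*s  on the standard
# Fibonacci word over {a, b}. Illustrative sketch on a finite prefix.

def fibonacci_word(length):
    w = "a"
    while len(w) < length:
        w = "".join("ab" if c == "a" else "a" for c in w)
    return w[:length]

s = fibonacci_word(3000)
for n in range(1, 40):
    facts = {s[i:i + n] for i in range(len(s) - n + 1)}
    lo, hi = min(facts), max(facts)           # lexicographic extremes of length n
    assert ("a" + s)[:n] <= lo <= hi <= ("b" + s)[:n]
print("inequalities hold for factor lengths 1..39")
```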
---
paper_title: Characterisations of Balanced Words via Orderings
paper_content:
Three new characterisations of balanced words are presented. Each of these characterisations is based on the ordering of a shift orbit, either lexicographically or with respect to the norm |·|_1 (which counts the number of occurrences of the symbol 1).
---
paper_title: Initial powers of Sturmian sequences
paper_content:
In this paper we investigate powers of prefixes of Sturmian sequences. We give an explicit formula for ice(ω), the initial critical exponent of a Sturmian sequence ω, defined as the supremum of all real numbers p > 0 for which there exist arbitrarily long prefixes of ω of the form u^p, in terms of its S-adic representation. This formula is based on Ostrowski's numeration system. Furthermore, we characterize those irrational slopes α for which there exists a Sturmian sequence ω beginning in only finitely many (2 + ε)-powers, that is, for which ice(ω) = 2. In the process we recover the known results for the index (or critical exponent) of a Sturmian sequence. We also focus on the Fibonacci Sturmian shift and prove that the set of Sturmian sequences with ice strictly smaller than its everywhere value has Hausdorff dimension 1.
---
paper_title: A characterization of fine words over a finite alphabet
paper_content:
To any infinite word w over a finite alphabet A we can associate two infinite words min(w) and max(w) such that any prefix of min(w) (resp. max(w)) is the lexicographically smallest (resp. greatest) amongst the factors of w of the same length. We say that an infinite word w over A is "fine" if there exists an infinite word u such that, for any lexicographic order, min(w) = au where a = min(A). In this paper, we characterize fine words; specifically, we prove that an infinite word w is fine if and only if w is either a "strict episturmian word" or a strict "skew episturmian word". This characterization generalizes a recent result of G. Pirillo, who proved that a fine word over a 2-letter alphabet is either an (aperiodic) Sturmian word, or an ultimately periodic (but not periodic) infinite word, all of whose factors are (finite) Sturmian.
---
paper_title: Palindromic Richness
paper_content:
In this paper, we study combinatorial and structural properties of a new class of finite and infinite words that are 'rich' in palindromes in the utmost sense. A characteristic property of so-called "rich words" is that all complete returns to any palindromic factor are themselves palindromes. These words encompass the well-known episturmian words, originally introduced by the second author together with X. Droubay and G. Pirillo in 2001. Other examples of rich words have appeared in many different contexts. Here we present the first unified approach to the study of this intriguing family of words. Amongst our main results, we give an explicit description of the periodic rich infinite words and show that the recurrent balanced rich infinite words coincide with the balanced episturmian words. We also consider two wider classes of infinite words, namely "weakly rich words" and almost rich words (both strictly contain all rich words, but neither one is contained in the other). In particular, we classify all recurrent balanced weakly rich words. As a consequence, we show that any such word on at least three letters is necessarily episturmian; hence weakly rich words obey Fraenkel's conjecture. Likewise, we prove that a certain class of almost rich words obeys Fraenkel's conjecture by showing that the recurrent balanced ones are episturmian or contain at least two distinct letters with the same frequency. Lastly, we study the action of morphisms on (almost) rich words with particular interest in morphisms that preserve (almost) richness. Such morphisms belong to the class of "P-morphisms" that was introduced by A. Hof, O. Knill, and B. Simon in 1995.
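The defining bound is simple to test: a finite word of length n has at most n + 1 distinct palindromic factors (the empty word included) and is rich exactly when the bound is attained; episturmian words such as the Tribonacci word attain it on every prefix. A brute-force sketch (ours):

```python
# Count distinct palindromic factors of a Tribonacci prefix and check richness:
# a word of length n is rich iff it has exactly n + 1 distinct palindromic
# factors, the empty word included. Brute-force sketch.

TRIB = {"a": "ab", "b": "ac", "c": "a"}

def tribonacci(length):
    w = "a"
    while len(w) < length:
        w = "".join(TRIB[c] for c in w)
    return w[:length]

def distinct_palindromes(w):
    pals = {""}
    for i in range(len(w)):
        for j in range(i + 1, len(w) + 1):
            f = w[i:j]
            if f == f[::-1]:
                pals.add(f)
    return pals

w = tribonacci(200)
assert len(distinct_palindromes(w)) == len(w) + 1
print("the length-200 Tribonacci prefix is rich")
```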
---
paper_title: On a characteristic property of ARNOUX–RAUZY sequences
paper_content:
Here we give a characterization of Arnoux-Rauzy sequences by way of the lexicographic orderings of their alphabet.
---
paper_title: Morse and Hedlund’s Skew Sturmian Words Revisited
paper_content:
For any infinite word r over A = {a, b} we associate two infinite words min(r), max(r) such that any prefix of min(r) (max(r), respectively) is the lexicographically smallest (greatest, respectively) among the factors of r of the same length. We prove that (min(r), max(r)) = (as, bs) for some infinite word s if and only if r is a proper Sturmian word or an ultimately periodic word of a particular form. This result is based on a lemma concerning sequences of infinite words.
---
paper_title: Characterizations of finite and infinite episturmian words via lexicographic orderings
paper_content:
In this talk, I will present some new results arising from collaborative work with Jacques Justin (France) and Giuseppe Pirillo (Italy). This work, which extends previous results on extremal properties of infinite Sturmian and episturmian words, is purely combinatorial in nature. Specifically, we characterize by lexicographic order all finite Sturmian and episturmian words, i.e., all (finite) factors of such infinite words. Consequently, we obtain a characterization of infinite episturmian words in a wide sense (episturmian and episkew infinite words). That is, we characterize the set of all infinite words whose factors are (finite) episturmian. Similarly, we characterize by lexicographic order all balanced infinite words over a 2-letter alphabet; in other words, all Sturmian and skew infinite words, the factors of which are (finite) Sturmian.
---
paper_title: A characterization of Sturmian morphisms
paper_content:
A morphism is called Sturmian if it preserves all Sturmian (infinite) words. It is weakly Sturmian if it preserves at least one Sturmian word. We prove that a morphism is Sturmian if and only if it keeps the word ba²ba²baba²bab balanced. As a consequence, weakly Sturmian morphisms are Sturmian. An application to infinite words associated to irrational numbers is given.
---
paper_title: EPISTURMIAN WORDS: SHIFTS, MORPHISMS AND NUMERATION SYSTEMS
paper_content:
Episturmian words, which include the Arnoux-Rauzy sequences, are infinite words on a finite alphabet generalizing the Sturmian words and sharing many of the same properties. This was studied in previous papers. Here we gain a deeper insight into these properties. This leads us, in particular, to consider numeration systems similar to the Ostrowski ones and to give a matrix formula for computing the number of representations of an integer in such a system. We also obtain a complete answer to the question: if an episturmian word is morphic, which shifts of it, if any, are also morphic?
---
paper_title: On substitution invariant Sturmian words: An application of Rauzy fractals
paper_content:
Sturmian words are infinite words that have exactly n + 1 factors of length n for every positive integer n. A Sturmian word s_{α,ρ} is also defined as a coding over a two-letter alphabet of the orbit of the point ρ under the action of the irrational rotation R_α : x ↦ x + α (mod 1). Yasutomi characterized in [34] all the pairs (α, ρ) such that the Sturmian word s_{α,ρ} is a fixed point of some non-trivial substitution. By investigating the Rauzy fractals associated with invertible substitutions, we give an alternative geometric proof of Yasutomi's characterization.
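The rotation coding can be written down directly as a mechanical word, s_n = ⌊(n+1)α + ρ⌋ − ⌊nα + ρ⌋ (one standard convention; the parameter choice below is ours): with α = (3 − √5)/2 and ρ = 0 this reproduces the Fibonacci word, which the sketch checks against the morphism-generated prefix.

```python
# Sturmian word as a rotation coding (mechanical word):
#   s_n = floor((n+1)*alpha + rho) - floor(n*alpha + rho).
# With alpha = (3 - sqrt(5))/2 = 1/phi^2 and rho = 0 (our parameter choice)
# this gives the Fibonacci word. Floating point is fine for short prefixes.

from math import floor, sqrt

def mechanical(alpha, rho, length):
    return "".join("ab"[floor((n + 1) * alpha + rho) - floor(n * alpha + rho)]
                   for n in range(1, length + 1))

def fibonacci_word(length):
    w = "a"
    while len(w) < length:
        w = "".join("ab" if c == "a" else "a" for c in w)
    return w[:length]

alpha = (3 - sqrt(5)) / 2
assert mechanical(alpha, 0.0, 200) == fibonacci_word(200)
print(mechanical(alpha, 0.0, 30))   # abaababaabaab...
```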
---
paper_title: A remark on morphic sturmian words
paper_content:
This Note deals with binary Sturmian words that are morphic, i.e. generated by iterating a morphism. Among these, characteristic words are a well-known subclass. We prove that for every characteristic morphic word x, the four words ax, bx, abx and bax are morphic
---
paper_title: A little more about morphic Sturmian words
paper_content:
Among Sturmian words, some are morphic, i.e. fixed points of a non-identical morphism on words. Berstel and Seebold (1993) have shown that if a characteristic Sturmian word is morphic, then it can be extended to the left with one or two letters in such a way that it remains morphic and Sturmian. Yasutomi (1997) has proved that these are the sole possible additions and that, if we cut off the first letters of such a word, it does not remain morphic. In this paper, we give an elementary and combinatorial proof of this result.
---
paper_title: EPISTURMIAN WORDS: SHIFTS, MORPHISMS AND NUMERATION SYSTEMS
paper_content:
Episturmian words, which include the Arnoux-Rauzy sequences, are infinite words on a finite alphabet generalizing the Sturmian words and sharing many of the same properties. This was studied in previous papers. Here we gain a deeper insight into these properties. This leads us, in particular, to consider numeration systems similar to the Ostrowski ones and to give a matrix formula for computing the number of representations of an integer in such a system. We also obtain a complete answer to the question: if an episturmian word is morphic, which shifts of it, if any, are also morphic?
---
paper_title: A generalization of Sturmian sequences; combinatorial structure and transcendence
paper_content:
We investigate a class of minimal sequences on a finite alphabet A_k = {1,2,...,k} having (k - 1)n + 1 distinct subwords of length n. These sequences, originally defined by P. Arnoux and G. Rauzy, are a natural generalization of binary Sturmian sequences. We describe two simple combinatorial algorithms for constructing characteristic Arnoux-Rauzy sequences (one of which is new even in the Sturmian case). Arnoux-Rauzy sequences arising from fixed points of primitive morphisms are characterized by an underlying periodic structure. We show that every Arnoux-Rauzy sequence contains arbitrarily large subwords of the form V^{2+ε} and, in the Sturmian case, arbitrarily large subwords of the form V^{3+ε}. Finally, we prove that an irrational number whose base-b digit expansion is an Arnoux-Rauzy sequence is transcendental.
---
paper_title: Pisot substitutions and Rauzy fractals
paper_content:
We prove that the dynamical system generated by a primitive unimodular substitution of the Pisot type on d letters, satisfying a combinatorial condition which is easy to check, is measurably isomorphic to a domain exchange in R^{d-1}, and is a finite extension of a translation on the torus T^{d-1}. In the course of the proof, we introduce some potentially useful notions: the linear maps associated to a substitution and their dual maps, and the σ-structure for a dynamical system with respect to a pair of partitions.
---
paper_title: COMBINATORIAL PROPERTIES OF ARNOUX–RAUZY SUBSHIFTS AND APPLICATIONS TO SCHRÖDINGER OPERATORS
paper_content:
We consider Arnoux–Rauzy subshifts X and study various combinatorial questions: When is X linearly recurrent? What is the maximal power occurring in X? What is the number of palindromes of a given length occurring in X? We present applications of our combinatorial results to the spectral theory of discrete one-dimensional Schrödinger operators with potentials given by Arnoux–Rauzy sequences.
---
paper_title: RECENT RESULTS ON EXTENSIONS OF STURMIAN WORDS
paper_content:
Sturmian words are infinite words over a two-letter alphabet that admit a great number of equivalent definitions. Most of them have been given in the past ten years. Among several extensions of Sturmian words to larger alphabets, the Arnoux–Rauzy words appear to share many of the properties of Sturmian words. In this survey, combinatorial properties of these two families are considered and compared.
---
paper_title: STRUCTURE OF THREE INTERVAL EXCHANGE TRANSFORMATIONS I : AN ARITHMETIC STUDY
paper_content:
In this article we describe a generalization to dimension 2 of Euclid's algorithm, which arises from the dynamics of exchanges of 3 intervals. We examine various Diophantine properties of this algorithm, in particular the quality of the simultaneous approximation. We show that it satisfies a Lagrange-type theorem: the algorithm is eventually periodic if and only if the parameters lie in the same quadratic extension of Q.
---
paper_title: Fine and Wilf's theorem for three periods and a generalization of Sturmian words
paper_content:
We extend the theorem of Fine and Wilf to words having three periods. We then define the set 3-PER of words of maximal length for which such a result does not apply. We prove that the set 3-PER and the sequences of complexity 2n + 1, introduced by Arnoux and Rauzy to generalize Sturmian words, have the same set of factors.
---
paper_title: Episturmian words and some constructions of de Luca and Rauzy
paper_content:
In this paper we study infinite episturmian words, which are a natural generalization of Sturmian words to an arbitrary alphabet. A characteristic property is that they are closed under reversal and have at most one right special factor of each length. They are first obtained by a construction due to de Luca which utilizes the palindrome closure. They can also be obtained by way of extended Rauzy rules.
---
paper_title: Characterizations of finite and infinite episturmian words via lexicographic orderings
paper_content:
In this talk, I will present some new results arising from collaborative work with Jacques Justin (France) and Giuseppe Pirillo (Italy). This work, which extends previous results on extremal properties of infinite Sturmian and episturmian words, is purely combinatorial in nature. Specifically, we characterize by lexicographic order all finite Sturmian and episturmian words, i.e., all (finite) factors of such infinite words. Consequently, we obtain a characterization of infinite episturmian words in a wide sense (episturmian and episkew infinite words). That is, we characterize the set of all infinite words whose factors are (finite) episturmian. Similarly, we characterize by lexicographic order all balanced infinite words over a 2-letter alphabet; in other words, all Sturmian and skew infinite words, the factors of which are (finite) Sturmian.
---
paper_title: Episturmian words and some constructions of de Luca and Rauzy
paper_content:
In this paper we study infinite episturmian words, which are a natural generalization of Sturmian words to an arbitrary alphabet. A characteristic property is that they are closed under reversal and have at most one right special factor of each length. They are first obtained by a construction due to de Luca which utilizes the palindrome closure. They can also be obtained by way of extended Rauzy rules.
---
paper_title: Episturmian words and episturmian morphisms
paper_content:
Infinite episturmian words are a generalization of Sturmian words which includes the Arnoux-Rauzy sequences. We continue their study and that of episturmian morphisms, begun previously, in relation with the action of the shift operator. Palindromic and periodic factors of these words are described. We consider, in particular, the case where these words are generated by morphisms and introduce then a notion of intercept generalizing that of Sturmian words. Finally, we prove that the frequencies of the factors in a strong sense do exist for all episturmian words.
---
paper_title: Palindromes and Sturmian words
paper_content:
An infinite word x over the alphabet A is Sturmian if and only if g_x(n) = n + 1 for any integer n, where g_x(n) is the number of distinct words of length n occurring in x. A palindrome is a word that can be read indistinctly from left to right or from right to left. We prove that x is Sturmian if and only if h_x(n) = 1 + (n mod 2) for any integer n, where h_x(n) is the number of palindromes of length n occurring in x. An infinite word x over the alphabet A is generated by a morphism f if there exists a letter c ∈ A such that lim_{n→∞} f^n(c) = x. We prove the existence of a morphism that generates the palindromes of any infinite Sturmian word generated by a morphism.
---
paper_title: Palindromes and Sturmian words
paper_content:
An infinite word x over the alphabet A is Sturmian if and only if g_x(n) = n + 1 for any integer n, where g_x(n) is the number of distinct words of length n occurring in x. A palindrome is a word that can be read indistinctly from left to right or from right to left. We prove that x is Sturmian if and only if h_x(n) = 1 + (n mod 2) for any integer n, where h_x(n) is the number of palindromes of length n occurring in x. An infinite word x over the alphabet A is generated by a morphism f if there exists a letter c ∈ A such that lim_{n→∞} f^n(c) = x. We prove the existence of a morphism that generates the palindromes of any infinite Sturmian word generated by a morphism.
---
paper_title: Frequencies of factors in Arnoux-Rauzy sequences
paper_content:
V. Berthe showed that the frequencies of factors in a Sturmian word of slope α, as well as the number of factors with a given frequency, can be expressed in terms of the continued fraction expansion of α. In this paper we describe a multi-dimensional continued fraction process associated with a class of sequences of (block) complexity kn+1 originally introduced by P. Arnoux and G. Rauzy. This vectorial division algorithm yields simultaneous rational approximations of the frequencies of the letters. We extend Berthe’s result to factors of Arnoux-Rauzy sequences by expressing both the frequencies and the number of factors with a given frequency, in terms of the ‘convergents’ obtained from the generalized continued fraction expansion of the frequencies of the letters.
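Letter frequencies themselves are easy to cross-check against the incidence matrix of the Tribonacci morphism: its normalized Perron eigenvector (obtained below by plain power iteration) should match the empirical counts in a long prefix. This is only an illustrative sketch, not the paper's continued-fraction algorithm.

```python
# Compare empirical letter frequencies in a Tribonacci prefix with the
# normalized Perron eigenvector of the incidence matrix of 1->12, 2->13, 3->1.
# Illustrative sketch using simple power iteration.

SIGMA = {"1": "12", "2": "13", "3": "1"}
LETTERS = "123"

def tribonacci(length):
    w = "1"
    while len(w) < length:
        w = "".join(SIGMA[c] for c in w)
    return w[:length]

# M[i][j] = number of occurrences of letter i in sigma(letter j).
M = [[SIGMA[b].count(a) for b in LETTERS] for a in LETTERS]

v = [1.0, 1.0, 1.0]
for _ in range(60):                                  # power iteration
    v = [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]
    total = sum(v)
    v = [x / total for x in v]                       # normalize to frequencies

w = tribonacci(100000)
for letter, freq in zip(LETTERS, v):
    print(letter, round(freq, 5), round(w.count(letter) / len(w), 5))
```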
---
paper_title: Episturmian morphisms and a Galois theorem on continued fractions
paper_content:
We associate with a word w on a finite alphabet A an episturmian (or Arnoux-Rauzy) morphism and a palindrome. We study their relations with the similar ones for the reversal of w . Then when |A|=2 we deduce, using the Sturmian words that are the fixed points of the two morphisms, a proof of a Galois theorem on purely periodic continued fractions whose periods are the reversal of each other.
---
paper_title: A generalization of Sturmian sequences; combinatorial structure and transcendence
paper_content:
We investigate a class of minimal sequences on a finite alphabet A_k = {1,2,...,k} having (k - 1)n + 1 distinct subwords of length n. These sequences, originally defined by P. Arnoux and G. Rauzy, are a natural generalization of binary Sturmian sequences. We describe two simple combinatorial algorithms for constructing characteristic Arnoux-Rauzy sequences (one of which is new even in the Sturmian case). Arnoux-Rauzy sequences arising from fixed points of primitive morphisms are characterized by an underlying periodic structure. We show that every Arnoux-Rauzy sequence contains arbitrarily large subwords of the form V^{2+ε} and, in the Sturmian case, arbitrarily large subwords of the form V^{3+ε}. Finally, we prove that an irrational number whose base-b digit expansion is an Arnoux-Rauzy sequence is transcendental.
---
paper_title: Episturmian words and episturmian morphisms
paper_content:
Infinite episturmian words are a generalization of Sturmian words which includes the Arnoux-Rauzy sequences. We continue their study and that of episturmian morphisms, begun previously, in relation with the action of the shift operator. Palindromic and periodic factors of these words are described. We consider, in particular, the case where these words are generated by morphisms and introduce then a notion of intercept generalizing that of Sturmian words. Finally, we prove that the frequencies of the factors in a strong sense do exist for all episturmian words.
---
paper_title: EPISTURMIAN WORDS: SHIFTS, MORPHISMS AND NUMERATION SYSTEMS
paper_content:
Episturmian words, which include the Arnoux-Rauzy sequences, are infinite words on a finite alphabet generalizing the Sturmian words and sharing many of the same properties. This was studied in previous papers. Here we gain a deeper insight into these properties. This leads us, in particular, to consider numeration systems similar to the Ostrowski ones and to give a matrix formula for computing the number of representations of an integer in such a system. We also obtain a complete answer to the question: if an episturmian word is morphic, which shifts of it, if any, are also morphic?
---
paper_title: Palindromic Prefixes and Episturmian Words
paper_content:
Let $w$ be an infinite word on an alphabet $A$. We denote by $(n_i)_{i \geq 1}$ the increasing sequence (assumed to be infinite) of all lengths of palindrome prefixes of $w$. In this text, we give an explicit construction of all words $w$ such that $n_{i+1} \leq 2 n_i + 1$ for any $i$, and study these words. Special examples include characteristic Sturmian words, and more generally standard episturmian words. As an application, we study the values taken by the quantity $\limsup n_{i+1}/n_i$, and prove that it is minimal (among all non-periodic words) for the Fibonacci word.
---
paper_title: Palindromic Richness
paper_content:
In this paper, we study combinatorial and structural properties of a new class of finite and infinite words that are 'rich' in palindromes in the utmost sense. A characteristic property of so-called "rich words" is that all complete returns to any palindromic factor are themselves palindromes. These words encompass the well-known episturmian words, originally introduced by the second author together with X. Droubay and G. Pirillo in 2001. Other examples of rich words have appeared in many different contexts. Here we present the first unified approach to the study of this intriguing family of words. Amongst our main results, we give an explicit description of the periodic rich infinite words and show that the recurrent balanced rich infinite words coincide with the balanced episturmian words. We also consider two wider classes of infinite words, namely "weakly rich words" and almost rich words (both strictly contain all rich words, but neither one is contained in the other). In particular, we classify all recurrent balanced weakly rich words. As a consequence, we show that any such word on at least three letters is necessarily episturmian; hence weakly rich words obey Fraenkel's conjecture. Likewise, we prove that a certain class of almost rich words obeys Fraenkel's conjecture by showing that the recurrent balanced ones are episturmian or contain at least two distinct letters with the same frequency. Lastly, we study the action of morphisms on (almost) rich words with particular interest in morphisms that preserve (almost) richness. Such morphisms belong to the class of "P-morphisms" that was introduced by A. Hof, O. Knill, and B. Simon in 1995.
---
paper_title: Episturmian words and some constructions of de Luca and Rauzy
paper_content:
In this paper we study infinite episturmian words, which are a natural generalization of Sturmian words to an arbitrary alphabet. A characteristic property is that they are closed under reversal and have at most one right special factor of each length. They are first obtained by a construction due to de Luca which utilizes the palindrome closure. They can also be obtained by way of extended Rauzy rules.
---
paper_title: Palindromic complexity of infinite words associated with simple Parry numbers
paper_content:
We study the palindromic complexity of infinite words $u_\beta$, the fixed points of the substitution over a binary alphabet, $\phi(0)=0^a1$, $\phi(1)=0^b1$, with $a-1\geq b\geq 1$, which are canonically associated with quadratic non-simple Parry numbers $\beta$.
---
paper_title: Transcendence measures for continued fractions involving repetitive or symmetric patterns
paper_content:
There is a long tradition in constructing explicit classes of transcendental continued fractions and especially transcendental continued fractions with bounded partial quotients. By means of the Schmidt Subspace Theorem, existing results were recently substantially improved by the authors in a series of papers, providing new classes of transcendental continued fractions. It is the purpose of the present work to show how the Quantitative Subspace Theorem yields transcendence measures for (most of) these numbers.
---
paper_title: A connection between palindromic and factor complexity using return words
paper_content:
In this paper we prove that for any infinite word W whose set of factors is closed under reversal, the following conditions are equivalent: (I) all complete returns to palindromes are palindromes; (II) P(n) + P(n+1) = C(n+1) - C(n) + 2 for all n, where P (resp. C) denotes the palindromic complexity (resp. factor complexity) function of W, which counts the number of distinct palindromic factors (resp. factors) of each length in W.
---
paper_title: Powers in a class of A-strict standard episturmian words
paper_content:
This paper concerns a specific class of strict standard episturmian words whose directive words resemble those of characteristic Sturmian words. In particular, we explicitly determine all integer powers occurring in such infinite words, extending recent results of Damanik and Lenz [D. Damanik, D. Lenz, Powers in Sturmian sequences, European J. Combin. 24 (2003) 377-390, doi:10.1016/S0195-6698(03)00026-X], who studied powers in Sturmian words. The key tools in our analysis are canonical decompositions and a generalization of singular words, which were originally defined for the ubiquitous Fibonacci word. Our main results are demonstrated via some examples, including the k-bonacci word, a generalization of the Fibonacci word to a k-letter alphabet (k>=2).
---
paper_title: A generalization of Sturmian sequences; combinatorial structure and transcendence
paper_content:
We investigate a class of minimal sequences on a finite alphabet A_k = {1,2,...,k} having (k - 1)n + 1 distinct subwords of length n. These sequences, originally defined by P. Arnoux and G. Rauzy, are a natural generalization of binary Sturmian sequences. We describe two simple combinatorial algorithms for constructing characteristic Arnoux-Rauzy sequences (one of which is new even in the Sturmian case). Arnoux-Rauzy sequences arising from fixed points of primitive morphisms are characterized by an underlying periodic structure. We show that every Arnoux-Rauzy sequence contains arbitrarily large subwords of the form V^{2+ε} and, in the Sturmian case, arbitrarily large subwords of the form V^{3+ε}. Finally, we prove that an irrational number whose base-b digit expansion is an Arnoux-Rauzy sequence is transcendental.
---
paper_title: Initial powers of Sturmian sequences
paper_content:
In this paper we investigate powers of prefixes of Sturmian sequences. We give an explicit formula for ice(ω), the initial critical exponent of a Sturmian sequence ω, defined as the supremum of all real numbers p > 0 for which there exist arbitrarily long prefixes of ω of the form u^p, in terms of its S-adic representation. This formula is based on Ostrowski's numeration system. Furthermore, we characterize those irrational slopes α for which there exists a Sturmian sequence ω beginning in only finitely many (2 + ε)-powers, that is, for which ice(ω) = 2. In the process we recover the known results for the index (or critical exponent) of a Sturmian sequence. We also focus on the Fibonacci Sturmian shift and prove that the set of Sturmian sequences with ice strictly smaller than its everywhere value has Hausdorff dimension 1.
---
paper_title: Sturmian words and words with a critical exponent
paper_content:
Let S be a standard Sturmian word that is a fixed point of a non-trivial homomorphism. Associated to the infinite word S is a unique irrational number β with 0 < β < 1. We show that, for every ε > 0, S contains a fractional power with exponent greater than Ω − ε; here Ω is a constant that depends on β. The constant Ω is given explicitly. Using these results we are able to give a short proof of Mignosi's theorem and give an exact evaluation of the maximal power that can occur in a standard Sturmian word.
---
paper_title: Episturmian words and episturmian morphisms
paper_content:
Infinite episturmian words are a generalization of Sturmian words which includes the Arnoux-Rauzy sequences. We continue their study and that of episturmian morphisms, begun previously, in relation with the action of the shift operator. Palindromic and periodic factors of these words are described. We consider, in particular, the case where these words are generated by morphisms and introduce then a notion of intercept generalizing that of Sturmian words. Finally, we prove that the frequencies of the factors in a strong sense do exist for all episturmian words.
---
paper_title: Transcendence of Numbers with a Low Complexity Expansion
paper_content:
A sequence is Sturmian if it has complexity n + 1, that is, n + 1 factors of length n for every n; we show that real numbers whose expansion in some base k ≥ 2 is Sturmian are transcendental, and give explicit expressions for these numbers. We then generalize the transcendence property to other sequences of low complexity, particularly the Arnoux–Rauzy sequences.
---
paper_title: Some properties of the Tribonacci sequence
paper_content:
In this paper, we consider the factor properties of the Tribonacci sequence. We define the singular words, and then give the singular factorization and the Lyndon factorization. As applications, we study the powers of the factors and the overlap of the factors. We also calculate the free index of the sequence.
---
paper_title: Fractional powers in Sturmian words
paper_content:
Given an infinite Sturmian word s, we calculate the function L(m) which gives the length of the longest factor of s having period m. The expression of L(m) makes use of the continued fraction of the irrational α associated with s.
---
paper_title: Transcendence of Sturmian or morphic continued fractions
paper_content:
We prove, using a theorem of W. Schmidt, that if the sequence of partial quotients of the continued fraction expansion of a positive irrational real number takes only two values, and begins with arbitrarily long blocks which are "almost squares," then this number is either quadratic or transcendental. This result applies in particular to real numbers whose partial quotients form a Sturmian (or quasi-Sturmian) sequence, or are given by the sequence (1 + (⌊nα⌋ mod 2))_{n≥0}, or are a "repetitive" fixed point of a binary morphism satisfying some technical conditions.
---
paper_title: On the Index of Sturmian Words
paper_content:
An infinite word x has finite index if the exponents of the powers of primitive words that are factors of x are bounded. F. Mignosi has proved that a Sturmian word has finite index if and only if the coefficients of the continued fraction development of its slope are bounded. Mignosi’s proof relies on a delicate analysis of the approximation of the slope by rational numbers. We give here a proof based on combinatorial properties of words, and give some additional relations between the exponents and the slope.
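These index results can be explored by brute force: the sketch below (ours, not the paper's method) finds the largest integer power u^k occurring in a Fibonacci prefix over small periods. Since the critical exponent of the Fibonacci word is 2 + φ ≈ 3.618, cubes such as (aba)³ occur but fourth powers never do, so the expected answer is 3.

```python
# Largest integer power u^k occurring in a Fibonacci-word prefix, found by
# brute force over all starting positions and small periods. Sketch only.

def fibonacci_word(length):
    w = "a"
    while len(w) < length:
        w = "".join("ab" if c == "a" else "a" for c in w)
    return w[:length]

def max_integer_power(w, max_period):
    """Largest k with u^k a factor of w for some u of length 1..max_period."""
    best, witness = 1, w[0]
    for p in range(1, max_period + 1):
        for i in range(len(w) - p):
            run = 0
            while i + run + p < len(w) and w[i + run] == w[i + run + p]:
                run += 1
            k = run // p + 1              # w[i:i+run+p] has period p
            if k > best:
                best, witness = k, w[i:i + p]
    return best, witness

k, u = max_integer_power(fibonacci_word(1500), 30)
print(k, u)    # expect 3 with a period-3 witness such as "aba"
```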
---
paper_title: Frequencies of factors in Arnoux-Rauzy sequences
paper_content:
V. Berthe showed that the frequencies of factors in a Sturmian word of slope α, as well as the number of factors with a given frequency, can be expressed in terms of the continued fraction expansion of α. In this paper we describe a multi-dimensional continued fraction process associated with a class of sequences of (block) complexity kn+1 originally introduced by P. Arnoux and G. Rauzy. This vectorial division algorithm yields simultaneous rational approximations of the frequencies of the letters. We extend Berthe’s result to factors of Arnoux-Rauzy sequences by expressing both the frequencies and the number of factors with a given frequency, in terms of the ‘convergents’ obtained from the generalized continued fraction expansion of the frequencies of the letters.
---
paper_title: A generalization of Sturmian sequences; combinatorial structure and transcendence
paper_content:
We investigate a class of minimal sequences on a finite alphabet A_k = {1,2,...,k} having (k - 1)n + 1 distinct subwords of length n. These sequences, originally defined by P. Arnoux and G. Rauzy, are a natural generalization of binary Sturmian sequences. We describe two simple combinatorial algorithms for constructing characteristic Arnoux-Rauzy sequences (one of which is new even in the Sturmian case). Arnoux-Rauzy sequences arising from fixed points of primitive morphisms are characterized by an underlying periodic structure. We show that every Arnoux-Rauzy sequence contains arbitrarily large subwords of the form V^{2+ε} and, in the Sturmian case, arbitrarily large subwords of the form V^{3+ε}. Finally, we prove that an irrational number whose base-b digit expansion is an Arnoux-Rauzy sequence is transcendental.
---
paper_title: Episturmian words and episturmian morphisms
paper_content:
Infinite episturmian words are a generalization of Sturmian words which includes the Arnoux-Rauzy sequences. We continue their study and that of episturmian morphisms, begun previously, in relation with the action of the shift operator. Palindromic and periodic factors of these words are described. We consider, in particular, the case where these words are generated by morphisms and introduce then a notion of intercept generalizing that of Sturmian words. Finally, we prove that the frequencies of the factors in a strong sense do exist for all episturmian words.
---
paper_title: Descendants of Primitive Substitutions
paper_content:
Let s = (A, τ) be a primitive substitution. To each decomposition of the form τ(h) = uhv we associate a primitive substitution D_{(h,u)}(s) defined on the set of return words to h. The substitution D_{(h,u)}(s) is called a descendant of s and its associated dynamical system is the induced system (X_h, T_h) on the cylinder determined by h. We show that D(s), the set of all descendants of s, is finite for each primitive substitution s. We consider this to be a symbolic counterpart to a theorem of Boshernitzan and Carroll which states that an interval exchange transformation defined over a quadratic field has only finitely many descendants. If s fixes a nonperiodic sequence, then D(s) contains a recognizable substitution. Under certain conditions the set Ω(s) = ⋂_{s′ ∈ D(s)} D(s′) is nonempty.
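Return words are also easy to compute directly; the sketch below (ours) lists the return words to a few factors h of a Fibonacci prefix, where the return word for two successive occurrences of h at positions i < j is w[i:j]. By a theorem of Vuillon, a Sturmian word has exactly two return words for every factor.

```python
# Return words to a factor h of the Fibonacci word: for successive occurrences
# of h at positions i < j, the return word is w[i:j]. Brute-force sketch.

def fibonacci_word(length):
    w = "a"
    while len(w) < length:
        w = "".join("ab" if c == "a" else "a" for c in w)
    return w[:length]

def return_words(w, h):
    occ = [i for i in range(len(w) - len(h) + 1) if w.startswith(h, i)]
    return {w[i:j] for i, j in zip(occ, occ[1:])}

w = fibonacci_word(2000)
for h in ("a", "ab", "aba", "abaab"):
    r = return_words(w, h)
    print(h, "->", sorted(r))
    assert len(r) == 2      # Sturmian: exactly two return words per factor
```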
---
paper_title: A characterization of Sturmian morphisms
paper_content:
A morphism is called Sturmian if it preserves all Sturmian (infinite) words. It is weakly Sturmian if it preserves at least one Sturmian word. We prove that a morphism is Sturmian if and only if it keeps the word ba²ba²baba²bab balanced. As a consequence, weakly Sturmian morphisms are Sturmian. An application to infinite words associated to irrational numbers is given.
---
paper_title: Substitution dynamical systems : Algebraic characterization of eigenvalues
paper_content:
We give a necessary and sufficient condition allowing to compute explicitly the eigenvalues of the dynamical system associated to any
---
paper_title: A remark on morphic sturmian words
paper_content:
This Note deals with binary Sturmian words that are morphic, i.e. generated by iterating a morphism. Among these, characteristic words are a well-known subclass. We prove that for every characteristic morphic word x, the four words ax, bx, abx and bax are morphic
---
paper_title: Morse and Hedlund’s Skew Sturmian Words Revisited
paper_content:
For any infinite word r over A = {a, b} we associate two infinite words min(r), max(r) such that any prefix of min(r) (max(r), respectively) is the lexicographically smallest (greatest, respectively) among the factors of r of the same length. We prove that (min(r); max(r)) = (as, bs) for some infinite word s if and only if r is a proper Sturmian word or an ultimately periodic word of a particular form. This result is based on a lemma concerning sequences of infinite words.
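To make the (min(r), max(r)) construction concrete, the sketch below computes, on a finite prefix of r, the lexicographically smallest and greatest factors of each length; it is only an illustration on a prefix (the theorem itself concerns infinite words), and the prefix length is an arbitrary choice.

def extremal_factors(prefix, n):
    """Lexicographically smallest and greatest length-n factors of the given prefix."""
    factors = {prefix[i:i + n] for i in range(len(prefix) - n + 1)}
    return min(factors), max(factors)

# Build a prefix of the Fibonacci word (a proper Sturmian word) for the example.
prev, cur = "a", "ab"
for _ in range(10):
    prev, cur = cur, cur + prev

for n in range(1, 6):
    lo, hi = extremal_factors(cur, n)
    print(n, lo, hi)   # min factors start with 'a', max factors with 'b', i.e. (as, bs)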
---
paper_title: On complementary triples of Sturmian bisequences
paper_content:
Abstract A Sturmian bisequence S is a subset of ℤ such that the numbers of elements of S in any two intervals of equal lengths differ by at most 1. A complementary triple of bisequences is a set of three bisequences such that every integer belongs to exactly one bisequence. In the paper a question of Loeve is answered by giving a characterization of all complementary triples of Sturmian bisequences.
---
paper_title: STRUCTURE OF THREE INTERVAL EXCHANGE TRANSFORMATIONS I : AN ARITHMETIC STUDY
paper_content:
In this article we describe a two-dimensional generalization of Euclid's algorithm, which arises from the dynamics of three-interval exchange transformations. We examine various Diophantine properties of this algorithm, in particular the quality of the simultaneous approximation. We show that it satisfies a Lagrange-type theorem: the algorithm is eventually periodic if and only if the parameters lie in the same quadratic extension of Q.
---
paper_title: Characterizations of finite and infinite episturmian words via lexicographic orderings
paper_content:
In this talk, I will present some new results arising from collaborative work with Jacques Justin (France) and Giuseppe Pirillo (Italy). This work, which extends previous results on extremal properties of infinite Sturmian and episturmian words, is purely combinatorial in nature. Specifically, we characterize by lexicographic order all finite Sturmian and episturmian words, i.e., all (finite) factors of such infinite words. Consequently, we obtain a characterization of infinite episturmian words in a wide sense (episturmian and episkew infinite words). That is, we characterize the set of all infinite words whose factors are (finite) episturmian. Similarly, we characterize by lexicographic order all balanced infinite words over a 2-letter alphabet; in other words, all Sturmian and skew infinite words, the factors of which are (finite) Sturmian.
---
paper_title: Characterizations of finite and infinite episturmian words via lexicographic orderings
paper_content:
In this talk, I will present some new results arising from collaborative work with Jacques Justin (France) and Giuseppe Pirillo (Italy). This work, which extends previous results on extremal properties of infinite Sturmian and episturmian words, is purely combinatorial in nature. Specifically, we characterize by lexicographic order all finite Sturmian and episturmian words, i.e., all (finite) factors of such infinite words. Consequently, we obtain a characterization of infinite episturmian words in a wide sense (episturmian and episkew infinite words). That is, we characterize the set of all infinite words whose factors are (finite) episturmian. Similarly, we characterize by lexicographic order all balanced infinite words over a 2-letter alphabet; in other words, all Sturmian and skew infinite words, the factors of which are (finite) Sturmian.
---
paper_title: Quasiperiodic and Lyndon episturmian words
paper_content:
Recently the second two authors characterized quasiperiodic Sturmian words, proving that a Sturmian word is non-quasiperiodic if and only if, it is an infinite Lyndon word. Here we extend this study to episturmian words (a natural generalization of Sturmian words) by describing all the quasiperiods of an episturmian word, which yields a characterization of quasiperiodic episturmian words in terms of their directive words. Even further, we establish a complete characterization of all episturmian words that are Lyndon words. Our main results show that, unlike the Sturmian case, there is a much wider class of episturmian words that are non-quasiperiodic, besides those that are infinite Lyndon words. Our key tools are morphisms and directive words, in particular normalized directive words, which we introduced in an earlier paper. Also of importance is the use of return words to characterize quasiperiodic episturmian words, since such a method could be useful in other contexts.
---
paper_title: Extremal properties of (epi)Sturmian sequences and distribution modulo 1
paper_content:
Starting from a study of Y. Bugeaud and A. Dubickas (2005) on a question in distribution of real numbers modulo 1 via combinatorics on words, we survey some combinatorial properties of (epi)Sturmian sequences and distribution modulo 1 in connection to their work. In particular we focus on extremal properties of (epi)Sturmian sequences, some of which have been rediscovered several times.
---
paper_title: Symbolic dynamics and rotation numbers
paper_content:
In the space of binary sequences, minimal sets, that is: sets invariant under the shift operation, that have no invariant proper subsets, are investigated. In applications, such as a piecewise linear circle map and the Smale horseshoe in a mapping of the annulus, each of these sets is invariant under the mapping. These sets can be assigned a unique rotation number equal to the average of the number of ones in the sequences.
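A standard way to realize such minimal sets is via mechanical (rotation) sequences; the short sketch below generates one and recovers its rotation number as the average number of ones. It is purely an illustration of the notion, with arbitrarily chosen parameters, not code from the paper.

from math import floor, sqrt

def rotation_sequence(alpha, rho, n):
    """Lower mechanical sequence s_k = floor((k+1)*alpha + rho) - floor(k*alpha + rho)."""
    return [floor((k + 1) * alpha + rho) - floor(k * alpha + rho) for k in range(n)]

alpha = (sqrt(5) - 1) / 2                 # irrational slope
s = rotation_sequence(alpha, 0.0, 10000)
print(sum(s) / len(s), alpha)             # the density of ones approximates alpha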
---
paper_title: Symbolic dynamics of order-preserving orbits
paper_content:
Abstract The maps we consider are roughly those that can be obtained by truncating non-invertible maps to weakly monotonic maps (they have a flat piece). The binary sequences that correspond to order-preserving orbits are shown to satisfy a minimax principle (which was already known for order-preserving orbits with rational rotation number). The converse is also proven: all minimax orbits are order-preserving with respect to some rotation number. For certain families of such circle maps one can solve exactly for the parameter values for which the map has a specified rotation number rho. For rho rational we obtain the endpoints of the resonance intervals recursively. These parameter values can be organized in a natural way as the nodes of a Farey tree. We give some applications of the ideas discussed.
---
paper_title: Conjugacy of morphisms and Lyndon decomposition of standard Sturmian words
paper_content:
Using the notions of conjugacy of morphisms and of morphisms preserving Lyndon words, we answer a question of G. Melancon. We characterize cases where the sequence of Lyndon words in the Lyndon factorization of a standard Sturmian word is morphic. In each possible case, the corresponding morphism is given.
---
paper_title: Inequalities characterizing standard Sturmian and episturmian words
paper_content:
Considering the smallest and the greatest factors with respect to the lexicographic order we associate to each infinite word r two other infinite words min(r) and max(r). In this paper we prove that the inequalities as ≤ min(s) ≤ max(s) ≤ bs characterize standard Sturmian words (proper ones and periodic ones) and that the condition "for any x ∈ A and lexicographic order < satisfying x = min(A) one has xs ≤ min(s)" characterizes standard episturmian words.
---
paper_title: Quasiperiodic Sturmian words and morphisms
paper_content:
We characterize all quasiperiodic Sturmian words: a Sturmian word is not quasiperiodic if and only if it is a Lyndon word. Moreover, we study links between Sturmian morphisms and quasiperiodicity.
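Since the characterization is stated in terms of (infinite) Lyndon words, the textbook test below for finite Lyndon words may help fix the definition; it is generic and independent of the paper.

def is_lyndon(w):
    """A nonempty word is a Lyndon word iff it is strictly smaller than all of its proper suffixes."""
    return len(w) > 0 and all(w < w[i:] for i in range(1, len(w)))

print(is_lyndon("aab"))   # True
print(is_lyndon("aba"))   # False: the proper suffix "a" is lexicographically smaller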
---
paper_title: On a characteristic property of ARNOUX–RAUZY sequences
paper_content:
Here we give a characterization of Arnoux-Rauzy sequences by way of the lexicographic orderings of their alphabet.
---
paper_title: Characterizations of finite and infinite episturmian words via lexicographic orderings
paper_content:
In this talk, I will present some new results arising from collaborative work with Jacques Justin (France) and Giuseppe Pirillo (Italy). This work, which extends previous results on extremal properties of infinite Sturmian and episturmian words, is purely combinatorial in nature. Specifically, we characterize by lexicographic order all finite Sturmian and episturmian words, i.e., all (finite) factors of such infinite words. Consequently, we obtain a characterization of infinite episturmian words in a wide sense (episturmian and episkew infinite words). That is, we characterize the set of all infinite words whose factors are (finite) episturmian. Similarly, we characterize by lexicographic order all balanced infinite words over a 2-letter alphabet; in other words, all Sturmian and skew infinite words, the factors of which are (finite) Sturmian.
---
paper_title: A local balance property of episturmian words
paper_content:
We prove that episturmian words and Arnoux-Rauzy sequences can be characterized using a local balance property. We also give a new characterization of epistandard words.
---
paper_title: A generalization of Sturmian sequences; combinatorial structure and transcendence
paper_content:
We investigate a class of minimal sequences on a finite alphabet Ak = {1,2,...,k} having (k - 1)n + 1 distinct subwords of length n. These sequences, originally defined by P. Arnoux and G. Rauzy, are a natural generalization of binary Sturmian sequences. We describe two simple combinatorial algorithms for constructing characteristic Arnoux-Rauzy sequences (one of which is new even in the Sturmian case). Arnoux-Rauzy sequences arising from fixed points of primitive morphisms are characterized by an underlying periodic structure. We show that every Arnoux-Rauzy sequence contains arbitrarily large subwords of the form V^2+e and, in the Sturmian case, arbitrarily large subwords of the form V^3+e. Finally, we prove that an irrational number whose base b-digit expansion is an Arnoux-Rauzy sequence is transcendental.
---
paper_title: Complementing and exactly covering sequences
paper_content:
Abstract The following is proved (in a slightly more general setting): Let α1, …, αm be positive real, γ1, …, γm real, and suppose that the system [nαi + γi], i = 1, …, m, n = 1, 2, …, contains every positive integer exactly once (= a complementing system). Then αi/αj is an integer for some i ≠ j in each of the following cases: (i) m = 3 and m = 4; (ii) m = 5 if all αi but one are integers; (iii) m ⩾ 5, two of the αi are integers, at least one of them prime; (iv) m ⩾ 5 and αn ⩽ 2n for n = 1, 2, …, m − 4. For proving (iv), a method of reduction is developed which, given a complementing system of m sequences, leads under certain conditions to a derived complementing system of m − 1 sequences.
---
paper_title: Fraenkel's conjecture for six sequences
paper_content:
Abstract A striking conjecture of Fraenkel asserts that every decomposition of Z>0 into m ⩾ 3 sets {⌊αin + βi⌋}, n ∈ Z>0, with αi and βi real, αi > 1 and the αi's distinct for i = 1,…,m, satisfies {α1,…,αm} = {(2^m − 1)/2^k : 0 ⩽ k ⩽ m − 1}. Fraenkel's conjecture was proved by Morikawa if m = 3 and, under some condition, if m = 4. Proofs in terms of balanced sequences have been given for m = 3 by the author and for m = 4 by Altman, Gaujal and Hordijk. In the present paper we use the latter approach to establish Fraenkel's conjecture for m = 5 and for m = 6.
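For intuition, the sketch below checks whether a family of Beatty sequences ⌊α_i n + β_i⌋ hits an initial segment of the positive integers exactly once; it is illustrative only (a finite-range check, not a proof device), and the sanity example uses the classical golden-ratio pair from Beatty's theorem rather than the m ⩾ 3 rational moduli that Fraenkel's conjecture concerns.

from math import floor, sqrt

def exact_cover(params, limit):
    """params: list of (alpha, beta) pairs. True iff the sequences
    floor(alpha*n + beta), n = 1, 2, ..., hit 1..limit exactly once each."""
    hits = {k: 0 for k in range(1, limit + 1)}
    for alpha, beta in params:
        n = 1
        while True:
            v = floor(alpha * n + beta)
            if v > limit:
                break
            if v >= 1:
                hits[v] += 1
            n += 1
    return all(c == 1 for c in hits.values())

phi = (1 + sqrt(5)) / 2
print(exact_cover([(phi, 0.0), (phi * phi, 0.0)], 1000))   # True (Beatty's theorem)
print(exact_cover([(2.0, 0.0), (3.0, 0.0)], 1000))         # False: 6 is hit twice, 1 never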
---
paper_title: A Characterization of Balanced Episturmian Sequences
paper_content:
It is well known that Sturmian sequences are the aperiodic sequences that are balanced over a 2-letter alphabet. They are also characterized by their complexity: they have exactly (n+1) factors of length n. One possible generalization of Sturmian sequences is the set of infinite sequences over a k-letter alphabet, k ≥ 3, which are closed under reversal and have at most one right special factor for each length. This is the set of episturmian sequences. These are not necessarily balanced over a k-letter alphabet, nor are they necessarily aperiodic. In this paper, we characterize balanced episturmian sequences, periodic or not, and prove Fraenkel's conjecture for the class of episturmian sequences. This conjecture was first introduced in number theory and has remained unsolved for more than 30 years. It states that for a fixed k > 2, there is only one way to cover Z by k Beatty sequences. The problem can be translated to combinatorics on words: for a k-letter alphabet, there exists only one balanced sequence up to letter permutation that has different letter frequencies.
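Balance over a k-letter alphabet means that, for every letter and any two factors of the same length, the letter counts differ by at most one; the brute-force check below illustrates the condition on a finite prefix (illustrative only, and cubic in the prefix length).

def is_balanced(w):
    """Check the balance condition over all pairs of equal-length factors of w."""
    for n in range(1, len(w) + 1):
        factors = [w[i:i + n] for i in range(len(w) - n + 1)]
        for a in set(w):
            counts = [f.count(a) for f in factors]
            if max(counts) - min(counts) > 1:
                return False
    return True

print(is_balanced("abaababaabaab"))   # True: prefixes of a Sturmian word are balanced
print(is_balanced("aabb"))            # False: the factors "aa" and "bb" differ by 2 in 'a'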
---
paper_title: Quasiperiodic and Lyndon episturmian words
paper_content:
Recently the second two authors characterized quasiperiodic Sturmian words, proving that a Sturmian word is non-quasiperiodic if and only if, it is an infinite Lyndon word. Here we extend this study to episturmian words (a natural generalization of Sturmian words) by describing all the quasiperiods of an episturmian word, which yields a characterization of quasiperiodic episturmian words in terms of their directive words. Even further, we establish a complete characterization of all episturmian words that are Lyndon words. Our main results show that, unlike the Sturmian case, there is a much wider class of episturmian words that are non-quasiperiodic, besides those that are infinite Lyndon words. Our key tools are morphisms and directive words, in particular normalized directive words, which we introduced in an earlier paper. Also of importance is the use of return words to characterize quasiperiodic episturmian words, since such a method could be useful in other contexts.
---
paper_title: On stabilizers of infinite words
paper_content:
The stabilizer of an infinite word w over a finite alphabet Σ is the monoid of morphisms over Σ that fix w. In this paper we study various problems related to stabilizers and their generators. We show that over a binary alphabet, there exist stabilizers with at least n generators for all n. Over a ternary alphabet, the monoid of morphisms generating a given infinite word by iteration can be infinitely generated, even when the word is generated by iterating an invertible primitive morphism. Stabilizers of strict epistandard words are cyclic when non-trivial, while stabilizers of ultimately strict epistandard words are always non-trivial. For this latter family of words, we give a characterization of stabilizer elements.
---
paper_title: Efficient detection of quasiperiodicities in strings
paper_content:
A string z is quasiperiodic if there is a second string w ≠ z such that the occurrences of w in z cover z entirely, i.e., every position of z falls within some occurrence of w in z. It is shown here that all maximal quasiperiodic substrings of a string x of n symbols can be detected in time O(n log^2 n).
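The paper's algorithm runs in O(n log^2 n); the naive quadratic check below merely illustrates what it means for the occurrences of w to cover z, i.e., for w to be a quasiperiod of z.

def is_quasiperiod(w, z):
    """True iff w != z and every position of z falls within some occurrence of w in z."""
    if not w or w == z:
        return False
    covered = [False] * len(z)
    start = z.find(w)
    while start != -1:
        for i in range(start, start + len(w)):
            covered[i] = True
        start = z.find(w, start + 1)
    return all(covered)

print(is_quasiperiod("aba", "abaababa"))   # True: occurrences at 0, 3 and 5 cover everything
print(is_quasiperiod("ab", "abaababa"))    # False: positions 2 and 7 stay uncovered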
---
paper_title: Quasiperiodic infinite words : multi-scale case and dynamical properties
paper_content:
An infinite word x is said to be quasiperiodic if there exists a finite word q such that x is covered by occurrences of q (such a q is called a quasiperiod of x). Using the notion of derivation, we show that this definition is not sufficient to imply any symmetry in an infinite word. Therefore we introduce multi-scale quasiperiodic words, i.e. quasiperiodic words that admit an infinite number of quasiperiods. Such words are uniformly recurrent, this allows us to study the subshift they generate. We prove that multi-scale quasiperiodic subshifts are uniquely ergodic and have zero topological entropy as well as zero Kolmogorov complexity. Sturmian subshifts are shown to be multi-scale quasiperiodic.
---
paper_title: On different generalizations of episturmian words
paper_content:
In this paper we study some classes of infinite words generalizing episturmian words, and analyse the relations occurring among such classes. In each case, the reversal operator R is replaced by an arbitrary involutory antimorphism θ of the free monoid A^*. In particular, we define the class of θ-words with seed, whose "standard" elements (θ-standard words with seed) are constructed by an iterative θ-palindrome closure process, starting from a finite word u0 called the seed. When the seed is empty, one obtains θ-words; episturmian words are exactly the R-words. One of the main theorems of the paper characterizes θ-words with seed as infinite words closed under θ and having at most one left special factor of each length n ≥ N (where N is some nonnegative integer depending on the word). When N = 0 we call such words θ-episturmian. Further results on the structure of θ-episturmian words are proved. In particular, some relationships between θ-words (with or without seed) and θ-episturmian words are shown.
---
paper_title: Quasiperiodic Sturmian words and morphisms
paper_content:
We characterize all quasiperiodic Sturmian words: a Sturmian word is not quasiperiodic if and only if it is a Lyndon word. Moreover, we study links between Sturmian morphisms and quasiperiodicity.
---
paper_title: On some problems related to palindrome closure
paper_content:
In this paper, we solve some open problems related to (pseudo)palindrome closure operators and to the infinite words generated by their iteration, that is, standard episturmian and pseudostandard words. We show that if ϑ is an involutory antimorphism of A*, then the right and left ϑ-palindromic closures of any factor of a ϑ-standard word are also factors of some ϑ-standard word. We also introduce the class of pseudostandard words with “seed”, obtained by iterated pseudopalindrome closure starting from a nonempty word. We show that pseudostandard words with seed are morphic images of standard episturmian words. Moreover, we prove that for any given pseudostandard word s with seed, all sufficiently long left special factors of s are prefixes of it.
---
paper_title: String Pattern Matching for a Deluge Survival Kit
paper_content:
String Pattern Matching concerns itself with algorithmic and combinatorial issues related to matching and searching on linearly arranged sequences of symbols, arguably the simplest possible discrete structures. As unprecedented volumes of sequence data are amassed, disseminated and shared at an increasing pace, effective access to, and manipulation of such data depend crucially on the efficiency with which strings are structured, compressed, transmitted, stored, searched and retrieved. This paper samples from this perspective, and with the authors' own bias, a rich arsenal of ideas and techniques developed in more than three decades of history.
---
|
Title: Episturmian Words: A Survey
Section 1: From Sturmian to episturmian
Description 1: This section introduces the concept of Sturmian words and their fascinating properties, leading to the natural extension to episturmian words.
Section 2: Notation & terminology
Description 2: This section provides definitions, notations, and terminologies related to episturmian words, essential for understanding the subsequent sections.
Section 3: Definitions & basic properties
Description 3: This section offers the foundational definitions and basic properties of episturmian words, including their characteristic attributes and generalizations.
Section 4: Equivalence classes
Description 4: This section explores the classification of infinite words into equivalence classes based on their factor sets, particularly focusing on episturmian words.
Section 5: Bi-infinite episturmian words
Description 5: This section extends the concept of episturmian words to bi-infinite words and discusses their recurrence properties and related formulations.
Section 6: Strict episturmian words
Description 6: This section defines strict episturmian words and discusses their properties in terms of their alphabets and special factors.
Section 7: Episturmian morphisms
Description 7: This section introduces episturmian morphisms, their significance, and how they generalize Sturmian morphisms to larger alphabets.
Section 8: Generators & monoids
Description 8: This section details the generators and monoids of episturmian morphisms, demonstrating various inclusions and decompositions.
Section 9: Relation with episturmian words
Description 9: This section characterizes episturmian words using their decompositions over pure episturmian morphisms and directive words.
Section 10: Spins, shifts, and directive words
Description 10: This section examines the concepts of spins, shifts, and directive words, illustrating how these notions impact the structure of episturmian words.
Section 11: Notation for pure episturmian morphisms
Description 11: This section introduces a notation system for pure episturmian morphisms and discusses their operational principles.
Section 12: Shifts
Description 12: This section explores the impact of shifts on episturmian morphisms and how they relate to conjugate morphisms.
Section 13: Block-equivalence & directive words
Description 13: This section discusses block-equivalence for spinned words and their influence on representing episturmian words.
Section 14: Periodic and purely morphic episturmian words
Description 14: This section describes periodic and purely morphic episturmian words in terms of their directive words and morphic structures.
Section 15: Arnoux-Rauzy sequences
Description 15: This section briefly discusses Arnoux-Rauzy sequences and their combinatorial properties, elucidating their relation to episturmian words.
Section 16: Finite Arnoux-Rauzy words
Description 16: This section considers finite episturmian words and their enumeration and characterization with respect to their periodic properties.
Section 17: Some properties of factors
Description 17: This section summarizes various properties of factors of episturmian words including factor complexity, palindromes, and more.
Section 18: Factor complexity
Description 18: This section provides detailed insights into the complexity of factors in episturmian words.
Section 19: Palindromic factors
Description 19: This section discusses the palindromic factors in episturmian words and their implications.
Section 20: Iterated palindromic closure
Description 20: This section elaborates on the process and significance of iterated palindromic closure in the context of episturmian words.
Section 21: Palindromic richness
Description 21: This section defines and explores the concept of palindromic richness within episturmian words.
Section 22: Fractional powers & critical exponent
Description 22: This section investigates the occurrence of fractional powers in episturmian words and the concepts of the critical exponent.
Section 23: Frequencies
Description 23: This section discusses the frequency of factors within episturmian words, providing formulas and examples.
Section 24: Return words
Description 24: This section introduces the notion of return words and their significance in the study of episturmian words.
Section 25: Episkew words
Description 25: This section defines episkew words and discusses their equivalence to episturmian words and their properties.
Section 26: Extremal properties
Description 26: This section explores the extremal properties of episturmian words with respect to lexicographic order.
Section 27: Imbalance
Description 27: This section discusses the balance properties of episturmian words and how they relate to episturmian sequences.
Section 28: Fraenkel's conjecture
Description 28: This section addresses Fraenkel's conjecture in the context of balanced episturmian words and recurrent sequences.
Section 29: Concluding remarks
Description 29: This section concludes the survey by mentioning recent research works and open questions involving episturmian words.
|
From “hand-written” to computationally implemented HPSG theories 1 Overview
| 11 |
---
paper_title: A Web-Based Instructional Platform For Contraint-Based Grammar Formalisms And Parsing
paper_content:
We propose the creation of a web-based training framework comprising a set of topics that revolve around the use of feature structures as the core data structure in linguistic theory, its formal foundations, and its use in syntactic processing.
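Feature structures of the kind this platform teaches are often modelled as nested attribute-value maps whose central operation is unification; the toy sketch below (no types, no reentrancy) is only meant to make the data structure concrete and does not reflect the platform's actual implementation.

def unify(fs1, fs2):
    """Unify two feature structures given as nested dicts with atomic values.
    Returns the unified structure, or None on a clash (no types, no structure sharing)."""
    if isinstance(fs1, dict) and isinstance(fs2, dict):
        result = dict(fs1)
        for feat, val in fs2.items():
            if feat in result:
                sub = unify(result[feat], val)
                if sub is None:
                    return None
                result[feat] = sub
            else:
                result[feat] = val
        return result
    return fs1 if fs1 == fs2 else None

np_fs = {"CAT": "np", "AGR": {"NUM": "sg"}}
subj_constraint = {"AGR": {"NUM": "sg", "PER": "3"}}
print(unify(np_fs, subj_constraint))                         # merged structure
print(unify({"AGR": {"NUM": "sg"}}, {"AGR": {"NUM": "pl"}}))  # None (feature clash)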
---
paper_title: Verb-initial Constructions in Modern Hebrew
paper_content:
Verb-Initial Constructions in Modern Hebrew, by Nurit Melnik. B.Sc. (The Hebrew University of Jerusalem) 1993; M.A. (University of California, Berkeley) 1999. A dissertation submitted in partial satisfaction of the requirements for the degree of Doctor of Philosophy in Linguistics in the Graduate Division of the University of California, Berkeley. Committee in charge: Professor Andreas Kathol (Chair), Professor Paul Kay, Professor Johanna Nichols. Fall 2002.
---
paper_title: In search of epistemic primitives in the english resource grammar
paper_content:
This paper seeks to improve HPSG engineering through the design of more terse, readable and intuitive type signatures. It argues against the exclusive use of IS-A networks and, with reference to the English Resource Grammar, demonstrates that a collection of higher-order datatypes are already acutely in demand in contemporary HPSG design. Some default specification conventions to assist in maximizing the utility of higher-order type constructors are also discussed.
---
paper_title: On expressing lexical generalizations in HPSG
paper_content:
This paper investigates the status of the lexicon and the possibilities for expressing lexical generalizations in the paradigm of Head-Driven Phrase Structure Grammar (HPSG). We illustrate that the architecture readily supports the use of implicational principles to express generalizations over a class of word objects. A second kind of lexical generalizations expressing relations between classes of words is often expressed in terms of lexical rules. We show how lexical rules can be integrated into the formal setup for HPSG developed by King (1989, 1994), investigate a lexical rule specification language allowing the linguist to only specify those properties which are supposed to differ between the related classes, and define how this lexical rule specification language is interpreted. We thereby provide a formalization of lexical rules as used in HPSG.
---
paper_title: A Computational Treatment Of Lexical Rules In HPSG As Covariation In Lexical Entries
paper_content:
This paper proposes a new computational treatment of lexical rules as used in the HPSG framework. A compiler is described which translates a set of lexical rules and their interaction into a definite clause encoding, which is called by the base lexical entries in the lexicon. This way, the disjunctive possibilities arising from lexical rule application are encoded as systematic covariation in the specification of lexical entries. The compiler ensures the automatic transfer of properties not changed by a lexical rule. Program transformation techniques are used to advance the encoding. The final output of the compiler constitutes an efficient computational counterpart of the linguistic generalizations captured by lexical rules and allows on-the-fly application of lexical rules.
---
paper_title: Head-driven Phrase Structure Grammar
paper_content:
This book presents the most complete exposition of the theory of head-driven phrase structure grammar (HPSG), introduced in the authors' "Information-Based Syntax and Semantics." HPSG provides an integration of key ideas from the various disciplines of cognitive science, drawing on results from diverse approaches to syntactic theory, situation semantics, data type theory, and knowledge representation. The result is a conception of grammar as a set of declarative and order-independent constraints, a conception well suited to modelling human language processing. This self-contained volume demonstrates the applicability of the HPSG approach to a wide range of empirical problems, including a number which have occupied center-stage within syntactic theory for well over twenty years: the control of "understood" subjects, long-distance dependencies conventionally treated in terms of "wh"-movement, and syntactic constraints on the relationship between various kinds of pronouns and their antecedents. The authors make clear how their approach compares with and improves upon approaches undertaken in other frameworks, including in particular the government-binding theory of Noam Chomsky.
---
paper_title: An Open Source Grammar Development Environment and Broad-coverage English Grammar Using HPSG
paper_content:
The LinGO (Linguistic Grammars Online) project’s English Resource Grammar and the LKB grammar development environment are language resources which are freely available for download for any purpose, including commercial use (see http://lingo.stanford.edu). Executable programs and source code are both included. In this paper, we give an outline of the LinGO English grammar and LKB system, and discuss the ways in which they are currently being used. The grammar and processing system can be used independently or combined to give a central component which can be exploited in a variety of ways. Our intention in writing this paper is to encourage more people to use the technology, which supports collaborative development on many levels.
---
paper_title: The Grammar Matrix: An Open-Source Starter-Kit For The Rapid Development Of Cross-Linguistically Consistent Broad-Coverage Precision Grammars
paper_content:
The grammar matrix is an open-source starter-kit for the development of broad-coverage HPSGs. By using a type hierarchy to represent cross-linguistic generalizations and providing compatibility with other open-source tools for grammar engineering, evaluation, parsing and generation, it facilitates not only quick start-up but also rapid growth towards the wide coverage necessary for robust natural language processing and the precision parses and semantic representations necessary for natural language understanding.
---
|
Title: From “hand-written” to computationally implemented HPSG theories 1 Overview
Section 1: Overview
Description 1: Brief introduction about the potential of HPSG for computational implementation and the main focus of the paper.
Section 2: Type Definition
Description 2: Discuss the approach to defining types in TRALE and LKB, including the organization of type hierarchies and the glb condition.
Section 3: Principles
Description 3: Explain how HPSG principles are defined in LKB and TRALE, including their application and the differences in approach.
Section 4: Lexical Rules
Description 4: Overview of the implementation of lexical rules in LKB and TRALE, with emphasis on the “carrying over” of information from input to output.
Section 5: Exhaustive Typing and Subtype Covering
Description 5: Discuss the concept of exhaustive typing and subtype covering in TRALE and LKB, and their implications for HPSG implementation.
Section 6: Definite Relations
Description 6: Describe how LKB and TRALE handle definite relations, such as APPEND, and the integration of a programming language into TRALE.
Section 7: Non-binary Grammar Rules
Description 7: Explain the implementation of non-binary grammar rules in LKB and TRALE, focusing on handling rules with varying numbers of daughters.
Section 8: Semantic Representation
Description 8: Compare the modules for processing semantic representations (MRS in LKB and Lexical Resource Semantics in TRALE).
Section 9: Evaluating Competence and Performance
Description 9: Discuss the tools and methods provided by LKB and TRALE for evaluating grammar competence and performance.
Section 10: User-Interface Issues and Features
Description 10: Compare the user-interface features of LKB and TRALE, including hierarchies display, feature structures, syntactic trees, and command interactions.
Section 11: Conclusion
Description 11: Summarize the comparison between LKB and TRALE regarding faithfulness to "hand-written" HPSG and the accessibility of the implementation platforms for linguists.
|
A Review of Selective Forwarding Attacks in Wireless Sensor Networks
| 3 |
---
paper_title: Lightweight defense scheme against selective forwarding attacks in wireless sensor networks
paper_content:
In data-centric wireless sensor networks, malicious nodes may selectively drop some crucial data packets, which seriously disrupts the network's data collection and decreases the availability of sensor services. In this paper, we present a lightweight defense scheme against selective forwarding attacks. Exploiting the fact that nodes around a transmission path are easy to locate in a structured topology made of hexagonal meshes, the nodes around the transmission path are used to monitor the packet transmissions of their neighbor nodes, judge the attackers' locations, and resend the packets dropped by the attackers. The method is effective in detecting selective forwarding attacks and ensures reliable packet delivery. Analysis and simulation results show that the proposed scheme consumes less energy and storage.
---
paper_title: Directed diffusion: a scalable and robust communication paradigm for sensor networks
paper_content:
Advances in processor, memory and radio technology will enable small and cheap nodes capable of sensing, communication and computation. Networks of such nodes can coordinate to perform distributed sensing of environmental phenomena. In this paper, we explore the directed diffusion paradigm for such coordination. Directed diffusion is datacentric in that all communication is for named data. All nodes in a directed diffusion-based network are application-aware. This enables diffusion to achieve energy savings by selecting empirically good paths and by caching and processing data in-network. We explore and evaluate the use of directed diffusion for a simple remote-surveillance sensor network.
---
paper_title: A Novel Cooperation Mechanism to Enforce Security in Wireless Sensor Networks
paper_content:
In wireless sensor networks, how to detect the malicious nodes and prevent them from attacking the normal nodes has become one of the most important issues. This paper introduces a new mechanism in which the sensor nodes collaborate with each other to resist denial of service (DoS) attack and protect the being attacked nodes. A reputation-based incentive model for cooperation is also proposed. Simulation results show that the proposed scheme has low false alarm rate and can increase the chance of success in stopping the malicious nodes from attacking the normal nodes in wireless sensor networks.
---
paper_title: Research about security mechanism in wireless sensor network
paper_content:
Wireless sensor networks (WSNs) have been widely used in many application areas, and more and more researchers at home and abroad are paying attention to this field. However, owing to the requirements of the deployment environment and the networks' resource constraints, most of the mechanisms used in traditional wireless networks are difficult to apply directly to wireless sensor networks, so security mechanisms designed specifically for WSNs are urgently needed. This paper introduces wireless sensor network routing protocols and describes secure routing in wireless sensor networks, with an in-depth study of SPIN. Finally, through analysis of NS2 simulations, we show that the SPIN protocol is valuable in terms of security performance and confidentiality.
---
paper_title: GPSR: greedy perimeter stateless routing for wireless networks
paper_content:
We present Greedy Perimeter Stateless Routing (GPSR), a novel routing protocol for wireless datagram networks that uses the positions of routers and a packet's destination to make packet forwarding decisions. GPSR makes greedy forwarding decisions using only information about a router's immediate neighbors in the network topology. When a packet reaches a region where greedy forwarding is impossible, the algorithm recovers by routing around the perimeter of the region. By keeping state only about the local topology, GPSR scales better in per-router state than shortest-path and ad-hoc routing protocols as the number of network destinations increases. Under mobility's frequent topology changes, GPSR can use local topology information to find correct new routes quickly. We describe the GPSR protocol, and use extensive simulation of mobile wireless networks to compare its performance with that of Dynamic Source Routing. Our simulations demonstrate GPSR's scalability on densely deployed wireless networks.
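The greedy mode of GPSR amounts to forwarding a packet to the neighbor geographically closest to the destination, handing over to perimeter mode when no neighbor improves on the current node; the sketch below shows only that greedy decision, with made-up node identifiers and coordinates.

from math import dist   # Python 3.8+

def greedy_next_hop(self_pos, neighbors, dest_pos):
    """Return the id of the neighbor closest to dest_pos, or None if no neighbor
    is closer than the current node (the case GPSR hands to perimeter routing)."""
    best_id, best_d = None, dist(self_pos, dest_pos)
    for node_id, pos in neighbors.items():
        d = dist(pos, dest_pos)
        if d < best_d:
            best_id, best_d = node_id, d
    return best_id

neighbors = {"n1": (2.0, 1.0), "n2": (4.0, 3.0), "n3": (1.0, 5.0)}
print(greedy_next_hop((0.0, 0.0), neighbors, (10.0, 10.0)))   # 'n2'
print(greedy_next_hop((9.0, 9.0), neighbors, (10.0, 10.0)))   # None -> perimeter mode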
---
paper_title: Denial of Service in Sensor Networks
paper_content:
Sensor networks hold the promise of facilitating large-scale, real-time data processing in complex environments, helping to protect and monitor military, environmental, safety-critical, or domestic infrastructures and resources. Denial-of-service attacks against such networks, however, may permit real-world damage to public health and safety. Without proper security mechanisms, networks will be confined to limited, controlled environments, negating much of the promise they hold. The limited ability of individual sensor nodes to thwart failure or attack makes ensuring network availability more difficult. To identify denial-of-service vulnerabilities, the authors analyzed two effective sensor network protocols that did not initially consider security. These examples demonstrate that consideration of security at design time is the best way to ensure successful network deployment.
---
paper_title: Priority and Random Selection for Dynamic Window Secured Implicit Geographic Routing in Wireless Sensor Network
paper_content:
Problem statement: Sensor nodes are easily exposed to many attacks because they are deployed in unattended, adversarial environments with no global addressing and are used for critical applications such as battlefield surveillance and emergency response. Since a sensor also needs to act as a router to relay messages to the intended recipient, the vulnerabilities at the network layer increase. However, existing security mechanisms cannot be fitted directly into a sensor network because of the constraints on the energy and computational capabilities of the sensor nodes, which require modifying the protocols associated with the nodes in order to provide security. Approach: In this study, a Dynamic Window Secured Implicit Geographic Forwarding (DWIGF) routing protocol is presented, based on a lazy-binding technique and a dynamic collection-window time, and inheriting geographic routing techniques. Results: DWIGF minimizes the Clear To Send (CTS) rushing attack and is robust against black hole and selective forwarding attacks, achieving high packet delivery ratios, because the selection of failed nodes and of attackers is minimized, respectively. Moreover, several routing attacks are eliminated since the routing technique used is classified as geographic routing. Conclusion: This novel routing protocol promises secure routing without embedding any existing security mechanism.
---
paper_title: Lightweight defense scheme against selective forwarding attacks in wireless sensor networks
paper_content:
In data-centric wireless sensor networks, malicious nodes may selectively drop some crucial data packets, which seriously disrupts the network's data collection and decreases the availability of sensor services. In this paper, we present a lightweight defense scheme against selective forwarding attacks. Exploiting the fact that nodes around a transmission path are easy to locate in a structured topology made of hexagonal meshes, the nodes around the transmission path are used to monitor the packet transmissions of their neighbor nodes, judge the attackers' locations, and resend the packets dropped by the attackers. The method is effective in detecting selective forwarding attacks and ensures reliable packet delivery. Analysis and simulation results show that the proposed scheme consumes less energy and storage.
---
paper_title: Achieving Network Level Privacy in Wireless Sensor Networks
paper_content:
Full network level privacy has often been categorized into four sub-categories: Identity, Route, Location and Data privacy. Achieving full network level privacy is a critical and challenging problem due to the constraints imposed by the sensor nodes (e.g., energy, memory and computation power), sensor networks (e.g., mobility and topology) and QoS issues (e.g., packet reachability and timeliness). In this paper, we propose two new identity, route and location privacy algorithms and a data privacy mechanism that address this problem. The proposed solutions provide additional trustworthiness and reliability at a modest cost in memory and energy. We also show that the proposed solutions provide protection against various privacy disclosure attacks, such as eavesdropping and hop-by-hop traceback attacks.
---
paper_title: A sequential mesh test based selective forwarding attack detection scheme in wireless sensor networks
paper_content:
It is very difficult to distinguish between selective forwarding attacks and normal packet drops in wireless sensor networks. To detect selective forwarding attacks effectively, this paper proposes a detection scheme based on the sequential mesh test. After receiving packet-drop reports, the cluster head detects the packet-dropping nodes using the sequential mesh test method. The scheme draws a small number of samples to run the test, instead of fixing the total number of tests in advance, and decides whether to continue testing based on the results obtained so far until a final conclusion is reached. Experiments show that the scheme provides a higher detection accuracy and a lower false alarm rate than the existing detection scheme, while requiring less communication and computation power and a shorter detection time to identify selective forwarding attackers.
---
paper_title: Detection of Selective Forwarding Attacks in Heterogeneous Sensor Networks
paper_content:
Security is crucial for wireless sensor networks deployed in military and other hostile environments. Due to the limited transmission range, a sensor node may need multiple hops to deliver a packet to the base station. An attacker can launch a selective forwarding attack, dropping a portion of the packets it is supposed to relay while forwarding the rest. The attack is hard to detect, since packet drops in sensor networks may also be caused by unreliable wireless communication or node failures. In this paper, we first describe an efficient scheme for reporting packet drops and then present an effective scheme for detecting the selective forwarding attack in a heterogeneous sensor network. The scheme utilizes powerful high-end sensors and is based on the sequential probability ratio test. Our extensive simulations show that the proposed scheme achieves a high detection ratio and a very low false alarm rate.
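The detection scheme is built on the sequential probability ratio test (SPRT); the generic sketch below runs an SPRT over a stream of per-packet drop observations. The drop-rate hypotheses and error bounds are arbitrary example values, and the paper's actual reporting protocol between sensors and high-end nodes is not reproduced.

from math import log

def sprt_drop_detector(observations, p0=0.05, p1=0.30, alpha=0.01, beta=0.01):
    """SPRT on a stream of observations (1 = packet dropped, 0 = forwarded).
    H0: benign node with drop rate p0;  H1: attacker with drop rate p1."""
    upper = log((1 - beta) / alpha)    # cross above: accept H1, flag the node
    lower = log(beta / (1 - alpha))    # cross below: accept H0, node looks benign
    llr = 0.0
    for i, dropped in enumerate(observations, 1):
        llr += log(p1 / p0) if dropped else log((1 - p1) / (1 - p0))
        if llr >= upper:
            return "malicious", i
        if llr <= lower:
            return "benign", i
    return "undecided", len(observations)

print(sprt_drop_detector([1, 0, 1, 1, 0, 1, 1, 1]))   # ('malicious', 4)
print(sprt_drop_detector([0] * 20))                   # ('benign', 16)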
---
|
Title: A Review of Selective Forwarding Attacks in Wireless Sensor Networks
Section 1: INTRODUCTION
Description 1: This section introduces Wireless Sensor Networks (WSNs), highlighting their structure, communication methods, and security vulnerabilities, particularly focusing on selective forwarding attacks.
Section 2: SELECTIVE FORWARDING ATTACK
Description 2: This section describes the selective forwarding attack in detail, explaining how malicious nodes drop or selectively forward packets to compromise the network.
Section 3: RECENT PROPOSED DETECTION TECHNIQUES OF SELECTIVE FORWARDING ATTACKS
Description 3: This section reviews various detection techniques proposed in recent years for identifying and mitigating selective forwarding attacks in WSNs.
Section 4: CONCLUSIONS
Description 4: This section summarizes the importance of detecting selective forwarding attacks and reviews the recent techniques discussed in the paper, potentially guiding future research directions.
|
A Review of Redeye Detection and Removal in Digital Images Through Patents
| 8 |
---
paper_title: Human Face Detection in Visual Scenes
paper_content:
We present a neural network-based face detection system. A retinally connected neural network examines small windows of an image, and decides whether each window contains a face. The system arbitrates between multiple networks to improve performance over a single network. We use a bootstrap algorithm for training, which adds false detections into the training set as training progresses. This eliminates the difficult task of manually selecting non-face training examples, which must be chosen to span the entire space of non-face images. Comparisons with another state-of-the-art face detection system are presented; our system has better performance in terms of detection and false-positive rates.
---
paper_title: Automatic Red-Eye Removal based on Sclera and Skin Tone Detection
paper_content:
It is well-known that taking portrait photographs with a built-in camera flash may create a red-eye effect. This effect is caused by the light entering the subject's eye through the pupil and reflecting from the retina back to the sensor. These red eyes are probably one of the most important types of artifacts in portrait pictures. Many different techniques exist for removing these artifacts digitally after image capture. In most of the existing software tools, the user has to select the zone in which the red eye is located. The aim of our method is to automatically detect and correct the red eyes. Our algorithm detects the eye itself by finding the appropriate colors and shapes without input from the user. We use the basic knowledge that an eye is characterized by its shape and the white color of the sclera. Combining this intuitive approach with the detection of "skin" around the eye, we obtain a higher success rate than most of the tools we tested. Moreover, our algorithm works for any type of skin tone. The main goal of this algorithm is to accurately remove red eyes from a picture, while avoiding false positives completely, which is the biggest problem of camera-integrated algorithms or distributed software tools. At the same time, we want to keep the false negative rate as low as possible. We implemented this algorithm in a web-based application to allow people to correct their images online.
---
paper_title: Red Eye Removal using Digital Color Image Processing
paper_content:
The current paper provides methods to correct the artifact known as “red eye” by means of digital color image processing. This artifact is typically formed in amateur photographs taken with a built-in camera flash. To correct red eye artifacts, an image mask is computed by calculating a colorimetric distance between a prototypical reference “red eye” color and each pixel of the image containing the red eye. Various image processing algorithms such as thresholding, blob analysis, and morphological filtering, are applied to the mask, in order to eliminate noise, reduce errors, and facilitate a more natural looking result. The mask serves to identify pixels in the color image needing correction, and further serves to identify the amount of correction needed. Pixels identified as having red-eye artifacts are modified to a substantially monochrome color, while the bright specular reflection of the eye is preserved.
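A minimal NumPy sketch of the pipeline this abstract describes — a per-pixel colorimetric distance to a reference red, a threshold to form the mask, and desaturation of the masked pixels — is given below. The reference color, threshold and darkening factor are placeholder values, and the blob-analysis and morphological clean-up steps are omitted.

import numpy as np

def redeye_mask(img, ref_red=(180, 60, 60), thresh=80.0):
    """img: HxWx3 uint8 RGB array. Mask of pixels close to the reference red-eye color."""
    d = np.linalg.norm(img.astype(float) - np.array(ref_red, dtype=float), axis=2)
    return d < thresh

def desaturate_redeye(img, mask):
    """Replace masked pixels with a darkened monochrome value; leave the rest untouched."""
    out = img.astype(float)
    mono = out.mean(axis=2) * 0.6            # desaturate and darken
    for c in range(3):
        out[..., c][mask] = mono[mask]
    return out.astype(np.uint8)

# Hypothetical usage on a synthetic reddish patch:
img = np.full((4, 4, 3), (200, 50, 50), dtype=np.uint8)
mask = redeye_mask(img)
print(mask.all(), desaturate_redeye(img, mask)[0, 0])   # True [60 60 60]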
---
paper_title: An efficient automatic redeye detection and correction algorithm
paper_content:
A fully automatic redeye detection and correction algorithm is presented to address the redeye artifacts in digital photos. The algorithm contains a redeye detection part and a correction part. The detection part is modeled as a feature based object detection problem. Adaboost is used to simultaneously select features and train the classifier. A new feature set is designed to address the orientation-dependency problem associated with the Haar-like features commonly used for object detection design. For each detected redeye, a correction algorithm is applied to do adaptive desaturation and darkening over the redeye region.
---
paper_title: Red eye detection with machine learning
paper_content:
Red-eye is a problem in photography that occurs when a photograph is taken with a flash, and the bright flash light is reflected from the blood vessels in the eye, giving the eye an unnatural red hue. Most red-eye reduction systems need the user to outline the red eyes by hand, but this approach doesn't scale up. Instead, we propose an automatic red-eye detection system. The system contains a red-eye detector that finds red eye-like candidate image patches; a state-of-the-art face detector used to eliminate most false positives (image regions that look like red eyes but are not); and a red-eye outline detector. All three detectors are automatically learned from data, using Boosting. Our system can be combined with a red-eye reduction module to yield a fully automatic red-eye corrector.
---
paper_title: Probabilistic Automatic Red Eye Detection and Correction
paper_content:
In this paper we propose a new probabilistic approach to red eye detection and correction. It is based on stepwise refinement of a pixel-wise red eye probability map. Red eye detection starts with a fast non red eye region rejection step. A classification step then adjusts the probabilities attributed to the detected red eye candidates. The correction step finally applies a soft red eye correction based on the resulting probability map. The proposed approach is fast and allows achieving an excellent correction of strong red eyes while producing a still significant correction of weaker red eyes.
---
paper_title: Automatic red-eye detection and correction
paper_content:
"Red-eye" is a phenomenon that causes the eyes of flash photography subjects to appear unnaturally reddish in color. Though commercial solutions exist for red-eye correction, all of them require some measure of user intervention. A method is presented to automatically detect and correct redeye in digital images. First, faces are detected with a cascade of multi-scale classifiers. The red-eye pixels are then located with several refining masks computed over the facial region. The masks are created by thresholding per-pixel metrics, designed to detect red-eye artifacts. Once the redeye pixels have been found, the redness is attenuated with a tapered color desaturation. A detector implemented with this system corrected 95% of the red-eye artifacts in 200 tested images.
---
paper_title: Robust real-time object detection
paper_content:
This paper describes a visual object detection framework that is capable of processing images extremely rapidly while achieving high detection rates. There are three key contributions. The first is the introduction of a new image representation called the "Integral Image" which allows the features used by our detector to be computed very quickly. The second is a learning algorithm, based on AdaBoost, which selects a small number of critical visual features and yields extremely efficient classifiers [4]. The third contribution is a method for combining classifiers in a "cascade" which allows background regions of the image to be quickly discarded while spending more computation on promising object-like regions. A set of experiments in the domain of face detection are presented. The system yields face detection performance comparable to the best previous systems [16, 11, 14, 10, 1]. Implemented on a conventional desktop, face detection proceeds at 15 frames per second.
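The integral-image representation at the heart of this framework lets any rectangular (Haar-like) feature be evaluated with a handful of array lookups; the generic sketch below illustrates the idea and is not the paper's code.

import numpy as np

def integral_image(img):
    """ii[y, x] = sum of img over all rows <= y and columns <= x."""
    return img.cumsum(axis=0).cumsum(axis=1)

def box_sum(ii, top, left, bottom, right):
    """Sum of img[top:bottom+1, left:right+1] from four lookups on the integral image."""
    total = ii[bottom, right]
    if top > 0:
        total -= ii[top - 1, right]
    if left > 0:
        total -= ii[bottom, left - 1]
    if top > 0 and left > 0:
        total += ii[top - 1, left - 1]
    return total

img = np.arange(16, dtype=np.int64).reshape(4, 4)
ii = integral_image(img)
print(box_sum(ii, 1, 1, 2, 2), img[1:3, 1:3].sum())   # both 30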
---
paper_title: A Brief Introduction to Boosting
paper_content:
Boosting is a general method for improving the accuracy of any given learning algorithm. This short paper introduces the boosting algorithm AdaBoost, and explains the underlying theory of boosting, including an explanation of why boosting often does not suffer from overfitting. Some examples of recent applications of boosting are also described.
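To make the AdaBoost weight-update loop concrete, here is a compact, generic version with one-dimensional threshold stumps; it is a textbook sketch on toy data, not tied to any particular application of boosting.

import numpy as np

def train_adaboost(x, y, rounds=5):
    """x: 1-D features, y: labels in {-1, +1}. Returns a list of (threshold, polarity, alpha)."""
    n = len(x)
    w = np.full(n, 1.0 / n)
    model = []
    for _ in range(rounds):
        best = None
        for thr in x:                        # candidate thresholds at the data points
            for pol in (1, -1):              # stump: pol if x >= thr else -pol
                pred = pol * np.where(x >= thr, 1, -1)
                err = float(np.sum(w[pred != y]))
                if best is None or err < best[0]:
                    best = (err, thr, pol, pred)
        err, thr, pol, pred = best
        err = min(max(err, 1e-10), 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)
        w *= np.exp(-alpha * y * pred)       # misclassified points get heavier
        w /= w.sum()
        model.append((thr, pol, alpha))
    return model

def predict(model, x):
    score = sum(a * p * np.where(x >= t, 1, -1) for t, p, a in model)
    return np.sign(score)

x = np.array([1.0, 2.0, 3.0, 6.0, 7.0, 8.0])
y = np.array([-1, -1, -1, 1, 1, 1])
print(predict(train_adaboost(x, y), x))   # recovers the labels on this toy set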
---
paper_title: Automatic red-eyes detection based on AAM
paper_content:
Red-eye is a phenomenon in which people's pupils appear unnaturally red when an image is captured with a photoflash lamp. Some algorithms exist for red-eye detection, but they have limited accuracy and cannot detect a single red eye. In this paper, a novel approach based on active appearance models is proposed to automatically detect red eyes in photos, and a practical automatic correction method is presented. The goal of the algorithm is to detect all the red eyes in a photo without any human intervention. Finally, experimental results demonstrate the high accuracy and efficiency of the method.
---
paper_title: Towards automatic redeye effect removal
paper_content:
The redeye effect is typically formed in amateur photographs taken with a built-in camera flash. Analysis of the available techniques and products indicates that their efficiency in correcting this artifact is limited and their performance is inconsistent. In this work we propose a user friendly solution, which could be used to restore amateur photographs. In the proposed method the redeye effect is detected using a skin detection module and eye colors are restored using morphological image processing. The new method is computationally efficient, robust to parameter settings and versatile, as it can work in conjunction with a number of skin detection methods.
---
paper_title: Digital red eye removal
paper_content:
The current paper provides methods to correct the artifact known as red eye by means of digital color image processing. This artifact is typically formed in amateur photographs taken with a built-in camera flash. To correct red eye artifacts, an image mask is computed by calculating a colorimetric distance between a prototypical reference red eye color and each pixel of the image containing the red eye. Various image processing algorithms such as thresholding, blob analysis, and morphological filtering, are applied to the mask, in order to eliminate noise, reduce errors, and facilitate a more natural looking result. The mask serves to identify pixels in the color image needing correction, and further serves to identify the amount of correction needed. Pixels identified as having red eye artifacts are modified to a substantially monochrome color, while the bright specular reflection of the eye is preserved.
---
paper_title: A fully automatic redeye detection and correction algorithm
paper_content:
A fully automatic redeye detection and correction algorithm was developed at Eastman Kodak Company Research Laboratories. The algorithm is highly sophisticated so that it is able to distinguish most redeye pairs from scene content. It is also highly optimized for execution speed and memory usage enabling it to be included in a variety of products. Detected redeyes are corrected so that the red color is removed, but the eye maintains a natural look.
---
paper_title: Automatic redeye removal for smart enhancement of photos of unknown origin
paper_content:
The paper describes a modular procedure for automatic correction of redeye artifact in images of unknown origin, maintaining the natural appearance of the eye. First, a smart color balancing procedure is applied. This phase not only facilitates the subsequent steps of processing, but also improves the overall appearance of the output image. Combining the results of a color-based face detector and of a face detector based on a multi-resolution neural network the most likely facial regions are identified. Redeye is searched for only within these regions, seeking areas with high “redness” satisfying some geometric constraints. A novel redeye removal algorithm is then applied automatically to the red eyes identified, and opportunely smoothed to avoid unnatural transitions between the corrected and original parts. Experimental results on a set of over 450 images are reported.
---
paper_title: Safe Red-Eye Correction Plug-in Using Adaptive Methods
paper_content:
An important issue with red-eye correction is that it might result in image degradation. This can be due to the detection of false positives or, even in the case of correct detection, to an inadequate correction technique. Three correction methods are proposed and compared according to their image degradation risk and their expected perceptual quality improvement. Based on those analyses an adaptive system is designed which selects the correction strategy dependent on those measures and the detection confidence. Finally, both qualitative (visual preferences) and quantitative (pixel counts on manual segmented images) evaluation results are shown.
---
paper_title: Automatic Red-Eye Removal based on Sclera and Skin Tone Detection
paper_content:
It is well-known that taking portrait photographs with a built-in camera may create a red-eye effect. This effect is caused by the light entering the subject’s eye through the pupil and reflecting from the retina back to the sensor. These red eyes are probably one of the most important types of artifacts in portrait pictures. Many different techniques exist for removing these artifacts digitally after image capture. In most of the existing software tools, the user has to select the zone in which the red eye is located. The aim of our method is to automatically detect and correct the red eyes. Our algorithm detects the eye itself by finding the appropriate colors and shapes without input from the user. We use the basic knowledge that an eye is characterized by its shape and the white color of the sclera. Combining this intuitive approach with the detection of “skin” around the eye, we obtain a higher success rate than most of the tools we tested. Moreover, our algorithm works for any type of skin tone. The main goal of this algorithm is to accurately remove red eyes from a picture, while avoiding false positives completely, which is the biggest problem of camera-integrated algorithms or distributed software tools. At the same time, we want to keep the false negative rate as low as possible. We implemented this algorithm in a web-based application to allow people to correct their images online.
---
paper_title: Red eye detection with machine learning
paper_content:
Red-eye is a problem in photography that occurs when a photograph is taken with a flash, and the bright flash light is reflected from the blood vessels in the eye, giving the eye an unnatural red hue. Most red-eye reduction systems need the user to outline the red eyes by hand, but this approach doesn't scale up. Instead, we propose an automatic red-eye detection system. The system contains a red-eye detector that finds red eye-like candidate image patches; a state-of-the-art face detector used to eliminate most false positives (image regions that look like red eyes but are not); and a red-eye outline detector. All three detectors are automatically learned from data, using Boosting. Our system can be combined with a red-eye reduction module to yield a fully automatic red eye corrector.
---
paper_title: Automatic red-eye detection and correction
paper_content:
"Red-eye" is a phenomenon that causes the eyes of flash photography subjects to appear unnaturally reddish in color. Though commercial solutions exist for red-eye correction, all of them require some measure of user intervention. A method is presented to automatically detect and correct redeye in digital images. First, faces are detected with a cascade of multi-scale classifiers. The red-eye pixels are then located with several refining masks computed over the facial region. The masks are created by thresholding per-pixel metrics, designed to detect red-eye artifacts. Once the redeye pixels have been found, the redness is attenuated with a tapered color desaturation. A detector implemented with this system corrected 95% of the red-eye artifacts in 200 tested images.
---
paper_title: Robust real-time object detection
paper_content:
This paper describes a visual object detection framework that is capable of processing images extremely rapidly while achieving high detection rates. There are three key contributions. The first is the introduction of a new image representation called the “Integral Image” which allows the features used by our detector to be computed very quickly. The second is a learning algorithm, based on AdaBoost, which selects a small number of critical visual features and yields extremely efficient classifiers [4]. The third contribution is a method for combining classifiers in a “cascade” which allows background regions of the image to be quickly discarded while spending more computation on promising object-like regions. A set of experiments in the domain of face detection are presented. The system yields face detection performance comparable to the best previous systems [16, 11, 14, 10, 1]. Implemented on a conventional desktop, face detection proceeds at 15 frames per second.
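The integral-image idea in this abstract is simple to demonstrate; the generic sketch below (not the authors' code) builds a summed-area table and evaluates a two-rectangle Haar-like feature with a handful of look-ups.
```python
import numpy as np

def integral_image(gray):
    """Summed-area table with a zero row/column prepended so rectangle
    sums need no boundary checks."""
    ii = np.cumsum(np.cumsum(gray.astype(np.int64), axis=0), axis=1)
    return np.pad(ii, ((1, 0), (1, 0)), mode="constant")

def rect_sum(ii, r0, c0, r1, c1):
    """Sum of gray[r0:r1, c0:c1] from four table look-ups."""
    return ii[r1, c1] - ii[r0, c1] - ii[r1, c0] + ii[r0, c0]

gray = (np.random.rand(24, 24) * 255).astype(np.uint8)
ii = integral_image(gray)
# Two-rectangle feature: left half minus right half of a 16x16 window.
feature = rect_sum(ii, 4, 4, 20, 12) - rect_sum(ii, 4, 12, 20, 20)
```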
---
paper_title: Automatic red-eyes detection based on AAM
paper_content:
Red-eye is a phenomenon in which the pupils of people appear unnaturally red when an image is captured using a photoflash lamp. Several algorithms exist for red-eye detection, but they have limited accuracy and cannot detect a single red eye. In this paper, a novel approach is proposed to automatically detect red eyes in photos, which is based on active appearance models, and a practicable automatic correction method is presented. The goal of the algorithm is to detect all the red-eyes in photos without any human intervention. Finally, experimental results demonstrate the high accuracy and efficiency of the method.
---
paper_title: A fully automatic redeye detection and correction algorithm
paper_content:
A fully automatic redeye detection and correction algorithm was developed at Eastman Kodak Company Research Laboratories. The algorithm is highly sophisticated so that it is able to distinguish most redeye pairs from scene content. It is also highly optimized for execution speed and memory usage enabling it to be included in a variety of products. Detected redeyes are corrected so that the red color is removed, but the eye maintains a natural look.
---
paper_title: Automatic redeye removal for smart enhancement of photos of unknown origin
paper_content:
The paper describes a modular procedure for automatic correction of redeye artifact in images of unknown origin, maintaining the natural appearance of the eye. First, a smart color balancing procedure is applied. This phase not only facilitates the subsequent steps of processing, but also improves the overall appearance of the output image. Combining the results of a color-based face detector and of a face detector based on a multi-resolution neural network the most likely facial regions are identified. Redeye is searched for only within these regions, seeking areas with high “redness” satisfying some geometric constraints. A novel redeye removal algorithm is then applied automatically to the red eyes identified, and opportunely smoothed to avoid unnatural transitions between the corrected and original parts. Experimental results on a set of over 450 images are reported.
---
paper_title: Automated red-eye detection and correction in digital photographs
paper_content:
Caused by light reflected off the subject's retina, red-eye is a troublesome problem in consumer photography. Although most cameras have a red-eye reduction mode, the reality is that no on-camera system is completely effective. In this paper, we propose a fully automatic approach to detecting and correcting red-eyes in digital images. In order to detect red-eyes in a picture, a heuristic yet efficient algorithm is first adopted to detect a group of candidate red regions and then an eye classifier is utilized to confirm whether each candidate region is a human eye. Thereafter, each detected red-eye is corrected by the correction algorithm. In case a red-eye cannot be detected automatically, another algorithm is also provided to detect red-eyes manually with the user's interaction by clicking on an eye. Experimental results on about 300 images with various red-eye appearances demonstrate that the proposed solution is robust and effective.
---
paper_title: Automatic red-eye detection and correction
paper_content:
"Red-eye" is a phenomenon that causes the eyes of flash photography subjects to appear unnaturally reddish in color. Though commercial solutions exist for red-eye correction, all of them require some measure of user intervention. A method is presented to automatically detect and correct redeye in digital images. First, faces are detected with a cascade of multi-scale classifiers. The red-eye pixels are then located with several refining masks computed over the facial region. The masks are created by thresholding per-pixel metrics, designed to detect red-eye artifacts. Once the redeye pixels have been found, the redness is attenuated with a tapered color desaturation. A detector implemented with this system corrected 95% of the red-eye artifacts in 200 tested images.
---
paper_title: Towards automatic redeye effect removal
paper_content:
The redeye effect is typically formed in amateur photographs taken with a built-in camera flash. Analysis of the available techniques and products indicates that their efficiency in correcting this artifact is limited and their performance is inconsistent. In this work we propose a user friendly solution, which could be used to restore amateur photographs. In the proposed method the redeye effect is detected using a skin detection module and eye colors are restored using morphological image processing. The new method is computationally efficient, robust to parameter settings and versatile, as it can work in conjunction with a number of skin detection methods.
---
paper_title: Towards automatic redeye effect removal
paper_content:
The redeye effect is typically formed in amateur photographs taken with a built-in camera flash. Analysis of the available techniques and products indicates that their efficiency in correcting this artifact is limited and their performance is inconsistent. In this work we propose a user friendly solution, which could be used to restore amateur photographs. In the proposed method the redeye effect is detected using a skin detection module and eye colors are restored using morphological image processing. The new method is computationally efficient, robust to parameter settings and versatile, as it can work in conjunction with a number of skin detection methods.
---
paper_title: Automatic redeye removal for smart enhancement of photos of unknown origin
paper_content:
The paper describes a modular procedure for automatic correction of redeye artifact in images of unknown origin, maintaining the natural appearance of the eye. First, a smart color balancing procedure is applied. This phase not only facilitates the subsequent steps of processing, but also improves the overall appearance of the output image. Combining the results of a color-based face detector and of a face detector based on a multi-resolution neural network the most likely facial regions are identified. Redeye is searched for only within these regions, seeking areas with high “redness” satisfying some geometric constraints. A novel redeye removal algorithm is then applied automatically to the red eyes identified, and opportunely smoothed to avoid unnatural transitions between the corrected and original parts. Experimental results on a set of over 450 images are reported.
---
paper_title: Automatic Red-Eye Removal based on Sclera and Skin Tone Detection
paper_content:
It is well-known that taking portrait photographs with a built-in camera may create a red-eye effect. This effect is caused by the light entering the subject’s eye through the pupil and reflecting from the retina back to the sensor. These red eyes are probably one of the most important types of artifacts in portrait pictures. Many different techniques exist for removing these artifacts digitally after image capture. In most of the existing software tools, the user has to select the zone in which the red eye is located. The aim of our method is to automatically detect and correct the red eyes. Our algorithm detects the eye itself by finding the appropriate colors and shapes without input from the user. We use the basic knowledge that an eye is characterized by its shape and the white color of the sclera. Combining this intuitive approach with the detection of “skin” around the eye, we obtain a higher success rate than most of the tools we tested. Moreover, our algorithm works for any type of skin tone. The main goal of this algorithm is to accurately remove red eyes from a picture, while avoiding false positives completely, which is the biggest problem of camera-integrated algorithms or distributed software tools. At the same time, we want to keep the false negative rate as low as possible. We implemented this algorithm in a web-based application to allow people to correct their images online.
---
paper_title: Automatic red-eye detection and correction
paper_content:
"Red-eye" is a phenomenon that causes the eyes of flash photography subjects to appear unnaturally reddish in color. Though commercial solutions exist for red-eye correction, all of them require some measure of user intervention. A method is presented to automatically detect and correct redeye in digital images. First, faces are detected with a cascade of multi-scale classifiers. The red-eye pixels are then located with several refining masks computed over the facial region. The masks are created by thresholding per-pixel metrics, designed to detect red-eye artifacts. Once the redeye pixels have been found, the redness is attenuated with a tapered color desaturation. A detector implemented with this system corrected 95% of the red-eye artifacts in 200 tested images.
---
paper_title: A Brief Introduction to Boosting
paper_content:
Boosting is a general method for improving the accuracy of any given learning algorithm. This short paper introduces the boosting algorithm AdaBoost, and explains the underlying theory of boosting, including an explanation of why boosting often does not suffer from overfitting. Some examples of recent applications of boosting are also described.
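To make the reweighting idea concrete, here is a compact, textbook-style AdaBoost loop over one-dimensional decision stumps; it is a generic sketch, not the formulation of any particular detector cited in this survey.
```python
import numpy as np

def stump_predict(X, feat, thresh, polarity):
    # A decision stump on a single feature, returning labels in {-1, +1}.
    return np.where(polarity * X[:, feat] < polarity * thresh, 1, -1)

def adaboost_train(X, y, n_rounds=10):
    """y must be in {-1, +1}. Returns a list of (alpha, feat, thresh, polarity)."""
    n, d = X.shape
    w = np.full(n, 1.0 / n)                # uniform initial weights
    ensemble = []
    for _ in range(n_rounds):
        best = None
        for feat in range(d):
            for thresh in np.unique(X[:, feat]):
                for polarity in (1, -1):
                    pred = stump_predict(X, feat, thresh, polarity)
                    err = np.sum(w[pred != y])
                    if best is None or err < best[0]:
                        best = (err, feat, thresh, polarity, pred)
        err, feat, thresh, polarity, pred = best
        err = np.clip(err, 1e-10, 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)
        w *= np.exp(-alpha * y * pred)     # up-weight misclassified examples
        w /= w.sum()
        ensemble.append((alpha, feat, thresh, polarity))
    return ensemble

def adaboost_predict(ensemble, X):
    score = sum(a * stump_predict(X, f, t, p) for a, f, t, p in ensemble)
    return np.sign(score)
```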
---
paper_title: A fully automatic redeye detection and correction algorithm
paper_content:
A fully automatic redeye detection and correction algorithm was developed at Eastman Kodak Company Research Laboratories. The algorithm is highly sophisticated so that it is able to distinguish most redeye pairs from scene content. It is also highly optimized for execution speed and memory usage enabling it to be included in a variety of products. Detected redeyes are corrected so that the red color is removed, but the eye maintains a natural look.
---
paper_title: Safe Red-Eye Correction Plug-in Using Adaptive Methods
paper_content:
An important issue with red-eye correction is that it might result in image degradation. This can be due to the detection of false positives or, even in the case of correct detection, to an inadequate correction technique. Three correction methods are proposed and compared according to their image degradation risk and their expected perceptual quality improvement. Based on those analyses an adaptive system is designed which selects the correction strategy dependent on those measures and the detection confidence. Finally, both qualitative (visual preferences) and quantitative (pixel counts on manual segmented images) evaluation results are shown.
---
paper_title: Automated red-eye detection and correction in digital photographs
paper_content:
Caused by light reflected off the subject's retina, red-eye is a troublesome problem in consumer photography. Although most cameras have a red-eye reduction mode, the reality is that no on-camera system is completely effective. In this paper, we propose a fully automatic approach to detecting and correcting red-eyes in digital images. In order to detect red-eyes in a picture, a heuristic yet efficient algorithm is first adopted to detect a group of candidate red regions and then an eye classifier is utilized to confirm whether each candidate region is a human eye. Thereafter, each detected red-eye is corrected by the correction algorithm. In case a red-eye cannot be detected automatically, another algorithm is also provided to detect red-eyes manually with the user's interaction by clicking on an eye. Experimental results on about 300 images with various red-eye appearances demonstrate that the proposed solution is robust and effective.
---
paper_title: Automatic red-eye detection and correction
paper_content:
"Red-eye" is a phenomenon that causes the eyes of flash photography subjects to appear unnaturally reddish in color. Though commercial solutions exist for red-eye correction, all of them require some measure of user intervention. A method is presented to automatically detect and correct redeye in digital images. First, faces are detected with a cascade of multi-scale classifiers. The red-eye pixels are then located with several refining masks computed over the facial region. The masks are created by thresholding per-pixel metrics, designed to detect red-eye artifacts. Once the redeye pixels have been found, the redness is attenuated with a tapered color desaturation. A detector implemented with this system corrected 95% of the red-eye artifacts in 200 tested images.
---
paper_title: Towards automatic redeye effect removal
paper_content:
The redeye effect is typically formed in amateur photographs taken with a built-in camera flash. Analysis of the available techniques and products indicates that their efficiency in correcting this artifact is limited and their performance is inconsistent. In this work we propose a user friendly solution, which could be used to restore amateur photographs. In the proposed method the redeye effect is detected using a skin detection module and eye colors are restored using morphological image processing. The new method is computationally efficient, robust to parameter settings and versatile, as it can work in conjunction with a number of skin detection methods.
---
paper_title: Automatic redeye removal for smart enhancement of photos of unknown origin
paper_content:
The paper describes a modular procedure for automatic correction of redeye artifact in images of unknown origin, maintaining the natural appearance of the eye. First, a smart color balancing procedure is applied. This phase not only facilitates the subsequent steps of processing, but also improves the overall appearance of the output image. Combining the results of a color-based face detector and of a face detector based on a multi-resolution neural network the most likely facial regions are identified. Redeye is searched for only within these regions, seeking areas with high “redness” satisfying some geometric constraints. A novel redeye removal algorithm is then applied automatically to the red eyes identified, and opportunely smoothed to avoid unnatural transitions between the corrected and original parts. Experimental results on a set of over 450 images are reported.
---
paper_title: Safe Red-Eye Correction Plug-in Using Adaptive Methods
paper_content:
An important issue with red-eye correction is that it might result in image degradation. This can be due to the detection of false positives or, even in the case of correct detection, to an inadequate correction technique. Three correction methods are proposed and compared according to their image degradation risk and their expected perceptual quality improvement. Based on those analyses an adaptive system is designed which selects the correction strategy dependent on those measures and the detection confidence. Finally, both qualitative (visual preferences) and quantitative (pixel counts on manual segmented images) evaluation results are shown.
---
|
Title: A Review of Redeye Detection and Removal in Digital Images Through Patents
Section 1: INTRODUCTION
Description 1: Provide an introduction to the issue of redeye in photography, discussing its causes, frequency, and impact. Mention the importance of digital solutions for redeye removal and give an overview of the paper's structure.
Section 2: CHRONOLOGICAL REVIEW
Description 2: Present a timeline and review of patents related to redeye detection and removal. Discuss contributions and methods from major companies and highlight key developments in the technology.
Section 3: REDEYE DETECTION
Description 3: Describe the various strategies for detecting redeye in images. Differentiate between methods that reduce the search space and those that scan the entire image. Provide examples and explanations of each approach.
Section 4: RED EYE DETECTION MODULES
Description 4: Detail the components and processes involved in redeye detection modules, including color space conversion, segmentation, and pixel classification. Explain the importance of redness maps and different methodologies adopted to identify redeye pixels.
Section 5: REDEYE VALIDATION PROCESSES
Description 5: Outline the techniques used to validate detected redeye regions. Discuss criteria such as geometric constraints, the presence of sclera and glint, and skin tone analysis. Explain the steps taken to confirm candidate redeye areas.
Section 6: REDEYE CORRECTION
Description 6: Discuss the techniques and considerations for correcting redeye once detected. Explain different strategies for color correction, maintaining natural appearance, and avoiding abrupt variations. Include examples of various correction methods and their effectiveness.
Section 7: EVALUATION OF THE REDEYE DETECTION AND CORRECTION PROCESSES
Description 7: Describe how to evaluate the performance of redeye detection and correction processes. Include criteria and metrics used for assessment, and discuss common errors and their impacts on image quality.
Section 8: CURRENT & FUTURE DEVELOPMENTS
Description 8: Summarize the advancements in the redeye detection and removal field. Highlight areas needing further research and development, and discuss potential new challenges like peteye color correction. Suggest the creation of a reference dataset for consistent evaluation.
|
A Review of Medical Image Watermarking Requirements for Teleradiology
| 11 |
---
paper_title: Strict integrity control of biomedical images
paper_content:
The control of the integrity and authentication of medical images is becoming ever more important within the Medical Information Systems (MIS). The intra- and interhospital exchange of images, such as in the PACS (Picture Archiving and Communication Systems), and the ease of copying, manipulation and distribution of images have brought forth the security aspects. In this paper we focus on the role of watermarking for MIS security and address the problem of integrity control of medical images. We discuss alternative schemes to extract verification signatures and compare their tamper detection performance.
---
paper_title: Authentication and Data Hiding Using a Hybrid ROI-Based Watermarking Scheme for DICOM Images
paper_content:
Authenticating medical images using watermarking techniques has become a very popular area of research, and some works in this area have been reported worldwide recently. Besides authentication, many data-hiding techniques have been proposed to conceal patient’s data into medical images aiming to reduce the cost needed to store data and the time needed to transmit data when required. In this paper, we present a new hybrid watermarking scheme for DICOM images. In our scheme, two well-known techniques are combined to gain the advantages of both and fulfill the requirements of authentication and data hiding. The scheme divides the images into two parts, the region of interest (ROI) and the region of non-interest (RONI). Patient’s data are embedded into ROI using a reversible technique based on difference expansion, while tamper detection and recovery data are embedded into RONI using a robust technique based on discrete wavelet transform. The experimental results show the ability of hiding patient’s data with a very good visual quality, while ROI, the most important area for diagnosis, is retrieved exactly at the receiver side. The scheme also shows some robustness against certain levels of salt and pepper and cropping noise.
---
paper_title: Medical image security in a HIPAA mandated PACS environment.
paper_content:
Medical image security is an important issue when digital images and their pertinent patient information are transmitted across public networks. Mandates for ensuring health data security have been issued by the federal government such as Health Insurance Portability and Accountability Act (HIPAA), where healthcare institutions are obliged to take appropriate measures to ensure that patient information is only provided to people who have a professional need. Guidelines, such as digital imaging and communication in medicine (DICOM) standards that deal with security issues, continue to be published by organizing bodies in healthcare. However, there are many differences in implementation especially for an integrated system like picture archiving and communication system (PACS), and the infrastructure to deploy these security standards is often lacking. Over the past 6 years, members in the Image Processing and Informatics Laboratory, Childrens Hospital, Los Angeles/University of Southern California, have actively researched image security issues related to PACS and teleradiology. The paper summarizes our previous work and presents an approach to further research on the digital envelope (DE) concept that provides image integrity and security assurance in addition to conventional network security protection. The DE, including the digital signature (DS) of the image as well as encrypted patient information from the DICOM image header, can be embedded in the background area of the image as an invisible permanent watermark. The paper outlines the systematic development, evaluation and deployment of the DE method in a PACS environment. We have also proposed a dedicated PACS security server that will act as an image authority to check and certify the image origin and integrity upon request by a user, and meanwhile act also as a secure DICOM gateway to the outside connections and a PACS operation monitor for HIPAA supporting information.
---
paper_title: Digital watermarking of medical image using ROI information
paper_content:
Recently, medical images have been digitized through the development of computer science and the digitization of medical devices. Database services for medical images and long-term storage are needed because of the construction of PACS (Picture Archiving and Communication System) following DICOM (Digital Imaging and Communications in Medicine) standards, telemedicine, and so on. Furthermore, authentication and copyright protection are required to protect against illegal distortion and reproduction of medical information data. In this paper, we propose a digital watermarking technique for medical images that prevents illegal forgery after medical image data are transmitted remotely. A wrong diagnosis may occur if the watermark is embedded into the whole area of the image. Therefore, to increase invisibility, we embed the watermark into an area of the medical image outside the decision area used for diagnosis, called the region of interest (ROI) in this paper. The watermark is the bit-plane value of the wavelet transform of the decision area, used as a verification method for integrity. The experimental results show that the watermark embedded by the proposed algorithm can survive image processing operations such as JPEG lossy compression.
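A simplified sketch of the ROI/RONI split described above: derive a verification signature from the diagnostically relevant region and hide it in the least-significant bits outside that region. The rectangular ROI, SHA-256 digest, and plain LSB embedding are assumptions made for illustration; the paper itself derives the watermark from wavelet bit-planes of the decision area.
```python
import hashlib
import numpy as np

def embed_roi_signature(img, roi):
    """img: 2-D uint8 array; roi: (r0, r1, c0, c1) rectangle left untouched.
    Assumes the RONI holds at least 256 pixels."""
    r0, r1, c0, c1 = roi
    digest = hashlib.sha256(img[r0:r1, c0:c1].tobytes()).digest()
    bits = np.unpackbits(np.frombuffer(digest, dtype=np.uint8))   # 256 bits
    out = img.copy()
    roni_mask = np.ones(img.shape, bool)
    roni_mask[r0:r1, c0:c1] = False
    roni_idx = np.flatnonzero(roni_mask)[:bits.size]   # first RONI pixels
    flat = out.ravel()
    flat[roni_idx] = (flat[roni_idx] & 0xFE) | bits    # overwrite LSBs
    return out

def verify_roi_signature(img, roi):
    """Recompute the ROI digest and compare it with the LSBs read from RONI."""
    r0, r1, c0, c1 = roi
    digest = hashlib.sha256(img[r0:r1, c0:c1].tobytes()).digest()
    bits = np.unpackbits(np.frombuffer(digest, dtype=np.uint8))
    roni_mask = np.ones(img.shape, bool)
    roni_mask[r0:r1, c0:c1] = False
    roni_idx = np.flatnonzero(roni_mask)[:bits.size]
    return np.array_equal(img.ravel()[roni_idx] & 1, bits)
```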
---
paper_title: Lossless ROI Medical Image Watermarking Technique with Enhanced Security and High Payload Embedding
paper_content:
In this article, a new fragile, blind, high-payload-capacity, ROI (Region of Interest) preserving medical image watermarking (MIW) technique in the spatial domain for grayscale medical images is proposed. We present a watermarking scheme that combines lossless data compression and encryption techniques in application to medical images. The effectiveness of the proposed scheme, demonstrated through experiments on various medical images using image quality measures such as PSNR, MSE and MSSIM, enables us to argue that the method will help to maintain Electronic Patient Report (EPR)/DICOM data privacy and medical image integrity.
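The fidelity measures named above are easy to reproduce; a small helper for MSE and PSNR on 8-bit images is shown below (MSSIM would normally come from a separate SSIM implementation and is omitted here).
```python
import numpy as np

def mse(original, watermarked):
    # Mean squared error between two images of the same shape.
    diff = original.astype(np.float64) - watermarked.astype(np.float64)
    return np.mean(diff ** 2)

def psnr(original, watermarked, max_val=255.0):
    # Peak signal-to-noise ratio in dB; infinite for identical images.
    m = mse(original, watermarked)
    return float("inf") if m == 0 else 10.0 * np.log10(max_val ** 2 / m)
```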
---
paper_title: Security Protection of DICOM Medical Images Using Dual-Layer Reversible Watermarking with Tamper Detection Capability
paper_content:
Teleradiology applications and universal availability of patient records using web-based technology are rapidly gaining importance. Consequently, digital medical image security has become an important issue when images and their pertinent patient information are transmitted across public networks, such as the Internet. Health mandates such as the Health Insurance Portability and Accountability Act require healthcare providers to adhere to security measures in order to protect sensitive patient information. This paper presents a fully reversible, dual-layer watermarking scheme with tamper detection capability for medical images. The scheme utilizes concepts of public-key cryptography and reversible data-hiding technique. The scheme was tested using medical images in DICOM format. The results show that the scheme is able to ensure image authenticity and integrity, and to locate tampered regions in the images.
---
paper_title: Potential impact of HITECH security regulations on medical imaging
paper_content:
Title XIII of Division A and Title IV of Division B of the American Recovery and Reinvestment Act (ARRA) of 2009 [1] include a provision commonly referred to as the “Health Information Technology for Economic and Clinical Health Act” or “HITECH Act” that is intended to promote the electronic exchange of health information to improve the quality of health care. Subtitle D of the HITECH Act includes key amendments to strengthen the privacy and security regulations issued under the Health Insurance Portability and Accountability Act (HIPAA). The HITECH act also states that “the National Coordinator” must consult with the National Institute of Standards and Technology (NIST) in determining what standards are to be applied and enforced for compliance with HIPAA. This has led to speculation that NIST will recommend that the government impose the Federal Information Security Management Act (FISMA) [2], which was created by NIST for application within the federal government, as requirements to the public Electronic Health Records (EHR) community in the USA. In this paper we will describe potential impacts of FISMA on medical image sharing strategies such as teleradiology and outline how a strict application of FISMA or FISMA-based regulations could have significant negative impacts on information sharing between care providers.
---
paper_title: Digital Watermarking and Steganography
paper_content:
Digital audio, video, images, and documents are flying through cyberspace to their respective owners. Unfortunately, along the way, individuals may choose to intervene and take this content for themselves. Digital watermarking and steganography technology greatly reduces the instances of this by limiting or eliminating the ability of third parties to decipher the content that they have taken. The many techniques of digital watermarking (embedding a code) and steganography (hiding information) continue to evolve as applications that necessitate them do the same. The authors of this second edition provide an update on the framework for applying these techniques that they provided researchers and professionals in the first well-received edition. Steganography and steganalysis (the art of detecting hidden information) have been added to a robust treatment of digital watermarking, as many in each field research and deal with the other. New material includes watermarking with side information, QIM, and dirty-paper codes. The revision and inclusion of new material by these influential authors has created a must-own book for anyone in this profession. This new edition contains essential information on steganalysis and steganography; new concepts and new applications, including QIM, are introduced; and digital watermark embedding is given a complete update with new processes and applications.
---
paper_title: Relevance of watermarking in medical imaging
paper_content:
Because of the importance of the security issues in the management of medical information, we suggest the use of watermarking techniques to complete the existing measures for protecting medical images. We discuss the necessary requirements for such a system to be accepted by medical staff and its complementary role with respect with existing security systems. We present different scenarios, one devoted to the authentication and tracing of the images, the second to the integrity control of the patient's record.
---
paper_title: Reversible medical image watermarking for tamper detection and recovery
paper_content:
This research paper discusses the use of watermarking in medical images to ensure the authenticity and integrity of the image, and reviews some watermarking schemes that have been developed. A design for a reversible tamper detection and recovery watermarking scheme is then proposed. The watermarking scheme uses a 640x480x8-bit grayscale ultrasound image as a sample. The concepts of ROI (Region Of Interest) and RONI (Region Of Non-Interest) are applied. The embedded watermark can be used to detect tampering, and the image can be recovered. The watermark is also reversible.
---
paper_title: Privacy and security in teleradiology
paper_content:
Teleradiology is probably the most successful eHealth service available today. Its business model is based on the remote transmission of radiological images (e.g. X-ray and CT-images) over electronic networks, and on the interpretation of the transmitted images for diagnostic purposes. Two basic service models are commonly used in teleradiology today. The most common approach is based on the message paradigm (off-line model), but more developed teleradiology systems are based on the interactive use of PACS/RIS systems. Modern teleradiology is also more and more a cross-organisational or even cross-border service between service providers having different jurisdictions and security policies. This paper defines the requirements needed to make different teleradiology models trusted. Those requirements include a common security policy that covers all partners and entities, common security and privacy protection principles and requirements, controlled contracts between partners, and the use of security controls and tools that support the common security policy. The security and privacy protection of any teleradiology system must be planned in advance, and the necessary security and privacy enhancing tools should be selected (e.g. strong authentication, data encryption, non-repudiation services and audit-logs) based on the risk analysis and requirements set by the legislation. In any case the teleradiology system should fulfil ethical and regulatory requirements. Certification of the whole teleradiology service system including security and privacy is also proposed. In the future, teleradiology services will be an integrated part of pervasive eHealth. Security requirements for this environment including dynamic and context aware security services are also discussed in this paper.
---
paper_title: Watermarking of chest CT scan medical images for content authentication
paper_content:
A medical image is usually composed of a region of interest (ROI) and a region of non-interest (RONI). The ROI is the region that contains the important information from the diagnostic point of view, so it must be stored without any distortion. We have proposed a digital watermarking technique which avoids distortion of the image in the ROI by embedding the watermark information in the RONI. The watermark is composed of patient information, a hospital logo and a message authentication code computed using a hash function. BCH encryption of the watermark is performed beforehand to ensure that the embedded data are inaccessible to adversaries.
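The message-authentication-code part of such a scheme can be sketched with Python's standard hmac module. The payload layout, key handling, and fixed 32-byte MAC below are illustrative assumptions, and the BCH coding and the actual RONI embedding step are left out.
```python
import hashlib
import hmac

MAC_LEN = 32  # SHA-256 digest length in bytes

def build_watermark_payload(roi_bytes, patient_info, secret_key):
    """Patient data followed by a MAC over the diagnostically important
    region; the result would then be embedded into the RONI."""
    mac = hmac.new(secret_key, roi_bytes, hashlib.sha256).digest()
    return patient_info.encode("utf-8") + mac

def check_payload(roi_bytes, payload, secret_key):
    """Return (authentic?, patient_info) for a payload read back from the RONI."""
    patient_info, mac = payload[:-MAC_LEN], payload[-MAC_LEN:]
    expected = hmac.new(secret_key, roi_bytes, hashlib.sha256).digest()
    return hmac.compare_digest(mac, expected), patient_info.decode("utf-8")
```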
---
paper_title: Proposal for DICOM Multiframe Medical Image Integrity and Authenticity
paper_content:
This paper presents a novel algorithm to successfully achieve viable integrity and authenticity addition and verification of n-frame DICOM medical images using cryptographic mechanisms. The aim of this work is the enhancement of DICOM security measures, especially for multiframe images. Current approaches have limitations that should be properly addressed for improved security. The algorithm proposed in this work uses data encryption to provide integrity and authenticity, along with digital signature. Relevant header data and digital signature are used as inputs to cipher the image. Therefore, one can only retrieve the original data if and only if the images and the inputs are correct. The encryption process itself is a cascading scheme, where a frame is ciphered with data related to the previous frames, generating also additional data on image integrity and authenticity. Decryption is similar to encryption, featuring also the standard security verification of the image. The implementation was done in JAVA, and a performance evaluation was carried out comparing the speed of the algorithm with other existing approaches. The evaluation showed a good performance of the algorithm, which is an encouraging result to use it in a real environment.
---
paper_title: A hierarchical digital watermarking method for image tamper detection and recovery
paper_content:
In this paper, we present an efficient and effective digital watermarking method for image tamper detection and recovery. Our method is efficient as it only uses simple operations such as parity check and comparison between average intensities. It is effective because the detection is based on a hierarchical structure so that the accuracy of tamper localization can be ensured. That is, if a tampered block is not detected in level-1 inspection, it will be detected in level-2 or level-3 inspection with a probability of nearly 1. Our method is also very storage-effective, as it only requires a secret key and a public chaotic mixing algorithm to recover a tampered image. The experimental results demonstrate that the precision of tamper detection and localization is 99.6% and 100% after level-2 and level-3 inspection, respectively. The tamper recovery rate is better than 93% for an image that is less than half tampered. As compared with the method in Celik et al. [IEEE Trans. Image Process. 11(6) (2002) 585], our method is not only as simple and as effective in tamper detection and localization, it also provides the capability of tamper recovery by trading off the quality of the watermarked images by about 5 dB.
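The parity-check idea can be illustrated as follows: embed an even-parity bit of each block's upper bit-planes into one LSB, then flag blocks whose parity no longer holds. The 4x4 block size and single parity bit per block are assumptions for this sketch; the paper's three-level hierarchy and chaotic-mixing recovery are not reproduced.
```python
import numpy as np

BLOCK = 4

def embed_parity(img):
    """Store the parity of each block's upper 7 bits in one LSB of the block."""
    out = img.copy()
    h, w = out.shape
    for r in range(0, h - h % BLOCK, BLOCK):
        for c in range(0, w - w % BLOCK, BLOCK):
            block = out[r:r + BLOCK, c:c + BLOCK]
            parity = int(np.sum(block >> 1)) & 1          # parity of upper bits
            block[0, 0] = (block[0, 0] & 0xFE) | parity   # write it into an LSB
    return out

def detect_tamper(img):
    """Return top-left coordinates of blocks whose stored parity no longer matches."""
    bad = []
    h, w = img.shape
    for r in range(0, h - h % BLOCK, BLOCK):
        for c in range(0, w - w % BLOCK, BLOCK):
            block = img[r:r + BLOCK, c:c + BLOCK]
            if (int(np.sum(block >> 1)) & 1) != int(block[0, 0] & 1):
                bad.append((r, c))
    return bad
```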
---
paper_title: Hybrid watermarking of medical images for ROI authentication and recovery
paper_content:
Medical image data require strict security, confidentiality and integrity. To achieve these stringent requirements, we propose a hybrid watermarking method which embeds a robust watermark in the region of non-interest (RONI) for achieving security and confidentiality, while integrity control is achieved by inserting a fragile watermark into the region of interest (ROI). First, the information to be modified in the ROI is separated and inserted into the RONI, which is later used to recover the original ROI. Secondly, to avoid underflow and overflow, a location map is generated for embedding the watermark block-wise while leaving out the suspected blocks. This avoids the preprocessing step of histogram modification. The image visual quality, as well as tamper localization, is evaluated. We use the weighted peak signal-to-noise ratio for measuring the image quality of watermarked images. Experimental results show that the proposed method outperforms the existing hybrid watermarking techniques.
---
paper_title: Four-scanning attack on hierarchical digital watermarking method for image tamper detection and recovery
paper_content:
In a recent paper presented by Lin et al., a block-based hierarchical watermarking algorithm for digital images is proposed. It adopts a parity check and an intensity-relation check for image tamper detection. Their experimental results indicate that the precision of tamper detection and localization is 99.6% and 100% after level-2 and level-3 inspections, respectively. The proposed attacks demonstrate that this watermarking algorithm is fundamentally flawed in that the attacker can tamper with a watermarked image easily without being detected. In this paper, a four-scanning attack aimed at Lin et al.'s watermarking method is presented to create tampered images. Furthermore, in case they use encryption to protect their 3-tuple-watermark, we propose a blind attack to tamper with watermarked images without being detected. Experimental results are given to support and enhance our conclusions, and demonstrate that our attacks are successful in tampering with watermarked images.
---
paper_title: Digital watermarking of medical image using ROI information
paper_content:
Recently, medical images have been digitized through the development of computer science and the digitization of medical devices. Database services for medical images and long-term storage are needed because of the construction of PACS (Picture Archiving and Communication System) following DICOM (Digital Imaging and Communications in Medicine) standards, telemedicine, and so on. Furthermore, authentication and copyright protection are required to protect against illegal distortion and reproduction of medical information data. In this paper, we propose a digital watermarking technique for medical images that prevents illegal forgery after medical image data are transmitted remotely. A wrong diagnosis may occur if the watermark is embedded into the whole area of the image. Therefore, to increase invisibility, we embed the watermark into an area of the medical image outside the decision area used for diagnosis, called the region of interest (ROI) in this paper. The watermark is the bit-plane value of the wavelet transform of the decision area, used as a verification method for integrity. The experimental results show that the watermark embedded by the proposed algorithm can survive image processing operations such as JPEG lossy compression.
---
paper_title: Security for the digital information age of medicine: Issues, applications, and implementation
paper_content:
Privacy and integrity of medical records is expected by patients. This privacy and integrity is often mandated by regulations. Traditionally, the security of medical records has been based on physical lock and key. As the storage of patient record information shifts from paper to digital, new security concerns arise. Digital cryptographic methods provide solutions to many of these new concerns. In this article we give an overview of new security concerns, new legislation mandating secure medical records and solutions providing security.
---
paper_title: Potential impact of HITECH security regulations on medical imaging
paper_content:
Title XIII of Division A and Title IV of Division B of the American Recovery and Reinvestment Act (ARRA) of 2009 [1] include a provision commonly referred to as the “Health Information Technology for Economic and Clinical Health Act” or “HITECH Act” that is intended to promote the electronic exchange of health information to improve the quality of health care. Subtitle D of the HITECH Act includes key amendments to strengthen the privacy and security regulations issued under the Health Insurance Portability and Accountability Act (HIPAA). The HITECH act also states that “the National Coordinator” must consult with the National Institute of Standards and Technology (NIST) in determining what standards are to be applied and enforced for compliance with HIPAA. This has led to speculation that NIST will recommend that the government impose the Federal Information Security Management Act (FISMA) [2], which was created by NIST for application within the federal government, as requirements to the public Electronic Health Records (EHR) community in the USA. In this paper we will describe potential impacts of FISMA on medical image sharing strategies such as teleradiology and outline how a strict application of FISMA or FISMA-based regulations could have significant negative impacts on information sharing between care providers.
---
paper_title: Relevance of watermarking in medical imaging
paper_content:
Because of the importance of the security issues in the management of medical information, we suggest the use of watermarking techniques to complete the existing measures for protecting medical images. We discuss the necessary requirements for such a system to be accepted by medical staff and its complementary role with respect with existing security systems. We present different scenarios, one devoted to the authentication and tracing of the images, the second to the integrity control of the patient's record.
---
paper_title: Privacy and security in teleradiology
paper_content:
Teleradiology is probably the most successful eHealth service available today. Its business model is based on the remote transmission of radiological images (e.g. X-ray and CT-images) over electronic networks, and on the interpretation of the transmitted images for diagnostic purposes. Two basic service models are commonly used in teleradiology today. The most common approach is based on the message paradigm (off-line model), but more developed teleradiology systems are based on the interactive use of PACS/RIS systems. Modern teleradiology is also more and more a cross-organisational or even cross-border service between service providers having different jurisdictions and security policies. This paper defines the requirements needed to make different teleradiology models trusted. Those requirements include a common security policy that covers all partners and entities, common security and privacy protection principles and requirements, controlled contracts between partners, and the use of security controls and tools that support the common security policy. The security and privacy protection of any teleradiology system must be planned in advance, and the necessary security and privacy enhancing tools should be selected (e.g. strong authentication, data encryption, non-repudiation services and audit-logs) based on the risk analysis and requirements set by the legislation. In any case the teleradiology system should fulfil ethical and regulatory requirements. Certification of the whole teleradiology service system including security and privacy is also proposed. In the future, teleradiology services will be an integrated part of pervasive eHealth. Security requirements for this environment including dynamic and context aware security services are also discussed in this paper.
---
paper_title: How to deal with security issues in teleradiology
paper_content:
Abstract The use of teleradiological systems for medical image communication is increasing significantly. Digital images can be transferred over public telephone (e.g. ISDN) lines to colleagues for interpretation and/or consultation. Thus, a new quality is being introduced into the process of radiological diagnostics. However, technical implementation of such systems is accompanied by little consideration of legal, i.e. data protection and security, issues. In this paper we describe a concept for data protection in teleradiology which unites aspects of privacy and security as well as user aspects. After highlighting the legal situation in Germany we describe the methodology used for deriving the security profile for teleradiology in Germany. As a result the set of security measures which have to be employed with a teleradiology system is listed. A detailed description follows of how the software requirements are implemented in the teleradiology software MEDICUS.
---
paper_title: Strict integrity control of biomedical images
paper_content:
The control of the integrity and authentication of medical images is becoming ever more important within the Medical Information Systems (MIS). The intra- and interhospital exchange of images, such as in the PACS (Picture Archiving and Communication Systems), and the ease of copying, manipulation and distribution of images have brought forth the security aspects. In this paper we focus on the role of watermarking for MIS security and address the problem of integrity control of medical images. We discuss alternative schemes to extract verification signatures and compare their tamper detection performance.
---
paper_title: Secure method for sectional image archiving and transmission
paper_content:
Purpose: Data security becomes an important issue in telemedicine when medical information is transmitted over wide area networks. Generally, security is characterized in terms of privacy, authenticity and integrity of digital data. We present a method here which can meet the requirements of privacy, authenticity, and integrity for archiving and transmitting sectional images such as CT and MR. Methods: The method is described as follows: firstly, image segmentation was performed and some patient information was read from the image's DICOM header. Second, a digital signature for the segmented image was produced using the image sender's private key. Afterwards, the digital signature and patient information were concatenated and embedded into the background area of the image. Finally, the whole image was encrypted to form a digital envelope using the receiver's public key. Results: (1) The image can only be decrypted and read by authorized users who own the private key of the receiving site. (2) The authenticity and integrity can be tested by signature verification. Conclusions: The preliminary results demonstrate that the method we presented here is an effective method for the secure archiving and transmission of sectional medical images.
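A minimal digital-envelope sketch in the spirit of this method, written against the third-party Python `cryptography` package: sign the image bytes with the sender's RSA key, encrypt the image plus signature under a symmetric session key, and wrap that key with the receiver's public key. Key distribution, DICOM header handling, and the step of embedding the signature into the image background are omitted, and the key sizes are illustrative.
```python
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

SIG_LEN = 256  # byte length of an RSA-2048 PSS signature

sender_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
receiver_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

def make_envelope(image_bytes):
    """Sign, then encrypt image+signature, then wrap the session key."""
    signature = sender_key.sign(
        image_bytes,
        padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                    salt_length=padding.PSS.MAX_LENGTH),
        hashes.SHA256())
    session_key = Fernet.generate_key()
    ciphertext = Fernet(session_key).encrypt(image_bytes + signature)
    wrapped_key = receiver_key.public_key().encrypt(
        session_key,
        padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                     algorithm=hashes.SHA256(), label=None))
    return ciphertext, wrapped_key

def open_envelope(ciphertext, wrapped_key):
    """Unwrap the session key, decrypt, and verify the sender's signature."""
    session_key = receiver_key.decrypt(
        wrapped_key,
        padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                     algorithm=hashes.SHA256(), label=None))
    plaintext = Fernet(session_key).decrypt(ciphertext)
    image_bytes, signature = plaintext[:-SIG_LEN], plaintext[-SIG_LEN:]
    sender_key.public_key().verify(   # raises InvalidSignature if tampered
        signature, image_bytes,
        padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                    salt_length=padding.PSS.MAX_LENGTH),
        hashes.SHA256())
    return image_bytes
```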
---
paper_title: Fundamentals of Network Security
paper_content:
Basic Security Concepts - Why is Computer and Network Security Important. Background and History. The Security Trinity. Information Security. Risk Assessment. Security Models. Basic Terminology. More Basic Terminology. Threats, Vulnerabilities and Attacks - Protocols. The OSI Reference Model. TCP/IP Protocol Suite. Useful Web Sites. Search Engines. Mailing Lists. Encryption, Digital Signatures and Certification - Cryptography. Stream Ciphers. Breaking Ciphers. Block Ciphers. Encryption. Public Key Cryptosystems. Message Integrity. Authentication. Digital Signatures. Competing Standards. Digital Certificates. Limitations of Digital Certificates. Certificate Authorities. Public Key Infrastructure. The Future. The Limitations of Encryption. Kerberos - How Kerberos Works. Kerberos - Limitations. Encryption on the WWW - The World Wide Web. Secure Sockets Layer (HTTPS). Secure HTTP (SHTTP). Microsoft's Internet Explorer. Viewing Digital Certificates with Internet Explorer. Viewing the Encryption Strength of IE5. Viewing Certification Authorities with IE5. Netscape Navigator. Viewing Digital Certificates with Navigator. Authenticode Certificates. E-Mail - E-Mail Issues. Secure E-Mail Protocols. Web-Based E-Mail Services. Security of Stored Messages. Identity: Spoofing and Hiding. E-Mail as a Weapon. E-Mail Policies. E-Mail Privacy. Auto-Responses. Operating System Security - Passwords. Password Attacks. Onetime Passwords. Access Control. Data Redundancy. General Recommendations. Modems. Useful Tools. LAN Security - LAN Guidelines. Controlling End-User Access. Concurrent Logins. Available Disk Space. Restrictions to Location or Workstation. Time/Day Restrictions. Access to Directories and Trustee Rights. File Attributes. Other Privileges. Single Sign-On. Policy-Based Network Management. Honeypot Systems. Network Segmentation. Static IP Addresses vs. DHCP. Media and Protocols - Network Media. Plenum Cabling and Risers. WANs. Redundancy and Alternative Connections. Routers and SNMP - Router Issues. SNMP. Virtual Private Networks - Encryption on the Network. Node-to-Node Encryption. End-to-End Encryption. Where to Encrypt. Virtual Private Networks. PPTP. L2TP. IPSec. SOCKS. Firewalls - Firewalls Pros and Cons. Types of Firewalls. Packet Filters vs. Proxies. Firewall Configurations. Restricting Users' Access to the Internet. Firewall Products. Personal Firewalls. Biometrics - Identification and Authentication. Biometric Identification and Authentication. Biometric Identification Reliability. Backup Authentication. Environmental Conditions. User Acceptance. Security of the Biometric System. Interoperability. Costs vs. Savings. Policies and Procedures - Policies vs. Procedures. Information Security Policy Objectives. Developing Security Policies. Policy and Procedure Manuals. Policy Awareness & Education. Policy Enforcement. Policy Format. Security Policy Suggestions. Information Protection Team. Crisis Management Planning. Sources for Information Policies. Auditing and Intrusion Detection - What is an Audit. Operational Security Audits. System Security Auditing. Activity and Usage Auditing. Audit Mistakes. Deficiencies of Traditional Audit Techniques. Intrusion Detection. Intrusion Detection Systems. Host-Based Intrusion Detection Systems. Network-Based Intrusion Detection Systems. Knowledge-Based Intrusion Detection Systems. Statistical-Based Intrusion Detection Systems. Defense In-Depth Approach. Future Directions. Crisis Management Planning - Crisis Management. Disaster Recovery Planning. Computer Security Incident Response Plan. Browser Security - Cookie Files. Cache Files. Autocomplete.
---
paper_title: Medical image security in a HIPAA mandated PACS environment.
paper_content:
Medical image security is an important issue when digital images and their pertinent patient information are transmitted across public networks. Mandates for ensuring health data security have been issued by the federal government such as Health Insurance Portability and Accountability Act (HIPAA), where healthcare institutions are obliged to take appropriate measures to ensure that patient information is only provided to people who have a professional need. Guidelines, such as digital imaging and communication in medicine (DICOM) standards that deal with security issues, continue to be published by organizing bodies in healthcare. However, there are many differences in implementation especially for an integrated system like picture archiving and communication system (PACS), and the infrastructure to deploy these security standards is often lacking. Over the past 6 years, members in the Image Processing and Informatics Laboratory, Childrens Hospital, Los Angeles/University of Southern California, have actively researched image security issues related to PACS and teleradiology. The paper summarizes our previous work and presents an approach to further research on the digital envelope (DE) concept that provides image integrity and security assurance in addition to conventional network security protection. The DE, including the digital signature (DS) of the image as well as encrypted patient information from the DICOM image header, can be embedded in the background area of the image as an invisible permanent watermark. The paper outlines the systematic development, evaluation and deployment of the DE method in a PACS environment. We have also proposed a dedicated PACS security server that will act as an image authority to check and certify the image origin and integrity upon request by a user, and meanwhile act also as a secure DICOM gateway to the outside connections and a PACS operation monitor for HIPAA supporting information.
---
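The digital envelope (DE) idea in the preceding entry pairs an integrity digest of the pixel data with header-derived patient information and embeds the result invisibly in the image background. The sketch below is only an illustration of that general idea, not the authors' DE algorithm: it assumes an 8-bit grayscale image held as a NumPy array, uses a plain SHA-256 digest where the real scheme would use a digital signature, and hides the payload in the least significant bits of a caller-supplied background mask.

```python
# Illustrative sketch only, not the cited paper's algorithm.
import hashlib
import numpy as np

def build_envelope(image: np.ndarray, patient_info: bytes) -> bytes:
    """Digest of the pixel data plus header-derived bytes; a real DE would sign the digest."""
    # Hash the image with LSBs cleared so the digest stays reproducible after embedding
    # (assumes a uint8 grayscale image).
    digest = hashlib.sha256((image & 0xFE).tobytes()).digest()
    return digest + patient_info

def embed_lsb(image: np.ndarray, payload: bytes, background: np.ndarray) -> np.ndarray:
    """Hide the payload bits in the LSBs of pixels selected by a boolean background mask."""
    bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
    slots = np.flatnonzero(background.ravel())
    if slots.size < bits.size:
        raise ValueError("background region too small for the payload")
    flat = image.ravel().copy()
    flat[slots[: bits.size]] = (flat[slots[: bits.size]] & 0xFE) | bits
    return flat.reshape(image.shape)
```

Verification would recompute the digest over the LSB-cleared pixel data and compare it with the copy extracted from the background region.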
paper_title: Medical Image Authentication Using DPT Watermarking: A Preliminary Attempt
paper_content:
Secure authentication of digital medical image content provides great value to the e-Health community and the medical insurance industry. Fragile watermarking has been proposed as a mechanism to authenticate digital medical images securely. Transform-domain watermarking schemes are typically slower than spatial-domain ones owing to the overhead of computing the transform coefficients. In this paper, we propose a new Discrete Pascal Transform based watermarking technique. Preliminary experimental results show its authentication capability. Possible improvements to the proposed scheme are also presented before the conclusions.
---
paper_title: Digital Watermarking and Steganography
paper_content:
Digital audio, video, images, and documents are flying through cyberspace to their respective owners. Unfortunately, along the way, individuals may choose to intervene and take this content for themselves. Digital watermarking and steganography technology greatly reduces such instances by limiting or eliminating the ability of third parties to decipher the content they have taken. The many techniques of digital watermarking (embedding a code) and steganography (hiding information) continue to evolve as the applications that necessitate them do the same. The authors of this second edition provide an update on the framework for applying these techniques that they provided researchers and professionals in the first well-received edition. Steganography and steganalysis (the art of detecting hidden information) have been added to a robust treatment of digital watermarking, as many in each field research and deal with the other. New material includes watermarking with side information, QIM, and dirty-paper codes. The revision and inclusion of new material by these influential authors has created a must-own book for anyone in this profession. This new edition contains essential information on steganalysis and steganography, introduces new concepts and applications including QIM, and gives digital watermark embedding a complete update with new processes and applications.
---
paper_title: Relevance of watermarking in medical imaging
paper_content:
Because of the importance of security issues in the management of medical information, we suggest the use of watermarking techniques to complement the existing measures for protecting medical images. We discuss the requirements necessary for such a system to be accepted by medical staff and its complementary role with respect to existing security systems. We present different scenarios, one devoted to the authentication and tracing of the images, the second to the integrity control of the patient's record.
---
paper_title: Watermarking of chest CT scan medical images for content authentication
paper_content:
A medical image is usually composed of a region of interest (ROI) and a region of non-interest (RONI). The ROI is the region that contains the important information from a diagnostic point of view, so it must be stored without any distortion. We have proposed a digital watermarking technique which avoids distortion of the image in the ROI by embedding the watermark information in the RONI. The watermark is composed of patient information, the hospital logo and a message authentication code computed using a hash function. BCH encryption of the watermark is performed beforehand to ensure that the embedded data is inaccessible to adversaries.
---
paper_title: Conception and limits of robust perceptual hashing: towards side information assisted hash functions
paper_content:
In this paper, we consider some basic concepts behind the design of existing robust perceptual hashing techniques for content identification. We show the limits of robust hashing from the communication perspectives as well as propose an approach that is able to overcome these shortcomings in certain setups. The consideration is based on both achievable rate and probability of error. We use the fact that most robust hashing algorithms are based on dimensionality reduction using random projections and quantization. Therefore, we demonstrate the corresponding achievable rate and probability of error based on random projections and compare with the results for the direct domain. The effect of dimensionality reduction is studied and the corresponding approximations are provided based on the Johnson-Lindenstrauss lemma. Side-information assisted robust perceptual hashing is proposed as a solution to the above shortcomings.
---
paper_title: Experiment of Tamper Detection and Recovery Watermarking in PACS
paper_content:
Medical images such as x-rays, ultrasounds and MRI (Magnetic Resonance Imaging) play an important role in helping physicians to diagnose a disease or body condition. These images can be tampered with using image processing tools that are easily available. The use of security measures such as watermarking can protect the integrity of the images. Numerous watermarking schemes with basic security functions, and even tampered-image recovery, are available, but there is no research on the experimentation of watermarking in an operational environment that involves PACS (Picture Archiving and Communication Systems). This paper focuses on an experiment with a selected watermarking scheme running in a simulated operational environment. The watermarked images are tested to assess the scheme's effectiveness by comparing its recovery rates.
---
paper_title: JPEG 2000 and Digital Watermarking Technique Using in Medical Image
paper_content:
The Picture Archiving and Communication System (PACS) was introduced to computerize the medical system and enable telediagnosis between hospitals. It is becoming possible to create, store, and transmit medical images via PACS. There has been growing interest in protecting medical images, which carry an enormous amount of information. To improve transmission speed among hospitals, the medical image should be compressed with JPEG 2000 at a high compression ratio. This paper proposes an algorithm that utilizes both JPEG 2000 and robust watermarking for protection and compression of the medical image. With the proposed algorithm, it takes considerably less time to perform JPEG 2000 compression and watermarking than when they are done separately. Based on the experimental results, the proposed algorithm takes 0.72 seconds, compared with 1.11 seconds when the two operations are done separately. We confirmed that the proposed algorithm is faster than performing the operations separately.
---
paper_title: Reversible medical image watermarking for tamper detection and recovery
paper_content:
This research paper discusses the use of watermarking in medical images to ensure the authenticity and integrity of the image and reviews some watermarking schemes that have been developed. A design for a reversible tamper detection and recovery watermarking scheme is then proposed. The watermarking scheme uses a 640x480x8-bit ultrasound grayscale image as a sample. The concepts of ROI (Region Of Interest) and RONI (Region Of Non-Interest) are applied. The embedded watermark can be used to detect tampering, and recovery of the image can be performed. The watermark is also reversible.
---
paper_title: Security models of digital watermarking
paper_content:
Digital watermarking, traditionally modeled as communication with side information, is generally considered to have important potential applications in various scenarios such as digital rights management. However, the current literature mainly focuses on robustness, capacity and imperceptibility; there is no systematic formal approach to tackling the security issues of watermarking. On one hand, the threat models in many previous works are not sufficiently established, which results in somewhat superficial or even flawed security analysis. On the other hand, there is no rigorous model for watermarking in general that allows useful analysis in practice. There have been some efforts to clarify the threat models and formulate rigorous watermarking models, but there are also many other cases where security issues are treated lightly or incorrectly. In this paper, we survey various security notions and models in previous work, and discuss possible future research directions.
---
paper_title: Non-repudiation oblivious watermarking schema for secure digital video distribution
paper_content:
This paper presents a mechanism and algorithm for creating undeniable watermarks. It assumes a system where a content owner or provider uses outside agents to distribute its content. Content watermarked by distribution agents using this system will be undeniably recognizable by the content provider as originating with that distribution agent. That is to say, given N distribution agents, the content provider will be able to tell which distribution agent watermarked the content. The system does not allow any distribution agent to watermark content so that it would appear to have been watermarked by another agent, nor does it allow the content provider to watermark content so that it would appear to have been watermarked by a particular distribution agent. This allows the content provider to place a high degree of trust in the identification of the distribution agent and trace the "leak" locations of pirated copies of videos.
---
paper_title: Tamper Detection and Recovery for Medical Images Using Near-lossless Information Hiding Technique
paper_content:
Digital medical images are very easy to modify for illegal purposes. For example, microcalcification in a mammogram is an important diagnostic clue, and it can be wiped off intentionally for insurance purposes or added intentionally to a normal mammogram. In this paper, we propose two methods for tamper detection and recovery of a medical image. A 1024 × 1024 x-ray mammogram was chosen to test the ability to detect tampering and recover the image. First, the medical image is divided into several blocks. For each block, an adaptive robust digital watermarking method combined with the modulo operation is used to hide both the authentication message and the recovery information. In the first method, each block is embedded with the authentication message and the recovery information of other blocks. Because the recovered block is too small and excessively compressed, the concept of region of interest (ROI) is introduced in the second method. If there are no tampered blocks, the original image can be obtained with only the stego image. When the ROI, such as a microcalcification in the mammogram, is tampered with, an approximate image is obtained from other blocks. The experimental results show that the proposed near-lossless method effectively detects a tampered medical image and recovers the original ROI image. In this study, an adaptive robust digital watermarking method combined with the modulo-256 operation was chosen to achieve information hiding and image authentication. With the proposed method, any random change to the stego image will be detected with high probability.
---
paper_title: The integration of medical images with the electronic patient record and their web-based distribution
paper_content:
Medical images are currently created digitally and stored in the radiology department’s picture archiving and communication system. Reports are usually stored in the electronic patient record of other information systems, such as the radiology information system (RIS) and the hospital information system (HIS). But high-quality services can only be provided if electronic patient record data is integrated with digital images in picture archiving and communication systems. Clinicians should be able to access both systems’ data in an integrated and consistent way as part of their regular working environment, whether HIS or RIS. Also, this system should allow for teleconferencing with other users, e.g., for consultation with a specialist in the radiology department. This article describes a web-based solution that integrates the digital images of picture archiving and communication systems with electronic patient record/HIS/RIS data and has built-in teleconferencing functionality. This integration has been successfully tested using three different commercial RIS and HIS products.
---
paper_title: Applications of data hiding in digital images
paper_content:
Summary form only given, as follows. The author introduces data hiding in digital imagery as a new and powerful technology with applications including robust digital watermarking of images for copyright protection, image fingerprinting and authentication, tamper detection in digital imagery, covert (invisible) communication using images, access control, copy control in DVDs, embedding subtitles and audio tracks in video signals, etc. Data hiding is a highly multidisciplinary field that combines image and signal processing with cryptography, communication theory, coding theory, signal compression, and the theory of visual perception.
---
paper_title: A data-hiding technique with authentication, integration, and confidentiality for electronic patient records
paper_content:
A data-hiding technique called the "bipolar multiple-number base" was developed to provide capabilities of authentication, integration, and confidentiality for an electronic patient record (EPR) transmitted among hospitals through the Internet. The proposed technique is capable of hiding those EPR related data such as diagnostic reports, electrocardiogram, and digital signatures from doctors or a hospital into a mark image. The mark image could be the mark of a hospital used to identify the origin of an EPR. Those digital signatures from doctors and a hospital could be applied for the EPR authentication. Thus, different types of medical data can be integrated into the same mark image. The confidentiality is ultimately achieved by decrypting the EPR related data and digital signatures with an exact copy of the original mark image. The experimental results validate the integrity and the invisibility of the hidden EPR related data. This newly developed technique allows all of the hidden data to be separated and restored perfectly by authorized users.
---
paper_title: Survey of Medical Image Watermarking Algorithms
paper_content:
Watermarking in medical images is a new area of research, and some works in this area have recently been reported worldwide. Most of the works concern tamper detection in the images and embedding of Electronic Patient Record (EPR) data in the medical images. Watermarked medical images can be used for transmission, storage or telediagnosis. Tamper detection watermarks are useful for locating the regions in the image where manipulations have been made. Hiding EPR data in images improves the confidentiality of the patient data, saves storage space and reduces the bandwidth required for transmission of the images. This paper discusses various aspects of medical image watermarking and reviews several watermarking algorithms originally proposed for medical images.
---
paper_title: Lossless ROI Medical Image Watermarking Technique with Enhanced Security and High Payload Embedding
paper_content:
In this article, a new fragile, blind, high-payload-capacity, ROI (Region of Interest) preserving medical image watermarking (MIW) technique in the spatial domain for grayscale medical images is proposed. We present a watermarking scheme that combines a lossless data compression and encryption technique in application to medical images. The effectiveness of the proposed scheme, demonstrated through experiments on various medical images using image quality metrics such as PSNR, MSE and MSSIM, enables us to argue that the method will help to maintain Electronic Patient Record (EPR)/DICOM data privacy and medical image integrity.
---
paper_title: Relevance of watermarking in medical imaging
paper_content:
Because of the importance of security issues in the management of medical information, we suggest the use of watermarking techniques to complement the existing measures for protecting medical images. We discuss the requirements necessary for such a system to be accepted by medical staff and its complementary role with respect to existing security systems. We present different scenarios, one devoted to the authentication and tracing of the images, the second to the integrity control of the patient's record.
---
paper_title: The Use of Digital Watermarking for Intelligence Multimedia Document Distribution
paper_content:
Digital watermarking is a promising technology to embed information as unperceivable signals in digital contents. Various watermarking techniques have been proposed to protect the copyrights of multimedia digital contents in Internet trading so that ownership of the contents can be determined in subsequent copyright disputes. However, their application to preventing unauthorized distribution of intelligence documents has not been studied. In this paper, we propose a watermark-based document distribution protocol, which complements conventional cryptography-based access control schemes, to address the problem of tracing unauthorized distribution of sensitive intelligence documents. The reinforcement of document distribution policies requires concrete support for non-repudiation in the distribution process. The distribution protocol is adapted from our previous work on a watermarking infrastructure for enterprise document management. It makes use of intelligence user certificates to embed the identities of the users to whom the documents are distributed into the intelligence documents themselves. In particular, keeping these identities secret between document providers and users (yet traceable upon disputes) is a key contribution of this protocol in support of intelligence applications. We also outline an implementation of the distribution protocol and the watermarking scheme employed.
---
paper_title: Watermarking of chest CT scan medical images for content authentication
paper_content:
A medical image is usually composed of a region of interest (ROI) and a region of non-interest (RONI). The ROI is the region that contains the important information from a diagnostic point of view, so it must be stored without any distortion. We have proposed a digital watermarking technique which avoids distortion of the image in the ROI by embedding the watermark information in the RONI. The watermark is composed of patient information, the hospital logo and a message authentication code computed using a hash function. BCH encryption of the watermark is performed beforehand to ensure that the embedded data is inaccessible to adversaries.
---
paper_title: Medical image security and EPR hiding using Shamir's secret sharing scheme
paper_content:
Medical applications such as telediagnosis require information exchange over insecure networks. Therefore, protection of the integrity and confidentiality of the medical images is an important issue. Another issue is to store electronic patient record (EPR) in the medical image by steganographic or watermarking techniques. Studies reported in the literature deal with some of these issues but not all of them are satisfied in a single method. A medical image is distributed among a number of clinicians in telediagnosis and each one of them has all the information about the patient's medical condition. However, disclosing all the information about an important patient's medical condition to each of the clinicians is a security issue. This paper proposes a (k, n) secret sharing scheme which shares medical images among a health team of n clinicians such that at least k of them must gather to reveal the medical image to diagnose. Shamir's secret sharing scheme is used to address all of these security issues in one method. The proposed method can store longer EPR strings along with better authenticity and confidentiality properties while satisfying all the requirements as shown in the results.
---
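To make the (k, n) sharing idea of the preceding entry concrete, here is a minimal sketch of Shamir's scheme applied to a single pixel value over the prime field GF(257). It illustrates only the underlying primitive, not the paper's full construction (which also covers EPR embedding and authenticity), and it assumes Python 3.8+ for the modular inverse via pow.

```python
# Illustrative sketch only, not the cited paper's construction.
import random

PRIME = 257  # smallest prime above the 8-bit pixel range

def make_shares(secret: int, k: int, n: int):
    """Split one pixel value into n shares; any k of them reconstruct it."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]
    return [(x, sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret from any k shares."""
    secret = 0
    for j, (xj, yj) in enumerate(shares):
        num, den = 1, 1
        for m, (xm, _) in enumerate(shares):
            if m != j:
                num = (num * -xm) % PRIME
                den = (den * (xj - xm)) % PRIME
        secret = (secret + yj * num * pow(den, -1, PRIME)) % PRIME  # pow(..., -1, p): modular inverse
    return secret

# Example: a pixel value 200 shared among 5 clinicians; any 3 can recover it.
shares = make_shares(200, k=3, n=5)
assert reconstruct(shares[:3]) == 200
```

Sharing a whole image amounts to repeating this per pixel (or per block), which is what makes the scheme attractive for distributing a scan across a team of clinicians.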
paper_title: Effective Management of Medical Information Through A Novel Blind Watermarking Technique
paper_content:
Medical Data Management (MDM) domain consists of various issues of medical information like authentication, security, privacy, retrieval and storage etc. Medical Image Watermarking (MIW) techniques have recently emerged as a leading technology to solve the problems associated with MDM. This paper proposes a blind, Contourlet Transform (CNT) based MIW scheme, robust to high JPEG and JPEG2000 compression and simultaneously capable of addressing a range of MDM issues like medical information security, content authentication, safe archiving and controlled access retrieval etc. It also provides a way for effective data communication along with automated medical personnel teaching. The original medical image is first decomposed by CNT. The Low pass subband is used to embed the watermark in such a way that enables the proposed method to extract the embedded watermark in a blind manner. Inverse CNT is then applied to get the watermarked image. Extensive experiments were carried out and the performance of the proposed scheme is evaluated through both subjective and quantitative measures. The experimental results and comparisons, confirm the effectiveness and efficiency of the proposed technique in the MDM paradigm.
---
paper_title: High capacity, reversible data hiding in medical images
paper_content:
In this paper we introduce a highly efficient reversible data hiding technique. It is based on dividing the image into tiles and shifting the histograms of each image tile between its minimum and maximum frequency. Data are then inserted at the pixel level with the largest frequency to maximize data hiding capacity. It exploits the special properties of medical images, where the histogram of their non-overlapping image tiles mostly peak around some gray values and the rest of the spectrum is mainly empty. The zeros (or minima) and peaks (maxima) of the histograms of the image tiles are then relocated to embed the data. The grey values of some pixels are therefore modified. High capacity, high fidelity, reversibility and multiple data insertions are the key requirements of data hiding in medical images. We show how histograms of image tiles of medical images can be exploited to achieve these requirements. Compared with data hiding method in the whole image, our scheme can result in 30%–200% capacity improvement with still better image quality, depending on the medical image content.
---
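A minimal sketch of the peak/zero histogram-shifting step that the preceding entry applies per image tile: this version treats the whole image as a single tile, assumes an 8-bit grayscale NumPy array, and assumes the chosen minimum bin is actually empty (the bookkeeping needed otherwise is omitted).

```python
# Illustrative sketch of basic histogram-shifting embedding, not the cited paper's tiled scheme.
import numpy as np

def hs_embed(img: np.ndarray, bits):
    hist = np.bincount(img.ravel(), minlength=256)
    peak = int(hist.argmax())      # most frequent grey value: its pixels carry the payload
    zero = int(hist.argmin())      # empty (assumed) grey value: absorbs the shifted bins
    step = 1 if zero > peak else -1
    out = img.astype(np.int16)
    between = ((out - peak) * step > 0) & ((out - zero) * step < 0)
    out[between] += step           # shift the in-between bins to free the bin next to the peak
    carriers = np.flatnonzero((out == peak).ravel())[: len(bits)]
    if carriers.size < len(bits):
        raise ValueError("payload exceeds the capacity of the peak bin")
    out.ravel()[carriers] += step * np.asarray(bits, dtype=np.int16)
    return out.astype(np.uint8), peak, zero   # peak/zero are the side information for recovery
```

Extraction reverses the process: pixels at the peak or the adjacent gap value yield the bits, and shifting the in-between range back restores the original image, which is why the peak/zero pair must be kept as side information.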
paper_title: Image-Based Electronic Patient Records for Secured Collaborative Medical Applications
paper_content:
We developed a Web-based system to interactively display image-based electronic patient records (EPR) for secured intranet and Internet collaborative medical applications. The system consists of four major components: EPR DICOM gateway (EPR-GW), image-based EPR repository server (EPR-Server), Web server and EPR DICOM viewer (EPR-Viewer). In the EPR-GW and EPR-Viewer, the security modules of digital signature and authentication are integrated to perform the security processing on the EPR data with integrity and authenticity. The privacy of EPR in data communication and exchanging is provided by SSL/TLS-based secure communication. This presentation gave a new approach to create and manage image-based EPR from actual patient records, and also presented a way to use Web technology and DICOM standard to build an open architecture for collaborative medical applications
---
paper_title: Reversible data hiding based on block median preservation
paper_content:
This paper proposes a reversible data hiding scheme for gray-level images. It exploits the high correlation among image block pixels to produce a difference histogram. Secret data is embedded based on a multi-level histogram shifting mechanism with reference to the integer median of each block. The image blocks are divided into four categories, each with a corresponding embedding strategy, aiming at preserving the medians during data embedding. In the decoder, the median pixels are retrieved first, followed by extraction of the hidden data, and the host image can be accurately recovered via an inverse histogram shifting mechanism after removing the secret data from the marked image. Experimental results validate the effectiveness of our scheme and demonstrate that it outperforms several previous methods in terms of capacity and marked image quality.
---
paper_title: A local Tchebichef moments-based robust image watermarking
paper_content:
Protection against geometric distortions and common image processing operations with blind detection becomes a much challenging task in image watermarking. To achieve this, in this paper we propose a content-based watermarking scheme that combines the invariant feature extraction with watermark embedding by using Tchebichef moments. Harris-Laplace detector is first adopted to extract feature points, and then non-overlapped disks centered at feature points are generated. These disks are invariant to scaling and translation distortions. For each disk, orientation alignment is then performed to achieve rotation invariant. Finally, the watermark is embedded in magnitudes of Tchebichef moments of each disk via dither modulation to realize the robustness to common image processing operations and the blind detection. Thorough simulation results obtained by using the standard benchmark, Stirmark, demonstrate that the proposed method is robust against various geometric distortions as well as common image processing operations and outperforms representative image watermarking schemes.
---
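The entry above embeds watermark bits into Tchebichef moment magnitudes via dither modulation. The generic scalar QIM sketch below shows the embedding/detection rule on a single coefficient; the step size delta and the zero dither are illustrative choices, and the coefficient stands in for whatever transform value a particular scheme quantizes.

```python
# Illustrative binary dither-modulation (QIM) sketch, not the cited paper's moment-domain scheme.
import numpy as np

def qim_embed(coeff: float, bit: int, delta: float = 8.0, dither: float = 0.0) -> float:
    """Quantize the coefficient onto the lattice that encodes `bit`."""
    d = dither + bit * delta / 2.0
    return float(np.round((coeff - d) / delta) * delta + d)

def qim_detect(coeff: float, delta: float = 8.0, dither: float = 0.0) -> int:
    """Pick the bit whose quantizer reproduces the received coefficient most closely."""
    dists = [abs(coeff - qim_embed(coeff, b, delta, dither)) for b in (0, 1)]
    return int(np.argmin(dists))
```

The detector needs no original image, which is what makes dither modulation a natural fit for blind schemes of this kind.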
paper_title: Tamper Detection and Recovery for Medical Images Using Near-lossless Information Hiding Technique
paper_content:
Digital medical images are very easy to modify for illegal purposes. For example, microcalcification in a mammogram is an important diagnostic clue, and it can be wiped off intentionally for insurance purposes or added intentionally to a normal mammogram. In this paper, we propose two methods for tamper detection and recovery of a medical image. A 1024 × 1024 x-ray mammogram was chosen to test the ability to detect tampering and recover the image. First, the medical image is divided into several blocks. For each block, an adaptive robust digital watermarking method combined with the modulo operation is used to hide both the authentication message and the recovery information. In the first method, each block is embedded with the authentication message and the recovery information of other blocks. Because the recovered block is too small and excessively compressed, the concept of region of interest (ROI) is introduced in the second method. If there are no tampered blocks, the original image can be obtained with only the stego image. When the ROI, such as a microcalcification in the mammogram, is tampered with, an approximate image is obtained from other blocks. The experimental results show that the proposed near-lossless method effectively detects a tampered medical image and recovers the original ROI image. In this study, an adaptive robust digital watermarking method combined with the modulo-256 operation was chosen to achieve information hiding and image authentication. With the proposed method, any random change to the stego image will be detected with high probability.
---
paper_title: Counterfeiting attacks on oblivious block-wise independent invisible watermarking schemes.
paper_content:
In this paper, we describe a class of attacks on certain block-based oblivious watermarking schemes. We show that oblivious watermarking techniques that embed information into a host image in a block-wise independent fashion are vulnerable to a counterfeiting attack. Specifically, given a watermarked image, one can forge the watermark it contains into another image without knowing the secret key used for watermark insertion and in some cases even without explicitly knowing the watermark. We demonstrate successful implementations of this attack on a few watermarking techniques that have been proposed in the literature. We also describe a possible solution to this problem of block-wise independence that makes our attack computationally intractable.
---
paper_title: Spatial Domain- High Capacity Data Hiding in ROI Images
paper_content:
Digital watermarking, one of the data hiding techniques, has become an emerging area of research due to the widespread use of the Internet and intranets. Though the watermark is used for authentication purposes, its methodology has been adapted for hiding data in many applications, namely electronic patient record (EPR) data hiding in medical images. Medical images are usually large and are stored without loss of redundancy. Recent research has shown that an appropriate level of JPEG (Joint Photographic Experts Group) compression may be used on these image types without loss of diagnostic content. This provides an opportunity for more rapid image transmission. This work focuses on estimating the data hiding capacity of region of interest (ROI) medical images and on optimizing the JPEG survival level that allows acceptable JPEG compression for conventional spatial-domain watermarking techniques, namely the LSB technique, the additive technique and the spread spectrum technique.
---
paper_title: Developing a Digital Image Watermarking Model
paper_content:
This paper presents a key based generic model for digital image watermarking. The model aims at addressing an identified gap in the literature by providing a basis for assessing different watermarking requirements in various digital image applications. We start with a formulation of a basic watermarking system, and define system inputs and outputs. We then proceed to incorporate the use of keys in the design of various system components. Using the model, we also define a few fundamental design and evaluation parameters. To demonstrate the significance of the proposed model, we provide an example of how it can be applied to formally define common attacks.
---
paper_title: Models of Watermarking
paper_content:
This chapter provides the conceptual models of watermarking. These models serve as ways of thinking about actual watermarking systems. They fall into two broad groups: models based on a view of watermarking as a method of communication and models based on geometric views of watermarking algorithms. It describes three models of watermarking systems based on the traditional model of a communications channel. These differ in how the cover Work is incorporated into the system. In the basic model, the cover Work is considered noise added during transmission of the watermark signal; in models of watermarking as communications with side information at the transmitter, the cover Work is still considered noise, but the watermark encoding process is provided with its value as side information. In models of watermarking as multiplexing, the cover Work and the watermark are considered two messages multiplexed together for reception by two different "receivers": a human and a watermark detector, respectively. Part of a watermarking system can be viewed as an extraction process that projects or distorts media space into a marking space. The rest can then be viewed as a simpler watermarking system that operates in marking space rather than in media space. Many watermarking systems fall into the class of correlation-based systems, in which the detector uses some form of correlation as a detection metric. This is true even of many systems not explicitly described as using correlation-based detection.
---
paper_title: The integration of medical images with the electronic patient record and their web-based distribution
paper_content:
Medical images are currently created digitally and stored in the radiology department’s picture archiving and communication system. Reports are usually stored in the electronic patient record of other information systems, such as the radiology information system (RIS) and the hospital information system (HIS). But high-quality services can only be provided if electronic patient record data is integrated with digital images in picture archiving and communication systems. Clinicians should be able to access both systems’ data in an integrated and consistent way as part of their regular working environment, whether HIS or RIS. Also, this system should allow for teleconferencing with other users, e.g., for consultation with a specialist in the radiology department. This article describes a web-based solution that integrates the digital images of picture archiving and communication systems with electronic patient record/HIS/RIS data and has built-in teleconferencing functionality. This integration has been successfully tested using three different commercial RIS and HIS products.
---
paper_title: Robust DWT-SVD domain image watermarking: embedding data in all frequencies
paper_content:
Protection of digital multimedia content has become an increasingly important issue for content owners and service providers. As watermarking is identified as a major technology to achieve copyright protection, the relevant literature includes several distinct approaches for embedding data into a multimedia element (primarily images, audio, and video). Because of its growing popularity, the Discrete Wavelet Transform (DWT) is commonly used in recent watermarking schemes. In a DWT-based scheme, the DWT coefficients are modified with the data that represents the watermark. In this paper, we present a hybrid scheme based on DWT and Singular Value Decomposition (SVD). After decomposing the cover image into four bands, we apply the SVD to each band, and embed the same watermark data by modifying the singular values. Modification in all frequencies allows the development of a watermarking scheme that is robust to a wide range of attacks.
---
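As a rough illustration of the DWT-SVD idea (applied here to the LL subband only, whereas the paper embeds in all four bands), the sketch below assumes the PyWavelets package, a grayscale cover image as a NumPy array, and a watermark image no larger than the subband; the strength alpha is an arbitrary illustrative value.

```python
# Illustrative one-band DWT-SVD embedding sketch, not the cited paper's full four-band scheme.
import numpy as np
import pywt  # PyWavelets

def dwt_svd_embed(cover: np.ndarray, mark: np.ndarray, alpha: float = 0.05) -> np.ndarray:
    """Additively modify the singular values of the LL subband with those of the watermark."""
    LL, (LH, HL, HH) = pywt.dwt2(cover.astype(float), 'haar')
    U, S, Vt = np.linalg.svd(LL, full_matrices=False)
    Sw = np.linalg.svd(mark.astype(float), compute_uv=False)
    k = min(S.size, Sw.size)
    S_marked = S.copy()
    S_marked[:k] += alpha * Sw[:k]
    LL_marked = (U * S_marked) @ Vt            # equivalent to U @ diag(S_marked) @ Vt
    return pywt.idwt2((LL_marked, (LH, HL, HH)), 'haar')
```

Extraction in such schemes is non-blind: recovering the watermark requires the singular vectors of the original watermark (and typically the original singular values of the cover) to invert the modification.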
paper_title: Tamper detection with self-correction hybrid spatial-DCT domains image authentication technique
paper_content:
The development of effective image authentication techniques is of remarkably growing interest. Some recently developed fragile, semi-fragile/robust or hybrid watermarking algorithms not only verify the authenticity of the watermarked image but also provide self-reconstruction capabilities. However, several algorithms have been reported to be vulnerable to various attacks, especially blind pattern matching attacks, with insufficient security. We propose a new blind dual-domain self-embedding watermarking scheme with a more secure embedding process for the image blocks' fragile signatures and robust approximations, and more reliable detection of local alterations with auto-correction capabilities, while surviving normal content-preserving image operations. Hence it prevents falsification, the real threat to authentication.
---
paper_title: A data-hiding technique with authentication, integration, and confidentiality for electronic patient records
paper_content:
A data-hiding technique called the "bipolar multiple-number base" was developed to provide capabilities of authentication, integration, and confidentiality for an electronic patient record (EPR) transmitted among hospitals through the Internet. The proposed technique is capable of hiding those EPR related data such as diagnostic reports, electrocardiogram, and digital signatures from doctors or a hospital into a mark image. The mark image could be the mark of a hospital used to identify the origin of an EPR. Those digital signatures from doctors and a hospital could be applied for the EPR authentication. Thus, different types of medical data can be integrated into the same mark image. The confidentiality is ultimately achieved by decrypting the EPR related data and digital signatures with an exact copy of the original mark image. The experimental results validate the integrity and the invisibility of the hidden EPR related data. This newly developed technique allows all of the hidden data to be separated and restored perfectly by authorized users.
---
paper_title: Lossless ROI Medical Image Watermarking Technique with Enhanced Security and High Payload Embedding
paper_content:
In this article, a new fragile, blind, high-payload-capacity, ROI (Region of Interest) preserving medical image watermarking (MIW) technique in the spatial domain for grayscale medical images is proposed. We present a watermarking scheme that combines a lossless data compression and encryption technique in application to medical images. The effectiveness of the proposed scheme, demonstrated through experiments on various medical images using image quality metrics such as PSNR, MSE and MSSIM, enables us to argue that the method will help to maintain Electronic Patient Record (EPR)/DICOM data privacy and medical image integrity.
---
paper_title: Security Protection of DICOM Medical Images Using Dual-Layer Reversible Watermarking with Tamper Detection Capability
paper_content:
Teleradiology applications and universal availability of patient records using web-based technology are rapidly gaining importance. Consequently, digital medical image security has become an important issue when images and their pertinent patient information are transmitted across public networks, such as the Internet. Health mandates such as the Health Insurance Portability and Accountability Act require healthcare providers to adhere to security measures in order to protect sensitive patient information. This paper presents a fully reversible, dual-layer watermarking scheme with tamper detection capability for medical images. The scheme utilizes concepts of public-key cryptography and reversible data-hiding technique. The scheme was tested using medical images in DICOM format. The results show that the scheme is able to ensure image authenticity and integrity, and to locate tampered regions in the images.
---
paper_title: Robust and high-quality time-domain audio watermarking based on low-frequency amplitude modification
paper_content:
This work proposes a method of embedding digital watermarks into audio signals in the time domain. The proposed algorithm exploits differential average-of-absolute-amplitude relations within each group of audio samples to represent one-bit information. The principle of low-frequency amplitude modification is employed to scale amplitudes in a group manner (unlike the sample-by-sample manner as used in pseudonoise or spread-spectrum techniques) in selected sections of samples so that the time-domain waveform envelope can be almost preserved. Besides, when the frequency-domain characteristics of the watermark signal are controlled by applying absolute hearing thresholds in the psychoacoustic model, the distortion associated with watermarking is hardly perceivable by human ears. The watermark can be blindly extracted without knowledge of the original signal. Subjective and objective tests reveal that the proposed watermarking scheme maintains high audio quality and is simultaneously highly robust to pirate attacks, including MP3 compression, low-pass filtering, amplitude scaling, time scaling, digital-to-analog/analog-to-digital reacquisition, cropping, sampling rate change, and bit resolution transformation. Security of embedded watermarks is enhanced by adopting unequal section lengths determined by a secret key.
---
paper_title: Robust Image Watermarking Based on Multiband Wavelets and Empirical Mode Decomposition
paper_content:
In this paper, we propose a blind image watermarking algorithm based on the multiband wavelet transformation and the empirical mode decomposition. Unlike watermarking algorithms based on the traditional two-band wavelet transform, where the watermark bits are embedded directly on the wavelet coefficients, in the proposed scheme we embed the watermark bits in the mean trend of some middle-frequency subimages in the wavelet domain. We further select an appropriate dilation factor and filters in the multiband wavelet transform to achieve better performance in terms of perceptual invisibility and robustness of the watermark. The experimental results show that the proposed blind watermarking scheme is robust against JPEG compression, Gaussian noise, salt-and-pepper noise, median filtering, and ConvFilter attacks. The comparative analysis demonstrates that our scheme performs better than recently reported watermarking schemes.
---
paper_title: Robust watermarking and compression for medical images based on genetic algorithms
paper_content:
A ROI (region of interest) of a medical image is an area containing important information that must be stored without any distortion. In order to achieve optimal compression as well as satisfactory visualization of medical images, we compress the ROI losslessly and the rest with lossy compression. Furthermore, security is an important issue in web-based medical information systems, and watermarking is often used to protect medical images. In this paper, we present a robust technique based on genetic algorithms that embeds a watermark of signature information or textual data around the ROI of a medical image. A fragile watermark is adopted to detect any unauthorized modification. A watermark embedded in the frequency domain is more difficult to pirate than one embedded in the spatial domain.
---
paper_title: Digital Watermarking and Steganography
paper_content:
Digital audio, video, images, and documents are flying through cyberspace to their respective owners. Unfortunately, along the way, individuals may choose to intervene and take this content for themselves. Digital watermarking and steganography technology greatly reduces such instances by limiting or eliminating the ability of third parties to decipher the content they have taken. The many techniques of digital watermarking (embedding a code) and steganography (hiding information) continue to evolve as the applications that necessitate them do the same. The authors of this second edition provide an update on the framework for applying these techniques that they provided researchers and professionals in the first well-received edition. Steganography and steganalysis (the art of detecting hidden information) have been added to a robust treatment of digital watermarking, as many in each field research and deal with the other. New material includes watermarking with side information, QIM, and dirty-paper codes. The revision and inclusion of new material by these influential authors has created a must-own book for anyone in this profession. This new edition contains essential information on steganalysis and steganography, introduces new concepts and applications including QIM, and gives digital watermark embedding a complete update with new processes and applications.
---
paper_title: Secret and public key image watermarking schemes for image authentication and ownership verification
paper_content:
We describe a watermarking scheme for ownership verification and authentication. Depending on the desire of the user, the watermark can be either visible or invisible. The scheme can detect any modification made to the image and indicate the specific locations that have been modified. If the correct key is specified in the watermark extraction procedure, then an output image is returned showing a proper watermark, indicating the image is authentic and has not been changed since the insertion of the watermark. Any modification would be reflected in a corresponding error in the watermark. If the key is incorrect, or if the image was not watermarked, or if the watermarked image is cropped, the watermark extraction algorithm will return an image that resembles random noise. Since it requires a user key during both the insertion and the extraction procedures, it is not possible for an unauthorized user to insert a new watermark or alter the existing watermark so that the resulting image will pass the test. We present secret key and public key versions of the technique.
---
paper_title: BLIND DETECTION OF MALICIOUS ALTERATIONS ON STILL IMAGES USING ROBUST WATERMARKS
paper_content:
Digital image manipulation software is now readily available on personal computers. It is therefore very simple to tamper with any image and make it available to others. Ensuring digital image integrity has become a major issue. In this paper, we propose an original method to protect image authenticity using an invisible and robust watermark. Our scheme is independent of the signer; however, the latter must have a high capacity and be able to extract the watermark in fully blind detection mode. Our approach is based on the extraction of features from the image. These features are chosen so as to be unaffected by non-malicious alterations such as lossy compression. They are embedded in the image using an iterative process so that the watermarked image features and the information contained in the watermark coincide perfectly. Authenticity is verified by comparing the features of the tested image with those of the original image recovered from the watermark.
---
paper_title: Robust Image Watermarking in the Spatial Domain
paper_content:
The rapid evolution of digital image manipulation and transmission techniques has created a pressing need for the protection of the intellectual property rights on images. A copyright protection method that is based on hiding an ‘invisible’ signal, known as digital watermark, in the image is presented in this paper. Watermark casting is performed in the spatial domain by slightly modifying the intensity of randomly selected image pixels. Watermark detection does not require the existence of the original image and is carried out by comparing the mean intensity value of the marked pixels against that of the pixels not marked. Statistical hypothesis testing is used for this purpose. Pixel modifications can be done in such a way that the watermark is resistant to JPEG compression and lowpass filtering. This is achieved by minimizing the energy content of the watermark signal at higher frequencies while taking into account properties of the human visual system. A variation that generates image dependent watermarks as well as a method to handle geometrical distortions are presented. An extension to color images is also pursued. Experiments on real images verify the effectiveness of the proposed techniques.
---
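A minimal sketch of the casting/detection principle described in the preceding entry: a key-driven pseudo-random subset of pixels is brightened slightly, and detection compares the mean intensity of the selected pixels against the rest without needing the original image. The selection fraction, strength k and decision threshold below are illustrative, and the sketch omits the JPEG-resistance and visual-system shaping discussed in the paper.

```python
# Illustrative spatial-domain casting/detection sketch, not the cited paper's exact method.
import numpy as np

def cast_watermark(img: np.ndarray, key: int, frac: float = 0.5, k: int = 2) -> np.ndarray:
    """Raise the intensity of a key-selected pixel subset by k grey levels."""
    rng = np.random.default_rng(key)
    mask = rng.random(img.shape) < frac          # pseudo-random pixel selection from the key
    out = img.astype(np.int16)
    out[mask] = np.clip(out[mask] + k, 0, 255)
    return out.astype(np.uint8)

def detect_watermark(img: np.ndarray, key: int, frac: float = 0.5, threshold: float = 1.0) -> bool:
    """Compare mean intensity of marked vs. unmarked pixels (a simple hypothesis test)."""
    rng = np.random.default_rng(key)
    mask = rng.random(img.shape) < frac          # regenerate the same selection from the key
    diff = img[mask].astype(float).mean() - img[~mask].astype(float).mean()
    return diff > threshold
```

Because both sides regenerate the same pseudo-random mask from the key, the detector works blindly; a statistical threshold on the mean difference plays the role of the hypothesis test described in the abstract.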
paper_title: A robust watermarking scheme using self-reference image
paper_content:
In this paper, a robust watermark scheme for copyright protection is proposed. By modifying the original image in transform domain and embedding a watermark in the difference values between the original image and its reference image, the proposed scheme overcomes the weak robustness problem of embedding a watermark in the spatial domain. Besides, the watermark extraction does not require the original image so it is more practical in real application. The experimental results show that the proposed scheme provides not only good image quality, but is also robust against various attacks, such as JPEG lossy compression, filtering and noise addition.
---
paper_title: A Region-Based Lossless Watermarking Scheme for Enhancing Security of Medical Data
paper_content:
This paper presents a lossless watermarking scheme, in the sense that the original image can be exactly recovered from the watermarked one, with the purpose of verifying the integrity and authenticity of medical images. In addition, the scheme does not introduce any embedding-induced distortion in the region of interest (ROI) of a medical image. Difference expansion of adjacent pixel values is employed to embed several bits. A region of embedding, represented by a polygon, is chosen intentionally to prevent introducing embedding distortion in the ROI. Only the vertex information of the polygon is transmitted to the decoder for reconstructing the embedding region, which improves the embedding capacity considerably. The digital signature of the whole image is embedded for verifying the integrity of the image. An identifier present in the electronic patient record (EPR) is embedded for verifying authenticity by simultaneously processing the watermarked image and the EPR. In combination with a fingerprint system, the patient’s fingerprint information is embedded into several image slices and then extracted to verify authenticity.
---
paper_title: Watermarking of chest CT scan medical images for content authentication
paper_content:
A medical image is usually composed of a region of interest (ROI) and a region of non-interest (RONI). The ROI is the region that contains the important information from a diagnostic point of view, so it must be stored without any distortion. We have proposed a digital watermarking technique which avoids distortion of the image in the ROI by embedding the watermark information in the RONI. The watermark is composed of patient information, the hospital logo and a message authentication code computed using a hash function. BCH encryption of the watermark is performed beforehand to ensure that the embedded data is inaccessible to adversaries.
---
paper_title: Medical image security and EPR hiding using Shamir's secret sharing scheme
paper_content:
Medical applications such as telediagnosis require information exchange over insecure networks. Therefore, protection of the integrity and confidentiality of the medical images is an important issue. Another issue is to store electronic patient record (EPR) in the medical image by steganographic or watermarking techniques. Studies reported in the literature deal with some of these issues but not all of them are satisfied in a single method. A medical image is distributed among a number of clinicians in telediagnosis and each one of them has all the information about the patient's medical condition. However, disclosing all the information about an important patient's medical condition to each of the clinicians is a security issue. This paper proposes a (k, n) secret sharing scheme which shares medical images among a health team of n clinicians such that at least k of them must gather to reveal the medical image to diagnose. Shamir's secret sharing scheme is used to address all of these security issues in one method. The proposed method can store longer EPR strings along with better authenticity and confidentiality properties while satisfying all the requirements as shown in the results.
---
paper_title: Medical Image Watermarking with Tamper Detection and Recovery
paper_content:
This paper discusses the security of medical images and reviews some work done regarding them. A fragile watermarking scheme is then proposed that can detect tampering and subsequently recover the image. Our scheme requires a secret key and a public chaotic mixing algorithm to embed and recover a tampered image, and it is also resilient to VQ attack. The purposes are to verify the integrity and authenticity of medical images. We used 800x600x8-bit ultrasound (US) greyscale images in our experiment. We tested our algorithm for up to 50% tampered blocks and obtained 100% recovery for spread-tampered blocks. The security of medical images, derived from strict ethics and legislative rules, gives rights to the patient and duties to the health professionals. This imposes three mandatory characteristics: confidentiality, reliability and availability. Confidentiality means that only entitled persons have access to the images. Reliability has two aspects: integrity, meaning the image has not been modified by a non-authorized person, and authentication, a proof that the image indeed belongs to the correct patient and is issued from the correct source.
---
paper_title: On the robustness and security of digital image watermarking
paper_content:
In most digital image watermarking schemes, it has become common practice to address security in terms of robustness, which is basically a norm in cryptography. Such a view in the development and evaluation of a watermarking scheme may severely affect its performance and render the scheme ultimately unusable. This paper provides an explicit theoretical analysis of watermarking security and robustness, working out the exact status of the problem from the literature. With the necessary hypotheses and analyses from a technical perspective, we demonstrate the fundamental realization of the problem. Finally, some recommendations are made for a complete assessment of watermarking security and robustness.
---
paper_title: An additive and lossless watermarking method based on invariant image approximation and Haar wavelet transform
paper_content:
In this article, we propose a new additive lossless watermarking scheme which identifies parts of the image that can be reversibly watermarked and conducts message embedding in the conventional Haar wavelet transform coefficients. Our approach makes use of an approximation of the image signal that is invariant to the watermark addition for classifying the image in order to avoid over/underflows. The method has been tested on different sets of medical images and some usual natural test images as Lena. Experimental result analysis conducted with respect to several aspects including data hiding capacity and image quality preservation, shows that our method is one of the most competitive existing lossless watermarking schemes in terms of high capacity and low distortion.
---
paper_title: Practical analysis of watermarking capacity
paper_content:
Digital watermark embedding is a hot research field in image processing. Digital watermarking is only possible because our visual system is not perfect. A number of applications have emerged, such as copyright notification, time stamping and automated monitoring. There has been ingenious work on watermarking capacity, all of which assumes that the attack distortion follows an additive Gaussian distribution. We argue that these results are elegant but not practical: it is hard to cast different attacking methods into a uniform format (especially a Gaussian distribution). On the other hand, the information capacity of a given image is an interesting and important topic. Instead of conventional analysis based on imprecise assumptions about the attack distortion, this paper concentrates on the problem of how much information an image can carry without much visible distortion. Our work is based on the wavelet transform and mean-squared error, since we believe they are representative measures for most work. Compared with former work on watermarking capacity, this paper pays more attention to the influence on capacity of widely used techniques such as spread spectrum. In the last part of this paper, we discuss future trends in new watermarking algorithms and their influence on capacity.
---
paper_title: Robust Image Watermarking Based on Multiscale Gradient Direction Quantization
paper_content:
We propose a robust quantization-based image watermarking scheme, called the gradient direction watermarking (GDWM), based on the uniform quantization of the direction of gradient vectors. In GDWM, the watermark bits are embedded by quantizing the angles of significant gradient vectors at multiple wavelet scales. The proposed scheme has the following advantages: 1) increased invisibility of the embedded watermark because the watermark is embedded in significant gradient vectors, 2) robustness to amplitude scaling attacks because the watermark is embedded in the angles of the gradient vectors, and 3) increased watermarking capacity as the scheme uses multiple-scale embedding. The gradient vector at a pixel is expressed in terms of the discrete wavelet transform (DWT) coefficients. To quantize the gradient direction, the DWT coefficients are modified based on the derived relationship between the changes in the coefficients and the change in the gradient direction. Experimental results show that the proposed GDWM outperforms other watermarking methods and is robust to a wide range of attacks, e.g., Gaussian filtering, amplitude scaling, median filtering, sharpening, JPEG compression, Gaussian noise, salt & pepper noise, and scaling.
---
paper_title: High capacity data hiding schemes for medical images based on difference expansion
paper_content:
Since the difference expansion (DE) technique was proposed, many researchers have tried to improve its performance in terms of hiding capacity and visual quality. In this paper, a new scheme based on DE is proposed in order to increase the hiding capacity for medical images. One of the characteristics of medical images, compared with other types of images, is their large smooth regions. Taking advantage of this characteristic, our scheme divides the image into two regions: a smooth region and a non-smooth region. For the smooth region, a high-embedding-capacity scheme is applied, while the original DE method is applied to the non-smooth region. Sixteen DICOM images of different modalities were used for testing the proposed scheme. The results show that the proposed scheme has a higher hiding capacity compared to the original schemes.
---
paper_title: Dual watermark for image tamper detection and recovery
paper_content:
An effective dual watermark scheme for image tamper detection and recovery is proposed in this paper. In our algorithm, each block in the image contains watermark of other two blocks. That is to say, there are two copies of watermark for each non-overlapping block in the image. Therefore, we maintain two copies of watermark of the whole image and provide second chance for block recovery in case one copy is destroyed. A secret key, which is transmitted along with the watermarked image, and a public chaotic mixing algorithm are used to extract the watermark for tamper recovery. By using our algorithm, a 90% tampered image can be recovered to a dim yet still recognizable condition (PSNR ~20dB). Experimental results demonstrate that our algorithm is superior to the compared techniques, especially when the tampered area is large.
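A generic way to derive the kind of keyed block mapping described above is sketched below: a pseudo-random permutation seeded by a secret key, with two sub-keys giving the two copies of the recovery data. This is an illustrative stand-in, not the chaotic mixing algorithm used in the paper, and the block count and key values are arbitrary.

```python
import random

def keyed_block_mapping(num_blocks: int, secret_key: int) -> list[int]:
    """Pseudo-random permutation derived from a secret key:
    the recovery data of block i is stored inside block mapping[i]."""
    rng = random.Random(secret_key)
    mapping = list(range(num_blocks))
    rng.shuffle(mapping)
    return mapping

# Two copies of each block's recovery data, as in the dual-watermark idea above,
# can be obtained by deriving two different mappings from two sub-keys.
primary = keyed_block_mapping(64, secret_key=1234)
backup = keyed_block_mapping(64, secret_key=5678)
```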
---
paper_title: A robust content-based digital image watermarking scheme
paper_content:
This paper presents a content-based digital image-watermarking scheme, which is robust against a variety of common image-processing attacks and geometric distortions. The image content is represented by important feature points obtained by our image-texture-based adaptive Harris corner detector. These important feature points are geometrically significant and therefore are capable of determining the possible geometric attacks with the aid of the Delaunay-tessellation-based triangle matching method. The watermark is encoded by both the error correcting codes and the spread spectrum technique to improve the detection accuracy and ensure a large measure of security against unintentional or intentional attacks. An image-content-based adaptive embedding scheme is applied in discrete Fourier transform (DFT) domain of each perceptually high textured subimage to ensure better visual quality and more robustness. The watermark detection decision is based on the number of matched bits between the recovered and embedded watermarks in embedding subimages. The experimental results demonstrate the robustness of the proposed method against any combination of the geometric distortions and various common image-processing operations such as JPEG compression, filtering, enhancement, and quantization. Our proposed system also yields a better performance as compared with some peer systems in the literature.
---
paper_title: Multiple watermark embedding scheme in wavelet-spatial domains based on ROI of medical images
paper_content:
Watermarking in medical images is a new area of research. It has the potential of being a value-added tool for medical confidentiality protection, patient-related information hiding, and information retrieval. Medical image watermarking requires extreme care when embedding additional data within the medical images because the additional information must not affect the image quality, as this may cause misdiagnosis. In this paper we present a scheme that depends on the extraction of the ROI (region of interest) and its use as a watermark to be embedded twice: first as a robust watermark in the RONI (region of non-interest) in the wavelet domain and again as a fragile watermark in the ROI in the spatial domain. Moreover, multiple watermarks such as the physician's digital signature and the EPR (Electronic Patient Record) are embedded in the RONI in the wavelet domain depending on a private key. We compare this scheme with another one that we presented before to show the robustness of the new scheme. In our work we use MRI brain images with a brain tumor as the ROI. The experimental results showed that the watermarked image is robust to JPEG compression, ROI removal, addition of an extra tumor to the image, some geometric attacks, lowpass and median filtering, and several types of noise: Gaussian, Poisson, salt-and-pepper and speckle.
---
paper_title: Authentication and Data Hiding Using a Hybrid ROI-Based Watermarking Scheme for DICOM Images
paper_content:
Authenticating medical images using watermarking techniques has become a very popular area of research, and some works in this area have been reported worldwide recently. Besides authentication, many data-hiding techniques have been proposed to conceal patient’s data into medical images aiming to reduce the cost needed to store data and the time needed to transmit data when required. In this paper, we present a new hybrid watermarking scheme for DICOM images. In our scheme, two well-known techniques are combined to gain the advantages of both and fulfill the requirements of authentication and data hiding. The scheme divides the images into two parts, the region of interest (ROI) and the region of non-interest (RONI). Patient’s data are embedded into ROI using a reversible technique based on difference expansion, while tamper detection and recovery data are embedded into RONI using a robust technique based on discrete wavelet transform. The experimental results show the ability of hiding patient’s data with a very good visual quality, while ROI, the most important area for diagnosis, is retrieved exactly at the receiver side. The scheme also shows some robustness against certain levels of salt and pepper and cropping noise.
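Several ROI/RONI schemes in this list, including the one above, begin by partitioning the image with a region-of-interest mask and confining embedding to the RONI. The snippet below is a minimal illustrative sketch of that step combined with simple LSB embedding, assuming an 8-bit grayscale image and a boolean ROI mask; it is not the difference-expansion/DWT combination the paper actually uses.

```python
import numpy as np

def embed_bits_in_roni(image: np.ndarray, roi_mask: np.ndarray, bits: np.ndarray) -> np.ndarray:
    """Write payload bits into the least-significant bits of pixels outside the ROI.

    image    : uint8 grayscale image
    roi_mask : boolean array, True inside the region of interest (left untouched)
    bits     : array of 0/1 payload bits
    """
    marked = image.copy()
    flat = marked.reshape(-1)
    roni_positions = np.flatnonzero(~roi_mask.reshape(-1))   # embeddable pixel positions
    if bits.size > roni_positions.size:
        raise ValueError("payload exceeds RONI capacity")
    target = roni_positions[: bits.size]
    flat[target] = (flat[target] & 0xFE) | bits.astype(np.uint8)
    return marked
```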
---
paper_title: Multiple watermarking of medical images for content authentication and recovery
paper_content:
Medical image data require strict security, confidentiality and integrity when transmitted from one hospital to another. This can be achieved by adopting procedures that guarantee the image quality as well as the secrecy of patient data from unauthorized users. To achieve these requirements we have proposed a multiple watermarking method. The scheme embeds a robust watermark in the region of non-interest (RONI) for security and confidentiality, while integrity control is achieved by inserting a fragile watermark in the region of interest (ROI). Since the ROI of a medical image is important from the diagnosis point of view, it must be preserved. To avoid the distortion caused in the ROI by the watermark insertion process, the original ROI data is first separated and embedded outside the ROI. This helps in the recovery of the original ROI at the receiving end, in contrast to techniques reported in the literature which do not guarantee the integrity of the ROI after the watermarking process. The image visual quality as well as tamper localization has been evaluated. We have used the weighted peak signal-to-noise ratio (WPSNR) for measuring image quality after watermarking.
---
paper_title: A Low Distorsion and Reversible Watermark: Application to Angiographic Images of the Retina
paper_content:
Medical image security can be enhanced using watermarking, which allows embedding the protection information as a digital signature, by modifying the pixel gray levels of the image. In this paper we propose a reversible watermarking scheme which guarantees that once the embedded message is read, alterations introduced during the insertion process can be removed from the image. Thereafter, original pixel gray levels of the image are restored. The proposed approach relies on estimation of image signal that is invariant to the insertion process, and permits to introduce a very slight watermark within the image. In fact, the insertion process adds or subtracts at least one gray level to the pixels of the original image. Depending on the image to be watermarked, in our case angiographic images of the retina, it is expected that such image alteration will not have any impact on the diagnosis quality, and consequently that the watermark can be kept within the image while this one is interpreted
---
paper_title: Medical Image Integrity Control Combining Digital Signature and Lossless Watermarking
paper_content:
Enforcing protection of medical content becomes a major issue of computer security. Since medical contents are more and more widely distributed, it is necessary to develop security mechanism to guarantee their confidentiality, integrity and traceability in an autonomous way. In this context, watermarking has been recently proposed as a complementary mechanism for medical data protection. In this paper, we focus on the verification of medical image integrity through the combination of digital signatures with such a technology, and especially with Reversible Watermarking (RW). RW schemes have been proposed for images of sensitive content for which any modification may affect their interpretation. Whence, we compare several recent RW schemes and discuss their potential use in the framework of an integrity control process in application to different sets of medical images issued from three distinct modalities: Magnetic Resonance Images, Positron Emission Tomography and Ultrasound Imaging. Experimental results with respect to two aspects including data hiding capacity and image quality preservation, show different limitations which depend on the watermark approach but also on image modality specificities.
---
paper_title: Lossless ROI Medical Image Watermarking Technique with Enhanced Security and High Payload Embedding
paper_content:
In this article, a new fragile, blind, high-payload-capacity, ROI (Region of Interest) preserving medical image watermarking (MIW) technique in the spatial domain for grayscale medical images is proposed. We present a watermarking scheme that combines lossless data compression and encryption techniques in application to medical images. The effectiveness of the proposed scheme, proven through experiments on various medical images using image quality metrics such as PSNR, MSE and MSSIM, enables us to argue that the method will help to maintain Electronic Patient Record (EPR)/DICOM data privacy and medical image integrity.
---
paper_title: A novel blind watermarking of ECG signals on medical images using EZW algorithm.
paper_content:
In this paper, we present a novel blind watermarking method with a secret key that embeds ECG signals in medical images. The embedding is done when the original image is compressed using the embedded zero-tree wavelet (EZW) algorithm. The extraction process is performed at decompression time of the watermarked image. Our algorithm has been tested on several CT and MRI images and the peak signal-to-noise ratio (PSNR) between the original and watermarked image is greater than 35 dB for watermarking of 512 to 8192 bytes of the mark signal. The proposed method is able to utilize about 15% of the host image to embed the mark signal. This marking percentage improves on previous works while preserving the image details. It also reduces transmission overheads as well as helping computer-aided diagnostic systems. In this paper we present a new watermarking method combined with the EZW-based wavelet coder. The principle is to replace significant wavelet coefficients of the ECG signal by the corresponding significant wavelet coefficients belonging to the host image, which is much bigger in size than the mark signal. This paper presents a brief introduction to watermarking and the EZW coder that acts as a platform for our watermarking algorithm.
---
paper_title: Multiple embedding using robust watermarks for wireless medical images
paper_content:
Within the expanding paradigm of medical imaging and wireless communications there is increasing demand for transmitting diagnostic medical imagery over error-prone wireless communication channels such as those encountered in cellular phone technology. Medical images must be compressed with minimal file size to minimize transmission time and robustly coded to withstand these wireless environments. It has been reinforced through extensive research that the most crucial regions of medical images must not be degraded and must therefore be compressed by a lossless or near-lossless algorithm. This type of area is called the Region of Interest (ROI). Conversely, the Region of Backgrounds (ROB) may be compressed with some loss of information to achieve a higher compression level. This type of hybrid coding scheme is most useful for wireless communication where the 'bit-budget' is devoted to the ROI. This paper also develops a way for this system to operate externally to the Joint Photographic Experts Group (JPEG) still image compression standard without the use of hybrid coding. A multiple watermarking technique is developed to verify the integrity of the ROI after transmission and in situations where there may be incidental degradation that is hard to perceive or unexpected levels of compression that may degrade ROI content beyond an acceptable level. The most useful contribution of this work is the assurance of ROI image content integrity after image files are subjected to incidental degradation in these environments. This is made possible by extracting DCT signature coefficients from the ROI and embedding them multiple times in the ROB. Strong focus is placed on robustness to JPEG compression and the mobile channel, as well as on minimizing the image file size while maintaining its integrity with the use of semi-fragile, robust watermarking.
---
paper_title: Watermarking Image Authentication in Hospital Information System
paper_content:
With the broad application of electronic management of medical records, the integrity and authenticity verification of medical images in hospital information systems has become a pressing issue. To solve these problems, a novel watermarking-based image authentication scheme built on the digital signature algorithm (DSA) is presented. In this paper, we introduce DSA and analyze its advantages and shortcomings. Then, considering the specific characteristics of medical images, we propose integrating reversible digital watermarking with digital signatures to form an authentication system. Finally, the system is designed and implemented in the C# language. The experimental results show that the watermarked medical image is difficult to tamper with, effectively solving the problem of integrity and authenticity verification of medical images.
---
paper_title: JPEG 2000 and Digital Watermarking Technique Using in Medical Image
paper_content:
The Picture Archiving and Communication System (PACS) was introduced for computerization of the medical system and telediagnosis between hospitals. It is becoming possible to create, store, and transmit medical images via PACS. There has been growing interest in protecting medical images, which carry an enormous amount of information. To improve transmission speed among hospitals, the medical image should be compressed with JPEG 2000 at a high compression ratio. This paper proposes an algorithm that utilizes both JPEG 2000 and robust watermarking for protection and compression of the medical image. With the proposed algorithm, it takes considerably less time to perform JPEG 2000 compression and watermarking than when they are done separately. Based on the experimental results, the proposed algorithm takes 0.72 seconds, compared with 1.11 seconds when the two are done separately. We confirmed that the proposed algorithm was faster than performing them separately.
---
paper_title: A Region-Based Lossless Watermarking Scheme for Enhancing Security of Medical Data
paper_content:
This paper presents a lossless watermarking scheme in the sense that the original image can be exactly recovered from the watermarked one, with the purpose of verifying the integrity and authenticity of medical images. In addition, the scheme has the capability of not introducing any embedding-induced distortion in the region of interest (ROI) of a medical image. Difference expansion of adjacent pixel values is employed to embed several bits. A region of embedding, which is represented by a polygon, is chosen intentionally to prevent introducing embedding distortion in the ROI. Only the vertex information of a polygon is transmitted to the decoder for reconstructing the embedding region, which improves the embedding capacity considerably. The digital signature of the whole image is embedded for verifying the integrity of the image. An identifier presented in electronic patient record (EPR) is embedded for verifying the authenticity by simultaneously processing the watermarked image and the EPR. Combining with fingerprint system, patient’s fingerprint information is embedded into several image slices and then extracted for verifying the authenticity.
---
paper_title: A Medical Image Authentication System Based on Reversible Digital Watermarking
paper_content:
This paper discusses the security problem of integrity and authenticity verification of medical images, and briefly introduces digital signature technology based on the RSA public-key cryptosystem. It analyzes the merits and shortcomings of digital signature technology. Considering the specific characteristics of medical images, it proposes integrating reversible digital watermarking with digital signatures to form an authentication system, and the scheme is then worked out in detail. Finally, the system is designed and implemented in C#. The experimental results show that the proposed system has good imperceptibility and effectively solves the problem of integrity and authenticity verification of medical images.
---
paper_title: Watermarking of chest CT scan medical images for content authentication
paper_content:
A medical image is usually composed of a region of interest (ROI) and a region of non-interest (RONI). The ROI is the region that contains the important information from the diagnosis point of view, so it must be stored without any distortion. We have proposed a digital watermarking technique which avoids distortion of the ROI by embedding the watermark information in the RONI. The watermark is composed of patient information, a hospital logo and a message authentication code computed using a hash function. Before embedding, BCH encryption of the watermark is performed to make the embedded data inaccessible to adversaries.
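To make the message-authentication-code component above concrete, the snippet below computes a keyed MAC over the ROI bytes with Python's standard library. It is an illustrative sketch: the paper does not specify HMAC-SHA256, and the BCH coding and embedding steps are not shown.

```python
import hashlib
import hmac

def roi_mac(roi_bytes: bytes, secret_key: bytes) -> bytes:
    """Keyed message authentication code over the ROI pixel data."""
    return hmac.new(secret_key, roi_bytes, hashlib.sha256).digest()

def roi_is_authentic(roi_bytes: bytes, secret_key: bytes, extracted_mac: bytes) -> bool:
    """Recompute the MAC at the receiver and compare it to the one carried in the watermark."""
    return hmac.compare_digest(roi_mac(roi_bytes, secret_key), extracted_mac)
```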
---
paper_title: Lossless watermarking scheme for enhancing security of medical data in PACS
paper_content:
We propose a lossless watermarking scheme, in a sense that the original image can be exactly recovered from the watermarked one, with the purpose of verifying the integrity and authenticity of medical images in PACS. Our embedding method includes imposing invertible modulations that induce low distortion level, as well as losslessly compressing the status of the original image and carrying such information in the watermarked image. Digital signature of the whole image is embedded for verifying the integrity of the image. An identifier also presented in electronic patient record (EPR) is embedded for verifying the authenticity by simultaneously processing of the watermarked image and the EPR. Combining with fingerprint system, patient’s fingerprint information is further embedded into several image slices and then extracted for verifying the authenticity. No visual quality degradation is detected in the watermarked image.
---
paper_title: Clinical Assessment of Watermarked Medical Images
paper_content:
Problem statement: Digital watermarking provides security to medical images. Watermarking in Region Of Interest (ROI) however distorts medical images but it is known that the resulting loss of fidelity is visually imperceptible. Approach: Clinical assessment will objectively evaluate the distortion on medical images to see whether or not medical diagnosis is altered. We used 75 medical images consisting of x-rays, ultrasound and CT scans. Digital watermarking was inserted in ROI and ROI/Region Of Non Interest (RONI) in all of them. Three assessors were randomly assigned 225 images, each receiving 75, a mixture of watermarked and non watermarked images. Results: Chi square test was used and p<0.05 was considered significant. There was no significant difference between original images and those watermarked in ROI or ROI/RONI. There was no comment on image quality in all the images assessed. Conclusion/Recommendations: Digital watermarking does not alter medical diagnosis when assessed by clinical radiologists. The quality of the watermarked images was also unchanged.
---
paper_title: Hybrid watermarking of medical images for ROI authentication and recovery
paper_content:
Medical image data require strict security, confidentiality and integrity. To achieve these stringent requirements, we propose a hybrid watermarking method which embeds a robust watermark in the region of non-interest (RONI) for achieving security and confidentiality, while integrity control is achieved by inserting a fragile watermark into the region of the interest (ROI). First the information to be modified in ROI is separated and is inserted into RONI, which later is used in recovery of the original ROI. Secondly, to avoid the underflow and overflow, a location map is generated for embedding the watermark block-wise by leaving the suspected blocks. This avoids the preprocessing step of histogram modification. The image visual quality, as well as tamper localization, is evaluated. We use weighted peak signal to noise ratio for measuring image quality of watermarked images. Experimental results show that the proposed method outperforms the existing hybrid watermarking techniques.
---
paper_title: Combination Independent Content Feature with Watermarking Annotation for Medical Image Retrieval
paper_content:
The number of medical images produced in cardiology, radiology and other departments is rising strongly. New challenges arise in efficient medical data management, such as information retrieval and security. A novel method combining watermarking annotation with an independent content feature (ICF) for medical image retrieval is proposed. The ICF is extracted by independent component analysis (ICA) to represent medical images, and a digital watermark carrying the patient's information text is imperceptibly embedded. Experimental results show that the scheme employs locally salient information from the medical image, that it has good retrieval performance, and that the watermarking algorithm used in the scheme is robust to JPEG compression.
---
paper_title: Data hiding in binary image for authentication and annotation
paper_content:
This paper proposes a new method to embed data in binary images, including scanned text, figures, and signatures. The method manipulates "flippable" pixels to enforce specific block-based relationship in order to embed a significant amount of data without causing noticeable artifacts. Shuffling is applied before embedding to equalize the uneven embedding capacity from region to region. The hidden data can be extracted without using the original image, and can also be accurately extracted after high quality printing and scanning with the help of a few registration marks. The proposed data embedding method can be used to detect unauthorized use of a digitized signature, and annotate or authenticate binary documents. The paper also presents analysis and discussions on robustness and security issues.
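A much-simplified relative of the block-based embedding above is to force the parity of black pixels in each block to equal the payload bit, flipping one pixel when needed. The sketch below illustrates only that basic idea; the flippability scoring, shuffling and registration marks that make the paper's method imperceptible and robust are omitted, and the pixel chosen for flipping here is deliberately naive.

```python
import numpy as np

def embed_bit_in_block(block: np.ndarray, bit: int) -> np.ndarray:
    """Make the count of 1-pixels in a binary block (0/1 integers) have parity `bit`."""
    out = block.copy()
    if int(out.sum()) % 2 != bit:
        out.flat[0] ^= 1   # a real scheme would flip the least noticeable ("flippable") pixel
    return out

def extract_bit_from_block(block: np.ndarray) -> int:
    """The hidden bit is simply the parity of 1-pixels in the block."""
    return int(block.sum()) % 2
```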
---
paper_title: Tamper Detection and Recovery for Medical Images Using Near-lossless Information Hiding Technique
paper_content:
Digital medical images are very easy to modify for illegal purposes. For example, microcalcification in mammography is an important diagnostic clue, and it can be wiped off intentionally for insurance purposes or added intentionally to a normal mammogram. In this paper, we propose two methods for tamper detection and recovery of a medical image. A 1024 × 1024 x-ray mammogram was chosen to test the ability of tamper detection and recovery. First, the medical image is divided into several blocks. For each block, an adaptive robust digital watermarking method combined with the modulo operation is used to hide both the authentication message and the recovery information. In the first method, each block is embedded with the authentication message and the recovery information of other blocks. Because the recovered block is too small and excessively compressed, the concept of region of interest (ROI) is introduced in the second method. If there are no tampered blocks, the original image can be obtained with only the stego image. When the ROI, such as a microcalcification in mammography, is tampered with, an approximate image will be obtained from other blocks. The experimental results show that the proposed near-lossless method effectively detects a tampered medical image and recovers the original ROI image. In this study, an adaptive robust digital watermarking method combined with the modulo-256 operation was chosen to achieve information hiding and image authentication. With the proposed method, any random change to the stego image will be detected with high probability.
---
paper_title: Data security in medical information system
paper_content:
Database systems use mechanisms for granting and revoking privileges and for authorization control to ensure the security of data. However, some users, such as administrators or persons in charge of security, have access to the comprehensive content of the database, including patient information that is of an intimate nature and must remain confidential. To enhance security and prevent illegitimate access by such users, we propose in this article a mechanism using a content-based watermarking technique. The patient's information is encrypted and inserted into an image associated with it. This image, using the facilities of object-relational and object-oriented databases, is directly integrated into the database. To check the integrity of the image we use the edge map and invariant moments.
---
paper_title: Multiple block based authentication watermarking for distribution of medical images
paper_content:
To provide an effective authentication method for distribution of medical images, we propose a new fragile watermarking method utilizing segmentation information of medical image contents. For medical image modalities like CT, MRI and PET, tissue structures contain a significant amount of clinical information. Hence, it is important to provide an authenticity check of the segmented blocks. The proposed method is based on cryptographically secure fragile watermarking but eliminates the problem of the block-wise independency of existing methods. The vector quantization (VQ) counterfeiting attack is a known vulnerability due to the block-wise independency of existing methods. In our approach, multiple signatures from two different types of blocks are used to defy such attacks. More secure distribution of medical images can be achieved by embedding the watermark.
---
paper_title: Detection and Restoration of a Tampered Medical Image
paper_content:
This paper presents a recoverable image tamper proofing technique using the symmetric key cryptosystem and vector quantization for detecting and restoring of a tampered medical image. Our scheme applies the one-way hashing function and the symmetric key cryptosystem to the host image to generate the verification data. To recover the tampered places, the host image is compressed by vector quantization to generate the recovery data. Once an intruder has modified the host image, our scheme can detect and recover the tampered places according to the embedded verification data and the recovery data. Besides, the proposed scheme can withstand the counterfeit attack.
---
paper_title: Tamper detection with self-correction hybrid spatial-DCT domains image authentication technique
paper_content:
The development of effective image authentication techniques is of remarkably growing interest. Some recently developed fragile, semi-fragile/robust or hybrid watermarking algorithms not only verify the authenticity of the watermarked image but also provide self-reconstruction capabilities. However, several algorithms have been reported as vulnerable to various attacks, especially blind pattern matching attacks, and thus offer insufficient security. We propose a new blind dual-domain self-embedding watermarking scheme with a more secure embedding process for the image blocks' fragile signatures and robust approximations, and with more reliable detection of local alterations and auto-correction capabilities that survive normal content-preserving operations. Hence it prevents falsification, the real threat to authentication.
---
paper_title: Medical Image Authentication Using DPT Watermarking: A Preliminary Attempt
paper_content:
Secure authentication of digital medical image content provides great value to the e-Health community and medical insurance industries. Fragile watermarking has been proposed as a mechanism to authenticate digital medical images securely. Transform-domain watermarking is typically slower than spatial-domain watermarking owing to the overhead of calculating coefficients. In this paper, we propose a new Discrete Pascal Transform based watermarking technique. Preliminary experimental results show its authentication capability. Possible improvements to the proposed scheme are also presented before the conclusions.
---
paper_title: Digital watermarking of medical image using ROI information
paper_content:
Recently, medical images have been digitized thanks to developments in computer science and the digitization of medical devices. Database services and long-term storage for medical images are needed owing to the construction of PACS (Picture Archiving and Communication System) following the DICOM (Digital Imaging and Communications in Medicine) standard, telemedicine, and related applications. Furthermore, authentication and copyright protection are required to guard against illegal distortion and reproduction of medical information data. In this paper, we propose a digital watermarking technique for medical images that prevents illegal forgery which can occur after medical image data are transmitted remotely. A wrong diagnosis may occur if the watermark is embedded into the whole area of the image. Therefore, to increase invisibility, we embed the watermark into an area of the medical image outside the decision area used for diagnosis, the so-called region of interest (ROI) in this paper. The watermark is the bit-plane value of the wavelet transform of the decision area, used as a method of integrity verification. The experimental results show that the watermark embedded by the proposed algorithm can successfully survive image processing operations such as JPEG lossy compression.
---
paper_title: A novel watermarking technique for medical image authentication
paper_content:
Medical images are stored in PACS (picture archiving and communication systems) that are accessed over the intranet by radiologists for diagnosis. These days the trend is shifting towards a Web based interface for accessing PACS (image) data. This calls for thorough security measures in the information system of the hospital to ensure integrity of medical image data that is being transferred over the public network. The paper analyses various watermarking techniques with a perspective of applying them to medical images stored on the PACS. It discusses the applicability of invertible watermarking technique for ensuring integrity of medical images. Any modification to the watermarked DICOM (digital imaging and communications in medicine) image can be detected with high reliability using invertible fragile watermarking system. A unique content based digital signature can be generated from the image data (pixel data) which would be embedded inside the image in an imperceptible way without increasing the data size that need to be transferred. This signature can be extracted at the radiologist viewer work stations and used for the authentication while the modified pixel data is restored back to original if the image is found to be authentic. This kind of distortion free (erasable) embedding procedure would ensure image retrieval without any modification to pixel data after the authentication process that caters to the unique need of medical images for diagnosis
---
paper_title: Survey of Medical Image Watermarking Algorithms
paper_content:
Watermarking in medical images is a new area of research and some works in this area have been reported worldwide recently. Most of the works concern tamper detection in the images and embedding of Electronic Patient Record (EPR) data in the medical images. Watermarked medical images can be used for transmission, storage or telediagnosis. Tamper detection watermarks are useful for locating the regions in the image where manipulations have been made. EPR data hiding in images improves the confidentiality of patient data, saves memory storage space and reduces the bandwidth requirement for transmission of images. This paper discusses various aspects of medical image watermarking and reviews various watermarking algorithms originally proposed for medical images.
---
paper_title: Security Protection of DICOM Medical Images Using Dual-Layer Reversible Watermarking with Tamper Detection Capability
paper_content:
Teleradiology applications and universal availability of patient records using web-based technology are rapidly gaining importance. Consequently, digital medical image security has become an important issue when images and their pertinent patient information are transmitted across public networks, such as the Internet. Health mandates such as the Health Insurance Portability and Accountability Act require healthcare providers to adhere to security measures in order to protect sensitive patient information. This paper presents a fully reversible, dual-layer watermarking scheme with tamper detection capability for medical images. The scheme utilizes concepts of public-key cryptography and reversible data-hiding technique. The scheme was tested using medical images in DICOM format. The results show that the scheme is able to ensure image authenticity and integrity, and to locate tampered regions in the images.
---
paper_title: Reversible medical image watermarking for tamper detection and recovery
paper_content:
This research paper discusses the use of watermarking in medical images to ensure the authenticity and integrity of the image, and reviews some watermarking schemes that have been developed. A design for a reversible tamper detection and recovery watermarking scheme is then proposed. The watermarking scheme uses a 640x480x8-bit ultrasound grayscale image as a sample. The concepts of ROI (Region Of Interest) and RONI (Region Of Non-Interest) are applied. The embedded watermark can be used to detect tampering, and recovery of the image can be carried out. The watermark is also reversible.
---
paper_title: Medical image integrity control seeking into the detail of the tampering
paper_content:
In this paper, we propose a system which aims at verifying integrity of medical images. It not only detects and localizes alterations, but also seeks into the details of the image modification to understand what occurred. For that latter purpose, we developed an image signature which allows our system to approximate modifications by a simple model, a door function of similar dimensions. This signature is partly based on a linear combination of the DCT coefficients of pixel blocks. Protection data is attached to the image by watermarking. Whence, image integrity verification is conducted by comparing this embedded data to the recomputed one from the observed image. Experimental results with malicious image modification illustrate the overall performances of our system.
---
paper_title: Authentication of digital medical images with digital signature technology.
paper_content:
PURPOSE: To determine whether digital signature technology (DST) can authenticate digital medical images to the same level of authenticity required for interbank electronic transfer of funds. MATERIALS AND METHODS: Message digests were computed for two magnetic resonance images that differed only by the value of a single bit. RSA (Rivest, Shamir, and Adleman) public key cryptography was used to encrypt each message digest to form a digital signature for each image, a process analogous to the established use of RSA DST for electronic funds transfer. The process was then reversed to authenticate the original image from its digital signature. RESULTS: Although the images differed by less than 0.000095%, their message digests differed at 94% of their characters. The digital signature of the original image proved that it was authentic and that the altered image was not authentic. CONCLUSION: RSA DST can establish the authenticity of images to at least the level of confidence required for interbank electronic transfer of funds.
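The single-bit experiment above is easy to reproduce with a modern hash function; the sketch below uses SHA-256 on a synthetic image as an illustrative stand-in (the paper predates SHA-256 and does not specify this setup). In the paper's workflow the resulting digest would then be encrypted with the signer's RSA private key to form the signature.

```python
import hashlib
import numpy as np

pixels = np.zeros((256, 256), dtype=np.uint8)        # stand-in for an MR image
altered = pixels.copy()
altered[128, 128] ^= 1                                # flip a single bit of a single pixel

d_orig = hashlib.sha256(pixels.tobytes()).hexdigest()
d_alt = hashlib.sha256(altered.tobytes()).hexdigest()
changed = sum(a != b for a, b in zip(d_orig, d_alt))
print(f"digests equal: {d_orig == d_alt}; {changed}/64 hex characters differ")
```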
---
paper_title: Medical image security and EPR hiding using Shamir's secret sharing scheme
paper_content:
Medical applications such as telediagnosis require information exchange over insecure networks. Therefore, protection of the integrity and confidentiality of the medical images is an important issue. Another issue is to store electronic patient record (EPR) in the medical image by steganographic or watermarking techniques. Studies reported in the literature deal with some of these issues but not all of them are satisfied in a single method. A medical image is distributed among a number of clinicians in telediagnosis and each one of them has all the information about the patient's medical condition. However, disclosing all the information about an important patient's medical condition to each of the clinicians is a security issue. This paper proposes a (k, n) secret sharing scheme which shares medical images among a health team of n clinicians such that at least k of them must gather to reveal the medical image to diagnose. Shamir's secret sharing scheme is used to address all of these security issues in one method. The proposed method can store longer EPR strings along with better authenticity and confidentiality properties while satisfying all the requirements as shown in the results.
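To make the (k, n) sharing step concrete, below is a compact illustrative sketch of Shamir's scheme over a large prime field. The paper applies the construction to image data and EPR strings, whereas this toy version shares a single integer; the chosen prime, the use of `random` instead of a cryptographically secure source, and the function names are simplifying assumptions.

```python
import random

PRIME = 2**127 - 1  # a Mersenne prime; the secret must be smaller than this

def make_shares(secret: int, k: int, n: int) -> list[tuple[int, int]]:
    """Split `secret` into n shares; any k of them reconstruct it."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]
    def f(x: int) -> int:
        acc = 0
        for c in reversed(coeffs):        # Horner evaluation of the random polynomial
            acc = (acc * x + c) % PRIME
        return acc
    return [(i, f(i)) for i in range(1, n + 1)]

def reconstruct(shares: list[tuple[int, int]]) -> int:
    """Lagrange interpolation at x = 0 using a sufficient subset of shares."""
    secret = 0
    for j, (xj, yj) in enumerate(shares):
        num, den = 1, 1
        for m, (xm, _) in enumerate(shares):
            if m != j:
                num = (num * -xm) % PRIME
                den = (den * (xj - xm)) % PRIME
        secret = (secret + yj * num * pow(den, -1, PRIME)) % PRIME
    return secret

# Example: shares = make_shares(42, k=3, n=5); reconstruct(shares[:3]) == 42
```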
---
paper_title: A Semi-Reversible Watermark for Medical Image Authentication
paper_content:
This paper addresses the secure storage and transmission of medical informatics in rapidly growing applications such as teleradiology and telesurgery. In particular, we propose a frequency domain digital watermarking technique that can be used to authenticate medical images in a distributed diagnosis and home healthcare environment. The most significant result of our method is the semi-reversibility property that undoes most of the degradation attributable to the watermarking process
---
paper_title: Text Fusion Watermarking in Medical Image with Semi-reversible for Secure Transfer and Authentication
paper_content:
The transmission of digitized medical information has become very convenient owing to the ubiquity of the Internet, which allows patient information to be transmitted efficiently. However, it is also easier for attackers to intercept or duplicate digitized information on the Internet, which raises problems of medical security and copyright protection. To address the security and convenience needs of patients, goals such as the prevention of medical error, real-time detection of abnormal events, support for clinical decisions and the development of patient-centred medical services have to be achieved. For this purpose, this paper proposes a binary embedding technique, in which the binary information of the text is embedded in the image with semi-reversible properties. This technique prevents distortion of the embedded information in the original image due to the addition of noise. The semi-reversible property is used to retain the original information, and the image quality before and after the watermarking process is presented as a statistical analysis.
---
paper_title: Authentication and protection for medical image
paper_content:
This paper proposes a method to authenticate and protect medical images, especially the region of interest (ROI). The ROI of a medical image is an important area for diagnosis and must not be distorted during transmission and storage in the hospital information system. The rest of the image is used for embedding a watermark that contains the patient's data and authentication information; the authentication information is generated from the ROI by analyzing wavelet coefficients with singular value decomposition (SVD) before the watermark is embedded into the discrete wavelet transform (DWT) sub-band. It is important that the ratio of ROI and non-ROI areas is consistent between different systems and doctors, and this ratio is analyzed in this paper. The ROI of the watermarked medical image is fragile to any distortion, and the patient's data and authentication information can easily be extracted from the non-ROI. The effectiveness of the new approach is demonstrated empirically.
---
paper_title: A Novel Technique for EPR Hiding in Medical Images for Telemedicine
paper_content:
Medical image data hiding has strict constraints such as high imperceptibility, high capacity and high robustness. Achieving these three requirements simultaneously is difficult. Though some works on data hiding, watermarking and steganography suitable for telemedicine applications are reported in the literature, none performs best in all aspects. Electronic Patient Record (EPR) data hiding for telemedicine demands a blind and reversible method. This paper proposes a novel approach to blind reversible data hiding based on the integer wavelet transform. Experimental results show that this scheme outperforms the prior art in terms of zero BER (Bit Error Rate), higher PSNR (Peak Signal-to-Noise Ratio), and large EPR data embedding capacity with WPSNR (Weighted Peak Signal-to-Noise Ratio) around 53 dB, compared with existing reversible data hiding schemes.
---
paper_title: Lossless Watermarking in JPEG2000 for EPR Data Hiding
paper_content:
With the advances in telemedicine, watermarking for Electronic Patient Record (EPR) data hiding has gained profound importance. Most of the techniques proposed for EPR data hiding are not compression tolerant. In this paper we propose a new lossless scheme wherein the watermarking is done during the JPEG2000 compression process so that both the watermark and the cover image can be recovered as such at the receiving side. The recovered image is the same as the decompressed image when no watermarking has been done. Keywords: JPEG2000, EPR, watermarking, bit-plane
---
paper_title: Medical image tamper approximation based on an image moment signature
paper_content:
In this paper we propose a medical image integrity verification system that not only detects and localizes an alteration but also provides an approximation of it. For that purpose, we suggest embedding an image signature or digest derived from the geometric moments of image pixel blocks. Image integrity verification is then conducted by comparing this embedded signature to the recomputed one. This signature helps to approximate modifications by determining the parameters of the nearest generalized 2D Gaussian. Experimental results with local image modifications illustrate the overall performance of our method.
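For reference, the raw geometric moments such a block signature can be built from are simple to compute; the helper below is an illustrative sketch (the paper's exact moment orders, normalization and quantization into a compact signature are not reproduced).

```python
import numpy as np

def geometric_moment(block: np.ndarray, p: int, q: int) -> float:
    """Raw geometric moment m_pq = sum_x sum_y x**p * y**q * I(x, y) of a pixel block."""
    rows, cols = np.indices(block.shape)
    return float(np.sum((cols.astype(np.float64) ** p) *
                        (rows.astype(np.float64) ** q) *
                        block.astype(np.float64)))

# A tiny per-block signature could then be, e.g., (m00, m10, m01) for each block.
```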
---
paper_title: Conception and limits of robust perceptual hashing: towards side information assisted hash functions
paper_content:
In this paper, we consider some basic concepts behind the design of existing robust perceptual hashing techniques for content identification. We show the limits of robust hashing from the communication perspectives as well as propose an approach that is able to overcome these shortcomings in certain setups. The consideration is based on both achievable rate and probability of error. We use the fact that most robust hashing algorithms are based on dimensionality reduction using random projections and quantization. Therefore, we demonstrate the corresponding achievable rate and probability of error based on random projections and compare with the results for the direct domain. The effect of dimensionality reduction is studied and the corresponding approximations are provided based on the Johnson-Lindenstrauss lemma. Side-information assisted robust perceptual hashing is proposed as a solution to the above shortcomings.
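The random-projection-plus-quantization construction analyzed above can be written in a few lines; the sketch below is a generic illustrative version of such a binary hash (signs of Gaussian random projections), not the side-information-assisted variant the paper proposes, and the seeding and zero-mean centering are assumptions.

```python
import numpy as np

def rp_hash(feature: np.ndarray, n_bits: int, seed: int) -> np.ndarray:
    """Binary perceptual hash: signs of random projections of a feature vector."""
    rng = np.random.default_rng(seed)
    projections = rng.standard_normal((n_bits, feature.size))
    centered = feature.astype(np.float64).ravel() - feature.mean()
    return (projections @ centered >= 0).astype(np.uint8)

# Content identification then compares hashes by Hamming distance, e.g.
# np.count_nonzero(rp_hash(a, 64, 7) != rp_hash(b, 64, 7))
```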
---
paper_title: Protecting the Exchange of Medical Images in Healthcare Process Integration with Web Services
paper_content:
There is now an increasing demand for sharing images and image data for process integration among healthcare institutions. Web services technology has recently been widely proposed and gradually adopted as a platform for supporting integrations. The Health Insurance Portability and Accountability Act (HIPAA) imposes rules as national standards to protect individuals' health information in the U.S., highlighting security and privacy protection requirements. There are so far no holistic solutions to tackle the various protection issues in this area, especially in inter-institutional healthcare progress integration. We propose the exchange of medical images through a medical image exchange platform (MIEP), especially replacing ad-hoc and manual exchange practices. We show how contemporary technologies of Web services and watermarking can help archive images with layered implementation architecture. Through dedicated Web services technologies with watermarking mechanism, not only can the development, deployment, and maintenance of software be streamlined, the privacy protection in medical image exchanges as required by disparate processes can also be much facilitated
---
paper_title: Watermarking Is Not Cryptography
paper_content:
A number of analogies to cryptographic concepts have been made about watermarking. In this paper, we argue that these analogies are misleading or incorrect, and highlight several analogies to support our argument. We believe that the fundamental role of watermarking is the reliable embedding and detection of information and should therefore be considered a form of communications. We note that the fields of communications and cryptography are quite distinct and while communications systems often combine technologies from the two fields, a layered architecture is applied that requires no knowledge of the layers above. We discuss how this layered approach can be applied to watermarking applications.
---
|
Title: A Review of Medical Image Watermarking Requirements for Teleradiology
Section 1: Introduction
Description 1: Introduce the background and significance of medical image watermarking in teleradiology, discussing the evolution from film-based to digital imaging and the security and privacy issues involved.
Section 2: Security and Privacy Standards
Description 2: Discuss the prevailing national and international standards and legislative rules for the security and privacy of medical information, such as ISO27799, HIPAA, and DICOM.
Section 3: Medical Information Security Requirements
Description 3: Outline the security requirements for medical information in teleradiology, focusing on confidentiality, integrity, and availability. Discuss the different models of teleradiology and their security domains.
Section 4: Expected Threats and Conventional Security Measures
Description 4: Identify potential security threats to medical images and the conventional measures (firewalls, VPNs, encryption, etc.) currently used to protect them.
Section 5: Limitations of the Existing Security Measures
Description 5: Highlight the limitations of existing conventional security measures in adequately safeguarding medical images and information.
Section 6: Digital Watermarking in Teleradiology
Description 6: Introduce the concept of digital watermarking, its components, and its potential benefits for enhancing security in teleradiology systems.
Section 7: Advantages of Digital Watermarking
Description 7: Explore the various advantages of digital watermarking for medical image applications, including enhanced security, privacy, avoidance of data detachment, and memory and bandwidth savings.
Section 8: Choice of Design and Evaluation Parameters
Description 8: Discuss the critical design and evaluation parameters for developing and validating digital watermarking schemes specifically for medical images.
Section 9: Digital Watermarking Versus Other Security Measures/Tools
Description 9: Compare digital watermarking with other security measures and tools, emphasizing their relative strengths and weaknesses.
Section 10: Objectives and Applications of Watermarking for Medical Images
Description 10: Examine different objectives and specific applications of digital watermarking in medical imaging, such as origin/content authentication, EPR annotation, and tamper detection and recovery.
Section 11: Discussion and Conclusions
Description 11: Summarize the key findings of the study, discuss the implications for teleradiology, and suggest future research directions for improving security through digital watermarking.
|
RSS based Vertical Handoff algorithms for Heterogeneous wireless networks - A Review
| 7 |
---
paper_title: Performance evaluation framework for vertical handoff algorithms in heterogeneous networks
paper_content:
The next generation (4G) wireless network is envisioned as a convergence of different wireless access technologies providing the user with the best anywhere anytime connection and improving the system resource utilization. The integration of wireless local area network (WLAN) hotspots and the third generation (3G) cellular network has recently received much attention. While the 3G-network can provide global coverage with a low data-rate service, the WLAN can provide a high data-rate service within the hotspots. Although increasing the underlay network utilization is expected to increase the user available bandwidth, it may violate the quality-of-service (QoS) requirements of active real-time applications. Hence, achieving seamless handoff between different wireless technologies, known as vertical handoff (VHO), is a major challenge for 4G-system implementation. Several factors, such as application QoS requirements and handoff delay, should be considered to realize an application transparent handoff. We present a novel framework to evaluate the impact of VHO algorithm design on system resource utilization and user perceived QoS. We used this framework to compare the performance of two different VHO algorithms. The results show a very good match between simulation and analytical results. In addition, it clarifies the tradeoff between achieving high resource utilization and satisfying user QoS expectations.
---
paper_title: A Traveling Distance Prediction Based Method to Minimize Unnecessary Handovers from Cellular Networks to WLANs
paper_content:
We propose a handover decision method based on the prediction of traveling distance within an IEEE 802.11 wireless local area network (WLAN) cell. The method uses two thresholds which are calculated by the mobile terminal (MT) as it enters the WLAN cell. The predicted traveling distance is compared against these thresholds to make a handover decision in order to minimize the probability of handover failures or unnecessary handovers from a cellular network to a WLAN. Our analysis shows that the proposed method successfully keeps the number of failed or unnecessary handovers low.
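The decision rule below is an illustrative paraphrase of the idea described above: hand over to the WLAN only when the predicted traveling distance inside the cell exceeds both thresholds. How the thresholds are derived from terminal speed, handover latency and cell geometry is the paper's contribution and is not reproduced here; the names and units are assumptions.

```python
def should_handover_to_wlan(predicted_distance_m: float,
                            failure_threshold_m: float,
                            unnecessary_threshold_m: float) -> bool:
    """Hand over from the cellular network to the WLAN only when the mobile terminal
    is predicted to travel far enough inside the WLAN cell to make the handover worthwhile."""
    return predicted_distance_m >= max(failure_threshold_m, unnecessary_threshold_m)
```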
---
|
Title: RSS based Vertical Handoff algorithms for Heterogeneous wireless networks - A Review
Section 1: INTRODUCTION
Description 1: This section introduces the concept of vertical handoff in heterogeneous wireless networks, emphasizing the significance of RSS based algorithms and the challenges associated with vertical handoff decisions.
Section 2: RSS BASED VHD ALGORITHMS
Description 2: This section details the basics of RSS based VHD (Vertical Handoff) algorithms and introduces three specific algorithms explored in this paper.
Section 3: ALIVE-HO (adaptive lifetime based vertical handoff)
Description 3: This section elaborates on the ALIVE-HO algorithm, its operational methodology, and the scenarios in which it is applied, including its advantages and limitations.
Section 4: Algorithm on Adaptive RSS Threshold
Description 4: This section discusses the algorithm proposed by Mohanty and Akyildiz, focusing on how it adapts the RSS threshold dynamically to reduce handoff failures and unnecessary handovers between WLAN and 3G networks.
Section 5: A Traveling Distance Prediction Based Method
Description 5: This section explains the traveling distance prediction-based method developed by Yan et al., which aims to minimize unnecessary handovers by considering the expected time a mobile terminal will spend within a WLAN cell.
Section 6: CONCLUSION
Description 6: This section provides a summary of the comparative analysis of the three RSS based vertical handoff algorithms, highlighting their respective strengths and weaknesses.
Section 7: FUTURE DIRECTIONS
Description 7: This section suggests potential future improvements to the discussed algorithms, including periodic sampling of RSS and refining estimations to enhance performance.
|
Network Layer Mobility: an Architecture and Survey
| 19 |
---
paper_title: Mobile Users: To Update or not to Update?
paper_content:
Tracking strategies for mobile users in wireless networks are studied. In order to save the cost of using the wireless links mobile users should not update their location whenever they cross boundaries of adjacent cells. This paper focuses on three natural strategies in which the mobile users make the decisions when and where to update: the time-based strategy, the number of movements-based strategy, and the distance-based strategy. We consider both memoryless movement patterns and movements with Markovian memory along a topology of cells arranged as a ring. We analyze the performance of each one of the three strategies under such movements, and show the performance differences between the strategies.
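As a small illustration of the distance-based strategy on a ring of cells, the check below triggers a location update only when the user has moved at least D cells (along the shorter way around the ring) from the last reported position. This is an illustrative sketch; the cost analysis and Markovian movement models studied in the paper are not reflected.

```python
def distance_based_update_needed(current_cell: int, last_reported_cell: int,
                                 ring_size: int, threshold_d: int) -> bool:
    """Distance-based tracking on a ring of `ring_size` cells: update only when the
    hop distance from the last reported cell reaches the threshold D."""
    hops = abs(current_cell - last_reported_cell)
    ring_distance = min(hops, ring_size - hops)   # shortest direction around the ring
    return ring_distance >= threshold_d
```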
---
|
Title: Network Layer Mobility: an Architecture and Survey
Section 1: Introduction
Description 1: Introduce the problem of network layer mobility, discuss the limitations of current Internet protocols, and outline the scope and approach of the paper.
Section 2: Internet Naming and Addressing
Description 2: Discuss the fundamentals of Internet architecture, focusing on the importance of unique network addresses for hosts and the hierarchical structure of addressing.
Section 3: Internet Addressing
Description 3: Provide details on IP addressing, discussing the significance of the network-id and host-id components, and the necessity of hierarchical addressing for scalable routing.
Section 4: Naming
Description 4: Explore the concept of host names as user-defined aliases, their translation to IP addresses via DNS, and the challenges posed by mobility.
Section 5: Mobility Problem: Directory Service View
Description 5: Analyze the impact of host mobility on static name-to-address bindings and the limitations of the DNS in a mobile environment.
Section 6: Mobility Problem: Internet View
Description 6: Explain how mobility disrupts IP addressing and routing, with a focus on why current Internet architectures fail to support mobile hosts without breaking active TCP connections.
Section 7: Network Layer Solution Architecture
Description 7: Describe a proposed network layer architecture that enables the integration of mobile end-systems within the Internet, along with key concepts like Mobile Host, Home Address, and Home Network.
Section 8: Two Tier Addressing
Description 8: Introduce and elaborate on the concept of two-tier addressing to resolve issues associated with dual-use of internet addresses.
Section 9: Architecture Components
Description 9: Detail the key components of the proposed architecture, including Forwarding Agent (FA), Location Directory (LD), and Address Translation Agent (ATA).
Section 10: Location Update Protocol
Description 10: Discuss the protocol required to keep the LD and its cached entries up-to-date and consistent to ensure reliable mobile host accessibility.
Section 11: Packet Forwarding Operation
Description 11: Illustrate the operation of packet forwarding with the inclusion of ATA and FA, detailing the address translation process and transport layer transparency.
Section 12: Address Translation Mechanisms
Description 12: Describe the methods of address translation, focusing on encapsulation and loose source routing (LSR).
Section 13: Mapping to candidate Mobile IP proposals
Description 13: Present various Mobile IP proposals and show how they can be mapped to the proposed network architecture, highlighting their unique design choices and mechanisms.
Section 14: Columbia Scheme
Description 14: Analyze the Columbia Scheme focused on campus environments and its design choices for addressing and forwarding mobility.
Section 15: Sony Scheme
Description 15: Review Sony's approach to mobile networking, including its unique address mapping tables and update mechanisms.
Section 16: LSR Scheme
Description 16: Examine the Loose Source Routing (LSR) Scheme and its use of IP routing options to support mobile hosts.
Section 17: Mobile IP working-group Proposal
Description 17: Discuss the proposal by the IETF Mobile IP working group, focusing on its near-term deployability and design considerations for location updates and address translation.
Section 18: IPv6 Mobility Proposal
Description 18: Summarize the proposal for mobility support in IPv6, highlighting its enhancements over IPv4 and the elimination of Foreign Agents.
Section 19: Summary
Description 19: Summarize the key points of the paper, discussing the proposed network layer architecture and how various Mobile IP proposals align with this architecture.
|
Network Big Data: A Literature Survey on Stream Data Mining
| 11 |
---
paper_title: Network Big Data:Present and Future
paper_content:
Network big data refers to the massive data generated by the interaction and fusion of the ternary human-machine-thing universe in cyberspace and available on the Internet. The growth of its scale and complexity exceeds the growth of hardware capacity characterized by Moore's law, which poses grand challenges to the architecture and the processing and computing capacity of contemporary IT systems, while at the same time presenting unprecedented opportunities for deeply mining and taking full advantage of the great value of network big data. It is therefore pressing to investigate the disciplinary issues and discover the common laws of network big data, and to further study the fundamental theory and basic approaches for dealing qualitatively or quantitatively with network big data. This paper analyzes the challenges caused by the complexity, uncertainty and emergence of network big data, and summarizes the major issues and research status of the awareness, representation, storage, management, mining, and social computing of network big data, as well as network data platforms and applications. It also looks ahead to the development trends of big data science, new modes and paradigms of data computing, new IT infrastructures, and data security and privacy.
---
paper_title: O(ε)-Approximation to physical world by sensor networks
paper_content:
To observe the complicated physical world with a WSN, the sensors in the WSN sense and sample data from the physical world. Currently, most of the existing work uses equi-frequency sampling (EFS) methods or EFS-based sampling methods for data acquisition in sensor networks. However, the accuracy of EFS and EFS-based sampling methods cannot be guaranteed in practice since the physical world usually varies continuously, and these methods do not support reconstruction of the monitored physical world. To overcome the shortcomings of EFS and EFS-based sampling methods, this paper focuses on designing physical-world-aware data acquisition algorithms to support O(ε)-approximation to the physical world for any ε ≥ 0. Two physical-world-aware data acquisition algorithms, based on Hermite and spline interpolation, are proposed in the paper. Both algorithms can adjust the sensing frequency automatically based on the changing trend of the physical world and the given ε. A thorough analysis of the performance of the algorithms is also provided, including the accuracy, the smoothness of the output curves, the error bounds for computing first and second derivatives, the number of sampling times, and the complexities of the algorithms. It is proven that the error bounds of the algorithms are O(ε) and the complexities of the algorithms are O(1/ε^{1/4}). Based on the new data acquisition algorithms, an algorithm for reconstructing the physical world is also proposed and analyzed. The theoretical analysis and experimental results show that all the proposed algorithms have high performance in terms of accuracy and energy consumption.
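As a rough illustration of adapting the sensing frequency to how fast the signal changes, the sketch below shrinks or grows the sampling interval from a crude curvature estimate. The actual algorithms in the paper use Hermite/spline interpolation with proven O(ε) error bounds, so the rule, constants and function names here are assumptions for exposition only:

```python
# Much-simplified sketch of physical-world-aware (adaptive-rate) sampling:
# sample faster where the signal bends sharply, slower where it is smooth.
# The curvature-based rule and all constants are illustrative assumptions.
import math

def adaptive_sampling(sense, t_end, eps, dt_min=0.1, dt_max=5.0):
    """sense(t) -> measured value; returns a list of (t, value) samples."""
    t, dt = 0.0, dt_min
    samples = [(t, sense(t))]
    while t < t_end:
        t = min(t + dt, t_end)
        samples.append((t, sense(t)))
        if len(samples) >= 3:
            (t0, y0), (t1, y1), (t2, y2) = samples[-3:]
            # crude change-of-slope estimate from the last three samples
            bend = abs((y2 - y1) / (t2 - t1) - (y1 - y0) / (t1 - t0))
            # larger bend -> smaller sensing interval (clamped to [dt_min, dt_max])
            dt = max(dt_min, min(dt_max, (eps / (bend + 1e-9)) ** 0.5))
    return samples

print(len(adaptive_sampling(math.sin, t_end=20.0, eps=0.01)))
```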
---
paper_title: Enabling fast prediction for ensemble models on data streams
paper_content:
Ensemble learning has become a common tool for data stream classification, being able to handle large volumes of stream data and concept drifting. Previous studies focus on building accurate prediction models from stream data. However, a linear scan of a large number of base classifiers in the ensemble during prediction incurs significant costs in response time, preventing ensemble learning from being practical for many real world time-critical data stream applications, such as Web traffic stream monitoring, spam detection, and intrusion detection. In these applications, data streams usually arrive at a speed of GB/second, and it is necessary to classify each stream record in a timely manner. To address this problem, we propose a novel Ensemble-tree (E-tree for short) indexing structure to organize all base classifiers in an ensemble for fast prediction. On one hand, E-trees treat ensembles as spatial databases and employ an R-tree like height-balanced structure to reduce the expected prediction time from linear to sub-linear complexity. On the other hand, E-trees can automatically update themselves by continuously integrating new classifiers and discarding outdated ones, well adapting to new trends and patterns underneath data streams. Experiments on both synthetic and real-world data streams demonstrate the performance of our approach.
---
paper_title: Long-Term Incremental Web-Supervised Learning of Visual Concepts via Random Savannas
paper_content:
The idea of using image and video data available in the World-Wide Web (WWW) as training data for classifier construction has received some attention in the past few years. In this paper, we present a novel incremental and scalable web-supervised learning system that continuously learns appearance models for image categories with heterogeneous appearances and improves these models periodically. Simply specifying the name of the concept that has to be learned initializes the proposed system, and there is no further supervision afterwards. Textual and visual information on web sites are used to filter out irrelevant and misleading training images. To obtain a robust, flexible, and updatable way of learning, a novel learning framework is presented that relies on clustering in order to identify visual subclasses before using an ensemble of random forests, called random savanna, for subclass learning. Experimental results demonstrate that the proposed web-supervised learning approach outperforms a support vector machine (SVM), while at the same time being simply parallelizable in the training and testing phases.
---
paper_title: Dynamic classifier ensemble model for customer classification with imbalanced class distribution
paper_content:
Customer classification is widely used in customer relationship management, including churn prediction, credit scoring, cross-selling and so on. In customer classification, an important yet challenging problem is the imbalance of the data distribution. In this paper, we combine ensemble learning with cost-sensitive learning, and propose a dynamic classifier ensemble method for imbalanced data (DCEID). For each test customer, it can adaptively select the more appropriate of two kinds of dynamic ensemble approach: dynamic classifier selection (DCS) and dynamic ensemble selection (DES). Meanwhile, new cost-sensitive selection criteria for DCS and DES are constructed respectively to improve the classification ability for imbalanced data. We apply this method to a credit scoring dataset in UCI and a real churn prediction dataset from a telecommunication company. The experimental results show that the classification performance of DCEID is not only better than some static ensemble methods such as weighted random forests and improved balanced random forests, but also better than the existing DCS and DES strategies.
---
paper_title: Classification and Novel Class Detection in Concept-Drifting Data Streams under Time Constraints
paper_content:
Most existing data stream classification techniques ignore one important aspect of stream data: arrival of a novel class. We address this issue and propose a data stream classification technique that integrates a novel class detection mechanism into traditional classifiers, enabling automatic detection of novel classes before the true labels of the novel class instances arrive. Novel class detection problem becomes more challenging in the presence of concept-drift, when the underlying data distributions evolve in streams. In order to determine whether an instance belongs to a novel class, the classification model sometimes needs to wait for more test instances to discover similarities among those instances. A maximum allowable wait time Tc is imposed as a time constraint to classify a test instance. Furthermore, most existing stream classification approaches assume that the true label of a data point can be accessed immediately after the data point is classified. In reality, a time delay Tl is involved in obtaining the true label of a data point since manual labeling is time consuming. We show how to make fast and correct classification decisions under these constraints and apply them to real benchmark data. Comparison with state-of-the-art stream classification techniques prove the superiority of our approach.
---
paper_title: An incremental learning vector quantization algorithm for pattern classification
paper_content:
Prototype classifiers have been studied for many years. However, few methods can realize incremental learning. On the other hand, most prototype classifiers need users to predetermine the number of prototypes; an improper prototype number might undermine the classification performance. To deal with these issues, in this paper we propose an online supervised algorithm named Incremental Learning Vector Quantization (ILVQ) for classification tasks. The proposed method has three contributions. (1) By designing an insertion policy, ILVQ incrementally learns new prototypes, including both between-class incremental learning and within-class incremental learning. (2) By employing an adaptive threshold scheme, ILVQ automatically learns the number of prototypes needed for each class dynamically according to the distribution of the training data. Therefore, unlike most current prototype classifiers, ILVQ needs no prior knowledge of the number of prototypes or their initial values. (3) A technique for removing useless prototypes is used to eliminate noise introduced into the input data. Results of experiments show that the proposed ILVQ can accommodate the incremental data environment and provide good recognition performance and storage efficiency.
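A simplified sketch of the insertion/attraction logic is given below. It captures between-class and within-class insertion with an adaptive threshold, but the threshold rule and class layout are assumptions for illustration and do not reproduce the published ILVQ algorithm:

```python
# Simplified incremental prototype learner in the spirit of ILVQ (illustration
# only): insert a prototype when the nearest same-class prototype is farther
# than an adaptive threshold, otherwise pull that prototype toward the sample.
import numpy as np

class IncrementalPrototypes:
    def __init__(self, lr=0.05):
        self.protos = []   # list of prototype vectors
        self.labels = []   # class label of each prototype
        self.lr = lr

    def partial_fit(self, x, y):
        same = [i for i, lab in enumerate(self.labels) if lab == y]
        if not same:                               # between-class insertion
            self.protos.append(x.copy()); self.labels.append(y); return
        d = [np.linalg.norm(x - self.protos[i]) for i in same]
        nearest = same[int(np.argmin(d))]
        # adaptive threshold: mean pairwise distance among this class's
        # prototypes (an assumption standing in for ILVQ's threshold scheme)
        thr = (np.mean([np.linalg.norm(self.protos[a] - self.protos[b])
                        for a in same for b in same if a < b])
               if len(same) > 1 else min(d))
        if min(d) > thr:                           # within-class insertion
            self.protos.append(x.copy()); self.labels.append(y)
        else:                                      # classic LVQ attraction step
            self.protos[nearest] += self.lr * (x - self.protos[nearest])

    def predict(self, x):
        d = [np.linalg.norm(x - p) for p in self.protos]
        return self.labels[int(np.argmin(d))]

clf = IncrementalPrototypes()
for x, y in [([0, 0], 0), ([0.1, 0.1], 0), ([5, 5], 1), ([5.2, 4.9], 1)]:
    clf.partial_fit(np.array(x, dtype=float), y)
print(clf.predict(np.array([4.8, 5.1])))   # expected: 1
```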
---
paper_title: A Semi-supervised Ensemble Approach for Mining Data Streams
paper_content:
There are many challenges in mining data streams, such as infinite length, evolving nature and lack of labeled instances. Accordingly, a semi-supervised ensemble approach for mining data streams is presented in this paper. Data streams are divided into data chunks to deal with the infinite length. An ensemble classification model E is trained with existing labeled data chunks and a decision boundary is constructed using E for detecting novel classes. New labeled data chunks are used to update E while unlabeled ones are used to construct unsupervised models. Classes are predicted by a semi-supervised model Ex, which consists of E and the unsupervised models, in a maximization-consensus manner, so better performance can be achieved by using the constraints from unsupervised models with limited labeled instances. Experiments with different datasets demonstrate that our method outperforms conventional methods in mining data streams.
---
paper_title: Exponentially Weighted Moving Average Charts for Detecting Concept Drift
paper_content:
Classifying streaming data requires the development of methods which are computationally efficient and able to cope with changes in the underlying distribution of the stream, a phenomenon known in the literature as concept drift. We propose a new method for detecting concept drift which uses an Exponentially Weighted Moving Average (EWMA) chart to monitor the misclassification rate of a streaming classifier. Our approach is modular and can hence be run in parallel with any underlying classifier to provide an additional layer of concept drift detection. Moreover, our method is computationally efficient with overhead O(1) and works in a fully online manner with no need to store data points in memory. Unlike many existing approaches to concept drift detection, our method allows the rate of false positive detections to be controlled and kept constant over time.
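A minimal EWMA monitor of the error stream might look like the sketch below. The warm-up gate and the control-limit constant L are illustrative; the paper derives limits that hold the false-positive rate constant:

```python
# Minimal EWMA drift monitor (illustration): track the stream of 0/1
# classification errors and flag drift when the EWMA of the error rate rises
# above the baseline rate by L standard deviations of the EWMA statistic.

class EWMADriftMonitor:
    def __init__(self, lam=0.2, L=3.0, warmup=30):
        self.lam, self.L, self.warmup = lam, L, warmup
        self.p = 0.0    # running estimate of the baseline error rate
        self.z = 0.0    # EWMA of the error indicator
        self.n = 0

    def add_error(self, err):
        """err = 1 if the classifier misclassified the instance, else 0."""
        self.n += 1
        self.p += (err - self.p) / self.n                  # incremental mean
        self.z = (1 - self.lam) * self.z + self.lam * err  # EWMA update
        var = self.p * (1 - self.p) * self.lam / (2 - self.lam)
        return self.n > self.warmup and self.z > self.p + self.L * var ** 0.5

monitor = EWMADriftMonitor()
stream = ([0] * 9 + [1]) * 20 + [1] * 30   # ~10% errors, then a sudden jump
print(any(monitor.add_error(e) for e in stream))   # True: drift is flagged
```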
---
paper_title: Mining time-changing data streams
paper_content:
Most statistical and machine-learning algorithms assume that the data is a random sample drawn from a stationary distribution. Unfortunately, most of the large databases available for mining today violate this assumption. They were gathered over months or years, and the underlying processes generating them changed during this time, sometimes radically. Although a number of algorithms have been proposed for learning time-changing concepts, they generally do not scale well to very large databases. In this paper we propose an efficient algorithm for mining decision trees from continuously-changing data streams, based on the ultra-fast VFDT decision tree learner. This algorithm, called CVFDT, stays current while making the most of old data by growing an alternative subtree whenever an old one becomes questionable, and replacing the old with the new when the new becomes more accurate. CVFDT learns a model which is similar in accuracy to the one that would be learned by reapplying VFDT to a moving window of examples every time a new example arrives, but with O(1) complexity per example, as opposed to O(w), where w is the size of the window. Experiments on a set of large time-changing data streams demonstrate the utility of this approach.
---
paper_title: Density Based Distribute Data Stream Clustering Algorithm
paper_content:
To solve the problem of clustering distributed data streams, the algorithm DB-DDSC (Density-Based Distributed Data Stream Clustering) is proposed. The algorithm consists of two stages. First, the concept of circular-points based on representative points is presented and an iterative algorithm is designed to find density-connected circular-points, generating a local model at each remote site. Second, an algorithm is designed to generate global clusters by combining the local models at the coordinator site. The DB-DDSC algorithm can find clusters of different shapes in a distributed data stream environment, avoids frequently sending data by using a test-update algorithm, and reduces data transmission. The experiments show that the DB-DDSC algorithm is feasible and scalable.
---
paper_title: Online clustering of parallel data streams
paper_content:
In recent years, the management and processing of so-called data streams has become a topic of active research in several fields of computer science such as, e.g., distributed systems, database systems, and data mining. A data stream can roughly be thought of as a transient, continuously increasing sequence of time-stamped data. In this paper, we consider the problem of clustering parallel streams of real-valued data, that is to say, continuously evolving time series. In other words, we are interested in grouping data streams the evolution over time of which is similar in a specific sense. In order to maintain an up-to-date clustering structure, it is necessary to analyze the incoming data in an online manner, tolerating not more than a constant time delay. For this purpose, we develop an efficient online version of the classical K-means clustering algorithm. Our method's efficiency is mainly due to a scalable online transformation of the original data which allows for a fast computation of approximate distances between streams.
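The core of a sequential (online) K-means update over stream windows can be sketched as follows; the scalable online transformation of the raw series that the paper applies before computing distances is omitted, so this is only an illustration of the update rule:

```python
# Sequential (MacQueen-style) K-means over arriving stream windows: assign each
# window to the nearest centroid and move that centroid with a per-cluster
# decreasing step size. Initialization and parameters are illustrative.
import numpy as np

def online_kmeans(windows, k, rng=np.random.default_rng(0)):
    """windows: iterable of equal-length 1-D arrays (one per stream update)."""
    centroids, counts = None, None
    for w in windows:
        w = np.asarray(w, dtype=float)
        if centroids is None:
            # seed all centroids near the first window with a little noise
            centroids = np.tile(w, (k, 1)) + rng.normal(scale=1e-3, size=(k, w.size))
            counts = np.zeros(k)
        j = int(np.argmin(np.linalg.norm(centroids - w, axis=1)))
        counts[j] += 1
        centroids[j] += (w - centroids[j]) / counts[j]   # running-mean update
    return centroids

phases = np.linspace(0, 3, 60)
streams = [np.sin(np.linspace(0, 2 * np.pi, 32) + p) for p in phases]
print(online_kmeans(streams, k=3).shape)   # (3, 32)
```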
---
paper_title: Wavelet Synopsis Based Clustering of Parallel Data Streams
paper_content:
In many real-life applications, such as stock markets, network monitoring, and sensor networks, data are modeled as dynamic evolving time series which are continuous and unbounded in nature, and many such data streams usually occur concurrently. Clustering is useful in analyzing such parallel data streams. This paper is concerned with grouping these evolving data streams. For this purpose, a synopsis is maintained dynamically for each data stream. The construction of the synopsis is based on the Discrete Wavelet Transform and utilizes the amnesic feature of data streams. By using the synopsis, a fast computation of approximate distances between streams and the cluster center can be implemented, and an efficient online version of the classical K-means clustering algorithm is developed. Experiments have proved the effectiveness of the proposed method.
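For illustration, a generic Haar-wavelet synopsis that keeps only the largest-magnitude coefficients is sketched below; because the Haar transform is orthonormal, Euclidean distances between such synopses approximate distances between the original series. This is not the amnesic synopsis construction of the paper, only the basic DWT-and-truncate idea it builds on:

```python
# Generic Haar-wavelet synopsis of a window: compute the orthonormal Haar
# transform level by level, then keep only the largest-magnitude coefficients.
import numpy as np

def haar_synopsis(x, keep: int):
    x = np.asarray(x, dtype=float)
    assert x.size and (x.size & (x.size - 1)) == 0, "length must be a power of 2"
    coeffs, approx = [], x
    while approx.size > 1:
        even, odd = approx[0::2], approx[1::2]
        coeffs.append((even - odd) / np.sqrt(2))   # detail coefficients
        approx = (even + odd) / np.sqrt(2)          # running approximation
    coeffs.append(approx)                           # final scaling coefficient
    flat = np.concatenate(coeffs)
    synopsis = np.zeros_like(flat)
    top = np.argsort(np.abs(flat))[-keep:]          # keep largest coefficients
    synopsis[top] = flat[top]
    return synopsis

a = haar_synopsis(np.sin(np.linspace(0, 6, 64)), keep=8)
b = haar_synopsis(np.cos(np.linspace(0, 6, 64)), keep=8)
print(float(np.linalg.norm(a - b)))                 # approximate distance
```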
---
paper_title: A Framework for Clustering Uncertain Data Streams
paper_content:
In recent years, uncertain data management applications have grown in importance because of the large number of hardware applications which measure data approximately. For example, sensors are typically expected to have considerable noise in their readings because of inaccuracies in data retrieval, transmission, and power failures. In many cases, the estimated error of the underlying data stream is available. This information is very useful for the mining process, since it can be used in order to improve the quality of the underlying results. In this paper we will propose a method for clustering uncertain data streams. We use a very general model of the uncertainty in which we assume that only a few statistical measures of the uncertainty are available. We will show that the use of even modest uncertainty information during the mining process is sufficient to greatly improve the quality of the underlying results. We show that our approach is more effective than a purely deterministic method such as the CluStream approach. We will test the approach on a variety of real and synthetic data sets and illustrate the advantages of the method in terms of effectiveness and efficiency.
---
paper_title: Moment: maintaining closed frequent itemsets over a stream sliding window
paper_content:
This paper considers the problem of mining closed frequent itemsets over a sliding window using limited memory space. We design a synopsis data structure to monitor transactions in the sliding window so that we can output the current closed frequent itemsets at any time. Due to time and memory constraints, the synopsis data structure cannot monitor all possible itemsets. However, monitoring only frequent itemsets make it impossible to detect new itemsets when they become frequent. In this paper, we introduce a compact data structure, the closed enumeration tree (CET), to maintain a dynamically selected set of item-sets over a sliding-window. The selected itemsets consist of a boundary between closed frequent itemsets and the rest of the itemsets. Concept drifts in a data stream are reflected by boundary movements in the CET. In other words, a status change of any itemset (e.g., from non-frequent to frequent) must occur through the boundary. Because the boundary is relatively stable, the cost of mining closed frequent item-sets over a sliding window is dramatically reduced to that of mining transactions that can possibly cause boundary movements in the CET. Our experiments show that our algorithm performs much better than previous approaches.
---
paper_title: Verifying and Mining Frequent Patterns from Large Windows over Data Streams
paper_content:
Mining frequent itemsets from data streams has proved to be very difficult because of computational complexity and the need for real-time response. In this paper, we introduce a novel verification algorithm which we then use to improve the performance of monitoring and mining tasks for association rules. Thus, we propose a frequent itemset mining method for sliding windows which is faster than the state-of-the-art methods; in fact, its running time is nearly constant with respect to the window size, which enables the mining of much larger windows than was previously possible. The performance of other frequent itemset mining methods (including those on static data) can be improved likewise, by replacing their counting methods (e.g., those using hash trees) with our verification algorithm.
---
paper_title: Mining frequent itemsets in distributed and dynamic databases
paper_content:
Traditional methods for frequent itemset mining typically assume that data is centralized and static. Such methods impose excessive communication overhead when data is distributed, and they waste computational resources when data is dynamic. We present what we believe to be the first unified approach that overcomes these assumptions. Our approach makes use of parallel and incremental techniques to generate frequent itemsets in the presence of data updates without examining the entire database, and imposes minimal communication overhead when mining distributed databases. Further, our approach is able to generate both local and global frequent itemsets. This ability permits our approach to identify high-contrast frequent itemsets, which allows one to examine how the data is skewed over different sites.
---
paper_title: UDS-FIM: An Efficient Algorithm of Frequent Itemsets Mining over Uncertain Transaction Data Streams
paper_content:
In this paper, we study the problem of finding frequent itemsets in uncertain data streams. To the best of our knowledge, the existing algorithms cannot compress transaction itemsets into a tree as compact as the classical FP-Tree, so they need much time and memory space to process the tree. To address this issue, we propose an algorithm, UDS-FIM, and a tree structure, UDS-Tree. Firstly, UDS-FIM maintains the probability values of each transaction in an array; secondly, it compresses each transaction into a UDS-Tree in the same manner as an FP-Tree (so it is as compact as an FP-Tree) and maintains, at the corresponding tail-nodes, indices into the array of probability values; lastly, it mines frequent itemsets from the UDS-Tree without an additional scan of the transactions. The experimental results show that UDS-FIM achieves good performance under different experimental conditions in terms of runtime and memory consumption.
---
paper_title: Frequent Items Query Algorithm for Uncertain Sensing Data
paper_content:
With advances in technology, large amounts of streaming data can be generated continuously by sensors. Due to the inherent limitations of sensors, these continuous sensing data can be uncertain. This calls for stream mining of uncertain sensing data. Frequent items query is valuable in wireless sensor networks (WSNs) and can be widely used in environmental monitoring, association rules mining, and so on. A basic algorithm which continuously maintains sliding-window frequent items over WSNs is proposed. However, the basic algorithm needs to maintain all items in the window. Therefore, an improved algorithm is further proposed with optimization in two aspects: (1) pruning rules based on predicting the upper bound of item probabilities are developed, which reduce the candidate set and improve query efficiency; (2) the large number of identical items in different windows can be compressed by a cp-list structure in order to minimize memory utilization. Finally, experimental results and detailed analysis demonstrate the high efficiency and low memory cost of the proposed algorithms in WSNs.
---
paper_title: A Compression Algorithm for Multi-streams Based on GEP
paper_content:
This paper applies GEP-based methods to the compression of multi-streams. The contributions of this paper include: 1) giving an introduction to data function finding based on GEP (DFF-GEP), defining the main concepts of multi-streams, and revealing the mapping relations within them; 2) putting forward a compression algorithm for multi-streams according to the mapping relations that exist between data streams; and 3) providing experiments with real data which find that (3.1) the compression ratio of the new method is 120–150 times that of the traditional wavelet method, and 35–70 times that of the wavelet-and-coincidence method; (3.2) the relative error of the new method is about 3‰, with a maximum relative error of 0.01 under the traditional relative error standard, and the precision is improved from 7% to 15% as compared with the traditional method.
---
paper_title: Continuous privacy preserving publishing of data streams
paper_content:
Recently, privacy preserving data publishing has received a lot of attention in both research and applications. Most of the previous studies, however, focus on static data sets. In this paper, we study an emerging problem of continuous privacy preserving publishing of data streams which cannot be solved by any straightforward extensions of the existing privacy preserving publishing methods on static data. To tackle the problem, we develop a novel approach which considers both the distribution of the data entries to be published and the statistical distribution of the data stream. An extensive performance study using both real data sets and synthetic data sets verifies the effectiveness and the efficiency of our methods.
---
paper_title: Fast Clustering-Based Anonymization Algorithm for Data Streams
paper_content:
In order to prevent the disclosure of sensitive information and protect users' privacy, generalization and suppression techniques are often used to anonymize the quasi-identifiers of data before sharing. Data streams are inherently infinite and highly dynamic, which makes them very different from static datasets, so the anonymization of data streams needs to solve more complicated problems. The methods for anonymizing static datasets cannot be applied to data streams directly. In this paper, an anonymization approach for data streams is proposed based on an analysis of the published anonymization methods for data streams. This approach scans the data only once and recognizes and reuses the clusters that satisfy the anonymization requirements in order to speed up the anonymization process. Experimental results on a real dataset show that the proposed method can reduce the information loss caused by generalization and suppression while satisfying the anonymization requirements, and has low time and space complexity.
---
paper_title: Mining Method for Data Quality Detection Rules
paper_content:
Data quality rules are key to database quality detection. To discover data quality rules from relational databases automatically and detect erroneous or abnormal data based on them, the form and evaluation measures of data quality rules are studied, and criteria for computing data quality rules are presented based on data item groups and a confidence threshold. Algorithms for mining minimal data quality rules and the main idea of detecting data errors using data quality rules are also given. The new form of data quality rules makes use of the confidence mechanism of association rules and the expression of conditional functional dependencies to describe functional dependencies, conditional functional dependencies and association rules in the same format. It can be concluded that this kind of data quality rule has the properties of conciseness, objectivity, completeness and accuracy in detecting erroneous or abnormal data. Compared with other related research work, the proposed algorithms have lower time complexity, and the discovered quality rules improve the detection rate. The effectiveness and correctness of the proposed methods are proved by experiments.
---
|
Title: Network Big Data: A Literature Survey on Stream Data Mining
Section 1: INTRODUCTION
Description 1: Provide an introduction to the development of information technology, the significance of big data, and the importance and context of stream data mining in today's landscape.
Section 2: Characteristics of Stream Data Mining
Description 2: Discuss the unique characteristics of stream data mining in the environment of big data, such as usability, instantaneity, diversity of mode, multi-source heterogeneity, and high cognition.
Section 3: Challenges and Research Issues of Stream Data Mining
Description 3: Identify and elaborate on the key challenges and research issues faced in stream data mining under the big data setting.
Section 4: Research on the Pattern of Stream Data Mining
Description 4: Examine the patterns of stream data mining and propose new computational models to handle massive and high-dimensional data sets efficiently.
Section 5: Relative Issues of Stream Data Mining
Description 5: Explore various issues in stream data mining, such as efficient one-time stream data analysis, privacy mining, and mining in resource-constrained environments.
Section 6: RESEARCH PROGRESS IN BIG DATA-ORIENTED STREAM DATA MINING
Description 6: Provide the current state of research progress in big data-oriented stream data mining in several sub-fields, including classification, clustering, and frequent item mining.
Section 7: Stream Data Mining Based on Classification
Description 7: Detail methods of stream data classification, including integrated learning, incremental learning, and concept drift detection.
Section 8: Clustering Stream Data Mining
Description 8: Discuss various clustering methods and algorithms applied to stream data, addressing issues such as high-dimensional space and distributed computing environments.
Section 9: Stream Data Mining Based on Frequent Items
Description 9: Cover techniques for mining frequent items from stream data, such as transaction amount-based methods, time-related methods, and approximate methods.
Section 10: Other Mining Methods and Mining Quality Analysis
Description 10: Review additional stream data mining methods, such as data compression, privacy mining, and quality analysis, and how they contribute to improving the efficiency and effectiveness of stream data mining.
Section 11: CONCLUSION
Description 11: Summarize the main points of the paper, discuss the infancy of big data-oriented stream data mining, and suggest directions for future research.
|
Physical mechanism and modeling of heat generation and transfer in magnetic fluid hyperthermia through Néelian and Brownian relaxation: a review
| 16 |
---
paper_title: Induction of apoptotic cell DNA fragmentation in human cells after treatment with hyperthermia.
paper_content:
The biological significance of apoptosis is becoming increasingly clear. Its relevance in tumor response to treatment as well as recent evidence for its important function as a regulating mechanism in tumorigenesis has also been demonstrated. One of the most prominent biological features of apoptosis is nucleosomal DNA fragmentation. In this communication, we present a study of DNA fragmentation in Raji cells which have been subjected to hyperthermia treatment to induce apoptosis. We found that the induction and onset of fragmentation is swift, and consistent with previous reports that fragmentation must be a rapid event.
---
paper_title: The effects of 41°C hyperthermia on the DNA repair protein, MRE11, correlate with radiosensitization in four human tumor cell lines
paper_content:
Purpose: The goal of this study was to determine if reduced availability of the DNA repair protein, MRE11, for the repair of damaged DNA is a basis for thermal radiosensitization induced by moderate hyperthermia. To test this hypothesis, we measured the total amount of MRE11 DNA repair protein and its heat-induced alterations in four human tumor cell lines requiring different heating times at 41°C to induce measurable radiosensitization.Materials and methods: Human colon adenocarcinoma cell lines (NSY42129, HT29 and HCT15) and HeLa cells were used as the test system. Cells were irradiated immediately after completion of hyperthermia. MRE11 levels in whole cell extract, nuclear extract and cytoplasmic extracts were measured by Western blotting. The nuclear and cytoplasmic extracts were separated by TX100 solubility. The subcellular localization of MRE11 was determined by immunofluorescence staining.Results: The results show that for the human tumor cell lines studied, the larger the endogenous amount of MR...
---
paper_title: Thermal Therapy, Part III: Ablation Techniques
paper_content:
Ablative treatments are gaining increasing attention as an alternative to standard surgical therapies, especially for patients with contraindication or those who refuse open surgery. Thermal ablation is used in clinical applications mainly for treating heart arrhythmias, benign prostate hyperplasia, and nonoperable liver tumors; there is also increasing application to other organ sites, including the kidney, lung, and brain. Potential benefi ts of thermal ablation include reduced morbidity and mortality in comparison with standard surgical resection and the ability to treat nonsurgical patients. The purpose of this review is to outline and discuss the engineering principles and biological responses by which thermal ablation techniques can provide elevation of temperature in organs within the human body. Because of the individual problems associated with each type of treatment, a wide range of ablation techniques have evolved including cryoablation as well as ultrasound, radiofrequency (RF), microwave, and laser ablation. Aspects of each ablation technique, including mechanisms of action, equipment required, selection of eligible patients, treatment techniques, and patient outcomes are presented, along with a discussion of limitations of the techniques and future research directions.
---
paper_title: Magnetic nanoparticle hyperthermia enhances radiation therapy: A study in mouse models of human prostate cancer
paper_content:
Purpose: We aimed to characterise magnetic nanoparticle hyperthermia (mNPH) with radiation therapy (RT) for prostate cancer. Methods: Human prostate cancer subcutaneous tumours, PC3 and LAPC-4, were grown in nude male mice. When tumours measured 150 mm3, magnetic iron oxide nanoparticles (MIONPs) were injected into tumours to a target dose of 5.5 mg Fe/cm3 tumour, and treated 24 h later by exposure to an alternating magnetic field (AMF). Mice were randomly assigned to one of four cohorts to characterise (1) intratumour MIONP distribution, (2) effects of variable thermal dose mNPH (fixed AMF peak amplitude 24 kA/m at 160 ± 5 kHz) with/without RT (5 Gy), (3) effects of RT (RT5: 5 Gy; RT8: 8 Gy), and (4) fixed thermal dose mNPH (43 °C for 20 min) with/without RT (5 Gy). MIONP concentration and distribution were assessed following sacrifice and tissue harvest using inductively coupled plasma mass spectrometry (ICP-MS) and Prussian blue staining, respectively. Tumour growth was monitored and compared among ...
---
paper_title: Microcalorimetric study of the metabolism of U-937 cells undergoing apoptosis induced by the combined treatment of hyperthermia and chemotherapy
paper_content:
Hyperthermia is a useful adjunct in cancer therapy, as it can increase the effectiveness and decrease the toxicity of currently available cancer treatments such as chemotherapy and radiation. In this study we determined the power-time curves of U-937 cell line treated by the combination of hyperthermia and Carmofur by using an LKB 2277 Bioactivity Monitor. The maximal thermal power and the heat production were used to evaluate the antitumor effect. Our results show that the combined treatment of hyperthermia and Carmofur had a synergistic antitumor effect, which is consistent with the apoptosis ratio obtained by TUNEL assay. The results also indicate that the metabolic activity of apoptotic cells is lower than that of normal cells. Thus microcalorimetry is a powerful tool in fields of hyperthermia.
---
paper_title: Efficacy and safety of intratumoral thermotherapy using magnetic iron-oxide nanoparticles combined with external beam radiotherapy on patients with recurrent glioblastoma multiforme
paper_content:
Therapy options at the time of recurrence of glioblastoma multiforme are often limited. We investigated whether treatment with a new intratumoral thermotherapy procedure using magnetic nanoparticles improves survival outcome. In a single-arm study in two centers, 66 patients (59 with recurrent glioblastoma) received neuronavigationally controlled intratumoral instillation of an aqueous dispersion of iron-oxide (magnetite) nanoparticles and subsequent heating of the particles in an alternating magnetic field. Treatment was combined with fractionated stereotactic radiotherapy. A median dose of 30 Gy using a fractionation of 5 × 2 Gy/week was applied. The primary study endpoint was overall survival following diagnosis of first tumor recurrence (OS-2), while the secondary endpoint was overall survival after primary tumor diagnosis (OS-1). Survival times were calculated using the Kaplan–Meier method. Analyses were by intention to treat. The median overall survival from diagnosis of the first tumor recurrence among the 59 patients with recurrent glioblastoma was 13.4 months (95% CI: 10.6–16.2 months). Median OS-1 was 23.2 months while the median time interval between primary diagnosis and first tumor recurrence was 8.0 months. Only tumor volume at study entry was significantly correlated with ensuing survival (P < 0.01). No other variables predicting longer survival could be determined. The side effects of the new therapeutic approach were moderate, and no serious complications were observed. Thermotherapy using magnetic nanoparticles in conjunction with a reduced radiation dose is safe and effective and leads to longer OS-2 compared to conventional therapies in the treatment of recurrent glioblastoma.
---
paper_title: Magnetic properties and antitumor effect of nanocomplexes of iron oxide and doxorubicin
paper_content:
We present a technology and magneto-mechanical milling chamber for the magneto-mechano-chemical synthesis (MMCS) of magneto-sensitive complex nanoparticles (MNC) comprising Fe3O4 nanoparticles and the anticancer drug doxorubicin (DOXO). Magnetic properties of MNC were studied with a vibrating magnetometer and electron paramagnetic resonance. Under the influence of mechano-chemical treatment and MMCS, the complex shows a hysteresis curve, which is typical for soft ferromagnetic materials. We also demonstrate that Lewis lung carcinoma had a hysteresis loop typical for a weak soft ferromagnet, in contrast to the surrounding tissues, which were diamagnetic. The combined action of a constant magnetic field and radio frequency moderate inductive hyperthermia (RFH) below 40°C together with MNC was found to induce greater antitumor and antimetastatic effects as compared to conventional DOXO. Radiospectroscopy shows minimal activity of the FeS-protein electron transport chain of mitochondria, and an increase in the content of non-heme iron complexes with nitric oxide in the tumor tissues under the influence of RFH and MNC. From the Clinical Editor: This study reports on the top-down synthesis of magneto-sensitive complex nanoparticles comprised of Fe3O4 nanoparticles and doxorubicin. The authors also found that Lewis lung carcinoma had a hysteresis loop typical for a weak soft ferromagnet, in contrast to surrounding tissues, which were diamagnetic. The combined action of a constant magnetic field and radio-frequency-induced moderate hyperthermia produced antitumor and antimetastatic effects greater than conventional DOXO alone.
---
paper_title: Potential for therapy of drugs and hyperthermia.
paper_content:
The interaction of hyperthermia (41--45 degrees C) and chemotherapeutic agents frequently results in increased cytotoxicity over that predicted for an additive effect, although to date only a very limited number of drugs have been examined for such a possible interaction. At 42 degrees C, the upper limit of temperature useful for whole-body hyperthermia, the most promising agents of those examined to date appear to be the nitrosoureas and cis-platinum. Insufficient data exist for cyclophosphamide, whose long plasma half-life makes it an attractive candidate. Localized heating seems optimum at higher temperatures (43--45 degrees C). At these temperatures, not only those drugs effective at 42 degrees C but particularly bleomycin and possibly amphotericin B become candidates. No data exist in the literature on possible "thermic sensitizers," i.e., drugs which are noncytotoxic at 37 degrees C but which become effective at elevated temperatures. Two special cases are Adriamycin and actinomycin D. These drugs may be contraindicated for clinical use, since not only synergism but also protection by hyperthermia have been demonstrated, depending upon the time-sequence relationships of the heat and drug treatments.
---
paper_title: Arrhenius analysis of heat survival curves from normal and thermotolerant CHO cells.
paper_content:
The temperature compatible with biphasic hyperthermia-survival curves in Chinese hamster ovary cells was increased from 42.5 to 44°C by acute heat conditioning (10 min, 45°C). An Arrhenius analysis...
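For readers unfamiliar with the formalism, the Arrhenius description of thermal cell inactivation referred to here can be written, in its textbook form (general background, not reproduced from the cited paper), as:

```latex
% Textbook Arrhenius form of thermal inactivation (background, not quoted
% from the paper).
\begin{equation}
  k(T) = A \exp\!\left(-\frac{E_a}{R\,T}\right), \qquad
  S(t) = \exp\!\bigl[-k(T)\,t\bigr],
\end{equation}
% where k(T) is the inactivation rate at absolute temperature T, A is the
% frequency factor, E_a the activation energy, R the gas constant, and S(t)
% the surviving fraction after heating for time t. A break in the slope of
% \ln k versus 1/T marks the transition temperature, which the abstract
% reports is shifted upward (from 42.5 to 44 degrees C) in thermotolerant cells.
```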
---
paper_title: Circulatory responses of malignant tumors during hyperthermia
paper_content:
The use of hyperthermia (elevation of regional body temperature to 41.5°–45°) as an adjuvant to clinical radiation therapy is becoming accepted in clinical practice at this time. It is, therefore, imperative to define the physiological responses of tumors to this modality. In this article, the effects of hyperthermia on the physiological responses of human and murine tumors are evaluated employing pH, oxygen, and flow ultramicroelectrodes. It is determined that hyperthermia causes a rise in tissue oxygen tension (TpO2) and blood flow at temperatures up to 41°, with a decrease at higher temperatures. Tumor tissue pH is low (6.8) and decreases during hyperthermia by as much as one unit of pH. The evidence linking these observations and the importance of blood flow modifications are discussed.
---
paper_title: Interaction of Heat and Drugs In Vitro and In Vivo
paper_content:
The intention of this chapter is to review existing experimental data on combinations of cytotoxic drugs and hyperthermia. The focus will be on those drugs which are the most likely candidates for potentiation by hyperthermia, and attention will be drawn to findings which may form a basis for the design of clinical studies.
---
paper_title: Hyperthermia Induces Apoptosis in Thymocytes
paper_content:
Mild hyperthermia (43°C for 1 h) induces extensive double-stranded DNA fragmentation and, at a later time, cell death in murine thymocytes. The cleavage of DNA into oligonucleosome-sized fragments resembles that observed in examples of apoptosis including radiation-induced death of thymocytes. Following hyperthermia, incubation at 37°C is necessary to detect DNA fragmentation, although protein and RNA synthesis do not seem to be required. Two protein synthesis inhibitors, cycloheximide and emetine, and two RNA synthesis inhibitors, actinomycin D and 5,6-dichloro-1-beta-D-ribofuranosylbenzimidazole, do not inhibit DNA fragmentation or cell death in heated thymocytes at concentrations which significantly block these effects in irradiated thymocytes. We have used this difference in sensitivity to show that the DNA fragmentation induced in thymocytes which are irradiated and then heated seems to be caused only by the heating and not by the irradiation.
---
paper_title: The cellular and molecular basis of hyperthermia
paper_content:
Abstract In oncology, the term ‘hyperthermia’ refers to the treatment of malignant diseases by administering heat in various ways. Hyperthermia is usually applied as an adjunct to an already established treatment modality (especially radiotherapy and chemotherapy), where tumor temperatures in the range of 40–43 °C are aspired. In several clinical phase-III trials, an improvement of both local control and survival rates have been demonstrated by adding local/regional hyperthermia to radiotherapy in patients with locally advanced or recurrent superficial and pelvic tumors. In addition, interstitial hyperthermia, hyperthermic chemoperfusion, and whole-body hyperthermia (WBH) are under clinical investigation, and some positive comparative trials have already been completed. In parallel to clinical research, several aspects of heat action have been examined in numerous pre-clinical studies since the 1970s. However, an unequivocal identification of the mechanisms leading to favorable clinical results of hyperthermia have not yet been identified for various reasons. This manuscript deals with discussions concerning the direct cytotoxic effect of heat, heat-induced alterations of the tumor microenvironment, synergism of heat in conjunction with radiation and drugs, as well as, the presumed cellular effects of hyperthermia including the expression of heat-shock proteins (HSP), induction and regulation of apoptosis, signal transduction, and modulation of drug resistance by hyperthermia.
---
paper_title: Radiation therapy and hyperthermia improve the oxygenation of human soft tissue sarcomas.
paper_content:
The adverse prognostic impact of tumor hypoxia has been demonstrated in human malignancy. We report the effects of radiotherapy and hyperthermia (HT) on soft tissue sarcoma oxygenation and the relationship between treatment-induced changes in oxygenation and clinical treatment outcome. Patients receiving preoperative radiotherapy and HT underwent tumor oxygenation measurement pretreatment after the start of radiation/pre-HT and one day after the first HT treatment. The magnitude of improvement in tumor oxygenation after the first HT fraction relative to pretreatment baseline was positively correlated with the amount of necrosis seen in the resection specimen. Patients with <90% resection specimen necrosis experienced longer disease-free survival than those with > or = 90% necrosis. Increasing levels of tumor hypoxia were also correlated with diminished metabolic status as measured by P-31 magnetic resonance spectroscopy.
---
paper_title: The relationship between heating time and temperature: its relevance to clinical hyperthermia.
paper_content:
It is well known that for a given level of damage to either cells in vitro or tissues in situ the relationship between temperature and time of application undergoes a transition in the range 42-43 degrees C and that above this temperature a change of 1 degree C is equivalent to a change in heating time by a factor of two. The present study has concentrated on establishing the relationship between time and temperature over a wide range. The investigation is in two parts, i.e. a review of the literature and an experimental study in which the endpoint used was necrosis in the tail of the baby rat. The aim is to provide information which might help solve a major clinical problem, namely the lack of a satisfactory means of relating treatments given with different temperatures for different lengths of time. The difficulty arises because there is no satisfactory definition of heat dose, in this context. The results confirm the relationship given above for temperatures above the transition. However, below the transition a change of 1 degree C is equivalent to a change in heating time by a factor of six. It is suggested that these relationships provide a means of monitoring a treatment in which the temperature does not remain constant and may vary within a heated volume. The method may also be used to compare treatments from different centres. An indication of the considerable uncertainties of the procedure is given.(ABSTRACT TRUNCATED AT 250 WORDS)
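The time–temperature trade-off summarized here is usually written as an isoeffect relation; with the factors reported in this study it reads as below (the CEM43 convention mentioned in the final comment is general background, not part of the paper):

```latex
% Isoeffect relation between heating time t and temperature T for equal
% biological damage; R changes at the breakpoint near 42-43 degrees C.
\begin{equation}
  \frac{t_1}{t_2} = R^{\,T_1 - T_2},
  \qquad
  R \approx
  \begin{cases}
    1/2, & \text{above the transition (1 °C $\Leftrightarrow$ a factor of 2 in time)},\\
    1/6, & \text{below the transition (1 °C $\Leftrightarrow$ a factor of 6, as reported here)}.
  \end{cases}
\end{equation}
% The widely used thermal dose CEM43 = t * R^(43 - T) applies the same idea,
% conventionally with R = 1/2 above 43 degrees C and R = 1/4 below it.
```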
---
paper_title: Cellular responses to hyperthermia (40–46°C): Cell killing and molecular events
paper_content:
The goal of this review is to provide a brief introduction to the effects of hyperthermia on cellular structures and physiology. The review focuses on the effects of hyperthermia thought to contribute to the enhancement of cancer therapy, namely the mechanisms of cell killing and the sensitization of cells to ionizing radiation or chemotherapeutic agents. Specifically, the review addresses four topics: hyperthermia-induced cell killing, mathematical models of cell killing, mechanisms of thermal effects in the hyperthermia temperature range, and effects on proteins that contribute to resistance to other stresses, i.e., DNA damage. Hyperthermia has significant effects on proteins, including unfolding, exposing hydrophobic groups, and aggregation with proteins not directly altered by hyperthermia. Protein aggregation has effects throughout the cell but has a significant impact within the nucleus. Changes in the associations of nuclear proteins, particularly those involved in DNA replication, cause the stalling of ...
---
paper_title: Hyperthermia adds to chemotherapy.
paper_content:
The hallmarks of hyperthermia and its pleotropic effects are in favour of its combined use with chemotherapy. Preclinical research reveals that for heat killing and synergistic effects the thermal dose is most critical. Thermal enhancement of drug cytotoxicity is accompanied by cellular death and necrosis without increasing its oncogenic potential. The induction of genetically defined stress responses can deliver danger signals to activate the host's immune system. The positive results of randomised trials have definitely established hyperthermia in combination with chemotherapy as a novel clinical modality for the treatment of cancer. Hyperthermia targets the action of chemotherapy within the heated tumour region without affecting systemic toxicity. In specific clinical settings regional hyperthermia (RHT) or hyperthermic perfusion has proved its value and deserve a greater focus and investigation in other malignancies. In Europe, more specialised centres should be created and maintained as network of excellence for hyperthermia in the field of oncology.
---
paper_title: Thermal Therapy, Part 2: Hyperthermia Techniques
paper_content:
Hyperthermia, the procedure of raising the temperature of a part of or the whole body above normal for a defined period of time, is applied alone or as an adjunctive with various established cancer treatment modalities such as radiotherapy and chemotherapy. Clinical hyperthermia falls into three broad categories, namely, (1) localized hyperthermia, (2) regional hyperthermia, and (3) whole-body hyperthermia (WBH). Because of the various problems associated with each type of treatment, different heating techniques have evolved. In this article, background information on the biological rationale and current status of technologies concerning heating equipment for the application of hyperthermia to human cancer treatment are provided. The results of combinations of other modalities such as radiotherapy or chemotherapy with hyperthermia as a new treatment strategy are summarized. The article concludes with a discussion of challenges and opportunities for the future.
---
paper_title: Intracranial Thermotherapy using Magnetic Nanoparticles Combined with External Beam Radiotherapy: Results of a Feasibility Study on Patients with Glioblastoma Multiforme
paper_content:
We aimed to evaluate the feasibility and tolerability of the newly developed thermotherapy using magnetic nanoparticles on recurrent glioblastoma multiforme. Fourteen patients received 3-dimensional image guided intratumoral injection of aminosilane coated iron oxide nanoparticles. The patients were then exposed to an alternating magnetic field to induce particle heating. The amount of fluid and the spatial distribution of the depots were planned in advance by means of a specially developed treatment planning software following magnetic resonance imaging (MRI). The actually achieved magnetic fluid distribution was measured by computed tomography (CT), which after matching to pre-operative MRI data enables the calculation of the expected heat distribution within the tumor in dependence of the magnetic field strength. Patients received 4–10 (median: 6) thermotherapy treatments following instillation of 0.1–0.7 ml (median: 0.2) of magnetic fluid per ml tumor volume and single fractions (2 Gy) of a radiotherapy series of 16–70 Gy (median: 30). Thermotherapy using magnetic nanoparticles was tolerated well by all patients with minor or no side effects. Median maximum intratumoral temperatures of 44.6°C (42.4–49.5°C) were measured and signs of local tumor control were observed. In conclusion, deep cranial thermotherapy using magnetic nanoparticles can be safely applied on glioblastoma multiforme patients.
---
paper_title: Synergistic cell-killing effect of a combination of hyperthermia and heavy ion beam irradiation: in expectation of a breakthrough in the treatment of refractory cancers (review).
paper_content:
We studied the sensitivity of the radioresistant prokaryote Deinococcus radiodurans to heavy-ion beams and hyperthermia with a view toward cancer therapy. First, we examined the decrease in survival rate and in the molecular weight of DNA purified from these cells after acid heat treatment. These decreases were observed after heating at 55 degrees C below pH 5.0. We therefore assumed that the reduced survival of D. radiodurans in vivo and the damage to its DNA in vitro caused by acid heating were due to the release of purine rings from the phosphodiester backbone of the DNA, i.e., depurination. Second, we investigated the relation between LET (linear energy transfer) and RBE (relative biological effectiveness) for dry and wet D. radiodurans cells using the AVF cyclotron at the TIARA facility of JAERI-Takasaki, Japan. The cells were irradiated with a carbon (12C5+) ion beam at an LET of about 100 keV/microm, a neon (20Ne8+) ion beam at about 300 keV/microm and an oxygen (16O6+) ion beam at about 400 keV/microm. The RBE was found to increase as the LET increased from 100 keV/microm. Third, we carried out a combined treatment of 4.8 kGy of alpha-particles, i.e., a boron-10 neutron-capture beam induced by the Kyoto University Research Nuclear Reactor operated at 5 MW, together with hyperthermia at 52 degrees C, which produced a synergistic killing effect on D. radiodurans wet cells. However, unlike the case of gamma-irradiation, incubation at 30 degrees C in medium during the interval between the two treatments could inhibit the recovery of survival.
---
paper_title: Thermal Therapy, Part 1: An Introduction to Thermal Therapy
paper_content:
Thermal therapy is widely known and electromagnetic (EM) energy, ultrasonic waves, and other thermal-conduction-based devices have been used as heating sources. In particular, advances in EM technology have paved the way for promising trends in thermotherapeutical applications such as oncology, physiotherapy, urology, cardiology, ophthalmology, and other areas of medicine as well. This series of articles is generally written for oncologists, cancer researchers, medical students, biomedical researchers, clinicians, and others who have an interest in this topic. This article reviews key processes and developments in thermal therapy with emphasis on two techniques, namely, hyperthermia [including long-term low-temperature hyperthermia (40-41 degrees C for 6-72 hr) and moderate-temperature hyperthermia (42-45 degrees C for 15-60 min)] and thermal ablation, or high-temperature hyperthermia (> 50 degrees C for > 4-6 min). The article will also provide an overview of a wide range of possible mechanisms and biological effects of heat. This information will be discussed in light of what is known about the degree of temperature rise that is expected from various sources of energy. The review concludes with an evaluation of human exposure risk to EM energy or the corresponding heat, trends in equipment development, and future research directions.
---
paper_title: In Vitro Thermochemotherapy of Human Colon Cancer Cells with cis-Dichlorodiammineplatinum(II) and Mitomycin C
paper_content:
The thermosensitivity of human colon adenocarcinoma (LoVo) cells was investigated as a function of temperature and duration of heating in exponentially growing cultures. At 39-43°, time-dependent survival followed a simple exponential function. D0 values decreased progressively with a rise in temperature, from D0 at 40° = 38 hr to D0 at 42° = 17 hr to D0 at 43° = 1.5 hr, thus indicating relative thermoresistance of LoVo cells compared to Chinese hamster ovary cells. Dose-dependent 1-hr survival of LoVo cells treated with cis-dichlorodiammineplatinum(II) and mitomycin C was effectively modified when treatment was conducted under hyperthermic conditions. For both agents and cultures in exponential and stationary growth phases, hyperthermia abolished the initial shoulder portion and steepened the subsequent exponential part of the survival curves for dose-modifying factors at the 10% survival level of 1.5 to 2.0 at 41° and 2.6 to 2.8 at 42°. This significant enhancement of drug-induced cell kill by moderate hyperthermia suggests that thermochemotherapy with mitomycin C and cis-dichlorodiammineplatinum(II) should be tested clinically with both regional and total-body hyperthermia.
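The D0 figures above imply a simple exponential heat-survival model, S(t) = exp(-t/D0); the following Python sketch (illustrative only, reusing the D0 values reported in this abstract) shows how survival falls off with heating time at each temperature.
```python
import math

# Illustrative sketch: exponential heat-survival model S(t) = exp(-t / D0),
# where D0 is the heating time that reduces survival by a factor of e.
# The D0 values below are the ones quoted in the abstract (LoVo cells).
D0_HOURS = {40.0: 38.0, 42.0: 17.0, 43.0: 1.5}

def surviving_fraction(t_hours: float, temperature_c: float) -> float:
    """Surviving fraction after t_hours of heating at temperature_c."""
    return math.exp(-t_hours / D0_HOURS[temperature_c])

if __name__ == "__main__":
    # e.g. 3 h at 43 C gives S = exp(-2) ~ 0.135, while 3 h at 42 C gives ~ 0.84
    for temp in (40.0, 42.0, 43.0):
        print(f"{temp:.0f} C, 3 h: S = {surviving_fraction(3.0, temp):.3f}")
```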
---
paper_title: Inhibition of repair of radiation-induced damage by mild temperature hyperthermia, referring to the effect on quiescent cell populations
paper_content:
Purpose: We evaluated the usefulness of mild temperature hyperthermia (MTH) as an inhibitor of the repair of radiation-induced damage in terms of the responses of the total [= proliferating (P) + quiescent (Q)] and Q cell populations in solid tumors in vivo. Materials and methods: SCC VII tumor-bearing mice received a continuous administration of 5-bromo-2′-deoxyuridine (BrdU) to label all P cells. They then underwent high-dose-rate (HDR) γ-ray irradiation immediately followed by MTH or administration of caffeine or wortmannin; alternatively, they underwent reduced-dose rate γ-ray irradiation simultaneously with MTH or administration of caffeine or wortmannin. Nine hours after the start of irradiation, the tumor cells were isolated and incubated with a cytokinesis blocker, and the micronucleus (MN) frequency in cells without BrdU labeling (= Q cells) was determined using immunofluorescence staining for BrdU. The MN frequency in the total tumor cell population was determined using tumors that were not pretreated with BrdU. Results: In both the total and Q-cell populations, especially the latter, MTH efficiently suppressed the reduction in sensitivity caused by leaving an interval between HDR irradiation and the assay and decreasing the irradiation dose rate, as well as the combination with wortmannin administration. Conclusion: From the viewpoint of solid tumor control as a whole, including intratumor Q-cell control, MTH is useful for suppressing the repair of both potentially lethal and sublethal damage.
---
paper_title: Effect of hyperthermia on malignant cells in vivo: A review and a hypothesis
paper_content:
The relevant literature is reviewed in an attempt to clarify the mechanism of heat-dependent tumor cell destruction in vivo. Malignant cells in vivo appear to be selectively destroyed by hyperthermia in the range of 41-43 degrees C. Heat evidently affects nuclear function, expressed as inhibited RNA, DNA and protein synthesis and characteristic arrest or delay of cells in certain locations of the cell cycle. However, as these effects appear to be reversible and are observed in normal cells as well as malignant cells, they probably do not explain the hyperthermia-induced selective in vivo destruction of malignant cells. Heat-induced cytoplasmic damage appears to be of more importance. Increased lysosomal activation is observed, and is further intensified by a relatively increased anaerobic glycolysis which develops selectively in tumor cells. A hypothesis is proposed and discussed which explains the marked and selective in vivo tumor cell destruction as a consequence of the enhancing effect of certain environmental factors (e.g. increased acidity, hypoxia and insufficient nutrition) on the cytoplasmic damage.
---
paper_title: Modification of tirapazamine-induced cytotoxicity in combination with mild hyperthermia and/or nicotinamide: reference to effect on quiescent tumour cells.
paper_content:
C3H/He and Balb/c mice bearing SCC VII or EMT6/KU tumours received continuous administration of 5-bromo-2'-deoxyuridine (BrdU) for 5 days to label all proliferating (P) cells. The tumours were locally heated at 40 degrees C for 60 min and/or the tumour-bearing mice received intraperitoneal injection of nicotinamide, and then tirapazamine (TPZ) was injected intraperitoneally. Sixty minutes after TPZ injection, the tumours were excised, minced and trypsinized. The tumour cell suspensions were incubated with cytochalasin-B (a cytokinesis-blocker), and the micronucleus (MN) frequency in cells without BrdU labelling (quiescent (Q) cells) was determined using immunofluorescence staining for BrdU. The MN frequency in total (P+Q) tumour cells was determined from the tumours that were not pretreated with BrdU. The cytotoxicity of TPZ was evaluated in terms of the frequency of induced micronuclei in binuclear tumour cells (= MN frequency). In both tumour systems, the MN frequencies of Q cells were greater than those of total tumour cell populations. Mild heat treatment elevated the MN frequency in total and Q cells in both tumour systems, but the effect was more marked in Q cells. In total cells, mild heat treatment increased the MN frequency in EMT6/KU tumour cells more markedly than in SCC VII tumour cells. In contrast, in both tumour systems, nicotinamide decreased the MN frequency in both cell populations, with a greater influence on the total cells. The combination of TPZ and mild heat treatment may be useful for sensitizing tumour cells in vivo, including Q cells.
---
paper_title: Cell death induced in a murine mastocytoma by 42-47 degrees C heating in vitro: evidence that the form of death changes from apoptosis to necrosis above a critical heat load.
paper_content:
The pathogenesis of heat-induced cell death is controversial. Categorizing the death occurring after various heat loads as either apoptosis or necrosis might help to elucidate this problem, since it has been shown that these two processes differ in their mode of initiation as well as in their morphological and biochemical features. Log-phase cultures of mastocytoma P-815 x 2.1 were heated at temperatures ranging from 42 to 47 degrees C for 30 min. After 42 degrees C heating a slight increase in apoptosis was observed morphologically. However, after heating at 43, 43.5 and 44 degrees C, there was marked enhancement of apoptosis, and electrophoresis of DNA showed characteristic internucleosomal cleavage. With heating at 45 degrees C both apoptosis and necrosis were enhanced, whereas at 46 and 47 degrees C only necrosis was produced. DNA extracted from the 46 and 47 degrees C cultures showed virtually no degradation, which contrasts with the random DNA breakdown observed in necrosis produced by other types of injury; lysosomal enzymes released during heat-induced necrosis may be inactivated at the higher temperatures. It is suggested that apoptosis following heating may be triggered either by a limited increase in cytosolic calcium levels resulting from mild membrane changes or by DNA damage. Necrosis, on the other hand, is likely to be a consequence of severe membrane disruption.
---
paper_title: Implication of Blood Flow in Hyperthermic Treatment of Tumors
paper_content:
Tumor blood flow varies significantly depending on the type, age, and size of tumors. Furthermore, the distribution of blood perfusion in tumors is quite heterogeneous. Blood flow in tumors may or may not be greater than that in the surrounding normal tissues at normothermic conditions. When heated at 41-43°C, tumor blood flow either remains unchanged or increases slightly, usually by a factor of less than 2. The newly formed tumor vessels appear to be so vulnerable to heat that the blood flow decreases at 42-43°C in most of the animal tumors studied so far. By contrast, the blood flow in normal tissues, e.g., skin and muscle, increases by a factor of 3-20 upon heating at 42-45°C. Consequently, the heat dissipation by blood flow becomes greater in normal tissues than in tumors during heating, and thereby a greater temperature rise in tumors may occur, resulting in greater damage in tumor relative to normal tissues. The intrinsically acidic intratumor environment becomes further acidic upon heating and accentuates the thermal damage on the tumor cells. Blood perfusion appears to be implicated in such a heat-induced increase in the intratumor acidity.
---
paper_title: Thermotherapy using magnetic nanoparticles combined with external radiation in an orthotopic rat model of prostate cancer
paper_content:
BACKGROUND: We evaluated the effects of thermotherapy using magnetic nanoparticles, also referred to as magnetic fluid hyperthermia (MFH), combined with external radiation, in the Dunning model of prostate cancer. METHODS: Orthotopic tumors were induced in 96 male Copenhagen rats. Animals were randomly allocated to eight groups, including controls and groups for dose-finding studies of external radiation. Treatment groups received two serial thermotherapy treatments following a single intratumoral injection of magnetic fluid or thermotherapy followed by external radiation (10 Gy). On day 20, after tumor induction, tumor weights in the treatment and control groups were compared and iron measurements in selected organs were carried out. RESULTS: Mean maximal and minimal intratumoral temperatures obtained were 58.7 degrees C (centrally) and 42.7 degrees C (peripherally) during the first thermotherapy and 55.4 degrees C and 42.3 degrees C, respectively, during the second of two treatment sessions. Combined thermotherapy and radiation with 20 Gy was significantly more effective than radiation with 20 Gy alone and reduced tumor growth by 87.5-89.2% versus controls. Mean iron content in the prostates on day 20 was 87.5% of the injected dose of ferrites, whereas only 2.5% was found in the liver. CONCLUSIONS: An additive effect was demonstrated for the combined treatment at a radiation dose of 20 Gy, which was equally effective in inhibiting tumor growth as radiation alone with 60 Gy. Serial heat treatments were possible without repeated injection of magnetic fluid. The optimal treatment schedules of this combination regarding temperatures, radiation dose, and fractionation need to be defined in further experimental studies.
---
paper_title: Cellular effects of hyperthermia: relevance to the minimum dose for thermal damage.
paper_content:
The specific mechanism of cell killing by hyperthermia is unknown, but the high activation energy of cell killing and other responses to hyperthermia suggest that protein denaturation is the rate-limiting step. Protein denaturation can be directly monitored by differential scanning calorimetry and in general there is a good correlation between protein denaturation and cellular response. Approximately 5% denaturation is necessary for detectable killing. Protein denaturation leads to the aggregation of both denatured and native protein with multiple effects on cellular function.
---
paper_title: Differential response of normal and tumor microcirculation to hyperthermia.
paper_content:
RBC velocity and vessel lumen diameter were measured in individual microvessels in normal (mature granulation) and neoplastic (VX2 carcinoma) tissues grown in a transparent rabbit ear chamber. Blood flow rates were determined before, during, and after local hyperthermia treatments at 40-52 degrees for 1 hr. Blood flow in normal tissue increased dramatically with temperature, but stasis occurred at higher temperatures and/or longer durations of heating. In tumors, blood flow rate did not increase as much, and stasis occurred at lower levels of hyperthermia. Both the magnitude and the time of maximum flow appeared to be bimodal functions of temperature. That is, both of these parameters increased with temperature up to a certain critical temperature, and then decreased at higher temperatures. This critical temperature was approximately 45.7 degrees in normal tissue and 43.0 degrees in tumors. Normal tissue required temperatures greater than 47 degrees to bring about vascular stasis in less than 1 hr, while stasis occurred in tumors in the same time frame at temperatures greater than 41 degrees. Normal tissue could increase its maximum flow capacity up to 6 times its preheating value, while neoplastic tissue could only double its maximum flow capacity. This differential flow response in individual microvessels was used to develop a theoretical framework relating various mechanisms of blood flow modifications due to hyperthermia.
---
paper_title: Arrhenius relationships from the molecule and cell to the clinic.
paper_content:
There are great differences in heat sensitivity between different cell types and tissues. However, for an isoeffect induced in a specific cell type or tissue by heating for different durations at different temperatures varying from 43-44 degrees C up to about 57 degrees C, the duration of heating must be increased by a factor of about 2 (R value) when the temperature is decreased by 1 degree C. This same time-temperature relationship has been observed for heat inactivation of proteins, and changing only one amino acid out of 253 can shift the temperature for a given amount of protein denaturation from 46 degrees C to either 43 or 49 degrees C. For cytotoxic temperatures < 43-44 degrees C, R for mammalian cells and tissues is about 4-6. Many factors change the absolute heat sensitivity of mammalian cells by about 1 degree C, but these factors have little effect on Rs, although the transition in R at 43-44 degrees C may be eliminated or shifted by about 1 degree C. Rs for heat radiosensitization are similar to those above for heat cytotoxicity, but Rs for heat chemosensitization are much smaller (usually about 1.1-1.2). In practically all of the clinical trials that have been conducted, heat and radiation have been separated by 30-60 min, for which the primary effect should be heat cytotoxicity and not heat radiosensitization. Data are presented showing the clinical application of the thermal isoeffect dose (TID) concept in which different heating protocols for different times at different temperatures are converted into equiv min at 43 degrees C (EM43). For several heat treatments in the clinic, the TIDs for each treatment can be added to give a cumulative equiv min at 43 degrees C, viz., CEM43. This TID concept was applied by Oleson et al. in a retrospective analysis of clinical data, with the intent of using this approach prospectively to guide future clinical studies. Considerations of laboratory data and the large variations in temperature distributions observed in human tumours indicate that thermal tolerance, which has been observed for mammalian cells for both heat killing and heat radiosensitization, probably is not very important in the clinic. (ABSTRACT TRUNCATED AT 400 WORDS)
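A compact way to write the isoeffect rule described above (a sketch of the standard formulation, with R as defined in this abstract) is:
```latex
% Heating for time t_1 at temperature T_1 is taken as isoeffective with
% heating for time t_2 at T_2 when
t_{1} = t_{2}\,R^{(T_{2}-T_{1})},
\qquad R \approx 2 \ \text{for } T > 43\text{--}44^{\circ}\mathrm{C},
\qquad R \approx 4\text{--}6 \ \text{for } T < 43\text{--}44^{\circ}\mathrm{C}.
```
The CEM43 bookkeeping mentioned in the abstract applies this conversion interval by interval; a code sketch is given after the thermal-dose abstract below.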
---
paper_title: Hyperthermic radiosensitization: mode of action and clinical relevance.
paper_content:
PURPOSE ::: To provide an update on the recent knowledge about the molecular mechanisms of thermal radiosensitization and its possible relevance to thermoradiotherapy. ::: ::: ::: SUMMARY ::: Hyperthermia is probably the most potent cellular radiosensitizer known to date. Heat interacts with radiation and potentiates the cellular action of radiation by interfering with the cells' capability to deal with radiation-induced DNA damage. For ionizing irradiation, heat inhibits the repair of all types of DNA damage. Genetic and biochemical data suggest that the main pathways for DNA double-strand break (DSB) rejoining, non-homologous end-joining and homologous recombination, are not the likely primary targets for heat-induced radiosensitization. Rather, heat is suggested to affect primarily the religation step of base excision repair. Subsequently additional DSB arise during the DNA repair process in irradiated and heated cells and these additional DSB are all repaired with slow kinetics, the repair of which is highly error prone. Both mis- and non-rejoined DSB lead to an elevated number of lethal chromosome aberrations, finally causing additional cell killing. Heat-induced inhibition of DNA repair is considered not to result from altered signalling or enzyme inactivation but rather from alterations in higher-order chromatin structure. Although, the detailed mechanisms are not yet known, a substantial body of indirect and correlative data suggests that heat-induced protein aggregation at the level of attachment of looped DNA to the nuclear matrix impairs the accessibility of the damaged DNA for the repair machinery or impairs the processivity of the repair machinery itself. ::: ::: ::: CONCLUSION ::: Since recent phase III clinical trials have shown significant benefit of adding hyperthermia to radiotherapy regimens for a number of malignancies, it will become more important again to determine the molecular effects underlying this success. Such information could eventually also improve treatment quality in terms of patient selection, improved sequencing of the heat and radiation treatments, the number of heat treatments, and multimodality treatments (i.e. thermochemoradiotherapy).
---
paper_title: THERMAL DOSE DETERMINATION IN CANCER THERAPY
paper_content:
With the rapid development of clinical hyperthermia for the treatment of cancer either alone or in conjunction with other modalities, a means of measuring a thermal dose in terms which are clinically relevant to the biological effect is needed. A comparison of published data empirically suggests a basic relationship that may be used to calculate a “thermal dose.” From a knowledge of the temperature during treatment as a function of time combined with a mathematical description of the time-temperature relationship, an estimate of the actual treatment calculated as an exposure time at some reference temperature can be determined. This could be of great benefit in providing a real-time accumulated dose during actual patient treatment. For the purpose of this study, a reference temperature of 43°C has been arbitrarily chosen to convert all thermal exposures to “equivalent-minutes” at this temperature. This dose calculation can be compared to an integrated calculation of the “degree-minutes” to determine its prognostic ability. The time-temperature relationship upon which this equivalent dose calculation is based does not predict, nor does it require, that different tissues have the same sensitivity to heat. A computer program written in FORTRAN is included for performing calculations of both equivalent-minutes (t43) and degree-minutes (tdm43). Means are provided to alter the reference temperature, the Arrhenius “break” temperature and the time-temperature relationship both above and below the “break” temperature. In addition, the effect of factors such as step-down heating, thermotolerance, and physiological conditions on thermal dose calculations are discussed. The equations and methods described in this report are not intended to represent the only approach for thermal dose estimation; instead, they are intended to provide a simple but effective means for such calculations for clinical use and to stimulate efforts to evaluate data in terms of therapeutically useful thermal units.
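The "equivalent-minutes at 43°C" bookkeeping described above can be sketched in a few lines; the original FORTRAN program is not reproduced here, and the R values of 0.5 above and 0.25 below the 43°C breakpoint used in this Python sketch are the commonly quoted defaults, assumed rather than taken from the abstract.
```python
# Minimal sketch of cumulative equivalent minutes at 43 C (CEM43).
# Assumption: R = 0.5 for T >= 43 C and R = 0.25 for T < 43 C, the values
# commonly used with this isoeffect formulation; the reference temperature
# and breakpoint can be changed to match a specific protocol.
from typing import Iterable, Tuple

def cem43(time_temperature: Iterable[Tuple[float, float]],
          t_ref: float = 43.0, breakpoint: float = 43.0) -> float:
    """Accumulate equivalent minutes at t_ref from (minutes, temperature_C) intervals."""
    total = 0.0
    for minutes, temp_c in time_temperature:
        r = 0.5 if temp_c >= breakpoint else 0.25
        # Each interval contributes minutes * R^(43 - T): e.g. 30 min at 44 C
        # counts as 60 equivalent minutes at 43 C when R = 0.5.
        total += minutes * r ** (t_ref - temp_c)
    return total

if __name__ == "__main__":
    # Hypothetical treatment record: 10 min warm-up at 41 C, then 50 min at 43.5 C.
    print(cem43([(10.0, 41.0), (50.0, 43.5)]))
```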
---
paper_title: Investigations on the possibility of a thermic tumour therapy—I.: Short-wave treatment of a transplanted isologous mouse mammary carcinoma
paper_content:
The existing literature on studies of the inhibitory effect of moderate heat doses on malignant tumours is briefly reviewed. ::: ::: An experimental technique of heat treatment of mouse tumours in vivo under accurate control of the intratumoural temperature is described. ::: ::: In experiments with an isologous transplantable mouse mammary carcinoma, controlled application of a moderate heat dose led, in many cases, to a permanent cure of the transplanted tumour without causing any damage to the surrounding normal tissue. ::: ::: The necessary heat doses in the temperature range of 41·5 to 43·5°C are worked out revealing a definite relationship between temperature and exposure time. ::: ::: The heat treatment induces distinct histological changes in the tumour cells, whereas it does not cause any damage to the stromal and vascular cells in the tumour, or to the surrounding normal tissue. ::: ::: Immediately after the heat application, definite changes were revealed in the mitochondria and lysosomes of the tumour cells. These changes increased in intensity with the size of the heat dose and become more pronounced within a few hours or days. Within the first few hours, changes in the nuclei of the tumour cells and in chromosomal and nucleolar chromatin developed, with some variation in the individual cells. 24 hr after the application of a curative dose, all tumour cells showed severe injury. After smaller doses the reaction to the heat was less intense, with variations in the individual cells, and several tumour cells did not show any signs of lethal injury. ::: ::: Autolytic disintegration of the heat-damaged tumour cells occurs very rapidly. The connective tissue of the stroma increases markedly in volume, and a scar forms. ::: ::: The histological examination did not reveal all details of the process, but in the light of biochemical observations, it is reasonable to assume that the direct effect of heat is due to an elective activation of the acid hydrolases localized in the lysosomes of the tumour cells.
---
paper_title: Re-setting the biologic rationale for thermal therapy
paper_content:
This review takes a retrospective look at how hyperthermia biology, as defined from studies emerging from the late 1970s and into the 1980s, mis-directed the clinical field of hyperthermia, by placing too much emphasis on the necessity of killing cells with hyperthermia in order to define success. The requirement that cell killing be achieved led to sub-optimal hyperthermia fractionation goals for combinations with radiotherapy, inappropriate sequencing between radiation and hyperthermia and goals for hyperthermia equipment performance that were neither achievable nor necessary. The review then considers the importance of the biologic effects of hyperthermia that occur in the temperature range that lies between that necessary to kill substantial proportions of cells and normothermia (e.g. 39–42°C for 1 h). The effects that occur in this temperature range are compelling including—inhibition of radiation-induced damage repair, changes in perfusion, re-oxygenation, effects on macromolecular and nanoparticle ...
---
paper_title: Physiological mechanisms underlying heat-induced radiosensitization
paper_content:
The objective of this review is to evaluate hyperthermia-related changes in tumor physiologic parameters and their relevance for tumor radiosensitization, with particular emphasis on tumor oxygenation. Elevation of temperature above the physiological level causes changes in blood flow, vascular permeability, metabolism, and tumor oxygenation. These changes, in addition to cellular effects such as direct cytotoxicity and inhibition of potentially lethal damage and sublethal damage repair, have an important influence on the efficacy of radiotherapy. There is now clear evidence that in a variety of rodent and canine, as well as human tumors, the changes in tumor oxygenation status caused by hyperthermia are temperature dependent and this relationship may greatly influence the response of tumors to thermo-radiotherapy. The improvement of tumor oxygenation after mild hyperthermia, which often lasts for as long as 24–48 h after heating, may increase the likelihood of a positive response of tumors to radiation th...
---
paper_title: Local hyperthermia combined with radiotherapy and-/or chemotherapy: recent advances and promises for the future.
paper_content:
Hyperthermia, one of the oldest forms of cancer treatment involves selective heating of tumor tissues to temperatures ranging between 39 and 45°C. Recent developments based on the thermoradiobiological rationale of hyperthermia indicate it to be a potent radio- and chemosensitizer. This has been further corroborated through positive clinical outcomes in various tumor sites using thermoradiotherapy or thermoradiochemotherapy approaches. Moreover, being devoid of any additional significant toxicity, hyperthermia has been safely used with low or moderate doses of reirradiation for retreatment of previously treated and recurrent tumors, resulting in significant tumor regression. Recent in vitro and in vivo studies also indicate a unique immunomodulating prospect of hyperthermia, especially when combined with radiotherapy. In addition, the technological advances over the last decade both in hardware and software have led to potent and even safer loco-regional hyperthermia treatment delivery, thermal treatment planning, thermal dose monitoring through noninvasive thermometry and online adaptive temperature modulation. The review summarizes the outcomes from various clinical studies (both randomized and nonrandomized) where hyperthermia is used as a thermal sensitizer of radiotherapy and-/or chemotherapy in various solid tumors and presents an overview of the progresses in loco-regional hyperthermia. These recent developments, supported by positive clinical outcomes should merit hyperthermia to be incorporated in the therapeutic armamentarium as a safe and an effective addendum to the existing oncological treatment modalities.
---
paper_title: Thermal dose and time—temperature factors for biological responses to heat shock
paper_content:
The application of hyperthermia in human cancer therapy, especially by radiotherapists who are accustomed to prescribing ionizing radiation treatments in physical dose units, has stimulated workers in this area to consider the possibility and utility of defining a unit of ‘thermal dose’. Previous thermal dose definitions have, primarily, been based on biological isoeffect response relationships, which attempt to relate exposure times that elicit a given biological response at one temperature to exposure times at another temperature that elicit the same biological response. This ‘equivalent time’ method is shown to have certain limitations. For both 42.4 and 45°C hyperthermia, these relationships accurately describe cell survival responses only when the heating rate is rapid (< 0.5°C min−1 from ambient to hyperthermic temperature). Further, the form of these isoeffect relationships appears to be temperature range and cell/tissue-type dependent, and it is suggested that these relationships be referred to as...
---
paper_title: Differential thermal sensitivity of tumour and normal tissue microvascular response during hyperthermia
paper_content:
The goal of this study was to investigate the heat sensitivity of the microcirculation in normal C3H murine leg muscle and a variety of transplanted tumour lines (KHT, SCC-VII, RIF-1, C3H mouse mammary carcinoma, two human mammary carcinomas MDA-468 and S5). Clearance rate of a radioactive tracer monitored following an intra-tissue injection was used as a measurement of microvascular integrity during heat treatment. Clearance rate in all tumours studied was significantly lower after 1 h of heating at 44°C than the initial pretreatment clearance rate. Response of normal muscle differed from that of tumours in that the clearance rate after 1 h of heating at 44°C was similar to the initial clearance rate. Vasculature in the KHT fibrosarcoma was more sensitive to heat treatment than that in other tumours. In response to a heat treatment at 43, 44, 45, and 46°C the same level of microvascular damage occurred in half the time in KHT fibrosarcoma than in normal muscle. Furthermore, vascular damage in both muscle...
---
paper_title: Modification of Cell Lethality at Elevated Temperatures The pH Effect
paper_content:
The lethal response of Chinese hamster ovary cells to hyperthermia was determined at selected extracellular pH. Decreasing pH from 7.6 to 6.7 increased the lethal response of cells over the temperature range of 41 to 44°C. Cell viability was not affected over this pH range at 37°C. The pH sensitizing effect was most prominent at temperatures which were marginally lethal at normal pH (7.4). Four hours of exposure to 42°C decreased survival to 10% at pH 7.4 and 0.01% at pH 6.7. Enhanced cell killing was observed when the cells were exposed to reduced pH and elevated temperatures simultaneously. Prolonging the time of pH exposure before and after hyperthermia did not influence survival. High-density culturing increased the sensitivity of cells to hyperthermia. This effect was due to metabolic acidification of the medium and could be reversed by adjusting the pH.
---
paper_title: EGFR-targeted magnetic nanoparticle heaters kill cancer cells without a perceptible temperature rise.
paper_content:
It is currently believed that magnetic nanoparticle heaters (MNHs) can kill cancer cells only when the temperature is raised above 43 °C due to energy dissipation in an alternating magnetic field. On the other hand, simple heat conduction arguments indicate that in small tumors or single cells the relative rates of energy dissipation and heat conduction result in a negligible temperature rise, thus limiting the potential of MNHs in treating small tumors and metastatic cancer. Here we demonstrate that internalized MNHs conjugated to epidermal growth factor (EGF) and which target the epidermal growth factor receptor (EGFR) do result in a significant (up to 99.9%) reduction in cell viability and clonogenic survival in a thermal heat dose dependent manner, without the need for a perceptible temperature rise. The effect appears to be cell type specific and indicates that magnetic nanoparticles in alternating magnetic fields may effectively kill cancer cells under conditions previously considered as not possible.
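The heat-conduction argument referred to above can be illustrated with a back-of-the-envelope estimate for a uniformly heated sphere embedded in tissue; the radii, volumetric heating rate and thermal conductivity in this Python sketch are illustrative assumptions, not values from the study.
```python
# Sketch: steady-state surface temperature rise of a small, uniformly heated
# sphere embedded in an infinite medium: delta_T = q * R^2 / (3 * k),
# where q is the volumetric heating rate (W/m^3) and k the thermal conductivity.
# Illustrative assumptions: a cell-sized source (R = 5 um), a generous
# nanoparticle heating rate of 1e6 W/m^3, and tissue-like k = 0.5 W/(m K).

def surface_temperature_rise(radius_m: float, q_w_per_m3: float,
                             k_w_per_m_k: float = 0.5) -> float:
    """Steady-state temperature rise (K) at the surface of the heated sphere."""
    return q_w_per_m3 * radius_m ** 2 / (3.0 * k_w_per_m_k)

if __name__ == "__main__":
    for radius in (5e-6, 1e-3, 5e-3):  # single cell, 1 mm nodule, 5 mm nodule
        dt = surface_temperature_rise(radius, 1e6)
        print(f"R = {radius:8.0e} m -> delta_T ~ {dt:.2e} K")
```
Under these assumptions the steady-state rise is negligible at the single-cell scale and grows with the square of the heated radius, which is exactly the scale dependence the abstract's cell-killing result calls into question.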
---
paper_title: Thermotherapy of Prostate Cancer Using Magnetic Nanoparticles: Feasibility, Imaging, and Three-Dimensional Temperature Distribution
paper_content:
Objectives: To investigate the feasibility of thermotherapy using biocompatible superparamagnetic nanoparticles in patients with locally recurrent prostate cancer and to evaluate an imaging-based approach for noninvasive calculations of the three-dimensional temperature distribution. Methods: Ten patients with locally recurrent prostate cancer following primary therapy with curative intent were entered into a prospective phase 1 trial. The magnetic fluid was injected transperineally into the prostates according to a preplan. Patients received six thermal therapies of 60-min duration at weekly intervals using an alternating magnetic field applicator. A method of three-dimensional thermal analysis based on computed tomography (CT) of the prostates was developed and correlated with invasive and intraluminal temperature measurements. The sensitivity of nanoparticle detection by means of CT was investigated in phantoms. Results: The median detection rate of iron oxide nanoparticles in tissue specimens using CT was 89.5% (range: 70–98%). Maximum temperatures up to 55°C were achieved in the prostates. Median temperatures in 20%, 50%, and 90% of the prostates were 41.1°C (range: 40.0–47.4°C), 40.8°C (range: 39.5–45.4°C), and 40.1°C (range: 38.8–43.4°C), respectively. Median urethral and rectal temperatures were 40.5°C (range: 38.4–43.6°C) and 39.8°C (range: 38.2–43.4°C). The median thermal dose was 7.8 (range: 3.5–136.4) cumulative equivalent minutes at 43°C in 90% of the prostates. Conclusion: The heating technique using magnetic nanoparticles was feasible. Hyperthermic to thermoablative temperatures were achieved in the prostates at 25% of the available magnetic field strength, indicating a significant potential for higher temperatures. A noninvasive thermometry method specific for this approach could be developed, which may be used for thermal dosimetry in future studies.
---
paper_title: Normal tissue and solid tumor effects of hyperthermia in animal models and clinical trials.
paper_content:
Localized hyperthermia therapy by high-energy radio-frequency waves was evaluated in malignant and adjacent normal tissue of 30 patients with 10 types of cancer. Hyperthermia was delivered to superficial and deep visceral cancers in awake patients who had refractory disease. Histological and clinical responses were recorded serially. Toxicity tests in dogs, sheep, and pigs showed that progressive necrosis of normal and cancer tissue occurred at temperatures above 45 degrees C (113 degrees F). However, as normal tissues approached this temperature, intrinsic heat dissipation occurred (possibly due to augmented blood flow) so that temperatures below 45 degrees C could be maintained, whereas most solid tumors did not have this adaptive capacity and could be heated to 50 degrees C (122 degrees F) with virtually no injury to normal organs, s.c. tissue, or skin. To date, 69 treatments have been administered to 36 tumors in the 30 patients. Selective heating was observed in both primary and metastatic tumors located in surface tissues and internal organs. Response appeared to be related to tumor size in that differential heating was possible more often in the larger lesions. In tumors successfully heated, moderate to marked necrosis occurred. Radio-frequency hyperthermia appears to be a safe and potentially useful form of therapy for selected cancer patients. While other cancer treatments are more effective for small tumors, hyperthermia may be uniquely beneficial against larger lesions.
---
paper_title: Nearly complete regression of tumors via collective behavior of magnetic nanoparticles in hyperthermia
paper_content:
One potential cancer treatment selectively deposits heat to the tumor through activation of magnetic nanoparticles inside the tumor. This can damage or kill the cancer cells without harming the surrounding healthy tissue. The properties assumed to be most important for this heat generation (saturation magnetization, amplitude and frequency of external magnetic field) originate from theoretical models that assume non-interacting nanoparticles. Although these factors certainly contribute, the fundamental assumption of 'no interaction' is flawed and consequently fails to anticipate their interactions with biological systems and the resulting heat deposition. Experimental evidence demonstrates that for interacting magnetite nanoparticles, determined by their spacing and anisotropy, the resulting collective behavior in the kilohertz frequency regime generates significant heat, leading to nearly complete regression of aggressive mammary tumors in mice.
---
paper_title: Medical application of functionalized magnetic nanoparticles.
paper_content:
Since magnetic particles have unique features, the development of a variety of medical applications has been possible. The most unique feature of magnetic particles is their reaction to a magnetic force, and this feature has been utilized in applications such as drug targeting and bioseparation including cell sorting. Recently, magnetic nanoparticles have attracted attention because of their potential as contrast agents for magnetic resonance imaging (MRI) and heating mediators for cancer therapy (hyperthermia). Magnetite cationic liposomes (MCLs), one of the groups of cationic magnetic particles, can be used as carriers to introduce magnetite nanoparticles into target cells since their positively charged surface interacts with the negatively charged cell surface; furthermore, they find applications to hyperthermic treatments. Magnetite nanoparticles conjugated with antibodies (antibody-conjugated magnetoliposomes, AMLs) are also applied to hyperthermia and have enabled tumor-specific contrast enhancement in MRI via systemic administration. Since magnetic nanoparticles are attracted to a high magnetic flux density, it is possible to manipulate cells labeled with magnetic nanoparticles using magnets; this feature has been applied in tissue engineering. Magnetic force and MCLs were used to construct multilayered cell structures and a heterotypic layered 3D coculture system. Thus, the applications of these functionalized magnetic nanoparticles with their unique features will further improve medical techniques.
---
paper_title: Tumor irradiation with intense ultrasound
paper_content:
Tumors implanted in the hamster flank have been irradiated in vivo with intense focused ultrasound at a spatial peak intensity of 907 W/cm2. A matrix of points was irradiated under c.w. conditions through the central plane of the tumor and perpendicular to the longitudinal axis of the sound field. A center spacing of 4 mm between matrix points and a time-on period of 2.5 sec at each point produced no cures. A spacing distance of 2 mm with 7 sec time-on period at each point increased mean survival time in non-cured animals and produced a cure rate of 29.4%. Combining the second regime of ultrasound treatment with administration of a chemotherapeutic agent (BCNU) 24 hr after irradiation did not increase mean survival time in the non-cured animals compared to the BCNU non-irradiated shams; however, the cure rate increased to 40%. Secondary tumors which were not seen in any ultrasound shams or controls were observed in all other regimes including BCNU non-irradiated shams. The incidence of secondary tumors was inversely related to the cure rate.
---
paper_title: Clinical hyperthermia of prostate cancer using magnetic nanoparticles: presentation of a new interstitial technique.
paper_content:
The aim of this pilot study was to evaluate whether the technique of magnetic fluid hyperthermia can be used for minimally invasive treatment of prostate cancer. This paper presents the first clinical application of interstitial hyperthermia using magnetic nanoparticles in locally recurrent prostate cancer. Treatment planning was carried out using computerized tomography (CT) of the prostate. Based on the individual anatomy of the prostate and the estimated specific absorption rate (SAR) of magnetic fluids in prostatic tissue, the number and position of magnetic fluid depots required for sufficient heat deposition was calculated while rectum and urethra were spared. Nanoparticle suspensions were injected transperineally into the prostate under transrectal ultrasound and fluoroscopy guidance. Treatments were delivered in the first magnetic field applicator for use in humans, using an alternating current magnetic field with a frequency of 100 kHz and variable field strength (0-18 kA/m). Invasive thermometry of the prostate was carried out in the first and last of six weekly hyperthermia sessions of 60 min duration. CT scans of the prostate were repeated following the first and last hyperthermia treatment to document magnetic nanoparticle distribution and the position of the thermometry probes in the prostate. Nanoparticles were retained in the prostate during the treatment interval of 6 weeks. Using appropriate software (AMIRA), a non-invasive estimation of temperature values in the prostate, based on intra-tumoural distribution of magnetic nanoparticles, can be performed and correlated with invasively measured intra-prostatic temperatures. Using a specially designed cooling device, treatment was well tolerated without anaesthesia. In the first patient treated, maximum and minimum intra-prostatic temperatures measured at a field strength of 4.0-5.0 kA/m were 48.5 degrees C and 40.0 degrees C during the 1st treatment and 42.5 degrees C and 39.4 degrees C during the 6th treatment, respectively. These first clinical experiences prompted us to initiate a phase I study to evaluate feasibility, toxicity and quality of life during hyperthermia using magnetic nanoparticles in patients with biopsy-proven local recurrence of prostate cancer following radiotherapy with curative intent. To the authors' knowledge, this is the first report on clinical application of interstitial hyperthermia using magnetic nanoparticles in the treatment of human cancer.
---
paper_title: Progress in applications of magnetic nanoparticles in biomedicine
paper_content:
A progress report is presented on a selection of scientific, technological and commercial advances in the biomedical applications of magnetic nanoparticles since 2003. Particular attention is paid to (i) magnetic actuation for in vitro non-viral transfection and tissue engineering and in vivo drug delivery and gene therapy, (ii) recent clinical results for magnetic hyperthermia treatments of brain and prostate cancer via direct injection, and continuing efforts to develop new agents suitable for targeted hyperthermia following intravenous injection and (iii) developments in medical sensing technologies involving a new generation of magnetic resonance imaging contrast agents, and the invention of magnetic particle imaging as a new modality. Ongoing prospects are also discussed.
---
paper_title: High intensity focused ultrasound for the treatment of rat tumours
paper_content:
Discrete implanted liver tumours in the rat have been exposed to arrays of 1.7 MHz ultrasound lesions. Focal peak intensities in the range 1.4-3.5 kW/cm2 were used for an exposure time of 10 s. It has been demonstrated that where the whole tumour volume was exposed to the focused ultrasound beam, no evidence of tumour growth could be detected histologically. Where the ultrasonic lesion array was not contiguous, regrowth occurred. Preliminary histological studies confirmed this finding.
---
paper_title: Magnetic nanoparticle design for medical diagnosis and therapy
paper_content:
Magnetic nanoparticles have attracted attention because of their current and potential usefulness as contrast agents for magnetic resonance imaging (MRI) or colloidal mediators for cancer magnetic hyperthermia. This review examines these in vivo applications through an understanding of the involved problems and the current and future possibilities for resolving them. A special emphasis is made on magnetic nanoparticle requirements from a physical viewpoint (e.g. relaxivity for MRI and specific absorption rate for hyperthermia), the factors affecting their biodistribution (e.g. size, surface hydrophobic/hydrophilic balance, etc.) and the solutions envisaged for enhancing their half-life in the blood compartment and targeting tumour cells.
---
paper_title: Magnetic fluid hyperthermia (MFH)reduces prostate cancer growth in the orthotopic Dunning R3327 rat model
paper_content:
BACKGROUND: Magnetic fluid hyperthermia (MFH) is a new technique for interstitial hyperthermia or thermoablation based on AC magnetic field-induced excitation of biocompatible superparamagnetic nanoparticles. Preliminary studies in the Dunning tumor model of prostate cancer have demonstrated the feasibility of MFH in vivo. To confirm these results and evaluate the potential of MFH as a minimally invasive treatment of prostate cancer we carried out a systematic analysis of the effects of MFH in the orthotopic Dunning R3327 tumor model of the rat. METHODS: Orthotopic tumors were induced by implantation of MatLyLu-cells into the prostates of 48 male Copenhagen rats. Animals were randomly allocated to 4 groups of 12 rats each, including controls. Treatment animals received two MFH treatments following a single intratumoral injection of a magnetic fluid. Treatments were carried out on days 10 and 12 after tumor induction using an AC magnetic field applicator system operating at a frequency of 100 kHz and a variable field strength (0–18 kA/m). On day 20, animals were sacrificed and tumor weights in the treatment and control groups were compared. In addition, tumor growth curves were generated and histological examinations and iron measurements in selected organs were carried out. RESULTS: Maximum intratumoral temperatures of over 70°C could be obtained with MFH at an AC magnetic field strength of 18 kA/m. At a constant field strength of 12.6 kA/m, mean maximal and minimal intratumoral temperatures recorded were 54.8°C (centrally) and 41.2°C (peripherally). MFH led to an inhibition of tumor growth of 44%–51% over controls. Mean iron content in the prostates of treated and untreated (injection of magnetic fluids but no AC magnetic field exposure) animals was 82.5%, whereas only 5.3% of the injected dose was found in the liver, 1.0% in the lung, and 0.5% in the spleen. CONCLUSIONS: MFH led to a significant growth inhibition in this orthotopic model of the aggressive MatLyLu tumor variant. Intratumoral deposition of magnetic fluids was found to be stable, allowing for serial MFH treatments without repeated injection. The optimal treatment schedules and temperatures for MFH need to be defined in further studies.
---
paper_title: Magnetic fluid hyperthermia inhibits the growth of breast carcinoma and downregulates vascular endothelial growth factor expression
paper_content:
The application of magnetic fluid hyperthermia (MFH) with nanoparticles has been shown to inhibit tumor growth in several animal models. However, the feasibility of using MFH in vivo to treat breast cancer is uncertain, and the mechanism is unclear. In the present study, it was observed that the intratumoral administration of MFH induced hyperthermia significantly in rats with Walker-265 breast carcinomas. The hyperthermia treatment with magnetic nanoparticles inhibited tumor growth in vivo and promoted the survival of the tumor-bearing rats. Furthermore, it was found that MFH treatment downregulated the protein expression of vascular endothelial growth factor (VEGF) in the tumor tissue, as observed by immunohistochemistry. MFH treatment also decreased the gene expression of VEGF and its receptors, VEGF receptor 1 and 2, and inhibited angiogenesis in the tumor tissues. Taken together, these results indicate that the application of MFH with nanoparticles is feasible for the treatment of breast carcinoma. The MFH-induced downregulation of angiogenesis may also contribute to the induction of an anti-tumor effect.
---
paper_title: Status of hyperthermia in the treatment of advanced liver cancer.
paper_content:
The vast majority of patients with malignant liver tumors have inoperable disease. These patients must rely on chemotherapy, radiotherapy, and various locoregional treatments. Although these treatments have demonstrated encouraging response rates, symptom palliation and occasional down staging of tumors, their impact on survival is minor. As a result there has been renewed interest in hyperthermia as a treatment option. This study reviews the current modalities of hyperthermia in terms of clinical results, side effects, limitations, and therapeutic standing.
---
paper_title: Theoretical predictions for spatially-focused heating of magnetic nanoparticles guided by magnetic particle imaging field gradients
paper_content:
Magnetic nanoparticles in alternating magnetic fields (AMFs) transfer some of the field's energy to their surroundings in the form of heat, a property that has attracted significant attention for use in cancer treatment through hyperthermia and in developing magnetic drug carriers that can be actuated to release their cargo externally using magnetic fields. To date, most work in this field has focused on the use of AMFs that actuate heat release by nanoparticles over large regions, without the ability to select specific nanoparticle-loaded regions for heating while leaving other nanoparticle-loaded regions unaffected. In parallel, magnetic particle imaging (MPI) has emerged as a promising approach to image the distribution of magnetic nanoparticle tracers in vivo, with sub-millimeter spatial resolution. The underlying principle in MPI is the application of a selection magnetic field gradient, which defines a small region of low bias field, superimposed with an AMF (of lower frequency and amplitude than those normally used to actuate heating by the nanoparticles) to obtain a signal which is proportional to the concentration of particles in the region of low bias field. Here we extend previous models for estimating the energy dissipation rates of magnetic nanoparticles in uniform AMFs to provide theoretical predictions of how the selection magnetic field gradient used in MPI can be used to selectively actuate heating by magnetic nanoparticles in the low bias field region of the selection magnetic field gradient. Theoretical predictions are given for the spatial decay in energy dissipation rate under magnetic field gradients representative of those that can be achieved with current MPI technology. These results underscore the potential of combining MPI and higher amplitude/frequency actuation AMFs to achieve selective magnetic fluid hyperthermia (MFH) guided by MPI.
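As a purely geometric illustration of how a selection gradient confines heating (a rough sketch under assumed numbers, not the dissipation model developed in the paper), the bias field grows roughly linearly with distance from the field-free point, so particles remain efficient heaters only where the bias stays below some suppression threshold:
```python
# Geometric sketch: a selection gradient G (T/m) produces a bias field that
# increases roughly linearly with distance x from the field-free point,
# |B(x)| ~ G * x. If heating is assumed to be largely suppressed once the
# bias exceeds some threshold B_sup, the radius of the "heating zone" is
# roughly B_sup / G. Both G and B_sup below are illustrative assumptions.

def heating_zone_radius_mm(gradient_t_per_m: float, b_sup_mt: float) -> float:
    """Approximate radius (mm) inside which the bias field stays below b_sup_mt."""
    return (b_sup_mt * 1e-3) / gradient_t_per_m * 1e3

if __name__ == "__main__":
    for gradient in (2.0, 4.0, 6.0):  # T/m, gradients of the order used in MPI
        r = heating_zone_radius_mm(gradient, b_sup_mt=5.0)
        print(f"G = {gradient:.1f} T/m -> zone radius ~ {r:.1f} mm")
```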
---
paper_title: Selective inductive heating of lymph nodes.
paper_content:
R. Gilchrist, Richard Medal, William Shorey, Russell Hanselman, John Parrott and C. Taylor, "Selective Inductive Heating of Lymph Nodes," Annals of Surgery, 1957.
---
paper_title: Use of magnetic nanoparticle heating in the treatment of breast cancer.
paper_content:
Magnetic nanoparticles are promising tools for the minimal invasive elimination of small tumours in the breast using magnetically-induced heating. The approach complies with the increasing demand for breast conserving therapies and has the advantage of offering a selective and refined tuning of the degree of energy deposition allowing an adequate temperature control at the target. The biophysical basis of the approach, the magnetic and structural properties of magnetic nanoparticles are reviewed. Results with model targets and in vivo experiments in laboratory animals are reported.
---
paper_title: High Therapeutic Efficiency of Magnetic Hyperthermia in Xenograft Models Achieved with Moderate Temperature Dosages in the Tumor Area
paper_content:
Purpose: Tumor cells can be effectively inactivated by heating mediated by magnetic nanoparticles. However, optimized nanomaterials to supply thermal stress inside the tumor remain to be identified. The present study investigates the therapeutic effects of magnetic hyperthermia induced by superparamagnetic iron oxide nanoparticles on breast (MDA-MB-231) and pancreatic cancer (BxPC-3) xenografts in mice in vivo.
---
paper_title: Radiofrequency Ablation of Malignant Liver Tumors
paper_content:
Background: Radiofrequency ablation (RFA) is being used to treat primary and metastatic liver tumors. The indications, treatment planning, and limitations of hepatic RFA must be defined and refined by surgeons treating hepatic malignancies. Methods: A review of the experience using RFA to treat unresectable primary and secondary hepatic malignancies at the University of Texas M. D. Anderson Cancer Center in Houston, Texas, and the G. Pascale National Cancer Institute in Naples, Italy, is provided. Patient selection, treatment approach, local recurrence rates, and overall cancer recurrence rates following RFA are described. The current literature on RFA of hepatic malignancies is reviewed. Results: RFA of hepatic tumors can be performed percutaneously, laparoscopically, or during an open surgical procedure. Incomplete treatment manifest as local recurrence is more common with a percutaneous approach. The morbidity and mortality rates associated with hepatic RFA are low. Local recurrence rates are low if meticulous treatment planning is performed. RFA can be combined safely with partial hepatic resection of large lesions. The long-term survival rates following RFA of primary and metastatic liver tumors have not yet been established. Conclusions: RFA of hepatic malignancies is a safe and promising technique to produce coagulative necrosis of unresectable hepatic malignancies. Experience with this treatment modality is not yet mature enough to establish long-term outcomes.
---
paper_title: Magnetic fluid hyperthermia: focus on superparamagnetic iron oxide nanoparticles.
paper_content:
Due to their unique magnetic properties, excellent biocompatibility as well as multi-purpose biomedical potential (e.g., applications in cancer therapy and general drug delivery), superparamagnetic iron oxide nanoparticles (SPIONs) are attracting increasing attention in both pharmaceutical and industrial communities. The precise control of the physiochemical properties of these magnetic systems is crucial for hyperthermia applications, as the induced heat is highly dependent on these properties. In this review, the limitations and recent advances in the development of superparamagnetic iron oxide nanoparticles for hyperthermia are presented.
---
paper_title: Magnetic Particle Imaging Tracers: State-of-the-Art and Future Directions
paper_content:
Magnetic particle imaging (MPI) is an emerging imaging modality with promising applications in diagnostic imaging and guided therapy. The image quality in MPI is strongly dependent on the nature of its iron oxide nanoparticle-based tracers. The selection of potential MPI tracers is currently limited, and the underlying physics of tracer response is not yet fully understood. An in-depth understanding of the magnetic relaxation processes that govern MPI tracers, gained through concerted theoretical and experimental work, is crucial to the development of optimized MPI tracers. Although tailored tracers will lead to improvements in image quality, tailored relaxation may also be exploited for biomedical applications or more flexible image contrast, as in the recent demonstration of color MPI.
---
paper_title: Morbidity and quality of life during thermotherapy using magnetic nanoparticles in locally recurrent prostate cancer: Results of a prospective phase I trial
paper_content:
Purpose: To investigate the treatment-related morbidity and quality of life (QoL) during thermotherapy using superparamagnetic nanoparticles in patients with locally recurrent prostate cancer.Materials and Methods: Ten patients with biopsy-proven locally recurrent prostate cancer following primary therapy with curative intent and no detectable metastases were entered on a prospective phase I trial. Endpoints were feasibility, toxicity and QoL. Following intraprostatic injection of a nanoparticle dispersion, six thermal therapy sessions of 60 min duration were delivered at weekly intervals using an alternating magnetic field. National Cancer Institute (NCI) common toxicity criteria (CTC) and the European Organization for Research and Treatment of Cancer (EORTC) QLQ-C30 and QLQ-PR25 questionnaires were used to evaluate toxicity and QoL, respectively. In addition, prostate specific antigen (PSA) measurements were carried out.Results: Maximum temperatures up to 55°C were achieved in the prostates at 25–30% of...
---
paper_title: Nanoparticle-mediated thermal therapy: Evolving strategies for prostate cancer therapy
paper_content:
Purpose: Recent advances in nanotechnology have resulted in the manufacture of a plethora of nanoparticles of different sizes, shapes, core physicochemical properties and surface modifications that are being investigated for potential medical applications, particularly for the treatment of cancer. This review focuses on the therapeutic use of customised gold nanoparticles, magnetic nanoparticles and carbon nanotubes that efficiently generate heat upon electromagnetic (light and magnetic fields) stimulation after direct injection into tumours or preferential accumulation in tumours following systemic administration. This review will also focus on the evolving strategies to improve the therapeutic index of prostate cancer treatment using nanoparticle-mediated hyperthermia.Conclusions: Nanoparticle-mediated thermal therapy is a new and minimally invasive tool in the armamentarium for the treatment of cancers. Unique challenges posed by this form of hyperthermia include the non-target biodistribution of nanop...
---
paper_title: Physical limits of hyperthermia using magnetite fine particles
paper_content:
Structural and magnetic properties of fine particles of magnetite are investigated with respect to the application for hyperthermia. Magnetic hysteresis losses are measured in dependence on the field amplitude for selected commercial powders and are discussed in terms of grain size and structure of the particles. For ferromagnetic powders as well as for ferrofluids, results of heating experiments within organic gels in a magnetic high frequency field are reported. The heating effect depends strongly on the magnetic properties of the magnetite particles which may vary appreciably for different samples in dependence on the particle size and microstructure. In particular, the transition from ferromagnetic to superparamagnetic behavior causes changes of the loss mechanism, and accordingly, of the heating effect. The maximum attainable heating effect is discussed in terms of common theoretical models. Rise of temperature at the surface of a small heated sample as well as in its immediate neighborhood in the surrounding medium is measured in dependence on time and is compared with solutions of the corresponding heat conductivity problem. Conclusions with respect to clinical applications are given.
---
paper_title: Heating magnetic fluid with alternating magnetic field
paper_content:
This study develops analytical relationships and computations of power dissipation in magnetic fluid (ferrofluid) subjected to alternating magnetic field. The dissipation results from the orientational relaxation of particles having thermal fluctuations in a viscous medium.
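As a rough illustration of the relation summarised above, the linear-response (Rosensweig-type) expression for the volumetric loss power is P = π·μ0·χ0·H0²·f·(2πfτ)/(1 + (2πfτ)²). The Python sketch below evaluates it for assumed particle and field parameters (15 nm magnetite cores, 1% volume fraction, τ = 1 µs, 10 kA/m at 300 kHz); none of these values are taken from the paper.

```python
import math

MU0 = 4e-7 * math.pi          # vacuum permeability (T m/A)
KB = 1.380649e-23             # Boltzmann constant (J/K)

def lrt_power_density(H0, f, tau, chi0):
    """Volumetric loss power (W/m^3) in the linear response regime:
    P = pi * mu0 * chi0 * H0^2 * f * (2*pi*f*tau) / (1 + (2*pi*f*tau)^2)."""
    x = 2.0 * math.pi * f * tau
    return math.pi * MU0 * chi0 * H0**2 * f * x / (1.0 + x**2)

# Illustrative (assumed) numbers: 15 nm magnetite cores, 1 % volume fraction.
d, Ms, phi, T = 15e-9, 446e3, 0.01, 310.0         # m, A/m, -, K
V = math.pi * d**3 / 6.0                          # particle volume (m^3)
chi0 = phi * MU0 * Ms**2 * V / (3.0 * KB * T)     # equilibrium susceptibility
tau = 1e-6                                        # assumed effective relaxation time (s)

P = lrt_power_density(H0=10e3, f=300e3, tau=tau, chi0=chi0)   # 10 kA/m, 300 kHz
rho_particle = 5180.0                             # magnetite mass density (kg/m^3)
sar = P / (phi * rho_particle)                    # W per kg of magnetic material
print(f"volumetric loss power ~ {P/1e3:.0f} kW/m^3, SAR ~ {sar/1e3:.0f} W/g")
```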
---
paper_title: Accuracy of available methods for quantifying the heat power generation of nanoparticles for magnetic hyperthermia.
paper_content:
In magnetic hyperthermia, characterising the specific functionality of magnetic nanoparticle arrangements is essential to plan the therapies by simulating maximum achievable temperatures. This functionality, i.e. the heat power released upon application of an alternating magnetic field, is quantified by means of the specific absorption rate (SAR), also referred to as specific loss power (SLP). Many research groups are currently involved in the SAR/SLP determination of newly synthesised materials by several methods, either magnetic or calorimetric, some of which are affected by important and unquantifiable uncertainties that may turn measurements into rough estimates. This paper reviews all these methods, discussing in particular sources of uncertainties, as well as their possible minimisation. In general, magnetic methods, although accurate, do not operate in the conditions of magnetic hyperthermia. Calorimetric methods do, but the easiest to implement, the initial-slope method in isoperibol conditions, derives inaccuracies coming from the lack of matching between thermal models, experimental set-ups and measuring conditions, while the most accurate, the pulse-heating method in adiabatic conditions, requires more complex set-ups.
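The initial-slope calorimetric method discussed here reduces to SAR = (C_sample / m_Fe) · (dT/dt)|t→0. Below is a minimal sketch of that estimate applied to a synthetic heating curve; the sample heat capacity, iron mass and mock curve are illustrative assumptions, not data from the paper.

```python
import numpy as np

def sar_initial_slope(t, T, heat_capacity_J_per_K, m_fe_kg, n_fit=5):
    """Estimate SAR (W/g Fe) from the initial slope of a heating curve.

    t, T                : time (s) and temperature (K or deg C) arrays
    heat_capacity_J_per_K: total heat capacity of the sample (J/K)
    m_fe_kg             : iron mass in the sample (kg)
    n_fit               : number of initial points used for the linear fit
    """
    slope = np.polyfit(t[:n_fit], T[:n_fit], 1)[0]          # dT/dt in K/s
    return heat_capacity_J_per_K * slope / (m_fe_kg * 1e3)  # W per gram of Fe

# Synthetic example: 1 mL aqueous sample (C ~ 4.18 J/K) containing 1 mg Fe,
# heating at ~0.05 K/s before losses flatten the curve.
t = np.linspace(0, 60, 61)
T = 25 + 3.0 * (1 - np.exp(-t / 60.0))      # mock heating curve
sar = sar_initial_slope(t, T, heat_capacity_J_per_K=4.18, m_fe_kg=1e-6)
print(f"estimated SAR ~ {sar:.0f} W/g Fe")
```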
---
paper_title: Inductive heating of ferrimagnetic particles and magnetic fluids: Physical evaluation of their potential for hyperthermia
paper_content:
The potential of colloidal subdomain ferrite particle suspensions (SDP) ('magnetic fluids'), exposed to an alternating magnetic field, is evaluated for hyperthermia. Power absorption measurements of different magnetic fluids are presented in comparison to multidomain ferrite particles (MDP). Variations with frequency as well as magnetic field strength have been investigated. The experimental results clearly indicate a definite superiority of even non-optimized magnetic fluids over MDP ferrites regarding their specific absorption rate (SAR). Based on the work of Shliomis et al. (1990) and Hanson (1991), a solid-state physical model is applied to explain the specific properties of magnetic fluids with respect to a possible use in hyperthermia. The experimentally determined SAR data on magnetic fluids are used to estimate the heating capabilities of a magnetic induction heating technique assuming typical human dimensions and tissue parameters. It is considered that for a moderate concentration of 5 mg ferrite per gram tumour (i.e. 0.5% w/w) and clinically acceptable magnetic fields, intratumoral power absorption is comparable to RF heating with local applicators and superior to regional RF heating (by comparison with clinical SAR measurements from regional and local hyperthermia treatments). Owing to the high particle density per volume, inductive heating by magnetic fluids can improve temperature distributions in critical regions. Furthermore, localized application of magnetic fluids in a tumour might be easier and less traumatic than interstitial implantation techniques.
---
paper_title: Heating ability of magnetite nanobeads with various sizes for magnetic hyperthermia at 120kHz, a noninvasive frequency
paper_content:
We synthesized four kinds of magnetite particles having average sizes of 7, 18, 40, and 80 nm and investigated their heating ability when they were dispersed in an agar gel and exposed to an ac magnetic field at 120 kHz, a noninvasive frequency for anticancer hyperthermia. The particles of 18 nm average diameter gave the highest heating ability, though they exhibited narrow hysteresis loops as compared to the particles having average diameters of 40 and 80 nm. This indicates that hysteresis loss does not contribute much to the temperature rise in the 120 kHz ac field, and that Neel relaxation is the dominant contribution for the 18 nm particles. A calculation based on Neel relaxation loss gave a plausible explanation.
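The competition between the Neel and Brownian channels invoked here can be made concrete with the standard expressions τ_N = τ0·exp(KV/kBT) and τ_B = 3ηV_h/(kBT). The sketch below compares the two times for the four core sizes studied; the anisotropy constant, the hydrodynamic-diameter offset and the viscosity are assumed values, not those of the paper.

```python
import math

KB = 1.380649e-23   # Boltzmann constant, J/K

def neel_time(d_core, K_eff, T=310.0, tau0=1e-9):
    """Neel relaxation time tau_N = tau0 * exp(K_eff * V / (kB * T))."""
    V = math.pi * d_core**3 / 6.0
    expo = K_eff * V / (KB * T)
    return float("inf") if expo > 700 else tau0 * math.exp(expo)

def brown_time(d_hydro, eta=1e-3, T=310.0):
    """Brownian relaxation time tau_B = 3 * eta * V_h / (kB * T)."""
    Vh = math.pi * d_hydro**3 / 6.0
    return 3.0 * eta * Vh / (KB * T)

f = 120e3          # drive frequency used in the abstract (Hz)
K = 23e3           # assumed effective anisotropy for magnetite (J/m^3)
for d_nm in (7, 18, 40, 80):
    d = d_nm * 1e-9
    tN = neel_time(d, K)
    tB = brown_time(d + 10e-9)            # assumed hydrodynamic diameter: core + 10 nm
    tau = 1.0 / (1.0 / tN + 1.0 / tB)     # combined (parallel) relaxation time
    # In agar gel or inside cells the Brownian channel is largely blocked,
    # so the Neel time alone governs the loss, as the abstract argues.
    print(f"d = {d_nm:2d} nm: tau_N = {tN:9.2e} s, tau_B = {tB:9.2e} s, "
          f"2*pi*f*tau = {2 * math.pi * f * tau:9.2e}")
```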
---
paper_title: Physics of heat generation using magnetic nanoparticles for hyperthermia.
paper_content:
Magnetic nanoparticle hyperthermia and thermal ablation have been actively studied experimentally and theoretically. In this review, we provide a summary of the literature describing the properties of nanometer-scale magnetic materials suspended in biocompatible fluids and their interactions with external magnetic fields. Summarised are the properties and mechanisms understood to be responsible for magnetic heating, and the models developed to understand the behaviour of single-domain magnets exposed to alternating magnetic fields. Linear response theory and its assumptions have provided a useful beginning point; however, its limitations are apparent when nanoparticle heating is measured over a wide range of magnetic fields. Well-developed models (e.g. for magnetisation reversal mechanisms and pseudo-single domain formation) available from other fields of research are explored. Some of the methods described include effects of moment relaxation, anisotropy, nanoparticle and moment rotation mechanisms, interactions and collective behaviour, which have been experimentally identified to be important. Here, we will discuss the implicit assumptions underlying these analytical models and their relevance to experiments. Numerical simulations will be discussed as an alternative to these simple analytical models, including their applicability to experimental data. Finally, guidelines for the design of optimal magnetic nanoparticles will be presented.
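One of the limitations of linear response theory mentioned here is that it holds only while the Langevin parameter ξ = μ0·Ms·V·H/(kB·T) stays of order one or below. A small check of ξ over typical sizes and field amplitudes, using assumed magnetite values, is sketched below.

```python
import math

MU0, KB = 4e-7 * math.pi, 1.380649e-23

def langevin_parameter(d_core_m, H_A_per_m, Ms=446e3, T=310.0):
    """xi = mu0 * Ms * V * H / (kB * T); linear response theory is usually
    considered reliable only while xi stays of order one or below."""
    V = math.pi * d_core_m**3 / 6.0
    return MU0 * Ms * V * H_A_per_m / (KB * T)

for d_nm in (10, 16, 20, 25):
    for H in (5e3, 15e3, 30e3):          # field amplitudes in A/m
        xi = langevin_parameter(d_nm * 1e-9, H)
        print(f"d = {d_nm} nm, H = {H/1e3:4.0f} kA/m -> xi = {xi:5.2f}")
```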
---
paper_title: Magnetic Nanoparticles for Cancer Therapy
paper_content:
Today, technologies based on magnetic nanoparticles (MNPs) are routinely applied to biological systems with diagnostic or therapeutic purposes. The paradigmatic example is magnetic resonance imaging (MRI), a technique that uses the magnetic moments of MNPs as a disturbance of the proton resonance to obtain images. Similarly, magnetic fluid hyperthermia (MFH) uses MNPs as heat generators to induce localized cell death. The physical basis of these techniques relies on the interaction with external magnetic fields, and therefore the magnetic moment of the particles has to be maximized for these applications. Targeted drug-delivery based on 'smart' nanoparticles is the next step towards more efficient oncologic therapies, by delivering a minimal dose of drug only to the vicinity of the target. Current improvements in this field rely on a) particle functionalization with specific ligands for targeting cell membrane receptors and b) loading MNPs onto cells (e.g., dendritic cells, T-cells, macrophages) having an active role in tumor growth. Here we review the current state of research on applications of magnetic carriers for cancer therapy, discussing the advances and drawbacks of both passive and targeted delivery of MNPs. The most promising strategies for targeted delivery of MNPs are analyzed, evaluating the expected impact on clinical MRI and MFH protocols.
---
paper_title: Impact of magnetic field parameters and iron oxide nanoparticle properties on heat generation for use in magnetic hyperthermia.
paper_content:
Abstract Heating of nanoparticles (NPs) using an AC magnetic field depends on several factors, and optimization of these parameters can improve the efficiency of heat generation for effective cancer therapy while administering a low NP treatment dose. This study investigated magnetic field strength and frequency, NP size, NP concentration, and solution viscosity as important parameters that impact the heating efficiency of iron oxide NPs with magnetite (Fe3O4) and maghemite (γ-Fe2O3) crystal structures. Heating efficiencies were determined for each experimental setting, with specific absorption rates (SARs) ranging from 3.7 to 325.9 W/g Fe. Magnetic heating was conducted on iron oxide NPs synthesized in our laboratories (with average core sizes of 8, 11, 13, and 18 nm), as well as commercially-available iron oxides (with average core sizes of 8, 9, and 16 nm). The experimental magnetic coil system made it possible to isolate the effect of magnetic field parameters and independently study the effect on heat generation. The highest SAR values were found for the 18 nm synthesized particles and the maghemite nanopowder. Magnetic field strengths were applied in the range of 15.1–47.7 kA/m, with field frequencies ranging from 123 to 430 kHz. The best heating was observed for the highest field strengths and frequencies tested, with results following trends predicted by the Rosensweig equation. An increase in solution viscosity led to lower heating rates in nanoparticle solutions, which can have significant implications for the application of magnetic fluid hyperthermia in vivo.
---
paper_title: Simple models for dynamic hysteresis loop calculations of magnetic single-domain nanoparticles: Application to magnetic hyperthermia optimization
paper_content:
To optimize the heating properties of magnetic nanoparticles (MNPs) in magnetic hyperthermia applications, it is necessary to calculate the area of their hysteresis loops in an alternating magnetic field. The separation between “relaxation losses” and “hysteresis losses” presented in several articles is artificial and criticized here. The three types of theories suitable for describing hysteresis loops of MNPs are presented and compared to numerical simulations: equilibrium functions, Stoner–Wohlfarth model based theories (SWMBTs), and a linear response theory (LRT) using the Neel–Brown relaxation time. The configuration where the easy axis of the MNPs is aligned with respect to the magnetic field and the configuration of a random orientation of the easy axis are both studied. Suitable formulas to calculate the hysteresis areas of major cycles are deduced from SWMBTs and from numerical simulations; the domain of validity of the analytical formula is explicitly studied. In the case of minor cycles, the hys...
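Whatever model generates the dynamic loop, the heating follows from its area: SAR = A·f, with A the area of the M(H) cycle per unit mass of magnetic material (and A = π·μ0·χ''·H_max² in the LRT limit). The sketch below integrates an assumed sinusoidal minor loop numerically; the amplitude and phase lag are illustrative and are not fitted to any of the models named in the abstract.

```python
import numpy as np

def loop_area(H, M):
    """Energy dissipated per cycle and per unit mass, A = -mu0 * closed-path
    integral of M dH, evaluated with a trapezoidal rule (J/kg per cycle)."""
    MU0 = 4e-7 * np.pi
    dH = np.diff(H)
    return -MU0 * np.sum(0.5 * (M[1:] + M[:-1]) * dH)

# Illustrative minor loop: sinusoidal field with the magnetisation lagging by a
# fixed phase angle (the LRT picture); amplitude and phase are assumed values.
t = np.linspace(0, 2 * np.pi, 2001)
H = 15e3 * np.cos(t)                 # 15 kA/m field cycle (A/m)
M = 8.0 * np.cos(t - 0.4)            # specific magnetisation (A m^2/kg), lagging

A = loop_area(H, M)                  # J per kg per cycle
f = 300e3                            # assumed drive frequency (Hz)
print(f"loop area ~ {A:.2f} J/kg, SAR = A*f ~ {A * f / 1e3:.1f} W/g")
```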
---
paper_title: Magnetic particle hyperthermia - a promising tumour therapy?
paper_content:
We present a critical review of the state of the art of magnetic particle hyperthermia (MPH) as a minimal invasive tumour therapy. Magnetic principles of heating mechanisms are discussed with respect to the optimum choice of nanoparticle properties. In particular, the relation between superparamagnetic and ferrimagnetic single domain nanoparticles is clarified in order to choose the appropriate particle size distribution and the role of particle mobility for the relaxation path is discussed. Knowledge of the effect of particle properties for achieving high specific heating power provides necessary guidelines for development of nanoparticles tailored for tumour therapy. Nanoscale heat transfer processes are discussed with respect to the achievable temperature increase in cancer cells. The need to realize a well-controlled temperature distribution in tumour tissue represents the most serious problem of MPH, at present. Visionary concepts of particle administration, in particular by means of antibody targeting, are far from clinical practice, yet. On the basis of current knowledge of treating cancer by thermal damaging, this article elucidates possibilities, prospects, and challenges for establishment of MPH as a standard medical procedure.
---
paper_title: Magnetic particle hyperthermia: nanoparticle magnetism and materials development for cancer therapy
paper_content:
Loss processes in magnetic nanoparticles are discussed with respect to optimization of the specific loss power (SLP) for application in tumour hyperthermia. Several types of magnetic iron oxide nanoparticles representative of different preparation methods (wet chemical precipitation, grinding, bacterial synthesis, magnetic size fractionation) are the subject of a comparative study of structural and magnetic properties. Since the specific loss power useful for hyperthermia is restricted by serious limitations of the alternating field amplitude and frequency, the effects of the latter are investigated experimentally in detail. The dependence of the SLP on the mean particle size is studied over a broad size range from superparamagnetic up to multidomain particles, and guidelines for achieving large SLP under the constraints valid for the field parameters are derived. Particles with a mean size of 18 nm having a narrow size distribution proved particularly useful. In particular, very high heating power may be delivered by bacterial magnetosomes, the best sample of which showed nearly 1 kW/g at 410 kHz and 10 kA/m. This value may even be exceeded by metallic magnetic particles, as indicated by measurements on cobalt particles.
---
paper_title: Maghemite nanoparticles with very high AC-losses for application in RF-magnetic hyperthermia
paper_content:
Maghemite nanoparticles covalently coated with polyethylene glycol are investigated with respect to different loss processes in magnetic AC-fields. Transmission electron microscopy reveals a narrow size distribution which may be well approximated by a normal distribution (mean diameter 15.3 nm and distribution width 4.9 nm). Aqueous ferrofluids were characterised by DC-magnetometry, by measuring susceptibility spectra for a frequency range 20 Hz to 1 MHz and by calorimetric measurements of specific loss power (SLP) at 330 and 410 kHz for field amplitudes up to 11.7 kA/m. Extremely high values of SLP in the order of 600 W/g result for 400 kHz and 11 kA/m. In addition to liquid ferrofluids measurements were performed with suspensions in gel in order to elucidate the role of Brownian relaxation. The measured susceptibility spectra may be well reproduced by a model using a superposition of Neel and Brown loss processes under consideration of the observed narrow normal size distribution. In this way the observed very high specific heating power may be well understood. Results are discussed with respect to further optimisation of SLP for medical as well as technical RF-heating applications.
---
paper_title: Evaluation of iron-cobalt/ferrite core-shell nanoparticles for cancer thermotherapy
paper_content:
Magnetic nanoparticles (MNPs) offer promise for local hyperthermia or thermoablative cancer therapy. Magnetic hyperthermia uses MNPs to heat cancerous regions in an rf field. Metallic MNPs have larger magnetic moments than iron oxides, allowing similar heating at lower concentrations. By tuning the magnetic anisotropy in alloys, the heating rate at a particular particle size can be optimized. Fe–Co core-shell MNPs have a protective CoFe2O4 shell, which prevents oxidation. The oxide coating also aids in functionalization and improves biocompatibility of the MNPs. We predict the specific loss power (SLP) for FeCo (SLP ~450 W/g) at biocompatible fields to be significantly larger in comparison to oxide materials. The anisotropy of Fe-Co MNPs may be tuned by composition and/or shape variation to achieve the maximum SLP at a desired particle size.
---
paper_title: The effects of magnetic nanoparticle properties on magnetic fluid hyperthermia
paper_content:
Magnetic fluid hyperthermia (MFH) is a noninvasive treatment that destroys cancer cells by heating a ferrofluid-impregnated malignant tissue with an ac magnetic field while causing minimal damage to the surrounding healthy tissue. The strength of the magnetic field must be sufficient to induce hyperthermia but it is also limited by the human ability to safely withstand it. The ferrofluid material used for hyperthermia should be one that is readily produced and is nontoxic while providing sufficient heating. We examine six materials that have been considered as candidates for MFH use. Examining the heating produced by nanoparticles of these materials, barium-ferrite and cobalt-ferrite are unable to produce sufficient MFH heating, that from iron-cobalt occurs at a far too rapid rate to be safe, while fcc iron-platinum, magnetite, and maghemite are all capable of producing stable controlled heating. We simulate the heating of ferrofluid-loaded tumors containing nanoparticles of the latter three materials to ...
---
paper_title: Theoretical assessment of FePt nanoparticles as heating elements for magnetic hyperthermia
paper_content:
FePt magnetic nanoparticles (MNPs) are expected to be a high-performance nanoheater for magnetic hyperthermia because of their high Curie temperature, high saturation magnetization, and high chemical stability. Here, we present a theoretical performance assessment of chemically disordered fcc-phase FePt MNPs. We calculate heat generation and heat transfer in the tissue when an MNP-loaded tumor is placed on an external alternating magnetic field. For comparison, we estimate the performances of magnetite, maghemite, FeCo, and L10-phase FePt MNPs. We find that an fcc FePt MNP has a superior ability in magnetic hyperthermia
---
paper_title: Nanomedicine: Magnetic Nanoparticles and their Biomedical Applications
paper_content:
During this past decade, science and engineering have seen a rapid increase in interest for nanoscale materials with dimensions less than 100 nm, which lie in the intermediate state between atoms and bulk (solid) materials. Their attributes are significantly altered relative to the corresponding bulk materials as they exhibit size dependent behavior such as quantum size effects (depending on bulk Bohr radius), optical absorption and emission, coulomb staircase behavior (electrical transport), superparamagnetism and various unique properties. They are active components of ferrofluids, recording tape, flexible disk recording media along with potential future applications in spintronics: a new paradigm of electronics utilizing intrinsic charge and spin of electrons for ultra-high-density data storage and quantum computing. They are used in a gamut of biomedical applications: bioseparation of biological entities, therapeutic drugs and gene delivery, radiofrequency-induced destruction of cells and tumors (hyperthermia), and contrast-enhancement agents for magnetic resonance imaging (MRI). The magnetic nanoparticles have optimizable, controllable sizes enabling their comparison to cells (10-100 μm), viruses (20-250 nm), proteins (3-50 nm), and genes (10-100 nm). Scanning Electron Microscopy (SEM), Transmission Electron Microscopy (TEM), Atomic Force Microscopy (AFM) and X-ray photoelectron spectroscopy (XPS) provide necessary characterization methods that enable accurate structural and functional analysis of interaction of the biofunctional particles with the target bioentities. The goal of the present discussion is to provide a broad review of magnetic nanoparticle research with a special focus on the synthesis, functionalization and medical applications of these particles, which have been carried out during the past decade, and to examine several prospective directions.
---
paper_title: The heating effect of magnetic fluids in an alternating magnetic field
paper_content:
The heating mechanism and influencing factors of magnetite particles in a 63 kHz alternating magnetic field and 7 kA/m were studied. The results from in vivo heating experiments suggest that magnetite particles can generate enough energy to heat tumor tissue and perform effective hyperthermia. A novel model for predicting power losses has been proposed.
---
paper_title: Water-Soluble Iron Oxide Nanocubes with High Values of Specific Absorption Rate for Cancer Cell Hyperthermia Treatment
paper_content:
Iron oxide nanocrystals (IONCs) are appealing heat mediator nanoprobes in magnetic-mediated hyperthermia for cancer treatment. Here, specific absorption rate (SAR) values are reported for cube-shaped water-soluble IONCs prepared by a one-pot synthesis approach in a size range between 13 and 40 nm. The SAR values were determined as a function of frequency and magnetic field applied, also spanning technical conditions which are considered biomedically safe for patients. Among the different sizes tested, IONCs with an average diameter of 19 ± 3 nm had significant SAR values in clinical conditions and reached SAR values up to 2452 W/g Fe at 520 kHz and 29 kA/m, which is one of the highest values so far reported for IONCs. In vitro trials carried out on KB cancer cells treated with IONCs of 19 nm have shown efficient hyperthermia performance, with cell mortality of about 50% recorded when an equilibrium temperature of 43 °C was reached after 1 h of treatment.
---
paper_title: Enhancing cancer therapeutics using size-optimized magnetic fluid hyperthermia
paper_content:
Magnetic fluid hyperthermia (MFH) employs heat dissipation from magnetic nanoparticles to elicit a therapeutic outcome in tumor sites, which results in either cell death (>42 °C) or damage (<42 °C) depending on the localized rise in temperature. We investigated the therapeutic effect of MFH in immortalized T lymphocyte (Jurkat) cells using monodisperse magnetite (Fe3O4) nanoparticles (MNPs) synthesized in organic solvents and subsequently transferred to aqueous phase using a biocompatible amphiphilic polymer. Monodisperse MNPs, ∼16 nm diameter, show maximum heating efficiency, or specific loss power (watts/g Fe3O4) in a 373 kHz alternating magnetic field. Our in vitro results, for 15 min of heating, show that only 40% of cells survive for a relatively low dose (490 μg Fe/ml) of these size-optimized MNPs, compared to 80% and 90% survival fraction for 12 and 13 nm MNPs at 600 μg Fe/ml. The significant decrease in cell viability due to MNP-induced hyperthermia from only size-optimized nanoparticles demonstrates the central idea of tailoring size for a specific frequency in order to intrinsically improve the therapeutic potency of MFH by optimizing both dose and time of application.
---
paper_title: Simulation and experimental studies on magnetic hyperthermia with use of superparamagnetic iron oxide nanoparticles.
paper_content:
The purpose of this study was to present simulation and experimental studies on magnetic hyperthermia (MH) with use of an alternating magnetic field (AMF) and superparamagnetic iron oxide nanoparticles (Resovist®). In the simulation studies, the energy dissipation (P) and temperature rise rate (∆T/∆t) were computed under various conditions by use of the probability density function of the particle size distribution based on a log-normal distribution. P and ∆T/∆t and their dependence on the frequency of the AMF (f) largely depended on the particle size of Resovist®. P and ∆T/∆t reached maximum at a diameter of ~24 nm, and were proportional to the amplitude of the AMF (H0) raised to a power of ~2.0. In the experimental studies, we made a device for generating an AMF, and measured the temperature rise under various concentrations of Resovist®, H0, and f. The temperature rise at 10 min after the start of heating was linearly proportional to the concentration of Resovist®, and proportional to H0 raised to a power of ~2.4, which was slightly greater than that expected from the simulation studies. There was a tendency for the temperature rise to saturate with increasing f. In conclusion, this study will be useful for investigating the feasibility of MH with Resovist® and optimizing the parameters for it.
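The size-distribution averaging described here amounts to weighting a single-size dissipation model with a log-normal probability density over diameter. A minimal sketch of that step is given below; the distribution parameters and the placeholder single-size SAR model (peaking near 24 nm, as the abstract reports) are assumptions for illustration only.

```python
import numpy as np

def lognormal_pdf(d, d_median, sigma):
    """Log-normal probability density over particle diameter d (same units as d_median)."""
    return (1.0 / (d * sigma * np.sqrt(2 * np.pi))
            * np.exp(-np.log(d / d_median) ** 2 / (2 * sigma ** 2)))

def sar_single_size(d_nm):
    """Placeholder size-dependent SAR model (W/g), peaking near 24 nm as the
    abstract reports for Resovist-like particles; purely illustrative."""
    return 300.0 * np.exp(-((d_nm - 24.0) / 6.0) ** 2)

d = np.linspace(1, 80, 2000)                          # nm
pdf = lognormal_pdf(d, d_median=20.0, sigma=0.35)     # assumed distribution
pdf /= np.sum(pdf) * (d[1] - d[0])                    # normalise numerically

sar_avg = np.sum(sar_single_size(d) * pdf) * (d[1] - d[0])
print(f"size-averaged SAR ~ {sar_avg:.0f} W/g (single-size peak: 300 W/g)")
```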
---
paper_title: Size dependence of specific power absorption of Fe3O4 particles in AC magnetic field
paper_content:
Abstract The specific absorption rate (SAR) values of aqueous suspensions of magnetite particles with different diameters varying from 7.5 to 416 nm were investigated by measuring the time-dependent temperature curves in an external alternating magnetic field (80 kHz, 32.5 kA/m). Results indicate that the SAR values of magnetite particles are strongly size dependent. For magnetite particles larger than 46 nm, the SAR values increase as the particle size decreases where hysteresis loss is the main contribution mechanism. For magnetite particles of 7.5 and 13 nm which are superparamagnetic, hysteresis loss decreases to zero and, instead, relaxation losses (Neel loss and Brownian rotation loss) dominate, but Brown and Neel relaxation losses of the two samples are all relatively small in the applied frequency of 80 kHz.
---
paper_title: Magnetic Properties of Magnetic Nanoparticles for Efficient Hyperthermia
paper_content:
Localized magnetic hyperthermia using magnetic nanoparticles (MNPs) under the application of small magnetic fields is a promising tool for treating small or deep-seated tumors. For this method to be applicable, the amount of MNPs used should be minimized. Hence, it is essential to enhance the power dissipation or heating efficiency of MNPs. Several factors influence the heating efficiency of MNPs, such as the amplitude and frequency of the applied magnetic field and the structural and magnetic properties of MNPs. We discuss some of the physics principles for effective heating of MNPs focusing on the role of surface anisotropy, interface exchange anisotropy and dipolar interactions. Basic magnetic properties of MNPs such as their superparamagnetic behavior, are briefly reviewed. The influence of temperature on anisotropy and magnetization of MNPs is discussed. Recent development in self-regulated hyperthermia is briefly discussed. Some physical and practical limitations of using MNPs in magnetic hyperthermia are also briefly discussed.
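A convenient back-of-the-envelope companion to the superparamagnetic behaviour discussed here is the blocking temperature, T_B ≈ K·V / (kB·ln(τ_m/τ0)), with ln(τ_m/τ0) ≈ 25 for a ~100 s measurement window. The sketch below evaluates it for an assumed magnetite anisotropy constant; the numbers are illustrative only.

```python
import math

KB = 1.380649e-23   # Boltzmann constant, J/K

def blocking_temperature(d_core_m, K_eff, ln_ratio=25.0):
    """T_B = K_eff * V / (kB * ln(tau_m / tau_0)); ln_ratio ~ 25 for a 100 s window."""
    V = math.pi * d_core_m**3 / 6.0
    return K_eff * V / (KB * ln_ratio)

K = 23e3   # assumed effective anisotropy for magnetite, J/m^3
for d_nm in (8, 12, 16, 20, 25):
    Tb = blocking_temperature(d_nm * 1e-9, K)
    state = "superparamagnetic" if Tb < 310 else "blocked"
    print(f"d = {d_nm:2d} nm -> T_B ~ {Tb:6.0f} K ({state} at body temperature)")
```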
---
paper_title: Spin pinning at ferrite‐organic interfaces
paper_content:
We have previously reported a drastic moment decrease in NiFe2O4 fine particles (<100 Å dia.) coated with an organic surfactant such as oleic acid. Removal of the organic coating restored the moment of the particles. Continued investigation has demonstrated that the apparent moment decrease is due to strong pinning of the spins of those ferrite cations that are bonded to the organic molecules. The evidence for this model includes: (1) Uncoated NiFe2O4 particles, prepared in an otherwise identical fashion and with the same size distribution, showed no decrease in moment. This observation eliminates the possibility that defects or abnormal surface morphology are responsible for the decreased moment. (2) Low temperature Mossbauer measurements in zero field of coated and uncoated particles showed identical spectra associated with ordered fine particle NiFe2O4. (3) Mossbauer data taken on coated particles in a field of 68.6 kOe applied along the direction of γ‐ray emission showed only a small decrease of the Pm=...
---
paper_title: Magnetic hyperthermia efficiency in the cellular environment for different nanoparticle designs.
paper_content:
Abstract Magnetic hyperthermia mediated by magnetic nanomaterials is one promising antitumoral nanotherapy, particularly for its ability to remotely destroy deep tumors. More and more new nanomaterials are being developed for this purpose, with improved heat-generating properties in solution. However, although the ultimate target of these treatments is the tumor cell, the heating efficiency, and the underlying mechanisms, are rarely studied in the cellular environment. Here we attempt to fill this gap by making systematic measurements of both hyperthermia and magnetism in controlled cell environments, using a wide range of nanomaterials. In particular, we report a systematic fall in the heating efficiency for nanomaterials associated with tumour cells. Real-time measurements showed that this loss of heat-generating power occurred very rapidly, within a matter of minutes. The fall in heating correlated with the magnetic characterization of the samples, demonstrating a complete inhibition of the Brownian relaxation in cellular conditions.
---
paper_title: Simulating physiological conditions to evaluate nanoparticles for magnetic fluid hyperthermia (MFH) therapy applications
paper_content:
Abstract Magnetite nanoparticles with high self-heating capacity and low toxicity characteristics are a promising candidate for cancer hyperthermia treatment. In order to achieve minimum dosage to a patient, magnetic nanoparticles with high heating capacity are needed. In addition, the influence of physiological factors on the heat capacity of a material should be investigated in order to determine the feasibility. In this study, magnetite nanoparticles coated with lauric acid were prepared by co-precipitation of Fe3+:Fe2+ in a ratio of 2:1, 5:3, 3:2, and 4:3, and the pH was controlled using NaOH. Structural and magnetization characterization by means of X-ray diffractometry (XRD) and a superconducting quantum interference device (SQUID) revealed that the main species was Fe3O4 and further showed that most of the nanoparticles exhibited superparamagnetic properties. All of the magnetic nanoparticles showed a specific absorption rate (SAR) increase that was linear with the magnetic field strength and frequency of the alternating magnetic field. Among all, the magnetic nanoparticles prepared in a 3:2 ratio showed the highest SAR. To further test the influence of physiological factors on the 3:2 ratio magnetic nanoparticles, we simulated the environment with protein (bovine serum albumin, BSA), blood sugar (dextrose), electrolytes (commercial norm-saline) and viscosity (glycerol) to examine the heating capacity under these conditions. Our results showed that the SAR value was unaffected by the protein and blood sugar environments. On the other hand, the SAR value was significantly reduced in the electrolyte environment, due to precipitation and aggregation with sodium ions. For the simulated viscous environment with glycerol, the result showed that the SAR values reduced with increasing glycerol concentration. We have further tested the heating capacity contribution from the Neel mechanism by trapping the magnetic nanoparticles in a solid form of polydimethylsiloxane (PDMS) to eliminate the heating pathway due to a Brownian motion. We measured the heating capability and determined that 47% of the total heat generated by the magnetic nanoparticles was from the Neel mechanism contribution. For evaluating magnetic nanoparticles, this method provides a fast and low cost method for determining qualitative and quantitative information measurement for the effect of physiological interference and could greatly reduce the cost and time by in vitro or animal test.
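The viscosity dependence reported here follows directly from the Brownian relaxation time τ_B = 3ηV_h/(kB·T): as η increases, τ_B shifts out of the driven frequency window and only the Neel channel keeps contributing, consistent with the partial (~47%) Neel contribution measured above. The sketch below uses an assumed hydrodynamic diameter, drive frequency and nominal viscosities, not the values of the paper.

```python
import math

KB = 1.380649e-23   # Boltzmann constant, J/K

def tau_brown(d_hydro_m, eta_Pa_s, T=310.0):
    """Brownian relaxation time tau_B = 3 * eta * V_h / (kB * T)."""
    Vh = math.pi * d_hydro_m**3 / 6.0
    return 3.0 * eta_Pa_s * Vh / (KB * T)

f = 100e3                     # assumed drive frequency (Hz)
d_h = 30e-9                   # assumed hydrodynamic diameter (m)
for label, eta in [("water", 1.0e-3), ("20% glycerol", 1.8e-3),
                   ("60% glycerol", 10.8e-3), ("cytoplasm-like", 50e-3)]:
    tB = tau_brown(d_h, eta)
    print(f"{label:15s}: tau_B = {tB:8.2e} s, 2*pi*f*tau_B = {2 * math.pi * f * tB:8.2f}")
```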
---
paper_title: In vitro characterization of movement, heating and visualization of magnetic nanoparticles for biomedical applications
paper_content:
Magnetic nanoparticles can be used for a variety of biomedical applications. They can be used in the targeted delivery of therapeutic agents in vivo, in the hyperthermic treatment of cancers, in magnetic resonance (MR) imaging as contrast agents and in the biomagnetic separations of biomolecules. In this study, a characterization of the movement and heating of three different types of magnetic nanoparticles in physiological systems in vitro is made in a known external magnetic field and alternating field respectively. Infra-red (IR) imaging and MR imaging were used to visualize these nanoparticles in vitro. A strong dependence on the size and the suspending medium is observed on the movement and heating of these nanoparticles. First, two of the particles (mean diameter d = 10 nm, uncoated Fe3O4 and d = 2.8 µm, polystyrene coated Fe3O4+γ-Fe2O3) did not move while only a dextran coated nanoparticle (d = 50 nm, γ-Fe2O3) moved in type 1 collagen used as an in vitro model system. It is also observed that the time taken by a collection of these nanoparticles to move even a smaller distance (5 mm) in collagen (~100 min) is almost ten times higher when compared to the time taken to move twice the distance (10 mm) in glycerol (~10 min) under the same external field. Second, the amount of temperature rise increases with the concentration of nanoparticles regardless of the microenvironments in the heating studies. However, the amount of heating in collagen (maximum change in temperature ΔTmax~9 °C at 1.9 mg Fe ml−1 and 19 °C at 3.7 mg Fe ml−1) is significantly less than that in water (ΔTmax~15 °C at 1.9 mg Fe ml−1 and 33 °C at 3.7 mg Fe ml−1) and glycerol (ΔTmax~13.5 °C at 1.9 mg Fe ml−1 and 30 °C at 3.7 mg Fe ml−1). Further, IR imaging provides at least a ten times improvement in the range of imaging magnetic nanoparticles, whereby a concentration of (0–4 mg Fe ml−1) could be visualized as compared to (0–0.4 mg Fe ml−1) by MR imaging. Based on these in vitro studies, important issues and parameters that require further understanding and characterization of these nanoparticles in vivo are discussed.
---
paper_title: Magnetic nanoparticles for interstitial thermotherapy: feasibility, tolerance and achieved temperatures
paper_content:
Background: The concept of magnetic fluid hyperthermia is clinically evaluated after development of the whole body magnetic field applicator MFH® 300F and the magnetofluid MFL 082AS. This new system for localized thermotherapy is suitable either for hyperthermia or thermoablation. The magnetic fluid, composed of iron oxide nanoparticles dispersed in water, must be distributed in the tumour and is subsequently heated by exposing to an alternating magnetic field in the applicator. We performed a feasibility study with 22 patients suffering from heavily pretreated recurrences of different tumour entities, where hyperthermia in conjunction with irradiation and/or chemotherapy was an option. The potential to estimate (by post-implantation analyses) and to achieve (by improving the technique) a satisfactory temperature distribution was evaluated in dependency on the implantation technique.Material and methods: Three implantation methods were established: Infiltration under CT fluoroscopy (group A), TRUS (transr...
---
paper_title: Design Maps for the Hyperthermic Treatment of Tumors with Superparamagnetic Nanoparticles
paper_content:
A plethora of magnetic nanoparticles has been developed and investigated under different alternating magnetic fields (AMF) for the hyperthermic treatment of malignant tissues. Yet, clinical applications of magnetic hyperthermia are sporadic, mostly due to the low energy conversion efficiency of the metallic nanoparticles and the high tissue concentrations required. Here, we study the hyperthermic performance of commercially available formulations of superparamagnetic iron oxide nanoparticles (SPIOs), with core diameters of 5, 7 and 14 nm, in terms of absolute temperature increase ΔT and specific absorption rate (SAR). These nanoparticles are operated under a broad range of AMF conditions, with frequency f varying between 0.2 and 30 MHz; field strength H ranging from 4 to 10 kA/m; and concentration c_MNP varying from 0.02 to 3.5 mg/ml. At high frequency field (∼30 MHz), non-specific heating dominates and ΔT correlates with the electrical conductivity of the medium. At low frequency field (<1 MHz), non-specific heating is negligible and the relaxation of the SPIO within the AMF is the sole energy source. We show that the ΔT of the medium grows linearly with c_MNP, whereas the SAR of the magnetic nanoparticles is independent of c_MNP and varies linearly with f and H². Using a computational model for heat transport in a biological tissue, the minimum requirements for local hyperthermia (T_tissue > 42 °C) and thermal ablation (T_tissue > 50 °C) are derived in terms of c_MNP, operating AMF conditions and blood perfusion. The resulting maps can be used to rationally design hyperthermic treatments and identify the proper route of administration - systemic versus intratumor injection - depending on the magnetic and biodistribution properties of the nanoparticles.
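The spirit of these design maps can be reproduced with a conduction-limited estimate: a uniformly heated sphere of radius R in tissue of thermal conductivity k sustains a surface rise ΔT ≈ Q·R²/(3k), so the volumetric power Q = SAR × c needed for a target ΔT fixes a minimum particle concentration. The sketch below uses assumed tissue conductivity and SAR and neglects the blood perfusion that the paper's model includes.

```python
def min_concentration_mg_per_ml(delta_T, radius_m, sar_W_per_g, k_tissue=0.5):
    """Minimum nanoparticle concentration (mg/mL) so that a uniformly loaded
    spherical region of radius R reaches a surface rise delta_T (K), using the
    steady-state conduction estimate delta_T = Q * R^2 / (3 * k)."""
    Q = 3.0 * k_tissue * delta_T / radius_m**2      # required volumetric power, W/m^3
    c_g_per_m3 = Q / sar_W_per_g                    # grams of magnetic material per m^3
    return c_g_per_m3 / 1e3                         # 1 g/m^3 = 1e-3 mg/mL

for R_mm in (2, 5, 10, 15):
    c = min_concentration_mg_per_ml(delta_T=5.0, radius_m=R_mm * 1e-3,
                                    sar_W_per_g=100.0)   # assumed SAR
    print(f"R = {R_mm:2d} mm -> c_min ~ {c:8.2f} mg/mL for a 5 K rise")
```

The R⁻² scaling reproduces the qualitative message of the abstract: small lesions demand concentrations that are hard to reach by systemic delivery, while centimetre-scale lesions can be treated with far lower loadings.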
---
paper_title: Numerical FEM Models for the Planning of Magnetic Induction Hyperthermia Treatments With Nanoparticles
paper_content:
A numerical FEM model of a magnetic fluid hyperthermia (MFH) treatment on an hepatocellular carcinoma (HCC) metastasis has been simulated. Starting from actual CT images of the patient, a 3-D geometry of the anatomical district has been reconstructed and a coupled electromagnetic and thermal transient analysis has been performed, in order to predict the temperature distribution during the treatment. The in vivo effect of blood perfusion has also been implemented through the Pennes's model. Various simulations have been carried out, based on different particle sizes and concentrations, as well as different exciting field intensities.
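A minimal explicit finite-difference version of the Pennes bioheat balance used in such models, reduced to 1-D spherical symmetry around a uniformly heated tumour, is sketched below. All tissue, perfusion and heating values are generic assumptions, not parameters of the cited FEM study.

```python
import numpy as np

# Assumed tissue-like parameters (order-of-magnitude textbook values)
rho, c, k = 1050.0, 3600.0, 0.5           # kg/m^3, J/(kg K), W/(m K)
w_b = 0.5e-3                              # blood perfusion rate (1/s)
rho_b, c_b, T_art = 1060.0, 3600.0, 37.0  # blood density, heat capacity, arterial T
R_tum, R_dom = 5e-3, 30e-3                # tumour radius, domain radius (m)
Q_tum = 3e5                               # assumed volumetric heating in tumour (W/m^3)

N = 200
r = np.linspace(0.0, R_dom, N)
dr = r[1] - r[0]
T = np.full(N, 37.0)
Q = np.where(r <= R_tum, Q_tum, 0.0)

dt = 0.1 * rho * c * dr**2 / k            # conservative explicit time step
for step in range(int(1800.0 / dt)):      # simulate 30 minutes
    lap = np.zeros(N)
    # spherical Laplacian: T'' + (2/r) T', central differences
    lap[1:-1] = ((T[2:] - 2 * T[1:-1] + T[:-2]) / dr**2
                 + (T[2:] - T[:-2]) / (dr * r[1:-1]))
    lap[0] = 6.0 * (T[1] - T[0]) / dr**2  # symmetry condition at r = 0
    dTdt = (k * lap + w_b * rho_b * c_b * (T_art - T) + Q) / (rho * c)
    T += dt * dTdt
    T[-1] = 37.0                          # body temperature far from the tumour

print(f"centre temperature after 30 min: {T[0]:.1f} C, "
      f"tumour rim: {T[np.searchsorted(r, R_tum)]:.1f} C")
```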
---
paper_title: Usable Frequencies in Hyperthermia with Thermal Seeds
paper_content:
Temperature distributions are computed for tissue models assumed to be heated by constant power seeds, and from that, the heating power which the implants have to produce to achieve clinically acceptable temperatures in the tumor is obtained. Calculations of the heat produced by thermal seeds exposed to an electromagnetic induction field showed it to be strongly dependent on the permeability of the material, on the field frequency, on the seed diameter, and on the orientation of the implants with respect to the field. It is recommended that, other parameters permitting, the implants be oriented parallel to the induction field and that the field frequency be approximately 200 kHz or lower. Under these conditions, implants with diameters as small as 0.25 mm produce sufficient heat for any clinical application without undue heating by eddy currents flowing within the patient. The use of frequencies above the recommended range puts certain restrictions on the implant geometry and on the magnetic properties of their material. Needles oriented perpendicular to the field produce enough heat to reach therapeutic temperatures only within a narrow range of parameters.
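The frequency recommendation here is driven by eddy-current heating of the patient: for a tissue cylinder of radius r in an axial field, the cross-section-averaged power density scales as σ(πμ0·f·H·r)²/4, and the related Atkinson-Brezovich comfort criterion is often quoted as H·f ≲ 4.85×10⁸ A m⁻¹ s⁻¹. The sketch below evaluates both for an assumed tissue conductivity and body radius.

```python
import math

MU0 = 4e-7 * math.pi
HF_LIMIT = 4.85e8          # Atkinson-Brezovich comfort criterion, A/(m s)

def eddy_power_density(f, H, r_body=0.15, sigma=0.4):
    """Cross-section-averaged eddy-current heating (W/m^3) of a conducting
    cylinder of radius r_body (m) and conductivity sigma (S/m) in an axial
    sinusoidal field of amplitude H (A/m) and frequency f (Hz)."""
    return sigma * (math.pi * MU0 * f * H * r_body) ** 2 / 4.0

for f, H in [(100e3, 4e3), (200e3, 5e3), (500e3, 10e3), (1e6, 15e3)]:
    p = eddy_power_density(f, H)
    ok = "within" if f * H <= HF_LIMIT else "exceeds"
    print(f"f = {f/1e3:6.0f} kHz, H = {H/1e3:4.1f} kA/m: "
          f"eddy heating ~ {p:10.1f} W/m^3, H*f {ok} 4.85e8 A/(m s)")
```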
---
paper_title: Numerical assessment of a criterion for the optimal choice of the operative conditions in magnetic nanoparticle hyperthermia on a realistic model of the human head.
paper_content:
PURPOSE ::: This paper presents a numerical study aiming at assessing the effectiveness of a recently proposed optimisation criterion for determining the optimal operative conditions in magnetic nanoparticle hyperthermia applied to the clinically relevant case of brain tumours. ::: ::: ::: MATERIALS AND METHODS ::: The study is carried out using the Zubal numerical phantom, and performing electromagnetic-thermal co-simulations. The Pennes model is used for thermal balance; the dissipation models for the magnetic nanoparticles are those available in the literature. The results concerning the optimal therapeutic concentration of nanoparticles, obtained through the analysis, are validated using experimental data on the specific absorption rate of iron oxide nanoparticles, available in the literature. ::: ::: ::: RESULTS ::: The numerical estimates obtained by applying the criterion to the treatment of brain tumours show that the acceptable values for the product between the magnetic field amplitude and frequency may be two to four times larger than the safety threshold of 4.85 × 10⁸ A/m/s usually considered. This would allow the reduction of the dosage of nanoparticles required for an effective treatment. In particular, depending on the tumour depth, concentrations of nanoparticles smaller than 10 mg/mL of tumour may be sufficient for heating tumours smaller than 10 mm above 42 °C. Moreover, the study of the clinical scalability shows that, whatever the tumour position, lesions larger than 15 mm may be successfully treated with concentrations lower than 10 mg/mL. The criterion also allows the prediction of the temperature rise in healthy tissue, thus assuring safe treatment. ::: ::: ::: CONCLUSIONS ::: The criterion can represent a helpful tool for planning and optimising an effective hyperthermia treatment.
---
paper_title: Magnetic nanoparticle hyperthermia cancer treatment efficacy dependence on cellular and tissue level particle concentration and particle heating properties
paper_content:
The use of nanotechnology for the treatment of cancer affords the possibility of highly specific tumor targeting and improved treatment efficacy. Iron oxide magnetic nanoparticles (IONPs) have demonstrated success as an ablative mono-therapy and targetable adjuvant therapy. However, the relative therapeutic value of intracellular vs. extracellular IONPs remains unclear. Our research demonstrates that both extracellular and intracellular IONPs generate cytotoxicity when excited by an alternating magnetic field (AMF). While killing individual cells via intracellular IONP heating is an attractive goal, theoretical models and experimental results suggest that this may not be possible due to limitations of cell volume, applied AMF, IONP concentration and specific absorption rate (SAR). The goal of this study was to examine the importance of tumor size (cell number) with respect to IONP concentration. Mouse mammary adenocarcinoma cells were incubated with IONPs, washed, spun into different pellet sizes (0.1, 0.5 and 2 million cells) and exposed to AMF. The level of heating and associated cytotoxicity depended primarily on the number of IONPs /amount Fe per cell pellet volume and the relative volume of the cell pellet. Specifically, larger cell pellets achieved greater relative cytotoxicity due to greater iron amounts, close association and subsequently higher temperatures.
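The pellet-size dependence reported here matches the classic conduction estimate for a uniformly heated sphere of radius a, ΔT ≈ Q·a²/(3k): at fixed iron loading per unit volume the achievable rise scales with a², which is why an isolated cell cannot heat itself appreciably. The sketch below uses assumed per-cell iron content, SAR and medium conductivity, not the values of the study.

```python
import math

def surface_temperature_rise(radius_m, q_W_per_m3, k_medium=0.6):
    """Steady-state surface rise of a uniformly heated sphere in an infinite
    conducting medium: delta_T = Q * a^2 / (3 * k)."""
    return q_W_per_m3 * radius_m ** 2 / (3.0 * k_medium)

# Assume the same intracellular iron loading everywhere: 10 pg Fe per cell,
# SAR = 100 W/g Fe, cell radius 7.5 um -> volumetric heat density Q.
m_fe_per_cell = 10e-12 * 1e-3                 # kg of Fe per cell
r_cell = 7.5e-6
V_cell = 4.0 / 3.0 * math.pi * r_cell ** 3
Q = 100.0 * (m_fe_per_cell * 1e3) / V_cell    # W/m^3 at 100 W/g Fe

for a in (7.5e-6, 0.1e-3, 0.5e-3, 2e-3):      # single cell up to a 2 mm pellet
    dT = surface_temperature_rise(a, Q)
    print(f"radius = {a*1e3:6.3f} mm -> delta_T ~ {dT:.2e} K")
```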
---
paper_title: Study of the Optimum Dose of Ferromagnetic Nanoparticles Suitable for Cancer Therapy Using MFH
paper_content:
At present, a successful realization of the magnetic fluid hyperthermia (MFH) therapy is conditioned by some unsolved problems. One of these problems is the choice of the correct particle concentration in order to achieve a defined temperature increase in the tumor tissue. A computer-based model was created using COMSOL Multiphysics in order to simulate the heat dissipation within the tissue for typical configurations of the tumor position in relation to neighboring blood vessels as well as particle distribution within the tumor. The temperature achieved on the tumor border was investigated taking into account physiological parameters of different types of tissues. Using the correct nanoparticle dosage and considering their specific loss power, it is possible to estimate the efficiency of this therapeutic method. If the tumor shape and position are known by suitable medical imaging techniques (e.g., MRI, CT), simulations like this one could provide data in order to achieve the optimum dose and particle distribution in the tumor.
---
paper_title: In vitro characterization of movement, heating and visualization of magnetic nanoparticles for biomedical applications
paper_content:
Magnetic nanoparticles can be used for a variety of biomedical applications. They can be used in the targeted delivery of therapeutic agents in vivo, in the hyperthermic treatment of cancers, in magnetic resonance (MR) imaging as contrast agents and in the biomagnetic separations of biomolecules. In this study, a characterization of the movement and heating of three different types of magnetic nanoparticles in physiological systems in vitro is made in a known external magnetic field and alternating field, respectively. Infra-red (IR) imaging and MR imaging were used to visualize these nanoparticles in vitro. A strong dependence on the size and the suspending medium is observed on the movement and heating of these nanoparticles. First, two of the particles (mean diameter d = 10 nm, uncoated Fe3O4 and d = 2.8 µm, polystyrene coated Fe3O4+γ-Fe2O3) did not move while only a dextran coated nanoparticle (d = 50 nm, γ-Fe2O3) moved in type 1 collagen used as an in vitro model system. It is also observed that the time taken by a collection of these nanoparticles to move even a smaller distance (5 mm) in collagen (~100 min) is almost ten times higher when compared to the time taken to move twice the distance (10 mm) in glycerol (~10 min) under the same external field. Second, the amount of temperature rise increases with the concentration of nanoparticles regardless of the microenvironments in the heating studies. However, the amount of heating in collagen (maximum change in temperature ΔTmax~9 °C at 1.9 mg Fe ml−1 and 19 °C at 3.7 mg Fe ml−1) is significantly less than that in water (ΔTmax~15 °C at 1.9 mg Fe ml−1 and 33 °C at 3.7 mg Fe ml−1) and glycerol (ΔTmax~13.5 °C at 1.9 mg Fe ml−1 and 30 °C at 3.7 mg Fe ml−1). Further, IR imaging provides at least a ten times improvement in the range of imaging magnetic nanoparticles, whereby a concentration of (0–4 mg Fe ml−1) could be visualized as compared to (0–0.4 mg Fe ml−1) by MR imaging. Based on these in vitro studies, important issues and parameters that require further understanding and characterization of these nanoparticles in vivo are discussed.
---
paper_title: Heating magnetic fluid with alternating magnetic field
paper_content:
This study develops analytical relationships and computations of power dissipation in magnetic fluid (ferrofluid) subjected to an alternating magnetic field. The dissipation results from the orientational relaxation of particles having thermal fluctuations in a viscous medium.
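For orientation, the linear-response result of this type of analysis is usually quoted in the following standard form (a textbook statement, not copied verbatim from the paper):

```latex
P = \pi \mu_0 \chi_0 H_0^2 f \,\frac{2\pi f \tau}{1 + (2\pi f \tau)^2},
\qquad
\frac{1}{\tau} = \frac{1}{\tau_N} + \frac{1}{\tau_B},
\qquad
\tau_N = \tau_0 \exp\!\left(\frac{K V_M}{k_B T}\right),
\qquad
\tau_B = \frac{3 \eta V_H}{k_B T},
```

where χ_0 is the equilibrium susceptibility, H_0 and f the field amplitude and frequency, K the anisotropy constant, V_M and V_H the magnetic and hydrodynamic particle volumes, and η the carrier viscosity; the effective relaxation time τ combines the Néel and Brownian channels.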
---
paper_title: Size-dependant heating rates of iron oxide nanoparticles for magnetic fluid hyperthermia.
paper_content:
Using the thermal decomposition of organometallics method, we have synthesized high-quality iron oxide nanoparticles of tailorable size up to ~15 nm and transferred them to a water phase by coating with a biocompatible polymer. The magnetic behavior of these particles was measured and fit to a log-normal distribution using the Chantrell method, and their polydispersity was confirmed to be very narrow. By performing calorimetry measurements with these monodisperse particles we have unambiguously demonstrated, for the first time, that at a given frequency, heating rates of superparamagnetic particles are dependent on particle size, in agreement with earlier theoretical predictions.
---
paper_title: Monodispersed magnetite nanoparticles optimized for magnetic fluid hyperthermia: Implications in biological systems
paper_content:
Magnetite (Fe3O4) nanoparticles (MNPs) are suitable materials for Magnetic Fluid Hyperthermia (MFH), provided their size is carefully tailored to the applied alternating magnetic field (AMF) frequency. Since aqueous synthesis routes produce polydisperse MNPs that are not tailored for any specific AMF frequency, we have developed a comprehensive protocol for synthesizing highly monodispersed MNPs in organic solvents, specifically tailored for our field conditions (f = 376 kHz, H_0 = 13.4 kA/m), and subsequently transferred them to water using a biocompatible amphiphilic polymer. These MNPs (σ_avg = 0.175) show truly size-dependent heating rates, indicated by a sharp peak in the specific loss power (SLP, W/g Fe3O4) for 16 nm (diameter) particles. For broader size distributions (σ_avg = 0.266), we observe a 30% drop in overall SLP. Furthermore, heating measurements in biological medium [Dulbecco’s modified Eagle medium (DMEM) + 10% fetal bovine serum] show a significant drop for SLP (∼30% reduction in 16 nm MN...
---
paper_title: Simple models for dynamic hysteresis loop calculations of magnetic single-domain nanoparticles: Application to magnetic hyperthermia optimization
paper_content:
To optimize the heating properties of magnetic nanoparticles (MNPs) in magnetic hyperthermia applications, it is necessary to calculate the area of their hysteresis loops in an alternating magnetic field. The separation between “relaxation losses” and “hysteresis losses” presented in several articles is artificial and criticized here. The three types of theories suitable for describing hysteresis loops of MNPs are presented and compared to numerical simulations: equilibrium functions, Stoner–Wohlfarth model based theories (SWMBTs), and a linear response theory (LRT) using the Neel–Brown relaxation time. The configuration where the easy axis of the MNPs is aligned with respect to the magnetic field and the configuration of a random orientation of the easy axis are both studied. Suitable formulas to calculate the hysteresis areas of major cycles are deduced from SWMBTs and from numerical simulations; the domain of validity of the analytical formula is explicitly studied. In the case of minor cycles, the hys...
---
paper_title: The role of blood flow in hyperthermia.
paper_content:
In hyperthermia, blood flowing through a tissue modifies the temperature distribution produced by the heating technique. The effects of blood flow on temperature distribution during uniform heating of tissues have been calculated using a simple mathematical model, normally employed in the measurement of blood flow by radioisotopes. Blood flow increases the heating time required to produce a certain temperature; it sets a maximum temperature which can be attained by a particular heat input; and it produces a pronounced difference in the temperature of tissues with different flows within the heated volume.
---
paper_title: The Energy Conservation Equation for Living Tissue
paper_content:
The widely used bio-heat transfer equation is examined regarding its term describing the thermal effects of blood perfusion. It is demonstrated, both by physical arguments and numerical results, that the description of thermal convection by blood is inconsistent in this equation, and that results are in error by the same order of magnitude as the convective energy transport itself.
---
paper_title: Thermal injury kinetics in electrical trauma.
paper_content:
The distribution of electrical current and the resultant Joule heating in tissues of the human upper extremity for a worst-case hand-to-hand high-voltage electrical shock was modelled by solving the Bioheat equation using the finite element method. The model of the upper extremity included skin, fat, skeletal muscle, and bone. The parameter sets for these tissues included specific thermal and electrical properties and their respective tissue blood flow rates. The extent of heat mediated cellular injury was estimated by using a damage rate equation based on a single energy barrier chemical reaction model. No cellular injury was assumed to occur for temperatures less than 42 degrees C. This model was solved for the duration of Joule heating required to produce membrane damage in cells, termed the lethal time (of contact) for injury. LT's were determined for contact voltages ranging from 5 to 20 kV. For a 10,000 volt electrical shock LT's for skeletal muscle are predicted to be: 0.5 second in the distal forearm, 1.1 second in the mid-forearm, 1.2 second in the proximal elbow, and 2.0 seconds in the mid-arm. This analysis of the electrical shock provides useful insight into the mechanisms of resultant tissue damage and provides important performance guidelines for the development of safety devices.
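The "single energy barrier chemical reaction model" referred to here is conventionally written as an Arrhenius damage integral; the generic form below is given only for orientation and is not quoted from the paper:

```latex
\Omega(t) = \int_0^{t} A \,\exp\!\left(-\frac{E_a}{R\,T(\tau)}\right)\,\mathrm{d}\tau ,
```

where A is a frequency factor, E_a an activation energy, R the gas constant, and T(τ) the absolute tissue temperature; Ω = 1 is commonly taken as the threshold of irreversible injury.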
---
paper_title: Magnetic Fluid Hyperthermia Modeling Based on Phantom Measurements and Realistic Breast Model
paper_content:
Magnetic fluid hyperthermia (MFH) is a minimally invasive procedure that destroys cancer cells. It is based on a superparamagnetic heating phenomenon and consists in feeding a ferrofluid into a tumor and then applying an external electromagnetic field, which leads to apoptosis. The strength of the magnetic field, the optimal dose of the ferrofluid, the volume of the tumor and the safety standards have to be taken into consideration when MFH treatment is planned. In this study, we have presented a novel complementary investigation based both on experiments and on a numerical methodology connected with female breast cancer. We have conducted experiments on simplified female breast phantoms together with numerical analysis, and then we transferred the results to an anatomically realistic breast model.
---
paper_title: A New Simplified Bioheat Equation for the Effect of Blood Flow on Local Average Tissue Temperature
paper_content:
A new simplified three-dimensional bioheat equation is derived to describe the effect of blood flow on blood-tissue heat transfer. In two recent theoretical and experimental studies [1, 2] the authors have demonstrated that the so-called isotropic blood perfusion term in the existing bioheat equation is negligible because of the microvascular organization, and that the primary mechanism for blood-tissue energy exchange is incomplete countercurrent exchange in the thermally significant microvessels. The new theory to describe this basic mechanism shows that the vascularization of tissue causes it to behave as an anisotropic heat transfer medium. A remarkably simple expression is derived for the tensor conductivity of the tissue as a function of the local vascular geometry and flow velocity in the thermally significant countercurrent vessels. It is also shown that directed as opposed to isotropic blood perfusion between the countercurrent vessels can have a significant influence on heat transfer in regions where the countercurrent vessels are under 70-micron diameter. The new bioheat equation also describes this mechanism.
---
paper_title: Theory and experiment for the effect of vascular microstructure on surface tissue heat transfer--Part I: Anatomical foundation and model conceptualization.
paper_content:
A new theoretical model supported by ultrastructural studies and high-spatial resolution temperature measurements is presented for surface tissue heat transfer in a two-part study. In this first paper, vascular casts of the rabbit thigh prepared by the tissue clearance method were serially sectioned parallel to the skin surface to determine the detailed variation of the vascular geometry as a function of tissue depth. Simple quantitative models of the basic vascular structures observed were then analyzed in terms of their characteristic thermal relaxation lengths and a new three-layer conceptual model proposed for surface tissue heat transfer. Fine wire temperature measurements with an 80-micron average diameter thermocouple junction and spatial increments of 20 micrometers between measurement sites have shown for the first time the detailed temperature fluctuations in the microvasculature and have confirmed the fundamental assumptions of the proposed three-layer model for the deep tissue, skeletal muscle and cutaneous layers.
---
paper_title: Theory and experiment for the effect of vascular microstructure on surface tissue heat transfer--Part II: Model formulation and solution.
paper_content:
In this paper the conceptual three-layer representation of surface tissue heat transfer proposed in Weinbaum, Jiji and Lemons [1], is developed into a detailed quantitative model. This model takes into consideration the variation of the number density, size and flow velocity of the countercurrent arterio-venous vessels as a function of depth from the skin surface, the directionality of blood perfusion in the transverse vessel layer and the superficial shunting of blood to the cutaneous layer. A closed form analytic solution for the boundary value problem coupling the three layers is obtained. This solution is in terms of numerically evaluated integrals describing the detailed vascular geometry, a capillary bleed-off distribution function and parameters describing the shunting of blood to the cutaneous layer. Representative heat transfer results for typical physiological conditions are presented.
---
paper_title: Recent developments in modeling heat transfer in blood perfused tissues
paper_content:
Successful hyperthermia treatment of tumors requires understanding the attendant thermal processes in both diseased and healthy tissue. Accordingly, it is essential for developers and users of hyperthermia equipment to predict, measure and interpret correctly the tissue thermal and vascular response to heating. Modeling of heat transfer in living tissues is a means towards this end. Due to the complex morphology of living tissues, such modeling is a difficult task and some simplifying assumptions are needed. Some investigators have recently argued that Pennes' interpretation of the vascular contribution to heat transfer in perfused tissues fails to account for the actual thermal equilibration process between the flowing blood and the surrounding tissue and proposed new models, presumably based on a more realistic anatomy of the perfused tissue. The present review compares and contrasts several of the new bio-heat transfer models, emphasizing the problematics of their experimental validation, in the absence of measuring equipment capable of reliable evaluation of tissue properties and their variations that occur in the spatial scale of blood vessels with diameters less than about 0.2 mm. For the most part, the new models still lack sound experimental grounding, and in view of their inherent complexity, the best practical approach for modeling bio-heat transfer during hyperthermia may still be the Pennes model, providing its use is based on some insights gained from the studies described here. In such cases, these models should yield a more realistic description of tissue locations and/or thermal conditions for which the Pennes model might not apply.
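Since most of the models surveyed in this reference list either build on or modify the Pennes formulation, its standard form with an added nanoparticle source term is recalled below (a textbook statement, not taken from this review):

```latex
\rho c \,\frac{\partial T}{\partial t}
  = \nabla\!\cdot\!\big(k \,\nabla T\big)
  + \rho_b c_b \,\omega_b \,(T_a - T)
  + Q_{met} + Q_{MNP},
```

where ω_b is the volumetric blood perfusion rate, T_a the arterial blood temperature, Q_met the metabolic heat generation and Q_MNP the power density deposited by the nanoparticles.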
---
paper_title: Is intracellular hyperthermia superior to extracellular hyperthermia in the thermal sense?
paper_content:
More than 20 years ago, it was hypothesized that intracellular hyperthermia is superior to extracellular hyperthermia. It was further hypothesized that even a single biological cell containing magnetic nanoparticles can be treated for hyperthermia by an AC magnetic field, independent of its surrounding cells. Since experimental investigation of the thermal effects of intracellular hyperthermia is not feasible, these hypotheses have been studied theoretically. The current report shows that nano-scale heating effects are negligible. This study further shows that intracellular heat generation is sufficient to create the necessary conditions for hyperthermia only in a large group of cells loaded with nanoparticles, having an overall diameter of at least 1 mm. It is argued in this report that there is no reason to believe that intracellular hyperthermia is superior to extracellular hyperthermia in the thermal sense.
---
paper_title: Modeling of temperature profile during magnetic thermotherapy for cancer treatment
paper_content:
Magnetic nanoparticles (MNPs) used as heat sources for cancer thermotherapy have received much recent attention. While the mechanism for power dissipation in MNPs in a rf field is well understood, a challenge in moving to clinical trials is an inadequate understanding of the power dissipation in MNP-impregnated systems and the discrepancy between the predicted and observed heating rates in the same. Here we use the Rosensweig [J. Magn. Magn. Mater. 252, 370 (2002)] model for heat generation in a single MNP, considering immediate heating of the MNPs, and the double spherical-shell heat transfer equations developed by Andra et al. [J. Magn. Magn. Mater. 194, 197 (1999)] to model the heat distribution in and around a ferrofluid sample or a tumor impregnated with MNPs. We model the heat generated at the edge of a 2.15 cm spherical sample of FeCo/(Fe,Co)3O4 agglomerates containing 95 vol % MNPs with mean radius of 9 nm, dispersed at 1.5–1.6 vol % in bisphenol F. We match the model against experimental data for...
---
paper_title: Transient solution to the bioheat equation and optimization for magnetic fluid hyperthermia treatment
paper_content:
Two finite concentric spherical regions were considered as the tissue model for magnetic fluid hyperthermia treatment. The inner sphere represents the diseased tissue containing magnetic particles that generate heat when an alternating magnetic field is applied. The outer sphere represents the healthy tissue. Blood perfusion effects are included in both the regions. Analytical and numerical solutions of the one-dimensional bioheat transfer equation were obtained with constant and spatially varying heat generation in the inner sphere. The numerical solution was found to be in good agreement with the analytical solution. In an ideal hyperthermia treatment, all the diseased tissues should be selectively heated without affecting any healthy tissue. The present work optimized the magnetic particle concentration in an attempt to achieve the ideal hyperthermia conditions. It was found that, for a fixed amount of magnetic particles, optimizing the magnetic particle distribution in the diseased tissue can signific...
---
paper_title: Temperature distribution as function of time around a small spherical heat source of local magnetic hyperthermia
paper_content:
A spherical region containing magnetic particles embedded in extended muscle tissue is taken as a model of small breast carcinomas. Using analytically derived equations, the spatial temperature distribution is calculated as a function of time during exposure to an alternating magnetic field. In vitro measurements with muscle tissue yielded such an agreement with the calculations that treatment of small tumors in slightly vascularized tissues on the basis of mathematical predictions now seems more promising than in the past.
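A convenient closed-form check on such calculations is the steady-state limit for a uniformly heated sphere of radius R delivering total power P in an infinite medium of a single thermal conductivity k, with perfusion neglected (a standard conduction result, stated here only for orientation):

```latex
\Delta T(r) =
\begin{cases}
\dfrac{P}{8\pi k R}\left(3 - \dfrac{r^{2}}{R^{2}}\right), & r \le R,\\[2ex]
\dfrac{P}{4\pi k r}, & r > R,
\end{cases}
```

so the temperature rise at the sphere surface is P/(4πkR) and the rise at the centre is 1.5 times larger; perfusion and finite exposure time reduce these values.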
---
paper_title: Magnetic Fluid Hyperthermia Modeling Based on Phantom Measurements and Realistic Breast Model
paper_content:
Magnetic fluid hyperthermia (MFH) is a minimally invasive procedure that destroys cancer cells. It is based on a superparamagnetic heating phenomenon and consists in feeding a ferrofluid into a tumor and then applying an external electromagnetic field, which leads to apoptosis. The strength of the magnetic field, the optimal dose of the ferrofluid, the volume of the tumor and the safety standards have to be taken into consideration when MFH treatment is planned. In this study, we have presented a novel complementary investigation based both on experiments and on a numerical methodology connected with female breast cancer. We have conducted experiments on simplified female breast phantoms together with numerical analysis, and then we transferred the results to an anatomically realistic breast model.
---
paper_title: Errors between two- and three-dimensional thermal model predictions of hyperthermia treatments.
paper_content:
A simulation program to study the three-dimensional temperature distributions produced by hyperthermia in anatomically realistic inhomogeneous tissue models has been developed using the bioheat transfer equation. The anatomical data for the inhomogeneous tissues of the human body are entered on a digitizing tablet from serial computed tomography (CT) scans. Power deposition patterns from various heating modalities must be calculated independently. The program has been used to comparatively evaluate two- and three-dimensional simulations in a series of parametric calculations based on a simple inhomogeneous tissue model for uniform power deposition. The conclusions are that two-dimensional simulations always lead to significant errors at the ends of tumors (up to tens of degrees). However, they can give valid results for the central region of large tumors, but only with tumor blood perfusions greater than approximately 1 kg/m^3/s. These conclusions from the geometrically simple model are substantiated by the results obtained using the full three-dimensional model for actual patient anatomical simulations. In summary, three-dimensional simulations will be necessary for accurate patient treatment planning. The effect of the thermal conductivity, used in the models, on the temperature field has also been studied. The results show that using any thermal conductivity value in the range of 0.4 to 0.6 W/m/degrees C sufficiently characterizes most soft tissues, especially in the presence of high blood perfusion. However, bone (thermal conductivity of 1.16 W/m/degrees C) and fat (thermal conductivity of 0.2 W/m/degrees C) do not fit this generalization and significant errors result if soft tissue values are used.
---
paper_title: Study of the Optimum Injection Sites for a Multiple Metastases Region in Cancer Therapy by Using MFH
paper_content:
Metastases represent the final stage in cancer progression. Their early diagnosis and appropriate treatment are very important in order to maintain a good survival prognosis. The interest in MFH (magnetic fluid hyperthermia) and cancer therapy has noticeably increased in the last years. There are still numerous problems that need to be solved before a clinical model may be tested. The goal of this paper is to both quantify the optimum dose of magnetic material and optimize the injection sites in order to achieve a therapeutic temperature of 42 °C that may induce apoptosis in tumor cells. A successful realization of this therapy implies a heating zone of at least 2 mm around the tumor. Finite element method (FEM) simulations of spherical metastases in liver and breast tissues near a blood vessel were performed using COMSOL Multiphysics (heat transfer module) in order to simulate the temperature field produced by ferromagnetic nanoparticles within the tumor and healthy tissues. A systematic variation of tumor diameter and particle dosage was performed for every physical parameter of the tumor tissues mentioned above (e.g., tissue density, tumor/tissue perfusion rate) in order to understand the interdependence of these parameters and their effects on hyperthermia therapy.
---
paper_title: Modeling of temperature profile during magnetic thermotherapy for cancer treatment
paper_content:
Magnetic nanoparticles (MNPs) used as heat sources for cancer thermotherapy have received much recent attention. While the mechanism for power dissipation in MNPs in a rf field is well understood, a challenge in moving to clinical trials is an inadequate understanding of the power dissipation in MNP-impregnated systems and the discrepancy between the predicted and observed heating rates in the same. Here we use the Rosensweig [J. Magn. Magn. Mater. 252, 370 (2002)] model for heat generation in a single MNP, considering immediate heating of the MNPs, and the double spherical-shell heat transfer equations developed by Andra et al. [J. Magn. Magn. Mater. 194, 197 (1999)] to model the heat distribution in and around a ferrofluid sample or a tumor impregnated with MNPs. We model the heat generated at the edge of a 2.15 cm spherical sample of FeCo/(Fe,Co)3O4 agglomerates containing 95 vol % MNPs with mean radius of 9 nm, dispersed at 1.5–1.6 vol % in bisphenol F. We match the model against experimental data for...
---
paper_title: Transient solution to the bioheat equation and optimization for magnetic fluid hyperthermia treatment
paper_content:
Two finite concentric spherical regions were considered as the tissue model for magnetic fluid hyperthermia treatment. The inner sphere represents the diseased tissue containing magnetic particles that generate heat when an alternating magnetic field is applied. The outer sphere represents the healthy tissue. Blood perfusion effects are included in both the regions. Analytical and numerical solutions of the one-dimensional bioheat transfer equation were obtained with constant and spatially varying heat generation in the inner sphere. The numerical solution was found to be in good agreement with the analytical solution. In an ideal hyperthermia treatment, all the diseased tissues should be selectively heated without affecting any healthy tissue. The present work optimized the magnetic particle concentration in an attempt to achieve the ideal hyperthermia conditions. It was found that, for a fixed amount of magnetic particles, optimizing the magnetic particle distribution in the diseased tissue can signific...
---
paper_title: Ferromagnetic Nanoparticles Dose Based on Tumor Size in Magnetic Fluid Hyperthermia Cancer Therapy
paper_content:
The interest in magnetic fluid hyperthermia (MFH) and cancer therapy has noticeably increased in the last years. At present, a successful realization of this interdisciplinary research is hampered by some unsolved problems. One of these problems, which this paper intended to clarify, is how to estimate the appropriate dosage of magnetic nanoparticles that, injected into the tumor, would help achieve an optimum temperature of 42 °C, thus increasing the susceptibility of tumor cells to apoptosis. We created a computational model in COMSOL Multiphysics in order to analyze the heat dissipation within the tumor tissue. By considering various types of tissues with their respective physical and physiological properties (breast, liver, and skin tissues), and also by taking into account the amount of heat generated through Brownian rotation and Néel relaxation, we studied the tumor border temperature achieved for various concentrations of magnetic nanoparticles in their superparamagnetic regime. Distinct simulations of a spherical tumor located in a cubical region of a volume of 1.2-3.5 cm^3 within the tissue were designed. We performed a systematic variation of tumor diameter and particle dosage for every physical parameter of the above-mentioned tumor tissues (e.g., tissue density, tumor/tissue perfusion rate). By this systematization we intended to understand the interdependence of these parameters and their effects on hyperthermia therapy.
---
paper_title: 3D numerical simulations on GPUs of hyperthermia with nanoparticles by a nonlinear bioheat model
paper_content:
This paper deals with the numerical modeling of hyperthermia treatments by magnetic nanoparticles considering a 3D nonlinear Pennes' bioheat transfer model with a temperature-dependent blood perfusion in order to yield more accurate results. The tissue is modeled by considering skin, fat and muscle layers in addition to the tumor. The FDM in a heterogeneous medium is employed and the resulting system of nonlinear equations in the time domain is solved by a predictor-multicorrector algorithm. Since the execution of the three-dimensional model requires a large amount of time, CUDA is used to speed it up. Experimental results showed that the parallelization with CUDA was very effective in improving performance, yielding gains of up to 242 times when compared to the sequential execution time.
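To illustrate the kind of update such a solver performs, the sketch below implements one explicit finite-difference step of the Pennes equation in plain NumPy; it is not the paper's CUDA code, tissue properties are assumed uniform, the perfusion is kept constant rather than temperature-dependent, and np.roll imposes periodic boundaries purely for brevity.

```python
# Minimal sketch: one explicit finite-difference step of the Pennes bioheat
# equation on a uniform 3D grid (serial NumPy, not the paper's CUDA code).
import numpy as np

def pennes_step(T, Q, dt, dx, k=0.5, rho=1000.0, c=3600.0,
                w_b=5e-4, rho_b=1060.0, c_b=3600.0, T_a=37.0):
    """T: temperature field [deg C]; Q: nanoparticle heat source [W/m^3];
    w_b: blood perfusion rate [1/s]. Properties are assumed uniform; np.roll
    gives periodic boundaries, so real boundary conditions are omitted."""
    lap = (np.roll(T, 1, 0) + np.roll(T, -1, 0) +
           np.roll(T, 1, 1) + np.roll(T, -1, 1) +
           np.roll(T, 1, 2) + np.roll(T, -1, 2) - 6.0 * T) / dx**2
    dT_dt = (k * lap + rho_b * c_b * w_b * (T_a - T) + Q) / (rho * c)
    return T + dt * dT_dt

# Example use: a 2 cm cube at 1 mm resolution with a small heated core
# (the source strength is a hypothetical value for illustration only).
T = np.full((20, 20, 20), 37.0)
Q = np.zeros_like(T)
Q[8:12, 8:12, 8:12] = 1e6
for _ in range(1000):          # 50 s of simulated heating at dt = 0.05 s
    T = pennes_step(T, Q, dt=0.05, dx=1e-3)
```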
---
paper_title: Temperature distribution as function of time around a small spherical heat source of local magnetic hyperthermia
paper_content:
A spherical region containing magnetic particles embedded in extended muscle tissue is taken as a model of small breast carcinomas. Using analytically derived equations, the spatial temperature distribution is calculated as a function of time during exposure to an alternating magnetic field. In vitro measurements with muscle tissue yielded such an agreement with the calculations that treatment of small tumors in slightly vascularized tissues on the basis of mathematical predictions now seems more promising than in the past.
---
paper_title: Numerical simulation of effect of vessel bifurcation on heat transfer in the magnetic fluid hyperthermia
paper_content:
This research aimed to analyze the mass and heat transfer mechanisms in magnetic fluid hyperthermia (MFH) treatment, revealing the effect of blood flow in a blood vessel bifurcation on the accurate spatial control of the thermal dose. A three-dimensional multiphysical model was developed to obtain the blood flow velocity distribution, concentration distribution of magnetic fluid, and temperature distribution of the treated tumor tissues. The calculated results demonstrate that the structure, size, and position of a bifurcation vessel greatly affect the selection of injection parameters for MFH treatment. The injection parameters considered in this study are the concentration of magnetic fluid, injection volume, arrangement of injections within the targeted tissues, and distance between the injection site and bifurcation. Diffuse injection patterns, large volumes, and low concentrations generally decrease the temperature differences within the tissues. To achieve uniform heating, high injection density and high-concentration magnetic fluids may be applied to the area near the vessel in order to reduce the cooling effect of blood flow. However, a more diffuse injection pattern is advantageous if the distance between the injection site and blood vessel is relatively short for the purpose of eliminating the heating effect of magnetic fluids.
---
paper_title: FEM numerical model study of heating in magnetic nanoparticles
paper_content:
Electromagnetic heating of nanoparticles is complicated by the extremely short thermal relaxation time constants and difficulty of coupling sufficient power into the particles to achieve desired temperatures. Magnetic field heating by the hysteresis loop mechanism at frequencies between about 100 and 300 kHz has proven to be an effective mechanism in magnetic nanoparticles. Experiments at 2.45 GHz show that Fe3O4 magnetite nanoparticle dispersions in the range of 10^12 to 10^13 NP/mL also heat substantially at this frequency. An FEM numerical model study was undertaken to estimate the order of magnitude of volume power density, Q_gen (W m^-3), required to achieve significant heating in evenly dispersed and aggregated clusters of nanoparticles. The FEM models were computed using COMSOL Multiphysics; consequently the models were confined to continuum formulations and did not include film nano-dimension heat transfer effects at the nanoparticle surface. As an example, the models indicate that for a single 36 nm diameter particle at an equivalent dispersion of 10^13 NP/mL located within one control volume (1.0 × 10^-19 m^3) of a capillary vessel, a power density in the neighborhood of 10^17 W m^-3 is required to achieve a steady state particle temperature of 52 °C - the total power coupled to the particle is 2.44 μW. As a uniformly distributed particle cluster moves farther from the capillary the required power density decreases markedly. Finally, the tendency for particles in vivo to cluster together at separation distances much less than those of the uniform distribution further reduces the required power density.
---
paper_title: Modelling mass and heat transfer in nano-based cancer hyperthermia
paper_content:
We derive a sophisticated mathematical model for coupled heat and mass transport in the tumour microenvironment and we apply it to study nanoparticle delivery and hyperthermic treatment of cancer. The model has the unique ability of combining the following features: (i) realistic vasculature; (ii) coupled capillary and interstitial flow; (iii) coupled capillary and interstitial mass transfer applied to nanoparticles; and (iv) coupled capillary and interstitial heat transfer, which are the fundamental mechanisms governing nano-based hyperthermic treatment. This is an improvement with respect to previous modelling approaches, where the effect of blood perfusion on heat transfer is modelled in a spatially averaged form. We analyse the time evolution and the spatial distribution of particles and temperature in a tumour mass treated with superparamagnetic nanoparticles excited by an alternating magnetic field. By means of numerical experiments, we synthesize scaling laws that illustrate how nano-based hyperthermia depends on tumour size and vascularity. In particular, we identify two distinct mechanisms that regulate the distribution of particle and temperature, which are characterized by perfusion and diffusion, respectively.
---
paper_title: Errors in temperature measurement by thermocouple probes during ultrasound induced hyperthermia
paper_content:
A major problem in the use of localised hyperthermia for treatment of malignant tumours is to obtain an accurate measurement of the temperature in the tissue being treated. Thermocouple probes have generally been employed for measuring temperature elevation during ultrasound irradiation. However, when small objects such as thermocouples are in an ultrasound field in a medium such as tissue, viscous forces acting between the object and the tissue will cause an additional local rise in temperature (Fry & Fry, 1954a, b; Dunn, 1962; Hynynen et al, 1982). This will produce an error in any measurement of tissue temperature with invasive probes. The magnitude of the temperature elevation resulting from shear viscosity has been measured for 50 μm thermocouples in tumour tissue.
---
paper_title: A methodology for determining optimal thermal damage in magnetic nanoparticle hyperthermia cancer treatment
paper_content:
Hyperthermia treatment of tumors uses localized heating to damage cancer cells and can also be utilized to increase the efficacy of other treatment methods such as chemotherapy. Magnetic nanoparticle hyperthermia is one of the least invasive techniques of delivering heat. It is based on injecting magnetic nanoparticles into the tumor and subjecting them to an alternating magnetic field. The technique is aimed at damaging the tumor without affecting the surrounding healthy tissue. In this preliminary study, we consider a simplified model (two concentric spheres that represent the tumor and its surrounding tissues) that employs a numerical solution of the Pennes bioheat equation. The model assumes a Gaussian distribution for the spatial variation of the applied thermal energy and an exponential decay function for the time variation. The objective of the study is to optimize the parameters that control the spatial and the time variation of the thermal energy. The optimization process is performed by formulating a fitness function that rewards damage in the region representing the tumor but penalizes damage in the surrounding tissues. Because of the flatness of this fitness function near the optimum, a genetic algorithm is used as the optimization method for its robust non-gradient-based approach. The overall aim of this work is to propose a methodology that can be used for hyperthermia treatment in a clinical scenario.
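A hypothetical scalar objective in the spirit of the fitness function described here might reward damage in tumor voxels and penalize damage elsewhere; the array names and penalty weight below are illustrative assumptions, not the paper's actual formulation.

```python
# Hypothetical fitness-function sketch: reward thermal damage inside the
# tumor, penalize damage in the surrounding healthy tissue.
import numpy as np

def fitness(damage, tumor_mask, healthy_weight=5.0):
    """damage: per-voxel thermal damage (e.g., an Arrhenius integral);
    tumor_mask: boolean array marking tumor voxels;
    healthy_weight: illustrative penalty weight on healthy-tissue damage."""
    tumor_term = damage[tumor_mask].mean()
    healthy_term = damage[~tumor_mask].mean()
    return tumor_term - healthy_weight * healthy_term
```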
---
paper_title: Magnetic Resonance Thermometry During Hyperthermia for Human High-Grade Sarcoma
paper_content:
Purpose: To determine the feasibility of measuring temperature noninvasively with magnetic resonance imaging during hyperthermia treatment of human tumors. Methods: The proton chemical shift detected using phase-difference magnetic resonance imaging (MRI) was used to measure temperature in phantoms and human tumors during treatment with hyperthermia. Four adult patients having high-grade primary sarcoma tumors of the lower leg received 5 hyperthermia treatments in the MR scanner using an MRI-compatible radiofrequency heating applicator. Prior to each treatment, an average of 3 fiberoptic temperature probes were invasively placed into the tumor (or phantom). Hyperthermia was applied concurrent with MR thermometry. Following completion of the treatment, regions of interest (ROI) were defined on MR phase images at each temperature probe location, in bone marrow, and in gel standards placed outside the heated region. The median phase difference (compared to pretreatment baseline images) was calculated for each ROI. This phase difference was corrected for phase drift observed in standards and bone marrow. The observed phase difference, with and without corrections, was correlated with the fiberoptic temperature measurements. Results: The phase difference observed with MRI was found to correlate with temperature. Phantom measurements demonstrated a linear regression coefficient of 4.70° phase difference per ° Celsius, with an R^2 = 0.998. After human images with artifact were excluded, the linear regression demonstrated a correlation coefficient of 5.5° phase difference per ° Celsius, with an R^2 = 0.84. In both phantom and human treatments, temperature measured via corrected phase difference closely tracked measurements obtained with fiberoptic probes during the hyperthermia treatments. Conclusions: Proton chemical shift imaging with current MRI and hyperthermia technology can be used to monitor and control temperature during treatment of large tumors in the distal lower extremity.
---
paper_title: The practical use of thermocouples for temperature measurement in clinical hyperthermia
paper_content:
The use of thermocouples as invasive thermometers in clinical hyperthermia is comprehensively and critically reviewed. The ability to construct thermocouple probes as small-bore, multiple junction assemblies is a major reason for their popularity and full constructional details are given. The potential sources of measurement error when using thermocouples both in temperature gradients and in electromagnetic or ultrasonic heating fields are discussed. Emphasis is placed upon simple practical solutions to these problems and a combination of good measurement practice and electrical filtering can reduce errors to an insignificant level. Techniques are suggested for the assessment of thermocouple performance during clinical measurement. With careful use, thermocouples can be reliable and convenient thermometers.
---
paper_title: Transient solution to the bioheat equation and optimization for magnetic fluid hyperthermia treatment
paper_content:
Two finite concentric spherical regions were considered as the tissue model for magnetic fluid hyperthermia treatment. The inner sphere represents the diseased tissue containing magnetic particles that generate heat when an alternating magnetic field is applied. The outer sphere represents the healthy tissue. Blood perfusion effects are included in both the regions. Analytical and numerical solutions of the one-dimensional bioheat transfer equation were obtained with constant and spatially varying heat generation in the inner sphere. The numerical solution was found to be in good agreement with the analytical solution. In an ideal hyperthermia treatment, all the diseased tissues should be selectively heated without affecting any healthy tissue. The present work optimized the magnetic particle concentration in an attempt to achieve the ideal hyperthermia conditions. It was found that, for a fixed amount of magnetic particles, optimizing the magnetic particle distribution in the diseased tissue can signific...
---
paper_title: Enhancement in treatment planning for magnetic nanoparticle hyperthermia: optimization of the heat absorption pattern.
paper_content:
In clinical applications of magnetic nanoparticle hyperthermia for cancer treatment it is very important to ensure a maximum damage to the tumor while protecting the normal tissue. The resultant heating pattern by the nanoparticle distribution in tumor is closely related to the injection parameters. In this study we develop an optimization algorithm to inversely determine the optimum heating patterns induced by multiple nanoparticle injections in tumor models with irregular geometries. The injection site locations, thermal properties of tumor and tissue, and local blood perfusion rates are used as inputs to the algorithm to determine the optimum parameters of the heat sources for all nanoparticle injection sites. The design objective is to elevate the temperature of at least 90% of the tumor above 43 degrees C, and to ensure only less than 10% of the normal tissue is heated to temperatures of 43 degrees C or higher. The efficiency, flexibility and capability of this approach have been demonstrated in a case study of two tumors with simple or complicated geometry. An extensive experimental database should be developed in the future to relate the optimized heating pattern parameters found in this study to their appropriate nanoparticle concentration, injection amount, and injection rate. We believe that the optimization algorithm developed in this study can be used as a guideline for physicians to design an optimal treatment plan in magnetic nanoparticle hyperthermia.
---
|
Title: Physical Mechanism and Modeling of Heat Generation and Transfer in Magnetic Fluid Hyperthermia through Néelian and Brownian Relaxation: A Review
Section 1: Introduction
Description 1: Provide background on the current cancer treatments, the concept of thermotherapy, types of hyperthermia, and the rationale behind using hyperthermia as a cancer therapy method.
Section 2: Magnetic Fluid Hyperthermia
Description 2: Explain the concept of Magnetic Fluid Hyperthermia (MFH), its advantages over other hyperthermia methods, and its applications in clinical trials.
Section 3: Heat Generation Model Based on Néelian and Brownian Relaxation
Description 3: Detail the mechanisms of heat generation in magnetic nanoparticles via Néelian and Brownian relaxation, including mathematical formulations and the factors influencing these processes.
Section 4: Type of Material
Description 4: Discuss different types of magnetic nanoparticles used in MFH, their properties, and their suitability for clinical applications.
Section 5: Particle Size
Description 5: Describe the influence of particle size on the heating efficiency and treatment outcome of MFH.
Section 6: Anisotropy
Description 6: Discuss the role of magnetic anisotropy in MFH and its influence on the heating properties of nanoparticles.
Section 7: Viscosity
Description 7: Explain the impact of the medium's viscosity on the heat generation in MFH.
Section 8: Magnetic Field Strength
Description 8: Detail the relationship between magnetic field strength and heat generation in MFH.
Section 9: Magnetic Field Frequency
Description 9: Describe the effect of magnetic field frequency on the specific absorption rate (SAR) and the safety considerations for clinical applications.
Section 10: Concentration
Description 10: Discuss the effect of nanoparticles concentration on the heat generation and treatment outcomes in MFH.
Section 11: Polydispersity
Description 11: Explain the impact of particle size distribution (polydispersity) on the heat generation efficiency in MFH.
Section 12: Bioheat Transfer Model for Heat Distribution
Description 12: Provide an overview of various bioheat transfer models used to describe heat distribution in biological tissues during MFH.
Section 13: Analytical Modeling
Description 13: Summarize the development and application of analytical models to predict temperature distribution during MFH.
Section 14: Numerical Modeling
Description 14: Describe the use of numerical methods, such as FEM and FDM, to simulate heat transfer phenomena in MFH.
Section 15: Optimization Model
Description 15: Discuss optimization techniques to maximize therapeutic effects and minimize side effects in MFH.
Section 16: Conclusion
Description 16: Conclude the review by summarizing the potential of MFH as a cancer treatment and the need for further studies to improve simulation accuracy and treatment effectiveness.
|
A Survey on Intrusion Detection System for Wireless Network
| 9 |
---
paper_title: Using Internal Sensors For Computer Intrusion Detection
paper_content:
This dissertation introduces the concept of using internal sensors to perform intrusion detection in computer systems. It shows its practical feasibility and discusses its characteristics and related design and implementation issues. We introduce a classification of data collection mechanisms for intrusion detection systems. At a conceptual level, these mechanisms are classified as direct and indirect monitoring. At a practical level, direct monitoring can be implemented using external or internal sensors. Internal sensors provide advantages with respect to reliability, completeness, timeliness and volume of data, in addition to efficiency and resistance against attacks. We introduce an architecture called ESP as a framework for building intrusion detection systems based on internal sensors. We describe in detail a prototype implementation based on the ESP architecture and introduce the concept of embedded detectors as a mechanism for localized data reduction. We show that it is possible to build both specific (specialized for a certain intrusion) and generic (able to detect different types of intrusions) detectors. Furthermore, we provide information about the types of data and places of implementation that are most effective in detecting different types of attacks. Finally, performance testing of the ESP implementation shows the impact that embedded detectors can have on a computer system. Detection testing shows that embedded detectors have the capability of detecting a significant percentage of new attacks.
---
paper_title: An overview of wireless intrusion prevention systems
paper_content:
With the fast development of wireless networks, the problems of wireless security have become more and more prominent, and firewall and intrusion detection technologies cannot solve these problems satisfactorily. Wireless intrusion prevention systems, which can effectively prevent attacks on WLANs, have therefore become a research hotspot. In this paper, we first outline the development history of WIPS and then summarize the related work on WIPS in the academic and engineering application areas, respectively. Based on this work, we propose a common wireless intrusion prevention framework (CWIPF) and describe some key technologies used in this framework. Finally, we propose some research issues that should be focused on in the future.
---
|
Title: A Survey on Intrusion Detection System for Wireless Network
Section 1: INTRODUCTION
Description 1: Write about the rapid evolution of wireless networks, their susceptibility to various attacks, the importance of intrusion detection systems (IDS) in securing these networks, and the challenges associated with implementing IDS in wireless environments.
Section 2: Types of Wireless network for Intrusion Detection
Description 2: Introduce the basics of intrusion detection in wireless networks, defining intrusion and attacks, motivation for intrusion detection, and challenges in developing effective intrusion detection schemes.
Section 3: Types of attacks in Wireless network
Description 3: Describe various types of attacks faced by wireless networks, including probing and network discovery, surveillance, DoS (Denial of Service) attacks, impersonation, and man-in-the-middle and rogue AP attacks.
Section 4: Intrusion Detection System
Description 4: Explain what an Intrusion Detection System (IDS) is, how it works, different types of IDS (anomaly detection, misuse detection), and the methods used for intrusion detection.
Section 5: Requirements of IDS in Wireless Network
Description 5: Discuss the key requirements for an effective IDS in a wireless network, including minimal system resource usage, fault-tolerance, ability to resist subversion, and accuracy in detecting intrusions.
Section 6: Wireless Intrusion Prevention System
Description 6: Describe the concept of Wireless Intrusion Prevention System (WIPS), its purpose, and how it extends the capabilities of IDS by not only detecting but also preventing wireless intrusions.
Section 7: RESEARCH SCOPE
Description 7: Provide an overview of the research scope, including the history and development of WIPS, current methodologies, a proposed common framework for wireless intrusion prevention, and future research directions.
Section 8: FUTURE WORK
Description 8: Suggest potential areas for future research based on the current findings, emphasizing the application and comparison of new algorithms in real-time environments.
Section 9: CONCLUSION
Description 9: Summarize the findings of the survey, restating the importance of intrusion detection and prevention in wireless networks, and the methods used to address these issues.
|
A Survey on Homomorphic Encryption Schemes: Theory and Implementation
| 9 |
---
paper_title: Replication is not needed: single database, computationally-private information retrieval
paper_content:
We establish the following, quite unexpected, result: replication of data for the computational private information retrieval problem is not necessary. More specifically, based on the quadratic residuosity assumption, we present a single-database, computationally private information retrieval scheme with O(n^ε) communication complexity for any ε > 0.
---
paper_title: An Intermediate Greek-English Lexicon: Founded Upon the Seventh Edition of Liddell and Scott's Greek-English Lexicon
paper_content:
This abridgement of the world's most authoritative dictionary of ancient Greek is based on the 1883 revision. It includes some discussion of word usage, citing examples and characteristic phrases. Generally speaking, only words used by late writers and scientific terms have been omitted from the full lexicon. From Homer downwards, to the close of Attic Greek, care has been taken to include all words, as well as those used by Aristotle, Plutarch in his Lives, Polybius, Strabo, Lucian, and the writers of the New Testament.
---
paper_title: A new public key cryptosystem based on higher residues
paper_content:
This paper describes a new public-key cryptosystem based on the hardness of computing higher residues modulo a composite RSA integer. We introduce two versions of our scheme, one deterministic and the other probabilistic. The deterministic version is practically oriented: encryption amounts to a single exponentiation w.r.t. a modulus with at least 768 bits and a 160-bit exponent. Decryption can be suitably optimized so as to become less demanding than a couple of RSA decryptions. Although slower than RSA, the new scheme is still reasonably competitive and has several specific applications. The probabilistic version exhibits a homomorphic encryption scheme whose expansion rate is much better than previously proposed such systems. Furthermore, it has semantic security, relative to the hardness of computing higher residues for suitable moduli.
---
paper_title: Evaluating branching programs on encrypted data
paper_content:
We present a public-key encryption scheme with the following properties. Given a branching program P and an encryption c of an input x, it is possible to efficiently compute a succinct ciphertext c′ from which P(x) can be efficiently decoded using the secret key. The size of c′ depends polynomially on the size of x and the length of P, but does not further depend on the size of P. As interesting special cases, one can efficiently evaluate finite automata, decision trees, and OBDDs on encrypted data, where the size of the resulting ciphertext c′ does not depend on the size of the object being evaluated. These are the first general representation models for which such a feasibility result is shown. Our main construction generalizes the approach of Kushilevitz and Ostrovsky (FOCS 1997) for constructing single-server Private Information Retrieval protocols. We also show how to strengthen the above so that c′ does not contain additional information about P (other than P(x) for some x) even if the public key and the ciphertext c are maliciously formed. This yields a two-message secure protocol for evaluating a length-bounded branching program P held by a server on an input x held by a client. A distinctive feature of this protocol is that it hides the size of the server's input P from the client. In particular, the client's work is independent of the size of P.
---
paper_title: Probabilistic encryption & how to play mental poker keeping secret all partial information
paper_content:
This paper proposes an Encryption Scheme that possesses the following property: an adversary who knows the encryption algorithm and is given the ciphertext cannot obtain any information about the clear-text. Any implementation of a Public Key Cryptosystem, as proposed by Diffie and Hellman in [8], should possess this property. Our Encryption Scheme follows the ideas in the number-theoretic implementations of a Public Key Cryptosystem due to Rivest, Shamir and Adleman [13], and Rabin [12].
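A toy sketch of the scheme's bitwise structure is given below: it encrypts single bits under the quadratic residuosity idea and exhibits the XOR homomorphism E(b1)·E(b2) mod n ↦ b1 ⊕ b2. The tiny primes (chosen ≡ 3 mod 4 so that −1 is a non-residue modulo each) are illustrative only, not real parameters.

```python
# Toy Goldwasser-Micali sketch: bit-by-bit encryption based on quadratic
# residuosity, with the XOR homomorphism E(b1)*E(b2) mod n -> b1 XOR b2.
import math, random

p, q = 499, 547                 # toy primes, both = 3 (mod 4); real keys are large
n = p * q
x = n - 1                       # -1 is a quadratic non-residue mod p and mod q

def enc(bit):
    while True:
        y = random.randrange(2, n)
        if math.gcd(y, n) == 1:
            break
    return (pow(y, 2, n) * pow(x, bit, n)) % n

def is_qr(c, prime):            # Euler's criterion
    return pow(c % prime, (prime - 1) // 2, prime) == 1

def dec(c):                     # residue <=> bit 0, non-residue <=> bit 1
    return 0 if (is_qr(c, p) and is_qr(c, q)) else 1

b1, b2 = 1, 0
assert dec((enc(b1) * enc(b2)) % n) == (b1 ^ b2)
```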
---
paper_title: Fundamentals of Abstract Algebra
paper_content:
Sets, relations and integers; introduction to groups; permutation groups; subgroups and normal subgroups; homomorphisms and isomorphisms of groups; Sylow theorems; solvable and nilpotent groups; finitely generated abelian groups; introduction to rings; subrings, ideals and homomorphisms; extensions of rings; direct sum of rings; polynomial rings; Euclidean rings; unique factorization domains; maximal, prime and primary ideals; Noetherian and Artinian rings; modules and vector spaces; matrix rings; field extensions; algebraic extensions; multiplicity of roots; finite fields; Galois theory and applications; normal extensions; geometric constructions; binary codes; Groebner bases; answers and hints to selected exercises.
---
paper_title: A Generalisation, a Simplification and some Applications of Paillier’s Probabilistic Public-Key System
paper_content:
We propose a generalisation of Paillier's probabilistic public key system, in which the expansion factor is reduced and which allows one to adjust the block length of the scheme even after the public key has been fixed, without losing the homomorphic property. We show that the generalisation is as secure as Paillier's original system. We construct a threshold variant of the generalised scheme as well as zero-knowledge protocols to show that a given ciphertext encrypts one of a set of given plaintexts, and protocols to verify multiplicative relations on plaintexts. We then show how these building blocks can be used for applying the scheme to efficient electronic voting. This dramatically reduces the work needed to compute the final result of an election, compared to the previously best known schemes. We show how the basic scheme for a yes/no vote can be easily adapted to casting a vote for up to t out of L candidates. The same basic building blocks can also be adapted to provide receipt-free elections, under appropriate physical assumptions. The scheme for 1 out of L elections can be optimised such that for a certain range of parameter values, a ballot has size only O(log L) bits.
---
paper_title: Protocols for secure computations
paper_content:
Two millionaires wish to know who is richer; however, they do not want to find out inadvertently any additional information about each other’s wealth. How can they carry out such a conversation? This is a special case of the following general problem. Suppose m people wish to compute the value of a function f(x1, x2, x3, . . . , xm), which is an integer-valued function of m integer variables xi of bounded range. Assume initially person Pi knows the value of xi and no other x’s. Is it possible for them to compute the value of f , by communicating among themselves, without unduly giving away any information about the values of their own variables? The millionaires’ problem corresponds to the case when m = 2 and f(x1, x2) = 1 if x1 < x2, and 0 otherwise. In this paper, we will give precise formulation of this general problem and describe three ways of solving it by use of one-way functions (i.e., functions which are easy to evaluate but hard to invert). These results have applications to secret voting, private querying of database, oblivious negotiation, playing mental poker, etc. We will also discuss the complexity question “How many bits need to be exchanged for the computation”, and describe methods to prevent participants from cheating. Finally, we study the question “What cannot be accomplished with one-way functions”. Before describing these results, we would like to put this work in perspective by first considering a unified view of secure computation in the next section.
---
paper_title: Public-Key Cryptosystems Based on Composite Degree Residuosity Classes
paper_content:
This paper investigates a novel computational problem, namely the Composite Residuosity Class Problem, and its applications to public-key cryptography. We propose a new trapdoor mechanism and derive from this technique three encryption schemes: a trapdoor permutation and two homomorphic probabilistic encryption schemes computationally comparable to RSA. Our cryptosystems, based on standard modular arithmetic, are provably secure under appropriate assumptions in the standard model.
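To make the additive homomorphism concrete, here is a minimal Python sketch of a Paillier-style encryption with toy, insecure parameters; the prime sizes and the choice g = n + 1 are illustrative assumptions rather than the paper's recommended instantiation.

```python
# Toy Paillier-style additively homomorphic encryption (insecure parameters, illustration only).
# Requires Python 3.8+ for pow(x, -1, n) and 3.9+ for math.lcm.
import math, random

p, q = 293, 433                          # toy "secret" primes
n = p * q
n2 = n * n
g = n + 1                                # common simplifying choice of generator
lam = math.lcm(p - 1, q - 1)

def L(u):                                # L(u) = (u - 1) / n
    return (u - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)      # precomputed decryption constant

def encrypt(m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (L(pow(c, lam, n2)) * mu) % n

m1, m2 = 1234, 5678
c1, c2 = encrypt(m1), encrypt(m2)
# Additive homomorphism: multiplying ciphertexts adds the plaintexts modulo n.
assert decrypt((c1 * c2) % n2) == (m1 + m2) % n
```

The same ciphertext product is what makes homomorphic vote tallying possible: encrypted 0/1 votes can be summed without decrypting any individual ballot.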
---
paper_title: A method for obtaining digital signatures and public-key cryptosystems
paper_content:
An encryption method is presented with the novel property that publicly revealing an encryption key does not thereby reveal the corresponding decryption key. This has two important consequences: (1) Couriers or other secure means are not needed to transmit keys, since a message can be enciphered using an encryption key publicly revealed by the intended recipient. Only he can decipher the message, since only he knows the corresponding decryption key. (2) A message can be “signed” using a privately held decryption key. Anyone can verify this signature using the corresponding publicly revealed encryption key. Signatures cannot be forged, and a signer cannot later deny the validity of his signature. This has obvious applications in “electronic mail” and “electronic funds transfer” systems. A message is encrypted by representing it as a number M, raising M to a publicly specified power e, and then taking the remainder when the result is divided by the publicly specified product, n, of two large secret prime numbers p and q. Decryption is similar; only a different, secret, power d is used, where e * d ≡ 1 (mod (p - 1) * (q - 1)). The security of the system rests in part on the difficulty of factoring the published divisor, n.
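As a concrete illustration of the rule described above (encryption as M^e mod n, decryption with d where e * d ≡ 1 (mod (p - 1)(q - 1))), the following textbook-RSA sketch uses tiny, insecure primes chosen purely for illustration; it also shows the multiplicative homomorphism that later homomorphic-encryption work builds on.

```python
# Textbook RSA with toy, insecure primes (illustration only); needs Python 3.8+ for pow(e, -1, phi).
p, q = 61, 53
n = p * q                               # public modulus
phi = (p - 1) * (q - 1)
e = 17                                  # public exponent, coprime to phi
d = pow(e, -1, phi)                     # secret exponent: e * d = 1 (mod phi)

def encrypt(m):
    return pow(m, e, n)

def decrypt(c):
    return pow(c, d, n)

m1, m2 = 42, 77
c1, c2 = encrypt(m1), encrypt(m2)
assert decrypt(c1) == m1 and decrypt(c2) == m2
# Multiplicative homomorphism: E(m1) * E(m2) mod n is an encryption of m1 * m2 mod n.
assert decrypt((c1 * c2) % n) == (m1 * m2) % n
```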
---
paper_title: Evaluating 2-dnf formulas on ciphertexts
paper_content:
Let ψ be a 2-DNF formula on boolean variables x1,...,xn ∈ {0,1}. We present a homomorphic public key encryption scheme that allows the public evaluation of ψ given an encryption of the variables x1,...,xn. In other words, given the encryption of the bits x1,...,xn, anyone can create the encryption of ψ(x1,...,xn). More generally, we can evaluate quadratic multi-variate polynomials on ciphertexts provided the resulting value falls within a small set. We present a number of applications of the system: (1) in a database of size n, the total communication in the basic step of the Kushilevitz-Ostrovsky PIR protocol is reduced from $\sqrt{n}$ to $\sqrt[3]{n}$; (2) an efficient election system based on homomorphic encryption where voters do not need to include non-interactive zero-knowledge proofs that their ballots are valid (the election system is proved secure without random oracles but is still efficient); (3) a protocol for universally verifiable computation.
---
paper_title: Multi-bit cryptosystems based on lattice problems
paper_content:
We propose multi-bit versions of several single-bit cryptosystems based on lattice problems, the error-free version of the Ajtai-Dwork cryptosystem by Goldreich, Goldwasser, and Halevi [CRYPTO '97], the Regev cryptosystems [JACM 2004 and STOC 2005], and the Ajtai cryptosystem [STOC 2005]. We develop a universal technique derived from a general structure behind them for constructing their multi-bit versions without increase in the size of ciphertexts. By evaluating the trade-off between the decryption errors and the hardness of underlying lattice problems, it is shown that our multi-bit versions encrypt O(log n)-bit plaintexts into ciphertexts of the same length as the original ones with reasonable sacrifices of the hardness of the underlying lattice problems. Our technique also reveals an algebraic property, named pseudohomomorphism, of the lattice-based cryptosystems.
---
paper_title: Verifiable secret-ballot elections
paper_content:
Privacy in secret-ballot elections has traditionally been attained by using a ballot box or voting booth to disassociate voters from ballots. Although such a system might achieve privacy, there is often little confidence in the accuracy of the announced tally. This thesis describes a practical scheme for conducting secret-ballot elections in which the outcome of an election is verifiable by all participants and even by non-participating observers. All communications are public, yet under a suitable number-theoretic assumption, the privacy of votes remains intact. The tools developed here to conduct such elections have additional independent applications. Cryptographic capsules allow a prover to convince verifiers that either statement A or statement B is true without revealing substantial information as to which. Secret sharing homomorphisms enable computation on shared (secret) data and give a method of distributing shares of a secret such that each shareholder can verify the validity of all shares.
---
paper_title: Homomorphic encryption in the cloud
paper_content:
Since the first notions of fully homomorphic encryption more than 30 years ago, there have been numerous attempts to develop such a system. Finally, in 2009 Craig Gentry succeeded. Homomorphic encryption brings great advantages, but it seems that, at least for now, it also brings many practical difficulties. Furthermore, in the last couple of years, several other fully homomorphic systems have arisen, each with its own advantages and drawbacks. However, with the developments in cloud computing, we need it more than ever to become practical for real-world use. In this paper we discuss the strengths and weaknesses of homomorphic encryption and we give a brief description of several promising fully homomorphic encryption systems. Next, we give special attention to homomorphic encryption systems for cloud computing. Finally, we discuss some recent developments by IBM and their open-source library for homomorphic encryption.
---
paper_title: Survey of Various Homomorphic Encryption algorithms and Schemes
paper_content:
Homomorphic encryption is an encryption technique that permits computations on encrypted data. Homomorphic encryption can be applied in any system by using various public key algorithms. When data is transferred to a public environment, there are many encryption algorithms to secure the operations and the storage of the data. But to process data located on a remote server while preserving privacy, homomorphic encryption is useful because it allows operations on the ciphertext that produce the same results after calculation as working directly on the raw data. In this paper, the main focus is on public key cryptographic algorithms based on homomorphic encryption schemes for preserving security. A case study on various principles and properties of homomorphic encryption is given, and then various homomorphic algorithms using asymmetric key systems, such as the RSA, ElGamal and Paillier algorithms, as well as various homomorphic encryption schemes such as Brakerski-Gentry-Vaikuntanathan (BGV), the Enhanced Homomorphic Cryptosystem (EHC), the algebra homomorphic encryption scheme based on updated ElGamal (AHEE) and the non-interactive exponential homomorphic encryption scheme (NEHE), are investigated.
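Since the survey lists ElGamal among the homomorphic building blocks, a toy Python sketch of its multiplicative homomorphism may help; the small prime and base below are illustrative assumptions only, not parameters taken from the survey.

```python
# Toy ElGamal encryption showing the multiplicative homomorphism (insecure toy parameters).
import random

p = 467                                 # small prime, illustration only
g = 2                                   # public base
x = random.randrange(2, p - 1)          # secret key
h = pow(g, x, p)                        # public key

def encrypt(m):
    r = random.randrange(2, p - 1)
    return pow(g, r, p), (m * pow(h, r, p)) % p

def decrypt(c1, c2):
    return (c2 * pow(c1, p - 1 - x, p)) % p    # divide out the shared mask c1^x

m1, m2 = 123, 45
a1, b1 = encrypt(m1)
a2, b2 = encrypt(m2)
# The component-wise product of two ciphertexts encrypts the product of the plaintexts mod p.
assert decrypt((a1 * a2) % p, (b1 * b2) % p) == (m1 * m2) % p
```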
---
paper_title: Computing Blindfolded: New Developments in Fully Homomorphic Encryption
paper_content:
A fully homomorphic encryption scheme enables computation of arbitrary functions on encrypted data. Fully homomorphic encryption has long been regarded as cryptography's prized "holy grail" - extremely useful yet rather elusive. Starting with the groundbreaking work of Gentry in 2009, the last three years have witnessed numerous constructions of fully homomorphic encryption involving novel mathematical techniques, and a number of exciting applications. We will take the reader through a journey of these developments and provide a glimpse of the exciting research directions that lie ahead.
---
paper_title: State Of Art in Homomorphic Encryption Schemes
paper_content:
The demand for privacy of digital data and of algorithms for handling more complex structures has increased exponentially over the last decade. However, the critical problem arises when there is a requirement for publicly computing with private data or for modifying functions or algorithms in such a way that they are still executable while their privacy is ensured. This is where homomorphic cryptosystems can be used, since these systems enable computations on encrypted data. A fully homomorphic encryption scheme enables computation of arbitrary functions on encrypted data. This enables a customer to generate a program that can be executed by a third party, without revealing the underlying algorithm or the processed data. We will take the reader through a journey of these developments and provide a glimpse of the exciting research directions that lie ahead. In this paper, we propose a selection of the most important available solutions, discussing their properties and limitations.
---
paper_title: Fully Homomorphic Encryption: Cryptography's holy grail
paper_content:
For more than 30 years, cryptographers have embarked on a quest to construct an encryption scheme that would enable arbitrary computation on encrypted data. Conceptually simple, yet notoriously difficult to achieve, cryptography's holy grail opens the door to many new capabilities in our cloud-centric, data-driven world.
---
paper_title: A Survey of Homomorphic Encryption for Nonspecialists
paper_content:
Processing encrypted signals requires special properties of the underlying encryption scheme. A possible choice is the use of homomorphic encryption. In this paper, we propose a selection of the most important available solutions, discussing their properties and limitations.
---
paper_title: Homomorphic Encryption: Theory&Applications
paper_content:
The goal of this chapter is to present a survey of homomorphic encryption techniques and their applications. After a detailed discussion on the introduction and motivation of the chapter, we present some basic concepts of cryptography. The fundamental theories of homomorphic encryption are then discussed with suitable examples. The chapter then provides a survey of some of the classical homomorphic encryption schemes existing in the current literature. Various applications and salient properties of homomorphic encryption schemes are then discussed in detail. The chapter then introduces the most important and recent research direction in the field - fully homomorphic encryption. A significant number of propositions on fully homomorphic encryption are then discussed. Finally, the chapter concludes by outlining some emerging research trends in this exciting field of cryptography.
---
paper_title: Recent Advances in Homomorphic Encryption: A Possible Future for Signal Processing in the Encrypted Domain
paper_content:
Since the introduction of the notion of privacy homomorphism by Rivest et al. in the late 1970s, the design of efficient and secure encryption schemes allowing the performance of general computations in the encrypted domain has been one of the holy grails of the cryptographic community. Despite numerous partial answers, the problem of designing such a powerful primitive has remained open until the theoretical breakthrough of the fully homomorphic encryption (FHE) scheme published by Gentry in the late 2000s. Since then, progress has been fast-paced, and it can now be reasonably said that practical homomorphic encryption-based computing will become a reality in the near future.
---
paper_title: Practical homomorphic encryption: A survey
paper_content:
Cloud computing technology has rapidly evolved over the last decade, offering an alternative way to store and work with large amounts of data. However data security remains an important issue particularly when using a public cloud service provider. The recent area of homomorphic cryptography allows computation on encrypted data, which would allow users to ensure data privacy on the cloud and increase the potential market for cloud computing. A significant amount of research on homomorphic cryptography appeared in the literature over the last few years; yet the performance of existing implementations of encryption schemes remains unsuitable for real time applications. One way this limitation is being addressed is through the use of graphics processing units (GPUs) and field programmable gate arrays (FPGAs) for implementations of homomorphic encryption schemes. This review presents the current state of the art in this promising new area of research and highlights the interesting remaining open problems.
---
paper_title: Homomorphic encryption: from private-key to public-key
paper_content:
We show how to transform any additively homomorphic private-key encryption scheme that is compact into a public-key encryption scheme. By compact we mean that the length of a homomorphically generated encryption is independent of the number of ciphertexts from which it was created. We do not require anything else on the distribution of homomorphically generated encryptions (in particular, we do not require them to be distributed like real ciphertexts). Our resulting public-key scheme is homomorphic in the following sense. If the private-key scheme is i+1-hop homomorphic with respect to some set of operations then the public-key scheme we construct is i-hop homomorphic with respect to the same set of operations.
---
paper_title: A new public key cryptosystem based on higher residues
paper_content:
This paper describes a new public-key cryptosystem based on the hardness of computing higher residues modulo a composite RSA integer. We introduce two versions of our scheme, one deterministic and the other probabilistic. The deterministic version is practically oriented: encryption amounts to a single exponentiation w.r.t. a modulus with at least 768 bits and a 160-bit exponent. Decryption can be suitably optimized so as to become less demanding than a couple of RSA decryptions. Although slower than RSA, the new scheme is still reasonably competitive and has several specific applications. The probabilistic version exhibits a homomorphic encryption scheme whose expansion rate is much better than previously proposed such systems. Furthermore, it has semantic security, relative to the hardness of computing higher residues for suitable moduli.
---
paper_title: Probabilistic encryption & how to play mental poker keeping secret all partial information
paper_content:
This paper proposes an encryption scheme that possesses the following property: an adversary who knows the encryption algorithm and is given the ciphertext cannot obtain any information about the cleartext. Any implementation of a public key cryptosystem, as proposed by Diffie and Hellman in [8], should possess this property. Our encryption scheme follows the ideas in the number-theoretic implementations of a public key cryptosystem due to Rivest, Shamir and Adleman [13], and Rabin [12].
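The construction behind this scheme encrypts one bit at a time using quadratic residuosity. The following toy Python sketch is an illustrative reconstruction rather than the paper's own presentation; the small primes (both chosen congruent to 3 mod 4 so that -1 is a non-residue modulo each) are assumptions made for the example.

```python
# Toy Goldwasser-Micali-style bit encryption (insecure parameters, illustrative reconstruction).
import math, random

p, q = 499, 547                          # toy primes, both congruent to 3 mod 4
n = p * q
x = n - 1                                # -1 mod n: a quadratic non-residue mod p and mod q

def is_qr(a, prime):                     # Euler's criterion
    return pow(a % prime, (prime - 1) // 2, prime) == 1

def encrypt(bit):
    y = random.randrange(2, n)
    while math.gcd(y, n) != 1:
        y = random.randrange(2, n)
    return (pow(y, 2, n) * pow(x, bit, n)) % n

def decrypt(c):
    return 0 if is_qr(c, p) else 1       # quadratic residues encrypt 0, non-residues encrypt 1

b1, b2 = 1, 1
c1, c2 = encrypt(b1), encrypt(b2)
assert decrypt(c1) == b1 and decrypt(c2) == b2
# XOR homomorphism: the product of two ciphertexts encrypts the XOR of the two bits.
assert decrypt((c1 * c2) % n) == (b1 ^ b2)
```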
---
paper_title: New Directions in Cryptography
paper_content:
Two kinds of contemporary developments in cryptography are examined. Widening applications of teleprocessing have given rise to a need for new types of cryptographic systems, which minimize the need for secure key distribution channels and supply the equivalent of a written signature. This paper suggests ways to solve these currently open problems. It also discusses how the theories of communication and computation are beginning to provide the tools to solve cryptographic problems of long standing.
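The best-known construction from this paper is the Diffie-Hellman key exchange; a toy Python sketch is shown below, with a small prime and base that are illustrative assumptions and far too small to be secure.

```python
# Toy Diffie-Hellman key exchange (insecure toy parameters, illustration only).
import random

p = 2087                                 # small public prime
g = 5                                    # public base

a = random.randrange(2, p - 1)           # Alice's secret exponent
b = random.randrange(2, p - 1)           # Bob's secret exponent

A = pow(g, a, p)                         # Alice publishes g^a mod p
B = pow(g, b, p)                         # Bob publishes g^b mod p

shared_alice = pow(B, a, p)              # (g^b)^a mod p
shared_bob = pow(A, b, p)                # (g^a)^b mod p
assert shared_alice == shared_bob        # both parties hold the same secret g^(ab) mod p
```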
---
paper_title: Quadratic Residuosity Problem.
paper_content:
An airfoil member for a land vehicle in particular for fixing to the roof of a cab of a tractor-trailer rig, in which the airfoil member is provided to reduce drag when positioned substantially horizontally for the case where either there is no trailer unit or the height of the trailer unit is lower than that of the cab of the tractor unit. The airfoil member is also movable between a position in which it acts as an air deflector and a position in which it acts as an airfoil. The airfoil member has a pair of side portions and a central portion having a much thinner cross-section in order to reduce its weight. Its height above the roof of the cab can also be adjusted so that it can be set to the best suited position either as an air deflector or an airfoil in order to reduce drag to a minimum.
---
paper_title: CRT-based Fully Homomorphic Encryption over the Integers ⋆
paper_content:
In 1978, Rivest, Adleman and Dertouzos introduced the basic concept of privacy homomorphism that allows computation on encrypted data without decryption. It was an interesting work whose idea precedes the recent development of fully homomorphic encryption, although the actual example schemes proposed in the paper are all susceptible to simple known-plaintext attacks. In this paper, we revisit one of their proposals, in particular the third scheme, which is based on the Chinese Remainder Theorem and is ring homomorphic. It is known that only a single pair of known plaintext/ciphertext is needed to break this scheme. However, by exploiting the standard technique of inserting an error into a message before encryption, we can cope with this problem. We present a secure modification of their proposal by showing that the proposed scheme is fully homomorphic and secure against chosen plaintext attacks under the approximate GCD assumption and the sparse subset sum assumption when the message space is restricted to Z_{2^k}. Interestingly, the proposed scheme can be regarded as a generalization of the DGHV scheme with a larger plaintext space. Our scheme has Õ(λ^5) ciphertext expansion overhead while the DGHV scheme has Õ(λ^8) for the security parameter λ. When restricted to a homomorphic encryption scheme with depth O(log λ), the overhead is reduced to Õ(λ). Our scheme can be used in applications requiring a large message space Z_Q for log Q = O(λ^4), or SIMD-style operations on Z_Q^k for log Q = O(λ), k = O(λ^3), with Õ(λ^5) ciphertext size as in the DGHV scheme.
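This proposal builds on the DGHV-style somewhat homomorphic encryption over the integers. A toy symmetric sketch of that base idea is given below with made-up small parameters; it does not reproduce the paper's CRT-based construction, only the noisy-integer encryption it generalizes.

```python
# Toy DGHV-style symmetric somewhat-homomorphic encryption over the integers.
# A bit m is encrypted as c = q*p + 2*r + m for a secret odd p, large random q and small noise r.
# Decryption is (c mod p) mod 2 and stays correct while the accumulated noise is below p/2.
import random

p = random.randrange(10**6, 10**7) | 1         # secret odd key (toy size)

def encrypt(bit):
    q = random.randrange(10**12, 10**13)       # large random multiplier
    r = random.randrange(-15, 16)              # small noise
    return q * p + 2 * r + bit

def decrypt(c):
    centered = c % p
    if centered > p // 2:                      # map the residue into (-p/2, p/2]
        centered -= p
    return centered % 2

a, b = 1, 1
ca, cb = encrypt(a), encrypt(b)
assert decrypt(ca) == a and decrypt(cb) == b
assert decrypt(ca + cb) == (a ^ b)             # ciphertext addition -> XOR of the bits
assert decrypt(ca * cb) == (a & b)             # ciphertext multiplication -> AND (noise grows fast)
```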
---
paper_title: Evaluating branching programs on encrypted data
paper_content:
We present a public-key encryption scheme with the following properties. Given a branching program P and an encryption c of an input x, it is possible to efficiently compute a succinct ciphertext c′ from which P(x) can be efficiently decoded using the secret key. The size of c′ depends polynomially on the size of x and the length of P, but does not further depend on the size of P. As interesting special cases, one can efficiently evaluate finite automata, decision trees, and OBDDs on encrypted data, where the size of the resulting ciphertext c′ does not depend on the size of the object being evaluated. These are the first general representation models for which such a feasibility result is shown. Our main construction generalizes the approach of Kushilevitz and Ostrovsky (FOCS 1997) for constructing single-server Private Information Retrieval protocols. We also show how to strengthen the above so that c′ does not contain additional information about P (other than P(x) for some x) even if the public key and the ciphertext c are maliciously formed. This yields a two-message secure protocol for evaluating a length-bounded branching program P held by a server on an input x held by a client. A distinctive feature of this protocol is that it hides the size of the server's input P from the client. In particular, the client's work is independent of the size of P.
---
paper_title: Subgroup membership problems and public key cryptosystems
paper_content:
Public key encryption was first proposed by Diffie and Hellman [16], and widely popularised with the RSA cryptosystem [37]. Over the years, the security goals of public key encryption have been studied [17, 22], as have adversary models [30, 36], and many public key cryptosystems have been proposed and analysed. It turns out that the security of many of those cryptosystems [16, 18, 22, 29, 34, 35] is based on a common class of mathematical problems, called subgroup membership problems. Cramer and Shoup [10] designed a chosen-ciphertext-secure cryptosystem based on a general subgroup membership problem (generalising their previous work [9]), and provided two new instances. Yamamura and Saito [41] defined a general subgroup membership problem, catalogued several known subgroup membership problems, and designed a private information retrieval system based on a subgroup membership problem. Nieto, Boyd and Dawson [31] designed a cryptosystem based on essentially a symmetric subgroup membership problem (see Section 4.4 and Section 6.1). Chapters 2 and 3 contain certain preliminary discussions necessary for the later work. In Chapter 4, we discuss subgroup membership problems, both abstractly and for concrete families. For all of the concrete examples, there is a related problem called the splitting problem. We discuss various elementary reductions, both abstract and for concrete families. In cryptographic applications, a third related problem, called the subgroup discrete logarithm problem, is also interesting, and we discuss this in some detail. We also discuss a variant of the subgroup membership problem where there are two subgroups that are simultaneously hard to distinguish. We prove a useful reduction (Theorem 4.11) for this case. The technique used in the proof is reused throughout the thesis. In Chapter 5, we discuss two homomorphic cryptosystems, based on trapdoor splitting problems. This gives us a uniform description of a number of homomorphic cryptosystems, and allows us to apply the theory and results of Chapter 4 to the security of those cryptosystems. Using the technique of Theorem 4.11, we develop a homomorphic cryptosystem that is not based on a trapdoor problem. This gives us a fairly efficient cryptosystem, with potentially useful properties. We also discuss the security of a homomorphic cryptosystem under a nonstandard assumption. While these results are very weak, they are stronger than results obtained in the generic model. In Chapter 6, we develop two key encapsulation methods. The first can be proven secure against passive attacks, using the same technique as in the proof of Theorem 4.11. The second method can be proven secure against active attacks in the random oracle model, but to do this, we need a certain non-standard assumption. Finally, in Chapter 7 we discuss a small extension to the framework developed by Cramer and Shoup [10], again by essentially reusing the technique used to prove Theorem 4.11. This gives us a cryptosystem that is secure against chosen ciphertext attacks, without recourse to the random oracle model or nonstandard assumptions. The cryptosystem is quite practical, and performs quite well compared to other variants of the Cramer-Shoup cryptosystem.
---
paper_title: Homomorphic Public-Key Cryptosystems and Encrypting Boolean Circuits
paper_content:
In this paper homomorphic cryptosystems are designed for the first time over any finite group. Applying Barrington's construction we produce for any boolean circuit of the logarithmic depth its encrypted simulation of a polynomial size over an appropriate finitely generated group.
---
paper_title: Cryptanalysis of Polly Cracker
paper_content:
An attack on the public key cryptosystem Polly Cracker is described that reveals the complete secret key σ ∈ F_q^n by means of n (nonadaptively) chosen "fake" ciphertexts.
---
paper_title: Cryptanalysis of a homomorphic public-key cryptosystem over a finite group
paper_content:
The paper cryptanalyses a public-key cryptosystem recently proposed by Grigoriev and Ponomarenko, which encrypts an element from a fixed finite group defined in terms of generators and relations to produce a ciphertext from SL(2, Z). The paper presents a heuristic method for recovering the secret key from the public key, and so this cryptosystem should not be used in practice.
---
paper_title: A Decade of Lattice Cryptography.
paper_content:
Lattice-based cryptography is the use of conjectured hard problems on point lattices in R^n as the foundation for secure cryptographic systems. Attractive features of lattice cryptography include apparent resistance to quantum attacks in contrast with most number-theoretic cryptography, high asymptotic efficiency and parallelism, security under worst-case intractability assumptions, and solutions to long-standing open problems in cryptography. This work surveys most of the major developments in lattice cryptography over the past ten years. The main focus is on the foundational short integer solution (SIS) and learning with errors (LWE) problems and their more efficient ring-based variants, their provable hardness assuming the worst-case intractability of standard lattice problems, and their many cryptographic applications.
---
paper_title: Lattice-based cryptography
paper_content:
We describe some of the recent progress on lattice-based cryptography, starting from the seminal work of Ajtai, and ending with some recent constructions of very efficient cryptographic schemes.
---
paper_title: NTRU: A Ring-Based Public Key Cryptosystem
paper_content:
We describe NTRU, a new public key cryptosystem. NTRU features reasonably short, easily created keys, high speed, and low memory requirements. NTRU encryption and decryption use a mixing system suggested by polynomial algebra combined with a clustering principle based on elementary probability theory. The security of the NTRU cryptosystem comes from the interaction of the polynomial mixing system with the independence of reduction modulo two relatively prime integers p and q.
---
paper_title: On-the-fly multiparty computation on the cloud via multikey fully homomorphic encryption
paper_content:
We propose a new notion of secure multiparty computation aided by a computationally-powerful but untrusted "cloud" server. In this notion that we call on-the-fly multiparty computation (MPC), the cloud can non-interactively perform arbitrary, dynamically chosen computations on data belonging to arbitrary sets of users chosen on-the-fly. All users' input data and intermediate results are protected from snooping by the cloud as well as other users. This extends the standard notion of fully homomorphic encryption (FHE), where users can only enlist the cloud's help in evaluating functions on their own encrypted data. In on-the-fly MPC, each user is involved only when initially uploading his (encrypted) data to the cloud, and in a final output decryption phase when outputs are revealed; the complexity of both is independent of the function being computed and the total number of users in the system. When users upload their data, they need not decide in advance which function will be computed, nor who they will compute with; they need only retroactively approve the eventually-chosen functions and on whose data the functions were evaluated. This notion is qualitatively the best possible in minimizing interaction, since the users' interaction in the decryption stage is inevitable: we show that removing it would imply generic program obfuscation and is thus impossible. Our contributions are two-fold: (1) We show how on-the-fly MPC can be achieved using a new type of encryption scheme that we call multikey FHE, which is capable of operating on inputs encrypted under multiple, unrelated keys. A ciphertext resulting from a multikey evaluation can be jointly decrypted using the secret keys of all the users involved in the computation. (2) We construct a multikey FHE scheme based on NTRU, a very efficient public-key encryption scheme proposed in the 1990s. It was previously not known how to make NTRU fully homomorphic even for a single party. We view the construction of (multikey) FHE from NTRU encryption as a main contribution of independent interest. Although the transformation to a fully homomorphic system deteriorates the efficiency of NTRU somewhat, we believe that this system is a leading candidate for a practical FHE scheme.
---
paper_title: Public-Key Cryptosystems from Lattice Reduction Problems
paper_content:
We present a new proposal for a trapdoor one-way function, from which we derive public-key encryption and digital signatures. The security of the new construction is based on the conjectured computational difficulty of lattice-reduction problems, providing a possible alternative to existing public-key encryption algorithms and digital signatures such as RSA and DSS.
---
paper_title: Revisiting fully homomorphic encryption schemes and their cryptographic primitives
paper_content:
Lattice-based cryptography plays an important role in modern cryptography. Apart from being a perfect alternative to classic public key cryptosystems, should quantum computers become available, lattice-based cryptography also enables many applications that conventional cryptosystems, such as the RSA encryption scheme, cannot deliver. One of the most significant aspects from this point of view is fully homomorphic encryption schemes. A fully homomorphic encryption scheme allows one to arbitrarily operate on encrypted messages without decrypting them. This notion was raised in 1978, and it remained a "holy grail" for cryptographers for 30 years until 2009, when Craig Gentry presented a framework to construct a fully homomorphic encryption using ideal lattices. Fully homomorphic encryption schemes, although they may be lacking in efficiency at this current stage, enable many important applications, such as secure cloud searching and verifiable outsourced computing. Nevertheless, just like other cryptosystems, and perhaps all other inventions at the initial stage, fully homomorphic encryption is young, prospective, and hence requires more research. In this thesis, we focus on the security of fully homomorphic encryption schemes. The security of all known fully homomorphic encryption schemes can be reduced to some lattice problems. Therefore, our main tool, not surprisingly, is the lattice. Previous work has shown that some of the fully homomorphic encryption schemes can be broken using lattice reduction algorithms. Indeed, there exist several lattice reduction algorithms, such as LLL and L, that run in polynomial time, that can break a homomorphic encryption scheme. However, the running time, even though it is a polynomial algorithm, is still beyond tolerance. Hence, our first step is to optimize those algorithms. In this thesis, we show three different improvements. To sum up, combining those techniques, we are able to accelerate the reduction from O(dβ + dβ) to O(dβ + dβ) when the algorithm is dedicated to those cryptosystems, where d is the dimension of the lattice, and β is the maximum bit-length of the norm of the input.
---
paper_title: Factoring polynomials with rational coefficients
paper_content:
In this paper we present a polynomial-time algorithm to solve the following problem: given a non-zero polynomial f ∈ Q[X] in one variable with rational coefficients, find the decomposition of f into irreducible factors in Q[X]. It is well known that this is equivalent to factoring primitive polynomials f ∈ Z[X] into irreducible factors in Z[X]. Here we call f ∈ Z[X] primitive if the greatest common divisor of its coefficients (the content of f) is 1. Our algorithm performs well in practice, cf. (8). Its running time, measured in bit operations, is O(n^12 + n^9 (log|f|)^3).
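To see the problem this algorithm solves, the snippet below factors a rational-coefficient polynomial into irreducible factors over Q using sympy; sympy is only used to illustrate the input and output of the problem, the polynomial is an assumed example, and sympy does not necessarily run the LLL-based algorithm internally.

```python
# The task addressed by the LLL paper: factor f in Q[X] into irreducible factors over Q.
from sympy import symbols, factor_list, Rational

x = symbols('x')
f = x**4 - Rational(1, 4)                # equals (x**2 - 1/2) * (x**2 + 1/2) over Q

constant, factors = factor_list(f)
print(constant)                           # overall rational constant
for g, multiplicity in factors:
    print(g, multiplicity)                # each irreducible factor over Q with its multiplicity
```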
---
paper_title: Fully homomorphic encryption with relatively small key and ciphertext sizes
paper_content:
We present a fully homomorphic encryption scheme which has both relatively small key and ciphertext size. Our construction follows that of Gentry by producing a fully homomorphic scheme from a “somewhat” homomorphic scheme. For the somewhat homomorphic scheme the public and private keys consist of two large integers (one of which is shared by both the public and private key) and the ciphertext consists of one large integer. As such, our scheme has smaller message expansion and key size than Gentry’s original scheme. In addition, our proposal allows efficient fully homomorphic encryption over any field of characteristic two.
---
paper_title: Experiments with the Plaintext Space in Gentry’S Somewhat Homomorphic Scheme
paper_content:
In this paper we propose an interesting improvement of the implementation of the original Gentry-Halevi somewhat homomorphic scheme. We suggest choosing a bigger plaintext space, by changing the underlying ideal from $I = (2)$ to $I = (p)$ for some bigger prime $p$. Our analysis shows that a bigger plaintext space will improve the homomorphic computation of the somewhat homomorphic scheme while only slightly increasing the complexity of the key generation procedure. The encryption and decryption functions have the same complexity. We also provide some experimental computations that support the analysis.
---
paper_title: Toward basing fully homomorphic encryption on worst-case hardness
paper_content:
Gentry proposed a fully homomorphic public key encryption scheme that uses ideal lattices. He based the security of his scheme on the hardness of two problems: an average-case decision problem over ideal lattices, and the sparse (or "low-weight") subset sum problem (SSSP). We provide a key generation algorithm for Gentry's scheme that generates ideal lattices according to a "nice" average-case distribution. Then, we prove a worst-case / average-case connection that bases Gentry's scheme (in part) on the quantum hardness of the shortest independent vector problem (SIVP) over ideal lattices in the worst-case. (We cannot remove the need to assume that the SSSP is hard.) Our worst-case / average-case connection is the first where the average-case lattice is an ideal lattice, which seems to be necessary to support the security of Gentry's scheme.
---
paper_title: An improvement of key generation algorithm for Gentry's homomorphic encryption scheme
paper_content:
One way of improving the efficiency of Gentry's fully homomorphic encryption is controlling the number of operations, but to our recollection no scheme that controls this bound has been proposed. In this paper, we propose a key generation algorithm for Gentry's homomorphic encryption scheme that controls the bound of the circuit depth by using the relation between the circuit depth and the eigenvalues of a basis of a lattice. We present experimental results that show that the proposed algorithm is practical. We discuss the security of the bases of the lattices generated by the algorithm for practical use.
---
paper_title: Improved key generation for Gentry’s fully homomorphic encryption scheme
paper_content:
A key problem with the original implementation of the Gentry Fully Homomorphic Encryption scheme was the slow key generation process. Gentry and Halevi provided a fast technique for 2-power cyclotomic fields. We present an extension of the Gentry-Halevi key generation technique for arbitrary cyclotomic fields. Our new method is roughly twice as efficient as the previous best methods. Our estimates are backed up with experimental data.
---
paper_title: Faster Fully Homomorphic Encryption
paper_content:
We describe two improvements to Gentry’s fully homomorphic scheme based on ideal lattices and its analysis: we provide a more aggressive analysis of one of the hardness assumptions (the one related to the Sparse Subset Sum Problem) and we introduce a probabilistic decryption algorithm that can be implemented with an algebraic circuit of low multiplicative degree. Combined together, these improvements lead to a faster fully homomorphic scheme, with an O(λ^3.5) bit complexity per elementary binary add/mult gate, where λ is the security parameter. These improvements also apply to the fully homomorphic schemes of Smart and Vercauteren [PKC’2010] and van Dijk et al. [Eurocrypt’2010].
---
paper_title: (Batch) Fully Homomorphic Encryption over Integers for Non-Binary Message Spaces
paper_content:
In this paper, we construct a fully homomorphic encryption (FHE) scheme over integers with the message space \(\mathbb {Z}_Q\) for any prime \(Q\). Even for the binary case \(Q=2\), our decryption circuit has a smaller degree than that of the previous scheme; the multiplicative degree is reduced from \(O(\lambda (\log \lambda )^2)\) to \(O(\lambda )\), where \(\lambda \) is the security parameter. We also extend our FHE scheme to a batch FHE scheme.
---
paper_title: Fully Homomorphic symmetric scheme without bootstrapping
paper_content:
The capability of operating over encrypted data makes Fully Homomorphic Encryption (FHE) the holy grail for secure data processing applications. Though many applications need only secret keys, FHE has not been achieved properly through symmetric cryptography. The major hurdle is the need to refresh noisy ciphertexts, which essentially requires a public key and bootstrapping. We introduce a refreshing procedure to make a somewhat homomorphic scheme fully homomorphic without requiring bootstrapping. Our scheme uses symmetric keys and has performance superior to existing public-key schemes.
---
paper_title: Scale-Invariant Fully Homomorphic Encryption over the Integers
paper_content:
At Crypto 2012, Brakerski constructed a scale-invariant fully homomorphic encryption scheme based on the LWE problem, in which the same modulus is used throughout the evaluation process, instead of a ladder of moduli when doing "modulus switching". In this paper we describe a variant of the van Dijk et al. FHE scheme over the integers with the same scale-invariant property. Our scheme has a single secret modulus whose size is linear in the multiplicative depth of the circuit to be homomorphically evaluated, instead of exponential; we therefore construct a leveled fully homomorphic encryption scheme. This scheme can be transformed into a pure fully homomorphic encryption scheme using bootstrapping, and its security is still based on the Approximate-GCD problem. We also describe an implementation of the homomorphic evaluation of the full AES encryption circuit, and obtain significantly improved performance compared to previous implementations: about 23 seconds resp. 3 minutes per AES block at the 72-bit resp. 80-bit security level on a mid-range workstation. Finally, we prove the equivalence between the error-free decisional Approximate-GCD problem introduced by Cheon et al. (Eurocrypt 2013) and the classical computational Approximate-GCD problem. This equivalence allows us to get rid of the additional noise in all the integer-based FHE schemes described so far, and therefore to simplify their security proof.
---
paper_title: Fully homomorphic encryption without modulus switching from classical GapSVP
paper_content:
We present a new tensoring technique for LWE-based fully homomorphic encryption. While in all previous works the ciphertext noise grows quadratically ($B \rightarrow B^2 \cdot \mathrm{poly}(n)$) with every multiplication before "refreshing", our noise only grows linearly ($B \rightarrow B \cdot \mathrm{poly}(n)$). We use this technique to construct a scale-invariant fully homomorphic encryption scheme, whose properties only depend on the ratio between the modulus q and the initial noise level B, and not on their absolute values. Our scheme has a number of advantages over previous candidates: it uses the same modulus throughout the evaluation process (no need for "modulus switching"), and this modulus can take arbitrary form. In addition, security can be classically reduced from the worst-case hardness of the GapSVP problem with quasi-polynomial approximation factor, whereas previous constructions could only exhibit a quantum reduction from GapSVP.
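A back-of-the-envelope Python illustration, with entirely made-up numbers for the initial noise, the poly(n) factor and the modulus size, of why linear rather than quadratic noise growth per multiplication supports far more levels before the noise overflows the modulus:

```python
# Rough comparison of noise growth per multiplication (illustrative numbers only):
# quadratic growth B -> B^2 * poly(n) versus scale-invariant linear growth B -> B * poly(n).
B = 8                  # initial noise magnitude (made up)
POLY = 2 ** 10         # stand-in for the poly(n) factor (made up)
Q_BITS = 1024          # bit-size of the ciphertext modulus (made up)

def supported_levels(growth):
    noise, depth = B, 0
    while growth(noise).bit_length() <= Q_BITS:   # the next multiplication still fits under q
        noise = growth(noise)
        depth += 1
    return depth

print(supported_levels(lambda x: x * x * POLY))   # quadratic growth: only a handful of levels
print(supported_levels(lambda x: x * POLY))       # linear growth: far more levels for the same modulus
```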
---
paper_title: Efficient Fully Homomorphic Encryption from (Standard) LWE
paper_content:
We present a fully homomorphic encryption scheme that is based solely on the (standard) learning with errors (LWE) assumption. Applying known results on LWE, the security of our scheme is based on the worst-case hardness of "short vector problems" on arbitrary lattices. Our construction improves on previous works in two aspects: (1) We show that "somewhat homomorphic" encryption can be based on LWE, using a new re-linearization technique. In contrast, all previous schemes relied on complexity assumptions related to ideals in various rings. (2) We deviate from the "squashing paradigm" used in all previous works. We introduce a new dimension-modulus reduction technique, which shortens the ciphertexts and reduces the decryption complexity of our scheme, without introducing additional assumptions. Our scheme has very short ciphertexts and we therefore use it to construct an asymptotically efficient LWE-based single-server private information retrieval (PIR) protocol. The communication complexity of our protocol (in the public-key model) is $k \cdot \mathrm{polylog}(k) + \log |DB|$ bits per single-bit query (here, $k$ is a security parameter and $|DB|$ is the database size).
---
paper_title: Lattice-based FHE as secure as PKE
paper_content:
We show that (leveled) fully homomorphic encryption (FHE) can be based on the hardness of O(n^{1.5+ε})-approximation for lattice problems (such as GapSVP) under quantum reductions for any ε > 0 (or O(n^{2+ε})-approximation under classical reductions). This matches the best known hardness for "regular" (non-homomorphic) lattice-based public-key encryption up to the ε factor. A number of previous methods had hit a roadblock at quasipolynomial approximation. (As usual, a circular security assumption can be used to achieve a non-leveled FHE scheme.) Our approach consists of three main ideas: noise-bounded sequential evaluation of high fan-in operations; circuit sequentialization using Barrington's Theorem; and finally, successive dimension-modulus reduction.
---
paper_title: A Decade of Lattice Cryptography.
paper_content:
Lattice-based cryptography is the use of conjectured hard problems on point lattices in Rn as the foundation for secure cryptographic systems. Attractive features of lattice cryptography include apparent resistance to quantum attacks in contrast with most number-theoretic cryptography, high asymptotic efficiency and parallelism, security under worst-case intractability assumptions, and solutions to long-standing open problems in cryptography. This work surveys most of the major developments in lattice cryptography over the past ten years. The main focus is on the foundational short integer solution SIS and learning with errors LWE problems and their more efficient ring-based variants, their provable hardness assuming the worst-case intractability of standard lattice problems, and their many cryptographic applications.
---
paper_title: On lattices, learning with errors, random linear codes, and cryptography
paper_content:
Our main result is a reduction from worst-case lattice problems such as SVP and SIVP to a certain learning problem. This learning problem is a natural extension of the 'learning from parity with error' problem to higher moduli. It can also be viewed as the problem of decoding from a random linear code. This, we believe, gives a strong indication that these problems are hard. Our reduction, however, is quantum. Hence, an efficient solution to the learning problem implies a quantum algorithm for SVP and SIVP. A main open question is whether this reduction can be made classical. Using the main result, we obtain a public-key cryptosystem whose hardness is based on the worst-case quantum hardness of SVP and SIVP. Previous lattice-based public-key cryptosystems such as the one by Ajtai and Dwork were only based on unique-SVP, a special case of SVP. The new cryptosystem is much more efficient than previous cryptosystems: the public key is of size O(n^2) and encrypting a message increases its size by O(n) (in previous cryptosystems these values are O(n^4) and O(n^2), respectively). In fact, under the assumption that all parties share a random bit string of length O(n^2), the size of the public key can be reduced to O(n).
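As a concrete illustration of the learning-with-errors encryption paradigm described above, the following toy Python sketch (our own, with parameters far too small to be secure) encrypts a single bit as b = <a,s> + e + m*floor(q/2) and decrypts by rounding:

    # Toy illustration of the LWE-based symmetric encryption idea underlying
    # Regev-style schemes (illustrative parameters only -- far too small to be secure).
    import random

    n, q = 16, 3329          # dimension and modulus (toy values)
    B = 2                    # noise bound

    def keygen():
        return [random.randrange(q) for _ in range(n)]

    def encrypt_bit(s, m):
        a = [random.randrange(q) for _ in range(n)]
        e = random.randint(-B, B)
        b = (sum(ai * si for ai, si in zip(a, s)) + e + m * (q // 2)) % q
        return a, b

    def decrypt_bit(s, ct):
        a, b = ct
        # Recover m*(q/2) + e, then round to the nearest multiple of q/2.
        v = (b - sum(ai * si for ai, si in zip(a, s))) % q
        return 1 if q // 4 <= v < 3 * q // 4 else 0

    s = keygen()
    assert all(decrypt_bit(s, encrypt_bit(s, m)) == m for m in (0, 1) for _ in range(100))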
---
paper_title: On ideal lattices and learning with errors over rings
paper_content:
The “learning with errors” (LWE) problem is to distinguish random linear equations, which have been perturbed by a small amount of noise, from truly uniform ones. The problem has been shown to be as hard as worst-case lattice problems, and in recent years it has served as the foundation for a plethora of cryptographic applications. Unfortunately, these applications are rather inefficient due to an inherent quadratic overhead in the use of LWE. A main open question was whether LWE and its applications could be made truly efficient by exploiting extra algebraic structure, as was done for lattice-based hash functions (and related primitives). We resolve this question in the affirmative by introducing an algebraic variant of LWE called ring-LWE, and proving that it too enjoys very strong hardness guarantees. Specifically, we show that the ring-LWE distribution is pseudorandom, assuming that worst-case problems on ideal lattices are hard for polynomial-time quantum algorithms. Applications include the first truly practical lattice-based public-key cryptosystem with an efficient security reduction; moreover, many of the other applications of LWE can be made much more efficient through the use of ring-LWE. Finally, the algebraic structure of ring-LWE might lead to new cryptographic applications previously not known to be based on LWE.
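The extra algebraic structure can be pictured with a short Python sketch (ours, toy parameters only) that generates a ring-LWE sample b = a*s + e in R_q = Z_q[x]/(x^n + 1) using negacyclic convolution:

    # Minimal sketch of ring-LWE sample generation in R_q = Z_q[x]/(x^n + 1)
    # (toy parameters; a real instantiation needs much larger n and careful sampling).
    import random

    n, q = 8, 257

    def mul_negacyclic(f, g):
        # Multiply two polynomials modulo x^n + 1 and q (using x^n = -1).
        res = [0] * n
        for i, fi in enumerate(f):
            for j, gj in enumerate(g):
                k = i + j
                if k < n:
                    res[k] = (res[k] + fi * gj) % q
                else:
                    res[k - n] = (res[k - n] - fi * gj) % q
        return res

    def small():                      # small "error/secret" polynomial
        return [random.randint(-1, 1) % q for _ in range(n)]

    s = small()
    a = [random.randrange(q) for _ in range(n)]                        # uniform ring element
    b = [(x + e) % q for x, e in zip(mul_negacyclic(a, s), small())]   # b = a*s + e
    # A ring-LWE sample is the pair (a, b); the decision problem asks to
    # distinguish such pairs from uniformly random ones.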
---
paper_title: LWE-Based FHE with Better Parameters
paper_content:
Fully homomorphic encryption allows a remote server to perform computable functions over encrypted data without decrypting it first. We propose a public-key fully homomorphic encryption scheme with better parameters from the learning with errors problem. It is a public-key variant of Alperin-Sheriff and Peikert's symmetric-key fully homomorphic encryption (CRYPTO 2014), a simplified symmetric version of the Gentry, Sahai and Waters scheme (CRYPTO 2013), and is based on a minor variant of Regev's public-key encryption scheme (STOC 2005) which may be of independent interest. Meanwhile, it not only homomorphically handles "NAND" circuits, but also conforms to the traditional wisdom that the circuit evaluation procedure should be a naive combination of homomorphic additions and multiplications.
---
paper_title: Optimizations of Brakerski's fully homomorphic encryption scheme
paper_content:
In this paper, we propose two methods to improve the efficiency of Brakerski's fully homomorphic encryption scheme. Our main optimization is a new way to compute the tensor product of vectors, which significantly reduces the computational overhead of the key-switching algorithm; with this method, the basic homomorphic operation is about 40%-50% faster than before. The second optimization truncates the least significant bits from every element of the ciphertext vector, so that the ciphertext vector can be represented more concisely, which further improves efficiency.
---
paper_title: Multi-identity and Multi-key Leveled FHE from Learning with Errors
paper_content:
Gentry, Sahai and Waters recently presented the first (leveled) identity-based fully homomorphic (IBFHE) encryption scheme (CRYPTO 2013). Their scheme however only works in the single-identity setting; that is, homomorphic evaluation can only be performed on ciphertexts created with the same identity. In this work, we extend their results to the multi-identity setting and obtain a multi-identity IBFHE scheme that is selectively secure in the random oracle model under the hardness of Learning with Errors (LWE). We also obtain a multi-key fully-homomorphic encryption (FHE) scheme that is secure under LWE in the standard model. This is the first multi-key FHE based on a well-established assumption such as standard LWE. The multi-key FHE of Lopez-Alt, Tromer and Vaikuntanathan (STOC 2012) relied on a non-standard assumption, referred to as the Decisional Small Polynomial Ratio assumption.
---
paper_title: Homomorphic Encryption from Learning with Errors: Conceptually-Simpler, Asymptotically-Faster, Attribute-Based
paper_content:
We describe a comparatively simple fully homomorphic encryption (FHE) scheme based on the learning with errors (LWE) problem. In previous LWE-based FHE schemes, multiplication is a complicated and expensive step involving “relinearization”. In this work, we propose a new technique for building FHE schemes that we call the approximate eigenvector method. In our scheme, for the most part, homomorphic addition and multiplication are just matrix addition and multiplication. This makes our scheme both asymptotically faster and (we believe) easier to understand.
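In slightly more detail (a simplified sketch of the approximate eigenvector view, omitting the gadget/flattening machinery): a ciphertext is a matrix $C$ satisfying $C \cdot v = \mu\,v + e$ for a secret vector $v$ and small error $e$, so $v$ is an approximate eigenvector with the message $\mu$ as eigenvalue. Then $(C_1 + C_2)\,v = (\mu_1 + \mu_2)\,v + (e_1 + e_2)$ and $C_1 C_2\, v = C_1(\mu_2 v + e_2) = \mu_1 \mu_2\, v + \mu_2 e_1 + C_1 e_2$; keeping the entries of $C_1$ small (via bit decomposition) is what keeps the extra term $C_1 e_2$ small.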
---
paper_title: Accelerating NTRU based homomorphic encryption using GPUs
paper_content:
We introduce a large polynomial arithmetic library optimized for Nvidia GPUs to support fully homomorphic encryption schemes. To realize the large polynomial arithmetic library we convert polynomials with large coefficients using the Chinese Remainder Theorem into many polynomials with small coefficients, and then carry out modular multiplications in the residue space using a custom developed discrete Fourier transform library. We further extend the library to support the homomorphic evaluation operations, i.e. addition, multiplication, and relinearization, in an NTRU based somewhat homomorphic encryption library. Finally, we put the library to use to evaluate homomorphic evaluation of two block ciphers: Prince and AES, which show 2.57 times and 7.6 times speedup, respectively, over an Intel Xeon software implementation.
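The coefficient-splitting step can be pictured with the following Python sketch (ours; the prime moduli are illustrative, and the per-residue multiplications are the part a GPU would execute in parallel):

    # Sketch of the CRT trick used to map polynomials with large coefficients onto
    # several polynomials with small coefficients, multiply residue-wise, and
    # recombine (naive CPU version for illustration only).
    from functools import reduce

    primes = [1000000007, 1000000009, 1000000021, 1000000033]   # pairwise coprime moduli
    M = reduce(lambda a, b: a * b, primes)                      # combined modulus

    def to_residues(poly):
        # One small-coefficient polynomial per prime.
        return [[c % p for c in poly] for p in primes]

    def polymul_mod(f, g, p):
        res = [0] * (len(f) + len(g) - 1)
        for i, fi in enumerate(f):
            for j, gj in enumerate(g):
                res[i + j] = (res[i + j] + fi * gj) % p
        return res

    def from_residues(residue_polys):
        # Coefficient-wise CRT recombination (pow(x, -1, p) needs Python 3.8+).
        out = []
        for coeffs in zip(*residue_polys):
            x = 0
            for c, p in zip(coeffs, primes):
                Mi = M // p
                x = (x + c * Mi * pow(Mi, -1, p)) % M
            out.append(x)
        return out

    f = [123456789123456789, 987654321987654321]
    g = [42, 1000000000000]
    prod = from_residues([polymul_mod(a, b, p)
                          for a, b, p in zip(to_residues(f), to_residues(g), primes)])
    # 'prod' holds the true product coefficients as long as they stay below M.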
---
paper_title: Fully homomorphic encryption without modulus switching from classical GapSVP
paper_content:
We present a new tensoring technique for LWE-based fully homomorphic encryption. While in all previous works the ciphertext noise grows quadratically ($B \rightarrow B^2 \cdot \mathrm{poly}(n)$) with every multiplication before "refreshing", our noise only grows linearly ($B \rightarrow B \cdot \mathrm{poly}(n)$). We use this technique to construct a scale-invariant fully homomorphic encryption scheme, whose properties only depend on the ratio between the modulus q and the initial noise level B, and not on their absolute values. Our scheme has a number of advantages over previous candidates: it uses the same modulus throughout the evaluation process (no need for "modulus switching"), and this modulus can take arbitrary form. In addition, security can be classically reduced from the worst-case hardness of the GapSVP problem with quasi-polynomial approximation factor, whereas previous constructions could only exhibit a quantum reduction from GapSVP.
---
paper_title: Efficient Fully Homomorphic Encryption from (Standard) LWE
paper_content:
We present a fully homomorphic encryption scheme that is based solely on the (standard) learning with errors (LWE) assumption. Applying known results on LWE, the security of our scheme is based on the worst-case hardness of "short vector problems" on arbitrary lattices. Our construction improves on previous works in two aspects: (1) we show that "somewhat homomorphic" encryption can be based on LWE, using a new re-linearization technique; in contrast, all previous schemes relied on complexity assumptions related to ideals in various rings. (2) We deviate from the "squashing paradigm" used in all previous works and introduce a new dimension-modulus reduction technique, which shortens the ciphertexts and reduces the decryption complexity of our scheme without introducing additional assumptions. Our scheme has very short ciphertexts, and we therefore use it to construct an asymptotically efficient LWE-based single-server private information retrieval (PIR) protocol. The communication complexity of our protocol (in the public-key model) is $k \cdot \mathrm{polylog}(k) + \log |DB|$ bits per single-bit query, where $k$ is a security parameter and $|DB|$ is the database size.
---
paper_title: Efficient architecture and implementation for NTRUEncrypt system
paper_content:
NTRU has gained much attention recently because it is relatively efficient for practical implementation among the post-quantum public key cryptosystems. In this paper, an efficient hardware architecture and FPGA implementation of NTRUEncrypt is proposed. The new architecture takes advantage of linear feedback shift register (LFSR) structure for its compact circuitry and high speed. A novel design of the modular arithmetic unit is proposed to reduce the critical path delay. The FPGA implementation results have shown that the proposed design outperforms all the existing works in terms of area-delay product.
---
paper_title: Practical Bootstrapping in Quasilinear Time
paper_content:
Gentry’s “bootstrapping” technique (STOC 2009) constructs a fully homomorphic encryption (FHE) scheme from a “somewhat homomorphic” one that is powerful enough to evaluate its own decryption function. To date, it remains the only known way of obtaining unbounded FHE. Unfortunately, bootstrapping is computationally very expensive, despite the great deal of effort that has been spent on improving its efficiency. The current state of the art, due to Gentry, Halevi, and Smart (PKC 2012), is able to bootstrap “packed” ciphertexts (which encrypt up to a linear number of bits) in time only quasilinear $\tilde{O}(\lambda) = \lambda \cdot \log^{O(1)} \lambda$ in the security parameter. While this performance is asymptotically optimal up to logarithmic factors, the practical import is less clear: the procedure composes multiple layers of expensive and complex operations, to the point where it appears very difficult to implement, and its concrete runtime appears worse than those of prior methods (all of which have quadratic or larger asymptotic runtimes).
---
paper_title: NTRU: A Ring-Based Public Key Cryptosystem
paper_content:
We describe NTRU, a new public key cryptosystem. NTRU features reasonably short, easily created keys, high speed, and low memory requirements. NTRU encryption and decryption use a mixing system suggested by polynomial algebra combined with a clustering principle based on elementary probability theory. The security of the NTRU cryptosystem comes from the interaction of the polynomial mixing system with the independence of reduction modulo two relatively prime integers p and q.
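For reference, the basic operations can be summarised as follows (a textbook sketch, ignoring parameter constraints and padding): work in $R = \mathbb{Z}[x]/(x^N - 1)$ with small secret polynomials $f, g$ and public key $h \equiv g \cdot f_q \pmod q$, where $f_q$ is the inverse of $f$ modulo $q$. Encryption of a small message polynomial $m$ with a random small $r$ is $e \equiv p\,r\,h + m \pmod q$. Decryption computes $a \equiv f \cdot e \pmod q$ with coefficients centred in $(-q/2, q/2]$, which for suitable parameters equals $p\,r\,g + f\,m$ exactly, and then recovers $m \equiv f_p \cdot a \pmod p$, where $f_p$ is the inverse of $f$ modulo $p$.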
---
paper_title: On-the-fly multiparty computation on the cloud via multikey fully homomorphic encryption
paper_content:
We propose a new notion of secure multiparty computation aided by a computationally-powerful but untrusted "cloud" server. In this notion that we call on-the-fly multiparty computation (MPC), the cloud can non-interactively perform arbitrary, dynamically chosen computations on data belonging to arbitrary sets of users chosen on-the-fly. All user's input data and intermediate results are protected from snooping by the cloud as well as other users. This extends the standard notion of fully homomorphic encryption (FHE), where users can only enlist the cloud's help in evaluating functions on their own encrypted data. In on-the-fly MPC, each user is involved only when initially uploading his (encrypted) data to the cloud, and in a final output decryption phase when outputs are revealed; the complexity of both is independent of the function being computed and the total number of users in the system. When users upload their data, they need not decide in advance which function will be computed, nor who they will compute with; they need only retroactively approve the eventually-chosen functions and on whose data the functions were evaluated. This notion is qualitatively the best possible in minimizing interaction, since the users' interaction in the decryption stage is inevitable: we show that removing it would imply generic program obfuscation and is thus impossible. Our contributions are two-fold:- We show how on-the-fly MPC can be achieved using a new type of encryption scheme that we call multikey FHE, which is capable of operating on inputs encrypted under multiple, unrelated keys. A ciphertext resulting from a multikey evaluation can be jointly decrypted using the secret keys of all the users involved in the computation. - We construct a multikey FHE scheme based on NTRU, a very efficient public-key encryption scheme proposed in the 1990s. It was previously not known how to make NTRU fully homomorphic even for a single party. We view the construction of (multikey) FHE from NTRU encryption as a main contribution of independent interest. Although the transformation to a fully homomorphic system deteriorates the efficiency of NTRU somewhat, we believe that this system is a leading candidate for a practical FHE scheme.
---
paper_title: Making NTRU as secure as worst-case problems over ideal lattices
paper_content:
NTRUEncrypt, proposed in 1996 by Hoffstein, Pipher and Silverman, is the fastest known lattice-based encryption scheme. Its moderate key-sizes, excellent asymptotic performance and conjectured resistance to quantum computers could make it a desirable alternative to factorisation and discrete-log based encryption schemes. However, since its introduction, doubts have regularly arisen on its security. In the present work, we show how to modify NTRUEncrypt to make it provably secure in the standard model, under the assumed quantum hardness of standard worst-case lattice problems, restricted to a family of lattices related to some cyclotomic fields. Our main contribution is to show that if the secret key polynomials are selected by rejection from discrete Gaussians, then the public key, which is their ratio, is statistically indistinguishable from uniform over its domain. The security then follows from the already proven hardness of the R-LWE problem.
---
paper_title: Improved Security for a Ring-Based Fully Homomorphic Encryption Scheme
paper_content:
In 1996, Hoffstein, Pipher and Silverman introduced an efficient lattice based encryption scheme dubbed NTRUEncrypt . Unfortunately, this scheme lacks a proof of security. However, in 2011, Stehle and Steinfeld showed how to modify NTRUEncrypt to reduce security to standard problems in ideal lattices. In 2012, Lopez-Alt, Tromer and Vaikuntanathan proposed a fully homomorphic scheme based on this modified system. However, to allow homomorphic operations and prove security, a non-standard assumption is required. In this paper, we show how to remove this non-standard assumption via techniques introduced by Brakerski and construct a new fully homomorphic encryption scheme from the Stehle and Steinfeld version based on standard lattice assumptions and a circular security assumption. The scheme is scale-invariant and therefore avoids modulus switching and the size of ciphertexts is one ring element. Moreover, we present a practical variant of our scheme, which is secure under stronger assumptions, along with parameter recommendations and promising implementation results. Finally, we present an approach for encrypting larger input sizes by extending ciphertexts to several ring elements via the CRT on the message space.
---
paper_title: A Subfield Lattice Attack on Overstretched NTRU Assumptions
paper_content:
The subfield attack exploits the presence of a subfield to solve overstretched versions of the NTRU assumption: norming the public key h down to a subfield may lead to an easier lattice problem, and any sufficiently good solution may be lifted to a short vector in the full NTRU lattice. This approach was originally sketched in a paper of Gentry and Szydlo at Eurocrypt '02 and there also attributed to Jonsson, Nguyen and Stern. However, because it does not apply for small moduli and hence NTRUEncrypt, it seems to have been forgotten. In this work, we resurrect this approach, fill some gaps, analyze and generalize it to any subfields and apply it to more recent schemes. We show that for significantly larger moduli -- a case we call overstretched -- the subfield attack is applicable and asymptotically outperforms other known attacks. This directly affects the asymptotic security of the bootstrappable homomorphic encryption schemes LTV and YASHE, which rely on a mildly overstretched NTRU assumption: the subfield lattice attack runs in sub-exponential time $2^{O(\lambda/\log^{1/3}\lambda)}$, invalidating the security claim of $2^{\Theta(\lambda)}$. The effect is more dramatic on GGH-like Multilinear Maps: this attack can run in polynomial time without encodings of zero nor the zero-testing parameter, yet requires an additional quantum step to recover the secret parameters exactly. We also report on practical experiments. Running LLL in dimension 512 we obtain vectors that would have otherwise required running BKZ with block-size 130 in dimension 8192. Finally, we discuss concrete aspects of this attack and the condition on the modulus q that guarantees full immunity, discuss countermeasures, and propose open questions.
---
paper_title: Homomorphic Encryption from Learning with Errors: Conceptually-Simpler, Asymptotically-Faster, Attribute-Based
paper_content:
We describe a comparatively simple fully homomorphic encryption (FHE) scheme based on the learning with errors (LWE) problem. In previous LWE-based FHE schemes, multiplication is a complicated and expensive step involving “relinearization”. In this work, we propose a new technique for building FHE schemes that we call the approximate eigenvector method. In our scheme, for the most part, homomorphic addition and multiplication are just matrix addition and multiplication. This makes our scheme both asymptotically faster and (we believe) easier to understand.
---
paper_title: Fully homomorphic encryption with relatively small key and ciphertext sizes
paper_content:
We present a fully homomorphic encryption scheme which has both relatively small key and ciphertext size. Our construction follows that of Gentry by producing a fully homomorphic scheme from a “somewhat” homomorphic scheme. For the somewhat homomorphic scheme the public and private keys consist of two large integers (one of which is shared by both the public and private key) and the ciphertext consists of one large integer. As such, our scheme has smaller message expansion and key size than Gentry’s original scheme. In addition, our proposal allows efficient fully homomorphic encryption over any field of characteristic two.
---
paper_title: Faster Algorithms for Approximate Common Divisors: Breaking Fully-Homomorphic-Encryption Challenges over the Integers
paper_content:
At EUROCRYPT '10, van Dijk et al. presented simple fully- homomorphic encryption (FHE) schemes based on the hardness of approximate integer common divisors problems, which were introduced in 2001 by Howgrave-Graham. There are two versions for these problems: the partial version (PACD) and the general version (GACD). The seemingly easier problem PACD was recently used by Coron et al. at CRYPTO '11 to build a more efficient variant of the FHE scheme by van Dijk et al.. We present a new PACD algorithm whose running time is essentially the "square root" of that of exhaustive search, which was the best attack in practice. This allows us to experimentally break the FHE challenges proposed by Coron et al. Our PACD algorithm directly gives rise to a new GACD algorithm, which is exponentially faster than exhaustive search. Interestingly, our main technique can also be applied to other settings, such as noisy factoring and attacking low-exponent RSA.
---
paper_title: Fully Homomorphic Encryption over the Integers with Shorter Public Keys
paper_content:
At Eurocrypt 2010 van Dijk et al. described a fully homomorphic encryption scheme over the integers. The main appeal of this scheme (compared to Gentry's) is its conceptual simplicity. This simplicity comes at the expense of a public key size in O(λ^10), which is too large for any practical system. In this paper we reduce the public key size to O(λ^7) by encrypting with a quadratic form in the public key elements, instead of a linear form. We prove that the scheme remains semantically secure, based on a stronger variant of the approximate-GCD problem, already considered by van Dijk et al. We also describe the first implementation of the resulting fully homomorphic scheme. Borrowing some optimizations from the recent Gentry-Halevi implementation of Gentry's scheme, we obtain roughly the same level of efficiency. This shows that fully homomorphic encryption can be implemented using simple arithmetic operations.
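The symmetric-key core that this line of work builds on can be sketched in a few lines of Python (a toy illustration of ours with insecure parameters; the public-key variants discussed above additionally publish many encryptions of zero and encrypt by combining a subset of them -- here, a quadratic form):

    # Toy sketch of the symmetric-key core of the DGHV-style "FHE over the integers"
    # construction (parameters are illustrative and give no security).
    import random

    p = random.getrandbits(256) | 1          # odd secret modulus

    def encrypt(m):                          # m in {0, 1}
        q = random.getrandbits(512)          # large random multiplier
        r = random.randint(-2**32, 2**32)    # small noise
        return q * p + 2 * r + m

    def decrypt(c):
        return (c % p if c % p < p // 2 else c % p - p) % 2

    c0, c1 = encrypt(0), encrypt(1)
    assert decrypt(c0 + c1) == 1             # homomorphic XOR
    assert decrypt(c0 * c1) == 0             # homomorphic AND
    # The noise term 2r grows additively under addition and multiplicatively under
    # multiplication; once it exceeds ~p/2, decryption fails -- hence "somewhat" homomorphic.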
---
paper_title: A Comparison of the Homomorphic Encryption Schemes FV and YASHE
paper_content:
We conduct a theoretical and practical comparison of two Ring-LWE-based, scale-invariant, leveled homomorphic encryption schemes – Fan and Vercauteren’s adaptation of BGV and the YASHE scheme proposed by Bos, Lauter, Loftus and Naehrig. In particular, we explain how to choose parameters to ensure correctness and security against lattice attacks. Our parameter selection improves the approach of van de Pol and Smart to choose parameters for schemes based on the Ring-LWE problem by using the BKZ-2.0 simulation algorithm.
---
paper_title: Improved Security for a Ring-Based Fully Homomorphic Encryption Scheme
paper_content:
In 1996, Hoffstein, Pipher and Silverman introduced an efficient lattice based encryption scheme dubbed NTRUEncrypt . Unfortunately, this scheme lacks a proof of security. However, in 2011, Stehle and Steinfeld showed how to modify NTRUEncrypt to reduce security to standard problems in ideal lattices. In 2012, Lopez-Alt, Tromer and Vaikuntanathan proposed a fully homomorphic scheme based on this modified system. However, to allow homomorphic operations and prove security, a non-standard assumption is required. In this paper, we show how to remove this non-standard assumption via techniques introduced by Brakerski and construct a new fully homomorphic encryption scheme from the Stehle and Steinfeld version based on standard lattice assumptions and a circular security assumption. The scheme is scale-invariant and therefore avoids modulus switching and the size of ciphertexts is one ring element. Moreover, we present a practical variant of our scheme, which is secure under stronger assumptions, along with parameter recommendations and promising implementation results. Finally, we present an approach for encrypting larger input sizes by extending ciphertexts to several ring elements via the CRT on the message space.
---
paper_title: Can homomorphic encryption be practical?
paper_content:
The prospect of outsourcing an increasing amount of data storage and management to cloud services raises many new privacy concerns for individuals and businesses alike. The privacy concerns can be satisfactorily addressed if users encrypt the data they send to the cloud. If the encryption scheme is homomorphic, the cloud can still perform meaningful computations on the data, even though it is encrypted. In fact, we now know a number of constructions of fully homomorphic encryption schemes that allow arbitrary computation on encrypted data. In the last two years, solutions for fully homomorphic encryption have been proposed and improved upon, but it is hard to ignore the elephant in the room, namely efficiency -- can homomorphic encryption ever be efficient enough to be practical? Certainly, it seems that all known fully homomorphic encryption schemes have a long way to go before they can be used in practice. Given this state of affairs, our contribution is two-fold. First, we exhibit a number of real-world applications, in the medical, financial, and the advertising domains, which require only that the encryption scheme is "somewhat" homomorphic. Somewhat homomorphic encryption schemes, which support a limited number of homomorphic operations, can be much faster, and more compact than fully homomorphic encryption schemes. Secondly, we show a proof-of-concept implementation of the recent somewhat homomorphic encryption scheme of Brakerski and Vaikuntanathan, whose security relies on the "ring learning with errors" (Ring LWE) problem. The scheme is very efficient, and has reasonably short ciphertexts. Our unoptimized implementation in magma enjoys comparable efficiency to even optimized pairing-based schemes with the same level of security and homomorphic capacity. We also show a number of application-specific optimizations to the encryption scheme, most notably the ability to convert between different message encodings in a ciphertext.
---
paper_title: A Comparison of the Homomorphic Encryption Schemes FV and YASHE
paper_content:
We conduct a theoretical and practical comparison of two Ring-LWE-based, scale-invariant, leveled homomorphic encryption schemes – Fan and Vercauteren’s adaptation of BGV and the YASHE scheme proposed by Bos, Lauter, Loftus and Naehrig. In particular, we explain how to choose parameters to ensure correctness and security against lattice attacks. Our parameter selection improves the approach of van de Pol and Smart to choose parameters for schemes based on the Ring-LWE problem by using the BKZ-2.0 simulation algorithm.
---
paper_title: Ciphers for MPC and FHE
paper_content:
Designing an efficient cipher was always a delicate balance between linear and non-linear operations. This goes back to the design of DES, and in fact all the way back to the seminal work of Shannon.
---
paper_title: On the Homomorphic Computation of Symmetric Cryptographic Primitives
paper_content:
We present an analysis of the homomorphic computability of different symmetric cryptographic primitives, with the goal of understanding their characteristics with respect to homomorphic evaluation under the BGV scheme. Specifically, we start from the framework presented by Gentry, Halevi and Smart for evaluating AES. We provide an improvement of it, then we perform a detailed evaluation of the homomorphic computation of cryptographic algorithms from different families (the Salsa20 stream cipher, the SHA-256 hash function and the Keccak sponge function). After the analysis, we report the performance results of the primitives we have implemented using the recently released HElib. In the conclusions we discuss our findings for the different primitives we have analyzed, to draw a general conclusion on the homomorphic evaluation of symmetric cryptographic primitives.
---
paper_title: Can homomorphic encryption be practical?
paper_content:
The prospect of outsourcing an increasing amount of data storage and management to cloud services raises many new privacy concerns for individuals and businesses alike. The privacy concerns can be satisfactorily addressed if users encrypt the data they send to the cloud. If the encryption scheme is homomorphic, the cloud can still perform meaningful computations on the data, even though it is encrypted. In fact, we now know a number of constructions of fully homomorphic encryption schemes that allow arbitrary computation on encrypted data. In the last two years, solutions for fully homomorphic encryption have been proposed and improved upon, but it is hard to ignore the elephant in the room, namely efficiency -- can homomorphic encryption ever be efficient enough to be practical? Certainly, it seems that all known fully homomorphic encryption schemes have a long way to go before they can be used in practice. Given this state of affairs, our contribution is two-fold. First, we exhibit a number of real-world applications, in the medical, financial, and the advertising domains, which require only that the encryption scheme is "somewhat" homomorphic. Somewhat homomorphic encryption schemes, which support a limited number of homomorphic operations, can be much faster, and more compact than fully homomorphic encryption schemes. Secondly, we show a proof-of-concept implementation of the recent somewhat homomorphic encryption scheme of Brakerski and Vaikuntanathan, whose security relies on the "ring learning with errors" (Ring LWE) problem. The scheme is very efficient, and has reasonably short ciphertexts. Our unoptimized implementation in magma enjoys comparable efficiency to even optimized pairing-based schemes with the same level of security and homomorphic capacity. We also show a number of application-specific optimizations to the encryption scheme, most notably the ability to convert between different message encodings in a ciphertext.
---
paper_title: Scale-Invariant Fully Homomorphic Encryption over the Integers
paper_content:
At Crypto 2012, Brakerski constructed a scale-invariant fully homomorphic encryption scheme based on the LWE problem, in which the same modulus is used throughout the evaluation process, instead of a ladder of moduli when doing "modulus switching". In this paper we describe a variant of the van Dijk et al. FHE scheme over the integers with the same scale-invariant property. Our scheme has a single secret modulus whose size is linear in the multiplicative depth of the circuit to be homomorphically evaluated, instead of exponential; we therefore construct a leveled fully homomorphic encryption scheme. This scheme can be transformed into a pure fully homomorphic encryption scheme using bootstrapping, and its security is still based on the Approximate-GCD problem. We also describe an implementation of the homomorphic evaluation of the full AES encryption circuit, and obtain significantly improved performance compared to previous implementations: about 23 seconds (resp. 3 minutes) per AES block at the 72-bit (resp. 80-bit) security level on a mid-range workstation. Finally, we prove the equivalence between the error-free decisional Approximate-GCD problem introduced by Cheon et al. (Eurocrypt 2013) and the classical computational Approximate-GCD problem. This equivalence allows us to get rid of the additional noise in all the integer-based FHE schemes described so far, and therefore to simplify their security proofs.
---
paper_title: Fully homomorphic encryption with relatively small key and ciphertext sizes
paper_content:
We present a fully homomorphic encryption scheme which has both relatively small key and ciphertext size. Our construction follows that of Gentry by producing a fully homomorphic scheme from a “somewhat” homomorphic scheme. For the somewhat homomorphic scheme the public and private keys consist of two large integers (one of which is shared by both the public and private key) and the ciphertext consists of one large integer. As such, our scheme has smaller message expansion and key size than Gentry’s original scheme. In addition, our proposal allows efficient fully homomorphic encryption over any field of characteristic two.
---
paper_title: On Dual Lattice Attacks Against Small-Secret LWE and Parameter Choices in HElib and SEAL
paper_content:
We present novel variants of the dual-lattice attack against LWE in the presence of an unusually short secret. These variants are informed by recent progress in BKW-style algorithms for solving LWE. Applying them to parameter sets suggested by the homomorphic encryption libraries HElib and SEAL yields revised security estimates. Our techniques scale the exponent of the dual-lattice attack by a factor of $2L/(2L+1)$ when $\log q = \Theta(L \log n)$, when the secret has constant Hamming weight $h$, and where $L$ is the maximum depth of supported circuits. They also allow halving the dimension of the lattice under consideration at a multiplicative cost of $2^h$ operations. Moreover, our techniques yield revised concrete security estimates. For example, both libraries promise 80 bits of security for LWE instances with $n = 1024$ and $\log_2 q \approx 47$, while the techniques described in this work lead to estimated costs of 68 bits (SEAL) and 62 bits (HElib).
---
paper_title: Efficient Fully Homomorphic Encryption from (Standard) LWE
paper_content:
We present a fully homomorphic encryption scheme that is based solely on the (standard) learning with errors (LWE) assumption. Applying known results on LWE, the security of our scheme is based on the worst-case hardness of "short vector problems" on arbitrary lattices. Our construction improves on previous works in two aspects: (1) we show that "somewhat homomorphic" encryption can be based on LWE, using a new re-linearization technique; in contrast, all previous schemes relied on complexity assumptions related to ideals in various rings. (2) We deviate from the "squashing paradigm" used in all previous works and introduce a new dimension-modulus reduction technique, which shortens the ciphertexts and reduces the decryption complexity of our scheme without introducing additional assumptions. Our scheme has very short ciphertexts, and we therefore use it to construct an asymptotically efficient LWE-based single-server private information retrieval (PIR) protocol. The communication complexity of our protocol (in the public-key model) is $k \cdot \mathrm{polylog}(k) + \log |DB|$ bits per single-bit query, where $k$ is a security parameter and $|DB|$ is the database size.
---
paper_title: Algorithms in HElib
paper_content:
HElib is a software library that implements homomorphic encryption (HE), specifically the Brakerski-Gentry-Vaikuntanathan (BGV) scheme, focusing on effective use of the Smart-Vercauteren ciphertext packing techniques and the Gentry-Halevi-Smart optimizations. The underlying cryptosystem serves as the equivalent of a “hardware platform” for HElib, in that it defines a set of operations that can be applied homomorphically, and specifies their cost. This “platform” is a SIMD environment (somewhat similar to Intel SSE and the like), but with unique cost metrics and parameters. In this report we describe some of the algorithms and optimization techniques that are used in HElib for data movement, linear algebra, and other operations over this “platform.”
---
paper_title: Accelerating NTRU based homomorphic encryption using GPUs
paper_content:
We introduce a large polynomial arithmetic library optimized for Nvidia GPUs to support fully homomorphic encryption schemes. To realize the large polynomial arithmetic library we convert polynomials with large coefficients using the Chinese Remainder Theorem into many polynomials with small coefficients, and then carry out modular multiplications in the residue space using a custom developed discrete Fourier transform library. We further extend the library to support the homomorphic evaluation operations, i.e. addition, multiplication, and relinearization, in an NTRU based somewhat homomorphic encryption library. Finally, we put the library to use to evaluate homomorphic evaluation of two block ciphers: Prince and AES, which show 2.57 times and 7.6 times speedup, respectively, over an Intel Xeon software implementation.
---
paper_title: Accelerating SWHE Based PIRs Using GPUs
paper_content:
In this work we focus on tailoring and optimizing the computational Private Information Retrieval (cPIR) scheme proposed in WAHC 2014 for efficient execution on graphics processing units (GPUs). Exploiting the mass parallelism in GPUs is a commonly used approach in speeding up cPIRs. Our goal is to eliminate the efficiency bottleneck of the Doroz et al. construction which would allow us to take advantage of its excellent bandwidth performance. To this end, we develop custom code to support polynomial ring operations and extend them to realize the evaluation functions in an optimized manner on high end GPUs. Specifically, we develop optimized CUDA code to support large degree/large coefficient polynomial arithmetic operations such as modular multiplication/reduction, and modulus switching. Moreover, we choose same prime numbers for both the CRT domain representation of the polynomials and for the modulus switching implementation of the somewhat homomorphic encryption scheme. This allows us to combine two arithmetic domains, which reduces the number of domain conversions and permits us to perform faster arithmetic. Our implementation achieves 14–34 times speedup for index comparison and 4–18 times speedup for data aggregation compared to a pure CPU software implementation.
---
paper_title: High-Speed Fully Homomorphic Encryption Over the Integers
paper_content:
A fully homomorphic encryption (FHE) scheme is envisioned as a key cryptographic tool in building a secure and reliable cloud computing environment, as it allows arbitrary evaluation of a ciphertext without revealing the plaintext. However, existing FHE implementations remain impractical due to very high time and resource costs. To the authors’ knowledge, this paper presents the first hardware implementation of a full encryption primitive for FHE over the integers using FPGA technology. A large-integer multiplier architecture utilising Integer-FFT multiplication is proposed, and a large-integer Barrett modular reduction module is designed incorporating the proposed multiplier. The encryption primitive used in the integer-based FHE scheme is designed employing the proposed multiplier and modular reduction modules. The designs are verified using the Xilinx Virtex-7 FPGA platform. Experimental results show that a speed improvement factor of up to 44 is achievable for the hardware implementation of the FHE encryption scheme when compared to its corresponding software implementation. Moreover, performance analysis shows further speed improvements of the integer-based FHE encryption primitives may still be possible, for example through further optimisations or by targeting an ASIC platform.
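As a software reference point for the reduction step, here is textbook Barrett reduction in Python (our own illustrative version, not the paper's hardware architecture):

    # Textbook Barrett reduction: replace division by the modulus with two
    # multiplications against a precomputed constant and a small correction loop.
    def barrett_setup(m):
        k = m.bit_length()
        mu = (1 << (2 * k)) // m                 # precomputed floor(4^k / m)
        return k, mu

    def barrett_reduce(x, m, k, mu):
        # Valid for 0 <= x < m^2 (i.e. the result of one modular multiplication).
        q = ((x >> (k - 1)) * mu) >> (k + 1)     # estimate of floor(x / m)
        r = x - q * m
        while r >= m:                            # at most two correction steps
            r -= m
        return r

    m = (1 << 127) - 1
    k, mu = barrett_setup(m)
    a, b = 2**126 + 12345, 2**125 + 67890
    assert barrett_reduce(a * b, m, k, mu) == (a * b) % m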
---
paper_title: Evaluating the Hardware Performance of a Million-Bit Multiplier
paper_content:
In this work we present the first full and complete evaluation of a very large multiplication scheme in custom hardware. We designed a novel architecture to realize a million-bit multiplication architecture based on the Schonhage-Strassen Algorithm and the Number Theoretical Transform (NTT). The construction makes use of an innovative cache architecture along with processing elements customized to match the computation and access patterns of the FFT-based recursive multiplication algorithm. When synthesized using a 90nm TSMC library operating at a frequency of 666 MHz, our architecture is able to compute the product of integers in excess of a million bits in 7.74 milliseconds. Estimates show that the performance of our design matches that of previously reported software implementations on a high-end 3 Ghz Intel Xeon processor, while requiring only a tiny fraction of the area.
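The arithmetic core of such FFT/NTT-based multipliers can be sketched in Python as follows (our illustration; a real million-bit multiplier works on machine-word limbs, combines several NTT primes via the CRT, and propagates carries to turn the convolution back into an integer):

    # Number-theoretic-transform (NTT) based polynomial multiplication, the
    # convolution kernel behind Schonhage-Strassen-style large-integer multipliers.
    P, G = 998244353, 3                      # NTT-friendly prime (119*2^23 + 1) and generator

    def ntt(a, root):
        n = len(a)
        if n == 1:
            return a[:]
        even = ntt(a[0::2], root * root % P)
        odd = ntt(a[1::2], root * root % P)
        out, w = [0] * n, 1
        for k in range(n // 2):
            t = w * odd[k] % P
            out[k] = (even[k] + t) % P
            out[k + n // 2] = (even[k] - t) % P
            w = w * root % P
        return out

    def polymul(f, g):
        n = 1
        while n < len(f) + len(g) - 1:
            n *= 2
        w = pow(G, (P - 1) // n, P)                       # primitive n-th root of unity
        fa = ntt(f + [0] * (n - len(f)), w)
        ga = ntt(g + [0] * (n - len(g)), w)
        ca = ntt([x * y % P for x, y in zip(fa, ga)], pow(w, P - 2, P))   # inverse NTT
        n_inv = pow(n, P - 2, P)
        return [c * n_inv % P for c in ca[:len(f) + len(g) - 1]]

    assert polymul([1, 2, 3], [4, 5]) == [4, 13, 22, 15]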
---
paper_title: Accelerating Fully Homomorphic Encryption in Hardware
paper_content:
We present a custom architecture for realizing the Gentry-Halevi fully homomorphic encryption (FHE) scheme. This contribution presents the first full realization of FHE in hardware. The architecture features an optimized multi-million bit multiplier based on the Schonhage Strassen multiplication algorithm. Moreover, a number of optimizations including spectral techniques as well as a precomputation strategy is used to significantly improve the performance of the overall design. When synthesized using 90 nm technology, the presented architecture achieves to realize the encryption, decryption, and recryption operations in 18.1 msec, 16.1 msec, and 3.1 sec, respectively, and occupies a footprint of less than 30 million gates.
---
paper_title: Accelerating Homomorphic Evaluation on Reconfigurable Hardware
paper_content:
Homomorphic encryption allows computation on encrypted data and makes it possible to securely outsource computational tasks to untrusted environments. However, all proposed schemes are quite inefficient and homomorphic evaluation of ciphertexts usually takes several seconds on high-end CPUs, even for evaluating simple functions. In this work we investigate the potential of FPGAs for speeding up those evaluation operations. We propose an architecture to accelerate schemes based on the ring learning with errors (RLWE) problem and specifically implemented the somewhat homomorphic encryption scheme YASHE, which was proposed by Bos, Lauter, Loftus, and Naehrig in 2013. Due to the large size of ciphertexts and evaluation keys, on-chip storage of all data is not possible and external memory is required. For efficient utilization of the external memory we propose an efficient double-buffered memory access scheme and a polynomial multiplier based on the number theoretic transform (NTT). For the parameter set ($n = 16384$, $\lceil \log_2 q \rceil = 512$) capable of evaluating 9 levels of multiplications, we can perform a homomorphic addition in 0.94 ms and a homomorphic multiplication in 48.67 ms.
---
paper_title: Accelerating integer-based fully homomorphic encryption using Comba multiplication
paper_content:
Fully Homomorphic Encryption (FHE) is a recently developed cryptographic technique which allows computations on encrypted data. There are many interesting applications for this encryption method, especially within cloud computing. However, the computational complexity is such that it is not yet practical for real-time applications. This work proposes optimised hardware architectures of the encryption step of an integer-based FHE scheme with the aim of improving its practicality. A low-area design and a high-speed parallel design are proposed and implemented on a Xilinx Virtex-7 FPGA, targeting the available DSP slices, which offer high-speed multiplication and accumulation. Both use the Comba multiplication scheduling method to manage the large multiplications required with unevenly sized multiplicands and to minimise the number of read and write operations to RAM. Results show that speed-up factors of 3.6 and 10.4 can be achieved for the encryption step with medium-sized security parameters for the low-area and parallel designs respectively, compared to the benchmark software implementation on an Intel Core2 Duo E8400 platform running at 3 GHz.
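The scheduling idea behind Comba (product-scanning) multiplication is easy to show in software (an illustrative Python version of ours, not the paper's FPGA design): each output column is fully accumulated before a single carry is passed on, which is what minimises memory traffic:

    # Comba (product-scanning) multi-precision multiplication on W-bit limbs.
    W = 16
    MASK = (1 << W) - 1

    def to_limbs(x, n):
        return [(x >> (W * i)) & MASK for i in range(n)]

    def comba_mul(a, b):
        n, m = len(a), len(b)
        out, acc = [0] * (n + m), 0
        for k in range(n + m - 1):           # one pass per output column
            for i in range(max(0, k - m + 1), min(k + 1, n)):
                acc += a[i] * b[k - i]       # accumulate all partial products of column k
            out[k] = acc & MASK              # emit one limb ...
            acc >>= W                        # ... and carry the rest to the next column
        out[n + m - 1] = acc
        return out

    x, y = 0xDEADBEEFCAFEBABE, 0x0123456789ABCDEF
    limbs = comba_mul(to_limbs(x, 4), to_limbs(y, 4))
    assert sum(l << (W * i) for i, l in enumerate(limbs)) == x * y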
---
paper_title: Accelerating bootstrapping in FHEW using GPUs
paper_content:
The use of GPUs is no longer limited to graphics workloads; a wide variety of applications take advantage of their flexibility to accelerate computation. One of the most prominent emerging applications is fully homomorphic encryption (FHE), which enables arbitrary computation on encrypted data. Despite much research effort, FHE cannot yet be considered practical due to its enormous computational cost, especially in the bootstrapping procedure. In this paper, we accelerate the recently proposed fast bootstrapping method of the FHEW scheme using GPUs, as a case study of an FHE scheme. We explored the reference code and carried out profiling to identify candidates for performance acceleration. Based on the profiling results, combined with a more flexible tradeoff method, we optimized the FHEW bootstrapping algorithm using the GPU and CUDA programming model. The empirical results show that bootstrapping of an FHEW ciphertext can be done in less than 0.11 seconds after optimization.
---
paper_title: Practical homomorphic encryption: A survey
paper_content:
Cloud computing technology has rapidly evolved over the last decade, offering an alternative way to store and work with large amounts of data. However data security remains an important issue particularly when using a public cloud service provider. The recent area of homomorphic cryptography allows computation on encrypted data, which would allow users to ensure data privacy on the cloud and increase the potential market for cloud computing. A significant amount of research on homomorphic cryptography appeared in the literature over the last few years; yet the performance of existing implementations of encryption schemes remains unsuitable for real time applications. One way this limitation is being addressed is through the use of graphics processing units (GPUs) and field programmable gate arrays (FPGAs) for implementations of homomorphic encryption schemes. This review presents the current state of the art in this promising new area of research and highlights the interesting remaining open problems.
---
paper_title: On CCA-secure somewhat homomorphic encryption
paper_content:
It is well known that any encryption scheme which supports any form of homomorphic operation cannot be secure against adaptive chosen ciphertext attacks. The question then arises as to what is the most stringent security definition which is achievable by homomorphic encryption schemes. Prior work has shown that various schemes which support a single homomorphic encryption scheme can be shown to be IND-CCA1, i.e. secure against lunchtime attacks. In this paper we extend this analysis to the recent fully homomorphic encryption scheme proposed by Gentry, as refined by Gentry, Halevi, Smart and Vercauteren. We show that the basic Gentry scheme is not IND-CCA1; indeed a trivial lunchtime attack allows one to recover the secret key. We then show that a minor modification to the variant of the somewhat homomorphic encryption scheme of Smart and Vercauteren will allow one to achieve IND-CCA1, indeed PA-1, in the standard model assuming a lattice based knowledge assumption. We also examine the security of the scheme against another security notion, namely security in the presence of ciphertext validity checking oracles; and show why CCA-like notions are important in applications in which multiple parties submit encrypted data to the "cloud" for secure processing.
---
paper_title: Notes on Two Fully Homomorphic Encryption Schemes Without Bootstrapping.
paper_content:
Recently, IACR ePrint archive posted two fully homomorphic encryption schemes without bootstrapping. In this note, we show that these schemes are trivially insecure. Furthermore, we also show that the encryption schemes of Liu and Wang [6] in CCS 2012 and the encryption scheme of Liu, Bertino, and Xun [5] in ASIACCS 2014 are insecure either.
---
paper_title: Between a Rock and a Hard Place: Interpolating Between MPC and FHE
paper_content:
We present a computationally secure MPC protocol for threshold adversaries which is parametrized by a value L. When L = 2 we obtain a classical form of MPC protocol in which interaction is required for multiplications, as L increases interaction is reduced, in that one requires interaction only after computing a higher degree function. When L approaches infinity one obtains the FHE based protocol of Gentry, which requires no interaction. Thus one can trade communication for computation in a simple way. Our protocol is based on an interactive protocol for “bootstrapping” a somewhat homomorphic encryption (SHE) scheme. The key contribution is that our presented protocol is highly communication efficient enabling us to obtain reduced communication when compared to traditional MPC protocols for relatively small values of L.
---
paper_title: Outsourcing secure two-party computation as a black box
paper_content:
Secure multiparty computation (SMC) offers a technique to preserve functionality and data privacy in mobile applications. Current protocols that make this costly cryptographic construction feasible on mobile devices securely outsource the bulk of the computation to a cloud provider. However, these outsourcing techniques are built on specific secure computation assumptions and tools, and applying new SMC ideas to the outsourced setting requires the protocols to be completely rebuilt and proven secure. In this work, we develop a generic technique for lifting any secure two-party computation protocol into an outsourced two-party SMC protocol. By augmenting the function being evaluated with auxiliary consistency checks and input values, we can create an outsourced protocol with low overhead cost. Our implementation and evaluation show that in the best case our outsourcing additions execute within the confidence intervals of two servers running the same computation and consume approximately the same bandwidth. In addition, the mobile device itself uses minimal bandwidth over a single round of communication. This work demonstrates that efficient outsourcing is possible with any underlying SMC scheme and provides an outsourcing protocol that is efficient and directly applicable to current and future SMC techniques.
---
paper_title: Multiparty computation from somewhat homomorphic encryption
paper_content:
We propose a general multiparty computation protocol secure against an active adversary corrupting up to $n-1$ of the n players. The protocol may be used to securely compute arithmetic circuits over any finite field $\mathbb{F}_{p^k}$. Our protocol consists of a preprocessing phase that is independent both of the function to be computed and of the inputs, and a much more efficient online phase where the actual computation takes place. The online phase is unconditionally secure and has total computational and communication complexity linear in n, the number of players, where earlier work was quadratic in n. Moreover, the work done by each player is only a small constant factor larger than what one would need to compute the circuit in the clear. We show this is optimal for computation in large fields. In practice, for 3 players, a secure 64-bit multiplication can be done in 0.05 ms. Our preprocessing is based on a somewhat homomorphic cryptosystem. We extend a scheme by Brakerski et al. so that we can perform distributed decryption and handle many values in parallel in one ciphertext. The computational complexity of our preprocessing phase is dominated by the public-key operations: we need $O(n^2/s)$ operations per secure multiplication, where s is a parameter that increases with the security parameter of the cryptosystem. Earlier work in this model needed $\Omega(n^2)$ operations. In practice, the preprocessing prepares a secure 64-bit multiplication for 3 players in about 13 ms.
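The division of labour between the two phases rests on preprocessed multiplication triples; the classic Beaver trick that consumes such a triple in the online phase is sketched below (a passively secure, additive-sharing illustration of ours -- the actual protocol additionally authenticates every share with information-theoretic MACs):

    # Online multiplication of secret-shared values from a preprocessed Beaver triple
    # over additive secret sharing (MACs and active-security checks omitted).
    import random

    P = 2**61 - 1                                   # prime field
    N = 3                                           # number of parties

    def share(x):
        s = [random.randrange(P) for _ in range(N - 1)]
        return s + [(x - sum(s)) % P]

    def reveal(shares):
        return sum(shares) % P

    def beaver_mul(x_sh, y_sh, triple):
        a_sh, b_sh, c_sh = triple                   # c = a*b, produced in preprocessing
        eps = reveal([(xi - ai) % P for xi, ai in zip(x_sh, a_sh)])   # open x - a
        rho = reveal([(yi - bi) % P for yi, bi in zip(y_sh, b_sh)])   # open y - b
        # Each party locally computes its share of x*y = c + eps*b + rho*a + eps*rho;
        # the public eps*rho term is added by party 0 only.
        return [(ci + eps * bi + rho * ai + (eps * rho if i == 0 else 0)) % P
                for i, (ai, bi, ci) in enumerate(zip(a_sh, b_sh, c_sh))]

    a, b = random.randrange(P), random.randrange(P)
    triple = (share(a), share(b), share(a * b % P))
    x, y = 1234567, 7654321
    assert reveal(beaver_mul(share(x), share(y), triple)) == x * y % P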
---
paper_title: On-the-fly multiparty computation on the cloud via multikey fully homomorphic encryption
paper_content:
We propose a new notion of secure multiparty computation aided by a computationally-powerful but untrusted "cloud" server. In this notion that we call on-the-fly multiparty computation (MPC), the cloud can non-interactively perform arbitrary, dynamically chosen computations on data belonging to arbitrary sets of users chosen on-the-fly. All user's input data and intermediate results are protected from snooping by the cloud as well as other users. This extends the standard notion of fully homomorphic encryption (FHE), where users can only enlist the cloud's help in evaluating functions on their own encrypted data. In on-the-fly MPC, each user is involved only when initially uploading his (encrypted) data to the cloud, and in a final output decryption phase when outputs are revealed; the complexity of both is independent of the function being computed and the total number of users in the system. When users upload their data, they need not decide in advance which function will be computed, nor who they will compute with; they need only retroactively approve the eventually-chosen functions and on whose data the functions were evaluated. This notion is qualitatively the best possible in minimizing interaction, since the users' interaction in the decryption stage is inevitable: we show that removing it would imply generic program obfuscation and is thus impossible. Our contributions are two-fold:- We show how on-the-fly MPC can be achieved using a new type of encryption scheme that we call multikey FHE, which is capable of operating on inputs encrypted under multiple, unrelated keys. A ciphertext resulting from a multikey evaluation can be jointly decrypted using the secret keys of all the users involved in the computation. - We construct a multikey FHE scheme based on NTRU, a very efficient public-key encryption scheme proposed in the 1990s. It was previously not known how to make NTRU fully homomorphic even for a single party. We view the construction of (multikey) FHE from NTRU encryption as a main contribution of independent interest. Although the transformation to a fully homomorphic system deteriorates the efficiency of NTRU somewhat, we believe that this system is a leading candidate for a practical FHE scheme.
---
paper_title: Secure outsourced garbled circuit evaluation for mobile devices
paper_content:
Garbled circuits provide a powerful tool for jointly evaluating functions while preserving the privacy of each user's inputs. While recent research has made the use of this primitive more practical, such solutions generally assume that participants are symmetrically provisioned with massive computing resources. In reality, most people on the planet only have access to the comparatively sparse computational resources associated with their mobile phones, and those willing and able to pay for access to public cloud computing infrastructure cannot be assured that their data will remain unexposed. We address this problem by creating a new SFE protocol that allows mobile devices to securely outsource the majority of computation required to evaluate a garbled circuit. Our protocol, which builds on the most efficient garbled circuit evaluation techniques, includes a new outsourced oblivious transfer primitive that requires significantly less bandwidth and computation than standard OT primitives, and outsourced input validation techniques that force the cloud to prove that it is executing all protocols correctly. After showing that our extensions are secure in the malicious model, we conduct an extensive performance evaluation for a number of standard SFE test applications as well as a privacy-preserving navigation application designed specifically for the mobile use case. Our system reduces execution time by 98.92% and bandwidth by 99.95% for the edit distance problem of size 128 compared to non-outsourced evaluation. These results show that even the least capable devices are capable of evaluating some of the largest garbled circuits generated for any platform.
---
paper_title: Adaptively Secure Multi-Party Computation from LWE via Equivocal FHE
paper_content:
Adaptively secure Multi-Party Computation (MPC) is an essential and fundamental notion in cryptography. In this work, we construct Universally Composable (UC) MPC protocols that are adaptively secure against all-but-one corruptions based on LWE. Our protocols have a constant number of rounds and communication complexity dependent only on the length of the inputs and outputs; it is independent of the circuit size. Such protocols were only known assuming an honest majority. Protocols in the dishonest majority setting, such as the work of Ishai et al. (CRYPTO 2008), require communication complexity proportional to the circuit size. In addition, constant-round adaptively secure protocols assuming dishonest majority are known to be impossible in the stand-alone setting with black-box proofs of security in the plain model. Here, we solve the problem in the UC setting using a set-up assumption which was shown necessary in order to achieve dishonest majority. The problem of constructing adaptively secure constant-round MPC protocols against arbitrary corruptions is considered notoriously hard. A recent line of works based on indistinguishability obfuscation constructs such protocols with a near-optimal number of rounds against arbitrary corruptions. However, based on standard assumptions, adaptively secure protocols secure against even just all-but-one corruptions with a near-optimal number of rounds are not known. In this work we provide a three-round solution based only on LWE and NIZK that is secure against all-but-one corruptions. In addition, Asharov et al. (EUROCRYPT 2012) and more recently Mukherjee and Wichs (ePrint 2015) presented constant-round protocols based on LWE which are secure only in the presence of static adversaries. Assuming NIZK and LWE, their static protocols run in two rounds, where the latter round is only based on a common random string. Assuming adaptively secure UC NIZK, proposed by Groth et al. (ACM 2012), and LWE as mentioned above, our adaptive protocols run in three rounds. Our protocols are constructed based on a special type of cryptosystem from LWE that we call equivocal FHE. We also build adaptively secure UC commitments and UC zero-knowledge proofs of knowledge from LWE. Moreover, in the decryption phase, using an AMD code mechanism we avoid the use of ZK and achieve communication complexity that does not scale with the decryption circuit.
---
paper_title: A Practical, Secure, and Verifiable Cloud Computing for Mobile Systems
paper_content:
Cloud computing systems, in which clients rent and share computing resources of third party platforms, have gained widespread use in recent years. Furthermore, cloud computing for mobile systems (i.e., systems in which the clients are mobile devices) has also been receiving considerable attention in technical literature. We propose a new method of delegating computations of resource-constrained mobile clients, in which multiple servers interact to construct an encrypted program known as a garbled circuit. Next, using garbled inputs from a mobile client, another server executes this garbled circuit and returns the resulting garbled outputs. Our system assures privacy of the mobile client’s data, even if the executing server chooses to collude with all but one of the other servers. We adapt the garbled circuit design of Beaver et al. and the secure multiparty computation protocol of Goldreich et al. for the purpose of building a secure cloud computing for mobile systems. Our method incorporates the novel use of the cryptographically secure pseudo random number generator of Blum et al. that enables the mobile client to efficiently retrieve the result of the computation, as well as to verify that the evaluator actually performed the computation. We analyze the server-side and client-side complexity of our system. Using real-world data, we evaluate our system for a privacy preserving search application that locates the nearest bank/ATM from the mobile client. We also measure the time taken to construct and evaluate the garbled circuit for varying number of servers, demonstrating the feasibility of our secure and verifiable cloud computing for mobile systems.
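The Blum-Blum-Shub generator referenced above ("the cryptographically secure pseudo random number generator of Blum et al.") can be sketched in a few lines; the parameters below are toy-sized and insecure, and the surrounding outsourcing and verification protocol is not shown.

```python
# Toy Blum-Blum-Shub generator: x_{i+1} = x_i^2 mod N with N a product of two Blum primes.
p, q = 11, 19            # toy Blum primes (both congruent to 3 mod 4); real moduli are huge
N = p * q                # 209

def bbs_bits(seed, k):
    x = (seed * seed) % N            # seed must be coprime to N
    bits = []
    for _ in range(k):
        x = (x * x) % N
        bits.append(x & 1)           # output the least-significant bit of each state
    return bits

print(bbs_bits(seed=3, k=16))
```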
---
paper_title: An Efficient Leveled Identity-Based FHE
paper_content:
Gentry, Sahai and Waters constructed the first identity-based fully homomorphic encryption schemes from identity-based encryption schemes in CRYPTO 2013. In this work, we focus on improving their IBFHE schemes, using Micciancio and Peikert’s novel and powerful trapdoor in conjunction with Alperin-Sheriff and Peikert’s simple and tight noise analysis technique when performing homomorphic evaluation.
---
paper_title: Reuse It Or Lose It: More Efficient Secure Computation Through Reuse of Encrypted Values
paper_content:
Two-party secure-function evaluation (SFE) has become significantly more feasible, even on resource-constrained devices, because of advances in server-aided computation systems. However, there are still bottlenecks, particularly in the input-validation stage of a computation. Moreover, SFE research has not yet devoted sufficient attention to the important problem of retaining state after a computation has been performed so that expensive processing does not have to be repeated if a similar computation is done again. This paper presents PartialGC, an SFE system that allows the reuse of encrypted values generated during a garbled-circuit computation. We show that using PartialGC can reduce computation time by as much as 96% and bandwidth by as much as 98% in comparison with previous outsourcing schemes for secure computation. We demonstrate the feasibility of our approach with two sets of experiments, one in which the garbled circuit is evaluated on a mobile device and one in which it is evaluated on a server. We also use PartialGC to build a privacy-preserving "friend-finder" application for Android. The reuse of previous inputs to allow stateful evaluation represents a new way of looking at SFE and further reduces computational barriers.
---
paper_title: Homomorphic Encryption from Learning with Errors: Conceptually-Simpler, Asymptotically-Faster, Attribute-Based
paper_content:
We describe a comparatively simple fully homomorphic encryption (FHE) scheme based on the learning with errors (LWE) problem. In previous LWE-based FHE schemes, multiplication is a complicated and expensive step involving “relinearization”. In this work, we propose a new technique for building FHE schemes that we call the approximate eigenvector method. In our scheme, for the most part, homomorphic addition and multiplication are just matrix addition and multiplication. This makes our scheme both asymptotically faster and (we believe) easier to understand.
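A toy linear-algebra illustration of the approximate-eigenvector idea: a ciphertext is a matrix that has the secret vector as an (approximate) eigenvector with the plaintext as eigenvalue, so matrix addition and multiplication add and multiply plaintexts. The sketch below works over the reals with zero noise purely to show the algebra; it omits the modulus, the noise, and the gadget/flattening machinery that the actual LWE-based scheme needs, and is not secure in any sense.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
s = rng.normal(size=n)                      # stand-in "secret" vector

def encrypt(mu):
    # Build C = mu*I + R with R @ s = 0, so C @ s = mu * s (exact eigenvector, no noise).
    R = rng.normal(size=(n, n))
    R -= np.outer(R @ s, s) / (s @ s)       # make every row of R annihilate s
    return mu * np.eye(n) + R

def decrypt(C):
    return (C @ s)[0] / s[0]                # read off the eigenvalue

C1, C2 = encrypt(3.0), encrypt(5.0)
print(round(decrypt(C1 + C2)))              # 8  -> ciphertext addition adds plaintexts
print(round(decrypt(C1 @ C2)))              # 15 -> ciphertext multiplication multiplies them
```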
---
|
Title: A Survey on Homomorphic Encryption Schemes: Theory and Implementation
Section 1: INTRODUCTION
Description 1: Discusses the origins and basic concept of homomorphic encryption (HE), including its significance, types (PHE, SWHE, FHE), and the main motivation behind this survey.
Section 2: RELATED WORK
Description 2: Details the existing surveys in the field of HE, comparing them with this work and highlighting the unique contributions of this survey.
Section 3: HOMOMORPHIC ENCRYPTION SCHEMES
Description 3: Explains the basics of HE theory and presents notable PHE, SWHE, and FHE schemes along with their descriptions, homomorphic properties, and evolution.
Section 4: Partially Homomorphic Encryption Schemes
Description 4: Focuses on major PHE schemes such as RSA, Goldwasser-Micali, El-Gamal, Benaloh, Paillier, and others, describing their key generation, encryption, decryption algorithms, and homomorphic properties (see the toy RSA sketch after this outline).
Section 5: Somewhat Homomorphic Encryption Schemes
Description 5: Discusses significant SWHE schemes, including Boneh-Goh-Nissim (BGN), Polly Cracker, Sander-Young-Yung (SYY), and others, focusing on their contributions towards FHE and performance characteristics.
Section 6: Fully Homomorphic Encryption Schemes
Description 6: Presents various FHE schemes categorized into four main families: Ideal Lattice-based, Integer-based, LWE-based, and NTRU-like schemes. Provides detailed discussions on Gentry's original scheme, subsequent improvements, and recent advancements.
Section 7: IMPLEMENTATIONS OF SWHE AND FHE SCHEMES
Description 7: Summarizes the implementations of SWHE and FHE schemes after Gentry's work, providing performance evaluations, optimizations, and the impact of various implementation techniques.
Section 8: FURTHER RESEARCH DIRECTIONS AND LESSONS LEARNED
Description 8: Evaluates the security, speed, and simplicity of current FHE schemes, discussing open problems like circular security, noise-free FHE, multikey FHE, and potential applications such as Functional Encryption (FE) and multiparty computation (MPC).
Section 9: CONCLUSION
Description 9: Concludes the survey by emphasizing the importance of privacy in the digital age and the potential of HE in preserving the privacy of sensitive data, while summarizing the key findings and future outlook of HE research.
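As a concrete illustration for the PHE schemes listed in Section 4 above, the following sketch shows the multiplicative homomorphism of textbook (unpadded) RSA with deliberately tiny, insecure parameters; Paillier would give the analogous additive property.

```python
# Toy textbook RSA (tiny primes, no padding) illustrating its multiplicative homomorphism.
p, q = 61, 53
n = p * q                       # 3233
phi = (p - 1) * (q - 1)         # 3120
e = 17
d = pow(e, -1, phi)             # modular inverse (Python 3.8+)

encrypt = lambda m: pow(m, e, n)
decrypt = lambda c: pow(c, d, n)

m1, m2 = 42, 55
c1, c2 = encrypt(m1), encrypt(m2)
# Multiplying ciphertexts yields an encryption of the product of the plaintexts.
assert decrypt((c1 * c2) % n) == (m1 * m2) % n
print(decrypt((c1 * c2) % n))   # 2310
```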
|
An Overview of Reflection and Its Use in Cooperation
| 12 |
---
paper_title: Semantics and implementation of schema evolution in object-oriented databases
paper_content:
Object-oriented programming is well-suited to such data-intensive application domains as CAD/CAM, AI, and OIS (office information systems) with multimedia documents. At MCC we have built a prototype object-oriented database system, called ORION. It adds persistence and sharability to objects created and manipulated in applications implemented in an object-oriented programming environment. One of the important requirements of these applications is schema evolution, that is, the ability to dynamically make a wide variety of changes to the database schema. In this paper, following a brief review of the object-oriented data model that we support in ORION, we establish a framework for supporting schema evolution, define the semantics of schema evolution, and discuss its implementation.
---
paper_title: Models of Expertise in Knowledge Acquisition
paper_content:
This chapter presents the models of expertise in knowledge acquisition. The conceptual models, which are developed from interpretation models, consist most often of combinations of models for generic tasks. The conceptual model consists of a diagnostic task, which has as one of its subtasks a planning task. Project management consists of alternating planning, monitoring, diagnostic, and remedying tasks. The interpretation models provide support for developing the inference and task layers, but far less support exists for domain structures. However, this reflects the state of the art in artificial intelligence (AI), where formalisms are developed for knowledge representation, but there is little insight into what good representations (models) are for objects and actions in a domain. Current research in AI toward models for time, space, substance, and processes may in the near future become valuable for analyzing domains and constructing robust domain knowledge bases. These models or domain theories can be developed into tools that provide initial structures for major domain concepts. The majority of these studies indicate that KCLM as a bridge between data and the design of the KBS is useful. The studies also indicate that KCLM and the interpretation models with KADS are not completely stable and fully developed.
---
paper_title: KADS: a modelling approach to knowledge engineering
paper_content:
This paper discusses the KADS approach to knowledge engineering. In KADS, the development of a knowledge-based system (KBS) is viewed as a modelling activity. A KBS is not a container filled with knowledge extracted from an expert, but an operational model that exhibits some desired behaviour that can be observed in terms of real-world phenomena. Five basic principles underlying the KADS approach are discussed, namely (i) the introduction of partial models as a means to cope with the complexity of the knowledge engineering process, (ii) the KADS four-layer framework for modelling the required expertise, (iii) the re-usability of generic model components as templates supporting top-down knowledge acquisition, (iv) the process of differentiating simple models into more complex ones and (v) the importance of structure-preserving transformation of models of expertise into design and implementation. The actual activities that a knowledge engineer has to undertake are briefly discussed. We compare the KADS approach to related approaches and discuss experiences and future developments. The approach is illustrated throughout the paper with examples in the domain of troubleshooting audio equipment.
---
paper_title: Debugging concurrent systems based on object groups
paper_content:
This paper presents a debugging method for Concurrent Object-Oriented Systems. Our method is based upon a new notion called Object Groups. An Object Group is a collection of objects which forms a natural unit for performing collective tasks. An Object Group’s Task differs from C. Manning’s nested transaction which is based on the nested request-reply bilateral message passing structures. Each Object Group’s Task permits more general message passing structures. The language constructs which specify and use Object Groups have been introduced into an object-oriented concurrent language ABCL/1. The paper also describes ABCL/1’s debugging tools based on Object Groups.
---
|
Title: An Overview of Reflection and Its Use in Cooperation
Section 1: Introduction
Description 1: Introduce the concept of reflection in information systems, its importance, and provide an overview of the research paper's objectives.
Section 2: Approaches to Reflection
Description 2: Discuss various models and methodologies for reflection in knowledge-based systems and programming languages, including KADS and CLOS.
Section 3: KADS: (Knowledge Acquisition and Design System)
Description 3: Provide an overview of the KADS methodology, its origins, and its application in developing knowledge-based systems.
Section 4: KADS: The Four-layer Model of Expertise
Description 4: Explain the four-layer model of expertise in KADS, focusing on domain knowledge, inference knowledge, task knowledge, and strategic knowledge.
Section 5: REFLECT: The Meta-level Architecture
Description 5: Describe the REFLECT project's approach to enhancing the strategy layer of KADS for reasoning about problem-solving competence.
Section 6: CLOS: Common Lisp Object System
Description 6: Introduce CLOS and its self-describing facilities, emphasizing its meta-object protocols and extensions for programming reflection.
Section 7: Summary
Description 7: Summarize the differences between task reflection and programming reflection, and introduce the concept of operational reflection, which integrates both.
Section 8: Cooperation through Reflection
Description 8: Explain the benefits of reflection in cooperative information systems, particularly in multi-database environments, and introduce the concept of operational reflection.
Section 9: Cooperative Aspects of the R-OK Model
Description 9: Describe the R-OK (Reflective Object Knowledge) model and its four types of metaobjects for facilitating cooperation among pre-existing information systems.
Section 10: Formalisation
Description 10: Formalize the reflective aspects of the R-OK model using the Z notation and explain its application in inheritance, location, and action within information systems.
Section 11: Reflective Cooperative Processing Activities
Description 11: Provide examples of how reflection can be used for cooperative processing, including cooperative instantiation and emergency triggers.
Section 12: Self-explanation
Description 12: Discuss the principle of self-explanation in reflective systems and its implications for model utility, completeness, and consistency.
|
Derandomization: a brief overview
| 16 |
---
paper_title: Randomized Algorithms
paper_content:
For many applications, a randomized algorithm is either the simplest or the fastest algorithm available, and sometimes both. This book introduces the basic concepts in the design and analysis of randomized algorithms. The first part of the text presents basic tools such as probability theory and probabilistic analysis that are frequently used in algorithmic applications. Algorithmic examples are also given to illustrate the use of each tool in a concrete setting. In the second part of the book, each chapter focuses on an important area to which randomized algorithms can be applied, providing a comprehensive and representative selection of the algorithms that might be used in each of these areas. Although written primarily as a text for advanced undergraduates and graduate students, this book should also prove invaluable as a reference for professionals and researchers.
---
paper_title: How to generate cryptographically strong sequences of pseudo random bits
paper_content:
We give a set of conditions that allow one to generate 50–50 unpredictable bits. Based on those conditions, we present a general algorithmic scheme for constructing polynomial-time deterministic algorithms that stretch a short secret random input into a long sequence of unpredictable pseudo-random bits. We give an implementation of our scheme and exhibit a pseudo-random bit generator for which any efficient strategy for predicting the next output bit with better than 50-50 chance is easily transformable to an “equally efficient” algorithm for solving the discrete logarithm problem. In particular: if the discrete logarithm problem cannot be solved in probabilistic polynomial time, no probabilistic polynomial-time algorithm can guess the next output bit better than by flipping a coin: if “head” guess “0”, if “tail” guess “1”.
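One common textbook instantiation of this scheme is the Blum-Micali generator based on discrete log: iterate x ↦ g^x mod p and output the hard-core bit of each state. The sketch below uses toy, insecure parameters purely to show the mechanics.

```python
# Toy Blum-Micali generator (insecure parameters; real instances use large primes).
p, g = 101, 2                          # 2 is a primitive root modulo 101

def blum_micali(seed, k):
    x, bits = seed, []
    for _ in range(k):
        x = pow(g, x, p)               # iterate modular exponentiation
        bits.append(1 if x < (p - 1) // 2 else 0)   # the hard-core bit of the state
    return bits

print(blum_micali(seed=7, k=16))
```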
---
paper_title: On the generation of cryptographically strong pseudo-random sequences
paper_content:
In this paper we show how to generate from a short random seed S a long sequence of pseudo-random numbers R_i in which the problem of computing one more R_i value given an arbitrarily large subset of the other values is provably equivalent to the cryptanalysis of the associated Rivest-Shamir-Adleman encryption function.
---
paper_title: A hard-core predicate for all one-way functions
paper_content:
A central tool in constructing pseudorandom generators, secure encryption functions, and in other areas are "hard-core" predicates b of functions (permutations) ƒ, discovered in [Blum Micali 82]. Such b(x) cannot be efficiently guessed (substantially better than 50-50) given only ƒ(x). Both b, ƒ are computable in polynomial time. [Yao 82] transforms any one-way function ƒ into a more complicated one, ƒ*, which has a hard-core predicate. The construction applies the original ƒ to many small pieces of the input to ƒ* just to get one "hard-core" bit. The security of this bit may be smaller than any constant positive power of the security of ƒ. In fact, for inputs (to ƒ*) of practical size, the pieces affected by ƒ are so small that ƒ can be inverted (and the "hard-core" bit computed) by exhaustive search. In this paper we show that every one-way function, padded to the form ƒ(p, x) = (p, g(x)), |p| = |x|, has by itself a hard-core predicate of the same (within a polynomial) security. Namely, we prove a conjecture of [Levin 87, sec. 5.6.2] that the scalar product of Boolean vectors p, x is a hard-core of every one-way function ƒ(p, x) = (p, g(x)). The result extends to multiple (up to the logarithm of security) such bits and to any distribution on the x's for which ƒ is hard to invert.
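The hard-core predicate proved here is simply the inner product of the Boolean vectors p and x modulo 2; computing it is trivial, and the theorem says that predicting it from (p, g(x)) is as hard as inverting g. A minimal sketch:

```python
# The Goldreich-Levin predicate: <p, x> mod 2 for Boolean vectors p and x.
def gl_predicate(p_bits, x_bits):
    assert len(p_bits) == len(x_bits)
    return sum(pi & xi for pi, xi in zip(p_bits, x_bits)) & 1

# For any one-way g, f(p, x) = (p, g(x)) hides this bit: guessing <p, x> mod 2 from
# p and g(x) noticeably better than 1/2 would yield an inverter for g.
print(gl_predicate([1, 0, 1, 1], [1, 1, 0, 1]))   # 1*1 + 0*1 + 1*0 + 1*1 = 2, so the bit is 0
```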
---
paper_title: One-way functions and pseudorandom generators
paper_content:
One-way functions are those which are easy to compute, but hard to invert on a non-negligible fraction of instances. The existence of such functions with some additional assumptions was shown to be sufficient for generating perfect pseudorandom strings [Blum, Micali 82], [Yao 82], [Goldreich, Goldwasser, Micali 84]. Below, among a few other observations, a weaker assumption about one-way functions is suggested, which is not only sufficient, but also necessary for the existence of pseudorandom generators. The main theorem can be understood without reading the sections 3-6.
---
paper_title: Hardness vs. randomness
paper_content:
A simple construction for a pseudorandom bit generator is presented. It stretches a short string of truly random bits into a long string that looks random to any algorithm from a complexity class C (e.g. P, NC, PSPACE, etc.), using an arbitrary function that is hard for C. This generator reveals an equivalence between the problems of proving lower bounds and the problem of generating good pseudorandom sequences. Combining this construction with other arguments, a number of consequences are obtained.
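Structurally, the generator described here applies a hard function to the seed restricted to the sets of a combinatorial design (sets with small pairwise intersections). The skeleton below only shows that shape: the parity placeholder for f is of course not hard, the hand-picked design is tiny (pairwise intersections of size 1), and the seed-to-output stretch is trivial; all of it is illustrative rather than an actual instantiation.

```python
# Skeleton of a Nisan-Wigderson-style generator: output bit i = f(seed restricted to S_i).
def f(bits):                       # placeholder for a function assumed hard for the class C
    return sum(bits) & 1

design = [(0, 1, 2), (0, 3, 4), (1, 3, 5), (2, 4, 5)]   # subsets of seed positions,
                                                        # pairwise intersections of size 1

def nw_generator(seed):            # seed: list of 0/1 of length 6 -> 4 output bits
    return [f([seed[i] for i in S]) for S in design]

print(nw_generator([1, 0, 0, 0, 1, 1]))   # [1, 0, 1, 0]
```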
---
paper_title: On yao’s XOR-lemma
paper_content:
A fundamental lemma of Yao states that computational weak unpredictability of Boolean predicates is amplified when the results of several independent instances are XORed together. We survey two known proofs of Yao's Lemma and present a third alternative proof. The third proof proceeds by first proving that a function constructed by concatenating the values of the original function on several independent instances is much more unpredictable, with respect to specified complexity bounds, than the original function. This statement turns out to be easier to prove than the XOR-Lemma. Using a result of Goldreich and Levin (1989) and some elementary observation, we derive the XOR-Lemma.
---
paper_title: Hiding instances in multioracle queries
paper_content:
Abadi, Feigenbaum, and Kilian have considered instance-hiding schemes [1]. Let f be a function for which no randomized polynomial-time algorithm is known; randomized polynomial-time machine A wants to query an oracle B for f to obtain f(x), without telling B exactly what x is. It is shown in [1] that, if f is an NP-hard function, A cannot query a single oracle B while hiding all but the size of the instance, assuming that the polynomial hierarchy does not collapse. This negative result holds for all oracles B, including those that are non-r.e.
---
paper_title: BPP has subexponential time simulations unless EXPTIME has publishable proofs
paper_content:
It is shown that BPP can be simulated in subexponential time for infinitely many input lengths unless exponential time collapses to the second level of the polynomial-time hierarchy, has polynomial-size circuits, and has publishable proofs (EXPTIME=MA). It is also shown that BPP is contained in subexponential time unless exponential time has publishable proofs for infinitely many input lengths. In addition, it is shown that BPP can be simulated in subexponential time for infinitely many input lengths unless there exist unary languages in MA/P. The proofs are based on the recent characterization of the power of multiprover interactive protocols and on random self-reducibility via low degree polynomials. They exhibit an interplay between Boolean circuit simulation, interactive proofs and classical complexity classes. An important feature of this proof is that it does not relativize.
---
paper_title: Hard-core distributions for somewhat hard problems
paper_content:
Consider a decision problem that cannot be 1-δ approximated by circuits of a given size in the sense that any such circuit fails to give the correct answer on at least a δ fraction of instances. We show that for any such problem there is a specific "hard core" set of inputs which is at least a δ fraction of all inputs and on which no circuit of a slightly smaller size can get even a small advantage over a random guess. More generally, our argument holds for any non-uniform model of computation closed under majorities. We apply this result to get a new proof of the Yao XOR lemma (A.C. Yao, 1982), and to get a related XOR lemma for inputs that are only k-wise independent.
---
paper_title: Hardness vs. randomness
paper_content:
A simple construction for a pseudorandom bit generator is presented. It stretches a short string of truly random bits into a long string that looks random to any algorithm from a complexity class C (e.g. P, NC, PSPACE, etc.), using an arbitrary function that is hard for C. This generator reveals an equivalence between the problems of proving lower bounds and the problem of generating good pseudorandom sequences. Combining this construction with other arguments, a number of consequences are obtained.
---
paper_title: Non-Deterministic Exponential Time has Two-Prover Interactive Protocols
paper_content:
We determine the exact power of two-prover interactive proof systems introduced by Ben-Or, Goldwasser, Kilian, and Wigderson (1988). In this system, two all-powerful noncommunicating provers convince a randomizing polynomial time verifier in polynomial time that the input x belongs to the language L. We show that the class of languages having two-prover interactive proof systems is nondeterministic exponential time. We also show that to prove membership in languages in EXP, the honest provers need the power of EXP only. The first part of the proof of the main result extends recent techniques of polynomial extrapolation used in the single prover case by Lund, Fortnow, Karloff, Nisan, and Shamir. The second part is a verification scheme for multilinearity of a function in several variables held by an oracle and can be viewed as an independent result on program verification. Its proof rests on combinatorial techniques employing a simple isoperimetric inequality for certain graphs.
---
paper_title: Worst-Case Hardness Suffices for Derandomization: A New Method for Hardness-Randomness Trade-Offs
paper_content:
Up to now, the known derandomization methods have been derived assuming average-case hardness conditions. In this paper we instead present the first worst-case hardness conditions sufficient to obtain P=BPP.
---
paper_title: A new general derandomization method
paper_content:
We show that quick hitting set generators can replace quick pseudorandom generators to derandomize any probabilistic two-sided error algorithms. Up to now quick hitting set generators have been known as the general and uniform derandomization method for probabilistic one-sided error algorithms, while quick pseudorandom generators as the general and uniform method to derandomize probabilistic two-sided error algorithms. Our method is based on a deterministic algorithm that, given a Boolean circuit C and given access to a hitting set generator, constructs a discrepancy set for C. The main novelty is that the discrepancy set depends on C, so the new derandomization method is not uniform (i.e., not oblivious). The algorithm works in time exponential in k(p(n)) where k(·) is the price of the hitting set generator and p(·) is a polynomial function in the size of C. We thus prove that if a logarithmic price quick hitting set generator exists then BPP = P.
---
paper_title: BPP has subexponential time simulations unless EXPTIME has publishable proofs
paper_content:
It is shown that BPP can be simulated in subexponential time for infinitely many input lengths unless exponential time collapses to the second level of the polynomial-time hierarchy, has polynomial-size circuits, and has publishable proofs (EXPTIME=MA). It is also shown that BPP is contained in subexponential time unless exponential time has publishable proofs for infinitely many input lengths. In addition, it is shown that BPP can be simulated in subexponential time for infinitely many input lengths unless there exist unary languages in MA/P. The proofs are based on the recent characterization of the power of multiprover interactive protocols and on random self-reducibility via low degree polynomials. They exhibit an interplay between Boolean circuit simulation, interactive proofs and classical complexity classes. An important feature of this proof is that it does not relativize.
---
paper_title: Decoding of Reed Solomon codes beyond the error-correction bound
paper_content:
We present a randomized algorithm which takes as input n distinct points {(x_i, y_i)}_{i=1}^n from F × F (where F is a field) and integer parameters t and d and returns a list of all univariate polynomials f over F in the variable x of degree at most d which agree with the given set of points in at least t places (i.e., y_i = f(x_i) for at least t values of i), provided t = Ω(√(nd)). The running time is bounded by a polynomial in n. This immediately provides a maximum likelihood decoding algorithm for Reed Solomon Codes, which works in a setting with a larger number of errors than any previously known algorithm. To the best of our knowledge, this is the first efficient (i.e., polynomial time bounded) algorithm which provides error recovery capability beyond the error-correction bound of a code for any efficient (i.e., constant or even polynomial rate) code.
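The decoding problem being solved here can be stated very concretely; the brute-force routine below simply enumerates all low-degree polynomials over a tiny prime field and keeps those agreeing with the received word in at least t positions. This is only an illustration of the problem statement, not the paper's interpolation-based algorithm, and the example data are made up.

```python
# Brute-force list decoding over GF(p): NOT Sudan's algorithm, just the problem definition.
from itertools import product

def list_decode_bruteforce(points, p, d, t):
    out = []
    for coeffs in product(range(p), repeat=d + 1):          # all polynomials of degree <= d
        agree = sum(1 for x, y in points
                    if sum(c * pow(x, i, p) for i, c in enumerate(coeffs)) % p == y)
        if agree >= t:
            out.append(coeffs)                               # lowest-order coefficient first
    return out

# Codeword of f(x) = 1 + 2x over GF(7), with the positions x = 4 and x = 5 corrupted.
points = [(0, 1), (1, 3), (2, 5), (3, 0), (4, 6), (5, 2), (6, 6)]
print(list_decode_bruteforce(points, p=7, d=1, t=5))         # [(1, 2)], i.e. 1 + 2x
```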
---
paper_title: Improved low-degree testing and its applications
paper_content:
NP = PCP(log n, 1) and related results crucially depend upon the close connection between the probability with which a function passes a low degree test and the distance of this function to the nearest degree d polynomial. In this paper we study a test proposed by Rubinfeld and Sudan [30]. The strongest previously known connection for this test states that a function passes the test with probability δ for some δ > 7/8 iff the function has agreement ≈ δ with a polynomial of degree d. We present a new, and surprisingly strong, analysis which shows that the preceding statement is true for arbitrarily small δ, provided the field size is polynomially larger than d/δ. The analysis uses a version of Hilbert irreducibility, a tool of algebraic geometry. As a consequence we obtain an alternate construction for the following proof system: A constant prover 1-round proof system for NP languages in which the verifier uses O(log n) random bits, receives answers of size O(log n) bits, and has an error probability of at most 2^{-log^{1-ε} n}. Such a proof system, which implies the NP-hardness of approximating Set Cover to within Ω(log n) factors, has already been obtained by Raz and Safra [29]. Raz and Safra obtain their result by giving a strong analysis, in the sense described above, of a new low-degree test that they present. A second consequence of our analysis is a self tester/corrector for any buggy program that (supposedly) computes a polynomial over a finite field. If the program is correct only on δ fraction of inputs where δ = 1/|F|^{0.5}, then the tester/corrector determines δ and generates O(1/δ) values for every input, such that one of them is the correct output. In fact, our results yield something stronger: Given the buggy program, we can construct O(1/δ) randomized programs such that one of them is correct on every input, with high probability. Such a strong self-corrector is a useful tool in complexity theory with some applications known.
---
paper_title: Pseudorandom generators without the XOR Lemma
paper_content:
R. Impagliazzo and A. Wigderson (1997) have recently shown that if there exists a decision problem solvable in time 2^{O(n)} and having circuit complexity 2^{Ω(n)} (for all but finitely many n) then P=BPP. This result is a culmination of a series of works showing connections between the existence of hard predicates and the existence of good pseudorandom generators. The construction of Impagliazzo and Wigderson goes through three phases of "hardness amplification" (a multivariate polynomial encoding, a first derandomized XOR Lemma, and a second derandomized XOR Lemma) that are composed with the Nisan-Wigderson (1994) generator. In this paper we present two different approaches to proving the main result of Impagliazzo and Wigderson. In developing each approach, we introduce new techniques and prove new results that could be useful in future improvements and/or applications of hardness-randomness trade-offs.
---
paper_title: List decoding: algorithms and applications
paper_content:
Over the years coding theory and complexity theory have benefited from a number of mutually enriching connections. This article focuses on a new connection that has emerged between the two topics in the recent years. This connection is centered around the notion of “list-decoding” for error-correcting codes. In this survey we describe the list-decoding problem, the algorithms that have been developed, and a diverse collection of applications within complexity theory.
---
paper_title: Binary codes with specified minimum distance
paper_content:
Two n-digit sequences, called "points," of binary digits are said to be at distance d if exactly d corresponding digits are unlike in the two sequences. The construction of sets of points, called codes, in which some specified minimum distance is maintained between pairs of points is of interest in the design of self-checking systems for computing with or transmitting binary digits, the minimum distance being the minimum number of digital errors required to produce an undetected error in the system output. Previous work in the field had established general upper bounds for the number of n-digit points in codes of minimum distance d with certain properties. This paper gives new results in the field in the form of theorems which permit systematic construction of codes for given n, d; for some n, d, the codes contain the greatest possible numbers of points.
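The task of constructing codes for given n and d can be illustrated with the simple greedy (Gilbert-Varshamov-style) procedure below: scan all n-bit words and keep each one at Hamming distance at least d from everything kept so far. This is a generic illustration, not the specific constructions of the paper.

```python
# Greedy construction of a binary code of length n with minimum distance >= d.
def greedy_code(n, d):
    code = []
    for w in range(2 ** n):
        if all(bin(w ^ c).count("1") >= d for c in code):   # Hamming distance via XOR popcount
            code.append(w)
    return code

code = greedy_code(n=5, d=3)
print(len(code), [format(c, "05b") for c in code])   # 4 codewords, pairwise distance >= 3
```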
---
paper_title: Graph nonisomorphism has subexponential size proofs unless the polynomial-time hierarchy collapses
paper_content:
We establish hardness versus randomness trade-offs for a broad class of randomized procedures. In particular, we create efficient nondeterministic simulations of bounded round Arthur-Merlin games using a language in exponential time that cannot be decided by polynomial size oracle circuits with access to satisfiability. We show that every language with a bounded round Arthur-Merlin game has subexponential size membership proofs for infinitely many input lengths unless the polynomial-time hierarchy collapses. This provides the first strong evidence that graph nonisomorphism has subexponential size proofs. We set up a general framework for derandomization which encompasses more than the traditional model of randomized computation. For a randomized procedure to fit within this framework, we only require that for any fixed input the complexity of checking whether the procedure succeeds on a given random bit sequence is not too high. We then apply our derandomization technique to four fundamental complexity theoretic constructions: The Valiant-Vazirani random hashing technique which prunes the number of satisfying assignments of a Boolean formula to one, and related procedures like computing satisfying assignments to Boolean formulas non-adaptively given access to an oracle for satisfiability. The algorithm of Bshouty et al. for learning Boolean circuits. Constructing matrices with high rigidity. Constructing polynomial-size universal traversal sequences. We also show that if linear space requires exponential size circuits, then space bounded randomized computations can be simulated deterministically with only a constant factor overhead in space.
---
paper_title: BPP has subexponential time simulations unless EXPTIME has publishable proofs
paper_content:
It is shown that BPP can be simulated in subexponential time for infinitely many input lengths unless exponential time collapses to the second level of the polynomial-time hierarchy, has polynomial-size circuits, and has publishable proofs (EXPTIME=MA). It is also shown that BPP is contained in subexponential time unless exponential time has publishable proofs for infinitely many input lengths. In addition, it is shown that BPP can be simulated in subexponential time for infinitely many input lengths unless there exist unary languages in MA/P. The proofs are based on the recent characterization of the power of multiprover interactive protocols and on random self-reducibility via low degree polynomials. They exhibit an interplay between Boolean circuit simulation, interactive proofs and classical complexity classes. An important feature of this proof is that it does not relativize.
---
paper_title: Trading group theory for randomness
paper_content:
In a previous paper [BS] we proved, using the elements of the theory of nilpotent groups, that some of the fundamental computational problems in matrix groups belong to NP. These problems were also shown to belong to coNP, assuming an unproven hypothesis concerning finite simple groups. The aim of this paper is to replace most of the (proven and unproven) group theory of [BS] by elementary combinatorial arguments. The result we prove is that relative to a random oracle B, the mentioned matrix group problems belong to (NP ∩ coNP)^B. The problems we consider are membership in and order of a matrix group given by a list of generators. These problems can be viewed as multidimensional versions of a close relative of the discrete logarithm problem. Hence NP ∩ coNP might be the lowest natural complexity class they may fit in. We remark that the results remain valid for black box groups where group operations are performed by an oracle. The tools we introduce seem interesting in their own right. We define a new hierarchy of complexity classes AM(k) “just above NP”, introducing Arthur vs. Merlin games, the bounded-away version of Papadimitriou's Games against Nature. We prove that in spite of their analogy with the polynomial time hierarchy, the finite levels of this hierarchy collapse to AM = AM(2). Using a combinatorial lemma on finite groups [BE], we construct a game by which the nondeterministic player (Merlin) is able to convince the random player (Arthur) about the relation |G| = N, provided Arthur trusts conclusions based on statistical evidence (such as a Solovay-Strassen type “proof” of primality). One can prove that AM consists precisely of those languages which belong to NP^B for almost every oracle B. Our hierarchy has an interesting, still unclarified relation to another hierarchy, obtained by removing the central ingredient from the User vs. Expert games of Goldwasser, Micali and Rackoff.
---
paper_title: Near-optimal conversion of hardness into pseudo-randomness
paper_content:
Various efforts have been made to derandomize probabilistic algorithms using the assumption that there exists a problem in E = dtime(2^{O(n)}) that requires circuits of size s(n) (for some function s). These results are based on the NW (Nisan & Wigderson, 1997) generator. For the strong lower bound s(n) = 2^{εn}, the optimal derandomization is P=BPP. However, for weaker lower bound functions s(n), these constructions fall short of the natural conjecture for optimal derandomization that bptime(t) ⊆ dtime(2^{O(s^{-1}(t))}). The gap is due to an inherent efficiency limitation in NW-style pseudorandom generators. We are able to obtain derandomization in almost optimal time using any lower bound s(n). We do this by using the NW-generator in a more sophisticated way. We view any failure of the generator as a reduction from the given hard function to its restrictions on smaller input sizes. Thus, either the original construction works optimally or one of the restricted functions is as hard as the original. Any such restriction can then be plugged into the NW-generator recursively. This process generates many candidate generators, and at least one is guaranteed to be good. To perform the approximation of the acceptance probability of the given circuit, we run a tournament between the candidate generators which yields an accurate estimate. We explore information theoretic analogs of our new construction. The inherent limitation of the NW-generator makes the extra randomness required by that extractor suboptimal. However, applying our construction, we get an almost optimal disperser.
---
paper_title: Simple extractors for all min-entropies and a new pseudorandom generator
paper_content:
A “randomness extractor” is an algorithm that given a sample from a distribution with sufficiently high min-entropy and a short random seed produces an output that is statistically indistinguishable from uniform. (Min-entropy is a measure of the amount of randomness in a distribution.) We present a simple, self-contained extractor construction that produces good extractors for all min-entropies. Our construction is algebraic and builds on a new polynomial-based approach introduced by Ta-Shma et al. [2001b]. Using our improvements, we obtain, for example, an extractor with output length m = k/(log n)^{O(1/α)} and seed length (1 + α) log n for an arbitrary 0 < α ≤ 1, where n is the input length, and k is the min-entropy of the input distribution. A “pseudorandom generator” is an algorithm that given a short random seed produces a long output that is computationally indistinguishable from uniform. Our technique also gives a new way to construct pseudorandom generators from functions that require large circuits. Our pseudorandom generator construction is not based on the Nisan-Wigderson generator [Nisan and Wigderson 1994], and turns worst-case hardness directly into pseudorandomness. The parameters of our generator match those in Impagliazzo and Wigderson [1997] and Sudan et al. [2001] and in particular are strong enough to obtain a new proof that P = BPP if E requires exponential size circuits. Our construction also gives the following improvements over previous work: --- We construct an optimal “hitting set generator” that stretches O(log n) random bits into s^{Ω(1)} pseudorandom bits when given a function on log n bits that requires circuits of size s. This yields a quantitatively optimal hardness versus randomness tradeoff for both RP and BPP and solves an open problem raised in Impagliazzo et al. [1999]. --- We give the first construction of pseudorandom generators that fool nondeterministic circuits when given a function that requires large nondeterministic circuits. This technique also gives a quantitatively optimal hardness versus randomness tradeoff for AM and the first hardness amplification result for nondeterministic circuits.
---
paper_title: A new general derandomization method
paper_content:
We show that quick hitting set generators can replace quick pseudorandom generators to derandomize any probabilistic two-sided error algorithms. Up to now quick hitting set generators have been known as the general and uniform derandomization method for probabilistic one-sided error algorithms, while quick pseudorandom generators as the general and uniform method to derandomize probabilistic two-sided error algorithms. Our method is based on a deterministic algorithm that, given a Boolean circuit C and given access to a hitting set generator, constructs a discrepancy set for C. The main novelty is that the discrepancy set depends on C, so the new derandomization method is not uniform (i.e., not oblivious). The algorithm works in time exponential in k(p(n)) where k(·) is the price of the hitting set generator and p(·) is a polynomial function in the size of C. We thus prove that if a logarithmic price quick hitting set generator exists then BPP = P.
---
paper_title: Another proof that BPP ⊆ PH (and more)
paper_content:
We provide another proof of the Sipser-Lautemann Theorem by which BPP ⊆ MA (⊆ PH). The current proof is based on strong results regarding the amplification of BPP, due to Zuckerman (1996). Given these results, the current proof is even simpler than previous ones. Furthermore, extending the proof leads to two results regarding MA: MA ⊆ ZPP^NP (which seems to be new), and that two-sided error MA equals MA. Finally, we survey the known facts regarding the fragment of the polynomial-time hierarchy that contains MA.
---
paper_title: NP is as easy as detecting unique solutions
paper_content:
For every known NP-complete problem, the number of solutions of its instances varies over a large range, from zero to exponentially many. It is therefore natural to ask if the inherent intractability of NP-complete problems is caused by this wide variation. We give a negative answer to this question using the notion of randomized polynomial time reducibility. We show that the problems of distinguishing between instances of SAT having zero or one solution, or of finding solutions to instances of SAT having a unique solution, are as hard as SAT, under randomized reductions. Several corollaries about the difficulty of specific problems follow. For example, computing the parity of the number of solutions of a SAT formula is shown to be NP-hard, and deciding if a SAT formula has a unique solution is shown to be D^p-hard, under randomized reduction. Central to the study of cryptography is the question as to whether there exist NP-problems whose instances have solutions that are unique but are hard to find. Our result can be interpreted as strengthening the belief that such problems exist.
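The randomized reduction behind this result isolates a single solution by intersecting the (unknown) solution set with random GF(2) affine constraints. The toy experiment below estimates how often a unique solution survives when the number of constraints roughly matches log2 of the set size; the solution set here is a made-up stand-in for the satisfying assignments of a formula.

```python
# Toy demonstration of the Valiant-Vazirani isolation idea with random GF(2) constraints.
import random

def isolated(solutions, n, k):
    cons = [([random.randint(0, 1) for _ in range(n)], random.randint(0, 1)) for _ in range(k)]
    survivors = [x for x in solutions
                 if all(sum(a_i & x_i for a_i, x_i in zip(a, x)) % 2 == b for a, b in cons)]
    return len(survivors) == 1

n = 8
solutions = random.sample([tuple(map(int, format(v, "08b"))) for v in range(2 ** n)], 6)
trials = 2000
# With 2^(k-2) <= |S| <= 2^(k-1), a unique survivor remains with probability >= 1/8.
print(sum(isolated(solutions, n, k=4) for _ in range(trials)) / trials)
```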
---
paper_title: Graph nonisomorphism has subexponential size proofs unless the polynomial-time hierarchy collapses
paper_content:
We establish hardness versus randomness trade-offs for a broad class of randomized procedures. In particular, we create efficient nondeterministic simulations of bounded round Arthur-Merlin games using a language in exponential time that cannot be decided by polynomial size oracle circuits with access to satisfiability. We show that every language with a bounded round Arthur-Merlin game has subexponential size membership proofs for infinitely many input lengths unless the polynomial-time hierarchy collapses. This provides the first strong evidence that graph nonisomorphism has subexponential size proofs. We set up a general framework for derandomization which encompasses more than the traditional model of randomized computation. For a randomized procedure to fit within this framework, we only require that for any fixed input the complexity of checking whether the procedure succeeds on a given random bit sequence is not too high. We then apply our derandomization technique to four fundamental complexity theoretic constructions: The Valiant-Vazirani random hashing technique which prunes the number of satisfying assignments of a Boolean formula to one, and related procedures like computing satisfying assignments to Boolean formulas non-adaptively given access to an oracle for satisfiability. The algorithm of Bshouty et al. for learning Boolean circuits. Constructing matrices with high rigidity. Constructing polynomial-size universal traversal sequences. We also show that if linear space requires exponential size circuits, then space bounded randomized computations can be simulated deterministically with only a constant factor overhead in space.
---
paper_title: Simple extractors for all min-entropies and a new pseudorandom generator
paper_content:
A “randomness extractor” is an algorithm that given a sample from a distribution with sufficiently high min-entropy and a short random seed produces an output that is statistically indistinguishable from uniform. (Min-entropy is a measure of the amount of randomness in a distribution.) We present a simple, self-contained extractor construction that produces good extractors for all min-entropies. Our construction is algebraic and builds on a new polynomial-based approach introduced by Ta-Shma et al. [2001b]. Using our improvements, we obtain, for example, an extractor with output length m = k/(log n)^{O(1/α)} and seed length (1 + α) log n for an arbitrary 0 < α ≤ 1, where n is the input length, and k is the min-entropy of the input distribution. A “pseudorandom generator” is an algorithm that given a short random seed produces a long output that is computationally indistinguishable from uniform. Our technique also gives a new way to construct pseudorandom generators from functions that require large circuits. Our pseudorandom generator construction is not based on the Nisan-Wigderson generator [Nisan and Wigderson 1994], and turns worst-case hardness directly into pseudorandomness. The parameters of our generator match those in Impagliazzo and Wigderson [1997] and Sudan et al. [2001] and in particular are strong enough to obtain a new proof that P = BPP if E requires exponential size circuits. Our construction also gives the following improvements over previous work: --- We construct an optimal “hitting set generator” that stretches O(log n) random bits into s^{Ω(1)} pseudorandom bits when given a function on log n bits that requires circuits of size s. This yields a quantitatively optimal hardness versus randomness tradeoff for both RP and BPP and solves an open problem raised in Impagliazzo et al. [1999]. --- We give the first construction of pseudorandom generators that fool nondeterministic circuits when given a function that requires large nondeterministic circuits. This technique also gives a quantitatively optimal hardness versus randomness tradeoff for AM and the first hardness amplification result for nondeterministic circuits.
---
paper_title: Derandomizing Arthur–Merlin Games using Hitting Sets
paper_content:
We prove that AM (and hence Graph Nonisomorphism) is in NP if for some ε > 0, some language in NE ∩ coNE requires nondeterministic circuits of size 2^{εn}. This improves results of Arvind and Kobler and of Klivans and van Melkebeek who proved the same conclusion, but under stronger hardness assumptions. The previous results on derandomizing AM were based on pseudorandom generators. In contrast, our approach is based on a strengthening of Andreev, Clementi and Rolim's hitting set approach to derandomization. As a spin-off, we show that this approach is strong enough to give an easy proof of the following implication: for some ε > 0, if there is a language in E which requires nondeterministic circuits of size 2^{εn}, then P = BPP. This differs from Impagliazzo and Wigderson's theorem "only" by replacing deterministic circuits with nondeterministic ones.
---
paper_title: Extractors from Reed-Muller codes
paper_content:
Finding explicit extractors is an important derandomization goal that has received a lot of attention in the past decade. Previous research has focused on two approaches, one related to hashing and the other to pseudorandom generators. A third view, regarding extractors as good error correcting codes, was noticed before. Yet, researchers had failed to build extractors directly from a good code without using other tools from pseudorandomness. We succeed in constructing an extractor directly from a Reed-Muller code. To do this, we develop a novel proof technique. Furthermore, our construction is the first to achieve a degree close to linear. In contrast, the best previous constructions brought the log of the degree within a constant of optimal, which gives polynomial degree. This improvement is important for certain applications. For example, it follows that approximating the VC dimension to within a factor of N^{1-δ} is AM-hard for any positive δ.
---
paper_title: Simple extractors for all min-entropies and a new pseudorandom generator
paper_content:
A “randomness extractor” is an algorithm that given a sample from a distribution with sufficiently high min-entropy and a short random seed produces an output that is statistically indistinguishable from uniform. (Min-entropy is a measure of the amount of randomness in a distribution.) We present a simple, self-contained extractor construction that produces good extractors for all min-entropies. Our construction is algebraic and builds on a new polynomial-based approach introduced by Ta-Shma et al. [2001b]. Using our improvements, we obtain, for example, an extractor with output length m = k/(log n)^{O(1/α)} and seed length (1 + α) log n for an arbitrary 0 < α ≤ 1, where n is the input length, and k is the min-entropy of the input distribution. A “pseudorandom generator” is an algorithm that given a short random seed produces a long output that is computationally indistinguishable from uniform. Our technique also gives a new way to construct pseudorandom generators from functions that require large circuits. Our pseudorandom generator construction is not based on the Nisan-Wigderson generator [Nisan and Wigderson 1994], and turns worst-case hardness directly into pseudorandomness. The parameters of our generator match those in Impagliazzo and Wigderson [1997] and Sudan et al. [2001] and in particular are strong enough to obtain a new proof that P = BPP if E requires exponential size circuits. Our construction also gives the following improvements over previous work: --- We construct an optimal “hitting set generator” that stretches O(log n) random bits into s^{Ω(1)} pseudorandom bits when given a function on log n bits that requires circuits of size s. This yields a quantitatively optimal hardness versus randomness tradeoff for both RP and BPP and solves an open problem raised in Impagliazzo et al. [1999]. --- We give the first construction of pseudorandom generators that fool nondeterministic circuits when given a function that requires large nondeterministic circuits. This technique also gives a quantitatively optimal hardness versus randomness tradeoff for AM and the first hardness amplification result for nondeterministic circuits.
---
paper_title: Randomness vs. time: de-randomization under a uniform assumption
paper_content:
We prove that if BPP ≠ EXP, then every problem in BPP can be solved deterministically in subexponential time on almost every input (on every samplable ensemble for infinitely many input sizes). This is the first derandomization result for BPP based on uniform, noncryptographic hardness assumptions. It implies the following gap in the average-instance complexities of problems in BPP: either these complexities are always sub-exponential or they contain arbitrarily large exponential functions. We use a construction of a small "pseudorandom" set of strings from a "hard function" in EXP which is identical to that used in the analogous non-uniform results described previously. However, previous proofs of correctness assume the "hard function" is not in P/poly. They give a non-constructive argument that a circuit distinguishing the pseudo-random strings from truly random strings implies that a similarly-sized circuit exists computing the "hard function". Our main technical contribution is to show that, if the "hard function" has certain properties, then this argument can be made constructive. We then show that, assuming EXP ⊆ P/poly, there are EXP-complete functions with these properties.
---
paper_title: The complexity of computing the permanent
paper_content:
It is shown that the permanent function of (0, 1)-matrices is a complete problem for the class of counting problems associated with nondeterministic polynomial time computations. Related counting problems are also considered. The reductions used are characterized by their nontrivial use of arithmetic.
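For concreteness, the permanent is defined exactly like the determinant but without the alternating signs; the brute-force evaluation below runs in exponential time, which is consistent with (though of course does not prove) the #P-completeness shown here. For a 0/1 matrix it counts the perfect matchings of the associated bipartite graph.

```python
# Brute-force permanent: sum over all permutations, with no signs (unlike the determinant).
from itertools import permutations
from math import prod

def permanent(M):
    n = len(M)
    return sum(prod(M[i][sigma[i]] for i in range(n)) for sigma in permutations(range(n)))

print(permanent([[1, 1, 0],
                 [0, 1, 1],
                 [1, 0, 1]]))    # 2 perfect matchings in the corresponding bipartite graph
```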
---
paper_title: BPP has subexponential time simulations unless EXPTIME has publishable proofs
paper_content:
It is shown that BPP can be simulated in subexponential time for infinitely many input lengths unless exponential time collapses to the second level of the polynomial-time hierarchy, has polynomial-size circuits, and has publishable proofs (EXPTIME=MA). It is also shown that BPP is contained in subexponential time unless exponential time has publishable proofs for infinitely many input lengths. In addition, it is shown that BPP can be simulated in subexponential time for infinitely many input lengths unless there exist unary languages in MA/P. The proofs are based on the recent characterization of the power of multiprover interactive protocols and on random self-reducibility via low degree polynomials. They exhibit an interplay between Boolean circuit simulation, interactive proofs and classical complexity classes. An important feature of this proof is that it does not relativize.
---
paper_title: PP IS AS HARD AS THE POLYNOMIAL-TIME HIERARCHY*
paper_content:
In this paper, two interesting complexity classes, PP and ⊕P, are compared with PH, the polynomial-time hierarchy. It is shown that every set in PH is polynomial-time Turing reducible to a set in PP, and PH is included in BP·⊕P. As a consequence of the results, it follows that PP ⊆ PH (or ⊕P ⊆ PH) implies a collapse of PH. A stronger result is also shown: every set in PP(PH) is polynomial-time Turing reducible to a set in PP.
---
paper_title: Algebraic methods for interactive proof systems
paper_content:
A new algebraic technique for the construction of interactive proof systems is presented. Our technique is used to prove that every language in the polynomial-time hierarchy has an interactive proof system. This technique played a pivotal role in the recent proofs that IP = PSPACE [28] and that MIP = NEXP [4].
---
paper_title: The complexity of computing the permanent
paper_content:
It is shown that the permanent function of (0, 1)-matrices is a complete problem for the class of counting problems associated with nondeterministic polynomial time computations. Related counting problems are also considered. The reductions used are characterized by their nontrivial use of arithmetic.
---
paper_title: Pseudorandomness and average-case complexity via uniform reductions
paper_content:
Impagliazzo and Wigderson (1998) gave the first construction of pseudorandom generators from a uniform complexity assumption on EXP (namely EXP ≠ BPP). Unlike results in the nonuniform setting, their result does not provide a continuous trade-off between worst-case hardness and pseudorandomness, nor does it explicitly establish an average-case hardness result. We obtain an optimal worst-case to average-case connection for EXP: if EXP ⊄ BPTIME(t(n)), then EXP has problems that cannot be solved on a fraction 1/2 + 1/t'(n) of the inputs by BPTIME(t'(n)) algorithms, for t' = t^{Ω(1)}. We exhibit a PSPACE-complete downward self-reducible and random self-reducible problem. This slightly simplifies and strengthens the proof of Impagliazzo and Wigderson (1998), which used a #P-complete problem with these properties. We argue that the results in Impagliazzo and Wigderson (1998) and in this paper cannot be proved via "black-box" uniform reductions.
---
paper_title: PP IS AS HARD AS THE POLYNOMIAL-TIME HIERARCHY*
paper_content:
In this paper, two interesting complexity classes, PP and ⊕P, are compared with PH, the polynomial-time hierarchy. It is shown that every set in PH is polynomial-time Turing reducible to a set in PP, and PH is included in BP·⊕P. As a consequence of the results, it follows that PP ⊆ PH (or ⊕P ⊆ PH) implies a collapse of PH. A stronger result is also shown: every set in PP(PH) is polynomial-time Turing reducible to a set in PP.
---
paper_title: Pseudorandom generators without the XOR Lemma
paper_content:
R. Impagliazzo and A. Wigderson (1997) have recently shown that if there exists a decision problem solvable in time 2^{O(n)} and having circuit complexity 2^{Ω(n)} (for all but finitely many n) then P=BPP. This result is a culmination of a series of works showing connections between the existence of hard predicates and the existence of good pseudorandom generators. The construction of Impagliazzo and Wigderson goes through three phases of "hardness amplification" (a multivariate polynomial encoding, a first derandomized XOR Lemma, and a second derandomized XOR Lemma) that are composed with the Nisan-Wigderson (1994) generator. In this paper we present two different approaches to proving the main result of Impagliazzo and Wigderson. In developing each approach, we introduce new techniques and prove new results that could be useful in future improvements and/or applications of hardness-randomness trade-offs.
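Written out with quantifiers (editorial gloss), the Impagliazzo–Wigderson theorem that both approaches reprove is
\[ \exists\, L \in \mathrm{E} = \mathrm{DTIME}(2^{O(n)}) \ \exists\, \varepsilon > 0 \ \forall^{\infty} n : \ \mathrm{SIZE}_L(n) \ge 2^{\varepsilon n} \;\Longrightarrow\; \mathrm{P} = \mathrm{BPP}, \]
where $\mathrm{SIZE}_L(n)$ is the minimum circuit size deciding $L$ on inputs of length $n$ and $\forall^{\infty}$ means "for all but finitely many".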
---
paper_title: Derandomizing Arthur-Merlin Games under Uniform Assumptions
paper_content:
We study how the nondeterminism versus determinism problem and the time versus space problem are related to the problem of derandomization. In particular, we show two ways of derandomizing the complexity class AM under uniform assumptions, which was only known previously under non-uniform assumptions [13,14]. First, we prove that either AM = NP or it appears to any nondeterministic polynomial time adversary that NP is contained in deterministic subexponential time infinitely often. This implies that to any nondeterministic polynomial time adversary, the graph non-isomorphism problem appears to have subexponential-size proofs infinitely often, the first nontrivial derandomization of this problem without any assumption. Next, we show that either all of BPP = P, AM = NP, and PH ⊆ ⊕P hold, or for any t(n) = 2^{Ω(n)}, DTIME(t(n)) ⊆ DSPACE(t^ε(n)) infinitely often for any constant ε > 0. Similar tradeoffs also hold for a whole range of parameters. This improves previous results [17,5] and seems to be the first example of an interesting condition that implies three derandomization results at once.
---
paper_title: Efficiently Approximable Real-Valued Functions
paper_content:
We define a class, denoted APP, of real-valued functions f : {0, 1}^n → [0, 1] such that f can be approximated to within any ε > 0 by a probabilistic Turing machine running in time poly(n, 1/ε). The class APP can be viewed as a generalization of BPP. We argue that APP is more natural and more important than BPP, and that most results about BPP are better stated as results about APP. We show that APP contains a natural complete problem: computing the acceptance probability of a given Boolean circuit. In contrast, no complete problem is known for BPP. We observe that all known complexity-theoretic assumptions under which BPP can be derandomized also allow APP to be derandomized. On the other hand we construct an oracle under which BPP = P but APP does not collapse to the corresponding deterministic class AP. (However any oracle collapsing APP to AP also collapses BPP to P.)
---
paper_title: Comparing notions of full derandomization
paper_content:
Most of the hypotheses of full derandomization fall into two sets of equivalent statements: those equivalent to the existence of efficient pseudorandom generators and those equivalent to approximating the accepting probability of a circuit. We give the first relativized world where these sets of equivalent statements are not equivalent to each other.
---
paper_title: In search of an easy witness: exponential time vs. probabilistic polynomial time
paper_content:
Restricting the search space {0,1}^n to the set of truth tables of "easy" Boolean functions on log n variables, as well as using some known hardness-randomness tradeoffs, we establish a number of results relating the complexity of exponential-time and probabilistic polynomial-time complexity classes. In particular, we show that NEXP ⊆ P/poly ⇔ NEXP = MA; this can be interpreted to say that no derandomization of MA (and, hence, of promise-BPP) is possible unless NEXP contains a hard Boolean function. We also prove several downward closure results for ZPP, RP, BPP, and MA; e.g., we show EXP = BPP ⇔ EE = BPE, where EE is the double-exponential time class and BPE is the exponential-time analogue of BPP.
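For readers outside complexity theory (editorial note), the classes referred to in the downward closure result are $\mathrm{EE} = \mathrm{DTIME}(2^{2^{O(n)}})$ and $\mathrm{BPE} = \mathrm{BPTIME}(2^{O(n)})$, so the two equivalences highlighted above can be written as
\[ \mathrm{NEXP} \subseteq \mathrm{P/poly} \iff \mathrm{NEXP} = \mathrm{MA}, \qquad \mathrm{EXP} = \mathrm{BPP} \iff \mathrm{EE} = \mathrm{BPE}. \]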
---
paper_title: Derandomizing Arthur–Merlin Games using Hitting Sets
paper_content:
We prove that AM (and hence Graph Nonisomorphism) is in NP if for some ε > 0, some language in NE ∩ coNE requires nondeterministic circuits of size 2^{εn}. This improves results of Arvind and Kobler and of Klivans and van Melkebeek who proved the same conclusion, but under stronger hardness assumptions. The previous results on derandomizing AM were based on pseudorandom generators. In contrast, our approach is based on a strengthening of Andreev, Clementi and Rolim's hitting set approach to derandomization. As a spin-off, we show that this approach is strong enough to give an easy proof of the following implication: for some ε > 0, if there is a language in E which requires nondeterministic circuits of size 2^{εn}, then P = BPP. This differs from Impagliazzo and Wigderson's theorem "only" by replacing deterministic circuits with nondeterministic ones.
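As background for the approach named above (editorial addition), a hitting-set generator is a deterministically and efficiently computable family of small sets $H_n \subseteq \{0,1\}^n$ such that every circuit of size $n$ that accepts at least half of its $2^n$ inputs accepts at least one element of $H_n$. Enumerating $H_n$ suffices to derandomize one-sided-error (RP-type) algorithms, and the cited paper strengthens this idea so that it also yields derandomization in the two-sided-error and Arthur-Merlin settings.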
---
paper_title: Nonrelativizing separations
paper_content:
We show that MA_EXP, the exponential time version of the Merlin-Arthur class, does not have polynomial size circuits. This significantly improves the previously known result due to Kannan, since we furthermore show that our result does not relativize. This is the first separation result in complexity theory that does not relativize. As a corollary to our separation result we also obtain that PEXP, the exponential time version of PP, is not in P/poly.
---
paper_title: Non-Deterministic Exponential Time has Two-Prover Interactive Protocols
paper_content:
We determine the exact power of two-prover interactive proof systems introduced by Ben-Or, Goldwasser, Kilian, and Wigderson (1988). In this system, two all-powerful noncommunicating provers convince a randomizing polynomial time verifier in polynomial time that the input x belongs to the language L. We show that the class of languages having two-prover interactive proof systems is nondeterministic exponential time. We also show that to prove membership in languages in EXP, the honest provers need the power of EXP only. The first part of the proof of the main result extends recent techniques of polynomial extrapolation used in the single prover case by Lund, Fortnow, Karloff, Nisan, and Shamir. The second part is a verification scheme for multilinearity of a function in several variables held by an oracle and can be viewed as an independent result on program verification. Its proof rests on combinatorial techniques employing a simple isoperimetric inequality for certain graphs.
---
paper_title: Derandomizing Polynomial Identity Tests Means Proving Circuit Lower Bounds
paper_content:
We show that derandomizing Polynomial Identity Testing is essentially equivalent to proving arithmetic circuit lower bounds for NEXP. More precisely, we prove that if one can test in polynomial time (or even nondeterministic subexponential time, infinitely often) whether a given arithmetic circuit over integers computes an identically zero polynomial, then either (i) NEXP ⊄ P/poly or (ii) Permanent is not computable by polynomial-size arithmetic circuits. We also prove a (partial) converse: If Permanent requires superpolynomial-size arithmetic circuits, then one can test in subexponential time whether a given arithmetic circuit of polynomially bounded degree computes an identically zero polynomial. Since Polynomial Identity Testing is a coRP problem, we obtain the following corollary: If RP = P (or even coRP ⊆ ∩_{ε>0} NTIME(2^{n^ε}), infinitely often), then NEXP is not computable by polynomial-size arithmetic circuits. Thus establishing that RP = coRP or BPP = P would require proving superpolynomial lower bounds for Boolean or arithmetic circuits. We also show that any derandomization of RNC would yield new circuit lower bounds for a language in NEXP. We also prove unconditionally that NEXP^RP does not have polynomial-size Boolean or arithmetic circuits. Finally, we show that NEXP ⊄ P/poly if both BPP = P and low-degree testing is in P; here low-degree testing is the problem of checking whether a given Boolean circuit computes a function that is close to some low-degree polynomial over a finite field.
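Because the identity-testing problem is central here, a minimal sketch of the standard coRP algorithm it refers to may help; this is an editorial illustration based on the Schwartz-Zippel lemma, not code from the cited paper, and the function and parameter names are invented for the example.

import random

# Schwartz-Zippel lemma: a nonzero n-variate polynomial of total degree d,
# evaluated at a point drawn uniformly from S^n, is zero with probability
# at most d / |S|.  Evaluating the circuit at random points over a large
# prime field therefore gives a one-sided-error identity test.

PRIME = (1 << 61) - 1  # a large Mersenne prime, assumed much larger than the degree

def is_probably_identically_zero(circuit, num_vars, degree, trials=20):
    """circuit: a callable evaluating the arithmetic circuit modulo PRIME."""
    for _ in range(trials):
        point = [random.randrange(PRIME) for _ in range(num_vars)]
        if circuit(point) % PRIME != 0:
            return False  # a nonzero value certifies the polynomial is not zero
    # each trial errs with probability at most degree / PRIME for a nonzero polynomial
    return True

# Example: the identity (x + y)^2 - x^2 - 2xy - y^2 == 0
f = lambda v: (v[0] + v[1]) ** 2 - v[0] ** 2 - 2 * v[0] * v[1] - v[1] ** 2
print(is_probably_identically_zero(f, num_vars=2, degree=2))  # True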
---
paper_title: The complexity of computing the permanent
paper_content:
Abstract It is shown that the permanent function of (0, 1)-matrices is a complete problem for the class of counting problems associated with nondeterministic polynomial time computations. Related counting problems are also considered. The reductions used are characterized by their nontrivial use of arithmetic.
---
paper_title: PP IS AS HARD AS THE POLYNOMIAL-TIME HIERARCHY*
paper_content:
In this paper, two interesting complexity classes, PP and $ \oplus {\text{P}}$, are compared with PH, the polynomial-time hierarchy. It is shown that every set in PH is polynomial-time Turing reducible to a set in PP, and PH is included in ${\text{BP}} \cdot \oplus {\text{P}}$. As a consequence of the results, it follows that ${\text{PP}} \subseteq {\text{PH}}$ (or $\oplus {\text{P}} \subseteq {\text{PH}}$) implies a collapse of PH. A stronger result is also shown: every set in PP(PH) is polynomial-time Turing reducible to a set in PP.
---
paper_title: Graph nonisomorphism has subexponential size proofs unless the polynomial-time hierarchy collapses
paper_content:
We establish hardness versus randomness trade-offs for a broad class of randomized procedures. In particular, we create efficient nondeterministic simulations of bounded round Arthur-Merlin games using a language in exponential time that cannot be decided by polynomial size oracle circuits with access to satisfiability. We show that every language with a bounded round Arthur-Merlin game has subexponential size membership proofs for infinitely many input lengths unless the polynomial-time hierarchy collapses. This provides the first strong evidence that graph nonisomorphism has subexponential size proofs. We set up a general framework for derandomization which encompasses more than the traditional model of randomized computation. For a randomized procedure to fit within this framework, we only require that for any fixed input the complexity of checking whether the procedure succeeds on a given random bit sequence is not too high. We then apply our derandomization technique to four fundamental complexity theoretic constructions: The Valiant-Vazirani random hashing technique which prunes the number of satisfying assignments of a Boolean formula to one, and related procedures like computing satisfying assignments to Boolean formulas non-adaptively given access to an oracle for satisfiability. The algorithm of Bshouty et al. for learning Boolean circuits. Constructing matrices with high rigidity. Constructing polynomial-size universal traversal sequences. We also show that if linear space requires exponential size circuits, then space bounded randomized computations can be simulated deterministically with only a constant factor overhead in space.
---
|
Title: Derandomization: a brief overview
Section 1: History
Description 1: Provide a historical context and background on derandomization, including the emergence and significance of randomized algorithms and the belief that BPP may be equal to P.
Section 2: Hardness-randomness tradeoffs
Description 2: Discuss the core concept of trading computational hardness for randomness and the fundamental results and examples supporting this idea.
Section 3: The Nisan-Wigderson generator
Description 3: Explain the significance of the NW generator, its construction, and how it improves upon previous results by using hardness assumptions.
Section 4: Worst-case hardness-randomness tradeoffs
Description 4: Detail the advancements in derandomization achieved through focusing on worst-case hardness and its implications for proving BPP = P.
Section 5: Hitting-set generators
Description 5: Describe the concept of hitting-set generators, their construction, and their role in achieving derandomization results.
Section 6: Recent developments
Description 6: Summarize recent advances in derandomization research, including improvements in hardness-randomness tradeoffs and new applications.
Section 7: Achieving optimal hardness-randomness tradeoffs
Description 7: Discuss what constitutes an optimal hardness-randomness tradeoff and recent results that approach or achieve these optimal tradeoffs.
Section 8: Beyond computational complexity
Description 8: Explore the implications and applications of hardness-randomness tradeoffs beyond the realm of computational complexity, such as in information theory.
Section 9: Back to computational complexity
Description 9: Examine how insights from the information-theoretic setting can be used to improve computational complexity results, particularly in constructing extractors.
Section 10: Derandomizing BPP
Description 10: Provide an in-depth look at the methods and results for derandomizing the class BPP, including both non-uniform and uniform hardness assumptions.
Section 11: Derandomizing RP
Description 11: Discuss techniques specific to derandomizing the class RP, along with relevant results and their proofs.
Section 12: Derandomizing AM
Description 12: Detail the challenges and results related to derandomizing the class AM, including significant theorems and their implications.
Section 13: Circuit lower bounds from the derandomization of MA
Description 13: Investigate the connection between derandomization of MA and circuit lower bounds, including key results and their proofs.
Section 14: Circuit lower bounds from the derandomization of BPP
Description 14: Explore recent findings that link derandomizing BPP to proving circuit lower bounds for classes like NEXP and the Permanent function.
Section 15: Other Results
Description 15: Highlight additional results and advancements in derandomization and complexity theory, covering a variety of topics and open questions.
Section 16: What Next?
Description 16: Discuss open problems and future directions in derandomization research, with a focus on extending current results and addressing unresolved questions.
|
Open Versus Closed Hearing-Aid Fittings: A Literature Review of Both Fitting Approaches
| 15 |
---
paper_title: Occlusion effect of earmolds with different venting systems.
paper_content:
In this study the occlusion effect was quantified for five types of earmolds with different venting. Nine normal-hearing listeners and ten experienced hearing aid users were provided with conventional earmolds with 1.6 and 2.4 mm circular venting, shell-type earmolds with a novel vent design with equivalent cross-sectional vent areas, and nonoccluding soft silicone eartips of a commercial hearing instrument. For all venting systems, the occlusion effect was measured using a probe microphone system and subjectively rated in test and retest sessions. The results for both normal-hearing subjects and hearing aid users showed that the novel vents caused significantly less occlusion than the traditional vents. The occlusion effect associated with the soft silicone eartip was comparable to that in the nonoccluded ear. Test-retest reproducibility was higher for the subjective occlusion rating than for the objectively measured occlusion. Perceived occlusion revealed a closer relationship to measured occlusion in the ear in which the measured occlusion effect was higher (“high OE” ear) than in the “low OE” ear. As our results suggest that subjective judgment of occlusion is directly related to the acoustic mass of the air column in the vent, the amount of perceived occlusion may be predicted by the vent dimensions.
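Since the authors tie perceived occlusion to the acoustic mass of the air column in the vent, a small editorial sketch of that quantity may be useful. The formula below is the standard inertance of a cylindrical tube; end corrections are deliberately omitted (an assumption), and the numbers are illustrative only.

import math

RHO_AIR = 1.2  # approximate density of air in kg/m^3

def vent_acoustic_mass(length_mm, diameter_mm):
    """Acoustic mass (inertance) of a cylindrical vent, M_A = rho * L / S, in kg/m^4."""
    length_m = length_mm / 1000.0
    radius_m = (diameter_mm / 2.0) / 1000.0
    area_m2 = math.pi * radius_m ** 2
    return RHO_AIR * length_m / area_m2

# A wider vent of the same length has a smaller acoustic mass, consistent with
# the lower occlusion reported for larger (or shorter) vents.
print(vent_acoustic_mass(20, 1.6))   # 1.6 mm vent, 20 mm long
print(vent_acoustic_mass(20, 2.4))   # 2.4 mm vent, 20 mm long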
---
paper_title: Acoustic Attenuation between the Ears
paper_content:
In an investigation of the acoustical insulation between the ears, various earphones and obturating devices were used. Bone conduction was shown to be chiefly responsible for the acoustical leakage between the ears. Conditions were determined under which interaural insulation could be increased considerably. Most of the measurements were performed with a compensation method which appears to give more precise results than methods previously used, and which permits phase measurements.
---
paper_title: A Multicenter Trial of an Assess-and-Fit Hearing Aid Service Using Open Canal Fittings and Comply Ear Tips
paper_content:
Large potential benefits have been suggested for an assess-and-fit approach to hearing health care, particularly using open canal fittings. However, the clinical effectiveness has not previously been evaluated, nor has the efficiency of this approach in a National Health Service setting. These two outcomes were measured in a variety of clinical settings in the United Kingdom. Twelve services in England and Wales participated, and 540 people with hearing problems, not previously referred for assessment, were included. Of these, 68% (n = 369) were suitable and had hearing aids fitted to NAL-NL1 during the assess-and-fit visit using either open ear tips or Comply ear tips. The Glasgow Hearing Aid Benefit Profile was used to compare patients fitted with open ear tips with a group of patients from the English Modernization of Hearing Aid Services evaluation, who used custom earmolds. This showed a significant improvement in outcome for those with open ear tips after allowing for age and hearing loss in the analysis. In particular, the benefits of using bilateral open ear tips were significantly larger than those of bilateral custom earmolds. This assess-and-fit model showed a mean service efficiency gain of about 5% to 10%. The actual gain will depend on current practice, in particular on the separate appointments used, the numbers of patients failing to attend appointments, and the numbers not accepting a hearing aid solution for their problem. There are potentially further efficiency and quality gains to be made if patients are appropriately triaged before referral.
---
paper_title: Preferred signal path delay and high-pass cut-off in open fittings
paper_content:
The combination of delayed sound from a digital hearing aid with direct sound through an open or vented fitting can potentially degrade the sound quality due to audible changes in timbre and/or perception of echo. The present study was designed to test a number of delay and high-pass combinations under worst-case (i.e. most sensitive) conditions. Eighteen normal-hearing and 18 mildly hearing-impaired subjects performed the test in a paired comparison (A/B) task. The subjects were asked to select a preferred setting with respect to sound quality. The test was set in an anechoic chamber using recorded speech, environmental sounds, and own voice. Experimental hearing aids were fitted binaurally with open domes thus providing maximum ventilation. The preference data were processed using a statistical choice model that derives a ratio-scale. The analysis indicated that in these test conditions there was no change in sound quality when varying the delay in the range 5–10 ms and that there was a preferen...
---
paper_title: Comparison of Vent Effects between a Solid Earmold and a Hollow Earmold
paper_content:
where the thickness of the shell was the length of the vent. Vent diameters were 0, 1, 2, and 3 mm. Data Collection and Analysis: The vent effect was evaluated on real-ear aided response, real-ear occluded response during vocalization, subjective occlusion rating, insertion loss, and maximum available gain before feedback. Real-ear measurements were made with the Fonix 6500 probe-microphone real-ear system. Vocalizations from the participants were analyzed with a custom MATLAB program, and statistical analysis was conducted with SPSS software. Results: A systematic vent effect was seen with each earmold type as the nominal vent diameter changed. For the same vent diameter, the vent effect seen with the hollow earmold was greater than that of the solid earmold. Conclusions: Because of the difference in vent length (and thus acoustic mass) between a solid and a hollow earmold, the effect of vent diameter in a hollow earmold is more pronounced than that seen in a solid earmold of the same nominal vent diameter. Thus, a smaller vent diameter will be needed in a hollow earmold than in a solid earmold to achieve similar vent effects.
---
paper_title: Own voice qualities (OVQ) in hearing-aid users: There is more than just occlusion
paper_content:
Objective: Hearing-aid users’ problems with their own voice caused by occlusion are well known. Conversely, it remains essentially undocumented whether hearing-aid users expected not to have occlusion-related problems experience own-voice issues. Design: To investigate this topic, a dedicated Own Voice Qualities (OVQ) questionnaire was developed and used in two experiments with stratified samples. Study sample: In the main experiment, the OVQ was administered to 169 hearing-aid users (most of whom were expected not to have occlusion-related problems) and to a control group of 56 normally-hearing people. In the follow-up experiment, the OVQ was used in a cross-over study where 43 hearing-aid users rated own voice for an open fitting and a small-vent earmould fitting. Results: The results from the main experiment show that hearing-aid users (without occlusion) have more problems than the normal-hearing controls on several dimensions of own voice. The magnitude of these differences was found to be ge...
---
paper_title: Comparison of Vent Effects between a Solid Earmold and a Hollow Earmold
paper_content:
where the thickness of the shell was the length of the vent. Vent diameters were 0, 1, 2, and 3 mm. Data Collection and Analysis: The vent effect was evaluated on real-ear aided response, real-ear occluded response during vocalization, subjective occlusion rating, insertion loss, and maximum available gain before feedback. Real-ear measurements were made with the Fonix 6500 probe-microphone real-ear system. Vocalizations from the participants were analyzed with a custom MATLAB program, and statistical analysis was conducted with SPSS software. Results: A systematic vent effect was seen with each earmold type as the nominal vent diameter changed. For the same vent diameter, the vent effect seen with the hollow earmold was greater than that of the solid earmold. Conclusions: Because of the difference in vent length (and thus acoustic mass) between a solid and a hollow earmold, the effect of vent diameter in a hollow earmold is more pronounced than that seen in a solid earmold of the same nominal vent diameter. Thus, a smaller vent diameter will be needed in a hollow earmold than in a solid earmold to achieve similar vent effects.
---
paper_title: The Occlusion Effect in Unilateral versus Bilateral Hearing Aids
paper_content:
The benefit of bilateral hearing aids is well documented, but many hearing-aid users still wear only one aid. It is plausible that the occlusion effect is part of the reason for some hearing-aid users not wearing both hearing aids. In this study we quantified the subjective occlusion effect by asking ten experienced users of bilateral hearing aids and a reference group of ten normal-hearing individuals to rate the naturalness of their own voice while reading a text sample aloud. The subjective occlusion effect was evaluated in the unilateral versus bilateral condition for a variety of vent designs in earmolds and in a custom hearing aid. The subjective occlusion effect was significantly higher for bilateral hearing aids with all vent designs with the exception of a non-occluding eartip option. The subjective occlusion effect was reduced with the more open vent designs in both the unilateral and bilateral conditions. Assuming that the occlusion effect is a barrier to bilateral hearing aid use, these results indicate that open-hearing-aid fittings can help promote the use of two aids.
---
paper_title: Vent configurations on subjective and objective occlusion effect.
paper_content:
The current study reexamined the effect of vent diameters on objective and subjective occlusion effect (OE) while minimizing two possible sources of variability. Nine hearing-impaired participants with primarily a high-frequency hearing loss were evaluated. Laser shell-making technology was used to make ear inserts of completely-in-the-canal (CIC) hearing aids for the study. This was to minimize any potential slit leakage from the inserts. The vent dimensions were systematically altered during the study. Participants sustained /i/ for 5 sec, and the real-ear occluded response was measured with a custom-made program that performed frequency averaging to reduce response variability. Participants also repeated the phrase “Baby Jeannie is teeny tiny” and rated their own voice. The results showed a systematic change in the objective OE and subjective ratings of OE as the vent diameter was modified. Furthermore, a significant correlation was seen between subjective rating and objective occlusion effect.
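As an editorial clarification of what the objective measure in such studies is, the occlusion effect at frequency $f$ is typically reported as the difference in ear-canal sound pressure level, measured with a probe microphone near the eardrum while the participant vocalizes, between the occluded and the open ear:
\[ \mathrm{OE}(f) \;=\; L_{\text{occluded}}(f) - L_{\text{open}}(f) \quad [\mathrm{dB}], \]
so positive low-frequency values correspond to the boomy own-voice quality that the subjective ratings try to capture.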
---
paper_title: Relative loudness perception of low and high frequency sounds in the open and occluded ear
paper_content:
A comparison of published equal loudness contours indicates that different shapes are obtained at a comfortable level when the measurements are done in an occluded ear than when they are done in an open ear, even though all measurements are expressed as dB SPL at the eardrum. This paper presents the result from a loudness balancing test which confirms this observation. Eleven normal-hearing listeners balanced the level of a 500- and a 3000-Hz octave band babble-noise to the level of a 1500-Hz octave band babble-noise. The balancing test was completed in open and occluded ears using a loudspeaker and a hearing aid receiver, respectively. A probe tube microphone was used to measure the actual levels presented near the individual’s eardrum. The results show that an average of 10 dB higher level was selected for the 500-Hz octave band when listening with the occluded ear than when listening with the open ear. A large range of factors is discussed, but no physical explanation for the discrepancy was found. The...
---
paper_title: Occlusion effect of earmolds with different venting systems.
paper_content:
In this study the occlusion effect was quantified for five types of earmolds with different venting. Nine normal-hearing listeners and ten experienced hearing aid users were provided with conventional earmolds with 1.6 and 2.4 mm circular venting, shell-type earmolds with a novel vent design with equivalent cross-sectional vent areas, and nonoccluding soft silicone eartips of a commercial hearing instrument. For all venting systems, the occlusion effect was measured using a probe microphone system and subjectively rated in test and retest sessions. The results for both normal-hearing subjects and hearing aid users showed that the novel vents caused significantly less occlusion than the traditional vents. The occlusion effect associated with the soft silicone eartip was comparable to that in the nonoccluded ear. Test-retest reproducibility was higher for the subjective occlusion rating than for the objectively measured occlusion. Perceived occlusion revealed a closer relationship to measured occlusion in the ear in which the measured occlusion effect was higher (“high OE” ear) than in the “low OE” ear. As our results suggest that subjective judgment of occlusion is directly related to the acoustic mass of the air column in the vent, the amount of perceived occlusion may be predicted by the vent dimensions.
---
paper_title: Comparison of Vent Effects between a Solid Earmold and a Hollow Earmold
paper_content:
where the thickness of the shell was the length of the vent. Vent diameters were 0, 1, 2, and 3 mm. Data Collection and Analysis: The vent effect was evaluated on real-ear aided response, real-ear occluded response during vocalization, subjective occlusion rating, insertion loss, and maximum available gain before feedback. Real-ear measurements were made with the Fonix 6500 probe-microphone real-ear system. Vocalizations from the participants were analyzed with a custom MATLAB program, and statistical analysis was conducted with SPSS software. Results: A systematic vent effect was seen with each earmold type as the nominal vent diameter changed. For the same vent diameter, the vent effect seen with the hollow earmold was greater than that of the solid earmold. Conclusions: Because of the difference in vent length (and thus acoustic mass) between a solid and a hollow earmold, the effect of vent diameter in a hollow earmold is more pronounced than that seen in a solid earmold of the same nominal vent diameter. Thus, a smaller vent diameter will be needed in a hollow earmold than in a solid earmold to achieve similar vent effects.
---
paper_title: The Occlusion Effect in Unilateral versus Bilateral Hearing Aids
paper_content:
The benefit of bilateral hearing aids is well documented, but many hearing-aid users still wear only one aid. It is plausible that the occlusion effect is part of the reason for some hearing-aid users not wearing both hearing aids. In this study we quantified the subjective occlusion effect by asking ten experienced users of bilateral hearing aids and a reference group of ten normal-hearing individuals to rate the naturalness of their own voice while reading a text sample aloud. The subjective occlusion effect was evaluated in the unilateral versus bilateral condition for a variety of vent designs in earmolds and in a custom hearing aid. The subjective occlusion effect was significantly higher for bilateral hearing aids with all vent designs with the exception of a non-occluding eartip option. The subjective occlusion effect was reduced with the more open vent designs in both the unilateral and bilateral conditions. Assuming that the occlusion effect is a barrier to bilateral hearing aid use, these results indicate that open-hearing-aid fittings can help promote the use of two aids.
---
paper_title: Vent configurations on subjective and objective occlusion effect.
paper_content:
The current study reexamined the effect of vent diameters on objective and subjective occlusion effect (OE) while minimizing two possible sources of variability. Nine hearing-impaired participants with primarily a high-frequency hearing loss were evaluated. Laser shell-making technology was used to make ear inserts of completely-in-the-canal (CIC) hearing aids for the study. This was to minimize any potential slit leakage from the inserts. The vent dimensions were systematically altered during the study. Participants sustained /i/ for 5 sec, and the real-ear occluded response was measured with a custom-made program that performed frequency averaging to reduce response variability. Participants also repeated the phrase “Baby Jeannie is teeny tiny” and rated their own voice. The results showed a systematic change in the objective OE and subjective ratings of OE as the vent diameter was modified. Furthermore, a significant correlation was seen between subjective rating and objective occlusion effect.
---
paper_title: Occlusion effect of earmolds with different venting systems.
paper_content:
In this study the occlusion effect was quantified for five types of earmolds with different venting. Nine normal-hearing listeners and ten experienced hearing aid users were provided with conventional earmolds with 1.6 and 2.4 mm circular venting, shell-type earmolds with a novel vent design with equivalent cross-sectional vent areas, and nonoccluding soft silicone eartips of a commercial hearing instrument. For all venting systems, the occlusion effect was measured using a probe microphone system and subjectively rated in test and retest sessions. The results for both normal-hearing subjects and hearing aid users showed that the novel vents caused significantly less occlusion than the traditional vents. The occlusion effect associated with the soft silicone eartip was comparable to that in the nonoccluded ear. Test-retest reproducibility was higher for the subjective occlusion rating than for the objectively measured occlusion. Perceived occlusion revealed a closer relationship to measured occlusion in the ear in which the measured occlusion effect was higher (“high OE” ear) than in the “low OE” ear. As our results suggest that subjective judgment of occlusion is directly related to the acoustic mass of the air column in the vent, the amount of perceived occlusion may be predicted by the vent dimensions.
---
paper_title: The Occlusion Effect in Unilateral versus Bilateral Hearing Aids
paper_content:
The benefit of bilateral hearing aids is well documented, but many hearing-aid users still wear only one aid. It is plausible that the occlusion effect is part of the reason for some hearing-aid users not wearing both hearing aids. In this study we quantified the subjective occlusion effect by asking ten experienced users of bilateral hearing aids and a reference group of ten normal-hearing individuals to rate the naturalness of their own voice while reading a text sample aloud. The subjective occlusion effect was evaluated in the unilateral versus bilateral condition for a variety of vent designs in earmolds and in a custom hearing aid. The subjective occlusion effect was significantly higher for bilateral hearing aids with all vent designs with the exception of a non-occluding eartip option. The subjective occlusion effect was reduced with the more open vent designs in both the unilateral and bilateral conditions. Assuming that the occlusion effect is a barrier to bilateral hearing aid use, these results indicate that open-hearing-aid fittings can help promote the use of two aids.
---
paper_title: Occlusion effect of earmolds with different venting systems.
paper_content:
In this study the occlusion effect was quantified for five types of earmolds with different venting. Nine normal-hearing listeners and ten experienced hearing aid users were provided with conventional earmolds with 1.6 and 2.4 mm circular venting, shell-type earmolds with a novel vent design with equivalent cross-sectional vent areas, and nonoccluding soft silicone eartips of a commercial hearing instrument. For all venting systems, the occlusion effect was measured using a probe microphone system and subjectively rated in test and retest sessions. The results for both normal-hearing subjects and hearing aid users showed that the novel vents caused significantly less occlusion than the traditional vents. The occlusion effect associated with the soft silicone eartip was comparable to that in the nonoccluded ear. Test-retest reproducibility was higher for the subjective occlusion rating than for the objectively measured occlusion. Perceived occlusion revealed a closer relationship to measured occlusion in the ear in which the measured occlusion effect was higher (“high OE” ear) than in the “low OE” ear. As our results suggest that subjective judgment of occlusion is directly related to the acoustic mass of the air column in the vent, the amount of perceived occlusion may be predicted by the vent dimensions.
---
paper_title: Hearing aids with external receivers: Can they offer power and cosmetics?
paper_content:
The design of hearing solutions for people with moderate to severe hearing loss (HL) traditionally encounters the tradeoff between form and function, appearance and performance. That is, patients typically must sacrifice cosmetics and/or ease of use to obtain devices that can provide them with adequate gain and output. Conversely, larger hearing aid (HA) styles may provide sufficient gain, yet reduce patient satisfaction along a number of other variables. In fact, compared with patients with milder losses, persons with moderate to severe sensorineural HL are less satisfied with their hearing aids along a number of dimensions [1], including:
---
paper_title: Effect of low-frequency gain and venting effects on the benefit derived from directionality and noise reduction in hearing aids
paper_content:
When the frequency range over which vent-transmitted sound dominates amplification increases, the potential benefit from directional microphones and noise reduction decreases. Fitted with clinically appropriate vent sizes, 23 aided listeners with varying low-frequency hearing thresholds evaluated six schemes comprising three levels of gain at 250 Hz (0, 6, and 12 dB) combined with two features (directional microphone and noise reduction) enabled or disabled in the field. The low-frequency gain was 0 dB for vent-dominated sound, while the higher gains were achieved by amplifier-dominated sounds. A majority of listeners preferred 0-dB gain at 250 Hz and the features enabled. While the amount of low-frequency gain had no significant effect on speech recognition in noise or horizontal localization, speech recognition and front/back discrimination were significantly improved when the features were enabled, even when vent-transmitted sound dominated the low frequencies. The clinical implication is that there is...
---
paper_title: The Accuracy of Matching Target Insertion Gains With Open-Fit Hearing Aids
paper_content:
Purpose: To assess the accuracy with which target insertion gains were matched for a single type of open-fit hearing aid, both on initial fitting and after adjustment. Method: The hearing aids were f...
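For readers less familiar with real-ear terminology (editorial gloss), the quantity being matched to target is the real-ear insertion gain, the probe-microphone aided response minus the unaided (open-ear) response for the same stimulus:
\[ \mathrm{REIG}(f) \;=\; \mathrm{REAR}(f) - \mathrm{REUR}(f), \]
and fitting accuracy is then summarized as the deviation of measured REIG from the prescriptive target at the audiometric frequencies.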
---
paper_title: NAL-NL1 procedure for fitting nonlinear hearing aids: characteristics and comparisons with other procedures.
paper_content:
A new procedure for fitting nonlinear hearing aids (National Acoustic Laboratories' nonlinear fitting procedure, version 1 [NAL-NL1]) is described. The rationale is to maximize speech intelligibility while constraining loudness to be normal or less. Speech intelligibility is predicted by the Speech Intelligibility Index (SII), which has been modified to account for the reduction in performance associated with increasing degrees of hearing loss, especially at high frequencies. Prescriptions are compared for the NAL-NL1, desired sensation level [input/output], FIG6, and a threshold version of the Independent Hearing Aid Fitting Forum procedures. For an average speech input level, the NAL-NL1 prescriptions are very similar to those of the well-established NAL-Revised, Profound procedure. Compared with the other procedures, NAL-NL1 prescribes less low-frequency gain for flat and upward sloping audiograms. It prescribes less high-frequency gain for steeply sloping high-frequency hearing losses. NAL-NL1 tends to prescribe less compression than the other procedures. All procedures differ considerably from one another for some audiograms.
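The stated rationale can be paraphrased (an editorial simplification, not the published derivation) as a constrained optimization over the gain-frequency response $G(f)$ at each input level:
\[ \max_{G(f)} \ \mathrm{SII}_{\mathrm{HL}}\big(\text{speech amplified by } G\big) \quad \text{subject to} \quad \mathcal{L}_{\text{aided}} \le \mathcal{L}_{\text{normal}}, \]
where $\mathrm{SII}_{\mathrm{HL}}$ is the hearing-loss-corrected Speech Intelligibility Index and $\mathcal{L}$ denotes calculated loudness.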
---
paper_title: Own voice qualities (OVQ) in hearing-aid users: There is more than just occlusion
paper_content:
Objective: Hearing-aid users’ problems with their own voice caused by occlusion are well known. Conversely, it remains essentially undocumented whether hearing-aid users expected not to have occlusion-related problems experience own-voice issues. Design: To investigate this topic, a dedicated Own Voice Qualities (OVQ) questionnaire was developed and used in two experiments with stratified samples. Study sample: In the main experiment, the OVQ was administered to 169 hearing-aid users (most of whom were expected not to have occlusion-related problems) and to a control group of 56 normally-hearing people. In the follow-up experiment, the OVQ was used in a cross-over study where 43 hearing-aid users rated own voice for an open fitting and a small-vent earmould fitting. Results: The results from the main experiment show that hearing-aid users (without occlusion) have more problems than the normal-hearing controls on several dimensions of own voice. The magnitude of these differences was found to be ge...
---
paper_title: The International Outcome Inventory for Hearing Aids (IOI-HA): psychometric properties of the English version.
paper_content:
The International Outcome Inventory for Hearing Aids (IOI-HA) is a seven-item questionnaire designed to be generally applicable in evaluating the effectiveness of hearing aid treatments. The inventory was developed to facilitate cooperation among researchers and program evaluators in diverse settings. It is brief and general enough to be appended to other outcome measures that might be planned in a particular application, and will provide directly comparable data across otherwise incompatible projects. For this plan to be successful, it is essential to generate psychometrically equivalent translations in the languages in which hearing aid research and treatment assessments are performed. This article reports the psychometric properties of the inventory for the original English version. The items are reasonably internally consistent, providing adequate statistical support for summing the scores to generate a total outcome score. However, for maximum internal consistency, it would be desirable to generate t...
---
paper_title: The abbreviated profile of hearing aid benefit.
paper_content:
OBJECTIVE: To develop and evaluate a shortened version of the Profile of Hearing Aid Benefit, to be called the Abbreviated Profile of Hearing Aid Benefit, or APHAB. DESIGN: The Profile of Hearing Aid Benefit (PHAB) is a 66-item self-assessment, disability-based inventory that can be used to document the outcome of a hearing aid fitting, to compare several fittings, or to evaluate the same fitting over time. Data from 128 completed PHABs were used to select items for the Abbreviated PHAB. All subjects were elderly hearing-impaired adults who wore conventional analog hearing aids. Statistics of score distributions and psychometric properties of each of the APHAB subscales were determined. Data from 27 similar subjects were used to examine the test-retest properties of the instrument. Finally, equal-percentile profiles were generated for unaided, aided and benefit scores obtained from successful wearers of linear hearing aids. RESULTS: The APHAB uses a subset of 24 of the 66 items from the PHAB, scored in four 6-item subscales. Three of the subscales, Ease of Communication, Reverberation, and Background Noise, address speech understanding in various everyday environments. The fourth subscale, Aversiveness of Sounds, quantifies negative reactions to environmental sounds. The APHAB typically requires 10 minutes or less to complete, and it produces scores for unaided and aided performance as well as hearing aid benefit. Test-retest correlation coefficients were found to be moderate to high and similar to those reported in the literature for other scales of similar content and length. Critical differences for each subscale taken individually were judged to be fairly large; however, smaller differences between two tests from the same individual can be significant if the three speech communication subscales are considered jointly. CONCLUSIONS: The APHAB is a potentially valuable clinical instrument. It can be useful for quantifying the disability associated with a hearing loss and the reduction of disability that is achieved with a hearing aid.
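A short editorial sketch of how APHAB-style subscale and benefit scores are commonly computed follows; the seven-step response-to-percentage mapping is the one typically used in clinical software and is an assumption here rather than a quotation from the article.

# Commonly used APHAB response mapping (assumed): A = Always ... G = Never
RESPONSE_PERCENT = {"A": 99, "B": 87, "C": 75, "D": 50, "E": 25, "F": 12, "G": 1}

def subscale_score(responses):
    """Mean reported percentage of problems over the six items of one subscale."""
    values = [RESPONSE_PERCENT[r] for r in responses]
    return sum(values) / len(values)

def benefit(unaided, aided):
    """Benefit = unaided minus aided problem score; positive means fewer problems when aided."""
    return subscale_score(unaided) - subscale_score(aided)

# Example: Ease of Communication subscale, unaided vs. aided responses
print(benefit(list("BBCCBD"), list("EEFDEE")))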
---
paper_title: The Occlusion Effect in Unilateral versus Bilateral Hearing Aids
paper_content:
The benefit of bilateral hearing aids is well documented, but many hearing-aid users still wear only one aid. It is plausible that the occlusion effect is part of the reason for some hearing-aid users not wearing both hearing aids. In this study we quantified the subjective occlusion effect by asking ten experienced users of bilateral hearing aids and a reference group of ten normal-hearing individuals to rate the naturalness of their own voice while reading a text sample aloud. The subjective occlusion effect was evaluated in the unilateral versus bilateral condition for a variety of vent designs in earmolds and in a custom hearing aid. The subjective occlusion effect was significantly higher for bilateral hearing aids with all vent designs with the exception of a non-occluding eartip option. The subjective occlusion effect was reduced with the more open vent designs in both the unilateral and bilateral conditions. Assuming that the occlusion effect is a barrier to bilateral hearing aid use, these results indicate that open-hearing-aid fittings can help promote the use of two aids.
---
paper_title: Measuring Satisfaction with Amplification in Daily Life: The SADL Scale
paper_content:
Objective: To develop a self-report inventory to quantify satisfaction with hearing aids. Design: The inventory was developed in several stages. To determine the elements that are most important to satisfaction for most people, we conducted structured interviews and then designed a questionnaire. Hearing
---
paper_title: Occlusion effect of earmolds with different venting systems.
paper_content:
In this study the occlusion effect was quantified for five types of earmolds with different venting. Nine normal-hearing listeners and ten experienced hearing aid users were provided with conventional earmolds with 1.6 and 2.4 mm circular venting, shell type earmolds with a novel vent design with equivalent crosssectional vent areas, and nonoccluding soft silicone eartips of a commercial hearing instrument. For all venting systems, the occlusion effect was measured using a probe microphone system and subjectively rated in test and retest sessions. The results for both normal-hearing subjects and hearing aid users showed that the novel vents caused significantly less occlusion than the traditional vents. Occlusion effect associated with the soft silicone eartip was comparable to the nonoccluded ear. Test-retest reproducibility was higher for the subjective occlusion rating than for the objectively measured occlusion. Perceived occlusion revealed a closer relationship to measured occlusion in the ear in which the measured occlusion effect was higher (“high OE” ear) than in the “low OE” ear. As our results suggest that subjective judgment of occlusion is directly related to the acoustic mass of the air column in the vent, the amount of perceived occlusion may be predicted by the vent dimensions.
---
paper_title: Effects of earmold type on ability to locate sounds when wearing hearing aids.
paper_content:
Objective: To determine whether the choice of earmold type can affect aided auditory localization. Hypotheses: 1) for sensorineural hearing losses with good low-frequency hearing (Low group, n = 10), the use of open earmolds could avoid decrements in horizontal plane localization found with closed (o
---
paper_title: Effect of low-frequency gain and venting effects on the benefit derived from directionality and noise reduction in hearing aids
paper_content:
When the frequency range over which vent-transmitted sound dominates amplification increases, the potential benefit from directional microphones and noise reduction decreases. Fitted with clinically appropriate vent sizes, 23 aided listeners with varying low-frequency hearing thresholds evaluated six schemes comprising three levels of gain at 250 Hz (0, 6, and 12 dB) combined with two features (directional microphone and noise reduction) enabled or disabled in the field. The low-frequency gain was 0 dB for vent-dominated sound, while the higher gains were achieved by amplifier-dominated sounds. A majority of listeners preferred 0-dB gain at 250 Hz and the features enabled. While the amount of low-frequency gain had no significant effect on speech recognition in noise or horizontal localization, speech recognition and front/back discrimination were significantly improved when the features were enabled, even when vent-transmitted sound dominated the low frequencies. The clinical implication is that there is...
---
paper_title: The national acoustic laboratories (NAL) new procedure for selecting the gain and frequency response of a hearing aid
paper_content:
ABSTRACT A new procedure is presented for selecting the gain and frequency response of a hearing aid from pure-tone thresholds. This was developed from research which showed that a previous procedure did not meet its aim of amplifying all frequency bands of speech to equal loudness but that frequency responses which did so were considerably more effective. Measurements of 30 sensorineurally hearing-impaired ears (27 subjects), together with data from other studies, were analyzed to determine the best formula for predicting the optimal frequency response, for each individual, from the audiogram. The analysis indicated that a flat audiogram would require a rising frequency response characteristic of about 8 dB/octave up to 1.25 kHz and thereafter a falling characteristic of about 2 dB/octave. Variations in audiogram slope required about one-third as much variation in response slope. Three frequency average (3FA) gain was calculated to equal the 3FA gain of the previous procedure. Forty-four subjects (67 aided ears) fitted by the new procedure were evaluated by paired comparison judgments of the intelligibility and pleasantness of speech. The prescribed frequency response was seldom inferior to, and usually better than, any of several variations having more, or less, low and/or high-frequency amplification. On the average, used gain was approximately equal to prescribed gain. It is concluded that the new formula should prescribe a near optimal frequency response with few exceptions.
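The prescriptive rule that resulted from this work is usually summarized (editorial note; coefficients as commonly cited for the revised NAL procedure, not quoted from this abstract) as
\[ \mathrm{IG}_f \;=\; 0.05\,(H_{500} + H_{1000} + H_{2000}) \;+\; 0.31\,H_f \;+\; k_f, \]
where $H_f$ is the hearing threshold level at frequency $f$ and $k_f$ is a frequency-dependent constant shaping the response; the 0.31 factor reflects the finding above that the required response slope varies at roughly one-third the rate of the audiogram slope.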
---
paper_title: Are real-ear measurements (REM) accurate when using the modified pressure with stored equalization (MPSE) method?
paper_content:
Audiologists typically verify hearing instrument fitting using real-ear measurements (REM). Recently the modified pressure with stored equalization method (MPSE) has been recommended for use when verifying open non-occluding hearing instruments. The MPSE method does not use a reference microphone to maintain loudspeaker output during real-ear measurements and is therefore susceptible to changes in the signal level at the client's ear which result from movement of the client's head and torso during the verification process. To determine the size of these errors, the real-ear unaided response (REUR) was measured prior to and following the fitting of a non-functioning hearing aid in the contralateral ear. Twenty young adults participated. Identical head positions for the two measurements should yield zero difference measures across all frequencies measured. Loudspeaker-to-client azimuths of 0° and 45° were investigated. Mean difference measures across the frequencies investigated were less than 1 dB f...
---
paper_title: Real-ear measurement verification for open, non-occluding hearing instruments
paper_content:
Real-ear measurements using the modified pressure method with concurrent (real-time) equalization can be inaccurate when amplified sound leaks out of the ear canal and reaches the reference microphone. In such situations the reference microphone will detect an increased sound level and reduce the output of the loudspeaker to maintain the desired level. The risk of having errors due to leaks increases if digital feedback suppression (DFS) is used, thus achieving higher feedback-free gain levels. The following hypotheses were tested: a) using the concurrent equalization method for fitting hearing instruments with DFS may result in underestimated real-ear insertion gain (especially when using open fittings), and b) as the benefit of the DFS system increases, this error also increases. Real-ear measurements were carried out in twenty-one subjects using the modified pressure method with stored equalization as well as with concurrent equalization. The results of the study support both hypotheses. As a conseque...
---
paper_title: Real ear measurement methods for open fit hearing aids: Modified pressure concurrent equalization (MPCE) versus modified pressure stored equalization (MPSE)
paper_content:
Objective: The aim of this study was to assess differences between real ear insertion gains (REIG) measured with the modified pressure concurrent equalization (MPCE) and modified pressure stored equalization (MPSE) methods for open fittings in a typical audiology patient population. Design: REIGs were compared for the two methods using a warble tone sweep at 65 dB SPL. The differences between the two methods at 0.25, 0.5, 1, 2, 3, 4 and 6 kHz were recorded. Study sample: Eighty-three ears of a consecutive sample of 48 candidates for open-fit hearing aids were included. Results: The mean difference between MPSE and MPCE REIGs was less than 1 dB at all frequencies. Analysis of variance showed that the main effect of method was not significant, and there was no significant interaction between method and frequency. Conclusions: The results for the MPSE and MPCE methods did not differ significantly for the patients with mild-to-moderate hearing losses tested here, for whom REIGs were generally less tha...
---
paper_title: Sentences for Testing Speech Intelligibility in Noise
paper_content:
A list of ten spoken Swedish sentences was computer edited to obtain new lists with exactly the same content of sound, but with new sentences. A noise was synthesized from the speech material by the computer to produce exactly the same spectrum of speech and noise. The noise was also amplitude modulated by a low frequency noise to make it sound more natural. This material was tested monaurally on 20 normal-hearing subjects. The equality in intelligibility of some of the lists was investigated. Repeated threshold measurements in noise showed a standard deviation of 0.44 dB when the learning effect was outbalanced. Only a small part of the learning effect was due to learning of the word material. Intelligibility curves fitted to the data points in noise and without noise showed maximum steepnesses of 25 and 10%/dB respectively. At constant signal to noise ratio (S/N) the best performance was achieved at a speech level of 53 dB.
---
paper_title: Tolerable Hearing Aid Delays. V. Estimation of Limits for Open Canal Fittings
paper_content:
Objectives: Open canal fittings are a popular alternative to close-fitting earmolds for use with patients whose low-frequency hearing is near normal. Open canal fittings reduce the occlusion effect but also provide little attenuation of external air-borne sounds. The wearer therefore receives a mixture of air-borne sound and amplified but delayed sound through the hearing aid. To explore systematically the effect of the mixing, we simulated with varying degrees of complexity the effects of both a hearing loss and a high-quality hearing aid programmed to compensate for that loss, and used normal-hearing participants to assess the processing. Design: The off-line processing was intended to simulate the percept of listening to the speech of a single (external) talker. The effect of introducing a delay on a subjective measure of speech quality (disturbance rating on a scale from 1 to 7, 7 being maximal disturbance) was assessed using both a constant gain and a gain that varied across frequency. In three experiments we assessed the effects of different amounts of delay, maximum aid gain and rate of change of gain with frequency. The simulated hearing aids were chosen to be appropriate for typical mild to moderate high-frequency losses starting at 1 or 2 kHz. Two of the experiments used simulations of linear hearing aids, whereas the third used fast-acting multichannel wide-dynamic-range compression and a simulation of loudness recruitment. In one experiment, a condition was included in which spectral ripples produced by comb-filtering were partially removed using a digital filter. Results: For linear hearing aids, disturbance increased progressively with increasing delay and with decreasing rate of change of gain; the effect of amount of gain was small when the gain varied across frequency. The effect of reducing spectral ripples was also small. When the simulation of dynamic processes was included (experiment 3), the pattern with delay remained similar, but disturbance increased with increasing gain. It is argued that this is mainly due to disturbance increasing with increasing simulated hearing loss, probably because of the dynamic processing involved in the hearing aid and recruitment simulation. Conclusions: A disturbance rating of 3 may be considered as just acceptable. This rating was reached for delays of about 5 and 6 msec, for simulated hearing losses starting at 2 and 1 kHz, respectively. The perceptual effect of reducing the spectral ripples produced by comb-filtering was small; the effect was greatest when the hearing aid gain was small and when the hearing loss started at a low frequency.
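Editorially, the interaction driving these disturbance ratings can be written down directly: mixing the direct path (taken as unity) with an amplified copy of gain $g$ delayed by $\tau$ gives the magnitude response
\[ |H(f)| \;=\; \bigl|\, 1 + g\, e^{-j 2 \pi f \tau} \,\bigr|, \]
a comb filter whose ripples are spaced $1/\tau$ apart in frequency (200 Hz for a 5 ms delay) and are deepest where the direct and amplified sounds are of comparable level.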
---
paper_title: Unaided and aided performance with a directional open-fit hearing aid.
paper_content:
Differences in performance between unaided and aided performance (omnidirectional and directional) were measured using an open-fit behind-the-ear (BTE) hearing aid. Twenty-six subjects without prior experience with amplification were fitted bilaterally using the manufacturer's recommended procedure. After wearing the hearing aids for one week, the fitting parameters were fine-tuned, based on subjective comments. Four weeks later, differences in performance between unaided and aided (omnidirectional and directional) were assessed by measuring reception thresholds for sentences (RTS in dB), using HINT sentences presented at 0° with R-Space™ restaurant noise held constant at 65 dBA and presented via eight loudspeakers set 45° apart. In addition, the APHAB was administered to assess subjective impressions of the experimental aid. Results revealed that significant differences in RTS (in dB) were present between directional and omnidirectional performance, as well as directional and unaided performance. Aided om...
---
paper_title: Directivity quantification in hearing aids: fitting and measurement effects.
paper_content:
OBJECTIVE: To evaluate the impact of venting, microphone port orientation, and compression on the electroacoustically measured directivity of directional and omnidirectional behind-the-ear hearing aids. In addition, the average directivity provided across three brands of directional and omnidirectional behind-the-ear hearing aids was compared with that provided by the open ear. DESIGN: Three groups of hearing aids (four instruments in each group) representing three commercial models (a total of 12) were selected for electroacoustic evaluation of directivity. Polar directivity patterns were measured and directivity index was calculated across four different venting configurations, and for five different microphone port angles. All measurements were made for instruments in directional and omnidirectional modes. Single source traditional, and two-source modified front-to-back ratios were also measured with the hearing aids in linear and compression modes. RESULTS: The directivity provided by the open (Knowles Electronics Manikin for Acoustic Research) ear was superior to that of the omnidirectional hearing aids in this study. Although the directivity measured for directional hearing aids was significantly better than that of omnidirectional models, significant variability was measured both within and across the tested models both on average and at specific test frequencies. Both venting and microphone port orientation affected the measured directivity. Although compression reduced the magnitude of traditionally measured front-to-back ratios, no difference from linear amplification was noted using a modified methodology. CONCLUSIONS: The variation in the measured directivity both within and across the directional microphone hearing aid brands suggests that manufacturer's specification of directivity may not provide an accurate index of the actual performance of all individual instruments. The significant impact of venting and microphone port orientation on directivity indicate that these variables must be addressed when fitting directional hearing aids on hearing-impaired listeners. Modified front-to-back ratio results suggest that compression does not affect the directivity of hearing aids, if it is assumed that the signal of interest from one azimuth, and the competing signal from a different azimuth, occur at the same time.
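As a rough numerical companion to the directivity measures discussed above, the sketch below computes a horizontal-plane (planar) directivity index and a front-to-back ratio from polar pressure samples. Note that the standard directivity index is defined over the full sphere with solid-angle weighting; the planar formula, the eight-angle sampling, and the pattern values here are illustrative assumptions only.

```python
# Planar directivity index and front-to-back ratio from horizontal-plane
# polar data (pressure magnitudes at equally spaced azimuths, 0 deg = front).
import numpy as np

def planar_directivity_index(pattern: np.ndarray) -> float:
    # Ratio of on-axis power to the power averaged over all measured azimuths.
    return 10.0 * np.log10(pattern[0] ** 2 / np.mean(pattern ** 2))

def front_to_back_ratio(p_front: float, p_back: float) -> float:
    return 20.0 * np.log10(p_front / p_back)

# Made-up cardioid-like magnitudes at 0, 45, ..., 315 degrees.
pattern = np.array([1.0, 0.9, 0.6, 0.4, 0.3, 0.4, 0.6, 0.9])
print(planar_directivity_index(pattern))            # dB re. azimuth-averaged power
print(front_to_back_ratio(pattern[0], pattern[4]))  # 0 deg vs. 180 deg, in dB
```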
---
paper_title: Preferred signal path delay and high-pass cut-off in open fittings
paper_content:
The combination of delayed sound from a digital hearing aid with direct sound through an open or vented fitting can potentially degrade the sound quality due to audible changes in timbre and/or perception of echo. The present study was designed to test a number of delay and high-pass combinations under worst-case (i.e. most sensitive) conditions. Eighteen normal-hearing and 18 mildly hearing-impaired subjects performed the test in a paired comparison (A/B) task. The subjects were asked to select a preferred setting with respect to sound quality. The test was set in an anechoic chamber using recorded speech, environmental sounds, and own voice. Experimental hearing aids were fitted binaurally with open domes thus providing maximum ventilation. The preference data were processed using a statistical choice model that derives a ratio-scale. The analysis indicated that in these test conditions there was no change in sound quality when varying the delay in the range 5–10 ms and that there was a preferen...
---
paper_title: Speech recognition in noise using bilateral open-fit hearing aids: The limited benefit of directional microphones and noise reduction
paper_content:
To investigate speech recognition performance in noise with bilateral open-fit hearing aids and as reference also with closed earmolds, in omnidirectional mode, directional mode, and directional mode in conjunction with noise reduction. Design: A within-subject design with repeated measures across conditions was used. Speech recognition thresholds in noise were obtained for the different conditions. Study sample: Twenty adults without prior experience with hearing aids. All had symmetric sensorineural mild hearing loss in the lower frequencies and moderate to severe hearing loss in the higher frequencies. Results: Speech recognition performance in noise was not significantly better with an omnidirectional microphone compared to unaided, whereas performance was significantly better with a directional microphone (1.6 dB with open fitting and 4.4 dB with closed earmold) compared to unaided. With open fitting, no significant additional advantage was obtained by combining the directional microphone with a noise reduction algorithm, but with closed earmolds a significant additional advantage of 0.8 dB was obtained. Conclusions: The significant, though limited, advantage of directional microphones and the absence of additional significant improvement by a noise reduction algorithm should be considered when fitting open-fit hearing aids.
---
paper_title: Acoustic Attenuation between the Ears
paper_content:
In an investigation of the acoustical insulation between the ears, various earphones and obturating devices were used. Bone conduction was shown to be chiefly responsible for the acoustical leakage between the ears. Conditions were determined under which interaural insulation could be increased considerably. Most of the measurements were performed with a compensation method which appears to give more precise results than methods previously used, and which permits phase measurements.
---
paper_title: Comparison of Vent Effects between a Solid Earmold and a Hollow Earmold
paper_content:
where the thickness of the shell was the length of the vent. Vent diameters were 0, 1, 2, and 3 mm. Data Collection and Analysis: The vent effect was evaluated on real-ear aided response, real-ear occluded response during vocalization, subjective occlusion rating, insertion loss, and maximum available gain before feedback. Real-ear measurements were made with the Fonix 6500 probemicrophone real-ear system. Vocalizations from the participants were analyzed with a custom MATLAB program, and statistical analysis was conducted with SPSS software. Results: A systematic vent effect was seen with each earmold type as the nominal vent diameter changed. For the same vent diameter, the vent effect seen with the hollow earmold was greater than that of the solid earmold. Conclusions: Because of the difference in vent length (and thus acoustic mass) between a solid and a hollow earmold, the effect of vent diameter in a hollow earmold is more pronounced than that seen in a solid earmold of the same nominal vent diameter. Thus, a smaller vent diameter will be needed in a hollow earmold than in a solid earmold to achieve similar vent effects.
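The difference between the solid and hollow earmold vents comes down to acoustic mass, and a worked example may help. The sketch below uses the standard lumped-element approximation M_A = ρ·L / S (air density times effective vent length divided by cross-sectional area), ignoring end corrections; the vent dimensions are illustrative and not taken from this study.

```python
# Acoustic mass of a cylindrical vent, M_A = rho * L / S  (units: kg/m^4).
import math

def acoustic_mass(length_m: float, diameter_m: float, rho_air: float = 1.2) -> float:
    area = math.pi * (diameter_m / 2.0) ** 2     # cross-sectional area in m^2
    return rho_air * length_m / area

# Same nominal 2 mm diameter, but very different effective lengths.
solid_vent = acoustic_mass(length_m=0.020, diameter_m=0.002)   # ~20 mm bore (solid earmold)
hollow_vent = acoustic_mass(length_m=0.003, diameter_m=0.002)  # ~3 mm shell (hollow earmold)
print(f"solid: {solid_vent:.3e} kg/m^4, hollow: {hollow_vent:.3e} kg/m^4")
# The much smaller acoustic mass of the short (hollow-earmold) vent explains
# why the same nominal diameter produces a larger vent effect there.
```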
---
paper_title: Investigation of the Auditory Occlusion Effect with Implications for Hearing Protection and Hearing Aid Design
paper_content:
Previous research has shown that auditory occlusion effects could inhibit people from using hearing protection devices or hearing aids, which raises safety and usability concerns. The objective of this study was to evaluate occlusion effects as a function of insertion depth (shallow and deep), earplug type (foam earplug and medical balloon-based earplug), and excitation source (bone vibrator and self vocal utterance). Ten participants, six male and four female, completed the experiment. The ANOVA and post hoc tests conducted on the measured occlusion effects revealed main effects of insertion depth and earplug type, as well as an interaction effect between insertion depth and earplug type. The occlusion effect of deeply inserted earplugs was smaller than that of shallowly inserted earplugs by 11.2 dB. At deep insertion, the balloon-based earplugs produced an occlusion effect of 14.9 dB while the foam earplugs produced 5.9 dB.
---
paper_title: Active cancellation of occlusion: an electronic vent for hearing aids and hearing protectors.
paper_content:
The occlusion effect is commonly described as an unnatural and mostly annoying quality of the voice of a person wearing hearing aids or hearing protectors. As a result, it is often reported by hearing aid users as a deterrent to wearing hearing aids. This paper presents an investigation into active occlusion cancellation. Measured transducer responses combined with models of an active feedback scheme are first examined in order to predict the effectiveness of occlusion reduction. The simulations predict 18 dB of occlusion reduction in completely blocked ear canals. Simulations incorporating a 1 mm vent (providing passive occlusion reduction) predict a combined active and passive occlusion reduction of 20 dB. A prototype occlusion canceling system was constructed. Averaged across 12 listeners with normal hearing, it provided 15 dB of occlusion reduction. Ten of the subjects reported a more natural own voice quality and an appreciable increase in comfort with the cancellation active, and 11 out of the 12 preferred the active system over the passive system.
---
paper_title: Comparison of Vent Effects between a Solid Earmold and a Hollow Earmold
paper_content:
where the thickness of the shell was the length of the vent. Vent diameters were 0, 1, 2, and 3 mm. Data Collection and Analysis: The vent effect was evaluated on real-ear aided response, real-ear occluded response during vocalization, subjective occlusion rating, insertion loss, and maximum available gain before feedback. Real-ear measurements were made with the Fonix 6500 probemicrophone real-ear system. Vocalizations from the participants were analyzed with a custom MATLAB program, and statistical analysis was conducted with SPSS software. Results: A systematic vent effect was seen with each earmold type as the nominal vent diameter changed. For the same vent diameter, the vent effect seen with the hollow earmold was greater than that of the solid earmold. Conclusions: Because of the difference in vent length (and thus acoustic mass) between a solid and a hollow earmold, the effect of vent diameter in a hollow earmold is more pronounced than that seen in a solid earmold of the same nominal vent diameter. Thus, a smaller vent diameter will be needed in a hollow earmold than in a solid earmold to achieve similar vent effects.
---
paper_title: In the ear hearing device with a valve formed with an electroactive material having a changeable volume and method of operating the hearing device
paper_content:
For particularly good adaptation to a given hearing situation, an in-the-ear hearing device has a housing with a channel that is designed as a through-opening for sound and air between the interior of the ear and the environment outside the ear; at at least one position, the channel is provided with a structural element for changing the size of the through-opening. The structural element is a valve formed with electroactive material, and the size of the through-opening is adjusted by applying a voltage to the valve.
---
|
Title: Open Versus Closed Hearing-Aid Fittings: A Literature Review of Both Fitting Approaches
Section 1: Introduction
Description 1: Provide an overview of hearing loss, the role of hearing aids, and the difference between Behind-The-Ear (BTE) and In-The-Ear (ITE) hearing aids, including preliminary discussions on open and closed fittings.
Section 2: The Occlusion Effect
Description 2: Describe the physiological basis of the occlusion effect, the structure of the ear canal, and how closed earmolds exacerbate self-produced sound transmission within the ear canal.
Section 3: Quantifying the Occlusion Effect
Description 3: Differentiate between objective and subjective occlusion effects, and introduce methods used to measure these effects.
Section 4: The Objective Occlusion Effect
Description 4: Discuss the methodologies for measuring the objective occlusion effect, including real-ear measurements (REM) and frequency responses, and their relevance for open versus closed fittings.
Section 5: The Subjective Occlusion Effect
Description 5: Explore how individuals perceive their own voice when using different types of hearing aid fittings and how this subjective experience influences satisfaction with the device.
Section 6: The Acoustic Mass
Description 6: Explain the concept of acoustic mass and its impact on vent design and sound transmission in hearing aids.
Section 7: Alternative Vent Types
Description 7: Examine specific vent designs and their efficacy in reducing the occlusion effect, particularly focusing on innovative technologies like Flex-Vent TM.
Section 8: Open-Fit Hearing Aids: Definition of Open-Fit Hearing Aids
Description 8: Define open-fit hearing aids, including differences in design and application within the ear canal, and elaborate on manufacturer recommendations.
Section 9: Vent Sizes
Description 9: Discuss the relationship between vent size and the occlusion effect, and provide guidelines for choosing appropriate vent sizes based on hearing loss levels.
Section 10: RIC and RITA Hearing Aids
Description 10: Compare Receiver-In-The-Canal (RIC) and Receiver-In-The-Aid (RITA) hearing aids concerning gain before feedback and user preference.
Section 11: Fitting of Open-Fit Hearing Aids
Description 11: Outline the fitting process for open-fit hearing aids, the software adjustments based on hearing loss, and challenges in achieving optimal gain.
Section 12: Advantages of Open-Fit Hearing Aids
Description 12: Detail the benefits of open-fit hearing aids, including subjective evaluations, localization, and improvements in daily life experiences.
Section 13: Disadvantages of Open-Fit Hearing Aids
Description 13: Discuss the limitations of open-fit hearing aids, such as REM accuracy, interaction between direct and amplified sounds, and reduced effectiveness of adaptive features.
Section 14: Alternative Approaches
Description 14: Introduce alternative methods and innovative designs for reducing occlusion, such as mechanical modifications and active occlusion reduction algorithms.
Section 15: Summary and Conclusions
Description 15: Provide a summary of the key points discussed in the paper, highlighting the importance of fitting choices, individual needs, and the trade-offs between open and closed hearing-aid fittings.
|
Non-Interactive Differential Privacy: a Survey
| 10 |
---
paper_title: Robust De-anonymization of Large Sparse Datasets
paper_content:
We present a new class of statistical de-anonymization attacks against high-dimensional micro-data, such as individual preferences, recommendations, transaction records and so on. Our techniques are robust to perturbation in the data and tolerate some mistakes in the adversary's background knowledge. We apply our de-anonymization methodology to the Netflix Prize dataset, which contains anonymous movie ratings of 500,000 subscribers of Netflix, the world's largest online movie rental service. We demonstrate that an adversary who knows only a little bit about an individual subscriber can easily identify this subscriber's record in the dataset. Using the Internet Movie Database as the source of background knowledge, we successfully identified the Netflix records of known users, uncovering their apparent political preferences and other potentially sensitive information.
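To make the attack class concrete, here is a much-simplified, hypothetical sketch of similarity-based record matching: the adversary scores every anonymized record against a handful of known (item, rating) pairs and accepts the top match only if it clearly stands out from the runner-up. The scoring rule, threshold, and data are illustrative and do not reproduce the paper's actual algorithm.

```python
# Toy similarity-scoring attack on an "anonymized" ratings table.
def match_record(aux, records, eccentricity_threshold=1.0):
    def score(record):
        # One point per auxiliary item whose rating is within 1 of the record's rating.
        return sum(1.0 for item, rating in aux.items()
                   if abs(record.get(item, 99) - rating) <= 1)
    scored = sorted(((score(r), rid) for rid, r in records.items()), reverse=True)
    (best, best_id), (second, _) = scored[0], scored[1]
    # Accept only if the best match is clearly separated from the second best.
    return best_id if best - second >= eccentricity_threshold else None

records = {                      # toy "anonymized" ratings database
    "u1": {"movie_a": 5, "movie_b": 1, "movie_c": 4},
    "u2": {"movie_a": 2, "movie_d": 5},
    "u3": {"movie_b": 1, "movie_c": 4, "movie_e": 3},
}
aux = {"movie_a": 5, "movie_b": 1}   # what the adversary knows about a target
print(match_record(aux, records))    # -> "u1"
```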
---
paper_title: Big data: The next frontier for innovation, competition, and productivity
paper_content:
The amount of data in our world has been exploding, and analyzing large data sets—so-called big data— will become a key basis of competition, underpinning new waves of productivity growth, innovation, and consumer surplus, according to research by MGI and McKinsey's Business Technology Office. Leaders in every sector will have to grapple with the implications of big data, not just a few data-oriented managers. The increasing volume and detail of information captured by enterprises, the rise of multimedia, social media, and the Internet of Things will fuel exponential growth in data for the foreseeable future.
---
paper_title: k-Anonymity: A Model for Protecting Privacy
paper_content:
Consider a data holder, such as a hospital or a bank, that has a privately held collection of person-specific, field structured data. Suppose the data holder wants to share a version of the data with researchers. How can a data holder release a version of its private data with scientific guarantees that the individuals who are the subjects of the data cannot be re-identified while the data remain practically useful? The solution provided in this paper includes a formal protection model named k-anonymity and a set of accompanying policies for deployment. A release provides k-anonymity protection if the information for each person contained in the release cannot be distinguished from at least k-1 individuals whose information also appears in the release. This paper also examines re-identification attacks that can be realized on releases that adhere to k-anonymity unless accompanying policies are respected. The k-anonymity protection model is important because it forms the basis on which the real-world systems known as Datafly, µ-Argus and k-Similar provide guarantees of privacy protection.
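A minimal sketch of how the k-anonymity condition can be checked on a released table is given below, assuming pandas is available; the column names and records are made up for illustration.

```python
# Minimal k-anonymity check: every combination of quasi-identifier values
# must occur in at least k records of the released table.
import pandas as pd

def is_k_anonymous(df: pd.DataFrame, quasi_identifiers: list, k: int) -> bool:
    group_sizes = df.groupby(quasi_identifiers).size()
    return bool((group_sizes >= k).all())

# Illustrative release; attribute names are made up for the example.
release = pd.DataFrame({
    "age_range":  ["20-29", "20-29", "30-39", "30-39", "30-39"],
    "zip_prefix": ["021**", "021**", "148**", "148**", "148**"],
    "diagnosis":  ["flu", "cold", "flu", "asthma", "flu"],
})
print(is_k_anonymous(release, ["age_range", "zip_prefix"], k=2))  # True
```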
---
paper_title: A firm foundation for private data analysis
paper_content:
In the information realm, loss of privacy is usually associated with failure to control access to information, to control the flow of information, or to control the purposes for which information is employed. Differential privacy arose in a context in which ensuring privacy is a challenge even if all these control problems are solved: privacy-preserving statistical analysis of data. The problem of statistical disclosure control – revealing accurate statistics about a set of respondents while preserving the privacy of individuals – has a venerable history, with an extensive literature spanning statistics, theoretical computer science, security, databases, and cryptography (see, for example, the excellent survey [1], the discussion of related work in [2] and the Journal of Official Statistics 9 (2), dedicated to confidentiality and disclosure control). This long history is a testament to the importance of the problem. Statistical databases can be of enormous social value; they are used for apportioning resources, evaluating medical therapies, understanding the spread of disease, improving economic utility, and informing us about ourselves as a species. The data may be obtained in diverse ways. Some data, such as census, tax, and other sorts of official data, are compelled; others are collected opportunistically, for example, from traffic on the internet, transactions on Amazon, and search engine query logs; other data are provided altruistically, by respondents who hope that sharing their information will help others to avoid a specific misfortune, or more generally, to increase the public good. Altruistic data donors are typically promised their individual data will be kept confidential – in short, they are promised “privacy.” Similarly, medical data and legally compelled data, such as census data, tax return data, have legal privacy mandates. In our view, ethics demand that opportunistically obtained data should be treated no differently, especially when there is no reasonable alternative to engaging in the actions that generate the data in question. The problems remain: even if data encryption, key management, access control, and the motives of the data curator
---
paper_title: t-Closeness: Privacy Beyond k-Anonymity and l-Diversity
paper_content:
The k-anonymity privacy requirement for publishing microdata requires that each equivalence class (i.e., a set of records that are indistinguishable from each other with respect to certain "identifying" attributes) contains at least k records. Recently, several authors have recognized that k-anonymity cannot prevent attribute disclosure. The notion of l-diversity has been proposed to address this; l-diversity requires that each equivalence class has at least l well-represented values for each sensitive attribute. In this paper we show that l-diversity has a number of limitations. In particular, it is neither necessary nor sufficient to prevent attribute disclosure. We propose a novel privacy notion called t-closeness, which requires that the distribution of a sensitive attribute in any equivalence class is close to the distribution of the attribute in the overall table (i.e., the distance between the two distributions should be no more than a threshold t). We choose to use the earth mover distance measure for our t-closeness requirement. We discuss the rationale for t-closeness and illustrate its advantages through examples and experiments.
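The sketch below checks t-closeness for a categorical sensitive attribute using the "equal distance" ground metric, under which the Earth Mover's Distance reduces to half the L1 distance between distributions; the paper also defines an ordered-distance EMD for numeric attributes, which is not shown. The records and the threshold are illustrative.

```python
# t-closeness check with the equal-distance ground metric:
# EMD(P, Q) = 0.5 * sum_i |p_i - q_i| per equivalence class.
from collections import Counter

def distribution(values, domain):
    counts = Counter(values)
    n = len(values)
    return [counts.get(v, 0) / n for v in domain]

def satisfies_t_closeness(records, quasi_id, sensitive, t):
    domain = sorted({r[sensitive] for r in records})
    overall = distribution([r[sensitive] for r in records], domain)
    classes = {}
    for r in records:                       # group by quasi-identifier value
        classes.setdefault(r[quasi_id], []).append(r[sensitive])
    for values in classes.values():
        p = distribution(values, domain)
        emd = 0.5 * sum(abs(pi - qi) for pi, qi in zip(p, overall))
        if emd > t:
            return False
    return True

rows = [  # illustrative records
    {"zip": "476**", "disease": "flu"},
    {"zip": "476**", "disease": "cancer"},
    {"zip": "479**", "disease": "flu"},
    {"zip": "479**", "disease": "flu"},
]
print(satisfies_t_closeness(rows, "zip", "disease", t=0.3))  # True
```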
---
paper_title: Transparent government, not transparent citizens: a report on privacy and transparency for the Cabinet Office
paper_content:
1. Privacy is extremely important to transparency. The political legitimacy of a transparency programme will depend crucially on its ability to retain public confidence. Privacy protection should therefore be embedded in any transparency programme, rather than bolted on as an afterthought. 2. Privacy and transparency are compatible, as long as the former is carefully protected and considered at every stage. 3. Under the current transparency regime, in which public data is specifically understood not to include personal data, most data releases will not raise privacy concerns. However, some will, especially as we move toward a more demand-driven scheme. 4. Discussion about deanonymisation has been driven largely by legal considerations, with a consequent neglect of the input of the technical community. 5. There are no complete legal or technical fixes to the deanonymisation problem. We should continue to anonymise sensitive data, being initially cautious about releasing such data under the Open Government Licence while we continue to take steps to manage and research the risks of deanonymisation. Further investigation to determine the level of risk would be very welcome. 6. There should be a focus on procedures to output an auditable debate trail. Transparency about transparency – metatransparency – is essential for preserving trust and confidence. Fourteen recommendations are made to address these conclusions.
---
paper_title: Calibrating Noise to Sensitivity in Private Data Analysis
paper_content:
We continue a line of research initiated in [10, 11] on privacy-preserving statistical databases. Consider a trusted server that holds a database of sensitive information. Given a query function f mapping databases to reals, the so-called true answer is the result of applying f to the database. To protect privacy, the true answer is perturbed by the addition of random noise generated according to a carefully chosen distribution, and this response, the true answer plus noise, is returned to the user. Previous work focused on the case of noisy sums, in which f = Σ_i g(x_i), where x_i denotes the i-th row of the database and g maps database rows to [0,1]. We extend the study to general functions f, proving that privacy can be preserved by calibrating the standard deviation of the noise according to the sensitivity of the function f. Roughly speaking, this is the amount that any single argument to f can change its output. The new analysis shows that for several particular applications substantially less noise is needed than was previously understood to be the case. The first step is a very clean characterization of privacy in terms of indistinguishability of transcripts. Additionally, we obtain separation results showing the increased value of interactive sanitization mechanisms over non-interactive.
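A minimal sketch of the noise-calibration idea described here, in its most common instantiation (Laplace noise scaled to the query's global sensitivity), follows; the example records and query are illustrative.

```python
# Laplace mechanism: perturb the true answer with noise whose scale is the
# query's global sensitivity divided by the privacy parameter epsilon.
import numpy as np

def laplace_mechanism(true_answer, sensitivity, epsilon, rng=None):
    rng = rng or np.random.default_rng()
    return true_answer + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

# Example: a counting query changes by at most 1 when one record is added
# or removed, so its sensitivity is 1.
ages = [23, 45, 31, 37, 52]                   # illustrative records
true_count = sum(1 for a in ages if a > 30)
print(laplace_mechanism(true_count, sensitivity=1.0, epsilon=0.5))
```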
---
paper_title: Our Data, Ourselves: Privacy via Distributed Noise Generation
paper_content:
In this work we provide efficient distributed protocols for generating shares of random noise, secure against malicious participants. The purpose of the noise generation is to create a distributed implementation of the privacy-preserving statistical databases described in recent papers [14,4,13]. In these databases, privacy is obtained by perturbing the true answer to a database query by the addition of a small amount of Gaussian or exponentially distributed random noise. The computational power of even a simple form of these databases, when the query is just of the form Σ i f(d i ), that is, the sum over all rows i in the database of a function f applied to the data in row i, has been demonstrated in [4]. A distributed implementation eliminates the need for a trusted database administrator. The results for noise generation are of independent interest. The generation of Gaussian noise introduces a technique for distributing shares of many unbiased coins with fewer executions of verifiable secret sharing than would be needed using previous approaches (reduced by a factor of n). The generation of exponentially distributed noise uses two shallow circuits: one for generating many arbitrarily but identically biased coins at an amortized cost of two unbiased random bits apiece, independent of the bias, and the other to combine bits of appropriate biases to obtain an exponential distribution.
---
paper_title: Privacy: Theory meets Practice on the Map
paper_content:
In this paper, we propose the first formal privacy analysis of a data anonymization process known as the synthetic data generation, a technique becoming popular in the statistics community. The target application for this work is a mapping program that shows the commuting patterns of the population of the United States. The source data for this application were collected by the U.S. Census Bureau, but due to privacy constraints, they cannot be used directly by the mapping program. Instead, we generate synthetic data that statistically mimic the original data while providing privacy guarantees. We use these synthetic data as a surrogate for the original data. We find that while some existing definitions of privacy are inapplicable to our target application, others are too conservative and render the synthetic data useless since they guard against privacy breaches that are very unlikely. Moreover, the data in our target application is sparse, and none of the existing solutions are tailored to anonymize sparse data. In this paper, we propose solutions to address the above issues.
---
paper_title: No free lunch in data privacy
paper_content:
Differential privacy is a powerful tool for providing privacy-preserving noisy query answers over statistical databases. It guarantees that the distribution of noisy query answers changes very little with the addition or deletion of any tuple. It is frequently accompanied by popularized claims that it provides privacy without any assumptions about the data and that it protects against attackers who know all but one record. In this paper we critically analyze the privacy protections offered by differential privacy. First, we use a no-free-lunch theorem, which defines non-privacy as a game, to argue that it is not possible to provide privacy and utility without making assumptions about how the data are generated. Then we explain where assumptions are needed. We argue that privacy of an individual is preserved when it is possible to limit the inference of an attacker about the participation of the individual in the data generating process. This is different from limiting the inference about the presence of a tuple (for example, Bob's participation in a social network may cause edges to form between pairs of his friends, so that it affects more than just the tuple labeled as "Bob"). The definition of evidence of participation, in turn, depends on how the data are generated -- this is how assumptions enter the picture. We explain these ideas using examples from social network research as well as tabular data for which deterministic statistics have been previously released. In both cases the notion of participation varies, the use of differential privacy can lead to privacy breaches, and differential privacy does not always adequately limit inference about participation.
---
paper_title: Calibrating Noise to Sensitivity in Private Data Analysis
paper_content:
We continue a line of research initiated in [10, 11] on privacy-preserving statistical databases. Consider a trusted server that holds a database of sensitive information. Given a query function f mapping databases to reals, the so-called true answer is the result of applying f to the database. To protect privacy, the true answer is perturbed by the addition of random noise generated according to a carefully chosen distribution, and this response, the true answer plus noise, is returned to the user. Previous work focused on the case of noisy sums, in which f = Σ_i g(x_i), where x_i denotes the i-th row of the database and g maps database rows to [0,1]. We extend the study to general functions f, proving that privacy can be preserved by calibrating the standard deviation of the noise according to the sensitivity of the function f. Roughly speaking, this is the amount that any single argument to f can change its output. The new analysis shows that for several particular applications substantially less noise is needed than was previously understood to be the case. The first step is a very clean characterization of privacy in terms of indistinguishability of transcripts. Additionally, we obtain separation results showing the increased value of interactive sanitization mechanisms over non-interactive.
---
paper_title: Our Data, Ourselves: Privacy via Distributed Noise Generation
paper_content:
In this work we provide efficient distributed protocols for generating shares of random noise, secure against malicious participants. The purpose of the noise generation is to create a distributed implementation of the privacy-preserving statistical databases described in recent papers [14,4,13]. In these databases, privacy is obtained by perturbing the true answer to a database query by the addition of a small amount of Gaussian or exponentially distributed random noise. The computational power of even a simple form of these databases, when the query is just of the form Σ i f(d i ), that is, the sum over all rows i in the database of a function f applied to the data in row i, has been demonstrated in [4]. A distributed implementation eliminates the need for a trusted database administrator. The results for noise generation are of independent interest. The generation of Gaussian noise introduces a technique for distributing shares of many unbiased coins with fewer executions of verifiable secret sharing than would be needed using previous approaches (reduced by a factor of n). The generation of exponentially distributed noise uses two shallow circuits: one for generating many arbitrarily but identically biased coins at an amortized cost of two unbiased random bits apiece, independent of the bias, and the other to combine bits of appropriate biases to obtain an exponential distribution.
---
paper_title: Mechanism Design via Differential Privacy
paper_content:
We study the role that privacy-preserving algorithms, which prevent the leakage of specific information about participants, can play in the design of mechanisms for strategic agents, which must encourage players to honestly report information. Specifically, we show that the recent notion of differential privacy, in addition to its own intrinsic virtue, can ensure that participants have limited effect on the outcome of the mechanism, and as a consequence have limited incentive to lie. More precisely, mechanisms with differential privacy are approximate dominant strategy under arbitrary player utility functions, are automatically resilient to coalitions, and easily allow repeatability. We study several special cases of the unlimited supply auction problem, providing new results for digital goods auctions, attribute auctions, and auctions with arbitrary structural constraints on the prices. As an important prelude to developing a privacy-preserving auction mechanism, we introduce and study a generalization of previous privacy work that accommodates the high sensitivity of the auction setting, where a single participant may dramatically alter the optimal fixed price, and a slight change in the offered price may take the revenue from optimal to zero.
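This is the paper that introduced the exponential mechanism; a minimal sketch of that mechanism applied to a toy unlimited-supply pricing problem follows. The bids, candidate prices, and the conservative sensitivity bound (one bid changes revenue by at most the highest candidate price) are illustrative assumptions.

```python
# Exponential mechanism: pick output r with probability proportional to
# exp(epsilon * quality(D, r) / (2 * sensitivity_of_quality)).
import numpy as np

def exponential_mechanism(candidates, quality, epsilon, sensitivity, rng=None):
    rng = rng or np.random.default_rng()
    scores = np.array([quality(r) for r in candidates], dtype=float)
    # Subtract the max score for numerical stability before exponentiating.
    weights = np.exp(epsilon * (scores - scores.max()) / (2.0 * sensitivity))
    probs = weights / weights.sum()
    return candidates[rng.choice(len(candidates), p=probs)]

# Toy digital-goods pricing: quality = revenue at price p from the bids below.
bids = [1.0, 1.0, 2.0, 3.0, 5.0]
prices = [1.0, 2.0, 3.0, 5.0]
revenue = lambda p: p * sum(b >= p for b in bids)
print(exponential_mechanism(prices, revenue, epsilon=1.0, sensitivity=max(prices)))
```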
---
paper_title: Optimizing linear counting queries under differential privacy
paper_content:
Differential privacy is a robust privacy standard that has been successfully applied to a range of data analysis tasks. But despite much recent work, optimal strategies for answering a collection of related queries are not known. We propose the matrix mechanism, a new algorithm for answering a workload of predicate counting queries. Given a workload, the mechanism requests answers to a different set of queries, called a query strategy, which are answered using the standard Laplace mechanism. Noisy answers to the workload queries are then derived from the noisy answers to the strategy queries. This two stage process can result in a more complex correlated noise distribution that preserves differential privacy but increases accuracy. We provide a formal analysis of the error of query answers produced by the mechanism and investigate the problem of computing the optimal query strategy in support of a given workload. We show this problem can be formulated as a rank-constrained semidefinite program. Finally, we analyze two seemingly distinct techniques, whose similar behavior is explained by viewing them as instances of the matrix mechanism.
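A small numerical sketch of the two-stage idea: answer a strategy matrix A with the Laplace mechanism, then derive workload answers as W A⁺ (A x + noise). The prefix-sum workload, identity strategy, and histogram below are illustrative; choosing a good strategy is the optimization problem the paper studies.

```python
# Matrix mechanism sketch: noisy strategy answers, then derived workload answers.
import numpy as np

def matrix_mechanism(W, A, x, epsilon, rng=None):
    rng = rng or np.random.default_rng()
    # L1 sensitivity of the strategy = max column sum of |A| (one record
    # changes one cell of the histogram x by at most 1).
    sensitivity = np.abs(A).sum(axis=0).max()
    noisy_strategy = A @ x + rng.laplace(0.0, sensitivity / epsilon, size=A.shape[0])
    return W @ np.linalg.pinv(A) @ noisy_strategy

W = np.tril(np.ones((4, 4)))        # workload: all prefix sums over 4 cells
A = np.eye(4)                       # naive strategy: query each cell once
x = np.array([10., 3., 7., 5.])     # true histogram (illustrative)
print(matrix_mechanism(W, A, x, epsilon=1.0))
```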
---
paper_title: Universally Utility-maximizing Privacy Mechanisms
paper_content:
A mechanism for releasing information about a statistical database with sensitive data must resolve a trade-off between utility and privacy. Publishing fully accurate information maximizes utility while minimizing privacy, while publishing random noise accomplishes the opposite. Privacy can be rigorously quantified using the framework of differential privacy, which requires that a mechanism's output distribution is nearly the same whether or not a given database row is included. The goal of this paper is to formulate and provide strong and general utility guarantees, subject to differential privacy. We pursue mechanisms that guarantee near-optimal utility to every potential user, independent of its side information (modeled as a prior distribution over query results) and preferences (modeled via a symmetric and monotone loss function). Our main result is the following: for each fixed count query and differential privacy level, there is a geometric mechanism $M^*$---a discrete variant of the simple and well-studi...
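A minimal sketch of the (two-sided) geometric mechanism for a count query follows, using the standard fact that the difference of two i.i.d. geometric variables is two-sided geometric; the count and epsilon are illustrative.

```python
# Geometric mechanism: add integer noise with P(Z = z) proportional to
# alpha^|z|, where alpha = exp(-epsilon).
import numpy as np

def geometric_mechanism(true_count: int, epsilon: float, rng=None) -> int:
    rng = rng or np.random.default_rng()
    alpha = np.exp(-epsilon)
    # Difference of two i.i.d. geometric variables (support 0, 1, 2, ...) with
    # success probability 1 - alpha is two-sided geometric with parameter alpha.
    g1 = rng.geometric(1.0 - alpha) - 1
    g2 = rng.geometric(1.0 - alpha) - 1
    return true_count + int(g1 - g2)

print(geometric_mechanism(true_count=42, epsilon=0.5))
```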
---
paper_title: On the geometry of differential privacy
paper_content:
We consider the noise complexity of differentially private mechanisms in the setting where the user asks d linear queries f: R^n -> R non-adaptively. Here, the database is represented by a vector in R^n and proximity between databases is measured in the l1-metric. We show that the noise complexity is determined by two geometric parameters associated with the set of queries. We use this connection to give tight upper and lower bounds on the noise complexity for any d ≤ n. We show that for d random linear queries of sensitivity 1, it is necessary and sufficient to add l2-error Θ(min{d√d/ε, d√(log(n/d))/ε}) to achieve ε-differential privacy. Assuming the truth of a deep conjecture from convex geometry, known as the Hyperplane conjecture, we can extend our results to arbitrary linear queries giving nearly matching upper and lower bounds. Our bound translates to error O(min{d/ε, √(d log(n/d))/ε}) per answer. The best previous upper bound (Laplacian mechanism) gives a bound of O(min{d/ε, √n/ε}) per answer, while the best known lower bound was Ω(√d/ε). In contrast, our lower bound is strong enough to separate the concept of differential privacy from the notion of approximate differential privacy where an upper bound of O(√d/ε) can be achieved.
---
paper_title: A learning theory approach to non-interactive database privacy
paper_content:
In this paper we demonstrate that, ignoring computational constraints, it is possible to privately release synthetic databases that are useful for large classes of queries -- much larger in size than the database itself. Specifically, we give a mechanism that privately releases synthetic data for a class of queries over a discrete domain with error that grows as a function of the size of the smallest net approximately representing the answers to that class of queries. We show that this in particular implies a mechanism for counting queries that gives error guarantees that grow only with the VC-dimension of the class of queries, which itself grows only logarithmically with the size of the query class. We also show that it is not possible to privately release even simple classes of queries (such as intervals and their generalizations) over continuous domains. Despite this, we give a privacy-preserving polynomial time algorithm that releases information useful for all halfspace queries, given a slight relaxation of the utility guarantee. This algorithm does not release synthetic data, but instead another data structure capable of representing an answer for each query. We also give an efficient algorithm for releasing synthetic data for the class of interval queries and axis-aligned rectangles of constant dimension. Finally, inspired by learning theory, we introduce a new notion of data privacy, which we call distributional privacy, and show that it is strictly stronger than the prevailing privacy notion, differential privacy.
---
paper_title: Publishing Set-Valued Data via Differential Privacy
paper_content:
Set-valued data provides enormous opportunities for various data mining tasks. In this paper, we study the problem of publishing set-valued data for data mining tasks under the rigorous differential privacy model. All existing data publishing methods for set-valued data are based on partition-based privacy models, for example k-anonymity, which are vulnerable to privacy attacks based on background knowledge. In contrast, differential privacy provides strong privacy guarantees independent of an adversary’s background knowledge, computational power or subsequent behavior. Existing data publishing approaches for differential privacy, however, are not adequate in terms of both utility and scalability in the context of set-valued data due to its high dimensionality. We demonstrate that set-valued data could be efficiently released under differential privacy with guaranteed utility with the help of context-free taxonomy trees. We propose a probabilistic top-down partitioning algorithm to generate a differentially private release, which scales linearly with the input data size. We also discuss the applicability of our idea to the context of relational data. We prove that our result is (ε, δ)-useful for the class of counting queries, the foundation of many data mining tasks. We show that our approach maintains high utility for counting queries and frequent itemset mining and scales to large datasets through extensive experiments on real-life set-valued datasets.
---
paper_title: Compressive mechanism: utilizing sparse representation in differential privacy
paper_content:
Differential privacy provides the first theoretical foundation with provable privacy guarantee against adversaries with arbitrary prior knowledge. The main idea to achieve differential privacy is to inject random noise into statistical query results. Besides correctness, the most important goal in the design of a differentially private mechanism is to reduce the effect of random noise, ensuring that the noisy results can still be useful. This paper proposes the compressive mechanism, a novel solution on the basis of state-of-the-art compression technique, called compressive sensing. Compressive sensing is a decent theoretical tool for compact synopsis construction, using random projections. In this paper, we show that the amount of noise is significantly reduced from O(n) to O(log(n)), when the noise insertion procedure is carried on the synopsis samples instead of the original database. As an extension, we also apply the proposed compressive mechanism to solve the problem of continual release of statistical results. Extensive experiments using real datasets justify our accuracy claims.
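The general "add noise to a compact synopsis" idea can be sketched as below: project the data vector with a random matrix, add Laplace noise calibrated to the projection's L1 sensitivity, and reconstruct. The least-squares recovery is a crude stand-in for the sparse (l1) reconstruction used in compressive sensing, and all sizes and values are illustrative.

```python
# Noise on a random-projection synopsis instead of on the raw counts.
import numpy as np

def compressive_sketch(x, m, epsilon, rng=None):
    rng = rng or np.random.default_rng(0)
    n = len(x)
    Phi = rng.choice([-1.0, 1.0], size=(m, n)) / m      # random +/-1 projection
    sensitivity = np.abs(Phi).sum(axis=0).max()         # L1 sensitivity of x -> Phi x
    y = Phi @ x + rng.laplace(0.0, sensitivity / epsilon, size=m)
    x_hat, *_ = np.linalg.lstsq(Phi, y, rcond=None)     # crude stand-in for l1 recovery
    return x_hat

x = np.zeros(64)
x[[3, 17, 42]] = [20.0, 5.0, 12.0]                      # a sparse toy histogram
print(np.round(compressive_sketch(x, m=32, epsilon=1.0), 1)[:8])
```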
---
paper_title: Differentially Private Trajectory Data Publication
paper_content:
With the increasing prevalence of location-aware devices, trajectory data has been generated and collected in various application domains. Trajectory data carries rich information that is useful for many data analysis tasks. Yet, improper publishing and use of trajectory data could jeopardize individual privacy. However, it has been shown that existing privacy-preserving trajectory data publishing methods derived from partition-based privacy models, for example k-anonymity, are unable to provide sufficient privacy protection. In this paper, motivated by the data publishing scenario at the Société de transport de Montréal (STM), the public transit agency in Montreal area, we study the problem of publishing trajectory data under the rigorous differential privacy model. We propose an efficient data-dependent yet differentially private sanitization algorithm, which is applicable to different types of trajectory data. The efficiency of our approach comes from adaptively narrowing down the output domain by building a noisy prefix tree based on the underlying data. Moreover, as a post-processing step, we make use of the inherent constraints of a prefix tree to conduct constrained inferences, which lead to better utility. This is the first paper to introduce a practical solution for publishing large volume of trajectory data under differential privacy. We examine the utility of sanitized data in terms of count queries and frequent sequential pattern mining. Extensive experiments on real-life trajectory data from the STM demonstrate that our approach maintains high utility and is scalable to large trajectory datasets.
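A hypothetical, much-reduced sketch of a noisy prefix tree is shown below: node counts receive Laplace noise and low-count branches are pruned. The even budget split across levels, the threshold, and the toy trajectories are illustrative choices, not the paper's exact parameters.

```python
# Noisy prefix tree over sequences (trajectories), with pruning of weak branches.
import numpy as np

def noisy_prefix_tree(sequences, epsilon, max_depth, threshold, rng=None):
    rng = rng or np.random.default_rng()
    eps_per_level = epsilon / max_depth          # sequential composition over levels
    tree = {}

    def expand(node, seqs, depth):
        if depth == max_depth:
            return
        groups = {}
        for s in seqs:                           # each sequence falls in one branch
            if len(s) > depth:
                groups.setdefault(s[depth], []).append(s)
        for symbol, members in groups.items():
            noisy = len(members) + rng.laplace(0.0, 1.0 / eps_per_level)
            if noisy >= threshold:               # prune unreliable branches
                child = {"count": noisy, "children": {}}
                node[symbol] = child
                expand(child["children"], members, depth + 1)

    expand(tree, sequences, 0)
    return tree

trips = [("A", "B"), ("A", "B", "C"), ("A", "C"), ("B", "C")]  # toy trajectories
print(noisy_prefix_tree(trips, epsilon=1.0, max_depth=2, threshold=1.0))
```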
---
paper_title: Differentially Private Data Release through Multidimensional Partitioning
paper_content:
Differential privacy is a strong notion for protecting individual privacy in privacy preserving data analysis or publishing. In this paper, we study the problem of differentially private histogram release based on an interactive differential privacy interface. We propose two multidimensional partitioning strategies including a baseline cell-based partitioning and an innovative kd-tree based partitioning. In addition to providing formal proofs for differential privacy and usefulness guarantees for linear distributive queries, we also present a set of experimental results and demonstrate the feasibility and performance of our method.
---
paper_title: Differentially Private Spatial Decompositions
paper_content:
Differential privacy has recently emerged as the de facto standard for private data release. This makes it possible to provide strong theoretical guarantees on the privacy and utility of released data. While it is well-understood how to release data based on counts and simple functions under this guarantee, it remains to provide general purpose techniques to release data that is useful for a variety of queries. In this paper, we focus on spatial data such as locations and more generally any multi-dimensional data that can be indexed by a tree structure. Directly applying existing differential privacy methods to this type of data simply generates noise. We propose instead the class of "private spatial decompositions": these adapt standard spatial indexing methods such as quad trees and kd-trees to provide a private description of the data distribution. Equipping such structures with differential privacy requires several steps to ensure that they provide meaningful privacy guarantees. Various basic steps, such as choosing splitting points and describing the distribution of points within a region, must be done privately, and the guarantees of the different building blocks composed to provide an overall guarantee. Consequently, we expose the design space for private spatial decompositions, and analyze some key examples. A major contribution of our work is to provide new techniques for parameter setting and post-processing the output to improve the accuracy of query answers. Our experimental study demonstrates that it is possible to build such decompositions efficiently, and use them to answer a variety of queries privately with high accuracy.
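As an illustration of one such decomposition, here is a hedged sketch of a fixed-depth private quadtree: every node stores a Laplace-noised count of the points in its box. The uniform budget split across levels and the toy data are illustrative; the paper's contribution is precisely in making choices such as splitting points and budget allocation carefully.

```python
# Fixed-depth private quadtree over 2-D points with noisy node counts.
import numpy as np

def private_quadtree(points, box, depth, epsilon, rng=None):
    rng = rng or np.random.default_rng()
    eps_level = epsilon / (depth + 1)            # budget split across tree levels

    def build(pts, bx, d):
        noisy = len(pts) + rng.laplace(0.0, 1.0 / eps_level)
        node = {"box": bx, "count": noisy, "children": []}
        if d == depth:
            return node
        x0, y0, x1, y1 = bx
        mx, my = (x0 + x1) / 2, (y0 + y1) / 2
        quads = [(x0, y0, mx, my), (mx, y0, x1, my),
                 (x0, my, mx, y1), (mx, my, x1, y1)]
        for q in quads:
            inside = [(px, py) for (px, py) in pts
                      if q[0] <= px < q[2] and q[1] <= py < q[3]]
            node["children"].append(build(inside, q, d + 1))
        return node

    return build(points, box, 0)

pts = [(0.1, 0.2), (0.8, 0.7), (0.4, 0.9)]       # toy locations in the unit square
tree = private_quadtree(pts, box=(0.0, 0.0, 1.0, 1.0), depth=2, epsilon=1.0)
print(tree["count"])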
---
paper_title: Differentially Private Publication of Sparse Data
paper_content:
The problem of privately releasing data is to provide a version of a dataset without revealing sensitive information about the individuals who contribute to the data. The model of differential privacy allows such private release while providing strong guarantees on the output. A basic mechanism achieves differential privacy by adding noise to the frequency counts in the contingency tables (or, a subset of the count data cube) derived from the dataset. However, when the dataset is sparse in its underlying space, as is the case for most multi-attribute relations, then the effect of adding noise is to vastly increase the size of the published data: it implicitly creates a huge number of dummy data points to mask the true data, making it almost impossible to work with. We present techniques to overcome this roadblock and allow efficient private release of sparse data, while maintaining the guarantees of differential privacy. Our approach is to release a compact summary of the noisy data. Generating the noisy data and then summarizing it would still be very costly, so we show how to shortcut this step, and instead directly generate the summary from the input data, without materializing the vast intermediate noisy data. We instantiate this outline for a variety of sampling and filtering methods, and show how to use the resulting summary for approximate, private, query answering. Our experimental study shows that this is an effective, practical solution, with comparable and occasionally improved utility over the costly materialization approach.
---
paper_title: Differentially private data cubes: optimizing noise sources and consistency
paper_content:
Data cubes play an essential role in data analysis and decision support. In a data cube, data from a fact table is aggregated on subsets of the table's dimensions, forming a collection of smaller tables called cuboids. When the fact table includes sensitive data such as salary or diagnosis, publishing even a subset of its cuboids may compromise individuals' privacy. In this paper, we address this problem using differential privacy (DP), which provides provable privacy guarantees for individuals by adding noise to query answers. We choose an initial subset of cuboids to compute directly from the fact table, injecting DP noise as usual; and then compute the remaining cuboids from the initial set. Given a fixed privacy guarantee, we show that it is NP-hard to choose the initial set of cuboids so that the maximal noise over all published cuboids is minimized, or so that the number of cuboids with noise below a given threshold (precise cuboids) is maximized. We provide an efficient procedure with running time polynomial in the number of cuboids to select the initial set of cuboids, such that the maximal noise in all published cuboids will be within a factor (ln|L| + 1)^2 of the optimal, where |L| is the number of cuboids to be published, or the number of precise cuboids will be within a factor (1 - 1/e) of the optimal. We also show how to enforce consistency in the published cuboids while simultaneously improving their utility (reducing error). In an empirical evaluation on real and synthetic data, we report the amounts of error of different publishing algorithms, and show that our approaches outperform baselines significantly.
---
paper_title: Differential Privacy via Wavelet Transforms
paper_content:
Privacy preserving data publishing has attracted considerable research interest in recent years. Among the existing solutions, ε-differential privacy provides one of the strongest privacy guarantees. Existing data publishing methods that achieve ε-differential privacy, however, offer little data utility. In particular, if the output dataset is used to answer count queries, the noise in the query answers can be proportional to the number of tuples in the data, which renders the results useless. In this paper, we develop a data publishing technique that ensures ε-differential privacy while providing accurate answers for range-count queries, i.e., count queries where the predicate on each attribute is a range. The core of our solution is a framework that applies wavelet transforms on the data before adding noise to it. We present instantiations of the proposed framework for both ordinal and nominal data, and we provide a theoretical analysis on their privacy and utility guarantees. In an extensive experimental study on both real and synthetic data, we show the effectiveness and efficiency of our solution.
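A simplified sketch of the "noise in the wavelet domain" idea follows: apply a Haar transform to the frequency vector, add Laplace noise calibrated to the transform's L1 sensitivity, and invert. For brevity the noise scale is uniform across coefficients, whereas the actual method weights coefficients by level to obtain its range-query guarantees; the histogram is illustrative.

```python
# Haar-transform-then-perturb sketch (histogram length must be a power of two).
import numpy as np

def haar_matrix(n: int) -> np.ndarray:
    # Unnormalized Haar analysis matrix built recursively:
    # top half = pairwise averages, bottom half = pairwise half-differences.
    if n == 1:
        return np.array([[1.0]])
    h = haar_matrix(n // 2)
    top = np.kron(h, [0.5, 0.5])
    bottom = np.kron(np.eye(n // 2), [0.5, -0.5])
    return np.vstack([top, bottom])

def noisy_wavelet_release(hist, epsilon, rng=None):
    rng = rng or np.random.default_rng()
    H = haar_matrix(len(hist))
    # L1 sensitivity of x -> Hx: max column sum of |H| (one record moves one cell by 1).
    sensitivity = np.abs(H).sum(axis=0).max()
    noisy_coeffs = H @ hist + rng.laplace(0.0, sensitivity / epsilon, size=len(hist))
    return np.linalg.solve(H, noisy_coeffs)      # invert back to (noisy) counts

hist = np.array([12., 7., 30., 5., 0., 2., 9., 14.])   # illustrative counts
print(np.round(noisy_wavelet_release(hist, epsilon=1.0), 1))
```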
---
paper_title: Fast Private Data Release Algorithms for Sparse Queries
paper_content:
We revisit the problem of accurately answering large classes of statistical queries while preserving differential privacy. Previous approaches to this problem have either been very general but have not had run-time polynomial in the size of the database, have applied only to very limited classes of queries, or have relaxed the notion of worst-case error guarantees. In this paper we consider the large class of sparse queries, which take non-zero values on only polynomially many universe elements. We give efficient query release algorithms for this class, in both the interactive and the non-interactive setting. Our algorithms also achieve better accuracy bounds than previous general techniques do when applied to sparse queries: our bounds are independent of the universe size. In fact, even the runtime of our interactive mechanism is independent of the universe size, and so can be implemented in the "infinite universe" model in which no finite universe need be specified by the data curator.
---
paper_title: Boosting the Accuracy of Differentially Private Histograms Through Consistency
paper_content:
We show that it is possible to significantly improve the accuracy of a general class of histogram queries while satisfying differential privacy. Our approach carefully chooses a set of queries to evaluate, and then exploits consistency constraints that should hold over the noisy output. In a post-processing phase, we compute the consistent input most likely to have produced the noisy output. The final output is differentially-private and consistent, but in addition, it is often much more accurate. We show, both theoretically and experimentally, that these techniques can be used for estimating the degree sequence of a graph very precisely, and for computing a histogram that can support arbitrary range queries accurately.
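A two-level sketch of the "noisy hierarchical counts plus consistency" idea follows: a noisy total and noisy bucket counts are post-processed by least squares so the buckets sum exactly to the total, which also reduces error. The full method applies constrained inference over a deeper tree; the even budget split and the counts here are illustrative.

```python
# Two-level hierarchical counts with least-squares consistency enforcement.
import numpy as np

def consistent_two_level(children, epsilon, rng=None):
    rng = rng or np.random.default_rng()
    eps = epsilon / 2.0                          # half the budget per level
    noisy_total = sum(children) + rng.laplace(0.0, 1.0 / eps)
    noisy_children = np.array(children, dtype=float) \
        + rng.laplace(0.0, 1.0 / eps, size=len(children))
    # Least-squares adjustment enforcing sum(children) == total.
    gap = noisy_total - noisy_children.sum()
    adjusted_children = noisy_children + gap / (len(children) + 1)
    adjusted_total = adjusted_children.sum()
    return adjusted_total, adjusted_children

print(consistent_two_level([12, 7, 30, 5], epsilon=1.0))
```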
---
paper_title: Compressive mechanism: utilizing sparse representation in differential privacy
paper_content:
Differential privacy provides the first theoretical foundation with provable privacy guarantee against adversaries with arbitrary prior knowledge. The main idea to achieve differential privacy is to inject random noise into statistical query results. Besides correctness, the most important goal in the design of a differentially private mechanism is to reduce the effect of random noise, ensuring that the noisy results can still be useful. This paper proposes the compressive mechanism, a novel solution on the basis of state-of-the-art compression technique, called compressive sensing. Compressive sensing is a decent theoretical tool for compact synopsis construction, using random projections. In this paper, we show that the amount of noise is significantly reduced from O(n) to O(log(n)), when the noise insertion procedure is carried on the synopsis samples instead of the original database. As an extension, we also apply the proposed compressive mechanism to solve the problem of continual release of statistical results. Extensive experiments using real datasets justify our accuracy claims.
---
paper_title: Differentially Private Data Release through Multidimensional Partitioning
paper_content:
Differential privacy is a strong notion for protecting individual privacy in privacy preserving data analysis or publishing. In this paper, we study the problem of differentially private histogram release based on an interactive differential privacy interface. We propose two multidimensional partitioning strategies including a baseline cell-based partitioning and an innovative kd-tree based partitioning. In addition to providing formal proofs for differential privacy and usefulness guarantees for linear distributive queries, we also present a set of experimental results and demonstrate the feasibility and performance of our method.
---
paper_title: On the complexity of differentially private data release: efficient algorithms and hardness results
paper_content:
We consider private data analysis in the setting in which a trusted and trustworthy curator, having obtained a large data set containing private information, releases to the public a "sanitization" of the data set that simultaneously protects the privacy of the individual contributors of data and offers utility to the data analyst. The sanitization may be in the form of an arbitrary data structure, accompanied by a computational procedure for determining approximate answers to queries on the original data set, or it may be a "synthetic data set" consisting of data items drawn from the same universe as items in the original data set; queries are carried out as if the synthetic data set were the actual input. In either case the process is non-interactive; once the sanitization has been released the original data and the curator play no further role. For the task of sanitizing with a synthetic dataset output, we map the boundary between computational feasibility and infeasibility with respect to a variety of utility measures. For the (potentially easier) task of sanitizing with unrestricted output format, we show a tight qualitative and quantitative connection between hardness of sanitizing and the existence of traitor tracing schemes.
---
paper_title: Optimizing linear counting queries under differential privacy
paper_content:
Differential privacy is a robust privacy standard that has been successfully applied to a range of data analysis tasks. But despite much recent work, optimal strategies for answering a collection of related queries are not known. We propose the matrix mechanism, a new algorithm for answering a workload of predicate counting queries. Given a workload, the mechanism requests answers to a different set of queries, called a query strategy, which are answered using the standard Laplace mechanism. Noisy answers to the workload queries are then derived from the noisy answers to the strategy queries. This two stage process can result in a more complex correlated noise distribution that preserves differential privacy but increases accuracy. We provide a formal analysis of the error of query answers produced by the mechanism and investigate the problem of computing the optimal query strategy in support of a given workload. We show this problem can be formulated as a rank-constrained semidefinite program. Finally, we analyze two seemingly distinct techniques, whose similar behavior is explained by viewing them as instances of the matrix mechanism.
---
paper_title: Differentially Private Publication of Sparse Data
paper_content:
The problem of privately releasing data is to provide a version of a dataset without revealing sensitive information about the individuals who contribute to the data. The model of differential privacy allows such private release while providing strong guarantees on the output. A basic mechanism achieves differential privacy by adding noise to the frequency counts in the contingency tables (or, a subset of the count data cube) derived from the dataset. However, when the dataset is sparse in its underlying space, as is the case for most multi-attribute relations, then the effect of adding noise is to vastly increase the size of the published data: it implicitly creates a huge number of dummy data points to mask the true data, making it almost impossible to work with. We present techniques to overcome this roadblock and allow efficient private release of sparse data, while maintaining the guarantees of differential privacy. Our approach is to release a compact summary of the noisy data. Generating the noisy data and then summarizing it would still be very costly, so we show how to shortcut this step, and instead directly generate the summary from the input data, without materializing the vast intermediate noisy data. We instantiate this outline for a variety of sampling and filtering methods, and show how to use the resulting summary for approximate, private, query answering. Our experimental study shows that this is an effective, practical solution, with comparable and occasionally improved utility over the costly materialization approach.
---
paper_title: Differentially private data cubes: optimizing noise sources and consistency
paper_content:
Data cubes play an essential role in data analysis and decision support. In a data cube, data from a fact table is aggregated on subsets of the table's dimensions, forming a collection of smaller tables called cuboids. When the fact table includes sensitive data such as salary or diagnosis, publishing even a subset of its cuboids may compromise individuals' privacy. In this paper, we address this problem using differential privacy (DP), which provides provable privacy guarantees for individuals by adding noise to query answers. We choose an initial subset of cuboids to compute directly from the fact table, injecting DP noise as usual; and then compute the remaining cuboids from the initial set. Given a fixed privacy guarantee, we show that it is NP-hard to choose the initial set of cuboids so that the maximal noise over all published cuboids is minimized, or so that the number of cuboids with noise below a given threshold (precise cuboids) is maximized. We provide an efficient procedure with running time polynomial in the number of cuboids to select the initial set of cuboids, such that the maximal noise in all published cuboids will be within a factor (ln|L| + 1)^2 of the optimal, where |L| is the number of cuboids to be published, or the number of precise cuboids will be within a factor (1 - 1/e) of the optimal. We also show how to enforce consistency in the published cuboids while simultaneously improving their utility (reducing error). In an empirical evaluation on real and synthetic data, we report the amounts of error of different publishing algorithms, and show that our approaches outperform baselines significantly.
---
paper_title: Differential Privacy via Wavelet Transforms
paper_content:
Privacy preserving data publishing has attracted considerable research interest in recent years. Among the existing solutions, ε-differential privacy provides one of the strongest privacy guarantees. Existing data publishing methods that achieve ε-differential privacy, however, offer little data utility. In particular, if the output dataset is used to answer count queries, the noise in the query answers can be proportional to the number of tuples in the data, which renders the results useless. In this paper, we develop a data publishing technique that ensures ε-differential privacy while providing accurate answers for range-count queries, i.e., count queries where the predicate on each attribute is a range. The core of our solution is a framework that applies wavelet transforms on the data before adding noise to it. We present instantiations of the proposed framework for both ordinal and nominal data, and we provide a theoretical analysis on their privacy and utility guarantees. In an extensive experimental study on both real and synthetic data, we show the effectiveness and efficiency of our solution.
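The following sketch shows the shape of the approach in Python; it is a simplification of mine (an unnormalized Haar transform and a single uniform noise scale), whereas the paper calibrates noise per wavelet level.

    import numpy as np

    rng = np.random.default_rng(3)

    def haar(v):
        """Unnormalized Haar decomposition of a length-2^k vector (sketch)."""
        v, out = v.astype(float).copy(), []
        while len(v) > 1:
            out.append((v[0::2] - v[1::2]) / 2.0)   # detail coefficients
            v = (v[0::2] + v[1::2]) / 2.0           # pairwise averages
        out.append(v)                               # overall average
        return out

    def inverse_haar(coeffs):
        v = coeffs[-1].copy()
        for diff in reversed(coeffs[:-1]):
            up = np.empty(2 * len(v))
            up[0::2], up[1::2] = v + diff, v - diff
            v = up
        return v

    counts = rng.integers(0, 20, size=16).astype(float)  # hypothetical histogram
    epsilon = 1.0

    coeffs = haar(counts)
    # Uniform noise scale for brevity; the paper uses level-dependent scales.
    noisy = [c + rng.laplace(scale=2.0 / epsilon, size=c.shape) for c in coeffs]
    noisy_counts = inverse_haar(noisy)   # supports approximate range counts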
---
paper_title: Fast Private Data Release Algorithms for Sparse Queries
paper_content:
We revisit the problem of accurately answering large classes of statistical queries while preserving differential privacy. Previous approaches to this problem have either been very general but have not had run-time polynomial in the size of the database, have applied only to very limited classes of queries, or have relaxed the notion of worst-case error guarantees. In this paper we consider the large class of sparse queries, which take non-zero values on only polynomially many universe elements. We give efficient query release algorithms for this class, in both the interactive and the non-interactive setting. Our algorithms also achieve better accuracy bounds than previous general techniques do when applied to sparse queries: our bounds are independent of the universe size. In fact, even the runtime of our interactive mechanism is independent of the universe size, and so can be implemented in the "infinite universe" model in which no finite universe need be specified by the data curator.
---
paper_title: Releasing search queries and clicks privately
paper_content:
The question of how to publish an anonymized search log was brought to the forefront by a well-intentioned, but privacy-unaware AOL search log release. Since then a series of ad-hoc techniques have been proposed in the literature, though none are known to be provably private. In this paper, we take a major step towards a solution: we show how queries, clicks and their associated perturbed counts can be published in a manner that rigorously preserves privacy. Our algorithm is decidedly simple to state, but non-trivial to analyze. On the opposite side of privacy is the question of whether the data we can safely publish is of any use. Our findings offer a glimmer of hope: we demonstrate that a non-negligible fraction of queries and clicks can indeed be safely published via a collection of experiments on a real search log. In addition, we select an application, keyword generation, and show that the keyword suggestions generated from the perturbed data resemble those generated from the original data.
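A schematic version of the publish-only-frequent-items idea in the abstract above; the threshold, the noise scale, and the assumption that each user contributes at most once to each count are placeholders of this sketch, not the values or conditions derived in the paper.

    import numpy as np

    rng = np.random.default_rng(4)

    # Hypothetical query -> count map extracted from a search log.
    query_counts = {"weather": 5400, "rare disease clinic": 3, "news": 8900}

    epsilon, threshold = 1.0, 100.0   # placeholder parameters

    published = {}
    for query, count in query_counts.items():
        # Assumes a user changes each count by at most 1 (sensitivity 1).
        noisy = count + rng.laplace(scale=1.0 / epsilon)
        if noisy > threshold:         # only sufficiently frequent queries survive
            published[query] = noisy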
---
paper_title: Publishing Search Logs—A Comparative Study of Privacy Guarantees
paper_content:
Search engine companies collect the “database of intentions,” the histories of their users' search queries. These search logs are a gold mine for researchers. Search engine companies, however, are wary of publishing search logs in order not to disclose sensitive information. In this paper, we analyze algorithms for publishing frequent keywords, queries, and clicks of a search log. We first show how methods that achieve variants of k-anonymity are vulnerable to active attacks. We then demonstrate that the stronger guarantee ensured by ε-differential privacy unfortunately does not provide any utility for this problem. We then propose an algorithm ZEALOUS and show how to set its parameters to achieve (ε, δ)-probabilistic privacy. We also contrast our analysis of ZEALOUS with an analysis by Korolova et al. [17] that achieves (ε′, δ′)-indistinguishability. Our paper concludes with a large experimental study using real applications where we compare ZEALOUS and previous work that achieves k-anonymity in search log publishing. Our results show that ZEALOUS yields comparable utility to k-anonymity while at the same time achieving much stronger privacy guarantees.
---
paper_title: Differentially private search log sanitization with optimal output utility
paper_content:
Web search logs contain extremely sensitive data, as evidenced by the recent AOL incident. However, storing and analyzing search logs can be very useful for many purposes (i.e. investigating human behavior). Thus, an important research question is how to privately sanitize search logs. Several search log anonymization techniques have been proposed with concrete privacy models. However, in all of these solutions, the output utility of the techniques is only evaluated rather than being maximized in any fashion. Indeed, for effective search log anonymization, it is desirable to derive the outputs with optimal utility while meeting the privacy standard. In this paper, we propose utility-maximizing sanitization based on the rigorous privacy standard of differential privacy, in the context of search logs. Specifically, we utilize optimization models to maximize the output utility of the sanitization for different applications, while ensuring that the production process satisfies differential privacy. An added benefit is that our novel randomization strategy maintains the schema integrity in the output search logs. A comprehensive evaluation on real search logs validates the approach and demonstrates its robustness and scalability.
---
paper_title: Differentially Private Spatial Decompositions
paper_content:
Differential privacy has recently emerged as the de facto standard for private data release. This makes it possible to provide strong theoretical guarantees on the privacy and utility of released data. While it is well-understood how to release data based on counts and simple functions under this guarantee, it remains to provide general purpose techniques to release data that is useful for a variety of queries. In this paper, we focus on spatial data such as locations and more generally any multi-dimensional data that can be indexed by a tree structure. Directly applying existing differential privacy methods to this type of data simply generates noise. We propose instead the class of "private spatial decompositions": these adapt standard spatial indexing methods such as quad trees and kd-trees to provide a private description of the data distribution. Equipping such structures with differential privacy requires several steps to ensure that they provide meaningful privacy guarantees. Various basic steps, such as choosing splitting points and describing the distribution of points within a region, must be done privately, and the guarantees of the different building blocks composed to provide an overall guarantee. Consequently, we expose the design space for private spatial decompositions, and analyze some key examples. A major contribution of our work is to provide new techniques for parameter setting and post-processing the output to improve the accuracy of query answers. Our experimental study demonstrates that it is possible to build such decompositions efficiently, and use them to answer a variety of queries privately with high accuracy.
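To give a feel for a private spatial decomposition, here is a much-simplified quadtree sketch with an even split of the privacy budget across levels; the paper's guidance on budget allocation, stopping rules, and post-processing is not reproduced here.

    import numpy as np

    rng = np.random.default_rng(5)

    def noisy_quadtree(points, box, depth, eps_per_level):
        """Quadtree whose node counts carry Laplace noise (sketch)."""
        x0, y0, x1, y1 = box
        node = {"box": box,
                "noisy_count": len(points) + rng.laplace(scale=1.0 / eps_per_level),
                "children": []}
        if depth == 0:
            return node
        xm, ym = (x0 + x1) / 2.0, (y0 + y1) / 2.0
        for cbox in [(x0, y0, xm, ym), (xm, y0, x1, ym),
                     (x0, ym, xm, y1), (xm, ym, x1, y1)]:
            inside = [(px, py) for px, py in points
                      if cbox[0] <= px < cbox[2] and cbox[1] <= py < cbox[3]]
            node["children"].append(
                noisy_quadtree(inside, cbox, depth - 1, eps_per_level))
        return node

    pts = [tuple(p) for p in rng.random((1000, 2))]
    levels, total_eps = 3, 1.0
    tree = noisy_quadtree(pts, (0.0, 0.0, 1.0, 1.0), levels,
                          total_eps / (levels + 1))   # uniform budget split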
---
paper_title: Private record matching using differential privacy
paper_content:
Private matching between datasets owned by distinct parties is a challenging problem with several applications. Private matching allows two parties to identify the records that are close to each other according to some distance functions, such that no additional information other than the join result is disclosed to any party. Private matching can be solved securely and accurately using secure multi-party computation (SMC) techniques, but such an approach is prohibitively expensive in practice. Previous work proposed the release of sanitized versions of the sensitive datasets which allows blocking, i.e., filtering out sub-sets of records that cannot be part of the join result. This way, SMC is applied only to a small fraction of record pairs, reducing the matching cost to acceptable levels. The blocking step is essential for the privacy, accuracy and efficiency of matching. However, the state-of-the-art focuses on sanitization based on k-anonymity, which does not provide sufficient privacy. We propose an alternative design centered on differential privacy, a novel paradigm that provides strong privacy guarantees. The realization of the new model presents difficult challenges, such as the evaluation of distance-based matching conditions with the help of only a statistical queries interface. Specialized versions of data indexing structures (e.g., kd-trees) also need to be devised, in order to comply with differential privacy. Experiments conducted on the real-world Census-income dataset show that, although our methods provide strong privacy, their effectiveness in reducing matching cost is not far from that of k-anonymity based counterparts.
---
paper_title: Calibrating Noise to Sensitivity in Private Data Analysis
paper_content:
We continue a line of research initiated in [10, 11] on privacy-preserving statistical databases. Consider a trusted server that holds a database of sensitive information. Given a query function f mapping databases to reals, the so-called true answer is the result of applying f to the database. To protect privacy, the true answer is perturbed by the addition of random noise generated according to a carefully chosen distribution, and this response, the true answer plus noise, is returned to the user. Previous work focused on the case of noisy sums, in which f = Σ_i g(x_i), where x_i denotes the i-th row of the database and g maps database rows to [0,1]. We extend the study to general functions f, proving that privacy can be preserved by calibrating the standard deviation of the noise according to the sensitivity of the function f. Roughly speaking, this is the amount that any single argument to f can change its output. The new analysis shows that for several particular applications substantially less noise is needed than was previously understood to be the case. The first step is a very clean characterization of privacy in terms of indistinguishability of transcripts. Additionally, we obtain separation results showing the increased value of interactive sanitization mechanisms over non-interactive.
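In its now-standard form, a query f with global sensitivity Δf = max |f(D) − f(D′)| over neighbouring databases can be answered with noise drawn from Laplace(Δf/ε); the snippet below is a generic sketch of that recipe, not code from the paper, and the data and epsilon value are invented for illustration.

    import numpy as np

    rng = np.random.default_rng(6)

    def laplace_mechanism(true_answer, sensitivity, epsilon):
        """Return an epsilon-differentially private answer (generic sketch)."""
        return true_answer + rng.laplace(scale=sensitivity / epsilon)

    ages = np.array([34, 29, 57, 41, 68])   # hypothetical sensitive records

    # The counting query "how many people are over 40?" has sensitivity 1,
    # since adding or removing one record changes the count by at most 1.
    private_count = laplace_mechanism(float((ages > 40).sum()), 1.0, epsilon=0.5)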
---
paper_title: Publishing Set-Valued Data via Differential Privacy
paper_content:
Set-valued data provides enormous opportunities for various data mining tasks. In this paper, we study the problem of publishing set-valued data for data mining tasks under the rigorous differential privacy model. All existing data publishing methods for set-valued data are based on partition-based privacy models, for example k-anonymity, which are vulnerable to privacy attacks based on background knowledge. In contrast, differential privacy provides strong privacy guarantees independent of an adversary’s background knowledge, computational power or subsequent behavior. Existing data publishing approaches for differential privacy, however, are not adequate in terms of both utility and scalability in the context of set-valued data due to its high dimensionality. We demonstrate that set-valued data could be efficiently released under differential privacy with guaranteed utility with the help of context-free taxonomy trees. We propose a probabilistic top-down partitioning algorithm to generate a differentially private release, which scales linearly with the input data size. We also discuss the applicability of our idea to the context of relational data. We prove that our result is (ε, δ)-useful for the class of counting queries, the foundation of many data mining tasks. We show that our approach maintains high utility for counting queries and frequent itemset mining and scales to large datasets through extensive experiments on real-life set-valued datasets.
---
paper_title: Boosting the Accuracy of Differentially Private Histograms Through Consistency
paper_content:
We show that it is possible to significantly improve the accuracy of a general class of histogram queries while satisfying differential privacy. Our approach carefully chooses a set of queries to evaluate, and then exploits consistency constraints that should hold over the noisy output. In a post-processing phase, we compute the consistent input most likely to have produced the noisy output. The final output is differentially-private and consistent, but in addition, it is often much more accurate. We show, both theoretically and experimentally, that these techniques can be used for estimating the degree sequence of a graph very precisely, and for computing a histogram that can support arbitrary range queries accurately.
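A tiny worked example of the consistency step for one parent node with two children (my own closed-form illustration of the least-squares adjustment; the paper treats general hierarchies and the graph degree-sequence case):

    import numpy as np

    rng = np.random.default_rng(7)

    epsilon = 1.0
    true_children = np.array([40.0, 60.0])
    true_parent = true_children.sum()

    # Budget split across the two levels of the small hierarchy (sketch).
    noisy_children = true_children + rng.laplace(scale=2.0 / epsilon, size=2)
    noisy_parent = true_parent + rng.laplace(scale=2.0 / epsilon)

    # Least-squares estimates subject to the children summing to the parent.
    d = (noisy_children.sum() - noisy_parent) / 3.0
    consistent_children = noisy_children - d
    consistent_parent = consistent_children.sum()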
---
paper_title: Differentially Private Trajectory Data Publication
paper_content:
With the increasing prevalence of location-aware devices, trajectory data has been generated and collected in various application domains. Trajectory data carries rich information that is useful for many data analysis tasks. Yet, improper publishing and use of trajectory data could jeopardize individual privacy. However, it has been shown that existing privacy-preserving trajectory data publishing methods derived from partition-based privacy models, for example k-anonymity, are unable to provide sufficient privacy protection. In this paper, motivated by the data publishing scenario at the Societe de transport de Montreal (STM), the public transit agency in the Montreal area, we study the problem of publishing trajectory data under the rigorous differential privacy model. We propose an efficient data-dependent yet differentially private sanitization algorithm, which is applicable to different types of trajectory data. The efficiency of our approach comes from adaptively narrowing down the output domain by building a noisy prefix tree based on the underlying data. Moreover, as a post-processing step, we make use of the inherent constraints of a prefix tree to conduct constrained inferences, which lead to better utility. This is the first paper to introduce a practical solution for publishing large volumes of trajectory data under differential privacy. We examine the utility of sanitized data in terms of count queries and frequent sequential pattern mining. Extensive experiments on real-life trajectory data from the STM demonstrate that our approach maintains high utility and is scalable to large trajectory datasets.
---
paper_title: Privacy: Theory meets Practice on the Map
paper_content:
In this paper, we propose the first formal privacy analysis of a data anonymization process known as the synthetic data generation, a technique becoming popular in the statistics community. The target application for this work is a mapping program that shows the commuting patterns of the population of the United States. The source data for this application were collected by the U.S. Census Bureau, but due to privacy constraints, they cannot be used directly by the mapping program. Instead, we generate synthetic data that statistically mimic the original data while providing privacy guarantees. We use these synthetic data as a surrogate for the original data. We find that while some existing definitions of privacy are inapplicable to our target application, others are too conservative and render the synthetic data useless since they guard against privacy breaches that are very unlikely. Moreover, the data in our target application is sparse, and none of the existing solutions are tailored to anonymize sparse data. In this paper, we propose solutions to address the above issues.
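One crude way to realize "synthetic data that statistically mimic the original" is to resample records from a perturbed histogram; this is only a stand-in sketch of mine, and the paper's actual generator uses a more careful probabilistic model with formal guarantees.

    import numpy as np

    rng = np.random.default_rng(8)

    # Hypothetical counts of (home block, work block) commuting pairs.
    counts = rng.integers(0, 50, size=(10, 10)).astype(float)

    epsilon = 1.0
    noisy = np.clip(counts + rng.laplace(scale=1.0 / epsilon, size=counts.shape),
                    0.0, None)

    probs = noisy / noisy.sum()          # assumes the noisy total is positive
    n_synth = int(counts.sum())

    idx = rng.choice(probs.size, size=n_synth, p=probs.ravel())
    synthetic_pairs = np.column_stack(np.unravel_index(idx, probs.shape))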
---
|
Title: Non-Interactive Differential Privacy: a Survey
Section 1: INTRODUCTION
Description 1: This section covers the motivation behind the survey and the basic definitions related to differential privacy.
Section 2: DIFFERENTIAL PRIVACY
Description 2: This section explains the concept of differential privacy, introduces randomized mechanisms, and provides formal definitions along with examples.
Section 3: Relaxations
Description 3: This section discusses the relaxations of differential privacy and the need for these relaxations to maintain the utility of the data.
Section 4: Is differential privacy good enough?
Description 4: This section evaluates the effectiveness of differential privacy and addresses some criticisms regarding its applicability.
Section 5: Mechanisms
Description 5: This section details various mechanisms used to implement differential privacy, including the Laplace mechanism and the Exponential mechanism.
Section 6: MEASURING UTILITY
Description 6: This section discusses different approaches and metrics to measure the utility of differentially private mechanisms and their effectiveness.
Section 7: METHODS
Description 7: This section outlines different methods for releasing differentially private data, including histogram construction, sampling and filtering, partitioning, and dimensionality reduction.
Section 8: APPLICATIONS
Description 8: This section covers real-world applications of differential privacy and highlights successful case studies.
Section 9: SYNTHETIC DATABASES
Description 9: This section explores the generation and use of synthetic databases to preserve privacy while maintaining data utility.
Section 10: CONCLUSIONS
Description 10: This section summarizes the findings of the survey, underscoring the importance and applicability of differential privacy in various domains.
|
A review of High Performance Computing foundations for scientists
| 7 |
---
paper_title: Essentials Of Computational Chemistry Theories And Models
paper_content:
---
paper_title: 369 Tflop/s molecular dynamics simulations on the petaflop hybrid supercomputer ‘Roadrunner’
paper_content:
We describe the implementation of a short-range parallel molecular dynamics (MD) code, SPaSM, on the heterogeneous general-purpose Roadrunner supercomputer. Each Roadrunner ‘TriBlade’ compute node consists of two AMD Opteron dual-core microprocessors and four IBM PowerXCell 8i enhanced Cell microprocessors (each consisting of one PPU and eight SPU cores), so that there are four MPI ranks per node, each with one Opteron and one Cell. We will briefly describe the Roadrunner architecture and some of the initial hybrid programming approaches that have been taken, focusing on the SPaSM application as a case study. An initial ‘evolutionary’ port, in which the existing legacy code runs with minor modifications on the Opterons and the Cells are only used to compute interatomic forces, achieves roughly a 2× speedup over the unaccelerated code. On the other hand, our ‘revolutionary’ implementation adopts a Cell-centric view, with data structures optimized for, and living on, the Cells. The Opterons are mainly used to direct inter-rank communication and perform I/O-heavy periodic analysis, visualization, and checkpointing tasks. The performance measured for our initial implementation of a standard Lennard–Jones pair potential benchmark reached a peak of 369 Tflop/s double-precision floating-point performance on the full Roadrunner system (27.7% of peak), nearly 10× faster than the unaccelerated (Opteron-only) version.
---
paper_title: Spending Moore's dividend
paper_content:
Multicore computers shift the burden of software performance from chip designers and processor architects to software developers.
---
paper_title: Giant Magnetoresistance of (001)Fe/(001)Cr Magnetic Superlattices
paper_content:
We have studied the magnetoresistance of (001)Fe/(001)Cr superlattices prepared by molecular-beam epitaxy. A huge magnetoresistance is found in superlattices with thin Cr layers: for example, with $t_{\mathrm{Cr}} = 9$ Å, at $T = 4.2$ K, the resistivity is lowered by almost a factor of 2 in a magnetic field of 2 T. We ascribe this giant magnetoresistance to spin-dependent transmission of the conduction electrons between Fe layers through Cr layers.
---
paper_title: Future hard disk drive systems
paper_content:
Abstract This paper briefly reviews the evolution of today's hard disk drive with the additional intention of orienting the reader to the overall mechanical and electrical architecture. The modern hard disk drive is a miracle of storage capacity and function together with remarkable economy of design. This paper presents a personal view of future customer requirements and the anticipated design evolution of the components. There are critical decisions and great challenges ahead for the key technologies of heads, media, head–disk interface, mechanics, and electronics.
---
paper_title: Utilizing high performance computing for chemistry: parallel computational chemistry.
paper_content:
Parallel hardware has become readily available to the computational chemistry research community. This perspective will review the current state of parallel computational chemistry software utilizing high-performance parallel computing platforms. Hardware and software trends and their effect on quantum chemistry methodologies, algorithms, and software development will also be discussed.
---
paper_title: Scientific data management in the coming decade
paper_content:
Scientific instruments and computer simulations are creating vast data stores that require new scientific methods to analyze and organize the data. Data volumes are approximately doubling each year. Since these new instruments have extraordinary precision, the data quality is also rapidly improving. Analyzing this data to find the subtle effects missed by previous studies requires algorithms that can simultaneously deal with huge datasets and that can find very subtle effects --- finding both needles in the haystack and finding very small haystacks that were undetected in previous measurements.
---
paper_title: Introduction to High Performance Computing for Scientists and Engineers
paper_content:
Written by high performance computing (HPC) experts, Introduction to High Performance Computing for Scientists and Engineers provides a solid introduction to current mainstream computer architecture, dominant parallel programming models, and useful optimization strategies for scientific HPC. From working in a scientific computing center, the authors gained a unique perspective on the requirements and attitudes of users as well as manufacturers of parallel computers. The text first introduces the architecture of modern cache-based microprocessors and discusses their inherent performance limitations, before describing general optimization strategies for serial code on cache-based architectures. It next covers shared- and distributed-memory parallel computer architectures and the most relevant network topologies. After discussing parallel computing on a theoretical level, the authors show how to avoid or ameliorate typical performance problems connected with OpenMP. They then present cache-coherent nonuniform memory access (ccNUMA) optimization techniques, examine distributed-memory parallel programming with message passing interface (MPI), and explain how to write efficient MPI code. The final chapter focuses on hybrid programming with MPI and OpenMP. Users of high performance computers often have no idea what factors limit time to solution and whether it makes sense to think about optimization at all. This book facilitates an intuitive understanding of performance limitations without relying on heavy computer science knowledge. It also prepares readers for studying more advanced literature.
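As a minimal illustration of the two programming models the book centers on (MPI across nodes, OpenMP within a node), a hybrid "hello world" in C might look like the sketch below; it is a generic example, not code from the book.

    /* Hybrid MPI + OpenMP sketch: several OpenMP threads inside each MPI rank. */
    #include <mpi.h>
    #include <omp.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        int provided, rank, size;
        /* Ask for threaded MPI support because OpenMP threads run inside each rank. */
        MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        #pragma omp parallel
        {
            printf("MPI rank %d of %d, OpenMP thread %d of %d\n",
                   rank, size, omp_get_thread_num(), omp_get_num_threads());
        }

        MPI_Finalize();
        return 0;
    }

A typical build would compile with an MPI wrapper plus an OpenMP flag (e.g. mpicc -fopenmp) and launch with mpirun, with the thread count per rank set through OMP_NUM_THREADS.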
---
paper_title: An optimal multimedia object allocation solution in multi-powermode storage systems
paper_content:
Given a set of multimedia objects R = {o1, o2, …, ok}, each of which has a set of multiple versions oi.v = {Ai.0, Ai.1, …, Ai.m}, i = 1, 2, …, k, there is a problem of distributing these objects in a server system so that user requests for accessing specified multimedia objects can be fulfilled with the minimum energy consumption and without significant degrading of the system performance. This paper considers the allocation problem of multimedia objects in multi-powermode storage systems, where the objects are distributed among multi-powermode storages based on the access pattern to the objects. We design an underlying infrastructure of the storage system, propose a dynamic multimedia object allocation policy based on the designed infrastructure, and prove the optimality of the proposed policy. Copyright © 2010 John Wiley & Sons, Ltd.
---
paper_title: Spending Moore's dividend
paper_content:
Multicore computers shift the burden of software performance from chip designers and processor architects to software developers.
---
paper_title: The impact of multicore on math software
paper_content:
Power consumption and heat dissipation issues are pushing the microprocessors industry towards multicore design patterns. Given the cubic dependence between core frequency and power consumption, multicore technologies leverage the idea that doubling the number of cores and halving the cores' frequency gives roughly the same performance while reducing the power consumption by a factor of four. With the number of cores on multicore chips expected to reach tens in a few years, efficient implementations of numerical libraries using shared memory programming models are of high interest. The current message passing paradigm used in ScaLAPACK and elsewhere introduces unnecessary memory overhead and memory copy operations, which degrade performance, along with making it harder to schedule operations that could be done in parallel. Limiting the use of shared memory to fork-join parallelism (perhaps with OpenMP) or to its use within the BLAS does not address all these issues.
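The "cubic dependence" argument can be written out explicitly with the standard first-order model of dynamic power (a textbook approximation, not a result from this paper): P_dyn ∝ C·V²·f, and with supply voltage V scaling roughly linearly with frequency f,

    P_{\mathrm{dyn}} \propto C V^{2} f \;\propto\; f^{3},
    \qquad
    P(2 \text{ cores at } f/2) \;\propto\; 2 \left( \frac{f}{2} \right)^{3} = \frac{f^{3}}{4},

so two cores at half the frequency deliver roughly the original throughput at about a quarter of the dynamic power.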
---
paper_title: Utilizing high performance computing for chemistry: parallel computational chemistry.
paper_content:
Parallel hardware has become readily available to the computational chemistry research community. This perspective will review the current state of parallel computational chemistry software utilizing high-performance parallel computing platforms. Hardware and software trends and their effect on quantum chemistry methodologies, algorithms, and software development will also be discussed.
---
paper_title: The impact of IBM Cell technology on the programming paradigm in the context of computer systems for climate and weather models
paper_content:
The call for ever-increasing model resolutions and physical processes in climate and weather models demands a continual increase in computing power. The IBM Cell processor's order-of-magnitude peak performance increase over conventional processors makes it very attractive to fulfill this requirement. However, the Cell's characteristics, 256 kB local memory per SPE and the new low-level communication mechanism, make it very challenging to port an application. As a trial, we selected the solar radiation component of the NASA GEOS-5 climate model, which: (1) is representative of column-physics components (half of the total computational time), (2) has an extremely high computational intensity: the ratio of computational load to main memory transfers, and (3) exhibits embarrassingly parallel column computations. In this paper, we converted the baseline code (single-precision Fortran) to C and ported it to an IBM BladeCenter QS20. For performance, we manually SIMDize four independent columns and include several unrolling optimizations. Our results show that when compared with the baseline implementation running on one core of Intel's Xeon Woodcrest, Dempsey, and Itanium2, the Cell is approximately 8.8x, 11.6x, and 12.8x faster, respectively. Our preliminary analysis shows that the Cell can also accelerate the dynamics component (∼25% of the total computational time). We believe these dramatic performance improvements make the Cell processor very competitive as an accelerator. Copyright © 2009 John Wiley & Sons, Ltd.
---
paper_title: octopus: a tool for the application of time-dependent density functional theory
paper_content:
We report on the background, current status, and current lines of development of the octopus project. This program materializes the main equations of density-functional theory in the ground state, and of time-dependent density-functional theory for dynamical effects. The focus is nowadays placed on the optical (i.e. electronic) linear response properties of nanostructures and biomolecules, and on the non-linear response to high-intensity fields of finite systems, with particular attention to the coupled ionic-electronic motion (i.e. photo-chemical processes). In addition, we are currently extending the code to the treatment of periodic systems (both to one-dimensional chains, two-dimensional slabs, or fully periodic solids), magnetic properties (ground state properties and excitations), and to the field of quantum-mechanical transport or “molecular electronics.” In this communication, we concentrate on the development of the methodology: we review the essential numerical schemes used in the code, and report on the most recent implementations, with special attention to the introduction of adaptive coordinates, to the extension of our real-space technique to tackle periodic systems, and on large-scale parallelization. More information on the code, as well as the code itself, can be found at http://www.tddft.org/programs/octopus/. (© 2006 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim)
---
paper_title: The Future of Microprocessors
paper_content:
The performance of microprocessors that power modern computers has continued to increase exponentially over the years for two main reasons. First, the transistors that are the heart of the circuits in all processors and memory chips have simply become faster over time on a course described by Moore’s law, and this directly affects the performance of processors built with those transistors. Moreover, actual processor performance has increased faster than Moore’s law would predict, because processor designers have been able to harness the increasing numbers of transistors available on modern chips to extract more parallelism from software. This is depicted in figure 1 for Intel’s processors.
---
paper_title: Cramming More Components Onto Integrated Circuits
paper_content:
The future of integrated electronics is the future of electronics itself. The advantages of integration will bring about a proliferation of electronics, pushing this science into many new areas. Integrated circuits will lead to such wonders as home computers—or at least terminals connected to a central computer—automatic controls for automobiles, and personal portable communications equipment. The electronic wristwatch needs only a display to be feasible today. But the biggest potential lies in the production of large systems. In telephone communications, integrated circuits in digital filters will separate channels on multiplex equipment. Integrated circuits will also switch telephone circuits and perform data processing. Computers will be more powerful, and will be organized in completely different ways. For example, memories built of integrated electronics may be distributed throughout the machine instead of being concentrated in a central unit. In addition, the improved reliability made possible by integrated circuits will allow the construction of larger processing units. Machines similar to those in existence today will be built at lower costs and with faster turnaround.
---
paper_title: N-body simulation for self-gravitating collisional systems with a new SIMD instruction set extension to the x86 architecture, Advanced Vector eXtensions
paper_content:
We present a high-performance N-body code for self-gravitating collisional systems accelerated with the aid of a new SIMD instruction set extension of the x86 architecture: Advanced Vector eXtensions (AVX), an enhanced version of the Streaming SIMD Extensions (SSE). With one core of an Intel Core i7-2600 processor (8 MB cache and 3.40 GHz) based on the Sandy Bridge micro-architecture, we implemented a fourth-order Hermite scheme with an individual timestep scheme (Makino and Aarseth, 1992), and achieved a performance of 20 giga floating point operations per second (GFLOPS) for double-precision accuracy, which is two times and five times higher than that of the previously developed code implemented with the SSE instructions (Nitadori et al., 2006b), and that of a code implemented without any explicit use of SIMD instructions on the same processor core, respectively. We have parallelized the code by using the so-called NINJA scheme (Nitadori et al., 2006a), and achieved 90 GFLOPS for a system containing more than N = 8192 particles with 8 MPI processes on four cores. We expect to achieve about 10 tera FLOPS (TFLOPS) for a self-gravitating collisional system with N ∼ 10^5 on massively parallel systems with at most 800 cores with the Sandy Bridge micro-architecture. This performance will be comparable to that of Graphic Processing Unit (GPU) cluster systems, such as the one with about 200 Tesla C1070 GPUs (Spurzem et al., 2010). This paper offers an alternative to collisional N-body simulations with GRAPEs and GPUs.
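To make the 4-wide double-precision SIMD idea concrete, a generic AVX kernel for a direct-summation gravitational acceleration (softened with eps2, G = 1 units, accumulating the pull of N source particles on one target particle) could be sketched as below; this is an illustrative example using standard AVX intrinsics, not the authors' Hermite/NINJA implementation, and it assumes N is a multiple of 4.

    #include <immintrin.h>

    /* Accumulate the softened gravitational acceleration on one target particle
     * (xi, yi, zi) from N source particles, four doubles per AVX instruction. */
    void acc_on_target(int N, const double *x, const double *y, const double *z,
                       const double *m, double xi, double yi, double zi,
                       double eps2, double a[3]) {
        __m256d ax = _mm256_setzero_pd(), ay = _mm256_setzero_pd(), az = _mm256_setzero_pd();
        __m256d vxi = _mm256_set1_pd(xi), vyi = _mm256_set1_pd(yi), vzi = _mm256_set1_pd(zi);
        __m256d veps2 = _mm256_set1_pd(eps2), one = _mm256_set1_pd(1.0);
        for (int j = 0; j < N; j += 4) {
            __m256d dx = _mm256_sub_pd(_mm256_loadu_pd(&x[j]), vxi);
            __m256d dy = _mm256_sub_pd(_mm256_loadu_pd(&y[j]), vyi);
            __m256d dz = _mm256_sub_pd(_mm256_loadu_pd(&z[j]), vzi);
            __m256d r2 = _mm256_add_pd(veps2, _mm256_add_pd(_mm256_mul_pd(dx, dx),
                          _mm256_add_pd(_mm256_mul_pd(dy, dy), _mm256_mul_pd(dz, dz))));
            __m256d rinv = _mm256_div_pd(one, _mm256_sqrt_pd(r2));
            __m256d mr3  = _mm256_mul_pd(_mm256_loadu_pd(&m[j]),
                            _mm256_mul_pd(rinv, _mm256_mul_pd(rinv, rinv))); /* m_j / r^3 */
            ax = _mm256_add_pd(ax, _mm256_mul_pd(mr3, dx));
            ay = _mm256_add_pd(ay, _mm256_mul_pd(mr3, dy));
            az = _mm256_add_pd(az, _mm256_mul_pd(mr3, dz));
        }
        double bx[4], by[4], bz[4];
        _mm256_storeu_pd(bx, ax); _mm256_storeu_pd(by, ay); _mm256_storeu_pd(bz, az);
        a[0] = bx[0] + bx[1] + bx[2] + bx[3];   /* horizontal sums of the 4 lanes */
        a[1] = by[0] + by[1] + by[2] + by[3];
        a[2] = bz[0] + bz[1] + bz[2] + bz[3];
    }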
---
paper_title: octopus: a first-principles tool for excited electron-ion dynamics.
paper_content:
We present a computer package aimed at the simulation of the electron–ion dynamics of finite systems, both in one and three dimensions, under the influence of time-dependent electromagnetic fields. The electronic degrees of freedom are treated quantum mechanically within the time-dependent Kohn–Sham formalism, while the ions are handled classically. All quantities are expanded in a regular mesh in real space, and the simulations are performed in real time. Although not optimized for that purpose, the program is also able to obtain static properties like ground-state geometries, or static polarizabilities. The method employed proved quite reliable and general, and has been successfully used to calculate linear and non-linear absorption spectra, harmonic spectra, laser induced fragmentation, etc. of a variety of systems, from small clusters to medium sized quantum dots. 2002 Elsevier Science B.V. All rights reserved.
---
paper_title: Introduction to High Performance Computing for Scientists and Engineers
paper_content:
Written by high performance computing (HPC) experts, Introduction to High Performance Computing for Scientists and Engineers provides a solid introduction to current mainstream computer architecture, dominant parallel programming models, and useful optimization strategies for scientific HPC. From working in a scientific computing center, the authors gained a unique perspective on the requirements and attitudes of users as well as manufacturers of parallel computers. The text first introduces the architecture of modern cache-based microprocessors and discusses their inherent performance limitations, before describing general optimization strategies for serial code on cache-based architectures. It next covers shared- and distributed-memory parallel computer architectures and the most relevant network topologies. After discussing parallel computing on a theoretical level, the authors show how to avoid or ameliorate typical performance problems connected with OpenMP. They then present cache-coherent nonuniform memory access (ccNUMA) optimization techniques, examine distributed-memory parallel programming with message passing interface (MPI), and explain how to write efficient MPI code. The final chapter focuses on hybrid programming with MPI and OpenMP. Users of high performance computers often have no idea what factors limit time to solution and whether it makes sense to think about optimization at all. This book facilitates an intuitive understanding of performance limitations without relying on heavy computer science knowledge. It also prepares readers for studying more advanced literature.
---
paper_title: 369 Tflop/s molecular dynamics simulations on the petaflop hybrid supercomputer ‘Roadrunner’
paper_content:
We describe the implementation of a short-range parallel molecular dynamics (MD) code, SPaSM, on the heterogeneous general-purpose Roadrunner supercomputer. Each Roadrunner ‘TriBlade’ compute node consists of two AMD Opteron dual-core microprocessors and four IBM PowerXCell 8i enhanced Cell microprocessors (each consisting of one PPU and eight SPU cores), so that there are four MPI ranks per node, each with one Opteron and one Cell. We will briefly describe the Roadrunner architecture and some of the initial hybrid programming approaches that have been taken, focusing on the SPaSM application as a case study. An initial ‘evolutionary’ port, in which the existing legacy code runs with minor modifications on the Opterons and the Cells are only used to compute interatomic forces, achieves roughly a 2× speedup over the unaccelerated code. On the other hand, our ‘revolutionary’ implementation adopts a Cell-centric view, with data structures optimized for, and living on, the Cells. The Opterons are mainly used to direct inter-rank communication and perform I/O-heavy periodic analysis, visualization, and checkpointing tasks. The performance measured for our initial implementation of a standard Lennard–Jones pair potential benchmark reached a peak of 369 Tflop/s double-precision floating-point performance on the full Roadrunner system (27.7% of peak), nearly 10× faster than the unaccelerated (Opteron-only) version. Copyright © 2009 John Wiley & Sons, Ltd.
---
paper_title: Lattice Boltzmann simulation optimization on leading multicore platforms
paper_content:
We present an auto-tuning approach to optimize application performance on emerging multicore architectures. The methodology extends the idea of search-based performance optimizations, popular in linear algebra and FFT libraries, to application-specific computational kernels. Our work applies this strategy to a lattice Boltzmann application (LBMHD) that historically has made poor use of scalar microprocessors due to its complex data structures and memory access patterns. We explore one of the broadest sets of multicore architectures in the HPC literature, including the Intel Clovertown, AMD Opteron X2, Sun Niagara2, STI Cell, as well as the single-core Intel Itanium2. Rather than hand-tuning LBMHD for each system, we develop a code generator that allows us to identify a highly optimized version for each platform, while amortizing the human programming effort. Results show that our auto-tuned LBMHD application achieves up to a 14× improvement compared with the original code. Additionally, we present detailed analysis of each optimization, which reveals surprising hardware bottlenecks and software challenges for future multicore systems and applications.
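A drastically simplified sketch of the search-based tuning idea is given below (the kernel is a hypothetical stand-in; the paper's code generator emits many specialized LBMHD variants rather than a single parameterized loop). It sweeps candidate blocking factors, times each, and keeps the fastest; compile with an OpenMP flag for omp_get_wtime.

    #include <stdio.h>
    #include <omp.h>

    #define N 512
    static double grid[N][N];

    /* Stand-in kernel: a trivial blocked sweep, parameterized by blocking factors. */
    static void run_kernel(int bx, int by) {
        for (int ii = 0; ii < N; ii += bx)
            for (int jj = 0; jj < N; jj += by)
                for (int i = ii; i < ii + bx && i < N; i++)
                    for (int j = jj; j < jj + by && j < N; j++)
                        grid[i][j] = 0.25 * grid[i][j] + 1.0;
    }

    int main(void) {
        const int cand[] = {8, 16, 32, 64, 128};
        const int n = (int)(sizeof(cand) / sizeof(cand[0]));
        int best_bx = cand[0], best_by = cand[0];
        double best_t = 1e30;
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++) {
                double t0 = omp_get_wtime();
                run_kernel(cand[i], cand[j]);
                double t = omp_get_wtime() - t0;
                if (t < best_t) { best_t = t; best_bx = cand[i]; best_by = cand[j]; }
            }
        printf("best blocking: %d x %d (%.6f s)\n", best_bx, best_by, best_t);
        return 0;
    }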
---
paper_title: Atomistic protein folding simulations on the submillisecond time scale using worldwide distributed computing
paper_content:
Atomistic simulations of protein folding have the potential to be a great complement to experimental studies, but have been severely limited by the time scales accessible with current computer hardware and algorithms. By employing a worldwide distributed computing network of tens of thousands of PCs and algorithms designed to efficiently utilize this new many-processor, highly heterogeneous, loosely coupled distributed computing paradigm, we have been able to simulate hundreds of microseconds of atomistic molecular dynamics. This has allowed us to directly simulate the folding mechanism and to accurately predict the folding rate of several fast-folding proteins and polymers, including a nonbiological helix, polypeptide alpha-helices, a beta-hairpin, and a three-helix bundle protein from the villin headpiece. Our results demonstrate that one can reach the time scales needed to simulate fast folding using distributed computing, and that potential sets used to describe interatomic interactions are sufficiently accurate to reach the folded state with experimentally validated rates, at least for small proteins.
---
paper_title: Utilizing high performance computing for chemistry: parallel computational chemistry.
paper_content:
Parallel hardware has become readily available to the computational chemistry research community. This perspective will review the current state of parallel computational chemistry software utilizing high-performance parallel computing platforms. Hardware and software trends and their effect on quantum chemistry methodologies, algorithms, and software development will also be discussed.
---
paper_title: The Future of Microprocessors
paper_content:
The performance of microprocessors that power modern computers has continued to increase exponentially over the years for two main reasons. First, the transistors that are the heart of the circuits in all processors and memory chips have simply become faster over time on a course described by Moore’s law, and this directly affects the performance of processors built with those transistors. Moreover, actual processor performance has increased faster than Moore’s law would predict, because processor designers have been able to harness the increasing numbers of transistors available on modern chips to extract more parallelism from software. This is depicted in figure 1 for Intel’s processors.
---
paper_title: Proton Transfer 200 Years after von Grotthuss: Insights from Ab Initio Simulations
paper_content:
In the last decade, ab initio simulations and especially Car-Parrinello molecular dynamics have significantly contributed to the improvement of our understanding of both the physical and chemical properties of water, ice, and hydrogen-bonded systems in general. At the heart of this family of in silico techniques lies the crucial idea of computing the many-body interactions by solving the electronic structure problem 'on the fly' as the simulation proceeds, which circumvents the need for pre-parameterized potential models. In particular, the field of proton transfer in hydrogen-bonded networks greatly benefits from these technical advances. Here, several systems of seemingly quite different nature and of increasing complexity, such as Grotthuss diffusion in water, excited state proton transfer in solution, phase transitions in ice, and protonated water networks in the membrane protein bacteriorhodopsin, are discussed in the realms of a unifying viewpoint.
---
paper_title: Introduction to High Performance Computing for Scientists and Engineers
paper_content:
Written by high performance computing (HPC) experts, Introduction to High Performance Computing for Scientists and Engineers provides a solid introduction to current mainstream computer architecture, dominant parallel programming models, and useful optimization strategies for scientific HPC. From working in a scientific computing center, the authors gained a unique perspective on the requirements and attitudes of users as well as manufacturers of parallel computers. The text first introduces the architecture of modern cache-based microprocessors and discusses their inherent performance limitations, before describing general optimization strategies for serial code on cache-based architectures. It next covers shared- and distributed-memory parallel computer architectures and the most relevant network topologies. After discussing parallel computing on a theoretical level, the authors show how to avoid or ameliorate typical performance problems connected with OpenMP. They then present cache-coherent nonuniform memory access (ccNUMA) optimization techniques, examine distributed-memory parallel programming with message passing interface (MPI), and explain how to write efficient MPI code. The final chapter focuses on hybrid programming with MPI and OpenMP. Users of high performance computers often have no idea what factors limit time to solution and whether it makes sense to think about optimization at all. This book facilitates an intuitive understanding of performance limitations without relying on heavy computer science knowledge. It also prepares readers for studying more advanced literature.
---
paper_title: 369 Tflop/s molecular dynamics simulations on the petaflop hybrid supercomputer ‘Roadrunner’
paper_content:
We describe the implementation of a short-range parallel molecular dynamics (MD) code, SPaSM, on the heterogeneous general-purpose Roadrunner supercomputer. Each Roadrunner ‘TriBlade’ compute node consists of two AMD Opteron dual-core microprocessors and four IBM PowerXCell 8i enhanced Cell microprocessors (each consisting of one PPU and eight SPU cores), so that there are four MPI ranks per node, each with one Opteron and one Cell. We will briefly describe the Roadrunner architecture and some of the initial hybrid programming approaches that have been taken, focusing on the SPaSM application as a case study. An initial ‘evolutionary’ port, in which the existing legacy code runs with minor modifications on the Opterons and the Cells are only used to compute interatomic forces, achieves roughly a 2× speedup over the unaccelerated code. On the other hand, our ‘revolutionary’ implementation adopts a Cell-centric view, with data structures optimized for, and living on, the Cells. The Opterons are mainly used to direct inter-rank communication and perform I/O-heavy periodic analysis, visualization, and checkpointing tasks. The performance measured for our initial implementation of a standard Lennard–Jones pair potential benchmark reached a peak of 369 Tflop/s double-precision floating-point performance on the full Roadrunner system (27.7% of peak), nearly 10× faster than the unaccelerated (Opteron-only) version. Copyright © 2009 John Wiley & Sons, Ltd.
---
paper_title: Spending Moore's dividend
paper_content:
Multicore computers shift the burden of software performance from chip designers and processor architects to software developers.
---
paper_title: The Case for Energy-Proportional Computing
paper_content:
Energy-proportional designs would enable large energy savings in servers, potentially doubling their efficiency in real-life use. Achieving energy proportionality will require significant improvements in the energy usage profile of every system component, particularly the memory and disk subsystems.
---
paper_title: The impact of multicore on math software
paper_content:
Power consumption and heat dissipation issues are pushing the microprocessors industry towards multicore design patterns. Given the cubic dependence between core frequency and power consumption, multicore technologies leverage the idea that doubling the number of cores and halving the cores' frequency gives roughly the same performance while reducing the power consumption by a factor of four. With the number of cores on multicore chips expected to reach tens in a few years, efficient implementations of numerical libraries using shared memory programming models are of high interest. The current message passing paradigm used in ScaLAPACK and elsewhere introduces unnecessary memory overhead and memory copy operations, which degrade performance, along with making it harder to schedule operations that could be done in parallel. Limiting the use of shared memory to fork-join parallelism (perhaps with OpenMP) or to its use within the BLAS does not address all these issues.
---
paper_title: The impact of IBM Cell technology on the programming paradigm in the context of computer systems for climate and weather models
paper_content:
The call for ever-increasing model resolutions and physical processes in climate and weather models demands a continual increase in computing power. The IBM Cell processor's order-of-magnitude peak performance increase over conventional processors makes it very attractive to fulfill this requirement. However, the Cell's characteristics, 256 kB local memory per SPE and the new low-level communication mechanism, make it very challenging to port an application. As a trial, we selected the solar radiation component of the NASA GEOS-5 climate model, which: (1) is representative of column-physics components (half of the total computational time), (2) has an extremely high computational intensity: the ratio of computational load to main memory transfers, and (3) exhibits embarrassingly parallel column computations. In this paper, we converted the baseline code (single-precision Fortran) to C and ported it to an IBM BladeCenter QS20. For performance, we manually SIMDize four independent columns and include several unrolling optimizations. Our results show that when compared with the baseline implementation running on one core of Intel's Xeon Woodcrest, Dempsey, and Itanium2, the Cell is approximately 8.8x, 11.6x, and 12.8x faster, respectively. Our preliminary analysis shows that the Cell can also accelerate the dynamics component (∼25% of the total computational time). We believe these dramatic performance improvements make the Cell processor very competitive as an accelerator. Copyright © 2009 John Wiley & Sons, Ltd.
---
paper_title: Millisecond-scale molecular dynamics simulations on Anton
paper_content:
Anton is a recently completed special-purpose supercomputer designed for molecular dynamics (MD) simulations of biomolecular systems. The machine's specialized hardware dramatically increases the speed of MD calculations, making possible for the first time the simulation of biological molecules at an atomic level of detail for periods on the order of a millisecond---about two orders of magnitude beyond the previous state of the art. Anton is now running simulations on a timescale at which many critically important, but poorly understood phenomena are known to occur, allowing the observation of aspects of protein dynamics that were previously inaccessible to both computational and experimental study. Here, we report Anton's performance when executing actual MD simulations whose accuracy has been validated against both existing MD software and experimental observations. We also discuss the manner in which novel algorithms have been coordinated with Anton's co-designed, application-specific hardware to achieve these results.
---
paper_title: The Future of Microprocessors
paper_content:
The performance of microprocessors that power modern computers has continued to increase exponentially over the years for two main reasons. First, the transistors that are the heart of the circuits in all processors and memory chips have simply become faster over time on a course described by Moore’s law, and this directly affects the performance of processors built with those transistors. Moreover, actual processor performance has increased faster than Moore’s law would predict, because processor designers have been able to harness the increasing numbers of transistors available on modern chips to extract more parallelism from software. This is depicted in figure 1 for Intel’s processors.
---
paper_title: JANUS: an FPGA-based System for High Performance Scientific Computing
paper_content:
Janus is a modular, massively parallel, and reconfigurable FPGA-based computing system. Each Janus module has one computational core and one host. Janus is tailored to, but not limited to, the needs of a class of hard scientific applications characterized by regular code structure, unconventional data-manipulation requirements, and a few Megabits database. The authors discuss this configurable system's architecture and focus on its use for Monte Carlo simulations of statistical mechanics, as Janus performs impressively on this class of application.
---
paper_title: Exploring atomic resolution physiology on a femtosecond to millisecond timescale using molecular dynamics simulations
paper_content:
Discovering the functional mechanisms of biological systems frequently requires information that challenges the spatial and temporal resolution limits of current experimental techniques. Recent dramatic methodological advances have made all-atom molecular dynamics (MD) simulations an ever more
---
paper_title: Introduction to High Performance Computing for Scientists and Engineers
paper_content:
Written by high performance computing (HPC) experts, Introduction to High Performance Computing for Scientists and Engineers provides a solid introduction to current mainstream computer architecture, dominant parallel programming models, and useful optimization strategies for scientific HPC. From working in a scientific computing center, the authors gained a unique perspective on the requirements and attitudes of users as well as manufacturers of parallel computers. The text first introduces the architecture of modern cache-based microprocessors and discusses their inherent performance limitations, before describing general optimization strategies for serial code on cache-based architectures. It next covers shared- and distributed-memory parallel computer architectures and the most relevant network topologies. After discussing parallel computing on a theoretical level, the authors show how to avoid or ameliorate typical performance problems connected with OpenMP. They then present cache-coherent nonuniform memory access (ccNUMA) optimization techniques, examine distributed-memory parallel programming with message passing interface (MPI), and explain how to write efficient MPI code. The final chapter focuses on hybrid programming with MPI and OpenMP. Users of high performance computers often have no idea what factors limit time to solution and whether it makes sense to think about optimization at all. This book facilitates an intuitive understanding of performance limitations without relying on heavy computer science knowledge. It also prepares readers for studying more advanced literature.
---
paper_title: 369 Tflop/s molecular dynamics simulations on the petaflop hybrid supercomputer ‘Roadrunner’
paper_content:
We describe the implementation of a short-range parallel molecular dynamics (MD) code, SPaSM, on the heterogeneous general-purpose Roadrunner supercomputer. Each Roadrunner ‘TriBlade’ compute node consists of two AMD Opteron dual-core microprocessors and four IBM PowerXCell 8i enhanced Cell microprocessors (each consisting of one PPU and eight SPU cores), so that there are four MPI ranks per node, each with one Opteron and one Cell. We will briefly describe the Roadrunner architecture and some of the initial hybrid programming approaches that have been taken, focusing on the SPaSM application as a case study. An initial ‘evolutionary’ port, in which the existing legacy code runs with minor modifications on the Opterons and the Cells are only used to compute interatomic forces, achieves roughly a 2× speedup over the unaccelerated code. On the other hand, our ‘revolutionary’ implementation adopts a Cell-centric view, with data structures optimized for, and living on, the Cells. The Opterons are mainly used to direct inter-rank communication and perform I/O-heavy periodic analysis, visualization, and checkpointing tasks. The performance measured for our initial implementation of a standard Lennard–Jones pair potential benchmark reached a peak of 369 Tflop/s double-precision floating-point performance on the full Roadrunner system (27.7% of peak), nearly 10× faster than the unaccelerated (Opteron-only) version. Copyright © 2009 John Wiley & Sons, Ltd.
---
paper_title: Extending Amdahl's Law for Energy-Efficient Computing in the Many-Core Era
paper_content:
An updated take on Amdahl's analytical model uses modern design constraints to analyze many-core design alternatives. The revised models provide computer architects with a better understanding of many-core design types, enabling them to make more informed tradeoffs.
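For reference, the classical Amdahl speedup that the paper's model extends is, for a parallelizable fraction f run on n cores (the energy terms added in the paper are omitted here):

    S(f, n) = \frac{1}{(1 - f) + \dfrac{f}{n}},

so that, for example, f = 0.95 and n = 256 gives S ≈ 18.6, far below the core count.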
---
paper_title: The LINPACK Benchmark: past, present and future
paper_content:
This paper describes the LINPACK Benchmark and some of its variations commonly used to assess the performance of computer systems. Aside from the LINPACK Benchmark suite, the TOP500 and the HPL codes are presented. The latter is frequently used to obtain results for TOP500 submissions. Information is also given on how to interpret the results of the benchmark and how the results fit into the performance evaluation process. Copyright © 2003 John Wiley & Sons, Ltd.
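The reported rate is conventionally derived from the nominal LU operation count 2/3·n³ + 2·n² for an n×n system; a minimal sketch of that bookkeeping in C (the run size and time in main are hypothetical):

    #include <stdio.h>

    /* Convert a measured wall-clock time for an n x n HPL run into GFLOP/s,
     * using the conventional operation count 2/3 n^3 + 2 n^2. */
    static double hpl_gflops(double n, double seconds) {
        double flops = (2.0 / 3.0) * n * n * n + 2.0 * n * n;
        return flops / seconds / 1e9;
    }

    int main(void) {
        printf("%.1f GFLOP/s\n", hpl_gflops(100000.0, 1800.0)); /* hypothetical run */
        return 0;
    }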
---
paper_title: Spending Moore's dividend
paper_content:
Multicore computers shift the burden of software performance from chip designers and processor architects to software developers.
---
paper_title: Giant Magnetoresistance of (001)Fe/(001)Cr Magnetic Superlattices
paper_content:
We have studied the magnetoresistance of (001)Fe/(001)Cr superlattices prepared by molecularbeam epitaxy. A huge magnetoresistance is found in superlattices with thin Cr layers: For example, with ${t}_{\mathrm{Cr}}=9$ \AA{}, at $T=4.2$ K, the resistivity is lowered by almost a factor of 2 in a magnetic field of 2 T. We ascribe this giant magnetoresistance to spin-dependent transmission of the conduction electrons between Fe layers through Cr layers.
---
paper_title: Atomistic protein folding simulations on the submillisecond time scale using worldwide distributed computing
paper_content:
Atomistic simulations of protein folding have the potential to be a great complement to experimental studies, but have been severely limited by the time scales accessible with current computer hardware and algorithms. By employing a worldwide distributed computing network of tens of thousands of PCs and algorithms designed to efficiently utilize this new many-processor, highly heterogeneous, loosely coupled distributed computing paradigm, we have been able to simulate hundreds of microseconds of atomistic molecular dynamics. This has allowed us to directly simulate the folding mechanism and to accurately predict the folding rate of several fast-folding proteins and polymers, including a nonbiological helix, polypeptide alpha-helices, a beta-hairpin, and a three-helix bundle protein from the villin headpiece. Our results demonstrate that one can reach the time scales needed to simulate fast folding using distributed computing, and that potential sets used to describe interatomic interactions are sufficiently accurate to reach the folded state with experimentally validated rates, at least for small proteins.
---
paper_title: Scientific data management in the coming decade
paper_content:
Scientific instruments and computer simulations are creating vast data stores that require new scientific methods to analyze and organize the data. Data volumes are approximately doubling each year. Since these new instruments have extraordinary precision, the data quality is also rapidly improving. Analyzing this data to find the subtle effects missed by previous studies requires algorithms that can simultaneously deal with huge datasets and that can find very subtle effects --- finding both needles in the haystack and finding very small haystacks that were undetected in previous measurements.
---
paper_title: Computer simulation study of the structural stability and materials properties of DNA-intercalated layered double hydroxides.
paper_content:
The intercalation of DNA into layered double hydroxides (LDHs) has various applications, including drug delivery for gene therapy and origins of life studies. The nanoscale dimensions of the interlayer region make the exact conformation of the intercalated DNA difficult to elucidate experimentally. We use molecular dynamics techniques, performed on high performance supercomputing grids, to carry out large-scale simulations of double stranded, linear and plasmid DNA up to 480 base pairs in length intercalated within a magnesium-aluminum LDH. Currently only limited experimental data have been reported for these systems. Our models are found to be in agreement with experimental observations, according to which hydration is a crucial factor in determining the structural stability of DNA. Phosphate backbone groups are found to align with aluminum lattice positions. At elevated temperatures and pressures, relevant to origins of life studies which maintain that the earliest life forms originated around deep ocean hydrothermal vents, the structural stability of LDH-intercalated DNA is substantially enhanced as compared to DNA in bulk water. We also discuss how the materials properties of the LDH are modified due to DNA intercalation.
---
paper_title: Grid Computing: Techniques and Applications
paper_content:
Designed for senior undergraduate and first-year graduate students, this classroom-tested book shows professors how to teach this subject in a practical way. It encompasses the varied and interconnected aspects of Grid computing, including how to design a system infrastructure and Grid portal. The text covers job submission and scheduling, Grid security, Grid computing services and software tools, graphical user interfaces, workflow editors, and Grid-enabling applications. It also contains programming assignments and multiple-choice questions and answers. The author's Web site offers various instructional resources, including slides and links to software for the programming assignments.
---
paper_title: Introduction to High Performance Computing for Scientists and Engineers
paper_content:
Written by high performance computing (HPC) experts, Introduction to High Performance Computing for Scientists and Engineers provides a solid introduction to current mainstream computer architecture, dominant parallel programming models, and useful optimization strategies for scientific HPC. From working in a scientific computing center, the authors gained a unique perspective on the requirements and attitudes of users as well as manufacturers of parallel computers. The text first introduces the architecture of modern cache-based microprocessors and discusses their inherent performance limitations, before describing general optimization strategies for serial code on cache-based architectures. It next covers shared- and distributed-memory parallel computer architectures and the most relevant network topologies. After discussing parallel computing on a theoretical level, the authors show how to avoid or ameliorate typical performance problems connected with OpenMP. They then present cache-coherent nonuniform memory access (ccNUMA) optimization techniques, examine distributed-memory parallel programming with message passing interface (MPI), and explain how to write efficient MPI code. The final chapter focuses on hybrid programming with MPI and OpenMP. Users of high performance computers often have no idea what factors limit time to solution and whether it makes sense to think about optimization at all. This book facilitates an intuitive understanding of performance limitations without relying on heavy computer science knowledge. It also prepares readers for studying more advanced literature.
---
paper_title: 369 Tflop/s molecular dynamics simulations on the petaflop hybrid supercomputer ‘Roadrunner’
paper_content:
We describe the implementation of a short-range parallel molecular dynamics (MD) code, SPaSM, on the heterogeneous general-purpose Roadrunner supercomputer. Each Roadrunner ‘TriBlade’ compute node consists of two AMD Opteron dual-core microprocessors and four IBM PowerXCell 8i enhanced Cell microprocessors (each consisting of one PPU and eight SPU cores), so that there are four MPI ranks per node, each with one Opteron and one Cell. We will briefly describe the Roadrunner architecture and some of the initial hybrid programming approaches that have been taken, focusing on the SPaSM application as a case study. An initial ‘evolutionary’ port, in which the existing legacy code runs with minor modifications on the Opterons and the Cells are only used to compute interatomic forces, achieves roughly a 2× speedup over the unaccelerated code. On the other hand, our ‘revolutionary’ implementation adopts a Cell-centric view, with data structures optimized for, and living on, the Cells. The Opterons are mainly used to direct inter-rank communication and perform I/O-heavy periodic analysis, visualization, and checkpointing tasks. The performance measured for our initial implementation of a standard Lennard–Jones pair potential benchmark reached a peak of 369 Tflop/s double-precision floating-point performance on the full Roadrunner system (27.7% of peak), nearly 10× faster than the unaccelerated (Opteron-only) version. Copyright © 2009 John Wiley & Sons, Ltd.
---
paper_title: An optimal multimedia object allocation solution in multi-powermode storage systems
paper_content:
Given a set of multimedia objects R = {o1, o2, …, ok}, each of which has a set of multiple versions oi.v = {Ai.0, Ai.1, …, Ai.m}, i = 1, 2, …, k, there is a problem of distributing these objects in a server system so that user requests for accessing specified multimedia objects can be fulfilled with the minimum energy consumption and without significant degrading of the system performance. This paper considers the allocation problem of multimedia objects in multi-powermode storage systems, where the objects are distributed among multi-powermode storages based on the access pattern to the objects. We design an underlying infrastructure of the storage system, propose a dynamic multimedia object allocation policy based on the designed infrastructure, and prove the optimality of the proposed policy. Copyright © 2010 John Wiley & Sons, Ltd.
---
|
Title: A Review of High Performance Computing Foundations for Scientists
Section 1: Introduction
Description 1: Introduce the importance and role of scientific computer simulation, and provide an overview of High Performance Computing (HPC) and its application.
Section 2: Hardware basics
Description 2: Describe the basic components of a computer, including memory, Control Unit, Arithmetic Logic Unit (ALU), and how these components work together. Discuss memory hierarchy in detail.
Section 3: Beyond the Von Neumann paradigm
Description 3: Explain the limitations of the traditional Von Neumann architecture and discuss modern improvements and alternatives such as integration, clock-rate increase, superscalarity, multicore architecture, multithreading, SIMD instructions, out-of-order execution, and simplified instruction sets.
Section 4: Parallel computers
Description 4: Discuss the architecture and design of parallel computers, including shared-memory and distributed-memory systems, the importance of network topology, and different types of parallel supercomputers.
Section 5: Hybrid and heterogeneous models
Description 5: Describe the construction and benefits of hybrid and heterogeneous machines that combine different kinds of processors such as CPUs, GPUs, and special-purpose chips. Include a discussion on the advantages and challenges of these models.
Section 6: Distributed computing
Description 6: Explain newer forms of high-performance computing like grid computing, volunteer computing, and cloud computing. Discuss how these distributed computing methods work, their benefits and drawbacks.
Section 7: Intrinsic limitations to accuracy and efficiency
Description 7: Examine the issues related to accuracy and execution time in scientific calculations. Discuss machine precision errors, soft errors, algorithm-specific errors, and the implications of Amdahl's Law and Gustafson's Law on parallel computing.
|
A Review of Control Algorithms for Autonomous Quadrotors
| 14 |
---
paper_title: Attitude Stabilization Control of a Quadrotor UAV by Using Backstepping Approach
paper_content:
The modeling and attitude stabilization control problems of a four-rotor vertical takeoff and landing unmanned air vehicle (UAV) known as the quadrotor are investigated. The quadrotor’s attitude is represented by the unit quaternion rather than Euler angles to avoid the singularity problem. Taking the dynamical behavior of the motors into consideration and ignoring aerodynamic effects, a nonlinear controller is developed to stabilize the attitude. The control design is accomplished by using the backstepping control technique. The proposed control law is based on the compensation for the Coriolis and gyroscopic torques. Applying Lyapunov stability analysis proves that the closed-loop attitude system is asymptotically stable. Moreover, the controller can guarantee that all the states of the system are uniformly ultimately bounded in the presence of an external disturbance torque. The effectiveness of the proposed control approach is analytically authenticated and also validated via simulation study.
---
paper_title: Design and control of quadrotors with application to autonomous flying
paper_content:
This thesis is about modelling, design and control of Miniature Flying Robots (MFR) with a focus on Vertical Take-Off and Landing (VTOL) systems and specifically, micro quadrotors. It introduces a mathematical model for simulation and control of such systems. It then describes a design methodology for a miniature rotorcraft. The methodology is subsequently applied to design an autonomous quadrotor named OS4. Based on the mathematical model, linear and nonlinear control techniques are used to design and simulate various controllers along this work. The dynamic model and the simulator evolved from a simple set of equations, valid only for hovering, to a complex mathematical model with more realistic aerodynamic coefficients and sensor and actuator models. Two platforms were developed during this thesis. The first one is a quadrotor-like test-bench with off-board data processing and power supply. It was used to safely and easily test control strategies. The second one, OS4, is a highly integrated quadrotor with on-board data processing and power supply. It has all the necessary sensors for autonomous operation. Five different controllers were developed. The first one, based on Lyapunov theory, was applied for attitude control. The second and the third controllers are based on PID and LQ techniques. These were compared for attitude control. The fourth and the fifth approaches use backstepping and sliding-mode concepts. They are applied to control attitude. Finally, backstepping is augmented with integral action and proposed as a single tool to design attitude, altitude and position controllers. This approach is validated through various flight experiments conducted on the OS4.
---
paper_title: Artificial intelligent-based feedforward optimized PID wheel slip controller
paper_content:
Continual improvement of the anti-lock braking system control strategy is the focus of this work. Advances in auto-electronics and sub-systems such as brake-by-wire technology are the driving forces behind the improvement of the anti-lock braking system. The control strategy has shifted from a speed-control to a slip-control strategy. In the current slip-control approach, the proportional-integral-derivative (PID) controller and its variants P, PI, and PD have been proposed in place of the bang-bang controller mostly used in commercial ABS. Though the PID controller is popular due to its wide application in industry, irrespective of the nature of the process or system, it might deliver limited performance when applied to the ABS. In order to improve the performance of the PID controller, a neural network inverse model of the plant is used to optimize the reference input slip. The resultant neural network-based PID ABS is then tested in the Matlab®/Simulink® simulation environment. The results of the proposed controller exhibit more accurate slip tracking than the PID-slip controller.
---
paper_title: Dynamic analysis and PID control for a quadrotor
paper_content:
In order to analyze the dynamic characteristics and PID controller performance of a quadrotor, this paper first describes the architecture of the quadrotor and analyzes its dynamic model. Then, based on the classic PID control scheme, a controller is designed to regulate the posture (position and orientation) of the 6 d.o.f. quadrotor. Thirdly, the dynamic model is implemented in a Matlab/Simulink simulation, and the PID control parameters are obtained from the simulation results. Finally, a quadrotor with PID controllers is designed and built, and a flight experiment is carried out. The results of the flight experiment show that the PID controllers robustly stabilize the quadrotor.
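As a generic illustration of the per-axis PID loops used in such designs (a textbook discrete PID sketch in C; the gains, time step, and structure are placeholders rather than the authors' values):

    #include <stdio.h>

    /* Textbook discrete PID for one attitude axis (e.g., roll). */
    typedef struct { double kp, ki, kd, integral, prev_error; } pid_ctrl;

    static double pid_step(pid_ctrl *c, double setpoint, double measured, double dt) {
        double error = setpoint - measured;
        c->integral += error * dt;
        double derivative = (error - c->prev_error) / dt;
        c->prev_error = error;
        return c->kp * error + c->ki * c->integral + c->kd * derivative; /* torque command */
    }

    int main(void) {
        pid_ctrl roll = {4.0, 0.5, 1.2, 0.0, 0.0};   /* hypothetical gains */
        double u = pid_step(&roll, 0.0, 0.1, 0.01);  /* hold roll = 0 rad, measured 0.1 rad */
        printf("roll torque command: %f\n", u);
        return 0;
    }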
---
paper_title: Hovering control of a quadrotor
paper_content:
This paper deals with the hovering control of a quadrotor. First, we derive the quadrotor model using the Euler-Lagrange equation and perform experiments to identify the model parameters. Second, we divide the quadrotor system into two subsystems: the attitude system and the altitude system. For attitude control we use the PID control method, and for altitude control we use the dynamic surface control (DSC) method. From the Lyapunov stability theory, we prove that all signals of the quadrotor system are uniformly ultimately bounded (UUB). Finally, we present simulation and experimental results to verify the effectiveness of the proposed control method.
---
paper_title: PID vs LQ control techniques applied to an indoor micro quadrotor
paper_content:
The development of miniature flying robots has become a reachable dream, thanks to the new sensing and actuating technologies. Micro VTOL systems represent a useful class of flying robots because of their strong abilities for small-area monitoring and building exploration. In this paper, we present the results of two model-based control techniques applied to an autonomous four-rotor micro helicopter called quadrotor. A classical approach (PID) assumed a simplified dynamics and a modern technique (LQ) based on a more complete model. Various simulations were performed and several tests on the bench validate the control laws. Finally, we present the results of the first test in flight with the helicopter released. These developments are part of the OS4 project in our lab.
---
paper_title: Modeling and control of quadrotor MAV using vision-based measurement
paper_content:
In this paper we review the mathematical model of the quadrotor using Lagrange's equations. We propose a vision-based measurement scheme to stabilize this model. A dual-camera method is used for estimating the pose of the quadrotor, i.e. the positions and attitude of the quadrotor MAV. One of these cameras is located on board the quadrotor MAV, and the other is located on the ground. The control system is developed in Matlab/Simulink. In this paper, we consider linear controllers for this purpose: a Linear Quadratic tracking controller with integral action and an optimal Linear Quadratic Gaussian (LQG) controller with integral action are designed to stabilize the attitude of the quadrotor MAV. Moreover, measurement noise is also considered in the controller design. Finally, this paper demonstrates how well this control works for a certain flight mission: the quadrotor MAV starts on the ground, then hovers, moves sideways, and holds its position while pointing at a certain object fixed in space.
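For context, the LQ designs mentioned above follow the standard infinite-horizon linear-quadratic formulation (generic form only; the specific weights and the Kalman filter used for the LQG case are paper-specific): for a linearized model \dot{x} = Ax + Bu, minimize

    J = \int_{0}^{\infty} \left( x^{\top} Q x + u^{\top} R u \right) dt,
    \qquad
    u = -Kx, \quad K = R^{-1} B^{\top} P,

where P solves the algebraic Riccati equation A^{\top} P + P A - P B R^{-1} B^{\top} P + Q = 0.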
---
paper_title: A prototype of an autonomous controller for a quadrotor UAV
paper_content:
The paper proposes a complete real-time control algorithm for autonomous collision-free operations of the quadrotor UAV. As opposed to fixed-wing vehicles, the quadrotor is a small agile vehicle which might be more suitable for a variety of specific applications, including search and rescue, surveillance and remote inspection. The developed control system incorporates both trajectory planning and path following. Using a differential flatness property, the trajectory planning is posed as a constrained optimization problem in the output space (as opposed to the control space), which simplifies the problem. The trajectory and speed profile are parameterized to reduce the problem to a finite-dimensional problem. To optimize the speed profile independently of the trajectory, a virtual argument is used as opposed to time. The path-following portion of the proposed algorithm uses a standard linear multi-variable control technique. The paper presents the results of simulations to demonstrate the suitability of the proposed control algorithm.
---
paper_title: Sliding Mode Control of quadrotor
paper_content:
This paper presents a design method for attitude control of an autonomous quadrotor based on sliding mode control. We are interested in the dynamic modeling of the quadrotor because of its complexity. The dynamic model is used to design a stable and accurate controller that achieves the best tracking and attitude performance. To stabilize the overall system, each sliding mode controller is designed based on Lyapunov stability theory. The advantage of sliding mode control is its insensitivity to model errors, parametric uncertainties and other disturbances. Lastly, we show through simulation that the control law has good robustness and good stability.
---
paper_title: Sliding Mode Control of a Quadrotor Helicopter
paper_content:
In this paper, we present a new design method for the flight control of an autonomous quadrotor helicopter based on sliding mode control. Due to the under-actuated property of a quadrotor helicopter, the controller can drive the three positions (x, y, z) and the yaw angle to their desired values and stabilize the pitch and roll angles. A sliding mode control is proposed to stabilize a class of cascaded under-actuated systems. The global stability analysis of the closed-loop system is presented. The advantage of sliding mode control is its insensitivity to model errors, parametric uncertainties and other disturbances. Simulations show that the control law robustly stabilizes a quadrotor.
---
paper_title: Attitude Stabilization Control of a Quadrotor UAV by Using Backstepping Approach
paper_content:
The modeling and attitude stabilization control problems of a four-rotor vertical takeoff and landing unmanned air vehicle (UAV) known as the quadrotor are investigated. The quadrotor’s attitude is represented by the unit quaternion rather than Euler angles to avoid the singularity problem. Taking the dynamical behavior of the motors into consideration and ignoring aerodynamic effects, a nonlinear controller is developed to stabilize the attitude. The control design is accomplished by using the backstepping control technique. The proposed control law is based on the compensation for the Coriolis and gyroscopic torques. Applying Lyapunov stability analysis proves that the closed-loop attitude system is asymptotically stable. Moreover, the controller can guarantee that all the states of the system are uniformly ultimately bounded in the presence of external disturbance torque. The effectiveness of the proposed control approach is analytically authenticated and also validated via a simulation study.
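A simplified quaternion-error attitude law is sketched below to illustrate why the unit-quaternion representation avoids the Euler-angle singularity; it is a basic PD-like regulator, not the paper's backstepping design, and the gains and the [w, x, y, z] quaternion convention are assumptions.

```python
# Illustrative sketch (simplified, not the paper's backstepping law): a
# quaternion-error attitude regulator. Quaternions are [w, x, y, z]; gains assumed.
import numpy as np

def quat_conj(q):
    return np.array([q[0], -q[1], -q[2], -q[3]])

def quat_mul(a, b):
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def attitude_torque(q, q_des, omega, kq=8.0, kw=2.0):
    q_err = quat_mul(quat_conj(q_des), q)       # rotation from desired to actual
    sign = 1.0 if q_err[0] >= 0 else -1.0       # handle the quaternion double cover
    return -kq * sign * q_err[1:] - kw * omega  # body-frame torque command

tau = attitude_torque(q=np.array([1.0, 0.0, 0.0, 0.0]),
                      q_des=np.array([0.9961947, 0.0871557, 0.0, 0.0]),  # ~10 deg roll
                      omega=np.zeros(3))
```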
---
paper_title: Adaptive integral backstepping control of a Micro-Quadrotor
paper_content:
Micro-Quadrotor aerial robots have enormous potential applications in the field of near-area surveillance and exploration in military and commercial applications. However, stabilizing and position control of the robot are difficult tasks because of the nonlinear dynamic behavior and model uncertainties. Backstepping is a widely used control law for under-actuated systems, including the quadrotor. However, the general backstepping control algorithm needs accurate model parameters and is not robust to external disturbances. In this paper, an adaptive integral backstepping control algorithm is proposed to realize robust control of the quadrotor. The proposed control algorithm can estimate disturbances online and therefore improve the robustness of the system. Simulation results show that the proposed algorithm performs well against model uncertainties.
---
paper_title: Backstepping Control for a Quadrotor Helicopter
paper_content:
This paper presents a nonlinear dynamic model for a quadrotor helicopter in a form suited for backstepping control design. Due to the under-actuated property of the quadrotor helicopter, the controller can make the helicopter track the three Cartesian positions (x, y, z) and the yaw angle to their desired values and stabilize the pitch and roll angles. The system is presented as three interconnected subsystems. The first one, representing the under-actuated subsystem, gives the dynamic relation of the horizontal positions (x, y) with the pitch and roll angles. The second, fully-actuated subsystem gives the dynamics of the vertical position z and the yaw angle. The last subsystem gives the dynamics of the propeller forces. A backstepping control is presented to stabilize the whole system. The design methodology is based on the Lyapunov stability theory. Various simulations of the model show that the control law stabilizes a quadrotor with good tracking.
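The two-step backstepping pattern used for the fully-actuated subsystem can be illustrated on a simplified altitude model z̈ = u − g. The sketch below is not the paper's controller; the model simplification and the gains a1, a2 are assumptions.

```python
# Illustrative sketch (assumed simplified altitude subsystem z_ddot = u - g,
# not the paper's full model): two-step integrator backstepping.
G = 9.81

def backstepping_altitude(z, z_dot, z_ref, z_ref_dot, z_ref_ddot, a1=2.0, a2=3.0):
    e1 = z_ref - z                      # position tracking error
    v1 = z_ref_dot + a1 * e1            # virtual velocity command (step 1)
    e2 = v1 - z_dot                     # velocity tracking error (step 2)
    # Lyapunov-motivated law: yields V_dot = -a1*e1**2 - a2*e2**2 for this model.
    u = G + z_ref_ddot + a1 * (z_ref_dot - z_dot) + e1 + a2 * e2
    return u

thrust_accel = backstepping_altitude(z=1.0, z_dot=0.0,
                                     z_ref=1.5, z_ref_dot=0.0, z_ref_ddot=0.0)
```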
---
paper_title: Adaptive Control of a Quadrotor with Dynamic Changes in the Center of Gravity
paper_content:
In this paper, we address the problem of quadrotor stabilization and trajectory tracking with dynamic changes in the quadrotor's center of gravity. This problem has great practical significance in many UAV applications. However, it has received little attention in literature so far. In this paper, we present an adaptive tracking controller based on output feedback linearization that compensates for dynamical changes in the center of gravity of the quadrotor. Effectiveness and robustness of the proposed adaptive control scheme is verified through simulation results. The proposed controller is an important step towards developing the next generation of agile autonomous aerial vehicles. This control algorithm enables a quadrotor to display agile maneuvers while reconfiguring in real time whenever a change in the center of gravity occurs.
---
paper_title: Position trajectory tracking of a quadrotor helicopter based on L1 adaptive control
paper_content:
We present an adaptive backstepping controller for the position trajectory tracking of a quadrotor. The tracking controller is based on the L1 adaptive control approach and uses a typical nonlinear quadrotor model. We slightly modify the L1 adaptive control design for linear systems to comply with the time-varying nonlinear error dynamics that arise from the backstepping design. Our approach yields a stable adaptive system with verifiable bounds on the tracking error and input signals. The adaptive controller compensates for all model uncertainties and for all bounded disturbances within a particular frequency range, which we specify a priori. The design of this frequency range involves a trade-off between control performance and robustness, which can be managed transparently through the L1 adaptive control design. Simulation results show the powerful properties of the presented control application.
---
paper_title: A nonlinear adaptive control approach for quadrotor UAVs
paper_content:
In this paper, a continuous, time varying adaptive controller is developed for an underactuated quadrotor unmanned aerial vehicle (UAV). The vehicle's dynamic model is subject to uncertainties associated with mass, inertia matrix, and aerodynamic damping coefficients. A Lyapunov based approach is utilized to ensure that position and yaw rotation tracking errors are ultimately driven to a neighborhood about zero that can be made arbitrary small. Simulation results are included to illustrate the performance of the control strategy.
---
paper_title: Robust control of quadrotor unmanned air vehicles
paper_content:
A robust control approach for a quadrotor helicopter is presented. The controller consists of two parts: an attitude controller and a position controller. The attitude controller is designed based on linear control combined with robust compensation. The linear controller is designed for the nominal system to achieve the desired nominal performance, while the robust compensator is applied to restrain the effects of uncertainties and external disturbances. The position controller is realized with a classical PD method. Experimental results on the Tsinghua Autonomous Quadrotor System (TAQS), which was developed in our laboratory, demonstrate the effectiveness of the designed controller.
---
paper_title: Robust attitude tracking control of a quadrotor helicopter in the presence of uncertainty
paper_content:
A robust attitude tracking controller is presented in this paper, which achieves asymptotic tracking of a quadrotor helicopter in the presence of parametric uncertainty and unknown, nonlinear, non-vanishing disturbances, which do not satisfy the linear-in-the-parameters assumption. One of the challenges encountered in the control design is that the control input is premultiplied by a nonlinear, state-varying matrix containing parametric uncertainty. An integral sliding mode control technique is employed to compensate for the nonlinear disturbances, and the input-multiplicative uncertainty is mitigated through innovative algebraic manipulation in the error system development. The proposed robust control law is designed to be practically implementable, requiring no observers, function approximators, or online adaptation laws. Asymptotic trajectory tracking is proven via Lyapunov-based stability analysis, and simulation results are provided to verify the performance of the proposed controller.
---
paper_title: Attitude Stabilization Control of a Quadrotor UAV by Using Backstepping Approach
paper_content:
The modeling and attitude stabilization control problems of a four-rotor vertical takeoff and landing unmanned air vehicle (UAV) known as the quadrotor are investigated. The quadrotor’s attitude is represented by the unit quaternion rather than Euler angles to avoid the singularity problem. Taking the dynamical behavior of the motors into consideration and ignoring aerodynamic effects, a nonlinear controller is developed to stabilize the attitude. The control design is accomplished by using the backstepping control technique. The proposed control law is based on the compensation for the Coriolis and gyroscopic torques. Applying Lyapunov stability analysis proves that the closed-loop attitude system is asymptotically stable. Moreover, the controller can guarantee that all the states of the system are uniformly ultimately bounded in the presence of external disturbance torque. The effectiveness of the proposed control approach is analytically authenticated and also validated via a simulation study.
---
paper_title: Robust Optimal Control of Quadrotor UAVs
paper_content:
This paper provides the design and implementation of an L1-optimal control of a quadrotor unmanned aerial vehicle (UAV). The quadrotor UAV is an underactuated rigid body with four propellers that generate forces along the rotor axes. These four forces are used to achieve asymptotic tracking of four outputs, namely the position of the center of mass of the UAV and the heading. With perfect knowledge of plant parameters and no measurement noise, the magnitudes of the errors are shown to exponentially converge to zero. In the case of parametric uncertainty and measurement noise, the controller yields an exponential decrease of the magnitude of the errors in an L1-optimal sense. In other words, the controller is designed so that it minimizes the L∞-gain of the plant with respect to disturbances. The performance of the controller is evaluated in experiments and compared with that of a related robust nonlinear controller in the literature. The experimental data shows that the proposed controller rejects persistent disturbances, which is quantified by a very small magnitude of the mean error.
---
paper_title: Path following controller for a quadrotor helicopter
paper_content:
A path following controller is presented for a quadrotor helicopter model. The controller relies on input dynamic extension and feedback linearization. The controller allows the designer to specify the speed profile of the quadrotor on the path and its yaw angle as a function of the displacement.
---
paper_title: Adaptive Control of a Quadrotor with Dynamic Changes in the Center of Gravity
paper_content:
In this paper, we address the problem of quadrotor stabilization and trajectory tracking with dynamic changes in the quadrotor's center of gravity. This problem has great practical significance in many UAV applications. However, it has received little attention in literature so far. In this paper, we present an adaptive tracking controller based on output feedback linearization that compensates for dynamical changes in the center of gravity of the quadrotor. Effectiveness and robustness of the proposed adaptive control scheme is verified through simulation results. The proposed controller is an important step towards developing the next generation of agile autonomous aerial vehicles. This control algorithm enables a quadrotor to display agile maneuvers while reconfiguring in real time whenever a change in the center of gravity occurs.
---
paper_title: Adaptive Neural Network for a Quadrotor Unmanned Aerial Vehicle
paper_content:
A new adaptive neural control scheme for quadrotor helicopter stabilization in the presence of sinusoidal disturbances is proposed in this paper. Classical adaptive control laws such as e-modification present some limitations, in particular when persistent oscillations are present in the input. These techniques can create a dilemma between weight drifting and tracking errors. To avoid this problem in the adaptive single-hidden-layer (SHL) neural network scheme, a new solution is proposed in this work. The main idea is based on the use of two SHL networks in parallel, instead of one, in the closed loop in order to estimate the unknown nonlinear function in the quadrotor dynamical model. The learning algorithms of the two SHL networks are obtained using the Lyapunov stability method. Simulation results are given to highlight the performance of the proposed control scheme.
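To illustrate the kind of adaptation law under discussion, the sketch below implements a single-hidden-layer approximator with an e-modification weight update, whose leakage term trades weight drift against tracking error. It is not the paper's dual-network scheme; the network size, gains and regressor are assumptions.

```python
# Illustrative sketch (not the paper's dual-network scheme): a single-hidden-layer
# approximator with an e-modification weight update. All gains are placeholders.
import numpy as np

rng = np.random.default_rng(0)
V = rng.standard_normal((4, 10))       # fixed random input-to-hidden weights
W_hat = np.zeros(10)                   # adapted output-layer weights

def hidden(x):
    return np.tanh(V.T @ x)            # sigmoid-like hidden layer

def e_modification_update(W_hat, x, tracking_error, gamma=5.0, kappa=0.1, dt=0.01):
    sigma = hidden(x)
    # e-modification: the leakage term is scaled by |error|, damping weight drift.
    W_dot = gamma * (sigma * tracking_error - kappa * abs(tracking_error) * W_hat)
    return W_hat + W_dot * dt

x = np.array([0.1, -0.2, 0.05, 0.0])   # example regressor (state vector)
W_hat = e_modification_update(W_hat, x, tracking_error=0.3)
nn_output = W_hat @ hidden(x)          # NN estimate of the unknown dynamics term
```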
---
paper_title: Intelligent fuzzy controller of a quadrotor
paper_content:
The aim of this work is to describe an intelligent system based on fuzzy logic that is developed to control a quadrotor. A quadrotor is a helicopter with four rotors, which make the vehicle more stable but more complex to model and to control. The quadrotor has been used as a testing platform in recent years by various universities and research centres. A quadrotor has six degrees of freedom, three of them regarding the position: height, horizontal and vertical motions; and the other three related to the orientation: pitch, roll and yaw. A fuzzy controller is designed and implemented to control a simulation model of the quadrotor. The inputs are the desired values of the height, roll, pitch and yaw. The outputs are the power of each of the four rotors that is necessary to reach the specifications. Simulation results prove the efficiency of this intelligent control strategy.
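A tiny Mamdani-style fuzzy controller for one of the controlled variables (height) is sketched below to show the fuzzify–infer–defuzzify pattern. It is not the paper's rule base; the membership breakpoints and rule consequents are assumptions.

```python
# Illustrative sketch (not the paper's rule base): a minimal fuzzy controller for
# the height error, with triangular memberships and a weighted-average defuzzifier.
def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_thrust(height_error):
    # Fuzzify the error into three linguistic terms.
    neg  = tri(height_error, -2.0, -1.0, 0.0)
    zero = tri(height_error, -1.0,  0.0, 1.0)
    pos  = tri(height_error,  0.0,  1.0, 2.0)
    # Rule consequents: crisp singleton thrust corrections for each term.
    rules = [(neg, -0.3), (zero, 0.0), (pos, 0.3)]
    num = sum(w * u for w, u in rules)
    den = sum(w for w, _ in rules)
    return num / den if den > 0 else 0.0

delta_thrust = fuzzy_thrust(height_error=0.4)
```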
---
paper_title: Output Feedback Control of a Quadrotor UAV Using Neural Networks
paper_content:
In this paper, a new nonlinear controller for a quadrotor unmanned aerial vehicle (UAV) is proposed using neural networks (NNs) and output feedback. The assumption on the availability of UAV dynamics is not always practical, especially in an outdoor environment. Therefore, in this work, an NN is introduced to learn the complete dynamics of the UAV online, including uncertain nonlinear terms like aerodynamic friction and blade flapping. Although a quadrotor UAV is underactuated, a novel NN virtual control input scheme is proposed which allows all six degrees of freedom (DOF) of the UAV to be controlled using only four control inputs. Furthermore, an NN observer is introduced to estimate the translational and angular velocities of the UAV, and an output feedback control law is developed in which only the position and the attitude of the UAV are considered measurable. It is shown using Lyapunov theory that the position, orientation, and velocity tracking errors, the virtual control and observer estimation errors, and the NN weight estimation errors for each NN are all semiglobally uniformly ultimately bounded (SGUUB) in the presence of bounded disturbances and NN functional reconstruction errors while simultaneously relaxing the separation principle. The effectiveness of proposed output feedback control scheme is then demonstrated in the presence of unknown nonlinear dynamics and disturbances, and simulation results are included to demonstrate the theoretical conjecture.
---
paper_title: Adaptive Control via Backstepping Technique and Neural Networks of a Quadrotor Helicopter
paper_content:
A nonlinear adaptive controller for the quadrotor helicopter is proposed using backstepping technique mixed with neural networks. The backstepping strategy is used to achieve good tracking of desired translation positions and yaw angle while maintaining the stability of pitch and roll angles simultaneously. The knowledge of all physical parameters and the exact model of the quadrotor are not required for the controller, only some properties of the model are needed. In fact, online adaptation of neural networks and some parameters is used to compensate some unmodeled dynamics including aerodynamic effects. Under certain relaxed assumptions, the proposed control scheme can guarantee that all the signals in the closed-loop system are Uniformly Ultimately Bounded (UUB). The design methodology is based on Lyapunov stability. One salient feature of the proposed approach is that the controller can be applied to any type of quadrotor helicopter of different masses and lengths within the same class. The feasibility of the control scheme is demonstrated through simulation results.
---
paper_title: Feedback linearization and high order sliding mode observer for a quadrotor UAV
paper_content:
In this paper, a feedback linearization-based controller with a high order sliding mode observer running in parallel is applied to a quadrotor unmanned aerial vehicle. The high order sliding mode observer works as an observer and estimator of the effect of external disturbances such as wind and noise. The whole observer-estimator-control law constitutes an original approach to the vehicle regulation with a minimal number of sensors. Performance issues of the controller-observer are illustrated in a simulation study that takes into account parameter uncertainties and external disturbances.
---
paper_title: Backstepping sliding mode controller improved with fuzzy logic: Application to the quadrotor helicopter
paper_content:
In this paper we present a new design method for the flight control of an autonomous quadrotor helicopter based on fuzzy sliding mode control using the backstepping approach. Due to the underactuated property of the quadrotor helicopter, the controller can drive the three positions (x, y, z) of the helicopter and the yaw angle to their desired values and stabilize the pitch and roll angles. A first-order nonlinear sliding surface is obtained using the backstepping technique, on which the developed sliding mode controller is based. Mathematical development for the stability and convergence of the system is presented. The main purpose is to eliminate the chattering phenomenon; thus we have used a fuzzy logic control to generate the hitting control signal. The performance of the nonlinear control method is evaluated by simulation and the results demonstrate the effectiveness of the proposed control strategy for the quadrotor helicopter in vertical flights.
---
paper_title: Design and construction of a novel quad tilt-wing UAV
paper_content:
This paper presents aerodynamic and mechanical design, prototyping and flight control system design of a new unmanned aerial vehicle SUAVI (Sabanci University Unmanned Aerial VehIcle). SUAVI is an electric powered quad tilt-wing UAV that is capable of vertical takeoff and landing (VTOL) like a helicopter and long duration horizontal flight like an airplane. Aerodynamic and mechanical designs are optimized to enhance the operational performance of the aerial vehicle. Both of them have great importance for increasing efficiency, reaching the flight duration goals and achieving the desired tasks. A full dynamical model is derived by utilizing Newton–Euler formulation for the development of the flight control system. The prototype is constructed from carbon composite material. A hierarchical control system is designed where a high level controller (supervisor) is responsible for task decision, monitoring states of the vehicle, generating references for low level controllers, etc., and several low level controllers are responsible for attitude and altitude stabilization. Results of several simulations and real flight tests are provided along with flight data to show performance of the developed UAV.
---
paper_title: Modeling and control of a novel tilt — Roll rotor quadrotor UAV
paper_content:
The use of unmanned aerial vehicles (UAVs) in military, scientific, and civilian sectors is increasing drastically in recent years. The quadrotor platform has been used for many applications and research studies as well. One of the limiting factors that prevents further implementation of the quadrotor system in applications is the way the quadrotor moves: it needs to tilt along the desired direction of motion in order to generate the necessary acceleration in that direction. However, tilting has the undesired effect of moving the direction of view of the onboard cameras. This becomes an issue for surveillance and other vision-based tasks. This study presents the design and control of a novel quadrotor system. Unlike previous studies that use a regular quadrotor, this study proposes an alternative propulsion system formed by tilting rotors. This design eliminates the need to tilt the airframe, and it suggests superior performance with respect to the regular quadrotor design. The mathematical model of the tiltable-rotor type quadrotor and the designed control algorithms are explained. Various simulations are developed in MATLAB, in which the proposed quadrotor aerial vehicle has been successfully controlled. Comparison of the proposed system to the regular quadrotor suggests better performance.
---
|
Title: A Review of Control Algorithms for Autonomous Quadrotors
Section 1: Introduction
Description 1: Introduce the significance and applications of quadrotors, and outline the focus and structure of the review.
Section 2: Mathematical Model
Description 2: Explain the key mathematical equations and models used to describe the dynamics of quadrotors.
Section 3: Survey of Control Algorithms
Description 3: Provide an overview of the various control algorithms applied to quadrotors, categorizing them into linear and non-linear control schemes.
Section 4: Proportional Integral Derivative (PID)
Description 4: Discuss the PID control algorithm and its applications, strengths, and limitations in controlling quadrotors.
Section 5: Linear Quadratic Regulator/Gaussian-LQR/G
Description 5: Examine the LQR and LQG control algorithms, their methodologies, and their effectiveness in quadrotor control.
Section 6: Sliding Mode Control (SMC)
Description 6: Describe the implementation of sliding mode control, its benefits, and its challenges when applied to quadrotors.
Section 7: (Integrator) Backstepping Control
Description 7: Analyze the backstepping control algorithm and its integrator variant, focusing on their convergence and disturbance handling capabilities.
Section 8: Adaptive Control Algorithms
Description 8: Explore adaptive control techniques used for quadrotors, highlighting how they adapt to parameter changes and uncertainties.
Section 9: Robust Control Algorithms
Description 9: Review robust control strategies designed to handle system uncertainties and disturbances, and discuss their limitations in terms of tracking.
Section 10: Optimal Control Algorithms
Description 10: Provide insights into optimization-based control methods like LQR and H∞, emphasizing their application to quadrotors and their robustness issues.
Section 11: Feedback Linearization
Description 11: Explain feedback linearization control methods, their application to quadrotors, and their sensitivity to model inaccuracies.
Section 12: Intelligent Control (Fuzzy Logic and Artificial Neural Networks)
Description 12: Cover intelligent control approaches such as fuzzy logic and neural networks, discussing their complexity and computational requirements.
Section 13: Hybrid Control Algorithms
Description 13: Discuss hybrid control schemes that combine multiple control philosophies to overcome individual limitations and enhance overall performance.
Section 14: Discussion and Conclusion
Description 14: Summarize the findings of the review, discuss the strengths and weaknesses of different control algorithms, and outline future research directions in quadrotor control.
|
Euler Diagrams 2004 Preliminary Version A Survey of Reasoning Systems Based on Euler Diagrams
| 13 |
---
paper_title: Constraint diagrams: visualizing invariants in object-oriented models
paper_content:
A new visual notation is proposed for precisely expressing constraints on object-oriented models, as an alternative to mathematical logic notation used in methods such as Syntropy and Catalysis. The notation is potentially intuitive, expressive, integrates well with existing visual notations, and has a clear and unambiguous semantics. It is reminiscent of informal diagrams used by mathematicians for illustrating relations, and borrows much from Venn diagrams. It may be viewed as a generalization of instance diagrams.
---
paper_title: The Logical Status of Diagrams
paper_content:
Acknowledgements 1. Introduction 2. Preliminaries 3. Venn-I 4. Venn-II 5. Venn-II and L0 6. Diagrammatic versus linguistic representation 7. Conclusion Appendix References Index.
---
paper_title: SD2: a sound and complete diagrammatic reasoning system
paper_content:
SD2 is a system of Venn-type diagrams that can be used to reason diagrammatically about sets, their cardinalities and their relationships. They augment the systems of Venn-Peirce diagrams investigated by Shin (1994) to include lower and upper bounds for the cardinalities of the sets represented by regions of diagrams. We summarise their syntax and semantics and introduce inference rules for reasoning with the system. We discuss the soundness of the system and develop a proof strategy for completeness simpler than that adopted by Shin. We expect this strategy to extend to other, richer spider diagram systems and to constraint diagrams, the visual notation that has been used in conjunction with object-oriented modelling notations such as the Unified Modelling Language.
---
paper_title: Logic and Visual Information
paper_content:
From the Publisher: The importance of visual information is clear from its frequent presence in everyday reasoning and communication, and also in computation. This book examines the logical foundations of visual information presented in the form of diagrams, graphs, charts, tables and maps.
---
paper_title: Implementing Euler/Venn Reasoning Systems
paper_content:
This paper proposes an implementation of a Euler/Venn reasoning system using directed acyclic graphs and shows that this implementation is correct with respect to a modified Shin/Hammer mathematical model of Euler/Venn Reasoning. In proving its correctness it will also be shown that the proposed implementation preserves or inherits the soundness and completeness properties of the mathematical model of the Euler/Venn system.
---
paper_title: Logic and Visual Information
paper_content:
From the Publisher: The importance of visual information is clear from its frequent presence in everyday reasoning and communication, and also in computation. This book examines the logical foundations of visual information presented in the form of diagrams, graphs, charts, tables and maps.
---
paper_title: The Logical Status of Diagrams
paper_content:
Acknowledgements 1. Introduction 2. Preliminaries 3. Venn-I 4. Venn-II 5. Venn-II and L0 6. Diagrammatic versus linguistic representation 7. Conclusion Appendix References Index.
---
paper_title: The information content of Euler/Venn diagrams
paper_content:
An ignition system for internal combustion engines. It generates a controlled-duration continuous-wave high-frequency spark, and employs an output transformer in an oscillator which includes a control winding for starting and stopping the oscillator. There is an electronic switch in series with the control winding; and the spark intervals, including duration thereof, are determined by photoelectric engine-timed means that employ a phototransistor. There is a control circuit for the electronic switch, which circuit includes means for minimizing the response time of the phototransistor.
---
paper_title: Towards a model theory of Venn diagrams
paper_content:
In a regenerative type gas turbine application, a mounting apparatus for a recuperator to support the recuperator between the relatively stationary end walls of a gas turbine housing while simultaneously permitting movement of the recuperator in plural directions with respect to the walls caused by thermal expansion.
---
paper_title: Modeling Heterogeneous Systems
paper_content:
Reasoning practices and decision making often require information from many different sources, which can be both sentential and diagrammatic. In such situations, there are many advantages to reasoning with the diagrams themselves, as opposed to re-expressing the information content of the diagram in sentential form and reasoning in an abstract sentential language. Thus for these practices, being able to extract and re-express pieces of information from one kind of representation into another is essential. The main goal of this paper is to propose a general framework for the modeling of heterogeneous reasoning systems and, most importantly, heterogeneous rules of inference in those systems. Unlike some other work in designing heterogeneous systems, our purpose will not be to define just one notion of heterogeneous inference, but rather to provide a framework in which many different kinds of heterogeneous rules of inference can be defined. After proposing this framework, we will then show how it can be applied to a sample heterogeneous system to define a number of different heterogeneous rules of inference. We will also discuss how the framework can be used to define rules of inference similar to the Observe Rule in Barwise and Etchemendy's Hyperproof system.
---
paper_title: Using DAG Transformations to Verify Euler/Venn Homogeneous and Euler/Venn FOL Heterogeneous Rules of Inference
paper_content:
In this paper we will present a graph-transformation based method for the verification of heterogeneous first order logic (FOL) and Euler/Venn proofs. It has been shown that a special collection of directed acyclic graphs (DAGs) can be used interchangeably with Euler/Venn diagrams in reasoning processes [4]. Thus, proofs which include Euler/Venn diagrams can be thought of as proofs with DAGs where steps involving only Euler/Venn diagrams can be treated as particular DAG transformations. In the work reported here, we will show how the characterization of these manipulations can be used to verify Euler/Venn proofs. Also, a method for verifying the use of heterogeneous Euler/Venn and FOL reasoning rules will be presented that is also based upon DAG transformations.
---
paper_title: The Logical Status of Diagrams
paper_content:
Acknowledgements 1. Introduction 2. Preliminaries 3. Venn-I 4. Venn-II 5. Venn-II and L0 6. Diagrammatic versus linguistic representation 7. Conclusion Appendix References Index.
---
paper_title: Logic and Visual Information
paper_content:
From the Publisher: The importance of visual information is clear from its frequent presence in everyday reasoning and communication, and also in computation. This book examines the logical foundations of visual information presented in the form of diagrams, graphs, charts, tables and maps.
---
paper_title: Implementing Euler/Venn Reasoning Systems
paper_content:
This paper proposes an implementation of a Euler/Venn reasoning system using directed acyclic graphs and shows that this implementation is correct with respect to a modified Shin/Hammer mathematical model of Euler/Venn Reasoning. In proving its correctness it will also be shown that the proposed implementation preserves or inherits the soundness and completeness properties of the mathematical model of the Euler/Venn system.
---
paper_title: A case study of the design and implementation of heterogeneous reasoning systems
paper_content:
In recent years we have witnessed a growing interest in heterogeneous reasoning systems. A heterogeneous reasoning system incorporates representations from a number of different representation systems, in our case a sentential and a diagrammatic system. The advantage of heterogeneous systems is that they allow a reasoner to bridge the gaps among various formalisms and construct threads of proof which cross the boundaries of the systems of representation. In doing this, these heterogeneous systems allow the reasoner to take advantage of each component system’s ability to express information in that component’s area of expertise. The purpose of this paper is twofold: to propose a general theoretical framework, inspired by Barwise and Seligman’s work in Information Theory [Barwise and Seligman, 1997], for the design of heterogeneous reasoning systems and to use this framework as the basis of an implementation of a First Order Logic and Euler/Venn reasoning system.
---
paper_title: Reasoning with Extended Venn-Peirce Diagrammatic Systems
paper_content:
A method for making molten metal for casting directly from cold starting material. Cold starting material is placed in a movable vessel, and a layer of particulate material selected from the group consisting of coke and flux is spread on the top of the starting material in the vessel. One or more flames of hydrocarbon fuel-oxygen mixture are directed to the particulate layer, while translating the vessel along a closed path, so as to heat the particulate to white-hot state and to mix the cold starting material with the particulate thus heated, for melting the starting material. A composition-control agent is added in the material thus melted for producing desired composition of molten metal, while maintaining the flames and the translation. A fluid curtain is formed around the flames to inhibit the flames from directly burning the inner surface of the vessel.
---
paper_title: On the Completeness and Expressiveness of Spider Diagram Systems
paper_content:
Spider diagram systems provide a visual language that extends the popular and intuitive Venn diagrams and Euler circles. Designed to complement object-oriented modelling notations in the specification of large software systems they can be used to reason diagrammatically about sets, their cardinalities and their relationships with other sets. A set of reasoning rules for a spider diagram system is shown to be sound and complete. We discuss the extension of this result to diagrammatically richer notations and also consider their expressiveness. Finally, we show that for a rich enough system we can diagrammatically express the negation of any diagram.
---
paper_title: Spider diagrams: A diagrammatic reasoning system
paper_content:
Spider diagrams combine and extend Venn diagrams and Euler circles to express constraints on sets and their relationships with other sets. These diagrams can be used in conjunction with object-oriented modelling notations such as the Unified Modelling Language. This paper summarises the main syntax and semantics of spider diagrams. It also introduces inference rules for reasoning with spider diagrams and a rule for combining spider diagrams. This system is shown to be sound but not complete. Disjunctive diagrams are considered as one way of enriching the system to allow combination of diagrams so that no semantic information is lost. The relationship of this system of spider diagrams to other similar systems, which are known to be sound and complete, is explored briefly.
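The sketch below gives one possible machine representation of a unitary spider diagram (zones as sets of contour labels, spiders as habitats, shaded zones) together with a naive satisfaction check against a concrete interpretation. The encoding and names are my own simplification, not the paper's formal syntax or semantics.

```python
# Illustrative sketch (my own encoding, not the paper's formal syntax): a unitary
# spider diagram as zones, shaded zones and spider habitats, with a naive
# semantics check against a concrete interpretation of the contour labels.
from itertools import chain

def elements_in_zone(zone, labels, interp):
    """Elements lying inside every contour of `zone` and outside all other contours."""
    universe = set(chain.from_iterable(interp.values()))
    inside = set.intersection(*(interp[l] for l in zone)) if zone else universe
    rest = labels - zone
    outside = set.union(*(interp[l] for l in rest)) if rest else set()
    return inside - outside

def satisfies(diagram, interp):
    labels = frozenset(diagram["labels"])
    # Shaded zones must be empty (no spider has a foot in them in this tiny example).
    for z in diagram["shaded"]:
        if elements_in_zone(z, labels, interp):
            return False
    # Each spider requires at least one element somewhere in its habitat.
    for habitat in diagram["spiders"]:
        if not any(elements_in_zone(z, labels, interp) for z in habitat):
            return False
    return True

d = {"labels": {"A", "B"},
     "shaded": [frozenset({"A", "B"})],   # the zone inside both A and B is shaded
     "spiders": [[frozenset({"A"})]]}     # one spider whose habitat is "A only"
print(satisfies(d, {"A": {1}, "B": {2, 3}}))   # True under this interpretation
```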
---
paper_title: Reasoning with spider diagrams
paper_content:
Spider diagrams combine and extend Venn diagrams and Euler circles to express constraints on sets and their relationships with other sets. These diagrams can usefully be used in conjunction with object-oriented modelling notations such as the Unified Modelling Language (UML). This paper summarises the main syntax and semantics of spider diagrams and introduces four inference rules for reasoning with spider diagrams and a rule governing the equivalence of the Venn and Euler forms of spider diagrams. This paper also details rules for combining two spider diagrams to produce a single diagram which retains as much of their combined semantic information as possible, and discusses disjunctive diagrams as one possible way of enriching the system in order to combine spider diagrams so that no semantic information is lost.
---
paper_title: Reasoning with Extended Venn-Peirce Diagrammatic Systems
paper_content:
A method for making molten metal for casting directly from cold starting material. Cold starting material is placed in a movable vessel, and a layer of particulate material selected from the group consisting of coke and flux is spread on the top of the starting material in the vessel. One or more flames of hydrocarbon fuel-oxygen mixture are directed to the particulate layer, while translating the vessel along a closed path, so as to heat the particulate to white-hot state and to mix the cold starting material with the particulate thus heated, for melting the starting material. A composition-control agent is added in the material thus melted for producing desired composition of molten metal, while maintaining the flames and the translation. A fluid curtain is formed around the flames to inhibit the flames from directly burning the inner surface of the vessel.
---
paper_title: On Diagram Tokens and Types
paper_content:
Rejecting the temptation to make up a list of necessary and sufficient conditions for diagrammatic and sentential systems, we present an important distinction which arises from sentential and diagrammatic features of systems. Importantly, the distinction we will explore in the paper lies at a meta-level. That is, we argue for a major difference in meta-theory between diagrammatic and sentential systems, by showing the necessity of a more fine-grained syntax for a diagrammatic system than for a sentential system. Unlike with sentential systems, a diagrammatic system requires two levels of syntax--token and type. Token-syntax is about particular diagrams instantiated on some physical medium, and type-syntax provides a formal definition with which a concrete representation of a diagram must comply. While these two levels of syntax are closely related, the domains of type-syntax and token-syntax are distinct from each other. Euler diagrams are chosen as a case study to illustrate the following major points of the paper: (i) What kinds of diagrammatic features (as opposed to sentential features) require two different levels of syntax? (ii) What is the relation between these two levels of syntax? (iii) What is the advantage of having a two-tiered syntax?
---
paper_title: Using DAG Transformations to Verify Euler/Venn Homogeneous and Euler/Venn FOL Heterogeneous Rules of Inference
paper_content:
In this paper we will present a graph-transformation based method for the verification of heterogeneous first order logic (FOL) and Euler/Venn proofs. It has been shown that a special collection of directed acyclic graphs (DAGs) can be used interchangeably with Euler/Venn diagrams in reasoning processes [4]. Thus, proofs which include Euler/Venn diagrams can be thought of as proofs with DAGs where steps involving only Euler/Venn diagrams can be treated as particular DAG transformations. In the work reported here, we will show how the characterization of these manipulations can be used to verify Euler/Venn proofs. Also, a method for verifying the use of heterogeneous Euler/Venn and FOL reasoning rules will be presented that is also based upon DAG transformations.
---
paper_title: Reasoning with Extended Venn-Peirce Diagrammatic Systems
paper_content:
A method for making molten metal for casting directly from cold starting material. Cold starting material is placed in a movable vessel, and a layer of particulate material selected from the group consisting of coke and flux is spread on the top of the starting material in the vessel. One or more flames of hydrocarbon fuel-oxygen mixture are directed to the particulate layer, while translating the vessel along a closed path, so as to heat the particulate to white-hot state and to mix the cold starting material with the particulate thus heated, for melting the starting material. A composition-control agent is added in the material thus melted for producing desired composition of molten metal, while maintaining the flames and the translation. A fluid curtain is formed around the flames to inhibit the flames from directly burning the inner surface of the vessel.
---
paper_title: Reasoning with Extended Venn-Peirce Diagrammatic Systems
paper_content:
A method for making molten metal for casting directly from cold starting material. Cold starting material is placed in a movable vessel, and a layer of particulate material selected from the group consisting of coke and flux is spread on the top of the starting material in the vessel. One or more flames of hydrocarbon fuel-oxygen mixture are directed to the particulate layer, while translating the vessel along a closed path, so as to heat the particulate to white-hot state and to mix the cold starting material with the particulate thus heated, for melting the starting material. A composition-control agent is added in the material thus melted for producing desired composition of molten metal, while maintaining the flames and the translation. A fluid curtain is formed around the flames to inhibit the flames from directly burning the inner surface of the vessel.
---
paper_title: Reasoning with Extended Venn-Peirce Diagrammatic Systems
paper_content:
A method for making molten metal for casting directly from cold starting material. Cold starting material is placed in a movable vessel, and a layer of particulate material selected from the group consisting of coke and flux is spread on the top of the starting material in the vessel. One or more flames of hydrocarbon fuel-oxygen mixture are directed to the particulate layer, while translating the vessel along a closed path, so as to heat the particulate to white-hot state and to mix the cold starting material with the particulate thus heated, for melting the starting material. A composition-control agent is added in the material thus melted for producing desired composition of molten metal, while maintaining the flames and the translation. A fluid curtain is formed around the flames to inhibit the flames from directly burning the inner surface of the vessel.
---
paper_title: What can spider diagrams say
paper_content:
Spider diagrams are a visual notation for expressing logical statements. In this paper we identify a well known fragment of first order predicate logic, that we call \(\mathcal {ESD}\), equivalent in expressive power to the spider diagram language. The language \(\mathcal {ESD}\) is monadic and includes equality but has no constants or function symbols. To show this equivalence, in one direction, for each diagram we construct a sentence in \(\mathcal {ESD}\) that expresses the same information. For the more challenging converse we show there exists a finite set of models for a sentence S that can be used to classify all the models for S. Using these classifying models we show that there is a diagram expressing the same information as S.
---
paper_title: The Expressiveness of Spider Diagrams Augmented with Constants
paper_content:
Spider diagrams are a visual language for expressing logical statements. Spiders represent the existence of elements and contours denote sets. Several sound and complete spider diagram systems have been developed and it is known that the spider diagram language is equivalent in expressive power to monadic first order logic with equality. However, these sound and complete spider diagram systems do not contain syntactic elements analogous to constants in first order predicate logic. We extend the spider diagram language to include constant spiders which represent specific individuals and give formal semantics for the extended diagram language. We then prove that this extended system is equivalent in expressive power to the language of spider diagrams without constants
---
paper_title: Projections in Venn-Euler diagrams
paper_content:
Venn diagrams and Euler circles have long been used to express constraints on sets and their relationships with other sets. However, these notations can get very cluttered when we consider many closed curves or contours. In order to reduce this clutter, and to focus attention within the diagram appropriately, the notion of a projected contour, or projection, is introduced. Informally, a projected contour is a contour that describes a set of elements limited to a certain context. Through a series of examples, we develop a formal semantics of projections and discuss the issues involved in introducing these.
---
paper_title: Towards a formalization of constraint diagrams
paper_content:
Geared to complement UML and to the specification of large software systems by non-mathematicians, constraint diagrams are a visual language that generalizes the popular and intuitive Venn diagrams and Euler circles, and adds facilities for quantifying over elements and navigating relations. The language design emphasizes scalability and expressiveness while retaining intuitiveness. Spider diagrams form a subset of the notation, leaving out universal quantification and the ability to navigate relations. Spider diagrams have been given a formal definition. This paper extends that definition to encompass the constraint diagram notation. The formalization of constraint diagrams is nontrivial: it exposes subtleties concerned with the implicit ordering of symbols in the visual language, which were not evident before a formal definition of the language was attempted. This has led to an improved design of the language.
---
paper_title: Drawing graphs in Euler diagrams
paper_content:
We describe a method for drawing graph-enhanced Euler diagrams using a three stage method. The first stage is to lay out the underlying Euler diagram using a multicriteria optimizing system. The second stage is to find suitable locations for nodes in the zones of the Euler diagram using a force based method. The third stage is to minimize edge crossings and total edge length by swapping the location of nodes that are in the same zone with a multicriteria hill climbing method. We show a working version of the software that draws spider diagrams. Spider diagrams represent logical expressions by superimposing graphs upon an Euler diagram. This application requires an extra step in the drawing process because the embedded graphs only convey information about the connectedness of nodes and so a spanning tree must be chosen for each maximally connected component. Similar notations to Euler diagrams enhanced with graphs are common in many applications and our method is generalizable to drawing Hypergraphs represented in the subset standard, or to drawing Higraphs where edges are restricted to connecting with only atomic nodes.
---
paper_title: Dynamic Euler Diagram Drawing
paper_content:
In this paper we describe a method to lay out a graph enhanced Euler diagram so that it looks similar to a previously drawn graph enhanced Euler diagram. This task is nontrivial when the underlying structures of the diagrams differ. In particular, if a structural change is made to an existing drawn diagram, our work enables the presentation of the new diagram with minor disruption to the user's mental map. As the new diagram can be generated from an abstract representation, its initial embedding may be very different from that of the original. We have developed comparison measures for Euler diagrams, integrated into a multicriteria optimizer, and applied a force model for associated graphs that attempts to move nodes towards their positions in the original layout. To further enhance the usability of the system, the transition between diagrams can be animated
---
paper_title: A reading algorithm for constraint diagrams
paper_content:
Constraint diagrams are a visual notation designed to complement the Unified Modeling Language in the development of software systems. They generalize Venn diagrams and Euler circles, and include facilities for quantification and navigation of relations. Their design emphasizes scalability and expressiveness while retaining intuitiveness. The formalization of constraint diagrams is non-trivial: previous attempts have exposed subtleties concerned with the ordering of symbols in the visual language. Consequently, some constraint diagrams have more than one intuitive reading. We develop the concept of the dependence graph for a constraint diagram. From the dependence graph, we obtain a set of reading trees. A reading tree provides a partial ordering for some syntactic elements of the diagram. Given a reading tree for a constraint diagram, we present an algorithm that delivers a unique semantic reading.
---
paper_title: Towards a default reading for constraint diagrams
paper_content:
Constraint diagrams are a diagrammatic notation which may be used to express logical constraints. They were designed to complement the Unified Modeling Language in the development of software systems. They generalize Venn diagrams and Euler circles, and include facilities for quantification and navigation of relations. Due to the lack of a linear ordering of symbols inherent in a diagrammatic language which expresses logical statements, some constraint diagrams have more than one intuitive meaning. We generalize, from an example based approach, to suggest a default reading for constraint diagrams. This reading is usually unique, but may require a small number of simple user choices.
---
paper_title: Reasoning with Projected Contours
paper_content:
Projected contours enable Euler diagrams to scale better. They enable the representation of information using less syntax and can therefore increase visual clarity. Here informal reasoning rules are given that allow the transformation of spider diagrams with respect to projected contours.
---
paper_title: Reasoning with spider diagrams
paper_content:
Spider diagrams combine and extend Venn diagrams and Euler circles to express constraints on sets and their relationships with other sets. These diagrams can usefully be used in conjunction with object-oriented modelling notations such as the Unified Modelling Language (UML). This paper summarises the main syntax and semantics of spider diagrams and introduces four inference rules for reasoning with spider diagrams and a rule governing the equivalence of the Venn and Euler forms of spider diagrams. This paper also details rules for combining two spider diagrams to produce a single diagram which retains as much of their combined semantic information as possible, and discusses disjunctive diagrams as one possible way of enriching the system in order to combine spider diagrams so that no semantic information is lost.
---
paper_title: Computing Reading Trees for Constraint Diagrams
paper_content:
Constraint diagrams are a visual notation designed to complement the Unified Modeling Language in the development of software systems. They generalize Venn diagrams and Euler circles, and include facilities for quantification and navigation of relations. Their design emphasizes scalability and expressiveness while retaining intuitiveness. Due to subtleties concerned with the ordering of symbols in this visual language, the formalization of constraint diagrams is non-trivial; some constraint diagrams have more than one intuitive reading. A ‘reading’ algorithm, which associates a unique semantic interpretation to a constraint diagram, with respect to a reading tree, has been developed. A reading tree provides a partial ordering for syntactic elements of the diagram. Reading trees are obtainable from a partially directed graph, called the dependence graph of the diagram. In this paper we describe a ‘tree-construction’ algorithm, which utilizes graph transformations in order to produce all possible reading trees from a dependence graph. This work will aid the production of tools which will allow an advanced user to choose from a range of semantic interpretations of a diagram.
---
paper_title: Constraint diagrams: visualizing invariants in object-oriented models
paper_content:
A new visual notation is proposed for precisely expressing constraints on object-oriented models, as an alternative to mathematical logic notation used in methods such as Syntropy and Catalysis. The notation is potentially intuitive, expressive, integrates well with existing visual notations, and has a clear and unambiguous semantics. It is reminiscent of informal diagrams used by mathematicians for illustrating relations, and borrows much from Venn diagrams. It may be viewed as a generalization of instance diagrams.
---
paper_title: Generating Readable Proofs: A Heuristic Approach to Theorem Proving With Spider Diagrams
paper_content:
An important aim of diagrammatic reasoning is to make it easier for people to create and understand logical arguments. We have worked on spider diagrams, which visually express logical statements. Ideally, automatically generated proofs should be short and easy to understand. An existing proof generator for spider diagrams successfully writes proofs, but they can be long and unwieldy. In this paper, we present a new approach to proof writing in diagrammatic systems, which is guaranteed to find shortest proofs and can be extended to incorporate other readability criteria. We apply the A* algorithm and develop an admissible heuristic function to guide automatic proof construction. We demonstrate the effectiveness of the heuristic used. The work has been implemented as part of a spider diagram reasoning tool.
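The proof search described here can be pictured with a generic A* skeleton in which states are diagrams, edges are rule applications, and the heuristic lower-bounds the remaining proof length. The sketch below is not the tool's implementation; the toy state space and heuristic are assumptions.

```python
# Illustrative sketch (not the tool's implementation): generic A* search where
# states would be diagrams, successors the results of applying reasoning rules,
# and `heuristic` an admissible lower bound on the remaining proof length.
import heapq

def a_star(start, is_goal, successors, heuristic):
    """Return a shortest path from start to a goal state, or None if none exists."""
    frontier = [(heuristic(start), 0, start, [start])]
    best_cost = {start: 0}
    while frontier:
        _, cost, state, path = heapq.heappop(frontier)
        if is_goal(state):
            return path
        for nxt in successors(state):
            new_cost = cost + 1                       # each rule application costs 1
            if new_cost < best_cost.get(nxt, float("inf")):
                best_cost[nxt] = new_cost
                heapq.heappush(frontier, (new_cost + heuristic(nxt),
                                          new_cost, nxt, path + [nxt]))
    return None

# Toy usage: "diagrams" are integers, rules add 1 or 3, heuristic is admissible.
proof = a_star(0, lambda s: s == 7,
               lambda s: [s + 1, s + 3],
               lambda s: max(0, (7 - s + 2) // 3))
```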
---
paper_title: Layout metrics for Euler diagrams
paper_content:
We present an aesthetics based method for drawing Euler diagrams. Aesthetic layout metrics have been found to be useful in graph drawing algorithms, which use metrics motivated by aesthetic principles that aid user understanding of diagrams. We have taken a similar approach to Euler diagram drawing, and have defined a set of suitable metrics to be used within a hill climbing multicriteria optimiser to produce "good" drawings. There are added difficulties when drawing Euler diagrams as they are made up of contours whose structural properties of intersection and containment must be preserved under any layout improvements. We describe our Java implementation of a pair of hill climbing variants to find good drawings, a set of metrics that measure aesthetics for good diagram layout, and issues concerning the choice of weightings for a useful combination of the metrics.
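A stripped-down version of such a weighted multicriteria hill climber is sketched below; the two placeholder metrics and the perturbation scheme are assumptions, and a real Euler diagram optimizer would additionally have to preserve the contours' intersection and containment structure.

```python
# Illustrative sketch (not the paper's metric set): a weighted multicriteria hill
# climber over circle layouts. Each contour is (x, y, radius); the two metrics
# below are simple placeholders standing in for the paper's aesthetic measures.
import random

def fitness(layout, weights=(1.0, 0.5)):
    spread = sum(abs(r - 1.0) for _, _, r in layout)       # prefer unit radii
    centred = sum(x * x + y * y for x, y, _ in layout)     # prefer a compact layout
    return weights[0] * spread + weights[1] * centred      # lower is better

def hill_climb(layout, steps=1000, delta=0.05, seed=0):
    rng = random.Random(seed)
    best, best_f = list(layout), fitness(layout)
    for _ in range(steps):
        i = rng.randrange(len(best))
        j = rng.randrange(3)                               # perturb x, y or radius
        candidate = [list(c) for c in best]
        candidate[i][j] += rng.uniform(-delta, delta)
        candidate = [tuple(c) for c in candidate]
        f = fitness(candidate)
        if f < best_f:                                     # keep only improvements
            best, best_f = candidate, f
    return best

layout = hill_climb([(0.3, 0.2, 1.4), (-0.5, 0.1, 0.7)])
```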
---
paper_title: Constraint diagrams: A step beyond UML
paper_content:
The Unified Modeling Language (UML) is a set of notations for modelling object-oriented systems. It has become the de facto standard. Most of its notations are diagrammatic. An exception to this is the Object Constraint Language (OCL) which is essentially a textual, stylised form of first order predicate logic. We describe a notation, constraint diagrams, which were introduced as a visual technique intended to be used in conjunction with the UML for object-oriented modelling. Constraint diagrams provide a diagrammatic notation for expressing constraints (e.g., invariants) that could only be expressed in UML using OCL.
---
paper_title: Generating proofs with spider diagrams using heuristics
paper_content:
We apply the A* algorithm to guide a diagrammatic theorem proving tool. The algorithm requires a heuristic function, which provides a metric on the search space. In this paper we present a collection of metrics between two spider diagrams. We combine these metrics to give a heuristic function that provides a lower bound on the length of a shortest proof from one spider diagram to another, using a collection of sound reasoning rules. We compare the effectiveness of our approach with a breadth-first search for proofs.
---
paper_title: Generating Euler Diagrams
paper_content:
This article describes an algorithm for the automated generation of any Euler diagram starting with an abstract description of the diagram. An automated generation mechanism for Euler diagrams forms the foundations of a generation algorithm for notations such as Harel's higraphs, constraint diagrams and some of the UML notation. An algorithm to generate diagrams is an essential component of a diagram tool for users to generate, edit and reason with diagrams.The work makes use of properties of the dual graph of an abstract diagram to identify which abstract diagrams are "drawable" within given wellformedness rules on concrete diagrams. A Java program has been written to implement the algorithm and sample output is included.
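One piece of this pipeline that is easy to illustrate is the dual-graph construction over an abstract description: zones become nodes and two zones are joined when they differ in exactly one contour label. The sketch below is my own simplification of that idea, not the paper's algorithm.

```python
# Illustrative sketch (my own simplification, not the paper's algorithm): build a
# dual graph whose nodes are the zones of an abstract description and whose edges
# join zones differing in exactly one contour label.
from itertools import combinations

def dual_graph(zones):
    """zones: iterable of frozensets of contour labels (the abstract description)."""
    nodes = [frozenset(z) for z in zones]
    edges = [(a, b) for a, b in combinations(nodes, 2)
             if len(a.symmetric_difference(b)) == 1]
    return nodes, edges

# Venn-like abstract description: two contours A and B that intersect.
abstract = [frozenset(), frozenset({"A"}), frozenset({"B"}), frozenset({"A", "B"})]
nodes, edges = dual_graph(abstract)
# edges: {}-{A}, {}-{B}, {A}-{A,B}, {B}-{A,B} -> a 4-cycle, drawable as two
# overlapping circles.
```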
---
paper_title: A constraint diagram reasoning system
paper_content:
The Unified Modeling Language (UML) is a collection of notations which are mainly diagrammatic. These notations are used by software engineers in the process of object oriented modelling. The only textual notation in the UML is the Object Constraint Language (OCL). The OCL is used to express logical constraints such as system invariants. Constraint diagrams are designed to provide a diagrammatic alternative to the OCL. Since constraint diagrams are visual they complement existing notations in the UML. Spider diagrams form the basis of constraint diagrams and sound and complete reasoning systems have been developed. Spider diagrams allow subset relations between sets and cardinality constraints on sets to be expressed. In addition to this, constraint diagrams allow universal quantification and relational navigation and hence are vastly more expressive. In this paper we present the first constraint diagram reasoning system. We give syntax and semantics for constraint diagrams we call CD1 diagrams. We identify syntactic criteria that allow us to determine whether a CD1 diagram is satisfiable. We give descriptions of a set of sound and complete reasoning rules for CD1 diagrams.
---
|
Title: A Survey of Reasoning Systems Based on Euler Diagrams
Section 1: Introduction
Description 1: Introduce Euler diagrams, their semantics, limitations, and the purpose of the survey.
Section 2: Reasoning with Euler Diagrams
Description 2: Describe Hammer’s Euler diagram reasoning system and its reasoning rules.
Section 3: Venn Diagrams
Description 3: Outline the features of Venn diagrams and their differences from Euler diagrams.
Section 4: Venn-Peirce Diagrams
Description 4: Discuss modifications by Peirce to Venn diagrams to increase expressiveness.
Section 5: Venn-I Diagrams
Description 5: Explain Shin’s Venn-I diagram system and its reasoning rules.
Section 6: Venn-II Diagrams
Description 6: Describe Shin’s Venn-II diagrams, their rules, and expressive power.
Section 7: Euler/Venn Diagrams
Description 7: Outline Swoboda and Allwein's Euler/Venn diagram system and its features.
Section 8: Spider Diagrams
Description 8: Explain the development, features, and reasoning systems of spider diagrams.
Section 9: SD1 Diagrams
Description 9: Provide details on the SD1 spider diagram system, its syntax, and reasoning rules.
Section 10: SD2 Diagrams
Description 10: Discuss the SD2 system that extends SD1 with the ability to express upper bounds on cardinalities.
Section 11: ESD2 Diagrams
Description 11: Outline the ESD2 system that includes Euler-based diagrams and additional syntax.
Section 12: Further Spider Diagram Systems
Description 12: Explain additional spider diagram systems, including SD3, that further extend the capabilities of SD2.
Section 13: Constraint Diagrams
Description 13: Describe constraint diagrams, their expressiveness, and their application in software engineering.
|
Six reasons for rejecting an industrial survey paper
| 11 |
---
paper_title: Preliminary Findings from a Survey on the MD State of the Practice
paper_content:
In the context of an Italian research project, this paper reports on an on-line survey, performed with 155 software professionals, with the aim of investigating their opinions on and experiences with modeling during software development and with Model-driven engineering usage. The survey also focused on the modeling languages, processes and tools used. A preliminary analysis of the results confirmed that Model-driven engineering and, more generally, software modeling are very relevant phenomena. Approximately 68% of the sample use models during software development. Among them, 44% generate code starting from models and 16% execute them directly. The preferred language for modeling is UML, but DSLs are used as well.
---
paper_title: Principles of survey research: part 1: turning lemons into lemonade
paper_content:
Surveys are probably the most commonly-used research method world-wide. Survey work is visible not only because we see many examples of it in software engineering research, but also because we are often asked to participate in surveys in our private capacity, as electors, consumers, or service users. This widespread use of surveys may give us the impression that survey-based research is straightforward, an easy option for researchers to gather important information about products, context, processes, workers and more. In our personal experience with applying and evaluating research methods and their results, we certainly did not expect to encounter major problems with a survey that we planned, to investigate issues associated with technology adoption. This article and subsequent ones in this series describe how wrong we were. We do not want to give the impression that there is any way of turning a bad survey into a good one; if a survey is a lemon, it stays a lemon. However, we believe that learning from our mistakes is the way to make lemonade from lemons. So this series of articles shares with you our lessons learned, in the hope of improving survey research in software engineering.
---
paper_title: Stakeholders' Perception of Success: An Empirical Investigation
paper_content:
Different stakeholders involved in software development may attribute success to different indicators. Analogously, they may point to different factors as being at the root of successful projects. The study presented in this paper explores how different stakeholders perceive project success and what they deem to be the effect of specific factors on the project outcome. The study highlighted both commonalities and differences among three main stakeholder classes. A substantial agreement was observed concerning the characteristics that make a project or product successful. As far as the factors that could lead to success are concerned, more bias emerged.
---
paper_title: SOA adoption in the Italian industry
paper_content:
We conducted a personal opinion survey in two rounds – years 2008 and 2011 – with the aim of investigating the level of knowledge and adoption of SOA in the Italian industry. We are also interested in understanding the trend of SOA (positive or negative) and which methods, technologies and tools are actually used in the industry. The main findings of this survey are the following: (1) SOA is a relevant phenomenon in Italy, (2) Web services and RESTful services are well known and used, and (3) orchestration languages and UDDI are little known and used. These results suggest that in Italy SOA is interpreted in a more simplistic way with respect to the current/real definition (i.e., without the concepts of orchestration/choreography and registry). Currently, the adoption of SOA is medium/low with a stable/positive trend of pervasiveness.
---
paper_title: Practical Experiences in the Design and Conduct of Surveys in Empirical Software Engineering
paper_content:
A survey is an empirical research strategy for the collection of information from heterogeneous sources. In this way, survey results often exhibit a high degree of external validity. It is complementary to other empirical research strategies such as controlled experiments, which usually have their strengths in the high internal validity of the findings. While there is a growing number of (quasi-)controlled experiments reported in the software engineering literature, few results of large scale surveys have been reported there. Hence, there is still a lack of knowledge on how to use surveys in a systematic manner for software engineering empirical research.
---
paper_title: Maturity of software modelling and model driven engineering: a survey in the Italian industry
paper_content:
Background: The main claimed advantage of model driven engineering is the improvement of productivity. However, little information is available about its actual usage during software development and maintenance in the industry. Objective: The main aim of this work is investigating the level of maturity in the usage of software models and model driven engineering in the Italian industry. The perspective is that of software engineering researchers. Method: First, we conducted an exploratory personal opinion survey with 155 Italian software professionals. The data were collected with the help of a web-based on-line questionnaire. Then, we conducted focused interviews with three software professionals to interpret doubtful results. Results: Software modelling is a very relevant phenomenon in the Italian industry. Model driven techniques are used in the industry, even if (i) only to a limited extent, (ii) despite a quite generalized dissatisfaction with the available tools and (iii) despite a generally low experience of the IT personnel in such techniques. Limitations: Generalization of results is limited due to the sample size. Moreover, possible self-exclusion of participants not interested in modelling could have biased the results. Conclusion: Results reinforce existing evidence regarding the usage of software modelling and (partially of) model driven engineering in the industry but highlight several aspects of immaturity of the Italian industry.
---
paper_title: Software migration projects in Italian industry: Preliminary results from a state of the practice survey
paper_content:
Software migration is a fundamental and complex task in software maintenance, particularly relevant in recent years given the pervasiveness of Web and mobile technologies. In the context of an Italian Research Project devoted to the empirical assessment of migration techniques and tools, this paper reports on a survey, performed among 59 Italian Information Technology (IT) companies, with the aim of investigating their experiences in software migration, their main migration goals, and the pieces of technology adopted. The research project and the survey focused, in particular, on in-house migration projects towards the Web, service oriented architectures and wireless environments. A preliminary analysis of results confirmed that software migration is a very relevant phenomenon. Most migration activities performed in recent years targeted the Web, with a small number of migrations towards mobile platforms and towards service-oriented architectures. Among other things, the survey highlights a limited, and insufficient, usage and availability of tools for supporting migration tasks.
---
paper_title: Actual vs. perceived effect of software engineering practices in the Italian industry
paper_content:
A commonly cited limitation of software engineering research consists in its detachment from the industrial practice. Several studies have analyzed a number of practices and identified their benefits and drawbacks but little is known about their dissemination in the industry. For a set of 18 practices commonly studied in the literature, this paper investigated diffusion, effect on the success, and perceived usefulness in 62 actual industrial projects from 28 Italian IT companies. In particular we proposed a classification of these perceptions and we were able to classify 14 practices. We found statistical evidence that 7 factors have an actual effect (positive for 6 of them, negative for one). Moreover 77% (10 out of 13) of the known good practices (e.g., importance of good project schedule or complete requirements' list) are perceived consistently by the industry. For a few other practices (having a champion's support, using metrics, reducing quality) we noticed a lack of awareness in the industry. Starting from these observations we propose guidelines for industrial practice and suggestions for academic research.
---
paper_title: Migration of information systems in the Italian industry: A state of the practice survey
paper_content:
Context: Software migration, and in particular migration towards the Web and towards distributed architectures, is a challenging and complex activity, and has been particularly relevant in recent years due to the large number of migration projects the industry had to face because of the increasing pervasiveness of the Web and of mobile devices. Objective: This paper reports a survey aimed at identifying the state of the practice of the Italian industry concerning previous experiences in software migration projects (specifically of information systems), the adopted tools, and the emerging needs and problems. Method: The study has been carried out among 59 Italian Information Technology companies, and for each company a representative person had to answer an on-line questionnaire concerning migration experiences, pieces of technology involved in migration projects, adopted tools, and problems that occurred during the project. Results: Indicate that migration, especially towards the Web, is highly relevant for Italian IT companies, and that companies tend to increasingly adopt free and open source solutions rather than commercial ones. Results also indicate that the adoption of specific tools for migration is still very limited, either because of the lack of skills and knowledge, or due to the lack of mature and adequate options. Conclusions: Findings from this survey suggest the need for further technology transfer between academia and industry for the purpose of favoring the adoption of software migration techniques and tools.
---
paper_title: An exploratory survey on SOA knowledge, adoption and trend in the Italian industry
paper_content:
The main aim of this work is to investigate the level of knowledge and diffusion of SOA (Service Oriented Architecture) in the Italian industry. We are also interested in understanding the trend of SOA (positive or negative) and which methods, technologies and tools are actually used in the industry.
---
|
Title: Six Reasons for Rejecting an Industrial Survey Paper
Section 1: INTRODUCTION
Description 1: Introduce the motivation and background for industrial surveys in software engineering and explain their importance and challenges.
Section 2: EXPERIENCE GAINED
Description 2: Summarize the authors' experiences and the details of several industrial surveys they have conducted over the past ten years.
Section 3: METHOD
Description 3: Describe the methodology used to analyze the reviewer comments about the industrial survey papers, including categorizing the comments and analyzing their frequency.
Section 4: CRITICISMS
Description 4: Present the categorized criticisms received for the survey papers, including their frequency and groundedness, and discuss each criticism in detail.
Section 5: No Practical Usefulness
Description 5: Discuss the common criticism regarding the lack of practical usefulness of the survey results and provide a rebuttal.
Section 6: Sampling Bias
Description 6: Describe the issues related to sampling bias, including self-selection bias, sampling frame, and the representativeness of the sample, and suggest possible mitigation strategies.
Section 7: Obvious Conclusions
Description 7: Address the criticism that survey results are often deemed obvious or non-controversial and provide a counter-argument.
Section 8: People's Perceptions
Description 8: Explain the limitation of surveys collecting self-reported data and argue for the importance of human perceptions in software engineering.
Section 9: Non Respondents
Description 9: Discuss the challenges of analyzing non-respondents and the computation of response rates.
Section 10: Limited Geographical Scope
Description 10: Address the criticism of limited geographical scope in survey studies and discuss its implications on the generalizability of the findings.
Section 11: CONCLUSIONS
Description 11: Summarize the main findings and argue for a more lenient and understanding approach to evaluating industrial survey papers, emphasizing the importance of such studies despite their limitations.
|
Approximation Metrics Based on Probabilistic Bisimulations for General State-Space Markov Processes: A Survey
| 7 |
---
paper_title: Approximate Analysis of Probabilistic Processes: Logic, Simulation and Games
paper_content:
We tackle the problem of non-robustness of simulation and bisimulation when dealing with probabilistic processes. It is important to ignore tiny deviations in probabilities because these often come from experiments or estimations. A few approaches have been proposed to treat this issue, for example metrics to quantify the non-bisimilarity (or closeness) of processes. Relaxing the definition of simulation and bisimulation is another avenue which we follow. We give a new semantics for a known simple logic for probabilistic processes and show that it characterises a notion of ε-simulation. We also define two-player games that correspond to these notions: the existence of a winning strategy for one of the players determines ε-(bi)simulation. Of course, for all the notions defined, letting ε = 0 gives back the usual notions of logical equivalence, simulation and bisimulation. However, in contrast to what happens in fully probabilistic systems when ε = 0, two-way ε-simulation for ε > 0 is not equal to ε-bisimulation. Next we give a polynomial-time algorithm to compute a naturally derived metric: the distance between states s and t is defined as the smallest ε such that s and t are ε-equivalent. This is the first polynomial algorithm for a non-discounted metric. Finally we show that most of these notions can be extended to deal with probabilistic systems that allow non-determinism as well.
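As a hedged illustration (one common way to relax probabilistic bisimulation by a slack ε, not necessarily the exact definition used in the paper): writing τ_a(s) for the transition (sub)probability measure of state s under label a, and R(X) = { t : ∃ x ∈ X with x R t }, a relation R is an ε-simulation when
\[
s \, R \, t \;\Longrightarrow\; \forall a,\ \forall X \subseteq S:\quad \tau_a(s)(X) \;\le\; \tau_a(t)\big(R(X)\big) + \varepsilon .
\]
Imposing the same condition on the inverse relation gives a two-way notion, and setting ε = 0 recovers the usual (bi)simulation conditions.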
---
paper_title: Weak Bisimulation for Fully Probabilistic Processes
paper_content:
Bisimulations that abstract from internal computation have proven to be useful for verification of compositionally defined transition systems. In the literature of probabilistic extensions of such transition systems, similar bisimulations are rare. In this paper, we introduce weak and branching bisimulation for fully probabilistic systems, transition systems where nondeterministic branching is replaced by probabilistic branching. In contrast to the nondeterministic case, both relations coincide. We give an algorithm to decide weak (and branching) bisimulation with a time complexity cubic in the number of states of the fully probabilistic system. This meets the worst case complexity for deciding branching bisimulation in the nondeterministic case. In addition, the relation is shown to be a congruence with respect to the operators of PLSCCS, a lazy synchronous probabilistic variant of CCS. We illustrate that due to these properties, weak bisimulation provides all the crucial ingredients for mechanised compositional verification of probabilistic transition systems.
---
paper_title: Principles of Model Checking
paper_content:
Our growing dependence on increasingly complex computer and software systems necessitates the development of formalisms, techniques, and tools for assessing functional properties of these systems. One such technique that has emerged in the last twenty years is model checking, which systematically (and automatically) checks whether a model of a given system satisfies a desired property such as deadlock freedom, invariants, and request-response properties. This automated technique for verification and debugging has developed into a mature and widely used approach with many applications. Principles of Model Checking offers a comprehensive introduction to model checking that is not only a text suitable for classroom use but also a valuable reference for researchers and practitioners in the field. The book begins with the basic principles for modeling concurrent and communicating systems, introduces different classes of properties (including safety and liveness), presents the notion of fairness, and provides automata-based algorithms for these properties. It introduces the temporal logics LTL and CTL, compares them, and covers algorithms for verifying these logics, discussing real-time systems as well as systems subject to random phenomena. Separate chapters treat such efficiency-improving techniques as abstraction and symbolic manipulation. The book includes an extensive set of examples (most of which run through several chapters) and a complete set of basic results accompanied by detailed proofs. Each chapter concludes with a summary, bibliographic notes, and an extensive list of exercises of both practical and theoretical nature.
---
paper_title: Approximately Bisimilar Symbolic Models for Incrementally Stable Switched Systems
paper_content:
Switched systems constitute an important modeling paradigm faithfully describing many engineering systems in which software interacts with the physical world. Despite considerable progress on stability and stabilization of switched systems, the constant evolution of technology demands that we make similar progress with respect to different, and perhaps more complex, objectives. This paper describes one particular approach to address these different objectives based on the construction of approximately equivalent (bisimilar) symbolic models for switched systems. The main contribution of this paper consists in showing that under standard assumptions ensuring incremental stability of a switched system (i.e., existence of a common Lyapunov function, or multiple Lyapunov functions with dwell time), it is possible to construct a finite symbolic model that is approximately bisimilar to the original switched system with a precision that can be chosen a priori. To support the computational merits of the proposed approach, we use symbolic models to synthesize controllers for two examples of switched systems, including the boost dc-dc converter.
---
paper_title: Exact and Ordinary Lumpability in Finite Markov Chains
paper_content:
Exact and ordinary lumpability in finite Markov chains is considered. Both concepts naturally define an aggregation of the Markov chain yielding an aggregated chain that allows the exact determination of several stationary and transient results for the original chain. We show which quantities can be determined without an error from the aggregated process and describe methods to calculate bounds on the remaining results. Furthermore, the concept of lumpability is extended to near lumpability.
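For background (standard conditions stated here for a discrete-time chain with transition matrix P and partition {S_1, ..., S_m}; the paper may state them in a slightly different form): ordinary lumpability requires, for every pair of blocks S_i, S_j and all states s, s' ∈ S_i,
\[
\sum_{u \in S_j} P(s, u) \;=\; \sum_{u \in S_j} P(s', u),
\]
while exact lumpability imposes the dual condition on incoming probabilities,
\[
\sum_{u \in S_j} P(u, s) \;=\; \sum_{u \in S_j} P(u, s').
\]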
---
paper_title: Probabilistic Simulations for Probabilistic Processes
paper_content:
Several probabilistic simulation relations for probabilistic systems are defined and evaluated according to two criteria: compositionality and preservation of “interesting” properties. Here, the interesting properties of a system are identified with those that are expressible in an untimed version of the Timed Probabilistic concurrent Computation Tree Logic (TPCTL) of Hansson. The definitions are made, and the evaluations carried out, in terms of a general labeled transition system model for concurrent probabilistic computation. The results cover weak simulations, which abstract from internal computation, as well as strong simulations, which do not.
---
paper_title: Continuous stochastic logic characterizes bisimulation of continuous-time Markov processes
paper_content:
In a recent paper Baier et al. [Lecture Notes in Computer Science, Springer-Verlag, 2000, p. 358] analyzed a new way of model-checking formulas of a logic for continuous-time processes—called continuous stochastic logic (henceforth CSL)—against continuous-time Markov chains—henceforth CTMCs. One of the important results of that paper was the proof that if two CTMCs were bisimilar then they would satisfy exactly the same formulas of CSL. This raises the converse question—does satisfaction of the same collection of CSL formulas imply bisimilarity? In other words, given two CTMCs which are known to satisfy exactly the same formulas of CSL does it have to be the case that they are bisimilar? We prove that the answer to the question just raised is “yes”. In fact we prove a significant extension, namely that a subset of CSL suffices even for systems where the state space may be a continuum. Along the way we prove a result to the effect that the set of Zeno paths has measure zero provided that the transition rates are bounded.
---
paper_title: O-Minimal Hybrid Systems
paper_content:
An important approach to decidability questions for verification algorithms of hybrid systems has been the construction of a bisimulation. Bisimulations are finite state quotients whose reachability properties are equivalent to those of the original infinite state hybrid system. In this paper we introduce the notion of o-minimal hybrid systems, which are initialized hybrid systems whose relevant sets and flows are definable in an o-minimal theory. We prove that o-minimal hybrid systems always admit finite bisimulations. We then present specific examples of hybrid systems with complex continuous dynamics for which finite bisimulations exist.
---
paper_title: Bisimulation for probabilistic transition systems: a coalgebraic approach
paper_content:
The notion of bisimulation as proposed by Larsen and Skou for discrete probabilistic transition systems is shown to coincide with a coalgebraic definition in the sense of Aczel and Mendler in terms of a set functor, which associates to a set its collection of simple probability distributions. This coalgebraic formulation makes it possible to generalize the concepts of discrete probabilistic transition system and probabilistic bisimulation to a continuous setting involving Borel probability measures. A functor ℳ1 is introduced that yields for a metric space its collection of Borel probability measures. Under reasonable conditions, this functor exactly captures generalized probabilistic bisimilarity. Application of the final coalgebra paradigm to a functor based on ℳ1 then yields an internally fully abstract semantical domain with respect to probabilistic bisimulation, which is therefore well suited for the interpretation of probabilistic specification and stochastic programming concepts.
---
paper_title: Approximately bisimilar symbolic models for nonlinear control systems
paper_content:
Control systems are usually modeled by differential equations describing how physical phenomena can be influenced by certain control parameters or inputs. Although these models are very powerful when dealing with physical phenomena, they are less suitable to describe software and hardware interfacing the physical world. For this reason there is a growing interest in describing control systems through symbolic models that are abstract descriptions of the continuous dynamics, where each "symbol" corresponds to an "aggregate" of states in the continuous model. Since these symbolic models are of the same nature of the models used in computer science to describe software and hardware, they provide a unified language to study problems of control in which software and hardware interact with the physical world. Furthermore the use of symbolic models enables one to leverage techniques from supervisory control and algorithms from game theory for controller synthesis purposes. In this paper we show that every incrementally globally asymptotically stable nonlinear control system is approximately equivalent (bisimilar) to a symbolic model. The approximation error is a design parameter in the construction of the symbolic model and can be rendered as small as desired. Furthermore if the state space of the control system is bounded the obtained symbolic model is finite. For digital control systems, and under the stronger assumption of incremental input-to-state stability, symbolic models can be constructed through a suitable quantization of the inputs.
---
paper_title: Bisimulation for Labelled Markov Processes
paper_content:
In this paper we introduce a new class of labelled transition systems - Labelled Markov Processes - and define bisimulation for them. Labelled Markov processes are probabilistic labelled transition systems where the state space is not necessarily discrete, it could be the reals, for example. We assume that it is a Polish space (the underlying topological space for a complete separable metric space). The mathematical theory of such systems is completely new from the point of view of the extant literature on probabilistic process algebra; of course, it uses classical ideas from measure theory and Markov process theory. The notion of bisimulation builds on the ideas of Larsen and Skou and of Joyal, Nielsen and Winskel. The main result that we prove is that a notion of bisimulation for Markov processes on Polish spaces, which extends the Larsen-Skou definition for discrete systems, is indeed an equivalence relation. This turns out to be a rather hard mathematical result which, as far as we know, embodies a new result in pure probability theory. This work heavily uses continuous mathematics which is becoming an important part of work on hybrid systems.
---
paper_title: HYTECH: A Model Checker for Hybrid Systems
paper_content:
A hybrid system consists of a collection of digital programs that interact with each other and with an analog environment. Examples of hybrid systems include medical equipment, manufacturing controllers, automotive controllers, and robots. The formal analysis of the mixed digital-analog nature of these systems requires a model that incorporates the discrete behavior of computer programs with the continuous behavior of environment variables, such as temperature and pressure. Hybrid automata capture both types of behavior by combining finite automata with differential inclusions (i.e. differential inequalities). HyTech is a symbolic model checker for linear hybrid automata, an expressive, yet automatically analyzable, subclass of hybrid automata. A key feature of HyTech is its ability to perform parametric analysis, i.e. to determine the values of design parameters for which a linear hybrid automaton satisfies a temporal requirement.
---
paper_title: Bisimulation for general stochastic hybrid systems
paper_content:
In this paper we define a bisimulation concept for some very general models for stochastic hybrid systems (general stochastic hybrid systems). The definition of bisimulation builds on the ideas of Edalat and of Larsen and Skou and of Joyal, Nielsen and Winskel. The main result is that this bisimulation for GSHS is indeed an equivalence relation. The secondary result is that this bisimulation relation for the stochastic hybrid system models used in this paper implies the same kind of bisimulation for their continuous parts and respectively for their jumping structures.
---
paper_title: Bisimulation for Communicating Piecewise Deterministic Markov Processes (CPDPs)
paper_content:
CPDPs (Communicating Piecewise Deterministic Markov Processes) can be used for compositional specification of systems from the class of stochastic hybrid processes formed by PDPs (Piecewise Deterministic Markov Processes). We define CPDPs and the composition of CPDPs, and prove that the class of CPDPs is closed under composition. Then we introduce a notion of bisimulation for PDPs and CPDPs and we prove that bisimilar PDPs as well as bisimilar CPDPs have equal stochastic behavior. Finally, as main result, we prove the congruence property that, for a composite CPDP, substituting components by different but bisimilar components results in a CPDP that is bisimilar to the original composite CPDP (and therefore has equal stochastic behavior).
---
paper_title: Metrics for Markov Decision Processes with Infinite State Spaces
paper_content:
We present metrics for measuring state similarity in Markov decision processes (MDPs) with infinitely many states, including MDPs with continuous state spaces. Such metrics provide a stable quantitative analogue of the notion of bisimulation for MDPs, and are suitable for use in MDP approximation. We show that the optimal value function associated with a discounted infinite horizon planning task varies continuously with respect to our metric distances.
---
paper_title: Metrics for Finite Markov Decision Processes
paper_content:
We present metrics for measuring the similarity of states in a finite Markov decision process (MDP). The formulation of our metrics is based on the notion of bisimulation for MDPs, with an aim towards solving discounted infinite horizon reinforcement learning tasks. Such metrics can be used to aggregate states, as well as to better structure other value function approximators (e.g., memory-based or nearest-neighbor approximators). We provide bounds that relate our metric distances to the optimal values of states in the given MDP.
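A representative form of such a bisimulation-based metric (a sketch in the spirit of this line of work; the weights c_R and c_T are design parameters, and the exact formulation in the paper may differ) is the least fixed point of
\[
d(s,t) \;=\; \max_{a \in A}\Big( c_R\,\big|R(s,a) - R(t,a)\big| \;+\; c_T\,\mathcal{T}_K(d)\big(P(\cdot \mid s,a),\,P(\cdot \mid t,a)\big) \Big),
\]
where \mathcal{T}_K(d) denotes the Kantorovich (Wasserstein-1) distance between transition distributions induced by d; for suitable weights, distance zero coincides with bisimilarity.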
---
paper_title: Approximation Metrics for Discrete and Continuous Systems
paper_content:
Established system relationships for discrete systems, such as language inclusion, simulation, and bisimulation, require system observations to be identical. When interacting with the physical world, modeled by continuous or hybrid systems, exact relationships are restrictive and not robust. In this paper, we develop the first framework of system approximation that applies to both discrete and continuous systems by developing notions of approximate language inclusion, approximate simulation, and approximate bisimulation relations. We define a hierarchy of approximation pseudo-metrics between two systems that quantify the quality of the approximation, and capture the established exact relationships as zero sections. Our approximation framework is compositional for a synchronous composition operator. Algorithms are developed for computing the proposed pseudo-metrics, both exactly and approximately. The exact algorithms require the generalization of the fixed point algorithms for computing simulation and bisimulation relations, or dually, the solution of a static game whose cost is the so-called branching distance between the systems. Approximations for the pseudo-metrics can be obtained by considering Lyapunov-like functions called simulation and bisimulation functions. We illustrate our approximation framework in reducing the complexity of safety verification problems for both deterministic and nondeterministic continuous systems
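For concreteness, an ε-approximate bisimulation relation can be sketched as follows (a paraphrase in the style of this framework, assuming labelled transition systems T_1, T_2 with output maps g_1, g_2 into a common metric space (Y, d); not a verbatim quote of the paper's definition): R ⊆ X_1 × X_2 is an ε-approximate bisimulation if, for all (x_1, x_2) ∈ R,
\[
d\big(g_1(x_1), g_2(x_2)\big) \le \varepsilon, \qquad
x_1 \xrightarrow{a} x_1' \;\Rightarrow\; \exists\, x_2 \xrightarrow{a} x_2' \text{ with } (x_1', x_2') \in R,
\]
together with the symmetric condition on transitions of T_2; taking ε = 0 recovers exact bisimulation.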
---
paper_title: A theory of timed automata
paper_content:
We propose timed (finite) automata to model the behavior of real-time systems over time. Our definition provides a simple, and yet powerful, way to annotate state-transition graphs with timing constraints using finitely many real-valued clocks. A timed automaton accepts timed words, i.e., infinite sequences in which a real-valued time of occurrence is associated with each symbol. We study timed automata from the perspective of formal language theory: we consider closure properties, decision problems, and subclasses. We consider both nondeterministic and deterministic transition structures, and both Büchi and Muller acceptance conditions. We show that nondeterministic timed automata are closed under union and intersection, but not under complementation, whereas deterministic timed Muller automata are closed under all Boolean operations. The main construction of the paper is a (PSPACE) algorithm for checking the emptiness of the language of a (nondeterministic) timed automaton. We also prove that the universality problem and the language inclusion problem are solvable only for the deterministic automata: both problems are undecidable (Π¹₁-hard) in the nondeterministic case and PSPACE-complete in the deterministic case. Finally, we discuss the application of this theory to automatic verification of real-time requirements of finite-state systems.
---
paper_title: Bisimilar linear systems
paper_content:
The notion of bisimulation in theoretical computer science is one of the main complexity reduction methods for the analysis and synthesis of labeled transition systems. Bisimulations are special quotients of the state space that preserve many important properties expressible in temporal logics, and, in particular, reachability. In this paper, the framework of bisimilar transition systems is applied to various transition systems that are generated by linear control systems. Given a discrete-time or continuous-time linear system, and a finite observation map, we characterize linear quotient maps that result in quotient transition systems that are bisimilar to the original system. Interestingly, the characterizations for discrete-time systems are more restrictive than for continuous-time systems, due to the existence of an atomic time step. We show that computing the coarsest bisimulation, which results in maximum complexity reduction, corresponds to computing the maximal controlled or reachability invariant subspace inside the kernel of the observations map. These results establish strong connections between complexity reduction concepts in control theory and computer science.
---
paper_title: Equivalence notions and model minimization in Markov decision processes
paper_content:
Many stochastic planning problems can be represented using Markov Decision Processes (MDPs). A difficulty with using these MDP representations is that the common algorithms for solving them run in time polynomial in the size of the state space, where this size is extremely large for most real-world planning problems of interest. Recent AI research has addressed this problem by representing the MDP in a factored form. Factored MDPs, however, are not amenable to traditional solution methods that call for an explicit enumeration of the state space. One familiar way to solve MDP problems with very large state spaces is to form a reduced (or aggregated) MDP with the same properties as the original MDP by combining "equivalent" states. In this paper, we discuss applying this approach to solving factored MDP problems--we avoid enumerating the state space by describing large blocks of "equivalent" states in factored form, with the block descriptions being inferred directly from the original factored representation. The resulting reduced MDP may have exponentially fewer states than the original factored MDP, and can then be solved using traditional methods. The reduced MDP found depends on the notion of equivalence between states used in the aggregation. The notion of equivalence chosen will be fundamental in designing and analyzing algorithms for reducing MDPs. Optimally, these algorithms will be able to find the smallest possible reduced MDP for any given input MDP and notion of equivalence (i.e., find the "minimal model" for the input MDP). Unfortunately, the classic notion of state equivalence from non-deterministic finite state machines generalized to MDPs does not prove useful. We present here a notion of equivalence that is based upon the notion of bisimulation from the literature on concurrent processes. Our generalization of bisimulation to stochastic processes yields a non-trivial notion of state equivalence that guarantees the optimal policy for the reduced model immediately induces a corresponding optimal policy for the original model. With this notion of state equivalence, we design and analyze an algorithm that minimizes arbitrary factored MDPs and compare this method analytically to previous algorithms for solving factored MDPs. We show that previous approaches implicitly derive equivalence relations that we define here.
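As an illustration of computing such an equivalence on an explicitly enumerated (non-factored) MDP, here is a naive partition-refinement sketch in Python; the data representation and the reward-based initial partition are assumptions for the example, not the paper's factored algorithm.

    # Naive partition refinement computing the coarsest probabilistic bisimulation of a
    # finite MDP. P[s][a] is a dict mapping successor states to probabilities, R[s][a] a
    # reward; `tol` makes floating-point signatures hashable and robust to rounding noise.
    def bisimulation_partition(states, actions, P, R, tol=1e-9):
        def reward_sig(s):
            return tuple(round(R[s][a] / tol) for a in actions)
        blocks = {}
        for s in states:                                  # initial split: equal rewards
            blocks.setdefault(reward_sig(s), []).append(s)
        partition = list(blocks.values())
        changed = True
        while changed:
            changed = False
            def sig(s):                                   # aggregate probability into each block, per action
                return tuple(
                    tuple(round(sum(P[s][a].get(t, 0.0) for t in B) / tol) for B in partition)
                    for a in actions
                )
            new_partition = []
            for B in partition:
                groups = {}
                for s in B:
                    groups.setdefault(sig(s), []).append(s)
                if len(groups) > 1:
                    changed = True                        # the block was split; refine again
                new_partition.extend(groups.values())
            partition = new_partition
        return partition                                  # blocks of bisimilar states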
---
paper_title: Specification and refinement of probabilistic processes
paper_content:
A formalism for specifying probabilistic transition systems, which constitute a basic semantic model for description and analysis of, e.g., reliability aspects of concurrent and distributed systems, is presented. The formalism itself is based on transition systems. Roughly, a specification has the form of a transition system in which transitions are labeled by sets of allowed probabilities. A satisfaction relation between processes and specifications that generalizes probabilistic bisimulation equivalence is defined. It is shown that it is analogous to the extension from processes to modal transition systems given by K. Larsen and B. Thomsen (1988). Another weaker criterion views a specification as defining a set of probabilistic processes; refinement is then simply containment between sets of processes. A complete method for verifying containment between specifications, which extends methods for deciding containment between finite automata or tree acceptors, is presented.
---
paper_title: Concurrency and Automata on Infinite Sequences
paper_content:
The paper is concerned with ways in which fair concurrency can be modeled using notations for omega-regular languages, i.e., languages containing infinite sequences, whose recognizers are modified forms of Büchi or Muller-McNaughton automata. There are characterizations of these languages in terms of recursion equation sets which involve both minimal and maximal fixpoint operators. The class of ω-regular languages is closed under a fair concurrency operator. A general method for proving/deciding equivalences between such languages is obtained, derived from Milner's notion of "simulation".
---
paper_title: The metric analogue of weak bisimulation for probabilistic processes
paper_content:
We observe that equivalence is not a robust concept in the presence of numerical information, such as probabilities, in the model. We develop a metric analogue of weak bisimulation in the spirit of our earlier work on metric analogues for strong bisimulation. We give a fixed point characterization of the metric. This makes available coinductive reasoning principles and allows us to prove metric analogues of the usual algebraic laws for process combinators. We also show that quantitative properties of interest are continuous with respect to the metric, which says that if two processes are close in the metric then observable quantitative properties of interest are indeed close. As an important example of this we show that nearby processes have nearby channel capacities, a quantitative measure of their propensity to leak information.
---
paper_title: Abstraction in Probabilistic Process Algebras
paper_content:
Process algebras with abstraction have been widely used for the specification and verification of non-probabilistic concurrent systems. The main strategy in these algebras is introducing a constant, denoting an internal action, and a set of fairness rules. Following the same approach, in this paper we propose a fully probabilistic process algebra with abstraction which contains a set of verification rules as counterparts of the fairness rules in standard ACP-like process algebras with abstraction. Having probabilities present and employing the results from Markov chain analysis, these rules are expressible in a very intuitive way. In addition to this algebraic approach, we introduce a new version of probabilistic branching bisimulation for the alternating model of probabilistic systems. Different from other approaches, this bisimulation relation requires the same probability measure only for specific related processes called entries. We claim this definition corresponds better with intuition. Moreover, the fairness rules are sound in the model based on this bisimulation. Finally, we present an algorithm to decide our branching bisimulation with a polynomial-time complexity in the number of the states of the probabilistic graph.
---
paper_title: Bisimilar control affine systems
paper_content:
The notion of bisimulation plays a very important role in theoretical computer science where it provides several notions of equivalence between models of computation. These equivalences are in turn used to simplify analysis and synthesis for these models. In system theory, a similar notion is also of interest in order to develop modular analysis and design tools for purely continuous or hybrid control systems. We introduce two notions of bisimulation for nonlinear systems. We present a differential-algebraic characterization of these notions and show that bisimilar systems of different dimensions are obtained by factoring out certain invariant distributions. Furthermore, we also show that all bisimilar systems of different dimension are of this form.
---
paper_title: Comparative Branching-Time Semantics for Markov Chains
paper_content:
This paper presents various semantics in the branching-time spectrum of discrete-time and continuous-time Markov chains (DTMCs and CTMCs). Strong and weak bisimulation equivalence and simulation preorders are covered and are logically characterized in terms of the temporal logics Probabilistic Computation Tree Logic (PCTL) and Continuous Stochastic Logic (CSL). Apart from presenting various existing branching-time relations in a uniform manner, this paper presents the following new results: (i) strong simulation for CTMCs, (ii) weak simulation for CTMCs and DTMCs, (iii) logical characterizations thereof (including weak bisimulation for DTMCs), (iv) a relation between weak bisimulation and weak simulation equivalence, and (v) various connections between equivalences and pre-orders in the continuous- and discrete-time settings. The results are summarized in a branching-time spectrum for DTMCs and CTMCs elucidating their semantics as well as their relationship.
---
paper_title: Metrics for Labelled Markov Processes
paper_content:
The notion of process equivalence of probabilistic processes is sensitive to the exact probabilities of transitions. Thus, a slight change in the transition probabilities will result in two equivalent processes being deemed no longer equivalent. This instability is due to the quantitative nature of probabilistic processes. In a situation where the process behavior has a quantitative aspect there should be a more robust approach to process equivalence. This paper studies a metric between labelled Markov processes. This metric has the property that processes are at zero distance if and only if they are bisimilar. The metric is inspired by earlier work on logics for characterizing bisimulation and is related, in spirit, to the Hutchinson metric.
---
paper_title: On infinite-horizon probabilistic properties and stochastic bisimulation functions
paper_content:
This work investigates infinite-horizon properties over discrete-time stochastic models with continuous state spaces. The focus is on understanding how the structural features of a model (e.g., the presence of absorbing sets) affect the values of these properties and relate to their uniqueness. Furthermore, we argue that the investigation of these features can lead to approximation bounds for the value of such properties, as well as to improvements on their computation. The article employs the presented results to find a stochastic bisimulation function of two processes.
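A typical guarantee delivered by a stochastic bisimulation function, sketched here as background rather than as the paper's exact statement: if V ≥ 0 satisfies V(x_1, x_2) ≥ \|h_1(x_1) - h_2(x_2)\|^2 and t ↦ V(x_1(t), x_2(t)) is a supermartingale along paired trajectories, then the supermartingale inequality yields
\[
\mathbb{P}\Big( \sup_{t \ge 0} \big\| h_1(x_1(t)) - h_2(x_2(t)) \big\| \ge \varepsilon \Big) \;\le\; \frac{V\big(x_1(0), x_2(0)\big)}{\varepsilon^{2}} .
\]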
---
paper_title: Approximate abstractions of stochastic systems: A randomized method
paper_content:
This work introduces a randomized method for the design of an approximate abstraction of a stochastic system and the assessment of its quality. The proposed approach relies on the formulation of the problem as a semi-infinite chance-constrained optimization program and on its solution via randomization. The method has quite general applicability, since it only requires to be able to run multiple executions of the candidate abstract model and of the original system and to compute their distance. Two variants of the notion of distance are considered in view of a possible use of the approximate abstraction for probabilistic safety verification. The approach is tested on a numerical example.
---
paper_title: A contractivity approach for probabilistic bisimulations of diffusion processes
paper_content:
This work is concerned with the problem of characterizing and computing probabilistic bisimulations of diffusion processes. A probabilistic bisimulation relation between two such processes is defined through a bisimulation function, which induces an approximation metric on the expectation of the (squared norm of the) distance between the two processes. We introduce sufficient conditions for the existence of a bisimulation function, based on the use of contractivity analysis for probabilistic systems. Furthermore, we show that the notion of stochastic contractivity is related to a probabilistic version of the concept of incremental stability. This relationship leads to a procedure that constructs a discrete approximation of a diffusion process. The procedure is based on the discretization of space and time. Given a diffusion process, we provide sufficient conditions for the existence of such an approximation, and show that it is probabilistically bisimilar to the original process, up to a certain approximation precision.
---
paper_title: Approximate Abstractions of Stochastic Hybrid Systems
paper_content:
We present a constructive procedure for obtaining a finite approximate abstraction of a discrete-time stochastic hybrid system. The procedure consists of a partition of the state space of the system and depends on a controllable parameter. Given proper continuity assumptions on the model, the approximation errors introduced by the abstraction procedure are explicitly computed and it is shown that they can be tuned through the parameter of the partition. The abstraction is interpreted as a Markov set-chain. We show that the enforcement of certain ergodic properties on the stochastic hybrid model implies the existence of a finite abstraction with finite error in time over the concrete model, and allows introducing a finite-time algorithm that computes the abstraction.
---
paper_title: Approximations of Stochastic Hybrid Systems
paper_content:
This paper develops a notion of approximation for a class of stochastic hybrid systems that includes, as special cases, both jump linear stochastic systems and linear stochastic hybrid automata. Our approximation framework is based on the recently developed notion of the so-called stochastic simulation functions. These Lyapunov-like functions can be used to rigorously quantify the distance or error between a system and its approximate abstraction. For the class of jump linear stochastic systems and linear stochastic hybrid automata, we show that the computation of stochastic simulation functions can be cast as a tractable linear matrix inequality problem. This enables us to compute the modeling error incurred by abstracting some of the continuous dynamics, or by neglecting the influence of stochastic noise, or even the influence of stochastic discrete jumps.
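The way such stochastic simulation functions quantify error can be sketched generically (this is the flavour of the guarantee, not the paper's exact statement): if $\varphi(x_1,x_2) \ge \|h_1(x_1) - h_2(x_2)\|^2$ and $\varphi$ evaluated along the coupled dynamics is a supermartingale, then a Doob-type inequality gives
\[
\mathbb{P}\Big\{ \sup_{0 \le t \le T} \|y_1(t) - y_2(t)\| \ge \varepsilon \;\Big|\; (x_1(0), x_2(0)) \Big\} \;\le\; \frac{\varphi\big(x_1(0), x_2(0)\big)}{\varepsilon^{2}},
\]
so the value of $\varphi$ at the initial condition bounds the probability that the outputs of the system and of its abstraction drift apart by more than $\varepsilon$ over the horizon.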
---
paper_title: A numerical approximation scheme for reachability analysis of stochastic hybrid systems with state-dependent switchings
paper_content:
We describe a methodology for reachability analysis of a certain class of stochastic hybrid systems, whose continuous dynamics is governed by stochastic differential equations and discrete dynamics by state-dependent probabilistic transitions. The main feature of the proposed methodology is that it rests on the weak approximation of the solution to the stochastic differential equation with random mode transitions by a Markov chain. Reachability computations then reduce to propagating the transition probabilities of the approximating Markov chain. An example of applications to system verification is presented.
---
paper_title: Approximating labeled Markov processes
paper_content:
We study approximate reasoning about continuous-state labeled Markov processes. We show how to approximate a labeled Markov process by a family of finite-state labeled Markov chains. We show that the collection of labeled Markov processes carries a Polish space structure with a countable basis given by finite state Markov chains with rational probabilities. The primary technical tools that we develop to reach these results are: a finite-model theorem for the modal logic used to characterize bisimulation; and a categorical equivalence between the category of Markov processes (with simulation morphisms) and the ω-continuous dcpo Proc, defined as the solution of the recursive domain equation Proc = Π_Labels P_Prob(Proc). The correspondence between labeled Markov processes and Proc yields a logic complete for reasoning about simulation for continuous-state processes.
---
paper_title: Approximation Metrics for Discrete and Continuous Systems
paper_content:
Established system relationships for discrete systems, such as language inclusion, simulation, and bisimulation, require system observations to be identical. When interacting with the physical world, modeled by continuous or hybrid systems, exact relationships are restrictive and not robust. In this paper, we develop the first framework of system approximation that applies to both discrete and continuous systems by developing notions of approximate language inclusion, approximate simulation, and approximate bisimulation relations. We define a hierarchy of approximation pseudo-metrics between two systems that quantify the quality of the approximation, and capture the established exact relationships as zero sections. Our approximation framework is compositional for a synchronous composition operator. Algorithms are developed for computing the proposed pseudo-metrics, both exactly and approximately. The exact algorithms require the generalization of the fixed point algorithms for computing simulation and bisimulation relations, or dually, the solution of a static game whose cost is the so-called branching distance between the systems. Approximations for the pseudo-metrics can be obtained by considering Lyapunov-like functions called simulation and bisimulation functions. We illustrate our approximation framework in reducing the complexity of safety verification problems for both deterministic and nondeterministic continuous systems
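In this metric setting, an $\varepsilon$-approximate bisimulation relation can be stated roughly as follows (a standard paraphrase): a relation $R_\varepsilon \subseteq X_1 \times X_2$ is an approximate bisimulation with precision $\varepsilon$ if every $(x_1, x_2) \in R_\varepsilon$ satisfies
\[
d\big(h_1(x_1), h_2(x_2)\big) \le \varepsilon,
\]
and every transition of $x_1$ can be matched by a transition of $x_2$ (and vice versa) so that the successors remain related by $R_\varepsilon$; two systems are approximately bisimilar with precision $\varepsilon$ when such a relation contains their initial states.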
---
paper_title: Approximation and Weak Convergence Methods for Random Processes with Applications to Stochastic Systems Theory
paper_content:
Control and communications engineers, physicists, and probability theorists, among others, will find this book unique. It contains a detailed development of approximation and limit theorems and methods for random processes and applies them to numerous problems of practical importance. In particular, it develops usable and broad conditions and techniques for showing that a sequence of processes converges to a Markov diffusion or jump process. This is useful when the natural physical model is quite complex, in which case a simpler approximation (a diffusion process, for example) is usually made. The book simplifies and extends some important older methods and develops some powerful new ones applicable to a wide variety of limit and approximation problems. The theory of weak convergence of probability measures is introduced along with general and usable methods (for example, perturbed test function, martingale, and direct averaging) for proving tightness and weak convergence. Kushner's study begins with a systematic development of the method. It then treats dynamical system models that have state-dependent noise or nonsmooth dynamics. Perturbed Liapunov function methods are developed for stability studies of non-Markovian problems and for the study of asymptotic distributions of non-Markovian systems. Three chapters are devoted to applications in control and communication theory (for example, phase-locked loops and adaptive filters). Small-noise problems and an introduction to the theory of large deviations and applications conclude the book. Harold J. Kushner is Professor of Applied Mathematics and Engineering at Brown University and is one of the leading researchers in the area of stochastic processes concerned with analysis and synthesis in control and communications theory. This book is the sixth in The MIT Press Series in Signal Processing, Optimization, and Control, edited by Alan S. Willsky.
---
paper_title: On the Approximation Quality of Markov State Models
paper_content:
We consider a continuous-time Markov process on a large continuous or discrete state space. The process is assumed to have strong enough ergodicity properties and to exhibit a number of metastable sets. Markov state models (MSMs) are designed to represent the effective dynamics of such a process by a Markov chain that jumps between the metastable sets with the transition rates of the original process. MSMs have been used for a number of applications, including molecular dynamics, for more than a decade. Their approximation quality, however, has not yet been fully understood. In particular, it would be desirable to have a sharp error bound for the difference in propagation of probability densities between the MSM and the original process on long timescales. Here, we provide such a bound for a rather general class of Markov processes ranging from diffusions in energy landscapes to Markov jump processes on large discrete spaces. Furthermore, we discuss how this result provides formal support or shows the limitations of algorithmic strategies that have been found to be useful for the construction of MSMs. Our findings are illustrated by numerical experiments.
---
paper_title: Optimal Control of Stochastic Hybrid Systems Based on Locally Consistent Markov Decision Processes
paper_content:
This paper applies a known approach for approximating controlled stochastic diffusion to hybrid systems. Stochastic hybrid systems are approximated by locally consistent Markov decision processes that preserve local mean and covariance. A randomized switching policy is introduced for approximating the dynamics on the switching boundaries. The validity of the approximation is shown by solving the optimal control problem of minimizing a cost until a target set is reached using dynamic programming. It is shown that using the randomized switching policy, the solution obtained based on the discrete approximation converges to the solution of the original problem
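The local consistency requirement underlying this Markov chain approximation method asks, roughly, that the increments of the approximating chain $\xi^h_n$ reproduce the drift $b$ and diffusion $a = \sigma\sigma^{\top}$ of the continuous dynamics to first order in the interpolation interval $\Delta t^h(x)$:
\[
\mathbb{E}\big[\Delta \xi^h_n \mid \xi^h_n = x\big] = b(x)\,\Delta t^h(x) + o\big(\Delta t^h(x)\big),
\qquad
\operatorname{Cov}\big[\Delta \xi^h_n \mid \xi^h_n = x\big] = a(x)\,\Delta t^h(x) + o\big(\Delta t^h(x)\big).
\]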
---
paper_title: Approximate Model Checking of Stochastic Hybrid Systems
paper_content:
A method for approximate model checking of stochastic hybrid systems with provable approximation guarantees is proposed. We focus on the probabilistic invariance problem for discrete time stochastic hybrid systems and propose a two-step scheme. The stochastic hybrid system is first approximated by a finite state Markov chain. The approximating chain is then model checked for probabilistic invariance. Under certain regularity conditions on the transition and reset kernels governing the dynamics of the stochastic hybrid system, the invariance probability computed using the approximating Markov chain is shown to converge to the invariance probability of the original stochastic hybrid system, as the grid used in the approximation gets finer. A bound on the convergence rate is also provided. The performance of the two-step approximate model checking procedure is assessed on a case study of a multi-room heating system.
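A minimal sketch of the two-step scheme on a toy one-dimensional model is given below; the dynamics, noise level, grid and horizon are illustrative assumptions, not taken from the paper. The safe set is partitioned into cells, the transition kernel evaluated at representative points yields a finite Markov chain, and the invariance probability is then computed by a backward recursion over that chain.

import numpy as np
from scipy.stats import norm

# Toy model: x_{k+1} = 0.8 * x_k + Gaussian noise; safe set A = [-1, 1].
a_lo, a_hi, n_cells, sigma, N = -1.0, 1.0, 200, 0.2, 10
edges = np.linspace(a_lo, a_hi, n_cells + 1)
centers = 0.5 * (edges[:-1] + edges[1:])            # representative points

mean_next = 0.8 * centers                           # assumed one-step mean
# Abstraction restricted to A: T[i, j] = P(next state in cell j | x = centers[i]).
T = norm.cdf((edges[None, 1:] - mean_next[:, None]) / sigma) \
  - norm.cdf((edges[None, :-1] - mean_next[:, None]) / sigma)

# Backward recursion for probabilistic invariance over horizon N:
# mass leaving A is simply lost, so row sums of T are below one.
p = np.ones(n_cells)
for _ in range(N):
    p = T @ p

print("approximate invariance probability from x0 = 0:",
      p[np.argmin(np.abs(centers))])

A finer grid shrinks the abstraction error at the cost of a larger chain, which is the convergence trade-off the paper quantifies.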
---
paper_title: Approximating stochastic biochemical processes with Wasserstein pseudometrics.
paper_content:
Modelling stochastic processes inside the cell is difficult due to the size and complexity of the processes being investigated. As a result, new approaches are needed to address the problems of model reduction, parameter estimation, model comparison and model invalidation. Here, the authors propose addressing these problems by using Wasserstein pseudometrics to quantify the differences between processes. The method the authors propose is applicable to any bounded continuous-time stochastic process, and pseudometrics between processes are defined only in terms of the available outputs. Algorithms for approximating Wasserstein pseudometrics from experimental or simulation data are developed, and it is shown how to optimise parameter values to minimise the pseudometrics. The approach is illustrated with studies of a stochastic toggle switch and of stochastic gene expression in E. coli.
---
paper_title: Markov Models and Optimization
paper_content:
Contents: analysis, probability and stochastic processes; piecewise deterministic processes; expectations and distributions; control theory; control by intervention.
---
paper_title: Markov Chains and Stochastic Stability
paper_content:
Meyn & Tweedie is back! The bible on Markov chains in general state spaces has been brought up to date to reflect developments in the field since 1996 - many of them sparked by publication of the first edition. The pursuit of more efficient simulation algorithms for complex Markovian models, or algorithms for computation of optimal policies for controlled Markov models, has opened new directions for research on Markov chains. As a result, new applications have emerged across a wide range of topics including optimisation, statistics, and economics. New commentary and an epilogue by Sean Meyn summarise recent developments and references have been fully updated. This second edition reflects the same discipline and style that marked out the original and helped it to become a classic: proofs are rigorous and concise, the range of applications is broad and knowledgeable, and key ideas are accessible to practitioners with limited mathematical background.
---
paper_title: Probabilistic reachability and safety for controlled discrete time stochastic hybrid systems
paper_content:
In this work, probabilistic reachability over a finite horizon is investigated for a class of discrete time stochastic hybrid systems with control inputs. A suitable embedding of the reachability problem in a stochastic control framework reveals that it is amenable to two complementary interpretations, leading to dual algorithms for reachability computations. In particular, the set of initial conditions providing a certain probabilistic guarantee that the system will keep evolving within a desired 'safe' region of the state space is characterized in terms of a value function, and 'maximally safe' Markov policies are determined via dynamic programming. These results are of interest not only for safety analysis and design, but also for solving those regulation and stabilization problems that can be reinterpreted as safety problems. The temperature regulation problem presented in the paper as a case study is one such case.
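In the notation commonly used for this framework, the maximal probability of remaining within a safe set $A$ over a horizon $N$ is characterized by a backward recursion of the form (sketched here, with $Q$ the controlled transition kernel):
\[
V_N(x) = \mathbf{1}_A(x), \qquad
V_k(x) = \mathbf{1}_A(x)\, \sup_{u \in U} \int_A V_{k+1}(y)\, Q(dy \mid x, u), \qquad k = N-1, \dots, 0,
\]
so that $V_0(x_0)$ is the maximal safety probability from $x_0$, and selecting a maximizing input at each step yields a maximally safe Markov policy.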
---
paper_title: Bisimulation for Labelled Markov Processes
paper_content:
In this paper we introduce a new class of labelled transition systems - Labelled Markov Processes - and define bisimulation for them. Labelled Markov processes are probabilistic labelled transition systems where the state space is not necessarily discrete; it could be the reals, for example. We assume that it is a Polish space (the underlying topological space for a complete separable metric space). The mathematical theory of such systems is completely new from the point of view of the extant literature on probabilistic process algebra; of course, it uses classical ideas from measure theory and Markov process theory. The notion of bisimulation builds on the ideas of Larsen and Skou and of Joyal, Nielsen and Winskel. The main result that we prove is that a notion of bisimulation for Markov processes on Polish spaces, which extends the Larsen-Skou definition for discrete systems, is indeed an equivalence relation. This turns out to be a rather hard mathematical result which, as far as we know, embodies a new result in pure probability theory. This work heavily uses continuous mathematics which is becoming an important part of work on hybrid systems.
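For concreteness, the transfer condition behind this notion can be paraphrased as follows: an equivalence relation $R$ on the state space is a probabilistic bisimulation if, whenever $s\,R\,t$, then for every label $a$ and every measurable $R$-closed set $C$ (a measurable union of equivalence classes),
\[
\tau_a(s, C) \;=\; \tau_a(t, C),
\]
where $\tau_a(x, \cdot)$ denotes the (sub-)probability transition kernel for label $a$; two states are bisimilar if some such relation relates them.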
---
paper_title: Weak Bisimulation for Fully Probabilistic Processes
paper_content:
Bisimulations that abstract from internal computation have proven to be useful for verification of compositionally defined transition systems. In the literature of probabilistic extensions of such transition systems, similar bisimulations are rare. In this paper, we introduce weak and branching bisimulation for fully probabilistic systems, transition systems where nondeterministic branching is replaced by probabilistic branching. In contrast to the nondeterministic case, both relations coincide. We give an algorithm to decide weak (and branching) bisimulation with a time complexity cubic in the number of states of the fully probabilistic system. This meets the worst case complexity for deciding branching bisimulation in the nondeterministic case. In addition, the relation is shown to be a congruence with respect to the operators of PLSCCS, a lazy synchronous probabilistic variant of CCS. We illustrate that due to these properties, weak bisimulation provides all the crucial ingredients for mechanised compositional verification of probabilistic transition systems.
---
paper_title: Bisimulation for probabilistic transition systems: a coalgebraic approach
paper_content:
The notion of bisimulation as proposed by Larsen and Skou for discrete probabilistic transition systems is shown to coincide with a coalgebraic definition in the sense of Aczel and Mendler in terms of a set functor, which associates to a set its collection of simple probability distributions. This coalgebraic formulation makes it possible to generalize the concepts of discrete probabilistic transition system and probabilistic bisimulation to a continuous setting involving Borel probability measures. A functor ℳ1 is introduced that yields for a metric space its collection of Borel probability measures. Under reasonable conditions, this functor exactly captures generalized probabilistic bisimilarity. Application of the final coalgebra paradigm to a functor based on ℳ1 then yields an internally fully abstract semantical domain with respect to probabilistic bisimulation, which is therefore well suited for the interpretation of probabilistic specification and stochastic programming concepts.
---
paper_title: Correctness Issues of Symbolic Bisimulation Computation for Markov Chains
paper_content:
Bisimulation reduction is a classical means to fight the infamous state space explosion problem, which limits the applicability of automated methods for verification like model checking. A signature-based method, originally developed by Blom and Orzan for labeled transition systems and adapted for Markov chains by Derisavi, has proved to be very efficient. It is possible to implement it symbolically using binary decision diagrams such that it is able to handle very large state spaces efficiently. We will show, however, that for Markov chains this algorithm suffers from numerical instabilities, which often result in too large quotient systems. We will present and experimentally evaluate two different approaches to avoid these problems: first the usage of rational arithmetic, and second an approach not only to represent the system structure but also the transition rates symbolically. In addition, this allows us to modify their actual values after the quotient computation.
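A simplified, explicit-state illustration of signature-based refinement for a finite labelled Markov chain is sketched below; the paper's contribution concerns symbolic BDD-based implementations, and the use of exact rational arithmetic in the sketch corresponds to the first of the two remedies it proposes for floating-point instabilities.

from collections import defaultdict
from fractions import Fraction

def bisimulation_quotient(states, labels, P, state_label):
    """Signature-based partition refinement for strong probabilistic
    bisimulation. P[(s, a)] maps successor states to Fraction probabilities;
    state_label gives the atomic labelling used for the initial partition.
    Exact rationals avoid spurious blocks caused by rounding errors."""
    blocks = defaultdict(set)
    for s in states:                                   # initial partition by label
        blocks[state_label[s]].add(s)
    partition = list(blocks.values())
    while True:
        block_of = {s: i for i, blk in enumerate(partition) for s in blk}
        refined = []
        for blk in partition:
            groups = defaultdict(set)
            for s in blk:
                sig = []
                for a in labels:
                    mass = defaultdict(Fraction)
                    for t, p in P.get((s, a), {}).items():
                        mass[block_of[t]] += p         # probability mass into each block
                    sig.append((a, tuple(sorted(mass.items()))))
                groups[tuple(sig)].add(s)
            refined.extend(groups.values())
        if len(refined) == len(partition):             # fixed point reached
            return refined
        partition = refined

# Tiny example: s0 and s1 are bisimilar, s2 is distinguished by its label.
P = {('s0', 'a'): {'s0': Fraction(1, 2), 's2': Fraction(1, 2)},
     ('s1', 'a'): {'s1': Fraction(1, 2), 's2': Fraction(1, 2)},
     ('s2', 'a'): {'s2': Fraction(1)}}
labelling = {'s0': 'wait', 's1': 'wait', 's2': 'goal'}
print(bisimulation_quotient(['s0', 's1', 's2'], ['a'], P, labelling))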
---
paper_title: Approximate Analysis of Probabilistic Processes: Logic, Simulation and Games
paper_content:
We tackle the problem of non-robustness of simulation and bisimulation when dealing with probabilistic processes. It is important to ignore tiny deviations in probabilities because these often come from experiments or estimations. A few approaches have been proposed to treat this issue, for example metrics to quantify the non-bisimilarity (or closeness) of processes. Relaxing the definition of simulation and bisimulation is another avenue which we follow. We define a new semantics for a known simple logic for probabilistic processes and show that it characterises a notion of ε-simulation. We also define two-player games that correspond to these notions: the existence of a winning strategy for one of the players determines ε-(bi)simulation. Of course, for all the notions defined, letting ε = 0 gives back the usual notions of logical equivalence, simulation and bisimulation. However, in contrast to what happens in fully probabilistic systems when ε = 0, two-way ε-simulation for ε > 0 is not equal to ε-bisimulation. Next we give a polynomial time algorithm to compute a naturally derived metric: the distance between states s and t is defined as the smallest ε such that s and t are ε-equivalent. This is the first polynomial algorithm for a non-discounted metric. Finally we show that most of these notions can be extended to deal with probabilistic systems that allow non-determinism as well.
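The flavour of the relaxation can be conveyed by the transfer condition for an ε-simulation on a finite system (paraphrased): $s\,R\,t$ requires that for every label $a$ and every set of states $X$,
\[
\tau_a(s, X) \;\le\; \tau_a\big(t, R(X)\big) + \varepsilon,
\]
where $R(X)$ is the set of states related to some state of $X$; ε-bisimulation imposes the analogous condition in both directions, and setting $\varepsilon = 0$ recovers the classical notions, as noted above.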
---
paper_title: Principles of Model Checking
paper_content:
Our growing dependence on increasingly complex computer and software systems necessitates the development of formalisms, techniques, and tools for assessing functional properties of these systems. One such technique that has emerged in the last twenty years is model checking, which systematically (and automatically) checks whether a model of a given system satisfies a desired property such as deadlock freedom, invariants, and request-response properties. This automated technique for verification and debugging has developed into a mature and widely used approach with many applications. Principles of Model Checking offers a comprehensive introduction to model checking that is not only a text suitable for classroom use but also a valuable reference for researchers and practitioners in the field. The book begins with the basic principles for modeling concurrent and communicating systems, introduces different classes of properties (including safety and liveness), presents the notion of fairness, and provides automata-based algorithms for these properties. It introduces the temporal logics LTL and CTL, compares them, and covers algorithms for verifying these logics, discussing real-time systems as well as systems subject to random phenomena. Separate chapters treat such efficiency-improving techniques as abstraction and symbolic manipulation. The book includes an extensive set of examples (most of which run through several chapters) and a complete set of basic results accompanied by detailed proofs. Each chapter concludes with a summary, bibliographic notes, and an extensive list of exercises of both practical and theoretical nature.
---
paper_title: Labeled Markov processes: stronger and faster approximations
paper_content:
This paper proposes a measure-theoretic reconstruction of the approximation schemes developed for labeled Markov processes: approximants are seen as quotients with respect to sets of temporal properties expressed in a simple logic. This gives the possibility of customizing approximants with respect to properties of interest and is thus an important step towards using automated techniques intended for finite state systems, e.g. model checking, for continuous state systems. The measure-theoretic apparatus meshes well with an enriched logic, extended with a greatest fix-point, and gives means to define approximants which retain cyclic properties of their target.
---
paper_title: Metrics for Markov Decision Processes with Infinite State Spaces
paper_content:
We present metrics for measuring state similarity in Markov decision processes (MDPs) with infinitely many states, including MDPs with continuous state spaces. Such metrics provide a stable quantitative analogue of the notion of bisimulation for MDPs, and are suitable for use in MDP approximation. We show that the optimal value function associated with a discounted infinite horizon planning task varies continuously with respect to our metric distances.
---
paper_title: Probabilistic bisimulations of switching and resetting diffusions
paper_content:
This contribution presents sufficient conditions for the existence of probabilistic bisimulations between two diffusion processes that are additionally endowed with switching and resetting behaviors. A probabilistic bisimulation between two stochastic processes is defined by means of a bisimulation function, which induces an approximation metric over the distance between the two processes. The validity of the proposed sufficient conditions results in the explicit characterization of one such bisimulation function. The conditions depend on contractivity properties of the two stochastic processes.
---
paper_title: Lyapunov-like techniques for stochastic stability
paper_content:
The purpose of this paper is to give a survey of the results proved in Florchinger (1993) concerning the stabilizability problem for control stochastic nonlinear systems driven by a Wiener process. Sufficient conditions for the existence of stabilizing feedback laws which are smooth, except possibly at the equilibrium point of the system, are provided by means of stochastic Lyapunov-like techniques. The notion of dynamic asymptotic stability in probability of control stochastic differential systems is introduced, and the stabilization by means of dynamic controllers is studied.
---
paper_title: Approximately Bisimilar Symbolic Models for Incrementally Stable Switched Systems
paper_content:
Switched systems constitute an important modeling paradigm faithfully describing many engineering systems in which software interacts with the physical world. Despite considerable progress on stability and stabilization of switched systems, the constant evolution of technology demands that we make similar progress with respect to different, and perhaps more complex, objectives. This paper describes one particular approach to address these different objectives based on the construction of approximately equivalent (bisimilar) symbolic models for switched systems. The main contribution of this paper consists in showing that under standard assumptions ensuring incremental stability of a switched system (i.e., existence of a common Lyapunov function, or multiple Lyapunov functions with dwell time), it is possible to construct a finite symbolic model that is approximately bisimilar to the original switched system with a precision that can be chosen a priori. To support the computational merits of the proposed approach, we use symbolic models to synthesize controllers for two examples of switched systems, including the boost dc-dc converter.
---
paper_title: Approximately bisimilar symbolic models for nonlinear control systems
paper_content:
Control systems are usually modeled by differential equations describing how physical phenomena can be influenced by certain control parameters or inputs. Although these models are very powerful when dealing with physical phenomena, they are less suitable to describe software and hardware interfacing the physical world. For this reason there is a growing interest in describing control systems through symbolic models that are abstract descriptions of the continuous dynamics, where each "symbol" corresponds to an "aggregate" of states in the continuous model. Since these symbolic models are of the same nature of the models used in computer science to describe software and hardware, they provide a unified language to study problems of control in which software and hardware interact with the physical world. Furthermore the use of symbolic models enables one to leverage techniques from supervisory control and algorithms from game theory for controller synthesis purposes. In this paper we show that every incrementally globally asymptotically stable nonlinear control system is approximately equivalent (bisimilar) to a symbolic model. The approximation error is a design parameter in the construction of the symbolic model and can be rendered as small as desired. Furthermore if the state space of the control system is bounded the obtained symbolic model is finite. For digital control systems, and under the stronger assumption of incremental input-to-state stability, symbolic models can be constructed through a suitable quantization of the inputs.
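A minimal sketch of the construction for a concrete system is given below; the dynamics, sampling time, quantization and input set are illustrative choices, not taken from the paper. States of the symbolic model are grid points, inputs form a finite set, and each transition goes to the grid point nearest to the sampled successor, which is justified when the continuous dynamics is incrementally stable.

import numpy as np

tau, eta = 0.5, 0.05                       # sampling time and state quantization
X = np.arange(-2.0, 2.0 + eta, eta)        # symbolic states: a finite grid
U = [-1.0, 0.0, 1.0]                       # finite set of input symbols

def flow(x, u):
    # exact sampled solution of the contracting system dx/dt = -x + u
    return u + (x - u) * np.exp(-tau)

def nearest(x):
    return int(np.argmin(np.abs(X - x)))   # quantize to the closest grid point

# Symbolic transition map: (state index, input) -> state index.
transitions = {(i, u): nearest(flow(X[i], u)) for i in range(len(X)) for u in U}

# Example run of the abstraction from x0 = 1.7 under the constant input u = -1.
i = nearest(1.7)
for _ in range(5):
    i = transitions[(i, -1.0)]
print("symbolic trajectory ends near x =", round(float(X[i]), 3))

Refining eta and tau tightens the precision of the approximate bisimulation between the sampled system and its symbolic model, mirroring the role of the design parameter mentioned in the abstract.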
---
paper_title: A Contraction Theory Approach to Stochastic Incremental Stability
paper_content:
We investigate the incremental stability properties of Itô stochastic dynamical systems. Specifically, we derive a stochastic version of nonlinear contraction theory that provides a bound on the mean square distance between any two trajectories of a stochastically contracting system. This bound can be expressed as a function of the noise intensity and the contraction rate of the noise-free system. We illustrate these results in the contexts of stochastic nonlinear observer design and stochastic synchronization.
---
paper_title: Bisimilar linear systems
paper_content:
The notion of bisimulation in theoretical computer science is one of the main complexity reduction methods for the analysis and synthesis of labeled transition systems. Bisimulations are special quotients of the state space that preserve many important properties expressible in temporal logics, and, in particular, reachability. In this paper, the framework of bisimilar transition systems is applied to various transition systems that are generated by linear control systems. Given a discrete-time or continuous-time linear system, and a finite observation map, we characterize linear quotient maps that result in quotient transition systems that are bisimilar to the original system. Interestingly, the characterizations for discrete-time systems are more restrictive than for continuous-time systems, due to the existence of an atomic time step. We show that computing the coarsest bisimulation, which results in maximum complexity reduction, corresponds to computing the maximal controlled or reachability invariant subspace inside the kernel of the observations map. These results establish strong connections between complexity reduction concepts in control theory and computer science.
---
paper_title: Adaptive Gridding for Abstraction and Verification of Stochastic Hybrid Systems
paper_content:
This work is concerned with the generation of finite abstractions of general Stochastic Hybrid Systems, to be employed in the formal verification of probabilistic properties by means of model checkers. The contribution employs an abstraction procedure based on a partitioning of the state space, and puts forward a novel adaptive gridding algorithm that is expected to conform to the underlying dynamics of the model and thus at least to mitigate the curse of dimensionality related to the partitioning procedure. With focus on the study of probabilistic safety over a finite horizon, the proposed adaptive algorithm is first benchmarked against a uniform gridding approach from the literature, and finally tested on a known applicative case study.
---
paper_title: On Choosing and Bounding Probability Metrics
paper_content:
When studying convergence of measures, an important issue is the choice of probability metric. In this review, we provide a summary and some new results concerning bounds among ten important probability metrics/distances that are used by statisticians and probabilists. We focus on these metrics because they are either well-known, commonly used, or admit practical bounding techniques. We summarize these relationships in a handy reference diagram, and also give examples to show how rates of convergence can depend on the metric chosen.
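As a small illustration of two of the metrics surveyed, the following snippet computes the total variation and Wasserstein-1 distances between two discrete distributions on a common finite support (the support and weights are illustrative).

import numpy as np
from scipy.stats import wasserstein_distance

support = np.array([0.0, 1.0, 2.0, 3.0])
p = np.array([0.4, 0.3, 0.2, 0.1])         # two probability vectors
q = np.array([0.1, 0.2, 0.3, 0.4])

tv = 0.5 * np.abs(p - q).sum()             # total variation distance
w1 = wasserstein_distance(support, support, u_weights=p, v_weights=q)

print(f"total variation = {tv:.3f}, Wasserstein-1 = {w1:.3f}")

Which of the two is the right yardstick depends on whether closeness of probabilities on the same events (total variation) or closeness of the underlying outcomes (Wasserstein) matters for the convergence statement at hand, which is the kind of choice the survey's bounding diagram is meant to support.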
---
paper_title: A theory of timed automata
paper_content:
Alur, R. and D.L. Dill, A theory of timed automata, Theoretical Computer Science 126 (1994) 183-235. We propose timed (finite) automata to model the behavior of real-time systems over time. Our definition provides a simple, and yet powerful, way to annotate state-transition graphs with timing constraints using finitely many real-valued clocks. A timed automaton accepts timed words: infinite sequences in which a real-valued time of occurrence is associated with each symbol. We study timed automata from the perspective of formal language theory: we consider closure properties, decision problems, and subclasses. We consider both nondeterministic and deterministic transition structures, and both Büchi and Muller acceptance conditions. We show that nondeterministic timed automata are closed under union and intersection, but not under complementation, whereas deterministic timed Muller automata are closed under all Boolean operations. The main construction of the paper is a (PSPACE) algorithm for checking the emptiness of the language of a (nondeterministic) timed automaton. We also prove that the universality problem and the language inclusion problem are solvable only for the deterministic automata: both problems are undecidable (Π¹₁-hard) in the nondeterministic case and PSPACE-complete in the deterministic case. Finally, we discuss the application of this theory to automatic verification of real-time requirements of finite-state systems.
---
paper_title: Uncertain convex programs: Randomized solutions and confidence levels
paper_content:
Many engineering problems can be cast as optimization problems subject to convex constraints that are parameterized by an uncertainty or 'instance' parameter. Two main approaches are generally available to tackle constrained optimization problems in the presence of uncertainty: robust optimization and chance-constrained optimization. Robust optimization is a deterministic paradigm where one seeks a solution which simultaneously satisfies all possible constraint instances. In chance-constrained optimization a probability distribution is instead assumed on the uncertain parameters, and the constraints are enforced up to a pre-specified level of probability. Unfortunately however, both approaches lead to computationally intractable problem formulations.
---
paper_title: Approximate Model Checking of Stochastic Hybrid Systems
paper_content:
A method for approximate model checking of stochastic hybrid systems with provable approximation guarantees is proposed. We focus on the probabilistic invariance problem for discrete time stochastic hybrid systems and propose a two-step scheme. The stochastic hybrid system is first approximated by a finite state Markov chain. The approximating chain is then model checked for probabilistic invariance. Under certain regularity conditions on the transition and reset kernels governing the dynamics of the stochastic hybrid system, the invariance probability computed using the approximating Markov chain is shown to converge to the invariance probability of the original stochastic hybrid system, as the grid used in the approximation gets finer. A bound on the convergence rate is also provided. The performance of the two-step approximate model checking procedure is assessed on a case study of a multi-room heating system.
---
paper_title: Probabilistic reachability and safety for controlled discrete time stochastic hybrid systems
paper_content:
In this work, probabilistic reachability over a finite horizon is investigated for a class of discrete time stochastic hybrid systems with control inputs. A suitable embedding of the reachability problem in a stochastic control framework reveals that it is amenable to two complementary interpretations, leading to dual algorithms for reachability computations. In particular, the set of initial conditions providing a certain probabilistic guarantee that the system will keep evolving within a desired 'safe' region of the state space is characterized in terms of a value function, and 'maximally safe' Markov policies are determined via dynamic programming. These results are of interest not only for safety analysis and design, but also for solving those regulation and stabilization problems that can be reinterpreted as safety problems. The temperature regulation problem presented in the paper as a case study is one such case.
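For reference, the finite-horizon probabilistic safety computation sketched in this abstract can be written as a backward recursion of the following form (the notation is assumed here: A denotes the safe set, U the input space, T the controlled transition kernel and N the horizon):
```latex
V_N(x) = \mathbf{1}_A(x), \qquad
V_k(x) = \mathbf{1}_A(x)\,\max_{u \in U} \int_A V_{k+1}(y)\, T(\mathrm{d}y \mid x, u), \qquad
p^{\mathrm{safe}}(x_0) = V_0(x_0),
```
with a maximally safe Markov policy obtained by selecting, at each step and state, an input attaining the maximum.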
---
paper_title: The Scenario Approach for Systems and Control Design
paper_content:
The ‘scenario approach’ is an innovative technology that has been introduced to solve convex optimization problems with an infinite number of constraints, a class of problems which often occurs when dealing with uncertainty. This technology relies on random sampling of constraints, and provides a powerful means for solving a variety of design problems in systems and control. The objective of this paper is to illustrate the scenario approach at a tutorial level, focusing mainly on algorithmic aspects. Its versatility and virtues will be pointed out through a number of examples in model reduction, robust and optimal control.
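A minimal sketch of the scenario idea on a toy uncertain linear program: the robust constraint is replaced by finitely many randomly sampled instances and the resulting ordinary LP is solved; the problem data, perturbation model and sample size below are invented for illustration.
```python
# Scenario approximation of: minimize -x1 - x2  s.t.  (1+d1) x1 + (1+d2) x2 <= 1
# for all perturbations d, x >= 0; d is sampled instead of being treated robustly.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n_scenarios = 200
deltas = 0.2 * rng.standard_normal((n_scenarios, 2))
A_ub = 1.0 + deltas           # one sampled constraint row per scenario
b_ub = np.ones(n_scenarios)

res = linprog(c=[-1.0, -1.0], A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print("scenario solution:", res.x)
```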
---
paper_title: Approximating stochastic biochemical processes with Wasserstein pseudometrics.
paper_content:
Modelling stochastic processes inside the cell is difficult due to the size and complexity of the processes being investigated. As a result, new approaches are needed to address the problems of model reduction, parameter estimation, model comparison and model invalidation. Here, the authors propose addressing these problems by using Wasserstein pseudometrics to quantify the differences between processes. The method the authors propose is applicable to any bounded continuous-time stochastic process and pseudometrics between processes are defined only in terms of the available outputs. Algorithms for approximating Wasserstein pseudometrics are developed from experimental or simulation data and demonstrate how to optimise parameter values to minimise the pseudometrics. The approach is illustrated with studies of a stochastic toggle switch and of stochastic gene expression in E. coli.
---
paper_title: Labeled Markov processes: stronger and faster approximations
paper_content:
This paper proposes a measure-theoretic reconstruction of the approximation schemes developed for labeled Markov processes: approximants are seen as quotients with respect to sets of temporal properties expressed in a simple logic. This gives the possibility of customizing approximants with respect to properties of interest and is thus an important step towards using automated techniques intended for finite state systems, e.g. model checking, for continuous state systems. The measure-theoretic apparatus meshes well with an enriched logic, extended with a greatest fix-point, and gives means to define approximants which retain cyclic properties of their target.
---
|
Title: Approximation Metrics Based on Probabilistic Bisimulations for General State-Space Markov Processes: A Survey
Section 1: Motivations and Objective
Description 1: Introduce the motivations for studying approximation metrics based on probabilistic bisimulations, and outline the objectives and structure of the survey.
Section 2: Review of Literature Background
Description 2: Provide a comprehensive coverage of existing work on simulations, bisimulations, and approximate versions thereof, specifically focusing on probabilistic models.
Section 3: Markov Processes over General State Spaces
Description 3: Introduce and discuss the models under study, particularly probabilistic processes defined over continuous state spaces.
Section 4: Exact and Approximate Probabilistic Bisimulations: Relations, Logics, and Categories
Description 4: Present the concept of exact and approximate (strong) probabilistic bisimulation, and provide related characterizations based on algebra, logic, and category theory.
Section 5: Approximate Bisimulations via Probabilistic Bisimulation Functions
Description 5: Define and characterize the notion of probabilistic bisimulation function, leading to the introduction of approximate metrics between processes' trajectories.
Section 6: Characterization of Stochastic Bisimulation Function as solution of a Probabilistic Reachability Problem
Description 6: Look at semantic-based computations of distance metrics between comparable probabilistic processes by solving probabilistic reachability problems.
Section 7: Discussion and Conclusions
Description 7: Discuss the surveyed techniques, highlight their differences, and look forward at future research directions and practical applications.
|
Survey on Sparse Coded Features for Content Based Face Image Retrieval
| 7 |
---
paper_title: Robust Face Recognition via Sparse Representation
paper_content:
We consider the problem of automatically recognizing human faces from frontal views with varying expression and illumination, as well as occlusion and disguise. We cast the recognition problem as one of classifying among multiple linear regression models and argue that new theory from sparse signal representation offers the key to addressing this problem. Based on a sparse representation computed by ℓ1-minimization, we propose a general classification algorithm for (image-based) object recognition. This new framework provides new insights into two crucial issues in face recognition: feature extraction and robustness to occlusion. For feature extraction, we show that if sparsity in the recognition problem is properly harnessed, the choice of features is no longer critical. What is critical, however, is whether the number of features is sufficiently large and whether the sparse representation is correctly computed. Unconventional features such as downsampled images and random projections perform just as well as conventional features such as eigenfaces and Laplacianfaces, as long as the dimension of the feature space surpasses a certain threshold, predicted by the theory of sparse representation. This framework can handle errors due to occlusion and corruption uniformly by exploiting the fact that these errors are often sparse with respect to the standard (pixel) basis. The theory of sparse representation helps predict how much occlusion the recognition algorithm can handle and how to choose the training images to maximize robustness to occlusion. We conduct extensive experiments on publicly available databases to verify the efficacy of the proposed algorithm and corroborate the above claims.
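A minimal sketch of this classification scheme, using an ℓ1-regularized least-squares solver (Lasso) as a stand-in for the ℓ1-minimization step and random vectors in place of real face features:
```python
# Sparse-representation-based classification on synthetic data (illustrative only).
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n_classes, per_class, dim = 5, 20, 120
labels = np.repeat(np.arange(n_classes), per_class)
A = rng.standard_normal((dim, n_classes * per_class))   # one column per training image
A /= np.linalg.norm(A, axis=0)

y = A[:, 3] + 0.01 * rng.standard_normal(dim)            # test sample built from a class-0 column

# Sparse code of the test sample over the training dictionary.
x = Lasso(alpha=0.01, fit_intercept=False, max_iter=10000).fit(A, y).coef_

# Assign the class whose training columns best reconstruct the test sample.
residuals = [np.linalg.norm(y - A[:, labels == c] @ x[labels == c]) for c in range(n_classes)]
print("predicted class:", int(np.argmin(residuals)))
```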
---
paper_title: Unsupervised auxiliary visual words discovery for large-scale image object retrieval
paper_content:
Image object retrieval–locating image occurrences of specific objects in large-scale image collections–is essential for manipulating the sheer amount of photos. Current solutions, mostly based on bags-of-words model, suffer from low recall rate and do not resist noises caused by the changes in lighting, viewpoints, and even occlusions. We propose to augment each image with auxiliary visual words (AVWs), semantically relevant to the search targets. The AVWs are automatically discovered by feature propagation and selection in textual and visual image graphs in an unsupervised manner. We investigate variant optimization methods for effectiveness and scalability in large-scale image collections. Experimenting in the large-scale consumer photos, we found that the proposed method significantly improves the traditional bag-of-words (111% relatively). Meanwhile, the selection process can also notably reduce the number of features (to 1.4%) and can further facilitate indexing in large-scale image object retrieval.
---
paper_title: Linear spatial pyramid matching using sparse coding for image classification
paper_content:
Recently SVMs using spatial pyramid matching (SPM) kernel have been highly successful in image classification. Despite its popularity, these nonlinear SVMs have a complexity O(n^2 ~ n^3) in training and O(n) in testing, where n is the training size, implying that it is nontrivial to scale up the algorithms to handle more than thousands of training images. In this paper we develop an extension of the SPM method, by generalizing vector quantization to sparse coding followed by multi-scale spatial max pooling, and propose a linear SPM kernel based on SIFT sparse codes. This new approach remarkably reduces the complexity of SVMs to O(n) in training and a constant in testing. In a number of image categorization experiments, we find that, in terms of classification accuracy, the suggested linear SPM based on sparse coding of SIFT descriptors always significantly outperforms the linear SPM kernel on histograms, and is even better than the nonlinear SPM kernels, leading to state-of-the-art performance on several benchmarks by using a single type of descriptors.
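The pipeline can be sketched as follows, with random stand-ins for the SIFT descriptors and the learned codebook and a single pooling region in place of the full spatial pyramid; the resulting pooled vector would then be fed to a linear SVM.
```python
# Sparse coding of local descriptors followed by max pooling (one pyramid level).
import numpy as np
from sklearn.decomposition import sparse_encode

rng = np.random.default_rng(0)
n_descriptors, descr_dim, codebook_size = 200, 128, 256
descriptors = rng.standard_normal((n_descriptors, descr_dim))   # stand-in for SIFT vectors
codebook = rng.standard_normal((codebook_size, descr_dim))
codebook /= np.linalg.norm(codebook, axis=1, keepdims=True)

codes = sparse_encode(descriptors, codebook, algorithm="lasso_lars", alpha=0.15)
image_feature = np.abs(codes).max(axis=0)    # max pooling over all descriptors

print("pooled feature dimension:", image_feature.shape[0])      # equals codebook_size
```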
---
|
Title: Survey on Sparse Coded Features for Content Based Face Image Retrieval
Section 1: INTRODUCTION
Description 1: Provide an overview of the problem, the scope of the survey, and the significance of using sparse-coded features for face image retrieval.
Section 2: FACE RECOGNITION VIA SPARSE REPRESENTATION
Description 2: Discuss how sparse representation addresses the challenges in face recognition and improves retrieval systems.
Section 3: SPARSE CODING FOR IMAGE CLASSIFICATION AND RETRIEVAL
Description 3: Explain the use of sparse coding in image classification and retrieval, including methods and algorithms involved.
Section 4: SPARSE CODING WITH IDENTITY CONSTRAINTS
Description 4: Explore the application of sparse coding with identity constraints to enhance face image retrieval systems.
Section 5: ATTRIBUTE ENHANCED SPARSE CODEWORDS
Description 5: Detail how high-level human facial attributes can enhance the effectiveness of sparse-coded representations in face image retrieval.
Section 6: SPATIAL PYRAMID MATCHING USING SPARSE CODING
Description 6: Examine the use of Spatial Pyramid Matching (SPM) combined with sparse coding for improved image classification performance.
Section 7: CONCLUSION
Description 7: Summarize the main findings, the overall effectiveness of sparse-coded features in face image retrieval, and potential future directions for research.
|
Critical Factors and Guidelines for 3D Surveying and Modelling in Cultural Heritage
| 9 |
---
paper_title: Heritage Recording and 3D Modeling with Photogrammetry and 3D Scanning
paper_content:
The importance of landscape and heritage recording and documentation with optical remote sensing sensors is well recognized at international level. The continuous development of new sensors, data capture methodologies and multi-resolution 3D representations, contributes significantly to the digital 3D documentation, mapping, conservation and representation of landscapes and heritages and to the growth of research in this field. This article reviews the actual optical 3D measurement sensors and 3D modeling techniques, with their limitations and potentialities, requirements and specifications. Examples of 3D surveying and modeling of heritage sites and objects are also shown throughout the paper.
---
paper_title: Image-based 3D Modelling: A Review
paper_content:
In this paper the main problems and the available solutions are addressed for the generation of 3D models from terrestrial images. Close range photogrammetry has dealt for many years with manual or automatic image measurements for precise 3D modelling. Nowadays 3D scanners are also becoming a standard source for input data in many application areas, but image-based modelling still remains the most complete, economical, portable, flexible and widely used approach. In this paper the full pipeline is presented for 3D modelling from terrestrial image data, considering the different approaches and analysing all the steps involved.
---
paper_title: Building reconstruction using manhattan-world grammars
paper_content:
We present a passive computer vision method that exploits existing mapping and navigation databases in order to automatically create 3D building models. Our method defines a grammar for representing changes in building geometry that approximately follow the Manhattan-world assumption which states there is a predominance of three mutually orthogonal directions in the scene. By using multiple calibrated aerial images, we extend previous Manhattan-world methods to robustly produce a single, coherent, complete geometric model of a building with partial textures. Our method uses an optimization to discover a 3D building geometry that produces the same set of facade orientation changes observed in the captured images. We have applied our method to several real-world buildings and have analyzed our approach using synthetic buildings.
---
paper_title: LiDAR or Photogrammetry? Integration is the answer
paper_content:
In the last few years, LiDAR and image-matching techniques have been employed in many application fields because of their quickness in point cloud generation. Nevertheless, the use of only one of these techniques is not sufficient to extract automatically reliable information. In this paper, an integration approach of these techniques is proposed in order to overcome their individual weakness. The goal of this work is the automated extraction of manmade object outlines in order to reduce the human intervention during the data processing simplifying the work of the users. This approach has been implemented on both terrestrial and aerial applications, showing the reliability and the potentiality of this kind of integration.
---
paper_title: Airborne and terrestrial laser scanning
paper_content:
Written by a team of international experts, this book provides a comprehensive overview of the major applications of airborne and terrestrial laser scanning. The book focuses on principles and methods and presents an integrated treatment of airborne and terrestrial laser scanning technology. Laser scanning is a relatively young 3D measurement technique offering much potential in the acquisition of precise and reliable 3D geodata and object geometries. However, there are many terrestrial and airborne scanners on the market, accompanied by numerous software packages that handle data acquisition, processing and visualization, yet existing knowledge is fragmented over a wide variety of publications, whether printed or electronic. This book brings together the various facets of the subject in a coherent text that will be relevant for advanced students, academics and practitioners. After consideration of the technology and processing methods, the book turns to applications. The primary use thus far has been the extraction of digital terrain models from airborne laser scanning data, but many other applications are considered including engineering, forestry, cultural heritage, extraction of 3D building models and mobile mapping.
---
paper_title: Best practices for the 3D documentation of the Grotta dei Cervi of Porto Badisco, Italy
paper_content:
The Grotta dei Cervi is a Neolithic cave where human presence has left many unique pictographs on the walls of many of its chambers. It was closed for conservation reasons soon after its discovery in 1970. It is for these reasons that a 3D documentation was started. Two sets of high resolution and detailed three-dimensional (3D) acquisitions were captured in 2005 and 2009 respectively, along with two-dimensional (2D) images. From this information a textured 3D model was produced for most of the 300-m long central corridor. Carbon dating of the guano used for the pictographs and environmental monitoring (Temperature, Relative humidity, and Radon) completed the project. This paper presents this project, some results obtained up to now, the best practice that has emerged from this work and a description of the processing pipeline that deals with more than 27 billion 3D coordinates.
---
paper_title: Basic theory on surface measurement uncertainty of 3D imaging systems
paper_content:
Three-dimensional (3D) imaging systems are now widely available, but standards, best practices and comparative data have started to appear only in the last 10 years or so. The need for standards is mainly driven by users and product developers who are concerned with 1) the applicability of a given system to the task at hand (fit-for-purpose), 2) the ability to fairly compare across instruments, 3) instrument warranty issues, 4) cost savings through 3D imaging. The evaluation and characterization of 3D imaging sensors and algorithms require the definition of metric performance. The performance of a system is usually evaluated using quality parameters such as spatial resolution/uncertainty/accuracy and complexity. These are quality parameters that most people in the field can agree upon. The difficulty arises from defining a common terminology and procedures to quantitatively evaluate them through metrology and standards definitions. This paper reviews the basic principles of 3D imaging systems. Optical triangulation and time delay (time-of-flight) measurement systems were selected to explain the theoretical and experimental strands adopted in this paper. The intrinsic uncertainty of optical distance measurement techniques, the parameterization of a 3D surface and systematic errors are covered. Experimental results on a number of scanners (Surphaser®, HDS6000®, Callidus CPW 8000®, ShapeGrabber® 102) support the theoretical descriptions.
---
paper_title: Close-range photogrammetry vs. 3D scanning: Comparing data capture, processing and model generation in the field and the lab
paper_content:
With the introduction of several affordable and/or free close-range photogrammetric software packages that require minimal processing labor over the past year, much discussion has developed regarding how useful such a cheap and easy 3D capture solution is for archaeologists. When is a point-and-shoot camera sufficient for documenting an excavation or ceramic vessel and when would an expensive laser scanner be required? How does the accuracy and repeatability of these newer close-range photogrammetry options compare with 3D scanning? This paper reviews a variety of non-metric, close-range photogrammetric data capture methods (e.g. calibrated vs. non-calibrated, wide-angle vs. normal lens, etc.) and assesses how each resulting data set performs through a comparison of at least three photogrammetric software packages including Eos Systems' PhotoModeler Scanner, AutoDesk's 123D Catch and AgiSoft's PhotoScan. The resulting data sets will be compared to scan data of the same objects as captured by a Leica C10 mid-range laser scanner and the Breuckmann SmartScan HE close-range scanner. Test data will include rock art and architecture from Knowth, Ireland; Defiance House Ruin, United States; architectural sculptures from El Zotz, Guatemala; Wadi Abu Subeira I, El Hosh, Egypt as well as controlled lab tests.
---
paper_title: Design and implement a reality-based 3D digitisation and modelling project
paper_content:
3D digitisation denotes the process of describing parts of our physical world through finite measurements and representations that can be processed and visualised with a computer system. Reality-based 3D digitisation is essential for the documentation, conservation and preservation of our Cultural Heritage. This article composes a critical review of the digitisation pipeline, ranging from sensor selection and planning to data acquisition, processing and visualisation.
---
paper_title: From deposit to point cloud: a study of low-cost computer vision approaches for the straightforward documentation of archaeological excavations
paper_content:
Stratigraphic archaeological excavations demand high-resolution documentation techniques for 3D recording. Today, this is typically accomplished using total stations or terrestrial laser scanners. This paper demonstrates the potential of another technique that is low-cost and easy to execute. It takes advantage of software using Structure from Motion (SfM) algorithms, which are known for their ability to reconstruct camera pose and three-dimensional scene geometry (rendered as a sparse point cloud) from a series of overlapping photographs captured by a camera moving around the scene. When complemented by stereo matching algorithms, detailed 3D surface models can be built from such relatively oriented photo collections in a fully automated way. The absolute orientation of the model can be derived by the manual measurement of control points. The approach is extremely flexible and appropriate to deal with a wide variety of imagery, because this computer vision approach can also work with imagery resulting from a randomly moving camera (i.e. uncontrolled conditions) and calibrated optics are not a prerequisite. For a few years, these algorithms have been embedded in several free and low-cost software packages. This paper will outline how such a program can be applied to map archaeological excavations in a very fast and uncomplicated way, using imagery shot with a standard compact digital camera (even if the images were not taken for this purpose). Archived data from previous excavations of VIAS-University of Vienna has been chosen and the derived digital surface models and orthophotos have been examined for their usefulness for archaeological applications. The absolute georeferencing of the resulting surface models was performed with the manual identification of fourteen control points. In order to express the positional accuracy of the generated 3D surface models, the NSSDA guidelines were applied. Simultaneously acquired terrestrial laser scanning data – which had been processed in our standard workflow – was used to independently check the results. The vertical accuracy of the surface models generated by SfM was found to be within 0.04 m at the 95 % confidence interval, whereas several visual assessments proved a very high horizontal positional accuracy as well.
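Because the abstract reports accuracy following the NSSDA guidelines, a small worked example of that computation may be useful: the 95% vertical accuracy is derived from check-point residuals as 1.96 times the RMSE; the residual values below are made up.
```python
# NSSDA-style vertical accuracy from check-point residuals (illustrative numbers, metres).
import numpy as np

residuals_z = np.array([0.012, -0.018, 0.025, -0.009, 0.016, -0.021, 0.011])
rmse_z = np.sqrt(np.mean(residuals_z ** 2))
accuracy_95 = 1.96 * rmse_z
print(f"RMSE_z = {rmse_z:.3f} m, vertical accuracy at 95% confidence = {accuracy_95:.3f} m")
```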
---
paper_title: Low-Cost and Open-Source Solutions for Automated Image Orientation – A Critical Overview
paper_content:
The recent developments in automated image processing for 3D reconstruction purposes have led to the diffusion of low-cost and open-source solutions which can be nowadays used by everyone to produce 3D models. The level of automation is so high that many solutions are black-boxes with poor repeatability and low reliability. The article presents an investigation of automated image orientation packages in order to clarify potentialities and performances when dealing with large and complex datasets.
---
|
Title: Critical Factors and Guidelines for 3D Surveying and Modelling in Cultural Heritage
Section 1: Introduction
Description 1: Introduce the importance of 3D digital documentation in archaeology and cultural heritage, including the different types of survey techniques and their advantages.
Section 2: The 3D Digitization Pipeline
Description 2: Describe the steps involved in the 3D digitization process, from site examination to data acquisition and processing, emphasizing the key steps often skipped by non-experts.
Section 3: Test Sites and Objects
Description 3: Provide an overview of the five case studies used to illustrate the practical application of the 3D digitization pipeline on different movable and unmovable heritage objects.
Section 4: The Byzantine walls of Aquileia
Description 4: Detail the specific case study of the Byzantine walls of Aquileia, including the project's scope, constraints, and selected 3D recording techniques.
Section 5: San Galgano abbey
Description 5: Describe the case study involving the San Galgano abbey, focusing on the goals, constraints, and the 3D digitization method used.
Section 6: The etruscan Bartoccini's tomb
Description 6: Explain the 3D digitization of the Etruscan Bartoccini's tomb, covering the project's objective, constraints, and the specific surveying and modeling techniques implemented.
Section 7: The bronze statue of the Archangel Gabriel
Description 7: Highlight the 3D digitization project of the Archangel Gabriel statue, including the intricacies and constraints faced during the process.
Section 8: Archaeological museum objects
Description 8: Discuss various archaeological objects from the museum in Milan, illustrating the differences in 3D digitization methods based on the type and material of objects.
Section 9: Conclusions
Description 9: Summarize the overall conclusions, emphasizing the importance of combining photogrammetry and laser scanning, the need for interdisciplinary collaboration, and discussing the current challenges and future perspectives in the field of 3D digitization.
|
Survey of Data Mining Approaches to User Modeling for Adaptive Hypermedia
| 9 |
---
paper_title: Predictive Statistical Models for User Modeling
paper_content:
The limitations of traditional knowledge representation methods for modeling complex human behaviour led to the investigation of statistical models. Predictive statistical models enable the anticipation of certain aspects of human behaviour, such as goals, actions and preferences. In this paper, we motivate the development of these models in the context of the user modeling enterprise. We then review the two main approaches to predictive statistical modeling, content-based and collaborative, and discuss the main techniques used to develop predictive statistical models. We also consider the evaluation requirements of these models in the user modeling context, and propose topics for future research.
---
paper_title: Adaptable and Adaptive Information Access for All Users, Including the Disabled and the Elderly
paper_content:
The tremendously increasing popularity of the World Wide Web indicates that hypermedia is going to be the leading online information medium for the years to come and will most likely be the standard gateway to the “information highway”. Visitors of web sites are generally heterogeneous and have different needs, and this trend is likely even to increase in the future. The aim of the AVANTI project is to cater hypermedia information to these different needs by adapting the content and the presentation of web pages to each individual user. The special needs of elderly and handicapped users are also considered to some extent. Our experience from this research is that adaptation and user modeling techniques that have so far almost exclusively focused on adapting interactive software systems to “normal” users also prove useful for adaptation to users with special needs.
---
paper_title: User as Student: Towards an Adaptive Interface for Advanced Web-Based Applications
paper_content:
This paper discusses the problems of developing adaptive self-explaining interfaces for advanced World-Wide Web (WWW) applications. Two kinds of adaptation are considered: incremental learning and incremental interfaces. The key problem for these kinds of adaptation is to decide which interface features should be explained or enabled next. We analyze possible ways to implement incremental learning and incremental interfaces on the WWW and suggest a “user as student” approach. With this approach, the order of learning or enabling of interface features is determined by adaptive sequencing, a popular intelligent tutoring technology, which is based on the pedagogical model of the interface and user knowledge about it. We describe in detail how this approach was implemented in the InterBook system, a shell for developing Web-based adaptive electronic textbooks.
---
paper_title: Data Mining Practical Machine Learning Tools And Techniques With Java Implementations
paper_content:
A practical introduction to machine learning and data mining techniques and to their Java implementations in the open-source Weka workbench, aimed at readers who want to apply these tools to real-world data.
---
paper_title: Generic user modeling systems
paper_content:
This chapter reviews research results in the field of Generic User Modeling Systems. It describes the purposes of such systems, their services within user-adaptive systems, and the different design requirements for research prototypes and commercial deployments. It discusses the architectures that have been explored so far, namely shell systems that form part of the application, central server systems that communicate with several applications, and possible future agent-based user modeling systems. Major implemented research prototypes and commercial systems are briefly described.
---
paper_title: Machine Learning for User Modeling
paper_content:
At first blush, user modeling appears to be a prime candidate for straightforward application of standard machine learning techniques. Observations of the user's behavior can provide training examples that a machine learning system can use to form a model designed to predict future actions. However, user modeling poses a number of challenges for machine learning that have hindered its application in user modeling, including: the need for large data sets; the need for labeled data; concept drift; and computational complexity. This paper examines each of these issues and reviews approaches to resolving them.
---
paper_title: Some methods for classification and analysis of multivariate observations
paper_content:
The main purpose of this paper is to describe a process for partitioning an N-dimensional population into k sets on the basis of a sample. The process, which is called 'k-means,' appears to give partitions which are reasonably efficient in the sense of within-class variance. That is, if p is the probability mass function for the population, S = {S_1, S_2, ..., S_k} is a partition of E^N, and u_i, i = 1, 2, ..., k, is the conditional mean of p over the set S_i, then $W^2(S) = \sum_{i=1}^{k} \int_{S_i} |z - u_i|^2 \, dp(z)$ tends to be low for the partitions S generated by the method. We say 'tends to be low,' primarily because of intuitive considerations, corroborated to some extent by mathematical analysis and practical computational experience. Also, the k-means procedure is easily programmed and is computationally economical, so that it is feasible to process very large samples on a digital computer. Possible applications include methods for similarity grouping, nonlinear prediction, approximating multivariate distributions, and nonparametric tests for independence among several variables. In addition to suggesting practical classification methods, the study of k-means has proved to be theoretically interesting. The k-means concept represents a generalization of the ordinary sample mean, and one is naturally led to study the pertinent asymptotic behavior, the object being to establish some sort of law of large numbers for the k-means. This problem is sufficiently interesting, in fact, for us to devote a good portion of this paper to it. The k-means are defined in section 2.1, and the main results which have been obtained on the asymptotic behavior are given there. The rest of section 2 is devoted to the proofs of these results. Section 3 describes several specific possible applications, and reports some preliminary results from computer experiments conducted to explore the possibilities inherent in the k-means idea. The extension to general metric spaces is indicated briefly in section 4. The original point of departure for the work described here was a series of problems in optimal classification (MacQueen [9]) which represented special
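A minimal implementation of the procedure and of the within-class variance criterion discussed above, on synthetic two-dimensional data with k = 3:
```python
# Lloyd's k-means iterations and the within-class variance W^2(S).
import numpy as np

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(m, 0.3, size=(50, 2)) for m in (0.0, 2.0, 4.0)])
k = 3
centers = X[rng.choice(len(X), k, replace=False)]

for _ in range(20):
    labels = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
    centers = np.array([X[labels == j].mean(axis=0) if np.any(labels == j) else centers[j]
                        for j in range(k)])

W2 = sum(((X[labels == j] - centers[j]) ** 2).sum() for j in range(k))
print("within-class variance W^2(S) =", round(float(W2), 3))
```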
---
paper_title: Low-complexity fuzzy relational clustering algorithms for web mining
paper_content:
This paper presents new algorithms - fuzzy c-medoids (FCMdd) and robust fuzzy c-medoids (RFCMdd) - for fuzzy clustering of relational data. The objective functions are based on selecting c representative objects (medoids) from the data set in such a way that the total fuzzy dissimilarity within each cluster is minimized. A comparison of FCMdd with the well-known relational fuzzy c-means algorithm (RFCM) shows that FCMdd is more efficient. We present several applications of these algorithms to Web mining, including Web document clustering, snippet clustering, and Web access log analysis.
---
paper_title: Effective personalization based on association rule discovery from web usage data
paper_content:
To engage visitors to a Web site at a very early stage (i.e., before registration or authentication), personalization tools must rely primarily on clickstream data captured in Web server logs. The lack of explicit user ratings as well as the sparse nature and the large volume of data in such a setting poses serious challenges to standard collaborative filtering techniques in terms of scalability and performance. Web usage mining techniques such as clustering that rely on offline pattern discovery from user transactions can be used to improve the scalability of collaborative filtering, however, this is often at the cost of reduced recommendation accuracy. In this paper we propose effective and scalable techniques for Web personalization based on association rule discovery from usage data. Through detailed experimental evaluation on real usage data, we show that the proposed methodology can achieve better recommendation effectiveness, while maintaining a computational advantage over direct approaches to collaborative filtering such as the k-nearest-neighbor strategy.
---
paper_title: Automating Personal Categorization Using Artificial Neural Networks
paper_content:
Organizations as well as personal users invest a great deal of time in assigning documents they read or write to categories. Automatic document classification that matches user subjective classification is widely used, but much challenging research still remains to be done. The self-organizing map (SOM) is an artificial neural network (ANN) that is mathematically characterized by transforming high-dimensional data into two-dimensional representation. This enables automatic clustering of the input, while preserving higher order topology. A closely related method is the Learning Vector Quantization (LVQ) algorithm, which uses supervised learning to maximize correct data classification. This study evaluates and compares the application of SOM and LVQ to automatic document classification, based on a subjectively predefined set of clusters in a specific domain. A set of documents from an organization, manually clustered by a domain expert, was used in the experiment. Results show that in spite of the subjective nature of human categorization, automatic document clustering methods match with considerable success subjective, personal clustering, the LVQ method being more advantageous.
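A sketch of the LVQ1 prototype-update rule on toy two-dimensional data; the prototypes, learning rate and data are illustrative stand-ins for the study's document vectors and categories.
```python
# LVQ1: move the winning prototype towards same-class samples, away from others.
import numpy as np

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.3, (30, 2)), rng.normal(2.0, 0.3, (30, 2))])
y = np.array([0] * 30 + [1] * 30)

prototypes = np.array([[0.5, 0.5], [1.5, 1.5]], dtype=float)
proto_labels = np.array([0, 1])
lr = 0.05

for _ in range(20):
    for xi, yi in zip(X, y):
        j = int(np.argmin(np.linalg.norm(prototypes - xi, axis=1)))   # winning prototype
        step = lr * (xi - prototypes[j])
        prototypes[j] += step if proto_labels[j] == yi else -step

print("learned prototypes:\n", prototypes.round(2))
```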
---
paper_title: Symbolic data analysis with the k–means algorithm for user profiling
paper_content:
We propose to simplify human-machine interaction by automating device settings that are normally made manually. We present here a classification scheme of user behaviours based on an adaptation of the K-means algorithm to symbolic data representing user behaviours. This classification enables a system to derive prototypical behaviours and to control device settings automatically.
---
paper_title: On Mining Web Access Logs
paper_content:
The proliferation of information on the world wide web has made the personalization of this information space a necessity. One possible approach to web personalization is to mine typical user profiles from the vast amount of historical data stored in access logs. In the absence of any a priori knowledge, unsupervised classification or clustering methods seem to be ideally suited to analyze the semi-structured log data of user accesses. In this paper, we define the notion of a user session, as well as a dissimilarity measure between two web sessions that captures the organization of a web site. To extract a user access profile, we cluster the user sessions based on the pair-wise dissimilarities using a robust fuzzy clustering algorithm that we have developed. We report the results of experiments with our algorithm and show that this leads to extraction of interesting user profiles. We also show that it outperforms association rule based approaches for this task.
---
paper_title: User Intention Modeling in Web Applications Using Data Mining
paper_content:
The problem of inferring a user's intentions in Machine–Human Interaction has been the key research issue for providing personalized experiences and services. In this paper, we propose novel approaches on modeling and inferring user's actions in a computer. Two linguistic features – keyword and concept features – are extracted from the semantic context for intention modeling. Concept features are the conceptual generalization of keywords. Association rule mining is used to find the proper concept of corresponding keyword. A modified Naive Bayes classifier is used in our intention modeling. Experimental results have shown that our proposed approach achieved 84% average accuracy in predicting user's intention, which is close to the precision (92%) of human prediction.
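A minimal sketch of intention classification from context keywords with a plain Naive Bayes classifier (not the paper's modified variant); the tiny training corpus and intention labels are invented for illustration.
```python
# Keyword-based intention classification with a bag-of-words Naive Bayes model.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

contexts = [
    "book cheap flight to paris next week",
    "compare hotel prices downtown",
    "download latest driver for printer",
    "install printer driver windows",
]
intentions = ["travel", "travel", "support", "support"]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(contexts, intentions)
print(model.predict(["cheap hotel near the airport"]))
```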
---
paper_title: Effective Prediction of Web-user Accesses: A Data Mining Approach
paper_content:
The problem of predicting web-user accesses has recently attracted significant attention. Several algorithms have been proposed, which find important applications, like user profiling, recommender systems, web prefetching, design of adaptive web sites, etc. In all these applications the core issue is the development of an effective prediction algorithm. In this paper, we focus on web-prefetching, because of its importance in reducing user perceived latency present in every Web-based application. The proposed method can be easily extended to the other aforementioned applications. Prefetching refers to the mechanism of deducing forthcoming page accesses of a client, based on access log information. We examine a method that is based on a new type of association patterns, which differently from existing approaches, considers all the specific characteristics of the Web-user navigation. Experimental results indicate its superiority over existing methods.
---
paper_title: User Modeling for Efficient Use of Multimedia Files
paper_content:
It is very common that a user likes to collect many multimedia files of their interests from the web or other sources for his/her daily use, such as in emails, presentations, and technical documents. This paper presents algorithms to learn user models, in particular, user intention models and preference models from the usage of these files. Such usages include downloading, inserting, and sending multimedia files. A user intention model predicts when the user may want to involve some multimedia objects in his currently working environment (e.g., an email) and provides more convenient and accurate help to the user. A user preference model describes the types and classes of the user's favorite multimedia files and helps an offline crawler to autonomously collect more useful multimedia files for the user. The algorithms have been implemented in our media agents system and shown their effectiveness in user modeling.
---
paper_title: Mining association rules between sets of items in large databases
paper_content:
We are given a large database of customer transactions. Each transaction consists of items purchased by a customer in a visit. We present an efficient algorithm that generates all significant association rules between items in the database. The algorithm incorporates buffer management and novel estimation and pruning techniques. We also present results of applying this algorithm to sales data obtained from a large retailing company, which shows the effectiveness of the algorithm.
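The sketch below mines association rules from a handful of made-up transactions by exhaustively enumerating small itemsets; it illustrates the support and confidence criteria, while the algorithm described in the paper adds the estimation, pruning and buffer-management machinery needed for large databases.
```python
# Brute-force association rule mining over toy market-basket transactions.
from itertools import combinations

transactions = [
    {"bread", "milk"}, {"bread", "butter"}, {"bread", "milk", "butter"},
    {"milk", "butter"}, {"bread", "milk", "jam"},
]
min_support, min_confidence = 0.4, 0.7
n = len(transactions)
items = sorted(set().union(*transactions))

def support(itemset):
    return sum(itemset <= t for t in transactions) / n

for size in (2, 3):
    for itemset in map(set, combinations(items, size)):
        if support(itemset) < min_support:
            continue
        for k in range(1, size):
            for antecedent in map(set, combinations(itemset, k)):
                confidence = support(itemset) / support(antecedent)
                if confidence >= min_confidence:
                    print(sorted(antecedent), "->", sorted(itemset - antecedent),
                          f"(support={support(itemset):.2f}, confidence={confidence:.2f})")
```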
---
paper_title: From Computational Intelligence to Web Intelligence: An Ensemble from Potpourri
paper_content:
The advent of the internet has changed the world in possibly more significant ways than any other event in the history of humanity. Is internet access and use beyond the reach of ordinary people with ordinary intelligence? Ignoring for the moment economic issues of access for all citizenry, what is it about internet access and use that hinders more widespread acceptability? We explore several issues, not exclusive, that attempt to provoke and poke at answers to these simple questions. Largely speculative, as invited talks ought to be, we explore 3 topics, well studied but as yet generally unsolved, in computational intelligence and explore their impact on web intelligence. These topics are machine translation, machine learning, and user interface design. Conclusion will be mine; readers will draw general conclusions.
---
paper_title: The application of nearest neighbor algorithm on creating an adaptive on-line learning system
paper_content:
The purpose of this research is the development of an online learning system by the application of nearest neighbor algorithm for providing adaptive learning materials on the course of introduction to computer networks. There are three major tasks included in the research: (1) categorization of the materials of computer networks; (2) application of the nearest neighbor algorithm for retrieving the most adequate materials for different students when they are learning via the World Wide Web (WWW); and (3) identification of changes in thought of students' knowledge on computer networks after taking the on-line course. Compared with previous research into adaptive learning, the application of the nearest neighbor algorithm could retrieve more adequate materials from the library based on previous similar cases. Therefore, the learning performance of each individual could be improved.
---
paper_title: Learning a Model of a Web User's Interests
paper_content:
There are many recommender systems that are designed to help users find relevant information on the web. To produce recommendations that are relevant to an individual user, many of these systems first attempt to learn a model of the user's browsing behavior. This paper presents a novel method for learning such a model from a set of annotated web logs--i.e., web logs that are augmented with the user's assessment of whether each webpage is an information content (IC) page (i.e., contains the information required to complete her task). Our systems use this to learn what properties of a webpage, within a sequence, identify such IC-pages, and similarly what "browsing properties" characterize the words on such pages ("IC-words"). As these methods deal with properties of web pages (or of words), rather than specific URLs (words), they can be used anywhere throughout the web; i.e., they are not specific to a particular website, or a particular task. This paper also describes the enhanced browser, aie, that we designed and implemented for collecting these annotated web logs, and an empirical study we conducted to investigate the effectiveness of our approach. This empirical evidence shows that our approach, and our algorithms, work effectively.
---
paper_title: Discovering Prediction Rules in AHA! Courses
paper_content:
In this paper we are going to show how to discover interesting prediction rules from student usage information to improve adaptive web courses. We have used AHA! to make courses that adapt both the presentation and the navigation depending on the level of knowledge that each particular student has. We have performed several modifications in AHA! to specialize it and power it in the educational area. Our objective is to discover relations between all the picked-up usage data (reading times, difficulty levels and test results) from student executions and show the most interesting ones to the teacher so that he can carry out the appropriate modifications in the course to improve it.
---
paper_title: Statistical Data Analysis in the Computer Age
paper_content:
Most of our familiar statistical methods, such as hypothesis testing, linear regression, analysis of variance, and maximum likelihood estimation, were designed to be implemented on mechanical calculators. Modern electronic computation has encouraged a host of new statistical methods that require fewer distributional assumptions than their predecessors and can be applied to more complicated statistical estimators. These methods allow the scientist to explore and describe data and draw valid statistical inferences without the usual concerns for mathematical tractability. This is possible because traditional methods of mathematical analysis are replaced by specially constructed computer algorithms. Mathematics has not disappeared from statistical theory. It is the main method for deciding which algorithms are correct and efficient tools for automating statistical inference.
---
paper_title: C4.5: Programs for Machine Learning
paper_content:
Classifier systems play a major role in machine learning and knowledge-based systems, and Ross Quinlan's work on ID3 and C4.5 is widely acknowledged to have made some of the most significant contributions to their development. This book is a complete guide to the C4.5 system as implemented in C for the UNIX environment. It contains a comprehensive guide to the system's use, the source code (about 8,800 lines), and implementation notes. The source code and sample datasets are also available on a 3.5-inch floppy diskette for a Sun workstation. C4.5 starts with large sets of cases belonging to known classes. The cases, described by any mixture of nominal and numeric properties, are scrutinized for patterns that allow the classes to be reliably discriminated. These patterns are then expressed as models, in the form of decision trees or sets of if-then rules, that can be used to classify new cases, with emphasis on making the models understandable as well as accurate. The system has been applied successfully to tasks involving tens of thousands of cases described by hundreds of properties. The book starts from simple core learning methods and shows how they can be elaborated and extended to deal with typical problems such as missing data and overfitting. Advantages and disadvantages of the C4.5 approach are discussed and illustrated with several case studies. This book and software should be of interest to developers of classification-based intelligent systems and to students in machine learning and expert systems courses.
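As a hands-on counterpart, the sketch below induces a decision tree from labelled cases with scikit-learn; its tree learner is CART-based rather than C4.5, but the cases-to-tree workflow and the entropy (information gain) splitting criterion are analogous.
```python
# Inducing and printing a small decision tree from labelled cases.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()
tree = DecisionTreeClassifier(criterion="entropy", max_depth=2, random_state=0)
tree.fit(data.data, data.target)
print(export_text(tree, feature_names=list(data.feature_names)))
```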
---
paper_title: Initializing the student model using stereotypes and machine learning
paper_content:
In this paper we describe the method for initializing the student model in a Web-based Algebra Tutor, which is called Web-EasyMath. The system uses an innovative combination of stereotypes and the distance weighted k-nearest neighbor algorithm to initialize the model of a new student. In particular, the student is first assigned to a stereotype category concerning her/his knowledge level based on her/his performance on a preliminary test. The system then initializes all aspects of the student model using the distance weighted k-nearest neighbor algorithm among the students that belong to the same stereotype category with the new student. The basic idea of the application of the algorithm is to weigh the contribution of each of the neighbor students according to their distance from the new student; the distance between students is calculated based on a similarity measure. In the case of Web-EasyMath the similarity measure is estimated taking into account the school class students belong to, their degree of carefulness while solving exercises as well as their proficiency in using simple arithmetic operations.
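A minimal sketch of a distance-weighted k-nearest-neighbour estimate used to initialise a new student's value from similar students; the feature columns, weighting scheme and numbers are illustrative assumptions rather than Web-EasyMath's actual similarity measure.
```python
# Distance-weighted kNN estimate of a new student's knowledge level from peers.
import numpy as np

# Columns: school class, carefulness, arithmetic proficiency; last value: known knowledge level.
peers = np.array([
    [2, 0.8, 0.9, 0.75],
    [2, 0.6, 0.7, 0.60],
    [3, 0.9, 0.8, 0.80],
    [2, 0.5, 0.6, 0.55],
])
new_student = np.array([2, 0.7, 0.8])

distances = np.linalg.norm(peers[:, :3] - new_student, axis=1)
k = 3
nearest = np.argsort(distances)[:k]
weights = 1.0 / (distances[nearest] ** 2 + 1e-9)      # closer peers contribute more
estimate = float(np.sum(weights * peers[nearest, 3]) / weights.sum())
print(f"initial knowledge-level estimate: {estimate:.2f}")
```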
---
paper_title: Modelling of Novices' Control Skills With Machine Learning
paper_content:
We report an empirical study on the application of machine learning to the modelling of novice controllers’ skills in balancing a pole (inverted pendulum) on top of a cart. Results are presented on the predictive power of the models, and the extent to which they were tailored to each controller. The behaviour of the participants in the study and the behaviour of an interpreter executing their models are compared with respect to the amount of time they were able to keep the pole and cart under control, the degree of stability achieved, and the conditions of failure. We discuss the results of the study, the limitations of the methodology in relation to learner modelling, and we point out future directions of research.
---
paper_title: From computational intelligence to Web intelligence
paper_content:
The authors explore three topics in computational intelligence: machine translation, machine learning and user interface design and speculate on their effects on Web intelligence. Systems that can communicate naturally and learn from interactions will power Web intelligence's long term success. The large number of problems requiring Web-specific solutions demand a sustained and complementary effort to advance fundamental machine-learning research and incorporate a learning component into every Internet interaction. Traditional forms of machine translation either translate poorly, require resources that grow exponentially with the number of languages translated, or simplify language excessively. Recent success in statistical, nonlinguistic, and hybrid machine translation suggests that systems based on these technologies can achieve better results with a large annotated language corpus. Adapting existing computational intelligence solutions, when appropriate for Web intelligence applications, must incorporate a robust notion of learning that will scale to the Web, adapt to individual user requirements, and personalize interfaces.
---
paper_title: Automatic Web-Page Classification by Using Machine Learning Methods
paper_content:
This paper describes automatic Web-page classification using machine learning methods. The importance of portal site services, including search engine functions on the World Wide Web, has been increasing. In particular, portal sites such as Yahoo!, which hierarchically classify Web pages into many categories, are becoming popular. However, the classification of Web pages into categories relies on manual effort, which costs much time and care. To alleviate this problem, we propose techniques to generate attributes using co-occurrence analysis and to classify Web pages automatically based on machine learning. We apply these techniques to Web pages on Yahoo! JAPAN and construct decision trees that determine the appropriate category for each Web page. The performance of the proposed method is evaluated in terms of error rate, recall, and precision. The experimental evaluation demonstrates that the method provides acceptable accuracy in classifying Web pages into top-level categories on Yahoo! JAPAN.
---
paper_title: Predicting student help-request behavior in an intelligent tutor for reading
paper_content:
This paper describes our efforts at constructing a fine-grained student model in Project LISTEN's intelligent tutor for reading. Reading is different from most domains that have been studied in the intelligent tutoring community, and presents unique challenges. Constructing a model of the user from voice input and mouse clicks is difficult, as is constructing a model when there is not a well-defined domain model. We use a database describing student interactions with our tutor to train a classifier that predicts whether students will click on a particular word for help with 83.2% accuracy. We have augmented the classifier with features describing properties of the word's individual graphemes, and discuss how such knowledge can be used to assess student skills that cannot be directly measured.
---
paper_title: An Exploratory Technique for Investigating Large Quantities of Categorical Data
paper_content:
The technique set out in the paper, CHAID, is an offshoot of AID (Automatic Interaction Detection) designed for a categorized dependent variable. Some important modifications which are relevant to standard AID include: built-in significance testing with the consequence of using the most significant predictor (rather than the most explanatory), multi-way splits (in contrast to binary) and a new type of predictor which is especially useful in handling missing information.
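The core of the splitting step can be sketched as a chi-squared test of independence between each categorical predictor and the categorical dependent variable, with the most significant predictor chosen for the split. The sketch below assumes scipy is available and uses an invented contingency table; CHAID's category-merging and multi-way split machinery are omitted.

from scipy.stats import chi2_contingency

def best_predictor(tables):
    """Pick the predictor whose contingency table with the target is most
    significant (smallest chi-squared p-value)."""
    results = {}
    for name, table in tables.items():
        chi2, p, dof, _ = chi2_contingency(table)
        results[name] = (p, chi2, dof)
    return min(results.items(), key=lambda kv: kv[1][0])

# Invented counts: rows are predictor categories, columns are target classes.
tables = {
    "region": [[30, 10], [12, 28], [20, 20]],
    "gender": [[25, 24], [27, 24]],
}
print(best_predictor(tables))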
---
paper_title: Learning Interaction Models in a Digital Library Service
paper_content:
We present the exploitation of an improved version of the Learning Server for modeling the user interaction in a digital library service architecture. This module is the basic component for providing the service with an added value such as an essential extensible form of interface adaptivity. Indeed, the system is equipped with a web-based visual environment, primarily intended to improve the user interaction by automating the assignment of a suitable interface depending on data relative to the previous experience with the system, coded in log files. The experiments performed show that accurate interaction models can be inferred automatically by using up-to-date learning algorithms.
---
paper_title: Data Mining Practical Machine Learning Tools And Techniques With Java Implementations
paper_content:
This book provides a practical introduction to machine learning and data mining techniques, covering data preparation, decision trees, classification rules, instance-based learning, clustering, and methods for evaluating what has been learned, and documents the accompanying Weka workbench, a collection of machine learning algorithms implemented in Java.
---
paper_title: An Algorithm for Finding Nearest Neighbors
paper_content:
An algorithm that finds the k nearest neighbors of a point, from a sample of size N in a d-dimensional space, is described, together with an estimate of its expected number of distance calculations; its properties are examined, and the validity of the estimate is verified with simulated data.
---
paper_title: An introduction to computing with neural nets
paper_content:
Artificial neural net models have been studied for many years in the hope of achieving human-like performance in the fields of speech and image recognition. These models are composed of many nonlinear computational elements operating in parallel and arranged in patterns reminiscent of biological neural nets. Computational elements or nodes are connected via weights that are typically adapted during use to improve performance. There has been a recent resurgence in the field of artificial neural nets caused by new net topologies and algorithms, analog VLSI implementation techniques, and the belief that massive parallelism is essential for high performance speech and image recognition. This paper provides an introduction to the field of artificial neural nets by reviewing six important neural net models that can be used for pattern classification. These nets are highly parallel building blocks that illustrate neural net components and design principles and can be used to construct more complex systems. In addition to describing these nets, a major emphasis is placed on exploring how some existing classification and clustering algorithms can be performed using simple neuron-like components. Single-layer nets can implement algorithms required by Gaussian maximum-likelihood classifiers and optimum minimum-error classifiers for binary patterns corrupted by noise. More generally, the decision regions required by any classification algorithm can be generated in a straightforward manner by three-layer feed-forward nets.
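As a minimal illustration of the neuron-like components this tutorial describes, the sketch below trains a single-layer perceptron on a linearly separable toy problem; multi-layer nets and the clustering algorithms surveyed in the paper are beyond this fragment, and the learning rate and data are invented.

def train_perceptron(samples, epochs=20, lr=0.1):
    """samples: list of (inputs, target) pairs with targets in {0, 1}."""
    n = len(samples[0][0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, t in samples:
            y = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = t - y
            # Simple perceptron update rule: move weights toward the target.
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# Toy problem: logical OR.
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]
w, b = train_perceptron(data)
print(w, b)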
---
paper_title: Adaptive user modeling for filtering electronic news
paper_content:
A prototype system for the fine-grained filtering of news items has been developed and a pilot test has been conducted. The system is based on an adaptive user model that integrates stereotypes and artificial neural networks. The stereotypes are based on newspaper sections and sub-sections, along with editor-specified and user-specified keywords. Eight subjects trained the system over six days of newspapers (986 news items) and then tested the system on a seventh day (171 news items). Five users were simply asked to 'read the news' while three users developed 'corporate' profiles with explicit information needs. The evaluation suggests that such an integrated adaptive user model did, in fact, reflect the difference between the two different types of task. In both cases, the results also reflect the quality of the training of the adaptive neural network by the user in creating the user profile.
---
paper_title: Statistical machine learning for tracking hypermedia user behavior
paper_content:
We consider the classification and tracking of user navigation patterns for closed-world hypermedia. We use a number of statistical machine learning models and compare them on different instances of the classification/tracking problem using a home-made access log database. We conclude on the potential and limitations of these methods for user behavior identification and tracking.
---
paper_title: Pattern recognition using neural networks: theory and algorithms for engineers and scientists
paper_content:
Part I FUNDAMENTALS OF PATTERN RECOGNITION 0. Basic Concepts of Pattern Recognition 1. Decision Theoretic Algorithms 2. Structural Pattern Recognition Part II INTRODUCTORY NEURAL NETWORKS 3. Artificial Neural Network Structures 4. Supervised Training via Error Backpropagation: Derivations 5. Acceleration and Stabilization of Supervised Gradient Training of MLPs Part III ADVANCED FUNDAMENTALS OF NEURAL NETWORKS 6. Supervised Training via Strategic Search 7. Advances in Network Algorithms for Recognition 8. Using Hopfield Recurrent Neural Networks Part IV NEURAL, FEATURE, AND DATA ENGINEERING 9. Neural Engineering and Testing of FANNs 10. Feature and Data Engineering
---
paper_title: A Connectionist Model of Spatial Knowledge Acquisition in a Virtual Environment
paper_content:
This paper proposes the use of neural networks as a tool for studying navigation within virtual worlds. Results indicate that the network learned to predict the next step for a given trajectory, also acquiring basic spatial knowledge in terms of landmarks and the configuration of the spatial layout. In addition, the network built a spatial representation of the virtual world, e.g. a cognitive-like map, which preserves the topology but lacks metric accuracy. The benefits of this approach and the possibility of extending the methodology to the study of navigation in Human Computer Interaction are discussed.
---
paper_title: Using a Learning Agent with a Student Model
paper_content:
In this paper we describe the application of machine learning to the problem of constructing a student model for an intelligent tutoring system. The proposed system learns on a per student basis how long an individual student requires to solve the problem presented by the tutor. This model of relative problem difficulty is learned within a "two-phase" learning algorithm. First, data from the entire student population are used to train a neural network. Second, the system learns how to modify the neural network's output to better fit each individual student's performance. Both components of the model proved useful in improving its accuracy. This model of time to solve a problem is used by the tutor to control the complexity of problems presented to the student.
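The two-phase idea can be sketched independently of any particular learner: fit a population-level model of time-to-solve, then learn a per-student correction to its output. The sketch below assumes numpy and uses an ordinary least-squares fit in place of the paper's neural network, with invented difficulty values and solve times, so the details are illustrative only.

import numpy as np

# Phase 1: population model, here a least-squares fit of solve time against
# problem features pooled over all students.
X = np.array([[1, 2.0], [1, 3.0], [1, 5.0], [1, 7.0]])   # bias + difficulty
y = np.array([10.0, 14.0, 22.0, 30.0])                    # seconds to solve
theta, *_ = np.linalg.lstsq(X, y, rcond=None)

def population_predict(difficulty):
    return float(np.array([1, difficulty]) @ theta)

# Phase 2: a per-student multiplicative correction learned from that student's
# own observed times (a crude stand-in for the paper's second phase).
def student_scale(observed):
    ratios = [t_obs / population_predict(d) for d, t_obs in observed]
    return sum(ratios) / len(ratios)

scale = student_scale([(2.0, 13.0), (5.0, 27.0)])    # this student is slower
print(round(scale * population_predict(4.0), 1))      # personalised estimate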
---
paper_title: An introduction to Support Vector Machines
paper_content:
This book is the first comprehensive introduction to Support Vector Machines (SVMs), a new generation learning system based on recent advances in statistical learning theory. The book also introduces Bayesian analysis of learning and relates SVMs to Gaussian Processes and other kernel based learning methods. SVMs deliver state-of-the-art performance in real-world applications such as text categorisation, hand-written character recognition, image classification, biosequence analysis, etc. Their first introduction in the early 1990s led to a recent explosion of applications and deepening theoretical analysis, that has now established Support Vector Machines along with neural networks as one of the standard tools for machine learning and data mining. Students will find the book both stimulating and accessible, while practitioners will be guided smoothly through the material required for a good grasp of the theory and application of these techniques. The concepts are introduced gradually in accessible and self-contained stages, though in each stage the presentation is rigorous and thorough. Pointers to relevant literature and web sites containing software ensure that it forms an ideal starting point for further study. Equally the book will equip the practitioner to apply the techniques and an associated web site will provide pointers to updated literature, new applications, and on-line software.
---
paper_title: Adapting to the user’s internet search strategy
paper_content:
World Wide Web search engines typically return thousands of results to the users. To avoid users browsing through the whole list of results, search engines use ranking algorithms to order the list according to predefined criteria. In this paper, we present Toogle, a front-end to the Google search engine for both desktop browsers and mobile phones. For a given search query, Toogle first ranks results using Google's algorithm and, as the user browses through the result list, uses machine learning techniques to infer a model of her search goal and to adapt accordingly the order in which the results are presented. We describe preliminary experimental results that show the effectiveness of Toogle.
---
paper_title: Category Based Customization Approach for Information Retrieval
paper_content:
This paper proposes a customization technique for supporting interactive document retrieval in unorganized open information spaces such as the WWW. We assume that taxonomical thought is one of the most important and skilled operations we perform when we organize or store information. The proposed methodology, therefore, handles hierarchical categories of documents. The system can be customized through users' modification of categories. The features of the proposed approach are (1) visualization of document categories for interaction, (2) initialization of categories by a hierarchical clustering method, (3) customization of categories by support vector machine techniques, and (4) additional attributes for individual implicit cognitive aspects.
---
paper_title: A training algorithm for optimal margin classifiers
paper_content:
A training algorithm that maximizes the margin between the training patterns and the decision boundary is presented. The technique is applicable to a wide variety of the classification functions, including Perceptrons, polynomials, and Radial Basis Functions. The effective number of parameters is adjusted automatically to match the complexity of the problem. The solution is expressed as a linear combination of supporting patterns. These are the subset of training patterns that are closest to the decision boundary. Bounds on the generalization performance based on the leave-one-out method and the VC-dimension are given. Experimental results on optical character recognition problems demonstrate the good generalization obtained when compared with other learning algorithms.
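A maximum-margin classifier of the kind introduced here is most easily demonstrated with an off-the-shelf implementation. The sketch below assumes scikit-learn is available, fits a linear support vector machine to an invented two-class problem, and prints the supporting patterns (support vectors) that define the decision boundary.

from sklearn.svm import SVC

# Toy two-class problem in the plane.
X = [[0.0, 0.0], [1.0, 1.0], [0.2, 0.8], [3.0, 3.0], [4.0, 3.5], [3.5, 2.5]]
y = [0, 0, 0, 1, 1, 1]

clf = SVC(kernel="linear", C=1e3)   # a large C approximates the hard margin
clf.fit(X, y)

print("support vectors:", clf.support_vectors_)
print("prediction for (2, 2):", clf.predict([[2.0, 2.0]])[0])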
---
paper_title: The Lumiere Project: Bayesian User Modeling for Inferring the Goals and Needs of Software Users
paper_content:
The Lumiere Project centers on harnessing probability and utility to provide assistance to computer software users. We review work on Bayesian user models that can be employed to infer a user's needs by considering a user's background, actions, and queries. Several problems were tackled in Lumiere research, including (1) the construction of Bayesian models for reasoning about the time-varying goals of computer users from their observed actions and queries, (2) gaining access to a stream of events from software applications, (3) developing a language for transforming system events into observational variables represented in Bayesian user models, (4) developing persistent profiles to capture changes in a user's expertise, and (5) the development of an overall architecture for an intelligent user interface. Lumiere prototypes served as the basis for the Office Assistant in the Microsoft Office '97 suite of productivity applications.
---
paper_title: User Intention Modeling in Web Applications Using Data Mining
paper_content:
The problem of inferring a user's intentions in Machine–Human Interaction has been the key research issue for providing personalized experiences and services. In this paper, we propose novel approaches on modeling and inferring user's actions in a computer. Two linguistic features – keyword and concept features – are extracted from the semantic context for intention modeling. Concept features are the conceptual generalization of keywords. Association rule mining is used to find the proper concept of corresponding keyword. A modified Naive Bayes classifier is used in our intention modeling. Experimental results have shown that our proposed approach achieved 84% average accuracy in predicting user's intention, which is close to the precision (92%) of human prediction.
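A plain Naive Bayes classifier over keyword features (without the concept generalisation via association rules or the specific modifications the paper introduces) can be sketched as follows; the intentions, keywords, and Laplace smoothing are invented for illustration.

from collections import Counter, defaultdict
from math import log

def train_nb(examples):
    """examples: list of (keyword list, intention label) pairs."""
    prior, word_counts, vocab = Counter(), defaultdict(Counter), set()
    for words, label in examples:
        prior[label] += 1
        word_counts[label].update(words)
        vocab.update(words)
    return prior, word_counts, vocab

def predict(prior, word_counts, vocab, words):
    total = sum(prior.values())
    best = None
    for label in prior:
        denom = sum(word_counts[label].values()) + len(vocab)
        score = log(prior[label] / total)
        # Laplace-smoothed log-likelihood of each observed keyword.
        score += sum(log((word_counts[label][w] + 1) / denom) for w in words)
        best = max(best, (score, label)) if best else (score, label)
    return best[1]

examples = [(["open", "file"], "edit_document"),
            (["print", "file"], "print_document"),
            (["open", "browser"], "browse_web")]
model = train_nb(examples)
print(predict(*model, ["open", "file", "print"]))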
---
paper_title: Pre-sending Documents on the WWW: A Comparative Study
paper_content:
Users' waiting time for information on the WWW may be reduced by pre-sending documents they are likely to request, albeit at a possible expense of additional transmission costs. In this paper, we describe a prediction model which anticipates the documents a user is likely to request next, and present a decision-theoretic approach for pre-sending documents based on the predictions made by this model. We introduce two evaluation methods which measure the immediate and the eventual benefit of pre-sending a document. We use these evaluation methods to compare the performance of our decision-theoretic policy to that of a naive pre-sending policy, and to identify the domain parameter configurations for which each of these policies provides a clear overall benefit to the user.
---
paper_title: Adaptive Web Navigation for Wireless Devices
paper_content:
Visitors who browse the web from wireless PDAs, cell phones, and pagers are frequently stymied by web interfaces optimized for desktop PCs. Simply replacing graphics with text and reformatting tables does not solve the problem, because deep link structures can still require minutes to traverse. In this paper we develop an algorithm, MINPATH, that automatically improves wireless web navigation by suggesting useful shortcut links in real time. MINPATH finds shortcuts by using a learned model of web visitor behavior to estimate the savings of shortcut links, and suggests only the few best links. We explore a variety of predictive models, including Naive Bayes mixture models and mixtures of Markov models, and report empirical evidence that MINPATH finds useful shortcuts that save substantial navigational effort.
---
paper_title: Machine Learning for User Modeling
paper_content:
At first blush, user modeling appears to be a prime candidate for straightforward application of standard machine learning techniques. Observations of the user's behavior can provide training examples that a machine learning system can use to form a model designed to predict future actions. However, user modeling poses a number of challenges for machine learning that have hindered its application in user modeling, including: the need for large data sets; the need for labeled data; concept drift; and computational complexity. This paper examines each of these issues and reviews approaches to resolving them.
---
paper_title: Predictive Statistical Models for User Modeling
paper_content:
The limitations of traditional knowledge representation methods for modeling complex human behaviour led to the investigation of statistical models. Predictive statistical models enable the anticipation of certain aspects of human behaviour, such as goals, actions and preferences. In this paper, we motivate the development of these models in the context of the user modeling enterprise. We then review the two main approaches to predictive statistical modeling, content-based and collaborative, and discuss the main techniques used to develop predictive statistical models. We also consider the evaluation requirements of these models in the user modeling context, and propose topics for future research.
---
paper_title: Link prediction and path analysis using Markov chains
paper_content:
The enormous growth in the number of documents in the World Wide Web increases the need for improved link navigation and path analysis models. Link prediction and path analysis are important problems with a wide range of applications ranging from personalization to Web server request prediction. The sheer size of the World Wide Web coupled with the variation in users' navigation patterns makes this a very difficult sequence modelling problem. In this paper, the notion of probabilistic link prediction and path analysis using Markov chains is proposed and evaluated. Markov chains allow the system to dynamically model the URL access patterns that are observed in navigation logs based on the previous state. Furthermore, the Markov chain model can also be used in a generative mode to automatically obtain tours. The Markov transition matrix can be analysed further using eigenvector decomposition to obtain 'personalized hubs/authorities'. The utility of the Markov chain approach is demonstrated in many domains: HTTP request prediction, system-driven adaptive Web navigation, tour generation, and detection of 'personalized hubs/authorities' from user navigation profiles. The generality and power of Markov chains is a first step towards the application of powerful probabilistic models to Web path analysis and link prediction.
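The first-order Markov model at the heart of this approach can be sketched directly: estimate a transition matrix from observed navigation sessions and predict the most likely next page given the current one. The sessions below are invented, and smoothing and higher-order history are omitted.

from collections import Counter, defaultdict

def fit_transitions(sessions):
    counts = defaultdict(Counter)
    for session in sessions:
        for cur, nxt in zip(session, session[1:]):
            counts[cur][nxt] += 1
    # Normalise each row of counts into a probability distribution.
    return {cur: {nxt: c / sum(row.values()) for nxt, c in row.items()}
            for cur, row in counts.items()}

def predict_next(model, page):
    row = model.get(page, {})
    return max(row, key=row.get) if row else None

sessions = [["home", "products", "cart", "checkout"],
            ["home", "about", "products", "cart"],
            ["home", "products", "specs"]]
model = fit_transitions(sessions)
print(predict_next(model, "products"))   # most likely successor of "products"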
---
paper_title: Relational Markov models and their application to adaptive web navigation
paper_content:
Relational Markov models (RMMs) are a generalization of Markov models where states can be of different types, with each type described by a different set of variables. The domain of each variable can be hierarchically structured, and shrinkage is carried out over the cross product of these hierarchies. RMMs make effective learning possible in domains with very large and heterogeneous state spaces, given only sparse data. We apply them to modeling the behavior of web site users, improving prediction in our PROTEUS architecture for personalizing web sites. We present experiments on an e-commerce and an academic web site showing that RMMs are substantially more accurate than alternative methods, and make good predictions even when applied to previously-unvisited parts of the site.
---
paper_title: Assessing Temporally Variable User Properties With Dynamic Bayesian Networks
paper_content:
Bayesian networks have been successfully applied to the assessment of user properties which remain unchanged during a session. However, many properties of a person vary over time, thus raising new questions of network modeling. In this paper we characterize different types of dependencies that occur in networks that deal with the modeling of temporally variable user properties. We show how existing techniques of applying dynamic probabilistic networks can be adapted for the task of modeling the dependencies in dynamic Bayesian networks. We illustrate the proposed techniques using examples of emergency calls to the fire department of the city of Saarbrucken. The fire department officers are experienced in dealing with emergency calls from callers whose available working memory capacity is temporarily limited. We develop a model which reconstructs the officers’ assessments of a caller’s working memory capacity.
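The temporal dependency handled by a dynamic Bayesian network can be illustrated, in drastically simplified form, by a discrete Bayes filter over a single two-valued property (for example, whether a caller's available working memory capacity is currently limited); the transition and observation probabilities below are invented and are not taken from the paper's model.

def bayes_filter_step(belief, transition, observation_likelihood, evidence):
    """belief: {state: prob}; transition: {state: {state: prob}};
    observation_likelihood: {state: {evidence: prob}}."""
    # Prediction: push the belief through the temporal transition model.
    predicted = {s: sum(belief[p] * transition[p][s] for p in belief)
                 for s in belief}
    # Correction: weight by how well each state explains the new evidence.
    unnorm = {s: predicted[s] * observation_likelihood[s][evidence]
              for s in predicted}
    z = sum(unnorm.values())
    return {s: v / z for s, v in unnorm.items()}

belief = {"limited": 0.5, "normal": 0.5}
transition = {"limited": {"limited": 0.8, "normal": 0.2},
              "normal": {"limited": 0.1, "normal": 0.9}}
obs = {"limited": {"hesitation": 0.7, "fluent": 0.3},
       "normal": {"hesitation": 0.2, "fluent": 0.8}}
for e in ["hesitation", "hesitation", "fluent"]:
    belief = bayes_filter_step(belief, transition, obs, e)
print(belief)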
---
paper_title: Prefetching Hyperlinks
paper_content:
This paper develops a new method for prefetching Web pages into the client cache. Clients send reference information to Web servers, which aggregate the reference information in near-real-time and then disperse the aggregated information to all clients, piggybacked on GET responses. The information indicates how often hyperlink URLs embedded in pages have been previously accessed relative to the embedding page. Based on knowledge about which hyperlinks are generally popular, clients initiate prefetching of the hyperlinks and their embedded images according to any algorithm they prefer. Both client and server may cap the prefetching mechanism's space overhead and waste of network resources due to speculation. The result of these differences is improved prefetching: lower client latency (by 52.3%) and less wasted network bandwidth (24.0%).
---
|
Title: Survey of Data Mining Approaches to User Modeling for Adaptive Hypermedia
Section 1: INTRODUCTION
Description 1: In this section, introduce the concept of adaptive hypermedia (AH), personalization, user modeling (UM), and the role of data mining and machine learning techniques in the automatic creation of user models for AH services.
Section 2: UM
Description 2: Define a user model, its relevance for AH, and the elements it should capture about user behavior. Also, provide a basic overview of automatic user model generation.
Section 3: UM and AH
Description 3: Describe how AH systems use user models to personalize content and the basic architecture of an AH system, including the recommendation and classification tasks.
Section 4: Automatic Generation of User Models
Description 4: Outline the steps in the automatic generation of user models, including data collection, preprocessing, pattern discovery, and validation and interpretation.
Section 5: Data Mining and Its Relevance to UM
Description 5: Discuss how data mining techniques help in the phase of pattern discovery for user modeling and the importance of choosing suitable methods based on the data and tasks at hand.
Section 6: UNSUPERVISED APPROACHES TO UM
Description 6: Review the main unsupervised techniques like clustering (hierarchical, nonhierarchical, fuzzy) and association rules, highlighting their application, basic algorithms, and limitations for user modeling.
Section 7: SUPERVISED APPROACHES TO UM
Description 7: Explore supervised learning techniques such as decision trees/classification rules, k-NN, neural networks, and SVMs, explaining their application, basic algorithms, and limitations for user modeling.
Section 8: CRITERIA FOR THE SELECTION OF THE TECHNIQUES
Description 8: Provide guidelines to help decide which data mining/machine learning technique to use for developing AH applications based on data nature, task type, and readability requirements.
Section 9: CONCLUSION AND DISCUSSION
Description 9: Summarize the findings of the paper, discuss the lack of standardization in user model design, propose future research directions including hybrid systems, and discuss the potential improvements in user modeling for AH systems.
|
Using Formal Verification to Evaluate Human-Automation Interaction: A Review
| 6 |
---
paper_title: A model for types and levels of human interaction with automation
paper_content:
We outline a model for types and levels of automation that provides a framework and an objective basis for deciding which system functions should be automated and to what extent. Appropriate selection is important because automation does not merely supplant but changes human activity and can impose new coordination demands on the human operator. We propose that automation can be applied to four broad classes of functions: 1) information acquisition; 2) information analysis; 3) decision and action selection; and 4) action implementation. Within each of these types, automation can be applied across a continuum of levels from low to high, i.e., from fully manual to fully automatic. A particular system can involve automation of all four types at different levels. The human performance consequences of particular types and levels of automation constitute primary evaluative criteria for automation design using our model. Secondary evaluative criteria include automation reliability and the costs of decision/action consequences, among others. Examples of recommended types and levels of automation are provided to illustrate the application of the model to automation design.
---
paper_title: Human-Automation Interaction:
paper_content:
Automation does not mean humans are replaced; quite the opposite. Increasingly, humans are asked to interact with automation in complex and typically large-scale systems, including aircraft and air traffic control, nuclear power, manufacturing plants, military systems, homes, and hospitals. This is not an easy or error-free task for either the system designer or the human operator/automation supervisor, especially as computer technology becomes ever more sophisticated. This review outlines recent research and challenges in the area, including taxonomies and qualitative models of human-automation interaction; descriptions of automation-related accidents and studies of adaptive automation; and social, political, and ethical issues.
---
paper_title: PUMA Footprints: linking theory and craft skill in usability evaluation
paper_content:
‘Footprints’ are marks or features of a design that alert the analyst to the possible existence of usability difficulties caused by violations of design principles. PUMA Footprints make an explicit link between the theory underlying a Programmable User Model and the design principles that can be derived from that theory. While principles are widely presented as being intuitively obvious, it is desirable that they should have a theoretical basis. However, working directly with theory tends to be time-consuming, and demands a high level of skill. PUMA footprints offer a theory-based justification for various usability principles, with guidelines on detecting violations of those principles.
---
paper_title: Automated tool for task analysis of NextGen automation
paper_content:
The realization of NextGen capabilities will require rapid deployment of revised airline cockpit procedures and the pre-requisite training and proficiency checks. Traditional approaches for the evaluation of the re-designed procedures and training, such as expert reviews and human-in-the-loop tests, cannot provide comprehensive analysis, cannot be performed until after the procedures and training are developed, and are cost and time prohibitive. This paper describes the emergence of a new class of tools to automate the evaluation of procedures and training. The tools capture the procedures and tasks to be trained in a formal model that is stored in a data-base. Human performance models are executed to estimate the ease-of-learning, ease-of-use and likelihood of failure of each of the tasks. The procedures and tasks can be defined rapidly, and modified and run repeatedly throughout the development cycle. The underlying models and tools are described in this paper. A case study and the implications of these tools are also discussed.
---
paper_title: HOW IN THE WORLD DID WE EVER GET INTO THAT MODE? MODE ERROR AND AWARENESS IN SUPERVISORY CONTROL
paper_content:
New technology is flexible in the sense that it provides practitioners with a large number of functions and options for carrying out a given task under different circumstances. However, this flexibility has a price. Because the human supervisor must choose the mode best suited to a particular situation, he or she must know more than before about the system and its operation, as well as satisfy new monitoring and attentional demands to track which mode the automation is in and what it is doing to manage the underlying processes. When designers proliferate modes without supporting these new cognitive demands, new mode-related error forms and failure paths can result. Mode error has been discussed in human-computer interaction for some time; however, the increased capabilities and the high level of autonomy of new automated systems appear to have created new types of mode-related problems. The authors explore these new aspects based on findings from their own and related studies of human-automation interaction. In particular, investigators draw on empirical data from a series of studies of pilot-automation interaction in commercial glass cockpit aircraft to illustrate the nature, circumstances, and potential consequences of mode awareness problems in supervisory control of automated resources. The result is an expanded view of mode error that considers the new demands imposed by more automated systems.
---
paper_title: Formally Justifying User-Centred Design Rules: A Case Study on Post-completion Errors
paper_content:
Interactive systems combine a human operator with a computer. Either may be a source of error. The verification processes used must ensure both the correctness of the computer component, and also minimize the risk of human error. Human-centred design aims to do this by designing systems in a way that makes allowance for human frailty. One approach to such design is to adhere to design rules. Design rules, however, are often ad hoc. We examine how a formal cognitive model, encapsulating results from the cognitive sciences, can be used to justify such design rules in a way that integrates their use with existing formal hardware verification techniques. We consider here the verification of a design rule intended to prevent a commonly occurring class of human error known as the post-completion error.
---
paper_title: ANALYSIS OF PILOTS' MONITORING AND PERFORMANCE ON AN AUTOMATED FLIGHT DECK
paper_content:
In order to understand the role of pilot monitoring in the loss of mode awareness on automated flight decks, we studied 20 Boeing 747-400 line pilots in a simulated flight. We developed a set of scenario events that created challenges to monitoring. We measured automation use, eye fixations, and pilot mental models. The results showed that, at an aggregate level, pilot monitoring patterns were consistent with those found in the few previous studies. However, mode awareness was affected by both failures to verify mode selections and an inability to understand the implications of autoflight mode on airplane performance.
---
paper_title: An Apprenticeship Approach for the Development of Operations Automation Knowledge Bases
paper_content:
Operations automation is automation that replaces, wholly or in part, operational activities currently carried out by human controllers in complex systems. It is intended to be neither ‘black-box’ nor ‘human-tended’ automation, but rather automation that functions independent of human control and yet still facilitates its inspection and repair as necessary. This paper describes research to develop an apprenticeship approach to developing a knowledge base to support such automation. The result is both a human-centered automation approach and a software architecture, Apprentice, to support this approach. Apprentice enables human operators to create the knowledge base for operations automation by performing their normal control activities. Apprentice watches these activities and compares them with those specified in the knowledge base, noting discrepancies between the knowledge base and operator activities. Graphical knowledge base editing tools are then used by the domain practitioners to modify, refine, or extend the knowledge base.
---
paper_title: On the representation of automation using a work domain analysis
paper_content:
Work domain analysis (WDA) has been applied extensively within cognitive engineering as an analytic framework for the evaluation of complex sociotechnical systems in support of design. However, the WDAs described in the literature have not explored the representation of automated system components, despite the documented problems associated with operator-automation interaction and the requirements for operator support in complex automated systems. The current research examines the application of WDA to model an example automated system – a camera – by representing the camera along with its automated components as separate systems using the abstraction hierarchy (AH). Additionally, we contrasted this modelling approach with the more typical approach of modelling automation within a cognitive work analysis (CWA) by performing a control task analysis using the decision ladder. The results of these analyses suggest that, similar to non-automated systems, considering a separate representation of an automated system...
---
paper_title: Humans and Automation: Use, Misuse, Disuse, Abuse
paper_content:
This paper addresses theoretical, empirical, and analytical studies pertaining to human use, misuse, disuse, and abuse of automation technology. Use refers to the voluntary activation or disengagement of automation by human operators. Trust, mental workload, and risk can influence automation use, but interactions between factors and large individual differences make prediction of automation use difficult. Misuse refers to overreliance on automation, which can result in failures of monitoring or decision biases. Factors affecting the monitoring of automation include workload, automation reliability and consistency, and the saliency of automation state indicators. Disuse, or the neglect or underutilization of automation, is commonly caused by alarms that activate falsely. This often occurs because the base rate of the condition to be detected is not considered in setting the trade-off between false alarms and omissions. Automation abuse, or the automation of functions by designers and implementation by managers...
---
paper_title: A History and Primer of Human Performance Modeling
paper_content:
Human performance models are abstractions, usually mathematical or computational, that attempt to explain or predict human behavior in a particular domain or task. This includes a wide range of techniques and approaches, from ideas that are most likely familiar to the majority of human factors professionals (such as signal detection theory) to more novel and complex approaches (such as computational models of dual tasking while driving). This chapter provides a sampling of modeling approaches and domains to which those approaches have been applied; for a more in-depth review, see Pew and Mavor (1998). We also discuss some of the issues faced by modelers as well as describe some of the rich history of the modeling endeavor over the past 50 years.
---
paper_title: Ironies of Automation
paper_content:
Abstract This paper discusses the ways in which automation of industrial processes may expand rather than eliminate problems with the human operator. Some comments will be made on methods of alleviating these problems within the 'classic' approach of leaving the operator with responsibility for abnormal conditions, and on the potential for continued use of the human operator for on-line decision-making within human-computer collaboration.
---
paper_title: Human-Automated Judge Learning: A Methodology for Examining Human Interaction With Information Analysis Automation
paper_content:
Human-automated judge learning (HAJL) is a methodology providing a three-phase process, quantitative measures, and analytical methods to support design of information analysis automation. HAJL's measures capture the human and automation's judgment processes, relevant features of the environment, and the relationships between each. Specific measures include achievement of the human and the automation, conflict between them, compromise and adaptation by the human toward the automation, and the human's ability to predict the automation. HAJL's utility is demonstrated herein using a simplified air traffic conflict prediction task. HAJL was able to capture patterns of behavior within and across the three phases with measures of individual judgments and human-automation interaction. Its measures were also used for statistical tests of aggregate effects across human judges. Two between-subject manipulations were crossed to investigate HAJL's sensitivity to interventions in the human's training (sensor noise during training) and in display design (information from the automation about its judgment strategy). HAJL identified that the design intervention impacted conflict and compromise with the automation, participants learned from the automation over time, and those with higher individual judgment achievement were also better able to predict the automation.
---
paper_title: Integrating Task- and Work Domain-Based Work Analyses in Ecological Interface Design: A Process Control Case Study
paper_content:
In this paper, we present a case study wherein several work analysis methods were incorporated in the design of a graphical interface for a petrochemical production process. We follow this case from the application of the work analysis methods, through the consolidation of information requirements, to the design of a novel interface that integrates the requirements. The findings confirm earlier assertions that task-based and work domain-based analysis frameworks identify unique and complementary requirements for effective information systems that are intended to support supervisory control of complex systems. It further provides the first industrial demonstration of ecological interface forms based on integrated task- and work domain-based work requirements.
---
paper_title: Brittleness in the design of cooperative problem-solving systems: the effects on user performance
paper_content:
One of the critical problems in the design and use of advanced decision-support systems is their potential "brittleness". This brittleness can arise because of the inability of the designer to anticipate and design for all of the scenarios that could arise during the use of the system. The typical "safety valve" to deal with this problem is to keep a person "in the loop", requiring that person to apply his/her expertise in making the final decision on what actions to take. This paper provides empirical data on how the role of the decision support system can have a major impact on the effectiveness of this design strategy. Using flight planning for commercial airlines as a testbed, three alternative designs for a graphical flight planning tool were evaluated, using 27 dispatchers and 30 pilots as subjects. The results show that the presentation of a suggestion or recommendation by the computer early in the person's own problem evaluation can have a significant impact on that person's decision processes, influencing situation assessment and the evaluation of alternative solutions.
---
paper_title: Flight deck automation and task management
paper_content:
The purpose of the paper is to show that recent flight deck automation human factors research suggests that attention allocation or task management is a critical safety issue in advanced technology aircraft, to relate that finding to task management research, and to suggest a course for future research to address that issue.
---
paper_title: Using multiple cognitive task analysis methods for supervisory control interface design in high-throughput biological screening processes
paper_content:
Cognitive task analysis (CTA) approaches are currently needed in many domains to provide explicit guidance on redesigning existing systems. This study used goal-directed task analysis (GDTA) along with abstraction hierarchy (AH) modeling to characterize the knowledge structure of biopharmacologists in planning, executing and analyzing the results of high-throughput organic compound screening operations, as well as the lab automation and equipment used in these operations. It was hypothesized that combining the results of the GDTA and AH models would provide a better understanding of complex system operator needs and how they may be addressed by existing technologies, as well as facilitate identification of automation and system interface design limitations. We used comparisons of the GDTA and AH models along with taxonomies of usability heuristics and types of automation in order to formulate interface design and automation functionality recommendations for existing software applications used in biological screening experiments. The proposed methodology yielded useful recommendations for improving custom supervisory control applications that led to prototypes of interface redesigns. The approach was validated through an expert usability evaluation of the redesigns and was shown to be applicable to the life sciences domain.
---
paper_title: Using GOMS for user interface design and evaluation: which technique?
paper_content:
Since the seminal book, The Psychology of Human-Computer Interaction, the GOMS model has been one of the few widely known theoretical concepts in human-computer interaction. This concept has spawned much research to verify and extend the original work and has been used in real-world design and evaluation situations. This article synthesizes the previous work on GOMS to provide an integrated view of GOMS models and how they can be used in design. We briefly describe the major variants of GOMS that have matured sufficiently to be used in actual design. We then provide guidance to practitioners about which GOMS variant to use for different design situations. Finally, we present examples of the application of GOMS to practical design problems and then summarize the lessons learned.
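One concrete member of the GOMS family, the keystroke-level model, lends itself to a very small worked example: sum standard operator times over the method a user would execute. The operator durations below are the commonly cited textbook values (they vary slightly between sources) and the task sequence is invented, so treat the estimate as illustrative rather than as the procedure recommended in the article.

# Commonly cited keystroke-level model operator times (seconds); indicative only.
KLM = {
    "K": 0.28,   # keystroke (average non-secretary typist)
    "P": 1.10,   # point with mouse
    "B": 0.10,   # mouse button press or release
    "H": 0.40,   # home hands between keyboard and mouse
    "M": 1.35,   # mental preparation
}

def klm_estimate(sequence):
    """sequence: string of operator codes, e.g. 'MHPBBHMKKKK'."""
    return sum(KLM[op] for op in sequence)

# Example: think, reach for mouse, point, click, return to keyboard,
# think, then type a four-character value.
print(round(klm_estimate("MHPBBHM" + "K" * 4), 2), "seconds")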
---
paper_title: Spatial Awareness in Synthetic Vision Systems: Using Spatial and Temporal Judgments to Evaluate Texture and Field of View
paper_content:
Objective: This work introduced judgment-based measures of spatial awareness and used them to evaluate terrain textures and fields of view (FOVs) in synthetic vision system (SVS) displays. Background: SVSs are cockpit technologies that depict computer-generated views of terrain surrounding an aircraft. In the assessment of textures and FOVs for SVSs, no studies have directly measured the three levels of spatial awareness with respect to terrain: identification of terrain, its relative spatial location, and its relative temporal location. Methods: Eighteen pilots made four judgments (relative azimuth angle, distance, height, and abeam time) regarding the location of terrain points displayed in 112 noninteractive 5-s simulations of an SVS head-down display. There were two between-subject variables (texture order and FOV order) and five within-subject variables (texture, FOV, and the terrain point's relative azimuth angle, distance, and height). Results: Texture produced significant main and interaction effects for the magnitude of error in the relative angle, distance, height, and abeam time judgments. FOV interaction effects were significant for the directional magnitude of error in the relative distance, height, and abeam time judgments. Conclusion: Spatial awareness was best facilitated by the elevation fishnet (EF), photo fishnet (PF), and photo elevation fishnet (PEF) textures. Application: This study supports the recommendation that the EF, PF, and PEF textures be further evaluated in future SVS experiments. Additionally, the judgment-based spatial awareness measures used in this experiment could be used to evaluate other display parameters and depth cues in SVSs.
---
paper_title: Interactive Critiquing as a Form of Decision Support: An Empirical Evaluation
paper_content:
This research focused on the design of a decision-support system to assist blood bankers in identifying alloantibodies in patients' blood. It was hypothesized that critiquing, a technique in which a computer monitors human performance for errors, would be an effective role for such a decision-support system if the error monitoring was unobtrusive and if the critiquing was in response to both intermediate and final conclusions made by the user. A prototype critiquing system monitored medical technologists for (a) errors of commission and errors of omission, (b) failure to follow a complete protocol, (c) answers inconsistent with the data collected, and (d) answers inconsistent with prior probability information. Participants using the critiquing system had significantly better performance (completely eliminating misdiagnosis rates for 3 out of 4 test cases) than a comparable control group. Detailed analysis of the behavioral protocols provided insights into how specific design features influenced performance.
---
paper_title: Designing Effective Human-Automation-Plant Interfaces: A Control-Theoretic Perspective
paper_content:
In this article, we propose the application of a control-theoretic framework to human-automation interaction. The framework consists of a set of conceptual distinctions that should be respected in automation research and design. We demonstrate how existing automation interface designs in some nuclear plants fail to recognize these distinctions. We further show the value of the approach by applying it to modes of automation. The design guidelines that have been proposed in the automation literature are evaluated from the perspective of the framework. This comparison shows that the framework reveals insights that are frequently overlooked in this literature. A new set of design guidelines is introduced that builds upon the contributions of previous research and draws complementary insights from the control-theoretic framework. The result is a coherent and systematic approach to the design of human-automation-plant interfaces that will yield more concrete design criteria and a broader set of design tools. Applications of this research include improving the effectiveness of human-automation interaction design and the relevance of human-automation interaction research.
---
paper_title: A specifier's introduction to formal methods
paper_content:
Formal methods used in developing computer systems (i.e. mathematically based techniques for describing system properties) are defined, and their role is delineated. Formal specification languages, which provide the formal method's mathematical basis, are examined. Certain pragmatic concerns about formal methods and their users, uses, and characteristics are discussed. Six well-known or commonly used formal methods are illustrated by simple examples. They are Z, VDM, Larch, temporal logic, CSP, and transition axioms.
---
paper_title: An improvement in formal verification
paper_content:
Critical safety and liveness properties of a concurrent system can often be proven with the help of a reachability analysis of a finite state model. This type of analysis is usually implemented as a depth-first search of the product state space of all components in the system, with each (finite state) component modeling the behavior of one asynchronously executing process. Formal verification is achieved by coupling the depth-first search with a method for identifying those states or sequences of states that violate the correctness requirements. It is well known, however, that an exhaustive depth-first search of this type performs redundant work. The redundancy is caused by the many possible interleavings of independent actions in a concurrent system. Few of these interleavings can alter the truth or falsity of the correctness properties being studied. The standard depth-first search algorithm can be modified to track additional information about the interleavings that have already been inspected, and use this information to avoid the exploration of redundant interleavings. Care must be taken to perform the reductions in such a way that the capability to prove both safety and liveness properties is fully preserved. Not all known methods have this property. Another potential drawback of the existing methods is that the additional computations required to enforce a reduction during the search can introduce overhead that diminishes the benefits. In this paper we discuss a new reduction method that solves some of these problems.
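The baseline algorithm the paper starts from, an exhaustive depth-first search of the product state space checked against a safety property, can be sketched as follows; the two toy processes and the "bad" predicate are invented, and the partial-order reduction that is the paper's actual contribution is not shown.

def successors(state, processes):
    """Interleaving semantics: any one process may take its next step."""
    for i, pc in enumerate(state):
        for nxt in processes[i].get(pc, []):
            yield state[:i] + (nxt,) + state[i + 1:]

def check_safety(initial, processes, bad):
    """Exhaustive DFS over the product state space; returns a counterexample
    path to a bad state, or None if the safety property holds."""
    stack, seen = [(initial, [initial])], {initial}
    while stack:
        state, path = stack.pop()
        if bad(state):
            return path
        for nxt in successors(state, processes):
            if nxt not in seen:
                seen.add(nxt)
                stack.append((nxt, path + [nxt]))
    return None

# Two toy processes that each move 0 -> 1 -> 2; the "bad" states are those
# where both are simultaneously in their critical location 1.
procs = [{0: [1], 1: [2]}, {0: [1], 1: [2]}]
print(check_safety((0, 0), procs, bad=lambda s: s == (1, 1)))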
---
paper_title: A machine program for theorem-proving
paper_content:
The programming of a proof procedure is discussed in connection with trial runs and possible improvements.
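The proof procedure described in this paper underlies what is now called the DPLL algorithm for propositional satisfiability; a compact and deliberately unoptimised sketch of the splitting rule with unit propagation is given below, run on an invented CNF formula.

def unit_propagate(clauses, assignment):
    changed = True
    while changed:
        changed = False
        for clause in clauses:
            unassigned = [l for l in clause if abs(l) not in assignment]
            if any(assignment.get(abs(l)) == (l > 0) for l in clause):
                continue                      # clause already satisfied
            if not unassigned:
                return None                   # conflict: clause falsified
            if len(unassigned) == 1:          # unit clause forces a value
                lit = unassigned[0]
                assignment[abs(lit)] = lit > 0
                changed = True
    return assignment

def dpll(clauses, assignment=None):
    assignment = dict(assignment or {})
    if unit_propagate(clauses, assignment) is None:
        return None
    free = {abs(l) for c in clauses for l in c} - set(assignment)
    if not free:
        return assignment
    var = min(free)
    for value in (True, False):               # splitting rule
        result = dpll(clauses, {**assignment, var: value})
        if result is not None:
            return result
    return None

# (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
print(dpll([[1, 2], [-1, 3], [-2, -3]]))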
---
paper_title: Counterexample-guided abstraction refinement for symbolic model checking
paper_content:
The state explosion problem remains a major hurdle in applying symbolic model checking to large hardware designs. State space abstraction, having been essential for verifying designs of industrial complexity, is typically a manual process, requiring considerable creativity and insight.In this article, we present an automatic iterative abstraction-refinement methodology that extends symbolic model checking. In our method, the initial abstract model is generated by an automatic analysis of the control structures in the program to be verified. Abstract models may admit erroneous (or "spurious") counterexamples. We devise new symbolic techniques that analyze such counterexamples and refine the abstract model correspondingly. We describe aSMV, a prototype implementation of our methodology in NuSMV. Practical experiments including a large Fujitsu IP core design with about 500 latches and 10000 lines of SMV code confirm the effectiveness of our approach.
---
paper_title: Verifying Invariants Using theorem Proving
paper_content:
Our goal is to use a theorem prover in order to verify invariance properties of distributed systems in a “model checking like” manner. A system S is described by a set of sequential components, each one given by a transition relation and a predicate Init defining the set of initial states. In order to verify that P is an invariant of S, we try to compute, in a model checking like manner, the weakest predicate P′ stronger than P and weaker than Init which is an inductive invariant, that is, whenever P′ is true in some state, then P′ remains true after the execution of any possible transition. The fact that P is an invariant can be expressed by a set of predicates (having no more quantifiers than P) on the set of program variables, one for every possible transition of the system. In order to prove these predicates, we use either automatic or assisted theorem proving depending on their nature.
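The "model checking like" use of a prover can be illustrated with an SMT solver: to show that P is an inductive invariant, one checks that the initial states satisfy P and that P is preserved by every transition. The sketch below assumes the z3 Python bindings (z3-solver) are installed and uses a trivial counter system invented for the example; it is not the paper's procedure for strengthening P.

from z3 import Ints, Solver, Implies, And, Not, unsat

x, x_next = Ints("x x_next")

init = x == 0                      # initial states
trans = x_next == x + 2            # single transition: increment by two
prop = x >= 0                      # candidate invariant P
prop_next = x_next >= 0            # P evaluated on the successor state

def valid(formula):
    """A formula is valid iff its negation is unsatisfiable."""
    s = Solver()
    s.add(Not(formula))
    return s.check() == unsat

# Base case: Init => P, and inductive step: P and T => P'.
print("base case:", valid(Implies(init, prop)))
print("inductive:", valid(Implies(And(prop, trans), prop_next)))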
---
paper_title: Abstract interpretation: a unified lattice model for static analysis of programs by construction or approximation of fixpoints
paper_content:
A program denotes computations in some universe of objects. Abstract interpretation of programs consists in using that denotation to describe computations in another universe of abstract objects, so that the results of abstract execution give some information on the actual computations. An intuitive example (which we borrow from Sintzoff [72]) is the rule of signs. The text -1515 * 17 may be understood to denote computations on the abstract universe {(+), (-), (±)} where the semantics of arithmetic operators is defined by the rule of signs. The abstract execution -1515 * 17 → -(+) * (+) → (-) * (+) → (-), proves that -1515 * 17 is a negative number. Abstract interpretation is concerned by a particular underlying structure of the usual universe of computations (the sign, in our example). It gives a summary of some facets of the actual executions of a program. In general this summary is simple to obtain but inaccurate (e.g. -1515 + 17 → -(+) + (+) → (-) + (+) → (±)). Despite its fundamentally incomplete results abstract interpretation allows the programmer or the compiler to answer questions which do not need full knowledge of program executions or which tolerate an imprecise answer, (e.g. partial correctness proofs of programs ignoring the termination problems, type checking, program optimizations which are not carried in the absence of certainty about their feasibility, …).
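The rule-of-signs example quoted in the abstract is small enough to execute directly: abstract each integer to its sign and give the operators abstract counterparts. The encoding below is a simplified illustration (a three-valued sign domain without a separate zero element), not the lattice framework developed in the paper.

# Abstract domain: '+' (positive), '-' (negative), '?' (unknown sign).
def alpha(n):
    return '+' if n > 0 else '-' if n < 0 else '?'

def abs_mul(a, b):
    if '?' in (a, b):
        return '?'
    return '+' if a == b else '-'          # rule of signs for multiplication

def abs_add(a, b):
    return a if a == b else '?'            # mixed signs: result unknown

# The paper's examples: -1515 * 17 is certainly negative,
# while -1515 + 17 has an unknown sign in this abstraction.
print(abs_mul(alpha(-1515), alpha(17)))    # '-'
print(abs_add(alpha(-1515), alpha(17)))    # '?'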
---
paper_title: Physigrams: modelling devices for natural interaction
paper_content:
This paper explores the formal specification of the physical behaviour of devices ‘unplugged’ from their digital effects. By doing this we seek to better understand the nature of physical interaction and the way this can be exploited to improve the design of hybrid devices with both physical and digital features. We use modified state transition networks of the physical behaviour, which we call physigrams, and link these to parallel diagrams of the digital state. These are used to describe a number of features of physical interaction exposed by previous work and relevant properties expressed using a formal semantics of the diagrams. As well as being an analytic tool, the physigrams have been used in a case study where product designers used and adapted them as part of the design process.
---
paper_title: A Review of Formalisms for Describing Interactive Behaviour
paper_content:
This paper reviews the state of research linking formal specification and interactive systems. An appreciation of Human Computer Interaction has become increasingly important within Software Engineering. As systems have become more complex there is an increasing awareness of the consequences of human error. As a result the formal specification of interactive behaviour has become a pressing topic of research. The notations considered here describe both the capabilities and resources of users in relation to a specific system and those aspects of an interactive system that must be analysed from a user perspective before implementation. The review concludes by surveying ongoing work which attempts to bridge the gap between disciplinary standpoints.
---
paper_title: Formal verification of human-automation interaction
paper_content:
This paper discusses a formal and rigorous approach to the analysis of operator interaction with machines. It addresses the acute problem of detecting design errors in human-machine interaction and focuses on verifying the correctness of the interaction in complex and automated control systems. The paper describes a systematic methodology for evaluating whether the interface provides the necessary information about the machine to enable the operator to perform a specified task successfully and unambiguously. It also addresses the adequacy of information provided to the user via training material (e.g., user manual) about the machine's behavior. The essentials of the methodology, which can be automated and applied to the verification of large systems, are illustrated by several examples and through a case study of pilot interaction with an autopilot aboard a modern commercial aircraft. The expected application of this methodology is an augmentation and enhancement, by formal verification, of human-automation interfaces.
---
paper_title: On the use of transition diagrams in the design of a user interface for an interactive computer system
paper_content:
This paper deals with what might be called the top level design of an interactive computer system. It examines some problems which arise in trying to specify what the user interface of such a system should be. It proposes a concept—the terminal state—and a notation—the terminal state transition diagram—which make the design of the top level somewhat easier. It also proposes a user interface in which the notion of terminal state is explicit. This user interface seems to provide a great improvement in flexibility and ease of adding subsystems to a general purpose system.
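A terminal-state transition diagram of the kind proposed here is easy to capture as a table-driven state machine; the states and commands in the Python sketch below are invented for illustration and do not come from the paper.

    # (terminal state, command) -> next terminal state; hypothetical subsystems.
    TRANSITIONS = {
        ("top_level", "edit"):    "editor",
        ("top_level", "mail"):    "mail_reader",
        ("editor", "quit"):       "top_level",
        ("mail_reader", "quit"):  "top_level",
    }

    def run(commands, state="top_level"):
        for cmd in commands:
            nxt = TRANSITIONS.get((state, cmd))
            if nxt is None:
                print("command '%s' is not available in terminal state '%s'" % (cmd, state))
                continue
            state = nxt
        return state

    print(run(["edit", "quit", "mail", "quit"]))   # ends back in 'top_level'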
---
paper_title: Press on: Principles of Interaction Programming
paper_content:
Choice Outstanding Academic Title, 2008, and Winner, Computer and Information Sciences category, 2007 Professional/Scholarly Publishing Awards for Excellence Competition presented by the Association of American Publishers, Inc. Interactive systems and devices, from mobile phones to office copiers, do not fulfill their potential for a wide variety of reasons, not all of them technical. Press On shows that we can design better interactive systems and devices if we draw on sound computer science principles. It uses state machines and graph theory as a powerful and insightful way to analyze and design better interfaces and examines specific designs and creative solutions to design problems. Programmers, who have the technical knowledge that designers and users often lack, can be more creative and more central to interaction design than we might think. Sound programming concepts improve device design. Press On provides the insights, concepts and programming tools to improve usability. Knowing the computer science is fundamental, but Press On also shows how essential it is to have the right approaches to manage the design of systems that people use. Particularly for complex systems, the social, psychological and ethical concerns, the wider design issues, are crucial, and these are covered in depth. Press On highlights key principles throughout the text and provides cross-topic linkages between chapters and suggestions for further reading. Additional material, including all the program code used in the book, is available on an interactive web site. Press On is an essential textbook and reference for computer science students, programmers, and anyone interested in the design of interactive technologies.
---
paper_title: Analyzing interaction orderings with model checking
paper_content:
Human-computer interaction (HCI) systems control an ongoing interaction between end-users and computer-based systems. For software-intensive systems, a graphic user interface (GUI) is often employed for enhanced usability. Traditional approaches to validation of GUI aspects in HCI systems involve prototyping and live-subject testing. These approaches are limited in their ability to cover the set of possible human-computer interactions that a system may allow, since patterns of interaction may be long running and have large numbers of alternatives. In this paper, we propose a static analysis that is capable of reasoning about user-interaction properties of GUI portions of HCI applications written in Java using modern GUI frameworks, such as Swing. Our approach consists of partitioning an HCI application into three parts: the Swing library, the GUI implementation, i.e., code that interacts directly with Swing, and the underlying application. We develop models of each of these parts that preserve behavior relevant to interaction ordering. We describe how these models are generated and how we have customized a model checking framework to efficiently analyze their combination.
---
paper_title: Model checking graphical user interfaces using abstractions
paper_content:
Symbolic model checking techniques have been widely and successfully applied to statically analyze dynamic properties of hardware systems. Efforts to apply this same technology to the analysis of software systems have met with a number of obstacles, such as the existence of non-finite state-spaces. This paper investigates abstractions that make it possible to cost-effectively model check specifications of software for graphical user interface (GUI) systems. We identify useful abstractions for this domain and demonstrate that they can be incorporated into the analysis of a variety of systems with similar structural characteristics. The resulting domain-specific model checking yields fast verification of naturally occurring specifications of intended GUI behavior.
---
paper_title: A Formalism for the Specification of Operationally Embedded Reactive Systems
paper_content:
The Operational Procedure Information Model, presented in this paper, provides a formalism for the specification of the behavior of operationally embedded reactive systems found in aircraft guidance and navigation systems. The information model assigns semantic interpretations of the operational procedure construct to the elements of a finite state machine. The operational procedure construct captures the embedded operational behavior of the system over all the missions in the life-cycle. The finite state machine captures the reactive behavior of the system. ::: ::: ::: ::: The information model, captured in a database and interrogated through a graphical user-interface, can be used for simulation, analysis, and the generation of code and documentation.
---
paper_title: A formal model to handle the adaptability of multimodal user interfaces
paper_content:
In this paper we propose an approach for checking the adaptability property of multimodal User Interfaces (UIs) for systems used in dynamic environments like mobile phones and PDAs. The approach is based on a formal description of both the multimodal interaction and the property. The SMV model-checking formal technique is used for the verification process of the property. The approach is defined in two steps. First, the system is described using a formal model, and the property is specified using CTL (Computation Tree Logic) temporal logic. Then, we assume that the environment changes such that at most one modality of the system is disabled. For this purpose, Disable is defined as a formal operator that disables a modality in the system. The property is checked by using the SMV (Symbolic Model Verifier) model-checker on all systems resulting from disabling a modality of the system. The approach reduces the complexity of the model-checking process and allows the verification at earlier stages of the development life cycle. We apply this approach on a mobile phone case study.
---
paper_title: Formal Reasoning about Dialogue Properties with Automatic Support
paper_content:
One of the advantages of using formal methods in the design of human–computer interfaces is the possibility to reason about user interface properties. Model checking techniques provide a useful support to this end. This paper discusses the possibilities of verifying the properties of user interfaces and related problems, such as when the dialogue specification has an infinite number of states. We provide an example of a set of general user interface properties, and we show how these properties can be tailored for specific cases and thus be used as a framework to evaluate the design of the interactive application considered.
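A typical member of such a set of general properties is a reachability requirement, e.g. 'from every reachable dialogue state the user can return to the main menu'; the Python sketch below checks that property by explicit exploration of a toy dialogue graph (the graph and the property are illustrative assumptions, not taken from the paper).

    def reachable(graph, start):
        seen, stack = set(), [start]
        while stack:
            s = stack.pop()
            if s in seen:
                continue
            seen.add(s)
            stack.extend(graph.get(s, []))
        return seen

    def always_can_return(graph, start, target):
        # Explicit check of the CTL-style property AG EF target on a finite dialogue graph.
        return all(target in reachable(graph, s) for s in reachable(graph, start))

    dialogue = {"menu": ["search", "settings"], "search": ["results"],
                "results": ["menu"], "settings": ["menu"]}
    print(always_can_return(dialogue, "menu", "menu"))   # True for this toy dialogue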
---
paper_title: Formally verifying interactive systems: A review
paper_content:
Although some progress has been made in the development of principles to guide the designers of interactive systems, ultimately the only proven method of checking how usable a particular system is must be based on experiment. However, it is also the case that changes that occur at this late stage are very expensive. The need for early design checking increases as software becomes more complex and is designed to serve volume international markets and also as interactions between operators and automation in safety-critical environments become more complex. This paper reviews progress in the area of formal verification of interactive systems and proposes a short agenda for further work.
---
paper_title: Systematic Analysis of Control Panel Interfaces Using Formal Tools
paper_content:
The paper explores the role that formal modeling may play in aiding the visualization and implementation of usability requirements of a control panel. We propose that this form of analysis should become a systematic and routine aspect of the development of such interfaces. We use a notation for describing the interface that is convenient for software engineers to use, and describe a set of tools designed to make the process systematic and exhaustive.
---
paper_title: Validating Human-device Interfaces with Model Checking and Temporal Logic Properties Automatically Generated from Task Analytic Models
paper_content:
When evaluating designs of human-device interfaces for safety critical systems, it is very important that they be valid: support the goal-directed tasks they were designed to facilitate. Model checking is a type of formal analysis that is used to mathematically prove whether or not a model of a system does or does not satisfy a set of specification properties, usually written in a temporal logic. In the analysis of human-automation interaction, model checkers have been used to formally verify that human-device interface models are valid with respect to goal-directed tasks encoded in temporal logic properties. All of the previous work in this area has required that analysts manually specify these properties. Given the semantics of temporal logic and the complexity of task analytic behavior models, this can be very difficult. This paper describes a method that allows temporal logic properties to be automatically generated from task analytic models created early in the system design process. This allows analysts to use model checkers to validate that modeled human-device interfaces will allow human operators to successfully perform the necessary tasks with the system. The use of the method is illustrated with a patient controlled analgesia pump programming example. The method is discussed and avenues for future work are described.
---
paper_title: An integrated framework for the analysis of dependable interactive systems (IFADIS): Its tool support and evaluation
paper_content:
This paper discusses a method for the analysis of dependable interactive systems using model checking, and its support by a tool designed to make it accessible to a broader community. The method and the tool are designed to be of value to system engineers, usability engineers and software engineers. It has been designed to help usability engineers by making those aspects of the analysis relevant to them explicit while concealing those aspects of modelling and model checking that are not relevant. The paper presents the results of a user evaluation of the effectiveness of aspects of the tool and how it supports the proposed method.
---
paper_title: Automatic detection of interaction vulnerabilities in an executable specification
paper_content:
This paper presents an approach to providing designers with the means to detect Human-Computer Interaction (HCI) vulnerabilities without requiring extensive HCI expertise. The goal of the approach is to provide timely, useful analysis results early in the design process, when modifications are less expensive. The twin challenges of providing timely and useful analysis results led to the development and evaluation of computational analyses, integrated into a software prototyping toolset. The toolset, referred to as the Automation Design and Evaluation Prototyping Toolset (ADEPT) was constructed to enable the rapid development of an executable specification for automation behavior and user interaction. The term executable specification refers to the concept of a testable prototype whose purpose is to support development of a more accurate and complete requirements specification.
---
paper_title: HMI aspects of automotive climate control systems
paper_content:
In this paper we discuss a formal approach to the design and analysis of automotive systems, from a human-machine interaction (HMI) point of view. Specifically, we detail the behavior of a generic climate control system, present a statecharts model of this system, and discuss aspects of user interaction analysis. Several general principles for the design of climate control systems are illustrated and discussed. The topic of design patterns, in the context of a formal description of user interaction, is introduced, and two design patterns are illustrated and discussed.
---
paper_title: Interaction engineering using the IVY tool
paper_content:
This paper is concerned with support for the process of usability engineering. The aim is to use formal techniques to provide a systematic approach that is more traceable, and because it is systematic, repeatable. As a result of this systematic process some of the more subjective aspects of the analysis can be removed. The technique explores exhaustively those features of a specific design that fail to satisfy a set of properties. It also analyzes those aspects of the design where it is possible to quantify the cost of use. The method is illustrated using the example of a medical device. While many aspects of the approach and its tool support have already been discussed elsewhere, this paper builds on and contrasts an analysis of the same device provided by a third party and in so doing enhances the IVY tool.
---
paper_title: NUSMV: a new Symbolic Model Verifier
paper_content:
This paper describes NUSMV, a new symbolic model checker developed as a joint project between Carnegie Mellon University (CMU) and Istituto per la Ricerca Scientifica e Tecnologica (IRST). NUSMV is designed to be a well structured, open, flexible and documented platform for model checking. In order to make NUSMV applicable in technology transfer projects, it was designed to be very robust, close to the standards required by industry, and to allow for expressive specification languages. NUSMV is the result of the reengineering, reimplementation and extension of SMV [6], version 2.4.4 (SMV from now on). With respect to SMV, NUSMV has been extended and upgraded along three dimensions. First, from the point of view of the system functionalities, NUSMV features a textual interaction shell and a graphical interface, extended model partitioning techniques, and allows for LTL model checking. Second, the system architecture of NUSMV has been designed to be highly modular and open. The interdependencies between different modules have been separated, and an external, state of the art BDD package [8] has been integrated in the system kernel. Third, the quality of the implementation has been strongly enhanced. This makes NUSMV a robust, maintainable and well documented system, with a relatively easy to modify source code. NUSMV is available at http://nusmv.irst.itc.it/.
---
paper_title: Communicating sequential processes
paper_content:
This paper suggests that input and output are basic primitives of programming and that parallel composition of communicating sequential processes is a fundamental program structuring method. When combined with a development of Dijkstra's guarded command, these concepts are surprisingly versatile. Their use is illustrated by sample solutions of a variety of familiar programming exercises.
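A loose Python analogue of the style described above, with two processes communicating over channels built from queue.Queue; this is only an illustration of input and output as program primitives, not an implementation of CSP semantics or of guarded commands.

    import queue
    import threading

    def producer(out_ch):
        for x in range(3):
            out_ch.put(x)              # output primitive: send x on the channel
        out_ch.put(None)               # end-of-stream marker

    def doubler(in_ch, out_ch):
        while True:
            x = in_ch.get()            # input primitive: receive a value
            if x is None:
                out_ch.put(None)
                return
            out_ch.put(2 * x)

    a, b = queue.Queue(maxsize=1), queue.Queue(maxsize=1)
    workers = [threading.Thread(target=producer, args=(a,)),
               threading.Thread(target=doubler, args=(a, b))]
    for w in workers:
        w.start()
    while True:
        v = b.get()
        if v is None:
            break
        print(v)                       # prints 0, 2, 4
    for w in workers:
        w.join()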
---
paper_title: A formal methods approach to the analysis of mode confusion
paper_content:
The goal of the new NASA Aviation Safety Program (AvSP) is to reduce the civil aviation fatal accident rate by 80% in ten years and 90% in twenty years. This program is being driven by the accident data with a focus on the most recent history. Pilot error is the most commonly cited cause for fatal accidents (up to 70%) and obviously must be given major consideration in this program. While the greatest source of pilot error is the loss of situation awareness, mode confusion is increasingly becoming a major contributor as well. This paper will explore how formal models and analyses can be used to help eliminate mode confusion from flight deck designs and at the same time increase our confidence in the safety of the implementation. The paper is based upon interim results from a new project involving NASA Langley and Rockwell Collins in applying formal methods to a realistic business jet Flight Guidance System (FGS).
---
paper_title: A Rigorous View of Mode Confusion
paper_content:
Not only in aviation psychology, mode confusion is recognised as a significant safety concern. The notion is used intuitively in the pertinent literature, but with surprisingly different meanings. We present a rigorous way of modelling the human and the machine in a shared-control system. This enables us to propose a precise definition of mode and mode confusion. In our modelling approach, we extend the commonly used distinction between the machine and the user's mental model of it by explicitly separating these and their safety-relevant abstractions. Furthermore, we show that distinguishing three different interfaces during the design phase reduces the potential for mode confusion. A result is a new classification of mode confusions by cause, leading to a number of design recommendations for shared-control systems which help to avoid mode confusion problems. A further result is a foundation for detecting mode confusion problems by model checking.
---
paper_title: Safety-relevant mode confusions – modelling and reducing them
paper_content:
Mode confusions are a significant safety concern in safety-critical systems, for example in aircraft. A mode confusion occurs when the observed behaviour of a technical system is out of sync with the user's mental model of its behaviour. But the notion is described only informally in the literature. We present a rigorous way of modelling the user and the machine in a shared-control system. This enables us to propose precise definitions of ‘mode’ and ‘mode confusion’ for safety-critical systems. We then validate these definitions against the informal notions in the literature. A new classification of mode confusions by cause leads to a number of design recommendations for shared-control systems. These help in avoiding mode confusion problems. Our approach supports the automated detection of remaining mode confusion problems. We apply our approach practically to a wheelchair robot.
---
paper_title: Hybrid verification of an interface for an automatic landing
paper_content:
Modern commercial aircraft have extensive automation which helps the pilot by performing computations, obtaining data, and completing procedural tasks. The pilot display must contain enough information so that the pilot can correctly predict the aircraft's behavior, while not overloading the pilot with unnecessary information. Human-automation interaction is currently evaluated through extensive simulation. In this paper, using both hybrid and discrete-event system techniques, we show how one could mathematically verify that an interface contains enough information for the pilot to safely and unambiguously complete a desired maneuver. We first develop a nonlinear, hybrid model for the longitudinal dynamics of a large civil jet aircraft in an autoland/go-around maneuver. We find the largest controlled subset of the aircraft's flight envelope for which we can guarantee both safe landing and safe go-around. We abstract a discrete procedural model using this result, and verify a discrete formulation of the pilot display against it. An interface which fails this verification could result in nondeterministic or unpredictable behavior from the pilot's point of view.
---
paper_title: A bisimulation-based approach to the analysis of human-computer interaction
paper_content:
This paper discusses the use of formal methods for analysing human-computer interaction. We focus on the mode confusion problem that arises whenever the user thinks that the system is doing something while it is in fact doing another thing. We consider two kinds of models: the system model describes the actual behaviour of the system and the mental model represents the user's knowledge of the system. The user interface is modelled as a subset of system transitions that the user can control or observe. We formalize a full-control property which holds when a mental model and associated user interface are complete enough to allow proper control of the system. This property can be verified using model-checking techniques on the parallel composition of the two models. We propose a bisimulation-based equivalence relation on the states of the system and show that, if the system satisfies a determinism condition with respect to that equivalence, then minimization modulo that equivalence produces a minimal mental model that allows full-control of the system. We enrich our approach to take operating modes into account. We give experimental results obtained by applying a prototype implementation of the proposed techniques to a simple model of an air-conditioner.
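A very small sketch of the kind of analysis described above: explore the synchronous composition of a system LTS and a mental-model LTS and report states where the commands the mental model considers available differ from those the system actually accepts. The toy air-conditioner models and the simplified full-control criterion below are illustrative assumptions, not the paper's definitions.

    def enabled(trans, state):
        return {a for (s, a) in trans if s == state}

    def check_full_control(sys_trans, mm_trans, s0, m0, commands):
        # sys_trans / mm_trans: dict (state, action) -> next state; commands: user-initiated actions.
        seen, stack, problems = set(), [(s0, m0)], []
        while stack:
            s, m = stack.pop()
            if (s, m) in seen:
                continue
            seen.add((s, m))
            sys_cmds = enabled(sys_trans, s) & commands
            mm_cmds = enabled(mm_trans, m) & commands
            if sys_cmds != mm_cmds:
                problems.append((s, m, sys_cmds, mm_cmds))     # potential mode confusion
            for a in enabled(sys_trans, s) & enabled(mm_trans, m):
                stack.append((sys_trans[(s, a)], mm_trans[(m, a)]))
        return problems

    # Toy air-conditioner: the mental model wrongly believes 'off' works while defrosting.
    sys_trans = {("idle", "on"): "cooling", ("cooling", "off"): "idle",
                 ("cooling", "defrost"): "defrosting", ("defrosting", "done"): "cooling"}
    mm_trans = {("idle", "on"): "running", ("running", "off"): "idle",
                ("running", "defrost"): "running", ("running", "done"): "running"}
    print(check_full_control(sys_trans, mm_trans, "idle", "idle", {"on", "off", "defrost"}))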
---
paper_title: A method for predicting errors when interacting with finite state systems. How implicit learning shapes the user's knowledge of a system
paper_content:
This paper describes a method for predicting the errors that may appear when human operators or users interact with systems behaving as finite state systems. The method is a generalization of a method used for predicting errors when interacting with autopilot modes on modern, highly computerized airliners [Proc 17th Digital Avionics Sys Conf (DASC) (1998); Proc 10th Int Symp Aviat Psychol (1999)]. A cognitive model based on spreading activation networks is used for predicting the user's model of the system and its impact on the production of errors. The model strongly posits the importance of implicit learning in user–system interaction and its possible detrimental influence on users' knowledge of the system. An experiment conducted with Airbus Industrie and a major European airline on pilots' knowledge of autopilot behavior on the A340-200/300 confirms the model predictions, and in particular the impact of the frequencies with which specific state transitions and contexts are experienced.
---
paper_title: Models and Mechanized Methods that Integrate Human Factors into Automation Design
paper_content:
Recent work has shown a convergence between the Human Factors and Formal Methods communities that opens promising new directions for collaborative work in calculating, predicting, and analyzing the behavior of complex aeronautical systems and their operators. Previously it has been shown that fully automatic, finite-state verification techniques can be used to identify likely sources of mode confusion in existing systems; in this paper we focus on the use of these techniques in the design of new systems. We use a simple example to demonstrate how automated finite-state techniques can be used to explore autopilot design options, and then suggest additional applications for this technique, including the validation of empirically-derived, minimal mental models of autopilot behavior.
---
paper_title: On Preventing Telephony Feature Interactions which are Shared-Control Mode Confusions
paper_content:
We demonstrate that many undesired telephony feature interactions are also shared-control mode confusions. A mode confusion occurs when the observed behaviour of a technical system is out of sync with the behaviour of the user’s mental model of it. Several measures for preventing mode confusions are known in the literature on human-computer interaction. We show that these measures can be applied to this kind of feature interaction. We sketch several more measures for the telephony domain.
---
paper_title: Using Model Checking to Help Discover Mode Confusions and Other Automation Surprises
paper_content:
Automation surprises occur when an automated system behaves differently than its operator expects. If the actual system behavior and the operator's ‘mental model’ are both described as finite state transition systems, then mechanized techniques known as ‘model checking’ can be used automatically to discover any scenarios that cause the behaviors of the two descriptions to diverge from one another. These scenarios identify potential surprises and pinpoint areas where design changes, or revisions to training materials or procedures, should be considered. The mental models can be suggested by human factors experts, or can be derived from training materials, or can express simple requirements for ‘consistent’ behavior. The approach is demonstrated by applying the Murphi state exploration system to a ‘kill-the-capture’ surprise in the MD-88 autopilot. This approach does not supplant the contributions of those working in human factors and aviation psychology, but rather provides them with a tool to examine properties of their models using mechanized calculation. These calculations can be used to explore the consequences of alternative designs and cues, and of systematic operator error, and to assess the cognitive complexity of designs. The description of model checking is tutorial and is hoped to be accessible to those from the human factors community to whom this technology may be new.
---
paper_title: The Role of Working Memory on Measuring Mental Models of Physical Systems
paper_content:
Up until now there has been no agreement on what a mental model of a physical system is and how to infer the mental model a person has. This paper describes research aimed at solving these problems by proposing that a Mental Model is a dynamic representation created in working memory (WM) by combining information stored in long-term memory (LTM; the Conceptual Model of the system) and characteristics extracted from the environment. Three experiments tested hypotheses derived from this proposal. Implications for research on Mental Models are discussed.
---
paper_title: A formal framework for design and analysis of human-machine interaction
paper_content:
Automated systems are increasingly complex, making it hard to design interfaces for human operators. Human-machine interaction (HMI) errors like automation surprises are more likely to appear and lead to system failures or accidents. In previous work, we studied the problem of generating system abstractions, called mental models, that facilitate system understanding while allowing proper control of the system by operators as defined by the full-control property. Both the domain and its mental model have Labelled Transition Systems (LTS) semantics, and we proposed algorithms for automatically generating minimal mental models as well as checking full-control. This paper presents a methodology and an associated framework for using the above and other formal method based algorithms to support the design of HMI systems. The framework can be used for modelling HMI systems and analysing models against HMI vulnerabilities. The analysis can be used for validation purposes or for generating artifacts such as mental models, manuals and recovery procedures. The framework is implemented in the JavaPathfinder model checker. Our methodology is demonstrated on two examples, an existing benchmark of a medical device, and a model generated from the ADEPT toolset developed at NASA Ames. Guidelines about how ADEPT models can be translated automatically into JavaPathfinder models are also discussed.
---
paper_title: Protocol Verification as a Hardware Design Aid
paper_content:
The role of automatic formal protocol verification in hardware design is considered. Principles that maximize the benefits of protocol verification while minimizing the labor and computation required are identified. A novel protocol description language and verifier (both called Murphi) are described, along with experiences in applying them to two industrial protocols that were developed as part of hardware designs.
---
paper_title: Immediate observability of discrete event systems with application to user-interface design
paper_content:
A human interacting with a hybrid system is often presented, through information displays, with a simplified representation of the underlying system. This interface should not overwhelm the human with unnecessary information, and thus usually contains only a subset of information about the true system model, yet, if properly designed, represents an abstraction of the true system which the human is able to use to safely interact with the system [M. Heymann and A. Degani, 2002]. For cases in which the human interacts with all or part of the system from a remote location, and communication has a high cost, the need for a simple abstraction, which reduces the amount of information that must be transmitted, is of the utmost importance. The user should be able to immediately determine the actual state of the system, based on the information displayed through the interface. In this paper, we derive conditions for immediate observability in which the current state of the system can be unambiguously reconstructed from the output associated with the current state and the last or next event. Then, we show how to construct a discrete event system output function, which makes a system immediately observable, and apply this to a reduced state machine, which represents an interface.
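The 'output plus last event' variant of the property defined above can be checked directly on a small labelled machine; everything in the Python sketch below (the machine, the outputs, and the fact that the check ignores the initial state) is an illustrative simplification rather than the paper's formal condition.

    def immediately_observable(transitions, output):
        # transitions: iterable of (source, event, target); output: dict state -> displayed output.
        by_event = {}
        for _, e, t in transitions:
            by_event.setdefault(e, set()).add(t)
        for targets in by_event.values():
            outs = [output[t] for t in targets]
            if len(outs) != len(set(outs)):
                return False          # two states entered by the same event look identical
        return True

    # Toy interface: two faults entered via the same 'alarm' event.
    trans = [("ok", "alarm", "fault_a"), ("ok", "alarm", "fault_b"),
             ("fault_a", "reset", "ok"), ("fault_b", "reset", "ok")]
    print(immediately_observable(trans, {"ok": "OK", "fault_a": "!", "fault_b": "!"}))    # False
    print(immediately_observable(trans, {"ok": "OK", "fault_a": "A!", "fault_b": "B!"}))  # True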
---
paper_title: An automated method to detect potential mode confusions
paper_content:
Mode confusions are a type of "automation surprise"-circumstances where an automated system behaves differently than its operator expects. It is generally accepted that operators develop "mental models" for the behavior of automated systems and use these to guide their interaction with the systems concerned, so that an automation surprise results when the actual system behavior diverges from its operator's mental model. Complex systems are often structured into "modes" (for example, an autopilot might have different modes for altitude capture, altitude hold, and so on), and their behavior can change significantly across different modes. "Mode confusion" arises when the system is in a different mode than that assumed by its operator; this is a rich source of automation surprises, since the operator may interact with the system according to a mental model that is inappropriate for its actual mode. Mode confusions have been implicated in several recent crashes and other incidents, and are a growing source of concern in modern automated cockpits. If we accept that mode confusions are due to a mismatch between the actual behavior of a system and the mental model of its operator, then one way to look for potential mode confusions is to compare the design of the actual system against a mental model. There are two challenges here: how to get hold of a mental model, and how to do the comparison. Through observation, questionnaires, and other techniques, psychologists have been able to elicit the mental models of individual operators (typically pilots). However, comparison between a design and the mental model of a specific individual will provide only very specific information; we are interested in whether a design is prone to mode confusions, and for this purpose it is more useful to compare the design against a generic mental model rather than that of an individual.
---
paper_title: Formal Modeling and Analysis for Interactive Hybrid Systems
paper_content:
An effective strategy for discovering certain kinds of automation surprise and other problems in interactive systems is to build models of the participating (automated and human) agents and then explore all reachable states of the composed system looking for divergences between mental states and those of the automation. Various kinds of model checking provide ways to automate this approach when the agents can be modeled as discrete automata. But when some of the agents are continuous dynamical systems (e.g., airplanes), the composed model is a hybrid (i.e., mixed continuous and discrete) system and these are notoriously hard to analyze. We describe an approach for very abstract modeling of hybrid systems using relational approximations and their automated analysis using infinite bounded model checking supported by an SMT solver. When counterexamples are found, we describe how additional constraints can be supplied to direct counterexamples toward plausible scenarios that can be confirmed in high-fidelity simulation. The approach is illustrated through application to a known (and now corrected) human-automation interaction problem in Airbus aircraft.
---
paper_title: Formal Models for Cooperative Tasks: Concepts and an Application for En-Route Air-Traffic Control
paper_content:
This paper presents a proposal for specifying task models for cooperative applications that allow designers to describe the relationships between the activities performed by various users involved in cooperative environments. To this end we extend the ConcurTaskTree notation so that new information useful for describing complex cooperative applications can be clearly specified. An example of application to describe En-Route Air Traffic Control (ATC) is given to illustrate and clarify our approach.
---
paper_title: Generating phenotypical erroneous human behavior to evaluate human-automation interaction using model checking
paper_content:
Breakdowns in complex systems often occur as a result of system elements interacting in unanticipated ways. In systems with human operators, human-automation interaction associated with both normative and erroneous human behavior can contribute to such failures. Model-driven design and analysis techniques provide engineers with formal methods tools and techniques capable of evaluating how human behavior can contribute to system failures. This paper presents a novel method for automatically generating task analytic models encompassing both normative and erroneous human behavior from normative task models. The generated erroneous behavior is capable of replicating Hollnagel's zero-order phenotypes of erroneous action for omissions, jumps, repetitions, and intrusions. Multiple phenotypical acts can occur in sequence, thus allowing for the generation of higher order phenotypes. The task behavior model pattern capable of generating erroneous behavior can be integrated into a formal system model so that system safety properties can be formally verified with a model checker. This allows analysts to prove that a human-automation interactive system (as represented by the model) will or will not satisfy safety properties with both normative and generated erroneous human behavior. We present benchmarks related to the size of the state space and verification time of models to show how the erroneous human behavior generation process scales. We demonstrate the method with a case study: the operation of a radiation therapy machine. A potential problem resulting from a generated erroneous human action is discovered. A design intervention is presented which prevents this problem from occurring. We discuss how our method could be used to evaluate larger applications and recommend future paths of development.
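The zero-order phenotypes mentioned above can be generated mechanically from a normative action sequence; the Python sketch below does this for flat sequences only (omission, repetition, intrusion, and a forward 'jump' that skips several actions), which is a strong simplification of the task-model transformation described in the paper, and the pump-programming action names are invented.

    def phenotypes(normative, intrusions=("unrelated_action",)):
        # Generate single-deviation variants of a normative action sequence.
        variants = []
        for i in range(len(normative)):
            variants.append(("omission", normative[:i] + normative[i + 1:]))
            variants.append(("repetition", normative[:i + 1] + normative[i:]))
        for i in range(len(normative) + 1):
            for extra in intrusions:
                variants.append(("intrusion", normative[:i] + [extra] + normative[i:]))
        for i in range(len(normative)):
            for j in range(i + 2, len(normative) + 1):
                variants.append(("jump", normative[:i] + normative[j:]))   # skip ahead
        return variants

    normative = ["select_dose", "confirm_dose", "start_infusion"]
    for kind, seq in phenotypes(normative):
        print(kind, seq)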
---
paper_title: Formal Validation of HCI User Tasks
paper_content:
Our work focuses on the use of formal techniques in order to increase the quality of HCI software and of all the processes resulting from the development, verification, design and validation activities. This paper shows how the B formal technique can be used for user task modelling and validation. A trace-based semantics is used to describe either the HCI or the user tasks. Each task is modelled by a sequence of fired events. Each event is defined in the abstract specification and design of the HCI system.
---
paper_title: AMBOSS: A Task Modeling Approach for Safety-Critical Systems
paper_content:
In a recent project we created AMBOSS, a task modeling environment taking into account the special needs for safety-critical socio-technical systems. An AMBOSS task model allows the specification of relevant information concerning safety aspects. To achieve this we complemented task models with additional information elements and appropriate structures. These refer primarily to aspects of timing, spatial information, and communication. In this paper we give an introductory overview about AMBOSS and its contribution to modeling safety-critical systems. In addition, we present AmbossA, the visual pattern language for detecting particular constellations of interest within a task model.
---
paper_title: Using task analytic models to visualize model checker counterexamples
paper_content:
Model checking is a type of automated formal verification that searches a system model's entire state space in order to mathematically prove that the system does or does not meet desired properties. An output of most model checkers is a counterexample: an execution trace illustrating exactly how a specification was violated. In most analysis environments, this output is a list of the model variables and their values at each step in the execution trace. We have developed a language for modeling human task behavior and an automated method which translates instantiated models into a formal system model implemented in the language of the Symbolic Analysis Laboratory (SAL). This allows us to use model checking formal verification to evaluate human-automation interaction. In this paper we present an operational concept and design showing how our task modeling visual notation and system modeling architecture can be exploited to visualize counterexamples produced by SAL. We illustrate the use of our design with a model related to the operation of an automobile with a simple cruise control.
---
paper_title: Formal aspects of procedures: The problem of sequential correctness
paper_content:
A formal, model-based approach is proposed for the development and evaluation of the sequences of actions specified in procedures. The approach employs methodologies developed within the discipline of discrete-event and hybrid systems control. We demonstrate the proposed approach through an evaluation of a procedure for handling an irregular engine-start on board a modern commercial aircraft. In complex human-machine systems, successful operations depend on an elaborate set of procedures provided to the human operator. These procedures specify a detailed step-by-step process for configuring the machine during normal, abnormal, and emergency situations. The adequacy of these procedures is vitally important for the safe and efficient operation of any complex system. In high-risk endeavors such as aircraft operations, maritime, space flight, nuclear power production, and military operations, it is essential that these procedures be flawless, as the price of error may be unacceptable. When operating procedures are inadequate for the task, not only will the system's overall efficiency be thwarted, but there may also be tragic human and material consequences.
---
paper_title: Integrating model checking and HCI tools to help designers verify user interface properties
paper_content:
In this paper we present a method that aims to integrate the use of formal techniques in the design process of interactive applications, with particular attention to those applications where both usability and safety are main concerns. The method is supported by a set of tools. We will also discuss how the resulting environment can be helpful in reasoning about multi-user interactions using the task model of an interactive application. Examples are provided from a case study in the field of air traffic control.
---
paper_title: Validating Interactive System Design Through the Verification of Formal Task and System Models
paper_content:
This paper addresses the problem of the articulation between task modelling and system modelling in the design of interactive software. We aim at providing solutions allowing software designers to use task models efficiently during the design process, and to check that the software being built actually corresponds to the requirements elicited during the task analysis phase. The proposed approach is twofold: Firstly, we use the User Action Notation, a semi-formal task modelling formalism, and we present a translation scheme that transforms the User Action Notation constructs into Petri nets. Secondly, we use the Interactive Cooperative Objects formalism (based on Petri nets and on the object-oriented approach) to build the model of the system. We finally use the mathematical analysis techniques stemming from Petri net theory to analyse and validate the cooperation between task models and system model. The approach is presented through a case study, showing the User Action Notation task models, the equivalent Petri net models and the Interactive Cooperative Object system model.
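The Petri net side of the translation rests on the standard enabling and firing rule (a transition consumes tokens from its input places and produces tokens in its output places); the marking-update sketch below uses places invented for illustration and is not the paper's UAN-to-Petri-net scheme.

    def is_enabled(marking, pre):
        return all(marking.get(p, 0) >= n for p, n in pre.items())

    def fire(marking, pre, post):
        # Consume 'pre' tokens and produce 'post' tokens, returning the new marking.
        assert is_enabled(marking, pre), "transition is not enabled"
        m = dict(marking)
        for p, n in pre.items():
            m[p] -= n
        for p, n in post.items():
            m[p] = m.get(p, 0) + n
        return m

    # Toy net for a two-step user action: press, then release.
    marking = {"idle": 1, "pressed": 0}
    marking = fire(marking, {"idle": 1}, {"pressed": 1})       # user presses
    marking = fire(marking, {"pressed": 1}, {"idle": 1})       # user releases
    print(marking)                                             # {'idle': 1, 'pressed': 0}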
---
paper_title: The UAN: a user-oriented representation for direct manipulation interface designs
paper_content:
Many existing interface representation techniques, especially those associated with UIMS, are constructional and focused on interface implementation, and therefore do not adequately support a user-centered focus. But it is in the behavioral domain of the user that interface designers and evaluators do their work. We are seeking to complement constructional methods by providing a tool-supported technique capable of specifying the behavioral aspects of an interactive system–the tasks and the actions a user performs to accomplish those tasks. In particular, this paper is a practical introduction to use of the User Action Notation (UAN), a task- and user-oriented notation for behavioral representation of asynchronous, direct manipulation interface designs. Interfaces are specified in UAN as a quasihierarchy of asynchronous tasks. At the lower levels, user actions are associated with feedback and system state changes. The notation makes use of visually onomatopoeic symbols and is simple enough to read with little instruction. UAN is being used by growing numbers of interface developers and researchers. In addition to its design role, current research is investigating how UAN can support production and maintenance of code and documentation.
---
paper_title: A Systematic Approach to Model Checking Human–Automation Interaction Using Task Analytic Models
paper_content:
Formal methods are typically used in the analysis of complex system components that can be described as “automated” (digital circuits, devices, protocols, and software). Human-automation interaction has been linked to system failure, where problems stem from human operators interacting with an automated system via its controls and information displays. As part of the process of designing and analyzing human-automation interaction, human factors engineers use task analytic models to capture the descriptive and normative human operator behavior. In order to support the integration of task analyses into the formal verification of larger system models, we have developed the enhanced operator function model (EOFM) as an Extensible Markup Language-based, platform- and analysis-independent language for describing task analytic models. We present the formal syntax and semantics of the EOFM and an automated process for translating an instantiated EOFM into the model checking language Symbolic Analysis Laboratory. We present an evaluation of the scalability of the translation algorithm. We then present an automobile cruise control example to illustrate how an instantiated EOFM can be integrated into a larger system model that includes environmental features and the human operator's mission. The system model is verified using model checking in order to analyze a potentially hazardous situation related to the human-automation interaction.
---
paper_title: Error patterns: systematic investigation of deviations in task models
paper_content:
We propose a model-based approach to integrate human error analysis with task modelling, introducing the concept of Error Pattern. Error Patterns are prototypical deviations from abstract task models, expressed in a formal way by a model transformation. A collection of typical errors taken from the literature on human errors is described within our framework. The intent is that the human factors specialist will produce the task models taking an error-free perspective, producing small and useful task models. The specialist will then choose from the collection of error patterns, and selectively apply these patterns to parts of the original task model, thus producing a transformed model exhibiting erroneous user behaviour. This transformed task model can be used at various stages of the design process, to investigate the system's reaction to erroneous behaviour or to generate test sequences.
---
paper_title: Specifying and Analyzing Workflows for Automated Identification and Data Capture
paper_content:
Humans use computers to carry out tasks that neither is able to do easily alone: humans provide eyes, hands, and judgment while computers provide computation, networking, and storage. This symbiosis is especially evident in workflows where humans identify objects using bar codes or RFID tags and capture data about them for the computer. This Automated Identification and Data Capture (AIDC) is increasingly important in areas such as inventory systems and health care. Humans involved in AIDC follow simple rules and rely on the computer to catch mistakes; in complex situations this reliance can lead to mismatches between human workflows and system programming. In this paper we explore the design, implementation and formal modeling of AIDC for vital signs measurements in hospitals. To this end we describe the design of a wireless mobile medical mediator device that mediates between identifications, measurements, and updates of Electronic Health Records (EHRs). We implement this as a system Med2 that uses PDAs equipped with Bluetooth, WiFi, and RFID wireless capabilities. Using Communicating Sequential Processes (CSP) we jointly specify workflow and computer system operations and provide a formal analysis of the protections the system provides for user errors.
---
paper_title: ConcurTaskTrees: A Diagrammatic Notation for Specifying Task Models
paper_content:
In this paper we discuss a notation to describe task models, which can specify a wide range of temporal relationships among tasks. It is a compact and graphical notation, immediate both to use and understand. Its logical structure and the related automatic tool make it suitable for designing even large-sized applications.
---
paper_title: Preventing user errors by systematic analysis of deviations from the system task model
paper_content:
Interactive safety-critical applications have specific requirements that cannot be completely captured by traditional evaluation techniques. In this paper, we discuss how to perform a systematic inspection-based analysis to improve both usability and safety aspects of an application. The analysis considers a system prototype and the related task model and aims to evaluate what could happen when interactions and behaviours occur differently from what the system design assumes. We also provide a description and discussion of an application of this method to a case study in the air traffic control domain.
---
paper_title: Formally verifying human–automation interaction as part of a system model: limitations and tradeoffs
paper_content:
Both the human factors engineering (HFE) and formal methods communities are concerned with improving the design of safety-critical systems. This work discusses a modeling effort that leveraged methods from both fields to perform formal verification of human-automation interaction with a programmable device. This effort utilizes a system architecture composed of independent models of the human mission, human task behavior, human-device interface, device automation, and operational environment. The goals of this architecture were to allow HFE practitioners to perform formal verifications of realistic systems that depend on human-automation interaction in a reasonable amount of time using representative models, intuitive modeling constructs, and decoupled models of system components that could be easily changed to support multiple analyses. This framework was instantiated using a patient controlled analgesia pump in a two phased process where models in each phase were verified using a common set of specifications. The first phase focused on the mission, human-device interface, and device automation; and included a simple, unconstrained human task behavior model. The second phase replaced the unconstrained task model with one representing normative pump programming behavior. Because models produced in the first phase were too large for the model checker to verify, a number of model revisions were undertaken that affected the goals of the effort. While the use of human task behavior models in the second phase helped mitigate model complexity, verification time increased. Additional modeling tools and technological developments are necessary for model checking to become a more usable technique for HFE.
---
paper_title: Evaluating human-automation interaction using task analytic behavior models, strategic knowledge-based erroneous human behavior generation, and model checking
paper_content:
Human-automation interaction, including erroneous human behavior, is a factor in the failure of complex, safety-critical systems. This paper presents a method for automatically generating task analytic models encompassing both erroneous and normative human behavior from normative task models by manipulating modeled strategic knowledge. Resulting models can be automatically translated into larger formal system models so that safety properties can be formally verified with a model checker. This allows analysts to prove that a human-automation interactive system (as represented by the formal model) will or will not satisfy safety properties with both normative and generated erroneous human behavior. This method is illustrated with a case study: the programming of a patient-controlled analgesia pump. In this example, a problem resulting from a generated erroneous human behavior is discovered and a potential solution is explored. Future research directions are discussed.
---
paper_title: Toward a multi-method approach to formalizing human-automation interaction and human-human communications
paper_content:
Breakdowns in complex systems often occur as a result of system elements interacting in ways unanticipated by analysts or designers. The use of task behavior as part of a larger, formal system model is potentially useful for analyzing such problems because it allows the ramifications of different human behaviors to be verified in relation to other aspects of the system. A component of task behavior largely overlooked to date is the role of human-human interaction, particularly human-human communication in complex human-computer systems. We are developing a multi-method approach based on extending the Enhanced Operator Function Model language to address human agent communications (EOFMC). This approach includes analyses via theorem proving and future support for model checking linked through the EOFMC top level XML description. Herein, we consider an aviation scenario in which an air traffic controller needs a flight crew to change the heading for spacing. Although this example, at first glance, seems to be one simple task, on closer inspection we find that it involves local human-human communication, remote human-human communication, multi-party communications, communication protocols, and human-automation interaction. We show how all these varied communications can be handled within the context of EOFMC.
---
paper_title: Formal and experimental validation approaches in HCI systems design based on a shared event B model
paper_content:
The development of user interfaces (UI) needs validation and verification of a set of required properties. Different kinds of properties are relevant to the human computer interaction (HCI) area. Not all of them may be checked using classical software engineering validation and verification tools. Indeed, a large proportion of the properties are related to the user and to usability. Moreover, these kinds of properties usually require experimental validation. This paper addresses the cooperation between formal and experimental validation and verification of HCI properties. It focuses on a proof-based technique (event B) and a Model Based System (MBS) based technique (SUIDT). Moreover, this paper tries to bridge the gap between both approaches in order to reduce the heterogeneity they lead to.
---
paper_title: A Method for the Formal Verification of Human-interactive Systems
paper_content:
Predicting failures in complex, human-interactive systems is difficult as they may occur under rare operational conditions and may be influenced by many factors including the system mission, the human operator's behavior, device automation, human-device interfaces, and the operational environment. This paper presents a method that integrates task analytic models of human behavior with formal models and model checking in order to formally verify properties of human-interactive systems. This method is illustrated with a case study: the programming of a patient controlled analgesia pump. Two specifications, one of which produces a counterexample, illustrate the analysis and visualization capabilities of the method.
---
paper_title: Formal socio-technical barrier modelling for safety-critical interactive systems design
paper_content:
This paper presents a three step approach to improve safety in the field of interactive systems. The approach combines, within a single framework, previous work in the field of barrier analysis and modelling, with model based design of interactive systems. The approach first uses the Safety Modelling Language to specify safety barriers which could achieve risk reduction if implemented. The detailed mechanism by which these barriers behave is designed in the subsequent stage, using a Petri nets-based formal description technique called Interactive Cooperative Objects. One of the main characteristics of interactive systems is the fact that the user is deeply involved in the operation of such systems. This paper addresses this issue of user behaviour by modelling tasks and activities using the same notation as for the system side (both barriers and interactive system). The use of a formal modelling technique for the description of these three components makes it possible to compare, analyse and integrate them. The approach and the integration are presented on a mining case study. Two safety barriers are modelled as well as the relevant parts of the interactive system behaviour. Operators’ tasks are also modelled. The paper then shows how the integration of barriers within the system model can prevent previously identified hazardous sequences of events from occurring, thus increasing the entire system safety.
---
paper_title: A Discrete Control Model of Operator Function: A Methodology for Information Display Design
paper_content:
Recent advances in computer technology and the changing role of the human in complex systems require changes in design strategies for information displays. The use of discrete control models to represent the human operator's cognitive and decision-making activities is described. The analytic procedures required to build a discrete control model show promise as a basis of a design methodology for the definition of an information display system for supervisory control tasks. The discrete control modeling procedures and their application to a simulated system are demonstrated.
---
paper_title: The phenotype of erroneous actions
paper_content:
The study of human actions with unwanted consequences, in this paper referred to as human erroneous actions, generally suffers from inadequate operational taxonomies. The main reason for this is the lack of a clear distinction between manifestations and causes. The failure to make this distinction is due to the reliance on subjective evidence which unavoidably mixes manifestations and causes. The paper proposes a clear distinction between the phenotypes (manifestations) and the genotypes (causes) of erroneous actions. A logical set of phenotypes is developed and compared with the established "human error" taxonomies as well as with the operational categories which have been developed in the field of human reliability analysis. The principles for applying the set of phenotypes as practical classification criteria are developed and described. A further illustration is given by the report of an action monitoring system (RESQ) which has been implemented as part of a larger set of operator support systems and which shows the viability of the concepts. The paper concludes by discussing the principal issues of error detection, in particular the trade-off between precision and meaningfulness.
---
paper_title: Programmable user models for predictive evaluation of interface designs
paper_content:
A Programmable User Model (PUM) is a psychologically constrained architecture which an interface designer is invited to program to simulate a user performing a range of tasks with a proposed interface. It provides a novel way of conveying psychological considerations to the designer, by involving the designer in the process of making predictions of usability. Development of the idea leads to a complementary perspective, of the PUM as an interpreter for an “instruction language”. The methodology used in this research involves the use of concrete HCI scenarios to assess different approaches to cognitive modelling. The research findings include analyses of the cognitive processes involved in the use of interactive computer systems, and a number of issues to be resolved in future cognitive models.
---
paper_title: Model-checking user behaviour using interacting components
paper_content:
This article describes a framework to formally model and analyse human behaviour. This is shown by a simple case study of a chocolate vending machine, which represents many aspects of human behaviour. The case study is modelled and analysed using the Maude rewrite system. This work extends previous work by Basuki, which attempts to model interactions between human and machine and analyse the possibility of errors occurring in the interactions. By redesigning the interface, it can be shown that certain kinds of error can be avoided for some users. This article overcomes the limitation of Basuki’s approach by incorporating many aspects of user behaviour into a single user model, and introduces a more natural approach to modelling human–computer interaction.
---
paper_title: Formally Justifying User-Centred Design Rules: A Case Study on Post-completion Errors
paper_content:
Interactive systems combine a human operator with a computer. Either may be a source of error. The verification processes used must ensure both the correctness of the computer component, and also minimize the risk of human error. Human-centred design aims to do this by designing systems in a way that makes allowance for human frailty. One approach to such design is to adhere to design rules. Design rules, however, are often ad hoc. We examine how a formal cognitive model, encapsulating results from the cognitive sciences, can be used to justify such design rules in a way that integrates their use with existing formal hardware verification techniques. We consider here the verification of a design rule intended to prevent a commonly occurring class of human error known as the post-completion error.
---
paper_title: Modelling Distributed Cognition Systems in PVS
paper_content:
We report on our efforts to formalise DiCoT, an informal structured approach for analysing complex work systems, such as hospital and day care units, as distributed cognition systems. We focus on DiCoT's information flow model, which describes how information is transformed and propagated in the system. Our contribution is a set of generic models for the specification and verification system PVS. The developed models can be directly mapped to the informal descriptions adopted by human-computer interaction experts. The models can be verified against properties of interest in the PVS theorem prover. Also, the same models can be simulated, thus helping analysts engage with stakeholders when checking the correctness of the model. We trial our ideas on a case study based on a real-world medical system.
---
paper_title: Models of interactive systems : a case study on Programmable User Modelling
paper_content:
Models of interactive systems can be used to answer focused questions about those systems. Making the appropriate choice of modelling technique depends on what questions are being asked. We present two styles of interactive system model and associated verification method. We show how they contrast in terms of tractability, inspectability of assumptions, level of abstraction and reusability of model fragments. These trade-offs are discussed. We discuss how they can be used as part of an integrated formal approach to the analysis of interactive systems where the different formal techniques focus on specific problems raised by empirical investigations. Explanations resulting from the formal analyses can be validated with respect to the empirical data.The first modelling style, which we term 'operational', is derived directly from principles of rationality that constrain which user behaviours are modelled. Modelling involves laying out user knowledge of the system and task, and their goals, then applying the principles to reason about the space of rational behaviours. This style supports reasoning about user knowledge and the consequences of particular knowledge in terms of likely behaviours. It is well suited to reasoning about interactions where user knowledge is a key to successful interaction. Such models can readily be implemented as computer programs; one such implementation is presented here.Models of the second style, 'abstract', are derived from the operational models and thus retain important aspects of rationality. As a result of the simplification, mathematical proof about selected properties of the interactive system, such as safety properties, can be tractably applied to these models. This style is well suited to cases where the user adopts particular strategies that can be represented succinctly within the model.We demonstrate the application of the two styles for understanding a reported phenomenon, using a case study on electronic diaries.
---
paper_title: Formal Modelling of Salience and Cognitive Load
paper_content:
Well-designed interfaces use procedural and sensory cues to increase the salience of appropriate actions and intentions. However, empirical studies suggest that cognitive load can influence the strength of procedural and sensory cues. We formalise the relationship between salience and cognitive load revealed by empirical data. We add these rules to our abstract cognitive architecture developed for the verification of usability properties. The interface of a fire engine dispatch task used in the empirical studies is then formally verified to assess the salience and load rules. Finally, we discuss how the formal modelling and verification suggests further refinements of the rules derived from the informal analysis of empirical data.
---
paper_title: Demonstrating the Cognitive Plausibility of Interactive System Specifications
paper_content:
Much of the behaviour of an interactive system is determined by its user population. This paper describes how assumptions about the user can be brought into system models in order to reason about their behaviour. We describe a system model containing reasonable assumptions about the user as being ‘cognitively plausible’. Before asserting the plausibility of a model however we must first be able to make the assumptions made in that model inspectable.
---
paper_title: Formal Modelling of Cognitive Interpretation
paper_content:
We formally specify the interpretation stage in a dual state space human-computer interaction cycle. This is done by extending / reorganising our previous cognitive architecture. In particular, we focus on shape related aspects of the interpretation process associated with device input prompts. A cash-point example illustrates our approach. Using the SAL model checking environment, we show how the extended cognitive architecture facilitates detection of prompt-shape induced human error.
---
paper_title: Formal analysis of human-computer interaction using model-checking
paper_content:
Experiments with simulators allow psychologists to better understand the causes of human errors and build models of cognitive processes to be used in human reliability assessment (HRA). This paper investigates an approach to task failure analysis based on patterns of behaviour, by contrast to more traditional event-based approaches. It considers, as a case study, a formal model of an air traffic control (ATC) system which incorporates controller behaviour. The cognitive model is formalised in the CSP process algebra. Patterns of behaviour are expressed as temporal logic properties. Then a model-checking technique is used to verify whether the decomposition of the operator's behaviour into patterns is sound and complete with respect to the cognitive model. The decomposition is shown to be incomplete and a new behavioural pattern is identified, which appears to have been overlooked in the analysis of the data provided by the experiments with the simulator. This illustrates how formal analysis of operator models can yield fresh insights into how failures may arise in interactive systems.
---
paper_title: Users as rational interacting agents: formalising assumptions about cognition and interaction
paper_content:
One way of assessing the usability of a computer system is to make reasonable assumptions about users’ cognition and to analyse how they can be expected to work with the system, using their knowledge and information from the display to achieve their goals. This is the approach taken in Programmable User Modelling Analysis, a technique for predictive usability evaluation of interactive systems. The technique is based on the premise that an analyst can gain insights into the usability of a computer system by specifying the knowledge that a user needs to be able to use it and drawing inferences on how that knowledge will guide the user’s behaviour. This may be done by observing how a cognitive architecture, “programmed” with that knowledge, behaves. An alternative approach is to develop a formal description of the essential features of the cognitive architecture and to use that description to reason about likely user behaviour. In this paper, we present the approach and an outline formal description of the cognitive architecture. This initial description is derived from an existing implementation. We illustrate how the description can be used in reasoning by applying it to the task of setting up call diverting on a mobile phone. Successful performance of this task involves a combination of planned and responsive behaviour. The process of doing this analysis highlights what assumptions have been made by the designers about the user’s knowledge. We discuss limitations of the current formalisation and identify directions for future work.
---
paper_title: Verification-guided modelling of salience and cognitive load
paper_content:
Well-designed interfaces use procedural and sensory cues to increase the cognitive salience of appropriate actions. However, empirical studies suggest that cognitive load can influence the strength of those cues. We formalise the relationship between salience and cognitive load revealed by empirical data. We add these rules to our abstract cognitive architecture, based on higher-order logic and developed for the formal verification of usability properties. The interface of a fire engine dispatch task from the empirical studies is then formally modelled and verified. The outcomes of this verification and their comparison with the empirical data provide a way of assessing our salience and load rules. They also guide further iterative refinements of these rules. Furthermore, the juxtaposition of the outcomes of formal analysis and empirical studies suggests new experimental hypotheses, thus providing input to researchers in cognitive science.
---
paper_title: From a Formal User Model to Design Rules
paper_content:
Design rules sometimes seem to contradict. We examine how a formal description of user behaviour can help explain the context when such rules are, or are not, applicable. We describe how they can be justified from a formally specified generic user model. This model was developed by formalising cognitively plausible behaviour, based on results from cognitive psychology. We examine how various classes of erroneous actions emerge from the underlying model. Our lightweight semiformal reasoning from the user model makes predictions that could be used as the basis for further usability studies. Although the user model is very simple, a range of error patterns and design principles emerge.
---
paper_title: An approach to formal verification of human–computer interaction
paper_content:
The correct functioning of interactive computer systems depends on both the faultless operation of the device and correct human actions. In this paper, we focus on system malfunctions due to human actions. We present abstract principles that generate cognitively plausible human behaviour. These principles are then formalised in a higher-order logic as a generic, and so retargetable, cognitive architecture, based on results from cognitive psychology. We instantiate the generic cognitive architecture to obtain specific user models. These are then used in a series of case studies on the formal verification of simple interactive systems. By doing this, we demonstrate that our verification methodology can detect a variety of realistic, potentially erroneous actions, which emerge from the combination of a poorly designed device and cognitively plausible human behaviour.
---
paper_title: Generating phenotypical erroneous human behavior to evaluate human-automation interaction using model checking
paper_content:
Breakdowns in complex systems often occur as a result of system elements interacting in unanticipated ways. In systems with human operators, human-automation interaction associated with both normative and erroneous human behavior can contribute to such failures. Model-driven design and analysis techniques provide engineers with formal methods tools and techniques capable of evaluating how human behavior can contribute to system failures. This paper presents a novel method for automatically generating task analytic models encompassing both normative and erroneous human behavior from normative task models. The generated erroneous behavior is capable of replicating Hollnagel's zero-order phenotypes of erroneous action for omissions, jumps, repetitions, and intrusions. Multiple phenotypical acts can occur in sequence, thus allowing for the generation of higher order phenotypes. The task behavior model pattern capable of generating erroneous behavior can be integrated into a formal system model so that system safety properties can be formally verified with a model checker. This allows analysts to prove that a human-automation interactive system (as represented by the model) will or will not satisfy safety properties with both normative and generated erroneous human behavior. We present benchmarks related to the size of the statespace and verification time of models to show how the erroneous human behavior generation process scales. We demonstrate the method with a case study: the operation of a radiation therapy machine. A potential problem resulting from a generated erroneous human action is discovered. A design intervention is presented which prevents this problem from occurring. We discuss how our method could be used to evaluate larger applications and recommend future paths of development.
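To make the idea of zero-order phenotypes more concrete, the sketch below generates erroneous variants of a normative action sequence by applying omissions, repetitions, intrusions, and a simplified form of jump. It is an illustration of the general idea only, not the EOFM-based generation pattern or task-model translation described in the paper; the action names and the intrusion alphabet are hypothetical.

```python
import random

# A hypothetical normative action sequence for programming an infusion device.
NORMATIVE = ["power_on", "enter_dose", "confirm_dose", "enter_rate", "confirm_rate", "start"]
INTRUSIONS = ["press_stop", "open_door"]  # hypothetical unrelated actions

def apply_phenotype(seq, kind, rng):
    """Return a copy of seq with one zero-order phenotype applied."""
    seq = list(seq)
    i = rng.randrange(len(seq))
    if kind == "omission":          # leave out one action
        del seq[i]
    elif kind == "repetition":      # perform one action twice
        seq.insert(i, seq[i])
    elif kind == "intrusion":       # insert an unrelated action
        seq.insert(i, rng.choice(INTRUSIONS))
    elif kind == "jump":            # simplified jump: swap two adjacent actions
        j = min(i + 1, len(seq) - 1)
        seq[i], seq[j] = seq[j], seq[i]
    return seq

rng = random.Random(0)
for kind in ["omission", "repetition", "intrusion", "jump"]:
    print(kind, apply_phenotype(NORMATIVE, kind, rng))
```

In the paper's approach these deviations are generated structurally within the task model and explored exhaustively by the model checker, rather than sampled at random as above.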
---
paper_title: Modelling Distributed Cognition Systems in PVS
paper_content:
We report on our efforts to formalise DiCoT, an informal structured approach for analysing complex work systems, such as hospital and day care units, as distributed cognition systems. We focus on DiCoT's information flow model, which describes how information is transformed and propagated in the system. Our contribution is a set of generic models for the specification and verification system PVS. The developed models can be directly mapped to the informal descriptions adopted by human-computer interaction experts. The models can be verified against properties of interest in the PVS theorem prover. Also, the same models can be simulated, thus helping analysts engage with stakeholders when checking the correctness of the model. We trial our ideas on a case study based on a real-world medical system.
---
paper_title: Capturing the distinction between task and device errors in a formal model of user behaviour
paper_content:
In any complex interactive human-computer system, people are likely to make errors during its operation. In this paper, we describe a validation study of an existing generic model of user behaviour. The study is based on the data and conclusions from an independent prior experiment. We show that the current model does successfully capture the key concepts investigated in the experiment, particularly relating to results to do with the distinction between task and device-specific errors. However, we also highlight some apparent weaknesses in the current model with respect to initialisation errors, based on comparison with previously unpublished (and more detailed) data from the experiment. The differences between data and observed model behaviour suggest the need for new empirical research to determine what additional factors are at work. We also discuss the potential use of formal models of user behaviour in both informing, and generating further hypotheses about the causes of human error.
---
paper_title: Validating Human-device Interfaces with Model Checking and Temporal Logic Properties Automatically Generated from Task Analytic Models
paper_content:
When evaluating designs of human-device interfaces for safety critical systems, it is very important that they be valid: support the goal-directed tasks they were designed to facilitate. Model checking is a type of formal analysis that is used to mathematically prove whether or not a model of a system does or does not satisfy a set of specification properties, usually written in a temporal logic. In the analysis of human-automation interaction, model checkers have been used to formally verify that human-device interface models are valid with respect to goal-directed tasks encoded in temporal logic properties. All of the previous work in this area has required that analysts manually specify these properties. Given the semantics of temporal logic and the complexity of task analytic behavior models, this can be very difficult. This paper describes a method that allows temporal logic properties to be automatically generated from task analytic models created early in the system design process. This allows analysts to use model checkers to validate that modeled human-device interfaces will allow human operators to successfully perform the necessary tasks with the system. The use of the method is illustrated with a patient controlled analgesia pump programming example. The method is discussed and avenues for future work are described.
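As an illustration of the kind of specification involved (not the paper's exact generation rules), a goal-directed programming task might be checked against temporal logic properties of roughly the following shape; the proposition names are hypothetical placeholders for task and interface states.

```latex
% Illustrative CTL-style properties; proposition names are hypothetical.
% 1. Whenever the programming task is ready, some interface path allows it
%    to complete with the prescribed value entered.
\mathbf{AG}\,\bigl(\mathit{TaskReady} \rightarrow \mathbf{EF}\,(\mathit{TaskDone} \wedge \mathit{EnteredDose} = \mathit{PrescribedDose})\bigr)
% 2. The task never terminates with an unconfirmed value.
\mathbf{AG}\,\neg(\mathit{TaskDone} \wedge \neg\mathit{Confirmed})
```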
---
paper_title: A Systematic Approach to Model Checking Human–Automation Interaction Using Task Analytic Models
paper_content:
Formal methods are typically used in the analysis of complex system components that can be described as “automated” (digital circuits, devices, protocols, and software). Human-automation interaction has been linked to system failure, where problems stem from human operators interacting with an automated system via its controls and information displays. As part of the process of designing and analyzing human-automation interaction, human factors engineers use task analytic models to capture the descriptive and normative human operator behavior. In order to support the integration of task analyses into the formal verification of larger system models, we have developed the enhanced operator function model (EOFM) as an Extensible Markup Language-based, platform- and analysis-independent language for describing task analytic models. We present the formal syntax and semantics of the EOFM and an automated process for translating an instantiated EOFM into the model checking language Symbolic Analysis Laboratory. We present an evaluation of the scalability of the translation algorithm. We then present an automobile cruise control example to illustrate how an instantiated EOFM can be integrated into a larger system model that includes environmental features and the human operator's mission. The system model is verified using model checking in order to analyze a potentially hazardous situation related to the human-automation interaction.
---
paper_title: Analyzing interaction orderings with model checking
paper_content:
Human-computer interaction (HCI) systems control an ongoing interaction between end-users and computer-based systems. For software-intensive systems, a graphic user interface (GUI) is often employed for enhanced usability. Traditional approaches to validation of GUI aspects in HCI systems involve prototyping and live-subject testing. These approaches are limited in their ability to cover the set of possible human-computer interactions that a system may allow, since patterns of interaction may be long running and have large numbers of alternatives. In this paper, we propose a static analysis that is capable of reasoning about user-interaction properties of GUI portions of HCI applications written in Java using modern GUI frameworks, such as Swing™. Our approach consists of partitioning an HCI application into three parts: the Swing library, the GUI implementation, i.e., code that interacts directly with Swing, and the underlying application. We develop models of each of these parts that preserve behavior relevant to interaction ordering. We describe how these models are generated and how we have customized a model checking framework to efficiently analyze their combination.
---
paper_title: Formally verifying human–automation interaction as part of a system model: limitations and tradeoffs
paper_content:
Both the human factors engineering (HFE) and formal methods communities are concerned with improving the design of safety-critical systems. This work discusses a modeling effort that leveraged methods from both fields to perform formal verification of human-automation interaction with a programmable device. This effort utilizes a system architecture composed of independent models of the human mission, human task behavior, human-device interface, device automation, and operational environment. The goals of this architecture were to allow HFE practitioners to perform formal verifications of realistic systems that depend on human-automation interaction in a reasonable amount of time using representative models, intuitive modeling constructs, and decoupled models of system components that could be easily changed to support multiple analyses. This framework was instantiated using a patient controlled analgesia pump in a two phased process where models in each phase were verified using a common set of specifications. The first phase focused on the mission, human-device interface, and device automation; and included a simple, unconstrained human task behavior model. The second phase replaced the unconstrained task model with one representing normative pump programming behavior. Because models produced in the first phase were too large for the model checker to verify, a number of model revisions were undertaken that affected the goals of the effort. While the use of human task behavior models in the second phase helped mitigate model complexity, verification time increased. Additional modeling tools and technological developments are necessary for model checking to become a more usable technique for HFE.
---
paper_title: Evaluating human-automation interaction using task analytic behavior models, strategic knowledge-based erroneous human behavior generation, and model checking
paper_content:
Human-automation interaction, including erroneous human behavior, is a factor in the failure of complex, safety-critical systems. This paper presents a method for automatically generating task analytic models encompassing both erroneous and normative human behavior from normative task models by manipulating modeled strategic knowledge. Resulting models can be automatically translated into larger formal system models so that safety properties can be formally verified with a model checker. This allows analysts to prove that a human-automation interactive system (as represented by the formal model) will or will not satisfy safety properties with both normative and generated erroneous human behavior. This method is illustrated with a case study: the programming of a patient-controlled analgesia pump. In this example, a problem resulting from a generated erroneous human behavior is discovered and a potential solution is explored. Future research directions are discussed.
---
paper_title: A tool-supported design framework for safety critical interactive systems
paper_content:
This paper presents a design framework for safety critical interactive systems, based on a formal description technique called the ICO (Interactive Cooperative Object) formalism. ICO allows for describing, in a formal way, all the components of highly interactive (also called post-WIMP) applications. The framework is supported by a CASE tool called PetShop allowing for editing, verifying and executing the formal models. The first section describes why such user interfaces are challenging for most description techniques, as well as the state of the art in this field. Section 3 presents a development process dedicated to the framework. Then, we use a case study in order to recall the basic concepts of the ICO formalism and the recent extensions added in order to take into account post-WIMP interfaces' specificities. Section 5 presents the CASE tool PetShop and how the case study presented in the previous section has been dealt with. Lastly, we show how PetShop can be used for interactive prototyping.
---
paper_title: Designing Safe, Reliable Systems using Scade
paper_content:
As safety critical systems increase in size and complexity, the need for efficient tools to verify their reliability grows. In this paper we present a tool that helps engineers design safe and reliable systems. Systems are reliable if they keep operating safely when components fail. Our tool is at the core of the Scade Design Verifier integrated within Scade, a product developed by Esterel Technologies. Scade includes a graphical interface to build formal models in the synchronous data-flow language Lustre. Our tool automatically extends Lustre models by injecting faults, using libraries of typical failures. It allows to perform Failure Mode and Effect Analysis, which consists of verifying whether systems remain safe when selected components fail. The tool can also compute minimal combinations of failures breaking systems' safety, which is similar to Fault Tree Analysis. The paper includes successful verifications of examples from the aeronautics industry.
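The core of such an analysis can be pictured as re-checking a safety property with selected components forced to fail, and collecting the minimal failure combinations that break it. The brute-force toy below is only a conceptual illustration, not Lustre/Scade or the Design Verifier workflow; the two-sensor system and the stuck-at failure mode are invented.

```python
from itertools import combinations

COMPONENTS = ["sensor1", "sensor2"]

def alarm(danger, failed):
    """Invented redundant design: alarm if any non-failed sensor detects danger.
    A failed sensor is modeled as stuck at 'no danger'."""
    return any(danger and (c not in failed) for c in COMPONENTS)

def is_safe(failed):
    """Safety property: whenever danger is present, the alarm is raised."""
    return alarm(True, failed)

minimal_cuts = []
for size in range(len(COMPONENTS) + 1):           # check failures of increasing size
    for failed in combinations(COMPONENTS, size):
        already_covered = any(set(c) <= set(failed) for c in minimal_cuts)
        if not is_safe(set(failed)) and not already_covered:
            minimal_cuts.append(failed)

print("minimal failure combinations violating safety:", minimal_cuts)
# -> [('sensor1', 'sensor2')]: the design tolerates any single sensor failure
```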
---
paper_title: Formal and experimental validation approaches in HCI systems design based on a shared event B model
paper_content:
The development of user interfaces (UI) needs validation and verification of a set of required properties. Different kinds of properties are relevant to the human computer interaction (HCI) area. Not all of them may be checked using classical software engineering validation and verification tools. Indeed, a large proportion of the properties are related to the user and to usability. Moreover, these kinds of properties usually require experimental validation. This paper addresses the cooperation between formal and experimental validation and verification of HCI properties. It focuses on a proof-based technique (event B) and a Model Based System (MBS) based technique (SUIDT). Moreover, this paper tries to bridge the gap between both approaches in order to reduce the heterogeneity they lead to.
---
paper_title: Automatic detection of interaction vulnerabilities in an executable specification
paper_content:
This paper presents an approach to providing designers with the means to detect Human-Computer Interaction (HCI) vulnerabilities without requiring extensive HCI expertise. The goal of the approach is to provide timely, useful analysis results early in the design process, when modifications are less expensive. The twin challenges of providing timely and useful analysis results led to the development and evaluation of computational analyses, integrated into a software prototyping toolset. The toolset, referred to as the Automation Design and Evaluation Prototyping Toolset (ADEPT) was constructed to enable the rapid development of an executable specification for automation behavior and user interaction. The term executable specification refers to the concept of a testable prototype whose purpose is to support development of a more accurate and complete requirements specification.
---
paper_title: Incident and Accident Investigation Techniques to Inform Model-Based Design of Safety-Critical Interactive Systems
paper_content:
The quality of the design of an interactive safety-critical system can be enhanced by embedding data and knowledge from past experiences. Traditionally, this involves applying scenarios, usability analysis, or the use of metrics for risk analysis. In this paper, we present an approach that uses the information from incident investigations to inform the development of safety-cases that can, in turn, be used to inform a formal system model, represented using Petri nets and the ICO formalism. The foundations of the approach are first detailed and then exemplified using a fatal mining accident case study.
---
paper_title: Lightweight Formal Methods
paper_content:
Formal methods have offered great benefits, but often at a heavy price. For everyday software development, in which the pressures of the market don't allow full-scale formal methods to be applied, a more lightweight approach is called for. I'll outline an approach that is designed to provide immediate benefit at relatively low cost. Its elements are a small and succinct modelling language, and a fully automatic analysis scheme that can perform simulations and find errors. I'll describe some recent case studies using this approach, involving naming schemes, architectural styles, and protocols for networks with changing topologies. I'll make some controversial claims about this approach and its relationship to UML and traditional formal specification approaches, and I'll barbeque some sacred cows, such as the belief that executability compromises abstraction.
---
paper_title: Interaction engineering using the IVY tool
paper_content:
This paper is concerned with support for the process of usability engineering. The aim is to use formal techniques to provide a systematic approach that is more traceable, and because it is systematic, repeatable. As a result of this systematic process some of the more subjective aspects of the analysis can be removed. The technique explores exhaustively those features of a specific design that fail to satisfy a set of properties. It also analyzes those aspects of the design where it is possible to quantify the cost of use. The method is illustrated using the example of a medical device. While many aspects of the approach and its tool support have already been discussed elsewhere, this paper builds on and contrasts an analysis of the same device provided by a third party and in so doing enhances the IVY tool.
---
paper_title: Formal Modeling and Analysis for Interactive Hybrid Systems
paper_content:
An effective strategy for discovering certain kinds of automation surprise and other problems in interactive systems is to build models of the participating (automated and human) agents and then explore all reachable states of the composed system looking for divergences between mental states and those of the automation. Various kinds of model checking provide ways to automate this approach when the agents can be modeled as discrete automata. But when some of the agents are continuous dynamical systems (e.g., airplanes), the composed model is a hybrid (i.e., mixed continuous and discrete) system and these are notoriously hard to analyze. We describe an approach for very abstract modeling of hybrid systems using relational approximations and their automated analysis using infinite bounded model checking supported by an SMT solver. When counterexamples are found, we describe how additional constraints can be supplied to direct counterexamples toward plausible scenarios that can be confirmed in high-fidelity simulation. The approach is illustrated through application to a known (and now corrected) human-automation interaction problem in Airbus aircraft.
---
paper_title: Adaptive Automation of a Dynamic Control Task Based on Secondary Task Workload Measurement
paper_content:
Adaptive automation (AA) has been proposed as a method for regulating human workload and abating out-of-the-loop performance problems in complex systems control. The majority of AA or adaptive manual control studies, to this point, have facilitated control allocations using either preprogrammed schemes based on desired system performance, comparisons of human performance with established goals, or psychophysical variable monitoring to represent workload levels and determine appropriate control allocations for moderating workload. This study further explored the psychophysical assessment approach by using a secondary task measure of workload to facilitate control allocations in a complex primary task. An experiment was conducted in which participants performed a secondary gauge-monitoring task along with a simulated radar monitoring and target elimination task. Differences in single-secondary task performance and performance observed while also functioning in the primary task were used to direct operator-m...
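A minimal sketch of the secondary-task allocation logic described above is shown below, assuming hypothetical measurement names and an invented threshold; the criteria actually used in the experiment are not reproduced here.

```python
def allocate_control(single_task_score: float, dual_task_score: float,
                     decrement_threshold: float = 0.15) -> str:
    """Switch the primary task to automatic control when the drop in
    secondary-task performance (a proxy for workload) exceeds a threshold.

    Scores are assumed to be normalized to [0, 1]; the 0.15 threshold is
    purely illustrative.
    """
    decrement = single_task_score - dual_task_score
    return "automatic" if decrement > decrement_threshold else "manual"

# Example: gauge-monitoring accuracy falls from 0.90 alone to 0.65 under load.
print(allocate_control(0.90, 0.65))  # -> "automatic"
```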
---
paper_title: Trust, control strategies and allocation of function in human-machine systems.
paper_content:
As automated controllers supplant human intervention in controlling complex systems, the operators' role often changes from that of an active controller to that of a supervisory controller. Acting as supervisors, operators can choose between automatic and manual control. Improperly allocating function between automatic and manual control can have negative consequences for the performance of a system. Previous research suggests that the decision to perform the job manually or automatically depends, in part, upon the trust the operators invest in the automatic controllers. This paper reports an experiment to characterize the changes in operators' trust during an interaction with a semi-automatic pasteurization plant, and investigates the relationship between changes in operators' control strategies and trust. A regression model identifies the causes of changes in trust, and a 'trust transfer function' is developed using time series analysis to describe the dynamics of trust. Based on a detailed analysis of operators' strategies in response to system faults we suggest a model for the choice between manual and automatic control, based on trust in automatic controllers and self-confidence in the ability to control the system manually.
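One way to read the "trust transfer function" idea is as a low-order time-series model in which current trust depends on recent trust, perceived system performance, and faults. The generic form below is illustrative only and is not the exact model estimated in the paper; coefficients and variable names are placeholders.

```latex
% Illustrative autoregressive form of a trust transfer function.
% T_t: trust at time t, P_t: perceived system performance, F_t: fault indicator.
T_t = a_1 T_{t-1} + a_2 T_{t-2} + b_0 P_t + b_1 P_{t-1} + c_0 F_t + \varepsilon_t
```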
---
paper_title: Human-Automated Judge Learning: A Methodology for Examining Human Interaction With Information Analysis Automation
paper_content:
Human-automated judge learning (HAJL) is a methodology providing a three-phase process, quantitative measures, and analytical methods to support design of information analysis automation. HAJL's measures capture the human and automation's judgment processes, relevant features of the environment, and the relationships between each. Specific measures include achievement of the human and the automation, conflict between them, compromise and adaptation by the human toward the automation, and the human's ability to predict the automation. HAJL's utility is demonstrated herein using a simplified air traffic conflict prediction task. HAJL was able to capture patterns of behavior within and across the three phases with measures of individual judgments and human-automation interaction. Its measures were also used for statistical tests of aggregate effects across human judges. Two between-subject manipulations were crossed to investigate HAJL's sensitivity to interventions in the human's training (sensor noise during training) and in display design (information from the automation about its judgment strategy). HAJL identified that the design intervention impacted conflict and compromise with the automation, participants learned from the automation over time, and those with higher individual judgment achievement were also better able to predict the automation.
---
paper_title: ACT-R: a theory of higher level cognition and its relation to visual attention
paper_content:
The ACT-R system is a general system for modeling a wide range of higher level cognitive processes. Recently, it has been embellished with a theory of how its higher level processes interact with a visual interface. This includes a theory of how visual attention can move across the screen, encoding information into a form that can be processed by ACT-R. This system is applied to modeling several classic phenomena in the literature that depend on the speed and selectivity with which visual attention can move across a visual display. ACT-R is capable of interacting with the same computer screens that subjects do and, as such, is well suited to provide a model for tasks involving human-computer interaction. In this article, we discuss a demonstration of ACT-R's application to menu selection and show that the ACT-R theory makes unique predictions, without estimating any parameters, about the time to search a menu. These predictions are confirmed.
---
paper_title: Using GOMS for user interface design and evaluation: which technique?
paper_content:
Since the seminal book, The Psychology of Human-Computer Interaction, the GOMS model has been one of the few widely known theoretical concepts in human-computer interaction. This concept has spawned much research to verify and extend the original work and has been used in real-world design and evaluation situations. This article synthesizes the previous work on GOMS to provide an integrated view of GOMS models and how they can be used in design. We briefly describe the major variants of GOMS that have matured sufficiently to be used in actual design. We then provide guidance to practitioners about which GOMS variant to use for different design situations. Finally, we present examples of the application of GOMS to practical design problems and then summarize the lessons learned.
---
paper_title: Toward a multi-method approach to formalizing human-automation interaction and human-human communications
paper_content:
Breakdowns in complex systems often occur as a result of system elements interacting in ways unanticipated by analysts or designers. The use of task behavior as part of a larger, formal system model is potentially useful for analyzing such problems because it allows the ramifications of different human behaviors to be verified in relation to other aspects of the system. A component of task behavior largely overlooked to date is the role of human-human interaction, particularly human-human communication in complex human-computer systems. We are developing a multi-method approach based on extending the Enhanced Operator Function Model language to address human agent communications (EOFMC). This approach includes analyses via theorem proving and future support for model checking linked through the EOFMC top level XML description. Herein, we consider an aviation scenario in which an air traffic controller needs a flight crew to change the heading for spacing. Although this example, at first glance, seems to be one simple task, on closer inspection we find that it involves local human-human communication, remote human-human communication, multi-party communications, communication protocols, and human-automation interaction. We show how all these varied communications can be handled within the context of EOFMC.
---
paper_title: On Combining Formal and Informal Verification
paper_content:
We propose algorithms which combine simulation with symbolic methods for the verification of invariants. The motivation is two-fold. First, there are designs which are too complex to be formally verified using symbolic methods; however the use of symbolic techniques in conjunction with traditional simulation results in better “coverage” relative to the computational resources used. Additionally, even on designs which can be symbolically verified, the use of a hybrid methodology often detects the presence of bugs faster than either formal verification or simulation.
---
paper_title: Spatial Awareness in Synthetic Vision Systems: Using Spatial and Temporal Judgments to Evaluate Texture and Field of View
paper_content:
OBJECTIVE: This work introduced judgment-based measures of spatial awareness and used them to evaluate terrain textures and fields of view (FOVs) in synthetic vision system (SVS) displays. BACKGROUND: SVSs are cockpit technologies that depict computer-generated views of terrain surrounding an aircraft. In the assessment of textures and FOVs for SVSs, no studies have directly measured the three levels of spatial awareness with respect to terrain: identification of terrain, its relative spatial location, and its relative temporal location. METHODS: Eighteen pilots made four judgments (relative azimuth angle, distance, height, and abeam time) regarding the location of terrain points displayed in 112 noninteractive 5-s simulations of an SVS head-down display. There were two between-subject variables (texture order and FOV order) and five within-subject variables (texture, FOV, and the terrain point's relative azimuth angle, distance, and height). RESULTS: Texture produced significant main and interaction effects for the magnitude of error in the relative angle, distance, height, and abeam time judgments. FOV interaction effects were significant for the directional magnitude of error in the relative distance, height, and abeam time judgments. CONCLUSION: Spatial awareness was best facilitated by the elevation fishnet (EF), photo fishnet (PF), and photo elevation fishnet (PEF) textures. APPLICATION: This study supports the recommendation that the EF, PF, and PEF textures be further evaluated in future SVS experiments. Additionally, the judgment-based spatial awareness measures used in this experiment could be used to evaluate other display parameters and depth cues in SVSs.
---
paper_title: Simulation vs. formal: absorb what is useful; reject what is useless
paper_content:
This short paper is the result of the invited talk I gave at the 2007 Haifa Verification Conference. Its purpose is to briefly summarize the main points of my talk and to provide background references. The original talk abstract was, "Dynamic verification (simulation, emulation) and formal verification often live in separate worlds, with minimal interaction between the two camps, yet both have unique strengths that could complement the other. In this talk, I'll briefly enumerate what I believe are the best aspects of each verification style, and then explore some possibilities for drawing on the strengths of both camps."
---
paper_title: Combining Human Error Verification and Timing Analysis
paper_content:
Designs can often be unacceptable on performance grounds. In this work, we integrate a GOMS-like ability to predict execution times into the generic cognitive architecture developed for the formal verification of human error related correctness properties. As a result, formal verification and GOMS-like timing analysis are combined within a unified framework. This allows one to judge whether a formally correct design is also acceptable on performance grounds, and vice versa. We illustrate our approach with an example based on a KLM style timing analysis.
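The GOMS/KLM side of such an analysis reduces to summing standard operator time estimates over a predicted action sequence. The sketch below uses commonly cited keystroke-level model values; it is a generic illustration, not the timing rules integrated into the cognitive architecture described in the paper, and the example operator sequence is hypothetical.

```python
# Commonly cited KLM operator estimates (seconds); values vary by source.
KLM_TIMES = {
    "K": 0.28,  # press a key or button (average typist)
    "P": 1.10,  # point with a mouse to a target
    "H": 0.40,  # home hands between keyboard and mouse
    "M": 1.35,  # mental preparation
    "R": 0.00,  # system response time (task specific; placeholder)
}

def klm_estimate(operators: str) -> float:
    """Estimated execution time for an operator sequence such as 'MHPKKKK'."""
    return sum(KLM_TIMES[op] for op in operators)

# Hypothetical sequence: think, home to mouse, point at a field, type 4 digits.
print(round(klm_estimate("MHP" + "K" * 4), 2))  # ~3.97 s
```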
---
|
Title: Using Formal Verification to Evaluate Human-Automation Interaction: A Review
Section 1: Introduction
Description 1: Introduce the topic of formal verification and its relevance to human-automation interaction (HAI), providing background and setting the stage for the review.
Section 2: Formal Verification
Description 2: Explain the principles of formal verification, including techniques such as automated theorem proving and model checking, and discuss their application in ensuring system correctness.
Section 3: Formal Verification of Human-Automation Interaction
Description 3: Review how formal verification has been applied to HAI, highlighting various studies and approaches used to address issues like usability and mode confusion in human-automation systems.
Section 4: Formal Verification of Human-Automation Interface Properties
Description 4: Discuss the use of formal verification specifically for evaluating properties of human-automation interfaces, including usability and mode confusion analyses.
Section 5: Formal Verification Using Human Models
Description 5: Explore approaches that incorporate models of human cognitive and task behavior in formal verification to predict and analyze HAI failures and system-level problems.
Section 6: Discussion and Future Development
Description 6: Summarize the findings of the review, discuss the limitations and tradeoffs of different formal verification techniques, and suggest directions for future research and development.
|
Web Content Classification: A Survey
| 5 |
---
paper_title: Introduction to data mining and its applications
paper_content:
Contents: Introduction to Data Mining Principles; Data Warehousing, Data Mining, and OLAP; Data Marts and Data Warehouse; Evolution and Scaling of Data Mining Algorithms; Emerging Trends and Applications of Data Mining; Data Mining Trends and Knowledge Discovery; Data Mining Tasks, Techniques, and Applications; Data Mining: an Introduction - Case Study; Data Mining & KDD; Statistical Themes and Lessons for Data Mining; Theoretical Frameworks for Data Mining; Major and Privacy Issues in Data Mining and Knowledge Discovery; Active Data Mining; Decomposition in Data Mining - A Case Study; Data Mining System Products and Research Prototypes; Data Mining in Customer Value and Customer Relationship Management; Data Mining in Business; Data Mining in Sales Marketing and Finance; Banking and Commercial Applications; Data Mining for Insurance; Data Mining in Biomedicine and Science; Text and Web Mining; Data Mining in Information Analysis and Delivery; Data Mining in Telecommunications and Control; Data Mining in Security.
---
paper_title: Overview of Web Content Mining Tools
paper_content:
Nowadays, the Web has become one of the most widespread platforms for information exchange and retrieval. As it becomes easier to publish documents, as the number of users, and thus publishers, increases and as the number of documents grows, searching for information is turning into a cumbersome and time-consuming operation. Due to the heterogeneity and unstructured nature of the data available on the WWW, Web mining uses various data mining techniques to discover useful knowledge from Web hyperlinks, page content and usage logs. The main uses of web content mining are to gather, categorize, organize and provide the best possible information available on the Web to the user requesting the information. Mining tools are essential for scanning the many HTML documents, images, and text; the results are then used by the search engines. In this paper, we first introduce the concepts related to web mining; we then present an overview of different Web Content Mining tools. We conclude by presenting a comparative table of these tools based on some pertinent criteria.
---
paper_title: On The Automated Classification of Web Pages Using Artificial Neural Network
paper_content:
The World Wide Web is growing at an uncontrollable rate. Hundreds of thousands of web sites appear every day, with the added challenge of keeping the web directories up-to-date. Further, the uncontrolled nature of the web presents difficulties for Web page classification. As the number of Internet users is growing, so is the need for classification of web pages with greater precision in order to present the users with web pages of their desired class. However, web page classification has been accomplished mostly by using textual categorization methods. Herein, we propose a novel approach for web page classification that uses the HTML information present in a web page. There are many ways of achieving classification of web pages into various domains. This paper proposes an entirely new dimension towards web page classification using Artificial Neural Networks (ANN). Index Terms: World Wide Web, Web page classification, textual categorization, HTML, Artificial Neural Networks, ANN.
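A minimal sketch of the overall idea, assuming scikit-learn is available: HTML tag counts are used as features for a small feed-forward neural network. The tag list, labels, and example pages below are hypothetical and do not reproduce the feature set or network architecture proposed in the paper.

```python
from sklearn.neural_network import MLPClassifier

TAGS = ["a", "img", "table", "form", "h1", "p", "script"]  # hypothetical feature tags

def tag_features(html: str):
    """Represent a page by how often each HTML tag occurs."""
    lower = html.lower()
    return [lower.count("<" + t) for t in TAGS]

# Toy training data: two link-heavy 'directory' pages and two text-heavy 'article' pages.
pages = [
    "<h1>Links</h1>" + "<a href='#'>x</a>" * 30,
    "<table>" + "<a href='#'>x</a>" * 25 + "</table>",
    "<h1>Story</h1>" + "<p>text</p>" * 40,
    "<h1>News</h1>" + "<p>text</p>" * 35 + "<img src='x.png'>",
]
labels = ["directory", "directory", "article", "article"]

X = [tag_features(p) for p in pages]
clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0).fit(X, labels)
print(clf.predict([tag_features("<p>a</p>" * 20 + "<h1>t</h1>")]))  # likely 'article'
```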
---
paper_title: Intelligent water drops algorithm for rough set feature selection
paper_content:
In this article, the Intelligent Water Drops (IWD) algorithm is adapted for feature selection with Rough Sets (RS). Specifically, IWD is used to search for a subset of features based on RS dependency as an evaluation function. The resulting system, called IWDRSFS (Intelligent Water Drops for Rough Set Feature Selection), is evaluated with six benchmark data sets. The performance of IWDRSFS is analysed and compared with that of other methods in the literature. The outcomes indicate that IWDRSFS is able to provide competitive and comparable results. In summary, this study shows that IWD is a useful method for undertaking feature selection problems with RS.
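The evaluation function at the heart of such rough-set wrappers is the dependency degree: the fraction of objects whose decision class is fully determined by the chosen attribute subset. A small self-contained sketch of that measure is shown below; the search itself (the IWD dynamics) is omitted, and the tiny decision table is invented for illustration.

```python
from collections import defaultdict

def dependency(table, subset, decision):
    """Rough-set dependency degree gamma(subset -> decision):
    |positive region| / |universe|, where an equivalence class (objects
    indistinguishable on `subset`) is in the positive region iff all its
    objects share the same decision value."""
    classes = defaultdict(list)
    for row in table:
        key = tuple(row[a] for a in subset)
        classes[key].append(row[decision])
    pos = sum(len(v) for v in classes.values() if len(set(v)) == 1)
    return pos / len(table)

# Invented decision table: two condition attributes and one decision attribute.
table = [
    {"a": 0, "b": 0, "d": "no"},
    {"a": 0, "b": 1, "d": "yes"},
    {"a": 1, "b": 0, "d": "yes"},
    {"a": 1, "b": 1, "d": "yes"},
    {"a": 0, "b": 0, "d": "no"},
]
print(dependency(table, ["a"], "d"))       # 0.4: only the a=1 class is consistent
print(dependency(table, ["a", "b"], "d"))  # 1.0: full dependency
```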
---
paper_title: Firefly Algorithm: Recent Advances and Applications
paper_content:
Nature-inspired metaheuristic algorithms, especially those based on swarm intelligence, have attracted much attention in the last ten years. The firefly algorithm appeared about five years ago, and its literature has expanded dramatically with diverse applications. In this paper, we briefly review the fundamentals of the firefly algorithm together with a selection of recent publications. Then, we discuss the optimality associated with balancing exploration and exploitation, which is essential for all metaheuristic algorithms. By comparing with the intermittent search strategy, we conclude that metaheuristics such as the firefly algorithm are better than the optimal intermittent search strategy. We also analyse the algorithms and their implications for higher-dimensional optimization problems.
---
paper_title: Web page classification using firefly optimization
paper_content:
The increase in the amount of information on the Web has created the need for accurate automated Web page classifiers to maintain Web directories and to improve search engines' performance. As every (HTML/XML) tag and every term on each Web page can be considered a feature, we need efficient methods to select the best features and reduce the feature space of the Web page classification problem. In this study, our aim is to apply a recent optimization technique, namely the firefly algorithm (FA), to select the best features for the Web page classification problem. The firefly algorithm (FA) is a metaheuristic algorithm inspired by the flashing behavior of fireflies. We use FA to select a subset of features, and the J48 classifier of the Weka data mining tool is employed to evaluate the fitness of the selected features. The WebKB and Conference datasets were used to evaluate the effectiveness of the proposed feature selection system. We observed that when a subset of features was selected using FA, the WebKB and Conference datasets were classified without loss of accuracy; moreover, the time needed to classify new Web pages decreased sharply as the number of features was reduced.
---
|
Title: Web Content Classification: A Survey
Section 1: INTRODUCTION
Description 1: This section introduces the concept of data mining, its applications, and various tasks associated with data mining.
Section 2: WEB PAGE CLASSIFICATION
Description 2: This section discusses the need for web page classification, different types of classifications, challenges, and methods for classifying web pages.
Section 3: RELATED WORK
Description 3: This section reviews and summarizes existing research and methods proposed in the literature for web page classification and related topics.
Section 4: CONCLUSION
Description 4: This section concludes the paper by summarizing the findings, lessons learned, and future opportunities in the field of web classification.
Section 5: ACKNOWLEDGEMENTS
Description 5: This section acknowledges the contributions and support from individuals and references used in the creation of the survey paper.
|
Survey of Insurance Fraud Detection Using Data Mining Techniques
| 14 |
---
paper_title: On-Line Unsupervised Outlier Detection Using Finite Mixtures with Discounting Learning Algorithms
paper_content:
Outlier detection is a fundamental issue in data mining, specifically in fraud detection, network intrusion detection, network monitoring, etc. SmartSifter is an outlier detection engine addressing this problem from the viewpoint of statistical learning theory. This paper provides a theoretical basis for SmartSifter and empirically demonstrates its effectiveness. SmartSifter detects outliers in an on-line process through the on-line unsupervised learning of a probabilistic model (using a finite mixture model) of the information source. Each time a datum is input, SmartSifter employs an on-line discounting learning algorithm to learn the probabilistic model. A score is given to the datum based on the learned model, with a high score indicating a high possibility of being a statistical outlier. The novel features of SmartSifter are: (1) it is adaptive to non-stationary sources of data; (2) a score has a clear statistical/information-theoretic meaning; (3) it is computationally inexpensive; and (4) it can handle both categorical and continuous variables. An experimental application to network intrusion detection shows that SmartSifter was able to identify data with high scores that corresponded to attacks, with low computational costs. Further experimental application has identified a number of meaningful rare cases in actual health insurance pathology data from Australia's Health Insurance Commission.
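The on-line scoring idea can be illustrated with a heavily simplified sketch. Instead of SmartSifter's finite mixture with discounting learning, the toy class below keeps a single discounted Gaussian for a one-dimensional stream and scores each new value by its negative log-likelihood; the discount rate and the example stream are invented.

```python
import math

class DiscountedGaussianScorer:
    """Toy on-line outlier scorer: keeps a discounted mean/variance of a 1-D
    stream and scores each new value by its negative log-likelihood under the
    model learned so far (a crude stand-in for SmartSifter's finite mixtures)."""

    def __init__(self, discount=0.05):
        self.r = discount          # forgetting rate for non-stationary sources
        self.mean, self.var, self.ready = 0.0, 1.0, False

    def score_and_update(self, x):
        if not self.ready:
            self.mean, self.ready = x, True
            return 0.0
        # Score before updating: -log N(x | mean, var).
        score = 0.5 * (math.log(2 * math.pi * self.var) + (x - self.mean) ** 2 / self.var)
        # Discounted (exponentially forgetting) update of the sufficient statistics.
        self.mean = (1 - self.r) * self.mean + self.r * x
        self.var = max((1 - self.r) * self.var + self.r * (x - self.mean) ** 2, 1e-6)
        return score

stream = [10.1, 9.8, 10.3, 10.0, 9.9, 25.0, 10.2]   # 25.0 is an injected anomaly
scorer = DiscountedGaussianScorer()
for value in stream:
    print(value, round(scorer.score_and_update(value), 2))
```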
---
paper_title: Decision Support and Business Intelligence Systems
paper_content:
Decision Support and Business Intelligence Systems 9e provides the only comprehensive, up-to-date guide to today's revolutionary management support system technologies, and showcases how they can be used for better decision-making. KEY TOPICS: Decision Support Systems and Business Intelligence. Decision Making, Systems, Modeling, and Support. Decision Support Systems Concepts, Methodologies, and Technologies: An Overview. Modeling and Analysis. Data Mining for Business Intelligence. Artificial Neural Networks for Data Mining. Text and Web Mining. Data Warehousing. Collaborative Computer-Supported Technologies and Group Support Systems. Knowledge Management. Artificial Intelligence and Expert Systems. Advanced Intelligent Systems. Management Support Systems: Emerging Trends and Impacts. Ideal for practicing managers interested in the foundations and applications of BI, group support systems (GSS), knowledge management, ES, data mining, intelligent agents, and other intelligent systems.
---
paper_title: Visualizing Corporate Data
paper_content:
Corporate databases have been recognized as strategic assets, and a successful corporation will make full use of its data resources to gain competitive advantage to better manage its business. Visualization is a key technology for extracting information from data, therefore, it is becoming increasingly important in our information rich society. It complements other analytic, model based approaches and exploits human pattern perception. Visualization can help users to navigate and explore the fast-growing number of data warehouses far more easily, and to rapidly discover the information hidden within volumes of data.
---
|
Title: Survey of Insurance Fraud Detection Using Data Mining Techniques
Section 1: INTRODUCTION
Description 1: Discusses the nature of insurance, common reasons for insurance uptake, and the prevalent fraud issues within the insurance sector.
Section 2: Insurance generally classified into Four types
Description 2: Describes the types of insurance, with a particular focus on motor and medical insurance fraud problems.
Section 3: Various stages of insurance fraud
Description 3: Explains the stages at which fraud can occur, including pre-insurance (application fraud) and post-insurance (claims fraud).
Section 4: Types of insurance claims frauds - Motor insurance frauds
Description 4: Details the nature of fraud within motor insurance, distinguishing between hard and soft frauds, and the challenges of data availability for research.
Section 5: Health Insurance
Description 5: Discusses health insurance frauds, including characteristics of fraudulent claims, the national perspective, and strategies for detection using data mining techniques.
Section 6: DATA MINING
Description 6: Introduces data mining as a technology for fraud detection, summarizing the process and its relevance in detecting financial fraud.
Section 7: Process Model of Data mining in financial fraud detection
Description 7: Outlines the process models and methods used in data mining for financial fraud detection.
Section 8: Classification
Description 8: Describes the classification techniques in data mining and their applications for fraud detection.
Section 9: Clustering
Description 9: Explains the clustering techniques used to group similar objects for fraud detection and investigation.
Section 10: Prediction
Description 10: Discusses prediction techniques and their usage in estimating future values for detection of fraudulent activities.
Section 11: Outlier detection
Description 11: Covers the methods for detecting outliers, which can indicate potential fraud in insurance claims.
Section 12: Regression
Description 12: Describes the use of regression techniques in uncovering relationships between variables to detect insurance fraud.
Section 13: Visualization
Description 13: Highlights the importance of visualization in presenting data mining results for clearer understanding and detection of fraud patterns.
Section 14: CONCLUSION
Description 14: Summarizes the findings from the survey, reiterates the importance of data mining in combating insurance fraud, and suggests future directions for research.
|
A Review of Methods and Algorithms for Optimizing Construction Scheduling
| 6 |
---
paper_title: Scheduling Construction Projects Using Evolutionary Algorithm
paper_content:
This paper attempts to use evolutionary algorithms to solve the problem of minimizing construction project duration under deterministic conditions, with time-varying and limited availability of renewable resources (workforce, machines, and equipment). Particular construction processes (with various levels of complexity) must be conducted in the established technological order and can be executed with different technological and organizational variants (different contractors, technologies, and ways of using resources). Such a description of the realization conditions allows the method to also be applied to more complex problems that occur in construction practice (e.g., scheduling resources for a whole company, not only for a single project). The method's versatility distinguishes it from other approaches presented in numerous publications. To assess the solutions generated by the evolutionary algorithm, the writers developed a heuristic algorithm (for the allocation of resources and the calculation of the shortest project duration). The results obtained by means of this methodology are similar to the outcomes of other comparable methodologies. The proposed methodology (the model and the computer system) may be of great significance to the construction industry. The paper contains some examples of the practical use of the evolutionary algorithm for project planning with time constraints.
---
paper_title: A Pattern-Based Approach for Facilitating Schedule Generation and Cost Analysis in Bridge Construction Projects
paper_content:
The paper presents a computational method to help in automating the generation of time schedules for bridge construction projects. The method is based on the simulation of the construction works, taking into account the available resources and the interdependencies between the individual tasks. The simulation is realized by means of the discrete-event based simulation software originally created for plant layout in the manufacturing industry. Since the fixed process chains provided there are too rigid to model the more spontaneous task sequences of construction projects, a constraint module that selects the next task dynamically has been incorporated. The input data of the constraint module is formed by work packages of atomic activities. The description of a work package comprises the building element affected, the required material, machine and manpower resources, as well as the technological pre-requisites of the task to be performed. These input data are created with the help of a 3D model-based application that enables the user to assign process patterns to individual building elements. A process pattern consists of a sequence of work packages for realizing standard bridge parts, thus describing a construction method which in turn represents a higher level of abstraction in the scheduling process. In the last step, the user specifies the available resources. The system uses all the given information to automatically create a proposal for the construction schedule, which may then be refined using standard scheduling software.
---
paper_title: Genetic algorithms for multi-constraint scheduling: an application for the construction industry
paper_content:
A reliable construction schedule is vital for effective co-ordination across supply chains and various trades at the construction work face. According to the lean construction concept, reliability of the schedule can be enhanced through detection and satisfaction of all potential constraints prior to releasing operation assignments. However, it is difficult to implement this concept since current scheduling tools and techniques are fragmented and designed to deal with a limited set of construction constraints. This paper introduces a methodology termed ‘multi-constraint scheduling’ in which four major groups of construction constraints including physical, contract, resource, and information constraints are considered. A Genetic Algorithm (GA) has been developed and used for the multi-constraint optimisation problem. Given multiple constraints such as activity dependency, limited working area, and resource and information readiness, the GA alters tasks’ priorities and construction methods so as to arrive at an optimum or near-optimum set of project duration, cost, and smooth resource profiles. This feature has been practically developed as an embedded macro in MS Project. Several experiments confirmed that the GA can provide near-optimum solutions within acceptable searching time (i.e. 5 minutes for 1.92E11 alternatives). Possible improvements to this research are further suggested in the paper.
---
paper_title: Intelligent optimization for project scheduling of the first mining face in coal mining
paper_content:
In this paper, intelligent optimization methods including the genetic algorithm (GA), particle swarm optimization (PSO) and modified particle swarm optimization (MPSO) are used to optimize the project scheduling of the first mining face of the second region of the fifth Ping'an coal mine in China. The result of the optimization provides essential management and decision-making information for supervisors and builders. The optimization process contains two parts: the first part obtains the time parameters of each process and the network graph of the first mining face in the second region by the PERT (program evaluation and review technique) method based on the raw data; the second part optimizes the schedule for maximal NPV (net present value) based on the network graph. The starting dates of all processes are the decision variables, and the process order and times are the constraints. The optimization result shows that MPSO is better than GA and PSO, and that the optimized NPV is 14,974,000 RMB more than that of the original plan.
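The NPV objective used in the second optimization stage can be sketched as a simple discounting of each process's cash flow to time zero. The figures, discount rate and the two candidate plans below are hypothetical and only illustrate why shifting start dates changes the NPV.

```python
def schedule_npv(processes, rate=0.01):
    # Net present value of a schedule: each process is (start, duration, cash_flow),
    # and its cash flow is assumed to arrive when the process finishes.
    return sum(cash / (1 + rate) ** (start + duration)
               for start, duration, cash in processes)

# Two hypothetical plans for the same three processes (periods in months, cash in kRMB).
plan_a = [(0, 3, -500), (3, 4, -300), (7, 5, 1200)]   # strictly serial
plan_b = [(0, 3, -500), (2, 4, -300), (6, 5, 1200)]   # second process overlapped
print(round(schedule_npv(plan_a), 1), round(schedule_npv(plan_b), 1))
```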
---
paper_title: Incorporating Practicability into Genetic Algorithm-Based Time-Cost Optimization
paper_content:
Optimization problems in construction scheduling, such as time-cost optimization, can be effectively solved using genetic algorithms (GAs). This paper presents an approach that makes GA-based time-cost optimization viable for real world problems. Practicability is incorporated through the integration of a project management system to the GA system. The approach takes advantage of the powerful scheduling functionality of the project management system in evaluating project completion dates during optimization. The approach ensures that all scheduling parameters, including activity relationships, lags, calendars, constraints, resources, and progress, are considered in determining the project completion date, thus allowing comprehensive and realistic evaluations to be made during optimization.
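A compact sketch of the kind of GA-based time-cost optimization described above is given below: each gene selects one (duration, cost) option for an activity, and fitness is direct cost plus indirect cost and a lateness penalty computed from a CPM-style forward pass. The network, options, deadline and GA parameters are invented; the paper itself evaluates completion dates through an integrated project management system rather than this toy evaluator.

```python
import random

# Hypothetical activities: predecessors and (duration, direct cost) options.
ACTS = {
    "A": ([],         [(6, 10), (4, 14)]),
    "B": (["A"],      [(8, 12), (5, 20)]),
    "C": (["A"],      [(7,  9), (6, 12)]),
    "D": (["B", "C"], [(5,  8), (3, 15)]),
}
ORDER = ["A", "B", "C", "D"]              # topological order
DEADLINE, PENALTY, INDIRECT = 17, 50, 3   # per-day indirect cost and lateness penalty

def evaluate(genes):
    # CPM forward pass for the chosen options; returns total cost incl. penalties.
    finish, direct = {}, 0
    for act, g in zip(ORDER, genes):
        preds, options = ACTS[act]
        dur, cost = options[g]
        start = max((finish[p] for p in preds), default=0)
        finish[act] = start + dur
        direct += cost
    makespan = max(finish.values())
    return direct + INDIRECT * makespan + PENALTY * max(0, makespan - DEADLINE)

def ga(pop_size=20, gens=60, pmut=0.2, seed=1):
    rng = random.Random(seed)
    pop = [[rng.randrange(len(ACTS[a][1])) for a in ORDER] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=evaluate)
        parents = pop[: pop_size // 2]                # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, len(ORDER))
            child = a[:cut] + b[cut:]                 # one-point crossover
            if rng.random() < pmut:                   # option-flip mutation
                i = rng.randrange(len(ORDER))
                child[i] = rng.randrange(len(ACTS[ORDER[i]][1]))
            children.append(child)
        pop = parents + children
    best = min(pop, key=evaluate)
    return best, evaluate(best)

print(ga())
```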
---
paper_title: A two-phase GA model for resource-constrained project scheduling
paper_content:
In construction scheduling, problems can arise when each activity could start at different time points and the resources needed by the activities are limited. Moreover, activities have required conditions to be met, such as precedence relationships, resource requirements, etc. To resolve these problems, a two-phase GA (genetic algorithm) model is proposed in this paper, in which both the effects of time-cost trade-off and resource scheduling are taken into account. A GA-based time-cost trade-off analysis is adopted to select the execution mode of each activity through the balance of time and cost, followed by utilization of a GA-based resource scheduling method to generate a feasible schedule which may satisfy all the project constraints. Finally, the model is demonstrated using an example project and a real project.
---
paper_title: Planning and estimating in practice and the use of integrated computer models
paper_content:
Abstract Research into IT applications in the construction industry has been going on for many years, most of this work took the form of system development aimed at assisting construction practitioners and aimed at improving processes in order to reduce the cost of building. Most of these developments tended to identify a problem in a sector of theindustry and focused on using a certain technology in IT to provide a solution. This was often done without a proper investigation into the suitability and the acceptability of the technology to the end users (construction practitioners). Furthermore, most of the work was too focused on solving problems in isolation and did not consider the overall organisational framework and structure of the industry. This paper discusses and presents the results of a survey conducted to investigate the planning and estimating work practices in the industry in order to establish the important issues for the development of an integrated planning and estimating computer model. The survey established the important issues for the acceptability of computer models, the technical aspect to be addressed and a better working practice for estimating and planning. The technical aspect on which the computer model was based is the optimisation of the time and cost of building and the best work practice used is the integration of estimating and planning.
---
paper_title: SOFT LOGIC IN NETWORK ANALYSIS
paper_content:
One of the most important functions of planning is to offset uncertainty and change. However, projects are often affected by external factors or constraints that can either facilitate progress or create delays in the project. Sometimes, logic changes can be inevitable. Therefore, special techniques are needed to provide a simple way of network updating in order to reflect the impact of logic change on project completion date and on the critical path. This paper addresses the problem of soft logic and discusses logic changes during the course of the work. An algorithmic procedure has been developed to handle the soft logic in network analysis. SOFTCPM is a microcomputer program created by the writers that deals with the soft logic in CPM networks. It has the capability of updating the CPM network logic when any unexpected event occurs that prevents working according to the scheduled activity sequence.
---
paper_title: A Simple CPM Time-Cost Tradeoff Algorithm
paper_content:
This article describes an algorithm for efficiently shortening the duration of a project when the expected project duration exceeds a predetermined limit. The problem consists of determining which activities to expedite and by what amount. The objective is to minimize the cost of the project. This algorithm is considerably less complex than the analytic methods currently available. Because of its inherent simplicity, the algorithm is ideally suited for hand computation and also is suitable for computer solution. Solutions derived by the algorithm were compared with linear programming results. These comparisons revealed that the algorithm solutions are either (a) equally good or (b) nearly the same as the solutions obtained by more complex analytic methods which require a computer. With this method the CPM time-cost tradeoff problem is solved without access to a computer, thereby making this planning tool available to managers who otherwise would find implementation impractical.
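The spirit of the procedure can be shown with a small sketch that repeatedly expedites the activity with the lowest cost slope until a target duration is met. For brevity the example uses a serial chain (so every job is critical); the job data are invented and the published algorithm's treatment of parallel critical paths is not reproduced.

```python
# Hypothetical serial job chain: name -> [normal_dur, crash_dur, cost_slope_per_day].
jobs = {"excavate": [5, 3, 40], "foundations": [7, 5, 90],
        "frame": [10, 7, 60], "finishes": [6, 6, 0]}

def crash_to(target):
    # Shorten the chain to `target` days by repeatedly expediting the job with
    # the lowest cost slope that can still be shortened (all jobs are critical here).
    extra = 0
    duration = sum(d for d, _, _ in jobs.values())
    while duration > target:
        candidates = [(slope, name) for name, (d, c, slope) in jobs.items() if d > c]
        if not candidates:
            raise ValueError("cannot crash below {} days".format(duration))
        slope, name = min(candidates)
        jobs[name][0] -= 1          # expedite by one day at the cheapest slope
        duration -= 1
        extra += slope
    return duration, extra

print(crash_to(24))   # e.g. compress from 28 to 24 days at minimum extra cost
```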
---
paper_title: A new exact penalty function method for continuous inequality constrained optimization problems
paper_content:
In this paper, a computational approach based on a new exact penalty function method is devised for solving a class of continuous inequality constrained optimization problems. The continuous inequality constraints are first approximated by a smooth function in integral form. Then, we construct a new exact penalty function, where the summation of all these approximate smooth functions in integral form, called the constraint violation, is appended to the objective function. In this way, we obtain a sequence of approximate unconstrained optimization problems. It is shown that if the value of the penalty parameter is sufficiently large, then any local minimizer of the corresponding unconstrained optimization problem is a local minimizer of the original problem. For illustration, three examples are solved using the proposed method. From the solutions obtained, we observe that the values of their objective functions are amongst the smallest when compared with those obtained by other existing methods available in the literature. More importantly, our method finds solutions which satisfy the continuous inequality constraints.
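As a generic illustration of penalty-based treatment of inequality constraints (not the exact penalty construction of the paper), the sketch below appends a quadratic penalty on constraint violation to the objective and re-minimizes with an increasing penalty parameter; the example problem and parameter schedule are invented.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative problem (not from the paper): minimise (x-2)^2 + (y-1)^2
# subject to the inequality constraints x + y <= 2 and -x <= 0.
f = lambda v: (v[0] - 2) ** 2 + (v[1] - 1) ** 2
constraints = [lambda v: v[0] + v[1] - 2, lambda v: -v[0]]   # each g_i(v) <= 0

def penalised(rho):
    # Objective plus a quadratic penalty on the violated constraints.
    return lambda v: f(v) + rho * sum(max(0.0, g(v)) ** 2 for g in constraints)

x = np.zeros(2)
for rho in [1, 10, 100, 1000]:          # progressively increase the penalty parameter
    x = minimize(penalised(rho), x, method="Nelder-Mead").x
print(x)   # approaches the constrained optimum near (1.5, 0.5)
```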
---
paper_title: Project Scheduling Problems: A Survey
paper_content:
A survey of project scheduling problems since 1973 limited to work done specifically in the project scheduling area (although several techniques developed for assembly line balancing and job‐shop scheduling can be applicable to project scheduling): the survey includes the work done on fundamental problems such as the resource‐constrained project scheduling problem (RCPSP); time/cost trade‐off problem (TCTP); and payment scheduling problem (PSP). Also discusses some recent research that integrates RCPSP with either TCTP or PSP, and PSP with TCTP. In spite of their practical relevance, very little work has been done on these combined problems to date. The future of the project scheduling literature appears to be developing in the direction of combining the fundamental problems and developing efficient exact and heuristic methods for the resulting problems.
---
paper_title: Multi-Objective Optimization of Construction Schedules
paper_content:
This paper focuses on the multi-objective deterministic and stochastic modelling and optimization of construction task scheduling. Of particular importance is the frequent need to satisfy conflicting optimization objectives such as the minimization of extra task performance cost and total performance time. Technological relationships between relevant construction tasks have been modelled using deterministic and probabilistic precedence networks, while the construction resources have been modelled deterministically and stochastically using duration/productivity and resource/performance cost and performance time matrices. For the solution of the problem a mixed integer programming approach has been adopted, utilizing a computer algorithm containing elements of heuristic methods.
---
paper_title: Augmented heuristic algorithm for multi-skilled resource scheduling
paper_content:
Conventional project scheduling is restricted to the single-skilled resource assumption, where each worker is assumed to have only one skill. This, in effect, contradicts real-world practice, where workers may possess multiple skills and, on several occasions, are assigned to perform tasks for which they are not specialized. Past research has presented a simple heuristic approach for multi-skilled resource scheduling in which a project is planned under the assumption that each resource can have more than one skill and resource substitution is allowed. Nevertheless, that approach uses a resource substitution step in which an activity with higher priority can claim any resource regardless of the resource requirements of its concurrent activities. Furthermore, the approach is subject to an all-or-nothing resource assignment concept, where an activity cannot start, and no resources are assigned to it, unless its required resources can be completely fulfilled. This research presents an alternative heuristic approach for multi-skilled resource scheduling in an attempt to improve the resource substitution approach. An augmented resource substitution rule and resource-driven task durations are presented to increase the opportunity for activities to start earlier. Case studies are presented to illustrate the improved result of shorter project duration.
---
paper_title: Project scheduling - Theory and practice
paper_content:
The project scheduling problem involves the scheduling of project activities subject to precedence and/or resource constraints. Of obvious practical importance, it has been the subject of intensive research since the late fifties. A wide variety of commercialized project management software packages have been put to practical use. Despite all these efforts, numerous reports reveal that many projects escalate in time and budget and that many project scheduling procedures have not yet found their way to practical use. The objective of this paper is to confront project scheduling theory with project scheduling practice. We provide a generic hierarchical project planning and control framework that serves to position the various project planning procedures and discuss important research opportunities, the exploration of which may help to close the theory-practice gap.
---
paper_title: Application of genetic algorithms to construction scheduling with or without resource constraints
paper_content:
The difficulties encountered in scheduling construction projects with resource constraints are highlighted by means of a simplified bridge construction problem. A genetic algorithm applicable to projects with or without resource constraints is described. In this application, chromosomes are formed by genes consisting of the start days of the activities. This choice necessitated introducing two mathematical operators (datum operator and left compression operator) and emphasizing one genetic operator (fine mutation operator). A generalized evaluation of the fitness function is conducted. The algorithm is applied to the example problem. The results and the effects of some of the parameters are discussed.
---
paper_title: Current Float Techniques for Resources Scheduling
paper_content:
This paper concerns resources scheduling using a simple heuristic model described as the “current float” model. Current float is defined as the finish float available with respect to its latest finish time in the original network computations. The current float model allocates limited resources by giving priority to the activity that has the least current float. The current floats need to be computed only for those activities that are engaged in a resource conflict. The “total float” models that were used previously required the tedious task of constructing and computing a status network every time an activity is postponed due to nonavailability of resources. The current float model avoids this and requires only the original network computations. The mathematical validity of the model is explained, and the paper presents proof that the output of this model is the same as that of the total float model. The physical significance of the model is also indicated. An application section is included to illustrate ...
---
paper_title: Time/cost optimization using hybrid evolutionary algorithm in construction project scheduling
paper_content:
This paper deals with construction project scheduling. In the literature on the subject one can find such scheduling methods as: the Linear Scheduling Model (LSM), Line of Balance (LOB) charts and CPM/PERT network planning. The methods take into account several objective functions: the least cost, the least time, limited resources, work priorities, etc., both in the deterministic and probabilistic approach. The paper presents an analysis of the time/cost relationship, performed using the time coupling method TCM III. A modified hybrid evolutionary algorithm (HEA) developed by Bozejko and Wodecki (A Hybrid Evolutionary Algorithm for Some Discrete Optimization Problems. IEEE Computer Society, 325–331, 2005) was used for optimization.
---
paper_title: A survey on the resource-constrained project scheduling problem
paper_content:
In this paper, research on the resource-constrained project scheduling problem is classified according to specified objectives and constraints. Each classified area is extensively surveyed, and special emphasis is given to trends in recent research. Specific papers involving nonrenewable resource constraints and time/cost-based objectives are discussed in detail because they present models that are close representations of real-world problems. The difficulty of solving such complex models by optimization techniques is noted. For the purposes of this survey, a set of 78 optimally solved test problems from the literature and a second set of 110 benchmark problems have been subjected to analysis with some well-known dispatching rules and a scheduling algorithm that consists of a decision-making process utilizing the problem constraints as a base of selection. The computational results are reported and discussed in the text. Constructive scheduling algorithms that are directly based on the problem constraints...
---
paper_title: Scheduling Construction Projects
paper_content:
Introduction to Scheduling. Task Definition: The Foundation of a Schedule. Gantt Charts. Logic Diagrams and Scheduling. The Critical Path Method (CPM). Calculation of CPM Event and Task Times. The Precedence Method (PM). Calculation of Precedence Method Task Times and Floats. Program Evaluation Review Technique (PERT). Probabilistic Scheduling (Monte Carlo Method). Calendar Day Scheduling. Updating the Schedule. Expediting the Project. Resource-Constrained Scheduling and Resource Leveling. Cash Flow Analysis Based on Construction Schedules. Applications of Computers to Scheduling. The Scientific Cost Estimating Method. Computer Programs for Scheduling. Table of Random Numbers. Index.
---
paper_title: TIME-COST-QUALITY TRADE-OFF ANALYSIS FOR HIGHWAY CONSTRUCTION
paper_content:
Many departments of transportation have recently started to utilize innovative contracting methods that provide new incentives for improving construction quality. These emerging contracts place an increasing pressure on decision makers in the construction industry to search for an optimal resource utilization plan that minimizes construction cost and time while maximizing its quality. This paper presents a multiobjective optimization model that supports decision makers in performing this challenging task. The model is designed to transform the traditional two-dimensional time-cost tradeoff analysis to an advanced three-dimensional time-cost-quality trade-off analysis. The model is developed as a multiobjective genetic algorithm to provide the capability of quantifying and considering quality in construction optimization. An application example is analyzed to illustrate the use of the model and demonstrate its capabilities in generating and visualizing optimal tradeoffs among construction time, cost, and quality.
---
paper_title: Multimode Project Scheduling Based on Particle Swarm Optimization
paper_content:
This paper introduces a methodology for solving the multimode resource-constrained project scheduling problem (MRCPSP) based on particle swarm optimization (PSO). The MRCPSP considers both renewable and nonrenewable resources that have not been addressed efficiently in the construction field. The framework of the PSO-based methodology is developed with the objective of minimizing project duration. A particle representation formulation is proposed to represent the potential solution to the MRCPSP in terms of priority combination and mode combination for activities. Each particle-represented solution should be checked against the nonrenewable resource infeasibility and will be handled by adjusting the mode combination. The feasible particle-represented solution is transformed to a schedule through a serial generation scheme. Experimental analyses are presented to investigate the performance of the proposed methodology. Comparisons with other methods show that the PSO method is equally efficient at solving the MRCPSP.
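A much-reduced sketch of the particle idea is shown below: each particle dimension is rounded to a mode index, and the fitness adds a penalty when the nonrenewable resource budget is exceeded. The activities, modes, budget and PSO parameters are invented, and a serial-chain duration stands in for the full serial generation scheme used in the paper.

```python
import random

# Hypothetical serial project: per activity, mode -> (duration, nonrenewable units).
MODES = [[(8, 2), (5, 4)], [(6, 1), (4, 3)], [(9, 2), (6, 5)], [(7, 1), (5, 2)]]
BUDGET = 10          # total nonrenewable resource available

def decode(position):
    # Round each particle dimension to a mode index (clamped to valid range).
    return [min(len(m) - 1, max(0, int(round(p)))) for p, m in zip(position, MODES)]

def fitness(position):
    modes = decode(position)
    dur = sum(MODES[i][m][0] for i, m in enumerate(modes))
    used = sum(MODES[i][m][1] for i, m in enumerate(modes))
    return dur + 100 * max(0, used - BUDGET)     # penalise infeasible mode mixes

def pso(n=20, iters=80, w=0.7, c1=1.5, c2=1.5, seed=3):
    rng = random.Random(seed)
    dim = len(MODES)
    pos = [[rng.uniform(0, 1) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=fitness)
    for _ in range(iters):
        for k in range(n):
            for d in range(dim):
                vel[k][d] = (w * vel[k][d]
                             + c1 * rng.random() * (pbest[k][d] - pos[k][d])
                             + c2 * rng.random() * (gbest[d] - pos[k][d]))
                pos[k][d] += vel[k][d]
            if fitness(pos[k]) < fitness(pbest[k]):
                pbest[k] = pos[k][:]
                if fitness(pbest[k]) < fitness(gbest):
                    gbest = pbest[k][:]
    return decode(gbest), fitness(gbest)

print(pso())
```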
---
paper_title: A resources scheduling decision support system for concurrent project management
paper_content:
This paper proposes an integrated methodology for the management of (availability) constrained resources within a concurrent project management environment. The methodology incorporates six main components: the projects interface management module, the impact matrix module, the projects grouping module, the projects prioritization module, the master projects scheduling module, and the resources assignment module, all of which are integrated coherently within the framework of a decision support system. The component which is delineated in this paper is the resources assignment module; the other modules are respectively the subject of future papers. The essence of such an integrated and coherent methodology is to enhance the centralization and efficient distribution of information on various concurrent project parameters, such as project priorities, project completion times, their associated costs, resource allocation patterns, etc. The proposed methodology is applied in an actual industrial case study...
---
paper_title: Optimizing Construction Time and Cost Using Ant Colony Optimization Approach
paper_content:
Time and cost are the most important factors to be considered in every construction project. In order to maximize the return, both the client and the contractor strive to optimize the project duration and cost concurrently. Over the years, many research studies have been conducted to model the time–cost relationship, and the modeling techniques range from heuristic methods and mathematical approaches to genetic algorithms. Despite that, previous studies often assumed time to be constant, leaving the analyses based purely on a single objective—cost. Acknowledging the significance of time–cost optimization, an evolutionary-based optimization algorithm known as ant colony optimization is applied to solve multiobjective time–cost optimization problems. In this paper, the basic mechanism of the proposed model is unveiled. Having developed a program in the Visual Basic platform, tests are conducted to compare the performance of the proposed model against other analytical methods previously used fo...
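A minimal ant colony sketch for the option-selection view of the time-cost problem is given below: pheromone is kept per (activity, option) pair, ants build solutions by roulette selection, and the best-so-far solution is reinforced after evaporation. The options, indirect cost rate and ACO parameters are invented, and the model is single-objective for brevity.

```python
import random

# Hypothetical (duration, cost) options per activity in a serial chain.
OPTIONS = [[(8, 100), (6, 160)], [(10, 120), (7, 210)], [(5, 90), (4, 130)]]
INDIRECT = 40                      # indirect cost per time unit

def total_cost(choice):
    dur = sum(OPTIONS[i][k][0] for i, k in enumerate(choice))
    direct = sum(OPTIONS[i][k][1] for i, k in enumerate(choice))
    return direct + INDIRECT * dur

def aco(ants=10, iters=50, rho=0.1, seed=7):
    rng = random.Random(seed)
    tau = [[1.0] * len(opts) for opts in OPTIONS]        # pheromone per option
    best, best_cost = None, float("inf")
    for _ in range(iters):
        for _ in range(ants):
            choice = []
            for i, opts in enumerate(OPTIONS):           # roulette-wheel option pick
                weights = tau[i]
                r, acc = rng.random() * sum(weights), 0.0
                for k, w in enumerate(weights):
                    acc += w
                    if r <= acc:
                        choice.append(k)
                        break
            cost = total_cost(choice)
            if cost < best_cost:
                best, best_cost = choice, cost
        # Evaporate, then reinforce the best-so-far solution.
        for i in range(len(OPTIONS)):
            for k in range(len(OPTIONS[i])):
                tau[i][k] *= (1 - rho)
            tau[i][best[i]] += 1.0
    return best, best_cost

print(aco())
```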
---
paper_title: Evolutionary algorithms applied to project scheduling problems—a survey of the state-of-the-art
paper_content:
Evolutionary algorithms, a form of meta-heuristic, have been successfully applied to a number of classes of complex combinatorial problems such as the well-studied travelling salesman problem, bin packing problems, etc. They have provided a method other than an exact solution that will, within a reasonable execution time, provide either optimal or near optimal results. In many cases near optimal results are acceptable and the additional resources that may be required to provide exact optimal results prove uneconomical. The class of project scheduling problems (PSP) exhibit a similar type of complexity to the previous mentioned problems, also being NP-hard, and therefore would benefit from solution via meta-heuristic rather than exhaustive search. Improvement to a project schedule in terms of total duration or resource utilisation can be of major financial advantage and therefore near optimal solution via evolutionary techniques should be considered highly applicable. In preparation for further research th...
---
paper_title: A survey of variants and extensions of the resource-constrained project scheduling problem
paper_content:
The resource-constrained project scheduling problem (RCPSP) consists of activities that must be scheduled subject to precedence and resource constraints such that the makespan is minimized. It has become a well-known standard problem in the context of project scheduling which has attracted numerous researchers who developed both exact and heuristic scheduling procedures. However, it is a rather basic model with assumptions that are too restrictive for many practical applications. Consequently, various extensions of the basic RCPSP have been developed. This paper gives an overview over these extensions. The extensions are classified according to the structure of the RCPSP. We summarize generalizations of the activity concept, of the precedence relations and of the resource constraints. Alternative objectives and approaches for scheduling multiple projects are discussed as well. In addition to popular variants and extensions such as multiple modes, minimal and maximal time lags, and net present value-based objectives, the paper also provides a survey of many less known concepts.
---
paper_title: Permutation-Based Elitist Genetic Algorithm for Optimization of Large-Sized Resource-Constrained Project Scheduling
paper_content:
The resource-constrained project scheduling problem (RCPSP) has received the attention of many researchers because its general model can be used in a wide variety of construction planning and scheduling applications. The exact procedures and priority-rule-based heuristics fail to search for the optimum solution to the RCPSP of large-sized project networks in a reasonable amount of time for successful application in practice. This paper presents a permutation-based elitist genetic algorithm for solving the problem in order to fulfill the lack of an efficient optimal solution algorithm for project networks with 60 activities or more as well as to overcome the drawback of the exact solution approaches for large-sized project networks. The proposed algorithm employs the elitist strategy to preserve the best individual solution for the next generation so the improved solution can be obtained. A random number generator that provides and examines precedence feasible individuals is developed. A serial schedule generation scheme for the permutation-based decoding is applied to generate a feasible solution to the problem. Computational experiments using a set of standard test problems are presented to demonstrate the performance and accuracy of the proposed algorithm.
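The serial schedule generation scheme used to decode a permutation-based individual can be sketched as follows; the toy network, resource capacity and the example priority list are invented, and only a single renewable resource is handled.

```python
# Hypothetical network: activity -> (duration, resource demand, predecessors).
ACTS = {
    1: (0, 0, []),            # dummy start
    2: (4, 2, [1]),
    3: (3, 3, [1]),
    4: (5, 2, [2]),
    5: (2, 2, [2, 3]),
    6: (0, 0, [4, 5]),        # dummy finish
}
CAPACITY = 4                   # units of the single renewable resource

def serial_sgs(priority_list):
    # Serial schedule generation scheme: schedule activities in list order at the
    # earliest precedence- and resource-feasible start time.
    start, finish = {}, {}
    horizon = sum(d for d, _, _ in ACTS.values()) + 1
    usage = [0] * horizon
    for a in priority_list:
        dur, demand, preds = ACTS[a]
        t = max((finish[p] for p in preds), default=0)
        while any(usage[t + k] + demand > CAPACITY for k in range(dur)):
            t += 1
        start[a], finish[a] = t, t + dur
        for k in range(dur):
            usage[t + k] += demand
    return finish[max(ACTS)], start

# A precedence-feasible permutation (e.g. one individual of the GA population).
print(serial_sgs([1, 3, 2, 5, 4, 6]))
```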
---
paper_title: Profit Optimization for Multiproject Scheduling Problems Considering Cash Flow
paper_content:
This study investigates cash flow for profit optimization and handles scheduling problems in a multiproject environment. By identifying the amount and timing of individual inflow or outflow at the end of each period, contractors can observe the cash flow at specific time points according to project progress. Since most companies handle multiple projects simultaneously, managing project finance becomes complicated and tough for contractors. Therefore, this study considers cash flow and the financial requirements of contractors working in a multiple-project environment and proposes a profit optimization model for multiproject scheduling problems using constraint programming. The current study also presents a hypothetical example involving three projects to illustrate the capability of the proposed model and adopts various constraints, including credit limit (CL) and due dates, for scenario analysis. The analysis result demonstrates that setting CLs ensures smooth financial pressure by properly shifting activities, and assigning due dates for projects helps planners avoid project duration extension while maximizing overall project profit.
---
paper_title: Optimization of construction time-cost trade-off analysis using genetic algorithms
paper_content:
In the management of a construction project, the project duration can often be compressed by accelerating some of its activities at an additional expense. This is the so-called time-cost trade-off ...
---
paper_title: SCHEDULING/COST OPTIMIZATION AND NEURAL DYNAMICS MODEL FOR CONSTRUCTION
paper_content:
A general mathematical formulation is presented for scheduling of construction projects and applied to the problem of highway construction scheduling. Repetitive and nonrepetitive tasks, work continuity constraints, multiple-crew strategies, and the effects of varying job conditions on the performance of a crew can be modeled. An optimization formulation is presented for the construction project scheduling problem with the goal of minimizing the direct construction cost. The nonlinear optimization is then solved by the neural dynamics model developed recently by Adeli and Park. For any given construction duration, the model automatically yields the optimum construction schedule for minimum construction cost. By varying the construction duration, one can solve the cost-duration trade-off problem and obtain the global optimum schedule and the corresponding minimum construction cost. The new construction scheduling model provides the capabilities of both the CPM and LSM approaches. In addition, it provides features desirable for repetitive projects such as highway construction and allows schedulers greater flexibility. It is particularly suitable for studying the effects of change orders on the construction cost. This research provides the mathematical foundation for development of a new generation of more general, flexible, and accurate construction scheduling systems.
---
paper_title: Heuristic scheduling of resource‐constrained, multiple‐mode and repetitive projects
paper_content:
An alternative heuristic method for scheduling repetitive projects in which resources are limited and activities may be executed with multiple modes of resource demands associated with different durations is proposed. Unlike general heuristic methods that separately analyze each competing activity and schedule only one at a time, the proposed heuristic algorithm ranks possible combinations of activities every time and simultaneously schedules all activities in the selected combination leading to minimal project duration. All alternative combinations of activities in consideration of resource constraints, multiple modes and characteristics of the repetitive projects are determined through a permutation tree-based procedure. The heuristic method is implemented based on the corresponding framework. An example is presented to demonstrate the efficiency of the proposed heuristic method. The study is expected to provide an efficient heuristic methodology for solving the project scheduling problem.
---
paper_title: Requirements identification for 4D constraintbased construction planning and control system
paper_content:
Construction planning and control are identified as being among the top potential areas needing improvement. A traditional technique known as the Critical Path Method (CPM) has been widely criticised in terms of its inability to cope with non-precedence constraints, the difficulty of evaluating and communicating interdependencies, and its inadequacy for work-face production. Attempting to treat these deficiencies, substantial research efforts have resulted in a wide range of advancements, including the design of new planning and control methodologies and the development of sophisticated computerised applications. However, these efforts have not effectively overcome all of the above CPM drawbacks and, therefore, have not yet provided a solution for the industry.
---
paper_title: Critical-Path Planning and Scheduling: Mathematical Basis
paper_content:
This paper is concerned with establishing the mathematical basis of the Critical-Path Method, a new tool for planning, scheduling, and coordinating complex engineering-type projects. The essential ingredient of the technique is a mathematical model that incorporates sequence information, durations, and costs for each component of the project. It is a special parametric linear program that, via the primal-dual algorithm, may be solved efficiently by network flow methods. Analysis of the solutions of the model enables operating personnel to answer questions concerning labor needs, budget requirements, procurement and design limitations, the effects of delays, and communication difficulties.
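The forward and backward passes that underlie the technique can be sketched in a few lines; the small network below is invented and is assumed to be listed in topological order.

```python
# Small illustrative network: activity -> (duration, predecessors).
NET = {
    "A": (3, []),
    "B": (4, ["A"]),
    "C": (2, ["A"]),
    "D": (5, ["B", "C"]),
    "E": (1, ["D"]),
}

def cpm(net):
    # Classic forward/backward pass returning the makespan, total floats and critical set.
    order = list(net)                                  # assumed topologically ordered
    es, ef = {}, {}
    for a in order:                                    # forward pass
        dur, preds = net[a]
        es[a] = max((ef[p] for p in preds), default=0)
        ef[a] = es[a] + dur
    makespan = max(ef.values())
    lf = {a: makespan for a in order}
    for a in reversed(order):                          # backward pass
        succs = [s for s in order if a in net[s][1]]
        if succs:
            lf[a] = min(lf[s] - net[s][0] for s in succs)
    ls = {a: lf[a] - net[a][0] for a in order}
    floats = {a: ls[a] - es[a] for a in order}
    critical = [a for a in order if floats[a] == 0]
    return makespan, floats, critical

print(cpm(NET))    # expect duration 13 with critical path A-B-D-E
```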
---
paper_title: Constraint knowledge for construction scheduling
paper_content:
The capabilities of network diagramming techniques (NDT) are restricted by limitations inherent in their representation of schedule constraints (typically expressed through precedence relationships between activities). This assertion suggests the selection of a richer representation as a departure point for extending the utility of planning and scheduling techniques. The authors suggest that such a representation is provided by a system which employs a general model of a project's 'status' and which allows schedule constraints to be expressed as rules which refer to this status. This system offers the advantage of allowing the precedence of activities to be based on more than just the completion of other activities. It also provides an efficient knowledge-based approach to scheduling that can express the reasoning underlying scheduling actions, which can be employed by future artificial intelligence (AI) planning. The authors describe how constraints are represented and what types of constraints can be represented in both NDT and in the proposed system, called A Construction Planner (ACP). This is followed by a comparison of the schedule generation algorithms used in NDT and in ACP.
---
paper_title: Resource-Activity Critical-Path Method for Construction Planning
paper_content:
In this paper, a practical method is developed in an attempt to address the fundamental matters and limitations of existing methods for critical-path method (CPM) based resource scheduling, which are identified by reviewing the prior research in resource-constrained CPM scheduling and repetitive scheduling. The proposed method is called the resource-activity critical-path method (RACPM), in which (1) the dimension of resource in addition to activity and time is highlighted in project scheduling to seamlessly synchronize activity planning and resource planning; (2) the start/finish times and the floats are defined as resource-activity attributes based on the resource-technology combined precedence relationships; and (3) the resource critical issue that has long baffled the construction industry is clarified. The RACPM is applied to an example problem taken from the literature for illustrating the algorithm and comparing it with the existing method. A sample application of the proposed RACPM for planning a footbridge construction project is also given to demonstrate that practitioners can readily interpret and utilize a RACPM schedule by relating the RACPM to the classic CPM. The RACPM provides schedulers with a convenient vehicle for seamlessly integrating the technology/process perspective with the resource use perspective in construction planning. The effect on the project duration and activity floats of varied resource availability can be studied through running RACPM on different scenarios of resources. This potentially leads to an integrated scheduling and cost estimating process that will produce realistic schedules, estimates, and control budgets for construction.
---
paper_title: Critical path methods in construction practice
paper_content:
Critical Path Method Procedures and Terminology. The Network Diagram and Utility Data. Network Calculations I: Critical Paths and Floats. Network Calculations II: Simple Compression. Network Calculations III: Complex Compression and Decompression. Network Calculations IV: Scheduling and Resource Leveling. Practical Planning with Critical Path Methods. Project Control with Critical Path Methods. Financial Planning and Cost Control. Evaluation of Work Changes and Delays. Attitudes, Responsibilities, and Duties. Computer-Aided CPM. Selection of Technique. Integrated Project Development and Management. CPM, a Systems Concept. Appendices. Index.
---
paper_title: Professional Construction Management
paper_content:
This paper describes in part findings and conclusions of ASCE’s Task Committee on Management of Construction Projects. This paper presents definitions of “Professional Construction Management” and “Professional Construction Manager”, explains the reasoning behind them, then describes the responsibilities of the Professional Construction Manager and his requirements in the planning and execution phases of a project. Professional Construction Management differs from conventional design-construct and traditional separate contractor and designer approaches in that there are by definition three separate and distinct members of the team (owner, designer, and manager) and the Professional Construction Manager does not perform significant design or construction work with his own forces. Professional Construction Management is not necessarily better or worse than other methods of procuring constructed facilities. However, the three-party-team approach is certainly a viable alternative to more traditional methods in many applications as its increasing use will demonstrate.
---
paper_title: Construction Time-Cost Trade-Off Analysis Using LP/IP Hybrid Method
paper_content:
Construction planners must select appropriate resources, including crew size, equipment, methods, and technologies, to perform the tasks of a construction project. In general, there is a trade-off between time and cost to complete a task—the less expensive the resources, the longer it takes. Using critical-path-method techniques, the overall project cost can be reduced by using less expensive resources for noncritical activities without impacting the duration. Furthermore, planners usually need to adjust the selection of resources in order to shorten or lengthen the project duration. Finding optimal decisions is difficult and time-consuming considering the numbers of permutations involved. For example, a critical-path-method network with only eight activities, each with two options, will have 256 (2^8) alternatives. Exhaustive enumeration is not economically feasible even with very fast computers. This paper presents a new algorithm using linear and integer programming to efficiently obtain optimal resource selections that optimize time and cost of a construction project.
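A hypothetical sketch of this kind of linear/integer formulation, written with the PuLP modelling library, is shown below: binary variables pick one (duration, cost) option per activity, continuous variables carry start times, and the deadline bounds the project finish. The network, options and deadline are invented, and the model is a generic textbook formulation rather than the authors' exact algorithm.

```python
import pulp

# Hypothetical activities: predecessors and (duration, cost) execution options.
acts = {
    "A": ([],         [(6, 10), (4, 14)]),
    "B": (["A"],      [(8, 12), (5, 20)]),
    "C": (["A"],      [(7,  9), (6, 12)]),
    "D": (["B", "C"], [(5,  8), (3, 15)]),
}
DEADLINE = 15

prob = pulp.LpProblem("time_cost_tradeoff", pulp.LpMinimize)
x = {(a, k): pulp.LpVariable("x_{}_{}".format(a, k), cat="Binary")
     for a, (_, opts) in acts.items() for k in range(len(opts))}
start = {a: pulp.LpVariable("s_" + a, lowBound=0) for a in acts}
finish = pulp.LpVariable("finish", lowBound=0)

# Objective: minimise the total direct cost of the selected options.
prob += pulp.lpSum(opts[k][1] * x[a, k]
                   for a, (_, opts) in acts.items() for k in range(len(opts)))

for a, (preds, opts) in acts.items():
    dur_a = pulp.lpSum(opts[k][0] * x[a, k] for k in range(len(opts)))
    prob += pulp.lpSum(x[a, k] for k in range(len(opts))) == 1   # pick exactly one option
    for p in preds:                                              # precedence constraints
        p_opts = acts[p][1]
        prob += start[a] >= start[p] + pulp.lpSum(p_opts[k][0] * x[p, k]
                                                  for k in range(len(p_opts)))
    prob += finish >= start[a] + dur_a
prob += finish <= DEADLINE

prob.solve(pulp.PULP_CBC_CMD(msg=0))
print(pulp.LpStatus[prob.status],
      {a: [k for k in range(len(acts[a][1])) if x[a, k].value() > 0.5] for a in acts},
      pulp.value(prob.objective))
```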
---
paper_title: Scheduling of repetitive projects with cost optimization
paper_content:
Existing dynamic programming formulations are capable of identifying, from a set of possible alternatives, the optimum crew size for each activity in a repetitive project. The optimization criterion of these formulations is, however, limited to the minimization of the overall duration of the project. While this may lead to the minimization of the indirect cost of the project, it does not guarantee its overall minimum cost. The objective of this paper is to present a model that incorporates cost as an important decision variable in the optimization process. The model utilizes dynamic programming and performs the solution in two stages: first a forward process to identify local minimum conditions, and then a backward process to ensure an overall minimum state. In the first stage, a process similar to that used in time-cost trade-off analysis is employed, and a simple scanning and selecting process is used in the second stage. An example project from the literature is analyzed in order to demonstrate the use of the model and its validity, and illustrate the significance of cost as a decision variable in the optimization process.
---
paper_title: Assignment and Allocation Optimization of Partially Multiskilled Workforce
paper_content:
Multiskilling is a workforce strategy that has been shown to reduce indirect labor costs, improve productivity, and reduce turnover. A multiskilled workforce is one in which the workers possess a range of skills that allow them to participate in more than one work process. In practice, they may work across craft boundaries. The success of multiskilling greatly relies on the foreman's ability to assign workers to appropriate tasks and to compose crews effectively. The foreman assigns tasks to workers according to their knowledge, capabilities, and experience on former projects. This research investigated the mechanics of allocating a multiskilled workforce and developed a linear programming model to help optimize the multiskilled workforce assignment and allocation process in a construction project, or between the projects of one company. It is concluded that the model will be most useful in conditions where full employment does not exist; however, it is also useful for short term allocation decisions. By running the model for various simulated scenarios, additional observations were made. For example, it is concluded that, for a capital project, the benefits of multiskilling are marginal beyond approximately a 20% concentration of multiskilled workers in a project workforce. Benefits to workers themselves become marginal after acquiring competency in two or three crafts. These observations have been confirmed by field experience. Extension of this model to allocation of multifunctional resources, such as construction equipment, should also be possible.
---
paper_title: Applied Integer Programming: Modeling and Solution
paper_content:
PREFACE. PART I MODELING. 1 Introduction. 1.1 Integer Programming. 1.2 Standard Versus Nonstandard Forms. 1.3 Combinatorial Optimization Problems. 1.4 Successful Integer Programming Applications. 1.5 Text Organization and Chapter Preview. 1.6 Notes. 1.7 Exercises. 2 Modeling and Models. 2.1 Assumptions on Mixed Integer Programs. 2.2 Modeling Process. 2.3 Project Selection Problems. 2.4 Production Planning Problems. 2.5 Workforce/Staff Scheduling Problems. 2.6 Fixed-Charge Transportation and Distribution Problems. 2.7 Multicommodity Network Flow Problem. 2.8 Network Optimization Problems with Side Constraints. 2.9 Supply Chain Planning Problems. 2.10 Notes. 2.11 Exercises. 3 Transformation Using 0-1 Variables. 3.1 Transform Logical (Boolean) Expressions. 3.2 Transform Nonbinary to 0-1 Variable. 3.3 Transform Piecewise Linear Functions. 3.4 Transform 0-1 Polynomial Functions. 3.5 Transform Functions with Products of Binary and Continuous Variables: Bundle Pricing Problem. 3.6 Transform Nonsimultaneous Constraints. 3.7 Notes. 3.8 Exercises. 4 Better Formulation by Preprocessing. 4.1 Better Formulation. 4.2 Automatic Problem Preprocessing. 4.3 Tightening Bounds on Variables. 4.4 Preprocessing Pure 0-1 Integer Programs. 4.5 Decomposing a Problem into Independent Subproblems. 4.6 Scaling the Coefficient Matrix. 4.7 Notes. 4.8 Exercises. 5 Modeling Combinatorial Optimization Problems I. 5.1 Introduction. 5.2 Set Covering and Set Partitioning. 5.3 Matching Problem. 5.4 Cutting Stock Problem. 5.5 Comparisons for Above Problems. 5.6 Computational Complexity of COP. 5.7 Notes. 5.8 Exercises. 6 Modeling Combinatorial Optimization Problems II. 6.1 Importance of Traveling Salesman Problem. 6.2 Transformations to Traveling Salesman Problem. 6.3 Applications of TSP. 6.4 Formulating Asymmetric TSP. 6.5 Formulating Symmetric TSP. 6.6 Notes. 6.7 Exercises. PART II REVIEW OF LINEAR PROGRAMMING AND NETWORK FLOWS. 7 Linear Programming Fundamentals. 7.1 Review of Basic Linear Algebra. 7.2 Uses of Elementary Row Operations. 7.3 The Dual Linear Program. 7.4 Relationships Between Primal and Dual Solutions. 7.5 Notes. 7.6 Exercises. 8 Linear Programming: Geometric Concepts. 8.1 Geometric Solution. 8.2 Convex Sets. 8.3 Describing a Bounded Polyhedron. 8.4 Describing Unbounded Polyhedron. 8.5 Faces, Facets, and Dimension of a Polyhedron. 8.6 Describing a Polyhedron by Facets. 8.7 Correspondence Between Algebraic and Geometric Terms. 8.8 Notes. 8.9 Exercises. 9 Linear Programming: Solution Methods. 9.1 Linear Programs in Canonical Form. 9.2 Basic Feasible Solutions and Reduced Costs. 9.3 The Simplex Method. 9.4 Interpreting the Simplex Tableau. 9.5 Geometric Interpretation of the Simplex Method. 9.6 The Simplex Method for Upper Bounded Variables. 9.7 The Dual Simplex Method. 9.8 The Revised Simplex Method. 9.9 Notes. 9.10 Exercises. 10 Network Optimization Problems and Solutions. 10.1 Network Fundamentals. 10.2 A Class of Easy Network Problems. 10.3 Totally Unimodular Matrices. 10.4 The Network Simplex Method. 10.5 Solution via LINGO. 10.6 Notes. 10.7 Exercises. PART III SOLUTIONS. 11 Classical Solution Approaches. 11.1 Branch-and-Bound Approach. 11.2 Cutting Plane Approach. 11.3 Group Theoretic Approach. 11.4 Geometric Concepts. 11.5 Notes. 11.6 Exercises. 12 Branch-and-Cut Approach. 12.1 Introduction. 12.2 Valid Inequalities. 12.3 Cut Generating Techniques. 12.4 Cuts Generated from Sets Involving Pure Integer Variables. 12.5 Cuts Generated from Sets Involving Mixed Integer Variables. 12.6 Cuts Generated from 0-1 Knapsack Sets. 12.7 Cuts Generated from Sets Containing 0-1 Coefficients and 0-1 Variables. 12.8 Cuts Generated from Sets with Special Structures. 12.9 Notes. 12.10 Exercises. 13 Branch-and-Price Approach. 13.1 Concepts of Branch-and-Price. 13.2 Dantzig-Wolfe Decomposition. 13.3 Generalized Assignment Problem. 13.4 GAP Example. 13.5 Other Application Areas. 13.6 Notes. 13.7 Exercises. 14 Solution via Heuristics, Relaxations, and Partitioning. 14.1 Introduction. 14.2 Overall Solution Strategy. 14.3 Primal Solution via Heuristics. 14.4 Dual Solution via Relaxation. 14.5 Lagrangian Dual. 14.6 Primal-Dual Solution via Benders Partitioning. 14.7 Notes. 14.8 Exercises. 15 Solutions with Commercial Software. 15.1 Introduction. 15.2 Typical IP Software Components. 15.3 The AMPL Modeling Language. 15.4 LINGO Modeling Language. 15.5 MPL Modeling Language. REFERENCES. APPENDIX: ANSWERS TO SELECTED EXERCISES. INDEX.
---
paper_title: Advances in Linear and Integer Programming
paper_content:
List of contributors 1. Simplex Algorithms 2. Interior Point Methods 3. A Computational View of Interior Point Methods 4. Interior Point Algorithms for Network Flow Problems 5. Branch and Cut Algorithms 6. Interior Point Algorithms for Integer Programming 7. Computational Logic and Integer Programming
---
paper_title: Finance-Based Scheduling of Construction Projects Using Integer Programming
paper_content:
Construction scheduling is the process of devising schemes for sequencing activities. A realistic schedule fulfills actual concerns of users, thus minimizing the chances of schedule failure. The minimization of total project duration has been the concept underlying critical-path method/program evaluation and review technique (CPM/PERT) schedules. Subsequently, techniques including resource management and time-cost trade-off analysis were developed to customize CPM/PERT schedules to fulfill users' concerns regarding project resources, cost, and time. However, financing construction activities throughout the course of the project is another crucial concern that must be properly treated; otherwise, unrealistic schedules are to be anticipated. Unless contractors manage to procure adequate cash to keep construction work running on schedule, the pace of work will definitely be relaxed. Therefore, always keeping scheduled activities in balance with available cash is a potential contribution to producing realistic schedules. An integer-programming finance-based scheduling method is offered to produce financially feasible schedules that balance financing requirements of activities at any period with cash available in that same period. The proposed method offers the twofold benefit of minimizing total project duration and fulfilling finance availability constraints.
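The finance-constrained integer program described above can be illustrated with a deliberately tiny model: binary start-period variables, a precedence constraint, and a per-period cash limit. The activity data, cash figures, and the use of the PuLP/CBC toolchain are illustrative assumptions rather than the paper's actual formulation.

```python
# Hypothetical finance-constrained scheduling sketch using PuLP (pip install pulp).
# Binary x[a, t] = 1 if activity a starts in period t; all data are illustrative.
from pulp import LpProblem, LpVariable, LpMinimize, lpSum, LpBinary, PULP_CBC_CMD

activities = {"A": {"dur": 2, "cost_per_period": 40},
              "B": {"dur": 3, "cost_per_period": 30}}
precedence = [("A", "B")]                        # A must finish before B starts
horizon = 8                                      # planning periods 0..7
cash_available = {t: 70 for t in range(horizon)} # cash limit per period

prob = LpProblem("finance_based_schedule", LpMinimize)
x = {(a, t): LpVariable(f"x_{a}_{t}", cat=LpBinary)
     for a in activities for t in range(horizon)}
finish = LpVariable("project_finish", lowBound=0)

# each activity starts exactly once
for a in activities:
    prob += lpSum(x[a, t] for t in range(horizon)) == 1
# precedence: start(B) >= start(A) + dur(A)
for a, b in precedence:
    prob += (lpSum(t * x[b, t] for t in range(horizon))
             >= lpSum(t * x[a, t] for t in range(horizon)) + activities[a]["dur"])
# finance constraint: cash spent in period t cannot exceed cash available in t
for t in range(horizon):
    prob += lpSum(activities[a]["cost_per_period"] * x[a, s]
                  for a in activities
                  for s in range(horizon)
                  if s <= t < s + activities[a]["dur"]) <= cash_available[t]
# project finish time bounds every activity's finish
for a in activities:
    prob += finish >= lpSum(t * x[a, t] for t in range(horizon)) + activities[a]["dur"]

prob += 1 * finish                               # objective: minimise project duration
prob.solve(PULP_CBC_CMD(msg=False))
print({a: sum(t * x[a, t].value() for t in range(horizon)) for a in activities})
```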
---
paper_title: Multiobjective Linear Programming Model for Scheduling Linear Repetitive Projects
paper_content:
Linear repetitive construction projects require large amounts of resources, which are used in a sequential manner, and therefore effective resource management is very important both in terms of project cost and duration. Existing methodologies such as the critical path method and the repetitive scheduling method optimize the schedule with respect to a single factor, to achieve minimum duration or minimize resource work breaks, respectively. However, real-life scheduling decisions are more complicated and project managers must make decisions that address the various cost elements in a holistic way. To respond to this need, new methodologies that can be applied through the use of decision support systems should be developed. This paper introduces a multiobjective linear programming model for scheduling linear repetitive projects, which takes into consideration cost elements regarding the project's duration, the idle time of resources, and the delivery time of the project's units. The proposed model can be used to generate alternative schedules based on the relative magnitude and importance of the different cost elements. In this sense, it provides managers with the capability to consider alternative schedules besides those defined by minimum duration or maximizing work continuity of resources. The application of the model to a well known example in the literature demonstrates its use in providing explicatory analysis of the results.
---
paper_title: Integer programming : theory and practice
paper_content:
New Heuristics and Adaptive Memory Procedures for Boolean Optimization Problems, Lars M. Hvattum, Arne Lokketangen, and Fred Glover Convergent Lagrangian Methods for Separable Nonlinear Integer Programming: Objective Level-Cut and Domain-Cut Methods, Duan Li, Xiaoling Sun, and Jun Wang The Generalized Assignment Problem, Robert M. Nauss Decomposition in Integer Linear Programming, Ted K. Ralphs and Matthew V. Galati Airline Scheduling Models and Solution Algorithms for the Temporary Closure of Airports, Shangyao Yan and Chung-Gee Lin Determining an Optimal Fleet Mix and Schedules: Part I - Single Source and Destination, Hanif D. Sherali and Salem M. Al-Yakoob Determining an Optimal Fleet Mix and Schedules: Part II - Multiple Sources and Destinations, and the Option of Leasing Transshipment Depots, Hanif D. Sherali and Salem M. Al-Yakoob An Integer Programming Model for the Optimization of Data Cycle Maps, David Panton, Maria John, and Andrew Mason Application of Column-Generation Techniques to Retail Assortment Planning, Govind P. Daruka and Udatta S. Palekar Noncommercial Software for Mixed-Integer Linear Programming, Jeff T. Linderoth and Ted K. Ralphs
---
paper_title: Multiobjective optimization in Linear Repetitive Project scheduling
paper_content:
The Critical Path Method (CPM) and the Repetitive Scheduling Method (RSM) are the most often used tools for the planning, scheduling and control of Linear Repetitive Projects (LRPs). CPM focuses mostly on the project's duration and critical activities, while RSM focuses on resource continuity. In this paper we present a linear programming approach to address the multi-objective nature of decisions construction managers face in scheduling LRPs. The Multi Objective Linear Programming model (MOLP-LRP) is a parametric model that can optimize a schedule in terms of duration, work-breaks, unit completion time and respective costs, while at the same time the LP range sensitivity analysis can provide useful information regarding cost tradeoffs between delay, work-break and unit delivery costs. MOLP-LRP can generate alternative schedules based on the relative magnitude and importance of different cost elements. In this sense it provides managers with the capability to consider alternative schedules besides those defined by minimum duration (CPM) or minimum resource work-breaks (RSM). Demonstrative results and analysis are provided through a case study example that is well known in the literature.
---
paper_title: Resource Leveling of Linear Schedules Using Integer Linear Programming
paper_content:
Since the early 1960s many techniques have been developed to plan and schedule linear construction projects. However, one, the critical path method (CPM), overshadowed the others. As a result, CPM developed into the powerful and effective tool that it is today. However, research has indicated that CPM is ineffective for linear construction. Linear construction projects are typified by activities that must be repeated in different locations such as highways, pipelines, and tunnels. Recently, there has been renewed interest in linear scheduling. Much of this interest has involved a technique called the linear scheduling method (LSM). Only recently has there been the ability to calculate the controlling activities of a linear schedule, independent of network analysis. Additional research needs to be done to develop some of the techniques available in CPM into comparable ones for linear scheduling. One of these techniques is resource leveling. This paper uses the vehicle of a highway construction project to present an integer linear programming formulation to level the resources of linear projects.
---
paper_title: The LP/IP hybrid method for construction time-cost trade-off analysis
paper_content:
Construction planners face the decisions of selecting appropriate resources, including crew sizes, equipment, methods and technologies, to perform the tasks of a construction project. In general, there is a trade-off between time and cost to complete a task - the less expensive the resources, the longer it takes. Using Critical Path Method (CPM) techniques, the overall project cost can be reduced by using less expensive resources for non-critical activities without impacting the duration. Furthermore, planners need to adjust the resource selections to shorten or lengthen the project duration. Finding the optimal decisions is difficult and time-consuming considering the numbers of permutations involved. For example, a CPM network with only eight activities, each with two options, will have 2^8 = 256 alternatives. For large problems, exhaustive enumeration is not economically feasible even with very fast computers. This paper presents a new algorithm using linear and integer programming to obtain optimal resource ...
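As a concrete illustration of the combinatorial growth mentioned above (eight activities with two options each already yield 2^8 = 256 alternatives), the brute-force sketch below enumerates every option combination; the option data and the serial-chain duration model are assumptions made only for illustration.

```python
# Illustrative brute-force view of the combinatorial explosion: every activity has a
# small set of (duration, cost) options and one option must be chosen per activity.
from itertools import product

options = {f"act{i}": [(5, 100), (3, 180)] for i in range(8)}   # 2 options each -> 2**8 combos

best = None
for combo in product(*options.values()):          # 256 alternatives for 8 activities
    duration = sum(d for d, _ in combo)            # serial chain assumed for simplicity
    cost = sum(c for _, c in combo)
    if duration <= 34 and (best is None or cost < best[1]):
        best = (duration, cost)

print("combinations examined:", 2 ** len(options))
print("cheapest schedule meeting the 34-day cap (duration, cost):", best)
```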
---
paper_title: A Simple CPM Time-Cost Tradeoff Algorithm
paper_content:
This article describes an algorithm for efficiently shortening the duration of a project when the expected project duration exceeds a predetermined limit. The problem consists of determining which activities to expedite and by what amount. The objective is to minimize the cost of the project. This algorithm is considerably less complex than the analytic methods currently available. Because of its inherent simplicity, the algorithm is ideally suited for hand computation and also is suitable for computer solution. Solutions derived by the algorithm were compared with linear programming results. These comparisons revealed that the algorithm solutions are either (a) equally good or (b) nearly the same as the solutions obtained by more complex analytic methods which require a computer. With this method the CPM time-cost tradeoff problem is solved without access to a computer, thereby making this planning tool available to managers who otherwise would find implementation impractical.
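A hedged sketch of the core move behind such an algorithm, assuming a toy four-activity network: recompute the critical path, crash the cheapest critical activity by one day, and repeat until the target duration is reached. The data and the one-day step are illustrative, not the article's exact procedure.

```python
# Crash-the-cheapest-critical-activity heuristic on an invented network.
acts = {  # duration, remaining crashable days, crash cost per day, predecessors
    "A": {"dur": 4, "crash": 2, "cost": 50, "preds": []},
    "B": {"dur": 6, "crash": 3, "cost": 80, "preds": ["A"]},
    "C": {"dur": 5, "crash": 1, "cost": 30, "preds": ["A"]},
    "D": {"dur": 3, "crash": 1, "cost": 60, "preds": ["B", "C"]},
}

def critical_activities(acts):
    order = ["A", "B", "C", "D"]                     # topological order (assumed known)
    es = {a: 0 for a in acts}                        # earliest starts (forward pass)
    for a in order:
        es[a] = max([es[p] + acts[p]["dur"] for p in acts[a]["preds"]], default=0)
    finish = max(es[a] + acts[a]["dur"] for a in acts)
    lf = {a: finish for a in acts}                   # latest finishes (backward pass)
    for a in reversed(order):
        succs = [s for s in acts if a in acts[s]["preds"]]
        lf[a] = min([lf[s] - acts[s]["dur"] for s in succs], default=finish)
    crit = [a for a in acts if lf[a] - acts[a]["dur"] == es[a]]   # zero total float
    return finish, crit

target = 11
finish, crit = critical_activities(acts)
while finish > target:
    candidates = [a for a in crit if acts[a]["crash"] > 0]
    if not candidates:
        break                                        # cannot compress any further
    a = min(candidates, key=lambda k: acts[k]["cost"])
    acts[a]["dur"] -= 1                              # crash one day at a time
    acts[a]["crash"] -= 1
    finish, crit = critical_activities(acts)
print("compressed project duration:", finish)
```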
---
paper_title: Augmented heuristic algorithm for multi-skilled resource scheduling
paper_content:
Conventional project scheduling is restricted to a single-skilled resource assumption where each worker is assumed to have only one skill. This, in effect, contradicts real-world practice where workers may possess multiple skills and, on several occasions, are assigned to perform tasks for which they are not specialized. Past research has presented a simple heuristic approach for multi-skilled resource scheduling where a project is planned under the assumption that each resource can have more than one skill and resource substitution is allowed. Nevertheless, the approach has presented a resource substitution step where an activity with higher priority can claim any resource regardless of its concurrent activities' resource requirements. Furthermore, the approach is subject to an all-or-nothing resource assignment concept where an activity cannot start, and resources are not needed for that activity at all, unless the required resources of that activity can be completely fulfilled. This research presents an alternative heuristic approach for multi-skilled resource scheduling in an attempt to improve the resource substitution approach. An augmented resource substitution rule and resource-driven task durations are presented to increase the opportunity for activities to start earlier. Case studies are presented to illustrate the improved result of shorter project duration.
---
paper_title: A Network Flow Computation for Project Cost Curves
paper_content:
A network flow method is outlined for solving the linear programming problem of computing the least cost curve for a project composed of many individual jobs, where it is assumed that certain jobs must be finished before others can be started. Each job has an associated crash completion time and normal completion time, and the cost of doing the job varies linearly between these extreme times. Given that the entire project must be completed in a prescribed time interval, it is desired to find job times that minimize the total project cost. The method solves this problem for all feasible time intervals.
---
paper_title: Schedule compression using the direct stiffness method
paper_content:
This paper presents a new method for critical path (CPM) scheduling that optimizes project duration in order to minimize the project total cost. In addition, the method could be used to produce constrained schedules that accommodate contractual completion dates of projects and their milestones. The proposed method is based on the well-known "direct stiffness method" for structural analysis. The method establishes a complete analogy between the structural analysis problem with imposed support settlement and that of project scheduling with imposed target completion date. The project CPM network is replaced by an equivalent structure. The equivalence conditions are established such that when the equivalent structure is compressed by an imposed displacement equal to the schedule compression, the sum of all member forces represents the additional cost required to achieve such compression. To enable a comparison with the currently used methods, an example application from the literature is analyzed using the pr...
---
paper_title: Heuristic scheduling of resource‐constrained, multiple‐mode and repetitive projects
paper_content:
An alternative heuristic method for scheduling repetitive projects in which resources are limited and activities may be executed with multiple modes of resource demands associated with different durations is proposed. Unlike general heuristic methods that separately analyze each competing activity and schedule only one at a time, the proposed heuristic algorithm ranks possible combinations of activities every time and simultaneously schedules all activities in the selected combination leading to minimal project duration. All alternative combinations of activities in consideration of resource constraints, multiple modes and characteristics of the repetitive projects are determined through a permutation tree-based procedure. The heuristic method is implemented based on the corresponding framework. An example is presented to demonstrate the efficiency of the proposed heuristic method. The study is expected to provide an efficient heuristic methodology for solving the project scheduling problem.
---
paper_title: Genetic algorithms for multi-constraint scheduling: an application for the construction industry
paper_content:
A reliable construction schedule is vital for effective co-ordination across supply chains and various trades at the construction work face. According to the lean construction concept, reliability of the schedule can be enhanced through detection and satisfaction of all potential constraints prior to releasing operation assignments. However, it is difficult to implement this concept since current scheduling tools and techniques are fragmented and designed to deal with a limited set of construction constraints. This paper introduces a methodology termed ‘multi-constraint scheduling’ in which four major groups of construction constraints including physical, contract, resource, and information constraints are considered. A Genetic Algorithm (GA) has been developed and used for the multi-constraint optimisation problem. Given multiple constraints such as activity dependency, limited working area, and resource and information readiness, the GA alters tasks’ priorities and construction methods so as to arrive at an optimum or near-optimum set of project duration, cost, and smooth resource profiles. This feature has been practically developed as an embedded macro in MS Project. Several experiments confirmed that GA can provide near-optimum solutions within acceptable searching time (i.e. 5 minutes for 1.92E11 alternatives). Possible improvements to this research are further suggested in the paper.
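A minimal, self-contained sketch of the GA idea described above, assuming invented tasks, a single-crew decoder, and a weighted duration-plus-cost fitness in place of the paper's full constraint model: each gene carries a task priority and a construction-method index, and standard selection, crossover and mutation evolve the plan.

```python
# Toy GA: genes hold (priority, method index); fitness blends makespan and cost.
import random
random.seed(1)

methods = {t: [(6, 100), (4, 160)] for t in "ABCDE"}      # (duration, cost) options
release = {"A": 0, "B": 2, "C": 0, "D": 5, "E": 1}         # information-readiness times

def decode(chrom):
    # one crew: execute tasks in descending priority, respecting release times
    t_now, cost = 0, 0
    for task in sorted(methods, key=lambda k: -chrom[k][0]):
        dur, c = methods[task][chrom[task][1]]
        t_now = max(t_now, release[task]) + dur
        cost += c
    return t_now, cost

def fitness(chrom):                                        # lower is better
    makespan, cost = decode(chrom)
    return 10 * makespan + cost

def random_chrom():
    return {t: (random.random(), random.randrange(2)) for t in methods}

pop = [random_chrom() for _ in range(30)]
for _ in range(60):
    pop.sort(key=fitness)
    parents = pop[:10]                                     # simple elitist truncation
    children = []
    while len(children) < 20:
        a, b = random.sample(parents, 2)
        child = {t: random.choice([a[t], b[t]]) for t in methods}   # uniform crossover
        if random.random() < 0.2:                          # mutate one gene
            t = random.choice(list(methods))
            child[t] = (random.random(), random.randrange(2))
        children.append(child)
    pop = parents + children
best = min(pop, key=fitness)
print("best (makespan, cost):", decode(best))
```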
---
paper_title: GA-BASED MULTICRITERIA OPTIMAL MODEL FOR CONSTRUCTION SCHEDULING
paper_content:
Resources for construction activities are limited in the real construction world. To avoid the waste and shortage of resources on a construction jobsite, scheduling must include resource allocation. A multicriteria computational optimal scheduling model, which integrates the time/cost trade-off model, resource-limited model, and resource leveling model, is proposed. A searching technique using genetic algorithms (GAs) is adopted in the model. Furthermore, the nondominated solutions are found by the multiple attribute decision-making method, technique for order preference by similarity to ideal solution. The model can effectively provide the optimal combination of construction durations, resource amounts, minimum direct project costs, and minimum project duration under the constraint of limited resources.
---
paper_title: Applying Pareto Ranking and Niche Formation to Genetic Algorithm-Based Multiobjective Time-Cost Optimization
paper_content:
Time–cost optimization (TCO) is one of the greatest challenges in construction project planning and control, since the optimization of either time or cost, would usually be at the expense of the other. Although the TCO problem has been extensively examined, many research studies only focused on minimizing the total cost for an early completion. This does not necessarily convey any reward to the contractor. However, with the increasing popularity of alternative project delivery systems, clients and contractors are more concerned about the combined benefits and opportunities of early completion as well as cost savings. In this paper, a genetic algorithms ( GAs ) -driven multiobjective model for TCO is proposed. The model integrates the adaptive weight to balance the priority of each objective according to the performance of the previous “generation.” In addition, the model incorporates Pareto ranking as a selection criterion and the niche formation techniques to improve popularity diversity. Based on the pr...
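The Pareto-ranking ingredient above reduces, in its simplest form, to filtering out dominated (time, cost) schedules; the candidate list below is invented, and the adaptive weights and niche-formation steps of the paper are omitted.

```python
# Keep only the non-dominated (time, cost) schedules, minimizing both objectives.
def non_dominated(points):
    front = []
    for p in points:
        if not any(q[0] <= p[0] and q[1] <= p[1] and q != p for q in points):
            front.append(p)
    return front

candidates = [(60, 900), (55, 1100), (62, 880), (55, 1050), (58, 1200)]
print(non_dominated(candidates))   # -> [(60, 900), (62, 880), (55, 1050)]
```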
---
paper_title: Comparing Schedule Generation Schemes in Resource-Constrained Project Scheduling Using Elitist Genetic Algorithm
paper_content:
An issue has arisen with regard to which of the schedule generation schemes will perform better for an arbitrary instance of the resource-constrained project scheduling problem (RCPSP), which is one of the most challenging areas in construction engineering and management. No general answer has been given to this issue due to the different mechanisms between the serial scheme and the parallel scheme. In an effort to address this issue, this paper compares the two schemes using a permutation-based Elitist genetic algorithm for the RCPSP. Computational experiments are presented with multiple standard problems. From the results of a paired difference experiment, the algorithm using the serial scheme provides better solutions than the one using the parallel scheme. The results also show that the algorithm with the parallel scheme takes longer to solve each problem than the one using the serial scheme.
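For reference, a bare-bones serial schedule generation scheme (one of the two schemes compared above): activities are taken in priority order and each is started at the earliest time satisfying precedence and a single renewable-resource limit. The network, capacity, and priority list are illustrative assumptions.

```python
# Serial SGS: decode a precedence-feasible priority list into a resource-feasible schedule.
acts = {"A": {"dur": 3, "req": 2, "preds": []},
        "B": {"dur": 4, "req": 3, "preds": ["A"]},
        "C": {"dur": 2, "req": 2, "preds": ["A"]},
        "D": {"dur": 3, "req": 2, "preds": ["B", "C"]}}
CAP = 4                                    # resource units available per period
priority_list = ["A", "B", "C", "D"]       # assumed precedence-feasible

usage = {}                                 # period -> units in use
start = {}
for a in priority_list:
    t = max((start[p] + acts[p]["dur"] for p in acts[a]["preds"]), default=0)
    while any(usage.get(t + k, 0) + acts[a]["req"] > CAP for k in range(acts[a]["dur"])):
        t += 1                             # slide right until the resource fits
    start[a] = t
    for k in range(acts[a]["dur"]):
        usage[t + k] = usage.get(t + k, 0) + acts[a]["req"]

print(start)      # e.g. {'A': 0, 'B': 3, 'C': 7, 'D': 9} with the data above
```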
---
paper_title: Intelligent optimization for project scheduling of the first mining face in coal mining
paper_content:
In this paper, the intelligent optimization methods including genetic algorithm (GA), particle swarm optimization (PSO) and modified particle swarm optimization (MPSO) are used in optimizing the project scheduling of the first mining face of the second region of the fifth Ping'an coal mine in China. The result of the optimization provides essential information for management and decision-making by governors and builders. The process of optimization contains two parts: the first part is obtaining the time parameters of each process and the network graph of the first mining face in the second region by the PERT (program evaluation and review technique) method based on the raw data. The other part is a second optimization that maximizes NPV (net present value) based on the network graph. The starting dates of all processes are the decision variables. The process order and time are the constraints. The optimization result shows that MPSO is better than GA and PSO and the optimized NPV is 14,974,000 RMB more than the original plan.
---
paper_title: Construction Resource Scheduling with Genetic Algorithms
paper_content:
A new approach for resource scheduling using genetic algorithms (GAs) is presented here. The methodology does not depend on any set of heuristic rules. Instead, its strength lies in the selection and recombination tasks of the GA to learn the domain of the specific project network. By this it is able to evolve improved schedules with respect to the objective function. Further, the model is general enough to encompass both resource leveling and limited resource allocation problems unlike existing methods, which are class-dependent. In this paper, the design and mechanisms of the model are described. Case studies with standard test problems are presented to demonstrate the performance of the GA-scheduler when compared against heuristic methods under various resource availability profiles. Results obtained with the proposed model do not indicate an exponential growth in the computational time required for larger problems.
---
paper_title: USING IMPROVED GENETIC ALGORITHMS TO FACILITATE TIME-COST OPTIMIZATION
paper_content:
Time-cost optimization problems in construction projects are characterized by the constraints on the time and cost requirements. Such problems are difficult to solve because they do not have unique solutions. Typically, if a project is running behind the scheduled plan, one option is to compress some activities on the critical path so that the target completion time can be met. As combinatorial optimization problems, time-cost optimization problems are suitable for applying genetic algorithms (GAs). However, basic GAs may involve very large computational costs. This paper presents several improvements to basic GAs and demonstrates how these improved GAs reduce computational costs and significantly increase the efficiency in searching for optimal solutions.
---
paper_title: Comparison between Genetic Algorithms and Particle Swarm Optimization
paper_content:
This paper compares two evolutionary computation paradigms: genetic algorithms and particle swarm optimization. The operators of each paradigm are reviewed, focusing on how each affects search behavior in the problem space. The goals of the paper are to provide additional insights into how each paradigm works, and to suggest ways in which performance might be improved by incorporating features from one paradigm into the other.
---
paper_title: Nondominated Archiving Multicolony Ant Algorithm in Time-Cost Trade-Off Optimization
paper_content:
Time–cost trade-off analysis is addressed as an important aspect of any construction project planning and control. Nonexistence of a unique solution makes the time–cost trade-off problems very difficult to tackle. As a combinatorial optimization problem one may apply heuristics or mathematical programming techniques to solve time–cost trade-off problems. In this paper, a new multicolony ant algorithm is developed and used to solve the time–cost multiobjective optimization problem. Pareto archiving together with innovative solution exchange strategy are introduced which are highly efficient in developing the Pareto front and set of nondominated solutions in a time–cost optimization problem. An 18-activity time–cost problem is used to evaluate the performance of the proposed algorithm. Results show that the proposed algorithm outperforms the well-known weighted method to develop the nondominated solutions in a combinatorial optimization problem. The paper is more relevant to researchers who are interested i...
---
paper_title: Using Machine Learning and GA to Solve Time-Cost Trade-Off Problems
paper_content:
Existing genetic algorithms (GA) based systems for solving time-cost trade-off problems suffer from two limitations. First, these systems require the user to manually craft the time-cost curves for formulating the objective functions. Second, these systems only deal with linear time-cost relationships. To overcome these limitations, this paper presents a computer system called MLGAS (Machine Learning and Genetic Algorithms based System), which integrates a machine learning method with GA. A quadratic template is introduced to capture the nonlinearity of time-cost relationships. The machine learning method automatically generates the quadratic time-cost curves from historical data and also measures the credibility of each quadratic time-cost curve. The quadratic curves are then used to formulate the objective function that can be solved by the GA. Several improvements are made to enhance the capacity of GA to prevent premature convergence. Comparisons of MLGAS with an experienced project manager indicate that MLGAS generates better solutions to nonlinear time-cost trade-off problems.
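The curve-fitting step described above can be approximated with an ordinary quadratic least-squares fit; the historical (duration, cost) records below are invented and MLGAS's credibility measure is not reproduced.

```python
# Fit a quadratic time-cost curve from historical (duration, cost) records.
import numpy as np

durations = np.array([10, 12, 14, 16, 18], dtype=float)
costs = np.array([980, 760, 640, 600, 610], dtype=float)

a, b, c = np.polyfit(durations, costs, deg=2)       # cost(d) ~ a*d^2 + b*d + c
r = np.corrcoef(np.polyval([a, b, c], durations), costs)[0, 1]
print(f"cost(d) approx {a:.1f}*d^2 + {b:.1f}*d + {c:.1f} (fit correlation {r:.3f})")
```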
---
paper_title: Optimization of Resource Allocation and Leveling Using Genetic Algorithms
paper_content:
Resource allocation and leveling are among the top challenges in project management. Due to the complexity of projects, resource allocation and leveling have been dealt with as two distinct subproblems solved mainly using heuristic procedures that cannot guarantee optimum solutions. In this paper, improvements are proposed to resource allocation and leveling heuristics, and the Genetic Algorithms (GAs) technique is used to search for near-optimum solution, considering both aspects simultaneously. In the improved heuristics, random priorities are introduced into selected tasks and their impact on the schedule is monitored. The GA procedure then searches for an optimum set of tasks' priorities that produces shorter project duration and better-leveled resource profiles. One major advantage of the procedure is its simple applicability within commercial project management software systems to improve their performance. With a widely used system as an example, a macro program is written to automate the GA proced...
---
paper_title: Use of Genetic Algorithms in Resource Scheduling of Construction Projects
paper_content:
This paper presents an augmented Lagrangian genetic algorithm model for resource scheduling. The algorithm considers scheduling characteristics that were ignored in prior research. Previous resource scheduling formulations have primarily focused on project duration minimization. Furthermore, resource leveling and resource-constrained scheduling have traditionally been solved independently. The model presented here considers all precedence relationships, multiple crew strategies, total project cost minimization, and time-cost trade-off. In the new formulation, resource leveling and resource-constrained scheduling are performed simultaneously. The model presented uses the quadratic penalty function to transform the resource-scheduling problem to an unconstrained one. The algorithm is general and can be applied to a broad class of optimization problems. An illustrative example is presented to demonstrate the performance of the proposed method.
---
paper_title: Multimode Project Scheduling Based on Particle Swarm Optimization
paper_content:
This paper introduces a methodology for solving the multimode resource-constrained project scheduling problem (MRCPSP) based on particle swarm optimization (PSO). The MRCPSP considers both renewable and nonrenewable resources that have not been addressed efficiently in the construction field. The framework of the PSO-based methodology is developed with the objective of minimizing project duration. A particle representation formulation is proposed to represent the potential solution to the MRCPSP in terms of priority combination and mode combination for activities. Each particle-represented solution should be checked against the nonrenewable resource infeasibility and will be handled by adjusting the mode combination. The feasible particle-represented solution is transformed to a schedule through a serial generation scheme. Experimental analyses are presented to investigate the performance of the proposed methodology. Comparisons with other methods show that the PSO method is equally efficient at solving the MRCPSP.
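A compact sketch of the canonical PSO velocity and position update applied to a vector of activity priorities; the mode handling, infeasibility repair, and serial decoding described in the paper are omitted, and the toy objective simply rewards priorities close to an assumed target vector.

```python
# Canonical PSO update on a priority vector with a toy objective.
import random
random.seed(0)

TARGET = [0.9, 0.1, 0.6, 0.3]                  # pretend "good" priority vector
def cost(x):                                    # stand-in for project duration
    return sum((xi - ti) ** 2 for xi, ti in zip(x, TARGET))

W, C1, C2 = 0.7, 1.5, 1.5                       # inertia and acceleration weights
swarm = []
for _ in range(15):
    pos = [random.random() for _ in TARGET]
    swarm.append({"pos": pos, "vel": [0.0] * len(TARGET), "best": pos[:]})
gbest = min((p["pos"][:] for p in swarm), key=cost)

for _ in range(100):
    for p in swarm:
        for i in range(len(TARGET)):
            r1, r2 = random.random(), random.random()
            p["vel"][i] = (W * p["vel"][i]
                           + C1 * r1 * (p["best"][i] - p["pos"][i])
                           + C2 * r2 * (gbest[i] - p["pos"][i]))
            p["pos"][i] = min(1.0, max(0.0, p["pos"][i] + p["vel"][i]))
        if cost(p["pos"]) < cost(p["best"]):
            p["best"] = p["pos"][:]
        if cost(p["best"]) < cost(gbest):
            gbest = p["best"][:]
print("best priorities found:", [round(v, 2) for v in gbest])
```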
---
paper_title: Optimizing Construction Time and Cost Using Ant Colony Optimization Approach
paper_content:
Time and cost are the most important factors to be considered in every construction project. In order to maximize the return, both the client and contractor would strive to optimize the project duration and cost concurrently. Over the years, many research studies have been conducted to model the time–cost relationships, and the modeling techniques range from the heuristic methods and mathematical approaches to genetic algorithms. Despite that, previous studies often assumed the time being constant leaving the analyses based purely on a single objective—cost. Acknowledging the significance of time–cost optimization, an evolutionary-based optimization algorithm known as ant colony optimization is applied to solve the multiobjective time–cost optimization problems. In this paper, the basic mechanism of the proposed model is unveiled. Having developed a program in the Visual Basic platform, tests are conducted to compare the performance of the proposed model against other analytical methods previously used fo...
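A small ant-colony sketch in the spirit of the model above: pheromone biases the choice of one (duration, cost) option per activity, better ants deposit more pheromone, and evaporation forgets poor choices. The options, weights, and parameters are illustrative assumptions only.

```python
# Toy ACO for picking one (duration, cost) option per activity.
import random
random.seed(3)

options = {"A": [(5, 300), (3, 520)], "B": [(6, 250), (4, 400)], "C": [(4, 200), (2, 380)]}
tau = {(a, i): 1.0 for a in options for i in range(2)}     # pheromone trails
RHO, Q = 0.2, 100.0                                        # evaporation rate, deposit scale

def score(path):                                           # weighted time + cost (serial chain)
    dur = sum(options[a][i][0] for a, i in path.items())
    cost = sum(options[a][i][1] for a, i in path.items())
    return 50 * dur + cost

best = None
for _ in range(80):                                        # iterations
    ants = []
    for _ in range(10):                                    # ants per iteration
        path = {a: random.choices([0, 1], weights=[tau[(a, 0)], tau[(a, 1)]])[0]
                for a in options}
        ants.append(path)
        if best is None or score(path) < score(best):
            best = dict(path)
    for key in tau:                                        # evaporation
        tau[key] *= (1 - RHO)
    for path in ants:                                      # deposit, stronger for better ants
        for a, i in path.items():
            tau[(a, i)] += Q / score(path)
print("best option per activity:", best, "score:", score(best))
```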
---
paper_title: Multi Objective Optimization of Time Cost Quality Quantity Using Multi Colony Ant Algorithm
paper_content:
Research on a new metaheuristic for optimization is often initially focused on proof-of-concept applications. Time and cost are the most important factors to be considered in every construction project. Over the years, many research studies have been conducted to model the time-cost relationship. Construction planners often face the challenge of optimum resource utilization to compromise between different and usually conflicting aspects of projects. Time, cost and quality of project delivery are among the crucial aspects of each project. Ant colony optimization, which was introduced in the early 1990s as a novel technique for solving hard combinatorial optimization problems, finds itself currently at this point of its life cycle. In this paper, a new metaheuristic multi-colony ant algorithm is developed for the optimization of the three objectives of time, cost and quality, with quantity as a trade-off. The model is also applied to the two-objective time-cost trade-off problem and the results are compared to those of the existing approaches.
---
paper_title: Applying a Genetic Algorithm-Based Multiobjective Approach for Time-Cost Optimization
paper_content:
Reducing both project cost and time (duration) is critical in a competitive environment. However, a trade-off between project time and cost is required. This in turn requires contracting organizations to carefully evaluate various approaches to attaining an optimal time-cost equilibrium. Although several analytical models have been developed for time-cost optimization (TCO), they mainly focus on projects where the contract duration is fixed. The optimization objective in those cases is therefore restricted to identifying the minimum total cost only. With the increasing popularity of alternative project delivery systems, clients and contractors are targeting the increased benefits and opportunities of seeking an earlier project completion. The multiobjective model for TCO proposed in this paper is powered by techniques using genetic algorithms (GAs). The proposed model integrates the adaptive weights derived from previous generations, and induces a search pressure toward an ideal point. The concept of the GA-based multiobjective TCO model is illustrated through a simple manual simulation, and the results indicate that the model could assist decision-makers in concurrently arriving at an optimal project duration and total cost.
---
paper_title: Optimization of construction time-cost trade-off analysis using genetic algorithms
paper_content:
In the management of a construction project, the project duration can often be compressed by accelerating some of its activities at an additional expense. This is the so-called time-cost trade-off ...
---
paper_title: Hybrid of genetic algorithm and simulated annealing for multiple project scheduling with multiple resource constraints
paper_content:
Since scheduling of multiple projects is a complex and time-consuming task, a large number of heuristic rules have been proposed by researchers for such problems. However, each of these rules is usually appropriate for only one specific type of problem. In view of this, a hybrid of genetic algorithm and simulated annealing (GA-SA Hybrid) is proposed in this paper for generic multi-project scheduling problems with multiple resource constraints. The proposed GA-SA Hybrid is compared to the modified simulated annealing method (MSA), which is more powerful than genetic algorithm (GA) and simulated annealing (SA). As both GA and SA are generic search methods, the GA-SA Hybrid is also a generic search method. The random-search feature of GA, SA and GA-SA Hybrid makes them applicable to almost all kinds of optimization problems. In general, these methods are more effective than most heuristic rules. Three test projects and three real projects are presented to show the advantage of the proposed GA-SA Hybrid method. It can be seen that GA-SA Hybrid has better performance than GA, SA, MSA, and some most popular heuristic methods.
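The simulated-annealing ingredient of such a hybrid, shown in isolation and under toy assumptions: a worse neighbouring schedule is accepted with probability exp(-delta/T) while the temperature T cools geometrically. The neighbour move and cost function stand in for a real multi-project scheduler.

```python
# Simulated-annealing acceptance step with a toy cost landscape.
import math
import random
random.seed(7)

def cost(x):                                    # toy stand-in for total project delay
    return (x - 3.7) ** 2 + 2

x, T = 10.0, 5.0
while T > 1e-3:
    cand = x + random.uniform(-1, 1)            # neighbouring solution
    delta = cost(cand) - cost(x)
    if delta < 0 or random.random() < math.exp(-delta / T):
        x = cand                                # accept improvements, sometimes accept worse
    T *= 0.99                                   # geometric cooling schedule
print(f"near-optimal x = {x:.2f}, cost = {cost(x):.2f}")
```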
---
paper_title: A genetic algorithm-based method for scheduling repetitive construction projects
paper_content:
This paper develops a new method for scheduling repetitive construction projects with several objectives such as project duration, project cost, or both of them. The method deals with constraints of precedence relationships between activities, and constraints of resource work continuity. The method considers different attributes of activities (such as activities which allow or do not allow interruptions), and different relationships between direct costs and durations for activities (such as linear, non-linear, continuous, or discrete relationship) to provide a satisfactory schedule. In order to minimize the mentioned objectives, the proposed method finds a set of suitable durations for activities by genetic algorithm, and then determines the suitable start times of these activities by a scheduling algorithm. The bridge construction example from literature is analyzed to validate the proposed method, and another example is also given to illustrate its new capability in project planning.
---
paper_title: Using genetic algorithms to solve construction time-cost trade-off problems
paper_content:
Time-cost trade-off analysis is one of the most important aspects of construction project planning and control. There are trade-offs between time and cost to complete the activities of a project; in general, the less expensive the resources used, the longer it takes to complete an activity. Using critical path method (CPM), the overall project cost can be reduced by using less expensive resources for noncritical activities without impacting the project duration. Existing methods for time-cost trade-off analysis focus on using heuristics or mathematical programming. These methods, however, are not efficient enough to solve large-scale CPM networks (hundreds of activities or more). Analogous to natural selection and genetics in reproduction, genetic algorithms (GAs) have been successfully adopted to solve many science and engineering problems and have proven to be an efficient means for searching optimal solutions in a large problem domain. This paper presents: (1) an algorithm based on the principles of GAs for construction time-cost trade-off optimization; and (2) a computer program that can execute the algorithm efficiently.
---
paper_title: A new exact penalty function method for continuous inequality constrained optimization problems
paper_content:
In this paper, a computational approach based on a new exact penalty function method is devised for solving a class of continuous inequality constrained optimization problems. The continuous inequality constraints are first approximated by smooth functions in integral form. Then, we construct a new exact penalty function, where the summation of all these approximate smooth functions in integral form, called the constraint violation, is appended to the objective function. In this way, we obtain a sequence of approximate unconstrained optimization problems. It is shown that if the value of the penalty parameter is sufficiently large, then any local minimizer of the corresponding unconstrained optimization problem is a local minimizer of the original problem. For illustration, three examples are solved using the proposed method. From the solutions obtained, we observe that the values of their objective functions are amongst the smallest when compared with those obtained by other existing methods available in the literature. More importantly, our method finds solutions which satisfy the continuous inequality constraints.
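The penalty idea in miniature, assuming a single inequality constraint and a plain L1 penalty rather than the paper's smoothed integral construction: the weighted constraint violation is appended to the objective, and a sufficiently large penalty parameter recovers the constrained minimiser.

```python
# Minimise (x - 2)^2 subject to g(x) = x - 1 <= 0 via a penalized objective.
def objective(x):
    return (x - 2.0) ** 2

def violation(x):                      # amount by which g(x) = x - 1 <= 0 is violated
    return max(0.0, x - 1.0)

def penalized(x, sigma):
    return objective(x) + sigma * violation(x)

xs = [i / 1000 for i in range(-1000, 3001)]     # crude grid search over [-1, 3]
for sigma in (0.5, 2.0, 10.0):                  # larger penalty -> feasible minimiser
    best = min(xs, key=lambda x: penalized(x, sigma))
    print(f"sigma={sigma:5.1f}  minimiser x={best:.3f}  violation={violation(best):.3f}")
```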
---
|
Title: A Review of Methods and Algorithms for Optimizing Construction Scheduling
Section 1: Introduction
Description 1: Provide an overview of construction scheduling, the complexities involved, the importance of resource selection, and the constraints that must be considered. Mention the evolution of methods and algorithms over the last 20 years.
Section 2: Solving CSO problems
Description 2: Discuss different methods for solving construction scheduling optimization (CSO) problems, classifying them into mathematical, heuristic, and metaheuristic methods.
Section 3: Mathematical methods
Description 3: Describe various mathematical methods including the Critical Path Method (CPM), Integer Programming (IP), Linear Programming (LP), and Dynamic Programming, along with their applications, advantages, and limitations in construction scheduling.
Section 4: Heuristic methods
Description 4: Communicate the principles of heuristic methods for construction scheduling, such as Fondahl's method, Structural model method, and Siemens approximation. Highlight their simplicity, problem dependency, and limitations.
Section 5: Metaheuristic method
Description 5: Explain metaheuristic methods such as Genetic Algorithm (GA), Ant Colony Optimization (ACO), and Particle Swarm Optimization (PSO). Provide detailed exploration of how these methods are inspired by natural processes and their application in CSO problems.
Section 6: Conclusion and future research
Description 6: Summarize the discussed methods and highlight the need for global optimality in solutions. Suggest directions for future research including multi-objective construction scheduling optimization covering time, cost, risk, and quality.
|
A Survey on Wireless Ad Hoc Networks
| 10 |
---
paper_title: Floor acquisition multiple access (FAMA) for packet-radio networks
paper_content:
A family of medium access control protocols for single-channel packet radio networks is specified and analyzed. These protocols are based on a new channel access discipline called floor acquisition multiple access (FAMA), which consists of both carrier sensing and a collision-avoidance dialogue between a source and the intended receiver of a packet. Control of the channel (the floor) is assigned to at most one station in the network at any given time, and this station is guaranteed to be able to transmit one or more data packets to different destinations with no collision with transmissions from other stations. The minimum length needed in control packets to acquire the floor is specified as a function of the channel propagation time. The medium access collision avoidance (MACA) protocol proposed by Karn and variants of CSMA based on collision avoidance are shown to be variants of FAMA protocols when control packets last long enough compared to the channel propagation delay. The throughput of FAMA protocols is analyzed and compared with the throughput of non-persistent CSMA. This analysis shows that using carrier sensing as an integral part of the floor acquisition strategy provides the benefits of MACA in the presence of hidden terminals, and can provide a throughput comparable to, or better than, that of non-persistent CSMA when no hidden terminals exist.
---
paper_title: Mobile Ad Hoc Networking: Imperatives and Challenges
paper_content:
A mobile ad hoc network (MANET), sometimes called a mobile mesh network, is a self-configuring network of mobile devices connected by wireless links. Ad hoc networks are a new wireless networking paradigm for mobile hosts. Unlike traditional mobile wireless networks, ad hoc networks do not rely on any fixed infrastructure. Instead, hosts rely on each other to keep the network connected. They represent complex distributed systems that comprise wireless mobile nodes that can freely and dynamically self-organize into arbitrary and temporary, "ad-hoc" network topologies, allowing people and devices to seamlessly internetwork in areas with no pre-existing communication infrastructure. The ad hoc networking concept is not a new one, having been around in various forms for over 20 years. Traditionally, tactical networks have been the only communication networking application that followed the ad hoc paradigm. Recently, the introduction of new technologies such as Bluetooth, IEEE 802.11 and HiperLAN is helping enable eventual commercial MANET deployments outside the military domain. These recent evolutions have been generating a renewed and growing interest in the research and development of MANETs. So, we would like to present the challenges and imperatives that MANETs are facing.
---
paper_title: Medium Access Control protocols for ad hoc wireless networks : A survey
paper_content:
Studies of ad hoc wireless networks are a relatively new field gaining more popularity for various new applications. In these networks, the Medium Access Control (MAC) protocols are responsible for coordinating the access from active nodes. These protocols are of significant importance since the wireless communication channel is inherently prone to errors and unique problems such as the hidden-terminal problem, the exposed-terminal problem, and signal fading effects. Although a lot of research has been conducted on MAC protocols, the various issues involved have mostly been presented in isolation of each other. We therefore make an attempt to present a comprehensive survey of major schemes, integrating various related issues and challenges with a view to providing a big-picture outlook to this vast area. We present a classification of MAC protocols and their brief description, based on their operating principles and underlying features. In conclusion, we present a brief summary of key ideas and a general direction for future work.
---
paper_title: MACAW: a media access protocol for wireless LAN's
paper_content:
In recent years, a wide variety of mobile computing devices has emerged, including portables, palmtops, and personal digital assistants. Providing adequate network connectivity for these devices will require a new generation of wireless LAN technology. In this paper we study media access protocols for a single channel wireless LAN being developed at Xerox Corporation's Palo Alto Research Center. We start with the MACA media access protocol first proposed by Karn [9] and later refined by Biba [3] which uses an RTS-CTS-DATA packet exchange and binary exponential back-off. Using packet-level simulations, we examine various performance and design issues in such protocols. Our analysis leads to a new protocol, MACAW, which uses an RTS-CTS-DS-DATA-ACK message exchange and includes a significantly different backoff algorithm.
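The abstract notes a significantly different backoff algorithm; MACAW is usually credited with a MILD rule (multiplicative increase, linear decrease). The sketch below contrasts it with 802.11-style binary exponential backoff, with the factor 1.5, the step of 1, and the window bounds chosen as illustrative assumptions.

```python
# MILD backoff (commonly attributed to MACAW) next to binary exponential backoff.
BO_MIN, BO_MAX = 2, 64

def mild(bo, success):
    return max(BO_MIN, bo - 1) if success else min(BO_MAX, int(bo * 1.5))

def binary_exponential(bo, success):
    return BO_MIN if success else min(BO_MAX, bo * 2)

bo_mild = bo_beb = BO_MIN
for outcome in [False, False, False, True, True, False]:   # collision/success trace
    bo_mild = mild(bo_mild, outcome)
    bo_beb = binary_exponential(bo_beb, outcome)
    print(f"success={outcome!s:5}  MILD window={bo_mild:3d}  BEB window={bo_beb:3d}")
```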
---
paper_title: Wireless medium access control protocols
paper_content:
Technological advances, coupled with the flexibility and mobility of wireless systems, are the driving force behind the Anyone, Anywhere, Anytime paradigm of networking. At the same time, we see a convergence of the telephone, cable and data networks into a unified network that supports multimedia and real-time applications like voice and video in addition to data. Medium access control protocols define rules for orderly access to the shared medium and play a crucial role in the efficient and fair sharing of scarce wireless bandwidth. The nature of the wireless channel brings new issues like location-dependent carrier sensing, time varying channel and burst errors. Low power requirements and half duplex operation of the wireless systems add to the challenge. Wireless MAC protocols have been heavily researched and a plethora of protocols have been proposed. Protocols have been devised for different types of architectures, different applications and different media. This survey discusses the challenges in the design of wireless MAC protocols, classifies them based on architecture and mode of operation, and describes their relative performance and application domains in which they are best deployed.
---
paper_title: Receiver-initiated busy-tone multiple access in packet radio networks
paper_content:
The ALOHA and Carrier Sense Multiple Access (CSMA) protocols have been proposed for packet radio networks (PRN). However, CSMA/CD, which gives superior performance and has been successfully applied in local area networks, cannot be readily applied in a PRN since the locally generated signals will overwhelm a remote transmission, rendering it impossible to tell whether a collision has occurred or not. In addition, CSMA and CSMA/CD suffer from the “hidden node” problem in a multihop PRN. In this paper, we develop the Receiver-Initiated Busy-Tone Multiple Access Protocol to resolve these difficulties. Both fully connected and multihop networks are studied. The busy tone serves as an acknowledgement and prevents conflicting transmissions from other nodes, including “hidden nodes”.
---
paper_title: Hop-reservation multiple access (HRMA) for ad-hoc networks
paper_content:
A new multichannel MAC protocol called hop-reservation multiple access (HRMA) for wireless ad-hoc networks (multi-hop packet radio networks) is introduced, specified and analyzed. HRMA is based on simple half-duplex, very slow frequency-hopping spread spectrum (FHSS) radios and takes advantage of the time synchronization necessary for frequency-hopping. HRMA allows a pair of communicating nodes to reserve a frequency hop using a reservation and handshake mechanism that guarantee collision-free data transmission in the presence of hidden terminals. We analyze the throughput achieved in HRMA for the case of a hypercube network topology assuming variable-length packets, and compare it against the multichannel slotted ALOHA protocol, which represents the current practice of MAC protocols in commercial ad-hoc networks based on spread spectrum radios, such as Metricom's Ricochet system. The numerical results show that HRMA can achieve much higher throughput than multichannel slotted ALOHA within the traffic-load ranges of interest, especially when the average packet length is large compared to the duration of a dwell time in the frequency hopping sequence, in which case the maximum throughput of HRMA is close to the maximum possible value.
---
paper_title: Dual Busy Tone Multiple Access (DBTMA) - A Multiple Access Control Scheme for Ad Hoc Networks
paper_content:
In ad hoc networks, the hidden- and the exposed-terminal problems can severely reduce the network capacity on the MAC layer. To address these problems, the ready-to-send and clear-to-send (RTS/CTS) dialogue has been proposed in the literature. However, MAC schemes using only the RTS/CTS dialogue cannot completely solve the hidden and the exposed terminal problems, as pure "packet sensing" MAC schemes are not safe even in fully connected networks. We propose a new MAC protocol, termed the dual busy tone multiple access (DBTMA) scheme. The operation of the DBTMA protocol is based on the RTS packet and two narrow-bandwidth, out-of-band busy tones. With the use of the RTS packet and the receive busy tone, which is set up by the receiver, our scheme completely solves the hidden- and the exposed-terminal problems. The busy tone, which is set up by the transmitter, provides protection for the RTS packets, increasing the probability of successful RTS reception and, consequently, increasing the throughput. This paper outlines the operation rules of the DBTMA scheme and analyzes its performance. Simulation results are also provided to support the analytical results. It is concluded that the DBTMA protocol is superior to other schemes that rely on the RTS/CTS dialogue on a single channel or to those that rely on a single busy tone. As a point of reference, the DBTMA scheme out-performs FAMA-NCS by 20-40% in our simulations using the network topologies borrowed from the FAMA-NCS paper. In an ad hoc network with a large coverage area, DBTMA achieves performance gain of 140% over FAMA-NCS and performance gain of 20% over RI-BTMA.
---
paper_title: A multichannel CSMA MAC protocol for multihop wireless networks
paper_content:
We describe a new carrier-sense multiple access (CSMA) protocol for multihop wireless networks, sometimes also called ad hoc networks. The CSMA protocol divides the available bandwidth into several channels and selects an idle channel randomly for packet transmission. It also employs a notion of "soft" channel reservation as it gives preference to the channel that was used for the last successful transmission. We show via simulations that this multichannel CSMA protocol provides a higher throughput compared to its single channel counterpart by reducing the packet loss due to collisions. We also show that the use of channel reservation provides better performance than multichannel CSMA with purely random idle channel selection.
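The soft-reservation rule described above, in miniature: reuse the channel of the last successful transmission if it is currently idle, otherwise pick a random idle channel. The channel states and per-node bookkeeping are invented for illustration.

```python
# Soft channel reservation: prefer the previously successful channel when idle.
import random
random.seed(2)

last_success = {"nodeA": 2}                     # per-node soft reservation

def pick_channel(node, idle_channels):
    preferred = last_success.get(node)
    if preferred in idle_channels:              # honour the soft reservation
        return preferred
    return random.choice(sorted(idle_channels)) if idle_channels else None

print(pick_channel("nodeA", {0, 2, 3}))         # -> 2 (soft reservation honoured)
print(pick_channel("nodeA", {0, 1, 3}))         # -> a random idle channel instead
```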
---
paper_title: Architecture and algorithms for an IEEE 802.11-based multi-channel wireless mesh network
paper_content:
Even though multiple non-overlapped channels exist in the 2.4 GHz and 5 GHz spectrum, most IEEE 802.11-based multi-hop ad hoc networks today use only a single channel. As a result, these networks can rarely fully exploit the aggregate bandwidth available in the radio spectrum provisioned by the standards. This prevents them from being used as an ISP's wireless last-mile access network or as a wireless enterprise backbone network. In this paper, we propose a multi-channel wireless mesh network (WMN) architecture (called Hyacinth) that equips each mesh network node with multiple 802.11 network interface cards (NICs). The central design issues of this multi-channel WMN architecture are channel assignment and routing. We show that intelligent channel assignment is critical to Hyacinth's performance, present distributed algorithms that utilize only local traffic load information to dynamically assign channels and to route packets, and compare their performance against a centralized algorithm that performs the same functions. Through an extensive simulation study, we show that even with just 2 NICs on each node, it is possible to improve the network throughput by a factor of 6 to 7 when compared with the conventional single-channel ad hoc network architecture. We also describe and evaluate a 9-node Hyacinth prototype that is built using commodity PCs, each equipped with two 802.11a NICs.
---
paper_title: An energy efficient MAC protocol for wireless LANs
paper_content:
This paper presents an optimization of the power saving mechanism in the Distributed Coordination Function (DCF) in the IEEE 802.11 standard. In the IEEE 802.11 power saving mode specified for DCF, time is divided into so-called beacon intervals. At the start of each beacon interval, each node in the power saving mode periodically wakes up for a duration called the ATIM window. The nodes are required to be synchronized to ensure that all nodes wake up at the same time. During the ATIM window, the nodes exchange control packets to determine whether they need to stay awake for the rest of the beacon interval. The size of the ATIM window has a significant impact on energy saving and throughput achieved by the nodes. This paper proposes an adaptive mechanism to dynamically choose a suitable ATIM window size. We also allow the nodes to stay awake for only a fraction of the beacon interval following the ATIM window. On the other hand, IEEE 802.11 DCF mode requires the nodes to stay awake either for the entire beacon interval following the ATIM window or none at all. Simulation results show that the proposed approach outperforms the IEEE 802.11 power saving mechanism in terms of throughput and the amount of energy consumed.
---
paper_title: PAMAS—power aware multi-access protocol with signalling for ad hoc networks
paper_content:
In this paper we develop a new multiaccess protocol for ad hoc radio networks. The protocol is based on the original MACA protocol with the addition of a separate signalling channel. The unique feature of our protocol is that it conserves battery power at nodes by intelligently powering off nodes that are not actively transmitting or receiving packets. The manner in which nodes power themselves off does not influence the delay or throughput characteristics of our protocol. We illustrate the power conserving behavior of PAMAS via extensive simulations performed over ad hoc networks containing 10-20 nodes. Our results indicate that power savings of between 10% and 70% are attainable in most systems. Finally, we discuss how the idea of power awareness can be built into other multiaccess protocols as well.
---
paper_title: A power control MAC protocol for ad hoc networks
paper_content:
This paper presents a power control MAC protocol based on the IEEE 802.11 standard. Several researchers have proposed a simple modification of IEEE 802.11 to incorporate power control. The main idea of these power control schemes is to use different power levels for RTS-CTS and DATA-ACK. Specifically, maximum transmit power is used for RTS-CTS, and the minimum required transmit power is used for DATA-ACK transmissions in order to save energy. However, we show that this scheme can degrade network throughput and can result in higher energy consumption than using IEEE 802.11 without power control. We propose an improved power control protocol which does not degrade throughput and yields energy saving.
---
paper_title: Analyzing the Energy Consumption of IEEE 802.11 Ad Hoc Networks
paper_content:
This paper analyzes the energy consumption of ad hoc nodes using IEEE 802.11 interfaces. Our objective is to provide theoretical limits on the lifetime gains that can be achieved by different power saving techniques proposed in the literature. The evaluation takes into account the properties of the medium access protocol and the process of forwarding packets in ad hoc mode. The key point is to determine the node lifetime based on its average power consumption. The average power consumption is estimated considering how long the node remains sleeping, idle, receiving, or transmitting.
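The lifetime estimate sketched above reduces to a weighted average of the radio's power states; the state powers, time fractions, and battery budget below are typical-looking values assumed for illustration, not measurements from the paper.

```python
# Node lifetime from the time-weighted average of radio power states.
power_mw = {"transmit": 1400.0, "receive": 1000.0, "idle": 830.0, "sleep": 130.0}
time_fraction = {"transmit": 0.05, "receive": 0.10, "idle": 0.80, "sleep": 0.05}

avg_power_mw = sum(power_mw[s] * time_fraction[s] for s in power_mw)
battery_mwh = 5000.0                               # assumed battery energy budget
lifetime_h = battery_mwh / avg_power_mw

print(f"average power = {avg_power_mw:.0f} mW, estimated lifetime = {lifetime_h:.1f} h")
```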
---
paper_title: A power controlled multiple access protocol for wireless packet networks
paper_content:
Multiple access-based collision avoidance MAC protocols have typically used fixed transmission power, and have not considered power control mechanisms based on the distance between the transmitter and receiver in order to improve spatial channel reuse. This work proposes PCMA, a power controlled multiple access wireless MAC protocol within the collision avoidance framework. PCMA generalizes the transmit-or-defer "on/off" collision avoidance model of current protocols to a more flexible "variable bounded power" collision suppression model. The algorithm is designed for ad hoc networks and does not require the presence of base stations to manage transmission power (i.e. it is decentralized). The advantage of implementing a power controlled protocol in an ad hoc network is that source-destination pairs can be more tightly packed into the network, allowing a greater number of simultaneous transmissions (spectral reuse). Our initial simulation results show that PCMA can improve the throughput performance of the non-power controlled IEEE 802.11 by a factor of 2, with potential for additional scalability as source-destination pairs become more localized, thus providing a compelling reason for migrating to a new power controlled multiple access wireless MAC protocol standard.
---
paper_title: Adaptive Clustering for Mobile Wireless Networks
paper_content:
This paper describes a self-organizing, multihop, mobile radio network which relies on a code-division access scheme for multimedia support. In the proposed network architecture, nodes are organized into nonoverlapping clusters. The clusters are independently controlled, and are dynamically reconfigured as the nodes move. This network architecture has three main advantages. First, it provides spatial reuse of the bandwidth due to node clustering. Second, bandwidth can be shared or reserved in a controlled fashion in each cluster. Finally, the cluster algorithm is robust in the face of topological changes caused by node motion, node failure, and node insertion/removal. Simulation shows that this architecture provides an efficient, stable infrastructure for the integration of different types of traffic in a dynamic radio network.
---
paper_title: Multicluster, mobile, multimedia radio network
paper_content:
A multi-cluster, multi-hop packet radio network architecture for wireless adaptive mobile information systems is presented. The proposed network supports multimedia traffic and relies on both time division and code division access schemes. This radio network is not supported by a wired infrastructure as conventional cellular systems are. Thus, it can be instantly deployed in areas with no infrastructure at all. By using a distributed clustering algorithm, nodes are organized into clusters. The clusterheads act as local coordinators to resolve channel scheduling, perform power measurement/control, maintain time division frame synchronization, and enhance the spatial reuse of time slots and codes. Moreover, to guarantee bandwidth for real time traffic, the architecture supports virtual circuits and allocates bandwidth to circuits at call setup time. The network is scalable to large numbers of nodes, and can handle mobility. Simulation experiments evaluate the performance of the proposed scheme in static and mobile environments.
---
paper_title: INSIGNIA: IN-BAND SIGNALING SUPPORT FOR QOS IN MOBILE AD HOC NETWORKS
paper_content:
Vaporous alkylene oxides are polymerized by passage over catalysts at elevated temperatures. Alkaline catalysts, such as caustic potash, caustic soda and the oxides or hydroxides of the other alkali metals are suitable for the preparation of highly polymerized wax-like products, which may be liquid or solid, while acid-reacting substances, such as sodium bisulphate, aluminium sulphate, or mixtures of these with small amounts of sulphuric acid or with diluents such as sodium sulphate, or acid phosphates, are suitable catalysts for the preparation of dioxane and its homologues. The temperatures of working lie between 40 DEG -200 DEG C., the range 100-160 DEG C. being preferred. In the examples, (1) ethylene oxide vapour is led over anhydrous caustic soda at the rate of 0,1-0,4 gram per second per litre of catalyst, the catalyst being situated in a vertical iron tube which is maintained at 120-130 DEG C., this temperature regulation being effected by means of a jacket containing a suitable liquid, e.g. ethylene glycol. Cooling in the reaction chamber is also aided by the use of excess of ethylene oxide, which passes out at the lower end of the tube and is returned to the process. The liquid formed is collected in a closed receiver, where it solidifies to a waxy or horn-like mass. This may be freed from enclosed caustic potash by dissolving it in benzene, filtering and evaporating. (2) Anhydrous sodium bisulphate is substituted for the caustic soda of the previous example, and the ethylene oxide passed over it, the vapours condensed and returned to the process. After about 24 hours, the condensate is fractionated, yielding unchanged ethylene oxide, dioxane and a small amount of acetaldehyde ethylene acetal (3) Propylene oxide is passed over anhydrous potassium hydroxide as in example (1). The product is a viscous liquid which is insoluble in water but soluble in most organic solvents, including alcohol, ether, and aromatic hydrocarbons. (4) Propylene oxide is passed over a catalyst consisting of 1 part of sodium bisulphate mixed with 3 parts of anhydrous sodium sulphate. From the issuing vapours, a crude dimethyl dioxane consisting mainly of 2,5 dimethyl dioxane is condensed, distilled in vacuo under reduced pressure, then fractionally distilled.
---
paper_title: SWAN: service differentiation in stateless wireless ad hoc networks
paper_content:
We propose SWAN, a stateless network model which uses distributed control algorithms to deliver service differentiation in mobile wireless ad hoc networks in a simple, scalable and robust manner. We use rate control for UDP and TCP best-effort traffic, and sender-based admission control for UDP real-time traffic. SWAN uses explicit congestion notification (ECN) to dynamically regulate admitted real-time traffic in the face of network dynamics brought on by mobility or traffic overload conditions. We use the term "soft" real-time services to indicate that real-time sessions could be regulated or dropped due to mobility or excessive traffic overloading at mobile wireless routers. SWAN is designed to limit such conditions, however. A novel aspect of SWAN is that it does not require the support of a QOS-capable MAC. Rather, soft real-time services are built using existing best effort wireless MAC technology. Simulation, analysis, and results from an experimental wireless testbed show that real-time applications experience low and stable delays under various multi-hop, traffic and mobility conditions. The wireless testbed and ns-2 simulator source code are available from the Web (comet.columbia.edu/swan).
---
paper_title: Soft reservation multiple access with priority assignment (SRMA/PA): a novel MAC protocol for QoS-guaranteed integrated services in mobile ad-hoc networks
paper_content:
A new medium access control (MAC) protocol, soft reservation multiple access with priority assignment (SRMA/PA), is introduced for supporting the integrated services of real-time and non-real-time applications in mobile ad-hoc networks. The SRMA/PA protocol allows the distributed nodes to contend for and reserve time slots with an RTS/CTS-like "collision-avoidance" handshake and a "soft reservation" mechanism augmented with distributed and dynamic access priority control, which virtually provides the distributed scheduling capability needed to guarantee the QoS requirements of the integrated services. We have demonstrated via simulation studies that the multiplexing gain can be significantly improved without a significant increase in system complexity. We have also shown that the proposed back-off mechanism designed for delay-constrained services is useful for further improving channel utilization.
---
paper_title: A flexible quality of service model for mobile ad-hoc networks
paper_content:
Quality of service (QoS) support in mobile ad-hoc networks (MANETs) is a challenging task. Most of the proposals in the literature only address certain aspects of QoS support, e.g., QoS routing, QoS medium access control (MAC) and resource reservation. However, none of them proposes a QoS model for MANETs. Meanwhile, two QoS models have been proposed for the Internet, viz., the integrated services (IntServ) model and the differentiated services (DiffServ) model, but these models are aimed at wired networks. In this paper, we propose a flexible QoS model for MANETs (FQMM) which considers the characteristics of MANETs and combines the high-quality QoS of IntServ and the service differentiation of DiffServ. Salient features of FQMM include: dynamic roles of nodes, hybrid provisioning and adaptive conditioning. Preliminary simulation results show that FQMM achieves better performance in terms of throughput and service differentiation than the best-effort model.
---
paper_title: A Review of Current Routing Protocols for Ad-Hoc Mobile Wireless Networks
paper_content:
An ad hoc mobile network is a collection of mobile nodes that are dynamically and arbitrarily located in such a manner that the interconnections between nodes are capable of changing on a continual basis. In order to facilitate communication within the network, a routing protocol is used to discover routes between nodes. The primary goal of such an ad hoc network routing protocol is correct and efficient route establishment between a pair of nodes so that messages may be delivered in a timely manner. Route construction should be done with a minimum of overhead and bandwidth consumption. This article examines routing protocols for ad hoc networks and evaluates these protocols based on a given set of parameters. The article provides an overview of eight different protocols by presenting their characteristics and functionality, and then provides a comparison and discussion of their respective merits and drawbacks.
---
paper_title: A review of routing protocols for mobile ad hoc networks
paper_content:
The 1990s have seen a rapid growth of research interest in mobile ad hoc networking. The infrastructureless and dynamic nature of these networks demands a new set of networking strategies to provide efficient end-to-end communication. This, along with the diverse applications of these networks in many different scenarios such as battlefield and disaster recovery, has seen MANETs being researched by many different organisations and institutes. MANETs employ the traditional TCP/IP structure to provide end-to-end communication between nodes. However, due to their mobility and the limited resources of wireless networks, each layer in the TCP/IP model requires redefinition or modification to function efficiently in MANETs. One interesting research area in MANETs is routing. Routing in MANETs is a challenging task and has received a tremendous amount of attention from researchers. This has led to the development of many different routing protocols for MANETs, and the author of each proposed protocol argues that their strategy provides an improvement over a number of different strategies considered in the literature for a given network scenario. Therefore, it is quite difficult to determine which protocols may perform best under a number of different network scenarios, such as increasing node density and traffic. In this paper, we provide an overview of a wide range of routing protocols proposed in the literature. We also provide a performance comparison of all routing protocols and suggest which protocols may perform best in large networks.
---
paper_title: QoS for Ad hoc Networking Based on Multiple Metrics: Bandwidth and Delay.
paper_content:
A link state routing approach makes available detailed information about the connectivity and conditions found in the network. The OLSR protocol is an optimization over the classical link state protocol, tailored for mobile ad hoc networks. In this article, we design a QoS routing scheme over the OLSR protocol, called QOLSR. In our proposal, we introduce more appropriate metrics than the hop distance used in OLSR. In order to improve the quality of the routing information, delay and bandwidth measurements are applied. The implications of routing metrics on path computation are examined and the rationale behind the selection of the bandwidth and delay metrics is discussed. We first consider algorithms for a single-metric approach, and then present a distributed algorithm for a multiple-metrics approach. We also present a scalable simulation model close to real operations in ad hoc networks. The performance of our protocol is extensively investigated by simulation. Our results indicate that the gain attained by our proposal represents an important improvement in such mobile wireless networks.
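To illustrate how the two metrics compose along a route, the sketch below (our simplification, not the QOLSR selection algorithm itself) treats bandwidth as a concave metric and delay as an additive one:

def path_metrics(path_links):
    """Combine per-link (bandwidth, delay) pairs into path-level metrics.

    Bandwidth is concave (the path is as wide as its narrowest link);
    delay is additive.  path_links is a list of (bandwidth, delay) tuples,
    one per hop.
    """
    bandwidth = min(bw for bw, _ in path_links)
    delay = sum(d for _, d in path_links)
    return bandwidth, delay

def better_path(a, b):
    """Illustrative 'widest path, ties broken by lower delay' preference."""
    (bw_a, d_a), (bw_b, d_b) = path_metrics(a), path_metrics(b)
    if bw_a != bw_b:
        return a if bw_a > bw_b else b
    return a if d_a <= d_b else b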
---
paper_title: Scalable Routing Protocols for Mobile Ad Hoc Networks
paper_content:
The growing interest in mobile ad hoc network techniques has resulted in many routing protocol proposals. Scalability issues in ad hoc networks are attracting increasing attention these days. We survey the routing protocols that address scalability. The routing protocols included in the survey fall into three categories: flat routing protocols; hierarchical routing approaches; GPS augmented geographical routing schemes. The article compares the scalability properties and operational features of the protocols and discusses challenges in future routing protocol designs.
---
paper_title: Reducing latency and overhead of route repair with controlled flooding
paper_content:
Ad hoc routing protocols that use broadcast for route discovery may be inefficient if the path between any source-destination pair is frequently broken. We propose and evaluate a simple mechanism that allows fast route repair in on demand ad hoc routing protocols. We apply our proposal to the Ad hoc On-demand Distance Vector (AODV) routing protocol. The proposed system is based on the Controlled Flooding (CF) framework, where alternative routes are established around the main original path between source-destination pairs. With alternative routing, data packets are forwarded through a secondary path without requiring the source to re-flood the whole network, as may be the case in AODV. We are interested in one-level alternative routing. We show that our proposal reduces the connection disruption probability as well as the frequency of broadcasts.
---
paper_title: Energy efficient routing in wireless ad hoc networks
paper_content:
Ad hoc wireless networks are power constrained since nodes operate with limited battery energy. Thus, energy consumption is crucial in the design of new ad hoc routing protocols. To design such protocols, we have to look beyond the traditional minimum-hop routing schemes. In this paper, we propose three extensions to the state-of-the-art shortest-cost routing algorithm, AODV. The discovery mechanism in these extensions (LEAR-AODV, PAR-AODV, and LPR-AODV) uses energy consumption as a routing metric. They reduce the energy consumption of the nodes by routing packets to their destination using energy-optimal routes. We show that these algorithms improve network survivability by maintaining network connectivity. They carry out this objective with low overhead and without affecting the other wireless network protocol layers.
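The extensions use energy as a routing metric, and the exact cost functions differ per variant, so the sketch below only illustrates two generic energy-aware route costs (additive transmission energy and max-min residual battery) over a made-up candidate-route structure:

def total_tx_energy(route_tx_energies):
    """Additive cost: total transmission energy along the route."""
    return sum(route_tx_energies)

def bottleneck_battery(route_residual_energies):
    """Max-min cost: the weakest node's residual battery along the route."""
    return min(route_residual_energies)

def pick_route(candidates):
    """Illustrative preference: keep the route whose weakest node has the
    most energy left, breaking ties by lower total transmission energy.
    candidates is a list of dicts like {"residual": [...], "tx_energy": [...]}
    (a structure we made up for this sketch)."""
    return max(candidates,
               key=lambda r: (bottleneck_battery(r["residual"]),
                              -total_tx_energy(r["tx_energy"])))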
---
paper_title: A Survey on Position-Based Routing in Mobile Ad-Hoc Networks
paper_content:
We present an overview of ad hoc routing protocols that make forwarding decisions based on the geographical position of a packet's destination. Other than the destination's position, each node need know only its own position and the position of its one-hop neighbors in order to forward packets. Since it is not necessary to maintain explicit routes, position-based routing does scale well even if the network is highly dynamic. This is a major advantage in a mobile ad hoc network where the topology may change frequently. The main prerequisite for position-based routing is that a sender can obtain the current position of the destination. Therefore, previously proposed location services are discussed in addition to position-based packet forwarding strategies. We provide a qualitative comparison of the approaches in both areas and investigate opportunities for future research.
---
paper_title: Optimized Link State Routing Protocol (OLSR)
paper_content:
This document describes the Optimized Link State Routing (OLSR) protocol for mobile ad hoc networks. The protocol is an optimization of the classical link state algorithm tailored to the requirements of a mobile wireless LAN. The key concept used in the protocol is that of multipoint relays (MPRs). MPRs are selected nodes which forward broadcast messages during the flooding process. This technique substantially reduces the message overhead as compared to a classical flooding mechanism, where every node retransmits each message when it receives the first copy of the message. In OLSR, link state information is generated only by nodes elected as MPRs. Thus, a second optimization is achieved by minimizing the number of control messages flooded in the network. As a third optimization, an MPR node may choose to report only links between itself and its MPR selectors. Hence, contrary to the classic link state algorithm, partial link state information is distributed in the network. This information is then used for route calculation. OLSR provides optimal routes (in terms of number of hops). The protocol is particularly suitable for large and dense networks as the technique of MPRs works well in this context.
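A simplified greedy version of the MPR selection heuristic, assuming the two-hop neighbor sets are already known; this is a sketch of the idea only, not the full rules of the specification:

def select_mprs(one_hop, two_hop_of):
    """Greedy MPR selection sketch.

    one_hop    : set of 1-hop neighbors
    two_hop_of : dict mapping each 1-hop neighbor to the set of strict
                 2-hop neighbors reachable through it
    Returns a set of MPRs covering every 2-hop neighbor (assuming each
    2-hop neighbor is reachable through at least one 1-hop neighbor).
    """
    uncovered = set().union(*two_hop_of.values())
    mprs = set()
    # Step 1: neighbors that are the only route to some 2-hop node.
    for target in set(uncovered):
        providers = [n for n in one_hop if target in two_hop_of.get(n, set())]
        if len(providers) == 1:
            mprs.add(providers[0])
    for n in mprs:
        uncovered -= two_hop_of.get(n, set())
    # Step 2: greedily add the neighbor covering the most remaining nodes.
    while uncovered:
        best = max(one_hop - mprs,
                   key=lambda n: len(two_hop_of.get(n, set()) & uncovered))
        mprs.add(best)
        uncovered -= two_hop_of.get(best, set())
    return mprs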
---
paper_title: DSR : The Dynamic Source Routing Protocol for Multi-Hop Wireless Ad Hoc Networks
paper_content:
The Dynamic Source Routing protocol (DSR) is a simple and efficient routing protocol designed specifically for use in multi-hop wireless ad hoc networks of mobile nodes. DSR allows the network to be completely self-organizing and self-configuring, without the need for any existing network infrastructure or administration. The protocol is composed of the two mechanisms of Route Discovery and Route Maintenance, which work together to allow nodes to discover and maintain source routes to arbitrary destinations in the ad hoc network. The use of source routing allows packet routing to be trivially loop-free, avoids the need for up-to-date routing information in the intermediate nodes through which packets are forwarded, and allows nodes forwarding or overhearing packets to cache the routing information in them for their own future use. All aspects of the protocol operate entirely on-demand, allowing the routing packet overhead of DSR to scale automatically to only that needed to react to changes in the routes currently in use. We have evaluated the operation of DSR through detailed simulation on a variety of movement and communication patterns, and through implementation and significant experimentation in a physical outdoor ad hoc networking testbed we have constructed in Pittsburgh, and have demonstrated the excellent performance of the protocol. In this chapter, we describe the design of DSR and provide a summary of some of our simulation and testbed implementation results for the protocol.
---
paper_title: Highly dynamic Destination-Sequenced Distance-Vector routing (DSDV) for mobile computers
paper_content:
An ad-hoc network is the cooperative engagement of a collection of Mobile Hosts without the required intervention of any centralized Access Point. In this paper we present an innovative design for the operation of such ad-hoc networks. The basic idea of the design is to operate each Mobile Host as a specialized router, which periodically advertises its view of the interconnection topology with other Mobile Hosts within the network. This amounts to a new sort of routing protocol. We have investigated modifications to the basic Bellman-Ford routing mechanisms, as specified by RIP [5], to make it suitable for a dynamic and self-starting network mechanism as is required by users wishing to utilize ad hoc networks. Our modifications address some of the previous objections to the use of Bellman-Ford, related to the poor looping properties of such algorithms in the face of broken links and the resulting time dependent nature of the interconnection topology describing the links between the Mobile Hosts. Finally, we describe the ways in which the basic network-layer routing can be modified to provide MAC-layer support for ad-hoc networks.
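The route preference rule commonly associated with DSDV, where fresher destination sequence numbers win and ties go to the shorter metric, can be sketched as follows (field names are ours, not the paper's):

def prefer_route(current, advertised):
    """DSDV-style route preference (a sketch).

    Routes are dicts with 'seq' (destination sequence number, higher means
    fresher) and 'metric' (hop count, lower is better).  Fresher information
    always wins; among equally fresh routes the shorter one is kept.
    """
    if current is None:
        return advertised
    if advertised["seq"] > current["seq"]:
        return advertised
    if advertised["seq"] == current["seq"] and advertised["metric"] < current["metric"]:
        return advertised
    return current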
---
paper_title: The Optimized Link State Routing Protocol Evaluation through Experiments and Simulation
paper_content:
In this paper, we describe the Optimized Link State Routing Protocol (OLSR) [1] for Mobile Ad-hoc NETworks (MANETs) and the evaluation of this protocol through experiments and simulations. In particular, we emphasize the practical tests and intensive simulations, which have been used in guiding and evaluating the design of the protocol, and which have been a key to identifying both problems and solutions. OLSR is a proactive link-state routing protocol, employing periodic message exchange for updating topological information in each node in the network; that is, topological information is flooded to all nodes in the network. Conceptually, OLSR contains three elements: mechanisms for neighbor sensing based on periodic exchange of HELLO messages within a node's neighborhood; generic mechanisms for efficient flooding of control traffic into the network, employing the concept of multipoint relays (MPRs) [5] for a significant reduction of duplicate retransmissions during the flooding process; and a specification of a set of control messages providing each node with sufficient topological information to be able to compute an optimal route to each destination in the network using any shortest-path algorithm. Experimental work, running a test network of laptops with IEEE 802.11 wireless cards, revealed interesting properties. While the protocol, as originally specified, works quite well, it was found that enforcing "jitter" on the interval between the periodic exchange of control messages in OLSR, and piggybacking said control messages into a single packet, significantly reduced the number of messages lost due to collisions. It was also observed that under certain conditions a "naive" neighbor sensing mechanism was insufficient: a bad link between two nodes (e.g. when two nodes are on the edge of radio range) might on occasion deliver a HELLO message in both directions (hence enabling the link for routing) while not being able to sustain continuous traffic. This would result in "route-flapping" and temporary loss of connectivity. With the experimental results as a basis, we have used simulations to reveal the impact of the various algorithmic improvements described above.
---
paper_title: A scalable location service for geographic ad hoc routing
paper_content:
GLS is a new distributed location service which tracks mobile node locations. GLS combined with geographic forwarding allows the construction of ad hoc mobile networks that scale to a larger number of nodes than possible with previous work. GLS is decentralized and runs on the mobile nodes themselves, requiring no fixed infrastructure. Each mobile node periodically updates a small set of other nodes (its location servers) with its current location. A node sends its position updates to its location servers without knowing their actual identities, assisted by a predefined ordering of node identifiers and a predefined geographic hierarchy. Queries for a mobile node's location also use the predefined identifier ordering and spatial hierarchy to find a location server for that node. Experiments using the ns simulator for up to 600 mobile nodes show that the storage and bandwidth requirements of GLS grow slowly with the size of the network. Furthermore, GLS tolerates node failures well: each failure has only a limited effect and query performance degrades gracefully as nodes fail and restart. The query performance of GLS is also relatively insensitive to node speeds. Simple geographic forwarding combined with GLS compares favorably with Dynamic Source Routing (DSR): in larger networks (over 200 nodes) our approach delivers more packets, but consumes fewer network resources.
---
paper_title: Location Systems for Ubiquitous Computing
paper_content:
This survey and taxonomy of location systems for mobile-computing applications describes a spectrum of current products and explores the latest in the field. To make sense of this domain, we have developed a taxonomy to help developers of location-aware applications better evaluate their options when choosing a location-sensing system. The taxonomy may also aid researchers in identifying opportunities for new location-sensing techniques.
---
paper_title: Geographic messaging in wireless ad hoc networks
paper_content:
This paper presents a network layer mechanism for the efficient dissemination of Global Positioning System (GPS) based node positions in ad hoc networks, and shows its application to problems involving geographic awareness. In particular, we describe how, based on a "position database" maintained through the dissemination mechanism, a node of the network can direct messages to all the nodes currently present in a precise geographic area. The effectiveness and accuracy of the dissemination mechanism are demonstrated through the use of simulations. We show that for ad hoc networks with up to 60 nodes, the position database correctly determines which nodes are actually in the expected geographic area 97% of the time.
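A minimal sketch of the membership test behind such geographic delivery, assuming a rectangular target area and a position database of last known coordinates (all names are ours):

def nodes_in_area(positions, area):
    """Return the nodes whose last known position lies in a rectangular area.

    positions : dict node_id -> (x, y) taken from the position database
    area      : (x_min, y_min, x_max, y_max)
    A real system would also age out stale entries; that is omitted here.
    """
    x_min, y_min, x_max, y_max = area
    return {n for n, (x, y) in positions.items()
            if x_min <= x <= x_max and y_min <= y <= y_max}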
---
paper_title: Improving End-to-End Performance of TCP over Mobile Internetworks
paper_content:
Reliable transport protocols such as TCP use end-to-end flow, congestion and error control mechanisms to provide reliable delivery over an internetwork. However, the end-to-end performance of a TCP connection can suffer significant degradation in the presence of a wireless link. We are exploring alternatives for optimizing the end-to-end performance of TCP connections across an internetwork consisting of both fixed and mobile networks. The central idea in our approach is to transparently split an end-to-end connection into two separate connections: one over the wireless link and the other over the wired path. The connection over the wireless link may either use regular TCP or a specialized transport protocol optimized for better performance over a wireless link. Our approach does not require any changes to the existing protocol software on stationary hosts. Results of a systematic performance evaluation using both our approach and regular TCP show that our approach yields significant performance improvements.
---
paper_title: A comparison of mechanisms for improving mobile IP handoff latency for end-to-end TCP
paper_content:
Handoff latency results in packet losses and severe end-to-end TCP performance degradation as TCP, perceiving these losses as congestion, causes source throttling or retransmission. In order to mitigate these effects, various Mobile IP(v6) extensions have been designed to augment the base Mobile IP with hierarchical registration management, address pre-fetching and local retransmission mechanisms. While these methods have reduced the impact of losses on TCP goodput and improved handoff latency, no comparative studies have been done regarding the relative performance among them. In this paper, we comprehensively evaluate the impact of layer-3 handoff latency on end-to-end TCP for various Mobile IP(v6) extensions. Five such frameworks are compared with the base Mobile IPv6 framework, namely, i) Hierarchical Mobile IPv6, ii) Hierarchical Mobile IPv6 with Fast-handover, iii) (Flat) Mobile IPv6 with Fast-handover, iv) Simultaneous Bindings, and v) Seamless handoff architecture for Mobile IP (S-MIP). We propose an evaluation model examining the effect of linear and ping-pong movement on handoff latency and TCP goodput for all the above frameworks. Our results show that S-MIP performs best under both ping-pong and linear movement during a handoff, with latency comparable to a layer-2 (access layer) handoff. All other frameworks suffer from packet losses and performance degradation of some sort. We also propose an optimization for S-MIP which improves performance by further eliminating the possibility of out-of-order packets caused by the local packet forwarding mechanism of S-MIP.
---
paper_title: Impact of routing and link layers on TCP performance in mobile ad hoc networks
paper_content:
Mobile ad hoc networks have attracted attention lately as a means of providing continuous network connectivity to mobile computing devices, regardless of physical location. To date, a large amount of research has focused on the routing protocols needed in such an environment. In this paper, we investigate the effects that the routing and link layers have on TCP performance. In particular, we show how the route cache management strategy in an on-demand ad hoc routing protocol can significantly affect TCP performance. We also take a brief look at the impact of link layer retransmissions on TCP throughput in a fixed, wireless multihop network.
---
paper_title: Improving TCP performance over mobile ad-hoc networks with out-of-order detection and response
paper_content:
In a Mobile Ad Hoc Network (MANET), temporary link failures and route changes happen frequently. With the assumption that all packet losses are due to congestion, TCP performs poorly in such an environment. While there has been some research on improving TCP performance over MANETs, most approaches require feedback from the network or the lower layer. In this research, we explore a new approach to improve TCP performance by detecting and responding to out-of-order packet delivery events, which are the result of frequent route changes. In our simulation study, this approach achieved on average a 50% performance improvement, without requiring feedback from the network or the lower layer.
---
paper_title: A comparison of TCP performance over three routing protocols for mobile ad hoc networks
paper_content:
We examine the performance of the TCP protocol for bulk-data transfers in mobile ad hoc networks (MANETs). We vary the number of TCP connections and compare the performance of three recently proposed on-demand (AODV and DSR) and adaptive proactive (ADV) routing algorithms. It has been shown in the literature that the congestion control mechanism of TCP reacts adversely to packet losses due to temporarily broken routes in wireless networks. So, we propose a simple heuristic, called fixed RTO, to distinguish between route loss and network congestion and thereby improve the performance of the routing algorithms. Using the ns-2 simulator, we evaluate the performance of the three routing algorithms with the standard TCP Reno protocol and Reno with fixed RTO. Our results indicate that the proactive ADV algorithm performs well under a variety of conditions and that the fixed RTO technique improves the performance of the two on-demand algorithms significantly.
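A sketch of the fixed-RTO idea as we read it from the abstract: interpret repeated timeouts as route loss rather than congestion and stop the exponential backoff (the constants and names below are illustrative only):

def next_rto(current_rto, consecutive_timeouts, rto_max=60.0):
    """Retransmission-timeout update illustrating the fixed-RTO heuristic.

    Standard TCP doubles the RTO on every timeout (exponential backoff).
    The fixed-RTO heuristic treats two or more consecutive timeouts as a
    likely route failure and keeps the RTO unchanged, so the sender probes
    for the re-established route more aggressively.
    """
    if consecutive_timeouts >= 2:
        return current_rto                     # route loss suspected: hold RTO fixed
    return min(2.0 * current_rto, rto_max)     # otherwise: usual exponential backoff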
---
paper_title: Implementation and performance evaluation of Indirect TCP
paper_content:
With the advent of small portable computers and the technological advances in wireless communications, mobile wireless computing is likely to become very popular in the near future. Wireless links are slower and less reliable compared to wired links and are prone to loss of signal due to noise and fading. Furthermore, host mobility can give rise to periods of disconnection from the fixed network. The use of existing network protocols, which were developed mainly for the high bandwidth and faster wired links, with mobile computers thus gives rise to unique performance problems arising from host mobility and due to the characteristics of wireless medium. Indirect protocols can isolate mobility and wireless related problems using mobility support routers (MSRs) as intermediaries, which also provide backward compatibility with fixed network protocols. We present the implementation and performance evaluation of I-TCP, which is an indirect transport layer protocol for mobile wireless environments. Throughput comparison with regular (BSD) TCP shows that I-TCP performs significantly better in a wide range of conditions related to wireless losses and host mobility. We also describe the implementation and performance of I-TCP handoffs.
---
paper_title: A comparison of mechanisms for improving TCP performance over wireless links
paper_content:
Reliable transport protocols such as TCP are tuned to perform well in traditional networks where packet losses occur mostly because of congestion. However, networks with wireless and other lossy links also suffer from significant losses due to bit errors and handoffs. TCP responds to all losses by invoking congestion control and avoidance algorithms, resulting in degraded end-to-end performance in wireless and lossy systems. We compare several schemes designed to improve the performance of TCP in such networks. We classify these schemes into three broad categories: end-to-end protocols, where loss recovery is performed by the sender; link-layer protocols that provide local reliability; and split-connection protocols that break the end-to-end connection into two parts at the base station. We present the results of several experiments performed in both LAN and WAN environments, using throughput and goodput as the metrics for comparison. Our results show that a reliable link-layer protocol that is TCP-aware provides very good performance. Furthermore, it is possible to achieve good performance without splitting the end-to-end connection at the base station. We also demonstrate that selective acknowledgments and explicit loss notifications result in significant performance improvements.
---
paper_title: M-TCP: TCP for mobile cellular networks
paper_content:
Transport connections set up over wireless links are frequently plagued by problems such as high bit error rate (BER), frequent disconnections of the mobile user, and low wireless bandwidth that may change dynamically. In this paper, we study the effects of frequent disconnections and low, variable bandwidth on TCP throughput and propose a protocol that addresses this problem. We discuss the implementation (in NetBSD) of our protocol called M-TCP and compare its performance against other mobile TCP implementations. We show that M-TCP has two significant advantages over other solutions: (1) it maintains end-to-end TCP semantics and, (2) it delivers excellent performance for environments where the mobile encounters periods of disconnection.
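M-TCP's handling of disconnections is commonly described as parking the sender with a zero-window advertisement; the sketch below illustrates that idea only in outline, using a hypothetical send_ack callback, and is not the protocol's actual implementation:

class SenderFreezer:
    """Sketch of the zero-window idea: during a disconnection, advertise a
    zero receive window so the TCP sender enters persist mode instead of
    backing off, then reopen the window on reconnection.
    send_ack is a hypothetical callback provided by the host's stack."""

    def __init__(self, send_ack, full_window):
        self.send_ack = send_ack
        self.full_window = full_window

    def on_disconnect(self, last_ack_seq):
        # Zero window: the sender freezes its timers rather than timing out.
        self.send_ack(ack=last_ack_seq, window=0)

    def on_reconnect(self, last_ack_seq):
        # Non-zero window update: the sender resumes immediately at full rate.
        self.send_ack(ack=last_ack_seq, window=self.full_window)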
---
paper_title: A MAC protocol for full exploitation of directional antennas in ad-hoc wireless networks
paper_content:
Directional antennas in ad hoc networks offer many benefits compared with classical omnidirectional antennas. The most important include a significant increase in spatial reuse, coverage range and, consequently, network capacity as a whole. On the other hand, the use of directional antennas requires a new approach to the design of a MAC protocol to fully exploit these benefits. Unfortunately, directional transmissions aggravate the hidden terminal problem, the problem of deafness and the problem of determining neighbors' locations. In this paper we propose a new MAC protocol that deals effectively with these problems while exploiting the advantages of directional antennas in an efficient way. We evaluate our work through a simulation study. Numerical results show that our protocol offers significant improvement compared to the performance of omni transmissions.
---
paper_title: Directional virtual carrier sensing for directional antennas in mobile ad hoc networks
paper_content:
This paper presents a new carrier sensing mechanism called DVCS (Directional Virtual Carrier Sensing) for wireless communication using directional antennas. DVCS does not require specific antenna configurations or external devices. Instead it only needs information on AOA (Angle of Arrival) and antenna gain for each signal from the underlying physical device, both of which are commonly used for the adaptation of antenna pattern. DVCS also supports interoperability of directional and omni-directional antennas. In this study, the performance of DVCS for mobile ad hoc networks is evaluated using simulation with a realistic directional antenna model and the full IP protocol stack. The experimental results showed that compared with omni-directional communication, DVCS improved network capacity by a factor of 3 to 4 for a 100 node ad hoc network.
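To illustrate the notion of a direction-aware NAV, the sketch below keeps per-sector reservations and defers a transmission only when its bearing falls inside an unexpired reserved sector; this is a simplification of ours, not the DVCS algorithm itself:

import time

class DirectionalNAV:
    """Directional virtual carrier sensing sketch.

    Instead of one scalar NAV, keep per-direction reservations as a list of
    (center_angle_deg, width_deg, expiry_time) entries.  A transmission
    toward a given bearing is deferred only if that bearing falls inside an
    unexpired reserved sector.
    """

    def __init__(self):
        self.sectors = []   # (center_deg, width_deg, expiry)

    def reserve(self, center_deg, width_deg, duration_s, now=None):
        now = time.time() if now is None else now
        self.sectors.append((center_deg % 360, width_deg, now + duration_s))

    def is_blocked(self, bearing_deg, now=None):
        now = time.time() if now is None else now
        self.sectors = [s for s in self.sectors if s[2] > now]   # drop expired
        for center, width, _ in self.sectors:
            diff = abs((bearing_deg - center + 180) % 360 - 180)  # angular distance
            if diff <= width / 2:
                return True
        return False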
---
paper_title: On designing MAC protocols for wireless networks using directional antennas
paper_content:
We investigate the possibility of using directional antennas for medium access control in wireless ad hoc networks. Previous research in ad hoc networks typically assumes the use of omnidirectional antennas at all nodes. With omnidirectional antennas, while two nodes are communicating using a given channel, MAC protocols such as IEEE 802.11 require all other nodes in the vicinity to remain silent. With directional antennas, two pairs of nodes located in each other's vicinity may potentially communicate simultaneously, increasing spatial reuse of the wireless channel. Range extension due to higher gain of directional antennas can also be useful in discovering fewer hop routes. However, new problems arise when using directional beams that simple modifications to 802.11 may not be able to mitigate. This paper identifies these problems and evaluates the tradeoffs associated with them. We also design a directional MAC protocol (MMAC) that uses multihop RTSs to establish links between distant nodes and then transmits CTS, DATA, and ACK over a single hop. While MMAC does not address all the problems identified with directional communication, it is an attempt to exploit the primary benefits of beamforming in the presence of some of these problems. Results show that MMAC can perform better than IEEE 802.11, although we find that the performance is dependent on the topology and flow patterns in the system.
---
paper_title: Impact of Directional Antennas on Ad Hoc Routing
paper_content:
Previous research on directional antennas has been confined mostly to medium access control. However, it is necessary to evaluate the impact of directional antennas on the performance of routing protocols as well. In this paper, we identify the issues and evaluate the performance of an omnidirectional routing protocol, DSR, when executed over directional antennas. Using insights gained from simulations, we propose routing strategies suitable for directional communication. Our analysis shows that by using directional antennas, ad hoc networks may achieve better performance. However, scenarios exist in which omnidirectional antennas may be suitable.
---
paper_title: Medium access control protocols using directional antennas in ad hoc networks
paper_content:
Using directional antennas can be beneficial for wireless ad hoc networks consisting of a collection of wireless hosts. To best utilize directional antennas, a suitable medium access control (MAC) protocol must be designed. Current MAC protocols, such as the IEEE 802.11 standard, do not benefit when using directional antennas, because these protocols have been designed for omnidirectional antennas. In this paper, we attempt to design new MAC protocols suitable for ad hoc networks based on directional antennas.
---
paper_title: A comparison study of omnidirectional and directional MAC protocols for ad hoc networks
paper_content:
Traditional MAC protocols used in ad hoc networks employ omnidirectional antennas. Directional antennas have emerged as an alternative due to their capability of spatial reuse, low probability of detection, robustness to jamming, and other beneficial features. We conducted a comparison study of existing directional and omnidirectional MAC protocols by contrasting their features and evaluating their performance under various network loads and topologies. Specifically, we present the rationale for the better performance of some directional-antenna-based MAC protocols using the metric of effective spatial reuse, which is also evidenced by the simulation study.
---
paper_title: Ad hoc networking with directional antennas: a complete system solution
paper_content:
In this paper, we present UDAAN ("utilizing directional antennas for ad hoc networking"), which is an interacting suite of modular network- and MAC-layer mechanisms for adaptive control of steered or switched antenna systems in an ad hoc network. UDAAN consists of several new mechanisms - a directional power-controlled MAC, neighbor discovery with beamforming, link characterization with directional antennas, proactive routing and forwarding all working cohesively to provide the first complete systems solution. We describe the development of a real-life ad hoc network testbed using UDAAN with switched directional antennas, and we discuss the lessons learned during field trials. High fidelity simulation results, using the same networking code as in the prototype, are also presented. For the range of parameters studied, our results show that UDAAN can produce up to a factor-of-10 improvement in throughput over omni-directional communications.
---
paper_title: Using Directional Antennas for Medium Access Control in Ad Hoc Networks
paper_content:
A composition for use in the treatment for developing seedless fleshy berry of grapes. By treating the flower bunches of a grape tree with a composition containing gibberellin and cyclic 3',5'-adenylic acid in the form of an aqueous solution, it became possible to make seedless fleshy berry from grape trees belonging to varieties other than Delaware, namely belonging to Campbell-Arley, Berry A, Niagara, Kyoho, etc., from which seedless fleshy berry cannot be made by the conventional treatment with gibberellin.
---
paper_title: Routing improvement using directional antennas in mobile ad hoc networks
paper_content:
In this paper, we present the initial design and evaluation of two techniques for routing improvement using directional antennas in mobile ad hoc networks. First, we use directional antennas to bridge permanent network partitions by adaptively transmitting selected packets over a longer distance, still transmitting most packets a shorter distance. Second, in a network without permanent partitions, we use directional antennas to repair routes in use, when an intermediate node moves out of wireless transmission range along the route; by using the capability of a directional antenna to transmit packets over a longer distance, we bridge the route breakage caused by the intermediate node's movement, thus reducing packet delivery latency. Through simulations, we demonstrate the effectiveness of our design in the context of the dynamic source routing protocol (DSR).
---
paper_title: Secure Routing for Mobile Ad hoc Networks
paper_content:
Buttyan L. et al. found a security flaw in Ariadne and proposed a secure routing protocol, EndairA, which can resist active-1-1 attacks according to ref [9]. Unfortunately, we discover an as yet unknown active-0-1 attack, the "man-in-the-middle attack", which EndairA cannot resist. We therefore propose a new secure routing protocol, EndairALoc. Analysis shows that EndairALoc not only inherits the security of EndairA, but can also resist the man-in-the-middle attack and even the wormhole attack. Furthermore, EndairALoc uses pairwise secret keys instead of the public keys used in EndairA, so compared with EndairA, EndairALoc can save more energy in the process of route establishment.
---
paper_title: SEAD: secure efficient distance vector routing for mobile wireless ad hoc networks
paper_content:
An ad hoc network is a collection of wireless computers (nodes), communicating among themselves over possibly multihop paths, without the help of any infrastructure such as base stations or access points. Although many previous ad hoc network routing protocols have been based in part on distance vector approaches, they have generally assumed a trusted environment. We design and evaluate the Secure Efficient Ad hoc Distance vector routing protocol (SEAD), a secure ad hoc network routing protocol based on the design of the Destination-Sequenced Distance-Vector routing protocol (DSDV). In order to support use with nodes of limited CPU processing capability, and to guard against denial-of-service (DoS) attacks in which an attacker attempts to cause other nodes to consume excess network bandwidth or processing time, we use efficient one-way hash functions and do not use asymmetric cryptographic operations in the protocol. SEAD performs well over the range of scenarios we tested, and is robust against multiple uncoordinated attackers creating incorrect routing state in any other node, even in spite of any active attackers or compromised nodes in the network.
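The efficient one-way hash functions mentioned here are typically organized as hash chains; the following sketch shows the primitive itself (chain construction and verification against a trusted anchor), not SEAD's full metric and sequence-number encoding:

import hashlib

def build_hash_chain(seed: bytes, length: int):
    """Build a one-way hash chain h_0 ... h_length with h_i = H(h_{i-1}).

    In SEAD-style schemes the last element is distributed authentically, and
    earlier (harder-to-forge) elements are used to authenticate routing
    metrics and sequence numbers.  This shows only the primitive.
    """
    chain = [hashlib.sha256(seed).digest()]
    for _ in range(length):
        chain.append(hashlib.sha256(chain[-1]).digest())
    return chain

def verify(value: bytes, hops: int, trusted_anchor: bytes) -> bool:
    """Check that hashing `value` `hops` times reaches the trusted anchor."""
    h = value
    for _ in range(hops):
        h = hashlib.sha256(h).digest()
    return h == trusted_anchor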
---
paper_title: Secure routing in wireless sensor networks: attacks and countermeasures
paper_content:
We consider routing security in wireless sensor networks. Many sensor network routing protocols have been proposed, but none of them have been designed with security as a goal. We propose security goals for routing in sensor networks, show how attacks against ad-hoc and peer-to-peer networks can be adapted into powerful attacks against sensor networks, introduce two classes of novel attacks against sensor networks, sinkholes and HELLO floods, and analyze the security of all the major sensor network routing protocols. We describe crippling attacks against all of them and suggest countermeasures and design considerations. This is the first such analysis of secure routing in sensor networks.
---
paper_title: Denial of Service in Sensor Networks
paper_content:
Sensor networks hold the promise of facilitating large-scale, real-time data processing in complex environments, helping to protect and monitor military, environmental, safety-critical, or domestic infrastructures and resources. Denial-of-service attacks against such networks, however, may permit real-world damage to public health and safety. Without proper security mechanisms, networks will be confined to limited, controlled environments, negating much of the promise they hold. The limited ability of individual sensor nodes to thwart failure or attack makes ensuring network availability more difficult. To identify denial-of-service vulnerabilities, the authors analyzed two effective sensor network protocols that did not initially consider security. These examples demonstrate that consideration of security at design time is the best way to ensure successful network deployment.
---
paper_title: Securing ad hoc routing protocols
paper_content:
We consider the problem of incorporating security mechanisms into routing protocols for ad hoc networks. Canned security solutions like IPSec are not applicable. We look at AODV[21] in detail and develop a security mechanism to protect its routing information. We also briefly discuss whether our techniques would also be applicable to other similar routing protocols and about how a key management scheme could be used in conjunction with the solution that we provide.
---
paper_title: A survey of secure wireless ad hoc routing
paper_content:
Ad hoc networks use mobile nodes to enable communication outside wireless transmission range. Attacks on ad hoc network routing protocols disrupt network performance and reliability. The article reviews attacks on ad hoc networks and discusses current approaches for establishing cryptographic keys in ad hoc networks. We describe the state of research in secure ad hoc routing protocols and its research challenges.
---
paper_title: Ariadne: a secure on-demand routing protocol for ad hoc networks
paper_content:
An ad hoc network is a group of wireless mobile computers (or nodes), in which individual nodes cooperate by forwarding packets for each other to allow nodes to communicate beyond direct wireless transmission range. Prior research in ad hoc networking has generally studied the routing problem in a non-adversarial setting, assuming a trusted environment. In this paper, we present attacks against routing in ad hoc networks, and we present the design and performance evaluation of a new secure on-demand ad hoc network routing protocol, called Ariadne. Ariadne prevents attackers or compromised nodes from tampering with uncompromised routes consisting of uncompromised nodes, and also prevents a large number of types of Denial-of-Service attacks. In addition, Ariadne is efficient, using only highly efficient symmetric cryptographic primitives.
---
paper_title: Techniques for intrusion-resistant ad hoc routing algorithms (TIARA)
paper_content:
Architecture Technology Corporation (ATC) has developed a new approach for building intrusion resistant ad hoc networks called TIARA (Techniques for Intrusion-Resistant Ad Hoc Routing Algorithms). The approach, developed with funding from DARPA's Fault Tolerant Networks program, relies on extending the capabilities of existing ad hoc routing algorithms to handle intruders without modifying these algorithms. TIARA implements new network layer survivability mechanisms for detecting and recovering from intruder induced malicious faults that work in concert with existing ad hoc routing algorithms and augment their capabilities. The TIARA implementation architecture is designed to allow these survivability mechanisms to be "plugged" into existing wireless routers with little effort.
---
paper_title: Wireless ad hoc network on underserved communities: An efficient solution for interactive digital TV
paper_content:
The Brazilian government intends to use Digital TV technology as a vehicle of digital inclusion in underserved communities. A wireless ad hoc network is a low-cost, scalable and easy-to-deploy solution for implementing the return channel. This work analyzes the performance of an ad hoc return channel using the wireless IEEE 802.11 technology in different Brazilian geographical scenarios. The results show that high connectivity is achieved when more than 20% of the nodes are turned on, regardless of the position of the gateway. The influence of the number of hops and the number of transmitting nodes is also analyzed. A minimum throughput of 2 Mbps can be reached for an increasing number of hops in the forwarding chain with a single transmitting node. Moreover, when the number of transmitting nodes increases, the aggregate throughput can reach 3.5 Mbps. The results show that the ad hoc network is a promising solution for the return channel of interactive Digital TV.
---
|
Title: A Survey on Wireless Ad Hoc Networks
Section 1: Introduction
Description 1: This section introduces wireless ad hoc networks, explaining their characteristics, advantages, and typical use cases, and outlining the paper's structure.
Section 2: Medium Access Control Protocols
Description 2: This section discusses various MAC protocols designed for wireless ad hoc networks, including contention-free and contention-based schemes, and their strategies for efficient medium access.
Section 3: Multiple Channel Protocols
Description 3: This section explores MAC protocols employing multiple channels to improve overall network performance by reducing collisions and increasing available bandwidth.
Section 4: Power-aware Protocols
Description 4: This section focuses on protocols that implement energy-saving techniques for battery-powered mobile devices, such as active-standby switching, power setting, and retransmission avoidance.
Section 5: QoS-aware Protocols
Description 5: This section addresses protocols designed to provide Quality of Service (QoS) in ad hoc networks, ensuring limited end-to-end delay and minimum bandwidth for specific flows.
Section 6: Enabling Technologies
Description 6: This section gives an overview of Bluetooth and IEEE 802.11 technologies, which are the foundational standards for implementing wireless ad hoc networks, describing their MAC and physical layer characteristics.
Section 7: Routing Protocols
Description 7: This section examines the routing challenges in wireless ad hoc networks and compares different routing protocols, categorized into topology-based and position-based protocols.
Section 8: Transport Protocols
Description 8: This section delves into TCP performance issues in wireless ad hoc networks and explores various proposals for enhancing TCP efficiency, including split connection and cross-layer approaches.
Section 9: Directional Antennas
Description 9: This section investigates the use of directional antennas in ad hoc networks, discussing their advantages, the problems arising from their use, and proposed MAC and routing protocols to leverage their benefits.
Section 10: Security
Description 10: This section explores the security challenges in wireless ad hoc networks, identifying common attacks and discussing various protocols designed to secure ad hoc routing and communication.
Section 11: Underserved Communities
Description 11: This section describes the application of wireless ad hoc networks in fostering digital inclusion within underserved communities, emphasizing the deployment of community networks for social inclusion.
|
Fault diagnosis of electronic systems using intelligent techniques: a review
| 12 |
---
paper_title: System Test And Diagnosis
paper_content:
Part One: Motivation. 1. Introduction. 2. Maintainability: a Historical Perspective. 3. Field Diagnosis and Repair: the Problem. Part Two: Analysis and Application. 4. Bottom-Up Modeling for Diagnosis. 5. System Level Analysis for Diagnosis. 6. The Information Flow Model. 7. System Level Diagnosis. 8. Evaluating System Diagnosability. 9. Verification and Validation. 10. Architecture for System Diagnosis. Part Three: Advanced Topics. 11. Inexact Diagnosis. 12. Partitioning Large Problems. 13. Modeling Temporal Information. 14. Adaptive Diagnosis. 15. Diagnosis -- Art versus Science. References. Index.
---
paper_title: Knowledge-based systems for instrumentation diagnosis, system configuration and circuit and system design
paper_content:
A number of knowledge-based systems in the electronic engineering field have been developed in the past decade. These include those that use knowledge-based techniques to diagnose instrumentation, determine system configurations and to aid circuit and system design. This paper reviews the literature on knowledge-based systems for electronic engineering applications in these areas and reports progress made in the development of a basis for realising electronic systems.
---
paper_title: Pimtool, an expert system to troubleshoot computer hardware failures
paper_content:
This paper describes a tool to diagnose the cause of failure of an HP computer server. We do this by analyzing the dump of Processor Internal Memory (PIM) and maximizing the leverage of expert learning from one hardware failure situation to another. The tool is a rule-based expert system, with some nested rules which translate to decision trees. The rules were implemented using a metalanguage which was customized for the hardware failure analysis problem domain. Pimtool has been deployed to 25 users as of December 1996. We plan to expand usage to over 400 users by the end of 1997. Using Pimtool, we expect to save over 15 minutes in Mean-Time-to-Repair (MTTR) per call. We have recognized that knowledge management will be a key issue in the future and are developing tools and strategies to address it.
---
paper_title: Artificial Intelligence: Structures and Strategies for Complex Problem Solving
paper_content:
Artificial Intelligence: Structures and Strategies for Complex Problem Solving by George F. Luger 6th edition, Addison Wesley, 2008 The book serves as a good introductory textbook for artificial intelligence, particularly for undergraduate level. It covers major AI topics and makes good connection between different areas of artificial intelligence. Along with each technique and algorithm introduced in the book, is a discussion of its complexity and application domain. There is an attached website to the book that provides auxiliary materials for some chapters, sample problems with solutions, and ideas for student projects. Besides Prolog and Lisp, java and C++ are also used to implement many of the algorithms in the book. The book is organized in five parts. The first part (chapter 1) gives an overview of AI, its history and its various application areas. The second part (chapters 2–6) concerns with knowledge representation and search algorithms. Chapter 2 introduces predicate calculus as a mathematical tool for representing AI problems. The state space search as well as un-informed and heuristic search methods is introduced in chapters 3 and 4. Chapter 5 discusses the issue of uncertainty in problem solving and covers the foundation of stochastic methodology and its application. In chapter 6 the implementation of search algorithms is shown in production system and blackboard architectures. Part 3 (chapters 7–9) discusses knowledge representation and different methods of problem solving, including strong, weak and distributed problem solving. Chapter 7 begins with reviewing the history of evolution of AI representation schemes, including semantic networks, frames, scripts and conceptual graphs. This chapter ends with a brief introduction of Agent problem solving. Chapter 8 presents the production model and rule-based expert systems as well as case-based and model-based reasoning. The methods of dealing with various aspects of uncertainty are discussed in chapter 9. These methods include Dempster-Shafer theory of evidence, Bayesian and Belief networks, fuzzy logics and Markov models. Part 4 is devoted to machine learning. Chapter 10 describes algorithms for symbol-based learning, including induction, concept learning, vision-space search and ID3. The neural network methods for learning, such as back propagation, competitive, Associative memories and Hebbian Coincidence learning were presented in chapter 11. Genetic algorithms and evolutionary learning approaches are introduced in chapter 12. Chapter 13 introduces stochastic and dynamic models of learning along with Hidden Markov Models, Dynamic Baysian networks and Markov Decision Processes. Part 5 (chapters 14 and 15) examines two main application of AI: automated reasoning and natural language understanding. Chapter 14 begins with an introduction to weak methods in problem solving and continues with presenting resolution theorem proving. Chapter 15 deals with the complex issue of natural language understanding by discussing main methods of syntax and semantic analysis of natural language corpus. The chapter ends with examples of natural language application in Database query generation, text summarization and question answering systems. Finally, chapter 16 is a summary of the materials covered in the book as well current AI's limitations and future directions. One criticism about the book would be that the materials are not covered in enough depth. 
Because of the space limitation, many important AI algorithms and techniques are discussed briefly without providing enough details. As a result, some chapters (e.g., 8, 9, 11, and 13) of the book should be supported by complementary materials to make it understandable for undergraduate students and motivating for graduate students. Another issue is with the structure of the book. The order of presenting chapters introduces sequentially different challenges and techniques in problem solving. Consequently, some topics such as uncertainty and logic are not introduced separately and are distributed in different chapters of the book related to different parts. Although interesting, this makes the book hard to follow. In summary, the book gives a great insight to the readers that want to familiar themselves with artificial intelligence. It covers a broad range of topics in AI problem solving and its practical application and is a good reference for an undergraduate level introductory AI class. Elham S. Khorasani, Department of Computer Science Southern Illinois University Carbondale, IL 62901, USA
---
paper_title: Implementing an expert system for fault diagnosis of electronic equipment
paper_content:
Abstract The aim of this work is to develop a rule-based expert system to aid an operator in the fault diagnosis of the electronics of forge press equipment. It is a menu-driven package, developed in Turbo PROLOG on an IBM PC., to help the operator fix faults up to replaceable module level. The system has been broadly categorised into eight sub-systems, and the rules, based on fault cause relations, have been developed for each of the sub-systems. This modular development reduces the access time, and also facilitates the handling of the knowledge base.
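As a rough illustration of the fault-cause rules such a menu-driven diagnostic package encodes, the following minimal Python sketch (not the original Turbo PROLOG implementation) matches observed symptoms against rule conditions and reports the replaceable modules they implicate; all symptom and module names here are invented for the example.
```python
# Minimal sketch of a rule-based fault-diagnosis step in the spirit of the
# system described above. Symptom and module names are hypothetical.

RULES = [
    # (set of symptoms that must all be observed, suspected replaceable module)
    ({"no_output_voltage", "fuse_blown"}, "power supply module"),
    ({"no_output_voltage", "fuse_ok"}, "rectifier board"),
    ({"erratic_stroke", "sensor_signal_noisy"}, "position sensor card"),
]

def diagnose(observed):
    """Return the modules whose rule conditions are all satisfied."""
    observed = set(observed)
    return [module for condition, module in RULES if condition <= observed]

if __name__ == "__main__":
    print(diagnose({"no_output_voltage", "fuse_ok"}))  # -> ['rectifier board']
```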
---
paper_title: ESPCRM—an expert system for personal computer repair and maintenance
paper_content:
Abstract This paper describes the design and implementation of an expert system for personal computer repair and maintenance (ESPCRM). Based on the Personal Consultant Plus 4.0 expert system shell, ESPCRM provides consultation for the repair and maintenance of the whole series of IBM/IBM compatible PCs from the XT to 486-based machines. Troubleshooting a personal computer (PC) is a knowledge-intensive task. Depending on the experience of the technician, a simple problem could take hours or even days to solve. An expert system offers a viable solution to the problem. Presently, the knowledge base of the expert system developed consists of some 94 rules, 68 parameters and 40 graphic pages. The acquisition of knowledge is conducted through interviews with technicians in the PC repair workshop catering for some 1200 PCs of various makes, models and configurations within the Nanyang Technological University (NTU).
---
paper_title: Model-based diagnosis of analog electronic circuits
paper_content:
Diagnosing analog systems, i.e. systems for which physical quantities vary over time in a continuous range is, in itself, a difficult problem. Analog electronic circuits, especially those with feedback loops, raise new difficulties that cannot be solved by using classical techniques. This paper shows how model-based diagnosis theory can be used to diagnose analog circuits. The two main tasks for making the theory applicable to real size problems will be emphasized: the modeling of the system to be diagnosed, and the building of efficient conflict recognition engines adapted to the formalism used for the modeling. This will be illustrated through the description of two systems. The first one, DEDALE, only considers failures observable in quiescent mode. It uses qualitative modeling based on relative orders of magnitude relations, for which an axiomatics is given, thus allowing a symbolic solver for checking consistency of such relations to be developed. The second one, CATS/DIANA, deals with time variations. It uses modeling based on numeric intervals, arrays of such intervals to represent transient signals, and an ATMS-like domain-independent conflict recognition engine, CATS. This engine is able to work on such data and to achieve interval propagation through constraints in such a way as to focus on the detection of all minimal nogoods. It is thus well adapted for diagnosing continuous time-varying physical systems. Experimental results of the two systems are given through various types of circuits.
---
paper_title: Diagnosing circuits with state: an inherently underconstrained problem
paper_content:
“Hard problems” can be hard because they are computationally intractable, or because they are underconstrained. Here we describe candidate generation for digital devices with state, a fault localization problem that is intractable when the devices are described at low levels of abstraction, and is underconstrained when described at higher levels of abstraction. Previous work [1] has shown that a fault in a combinatorial digital circuit can be localized using a constraint-based representation of structure and behavior. In this paper we (1) extend this representation to model a circuit with state by choosing a time granularity and vocabulary of signals appropriate to that circuit; (2) demonstrate that the same candidate generation procedure that works for combinatorial circuits becomes indiscriminate when applied to a state circuit modeled in that extended representation; (3) show how the common technique of single-stepping can be viewed as a divide-and-conquer approach to overcoming that lack of constraint; and (4) illustrate how using structural detail can help to make the candidate generator discriminating once again, but only at great cost.
---
paper_title: Diagnostic reasoning based on structure and behavior
paper_content:
Abstract We describe a system that reasons from first principles, i.e., using knowledge of structure and behavior. The system has been implemented and tested on several examples in the domain of troubleshooting digital electronic circuits. We give an example of the system in operation, illustrating that this approach provides several advantages, including a significant degree of device independence, the ability to constrain the hypotheses it considers at the outset, yet deal with a progressively wider range of problems, and the ability to deal with situations that are novel in the sense that their outward manifestations may not have been encountered previously. As background we review our basic approach to describing structure and behavior, then explore some of the technologies used previously in troubleshooting. Difficulties encountered there lead us to a number of new contributions, four of which make up the central focus of this paper. • — We describe a technique we call constraint suspension that provides a powerful tool for troubleshooting. • — We point out the importance of making explicit the assumptions underlying reasoning and describe a technique that helps enumerate assumptions methodically. • — The result is an overall strategy for troubleshooting based on the progressive relaxation of underlying assumptions. The system can focus its efforts initially, yet will methodically expand its focus to include a broad range of faults. • — Finally, abstracting from our examples, we find that the concept of adjacency proves to be useful in understanding why some faults are especially difficult to diagnose and why multiple representations are useful.
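The constraint suspension idea described above can be sketched on the classic adder/multiplier toy circuit: suspend one component's behavioural constraint at a time and ask whether some assignment to the unobserved internal lines makes the remaining model consistent with the observations. The circuit, values and small search domain below are illustrative assumptions, not the paper's implementation.
```python
# Toy constraint-suspension sketch (invented values). A component is kept as a
# suspect if, with its constraint suspended, some assignment to the unobserved
# internal lines satisfies all remaining constraints plus the observations.
from itertools import product

CONSTRAINTS = {
    "M1": lambda v: v["x"] == v["a"] * v["c"],
    "M2": lambda v: v["y"] == v["b"] * v["d"],
    "A1": lambda v: v["f"] == v["x"] + v["y"],
}
OBSERVED = {"a": 2, "c": 3, "b": 2, "d": 2, "f": 12}   # a correct circuit would give f = 10
INTERNAL = ["x", "y"]                                   # unobserved internal lines
DOMAIN = range(0, 21)                                   # small search space for the toy

def suspects():
    result = []
    for suspended in CONSTRAINTS:
        active = [chk for name, chk in CONSTRAINTS.items() if name != suspended]
        for vals in product(DOMAIN, repeat=len(INTERNAL)):
            v = dict(OBSERVED, **dict(zip(INTERNAL, vals)))
            if all(chk(v) for chk in active):
                result.append(suspended)
                break
    return result

print(suspects())   # with only f observed, each component alone could explain the fault
```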
---
paper_title: Qualitative Reasoning: A Survey of Techniques Applications
paper_content:
After preliminary work in economics and control theory, qualitative reasoning emerged in AI at the end of the 70s and beginning of the 80s, in the form of Naive Physics and Commonsense Reasoning. This way was progressively abandoned in aid of more formalised approaches to tackle modelling problems in engineering tasks. Qualitative Reasoning became a proper subfield of AI in 1984, the year when several seminal papers developed the foundations and the main concepts that remain topical today. Since then Qualitative Reasoning has considerably broadened the scope of problems addressed, investigating new tasks and new systems, such as natural systems. This paper gives a survey of the development of Qualitative Reasoning from the 80s, focusing on the present state-of-the-art of the mathematical formalisms and modelling techniques, and presents the principal domains of application through the applied research done in France.
---
paper_title: Generation of diagnostic trees by means of simplified process models and machine learning
paper_content:
Abstract Fault diagnosis by means of diagnostic trees is of considerable interest for industrial applications. The drawbacks of this approach are mostly related to the knowledge elicitation through laborious enumeration of the tree structure and ad hoc threshold selection for symptoms definition. These problems can be alleviated if a more profound knowledge of the process is brought into play. The main idea of the paper consists of modeling the nominal and faulty states of the plant by means of interval-like component models derived from first-principles laws, e.g. the conservation law. Such a model serves to simulate the entire system under different fault conditions, in order to obtain the representative patterns of measurable process quantities, i.e. training examples. To match these patterns by diagnostic rules, multistrategy machine learning is applied. As a result, binary decision trees that relate symptoms to faults are obtained, along with the thresholds defining the symptoms. This technique is applied to a laboratory test process operating in the steady state, and is shown to be suitable for handling incipient single faults. The proposed learning approach is compared with two related machine learning methods. It is found that it achieves similar classification accuracy with better transparency of the resulting diagnostic system.
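A hedged sketch of the learning step described above: a binary decision tree is fitted to synthetic measurement patterns labelled with the simulated fault condition, so that the learned split thresholds take the place of ad hoc symptom thresholds. It assumes scikit-learn and NumPy are available; the process quantities and fault classes are invented.
```python
# Illustrative sketch (not the paper's code): learn a decision tree that maps
# measurable process quantities to fault classes, using synthetic data in place
# of patterns generated from nominal/faulty component models.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
# Hypothetical steady-state measurements: [flow, level] under three conditions.
nominal  = rng.normal([1.0, 0.5], 0.02, size=(50, 2))
leak     = rng.normal([0.8, 0.3], 0.02, size=(50, 2))   # fault 1
blockage = rng.normal([0.6, 0.7], 0.02, size=(50, 2))   # fault 2

X = np.vstack([nominal, leak, blockage])
y = ["nominal"] * 50 + ["leak"] * 50 + ["blockage"] * 50

tree = DecisionTreeClassifier(max_depth=3).fit(X, y)
# The learned split thresholds play the role of symptom thresholds.
print(export_text(tree, feature_names=["flow", "level"]))
```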
---
paper_title: Troubleshooting: When Modeling Is the Trouble
paper_content:
This paper shows how order of magnitude reasoning has been successfully used for troubleshooting complex analog circuits. The originality of this approach was to be able to remove the gap between the information required to apply a general theory of diagnosis and the limited information actually available. The expert's ability to detect a defect by reasoning about the significant changes in behavior it induces is extensively exploited here: as a kind of reasoning that justifies the qualitative modeling, as a heuristic that defines a strategy, and as a working hypothesis that makes clear the scope of this approach.
---
paper_title: AUTOMATIC FAULT-TREE GENERATION
paper_content:
Abstract This paper presents an expert system approach to off-line generation and optimisation of fault-trees for use in on-line fault diagnosis systems, incorporating the knowledge and experience of manufacturers and users. The size of the problem is such that explicit formulation of the fault-tree is very complicated. The knowledge, however, is implicitly available in the description of faults in terms of symptom-codes, results of performed tests and repair actions. Case-based reasoning is selected for the implementation to facilitate the automatic generation, consistency checking and maintenance of the fault-tree. Different diagnosis systems for different levels of diagnosis tasks can be generated automatically from the same problem description. Special attention is given to the processing speed needed for on-line use on modern transportation systems.
---
paper_title: Encapsulation and diagnosis with fault dictionaries
paper_content:
To date, test and diagnosis has been domain knowledge driven. However, as system complexity grows and we strive to develop reusable components, the concept of encapsulation becomes increasingly important. Encapsulation embodies the concepts of separation and partitioning. In this paper we deal with encapsulation by illustration of the fault dictionary approach to digital electronics. We then extend the concept of encapsulation to the system test approach as well as the development of maintenance systems. Finally we develop the concept that encapsulation is a key element in achieving general standardization open system architectures.
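The fault-dictionary approach mentioned above reduces, at diagnosis time, to a lookup from an observed output signature to the candidate faults whose simulated signatures match it. The toy dictionary below is hypothetical; real dictionaries are generated off-line by fault simulation over the test-vector set.
```python
# Minimal fault-dictionary sketch: each modeled fault is simulated once and its
# output signature stored; diagnosis matches the observed signature against the
# dictionary. The circuit, test vectors and faults are invented.

FAULT_DICTIONARY = {
    (1, 0, 1, 1): ["U3 output stuck-at-1"],
    (0, 0, 1, 0): ["U1 output stuck-at-0", "open on net N7"],  # not distinguishable
    (1, 1, 0, 1): ["U2 input B stuck-at-1"],
}
GOOD_SIGNATURE = (1, 1, 1, 0)

def diagnose(observed):
    if tuple(observed) == GOOD_SIGNATURE:
        return ["no fault detected"]
    return FAULT_DICTIONARY.get(tuple(observed), ["fault not in dictionary"])

print(diagnose((0, 0, 1, 0)))   # -> ['U1 output stuck-at-0', 'open on net N7']
```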
---
paper_title: A Probabilistic Causal Model for Diagnostic Problem Solving Part I: Integrating Symbolic Causal Inference with Numeric Probabilistic Inference
paper_content:
The issue of how to effectively integrate and use symbolic causal knowledge with numeric estimates of probabilities in abductive diagnostic expert systems is examined. In particular, a formal probabilistic causal model that integrates Bayesian classification with a domain-independent artificial intelligence model of diagnostic problem solving (parsimonious covering theory) is developed. Through a careful analysis, it is shown that the causal relationships in a general diagnostic domain can be used to remove the barriers to applying Bayesian classification effectively (large number of probabilities required as part of the knowledge base, certain unrealistic independence assumptions, the explosion of diagnostic hypotheses that occurs when multiple disorders can occur simultaneously, etc.). Further, this analysis provides insight into which notions of "parsimony" may be relevant in a given application area. In a companion paper, Part Two, a computationally efficient diagnostic strategy based on the probabilistic causal model discussed in this paper is developed.
---
paper_title: A Probabilistic Causal Model for Diagnostic Problem Solving Part II: Diagnostic Strategy
paper_content:
An important issue in diagnostic problem solving is how to generate and rank plausible hypotheses for a given set of manifestations. Since the space of possible hypotheses can be astronomically large if multiple disorders can be present simultaneously, some means is required to focus an expert system's attention on those hypotheses most likely to be valid. A domain-independent algorithm is presented that uses symbolic causal knowledge and numeric probabilistic knowledge to generate and evaluate plausible hypotheses during diagnostic problem solving. Given a set of manifestations known to be present, the algorithm uses a merit function for partially completed competing hypotheses to guide itself to the provably most probable hypothesis or hypotheses.
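A deliberately simplified sketch of probabilistic hypothesis ranking in this spirit, restricted to single-disorder hypotheses: each candidate disorder is scored by its prior times the causal strengths of the observed manifestations. The numbers and names are invented, and the full parsimonious-covering algorithm, which searches multi-disorder covers with a merit function, is not reproduced here.
```python
# Rank competing single-disorder hypotheses by P(disorder) * P(manifestations | disorder).
# Hypothetical disorders D1..D3 and manifestations m1, m2.
PRIOR = {"D1": 0.01, "D2": 0.005, "D3": 0.02}
CAUSAL = {                                  # causal strengths P(manifestation | disorder)
    "D1": {"m1": 0.9, "m2": 0.2},
    "D2": {"m1": 0.3, "m2": 0.8},
    "D3": {"m1": 0.1, "m2": 0.1},
}

def score(disorder, manifestations):
    p = PRIOR[disorder]
    for m in manifestations:
        p *= CAUSAL[disorder].get(m, 1e-3)  # small leak term for uncovered manifestations
    return p

observed = {"m1", "m2"}
ranking = sorted(PRIOR, key=lambda d: score(d, observed), reverse=True)
print(ranking)   # most plausible single-disorder explanation first
```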
---
paper_title: Model-Based Diagnosis under Real-World Constraints
paper_content:
I report on my experience over the past few years in introducing automated, model-based diagnostic technologies into industrial settings. In partic-ular, I discuss the competition that this technology has been receiving from handcrafted, rule-based diagnostic systems that has set some high standards that must be met by model-based systems before they can be viewed as viable alternatives. The battle between model-based and rule-based approaches to diagnosis has been over in the academic literature for many years, but the situation is different in industry where rule-based systems are dominant and appear to be attractive given the considerations of efficiency, embeddability, and cost effectiveness. My goal in this article is to provide a perspective on this competition and discuss a diagnostic tool, called DTOOL/CNETS, that I have been developing over the years as I tried to address the major challenges posed by rule-based systems. In particular, I discuss three major features of the developed tool that were either adopted, designed, or innovated to address these challenges: (1) its compositional modeling approach, (2) its structure-based computational approach, and (3) its ability to synthesize embeddable diagnostic systems for a variety of software and hardware platforms.
---
paper_title: Application of a bayesian network to integrated circuit tester diagnosis
paper_content:
Research efforts to implement a Bayesian belief-network-based expert system to solve a real-world diagnostic problem, the diagnosis of integrated circuit (IC) testing machines, are described. The development of several models of the IC tester diagnostic problem in belief networks is also described, the implementation of one of these models using symbolic probabilistic inference (SPI) is outlined, and the difficulties and advantages encountered are discussed. It was observed that modeling with interdependencies in belief networks simplifies the knowledge engineering task for the IC tester diagnosis problem, by avoiding procedural knowledge and focusing on the diagnostic component's interdependencies. Several general model frameworks evolved through knowledge engineering to capture diagnostic expertise, which facilitated expanding and modifying the networks. However, model implementation was restricted to a small portion of the modeling, that of contact resistance failures, owing to time limitations and inefficiencies in the prototype inference software used. Further research is recommended to refine existing methods, in order to speed evaluation of the models created in this research. With this accomplished, a more complete diagnosis can be achieved.
---
paper_title: AN INTELLIGENT APPROACH TO AUTOMATIC TEST EQUIPMENT
paper_content:
In diagnosing a failed system, a smart technician would choose tests to be performed based on the context of the situation. Currently, test program sets do not fault-isolate within the context of a situation. Instead, testing follows a rigid, predetermined fault-isolation sequence that is based on an embedded fault tree. Current test programs do not tolerate instrument failure and cannot redirect testing by incorporating new information. However, there is a new approach to automatic testing that emulates the best features of a trained technician yet, unlike the development of rule-based expert systems, does not require a trained technician to build the knowledge base. This new approach is model-based and has evolved over the last 10 years. This evolution has led to the development of several maintenance tools and an architecture for intelligent automatic test equipment (ATE). The architecture has been implemented for testing two cards from an AV-8B power supply.
---
paper_title: System Test And Diagnosis
paper_content:
Part One: Motivation. 1. Introduction. 2. Maintainability: a Historical Perspective. 3. Field Diagnosis and Repair: the Problem. Part Two: Analysis and Application. 4. Bottom-Up Modeling for Diagnosis. 5. System Level Analysis for Diagnosis. 6. The Information Flow Model. 7. System Level Diagnosis. 8. Evaluating System Diagnosability. 9. Verification and Validation. 10. Architecture for System Diagnosis. Part Three: Advanced Topics. 11. Inexact Diagnosis. 12. Partitioning Large Problems. 13. Modeling Temporal Information. 14. Adaptive Diagnosis. 15. Diagnosis -- Art versus Science. References. Index.
---
paper_title: Artificial Intelligence: Structures and Strategies for Complex Problem Solving
paper_content:
Artificial Intelligence: Structures and Strategies for Complex Problem Solving by George F. Luger 6th edition, Addison Wesley, 2008 The book serves as a good introductory textbook for artificial intelligence, particularly for undergraduate level. It covers major AI topics and makes good connection between different areas of artificial intelligence. Along with each technique and algorithm introduced in the book, is a discussion of its complexity and application domain. There is an attached website to the book that provides auxiliary materials for some chapters, sample problems with solutions, and ideas for student projects. Besides Prolog and Lisp, java and C++ are also used to implement many of the algorithms in the book. The book is organized in five parts. The first part (chapter 1) gives an overview of AI, its history and its various application areas. The second part (chapters 2–6) concerns with knowledge representation and search algorithms. Chapter 2 introduces predicate calculus as a mathematical tool for representing AI problems. The state space search as well as un-informed and heuristic search methods is introduced in chapters 3 and 4. Chapter 5 discusses the issue of uncertainty in problem solving and covers the foundation of stochastic methodology and its application. In chapter 6 the implementation of search algorithms is shown in production system and blackboard architectures. Part 3 (chapters 7–9) discusses knowledge representation and different methods of problem solving, including strong, weak and distributed problem solving. Chapter 7 begins with reviewing the history of evolution of AI representation schemes, including semantic networks, frames, scripts and conceptual graphs. This chapter ends with a brief introduction of Agent problem solving. Chapter 8 presents the production model and rule-based expert systems as well as case-based and model-based reasoning. The methods of dealing with various aspects of uncertainty are discussed in chapter 9. These methods include Dempster-Shafer theory of evidence, Bayesian and Belief networks, fuzzy logics and Markov models. Part 4 is devoted to machine learning. Chapter 10 describes algorithms for symbol-based learning, including induction, concept learning, vision-space search and ID3. The neural network methods for learning, such as back propagation, competitive, Associative memories and Hebbian Coincidence learning were presented in chapter 11. Genetic algorithms and evolutionary learning approaches are introduced in chapter 12. Chapter 13 introduces stochastic and dynamic models of learning along with Hidden Markov Models, Dynamic Baysian networks and Markov Decision Processes. Part 5 (chapters 14 and 15) examines two main application of AI: automated reasoning and natural language understanding. Chapter 14 begins with an introduction to weak methods in problem solving and continues with presenting resolution theorem proving. Chapter 15 deals with the complex issue of natural language understanding by discussing main methods of syntax and semantic analysis of natural language corpus. The chapter ends with examples of natural language application in Database query generation, text summarization and question answering systems. Finally, chapter 16 is a summary of the materials covered in the book as well current AI's limitations and future directions. One criticism about the book would be that the materials are not covered in enough depth. 
Because of the space limitation, many important AI algorithms and techniques are discussed briefly without providing enough details. As a result, some chapters (e.g., 8, 9, 11, and 13) of the book should be supported by complementary materials to make it understandable for undergraduate students and motivating for graduate students. Another issue is with the structure of the book. The order of presenting chapters introduces sequentially different challenges and techniques in problem solving. Consequently, some topics such as uncertainty and logic are not introduced separately and are distributed in different chapters of the book related to different parts. Although interesting, this makes the book hard to follow. In summary, the book gives a great insight to the readers that want to familiar themselves with artificial intelligence. It covers a broad range of topics in AI problem solving and its practical application and is a good reference for an undergraduate level introductory AI class. Elham S. Khorasani, Department of Computer Science Southern Illinois University Carbondale, IL 62901, USA
---
paper_title: An incremental retrieval mechanism for case-based electronic fault diagnosis
paper_content:
One problem with using CBR for diagnosis is that a full case description may not be available at the beginning of the diagnosis. The standard CBR methodology requires a detailed case description in order to perform case retrieval and this is often not practical in diagnosis. We describe two fault diagnosis tasks where many features may make up a case description but only a few features are required in an individual diagnosis. We evaluate an incremental CBR mechanism that can initiate case retrieval with a skeletal case description and will elicit extra discriminating information during the diagnostic process.
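A minimal sketch of the incremental case retrieval described above: retrieval starts from a skeletal description, and the system repeatedly elicits the feature that best discriminates the remaining candidate cases. The case base, feature names and scripted "technician answers" are invented for illustration.
```python
# Incremental case retrieval sketch: narrow the candidate cases by asking for
# the most discriminating missing feature at each step.
CASES = [
    {"fault": "PSU",   "features": {"powers_on": "no",  "fan": "off", "beep": "none"}},
    {"fault": "RAM",   "features": {"powers_on": "yes", "fan": "on",  "beep": "3 short"}},
    {"fault": "Video", "features": {"powers_on": "yes", "fan": "on",  "beep": "1 long"}},
]
ANSWERS = {"fan": "on", "beep": "3 short"}      # stands in for asking the technician

def candidates(known):
    return [c for c in CASES
            if all(c["features"].get(f) == v for f, v in known.items())]

def most_discriminating(cands, known):
    feats = {f for c in cands for f in c["features"]} - known.keys()
    # prefer the feature taking the most distinct values among the candidates
    return max(feats, key=lambda f: len({c["features"][f] for c in cands}))

known = {"powers_on": "yes"}                     # skeletal initial description
while len(candidates(known)) > 1:
    f = most_discriminating(candidates(known), known)
    known[f] = ANSWERS[f]                        # elicit the extra, discriminating feature
print("Diagnosis:", [c["fault"] for c in candidates(known)])   # -> ['RAM']
```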
---
paper_title: A CBR application: service productivity improvement by sharing experience
paper_content:
Field service is now recognized as one of the most important corporate activities for improving customer satisfaction and competing successfully in world-wide competition. Sharing repair experience with state-of-the-art computer technology is a key issue in improving the productivity of field service. We have developed a diagnostic expert system, named Doctor, which employs case-based reasoning (CBR) and lists the ten most necessary service parts from a product type and some symptoms acquired from a service-request call. In this paper, we describe the Doctor system and explain how accurate and reliable product-type case bases are generated and updated from the troubleshooting experience and the generic case base, i.e., general diagnostic knowledge. We also demonstrate the effectiveness of our system with experimental results using real repair cases.
---
paper_title: Circuit diagnosis support system for electronics assembly operations
paper_content:
Abstract Diagnosis and repair operations are often major bottlenecks in electronics circuit assembly operations. Increasing board density and circuit complexity have made fault diagnosis difficult. But, with shrinking product life cycles and increasing competition, quick diagnosis and feedback is critical for cost control, process improvement, and timely product introduction. This paper describes a case-based diagnosis support system to improve the effectiveness and efficiency of circuit diagnosis in electronics assembly facilities. The system stores individual diagnostic instances rather than general rules and algorithmic procedures, and prioritizes the tests during the sequential testing process. Its knowledge base grows as new faults are detected and diagnosed by the analyzers. The system provides distributed access to multiple users, and incorporates on-line updating features that make it quick to adapt to changing circumstances. Because it is easy to install and update, this method is well-suited for real manufacturing applications. We have implemented a prototype version, and tested the approach in an actual electronics assembly environment. We describe the system's underlying principles, discuss methods to improve diagnostic effectiveness through principled test selection and sequencing, and discuss managerial implications for successful implementation.
---
paper_title: Explanation-based learning with diagnostic models
paper_content:
The author discusses an approach to identifying and correcting errors in diagnostic models using explanation based learning. The approach uses a model of the system to be diagnosed that may have missing information about the relationships between tests and possible diagnoses. In particular, he uses a structural model or information flow model to guide diagnosis. When misdiagnosis occurs, the model is used to determine how to search for the actual fault through additional testing. When the fault is identified, an explanation is constructed from the original misdiagnosis and the model is modified to compensate for the incorrect behavior of the system. >
---
paper_title: Artificial Intelligence: Structures and Strategies for Complex Problem Solving
paper_content:
Artificial Intelligence: Structures and Strategies for Complex Problem Solving by George F. Luger 6th edition, Addison Wesley, 2008 The book serves as a good introductory textbook for artificial intelligence, particularly for undergraduate level. It covers major AI topics and makes good connection between different areas of artificial intelligence. Along with each technique and algorithm introduced in the book, is a discussion of its complexity and application domain. There is an attached website to the book that provides auxiliary materials for some chapters, sample problems with solutions, and ideas for student projects. Besides Prolog and Lisp, java and C++ are also used to implement many of the algorithms in the book. The book is organized in five parts. The first part (chapter 1) gives an overview of AI, its history and its various application areas. The second part (chapters 2–6) concerns with knowledge representation and search algorithms. Chapter 2 introduces predicate calculus as a mathematical tool for representing AI problems. The state space search as well as un-informed and heuristic search methods is introduced in chapters 3 and 4. Chapter 5 discusses the issue of uncertainty in problem solving and covers the foundation of stochastic methodology and its application. In chapter 6 the implementation of search algorithms is shown in production system and blackboard architectures. Part 3 (chapters 7–9) discusses knowledge representation and different methods of problem solving, including strong, weak and distributed problem solving. Chapter 7 begins with reviewing the history of evolution of AI representation schemes, including semantic networks, frames, scripts and conceptual graphs. This chapter ends with a brief introduction of Agent problem solving. Chapter 8 presents the production model and rule-based expert systems as well as case-based and model-based reasoning. The methods of dealing with various aspects of uncertainty are discussed in chapter 9. These methods include Dempster-Shafer theory of evidence, Bayesian and Belief networks, fuzzy logics and Markov models. Part 4 is devoted to machine learning. Chapter 10 describes algorithms for symbol-based learning, including induction, concept learning, vision-space search and ID3. The neural network methods for learning, such as back propagation, competitive, Associative memories and Hebbian Coincidence learning were presented in chapter 11. Genetic algorithms and evolutionary learning approaches are introduced in chapter 12. Chapter 13 introduces stochastic and dynamic models of learning along with Hidden Markov Models, Dynamic Baysian networks and Markov Decision Processes. Part 5 (chapters 14 and 15) examines two main application of AI: automated reasoning and natural language understanding. Chapter 14 begins with an introduction to weak methods in problem solving and continues with presenting resolution theorem proving. Chapter 15 deals with the complex issue of natural language understanding by discussing main methods of syntax and semantic analysis of natural language corpus. The chapter ends with examples of natural language application in Database query generation, text summarization and question answering systems. Finally, chapter 16 is a summary of the materials covered in the book as well current AI's limitations and future directions. One criticism about the book would be that the materials are not covered in enough depth. 
Because of the space limitation, many important AI algorithms and techniques are discussed briefly without providing enough details. As a result, some chapters (e.g., 8, 9, 11, and 13) of the book should be supported by complementary materials to make it understandable for undergraduate students and motivating for graduate students. Another issue is with the structure of the book. The order of presenting chapters introduces sequentially different challenges and techniques in problem solving. Consequently, some topics such as uncertainty and logic are not introduced separately and are distributed in different chapters of the book related to different parts. Although interesting, this makes the book hard to follow. In summary, the book gives a great insight to the readers that want to familiar themselves with artificial intelligence. It covers a broad range of topics in AI problem solving and its practical application and is a good reference for an undergraduate level introductory AI class. Elham S. Khorasani, Department of Computer Science Southern Illinois University Carbondale, IL 62901, USA
---
paper_title: Artificial Intelligence: Structures and Strategies for Complex Problem Solving
paper_content:
Artificial Intelligence: Structures and Strategies for Complex Problem Solving by George F. Luger 6th edition, Addison Wesley, 2008 The book serves as a good introductory textbook for artificial intelligence, particularly for undergraduate level. It covers major AI topics and makes good connection between different areas of artificial intelligence. Along with each technique and algorithm introduced in the book, is a discussion of its complexity and application domain. There is an attached website to the book that provides auxiliary materials for some chapters, sample problems with solutions, and ideas for student projects. Besides Prolog and Lisp, java and C++ are also used to implement many of the algorithms in the book. The book is organized in five parts. The first part (chapter 1) gives an overview of AI, its history and its various application areas. The second part (chapters 2–6) concerns with knowledge representation and search algorithms. Chapter 2 introduces predicate calculus as a mathematical tool for representing AI problems. The state space search as well as un-informed and heuristic search methods is introduced in chapters 3 and 4. Chapter 5 discusses the issue of uncertainty in problem solving and covers the foundation of stochastic methodology and its application. In chapter 6 the implementation of search algorithms is shown in production system and blackboard architectures. Part 3 (chapters 7–9) discusses knowledge representation and different methods of problem solving, including strong, weak and distributed problem solving. Chapter 7 begins with reviewing the history of evolution of AI representation schemes, including semantic networks, frames, scripts and conceptual graphs. This chapter ends with a brief introduction of Agent problem solving. Chapter 8 presents the production model and rule-based expert systems as well as case-based and model-based reasoning. The methods of dealing with various aspects of uncertainty are discussed in chapter 9. These methods include Dempster-Shafer theory of evidence, Bayesian and Belief networks, fuzzy logics and Markov models. Part 4 is devoted to machine learning. Chapter 10 describes algorithms for symbol-based learning, including induction, concept learning, vision-space search and ID3. The neural network methods for learning, such as back propagation, competitive, Associative memories and Hebbian Coincidence learning were presented in chapter 11. Genetic algorithms and evolutionary learning approaches are introduced in chapter 12. Chapter 13 introduces stochastic and dynamic models of learning along with Hidden Markov Models, Dynamic Baysian networks and Markov Decision Processes. Part 5 (chapters 14 and 15) examines two main application of AI: automated reasoning and natural language understanding. Chapter 14 begins with an introduction to weak methods in problem solving and continues with presenting resolution theorem proving. Chapter 15 deals with the complex issue of natural language understanding by discussing main methods of syntax and semantic analysis of natural language corpus. The chapter ends with examples of natural language application in Database query generation, text summarization and question answering systems. Finally, chapter 16 is a summary of the materials covered in the book as well current AI's limitations and future directions. One criticism about the book would be that the materials are not covered in enough depth. 
Because of the space limitation, many important AI algorithms and techniques are discussed briefly without providing enough details. As a result, some chapters (e.g., 8, 9, 11, and 13) of the book should be supported by complementary materials to make it understandable for undergraduate students and motivating for graduate students. Another issue is with the structure of the book. The order of presenting chapters introduces sequentially different challenges and techniques in problem solving. Consequently, some topics such as uncertainty and logic are not introduced separately and are distributed in different chapters of the book related to different parts. Although interesting, this makes the book hard to follow. In summary, the book gives a great insight to the readers that want to familiar themselves with artificial intelligence. It covers a broad range of topics in AI problem solving and its practical application and is a good reference for an undergraduate level introductory AI class. Elham S. Khorasani, Department of Computer Science Southern Illinois University Carbondale, IL 62901, USA
---
paper_title: New developments using AI in fault diagnosis
paper_content:
Abstract This paper is intended to give a survey on the state of the art of model-based fault diagnosis for dynamic processes employing artificial intelligence approaches. Emphasis is placed upon the use of fuzzy models for residual generation and fuzzy logic for residual evaluation. By the suggestion of a knowledge-based observer-like concept for residual generation, the basic idea of a novel observer concept, the so-called “knowledge observer”, is introduced. The neural-network approach for residual generation and evaluation is outlined as well.
---
paper_title: Fault diagnosis in systems using fuzzy logic
paper_content:
Two tasks of fault detection in linear dynamical systems are addressed in this paper. On one hand, to estimate residuals, a system described by a model with some deviations in parameters or unknown input disturbances is considered. In such a situation, sensor fault detection using classical methods is not very efficient. In order to solve this problem, an adaptive thresholding approach using fuzzy logic is proposed. On the other hand, to locate faults, a fuzzy logic technique is put in place of the usual classical logic used with a dedicated observer scheme.
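As a rough illustration of adaptive thresholding with fuzzy logic, the sketch below replaces a crisp residual threshold with a membership degree whose transition band widens when model uncertainty (e.g., known parameter deviations) is high. The membership shape and constants are illustrative assumptions, not taken from the paper.
```python
# Fuzzy adaptive thresholding sketch for residual evaluation: the degree of
# "fault" rises smoothly with the residual, and the transition band is widened
# when the model is known to be uncertain.

def mu_fault(residual, model_uncertainty):
    """Degree in [0, 1] to which the residual indicates a fault."""
    lo = 0.1 + 0.5 * model_uncertainty    # start of the fuzzy transition band
    hi = 0.3 + 1.0 * model_uncertainty    # residuals above this are clearly faulty
    r = abs(residual)
    if r <= lo:
        return 0.0
    if r >= hi:
        return 1.0
    return (r - lo) / (hi - lo)           # linear ramp between the two knees

print(mu_fault(0.25, model_uncertainty=0.0))   # suspicious when the model is good
print(mu_fault(0.25, model_uncertainty=0.3))   # tolerated when uncertainty is high
```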
---
paper_title: Test and diagnosis of analog circuits: When fuzziness can lead to accuracy
paper_content:
Testing and diagnosing analog circuits is a very challenging problem. The inaccuracy of measurement and the infinite domain of possible values are the principal difficulties. AI approaches were the base of many systems which tried to overcome these problems. The first part of this paper is a state of the art of this research area. We present two fundamental approaches, model-based reasoning and qualitative reasoning, and the systems implementing them; a discussion and an evaluation of these systems are given. In the second part, we present and propose a novel approach based on fuzzy logic in order to go further in dealing with analog circuits testing and diagnosis. Tolerance is treated by means of fuzzy intervals which are more general, more efficient and of higher fidelity to represent the imprecision in its different forms than other approaches. Fuzzy intervals are also able to be semi-qualitative which is more suitable to the simulation of analog systems. We use this idea to develop a best test point finding strategy based on fuzzy probabilities and fuzzy decision-making methodology. Finally, a complete expert system which implements this approach is presented.
---
paper_title: FLAMES: A Fuzzy Logic ATMS and Model-based Expert System for Analog Diagnosis
paper_content:
Diagnosing analog circuits with their numerous known difficulties is a very hard problem. Digital approaches have proven to be inappropriate, and AI-based ones suffer from many problems. In this paper we present a new system, FLAMES, which uses fuzzy logic, model-based reasoning, ATMS extension, and the human expertise in an appropriate combination to go far in the treatment of this problem.
---
paper_title: On fuzzy logic applications for automatic control, supervision, and fault diagnosis
paper_content:
The degree of vagueness of variables, process descriptions, and automation functions is considered, and it is shown where quantitative and qualitative knowledge is available for design and information processing within automation systems. Fuzzy-rule-based systems with several levels of rules form the basis for different automation functions. Fuzzy control can be used in many ways, for normal and for special operating conditions. Experience with the design of fuzzy controllers at the basic level is summarized, as well as criteria for efficient applications. Different fuzzy control schemes are considered, including cascade, feedforward, variable structure, self-tuning, adaptive and quality control, leading to hybrid classical/fuzzy control systems. It is then shown how fuzzy logic approaches can be applied to process supervision and to fault diagnosis with approximate reasoning on observed symptoms. Based on the properties of fuzzy logic approaches, the contribution gives a review and classification of the potentials of fuzzy logic in process automation.
---
paper_title: Artificial neural network based multiple fault diagnosis in digital circuits
paper_content:
The paper describes a technique, based on the use of Artificial Neural Networks (ANNs), for the diagnosis of multiple faults in digital circuits. The technique utilises different quantities of randomly selected circuit test data derived from a fault truth table, which is constructed by inserting random single stuck-at faults in the circuit. The paper describes the diagnostic procedure using the technique, the ANN architecture and results obtained with example circuits. Our results demonstrate that when the test data selection procedure is guided by test vectors of the circuit a compact, efficient and flexible ANN architecture is achieved.
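A small, hedged sketch of the ANN-based classification step: a multi-layer perceptron is trained on rows of a (here, invented) fault truth table so that observed test responses map to fault labels. It assumes scikit-learn; the paper's own architecture and test-data selection procedure are not reproduced.
```python
# Illustrative only: train a small MLP to map circuit test-response patterns
# (rows of a tiny, invented fault truth table) to fault labels.
from sklearn.neural_network import MLPClassifier

X = [
    [1, 1, 1, 0],   # responses of the fault-free circuit to 4 test vectors
    [1, 0, 1, 0],   # responses with net A stuck-at-0
    [1, 1, 0, 0],   # responses with net B stuck-at-1
    [0, 1, 1, 0],   # responses with net C stuck-at-0
]
y = ["good", "A/0", "B/1", "C/0"]

clf = MLPClassifier(hidden_layer_sizes=(8,), solver="lbfgs",
                    max_iter=2000, random_state=0).fit(X, y)
print(clf.predict([[1, 0, 1, 0]]))   # should recover the training label 'A/0'
```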
---
paper_title: New developments using AI in fault diagnosis
paper_content:
Abstract This paper is intended to give a survey on the state of the art of model-based fault diagnosis for dynamic processes employing artificial intelligence approaches. Emphasis is placed upon the use of fuzzy models for residual generation and fuzzy logic for residual evaluation. By the suggestion of a knowledge-based observer-like concept for residual generation, the basic idea of a novel observer concept, the so-called “knowledge observer”, is introduced. The neural-network approach for residual generation and evaluation is outlined as well.
---
paper_title: Neural network recognition of electronic malfunctions
paper_content:
Neural network software can be applied to manufacturing process control as a tool for diagnosing the state of an electronic circuit board. The neural network approach significantly reduces the amount of time required to build a diagnostic system. This time reduction occurs because the ordinary combinatorial explosion in rules for identifying faulted components can be avoided. Neural networks circumvent the combinatorial explosion by taking advantage of the fact that the fault characteristics of multiple simultaneous faults frequently correlate to the fault characteristics of the individual faulted components. This article clearly demonstrates that state-of-the-art neural networks can be used in automatic test equipment for iterative diagnosis of electronic circuit board malfunctions.
---
paper_title: Experience in using neural networks for electronic diagnosis
paper_content:
British Telecommunication plc (BT) has an interest in developing fast, efficient diagnostic systems especially for high volume circuit boards as found in today's digital telephone exchanges. Previous work to produce a diagnostic system for line cards has shown that a model-based, expert system shell can be most beneficial in assisting in the diagnosis and subsequent repair of these complex, mixed-signal cards. Expert systems, however successful, can take a long time to develop in terms of knowledge acquisition, model building and rule development. The re-emergence of neural networks stimulated the authors to develop a system that would diagnose common faults found on line cards by training a network using historical test data.
---
paper_title: System Test And Diagnosis
paper_content:
Part One: Motivation. 1. Introduction. 2. Maintainability: a Historical Perspective. 3. Field Diagnosis and Repair: the Problem. Part Two: Analysis and Application. 4. Bottom-Up Modeling for Diagnosis. 5. System Level Analysis for Diagnosis. 6. The Information Flow Model. 7. System Level Diagnosis. 8. Evaluating System Diagnosability. 9. Verification and Validation. 10. Architecture for System Diagnosis. Part Three: Advanced Topics. 11. Inexact Diagnosis. 12. Partitioning Large Problems. 13. Modeling Temporal Information. 14. Adaptive Diagnosis. 15. Diagnosis -- Art versus Science. References. Index.
---
paper_title: Diagnosis of multifaults in analogue circuits using multilayer perceptrons
paper_content:
It is shown, by means of an example, how multiple faults in bipolar analogue integrated circuits can be diagnosed, and their resistances determined, from the magnitudes of the Fourier harmonics in the spectrum of the circuit responses to a sinusoidal input test signal using a two-stage multilayer perceptron (MLP) artificial neural network arrangement to classify the responses to the corresponding fault. A sensitivity analysis is performed to identify those harmonic amplitudes which are most sensitive to the faults, and also to which faults the functioning of the circuit under test is most sensitive. The experimental and simulation procedures are described. The procedures adopted for data preprocessing and for training the MLPs are given. One hundred percent diagnostic accuracy was achieved, and most resistances were determined with tolerable accuracy.
---
paper_title: Fault diagnosis in complex systems using artificial neural networks
paper_content:
Very complex technical and other physical processes require sophisticated methods of fault diagnosis and online condition monitoring. Various conventional techniques have already been well investigated and presented in the literature. However, in the last few years, a lot of attention has been given to adaptive methods based on artificial neural networks, which can significantly improve symptom interpretation and system performance in the case of malfunctioning. Such methods are especially considered in cases where no explicit algorithms or models for the problem under investigation exist. In such problems, automatic interpretation of faulty symptoms with the use of artificial neural network classifiers is recommended. Two different models of artificial neural networks, the extended backpropagation and the radial basis function, are discussed and applied with appropriate simulations to a real-world application in a chemical manufacturing plant.
---
paper_title: Test and diagnosis of analog circuits: When fuzziness can lead to accuracy
paper_content:
Testing and diagnosing analog circuits is a very challenging problem. The inaccuracy of measurement and the infinite domain of possible values are the principal difficulties. AI approaches were the base of many systems which tried to overcome these problems. The first part of this paper is a state of the art of this research area. We present two fundamental approaches, model-based reasoning and qualitative reasoning, and the systems implementing them; a discussion and an evaluation of these systems are given. In the second part, we present and propose a novel approach based on fuzzy logic in order to go further in dealing with analog circuits testing and diagnosis. Tolerance is treated by means of fuzzy intervals which are more general, more efficient and of higher fidelity to represent the imprecision in its different forms than other approaches. Fuzzy intervals are also able to be semi-qualitative which is more suitable to the simulation of analog systems. We use this idea to develop a best test point finding strategy based on fuzzy probabilities and fuzzy decision-making methodology. Finally, a complete expert system which implements this approach is presented.
---
paper_title: Research Perspectives and Case Studies in System Test and Diagnosis
paper_content:
Preface. 1. Diagnostic Inaccuracies: Approaches to Mitigate W.R. Simpson. 2. Pass/Fail Limits - The Key to Effective Diagnostic Tests H. Dill. 3. Fault Hypothesis Computations Using Fuzzy Logic T.M. Bearse, M.L. Lynch. 4. Deriving a Diagnostic Inference Model from a Test Strategy T.M. Bearse. 5. Inducing Diagnostic Inference Models from Case Data J.W. Sheppard. 6. Accurate Diagnosis Through Conflict Management J.W. Sheppard, W.R. Simpson. 7. System Level Test Process Characterization and Improvement D. Farren, et al. 8. A Standard for Test and Diagnosis J. Taylor. 9. Advanced Onboard Diagnostic System for Vehicle Management K. Keller, et al. 10. Combining Model-Based and Case-Based Expert Systems M. Ben-Bassat, et al. 11. Enhanced Sequential Diagnosis A. Biasizzo, et al. Subject Index.
---
paper_title: FLAMES: A Fuzzy Logic ATMS and Model-based Expert System for Analog Diagnosis
paper_content:
Diagnosing analog circuits with their numerous known difficulties is a very hard problem. Digital approaches have proven to be inappropriate, and AI-based ones suffer from many problems. In this paper we present a new system, FLAMES, which uses fuzzy logic, model-based reasoning, ATMS extension, and the human expertise in an appropriate combination to go far in the treatment of this problem.
---
paper_title: Case-based diagnostic system using fuzzy neural network
paper_content:
In this paper, we describe a case-based system using fuzzy logic type neural networks for diagnosing electronic systems. We present a brief derivation of OR and AND neurons and the architecture of our system. To illustrate the effectiveness of the proposed system, we show experimental results on real data from call logs collected at the technical support centre in Ericsson Australia.
---
paper_title: INSIDE: a connectionist case-based diagnostic expert system that learns incrementally
paper_content:
The authors describe a connectionist case-based diagnostic expert system that can learn while the system is being used. The system, called INSIDE (Inertial Navigation System Interactive Diagnostic Expert), was developed for Singapore Airlines to assist the technicians in diagnosing the inertial navigation system used by the airplanes. The system learns from past repair cases and adapts its knowledge base to newly solved cases without having to relearn all the old cases. >
---
paper_title: A Model-Based Diagnosis System for Identifying Faulty Components in Digital Circuits
paper_content:
We describe the ideas and implementation of a model-based diagnosis system for digital circuits. Our work is based on Reiter's theory of diagnosis from first principles [14], incorporated with Hou's theory of measurements [17], to derive possible diagnoses in a fault diagnosis task. To determine the best order in which measurements are to be taken, a measurement selection strategy using the genetic algorithm (MSSGA) is proposed. A circuit description language for describing circuits hierarchically is given. An efficient propositional logic prover used for consistency checking based on the trie structure is developed [22]. An example run is given to illustrate the working of the system. Finally, a comparison with other systems is discussed, and possible extensions to our system are described.
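The core of Reiter-style diagnosis used by this system can be illustrated by computing minimal hitting sets of the conflict sets returned by consistency checking. A brute-force sketch suitable only for small examples is shown below, with invented component names; the paper's measurement selection and hierarchical circuit description are not reproduced.
```python
# Reiter-style candidate generation sketch: given conflict sets (sets of
# components that cannot all be healthy), the candidate diagnoses are the
# minimal hitting sets. Brute force, for small examples only.
from itertools import combinations

def minimal_hitting_sets(conflicts):
    components = sorted(set().union(*conflicts))
    hits = []
    for k in range(1, len(components) + 1):
        for cand in combinations(components, k):
            s = set(cand)
            # keep s if it intersects every conflict and no smaller hit is contained in it
            if all(s & c for c in conflicts) and not any(h <= s for h in hits):
                hits.append(s)
    return hits

# Hypothetical conflicts obtained from probing a small circuit.
conflicts = [{"A1", "A2", "M1"}, {"A1", "M2"}]
print(minimal_hitting_sets(conflicts))   # -> [{'A1'}, {'A2', 'M2'}, {'M1', 'M2'}]
```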
---
paper_title: Circuit diagnosis support system for electronics assembly operations
paper_content:
Abstract Diagnosis and repair operations are often major bottlenecks in electronics circuit assembly operations. Increasing board density and circuit complexity have made fault diagnosis difficult. But, with shrinking product life cycles and increasing competition, quick diagnosis and feedback is critical for cost control, process improvement, and timely product introduction. This paper describes a case-based diagnosis support system to improve the effectiveness and efficiency of circuit diagnosis in electronics assembly facilities. The system stores individual diagnostic instances rather than general rules and algorithmic procedures, and prioritizes the tests during the sequential testing process. Its knowledge base grows as new faults are detected and diagnosed by the analyzers. The system provides distributed access to multiple users, and incorporates on-line updating features that make it quick to adapt to changing circumstances. Because it is easy to install and update, this method is well-suited for real manufacturing applications. We have implemented a prototype version, and tested the approach in an actual electronics assembly environment. We describe the system's underlying principles, discuss methods to improve diagnostic effectiveness through principled test selection and sequencing, and discuss managerial implications for successful implementation.
---
paper_title: Knowledge-based systems for instrumentation diagnosis, system configuration and circuit and system design
paper_content:
Abstract A number of knowledge-based systems in the electronic engineering field have been developed in the past decade. These include those that use knowledge-based techniques to diagnose instrumentation, determine system configurations and to aid circuit and system design. This paper reviews the literature on knowledge-based systems for electronic engineering applications in these areas and reports progress made in the development of a basis for realising electronic systems.
---
paper_title: Pimtool, an expert system to troubleshoot computer hardware failures
paper_content:
This paper describes a tool to diagnose the cause of failure of an HP computer server. We do this by analyzing the dump of Processor Internal Memory (PIM) and maximizing the leverage of expert learning from one hardware failure situation to another. The tool is a rule-based expert system, with some nested rules which translate to decision trees. The rules were implemented using a metalanguage which was customized for the hardware failure analysis problem domain. Pimtool has been deployed to 25 users as of December 1996. We plan to expand usage to over 400 users by the end of 1997. Using Pimtool, we expect to save over 15 minutes in Mean-Time-to-Repair (MTTR) per call. We have recognized that knowledge management will be a key issue in the future and are developing tools and strategies to address it.
---
paper_title: AN INTELLIGENT APPROACH TO AUTOMATIC TEST EQUIPMENT
paper_content:
In diagnosing a failed system, a smart technician would choose tests to be performed based on the context of the situation. Currently, test program sets do not fault-isolate within the context of a situation. Instead, testing follows a rigid, predetermined, fault-isolation sequence that is based on an embedded fault tree. Current test programs do not tolerate instrument failure and cannot redirect testing by incorporating new information. However, there is a new approach to automatic testing that emulates the best features of a trained technician yet, unlike the development of rule-based expert systems, does not require a trained technician to build the knowledge base. This new approach is model-based and has evolved over the last 10 years. This evolution has led to the development of several maintenance tools and an architecture for intelligent automatic test equipment (ATE). The architecture has been implemented for testing two cards from an AV-8B power supply.
---
paper_title: An incremental retrieval mechanism for case-based electronic fault diagnosis
paper_content:
One problem with using CBR for diagnosis is that a full case description may not be available at the beginning of the diagnosis. The standard CBR methodology requires a detailed case description in order to perform case retrieval and this is often not practical in diagnosis. We describe two fault diagnosis tasks where many features may make up a case description but only a few features are required in an individual diagnosis. We evaluate an incremental CBR mechanism that can initiate case retrieval with a skeletal case description and will elicit extra discriminating information during the diagnostic process.
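A minimal Python sketch of the incremental idea, under assumed data, is given below: retrieval starts from a skeletal (possibly empty) description, the case base is filtered on whatever features are known so far, and the next question to ask is an unknown feature that the remaining cases record with more than one value. The case base, feature names, and the simple scoring rule are hypothetical; the paper's mechanism is more sophisticated.

    # Hypothetical case base: each case is a partial feature dictionary plus a diagnosis.
    case_base = [
        ({"no_dial_tone": True, "led_status": "off"}, "power_supply"),
        ({"no_dial_tone": True, "led_status": "on", "line_noise": True}, "line_card"),
        ({"no_dial_tone": False, "line_noise": True}, "cabling"),
    ]

    def matching_cases(known, case_base):
        # Keep cases that agree with the known features; missing features do not exclude a case.
        return [(f, d) for f, d in case_base
                if all(f.get(k) == v for k, v in known.items() if k in f)]

    def next_question(known, case_base):
        """Suggest the feature most worth eliciting next: one that is unknown,
        recorded in the remaining cases, and takes more than one value there."""
        remaining = matching_cases(known, case_base)
        values_seen = {}
        for features, _ in remaining:
            for name, value in features.items():
                if name not in known:
                    values_seen.setdefault(name, set()).add(value)
        informative = {n: vals for n, vals in values_seen.items() if len(vals) > 1}
        return max(informative, key=lambda n: len(informative[n]), default=None)

    known = {}
    print(next_question(known, case_base))               # e.g. asks about 'no_dial_tone'
    known["no_dial_tone"] = True
    print([d for _, d in matching_cases(known, case_base)])  # diagnoses narrowed by the answer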
---
paper_title: A CBR application: service productivity improvement by sharing experience
paper_content:
Field service is now recognized as one of the most important corporate activities in order to improve customer satisfaction and to compete successfully in world-wide competition. Sharing repair experience with state-of-the-art computer technology is a key issue for improving the productivity of field service. We have developed a diagnostic expert system, named Doctor, which employs case-based reasoning (CBR) and lists the ten most necessary service parts from a product type and some symptoms acquired from a service-request call. In this paper, we describe the Doctor system and explain how accurate and reliable product-type case-bases are generated and updated from the troubleshooting experience and the generic case base, i.e., general diagnostic knowledge. We also demonstrate the effectiveness of our system with experimental results using real repair cases.
---
paper_title: Circuit diagnosis support system for electronics assembly operations
paper_content:
Abstract Diagnosis and repair operations are often major bottlenecks in electronics circuit assembly operations. Increasing board density and circuit complexity have made fault diagnosis difficult. But, with shrinking product life cycles and increasing competition, quick diagnosis and feedback is critical for cost control, process improvement, and timely product introduction. This paper describes a case-based diagnosis support system to improve the effectiveness and efficiency of circuit diagnosis in electronics assembly facilities. The system stores individual diagnostic instances rather than general rules and algorithmic procedures, and prioritizes the tests during the sequential testing process. Its knowledge base grows as new faults are detected and diagnosed by the analyzers. The system provides distributed access to multiple users, and incorporates on-line updating features that make it quick to adapt to changing circumstances. Because it is easy to install and update, this method is well-suited for real manufacturing applications. We have implemented a prototype version, and tested the approach in an actual electronics assembly environment. We describe the system's underlying principles, discuss methods to improve diagnostic effectiveness through principled test selection and sequencing, and discuss managerial implications for successful implementation.
---
paper_title: Test and diagnosis of analog circuits: When fuzziness can lead to accuracy
paper_content:
Testing and diagnosing analog circuits is a very challenging problem. The inaccuracy of measurement and the infinite domain of possible values are the principal difficulties. AI approaches have been the basis of many systems that tried to overcome these problems. The first part of this paper is a state-of-the-art review of this research area. We present two fundamental approaches, model-based reasoning and qualitative reasoning, and the systems implementing them; a discussion and an evaluation of these systems are given. In the second part, we present and propose a novel approach based on fuzzy logic in order to go further in dealing with analog circuit testing and diagnosis. Tolerance is treated by means of fuzzy intervals, which are more general, more efficient, and represent imprecision in its different forms with higher fidelity than other approaches. Fuzzy intervals can also be semi-qualitative, which is better suited to the simulation of analog systems. We use this idea to develop a best-test-point-finding strategy based on fuzzy probabilities and a fuzzy decision-making methodology. Finally, a complete expert system which implements this approach is presented.
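As a small illustration of the fuzzy-interval idea (not the paper's actual formulation), the Python sketch below grades a measured component value against a triangular fuzzy tolerance band instead of a crisp pass/fail threshold; the nominal value and tolerance are invented.

    def triangular_membership(x, lo, peak, hi):
        """Degree to which x belongs to a triangular fuzzy interval (lo, peak, hi)."""
        if x <= lo or x >= hi:
            return 0.0
        if x <= peak:
            return (x - lo) / (peak - lo)
        return (hi - x) / (hi - peak)

    # Hypothetical resistor: nominal 1000 ohms with a soft +/-10% tolerance band.
    nominal, tol = 1000.0, 0.10
    ok = lambda measured: triangular_membership(measured, nominal * (1 - tol),
                                                nominal, nominal * (1 + tol))

    for measured in (1000.0, 1040.0, 1120.0):
        print(measured, round(ok(measured), 2))  # degrees of "within tolerance": 1.0, 0.6, 0.0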
---
paper_title: Circuit diagnosis support system for electronics assembly operations
paper_content:
Abstract Diagnosis and repair operations are often major bottlenecks in electronics circuit assembly operations. Increasing board density and circuit complexity have made fault diagnosis difficult. But, with shrinking product life cycles and increasing competition, quick diagnosis and feedback is critical for cost control, process improvement, and timely product introduction. This paper describes a case-based diagnosis support system to improve the effectiveness and efficiency of circuit diagnosis in electronics assembly facilities. The system stores individual diagnostic instances rather than general rules and algorithmic procedures, and prioritizes the tests during the sequential testing process. Its knowledge base grows as new faults are detected and diagnosed by the analyzers. The system provides distributed access to multiple users, and incorporates on-line updating features that make it quick to adapt to changing circumstances. Because it is easy to install and update, this method is well-suited for real manufacturing applications. We have implemented a prototype version, and tested the approach in an actual electronics assembly environment. We describe the system's underlying principles, discuss methods to improve diagnostic effectiveness through principled test selection and sequencing, and discuss managerial implications for successful implementation.
---
paper_title: Research Perspectives and Case Studies in System Test and Diagnosis
paper_content:
Preface. 1. Diagnostic Inaccuracies: Approaches to Mitigate W.R. Simpson. 2.Pass/Fail Limits - The Key to Effective Diagnostic Tests H. Dill. 3. Fault Hypothesis Computations Using Fuzzy Logic T.M. Bearse, M.L. Lynch. 4. Deriving a Diagnostic Inference Model from a Test Strategy T.M. Bearse. 5. Inducing Diagnostic Inference Models from Case Data J.W. Sheppard. 6. Accurate Diagnosis Through Conflict Management J.W. Sheppard, W.R. Simpson. 7. System Level Test Process Characterization and Improvement D. Farren, et al. 8. A Standard for Test and Diagnosis J. Taylor. 9. Advanced Onboard Diagnostic System for Vehicle Management K. Keller, et al. 12. Combining Model-Based and Case-Based Expert Systems M. Ben-Bassat, et al. 11. Enhanced Sequential Diagnosis A. Biasizzo, et al. Subject Index.
---
paper_title: FLAMES: A Fuzzy Logic ATMS and Model-based Expert System for Analog Diagnosis
paper_content:
Diagnosing analog circuits, with their numerous known difficulties, is a very hard problem. Digital approaches have proven to be inappropriate, and AI-based ones suffer from many problems. In this paper we present a new system, FLAMES, which combines fuzzy logic, model-based reasoning, an ATMS extension, and human expertise to go further in treating this problem.
---
paper_title: Model-Based Diagnosis under Real-World Constraints
paper_content:
I report on my experience over the past few years in introducing automated, model-based diagnostic technologies into industrial settings. In particular, I discuss the competition that this technology has been receiving from handcrafted, rule-based diagnostic systems that has set some high standards that must be met by model-based systems before they can be viewed as viable alternatives. The battle between model-based and rule-based approaches to diagnosis has been over in the academic literature for many years, but the situation is different in industry where rule-based systems are dominant and appear to be attractive given the considerations of efficiency, embeddability, and cost effectiveness. My goal in this article is to provide a perspective on this competition and discuss a diagnostic tool, called DTOOL/CNETS, that I have been developing over the years as I tried to address the major challenges posed by rule-based systems. In particular, I discuss three major features of the developed tool that were either adopted, designed, or innovated to address these challenges: (1) its compositional modeling approach, (2) its structure-based computational approach, and (3) its ability to synthesize embeddable diagnostic systems for a variety of software and hardware platforms.
---
paper_title: The use of design descriptions in automated diagnosis
paper_content:
Abstract This paper describes a device-independent diagnostic program called dart. dart differs from previous approaches to diagnosis taken in the Artificial Intelligence community in that it works directly from design descriptions rather than mycin -like symptom-fault rules. dart differs from previous approaches to diagnosis taken in the design-automation community in that it is more general and in many cases more efficient. dart uses a device-independent language for describing devices and a device-independent inference procedure for diagnosis. The resulting generality allows it to be applied to a wide class of devices ranging from digital logic to nuclear reactors. Although this generality engenders some computational overhead on small problems, it facilitates the use of multiple design descriptions and thereby makes possible combinatoric savings that more than offsets this overhead on problems of realistic size.
---
paper_title: What Are Ontologies, and Why Do We Need Them?
paper_content:
This survey provides a conceptual introduction to ontologies and their role in information systems and AI. The authors also discuss how ontologies clarify the domain's structure of knowledge and enable knowledge sharing.
---
paper_title: Circuit diagnosis support system for electronics assembly operations
paper_content:
Abstract Diagnosis and repair operations are often major bottlenecks in electronics circuit assembly operations. Increasing board density and circuit complexity have made fault diagnosis difficult. But, with shrinking product life cycles and increasing competition, quick diagnosis and feedback is critical for cost control, process improvement, and timely product introduction. This paper describes a case-based diagnosis support system to improve the effectiveness and efficiency of circuit diagnosis in electronics assembly facilities. The system stores individual diagnostic instances rather than general rules and algorithmic procedures, and prioritizes the tests during the sequential testing process. Its knowledge base grows as new faults are detected and diagnosed by the analyzers. The system provides distributed access to multiple users, and incorporates on-line updating features that make it quick to adapt to changing circumstances. Because it is easy to install and update, this method is well-suited for real manufacturing applications. We have implemented a prototype version, and tested the approach in an actual electronics assembly environment. We describe the system's underlying principles, discuss methods to improve diagnostic effectiveness through principled test selection and sequencing, and discuss managerial implications for successful implementation.
---
|
Title: Fault diagnosis of electronic systems using intelligent techniques: a review
Section 1: Introduction
Description 1: Introduce the increasing complexity of electronic systems and the need for intelligent diagnostic techniques.
Section 2: Motivation
Description 2: Discuss the motivation behind the development of intelligent fault diagnosis techniques and the challenges faced in their industrial acceptance.
Section 3: Diagnostic Process
Description 3: Describe the steps involved in the fault diagnosis process, including fault information generation, fault hypotheses generation, and fault hypothesis discrimination.
Section 4: Traditional Approaches
Description 4: Review traditional fault diagnosis approaches such as rule-based systems and fault trees, including their applications and issues.
Section 5: Model-Based Approaches
Description 5: Discuss the various model-based approaches for fault diagnosis, including fault models, causal models, structural models, and diagnostic inference models, along with their applications and challenges.
Section 6: Machine Learning Approaches
Description 6: Explore machine learning techniques used in fault diagnosis, such as case-based reasoning, explanation-based learning, and learning knowledge from data, including their applications and issues.
Section 7: Intelligent Techniques
Description 7: Present other intelligent techniques such as fuzzy logic and artificial neural networks, covering their approaches, applications, and issues.
Section 8: Hybrid Approaches
Description 8: Discuss the combination of different techniques (e.g., model-based reasoning, case-based reasoning, fuzzy logic, neural networks) to improve fault diagnosis.
Section 9: Diagnostic Standards
Description 9: Outline the importance of IEEE standards related to AI in test and diagnostic environments, specifically AI-ESTATE.
Section 10: Commentary
Description 10: Provide a commentary on the different classes of knowledge applied to diagnosis, their advantages, disadvantages, and suitability for various application domains.
Section 11: Future Directions
Description 11: Discuss future research directions and challenges in the field of intelligent fault diagnosis of electronic systems.
Section 12: Summary
Description 12: Summarize the key points, emphasizing the need for automated diagnostic tools and the potential future developments in intelligent fault diagnosis.
|
Data quality: A survey of data quality dimensions
| 7 |
---
paper_title: Methodologies for data quality assessment and improvement
paper_content:
The literature provides a wide range of techniques to assess and improve the quality of data. Due to the diversity and complexity of these techniques, research has recently focused on defining methodologies that help the selection, customization, and application of data quality assessment and improvement techniques. The goal of this article is to provide a systematic and comparative description of such methodologies. Methodologies are compared along several dimensions, including the methodological phases and steps, the strategies and techniques, the data quality dimensions, the types of data, and, finally, the types of information systems addressed by each methodology. The article concludes with a summary description of each methodology.
---
paper_title: Factors influencing organizations to improve data quality in their information systems
paper_content:
Although managers consider accurate, timely and relevant information as critical to the quality of their decisions, evidence of large variations in data quality abounds. This research examines factors influencing the level of data quality within a target organization. The results indicate that management's commitment to data quality and the presence of data quality champions strongly influence data quality in the target organization. The results also show that the managers of the participating organization are committed to achieving and maintaining high data quality. However, changing work processes and establishing a data quality awareness culture are required to motivate further improvements to data quality.
---
paper_title: Methodologies for data quality assessment and improvement
paper_content:
The literature provides a wide range of techniques to assess and improve the quality of data. Due to the diversity and complexity of these techniques, research has recently focused on defining methodologies that help the selection, customization, and application of data quality assessment and improvement techniques. The goal of this article is to provide a systematic and comparative description of such methodologies. Methodologies are compared along several dimensions, including the methodological phases and steps, the strategies and techniques, the data quality dimensions, the types of data, and, finally, the types of information systems addressed by each methodology. The article concludes with a summary description of each methodology.
---
paper_title: Methodologies for data quality assessment and improvement
paper_content:
The literature provides a wide range of techniques to assess and improve the quality of data. Due to the diversity and complexity of these techniques, research has recently focused on defining methodologies that help the selection, customization, and application of data quality assessment and improvement techniques. The goal of this article is to provide a systematic and comparative description of such methodologies. Methodologies are compared along several dimensions, including the methodological phases and steps, the strategies and techniques, the data quality dimensions, the types of data, and, finally, the types of information systems addressed by each methodology. The article concludes with a summary description of each methodology.
---
paper_title: Methodologies for data quality assessment and improvement
paper_content:
The literature provides a wide range of techniques to assess and improve the quality of data. Due to the diversity and complexity of these techniques, research has recently focused on defining methodologies that help the selection, customization, and application of data quality assessment and improvement techniques. The goal of this article is to provide a systematic and comparative description of such methodologies. Methodologies are compared along several dimensions, including the methodological phases and steps, the strategies and techniques, the data quality dimensions, the types of data, and, finally, the types of information systems addressed by each methodology. The article concludes with a summary description of each methodology.
---
paper_title: Methodologies for data quality assessment and improvement
paper_content:
The literature provides a wide range of techniques to assess and improve the quality of data. Due to the diversity and complexity of these techniques, research has recently focused on defining methodologies that help the selection, customization, and application of data quality assessment and improvement techniques. The goal of this article is to provide a systematic and comparative description of such methodologies. Methodologies are compared along several dimensions, including the methodological phases and steps, the strategies and techniques, the data quality dimensions, the types of data, and, finally, the types of information systems addressed by each methodology. The article concludes with a summary description of each methodology.
---
paper_title: Data Quality: Concepts, Methodologies and Techniques
paper_content:
Poor data quality can seriously hinder or damage the efficiency and effectiveness of organizations and businesses. The growing awareness of such repercussions has led to major public initiatives like the "Data Quality Act" in the USA and the "European 2003/98" directive of the European Parliament. Batini and Scannapieco present a comprehensive and systematic introduction to the wide set of issues related to data quality. They start with a detailed description of different data quality dimensions, like accuracy, completeness, and consistency, and their importance in different types of data, like federated data, web data, or time-dependent data, and in different data categories classified according to frequency of change, like stable, long-term, and frequently changing data. The book's extensive description of techniques and methodologies from core data quality research as well as from related fields like data mining, probability theory, statistical data analysis, and machine learning gives an excellent overview of the current state of the art. The presentation is completed by a short description and critical comparison of tools and practical methodologies, which will help readers to resolve their own quality problems. This book is an ideal combination of the soundness of theoretical foundations and the applicability of practical approaches. It is ideally suited for everyone -- researchers, students, or professionals -- interested in a comprehensive overview of data quality issues. In addition, it will serve as the basis for an introductory course or for self-study on this topic.
---
paper_title: Developing a Framework for Assessing Information Quality on the World Wide Web
paper_content:
Introduction -- The Big Picture. Over the past decade, the Internet -- or World Wide Web (technically, the Internet is a huge collection of networked computers using the TCP/IP protocol to exchange data; the World Wide Web (WWW) is in essence only part of this network of computers, but its visible status has meant that, conceptually at least, it is often used interchangeably with "Internet" to describe the same thing) -- has established itself as the key infrastructure for information administration, exchange, and publication (Alexander & Tate, 1999), and Internet search engines are the most commonly used tool to retrieve that information (Wang, 2001). The deficiency of enforceable standards, however, has resulted in frequent information quality problems (Eppler & Muenzenmayer, 2002). This paper is part of a research project undertaken at Edith Cowan, Wollongong and Sienna Universities to build an Internet-focused crawler that uses "quality" criteria in determining returns to user queries. Such a task requires that the conceptual notions of quality be ultimately quantified into search engine algorithms that interact with Web page technologies, eliminating documents that do not meet specifically determined standards of quality. The focus of this paper, as part of the wider research, is on the concepts of quality in information and information systems, specifically as it pertains to information and information retrieval on the Internet. As with much of the research into Information Quality (IQ) in IS research, the term is interchangeable with Data Quality (DQ). What Is Information Quality? Data and information quality is commonly thought of as a multi-dimensional concept (Klein, 2001) with varying attributed characteristics depending on an author's philosophical viewpoint. Most commonly, the term "data quality" is described as data that is "fit for use" (Wang & Strong, 1996), which implies that it is relative, as data considered appropriate for one use may not possess sufficient attributes for another use (Tayi & Ballou, 1998). IQ as a Series of Dimensions. Table 1 summarizes 12 widely accepted IQ frameworks collated from the last decade of IS research. While varied in their approach and application, the frameworks share a number of characteristics regarding their classifications of the dimensions of quality. An analysis of Table 1 reveals the common elements between the different IQ frameworks. These include such traditional dimensions as accuracy, consistency, timeliness, completeness, accessibility, objectiveness and relevancy. Table 2 provides a summary of the most common dimensions and the frequency with which they are included in the above IQ frameworks. Each dimension also includes a short definition. IQ in the Context of Its Use. In order to accurately define and measure the concept of information quality, it is not enough to identify the common elements of IQ frameworks as individual entities in their own right. In fact, information quality needs to be assessed within the context of its generation (Shanks & Corbitt, 1999) and intended use (Katerattanakul & Siau, 1999). This is because the attributes of data quality can vary depending on the context in which the data is to be used (Shankar & Watts, 2003).
Defining what information quality is within the context of the World Wide Web and its search engines, then, will depend greatly on whether dimensions are being identified for the producers of information, the storage and maintenance systems used for information, or for the searchers and users of information. The currently accepted view of assessing IQ involves understanding it from the user's point of view. Strong and Wang (1997) suggest that the quality of data cannot be assessed independent of the people who use the data. Applying this commonly to the World Wide Web has its own set of problems. …
---
paper_title: Beyond accuracy: what data quality means to data consumers
paper_content:
Poor data quality (DQ) can have substantial social and economic impacts. Although firms are improving data quality with practical approaches and tools, their improvement efforts tend to focus narrowly on accuracy. We believe that data consumers have a much broader data quality conceptualization than IS professionals realize. The purpose of this paper is to develop a framework that captures the aspects of data quality that are important to data consumers.A two-stage survey and a two-phase sorting study were conducted to develop a hierarchical framework for organizing data quality dimensions. This framework captures dimensions of data quality that are important to data consumers. Intrinsic DQ denotes that data have quality in their own right. Contextual DQ highlights the requirement that data quality must be considered within the context of the task at hand. Representational DQ and accessibility DQ emphasize the importance of the role of systems. These findings are consistent with our understanding that high-quality data should be intrinsically good, contextually appropriate for the task, clearly represented, and accessible to the data consumer.Our framework has been used effectively in industry and government. Using this framework, IS managers were able to better understand and meet their data consumers' data quality needs. The salient feature of this research study is that quality attributes of data are collected from data consumers instead of being defined theoretically or based on researchers' experience. Although exploratory, this research provides a basis for future studies that measure data quality along the dimensions of this framework.
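The Python sketch below is one way to make such a framework operational: the four categories are mapped to representative dimensions commonly associated with them, and per-dimension consumer ratings are averaged per category. The dimension subset and the 1-5 ratings are illustrative assumptions, not values from the study.

    # Four DQ categories with representative dimensions, plus made-up 1-5 consumer ratings.
    framework = {
        "intrinsic":        ["accuracy", "objectivity", "believability", "reputation"],
        "contextual":       ["relevancy", "timeliness", "completeness", "appropriate_amount"],
        "representational": ["interpretability", "ease_of_understanding", "consistency"],
        "accessibility":    ["accessibility", "access_security"],
    }

    ratings = {"accuracy": 4, "objectivity": 3, "believability": 4, "reputation": 5,
               "relevancy": 2, "timeliness": 3, "completeness": 2, "appropriate_amount": 3,
               "interpretability": 4, "ease_of_understanding": 4, "consistency": 3,
               "accessibility": 5, "access_security": 4}

    def category_scores(framework, ratings):
        """Average the consumer ratings of the dimensions inside each category."""
        return {cat: sum(ratings[d] for d in dims) / len(dims)
                for cat, dims in framework.items()}

    print(category_scores(framework, ratings))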
---
paper_title: Data Quality: Concepts, Methodologies and Techniques
paper_content:
Poor data quality can seriously hinder or damage the efficiency and effectiveness of organizations and businesses. The growing awareness of such repercussions has led to major public initiatives like the "Data Quality Act" in the USA and the "European 2003/98" directive of the European Parliament. Batini and Scannapieco present a comprehensive and systematic introduction to the wide set of issues related to data quality. They start with a detailed description of different data quality dimensions, like accuracy, completeness, and consistency, and their importance in different types of data, like federated data, web data, or time-dependent data, and in different data categories classified according to frequency of change, like stable, long-term, and frequently changing data. The book's extensive description of techniques and methodologies from core data quality research as well as from related fields like data mining, probability theory, statistical data analysis, and machine learning gives an excellent overview of the current state of the art. The presentation is completed by a short description and critical comparison of tools and practical methodologies, which will help readers to resolve their own quality problems. This book is an ideal combination of the soundness of theoretical foundations and the applicability of practical approaches. It is ideally suited for everyone -- researchers, students, or professionals -- interested in a comprehensive overview of data quality issues. In addition, it will serve as the basis for an introductory course or for self-study on this topic.
---
paper_title: Data quality improvement using fuzzy association rules
paper_content:
The activities and decisions of organizations and companies are based on data and the information obtained from data analysis. Data quality plays a crucial role in data analysis, because incorrect data leads to wrong decisions. Nowadays, improving data quality manually is very difficult and in many cases impossible: data quality is a complex, non-structured concept, the data refinement process cannot be carried out without the help of professional domain experts, and the detection and correction of errors require thorough knowledge of the data's domain. Thus, (semi-)automatic methods are needed to find and resolve data defects and errors. Because data mining methods are designed to discover interesting patterns in datasets, they can be used efficiently to improve different dimensions of data quality. In this paper, a new method is presented to measure the accuracy dimension of data quality using fuzzy association rules. Finally, experimental results show the effectiveness of the proposed method in finding incorrect values in datasets.
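The Python sketch below shows the crisp skeleton of this idea: mine simple single-antecedent association rules with support and confidence thresholds, then flag values that contradict a strong rule as candidate accuracy problems. The records, thresholds, and the restriction to crisp (non-fuzzy) rules are simplifying assumptions made here for brevity; the paper itself works with fuzzy association rules.

    from collections import defaultdict
    from itertools import permutations

    # Hypothetical customer records; 'NL'/'Amsterdam' pairs dominate, one record deviates.
    records = [
        {"country": "NL", "city": "Amsterdam"}, {"country": "NL", "city": "Amsterdam"},
        {"country": "NL", "city": "Amsterdam"}, {"country": "NL", "city": "Berlin"},
        {"country": "DE", "city": "Berlin"},    {"country": "DE", "city": "Berlin"},
    ]

    def mine_rules(records, min_support=0.3, min_confidence=0.7):
        """Single-antecedent rules (attr_a=x -> attr_b=y) above the given thresholds."""
        n = len(records)
        rules = []
        for a, b in permutations(records[0].keys(), 2):
            pair_counts, ante_counts = defaultdict(int), defaultdict(int)
            for r in records:
                ante_counts[r[a]] += 1
                pair_counts[(r[a], r[b])] += 1
            for (x, y), cnt in pair_counts.items():
                support, confidence = cnt / n, cnt / ante_counts[x]
                if support >= min_support and confidence >= min_confidence:
                    rules.append((a, x, b, y, confidence))
        return rules

    def suspect_values(records, rules):
        """Flag values that contradict a strong rule as candidate accuracy problems."""
        flags = []
        for i, r in enumerate(records):
            for a, x, b, y, conf in rules:
                if r[a] == x and r[b] != y:
                    flags.append((i, b, r[b], f"expected {y} (conf {conf:.2f})"))
        return flags

    print(suspect_values(records, mine_rules(records)))  # flags the deviating 'city' value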
---
paper_title: Dimensions of Business Processes Quality (QoBP)
paper_content:
Conceptual modeling is an important tool for understanding and revealing weaknesses of business processes. Yet, the current practice in reengineering projects often considers simply the as-is process model as a brain-storming tool. This approach heavily relies on the intuition of the participants and misses a clear description of the quality requirements. Against this background, we identify four generic quality categories of business process quality, and populate them with quality requirements from related research. We refer to the resulting framework as the Quality of Business Process (QoBP) framework. Furthermore, we present the findings from applying the QoBP framework in a case study with a major Australian bank, showing that it helps to systematically fill the white space between as-is and to-be process modeling.
---
paper_title: Executing Data Quality Projects: Ten Steps to Quality Data and Trusted Information (Tm)
paper_content:
Information is currency. Recent studies show that data quality problems are costing businesses billions of dollars each year, with poor data linked to waste and inefficiency, damaged credibility among customers and suppliers, and an organizational inability to make sound decisions. In this important and timely new book, Danette McGilvray presents her "Ten Steps" approach to information quality, a proven method for both understanding and creating information quality in the enterprise. Her trademarked approach, in which she has trained Fortune 500 clients and hundreds of workshop attendees, applies to all types of data and to all types of organizations. The book includes numerous templates, detailed examples, and practical advice for executing every step of the "Ten Steps" approach; allows for quick reference with an easy-to-use format highlighting key concepts and definitions, important checkpoints, communication activities, and best practices; and is accompanied by a companion Web site with links to numerous data quality resources, including many of the planning and information-gathering templates featured in the text, quick summaries of key ideas from the Ten Steps methodology, and other tools and information available online. Table of contents: Introduction (The Reason for This Book; Intended Audiences; Structure of This Book; How to Use This Book; Acknowledgements); Chapter 1, Overview (Impact of Information and Data Quality; About the Methodology; Approaches to Data Quality in Projects; Engaging Management); Chapter 2, Key Concepts (Framework for Information Quality (FIQ); Information Life Cycle; Data Quality Dimensions; Business Impact Techniques; Data Categories; Data Specifications; Data Governance and Stewardship; The Information and Data Quality Improvement Cycle; The Ten Steps Process; Best Practices and Guidelines); Chapter 3, The Ten Steps (1. Define Business Need and Approach; 2. Analyze Information Environment; 3. Assess Data Quality; 4. Assess Business Impact; 5. Identify Root Causes; 6. Develop Improvement Plans; 7. Prevent Future Data Errors; 8. Correct Current Data Errors; 9. Implement Controls; 10. Communicate Actions and Results); Chapter 4, Structuring Your Project (Projects and The Ten Steps; Data Quality Project Roles; Project Timing); Chapter 5, Other Techniques and Tools (Information Life Cycle Approaches; Capture Data; Analyze and Document Results; Metrics; Data Quality Tools; The Ten Steps and Six Sigma); Chapter 6, A Few Final Words; Appendix, Quick References (Framework for Information Quality; POSMAD Interaction Matrix Detail; POSMAD Phases and Activities; Data Quality Dimensions; Business Impact Techniques; The Ten Steps Overview; Definitions of Data Categories).
---
paper_title: A Noval Data Quality Controlling and Assessing Model Based on Rules
paper_content:
As a resource, data is the basis for information construction and application. Following the principle of "garbage in, garbage out", data must be reliable, error-free, and an accurate reflection of the real situation in order to support the right decisions. However, for various reasons, existing business systems contain poor-quality, dirty data, and dirty data is an important factor undermining correct decisions. To address this, the paper builds a metadata-based data quality rule base that improves on the traditional quality control model, proposes a more practically applicable weighted assessment algorithm, and constructs a three-tier data quality assessment system model based on a study of quality definitions and classification, assessment algorithms, metadata, and control theory. The model has been confirmed to achieve comprehensive data quality management and control in practical oilfield applications.
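A minimal Python sketch of a rule-based, weighted assessment in this spirit is given below: each rule in a small rule base checks one field, rules carry weights, and per-rule pass rates are combined into one weighted score. The oilfield-style fields, the rules, and the weights are hypothetical and are not taken from the paper.

    import re

    # Hypothetical rule base: each rule checks one record field and carries a weight.
    rules = [
        {"name": "well_id_present", "weight": 0.4,
         "check": lambda r: bool(r.get("well_id"))},
        {"name": "depth_in_range", "weight": 0.35,
         "check": lambda r: r.get("depth_m") is not None and 0 < r["depth_m"] < 9000},
        {"name": "date_format", "weight": 0.25,
         "check": lambda r: bool(re.fullmatch(r"\d{4}-\d{2}-\d{2}", r.get("spud_date", "")))},
    ]

    def weighted_quality(records, rules):
        """Per-rule pass rates combined into one weighted quality score in [0, 1]."""
        per_rule = {}
        for rule in rules:
            passed = sum(1 for r in records if rule["check"](r))
            per_rule[rule["name"]] = passed / len(records)
        total_weight = sum(rule["weight"] for rule in rules)
        overall = sum(rule["weight"] * per_rule[rule["name"]] for rule in rules) / total_weight
        return per_rule, overall

    records = [
        {"well_id": "W-17", "depth_m": 2450.0, "spud_date": "2012-05-03"},
        {"well_id": "",     "depth_m": -10.0,  "spud_date": "03/05/2012"},
    ]
    print(weighted_quality(records, rules))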
---
paper_title: Analysis of data quality and information quality problems in digital manufacturing
paper_content:
This work focuses on the increasing importance of data quality in organizations, especially in digital manufacturing companies. The paper first reviews related work in the field of data quality, including the definition, dimensions, measurement and assessment, and improvement of data quality. Then, taking digital manufacturing as the research object, it analyzes the different information roles, the information manufacturing processes, the influential factors of information quality, and the transformation levels and paths of data/information quality in digital manufacturing companies. Finally, an approach for the diagnosis, control, and improvement of data/information quality in digital manufacturing companies is proposed as the basis for further work.
---
paper_title: A Conceptual Framework and Belief Function Approach to Assessing Overall Information Quality
paper_content:
We develop an information quality model based on a user-centric view adapted from the Financial Accounting Standards Board [1], Wang et al. [2], and Wang and Strong [3]. The model consists of four essential attributes (or assertions): ‘Accessibility,’ ‘Interpretability,’ ‘Relevance,’ and ‘Integrity.’ Four sub-attributes lead to an evaluation of Integrity: ‘Accuracy,’ ‘Completeness,’ ‘Consistency,’ and ‘Existence.’ These sub-attributes relating to ‘Integrity’ are intrinsic in nature and relate to the process of how the information was created, while the first three attributes, ‘Accessibility,’ ‘Interpretability,’ and ‘Relevance,’ are extrinsic in nature. We present our model as an evidential network under the belief-function framework to permit user assessment of quality parameters. Two algorithms for combining assessments into an overall IQ measure are explored, and examples in the domain of medical information are used to illustrate key concepts. We discuss two scenarios, ‘online-user’ and ‘assurance-provider,’ which reflect two likely and important aspects of IQ evaluation currently facing information users: concerns about the impact of poor-quality online information, and the need for information quality assurance.
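To illustrate the belief-function machinery such a model relies on, the Python sketch below combines two assumed pieces of evidence about a single attribute ('Integrity') with Dempster's rule and reads off belief and plausibility for the 'good' state. The mass assignments and the two-element frame are invented for illustration and are not the paper's evidential network or either of its combination algorithms.

    from itertools import product

    def dempster_combine(m1, m2):
        """Dempster's rule of combination for mass functions keyed by frozenset focal elements."""
        combined, conflict = {}, 0.0
        for (a, ma), (b, mb) in product(m1.items(), m2.items()):
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb
        return {k: v / (1.0 - conflict) for k, v in combined.items()}

    GOOD, BAD = frozenset({"good"}), frozenset({"bad"})
    THETA = GOOD | BAD  # total ignorance

    # Hypothetical evidence about 'Integrity' from two sub-attribute checks:
    # an accuracy check that mostly supports 'good', and a weaker completeness check.
    m_accuracy     = {GOOD: 0.7, BAD: 0.1, THETA: 0.2}
    m_completeness = {GOOD: 0.5, THETA: 0.5}

    m_integrity = dempster_combine(m_accuracy, m_completeness)
    belief_good = m_integrity.get(GOOD, 0.0)                       # mass committed exactly to 'good'
    plaus_good = sum(v for k, v in m_integrity.items() if k & GOOD)  # mass not contradicting 'good'
    print(round(belief_good, 3), round(plaus_good, 3))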
---
paper_title: Methodologies for data quality assessment and improvement
paper_content:
The literature provides a wide range of techniques to assess and improve the quality of data. Due to the diversity and complexity of these techniques, research has recently focused on defining methodologies that help the selection, customization, and application of data quality assessment and improvement techniques. The goal of this article is to provide a systematic and comparative description of such methodologies. Methodologies are compared along several dimensions, including the methodological phases and steps, the strategies and techniques, the data quality dimensions, the types of data, and, finally, the types of information systems addressed by each methodology. The article concludes with a summary description of each methodology.
---
paper_title: Fundamentals of Data Warehouses
paper_content:
From the Publisher: Data warehouses have captured the attention of practitioners and researchers alike. But the design and optimization of data warehouses remains an art rather than a science. This book presents a comparative review of the state of the art and best current practice of data warehouses. It covers source and data integration, multidimensional aggregation, query optimization, update propagation, metadata management, quality assessment, and design optimization. Also, based on results of the European Data Warehouse Quality project, it offers a conceptual framework by which the architecture and quality of data warehouse efforts can be assessed and improved using enriched metadata management combined with advanced techniques from databases, business modeling, and artificial intelligence. For researchers and database professionals in academia and industry, the book offers an excellent introduction to the issues of quality and metadata usage in the context of data warehouses.
---
paper_title: Beyond accuracy: what data quality means to data consumers
paper_content:
Poor data quality (DQ) can have substantial social and economic impacts. Although firms are improving data quality with practical approaches and tools, their improvement efforts tend to focus narrowly on accuracy. We believe that data consumers have a much broader data quality conceptualization than IS professionals realize. The purpose of this paper is to develop a framework that captures the aspects of data quality that are important to data consumers.A two-stage survey and a two-phase sorting study were conducted to develop a hierarchical framework for organizing data quality dimensions. This framework captures dimensions of data quality that are important to data consumers. Intrinsic DQ denotes that data have quality in their own right. Contextual DQ highlights the requirement that data quality must be considered within the context of the task at hand. Representational DQ and accessibility DQ emphasize the importance of the role of systems. These findings are consistent with our understanding that high-quality data should be intrinsically good, contextually appropriate for the task, clearly represented, and accessible to the data consumer.Our framework has been used effectively in industry and government. Using this framework, IS managers were able to better understand and meet their data consumers' data quality needs. The salient feature of this research study is that quality attributes of data are collected from data consumers instead of being defined theoretically or based on researchers' experience. Although exploratory, this research provides a basis for future studies that measure data quality along the dimensions of this framework.
---
paper_title: Methodologies for data quality assessment and improvement
paper_content:
The literature provides a wide range of techniques to assess and improve the quality of data. Due to the diversity and complexity of these techniques, research has recently focused on defining methodologies that help the selection, customization, and application of data quality assessment and improvement techniques. The goal of this article is to provide a systematic and comparative description of such methodologies. Methodologies are compared along several dimensions, including the methodological phases and steps, the strategies and techniques, the data quality dimensions, the types of data, and, finally, the types of information systems addressed by each methodology. The article concludes with a summary description of each methodology.
---
paper_title: Modeling Data and Process Quality in Multi-Input, Multi-Output Information Systems
paper_content:
This paper presents a general model to assess the impact of data and process quality upon the outputs of multi-user information-decision systems. The data flow/data processing quality control model is designed to address several dimensions of data quality at the collection, input, processing and output stages. Starting from a data flow diagram of the type used in structured analysis, the model yields a representation of possible errors in multiple intermediate and final outputs in terms of input and process error functions. The model generates expressions for the possible magnitudes of errors in selected outputs. This is accomplished using a recursive-type algorithm which traces systematically the propagation and alteration of various errors. These error expressions can be used to analyze the impact that alternative quality control procedures would have on the selected outputs. The paper concludes with a discussion of the tractability of the model for various types of information systems as well as an application to a representative scenario.
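A toy Python version of this kind of error propagation is sketched below: the data flow is a small directed graph, each step has its own error rate, and the output error of a node is computed recursively from its inputs under an independence assumption. The process names, rates, and the specific independence model are assumptions chosen for illustration, not the paper's error functions.

    # Hypothetical data-flow graph: each node lists its inputs and its own step error rate,
    # i.e. the probability that the step itself corrupts an otherwise correct record.
    flow = {
        "orders_entry":   {"inputs": [],                            "step_error": 0.02},
        "price_list":     {"inputs": [],                            "step_error": 0.01},
        "invoice_calc":   {"inputs": ["orders_entry", "price_list"], "step_error": 0.005},
        "monthly_report": {"inputs": ["invoice_calc"],              "step_error": 0.0},
    }

    def output_error(node, flow, cache=None):
        """Probability that a node's output is erroneous, assuming independent errors:
        the output is correct only if every input is correct and the step itself is."""
        cache = {} if cache is None else cache
        if node in cache:
            return cache[node]
        p_ok = 1.0 - flow[node]["step_error"]
        for parent in flow[node]["inputs"]:
            p_ok *= 1.0 - output_error(parent, flow, cache)
        cache[node] = 1.0 - p_ok
        return cache[node]

    print(round(output_error("monthly_report", flow), 4))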
---
paper_title: Executing Data Quality Projects: Ten Steps to Quality Data and Trusted Information (Tm)
paper_content:
Information is currency. Recent studies show that data quality problems are costing businesses billions of dollars each year, with poor data linked to waste and inefficiency, damaged credibility among customers and suppliers, and an organizational inability to make sound decisions. In this important and timely new book, Danette McGilvray presents her "Ten Steps" approach to information quality, a proven method for both understanding and creating information quality in the enterprise. Her trademarked approach, in which she has trained Fortune 500 clients and hundreds of workshop attendees, applies to all types of data and to all types of organizations. The book includes numerous templates, detailed examples, and practical advice for executing every step of the "Ten Steps" approach; allows for quick reference with an easy-to-use format highlighting key concepts and definitions, important checkpoints, communication activities, and best practices; and is accompanied by a companion Web site with links to numerous data quality resources, including many of the planning and information-gathering templates featured in the text, quick summaries of key ideas from the Ten Steps methodology, and other tools and information available online. Table of contents: Introduction (The Reason for This Book; Intended Audiences; Structure of This Book; How to Use This Book; Acknowledgements); Chapter 1, Overview (Impact of Information and Data Quality; About the Methodology; Approaches to Data Quality in Projects; Engaging Management); Chapter 2, Key Concepts (Framework for Information Quality (FIQ); Information Life Cycle; Data Quality Dimensions; Business Impact Techniques; Data Categories; Data Specifications; Data Governance and Stewardship; The Information and Data Quality Improvement Cycle; The Ten Steps Process; Best Practices and Guidelines); Chapter 3, The Ten Steps (1. Define Business Need and Approach; 2. Analyze Information Environment; 3. Assess Data Quality; 4. Assess Business Impact; 5. Identify Root Causes; 6. Develop Improvement Plans; 7. Prevent Future Data Errors; 8. Correct Current Data Errors; 9. Implement Controls; 10. Communicate Actions and Results); Chapter 4, Structuring Your Project (Projects and The Ten Steps; Data Quality Project Roles; Project Timing); Chapter 5, Other Techniques and Tools (Information Life Cycle Approaches; Capture Data; Analyze and Document Results; Metrics; Data Quality Tools; The Ten Steps and Six Sigma); Chapter 6, A Few Final Words; Appendix, Quick References (Framework for Information Quality; POSMAD Interaction Matrix Detail; POSMAD Phases and Activities; Data Quality Dimensions; Business Impact Techniques; The Ten Steps Overview; Definitions of Data Categories).
---
paper_title: Beyond accuracy: what data quality means to data consumers
paper_content:
Poor data quality (DQ) can have substantial social and economic impacts. Although firms are improving data quality with practical approaches and tools, their improvement efforts tend to focus narrowly on accuracy. We believe that data consumers have a much broader data quality conceptualization than IS professionals realize. The purpose of this paper is to develop a framework that captures the aspects of data quality that are important to data consumers.A two-stage survey and a two-phase sorting study were conducted to develop a hierarchical framework for organizing data quality dimensions. This framework captures dimensions of data quality that are important to data consumers. Intrinsic DQ denotes that data have quality in their own right. Contextual DQ highlights the requirement that data quality must be considered within the context of the task at hand. Representational DQ and accessibility DQ emphasize the importance of the role of systems. These findings are consistent with our understanding that high-quality data should be intrinsically good, contextually appropriate for the task, clearly represented, and accessible to the data consumer.Our framework has been used effectively in industry and government. Using this framework, IS managers were able to better understand and meet their data consumers' data quality needs. The salient feature of this research study is that quality attributes of data are collected from data consumers instead of being defined theoretically or based on researchers' experience. Although exploratory, this research provides a basis for future studies that measure data quality along the dimensions of this framework.
---
paper_title: A Conceptual Framework and Belief Function Approach to Assessing Overall Information Quality
paper_content:
We develop an information quality model based on a user-centric view adapted from the Financial Accounting Standards Board [1], Wang et al. [2], and Wang and Strong [3]. The model consists of four essential attributes (or assertions): ‘Accessibility,’ ‘Interpretability,’ ‘Relevance,’ and ‘Integrity.’ Four sub-attributes lead to an evaluation of Integrity: ‘Accuracy,’ ‘Completeness,’ ‘Consistency,’ and ‘Existence.’ These sub-attributes relating to ‘Integrity’ are intrinsic in nature and relate to the process of how the information was created, while the first three attributes, ‘Accessibility,’ ‘Interpretability,’ and ‘Relevance,’ are extrinsic in nature. We present our model as an evidential network under the belief-function framework to permit user assessment of quality parameters. Two algorithms for combining assessments into an overall IQ measure are explored, and examples in the domain of medical information are used to illustrate key concepts. We discuss two scenarios, ‘online-user’ and ‘assurance-provider,’ which reflect two likely and important aspects of IQ evaluation currently facing information users: concerns about the impact of poor-quality online information, and the need for information quality assurance.
---
paper_title: Quality-Driven Query Answering for Integrated Information Systems
paper_content:
Contents: Querying the Web; Integrating Autonomous Information Sources; Information Quality; Information Quality Criteria; Quality Ranking Methods; Quality-Driven Query Answering; Quality-Driven Query Planning; Query Planning Revisited; Completeness of Data; Completeness-Driven Query Optimization; Discussion; Conclusion.
---
paper_title: Methodologies for data quality assessment and improvement
paper_content:
The literature provides a wide range of techniques to assess and improve the quality of data. Due to the diversity and complexity of these techniques, research has recently focused on defining methodologies that help the selection, customization, and application of data quality assessment and improvement techniques. The goal of this article is to provide a systematic and comparative description of such methodologies. Methodologies are compared along several dimensions, including the methodological phases and steps, the strategies and techniques, the data quality dimensions, the types of data, and, finally, the types of information systems addressed by each methodology. The article concludes with a summary description of each methodology.
---
paper_title: Executing Data Quality Projects: Ten Steps to Quality Data and Trusted Information (Tm)
paper_content:
Information is currency. Recent studies show that data quality problems are costing businesses billions of dollars each year, with poor data linked to waste and inefficiency, damaged credibility among customers and suppliers, and an organizational inability to make sound decisions. In this important and timely new book, Danette McGilvray presents her "Ten Steps" approach to information quality, a proven method for both understanding and creating information quality in the enterprise. Her trademarked approach, in which she has trained Fortune 500 clients and hundreds of workshop attendees, applies to all types of data and to all types of organizations. The book includes numerous templates, detailed examples, and practical advice for executing every step of the "Ten Steps" approach; allows for quick reference with an easy-to-use format highlighting key concepts and definitions, important checkpoints, communication activities, and best practices; and is accompanied by a companion Web site with links to numerous data quality resources, including many of the planning and information-gathering templates featured in the text, quick summaries of key ideas from the Ten Steps methodology, and other tools and information available online. Table of contents: Introduction (The Reason for This Book; Intended Audiences; Structure of This Book; How to Use This Book; Acknowledgements); Chapter 1, Overview (Impact of Information and Data Quality; About the Methodology; Approaches to Data Quality in Projects; Engaging Management); Chapter 2, Key Concepts (Framework for Information Quality (FIQ); Information Life Cycle; Data Quality Dimensions; Business Impact Techniques; Data Categories; Data Specifications; Data Governance and Stewardship; The Information and Data Quality Improvement Cycle; The Ten Steps Process; Best Practices and Guidelines); Chapter 3, The Ten Steps (1. Define Business Need and Approach; 2. Analyze Information Environment; 3. Assess Data Quality; 4. Assess Business Impact; 5. Identify Root Causes; 6. Develop Improvement Plans; 7. Prevent Future Data Errors; 8. Correct Current Data Errors; 9. Implement Controls; 10. Communicate Actions and Results); Chapter 4, Structuring Your Project (Projects and The Ten Steps; Data Quality Project Roles; Project Timing); Chapter 5, Other Techniques and Tools (Information Life Cycle Approaches; Capture Data; Analyze and Document Results; Metrics; Data Quality Tools; The Ten Steps and Six Sigma); Chapter 6, A Few Final Words; Appendix, Quick References (Framework for Information Quality; POSMAD Interaction Matrix Detail; POSMAD Phases and Activities; Data Quality Dimensions; Business Impact Techniques; The Ten Steps Overview; Definitions of Data Categories).
---
paper_title: Fundamentals of Data Warehouses
paper_content:
From the Publisher: Data warehouses have captured the attention of practitioners and researchers alike. But the design and optimization of data warehouses remains an art rather than a science. This book presents a comparative review of the state of the art and best current practice of data warehouses. It covers source and data integration, multidimensional aggregation, query optimization, update propagation, metadata management, quality assessment, and design optimization. Also, based on results of the European Data Warehouse Quality project, it offers a conceptual framework by which the architecture and quality of data warehouse efforts can be assessed and improved using enriched metadata management combined with advanced techniques from databases, business modeling, and artificial intelligence. For researchers and database professionals in academia and industry, the book offers an excellent introduction to the issues of quality and metadata usage in the context of data warehouses.
---
paper_title: Beyond accuracy: what data quality means to data consumers
paper_content:
Poor data quality (DQ) can have substantial social and economic impacts. Although firms are improving data quality with practical approaches and tools, their improvement efforts tend to focus narrowly on accuracy. We believe that data consumers have a much broader data quality conceptualization than IS professionals realize. The purpose of this paper is to develop a framework that captures the aspects of data quality that are important to data consumers.A two-stage survey and a two-phase sorting study were conducted to develop a hierarchical framework for organizing data quality dimensions. This framework captures dimensions of data quality that are important to data consumers. Intrinsic DQ denotes that data have quality in their own right. Contextual DQ highlights the requirement that data quality must be considered within the context of the task at hand. Representational DQ and accessibility DQ emphasize the importance of the role of systems. These findings are consistent with our understanding that high-quality data should be intrinsically good, contextually appropriate for the task, clearly represented, and accessible to the data consumer.Our framework has been used effectively in industry and government. Using this framework, IS managers were able to better understand and meet their data consumers' data quality needs. The salient feature of this research study is that quality attributes of data are collected from data consumers instead of being defined theoretically or based on researchers' experience. Although exploratory, this research provides a basis for future studies that measure data quality along the dimensions of this framework.
---
paper_title: Data quality assessment
paper_content:
How good is a company's data quality? Answering this question requires usable data quality metrics. Currently, most data quality measures are developed on an ad hoc basis to solve specific problems [6, 8], and fundamental principles necessary for developing usable metrics in practice are lacking. In this article, we describe principles that can help organizations develop usable data quality metrics.
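As one hedged illustration of the kind of usable, task-level metric this article argues for, the Python sketch below computes simple-ratio completeness and rule-based validity scores over a toy table. The column names, the sample records and the email rule are hypothetical stand-ins for organization-specific definitions, not anything prescribed by the paper.

```python
import pandas as pd

# Hypothetical customer records; None marks missing values.
records = pd.DataFrame({
    "customer_id": [1, 2, 3, 4],
    "email": ["a@x.com", None, "c@x.com", "not-an-email"],
    "age": [34, 27, None, 45],
})

def completeness(series: pd.Series) -> float:
    """Simple ratio: 1 - (number of missing values / total number of values)."""
    return 1.0 - series.isna().sum() / len(series)

def validity(series: pd.Series, rule) -> float:
    """Simple ratio of non-missing values that satisfy a validity rule."""
    present = series.dropna()
    return rule(present).mean() if len(present) else 1.0

scores = {
    "email_completeness": completeness(records["email"]),
    "age_completeness": completeness(records["age"]),
    # Hypothetical rule: a valid email contains exactly one '@'.
    "email_validity": validity(records["email"], lambda s: s.str.count("@") == 1),
}
print(scores)
```

In practice such per-column ratios would be aggregated (for example by a weighted average) into the dashboard-level metrics the article discusses.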
---
paper_title: Dimensions of Business Processes Quality (QoBP)
paper_content:
Conceptual modeling is an important tool for understanding and revealing weaknesses of business processes. Yet, the current practice in reengineering projects often considers simply the as-is process model as a brain-storming tool. This approach heavily relies on the intuition of the participants and misses a clear description of the quality requirements. Against this background, we identify four generic quality categories of business process quality, and populate them with quality requirements from related research. We refer to the resulting framework as the Quality of Business Process (QoBP) framework. Furthermore, we present the findings from applying the QoBP framework in a case study with a major Australian bank, showing that it helps to systematically fill the white space between as-is and to-be process modeling.
---
paper_title: Methodologies for data quality assessment and improvement
paper_content:
The literature provides a wide range of techniques to assess and improve the quality of data. Due to the diversity and complexity of these techniques, research has recently focused on defining methodologies that help the selection, customization, and application of data quality assessment and improvement techniques. The goal of this article is to provide a systematic and comparative description of such methodologies. Methodologies are compared along several dimensions, including the methodological phases and steps, the strategies and techniques, the data quality dimensions, the types of data, and, finally, the types of information systems addressed by each methodology. The article concludes with a summary description of each methodology.
---
paper_title: Developing a Framework for Assessing Information Quality on the World Wide Web
paper_content:
Introduction--The Big Picture. Over the past decade, the Internet--or World Wide Web (technically, the Internet is a huge collection of networked computers using the TCP/IP protocol to exchange data; the World Wide Web (WWW) is in essence only part of this network of computers, but its visible status has meant that, conceptually at least, it is often used interchangeably with "Internet" to describe the same thing)--has established itself as the key infrastructure for information administration, exchange, and publication (Alexander & Tate, 1999), and Internet search engines are the most commonly used tool to retrieve that information (Wang, 2001). The deficiency of enforceable standards, however, has resulted in frequent information quality problems (Eppler & Muenzenmayer, 2002). This paper is part of a research project undertaken at Edith Cowan, Wollongong and Sienna Universities to build an Internet-focused crawler that uses "quality" criteria in determining returns to user queries. Such a task requires that the conceptual notions of quality be ultimately quantified into search engine algorithms that interact with webpage technologies, eliminating documents that do not meet specifically determined standards of quality. The focus of this paper, as part of the wider research, is on the concepts of quality in information and information systems, specifically as it pertains to information and information retrieval on the Internet. As with much of the research into Information Quality (IQ) in information systems, the term is interchangeable with Data Quality (DQ).

What Is Information Quality? Data and information quality is commonly thought of as a multi-dimensional concept (Klein, 2001) with varying attributed characteristics depending on an author's philosophical viewpoint. Most commonly, the term "Data Quality" describes data that is "fit for use" (Wang & Strong, 1996), which implies that it is relative, as data considered appropriate for one use may not possess sufficient attributes for another use (Tayi & Ballou, 1998).

IQ as a series of dimensions. Table 1 summarizes 12 widely accepted IQ frameworks collated from the last decade of IS research. While varied in their approach and application, the frameworks share a number of characteristics regarding their classifications of the dimensions of quality. An analysis of Table 1 reveals the common elements between the different IQ frameworks. These include such traditional dimensions as accuracy, consistency, timeliness, completeness, accessibility, objectiveness and relevancy. Table 2 provides a summary of the most common dimensions and the frequency with which they are included in the above IQ frameworks. Each dimension also includes a short definition.

IQ in the context of its use. In order to accurately define and measure the concept of information quality, it is not enough to identify the common elements of IQ frameworks as individual entities in their own right. In fact, information quality needs to be assessed within the context of its generation (Shanks & Corbitt, 1999) and intended use (Katerattanakul & Siau, 1999). This is because the attributes of data quality can vary depending on the context in which the data is to be used (Shankar & Watts, 2003). Defining what information quality is within the context of the World Wide Web and its search engines, then, will depend greatly on whether dimensions are being identified for the producers of information, the storage and maintenance systems used for information, or for the searchers and users of information. The currently accepted view of assessing IQ involves understanding it from the user's point of view. Strong and Wang (1997) suggest that the quality of data cannot be assessed independent of the people who use the data. Applying this to the World Wide Web has its own set of problems. …
---
paper_title: Dependency discovery in data quality
paper_content:
A conceptual framework for the automatic discovery of dependencies between data quality dimensions is described. Dependency discovery consists in recovering the dependency structure for a set of data quality dimensions measured on attributes of a database. This task is accomplished through a data mining methodology, by learning a Bayesian Network from a database. The Bayesian Network is used to analyze dependencies between data quality dimensions associated with different attributes. The proposed framework is instantiated on a real-world database. The task of dependency discovery is presented for the case in which the following data quality dimensions are considered: accuracy, completeness, and consistency. The Bayesian Network model shows how data quality can be improved while satisfying budget constraints.
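The framework itself learns a Bayesian Network over per-attribute quality scores; as a much simpler, hedged stand-in for that idea, the sketch below estimates pairwise mutual information between discretized accuracy, completeness and consistency scores to flag candidate dependencies worth modelling. The synthetic data, the binning scheme and the coupling between the dimensions are purely illustrative assumptions.

```python
import numpy as np
from itertools import combinations
from sklearn.metrics import mutual_info_score

rng = np.random.default_rng(0)

# Hypothetical per-record quality scores in [0, 1] for three dimensions;
# completeness is made to partially drive accuracy so that a dependency exists.
n = 500
completeness = rng.uniform(0.5, 1.0, n)
accuracy = np.clip(completeness * 0.7 + rng.normal(0, 0.1, n), 0, 1)
consistency = rng.uniform(0.5, 1.0, n)
dims = {"accuracy": accuracy, "completeness": completeness, "consistency": consistency}

def discretize(x, bins=5):
    # Equal-width binning so the scores can be treated as discrete labels.
    return np.digitize(x, np.linspace(0, 1, bins + 1)[1:-1])

for a, b in combinations(dims, 2):
    mi = mutual_info_score(discretize(dims[a]), discretize(dims[b]))
    print(f"MI({a}, {b}) = {mi:.3f}")  # higher MI suggests a dependency to model
```

A full replication would instead fit a Bayesian Network structure over these variables, but the mutual-information screen conveys the same basic idea of letting the data reveal which dimensions influence each other.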
---
paper_title: Analysis of data quality and information quality problems in digital manufacturing
paper_content:
This work focuses on the increasing importance of data quality in organizations, especially in digital manufacturing companies. The paper first reviews related work in the field of data quality, including the definition, dimensions, measurement and assessment, and improvement of data quality. Then, taking digital manufacturing as the research object, the different information roles, information manufacturing processes, influential factors of information quality, and the transformation levels and paths of data/information quality in digital manufacturing companies are analyzed. Finally, an approach for the diagnosis, control and improvement of data/information quality in digital manufacturing companies, which is the basis for further work, is proposed.
---
|
Title: Data Quality: A Survey of Data Quality Dimensions
Section 1: INTRODUCTION
Description 1: Introduce the importance of data quality in organizational activities and decision-making processes.
Section 2: DATA QUALITY STRATEGIES AND TECHNIQUES
Description 2: Discuss the strategies and techniques used for improving data quality.
Section 3: TYPES OF DATA
Description 3: Explain different classifications of data and how they relate to data quality.
Section 4: DATA QUALITY DEFINITIONS
Description 4: Describe various definitions of data quality across different fields.
Section 5: DATA QUALITY PROBLEMS CLASSIFICATION
Description 5: Classify the common problems associated with data quality to better understand their impact.
Section 6: DISCUSSION
Description 6: Analyze the dependencies between different data quality dimensions and their importance in improving process quality.
Section 7: CONCLUSION
Description 7: Summarize the findings and propose future work for assessing and improving data quality dimensions.
|
Adoption of the Internet of Things (IoT) in agriculture and smart farming towards urban greening: A review
| 7 |
---
paper_title: A smart agricultural model by integrating IoT, mobile and cloud-based big data analytics
paper_content:
Nowadays, the traditional database paradigm does not provide enough storage for the data produced by Internet of Things (IoT) devices, which leads to the need for cloud storage. These data are analyzed with the help of Big Data mining techniques. Cloud-based big data analytics and IoT technology play an important role in the feasibility of smart agriculture. Smart or precision agricultural systems are expected to play an essential role in improving agricultural activities. Mobile devices are used widely by everyone, including farmers, and Information and Communication Technologies (ICT) play a vital role in bringing agricultural information into farmers' daily lives. The IoT has various applications in the digital agriculture domain, such as monitoring crop growth, fertilizer selection and irrigation decision support systems. In this paper, IoT devices are used to sense agricultural data, which are stored in a cloud database. Cloud-based big data analysis is then used to analyze the data, viz. fertilizer requirements, crop analysis, and market and stock requirements for the crop. Prediction is performed using data mining techniques, and the resulting information reaches the farmer via a mobile app. The ultimate aim is to increase crop production and control the cost of agricultural products using this predicted information.
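The abstract does not name a specific mining algorithm, so the snippet below is only a hedged illustration of the "predict, then push to the farmer" step: a decision-tree classifier trained on hypothetical sensor features to suggest whether extra fertilizer is needed. The feature set, labels and values are all assumptions for illustration, not data or methods from the paper.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Hypothetical training data: [soil_moisture %, soil nitrogen mg/kg, temperature C]
X = np.array([
    [20, 110, 31], [45, 260, 27], [30, 140, 29],
    [55, 300, 24], [25, 120, 33], [50, 280, 26],
])
y = np.array([1, 0, 1, 0, 1, 0])  # 1 = recommend fertilizer, 0 = no action

model = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# A new cloud-stored sensor reading; in the architecture described above the
# resulting recommendation would be pushed to the farmer's mobile app.
reading = np.array([[28, 130, 30]])
print("recommend fertilizer" if model.predict(reading)[0] == 1 else "no action")
```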
---
paper_title: IoT enabled plant soil moisture monitoring using wireless sensor networks
paper_content:
In recent years, the increasing demand for organic farming has necessitated continuous monitoring of plant health, which is essential to ensure both the quality and the quantity of produce. Hence, the objective of this research is to develop a remote monitoring system that continuously monitors the soil moisture of the plant. A Wireless Sensor Network (WSN) is integrated with the Internet of Things (IoT) to achieve this objective. Further, to enhance the network lifetime, an Exponential Weighted Moving Average (EWMA) event detection algorithm is adopted in the proposed research.
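The paper adopts EWMA-based event detection so that nodes transmit only when the soil-moisture signal departs from its recent trend, saving radio energy. The sketch below is a minimal, hedged interpretation of that idea; the smoothing factor, threshold and sample values are assumptions, not parameters from the paper.

```python
def ewma_event_detector(readings, alpha=0.3, threshold=5.0):
    """Yield (index, reading) pairs that deviate from the EWMA by more than
    `threshold`; only these samples would be transmitted to the sink."""
    ewma = None
    for i, x in enumerate(readings):
        if ewma is None:
            ewma = x                                # initialise with the first sample
        elif abs(x - ewma) > threshold:
            yield i, x                              # event: report this sample
            ewma = x                                # re-anchor the average on the event
        else:
            ewma = alpha * x + (1 - alpha) * ewma   # routine smoothing, no transmission

# Hypothetical soil-moisture (%) samples with a sudden drop (irrigation needed).
soil_moisture = [42, 41, 43, 42, 40, 41, 30, 29, 31, 40, 41]
for idx, value in ewma_event_detector(soil_moisture):
    print(f"event at sample {idx}: moisture {value}%")
```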
---
paper_title: IoT based smart irrigation system and nutrient detection with disease analysis
paper_content:
Agriculture remains the sector that contributes the most to India's GDP. However, the technology deployed in this field has not developed to the same extent. After consulting the Kerala Agricultural University, Mannuthy, and the Kerala Rice Research Station, Vytilla, we identified a few fundamental issues faced by paddy farmers today. These include over- or under-watering and the need for regular manual irrigation. Furthermore, for rice, the staple crop of Kerala, there is no system for automatically monitoring the diseases associated with the rice species and checking whether the crop is supplied with an ample amount of nutrients. In this paper, we devise a means of cost-effective automated irrigation and fertigation, along with MATLAB-based image processing for identifying rice diseases and nutrient deficiencies. Here, we focus on two important nutrients, namely magnesium and nitrogen. The hardware consists of a Raspberry Pi, a DHT11 temperature and humidity sensor and solenoid valves. Furthermore, the proposed model enables the farmer to monitor weather conditions using an Android application, through which he also has the choice to override the system if required.
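This is not the authors' code, but a minimal sketch of the hardware loop the abstract describes (Raspberry Pi, DHT11, solenoid valve), assuming the commonly used Adafruit_DHT and RPi.GPIO Python libraries; the pin numbers and the humidity set point are hypothetical, and the paper's fertigation and MATLAB image-processing stages are not covered here.

```python
import time
import Adafruit_DHT          # common DHT11 driver for the Raspberry Pi
import RPi.GPIO as GPIO      # GPIO access for the solenoid-valve relay

DHT_PIN = 4                  # hypothetical BCM pin for the DHT11 data line
VALVE_PIN = 17               # hypothetical BCM pin driving the valve relay
HUMIDITY_THRESHOLD = 60.0    # assumed set point, not a value from the paper

GPIO.setmode(GPIO.BCM)
GPIO.setup(VALVE_PIN, GPIO.OUT, initial=GPIO.LOW)

try:
    while True:
        humidity, temperature = Adafruit_DHT.read_retry(Adafruit_DHT.DHT11, DHT_PIN)
        if humidity is not None:
            # Open the valve when the air is dry, close it otherwise.
            open_valve = humidity < HUMIDITY_THRESHOLD
            GPIO.output(VALVE_PIN, GPIO.HIGH if open_valve else GPIO.LOW)
            print(f"T={temperature:.0f}C RH={humidity:.0f}% valve={'open' if open_valve else 'closed'}")
        time.sleep(60)        # sample once a minute
finally:
    GPIO.cleanup()
```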
---
paper_title: Intelligent irrigation system — An IOT based approach
paper_content:
The Internet of Things (IoT) has been described as a new wave of information and communication technology (ICT) advancements. The IoT is a multidisciplinary concept that encompasses a wide range of technologies, application domains, device capabilities, operational strategies, etc. Ongoing IoT research activities are directed towards the definition and design of standards and open architectures, which still have issues requiring a global consensus before final deployment. This paper gives an overview of IoT technologies and applications related to agriculture, compares them with other survey papers, and proposes a novel irrigation management system. The main objective of this work is smart farming, where new technologies are applied to achieve higher crop growth and better manage the water supply. An automated control system using the latest electronic technology is proposed: after measuring temperature, humidity and soil moisture, a microcontroller turns the pumping motor ON and OFF based on the detected dampness of the soil, and a GSM phone line is used for communication.
---
paper_title: Lossy compression on IoT big data by exploiting spatiotemporal correlation
paper_content:
As the volume of data generated by various deployed IoT devices increases, storing and processing IoT big data becomes a huge challenge. While compression, especially lossy compression, can drastically reduce data volume, finding an optimal balance between volume reduction and information loss is not an easy task, given that the data collected by diverse sensors exhibit different characteristics. Motivated by this, we present a feasibility analysis of lossy compression on agricultural sensor data by comparing the fidelity of data reconstructed from various signal processing algorithms and temporal difference encoding. Specifically, we evaluated five real-world sensor datasets from weather stations, one of the major IoT applications. Our experimental results indicate that the Discrete Cosine Transform (DCT) and the Fast Walsh-Hadamard Transform (FWHT) generate higher compression ratios than the others. In terms of information loss, however, Lossy Delta Encoding (LDE) significantly outperforms the others. We also observe that, as the compression factor is increased, error rates for all compression algorithms also increase. However, the impact of the introduced error is much more severe for DCT and FWHT, while LDE is able to maintain a relatively lower error rate than the other methods.
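As a hedged illustration of the kind of comparison the paper performs, the sketch below compresses a toy temperature trace with a truncated DCT and with a simple lossy delta encoding, then reports the reconstruction error for each. The synthetic signal, the number of retained coefficients and the quantisation step are arbitrary choices for illustration, not the paper's settings.

```python
import numpy as np
from scipy.fft import dct, idct

rng = np.random.default_rng(1)
# Hypothetical hourly temperature trace: a daily cycle plus sensor noise.
t = np.arange(256)
signal = 20 + 5 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 0.3, t.size)

# (1) Transform coding: keep only the first k DCT coefficients.
k = 16
coeffs = dct(signal, norm="ortho")
coeffs[k:] = 0.0
dct_rec = idct(coeffs, norm="ortho")

# (2) Lossy delta encoding: quantise successive differences, keep the first sample exact.
step = 0.5
deltas = np.diff(signal)
q_deltas = np.round(deltas / step) * step
lde_rec = np.concatenate(([signal[0]], signal[0] + np.cumsum(q_deltas)))

for name, rec in [("DCT (16 coefficients)", dct_rec), ("Lossy delta encoding", lde_rec)]:
    rmse = np.sqrt(np.mean((signal - rec) ** 2))
    print(f"{name}: RMSE = {rmse:.3f}")
```

Comparing the error for different values of k and of the quantisation step mimics, at toy scale, the compression-ratio versus information-loss trade-off the paper studies.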
---
paper_title: Automation in drip irrigation using IOT devices
paper_content:
The Internet of Things (IoT) has become very popular and is growing rapidly in the field of communication. This paper describes the control and monitoring of a drip irrigation process using IoT and image processing techniques. The automated system includes a database to analyze the water requirement of the plant. The database contains predefined soil moisture values and is editable according to the soil type of the place where the system is implemented. Different types of soil have different moisture values, and moisture values also change in accordance with the climate. The proposed system uses a soil moisture sensor to take soil moisture readings from the field in real time and waters the plants in an automated way by switching the drip service ON/OFF using an Android app. Most of the time, infected plants are identified by a change in the color of the leaves. A camera is used to take pictures of the plants' leaves, and image processing algorithms are applied to the captured images in the subsequent steps. A database is used for comparison between the original and the real-time image, which results in identifying the diseased plant or part of the plant. Thus, the proposed system is made effective and cost-efficient.
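The paper compares a stored reference image with the captured leaf image to spot discolouration; the snippet below is a hedged OpenCV sketch of that comparison step only. The file names, the pixel-difference threshold and the decision threshold are placeholders, and the paper's actual image-processing algorithm may well differ.

```python
import cv2
import numpy as np

# Placeholder file names for the stored healthy reference and the new capture.
reference = cv2.imread("healthy_leaf.jpg")
captured = cv2.imread("captured_leaf.jpg")
captured = cv2.resize(captured, (reference.shape[1], reference.shape[0]))

# Work in grayscale and look at absolute per-pixel differences.
diff = cv2.absdiff(cv2.cvtColor(reference, cv2.COLOR_BGR2GRAY),
                   cv2.cvtColor(captured, cv2.COLOR_BGR2GRAY))
_, mask = cv2.threshold(diff, 40, 255, cv2.THRESH_BINARY)

changed_fraction = np.count_nonzero(mask) / mask.size
print(f"{changed_fraction:.1%} of pixels differ from the reference")
if changed_fraction > 0.10:   # assumed decision threshold
    print("possible disease or discolouration detected - flag for the farmer")
```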
---
paper_title: A sustainable agricultural system using IoT
paper_content:
The Internet of Things (IoT) is a worldwide network of web-connected objects, or things embedded with electronics, software, sensors and other instruments, which is becoming an integral component of the future internet. This work developed a system that automatically monitors the agricultural field as well as performing live video streaming of the field from the server itself through a Raspberry Pi camera. The agricultural fields are monitored for environmental temperature, humidity and soil moisture. Automatic irrigation is performed based on the set points of the temperature, humidity and soil moisture sensors. The data collected from the field are monitored through IoT, then processed, and the necessary information is passed on to the field owners for countermeasures.
---
paper_title: Providing smart agricultural solutions/techniques by using Iot based toolkit
paper_content:
IoT integrates pervasive computing, ubiquitous communications and ambient intelligence. India is known as the ‘land of agriculture’. Approximately 70% of the Indian population is connected with agriculture and its related activities. Most agricultural activities are carried out by villagers. Natural resources and weather conditions are vital factors in agriculture. This paper proposes automation and a smart IoT-based solution for agriculture. The highlighting features of this paper include temperature and humidity detection, soil moisture detection, leaf wetness detection, wind speed/direction and rainfall detection, soil pH detection, seed recognition and an efficient irrigation system. The proposed system uses a Raspberry Pi microcontroller with sensors which are used to sense the different environmental conditions.
---
paper_title: IOT Based Monitoring System in Smart Agriculture
paper_content:
The Internet of Things (IoT) plays a crucial role in smart agriculture. Smart farming is an emerging concept because IoT sensors are capable of providing information about agricultural fields. The paper aims at making use of evolving technology, i.e., IoT, for smart agriculture using automation. Monitoring environmental factors is a major step towards improving the yield of crops. The features of this paper include monitoring temperature and humidity in the agricultural field through sensors using the CC3200 single chip. A camera is interfaced with the CC3200 to capture images and send those pictures through MMS to the farmer's mobile using Wi-Fi.
---
paper_title: Performance evaluation of IEEE 802.15.4-compliant smart water meters for automating large-scale waterways
paper_content:
Climate change and resultant scarcity of water are becoming major challenges for countries around the world. With the advent of Wireless Sensor Networks (WSN) in the last decade and a relatively new concept of Internet of Things (IoT), embedded systems developers are now working on designing control and automation systems that are lower in cost and more sustainable than the existing telemetry systems for monitoring. The Indus river basin in Pakistan has one of the world's largest irrigation systems and it is extremely challenging to design a low-cost embedded system for monitoring and control of waterways that can last for decades. In this paper, we present a hardware design and performance evaluation of a smart water metering solution that is IEEE 802.15.4-compliant. The results show that our hardware design is as powerful as the reference design, but allows for additional flexibility both in hardware and in firmware. The indigenously designed solution has a power added efficiency (PAE) of 24.7% that is expected to last for 351 and 814 days for nodes with and without a power amplifier (PA). Similarly, the results show that a broadband communication (434 MHz) over more than 3km can be supported, which is an important stepping stone for designing a complete coverage solution of large-scale waterways.
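The node lifetimes reported in this kind of study follow from a simple energy budget: lifetime (hours) equals battery capacity (mAh) divided by average current draw (mA). The sketch below works that arithmetic through; the battery capacity and duty-cycled current figures are illustrative placeholders, not the paper's measured values.

```python
# Back-of-the-envelope node lifetime estimate: lifetime (h) = battery
# capacity (mAh) / average current draw (mA). All numbers below are
# illustrative placeholders, not the paper's measured values.
def lifetime_days(capacity_mah, avg_current_ma):
    return capacity_mah / avg_current_ma / 24.0

BATTERY_MAH = 2400.0          # assumed 2xAA-class battery pack
AVG_MA_WITH_PA = 0.28         # assumed duty-cycled draw with power amplifier
AVG_MA_WITHOUT_PA = 0.12      # assumed draw without power amplifier

print(f"with PA:    {lifetime_days(BATTERY_MAH, AVG_MA_WITH_PA):.0f} days")
print(f"without PA: {lifetime_days(BATTERY_MAH, AVG_MA_WITHOUT_PA):.0f} days")
```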
---
paper_title: Plant and taste to reap with Internet of Things implementation of IoT in agriculture to make it a parallel industry
paper_content:
India is an agricultural country but no one wants to become a farmer. It's high time that we make agriculture a parallel industry in India [1]. Our project PATRIOT focuses on the same. It is rightly termed as patriot as anyone involved with the project will serve the country and feel proud and patriotic. Right from detection of crop diseases [2] to automating the things like water pump [3], a lot of work has been done on implementation of IoT in Agriculture. But they do not cater to the real problems of the farmers. The farmers can do without a crop disease detection app or automations but raise in income is the major challenge which is overlooked. The implementation of IoT should be such that it should increase the income of the farmer and at the same time help consumers to get organic fruits and vegetables at competitive prices. Also, IoT should itself raise funds for realistic implementation of IoT. PATRIOT stands for plant and taste to reap with Internet of Things. It depicts a gradual and feasible implementation of IoT in agriculture. It realizes the fact that implementation of IoT cannot be hurried upon and would require a huge investment. So, it focuses on raising the finance required for implementation of IoT in Agriculture with the help of IoT itself. PATRIOT has a foolproof plan to help anyone to become a farmer without sacrificing on the present luxurious life style and high salary job. It introduces the concept of agriculture “Anywhere-Anytime” using IoT. PATRIOT will make the farmer dynamic and richer. It will make the consumer pay less for the food and food products. It will generate the huge amount of money for the realistic implementation of IoT. It will give each of us an opportunity to plant and look after the plantation using IOT. It will give each of us a satisfaction of tasting the vegetables, fruits that we have grown. It will make us reap the benefits of farming without ever disturbing our busy schedule. It will make us, above all, a true PATRIOT because we will get immense satisfaction in helping our country in a way in which it actually should be.
---
paper_title: IoT based smart irrigation monitoring and controlling system
paper_content:
The interconnection of a number of devices through the internet describes the Internet of Things (IoT). Every object is connected with every other through a unique identifier so that data can be transferred without human-to-human interaction. This allows establishing solutions for better management of natural resources. Smart objects embedded with sensors enable interaction with the physical and logical worlds according to the concept of IoT. The system proposed in this paper is based on IoT and uses real-time input data. The smart farm irrigation system uses an Android phone for remote monitoring and controlling of drips through a wireless sensor network. Zigbee is used for communication between sensor nodes and the base station. Real-time handling and presentation of the sensed data on the server is accomplished using a web-based Java graphical user interface. Wireless monitoring of the field irrigation system reduces human intervention and allows remote monitoring and controlling on an Android phone. Cloud computing is an attractive solution to the large amount of data generated by the wireless sensor network. This paper proposes and evaluates a cloud-based wireless communication system to monitor and control a set of sensors and actuators to assess the plants' water needs.
---
paper_title: Automated irrigation with advanced seed germination and pest control
paper_content:
Agriculture can be split into multiple phases like sowing the seeds, germinating the seeds, irrigating, etc. Our innovation deals with automating certain phases of agriculture, such as irrigation and pest control, using artificial intelligence, machine learning and Internet of things. This can be achieved with the help of Ubiquitous computing (Nomadic computing) and sensors working on the concept of IOT. Initially using data analytics techniques, the data about the moisture level maintained in various soils over the entire period to grow various crops are recorded in an online database. In automation the system uses this data to maintain the moisture level in the soil to grow the desired crop. The moisture level in the soil is monitored using multiple moisture sensors which are located all around the field. When the system finds that the moisture is less in the soil than the required amount, as per the data in the database, the field is irrigated using an existing system which uses mechanical valves to allow water to various fields in the farm. But the existing system only works based on the commands from user, in the form of messages from a mobile computer, to control the valves. The proposed system automates this system by integrating it, using the concept of IOT, to its centralised computer system. The proposed system also includes pest control. Currently we use chemicals to kill the pests, whereas, the proposed system uses an ultrasonic sound emitter to keep the rodents away from the farm. It is scientifically proven that the ultrasonic sounds are capable of keeping the rodents away. Also it is proven that certain frequency sound waves boost the growth rate of crops. Thus using all these technologies, low cost and efficient growth of crops is achieved, helping in development of agriculture field.
---
paper_title: Implementation of smart infrastructure and non-invasive wearable for real time tracking and early identification of diseases in cattle farming using IoT
paper_content:
The major objective of this system is to make the infrastructure of cattle farming smarter and to implement a non-invasive wearable to track the physiological and biological activities of cattle using the Internet of Things (IoT). Each head of cattle is tagged with a wearable device. The wearable device and sink node are designed based on a device-to-cloud architecture. The wearable device is for early detection of illness, abnormality detection, emergency handling, location tracking, calving time intimation and determining disease before visual signs appear. The sink node is responsible for smart lighting, smart ventilation, smart watering and smoke detection along with sprinkler actuation to make the infrastructure smarter and safer. All sensor readings are forwarded to the ThingSpeak cloud to enable remote access. Through the ThingSpeak cloud, the real-time health characteristics of individual cattle, the smoke level in the farm, and daily water and electricity usage are displayed as line graphs. All data, along with time and date, can be exported as a spreadsheet for further analysis. As a result, overall cattle health and milk production are improved while cattle health inspection costs are reduced, with the device ensuring small size, low cost, high consistency and reliability.
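A hedged sketch of the cloud-forwarding step: pushing readings to ThingSpeak's public REST update endpoint. The channel field mapping, the sensor set and the write API key are placeholders; the paper describes the wearable's sensors only at a high level.

```python
# Sketch of forwarding sensor readings to the ThingSpeak update endpoint.
# The channel field mapping and the write API key are placeholders; the
# wearable's actual sensor set is described in the paper only at a high level.
import requests

THINGSPEAK_URL = "https://api.thingspeak.com/update"
WRITE_API_KEY = "REPLACE_WITH_CHANNEL_WRITE_KEY"

def push_reading(body_temp_c, activity_index, smoke_ppm):
    params = {
        "api_key": WRITE_API_KEY,
        "field1": body_temp_c,
        "field2": activity_index,
        "field3": smoke_ppm,
    }
    resp = requests.get(THINGSPEAK_URL, params=params, timeout=10)
    # ThingSpeak returns the new entry id, or 0 if the update was rejected.
    return resp.text.strip() != "0"

if __name__ == "__main__":
    print("upload ok:", push_reading(38.6, 0.72, 12))
```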
---
paper_title: The comparison of soil sensors for integrated creation of IOT-based Wetting front detector (WFD) with an efficient irrigation system to support precision farming
paper_content:
This study investigates the prototyping of an integrated Internet of Things based Wetting Front Detector (IoT-WFD), focusing on how to enhance the WFD design for a smart irrigation system. The empirical study was conducted with two sensor types for detecting wetting fronts, a Frequency Domain Reflectometry (FDR) sensor and a resistor-based (RB) sensor, integrated into a low-cost WFD design. The results of this study point toward the IoT-WFD as an appropriate technology for providing real-time wetting front information in soil for agricultural water management, precision agriculture and efficient irrigation, together with related decision knowledge that matches the technology trend and smart farmers' requirements. Evidence of the positive results of this prototyping is provided.
---
paper_title: Design and Implementation of Smart Irrigation System Based on LoRa
paper_content:
Water is one of the most important natural resource problems requiring more attention in the 21st century. Irrigation methods in traditional agriculture make poor use of water resources. With the development of the Internet of Things (IoT), smart irrigation systems have become a new trend in the field of agricultural irrigation. This paper proposes a LoRa-based smart irrigation system. In this system, the irrigation node is mainly composed of a LoRa communication module, a solenoid valve and a hydroelectric generator. The irrigation node sends data to the cloud through LoRa gateways via wireless transmission. The system can be controlled remotely by mobile applications. Experimental results show that both the transmission distance and the energy consumption of the proposed system are reliable.
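LoRa radio driver APIs differ between modules, so the sketch below only shows the sensor-payload framing an irrigation node might hand to its radio driver. The node id and field layout are illustrative choices, not the frame format used in the paper.

```python
# Sensor-payload framing for a LoRa uplink. LoRa driver APIs differ between
# modules, so only the byte packing is sketched here; node_id and the field
# layout are illustrative choices, not the paper's frame format.
import struct

def pack_uplink(node_id, soil_moisture_pct, valve_open, battery_v):
    # <BHBH -> node id (uint8), moisture*100 (uint16), valve flag (uint8),
    #          battery millivolts (uint16); little-endian, 6 bytes total.
    return struct.pack("<BHBH", node_id,
                       int(soil_moisture_pct * 100),
                       1 if valve_open else 0,
                       int(battery_v * 1000))

payload = pack_uplink(node_id=3, soil_moisture_pct=37.5, valve_open=True, battery_v=3.71)
print(payload.hex(), len(payload), "bytes")
```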
---
paper_title: Rice crop monitoring system — An IoT based machine vision approach
paper_content:
In India, farmers lose 37% of the average estimated rice crop every year because of the bacterial blight disease caused by Xanthomonas oryzae pv. oryzae (XOO). The system described in "Rice crop monitoring system - An IoT machine vision approach" is beneficial for reducing the spread of infection and increasing crop production. It detects the infected part of the crop and takes immediate remedial action by automatically spraying pesticide on the infected area of the specific affected plant, tracking its exact location and taking periodic images of the crop over a large area. This system reduces the wastage of pesticide across the entire farm and also reduces human effort.
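The abstract does not detail its vision algorithm, so the following is only one plausible, hedged approach: estimating the blighted fraction of a leaf image by HSV colour thresholding with OpenCV. The HSV ranges, the image filename and the spraying trigger threshold are rough placeholders that would need field calibration.

```python
# Illustrative OpenCV sketch: estimate the blighted fraction of a leaf image
# by HSV colour thresholding. The HSV ranges below are rough placeholders for
# "healthy green" vs "yellow/brown lesion" and would need field calibration.
import cv2

def lesion_fraction(image_path):
    bgr = cv2.imread(image_path)
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    healthy = cv2.inRange(hsv, (35, 40, 40), (85, 255, 255))   # greens
    lesion = cv2.inRange(hsv, (10, 40, 40), (34, 255, 255))    # yellow/brown
    leaf_pixels = cv2.countNonZero(healthy) + cv2.countNonZero(lesion)
    return cv2.countNonZero(lesion) / max(leaf_pixels, 1)

if __name__ == "__main__":
    frac = lesion_fraction("paddy_leaf.jpg")   # hypothetical image file
    if frac > 0.2:                             # example trigger threshold
        print(f"{frac:.0%} suspected lesion area - trigger targeted spraying")
```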
---
paper_title: Design of Urban Greening Intelligent Monitoring System Based on Internet of Things Technology
paper_content:
Traditional on-site control of irrigation in urban greening is often accompanied by problems such as low irrigation efficiency, high labor costs and low precision. An intelligent monitoring system integrating automatic irrigation and remote monitoring is designed to solve these problems. The system is based on Internet of Things (IoT) technology and can be divided into the three layers of the IoT. In the transport layer, we built a Zigbee wireless sensor network (WSN). In the application layer, we used J2EE technology to build a fully functional web server application. An iterative learning control (ILC) algorithm is used to improve irrigation efficiency by self-correcting the initial lead time for closing the valve. Both the hardware and the software adopt a modular design, and through the cooperation between hardware and software it is easy to add or remove greening plots, ZigBee nodes and sensors. The experimental results show that the system is flexible to build and expand, operates stably, offers comprehensive functions and achieves high irrigation efficiency, so it is suitable for introduction across the whole city.
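A minimal sketch of the scalar iterative-learning update behind the valve-closing correction: lead[k+1] = lead[k] + gamma * error[k], where the error is the overshoot of delivered water past the target. The learning gain, flow rate and target volume are example values, not the paper's parameters.

```python
# Scalar iterative-learning-control sketch for the valve-closing lead time:
# lead[k+1] = lead[k] + gamma * error[k], where the error is the overshoot
# of delivered water past the target. Gain and targets are example values.
def ilc_update(lead_s, delivered_l, target_l, gamma=0.5, litres_per_second=0.8):
    overshoot_l = delivered_l - target_l
    # Convert the volume error into seconds and advance/retard the lead time.
    return lead_s + gamma * (overshoot_l / litres_per_second)

lead = 0.0   # initial lead time before closing the valve (s)
for delivered in [108.0, 104.0, 101.5, 100.4]:   # simulated irrigation cycles
    lead = ilc_update(lead, delivered_l=delivered, target_l=100.0)
    print(f"next lead time: {lead:.2f} s")
```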
---
paper_title: Implement smart farm with IoT technology
paper_content:
With the advent of the Internet of Things (IoT) and industrialization, the development of Information Technology (IT) has led to various studies not only in industry but also in agriculture. In particular, IoT technology can overcome the distance and location constraints of the wired communication systems used in existing farms, and agricultural IT development can be expected from the automation of agricultural data collection. In this paper, a smart farm system was constructed using low-power Bluetooth and Low Power Wide Area Network (LPWAN) communication modules in addition to the wired communication network used in the existing farm. The system also implements monitoring and control functions using the MQ Telemetry Transport (MQTT) communication method, an IoT-dedicated protocol, thereby enhancing the potential for the development of agricultural IoT.
---
paper_title: Cloud based data analysis and monitoring of smart multi-level irrigation system using IoT
paper_content:
India is one of the countries with the scarcest water resources in the world; due to poor utilization of water resources, some parts of the country are facing the risk of drought. In order to conserve existing water resources and efficiently manage them for agriculture, recent advances in technology can be used. The Internet of Things is one such new technology that can help the country reduce the overall impact of faulty water management in the agriculture sector. In this paper, we have designed and developed a new framework for multilevel farming in urban areas where cultivation space is limited. We provide a local node for each level with its own local decision-making system, sensors and actuators, customized to the selected crop. These local nodes communicate with a centralized node via wireless communication. This centralized node is connected to a cloud server where the received data are stored and processed. Cloud-based data analysis and monitoring allows the user to analyze and monitor the irrigation system through the internet, providing ubiquitous access. Our experimental results show reduced water consumption and better power utilization.
---
paper_title: Applied internet of thing for smart hydroponic farming ecosystem (HFE)
paper_content:
At present, Thailand is working to fully apply the Internet of Things (IoT) [12] in daily life, because IoT is a new technology trend and is very popular today. The IoT helps us link objects and mechanisms to the internet for remote control. In addition, Thailand focuses on agriculture because it is an agricultural country and farming is the main occupation of its people, so agriculture takes many forms in Thailand; hydroponics [5] [11] is an interesting new format that uses less area than others. Although hydroponics uses less space than conventional planting, it can provide many products for the farmer. However, hydroponic farming is difficult to plant and manage for those who are not professional farmers or do not have good knowledge about farming. This paper proposes a Hydroponic Farming Ecosystem (HFE) that uses IoT devices to monitor humidity, nutrient solution temperature, air temperature, pH and Electrical Conductivity (EC). The HFE is designed to support non-professional farmers, city people who have limited knowledge of farming, and people interested in vertical planting in very small areas of the city such as building tops, balconies of small rooms in high-rise buildings, and small office spaces. To make the system easy to control and use, an Android application is provided to control the IoT devices in the HFE and alert users when their farm is in an abnormal situation.
---
paper_title: Application of MQTT protocol for real time weather monitoring and precision farming
paper_content:
Smart farming has become the main trend in the agriculture sector. Industrial investment is increasing day by day to integrate IoT solutions into smart farming products, cut down the overall cost, and increase the quality and quantity of the harvest. Advancements in communication technologies such as GSM and GPRS have enabled the remote control of irrigation systems, but a precision agriculture system needs continuous monitoring of all climatic conditions. We propose an MQTT-protocol-based smart farming solution to collect knowledge from the field environment for continuous analysis, and to develop and deploy a smart technology for the agriculture sector that improves environmental and agricultural sustainability, crop traceability and overall yield. The focus of the proposed system is environmental parameters such as crop temperature, ambient humidity, soil moisture level and light intensity. Finally, we evaluate the performance and efficiency of the MQTT protocol under different loading conditions in this application.
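As an illustration of the publish side of such an MQTT pipeline, here is a hedged sketch using the paho-mqtt client library. The broker host, topic name, JSON payload layout and QoS level are placeholders, not the configuration evaluated in the paper.

```python
# Sketch of a field node publishing readings over MQTT with paho-mqtt.
# Broker host, topic name and QoS level are placeholders, not the paper's setup.
import json
import paho.mqtt.client as mqtt

BROKER = "broker.example.org"     # hypothetical broker address
TOPIC = "farm/plot1/environment"  # hypothetical topic

def publish_reading(temp_c, humidity_pct, soil_pct, lux):
    client = mqtt.Client()  # paho-mqtt 1.x style; 2.x also takes a CallbackAPIVersion
    client.connect(BROKER, 1883, keepalive=60)
    client.loop_start()                        # background network loop
    payload = json.dumps({"temp": temp_c, "rh": humidity_pct,
                          "soil": soil_pct, "light": lux})
    info = client.publish(TOPIC, payload, qos=1)
    info.wait_for_publish()                    # block until the broker acks
    client.loop_stop()
    client.disconnect()

if __name__ == "__main__":
    publish_reading(29.4, 61.0, 42.5, 18000)
```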
---
paper_title: IoT Solutions for Precision Farming and Food Manufacturing: Artificial Intelligence Applications in Digital Food
paper_content:
In many respects, farming and food processing have lagged other industries when it comes to adoption of innovative technology. Whilst bioengineering has brought about seeds with much higher yield and less need for water and nutrients, it is only now with IoT that farmers can work on the intensive use of natural resources to increase the sustainability of their operations. In the last ten years, high-end machinery has evolved primarily to the benefit of larger corporations, with the introduction of satellite driven machines, sensors and all components of precision farming. The most recent IoT advances bring about a level of simplification and cost reduction that enable all farmers to benefit and a true adoption of prescription agriculture. In this conference presentation, we will examine a practical case of a Malthouse, where careful modeling of how CO2, Temperature, Humidity and PH vary in the three steps of the malting process, enabled an artificial intelligence system to prescribe different setting and schedules. The end result is malt with higher content of starch and proteins, which in turn means higher alcohol in the downstream process. A second example on the cultivation of Medical Marijuana, where similarly but in a more complex fashion (138 variables) the artificial intelligence supported the tuning of many settings and schedules, is presented only in the conference.
---
paper_title: A novel technology for smart agriculture based on IoT with cloud computing
paper_content:
The Internet of Things (IoT) is one of the fastest developing technologies throughout India, yet most of the population (70%) in India depends on agriculture. This situation is one of the reasons hindering the development of the country. To solve this problem there is only one solution: smart agriculture, adding new technological methods in place of the present traditional agricultural methods. Hence we propose a new IoT technology with cloud computing and Li-Fi. Wi-Fi is great for general wireless coverage within buildings, whereas Li-Fi [10] provides wireless data coverage with high density in a confined area. Li-Fi offers better bandwidth, efficiency, availability and security than Wi-Fi and has already achieved blisteringly high speeds in the lab. First, this project includes a remote-controlled process to perform tasks like spraying, weeding, bird and animal scaring, keeping vigilance, moisture sensing, etc. Secondly, it includes smart warehouse management, covering temperature maintenance, humidity maintenance and theft detection in the warehouse. Thirdly, it provides intelligent decision making based on accurate real-time field data for smart irrigation with smart control. All these operations are controlled through any remote smart device or computer connected to the Internet, and the operations are performed by interfacing cameras, sensors, and Li-Fi or ZigBee modules.
---
paper_title: Design of an Intelligent Management System for Agricultural Greenhouses Based on the Internet of Things
paper_content:
China is a large agricultural country with the largest population in the world. This creates a high demand for food, which is prompting the study of high-quality and high-yielding crops. China's current agricultural production is sufficient to feed the nation; however, compared with developed countries, agricultural farming is still lagging behind, mainly because the system of growing agricultural crops is not based on maximizing output; the latter would include scientific sowing, irrigation and fertilization. In the past few years many seasonal fruits have been offered for sale in markets, but these crops are grown in traditional, backward agricultural greenhouses, and large-scale changes are needed to modernize production. The reform of small-scale greenhouse agricultural production is relatively easy and could be implemented. The concept of the Agricultural Internet of Things applies networking technology to agricultural production. The hardware part of this agricultural IoT includes temperature, humidity and light sensors and processors with a large data processing capability; these hardware devices are connected by short-distance wireless communication technology, such as Bluetooth, WiFi or Zigbee. In fact, Zigbee technology, because of its convenient networking and low power consumption, is widely used in the agricultural internet. The sensor network is combined with well-established web technology, in the form of a wireless sensor network, to remotely control and monitor data from the sensors. In this paper a smart greenhouse management system based on the Internet of Things is proposed using sensor networks and web-based technologies. The system consists of sensor networks and a software control system. The sensor network consists of the master control center and various sensors using Zigbee protocols. The hardware control center communicates with a middleware system via serial network interface converters. The middleware communicates with the hardware network using a lower-level interface and with a web system using an upper-level interface. The top web system provides users with an interface to view and manage the hardware facilities; administrators can thus view the status of agricultural greenhouses and issue commands to the sensors through this system in order to remotely manage the temperature, humidity and irrigation in the greenhouses. The main topics covered in this paper are: 1. Research into the current development of new technologies applicable to agriculture, a summary of the strong points of applying the Agricultural Internet of Things both at home and abroad, and some proposed new methods of agricultural greenhouse management. 2. An analysis of system requirements, the users' expectations of the system and the response to the needs analysis, and the overall design of the system to determine its architecture. 3. The use of software engineering to ensure that the functional modules of the system, as far as possible, meet the requirements of high cohesion and low coupling between modules; the detailed design and implementation of each module is also considered.
---
paper_title: IoT based smart irrigation system and nutrient detection with disease analysis
paper_content:
Agriculture remains the sector that contributes the most to India's GDP, but when considering the technology deployed in this field, we find that development has not been tremendous. After consulting with the Kerala Agricultural University, Mannuthy, and the Kerala Rice Research Station, Vytilla, we identified a few fundamental issues faced by paddy farmers today. These include the problem of over- or under-watering and the need for regular manual irrigation. Furthermore, when it comes to rice, the staple crop of Kerala, there is no system for automatically monitoring the diseases associated with the rice species and checking whether the crop is supplied with an ample amount of nutrients. In this paper, we have devised a means for cost-effective automated irrigation and fertigation along with MATLAB-based image processing for identifying rice diseases and nutrient deficiencies. Here, we focus on two important nutrients, namely magnesium and nitrogen. The hardware consists of a Raspberry Pi, a DHT11 temperature and humidity sensor and solenoid valves. Furthermore, the proposed model enables the farmer to monitor weather conditions using an Android application, with which he also has the choice to override the system if required.
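A hedged sketch of the sensing side: reading the DHT11 with the legacy Adafruit_DHT Python library and deciding whether the solenoid valve should open. The GPIO pin and the temperature/humidity set points are assumed values, not the ones used in the paper, and the actual decision also involves soil data not modeled here.

```python
# Sketch of the sensing side: read the DHT11 with the legacy Adafruit_DHT
# library and decide whether the solenoid valve should open. GPIO pin and
# thresholds are assumed values, not the ones used in the paper.
import Adafruit_DHT

DHT_PIN = 4                 # assumed GPIO pin for the DHT11 data line
HUMIDITY_MAX = 80.0         # percent; assumed set point
TEMP_MAX = 33.0             # degC; assumed set point

def should_irrigate():
    humidity, temperature = Adafruit_DHT.read_retry(Adafruit_DHT.DHT11, DHT_PIN)
    if humidity is None or temperature is None:
        return False        # sensor read failed; fail safe (valve stays closed)
    # Open the valve when it is hot and the air is dry.
    return temperature > TEMP_MAX and humidity < HUMIDITY_MAX

if __name__ == "__main__":
    print("open solenoid valve:", should_irrigate())
```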
---
paper_title: Internet of Things (IoT): A Vision, Architectural Elements, and Future Directions
paper_content:
Ubiquitous sensing enabled by Wireless Sensor Network (WSN) technologies cuts across many areas of modern day living. This offers the ability to measure, infer and understand environmental indicators, from delicate ecologies and natural resources to urban environments. The proliferation of these devices in a communicating-actuating network creates the Internet of Things (IoT), wherein sensors and actuators blend seamlessly with the environment around us, and the information is shared across platforms in order to develop a common operating picture (COP). Fueled by the recent adaptation of a variety of enabling wireless technologies such as RFID tags and embedded sensor and actuator nodes, the IoT has stepped out of its infancy and is the next revolutionary technology in transforming the Internet into a fully integrated Future Internet. As we move from www (static pages web) to web2 (social networking web) to web3 (ubiquitous computing web), the need for data-on-demand using sophisticated intuitive queries increases significantly. This paper presents a Cloud centric vision for worldwide implementation of Internet of Things. The key enabling technologies and application domains that are likely to drive IoT research in the near future are discussed. A Cloud implementation using Aneka, which is based on interaction of private and public Clouds is presented. We conclude our IoT vision by expanding on the need for convergence of WSN, the Internet and distributed computing directed at technological research community.
---
paper_title: Intelligent irrigation system — An IOT based approach
paper_content:
The Internet of Things (IoT) has been described as a new wave of information and communication technology (ICT) advancements. The IoT is a multidisciplinary concept that encompasses a wide range of technologies, application domains, device capabilities, and operational strategies. Ongoing IoT research activities are directed towards the definition and design of standards and open architectures, which still have issues requiring a global consensus before final deployment. This paper gives an overview of IoT technologies and applications related to agriculture, compares them with other survey papers, and proposes a novel irrigation management system. The main objective of this work is farming in which various new technologies are used to yield higher crop growth and manage the water supply. An automated control feature using the latest electronic technology is proposed: a microcontroller turns the pumping motor ON and OFF on detecting the dampness content of the earth, with a GSM phone line used after measuring the temperature, humidity, and soil moisture.
---
paper_title: A sustainable agricultural system using IoT
paper_content:
The Internet of Things (IoT) is a worldwide network of web-connected objects, or things embedded with electronics, software, sensors and other instruments, which is becoming an integral component of the future internet. This work developed a system that automatically monitors the agricultural field as well as performing live video streaming of the field from the server itself through a Raspberry Pi camera. The agricultural fields are monitored for environmental temperature, humidity and soil moisture. Automatic irrigation is performed based on the set points of the temperature, humidity and soil moisture sensors. The data collected from the field are monitored through IoT, then processed, and the necessary information is passed on to the field owners for countermeasures.
---
paper_title: Providing smart agricultural solutions/techniques by using Iot based toolkit
paper_content:
IoT integrates pervasive computing, ubiquitous communications and ambient intelligence. India is known as the ‘land of agriculture’. Approximately 70% of the Indian population is connected with agriculture and its related activities. Most agricultural activities are carried out by villagers. Natural resources and weather conditions are vital factors in agriculture. This paper proposes automation and a smart IoT-based solution for agriculture. The highlighting features of this paper include temperature and humidity detection, soil moisture detection, leaf wetness detection, wind speed/direction and rainfall detection, soil pH detection, seed recognition and an efficient irrigation system. The proposed system uses a Raspberry Pi microcontroller with sensors which are used to sense the different environmental conditions.
---
paper_title: Multistage Signaling Game-Based Optimal Detection Strategies for Suppressing Malware Diffusion in Fog-Cloud-Based IoT Networks
paper_content:
We consider the Internet of Things (IoT) with malware diffusion and seek optimal malware detection strategies for preserving the privacy of smart objects in IoT networks and suppressing malware diffusion. To this end, we propose a malware detection infrastructure realized by an intrusion detection system (IDS) with cloud and fog computing to overcome the IDS deployment problem in smart objects due to their limited resources and heterogeneous subnetworks. We then employ a signaling game to disclose interactions between smart objects and the corresponding fog node because of malware uncertainty in smart objects. To minimize privacy leakage of smart objects, we also develop optimal strategies that maximize malware detection probability by theoretically computing the perfect Bayesian equilibrium of the game. Moreover, we analyze the factors influencing the optimal probability of a malicious smart object diffusing malware, and factors influencing the performance of a fog node in determining an infected smart object. Finally, we present a framework to demonstrate a potential and practical application of suppressing malware diffusion in IoT networks.
---
paper_title: Material intelligence as a driver for value creation in IoT-enabled business ecosystems
paper_content:
Purpose: The purpose of this study is to identify and discuss the role of intelligent materials in the emergence of new business models based on the Internet of Things (IoT). The study suggests new areas for further research to better understand the influences of material intelligence on the business models in industry-wide service ecosystems. Design/methodology/approach: The study uses data from an earlier study of intelligent materials in the steel industry networks. The insights are based on 34 qualitative interviews among 15 organizations in the industry. The data are reanalyzed for this study. Findings: The observations from the steel industry show how material intelligence can be harnessed for value creation in IoT-based business ecosystems. The results suggest that not all “things” connected to the IoT need to be intelligent, if information related to the things is collected, stored and shared for collaborative value creation among the actors involved in the business ecosystem. Research limitations/implications: The study discusses how IoT deployments allow businesses to benefit from the velocity and variety of information associated with things and guides future research to study the ways in which value is created through IoT-enabled business models. Practical implications: Rather than focusing on improving the efficiency of the supply network, the study presents new paths for competitive advantages in the new IoT ecosystems. Originality/value: The study contributes to the mounting research on the IoT by identifying and discussing the critical aspects of how IoT can transform business models and supply networks within end-to-end ecosystems.
---
paper_title: IOT Based Monitoring System in Smart Agriculture
paper_content:
The Internet of Things (IoT) plays a crucial role in smart agriculture. Smart farming is an emerging concept because IoT sensors are capable of providing information about agricultural fields. The paper aims at making use of evolving technology, i.e., IoT, for smart agriculture using automation. Monitoring environmental factors is a major step towards improving the yield of crops. The features of this paper include monitoring temperature and humidity in the agricultural field through sensors using the CC3200 single chip. A camera is interfaced with the CC3200 to capture images and send those pictures through MMS to the farmer's mobile using Wi-Fi.
---
paper_title: Pentaho and Jaspersoft: A Comparative Study of Business Intelligence Open Source Tools Processing Big Data to Evaluate Performances
paper_content:
Regardless of the recent growth in the use of “Big Data” and “Business Intelligence” (BI) tools, little research has been undertaken about the implications involved. Analytical tools affect the development and sustainability of a company, as evaluating clientele needs to advance in the competitive market is critical. With the advancement of the population, processing large amounts of data has become too cumbersome for companies. At some stage in a company’s lifecycle, all companies need to create new and better data processing systems that improve their decision-making processes. Companies use BI Results to collect data that is drawn from interpretations grouped from cues in the data set BI information system that helps organisations with activities that give them the advantage in a competitive market. However, many organizations establish such systems, without conducting a preliminary analysis of the needs and wants of a company, or without determining the benefits and targets that they aim to achieve with the implementation. They rarely measure the large costs associated with the implementation blowout of such applications, which results in these impulsive solutions that are unfinished or too complex and unfeasible, in other words unsustainable even if implemented. BI open source tools are specific tools that solve this issue for organizations in need, with data storage and management. This paper compares two of the best positioned BI open source tools in the market: Pentaho and Jaspersoft, processing big data through six different sized databases, especially focussing on their Extract Transform and Load (ETL) and Reporting processes by measuring their performances using Computer Algebra Systems (CAS). The ETL experimental analysis results clearly show that Jaspersoft BI has an increment of CPU time in the process of data over Pentaho BI, which is represented by an average of 42.28% in performance metrics over the six databases. Meanwhile, Pentaho BI had a marked increment of the CPU time in the process of data over Jaspersoft evidenced by the reporting analysis outcomes with an average of 43.12% over six databases that prove the point of this study. This study is a guiding reference for many researchers and those IT professionals who support the conveniences of Big Data processing, and the implementation of BI open source tool based on their needs.
---
paper_title: Performance evaluation of IEEE 802.15.4-compliant smart water meters for automating large-scale waterways
paper_content:
Climate change and resultant scarcity of water are becoming major challenges for countries around the world. With the advent of Wireless Sensor Networks (WSN) in the last decade and a relatively new concept of Internet of Things (IoT), embedded systems developers are now working on designing control and automation systems that are lower in cost and more sustainable than the existing telemetry systems for monitoring. The Indus river basin in Pakistan has one of the world's largest irrigation systems and it is extremely challenging to design a low-cost embedded system for monitoring and control of waterways that can last for decades. In this paper, we present a hardware design and performance evaluation of a smart water metering solution that is IEEE 802.15.4-compliant. The results show that our hardware design is as powerful as the reference design, but allows for additional flexibility both in hardware and in firmware. The indigenously designed solution has a power added efficiency (PAE) of 24.7% that is expected to last for 351 and 814 days for nodes with and without a power amplifier (PA). Similarly, the results show that a broadband communication (434 MHz) over more than 3km can be supported, which is an important stepping stone for designing a complete coverage solution of large-scale waterways.
---
paper_title: Intelligent soil quality monitoring system for judicious irrigation
paper_content:
As the world population increases exponentially, it is mandatory that we adopt modern agricultural practices to meet food safety demands. Intelligent sensor networks acquire data about the soil regarding moisture and nutrient deficiency. These data can be employed to automate irrigation and alert the farmer about nutrient deficiency. The data obtained from the soil are pushed to the cloud. Its great processing capability makes cloud storage an important solution for storing the vast amount of data generated by the wireless sensor network, which can also be viewed by the farmer through a smartphone application. This paper proposes and evaluates a real deployment of a wireless sensor network based on an Internet of Things platform.
---
paper_title: Review: Security and Privacy Issues of Fog Computing for the Internet of Things (IoT)
paper_content:
The Internet of Things (IoT), its devices, and remote data centers need to connect. The purpose of fog is to reduce the amount of data transported for processing, analysis, and storage, and to speed up computing processes. The gap between fog computing technologies and devices needs to narrow, as growth in business today relies on the ability to connect to digital channels for processing large amounts of data. Cloud computing is unfeasible for many Internet of Things applications; therefore fog computing is often seen as a viable alternative. Fog is suitable for many IoT services as it has enabled an extensive collection of benefits, such as decreased bandwidth, reduced latency, and enhanced security. However, fog devices that are placed at the edge of the internet have met numerous privacy and security threats. This study aims to examine and highlight the security and privacy issues of fog computing through a comprehensive review of recently published literature on fog computing and to suggest solutions for the identified problems. Data extracted from 34 peer-reviewed scientific publications (2011–2017) were studied, leading to the identification of 49 different issues raised in relation to fog computing. This study revealed a general agreement among researchers about the novelty of fog computing and its early stage of development, and identifies several challenges that need to be met before its wider application and use reaches its full potential.
---
paper_title: IoT based smart irrigation monitoring and controlling system
paper_content:
The interconnection of a number of devices through the internet describes the Internet of Things (IoT). Every object is connected with every other through a unique identifier so that data can be transferred without human-to-human interaction. This allows establishing solutions for better management of natural resources. Smart objects embedded with sensors enable interaction with the physical and logical worlds according to the concept of IoT. The system proposed in this paper is based on IoT and uses real-time input data. The smart farm irrigation system uses an Android phone for remote monitoring and controlling of drips through a wireless sensor network. Zigbee is used for communication between sensor nodes and the base station. Real-time handling and presentation of the sensed data on the server is accomplished using a web-based Java graphical user interface. Wireless monitoring of the field irrigation system reduces human intervention and allows remote monitoring and controlling on an Android phone. Cloud computing is an attractive solution to the large amount of data generated by the wireless sensor network. This paper proposes and evaluates a cloud-based wireless communication system to monitor and control a set of sensors and actuators to assess the plants' water needs.
---
paper_title: Implementation of smart infrastructure and non-invasive wearable for real time tracking and early identification of diseases in cattle farming using IoT
paper_content:
The major objective of this system is to make the infrastructure of cattle farming smarter and to implement a non-invasive wearable to track the physiological and biological activities of cattle using the Internet of Things (IoT). Each head of cattle is tagged with a wearable device. The wearable device and sink node are designed based on a device-to-cloud architecture. The wearable device is for early detection of illness, abnormality detection, emergency handling, location tracking, calving time intimation and determining disease before visual signs appear. The sink node is responsible for smart lighting, smart ventilation, smart watering and smoke detection along with sprinkler actuation to make the infrastructure smarter and safer. All sensor readings are forwarded to the ThingSpeak cloud to enable remote access. Through the ThingSpeak cloud, the real-time health characteristics of individual cattle, the smoke level in the farm, and daily water and electricity usage are displayed as line graphs. All data, along with time and date, can be exported as a spreadsheet for further analysis. As a result, overall cattle health and milk production are improved while cattle health inspection costs are reduced, with the device ensuring small size, low cost, high consistency and reliability.
---
paper_title: Adoption of the Internet of Things technologies in business procurement: impact on organizational buying behavior
paper_content:
Purpose: The purpose of this paper is to discuss the potential of Internet of Things (IoT) to affect organizational buying behavior. Potential impacts on organizational communication, buying center structure and processes and privacy and security issues are discussed. Design/methodology/approach: This is a conceptual paper that advances testable propositions based on the technology overview and use of existing organizational buying behavior theory. Findings: This paper concludes that major changes are likely as a result of the adoption of IoT. The nature of organizational communication may shift to more machine-to-machine communication and buying centers may become smaller, less hierarchical but more coordinated, with less conflict. In addition, privacy and security concerns will need to be addressed. Originality/value: This is the first attempt to conceptualize the impact of adoption of IoT technologies that may help future researchers to examine the impact on a more granular level. For practitioners, it may help them prepare for the impacts of the IoT technological juggernaut.
---
paper_title: Opinion: Smart farming is key to developing sustainable agriculture
paper_content:
Agriculture has seen many revolutions, whether the domestication of animals and plants a few thousand years ago, the systematic use of crop rotations and other improvements in farming practice a few hundred years ago, or the "green revolution" with systematic breeding and the widespread use of man-made fertilizers and pesticides a few decades ago. We suggest that agriculture is undergoing a fourth revolution triggered by the exponentially increasing use of information and communication technology (ICT) in agriculture. [Figure: New technologies, such as unmanned aerial vehicles with powerful, lightweight cameras, allow for improved farm management advice.] Autonomous, robotic vehicles have been developed for farming purposes, such as mechanical weeding, application of fertilizer, or harvesting of fruits. The development of unmanned aerial vehicles with autonomous flight control (1), together with the development of lightweight and powerful hyperspectral snapshot cameras that can be used to calculate biomass development and fertilization status of crops (2, 3), opens the field for sophisticated farm management advice. Moreover, decision-tree models are available now that allow farmers to differentiate between plant diseases based on optical information (4). Virtual fence technologies (5) allow cattle herd management based on remote-sensing signals and sensors or actuators attached to the livestock. Taken together, these technical improvements constitute a technical revolution that will generate disruptive changes in agricultural practices. This trend holds for farming not only in developed countries but also in developing countries, where deployments in ICT (e.g., use of mobile phones, access to the Internet) are being adopted at a rapid pace and could become the game-changers in the future (e.g., in the form of seasonal drought forecasts, climate-smart agriculture). Such profound changes in practice come not only with opportunities but also big challenges. It is crucial to point them out at an early stage of this …
---
paper_title: Comparison of edge computing implementations: Fog computing, cloudlet and mobile edge computing
paper_content:
When it comes to storage and computation of large scales of data, Cloud Computing has acted as the de-facto solution over the past decade. However, with the massive growth in intelligent and mobile devices coupled with technologies like Internet of Things (IoT), V2X Communications, Augmented Reality (AR), the focus has shifted towards gaining real-time responses along with support for context-awareness and mobility. Due to the delays induced on the Wide Area Network (WAN) and location agnostic provisioning of resources on the cloud, there is a need to bring the features of the cloud closer to the consumer devices. This led to the birth of the Edge Computing paradigm which aims to provide context aware storage and distributed Computing at the edge of the networks. In this paper, we discuss the three different implementations of Edge Computing namely Fog Computing, Cloudlet and Mobile Edge Computing in detail and compare their features. We define a set of parameters based on which one of these implementations can be chosen optimally given a particular use-case or application and present a decision tree for the selection of the optimal implementation.
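As a purely illustrative companion to this comparison, the toy selector below chooses among the three edge implementations from a few of the parameters the paper discusses. The rules are simplified examples and do not reproduce the paper's actual decision tree.

```python
# Toy rule-based selector between edge-computing implementations. The rules
# are illustrative only and do not reproduce the decision tree in the paper.
def choose_edge_implementation(latency_ms_budget, needs_mobility_support,
                               operator_infrastructure_available):
    if needs_mobility_support and operator_infrastructure_available:
        return "Mobile Edge Computing"   # RAN-integrated, handles handover
    if latency_ms_budget < 20 and not operator_infrastructure_available:
        return "Cloudlet"                # nearby dedicated micro data centre
    return "Fog Computing"               # hierarchy of heterogeneous fog nodes

print(choose_edge_implementation(10, needs_mobility_support=False,
                                 operator_infrastructure_available=False))
```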
---
paper_title: Design and Implementation of Smart Irrigation System Based on LoRa
paper_content:
Water is one of the most important natural resource problems requiring more attention in the 21st century. Irrigation methods in traditional agriculture make poor use of water resources. With the development of the Internet of Things (IoT), smart irrigation systems have become a new trend in the field of agricultural irrigation. This paper proposes a LoRa-based smart irrigation system. In this system, the irrigation node is mainly composed of a LoRa communication module, a solenoid valve and a hydroelectric generator. The irrigation node sends data to the cloud through LoRa gateways via wireless transmission. The system can be controlled remotely by mobile applications. Experimental results show that both the transmission distance and the energy consumption of the proposed system are reliable.
---
paper_title: Precision agriculture using remote monitoring systems in Brazil
paper_content:
Soil and nutrient depletion from intensive use of land is a critical issue for food production. An understanding of whether the soil is adequately treated with appropriate crop management practices in real-time during production cycles could prevent soil erosion and the overuse of natural or artificial resources to keep the soil healthy and suitable for planting. Precision agriculture traditionally uses expensive techniques to monitor the health of soil and crops including images from satellites and airplanes. Recently there are several studies using drones and a multitude of sensors connected to farm machinery to observe and measure the health of soil and crops during planting and harvesting. This paper describes a real-time, in-situ agricultural internet of things (IoT) device designed to monitor the state of the soil and the environment. This device was designed to be compatible with open hardware and it is composed of temperature and humidity sensors (soil and environment), electrical conductivity of the soil and luminosity, Global Positioning System (GPS) and a ZigBee radio for data communication. The field trial involved soil testing and measurements of the local climate in Sao Paulo, Brazil. The measurements of soil temperature, humidity and conductivity are used to monitor soil conditions. The local climate data could be used to support decisions about irrigation and other activities related to crop health. On-going research includes methods to reduce the consumption of energy and increase the number of sensors. Future applications include the use of the IoT device to detect fire in crops, a common problem in sugar cane crops and the integration of the IoT device with irrigation management systems to improve water usage.
---
paper_title: On Reducing IoT Service Delay via Fog Offloading
paper_content:
With the Internet of Things (IoT) becoming a major component of our daily life, understanding how to improve the quality of service (QoS) for IoT applications through fog computing is becoming an important problem. In this paper, we introduce a general framework for IoT-fog-cloud applications, and propose a delay-minimizing collaboration and offloading policy for fog-capable devices that aims to reduce the service delay for IoT applications. We then develop an analytical model to evaluate our policy and show how the proposed framework helps to reduce IoT service delay.
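The intuition behind fog offloading can be shown with a toy service-delay comparison: process at the fog node unless the cloud's faster processing outweighs its extra propagation delay. This is not the paper's analytical model, and all parameter values below are illustrative placeholders.

```python
# Toy service-delay comparison behind fog offloading: process at the fog node
# unless the cloud's faster processing outweighs the extra propagation delay.
# All parameter values are illustrative, not the paper's model or numbers.
def service_delay_ms(task_mbits, prop_ms, uplink_mbps, cpu_gcycles, cpu_ghz):
    transmission = task_mbits / uplink_mbps * 1000.0
    processing = cpu_gcycles / cpu_ghz * 1000.0
    return prop_ms + transmission + processing

task = dict(task_mbits=2.0, cpu_gcycles=0.5)
fog = service_delay_ms(**task, prop_ms=2.0, uplink_mbps=50.0, cpu_ghz=2.0)
cloud = service_delay_ms(**task, prop_ms=40.0, uplink_mbps=50.0, cpu_ghz=16.0)
print(f"fog: {fog:.1f} ms, cloud: {cloud:.1f} ms ->",
      "process at fog" if fog <= cloud else "offload to cloud")
```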
---
paper_title: Cloud based data analysis and monitoring of smart multi-level irrigation system using IoT
paper_content:
India is one of the countries with the scarcest water resources in the world; due to poor utilization of water resources, some parts of the country are facing the risk of drought. In order to conserve existing water resources and efficiently manage them for agriculture, recent advances in technology can be used. The Internet of Things is one such new technology that can help the country reduce the overall impact of faulty water management in the agriculture sector. In this paper, we have designed and developed a new framework for multilevel farming in urban areas where cultivation space is limited. We provide a local node for each level with its own local decision-making system, sensors and actuators, customized to the selected crop. These local nodes communicate with a centralized node via wireless communication. This centralized node is connected to a cloud server where the received data are stored and processed. Cloud-based data analysis and monitoring allows the user to analyze and monitor the irrigation system through the internet, providing ubiquitous access. Our experimental results show reduced water consumption and better power utilization.
---
paper_title: Smart farming using IOT
paper_content:
Even today, many developing countries are still using traditional methods and backward techniques in the agriculture sector. Little technological advancement that significantly increases production efficiency is found here. To increase productivity, a novel design approach is presented in this paper: smart farming with the help of the Internet of Things (IoT). A remote-controlled vehicle operates in both automatic and manual modes for various agricultural operations like spraying, cutting, weeding, etc. The controller continuously monitors the temperature, humidity and soil condition and accordingly supplies water to the field.
---
paper_title: FEMOS: Fog-Enabled Multitier Operations Scheduling in Dynamic Wireless Networks
paper_content:
Fog computing has recently emerged as a promising technique in content delivery wireless networks to alleviate the heavy bursty traffic burdens on backhaul connections. In order to improve the overall system performance, in terms of network throughput, service delay and fairness, it is very crucial and challenging to jointly optimize node assignments at control tier and resource allocation at access tier under dynamic user requirements and wireless network conditions. To solve this problem, in this paper, a fog-enabled multitier network architecture is proposed to model a typical content delivery wireless network with heterogeneous node capabilities in computing, communication, and storage. Further, based on Lyapunov optimization techniques, a new online low-complexity algorithm, namely fog-enabled multitier operations scheduling (FEMOS), is developed to decompose the original complicated problem into two operations across different tiers. Rigorous performance analysis derives the tradeoff relationship between average network throughput and service delay, i.e., ${[O(1/V), O(V)] }$ with a control parameter $\boldsymbol {V}$ , under FEMOS algorithm in dynamic wireless networks. For different network sizes and traffic loads, extensive simulation results show that FEMOS is a fair and efficient algorithm for all user terminals and, more importantly, it can offer much better performance, in terms of network throughput, service delay, and queue backlog, than traditional node assignment and resource allocation algorithms.
---
paper_title: An intelligent report generator for efficient farming
paper_content:
Today, farmers suffer from uncertain monsoons and water scarcity driven by global warming. Integrating standard farming strategies with recent technologies such as the Internet of Things and Wireless Sensor Networks can lead to agricultural modernization. With this situation in mind, this work designs an Internet of Things based device for agricultural automation that analyzes the sensed data and transmits valuable farming information to the user. The device can be controlled and monitored from a remote location, and its data are processed and applied in agricultural fields. The Intelligent Report Generator for efficient farming supports easy testing of soil nutrients. The resulting report helps farmers understand the health of their soil and, in turn, decide which crops are suitable to sow in that season on that land. By analyzing seasonal data from the farmland, a Soil Health Report card is prepared and sent to the farmer via SMS, supporting basic decision making. Ambient humidity and temperature are sensed using a DHT11 sensor, light is sensed using an LDR sensor, and a soil pH meter is used to obtain the soil pH value.
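A toy sketch of the report-card step is given below; the pH ranges, crop suggestions, and report layout are invented for illustration and are not the paper's actual rules, and the SMS gateway call is replaced by a print.

```python
# Illustrative ranges and crop suggestions only; the paper does not publish its rules.
def soil_report(ph: float, moisture: float, light: float) -> str:
    if ph < 5.5:
        ph_status, crops = "acidic", ["potato", "tea"]
    elif ph <= 7.5:
        ph_status, crops = "neutral", ["wheat", "maize"]
    else:
        ph_status, crops = "alkaline", ["barley", "cotton"]
    lines = [
        "Soil Health Report",
        f"pH: {ph:.1f} ({ph_status})",
        f"Moisture: {moisture:.0f}%  Light: {light:.0f} lux",
        f"Suggested crops this season: {', '.join(crops)}",
    ]
    return "\n".join(lines)

# The generated text would be handed to an SMS gateway; printing stands in for that.
print(soil_report(ph=6.8, moisture=41, light=820))
```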
---
|
Title: Adoption of the Internet of Things (IoT) in agriculture and smart farming towards urban greening: A review
Section 1: INTRODUCTION
Description 1: This section introduces the concept of IoT in agriculture and smart farming, its significance, and the primary aim of the review.
Section 2: Collection of raw data
Description 2: This section describes the methodology for gathering data, including the selection criteria and sources of information.
Section 3: Data inclusion criteria
Description 3: This section outlines the criteria used for including data in the study, such as specific attributes and exclusions.
Section 4: Data analysis
Description 4: This section explains the methods used to analyze the collected data and presents emerging themes from the analysis.
Section 5: RESULTS
Description 5: This section presents the findings of the review, including the most researched IoT sub-verticals, collected data measurements, and technologies.
Section 6: DISCUSSION
Description 6: This section discusses the implications of the results, identifies gaps in the current research, and suggests areas for future investigation.
Section 7: CONCLUSION
Description 7: This section concludes the paper by summarizing key findings and highlighting the potential benefits and challenges of IoT adoption in agriculture and smart farming.
|
Person Detection from Overhead View: A Survey
| 6 |
---
paper_title: Human Behavior Understanding via Top-View Vision
paper_content:
Aiming at the occlusion problem in complex scenes, human action recognition from a top view is proposed. In this view, however, a rotated human behavior can be mistakenly identified as another behavior. To address this, and taking into account the rotation invariance of moments, static human postures are represented by Hu moments, with an SVM used for training and classification. Based on the change in the coordinates of the binary image centroid, a semantic web of dynamic behavior is established. Experimental results show that this method can accurately identify human dynamic information and achieves a high recognition rate.
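A minimal sketch of the Hu-moment-plus-SVM pipeline is shown below using OpenCV and scikit-learn; the silhouettes and posture labels are synthetic stand-ins for the paper's top-view data, and the log-scaling of the moments is a common convention rather than something the paper specifies.

```python
import cv2
import numpy as np
from sklearn.svm import SVC

def hu_features(binary_mask: np.ndarray) -> np.ndarray:
    """Seven Hu moments of a binary silhouette, log-scaled for numerical range."""
    hu = cv2.HuMoments(cv2.moments(binary_mask)).flatten()
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-12)

# Training silhouettes and posture labels would come from the top-view dataset;
# random masks are used here only so the sketch runs end to end.
rng = np.random.default_rng(0)
masks = [(rng.random((64, 64)) > 0.5).astype(np.uint8) * 255 for _ in range(20)]
labels = rng.integers(0, 3, size=20)          # e.g., standing / bending / raising arm

X = np.array([hu_features(m) for m in masks])
clf = SVC(kernel="rbf").fit(X, labels)        # SVM as trainer and classifier
print(clf.predict(X[:3]))
```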
---
paper_title: 3D-sensing Distributed Embedded System for People Tracking and Counting
paper_content:
The present study focuses on the development of an embedded smart camera network dedicated to track and count people in public spaces. In the network, each node is capable of sensing, tracking and counting people while communicating with the adjacent nodes of the network. Each node typically uses a 3D-sensing camera positioned in a downward-view but the designed framework can accept other configurations. We present an estimation method for the relative position and orientation of the depth cameras. This system performs background modeling during the calibration process, using a fast and lightweight segmentation algorithm.
---
paper_title: A People Counting System Based on Dense and Close Stereovision
paper_content:
We present in this paper a system for counting passengers in buses based on stereovision. The objective of this work is to provide a precise counting system well adapted to the bus environment. The processing chain for this counting system involves several blocks dedicated to detection, segmentation, tracking, and counting. From the original stereoscopic images, the system operates primarily on the information contained in disparity maps previously computed with a novel algorithm. We show that a counting accuracy of 99% can be obtained on a large data set including specific scenarios played out in the laboratory and video sequences shot in a bus during regular operation.
---
paper_title: Directional People Counter Based on Head Tracking
paper_content:
This paper presents an application for counting people through a single fixed camera. This system performs the count distinction between input and output of people moving through the supervised area. The counter requires two steps: detection and tracking. The detection is based on finding people's heads through preprocessed image correlation with several circular patterns. Tracking is made through the application of a Kalman filter to determine the trajectory of the candidates. Finally, the system updates the counters based on the direction of the trajectories. Different tests using a set of real video sequences taken from different indoor areas give results ranging between 87% and 98% accuracies depending on the volume of flow of people crossing the counting zone. Problematic situations, such as occlusions, people grouped in different ways, scene luminance changes, etc., were used to validate the performance of the system.
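The sketch below illustrates the two stages with standard OpenCV building blocks: normalized cross-correlation against a circular head template, followed by a constant-velocity Kalman filter. The template radius, noise covariances, and the synthetic frame are assumptions; the paper's exact patterns and filter tuning are not reproduced.

```python
import cv2
import numpy as np

# Synthetic circular head template; the paper uses several circular patterns whose
# exact radii are not given, so one radius is assumed here.
template = np.zeros((31, 31), np.uint8)
cv2.circle(template, (15, 15), 12, 255, -1)

def detect_head(gray: np.ndarray):
    """Return the best-matching head centre via normalized cross-correlation."""
    score = cv2.matchTemplate(gray, template, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(score)
    return (max_loc[0] + 15, max_loc[1] + 15), max_val

# Constant-velocity Kalman filter: state = [x, y, vx, vy], measurement = [x, y].
kf = cv2.KalmanFilter(4, 2)
kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                [0, 1, 0, 1],
                                [0, 0, 1, 0],
                                [0, 0, 0, 1]], np.float32)
kf.measurementMatrix = np.array([[1, 0, 0, 0],
                                 [0, 1, 0, 0]], np.float32)
kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-2
kf.measurementNoiseCov = np.eye(2, dtype=np.float32) * 1e-1

frame = np.zeros((240, 320), np.uint8)
cv2.circle(frame, (160, 120), 12, 255, -1)      # stand-in for a real top-view frame
(cx, cy), confidence = detect_head(frame)
kf.predict()
kf.correct(np.array([[np.float32(cx)], [np.float32(cy)]]))
print("head at", (cx, cy), "match score", round(float(confidence), 2))
```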
---
paper_title: Real-time people detection and tracking for indoor surveillance using multiple top-view depth cameras
paper_content:
This paper proposes a real-time indoor surveillance system which installs multiple depth cameras from vertical top-view to track humans. This system leads to a novel framework to solve the traditional challenge of surveillance through tracking of multiple persons, such as severe occlusion, similar appearance, illumination changes, and outline deformation. To cover the entire space of indoor surveillance scene, the image stitching based on the cameras' spatial relation is also utilized. The background subtraction of the stitched top-view image can then be performed to extract the foreground objects in the cluttered environment. The detection scheme including the graph-based segmentation, the head hemiellipsoid model, and the geodesic distance map are cascaded to detect humans. Moreover, the shape feature based on diffusion distance is designed to verify the human tracking hypotheses within particle filter. The experimental results demonstrate the real-time performance and robustness in comparison with several state-of-the-art detection and tracking algorithms.
---
paper_title: A robust person detector for overhead views
paper_content:
In cluttered environments the overhead view is often preferred because looking down can afford better visibility and coverage. However detecting people in this or any other extreme view can be challenging as there is a significant variation in a person's appearances depending only on their position in the picture. The Histogram of Oriented Gradient (HOG) algorithm, a standard algorithm for pedestrian detection, does not perform well here, especially where the image quality is poor. We show that on average, 9 false detections occur per image. We propose a new algorithm where transforming the image patch containing a person to remove positional dependency and then applying the HOG algorithm eliminates 98% of the spurious detections in noisy images from an industrial assembly line and detects people with a 95% efficiency.
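One way to realize the described transform is sketched below: the patch around a candidate position is rotated so that the radial direction from the image centre points upward before the standard HOG descriptor is computed. The rotation convention, patch size, and random test frame are illustrative assumptions rather than the paper's exact procedure.

```python
import cv2
import numpy as np

hog = cv2.HOGDescriptor()  # default 64x128 window, as in the standard pedestrian HOG

def radial_hog(frame: np.ndarray, cx: float, cy: float, box: int = 96) -> np.ndarray:
    """HOG feature of a patch after removing the positional (rotation) dependency."""
    h, w = frame.shape[:2]
    # Rotate the patch so the radial direction from the image centre points 'up'.
    angle = np.degrees(np.arctan2(cy - h / 2.0, cx - w / 2.0)) + 90.0
    M = cv2.getRotationMatrix2D((cx, cy), angle, 1.0)
    rotated = cv2.warpAffine(frame, M, (w, h))
    x0, y0 = int(cx - box // 2), int(cy - box // 2)
    patch = rotated[max(y0, 0):y0 + box, max(x0, 0):x0 + box]
    patch = cv2.resize(patch, (64, 128))
    return hog.compute(patch)

frame = np.random.randint(0, 255, (480, 640), np.uint8)  # stand-in for an overhead frame
feat = radial_hog(frame, cx=520.0, cy=140.0)
print(feat.shape)  # 3780-dimensional descriptor for the default window
```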
---
paper_title: Pedestrian Counting with Occlusion Handling Using Stereo Thermal Cameras
paper_content:
The number of pedestrians walking the streets or gathered in public spaces is a valuable piece of information for shop owners, city governments, event organizers and many others. However, automatic counting that takes place day and night is challenging due to changing lighting conditions and the complexity of scenes with many people occluding one another. To address these challenges, this paper introduces the use of a stereo thermal camera setup for pedestrian counting. We investigate the reconstruction of 3D points in a pedestrian street with two thermal cameras and propose an algorithm for pedestrian counting based on clustering and tracking of the 3D point clouds. The method is tested on two five-minute video sequences captured at a public event with a moderate density of pedestrians and heavy occlusions. The counting performance is compared to the manually annotated ground truth and shows success rates of 95.4% and 99.1% for the two sequences.
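A minimal sketch of the clustering step is given below using DBSCAN on a synthetic 3D point cloud; the eps/min_samples values are assumptions, and the stereo reconstruction and tracking stages of the paper are omitted.

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Synthetic 3D points standing in for the reconstructed thermal point cloud;
# eps and min_samples are illustrative and would need tuning on real data.
rng = np.random.default_rng(1)
person_a = rng.normal([0.0, 0.0, 1.7], 0.1, size=(80, 3))
person_b = rng.normal([1.2, 0.5, 1.6], 0.1, size=(80, 3))
noise = rng.uniform(-2, 2, size=(20, 3))
points = np.vstack([person_a, person_b, noise])

labels = DBSCAN(eps=0.3, min_samples=15).fit_predict(points)
num_pedestrians = len(set(labels)) - (1 if -1 in labels else 0)  # -1 marks noise
print("clusters (pedestrian candidates):", num_pedestrians)
```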
---
paper_title: A Robust Features-Based Person Tracker for Overhead Views in Industrial Environment
paper_content:
A top-view camera with a wide-angle lens installed overhead contributes greatly to resolving the tracking problem while maintaining comprehensive visual access to the environment. Video analytics is becoming increasingly important for Internet of Things applications, including automatic people monitoring and surveillance systems. We follow a machine-learning, features-based person-tracking approach in an industrial environment. The algorithm implements a simple motion detection framework through motion blobs. The algorithm, rHOG, uses the history of previously imaged/blobbed persons together with the anticipated blob position of the observed person. We compare our results, acquired over five different test sequences, with established object-tracking algorithms. The results show that our algorithm outperforms other tracking algorithms by a large margin, reaching 99% accuracy compared with 48% for the best previously known method, the mean shift algorithm. Furthermore, unlike other blob-based tracking algorithms, our algorithm can discriminate whether a blob is a person or not. The proposed tracker has the additional advantages of detecting a person who remains stationary for a long time, handling occlusion and abrupt changes in the environment, and continuing to track by compensating for gaps in the data across frames.
---
paper_title: Top-view people counting in public transportation using Kinect
paper_content:
This article describes a method for people counting in public transportation. In this particular scenario, various body poses corresponding to holding handrails must be accounted for. Kinect sensor mounted vertically has been employed to acquire a database of images of 1-5 persons and an algorithm based on maxima detection and head candidates filtering has been devised for robust people counting, resulting in 90% accuracy.
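The maxima-detection idea can be sketched as below on a top-view height map; the neighborhood size and minimum head height are assumed values, and the paper's subsequent candidate filtering (e.g., rejecting hands on handrails) is not reproduced.

```python
import numpy as np
from scipy.ndimage import maximum_filter

def head_candidates(height_map: np.ndarray, neighborhood: int = 25, min_height: float = 1.2):
    """Local maxima of a top-view height map, kept as head candidates."""
    local_max = maximum_filter(height_map, size=neighborhood) == height_map
    mask = local_max & (height_map > min_height)   # discard floor and low objects
    ys, xs = np.nonzero(mask)
    return list(zip(xs.tolist(), ys.tolist()))

# Synthetic height map (metres above floor) with two head-like peaks.
hm = np.zeros((120, 160))
hm[40, 50] = 1.75
hm[80, 110] = 1.62
print(head_candidates(hm))
```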
---
paper_title: An evaluation of crowd counting methods, features and regression models
paper_content:
Existing crowd counting algorithms rely on holistic, local or histogram based features to capture crowd properties. Regression is then employed to estimate the crowd size. Insufficient testing across multiple datasets has made it difficult to compare and contrast different methodologies. This paper presents an evaluation across multiple datasets to compare holistic, local and histogram based methods, and to compare various image features and regression models. A K-fold cross validation protocol is followed to evaluate the performance across five public datasets: UCSD, PETS 2009, Fudan, Mall and Grand Central datasets. Image features are categorised into five types: size, shape, edges, keypoints and textures. The regression models evaluated are: Gaussian process regression (GPR), linear regression, K nearest neighbours (KNN) and neural networks (NN). The results demonstrate that local features outperform equivalent holistic and histogram based features; optimal performance is observed using all image features except for textures; and that GPR outperforms linear, KNN and NN regression
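A small scikit-learn sketch of the regression stage is shown below with Gaussian process regression, one of the models compared in the paper; the two synthetic "features" only stand in for the size/edge/texture features the evaluation actually uses.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Synthetic per-frame features (e.g., foreground area, edge count) against crowd size.
rng = np.random.default_rng(2)
counts = rng.integers(0, 40, size=120)
features = np.column_stack([
    counts * 55 + rng.normal(0, 60, 120),   # stand-in for foreground pixel area
    counts * 9 + rng.normal(0, 12, 120),    # stand-in for edge-pixel count
]).astype(float)

gpr = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gpr.fit(features[:100], counts[:100])
pred, std = gpr.predict(features[100:], return_std=True)
print(np.round(pred[:5]), np.round(std[:5], 1))
```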
---
paper_title: Counting pedestrians with a zenithal arrangement of depth cameras
paper_content:
Counting people is a basic operation in applications that include surveillance, marketing, services, and others. Recently, computer vision techniques have emerged as a non-intrusive, cost-effective, and reliable solution to the problem of counting pedestrians. In this article, we introduce a system capable of counting people using a cooperating network of depth cameras placed in zenithal position. In our method, we first detect people in each camera of the array separately. Then, we construct and consolidate tracklets based on their closeness and time stamp. Our experimental results show that the method permits to extend the narrow range of a single sensor to wider scenarios.
---
paper_title: Reliable Human Detection and Tracking in Top-View Depth Images
paper_content:
The paper presents a method for human detection and tracking in depth images captured by a top-view camera system. We introduce a new feature descriptor which outperforms state-of-the-art features like Simplified Local Ternary Patterns in the given scenario. We use this feature descriptor to train a head-shoulder detector using a discriminative class scheme. A separate processing step ensures that only a minimal but sufficient number of head-shoulder candidates is evaluated. This contributes to an excellent runtime performance. A final tracking step reliably propagates detections in time and provides stable tracking results. The quality of the presented method allows us to recognize many challenging situations with humans tailgating and piggybacking.
---
paper_title: A Robust Human Detection and Tracking System Using a Human-Model-Based Camera Calibration
paper_content:
We present a robust, real-time human detection and tracking system that achieves very good results in a wide range of commercial applications, including counting people and measuring occupancy. The key to the system is a human model based camera calibration process. The system uses a simplified human model that allows the user to very quickly and easily configure the system. This simple initialization procedure is essential for commercial viability.
---
paper_title: Video security for ambient intelligence
paper_content:
Moving toward the implementation of the intelligent building idea in the framework of ambient intelligence, a video security application for people detection, tracking, and counting in indoor environments is presented in this paper. In addition to security purposes, the system may be employed to estimate the number of accesses in public buildings, as well as the preferred followed routes. Computer vision techniques are used to analyze and process video streams acquired from multiple video cameras. Image segmentation is performed to detect moving regions and to calculate the number of people in the scene. Testing was performed on indoor video sequences with different illumination conditions.
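A single-camera sketch of the moving-region detection and counting step is given below using OpenCV's MOG2 background subtractor; the multi-camera setup and route analysis of the paper are not reproduced, and the blob-area threshold is an assumed value.

```python
import cv2
import numpy as np

subtractor = cv2.createBackgroundSubtractorMOG2(history=200, detectShadows=True)
MIN_BLOB_AREA = 800  # pixels; an assumed threshold for a person-sized region

def count_people(frame: np.ndarray) -> int:
    """Count person-sized moving regions in one frame (no tracking across frames)."""
    mask = subtractor.apply(frame)
    mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)[1]   # drop shadow label (127)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return sum(1 for c in contours if cv2.contourArea(c) > MIN_BLOB_AREA)

# A few synthetic frames let the sketch run without a video file.
for _ in range(10):
    frame = np.random.randint(0, 255, (240, 320, 3), np.uint8)
    n = count_people(frame)
print("people in last frame:", n)
```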
---
paper_title: Counting pedestrians with a zenithal arrangement of depth cameras
paper_content:
Counting people is a basic operation in applications that include surveillance, marketing, services, and others. Recently, computer vision techniques have emerged as a non-intrusive, cost-effective, and reliable solution to the problem of counting pedestrians. In this article, we introduce a system capable of counting people using a cooperating network of depth cameras placed in zenithal position. In our method, we first detect people in each camera of the array separately. Then, we construct and consolidate tracklets based on their closeness and time stamp. Our experimental results show that the method permits to extend the narrow range of a single sensor to wider scenarios.
---
paper_title: The design and implementation of a vision-based people counting system in buses
paper_content:
This system counts passengers through a single camera that is fixed at an overhead (zenithal) position. The proposed algorithm is designed to solve the problem of illumination, which can occur when the bus door opens or closes. A processing ratio of about 30 frames/s is necessary to overcome the extreme change in illumination. The system combines global positioning system data and standard time to establish the number of passengers at each bus stop at different periods of time and calculates the number of passengers and unoccupied seats to provide business-related information.
---
paper_title: Tracking of humans and estimation of body/head orientation from top-view single camera for visual focus of attention analysis
paper_content:
This paper addresses the problem of determining a person's body and head orientations while tracking the person in an indoor environment monitored by a single top-view camera. The challenging part of this problem lies in the wide range of human postures depending on the position of the camera and articulations of the pose. In this work, a two-level cascaded particle filter approach is introduced to track humans. Color clues are used as the first level for each iteration and edge-orientation histograms are reutilized to support the tracking at the second level. To determine body and head orientations, a combination of Shape Context and SIFT features is proposed. Body orientation is calculated by matching the upper region of the body with predefined shape templates, then finding the orientation within the ranges of π/8 degrees. Then, the optical flow vectors of SIFT features around the head region are calculated to evaluate the direction and type of the motion of the body and head. We demonstrate preliminary results of our approach showing that body and head orientations are successfully estimated. A discussion on various motion patterns and future improvements for more complicated situations is also given.
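The optical-flow part of the head-motion analysis might look like the sketch below, which tracks SIFT keypoints inside the head region with pyramidal Lucas-Kanade flow and returns their median displacement; the ROI, flow parameters, and synthetic frames are placeholders, and the Shape Context matching is omitted.

```python
import cv2
import numpy as np

sift = cv2.SIFT_create()
lk_params = dict(winSize=(21, 21), maxLevel=3,
                 criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 30, 0.01))

def head_motion(prev_gray, next_gray, head_roi):
    """Median flow vector of SIFT keypoints inside the head region (x, y, w, h)."""
    x, y, w, h = head_roi
    kps = sift.detect(prev_gray[y:y + h, x:x + w], None)
    if not kps:
        return None
    pts = np.float32([[kp.pt[0] + x, kp.pt[1] + y] for kp in kps]).reshape(-1, 1, 2)
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, next_gray, pts, None, **lk_params)
    good = status.ravel() == 1
    if not good.any():
        return None
    return np.median((nxt[good] - pts[good]).reshape(-1, 2), axis=0)

prev_f = np.random.randint(0, 255, (240, 320), np.uint8)   # stand-ins for real frames
next_f = np.roll(prev_f, 3, axis=1)                        # simulated rightward motion
print(head_motion(prev_f, next_f, head_roi=(100, 80, 60, 60)))
```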
---
paper_title: Real-time people detection and tracking for indoor surveillance using multiple top-view depth cameras
paper_content:
This paper proposes a real-time indoor surveillance system which installs multiple depth cameras from vertical top-view to track humans. This system leads to a novel framework to solve the traditional challenge of surveillance through tracking of multiple persons, such as severe occlusion, similar appearance, illumination changes, and outline deformation. To cover the entire space of indoor surveillance scene, the image stitching based on the cameras' spatial relation is also utilized. The background subtraction of the stitched top-view image can then be performed to extract the foreground objects in the cluttered environment. The detection scheme including the graph-based segmentation, the head hemiellipsoid model, and the geodesic distance map are cascaded to detect humans. Moreover, the shape feature based on diffusion distance is designed to verify the human tracking hypotheses within particle filter. The experimental results demonstrate the real-time performance and robustness in comparison with several state-of-the-art detection and tracking algorithms.
---
paper_title: Counting pedestrians with a zenithal arrangement of depth cameras
paper_content:
Counting people is a basic operation in applications that include surveillance, marketing, services, and others. Recently, computer vision techniques have emerged as a non-intrusive, cost-effective, and reliable solution to the problem of counting pedestrians. In this article, we introduce a system capable of counting people using a cooperating network of depth cameras placed in zenithal position. In our method, we first detect people in each camera of the array separately. Then, we construct and consolidate tracklets based on their closeness and time stamp. Our experimental results show that the method permits to extend the narrow range of a single sensor to wider scenarios.
---
paper_title: A Robust Human Detection and Tracking System Using a Human-Model-Based Camera Calibration
paper_content:
We present a robust, real-time human detection and tracking system that achieves very good results in a wide range of commercial applications, including counting people and measuring occupancy. The key to the system is a human model based camera calibration process. The system uses a simplified human model that allows the user to very quickly and easily configure the system. This simple initialization procedure is essential for commercial viability.
---
paper_title: The design and implementation of a vision-based people counting system in buses
paper_content:
This system counts passengers through a single camera that is fixed at an overhead (zenithal) position. The proposed algorithm is designed to solve the problem of illumination, which can occur when the bus door opens or closes. A processing ratio of about 30 frames/s is necessary to overcome the extreme change in illumination. The system combines global positioning system data and standard time to establish the number of passengers at each bus stop at different periods of time and calculates the number of passengers and unoccupied seats to provide business-related information.
---
paper_title: HEAD DETECTION IN STEREO DATA FOR PEOPLE COUNTING AND SEGMENTATION
paper_content:
In this paper we propose a head detection method using range data from a stereo camera. The method is based on a technique that has been introduced in the domain of voxel data. For application in stereo cameras, the technique is extended (1) to be applicable to stereo data, and (2) to be robust with regard to noise and variation in environmental settings. The method consists of foreground selection, head detection, and blob separation, and, to improve results in case of misdetections, incorporates a means for people tracking. It is tested in experiments with actual stereo data, gathered from three distinct real-life scenarios. Experimental results show that the proposed method performs well in terms of both precision and recall. In addition, the method was shown to perform well in highly crowded situations. From our results, we may conclude that the proposed method provides a strong basis for head detection in applications that utilise stereo cameras.
---
paper_title: Vision-based overhead view person recognition
paper_content:
Person recognition is a fundamental problem faced in any computer vision system. This problem is relatively easy if the frontal view is available, however, it gets intractable in the absence of the frontal view. We have provided a framework, which tries to solve this problem using the top view of the person. A special scenario of "smart conference room" is considered. Although, not much information is available in the top view, we have shown that by making use of DTC and Bayesian networks the output of the various sensors can be combined to solve this problem. The results presented in the end show that we can do person recognition (pose independent) with 96% accuracy for a group of 12 people. For pose dependent case, we have achieved 100% accuracy. Finally we have provided a framework to achieve this in real time.
---
paper_title: Tracking of humans and estimation of body/head orientation from top-view single camera for visual focus of attention analysis
paper_content:
This paper addresses the problem of determining a person's body and head orientations while tracking the person in an indoor environment monitored by a single top-view camera. The challenging part of this problem lies in the wide range of human postures depending on the position of the camera and articulations of the pose. In this work, a two-level cascaded particle filter approach is introduced to track humans. Color clues are used as the first level for each iteration and edge-orientation histograms are reutilized to support the tracking at the second level. To determine body and head orientations, a combination of Shape Context and SIFT features is proposed. Body orientation is calculated by matching the upper region of the body with predefined shape templates, then finding the orientation within the ranges of π/8 degrees. Then, the optical flow vectors of SIFT features around the head region are calculated to evaluate the direction and type of the motion of the body and head. We demonstrate preliminary results of our approach showing that body and head orientations are successfully estimated. A discussion on various motion patterns and future improvements for more complicated situations is also given.
---
paper_title: A versatile and effective method for counting people on either RGB or depth overhead cameras
paper_content:
In this paper we present an innovative method for counting people from zenithal mounted cameras. The proposed method is designed to be computationally efficient and able to provide accurate counting under different realistic conditions. The method can operate with traditional surveillance cameras or with depth imaging sensors. The validation has been carried out on a significant dataset of images that has been specifically devised and collected in order to account for the main factors that may impact on the counting accuracy and, in particular, the acquisition technology (traditional RGB camera and depth sensor), the installation scenario (indoor and outdoor), the density of the people flow (isolated people and groups of persons). Results confirm that the method can achieve an accuracy ranging between 90% and 98% depending on the adopted sensor technology and on the complexity of the scenario.
---
paper_title: Human Behavior Understanding via Top-View Vision
paper_content:
Aiming at the occlusion problem in complex scenes, human action recognition from a top view is proposed. In this view, however, a rotated human behavior can be mistakenly identified as another behavior. To address this, and taking into account the rotation invariance of moments, static human postures are represented by Hu moments, with an SVM used for training and classification. Based on the change in the coordinates of the binary image centroid, a semantic web of dynamic behavior is established. Experimental results show that this method can accurately identify human dynamic information and achieves a high recognition rate.
---
paper_title: 3D-sensing Distributed Embedded System for People Tracking and Counting
paper_content:
The present study focuses on the development of an embedded smart camera network dedicated to track and count people in public spaces. In the network, each node is capable of sensing, tracking and counting people while communicating with the adjacent nodes of the network. Each node typically uses a 3D-sensing camera positioned in a downward-view but the designed framework can accept other configurations. We present an estimation method for the relative position and orientation of the depth cameras. This system performs background modeling during the calibration process, using a fast and lightweight segmentation algorithm.
---
paper_title: A People Counting System Based on Dense and Close Stereovision
paper_content:
We present in this paper a system for counting passengers in buses based on stereovision. The objective of this work is to provide a precise counting system well adapted to the bus environment. The processing chain for this counting system involves several blocks dedicated to detection, segmentation, tracking, and counting. From the original stereoscopic images, the system operates primarily on the information contained in disparity maps previously computed with a novel algorithm. We show that a counting accuracy of 99% can be obtained on a large data set including specific scenarios played out in the laboratory and video sequences shot in a bus during regular operation.
---
paper_title: Directional People Counter Based on Head Tracking
paper_content:
This paper presents an application for counting people through a single fixed camera. This system performs the count distinction between input and output of people moving through the supervised area. The counter requires two steps: detection and tracking. The detection is based on finding people's heads through preprocessed image correlation with several circular patterns. Tracking is made through the application of a Kalman filter to determine the trajectory of the candidates. Finally, the system updates the counters based on the direction of the trajectories. Different tests using a set of real video sequences taken from different indoor areas give results ranging between 87% and 98% accuracies depending on the volume of flow of people crossing the counting zone. Problematic situations, such as occlusions, people grouped in different ways, scene luminance changes, etc., were used to validate the performance of the system.
---
paper_title: A novel framework for automatic passenger counting
paper_content:
We propose a novel framework for counting passengers in a railway station. The framework has three components: people detection, tracking, and validation. We detect every person using Hough circle detection when he or she enters the field of view. The person is then tracked using optical flow until (s)he leaves the field of view. Finally, the tracker-generated trajectory is validated through a spatio-temporal background subtraction technique. The number of valid trajectories provides the passenger count. Each of the three components of the proposed framework has been compared with competitive methods on three datasets of varying crowd densities. Extensive experiments have been conducted on datasets containing top views of the passengers. Experimental results demonstrate that the proposed algorithmic framework performs well on both dense and sparse crowds and can successfully detect and track persons with different hair colors, hoodies, caps, long winter jackets, bags, and so on. The proposed algorithm also shows promising results for people moving in different directions. The proposed framework can detect up to 30% more accurately and 20% more precisely than other competitive methods.
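The detection stage can be sketched with OpenCV's Hough circle transform as below; the radii and accumulator thresholds are illustrative and would need tuning on real overhead footage, and the optical-flow tracking and spatio-temporal validation stages are not shown.

```python
import cv2
import numpy as np

def detect_heads(gray: np.ndarray):
    """Head candidates as circles; radii and thresholds here are illustrative."""
    blurred = cv2.medianBlur(gray, 5)
    circles = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, dp=1.2, minDist=40,
                               param1=100, param2=30, minRadius=10, maxRadius=40)
    return [] if circles is None else np.uint16(np.around(circles[0]))

# Synthetic top-view frame with two head-like dark discs on a bright floor.
frame = np.full((240, 320), 200, np.uint8)
cv2.circle(frame, (90, 120), 22, 40, -1)
cv2.circle(frame, (220, 100), 20, 60, -1)
for x, y, r in detect_heads(frame):
    print("head candidate at", (int(x), int(y)), "radius", int(r))
```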
---
paper_title: Real-time people detection and tracking for indoor surveillance using multiple top-view depth cameras
paper_content:
This paper proposes a real-time indoor surveillance system which installs multiple depth cameras from vertical top-view to track humans. This system leads to a novel framework to solve the traditional challenge of surveillance through tracking of multiple persons, such as severe occlusion, similar appearance, illumination changes, and outline deformation. To cover the entire space of indoor surveillance scene, the image stitching based on the cameras' spatial relation is also utilized. The background subtraction of the stitched top-view image can then be performed to extract the foreground objects in the cluttered environment. The detection scheme including the graph-based segmentation, the head hemiellipsoid model, and the geodesic distance map are cascaded to detect humans. Moreover, the shape feature based on diffusion distance is designed to verify the human tracking hypotheses within particle filter. The experimental results demonstrate the real-time performance and robustness in comparison with several state-of-the-art detection and tracking algorithms.
---
paper_title: Pedestrian Counting with Occlusion Handling Using Stereo Thermal Cameras
paper_content:
The number of pedestrians walking the streets or gathered in public spaces is a valuable piece of information for shop owners, city governments, event organizers and many others. However, automatic counting that takes place day and night is challenging due to changing lighting conditions and the complexity of scenes with many people occluding one another. To address these challenges, this paper introduces the use of a stereo thermal camera setup for pedestrian counting. We investigate the reconstruction of 3D points in a pedestrian street with two thermal cameras and propose an algorithm for pedestrian counting based on clustering and tracking of the 3D point clouds. The method is tested on two five-minute video sequences captured at a public event with a moderate density of pedestrians and heavy occlusions. The counting performance is compared to the manually annotated ground truth and shows success rates of 95.4% and 99.1% for the two sequences.
---
paper_title: Video security for ambient intelligence
paper_content:
Moving toward the implementation of the intelligent building idea in the framework of ambient intelligence, a video security application for people detection, tracking, and counting in indoor environments is presented in this paper. In addition to security purposes, the system may be employed to estimate the number of accesses in public buildings, as well as the preferred followed routes. Computer vision techniques are used to analyze and process video streams acquired from multiple video cameras. Image segmentation is performed to detect moving regions and to calculate the number of people in the scene. Testing was performed on indoor video sequences with different illumination conditions.
---
paper_title: Counting people by using a single camera without calibration
paper_content:
Counting people is important for the air-conditioning systems of intelligent buildings. Based on image processing, an automatic and simple bi-directional method for counting pedestrians passing through a gate is proposed. Only one common camera is used, hung from the ceiling of the gate with a directly downward view. An empirically derived formula is used to estimate the number of people in the foreground blobs. A simple, fast tracking algorithm is then used to track the blobs, and the moving direction is determined from the track information and path classification. In addition, the method includes a merge-split counting strategy to handle merging and splitting cases. The method has been tested on a 3.2 GHz Intel Core machine at 40 fps on 1080×720 images without code optimization. Average accuracy rates of 98% and 95% are achieved on videos with normal traffic flow and on videos with many cases of merges and splits, respectively.
---
paper_title: Counting pedestrians with a zenithal arrangement of depth cameras
paper_content:
Counting people is a basic operation in applications that include surveillance, marketing, services, and others. Recently, computer vision techniques have emerged as a non-intrusive, cost-effective, and reliable solution to the problem of counting pedestrians. In this article, we introduce a system capable of counting people using a cooperating network of depth cameras placed in zenithal position. In our method, we first detect people in each camera of the array separately. Then, we construct and consolidate tracklets based on their closeness and time stamp. Our experimental results show that the method permits to extend the narrow range of a single sensor to wider scenarios.
---
paper_title: Real-Time People Counting from Depth Images
paper_content:
In this paper, we propose a real-time algorithm for counting people from depth image sequences acquired using the Kinect sensor. Counting people in public vehicles became a vital research topic. Information on the passenger flow plays a pivotal role in transportation databases. It helps the transport operators to optimize their operational costs, providing that the data are acquired automatically and with sufficient accuracy. We show that our algorithm is accurate and fast as it allows 16 frames per second to be processed. Thus, it can be used either in real-time to process traffic information on the fly, or in the batch mode for analyzing very large databases of previously acquired image data.
---
paper_title: A robust method for detecting and counting people
paper_content:
Estimating the number of people passing a gate or a door provides useful information for video-based surveillance and monitoring applications. This paper describes a robust method for bi-directional people counting. The method includes three steps: detecting, tracking, and counting moving people. A new algorithm for detecting moving people based on edge detection is proposed: we construct a foreground/background edge model (FBEM) from a series of frames and retrieve the foreground edges, from which the moving people's bounding boxes are obtained. Two effective methods are used to track moving people based on the results of the previous step. The counting process is described in detail, and the merge/split phenomenon is also discussed to overcome the problem of people touching each other. The experimental results show that robust, highly accurate bi-directional counting can be achieved using the proposed method.
---
paper_title: People-flow counting in complex environments by combining depth and color information
paper_content:
People-flow counting is one of the key techniques in intelligent video surveillance systems, and the people-flow information it provides is important evidence for many applications such as business analysis, staff planning, and security. Traditionally, methods based on color image information face challenges such as shadows, illumination changes, and clothing color, while methods based on depth information suffer from a lack of texture. In this paper, we propose an effective approach to people-flow counting that combines color and depth information. First, we adopt a background subtraction technique to quickly obtain moving regions in depth images. Second, the water filling algorithm is used to detect head candidates in the moving regions. An SVM is then used to recognize real heads among the candidates. Finally, we adopt a weighted K-nearest-neighbor based multi-target tracking method to track each confirmed head and count the people passing through the surveillance region. Four datasets constructed from two surveillance scenes are used to evaluate the proposed method. Experimental results show that our method outperforms state-of-the-art methods: it works stably under various kinds of interference and achieves not only high precision but also high recall on all four datasets.
---
paper_title: Automatic Counting of Interacting People by using a Single Uncalibrated Camera
paper_content:
Automatic counting of people, entering or exiting a region of interest, is very important for both business and security applications. This paper introduces an automatic and robust people counting system which can count multiple people who interact in the region of interest, by using only one camera. Two-level hierarchical tracking is employed. For cases not involving merges or splits, a fast blob tracking method is used. In order to deal with interactions among people in a more thorough and reliable way, the system uses the mean shift tracking algorithm. Using the first-level blob tracker in general, and employing the mean shift tracking only in the case of merges and splits saves power and makes the system computationally efficient. The system setup parameter can be automatically learned in a new environment from a 3- to 5-minute video with people going in or out of the target region one at a time. With a 2 GHz Pentium machine, the system runs at about 33 fps on 320×240 images without code optimization. Average accuracy rates of 98.5% and 95% are achieved on videos with normal traffic flow and videos with many cases of merges and splits, respectively.
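The second-level tracker can be sketched with OpenCV's mean shift as below: a hue histogram of the person's region, back-projected onto each frame, drives the window update. The initial window, the use of the hue channel only, and the synthetic frames are assumptions; the blob-level first stage is omitted.

```python
import cv2
import numpy as np

# The track window (x, y, w, h) would be initialised from the blob tracker when a
# merge or split is detected; the values below are placeholders.
track_window = (140, 100, 40, 50)
term_crit = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)

def init_model(frame_bgr, window):
    x, y, w, h = window
    hsv_roi = cv2.cvtColor(frame_bgr[y:y + h, x:x + w], cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv_roi], [0], None, [180], [0, 180])
    return cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)

def track(frame_bgr, hist, window):
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    back_proj = cv2.calcBackProject([hsv], [0], hist, [0, 180], 1)
    _, window = cv2.meanShift(back_proj, window, term_crit)
    return window

first = np.random.randint(0, 255, (240, 320, 3), np.uint8)   # stand-ins for video frames
hist = init_model(first, track_window)
track_window = track(np.roll(first, 2, axis=1), hist, track_window)
print("updated window:", track_window)
```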
---
paper_title: A Robust Human Detection and Tracking System Using a Human-Model-Based Camera Calibration
paper_content:
We present a robust, real-time human detection and tracking system that achieves very good results in a wide range of commercial applications, including counting people and measuring occupancy. The key to the system is a human model based camera calibration process. The system uses a simplified human model that allows the user to very quickly and easily configure the system. This simple initialization procedure is essential for commercial viability.
---
paper_title: The design and implementation of a vision-based people counting system in buses
paper_content:
This system counts passengers through a single camera that is fixed at an overhead (zenithal) position. The proposed algorithm is designed to solve the problem of illumination, which can occur when the bus door opens or closes. A processing ratio of about 30 frames/s is necessary to overcome the extreme change in illumination. The system combines global positioning system data and standard time to establish the number of passengers at each bus stop at different periods of time and calculates the number of passengers and unoccupied seats to provide business-related information.
---
paper_title: HEAD DETECTION IN STEREO DATA FOR PEOPLE COUNTING AND SEGMENTATION
paper_content:
In this paper we propose a head detection method using range data from a stereo camera. The method is based on a technique that has been introduced in the domain of voxel data. For application in stereo cameras, the technique is extended (1) to be applicable to stereo data, and (2) to be robust with regard to noise and variation in environmental settings. The method consists of foreground selection, head detection, and blob separation, and, to improve results in case of misdetections, incorporates a means for people tracking. It is tested in experiments with actual stereo data, gathered from three distinct real-life scenarios. Experimental results show that the proposed method performs well in terms of both precision and recall. In addition, the method was shown to perform well in highly crowded situations. From our results, we may conclude that the proposed method provides a strong basis for head detection in applications that utilise stereo cameras.
---
paper_title: Vision-based overhead view person recognition
paper_content:
Person recognition is a fundamental problem faced in any computer vision system. This problem is relatively easy if the frontal view is available, however, it gets intractable in the absence of the frontal view. We have provided a framework, which tries to solve this problem using the top view of the person. A special scenario of "smart conference room" is considered. Although, not much information is available in the top view, we have shown that by making use of DTC and Bayesian networks the output of the various sensors can be combined to solve this problem. The results presented in the end show that we can do person recognition (pose independent) with 96% accuracy for a group of 12 people. For pose dependent case, we have achieved 100% accuracy. Finally we have provided a framework to achieve this in real time.
---
paper_title: Tracking of humans and estimation of body/head orientation from top-view single camera for visual focus of attention analysis
paper_content:
This paper addresses the problem of determining a person's body and head orientations while tracking the person in an indoor environment monitored by a single top-view camera. The challenging part of this problem lies in the wide range of human postures depending on the position of the camera and articulations of the pose. In this work, a two-level cascaded particle filter approach is introduced to track humans. Color clues are used as the first level for each iteration and edge-orientation histograms are reutilized to support the tracking at the second level. To determine body and head orientations, a combination of Shape Context and SIFT features is proposed. Body orientation is calculated by matching the upper region of the body with predefined shape templates, then finding the orientation within the ranges of π/8 degrees. Then, the optical flow vectors of SIFT features around the head region are calculated to evaluate the direction and type of the motion of the body and head. We demonstrate preliminary results of our approach showing that body and head orientations are successfully estimated. A discussion on various motion patterns and future improvements for more complicated situations is also given.
---
paper_title: A robust person detector for overhead views
paper_content:
In cluttered environments the overhead view is often preferred because looking down can afford better visibility and coverage. However detecting people in this or any other extreme view can be challenging as there is a significant variation in a person's appearances depending only on their position in the picture. The Histogram of Oriented Gradient (HOG) algorithm, a standard algorithm for pedestrian detection, does not perform well here, especially where the image quality is poor. We show that on average, 9 false detections occur per image. We propose a new algorithm where transforming the image patch containing a person to remove positional dependency and then applying the HOG algorithm eliminates 98% of the spurious detections in noisy images from an industrial assembly line and detects people with a 95% efficiency.
---
paper_title: Robust CoHOG Feature Extraction in Human-Centered Image/Video Management System
paper_content:
Many human-centered image and video management systems depend on robust human detection. To extract robust features for human detection, this paper investigates the following shortcomings of co-occurrence histograms of oriented gradients (CoHOGs) which significantly limit its advantages: (1) The magnitudes of the gradients are discarded, and only the orientations are used; (2) the gradients are not smoothed, and thus, aliasing effect exists; and (3) the dimensionality of the CoHOG feature vector is very large (e.g., 200 000). To deal with these problems, in this paper, we propose a framework that performs the following: (1) utilizes a novel gradient decomposition and combination strategy to make full use of the information of gradients; (2) adopts a two-stage gradient smoothing scheme to perform efficient gradient interpolation; and (3) employs incremental principal component analysis to reduce the large dimensionality of the CoHOG features. Experimental results on the two different human databases demonstrate the effectiveness of the proposed method.
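The dimensionality-reduction step can be sketched with scikit-learn's IncrementalPCA as below; a much smaller feature dimension than the roughly 200 000-dimensional CoHOG vectors is used so the example runs quickly, and the batch size and component count are arbitrary.

```python
import numpy as np
from sklearn.decomposition import IncrementalPCA

FEATURE_DIM, N_COMPONENTS, BATCH = 5000, 64, 128   # scaled down for the sketch

ipca = IncrementalPCA(n_components=N_COMPONENTS)
rng = np.random.default_rng(3)

# partial_fit consumes the training descriptors batch by batch, which is the point of
# incremental PCA: the full high-dimensional matrix never has to sit in memory at once.
for _ in range(4):
    batch = rng.normal(size=(BATCH, FEATURE_DIM))
    ipca.partial_fit(batch)

reduced = ipca.transform(rng.normal(size=(10, FEATURE_DIM)))
print(reduced.shape)  # (10, 64): compact features for the downstream human classifier
```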
---
paper_title: Histograms of oriented gradients for human detection
paper_content:
We study the question of feature sets for robust visual object recognition; adopting linear SVM based human detection as a test case. After reviewing existing edge and gradient based descriptors, we show experimentally that grids of histograms of oriented gradient (HOG) descriptors significantly outperform existing feature sets for human detection. We study the influence of each stage of the computation on performance, concluding that fine-scale gradients, fine orientation binning, relatively coarse spatial binning, and high-quality local contrast normalization in overlapping descriptor blocks are all important for good results. The new approach gives near-perfect separation on the original MIT pedestrian database, so we introduce a more challenging dataset containing over 1800 annotated human images with a large range of pose variations and backgrounds.
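The now-standard OpenCV implementation of this detector can be exercised as below; the window stride, padding, and scale step are common defaults rather than the paper's training settings, and a random image merely stands in for a real frame.

```python
import cv2
import numpy as np

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

def detect_people(frame_bgr):
    """Sliding-window HOG + linear-SVM pedestrian detection."""
    rects, weights = hog.detectMultiScale(frame_bgr, winStride=(8, 8),
                                          padding=(8, 8), scale=1.05)
    return [(int(x), int(y), int(w), int(h)) for (x, y, w, h) in rects]

# On real footage this detects upright, roughly frontal pedestrians, which is why
# overhead views need the adaptations discussed elsewhere in this survey.
frame = np.random.randint(0, 255, (480, 640, 3), np.uint8)
print(detect_people(frame))
```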
---
paper_title: Reliable Human Detection and Tracking in Top-View Depth Images
paper_content:
The paper presents a method for human detection and tracking in depth images captured by a top-view camera system. We introduce a new feature descriptor which outperforms state-of-the-art features like Simplified Local Ternary Patterns in the given scenario. We use this feature descriptor to train a head-shoulder detector using a discriminative class scheme. A separate processing step ensures that only a minimal but sufficient number of head-shoulder candidates is evaluated. This contributes to an excellent runtime performance. A final tracking step reliably propagates detections in time and provides stable tracking results. The quality of the presented method allows us to recognize many challenging situations with humans tailgating and piggybacking.
---
paper_title: People-flow counting in complex environments by combining depth and color information
paper_content:
People-flow counting is one of the key techniques in intelligent video surveillance systems, and the people-flow information it provides is important evidence for many applications such as business analysis, staff planning, and security. Traditionally, methods based on color image information face challenges such as shadows, illumination changes, and clothing color, while methods based on depth information suffer from a lack of texture. In this paper, we propose an effective approach to people-flow counting that combines color and depth information. First, we adopt a background subtraction technique to quickly obtain moving regions in depth images. Second, the water filling algorithm is used to detect head candidates in the moving regions. An SVM is then used to recognize real heads among the candidates. Finally, we adopt a weighted K-nearest-neighbor based multi-target tracking method to track each confirmed head and count the people passing through the surveillance region. Four datasets constructed from two surveillance scenes are used to evaluate the proposed method. Experimental results show that our method outperforms state-of-the-art methods: it works stably under various kinds of interference and achieves not only high precision but also high recall on all four datasets.
---
paper_title: A robust algorithm for detecting people in overhead views
paper_content:
In this research, a human detection system is proposed in which people are viewed from an overhead camera with a wide-angle lens. Due to perspective change, a person can have different orientations and sizes at different positions in the scene relative to the optical centre. We exploit this property of the overhead camera and develop a novel algorithm that uses variable-size bounding boxes with different orientations with respect to the radial distance from the center of the image. In these overhead-view images we neither make any assumption about the pose or visibility of a person nor impose any restriction on the environment. When comparing the results of the proposed algorithm with a standard histogram of oriented gradients (HOG) algorithm, we achieve not only a large gain in overall detection rate but also a significant reduction in spurious detections per image. With the standard approach, on average 9 false detections occur per image. Transforming the image patch containing a person to remove positional dependency and then applying the HOG algorithm eliminates 98% of the spurious detections in noisy images from an industrial assembly line and detects people with 95% efficiency.
---
paper_title: Efficient HOG human detection
paper_content:
While Histograms of Oriented Gradients (HOG) plus Support Vector Machine (SVM) (HOG+SVM) is the most successful human detection algorithm, it is time-consuming. This paper proposes two ways to deal with this problem. One is to reuse the features in blocks to construct the HOG features of intersecting detection windows. The other is to utilize sub-cell based interpolation to efficiently compute the HOG features for each block. The combination of the two makes human detection significantly faster, by more than five times. To evaluate the proposed method, we have established a top-view human database. Experimental results on the top-view database and the well-known INRIA data set demonstrate the effectiveness and efficiency of the proposed method.
---
|
Title: Person Detection from Overhead View: A Survey
Section 1: INTRODUCTION
Description 1: Introduce the significance, applications, and challenges of person detection from an overhead view. Provide an overview of the techniques and the structure of the survey.
Section 2: OVERHEAD BASED PERSON DETECTION TECHNIQUES
Description 2: Discuss the general process of person detection from overhead views, including defining the region of interest (ROI) and localizing the person. Divide the techniques into blob-based and feature-based methods.
Section 3: Blob based Techniques
Description 3: Describe blob-based techniques for overhead person detection, including background subtraction, foreground extraction, segmentation, and pre-processing methods. Review various studies and methods used in blob-based detection.
Section 4: Feature based Techniques
Description 4: Explain feature-based techniques for overhead person detection, emphasizing algorithms and feature extraction methods such as SIFT, HOG, and machine learning classifiers. Review different studies and their methodologies.
Section 5: DISCUSSION
Description 5: Analyze and compare the discussed techniques, highlighting their effectiveness, limitations, and challenges. Discuss issues related to datasets, recording environments, and methodology inconsistencies. Suggest potential improvements and future research directions.
Section 6: CONCLUSION
Description 6: Summarize the key findings from the survey, emphasizing the challenges and open research areas in overhead person detection. Provide recommendations for future research directions.
|
Assistive Technologies for Bipolar Disorder: A Survey
| 8 |
---
paper_title: Predicting Mood Changes in Bipolar Disorder Through Heartbeat Nonlinear Dynamics
paper_content:
Bipolar disorder (BD) is characterized by an alternation of mood states from depression to (hypo)mania. Mixed states, i.e., a combination of depression and mania symptoms at the same time, can also be present. The diagnosis of this disorder in the current clinical practice is based only on subjective interviews and questionnaires, while no reliable objective psycho-physiological markers are available. Furthermore, there are no biological markers predicting BD outcomes, or providing information about the future clinical course of the phenomenon. To overcome this limitation, here we propose a methodology predicting mood changes in BD using heartbeat nonlinear dynamics exclusively, derived from the ECG. Mood changes are here intended as transitioning between two mental states: euthymic state (EUT), i.e., the good affective balance, and non-euthymic (non-EUT) states. Heart rate variability (HRV) series from 14 bipolar spectrum patients (age: 33.43 $\pm$ 9.76, age range: 23–54; six females) involved in the European project PSYCHE, undergoing whole night electrocardiogram (ECG) monitoring were analyzed. Data were gathered from a wearable system comprised of a comfortable t-shirt with integrated fabric electrodes and sensors able to acquire ECGs. Each patient was monitored twice a week, for 14 weeks, being able to perform normal (unstructured) activities. From each acquisition, the longest artifact-free segment of heartbeat dynamics was selected for further analyses. Sub-segments of 5 min of this segment were used to estimate trends of HRV linear and nonlinear dynamics. Considering data from a current observation at day $t_0$ , and past observations at days ( $t_{-1}$ , $t_{-2}$ ,...,), personalized prediction accuracies in forecasting a mood state (EUT/non-EUT) at day $t_{+1}$ were 69% on average, reaching values as high as 83.3%. This approach opens to the possibility of predicting mood states in bipolar patients through heartbeat nonlinear dynamics exclusively.
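A simplified sketch of the feature-then-classify idea is shown below: basic time-domain HRV indices (SDNN, RMSSD, pNN50) are computed from RR-interval segments and fed to a logistic regression. The study's nonlinear indices and its personalized, history-based prediction scheme are not reproduced, and the RR series here are synthetic.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def hrv_features(rr_ms: np.ndarray) -> np.ndarray:
    """Basic time-domain HRV features from one 5-minute RR-interval segment (ms)."""
    diff = np.diff(rr_ms)
    sdnn = np.std(rr_ms, ddof=1)                 # overall variability
    rmssd = np.sqrt(np.mean(diff ** 2))          # short-term variability
    pnn50 = np.mean(np.abs(diff) > 50) * 100     # % successive differences > 50 ms
    return np.array([sdnn, rmssd, pnn50])

# Synthetic RR series for euthymic (0) vs non-euthymic (1) nights; real data would
# come from the wearable ECG described in the paper.
rng = np.random.default_rng(4)
labels = np.array([0, 1] * 20)
segments = [rng.normal(850, 60 if y == 0 else 35, 300) for y in labels]

X = np.array([hrv_features(s) for s in segments])
clf = LogisticRegression(max_iter=1000).fit(X, labels)
print("training accuracy:", clf.score(X, labels))
```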
---
paper_title: A systematic review of the evidence of the burden of bipolar disorder in Europe
paper_content:
Background: Bipolar disorder is recognized as a major mental health issue, and its economic impact has been examined in the United States. However, there exists a general scarcity of published studies and lack of standardized data on the burden of the illness across European countries. In this systematic literature review, we highlight the epidemiological, clinical, and economic outcomes of bipolar disorder in Europe. Methods: A systematic review of publications from the last 10 years relating to the burden of bipolar disorder was conducted, including studies on epidemiology, patient-related issues, and costs. Results: Data from the UK, Germany, and Italy indicated a prevalence of bipolar disorder of ~1%, and a misdiagnosis rate of 70% from Spain. In one study, up to 75% of patients had at least one DSM-IV comorbidity, commonly anxiety disorders and substance/alcohol abuse. Attempted suicide rates varied between 21%–54%. In the UK, the estimated rate of premature mortality of patients with bipolar I disorder was 18%. The chronicity of bipolar disorder exerted a profound and debilitating effect on the patient. In Germany, 70% of patients were underemployed, and 72% received disability payments. In Italy, 63%–67% of patients were unemployed. In the UK, the annual costs of unemployment and suicide were £1510 million and £179 million, respectively, at 1999/2000 prices. The estimated UK national cost of bipolar disorder was £4.59 billion, with hospitalization during acute episodes representing the largest component. Conclusion: Bipolar disorder is a major and underestimated health problem in Europe. A number of issues impact on the economic burden of the disease, such as comorbidities, suicide, early death, unemployment or underemployment. Direct costs of bipolar disorder are mainly associated with hospitalization during acute episodes. Indirect costs are a major contributor to the overall economic burden but are not always recognized in research studies.
---
paper_title: Suicide and other causes of mortality in bipolar disorder: a longitudinal study
paper_content:
Background. The high risk of suicide in bipolar disorder is well recognized, but may have been overestimated. There is conflicting evidence about deaths from other causes and little known about risk factors for suicide. We aimed to estimate suicide and mortality rates in a cohort of bipolar patients and to identify risk factors for suicide. Method. All patients who presented for the first time with a DSM-IV diagnosis of bipolar I disorder in a defined area of southeast London over a 35-year period (1965–1999) were identified. Mortality rates were compared with those of the 1991 England and Wales population, indirectly standardized for age and gender. Univariate and multivariate analyses were used to test potential risk factors for suicide. Results. Of the 239 patients in the cohort, 235 (98·3%) were traced. Forty-two died during the 4422 person-years of follow-up, eight from suicide. The standardized mortality ratio (SMR) for suicide was 9·77 [95% confidence interval (CI) 4·22–19·24], which, although significantly elevated compared to the general population, represented a lower case fatality than expected from previous literature. Deaths from all other causes were not excessive for the age groups studied in this cohort. Alcohol abuse [hazard ratio (HR) 6·81, 95% CI 1·69–27·36, p =0·007] and deterioration from pre-morbid level of functioning up to a year after onset (HR 5·20, 95% CI 1·24–21·89, p =0·024) were associated with increased risk of suicide. Conclusions. Suicide is significantly increased in unselected bipolar patients but actual case fatality is not as high as previously claimed. A history of alcohol abuse and deterioration in function predict suicide in bipolar disorder.
---
paper_title: Suicide attempts in bipolar I and bipolar II disorder: a review and meta-analysis of the evidence
paper_content:
Objective: The prevalence of suicide attempts (SA) in bipolar II disorder (BPII), particularly in comparison to the prevalence in bipolar I disorder (BPI), is an understudied and controversial issue with mixed results. To date, there has been no comprehensive review of the published prevalence data for attempted suicide in BPII. Methods: We conducted a literature review and meta-analysis of published reports that specified the proportion of individuals with BPII in their presentation of SA data. Systematic searching yielded 24 reports providing rates of SA in BPII and 21 reports including rates of SA in both BPI and BPII. We estimated the prevalence of SA in BPII by combining data across reports of similar designs. To compare rates of SA in BPII and BPI, we calculated a pooled odds ratio (OR) and 95% confidence interval (CI) with random-effect meta-analytic techniques with retrospective data from 15 reports that detailed rates of SA in both BPI and BPII. Results: Among the 24 reports with any BPII data, 32.4% (356/1099) of individuals retrospectively reported a lifetime history of SA, 19.8% (93/469) prospectively reported attempted suicide, and 20.5% (55/268) of index attempters were diagnosed with BPII. In 15 retrospective studies suitable for meta-analysis, the prevalence of attempted suicide in BPII and BPI was not significantly different: 32.4% and 36.3%, respectively (OR = 1.21, 95% CI: 0.98–1.48, p = 0.07). Conclusion: The contribution of BPII to suicidal behavior is considerable. Our findings suggest that there is no significant effect of bipolar subtype on rate of SA. Our findings are particularly alarming in concert with other evidence, including (i) the well-documented predictive role of SA for completed suicide and (ii) the evidence suggesting that individuals with BPII use significantly more violent and lethal methods than do individuals with BPI. To reduce suicide-related morbidity and mortality, routine clinical care for BPII must include ongoing risk assessment and interventions targeted at risk factors.
---
paper_title: Costs of Bipolar Disorder
paper_content:
Bipolar disorder is a chronic affective disorder that causes significant economic burden to patients, families and society. It has a lifetime prevalence of approximately 1.3%. Bipolar disorder is characterised by recurrent mania or hypomania and depressive episodes that cause impairments in functioning and health-related quality of life. Patients require acute and maintenance therapy delivered via inpatient and outpatient treatment. Patients with bipolar disorder often have contact with the social welfare and legal systems; bipolar disorder impairs occupational functioning and may lead to premature mortality through suicide. This review examines the symptomatology of bipolar disorder and identifies those features that make it difficult and costly to treat. Methods for assessing direct and indirect costs are reviewed. We report on comprehensive cost studies as well as administrative claims data and program evaluations. The majority of data is drawn from studies conducted in the US; however, we discuss European studies when appropriate. Only two comprehensive cost-of-illness studies on bipolar disorder, one prevalence-based and one incidence-based, have been reported. There are, however, several comprehensive cost-of-illness studies measuring economic burden of affective disorders including bipolar disorder. Estimates of total costs of affective disorders in the US range from $US30.4-43.7 billion (1990 values). In the prevalence-based cost-of-illness study on bipolar disorder, total annual costs were estimated at $US45.2 billion (1991 values). In the incidence-based study, lifetime costs were estimated at $US24 billion. Although there have been recent advances in pharmacotherapy and outpatient therapy, hospitalisation still accounts for a substantial portion of the direct costs. A variety of outpatient services are increasingly important for the care of patients with bipolar disorder and costs in this area continue to grow. Indirect costs due to morbidity and premature mortality comprise a large portion of the cost of illness. Lost workdays or inability to work due to the disease cause high morbidity costs. Intangible costs such as family burden and impaired health-related quality of life are common, although it has proved difficult to attach monetary values to these costs.
---
paper_title: The 16-Item quick inventory of depressive symptomatology (QIDS), clinician rating (QIDS-C), and self-report (QIDS-SR): a psychometric evaluation in patients with chronic major depression
paper_content:
Background: The 16-item Quick Inventory of Depressive Symptomatology (QIDS), a new measure of depressive symptom severity derived from the 30-item Inventory of Depressive Symptomatology (IDS), is available in both self-report (QIDS-SR16) and clinician-rated (QIDS-C16) formats. Methods: This report evaluates and compares the psychometric properties of the QIDS-SR16 in relation to the IDS-SR30 and the 24-item Hamilton Rating Scale for Depression (HAM-D24) in 596 adult outpatients treated for chronic nonpsychotic, major depressive disorder. Results: Internal consistency was high for the QIDS-SR16 (Cronbach's α = .86), the IDS-SR30 (Cronbach's α = .92), and the HAM-D24 (Cronbach's α = .88). QIDS-SR16 total scores were highly correlated with IDS-SR30 (.96) and HAM-D24 (.86) total scores. Item–total correlations revealed that several similar items were highly correlated with both QIDS-SR16 and IDS-SR30 total scores. Roughly 1.3 times the QIDS-SR16 total score is predictive of the HAM-D17 (17-item version of the HAM-D) total score. Conclusions: The QIDS-SR16 was as sensitive to symptom change as the IDS-SR30 and HAM-D24, indicating high concurrent validity for all three scales. The QIDS-SR16 has highly acceptable psychometric properties, which supports the usefulness of this brief rating of depressive symptom severity in both clinical and research settings.
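As a worked example of the internal-consistency statistic reported above, the short sketch below computes Cronbach's alpha from a respondents-by-items score matrix; the data are synthetic and the item scoring range is an assumption, not the study's dataset.

```python
# Sketch: Cronbach's alpha for a k-item scale, computed from a
# respondents x items score matrix (synthetic data, not the study's dataset).
import numpy as np

rng = np.random.default_rng(1)
scores = rng.integers(0, 4, size=(100, 16))   # e.g. 100 respondents, 16 items scored 0-3

def cronbach_alpha(items: np.ndarray) -> float:
    # alpha = k/(k-1) * (1 - sum of item variances / variance of total score)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# Independent random items give alpha near zero; correlated items push it toward 1.
print(f"Cronbach's alpha: {cronbach_alpha(scores):.2f}")
```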
---
paper_title: Current Use of Depression Rating Scales in Mental Health Setting
paper_content:
Objective: This study aimed to investigate the current use of depression rating scales by psychiatrists and clinical psychologists in Korea.
---
paper_title: Carer distress : A prospective, population-based study
paper_content:
This study investigates whether transitions into and out of unpaid caregiving are associated with increased risk for onset of or delayed recovery from psychological distress, and traces the prevalence of distress across successive years of caring activity and after caregiving has ceased. The analysis is based on data from the British Household Panel Survey covering 3000 would-be carers, 2900 former carers, and 11,100 non-carers during the 1990s; their psychological well-being was assessed at annual intervals using the General Health Questionnaire. Carers providing long hours of care over extended spells present raised levels of distress, women more so than men. Compared with non-carers, risk for onset of distress increases progressively with the amount of time devoted to caregiving each week. Adverse effects on the psychological well-being of heavily involved carers are most pronounced around the start of their care episodes and when caregiving ends. Ongoing care increases their susceptibility to recurring distress, and adverse health effects are evident beyond the end of their caregiving episodes. Several groups of carers experience psychological health inequalities compared with non-carers, especially those looking after a spouse or partner, and mothers caring for a sick or disabled child. The findings underline the importance for effective carer support and health promotion of early identification of carers, monitoring high risk groups, timing appropriate interventions, and targeting resources.
---
paper_title: Assessment Scales in Depression, Mania and Anxiety
paper_content:
The first brief chapter in this book rehearses the arguments for using rating scales routinely in evidence-based practice in order to evaluate effectiveness, inform treatment choice and improve outcomes. Scales are also useful to highlight or detect symptoms that might be missed by a more
---
paper_title: Novel Technology as Platform for Interventions for caregivers and individuals with Severe Mental Health Illnesses: A Systematic Review
paper_content:
Background: Severe mental illnesses (SMIs) have been found to be associated with increased morbidity and mortality, a need for treatment and care in patients themselves, and burden for relatives as caregivers. A growing number of web-based and mobile software applications have appeared that aim to address various barriers with respect to access to care. Our objective was to review and summarize recent advancements in such interventions for caregivers of individuals with an SMI. Methods: We conducted a systematic search for papers evaluating interactive mobile or web-based software (using no or only minimal support from a professional) specifically aimed at supporting informal caregivers. We also searched for those supporting patients with SMI so as not to miss any which might include relatives. Results: Out of a total of 1673 initial hits, we identified 11 articles reporting on 9 different mobile or web-based software programs. The main result is that none of those studies focused on caregivers, and the ones we identified using mobile or web-based applications were just for patients and not their relatives. Limitations: Differentiating between online and offline available software might not always have been totally reliable, and we might therefore have missed some studies. Conclusions: In summary, the studies provided evidence that remotely accessible interventions for patients with SMI are feasible and acceptable to patients. No such empirically evaluated program was available for informal caregivers such as relatives. Keeping in mind the influential role of those informal caregivers in the process of treatment and self-management, this is highly relevant for public health. Supporting informal caregivers can improve the well-being of both caregivers and patients.
---
paper_title: Mobile Interventions for Severe Mental Illness: Design and Preliminary Data from Three Approaches
paper_content:
Mobile devices can be used to deliver psychosocial interventions, yet there is little prior application in severe mental illness. We provide the rationale, design, and preliminary data from three ongoing clinical trials of mobile interventions developed for bipolar disorder or schizophrenia. Project 1 used a personal digital assistant to prompt engagement in personalized self-management behaviors based on real-time data. Project 2 employed experience sampling via text messages to facilitate case management. Project 3 built on group functional skills training for schizophrenia by incorporating between-session mobile phone contacts with therapists. Preliminary findings were of minimal participant attrition, and no broken devices; yet, several operational and technical barriers needed to be addressed. Adherence was similar to that reported in non-psychiatric populations, with high participant satisfaction. Thus, mobile devices appear feasible and acceptable in augmenting psychosocial interventions for severe mental illness, with future research in establishing efficacy, cost-effectiveness, and ethical and safety protocols.
---
paper_title: Beating Bipolar: exploratory trial of a novel Internet-based psychoeducational treatment for bipolar disorder.
paper_content:
OBJECTIVES: Psychoeducational approaches are promising interventions for the long-term management of bipolar disorder. In consultation with professionals, patients, and their families we have developed a novel web-based psychoeducational intervention for bipolar disorder called Beating Bipolar. We undertook a preliminary exploratory randomized trial to examine efficacy, feasibility and acceptability. METHODS: This was an exploratory randomized controlled trial of Beating Bipolar (current controlled trials registration number: ISRCTN81375447). The control arm was treatment-as-usual and the a priori primary outcome measure was quality of life [measured by the brief World Health Organization Quality of Life (WHOQOL-BREF) scale]. Secondary outcomes included psychosocial functioning, insight, depressive and manic symptoms and relapse, and use of healthcare resources. Fifty participants were randomized to either the Beating Bipolar intervention plus treatment-as-usual or just treatment-as-usual. The intervention was delivered over a four-month period and outcomes were assessed six months later. RESULTS: There was no significant difference between the intervention and control groups on the primary outcome measure (total WHOQOL-BREF score) but there was a modest improvement within the psychological subsection of the WHOQOL-BREF for the intervention group relative to the control group. There were no significant differences between the groups on any of the secondary outcome measures. CONCLUSIONS: Beating Bipolar is potentially a safe and engaging intervention which can be delivered remotely to large numbers of patients with bipolar disorder at relatively low cost. It may have a modest effect on psychological quality of life. Further work is required to establish the impact of this intervention on insight, knowledge, treatment adherence, self-efficacy and self-management skills.
---
paper_title: Designing mobile health technology for bipolar disorder: a field trial of the monarca system
paper_content:
An increasing number of pervasive healthcare systems are being designed, that allow people to monitor and get feedback on their health and wellness. To address the challenges of self-management of mental illnesses, we have developed the MONARCA system - a personal monitoring system for bipolar patients. We conducted a 14 week field trial in which 12 patients used the system, and we report findings focusing on their experiences. The results were positive; compared to using paper-based forms, the adherence to self-assessment improved; the system was considered very easy to use; and the perceived usefulness of the system was high. Based on this study, the paper discusses three HCI questions related to the design of personal health technologies; how to design for disease awareness and self-treatment, how to ensure adherence to personal health technologies, and the roles of different types of technology platforms.
---
paper_title: Keeping therapies simple: psychoeducation in the prevention of relapse in affective disorders
paper_content:
Psychological interventions for mood disorders can be divided into ‘skilled’ and ‘simple’. Psychoeducation belongs to the latter group: a simple and illness-focused therapy with prophylactic efficacy in all major mood disorders. Successful implementation of psychoeducation requires a proper setting, including open-door policy, team effort and empowerment of the therapeutic alliance.
---
paper_title: IntelliCare: An Eclectic, Skills-Based App Suite for the Treatment of Depression and Anxiety
paper_content:
Background: Digital mental health tools have tended to use psychoeducational strategies based on treatment orientations developed and validated outside of digital health. These features do not map well to the brief but frequent ways that people use mobile phones and mobile phone apps today. To address these challenges, we developed a suite of apps for depression and anxiety called IntelliCare, each developed with a focused goal and interactional style. IntelliCare apps prioritize interactive skills training over education and are designed for frequent but short interactions. Objective: The overall objective of this study was to pilot a coach-assisted version of IntelliCare and evaluate its use and efficacy at reducing symptoms of depression and anxiety. Methods: Participants, recruited through a health care system, Web-based and community advertising, and clinical research registries, were included in this single-arm trial if they had elevated symptoms of depression or anxiety. Participants had access to the 14 IntelliCare apps from Google Play and received 8 weeks of coaching on the use of IntelliCare. Coaching included an initial phone call plus 2 or more texts per week over the 8 weeks, with some participants receiving an additional brief phone call. Primary outcomes included the Patient Health Questionnaire-9 (PHQ-9) for depression and the Generalized Anxiety Disorder-7 (GAD-7) for anxiety. Participants were compensated up to US $90 for completing all assessments; compensation was not for app use or treatment engagement. Results: Of the 99 participants who initiated treatment, 90.1% (90/99) completed 8 weeks. Participants showed substantial reductions in the PHQ-9 and GAD-7 (P<.001). Participants used the apps an average of 195.4 (SD 141) times over the 8 weeks. The average length of use was 1.1 (SD 2.1) minutes, and 95% of participants downloaded 5 or more of the IntelliCare apps. Conclusions: This study supports the IntelliCare framework of providing a suite of skills-focused apps that can be used frequently and briefly to reduce symptoms of depression and anxiety. The IntelliCare system is elemental, allowing individual apps to be used or not used based on their effectiveness and utility, and it is eclectic, viewing treatment strategies as elements that can be applied as needed rather than adhering to a singular, overarching, theoretical model. Trial Registration: Clinicaltrials.gov NCT02176226; http://clinicaltrials.gov/ct2/show/NCT02176226 (Archived by WebCite at http://www.webcitation/6mQZuBGk1) [J Med Internet Res 2017;19(1):e10]
---
paper_title: Mobile technology for medication adherence in people with mood disorders: A systematic review
paper_content:
Background: Medication non-adherence is a critical challenge for many patients diagnosed with mood disorders (Goodwin and Jamison, 1990). There is a need for alternative strategies that improve adherence among patients with mood disorders that are cost-effective, able to reach large patient populations, easy to implement, and that allow for communication with patients outside of in-person visits. Technology-based approaches to promote medication adherence are increasingly being explored to address this need. The aim of this paper is to provide a systematic review of the use of mobile technologies to improve medication adherence in patients with mood disorders. Methods: A total of nine articles were identified as describing mobile technology targeting medication adherence in mood disorder populations. Results: The included studies showed overall satisfaction and feasibility of mobile technology, and reduction in mood symptoms; however, few examined the effectiveness of mobile technology in improving medication adherence through randomized controlled trials. Limitations: Given the limited number of studies, further research is needed to determine long-term effectiveness. Conclusions: Mobile technology has the potential to improve medication adherence and can be further utilized for symptom tracking, side-effect tracking, direct links to prescription refills, and providing patients with greater ownership over their treatment progress.
---
paper_title: Monitoring activity of patients with bipolar disorder using smart phones
paper_content:
Mobile computing is changing the landscape of clinical monitoring and self-monitoring. One of the major impacts will be in healthcare, where increase in number of sensing modalities is providing more and more information on the state of overall wellbeing, behaviour and health. There are numerous applications of mobile computing that range from wellbeing applications, such as physical fitness, stress or burnout up to applications that target mental disorders including bipolar disorder. Use of information provided by mobile computing devices can track the state of the subjects and also allow for experience sampling in order to gather subjective information. This paper reports on the results obtained from a medical trial with monitoring of bipolar disorder patients and how the episodes of the diseases correlate to the analysis of the data sampled from mobile phone acting as a monitoring device.
---
paper_title: Mobile phones as medical devices in mental disorder treatment: an overview
paper_content:
Mental disorders can have a significant, negative impact on sufferers' lives, as well as on their friends and family, healthcare systems and other parts of society. Approximately 25% of all people in Europe and the USA experience a mental disorder at least once in their lifetime. Currently, monitoring mental disorders relies on subjective clinical self-reporting rating scales, which were developed more than 50 years ago. In this paper, we discuss how mobile phones can support the treatment of mental disorders by (1) implementing human-computer interfaces to support therapy and (2) collecting relevant data from patients' daily lives to monitor the current state and development of their mental disorders. Concerning the first point, we review various systems that utilize mobile phones for the treatment of mental disorders. We also evaluate how their core design features and dimensions can be applied in other, similar systems. Concerning the second point, we highlight the feasibility of using mobile phones to collect comprehensive data including voice data, motion and location information. Data mining methods are also reviewed and discussed. Based on the presented studies, we summarize advantages and drawbacks of the most promising mobile phone technologies for detecting mood disorders like depression or bipolar disorder. Finally, we discuss practical implementation details, legal issues and business models for the introduction of mobile phones as medical devices.
---
paper_title: Advances in Electrodermal Activity Processing with Applications for Mental Health
paper_content:
This book explores Autonomic Nervous System (ANS) dynamics as investigated through Electrodermal Activity (EDA) processing. It presents groundbreaking research in the technical field of biomedical engineering, especially biomedical signal processing, as well as the clinical fields of psychometrics, affective computing, and psychological assessment. This volume describes some of the most complete, effective, and personalized methodologies for extracting data from a non-stationary, nonlinear EDA signal in order to characterize the affective and emotional state of a human subject. These methodologies are underscored by discussion of real-world applications in mood assessment. The text also examines the physiological bases of emotion recognition through noninvasive monitoring of the autonomic nervous system. This is an ideal book for biomedical engineers, physiologists, neuroscientists, applied mathematicians, psychiatric and psychological clinicians, and graduate students in these fields. This book also: expertly introduces a novel approach for EDA analysis based on convex optimization and sparsity, a topic of rapidly increasing interest; authoritatively presents groundbreaking research achieved using EDA as an exemplary biomarker of ANS dynamics; and deftly explores EDA's potential as a source of reliable and effective markers for the assessment of emotional responses in healthy subjects, as well as for the recognition of pathological mood states in bipolar patients.
---
paper_title: Smartphone-centred wearable sensors network for monitoring patients with bipolar disorder
paper_content:
Bipolar Disorder is a severe form of mental illness. It is characterized by alternating episodes of mania and depression, and it is typically treated with a combination of pharmacotherapy and psychotherapy. Recognizing early warning signs of upcoming phases of mania or depression would be of great help for personalized medical treatment. Unfortunately, this is a difficult task for both patients and doctors. In this paper we present the MONARCA wearable system, which is meant to recognize early warning signs and predict manic or depressive episodes. The system is a smartphone-centred, minimally invasive wearable sensor network that is being developed within the framework of the MONARCA European project.
---
paper_title: Towards long term monitoring of electrodermal activity in daily life
paper_content:
Manic depression, also known as bipolar disorder, is a common and severe form of mental disorder. The European research project MONARCA aims at developing and validating mobile technologies for multi-parametric, long term monitoring of physiological and behavioral information relevant to bipolar disorder. One aspect of MONARCA is to investigate the long term monitoring of Electrodermal activity (EDA) to support the diagnosis and treatment of bipolar disorder patients. EDA is known as an indicator of the emotional state and the stress level of a person. To realize a long-term monitoring of the EDA, the integration of the sensor system in the shoe or sock is a promising approach. This paper presents a first step towards such a sensor system. In a feasibility study including 8 subjects, we investigate the correlation between EDA measurements at the fingers, which is the most established sensing site, with measurements of the EDA at the feet. The results indicate that 88% of the evoked skin conductance responses (SCRs) occur at both sensing sites. When using an action movie as psychophysiologically activating stimulus, we have found weaker reactivity in the foot than in the hand EDA. The results also suggest that the influence of moderate physical activity on EDA measurements is low and has a similar effect for both recording sites. This suggests that the foot recording location is suitable for recordings in daily life even in the presence of moderate movement.
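A minimal sketch of the kind of concordance analysis reported here, under illustrative assumptions: skin conductance responses are detected at two recording sites with a simple peak detector, and a response at one site is counted as concordant when the other site shows a peak within a short tolerance window. The sampling rate, thresholds and synthetic signals are not those of the study.

```python
# Sketch: concordance of skin conductance responses (SCRs) detected at two
# recording sites (e.g. hand vs. foot). Signals and detection parameters are
# illustrative, not those used in the study.
import numpy as np
from scipy.signal import find_peaks

fs = 32                                  # assumed sampling rate (Hz)
t = np.arange(0, 120, 1 / fs)
rng = np.random.default_rng(2)

def synthetic_eda(onsets, amp):
    """Toy phasic EDA: one SCR-shaped bump per onset plus measurement noise."""
    sig = 0.01 * rng.standard_normal(t.size)
    for onset in onsets:
        rise = np.clip((t - onset) / 0.7, 0.0, None)
        sig += amp * rise * np.exp(1.0 - rise)
    return sig

onsets = np.array([10.0, 35.0, 60.0, 90.0])
hand = synthetic_eda(onsets, amp=0.5)
foot = synthetic_eda(onsets + 0.3, amp=0.3)   # weaker, slightly delayed responses

def scr_peak_times(signal):
    peaks, _ = find_peaks(signal, height=0.1, prominence=0.05, distance=fs)
    return t[peaks]

hand_peaks, foot_peaks = scr_peak_times(hand), scr_peak_times(foot)

# Count a hand SCR as concordant if a foot SCR occurs within +/- 2 s of it.
concordant = sum(bool(np.any(np.abs(foot_peaks - p) <= 2.0)) for p in hand_peaks)
print(f"{concordant}/{len(hand_peaks)} hand SCRs also detected at the foot")
```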
---
paper_title: Decomposition of skin conductance data by means of nonnegative deconvolution
paper_content:
Skin conductance (SC) data are usually characterized by a sequence of overlapping phasic skin conductance responses (SCRs) overlying a tonic component. The variability of SCR shapes hereby complicates the proper decomposition of SC data. A method is proposed for full decomposition of SC data into tonic and phasic components. A two-compartment diffusion model was found to adequately describe a standard SCR shape based on the process of sweat diffusion. Nonnegative deconvolution is used to decompose SC data into discrete compact responses and at the same time assess deviations from the standard SCR shape, which could be ascribed to the additional process of pore opening. Based on the result of single non-overlapped SCRs, response parameters can be estimated precisely as shown in a paradigm with varying inter-stimulus intervals.
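The decomposition idea can be sketched as follows: the skin conductance signal is modelled as the convolution of a sparse, nonnegative driver with a standard biexponential (Bateman-type) SCR shape, and the driver is recovered by nonnegative least squares. The time constants, sampling rate and solver below are illustrative choices, not the paper's exact model or parameters.

```python
# Sketch: nonnegative deconvolution of skin conductance data with a
# biexponential (Bateman-type) SCR kernel. Parameters are illustrative.
import numpy as np
from scipy.linalg import toeplitz
from scipy.optimize import nnls

fs = 8                        # assumed sampling rate (Hz)
t = np.arange(0, 60, 1 / fs)
tau_rise, tau_decay = 0.75, 2.0   # assumed SCR time constants (s)
kernel = np.exp(-t / tau_decay) - np.exp(-t / tau_rise)
kernel /= kernel.max()

# Build a toy SC signal: two phasic responses on top of a slow tonic drift.
rng = np.random.default_rng(3)
true_driver = np.zeros_like(t)
true_driver[[10 * fs, 30 * fs]] = [1.0, 0.6]     # sudomotor bursts at 10 s and 30 s
tonic = 2.0 + 0.005 * t
sc = tonic + np.convolve(true_driver, kernel)[: t.size] + 0.01 * rng.standard_normal(t.size)

# Deconvolution as nonnegative least squares: sc - tonic ~= K @ driver,
# where K is the lower-triangular convolution matrix of the SCR kernel.
K = toeplitz(kernel, np.zeros_like(kernel))
driver, _ = nnls(K, sc - tonic)

print("Recovered driver activity near (s):", t[driver > 0.1 * driver.max()])
```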
---
paper_title: Heart rate variability in bipolar mania and schizophrenia
paper_content:
Background: Autonomic nervous system (ANS) dysfunction and reduced heart rate variability (HRV) have been reported in a wide variety of psychiatric disorders, but have not been well characterized in bipolar mania. We recorded cardiac activity and assessed HRV in acutely hospitalized manic bipolar (BD) and schizophrenia (SCZ) patients compared to age- and gender-matched healthy comparison (HC) subjects. Method: HRV was assessed using time domain, frequency domain, and nonlinear analyses in 23 manic BD, 14 SCZ, and 23 HC subjects during a 5 min rest period. Psychiatric symptoms were assessed by administration of the Brief Psychiatric Rating Scale (BPRS) and the Young Mania Rating Scale (YMRS). Results: Manic BD patients demonstrated a significant reduction in HRV, parasympathetic activity, and cardiac entropy compared to HC subjects, while SCZ patients demonstrated a similar, but non-significant, trend towards lower HRV and entropy. Reduction in parasympathetic tone was significantly correlated with higher YMRS scores and the unusual thought content subscale on the BPRS. Decreased entropy was associated with increased aggression and diminished personal hygiene on the YMRS scale. Conclusion: Cardiac function in manic BD individuals is characterized by decreased HRV, reduced vagal tone, and a decline in heart rate complexity as assessed by linear and nonlinear methods of analysis. Autonomic dysregulation is associated with more severe psychiatric symptoms, suggesting HRV dysfunction in this disorder may be dependent on the phase of the illness.
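For readers unfamiliar with the measures involved, the sketch below computes representative time-domain (SDNN, RMSSD), frequency-domain (LF/HF ratio from a Welch periodogram) and nonlinear (sample entropy) indices from a synthetic RR-interval series; it is illustrative only and not the study's analysis code.

```python
# Sketch: representative HRV indices from an RR-interval series (seconds).
import numpy as np
from scipy.interpolate import interp1d
from scipy.signal import welch

rng = np.random.default_rng(4)
rr = 0.8 + 0.05 * np.sin(2 * np.pi * 0.1 * np.arange(300)) + 0.02 * rng.standard_normal(300)

# Time domain (in ms)
sdnn = np.std(rr, ddof=1) * 1000
rmssd = np.sqrt(np.mean(np.diff(rr) ** 2)) * 1000

# Frequency domain: resample the tachogram evenly at 4 Hz, then Welch PSD.
beat_times = np.cumsum(rr)
fs = 4.0
grid = np.arange(beat_times[0], beat_times[-1], 1 / fs)
rr_even = interp1d(beat_times, rr, kind="cubic")(grid)
f, pxx = welch(rr_even - rr_even.mean(), fs=fs, nperseg=256)
lf = pxx[(f >= 0.04) & (f < 0.15)].sum()   # band powers; df cancels in the ratio
hf = pxx[(f >= 0.15) & (f < 0.40)].sum()

# Nonlinear: sample entropy (m = 2, r = 0.2 * SD), naive O(n^2) implementation.
def sample_entropy(x, m=2, r_frac=0.2):
    x = np.asarray(x)
    r = r_frac * x.std()
    def match_count(mm):
        templates = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        d = np.max(np.abs(templates[:, None] - templates[None, :]), axis=2)
        return (np.sum(d <= r) - len(templates)) / 2   # unordered pairs, no self-matches
    return -np.log(match_count(m + 1) / match_count(m))

print(f"SDNN={sdnn:.1f} ms  RMSSD={rmssd:.1f} ms  LF/HF={lf / hf:.2f}  SampEn={sample_entropy(rr):.2f}")
```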
---
paper_title: Heart Rate Variability as an Index of Regulated Emotional Responding
paper_content:
The study of individual differences in emotional responding can provide considerable insight into interpersonal dynamics and the etiology of psychopathology. Heart rate variability (HRV) analysis is emerging as an objective measure of regulated emotional responding (generating emotional responses of appropriate timing and magnitude). This review provides a theoretical and empirical rationale for the use of HRV as an index of individual differences in regulated emotional responding. Two major theoretical frameworks that articulate the role of HRV in emotional responding are presented, and relevant empirical literature is reviewed. The case is made that HRV is an accessible research tool that can increase the understanding of emotion in social and psychopathological processes.
---
paper_title: Wearable Monitoring for Mood Recognition in Bipolar Disorder Based on History-Dependent Long-Term Heart Rate Variability Analysis
paper_content:
Current clinical practice in diagnosing patients affected by psychiatric disorders such as bipolar disorder is based only on verbal interviews and scores from specific questionnaires, and no reliable and objective psycho-physiological markers are taken into account. In this paper, we propose to use a wearable system based on a comfortable t-shirt with integrated fabric electrodes and sensors able to acquire electrocardiogram, respirogram, and body posture information in order to detect a pattern of objective physiological parameters to support diagnosis. Moreover, we implemented a novel ad hoc methodology of advanced biosignal processing able to effectively recognize four possible clinical mood states in bipolar patients (i.e., depression, mixed state, hypomania, and euthymia) continuously monitored up to 18 h, using heart rate variability information exclusively. Mood assessment is intended as an intrasubject evaluation in which the patient's states are modeled as a Markov chain, i.e., in the time domain, each mood state refers to the previous one. As validation, eight bipolar patients were monitored collecting and analyzing more than 400 h of autonomic and cardiovascular activity. Experimental results demonstrate that our novel concept of personalized and pervasive monitoring constitutes a viable and robust clinical decision support system for bipolar disorders recognizing mood states with a total classification accuracy up to 95.81%.
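A toy sketch of the history-dependent idea described here, in which mood states are treated as a Markov chain so that each classification is conditioned on the previously recognized state: the last predicted state is simply appended to the current HRV feature vector before classification. The features, classifier and transition behaviour are assumptions for illustration.

```python
# Sketch: history-dependent mood-state recognition, where the previously
# recognized state is fed back as an extra feature (toy data and features).
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(5)
states = ["depression", "mixed", "hypomania", "euthymia"]

# Toy training set: HRV feature vectors labelled with (previous state, current state).
n = 400
hrv = rng.normal(size=(n, 6))
prev_state = rng.integers(0, 4, size=n)
curr_state = (prev_state + rng.integers(-1, 2, size=n)) % 4   # states tend to persist or shift slowly

X_train = np.hstack([hrv, prev_state[:, None]])
clf = SVC(kernel="rbf", gamma="scale").fit(X_train, curr_state)

# At monitoring time, chain the predictions: each new window is classified
# together with the last predicted state.
last_state = 3   # assume the patient starts euthymic
for window in rng.normal(size=(5, 6)):
    last_state = int(clf.predict(np.hstack([window, [last_state]])[None, :])[0])
    print(states[last_state])
```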
---
paper_title: Sleep disorders as core symptoms of depression
paper_content:
Links between sleep and depression are strong. About three quarters of depressed patients have insomnia symptoms, and hypersomnia is present in about 40% of young depressed adults and 10% of older patients, with a preponderance in females. The symptoms cause huge distress, have a major impact on quality of life, and are a strong risk factor for suicide. As well as the subjective experience of sleep symptoms, there are well-documented changes in objective sleep architecture in depression. Mechanisms of sleep regulation and how they might be disturbed in depression are discussed. The sleep symptoms are often unresolved by treatment, and confer a greater risk of relapse and recurrence. Epidemiological studies have pointed out that insomnia in nondepressed subjects is a risk factor for later development of depression. There is therefore a need for more successful management of sleep disturbance in depression, in order to improve quality of life in these patients and reduce an important factor in depressive relapse and recurrence.
---
paper_title: Hypersomnia subtypes, sleep and relapse in bipolar disorder.
paper_content:
Background: Though poorly defined, hypersomnia is associated with negative health outcomes, and new-onset and recurrence of psychiatric illness. Lack of definition impedes generalizability across studies. The present research clarifies hypersomnia diagnoses in bipolar disorder by exploring possible subgroups and their relationship to prospective sleep data and relapse into mood episodes.
---
paper_title: Rate of switch from depression into mania after therapeutic sleep deprivation in bipolar depression
paper_content:
Sleep deprivation is a potentially useful non-pharmacological treatment for depression. A relationship between sleep loss and the onset of mania has been reported, so it is possible that a switch from depression into mania after sleep deprivation might be expected in bipolar depressed patients who are treated with sleep deprivation. In a sample of 206 bipolar depressed patients treated with three cycles of sleep deprivation, alone or in combination with heterogeneous medications, we observed a 4.85% switch rate into mania and a 5.83% switch rate into hypomania. These percentages are comparable to those observed with antidepressant drug treatments.
---
paper_title: Can home-monitoring of sleep predict depressive episodes in bipolar patients?
paper_content:
The aim of this study is to evaluate autonomic regulation during depressive stages in bipolar patients, in order to test new quantitative and objective measures for detecting such events. A sensorized T-shirt was used to record the ECG signal and body movements during the night, from which HRV data and sleep macrostructure were estimated and analyzed. Nine out of the 20 extracted features were found to be significant (p<0.05) in discriminating between depressive and non-depressive states. These features represent HRV dynamics in both the linear and nonlinear domains, as well as parameters linked to sleep modulation.
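The per-feature screening mentioned above can be illustrated with a short sketch: each candidate night-time feature is compared between depressive and non-depressive recordings with a nonparametric test, and features with p < 0.05 are retained. The test choice and the synthetic data are assumptions, not the study's procedure.

```python
# Sketch: screening night-time features (HRV and sleep parameters) for
# differences between depressive and non-depressive recordings (toy data).
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(6)
n_nights, n_features = 40, 20
features = rng.normal(size=(n_nights, n_features))
depressive = rng.integers(0, 2, size=n_nights).astype(bool)
features[depressive, :9] += 0.8          # make the first 9 features informative

significant = []
for j in range(n_features):
    _, p = mannwhitneyu(features[depressive, j], features[~depressive, j])
    if p < 0.05:
        significant.append(j)
print(f"{len(significant)} of {n_features} features significant at p<0.05: {significant}")
```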
---
paper_title: PSYCHE: Personalised monitoring systems for care in mental health
paper_content:
One of the areas with great demand for continuous monitoring, patient participation and medical prediction is that of mood disorders, more specifically bipolar disorder. Due to the unpredictable and episodic nature of bipolar disorder, it is necessary to take the traditional standard procedures of mood assessment through the administration of rating scales and questionnaires and integrate them with tangible data found in emerging research on central and peripheral changes in brain function that may be associated with the clinical status and response to treatment throughout the course of bipolar disorder. This paper presents the PSYCHE system, a personal, cost-effective, multi-parametric monitoring system based on textile platforms and portable sensing devices for the long-term and short-term acquisition of data from a selected class of patients affected by mood disorders. The acquired data will be processed and analyzed in the established platform, which takes into account the Electronic Health Records (EHR) of the patient, a personalized data referee system, as well as medical analysis in order to verify the diagnosis and help in the prognosis of the illness. Constant feedback and monitoring will be used to manage the illness, to give patients support, to facilitate interaction between patient and physician, and to alert professionals in case of patient relapse or incoming depressive or manic episodes, as the ultimate goal is to identify signal trends indicating detection and prediction of critical events.
---
paper_title: Staging Bipolar Disorder
paper_content:
The purpose of this study was to analyze the evidence supporting a staging model for bipolar disorder. The authors conducted an extensive Medline and Pubmed search of the published literature using a variety of search terms (staging, bipolar disorder, early intervention) to find relevant articles, which were reviewed in detail. Only recently specific proposals have been made to apply clinical staging to bipolar disorder. The staging model in bipolar disorder suggests a progression from prodromal (at-risk) to more severe and refractory presentations (Stage IV). A staging model implies a longitudinal appraisal of different aspects: clinical variables, such as number of episodes and subsyndromal symptoms, functional and cognitive impairment, comorbidity, biomarkers, and neuroanatomical changes. Staging models are based on the fact that response to treatment is generally better when it is introduced early in the course of the illness. It assumes that earlier stages have better prognosis and require simpler therapeutic regimens. Staging may assist in bipolar disorder treatment planning and prognosis, and emphasize the importance of early intervention. Further research is required in this exciting and novel area.
---
paper_title: Framework to Predict Bipolar Episodes
paper_content:
Patients suffering from Bipolar disorder (BD) experience repeated relapses of depressive and manic states. The extremity of this disorder can lead to many unpleasant events, even suicide attempts, which makes early detection vital. Presently, the primary method for identifying these states is evaluation by psychiatrists based on patients' self-reporting. However, ubiquitous use of mobile devices in combination with sensor fusion has the potential to provide a faster and more convenient alternative mode of diagnosis to better manage the illness. This paper proposes a continuous, autonomous, sensor-fusion-based monitoring framework to identify and predict state changes in patients suffering from bipolar disorder. Instead of relying on subjective self-reported data, the proposed system uses sensors to measure and collect heart rate variability, quantity and quality of sleep, and electrodermal activity data as predictors to discern between the two bipolar states. Using classification techniques together with a fusion algorithm, a prediction model is derived from all the sensor modalities; the data, gathered via a mobile application, are used to set alerts and to visualize the information and results efficiently.
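A minimal sketch of the fusion idea outlined above: one classifier per modality (HRV, sleep, EDA) produces a probability of an upcoming state change, and the probabilities are fused by weighted averaging before thresholding an alert. The modalities' feature sets, the weights and the classifiers are illustrative assumptions rather than the framework's actual design.

```python
# Sketch: late fusion of per-modality classifiers (HRV, sleep, EDA) into a
# single episode-risk score. Toy data; weights and models are assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
n = 200
y = rng.integers(0, 2, size=n)                      # 1 = state change ahead
modalities = {
    "hrv": rng.normal(y[:, None], 1.0, size=(n, 5)),
    "sleep": rng.normal(y[:, None], 1.5, size=(n, 3)),
    "eda": rng.normal(y[:, None], 2.0, size=(n, 4)),
}
weights = {"hrv": 0.5, "sleep": 0.3, "eda": 0.2}     # assumed fusion weights

models = {name: LogisticRegression(max_iter=1000).fit(X, y)
          for name, X in modalities.items()}

def fused_risk(sample_per_modality):
    # Weighted average of each modality's predicted probability of a change.
    return sum(weights[name] * models[name].predict_proba(x.reshape(1, -1))[0, 1]
               for name, x in sample_per_modality.items())

new_sample = {name: X[0] for name, X in modalities.items()}
risk = fused_risk(new_sample)
print(f"Fused episode risk: {risk:.2f} -> {'alert' if risk > 0.6 else 'no alert'}")
```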
---
paper_title: Why Don't Psychiatrists Use Scales to Measure Outcome When Treating Depressed Patients?
paper_content:
OBJECTIVE ::: A survey of psychiatrists in the United Kingdom found that only a minority routinely used standardized measures to assess outcome when treating depression and anxiety disorders. The goals of the present study were to determine how frequently psychiatrists in the United States use scales to measure outcome when treating depressed patients and, for those clinicians who do not regularly use such scales, to ascertain the reasons for the lack of use. ::: ::: ::: METHOD ::: The subjects were 314 psychiatrists who attended a continuing medical education conference in California, Massachusetts, New York, or Wisconsin in 2006 or 2007. Prior to a lecture, the subjects completed a questionnaire that included 2 questions regarding the use of rating scales to monitor outcome when treating depression. ::: ::: ::: RESULTS ::: More than 80% of the psychiatrists indicated that they did not routinely use scales to monitor outcome when treating depression. The most frequent reasons psychiatrists gave for not using scales were that they did not believe scales would be clinically helpful, that scales take too much time to use, and that they were not trained in the use of such measures. ::: ::: ::: CONCLUSIONS ::: The majority of psychiatrists indicated that they do not routinely use standardized measures to evaluate outcome when treating depressed patients. The Centers for Medicare and Medicaid Services' Physician Quality Reporting Initiative is intended to improve quality of care by providing physicians financial incentives to document outcomes reflecting best practices. If standardized outcome assessment is to assume increasing importance in this country, either educational efforts or payor mandates, or both, will be necessary to change clinicians' behavior.
---
paper_title: Psychiatrists in the UK do not use outcomes measures: National survey
paper_content:
Governmental policy statements on mental health practice over the past decade have emphasised the importance of routinely measuring individual patient outcomes (Department of Health, 1991, 1998; Secretary of State for Health, 1999). Despite the availability of various standardised
---
paper_title: A Bipolar Disorder Monitoring System Based on Wearable Device and Smartphone
paper_content:
Abstract: The growing aging population and increasing healthcare costs demand a new paradigm for the healthcare system, which must focus on the patient and emphasize prevention, not only treatment. In this context, and with the wide availability of mobile technology, we propose a patient monitoring system based on wearable sensors connected to a medical datacenter through a mobile device. Patients with bipolar disorder can benefit from this remote monitoring system, as warning signs can be detected early, preventing hospitalization. Studies have reported that a great part of those diagnosed with bipolar disorder do not report recognizing any early warning signs. Thus, to prevent relapses, in our system predictive information associated with warning signs is sent to the patient's doctor. This paper proposes continuous monitoring of the patient's amount of movement, including sleep quality, through a system composed of a wearable device, the patient's smartphone, and a web application.
---
|
Title: Assistive Technologies for Bipolar Disorder: A Survey
Section 1: INTRODUCTION
Description 1: This section introduces bipolar disorder, its different states, and provides an overview of the paper's structure.
Section 2: SEVERITY ACROSS THE GLOBE
Description 2: This section provides worldwide statistics on the prevalence, mortality rates, and socioeconomic burden of bipolar disorder.
Section 3: CURRENT SYSTEM IN PLACE
Description 3: This section discusses the existing methods for diagnosing and treating bipolar disorder, including common rating scales and psychiatric interviews.
Section 4: BEHAVIORAL INTERVENTION TECHNOLOGIES
Description 4: This section introduces various behavioral intervention technologies, such as internet-based psychotherapies, web-based and mobile-based interventions, and psychoeducation.
Section 5: ONGOING RESEARCHES TO INFER BIPOLAR STATE
Description 5: This section covers ongoing research on using biomarkers to predict bipolar states, with subsections on social interaction and physical motion, electrodermal activity, heart rate variability, and sleep patterns.
Section 6: CASE STUDIES AND PROJECTS
Description 6: This section details two significant projects, MONARCA and PSYCHE, that contribute to the monitoring, treatment, and prediction of bipolar disorder.
Section 7: EVOLUTION
Description 7: This section reviews the evolution of technologies aimed at assisting bipolar disorder and the shift in research focus over the past decade.
Section 8: DISCUSSION
Description 8: This section summarizes the findings of the paper, discusses the effectiveness and user adherence to current technologies, and identifies gaps in research.
Section 9: FUTURE WORK
Description 9: This section highlights the future directions for research, including challenges and potential improvements for personalized monitoring systems and Body Area Networks (BAN).
|
Application of Machine Learning Approaches in Intrusion Detection System: A Survey
| 9 |
---
paper_title: An autonomous labeling approach to support vector machines algorithms for network traffic anomaly detection
paper_content:
In the past years, several support vector machine (SVM) novelty detection approaches have been applied in the network intrusion detection field. The main advantage of these approaches is that they can characterize normal traffic even when trained with datasets containing not only normal traffic but also a number of attacks. Unfortunately, these algorithms seem to be accurate only when normal traffic vastly outnumbers the attacks present in the dataset, a situation which does not always hold. This work presents an approach for autonomous labeling of normal traffic as a way of dealing with situations where the class distribution does not present the imbalance required by SVM algorithms. In this case, the autonomous labeling is performed by SNORT, a misuse-based intrusion detection system. Experiments conducted on the 1998 DARPA dataset show that the proposed autonomous labeling approach not only outperforms existing SVM alternatives but also, under some attack distributions, obtains improvements over SNORT itself.
---
paper_title: A training algorithm for optimal margin classifiers
paper_content:
A training algorithm that maximizes the margin between the training patterns and the decision boundary is presented. The technique is applicable to a wide variety of the classification functions, including Perceptrons, polynomials, and Radial Basis Functions. The effective number of parameters is adjusted automatically to match the complexity of the problem. The solution is expressed as a linear combination of supporting patterns. These are the subset of training patterns that are closest to the decision boundary. Bounds on the generalization performance based on the leave-one-out method and the VC-dimension are given. Experimental results on optical character recognition problems demonstrate the good generalization obtained when compared with other learning algorithms.
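For intuition only (not code from the paper), a maximal-margin classifier of this kind can be fitted with an off-the-shelf linear SVM; the decision boundary is determined by the support vectors, i.e., the training patterns closest to it:

    import numpy as np
    from sklearn.svm import SVC

    # Toy two-class data standing in for feature vectors of two traffic classes
    X = np.array([[0.0, 0.0], [0.2, 0.1], [0.1, 0.3],
                  [1.0, 1.0], [0.9, 1.2], [1.1, 0.8]])
    y = np.array([0, 0, 0, 1, 1, 1])

    clf = SVC(kernel="linear", C=1.0)   # large-margin linear separator
    clf.fit(X, y)

    print(clf.support_vectors_)          # the patterns that define the margin
    print(clf.predict([[0.15, 0.2]]))    # classify a new observation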
---
paper_title: Fuzzy set theory
paper_content:
Since the beginning of the nineties, reports under the catchword "Fuzzy Logic" have described numerous, predominantly Japanese applications of fuzzy set theory. This has brought about a change in thinking among scientists and practitioners in Europe and, in particular, in Germany. Having long underestimated the potential of this new technology, they now generate a very high demand for information on the topic. This article gives a survey of the fundamentals of fuzzy set theory and describes potential applications.
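In symbols (the standard definitions due to Zadeh, stated here only as background), a fuzzy set A over a universe X is described by a membership function, and union and intersection generalize the crisp operations via max and min:

    \[
    \mu_A : X \to [0,1], \qquad
    \mu_{A \cup B}(x) = \max\bigl(\mu_A(x), \mu_B(x)\bigr), \qquad
    \mu_{A \cap B}(x) = \min\bigl(\mu_A(x), \mu_B(x)\bigr).
    \]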
---
|
Title: Application of Machine Learning Approaches in Intrusion Detection System: A Survey
Section 1: INTRODUCTION
Description 1: Provide an overview of the importance of internet security and the role of intrusion detection systems (IDS). Describe the two primary approaches to IDS: anomaly detection and signature-based detection. Briefly introduce the scope of the paper and its organization.
Section 2: Machine Learning Approach
Description 2: Discuss the concept of machine learning, its importance, and its broad categories (supervised, unsupervised, and reinforcement learning) in the context of intrusion detection systems.
Section 3: Single Classifiers
Description 3: Describe various standalone machine learning algorithms used for intrusion detection, including Decision Tree, Naive Bayes, K-nearest neighbor, Artificial Neural Network, Support Vector Machines, and Fuzzy Logic.
Section 4: Hybrid Classifiers
Description 4: Explain the concept of hybrid classifiers, which combine multiple machine learning algorithms to enhance IDS performance.
Section 5: Ensemble Classifiers
Description 5: Discuss ensemble classifiers, which integrate multiple classifiers to improve overall detection accuracy, and introduce common strategies like majority vote, bagging, and boosting.
Section 6: Distribution of Papers by Year of Publication
Description 6: Analyze the distribution of research papers over the years, categorizing them by single, hybrid, and ensemble classifiers used in intrusion detection.
Section 7: Used Dataset in Researches
Description 7: Review the datasets commonly used in IDS research, including the KDD Cup 1999, DARPA datasets, and the NSL-KDD dataset, and provide a summary of these datasets and their characteristics.
Section 8: Feature Selection
Description 8: Highlight the importance of feature selection in enhancing system performance, and analyze the use of feature selection step in the reviewed papers.
Section 9: Discussion and Conclusion
Description 9: Summarize the key findings from the survey, discuss the limitations of the current study, and suggest future research directions in applying machine learning approaches to IDS.
|
A Survey on Operational Transformation Algorithms: Challenges, Issues and Achievements
| 23 |
---
paper_title: Concurrency control in groupware systems
paper_content:
Groupware systems are computer-based systems that support two or more users engaged in a common task, and that provide an interface to a shared environment. These systems frequently require fine-granularity sharing of data and fast response times. This paper distinguishes real-time groupware systems from other multi-user systems and discusses their concurrency control requirements. An algorithm for concurrency control in real-time groupware systems is then presented. The advantages of this algorithm are its simplicity of use and its responsiveness: users can operate directly on the data without obtaining locks. The algorithm must know some semantics of the operations. However the algorithm's overall structure is independent of the semantic information, allowing the algorithm to be adapted to many situations. An example application of the algorithm to group text editing is given, along with a sketch of its proof of correctness in this particular case. We note that the behavior desired in many of these systems is non-serializable.
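A minimal sketch of the core transformation idea behind such concurrency control for group text editing (our own simplification, not the paper's pseudocode): a remote insertion is shifted past a concurrent local insertion at an earlier position, with site identifiers breaking ties:

    def transform_insert(remote, local):
        """Inclusion-transform a remote insert against a concurrent local insert.

        Operations are dicts {"pos": int, "ch": str, "site": int}. The remote
        operation is adjusted so that applying it after the local one still
        preserves its intended effect.
        """
        if (remote["pos"] < local["pos"] or
                (remote["pos"] == local["pos"] and remote["site"] < local["site"])):
            return dict(remote)                      # unaffected by the local insert
        return {**remote, "pos": remote["pos"] + 1}  # shifted right by one character

    # Both sites insert at position 3; the remote op from the higher site id moves to 4
    print(transform_insert({"pos": 3, "ch": "a", "site": 2},
                           {"pos": 3, "ch": "b", "site": 1}))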
---
paper_title: Empirical Study on Collaborative Writing: What Do Co-authors Do, Use, and Like?
paper_content:
How do people work when they are collaborating to write a document? What kind of tools do they use and, in particular, do they resort to groupware for this task? Forty-one people filled out a questionnaire placed on the World Wide Web. In spite of the existence of specialized collaborative writing tools, most respondents reported using individual word processors and email as their main tools for writing joint documents. Respondents noted the importance of functions such as change tracking, version control, and synchronous work for collaborative writing tools. This study also confirmed the great variability that exists between collaborative writing projects, whether it be group membership, management, writing strategy, or scheduling issues.
---
paper_title: Achieving convergence, causality preservation, and intention preservation in real-time cooperative editing systems
paper_content:
Real-time cooperative editing systems allow multiple users to view and edit the same text/graphic/image/multimedia document at the same time from multiple sites connected by communication networks. Consistency maintenance is one of the most significant challenges in designing and implementing real-time cooperative editing systems. In this article, a consistency model, with properties of convergence, causality preservation, and intention preservation, is proposed as a framework for consistency maintenance in real-time cooperative editing systems. Moreover, an integrated set of schemes and algorithms, which support the proposed consistency model, is devised and discussed in detail. In particular, we have contributed (1) a novel generic operation transformation control algorithm for achieving intention preservation in combination with schemes for achieving convergence and causality preservation and (2) a pair of reversible inclusion and exclusion transformation algorithms for stringwise operations for text editing. An Internet-based prototype system has been built to test the feasibility of the proposed schemes and algorithms.
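Causality preservation, the first of the three properties above, is usually enforced with state vectors. A hedged sketch of the customary causal-readiness test (assuming each operation is stamped with its origin site's vector after counting the operation itself; conventions vary between papers):

    def causally_ready(op_sv, site_sv, origin):
        """True if an operation with state vector op_sv, generated at site
        `origin`, may be executed at a site whose current vector is site_sv:
        op_sv[origin] == site_sv[origin] + 1 and
        op_sv[k] <= site_sv[k] for every other site k."""
        if op_sv[origin] != site_sv[origin] + 1:
            return False
        return all(op_sv[k] <= site_sv[k]
                   for k in range(len(op_sv)) if k != origin)

    print(causally_ready([2, 1, 0], [1, 1, 0], origin=0))  # True: next op from site 0
    print(causally_ready([2, 2, 0], [1, 1, 0], origin=0))  # False: an op from site 1 is missing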
---
paper_title: Operational transformation in real-time group editors: issues, algorithms, and achievements
paper_content:
Real-time group editors allow a group of users to view and edit the same document at the same time from geographically dispersed sites connected by communication networks. Consistency maintenance is one of the most significant challenges in the design and implementation of these types of systems. Research on real-time group editors in the past decade has invented an innovative technique for consistency maintenance, called operational transformation. This paper presents an integrative review of the evolution of operational transformation techniques, with the goal of identifying the major issues, algorithms, achievements, and remaining challenges. In addition, this paper contributes a new optimized generic operational transformation control algorithm. Keywords: consistency maintenance, operational transformation, convergence, causality preservation, intention preservation, group editors, groupware, distributed computing.
---
paper_title: Generalizing operational transformation to the standard general markup language
paper_content:
In this paper we extend operational transformation to support synchronous collaborative editing of documents written in dialects of SGML (Standard General Markup Language) such as XML and HTML, based on SGML's abstract data model, the grove. We argue that concurrent updates to a shared grove must be transformed before being applied to each replica to ensure consistency. We express grove operations as property changes on positionally-addressed nodes, define a set of transformation functions, and show how to apply an existing generic operational transformation algorithm to achieve this. This result makes synchronous group editing applicable to the modern Web.
---
paper_title: An Admissibility-Based Operational Transformation Framework for Collaborative Editing Systems
paper_content:
Operational transformation (OT) as a consistency control method has been well accepted in group editors. With OT, the users can edit any part of a shared document at any time and local responsiveness is not sensitive to communication latencies. However, established theoretical frameworks for developing OT algorithms either require transformation functions to work in all possible cases, which complicates the design of transformation functions, or include an under-formalized condition of intention preservation, which results in algorithms that cannot be formally proved and must be fixed over time to address newly discovered counterexamples. To address those limitations, this paper proposes an alternative framework, called admissibility-based transformation (ABT), that is theoretically based on formalized, provable correctness criteria and practically no longer requires transformation functions to work under all conditions. Compared to previous approaches, ABT simplifies the design and proofs of OT algorithms.
---
paper_title: A time interval based consistency control algorithm for interactive groupware applications
paper_content:
Traditional concurrency control methods such as locking and serialization are not suitable for distributed interactive applications that demand fast local response. Operational transformation (OT) is the standard solution to concurrency control and consistency maintenance in group editors, an important class of interactive groupware applications. It generally trades consistency for local responsiveness, because human users can often tolerate temporary inconsistencies but do not like their interactions be lost or nondeterministically blocked. This paper presents a time interval based operational transformation algorithm (TIBOT) that overcomes the various limitations of previous related work. Our approach guarantees content convergence and is significantly more simple and efficient than existing approaches. This is achieved in a pure replicated architecture by using a linear clock and by posing some constraints on communication that are reasonable for the application domain.
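The linear clock that TIBOT relies on is essentially a Lamport-style scalar clock; a small illustration of the usual update rules (our own sketch, not TIBOT's full time-interval protocol):

    class LinearClock:
        """Lamport-style scalar clock of the kind a linear-clock protocol builds on."""

        def __init__(self):
            self.value = 0

        def tick(self):
            """Advance on a local event, e.g., generating an operation."""
            self.value += 1
            return self.value

        def receive(self, remote_value):
            """Merge on receipt of a remotely timestamped message."""
            self.value = max(self.value, remote_value) + 1
            return self.value

    c = LinearClock()
    print(c.tick())        # 1
    print(c.receive(5))    # 6: the clock jumps past the remote timestamp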
---
paper_title: Commutativity-based concurrency control in groupware
paper_content:
Commutativity of operations is often exploited in concurrent systems to attain high levels of concurrency. A commutativity-based concurrency control method, called operational transformation (OT), has been actively researched in groupware over the past 15 years. However, much progress can still be made on more practicable approaches to developing and proving OT algorithms. Several constraints have been proposed previously but they are generally difficult to follow and verify in practice. This paper proposes an alternative approach to address this problem. A new consistency model is defined which greatly simplifies the design and proof of OT algorithms
---
|
Title: A Survey on Operational Transformation Algorithms: Challenges, Issues and Achievements
Section 1: INTRODUCTION
Description 1: Write an introduction to the fundamental issues and challenges in consistency maintenance in collaborative systems and provide background information on the context and significance of operational transformation.
Section 2: OPERATIONAL TRANSFORMATION
Description 2: Discuss the basic concept and principles of operational transformation as an optimistic consistency control method in collaborative applications.
Section 3: Inclusion and Exclusion Transformation
Description 3: Explain the concepts of inclusion transformation (IT) and exclusion transformation (ET) functions, and their roles in ensuring consistency in collaborative editing.
Section 4: Transformation Properties
Description 4: Outline the key properties (TP1 and TP2) required for the transformation functions to ensure consistent final document states.
Section 5: The dOPT Puzzle
Description 5: Describe the dOPT puzzle, its implications, and the scenarios in which the original dOPT algorithm fails.
Section 6: The TP2 Puzzle
Description 6: Discuss the TP2 puzzle, illustrate with examples, and explain why the simple fixes are insufficient.
Section 7: ALGORITHMS
Description 7: Provide an overview of various OT algorithms proposed over the years, highlighting their approaches and unique features.
Section 8: dOPT
Description 8: Discuss the dOPT algorithm, its mechanisms, and how it ensures the convergence property.
Section 9: adOPTed
Description 9: Explain the adOPTed algorithm, derived from dOPT, and describe its adaptations to a centralized server environment.
Section 10: IMOR
Description 10: Detail the IMOR algorithm and how it uses additional parameters to solve concurrent insertion issues.
Section 11: GOT
Description 11: Describe the General Operational Transformation (GOT) algorithm and how it achieves intention-preservation and convergence without relying on TP1 and TP2.
Section 12: GOTO
Description 12: Explain the GOTO algorithm, an optimized version of GOT, and how it reduces the number of transformations required.
Section 13: SOCT2
Description 13: Outline the SOCT2 algorithm, its transformation functions, and how it ensures convergence.
Section 14: SOCT3
Description 14: Detail the SOCT3 algorithm's use of a global order sequencing mechanism to avoid undo/redo operations.
Section 15: SOCT4
Description 15: Discuss the improvements in SOCT4 with regards to forward transposition and state vector elimination.
Section 16: TIBOT
Description 16: Describe the Time Interval Based Operational Transformation (TIBOT) algorithm and its efficiency in terms of complexity.
Section 17: LBT
Description 17: Explain the LBT algorithm and its method of maintaining special transformation paths to ensure consistency.
Section 18: SLOT
Description 18: Introduce the SLOT algorithm and its simplicity and efficiency compared to other OT algorithms.
Section 19: SDT
Description 19: Discuss the State Difference Based Transformation (SDT) algorithm and its ability to ensure convergence in peer-to-peer editors.
Section 20: ABT
Description 20: Detail the Admissibility-Based Transformation (ABT) algorithm, its correctness criteria, and its simpler handling of transformation functions.
Section 21: ABTS
Description 21: Explain the ABTS algorithm, an extension of ABT to support string-based operations with formal correctness proofs.
Section 22: Comparison of OT Algorithms
Description 22: Summarize the comparison of various OT algorithms and their respective strengths and weaknesses.
Section 23: FUTURE WORK
Description 23: Discuss potential future research areas and ongoing challenges in reducing time and space complexity and understanding user intentions.
|
A Survey on Key Management of Identity-based Schemes in Mobile Ad Hoc Networks
| 10 |
---
paper_title: Toward secure key distribution in truly ad-hoc networks
paper_content:
Ad-hoc networks - and in particular wireless mobile ad-hoc networks - have unique characteristics and constraints that make traditional cryptographic mechanisms and assumptions inappropriate. In particular it may not be warranted to assume pre-existing shared secrets between members of the network or the presence of a common PKI. Thus, the issue of key distribution in ad-hoc networks represents an important problem. Unfortunately, this issue has been largely ignored; as an example, most protocols for secure ad-hoc routing assume that key distribution has already taken place. Traditional key distribution schemes either do not apply in an ad-hoc scenario or are not efficient enough for small, resource-constrained devices. We propose to combine efficient techniques from identity-based (ID-based) and threshold cryptography to provide a mechanism that enables flexible and efficient key distribution while respecting the constraints of ad-hoc networks. We also discuss the available mechanisms and their suitability for the proposed task.
---
paper_title: A Survey of Applications of Identity-Based Cryptography in Mobile Ad-Hoc Networks
paper_content:
Security in mobile ad-hoc networks (MANETs) continues to attract attention after years of research. Recent advances in identity-based cryptography (IBC) shed light on this problem, and IBC has become popular as a solution base. We present a comprehensive picture and capture the state of the art of IBC security applications in MANETs, based on a survey of publications on this topic since the emergence of IBC in 2001. In this paper, we also share insights into open research problems and point out interesting future directions in this area.
---
paper_title: A Survey of Identity-Based Cryptography
paper_content:
In this paper, we survey the state of research on identity-based cryptography. We start by reviewing the basic concepts of identity-based encryption and signature schemes, and subsequently review some important identity-based cryptographic schemes based on the bilinear pairing, a computational primitive widely used to build various identity-based cryptographic schemes in the current literature. We also survey cryptographic schemes such as a "certificate-based encryption scheme" and a "public key encryption scheme with keyword search", which could be constructed thanks to the successful realization of identity-based encryption. Finally, we discuss how feasible and under what conditions identity-based cryptography may be used in current and future environments, and propose some interesting open problems concerning the practical and theoretical aspects of identity-based cryptography.
---
paper_title: Identity-based cryptosystems and signature schemes
paper_content:
In this paper we introduce a novel type of cryptographic scheme, which enables any pair of users to communicate securely and to verify each other’s signatures without exchanging private or public keys, without keeping key directories, and without using the services of a third party. The scheme assumes the existence of trusted key generation centers, whose sole purpose is to give each user a personalized smart card when he first joins the network. The information embedded in this card enables the user to sign and encrypt the messages he sends and to decrypt and verify the messages he receives in a totally independent way, regardless of the identity of the other party. Previously issued cards do not have to be updated when new users join the network, and the various centers do not have to coordinate their activities or even to keep a user list. The centers can be closed after all the cards are issued, and the network can continue to function in a completely decentralized way for an indefinite period.
---
paper_title: Short Signatures from the Weil Pairing
paper_content:
We introduce a short signature scheme based on the Computational Diffie–Hellman assumption on certain elliptic and hyperelliptic curves. For standard security parameters, the signature length is about half that of a DSA signature with a similar level of security. Our short signature scheme is designed for systems where signatures are typed in by a human or are sent over a low-bandwidth channel. We survey a number of properties of our signature scheme such as signature aggregation and batch verification.
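In outline (standard presentation of this style of scheme; the notation below is ours): with a bilinear pairing e over a group of prime order q generated by g, and a hash H mapping messages into the group, the short signature consists of a single group element:

    \[
    \textbf{KeyGen: } x \xleftarrow{\$} \mathbb{Z}_q,\ v = g^{x}; \qquad
    \textbf{Sign: } \sigma = H(m)^{x}; \qquad
    \textbf{Verify: } e(\sigma, g) \stackrel{?}{=} e\bigl(H(m), v\bigr).
    \]

Correctness follows from bilinearity, e(H(m)^x, g) = e(H(m), g^x), and the single group element \sigma is what keeps the signature short.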
---
paper_title: Identity-based encryption from the Weil pairing
paper_content:
We propose a fully functional identity-based encryption (IBE) scheme. The scheme has chosen ciphertext security in the random oracle model assuming a variant of the computational Diffie--Hellman problem. Our system is based on bilinear maps between groups. The Weil pairing on elliptic curves is an example of such a map. We give precise definitions for secure IBE schemes and give several applications for such systems.
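A compressed sketch of the basic, chosen-plaintext version of such a pairing-based IBE (the standard "BasicIdent" presentation in our notation; the chosen-ciphertext hardening mentioned in the abstract is omitted):

    \[
    \begin{aligned}
    \textbf{Setup: } & s \xleftarrow{\$} \mathbb{Z}_q,\quad P_{\mathrm{pub}} = sP\\
    \textbf{Extract: } & Q_{ID} = H_1(ID),\quad d_{ID} = s\,Q_{ID}\\
    \textbf{Encrypt: } & r \xleftarrow{\$} \mathbb{Z}_q,\quad C = \bigl(rP,\ m \oplus H_2\bigl(e(Q_{ID}, P_{\mathrm{pub}})^{r}\bigr)\bigr)\\
    \textbf{Decrypt: } & m = c_2 \oplus H_2\bigl(e(d_{ID}, c_1)\bigr)
    \end{aligned}
    \]

Decryption recovers the same mask because e(d_{ID}, rP) = e(Q_{ID}, P_{\mathrm{pub}})^{r} by bilinearity, so the sender needs only the receiver's identity string and the public parameters.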
---
paper_title: Threshold Cryptosystems
paper_content:
In society-oriented cryptography it is better to have a public key for the company (organization) than one for each individual employee [Des88]. Certainly, in emergency situations, power is shared in many organizations. Solutions to this problem have been presented [Des88], based on [GMW87], but they are completely impractical and interactive. In this paper, practical non-interactive public key systems are proposed which allow the reuse of the shared secret key, since the key is revealed neither to insiders nor to outsiders.
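The (t, n)-threshold sharing underlying such threshold cryptosystems is typically Shamir's: the dealer hides the secret s as the constant term of a random degree-(t-1) polynomial over \mathbb{Z}_q, and any t shares recover it by Lagrange interpolation (standard construction, notation ours):

    \[
    f(z) = s + a_1 z + \cdots + a_{t-1} z^{t-1} \pmod{q}, \qquad s_i = f(i),
    \]
    \[
    s = f(0) = \sum_{i \in A} \lambda_i\, s_i \pmod{q}, \qquad
    \lambda_i = \prod_{\substack{j \in A\\ j \neq i}} \frac{j}{j-i}, \qquad |A| = t.
    \]

The point of the non-interactive threshold systems described above is that s is never reconstructed in one place: each shareholder applies its share to the message and only the partial results are combined.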
---
paper_title: Identity-based key distribution for mobile Ad Hoc networks
paper_content:
An identity-based cryptosystem can make a special contribution to building key distribution and management architectures in resource-constrained mobile ad hoc networks since it does not suffer from certificate management problems. In this paper, based on a lightweight cryptosystem, elliptic curve cryptography (ECC), we propose an identity-based distributed key-distribution protocol for mobile ad hoc networks. In this protocol, using secret sharing, we build a virtual private key generator which calculates one part of a user's secret key and sends it to the user via public channels, while the other part of the secret key is generated by the user. So the secret key of the user is generated collaboratively by the virtual authority and the user; each has half of the secret information about the secret key of the user. Thus there is no secret key distribution problem. In addition, the user's secret key is known only to the user itself, so there is no key escrow.
---
paper_title: An efficient and non-interactive hierarchical key agreement protocol
paper_content:
The non-interactive identity-based key agreement schemes are believed to be applicable to mobile ad-hoc networks (MANETs) that have a hierarchical structure such as hierarchical military MANETs. It was observed by Gennaro et al. (2008) that there is still an open problem on the security of the existing schemes, i.e., how to achieve the desirable security against corrupted nodes in the higher levels of a hierarchy? In this paper, we propose a novel and very efficient non-interactive hierarchical identity-based key agreement scheme that solves the open problem and outperforms all existing schemes in terms of computational efficiency and data storage.
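For context, the single-level, non-hierarchical ancestor of such schemes (in the Sakai-Ohgishi-Kasahara style; notation ours) already shows why no interaction is needed: a key generation centre with master secret s issues each node the private key d_{ID} = s\,H_1(ID), and two nodes A and B derive the same pairwise key from purely local information:

    \[
    K_{AB} \;=\; e\bigl(d_A,\, H_1(ID_B)\bigr)
           \;=\; e\bigl(H_1(ID_A),\, H_1(ID_B)\bigr)^{s}
           \;=\; e\bigl(H_1(ID_A),\, d_B\bigr) \;=\; K_{BA}.
    \]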
---
paper_title: A Certificateless Key Management Scheme in Mobile Ad Hoc Networks
paper_content:
Key management plays an important role in the security of today's information technology, especially in wireless and mobile environments like mobile ad hoc networks (MANETs), where it has received more and more attention due to the difficulty of implementing it in such a dynamic network. Traditional key management schemes are mainly based on PKI and identity-based public key cryptography (ID-PKC), which suffer from the computational costs of certificate verification and the key escrow problem, respectively. In this paper, we present a novel distributed key management scheme, a combination of certificateless public key cryptography (CL-PKC) and threshold cryptography, which not only eliminates the need for certificate-based public key distribution and the key escrow problem but also prevents a single point of failure.
---
paper_title: TIDS: threshold and identity-based security scheme for wireless ad hoc networks ☆
paper_content:
As various applications of wireless ad hoc networks have been proposed, security has received increasing attention as one of the critical research challenges. In this paper, we consider the security issues at the network layer, wherein routing and packet forwarding are the main operations. We propose a novel, efficient security scheme in order to provide various security characteristics, such as authentication, confidentiality, integrity and non-repudiation, for wireless ad hoc networks. In our scheme, we deploy the recently developed concepts of identity-based signcryption and threshold secret sharing. We describe our proposed security solution in the context of the dynamic source routing (DSR) protocol. Without any assumption of a pre-fixed trust relationship between nodes, the ad hoc network works in a self-organizing way to provide key generation and key management services using a threshold secret sharing algorithm, which effectively solves the problem of a single point of failure in the traditional public-key infrastructure (PKI) supported system. The identity-based signcryption mechanism is applied here not only to provide end-to-end authenticity and confidentiality in a single step, but also to save network bandwidth and computational power of wireless nodes. Moreover, a one-way hash chain is used to protect hop-by-hop transmission.
---
paper_title: Virtual private key generator based escrow-free certificateless public key cryptosystem for mobile ad hoc networks
paper_content:
A certificateless public key cryptosystem can make a special contribution to building key distribution and management architecture in resource-constrained mobile ad hoc networks (MANETs) because it has no separate certificate and no complex certificate management problems. In this paper, we present a virtual private key generator (VPKG)-based escrow-free certificateless public key cryptosystem as a novel combination of certificateless and threshold cryptography. Using secret sharing, we build a VPKG whose members collaboratively calculate the partial private key and send it to the user via public channels. The private key of a user is generated jointly by the VPKG and the user. Each of them has “half” of the secret information about the private key of the user. In addition, binding a user's public key with its identity and partial private key, respectively, raises our schemes to the same trust level as is enjoyed in a traditional public key infrastructure. We also show that the proposed scheme is secure against public key replacement attacks and passive attacks. Copyright © 2012 John Wiley & Sons, Ltd.
---
paper_title: Strongly-Resilient and Non-Interactive Hierarchical Key-Agreement in MANETs
paper_content:
Key agreement is a fundamental security functionality by which pairs of nodes agree on shared keys to be used for protecting their pairwise communications. In this work we study key-agreement schemes that are well-suited for the mobile network environment. Specifically, we describe schemes with the following characteristics: (i) non-interactive: any two nodes can compute a unique shared secret key without interaction; (ii) identity-based: to compute the shared secret key, each node only needs its own secret key and the identity of its peer; (iii) hierarchical: the scheme is decentralized through a hierarchy where intermediate nodes can derive the secret keys for each of their children without any limitations or prior knowledge of the number of such children or their identities; (iv) resilient: the scheme is fully resilient against compromise of any number of leaves in the hierarchy, and of a threshold number of nodes in each of the upper levels of the hierarchy. Several schemes in the literature have three of these four properties, but the schemes in this work are the first to possess all four. This makes them well-suited for environments such as MANETs and tactical networks, which are very dynamic, have significant bandwidth and energy constraints, and where many nodes are vulnerable to compromise. We provide rigorous analysis of the proposed schemes and discuss implementation aspects.
---
paper_title: A new scheme for key management in ad hoc networks
paper_content:
Robust key management is one of the most crucial technologies for the security of ad hoc networks. In this paper, a new scheme for key management is proposed using identity-based (ID-based) signcryption and threshold cryptography. It enables flexible and efficient key management while respecting the constraints of ad hoc networks. In our new scheme, public key certificates are not needed and every client uses its identity as its public key. This greatly reduces the computing and storage demands on clients' terminals, as well as the communication cost of system key management.
---
paper_title: A practical scheme for non-interactive verifiable secret sharing
paper_content:
This paper presents an extremely efficient, non-interactive protocol for verifiable secret sharing. Verifiable secret sharing (VSS) is a way of bequeathing information to a set of processors such that a quorum of processors is needed to access the information. VSS is a fundamental tool of cryptography and distributed computing. Seemingly difficult problems such as secret bidding, fair voting, leader election, and flipping a fair coin have simple one-round reductions to VSS. There is a constant-round reduction from Byzantine Agreement to non-interactive VSS. Non-interactive VSS provides asynchronous networks with a constant-round simulation of simultaneous broadcast networks whenever even a bare majority of processors are good. VSS is constantly repeated in the simulation of fault-free protocols by faulty systems. As verifiable secret sharing is a bottleneck for so many results, it is essential to find efficient solutions.
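A toy sketch of the idea (deliberately tiny, insecure parameters; purely illustrative): each share is a point on a random polynomial, and public commitments to the coefficients let anyone check a share without learning the secret:

    import random

    # Toy parameters only -- far too small to be secure
    p = 2039   # prime modulus, p = 2q + 1
    q = 1019   # prime order of the subgroup generated by g
    g = 4      # quadratic residue mod p, hence of order q

    def feldman_deal(secret, t, n):
        """Split `secret` into n shares, any t of which suffice to recover it,
        and publish commitments that make every share individually verifiable."""
        coeffs = [secret] + [random.randrange(q) for _ in range(t - 1)]
        commitments = [pow(g, a, p) for a in coeffs]              # broadcast publicly
        shares = [(i, sum(a * pow(i, j, q) for j, a in enumerate(coeffs)) % q)
                  for i in range(1, n + 1)]
        return shares, commitments

    def verify_share(i, s_i, commitments):
        """Check g^{s_i} == prod_j C_j^{i^j} (mod p)."""
        lhs = pow(g, s_i, p)
        rhs = 1
        for j, C in enumerate(commitments):
            rhs = (rhs * pow(C, pow(i, j), p)) % p
        return lhs == rhs

    shares, commitments = feldman_deal(secret=123, t=3, n=5)
    print(all(verify_share(i, s, commitments) for i, s in shares))  # True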
---
paper_title: Halo: A Hierarchical Identity-Based Public Key Infrastructure for Peer-to-Peer Opportunistic Collaboration
paper_content:
The lack of information security protection for peer-to-peer systems has hampered the use of this robust and scalable technology in sensitive applications. The security weakness is rooted in the server-less architecture and the demand-driven, ad-hoc operation scenarios of peer-to-peer systems. Together, they prohibit scalable key management using traditional symmetric/asymmetric cryptographic techniques. The advent of hierarchical identity-based cryptography and thresholded/joint secret sharing offers a possible solution to this problem. In this paper, we present the design of Halo, a hierarchical identity-based public key infrastructure that uses these novel technologies to perform recursive instantiation of private key generators and establish a trust hierarchy with an unlimited number of levels. The PKI thus enables the employment of hierarchical identity-based public key encryption, signature, and signcryption for the protection of peer-to-peer applications. The effort to implement a proof-of-concept prototype as a JXTA service module is also discussed.
---
paper_title: Public key cryptography sans certificates in ad hoc networks
paper_content:
Several researchers have proposed the use of the threshold cryptographic model to enable secure communication in ad hoc networks without the need for a trusted center. In this model, the system remains secure even in the presence of a certain threshold t of corrupted/malicious nodes. In this paper, we show how to perform the necessary public key operations without node-specific certificates in ad hoc networks. These operations include pair-wise key establishment, signing, and encryption. We achieve this by using Feldman's verifiable polynomial secret sharing (VSS) as a key distribution scheme and treating the secret shares as the private keys. Unlike in standard public key cryptography, where entities have independent private/public key pairs, in the proposed scheme the private keys are related (they are points on a polynomial of degree t) and each public key can be computed from the public VSS information and the node identifier. We show that such related keys can still be securely used for standard signature and encryption operations (using Schnorr signatures and ElGamal encryption, respectively) and for pairwise key establishment, as long as there are no more than t collusions/corruptions in the system. The proposed use of shares as private keys can also be viewed as a threshold-tolerant identity-based cryptosystem under standard (discrete-logarithm-based) assumptions.
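The claim that each public key can be computed from the public VSS information and the node identifier is the Feldman relation read in reverse (notation ours): if the dealer's degree-t polynomial is f(z) = \sum_j a_j z^j with public commitments C_j = g^{a_j}, then node i's private key is its share x_i = f(i), and anyone can derive the matching public key as

    \[
    y_i \;=\; g^{x_i} \;=\; g^{f(i)} \;=\; \prod_{j=0}^{t} C_j^{\,i^{j}},
    \]

which can then be used directly for Schnorr-signature verification or ElGamal encryption toward node i, without any certificate.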
---
paper_title: Threshold and identity-based key management and authentication for wireless ad hoc networks
paper_content:
As various applications of wireless ad hoc network have been proposed, security has become one of the big research challenges and is receiving increasing attention. In this paper, we propose a distributed key management and authentication approach by deploying the recently developed concepts of identity-based cryptography and threshold secret sharing. Without any assumption of prefixed trust relationship between nodes, the ad hoc network works in a self-organizing way to provide the key generation and key management service, which effectively solves the problem of single point of failure in the traditional public key infrastructure (PKI)-supported system. The identity-based cryptography mechanism is applied here not only to provide end-to-end authenticity and confidentiality, but also to save network bandwidth and computational power of wireless nodes.
---
paper_title: AC-PKI: anonymous and certificateless public-key infrastructure for mobile ad hoc networks
paper_content:
This paper studies public-key management, a fundamental problem in providing security support for mobile ad hoc networks. The infrastructureless nature and network dynamics of ad hoc networks make the conventional certificate-based public-key solutions less suitable. To tackle this problem, we propose a novel anonymous and certificateless public-key infrastructure (AC-PKI) for ad hoc networks. AC-PKI enables public-key services with certificateless public keys and thus avoids the complicated certificate management inevitable in conventional certificate-based solutions. To satisfy the demand for private keys during network operation, we employ the secret-sharing technique to distribute a system master-key among a preselected set of nodes, called D-PKG, which offer a collaborative private-key-generation service. In addition, we identify pinpoint attacks against D-PKG and propose anonymizing D-PKG as the countermeasure. Moreover, we determine the optimal secret-sharing parameters to achieve the maximum security.
---
paper_title: TIDS: threshold and identity-based security scheme for wireless ad hoc networks ☆
paper_content:
As various applications of wireless ad hoc networks have been proposed, security has received increasing attention as one of the critical research challenges. In this paper, we consider the security issues at the network layer, wherein routing and packet forwarding are the main operations. We propose a novel efficient security scheme in order to provide various security characteristics, such as authentication, confidentiality, integrity and non-repudiation for wireless ad hoc networks. In our scheme, we deploy the recently developed concepts of identity-based signcryption and threshold secret sharing. We describe our proposed security solution in the context of the dynamic source routing (DSR) protocol. Without any assumption of a pre-fixed trust relationship between nodes, the ad hoc network works in a self-organizing way to provide key generation and key management services using a threshold secret sharing algorithm, which effectively solves the problem of a single point of failure in the traditional public-key infrastructure (PKI) supported system. The identity-based signcryption mechanism is applied here not only to provide end-to-end authenticity and confidentiality in a single step, but also to save network bandwidth and computational power of wireless nodes. Moreover, a one-way hash chain is used to protect hop-by-hop transmission.
---
paper_title: An Identity-Based Signature from Gap Diffie-Hellman Groups
paper_content:
In this paper we propose an identity(ID)-based signature scheme using gap Diffie-Hellman (GDH) groups. Our scheme is proved secure against existential forgery on adaptively chosen message and ID attack under the random oracle model. Using GDH groups obtained from bilinear pairings, as a special case of our scheme, we obtain an ID-based signature scheme that shares the same system parameters with the ID-based encryption scheme (BF-IBE) by Boneh and Franklin [BF01], and is as efficient as the BF-IBE. Combining our signature scheme with the BF-IBE yields a complete solution of an ID-based public key system. It can be an alternative for certificate-based public key infrastructures, especially when efficient key management and moderate security are required.
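A commonly cited formulation of a pairing-based ID-based signature of this type is sketched below; the notation (groups, hashes, master key) is the standard one and is used here only for illustration, not quoted from the paper.

```latex
% Setup: G_1 = <P> and G_2 of prime order q, bilinear map e : G_1 x G_1 -> G_2,
% master key s, P_pub = sP, hashes H_1 : {0,1}* -> G_1 and H_2 : {0,1}* x G_1 -> Z_q.
\begin{align*}
\text{Extract:}\quad & Q_{ID} = H_1(ID), \qquad D_{ID} = s\,Q_{ID}\\
\text{Sign}(m):\quad & r \xleftarrow{\$} \mathbb{Z}_q^{*},\quad U = r\,Q_{ID},\quad
                       h = H_2(m, U),\quad V = (r + h)\,D_{ID}\\
\text{Verify}(m, U, V):\quad & e(P, V) \stackrel{?}{=} e(P_{pub},\, U + h\,Q_{ID})
\end{align*}
```

Correctness follows from bilinearity: e(P, (r+h) s Q_ID) = e(sP, (r+h) Q_ID) = e(P_pub, U + h Q_ID).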
---
paper_title: Toward secure key distribution in truly ad-hoc networks
paper_content:
Ad-hoc networks - and in particular wireless mobile ad-hoc networks - have unique characteristics and constraints that make traditional cryptographic mechanisms and assumptions inappropriate. In particular it may not be warranted to assume pre-existing shared secrets between members of the network or the presence of a common PKI. Thus, the issue of key distribution in ad-hoc networks represents an important problem. Unfortunately, this issue has been largely ignored; as an example, most protocols for secure ad-hoc routing assume that key distribution has already taken place. Traditional key distribution schemes either do not apply in an ad-hoc scenario or are not efficient enough for small, resource-constrained devices. We propose to combine efficient techniques from identity-based (ID-based) and threshold cryptography to provide a mechanism that enables flexible and efficient key distribution while respecting the constraints of ad-hoc networks. We also discuss the available mechanisms and their suitability for the proposed task.
---
paper_title: Reducing the cost of security in link-state routing
paper_content:
Security in link-state routing protocols is a feature that is both desirable and costly. This paper examines the cost of security and presents two techniques for efficient and secure processing of link state updates. The first technique is geared towards a relatively stable internetwork environment while the second is designed with a more volatile environment in mind.
---
paper_title: Identity-based cryptosystems and signature schemes
paper_content:
In this paper we introduce a novel type of cryptographic scheme, which enables any pair of users to communicate securely and to verify each other’s signatures without exchanging private or public keys, without keeping key directories, and without using the services of a third party. The scheme assumes the existence of trusted key generation centers, whose sole purpose is to give each user a personalized smart card when he first joins the network. The information embedded in this card enables the user to sign and encrypt the messages he sends and to decrypt and verify the messages he receives in a totally independent way, regardless of the identity of the other party. Previously issued cards do not have to be updated when new users join the network, and the various centers do not have to coordinate their activities or even to keep a user list. The centers can be closed after all the cards are issued, and the network can continue to function in a completely decentralized way for an indefinite period.
---
paper_title: A new scheme for key management in ad hoc networks
paper_content:
Robust key management is one of the most crucial technologies for the security of ad hoc networks. In this paper, a new scheme for key management is proposed using identity-based (ID-based) signcryption and threshold cryptography. It enables flexible and efficient key management while respecting the constraints of ad hoc networks. In our new scheme, public key certificates are not needed and every client uses its identity as its public key. It greatly reduces the demands on the computing and storage capabilities of clients' terminals, as well as the communication cost of system key management.
---
paper_title: A Key Management Scheme for Ad Hoc Networks
paper_content:
An improved key management scheme is proposed for ad hoc networks. The improved scheme adopts combined techniques of verifiable secret sharing, public key encryption and random numbers. The analysis results show that the improved scheme further raises security performance while reducing computation cost.
---
paper_title: Identity-based encryption from the Weil pairing
paper_content:
We propose a fully functional identity-based encryption (IBE) scheme. The scheme has chosen ciphertext security in the random oracle model assuming a variant of the computational Diffie-Hellman problem. Our system is based on bilinear maps between groups. The Weil pairing on elliptic curves is an example of such a map. We give precise definitions for secure IBE schemes and give several applications for such systems.
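The basic (chosen-plaintext secure) version of this construction is usually summarized as follows; the notation is the standard pairing-based one and the presentation here is an illustration, not a quote from the paper (the full scheme adds a Fujisaki-Okamoto-style transform for chosen-ciphertext security).

```latex
% Setup: G_1 = <P> and G_2 of prime order q, pairing e : G_1 x G_1 -> G_2,
% master key s, P_pub = sP, hashes H_1 : {0,1}* -> G_1 and H_2 : G_2 -> {0,1}^n.
\begin{align*}
\text{Extract:}\quad & d_{ID} = s\,H_1(ID)\\
\text{Encrypt}(M, ID):\quad & r \xleftarrow{\$} \mathbb{Z}_q^{*},\quad
  C = \bigl(\,rP,\; M \oplus H_2\!\bigl(e(H_1(ID), P_{pub})^{\,r}\bigr)\bigr)\\
\text{Decrypt}(C = (U, V)):\quad & M = V \oplus H_2\bigl(e(d_{ID}, U)\bigr)
\end{align*}
% Correctness: e(d_ID, U) = e(s H_1(ID), rP) = e(H_1(ID), P_pub)^r.
```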
---
paper_title: Efficient signature generation by smart cards
paper_content:
We present a new public-key signature scheme and a corresponding authentication scheme that are based on discrete logarithms in a subgroup of units in Z_p, where p is a sufficiently large prime, e.g., p ≥ 2^512. A key idea is to use for the base of the discrete logarithm an integer α in Z_p such that the order of α is a sufficiently large prime q, e.g., q ≥ 2^140. In this way we improve the ElGamal signature scheme in the speed of the procedures for the generation and the verification of signatures and also in the bit length of signatures. We present an efficient algorithm that preprocesses the exponentiation of a random residue modulo p.
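A toy sketch of Schnorr-style signing over a small prime-order subgroup (orders of magnitude below the parameter sizes the abstract calls for); the group parameters and the hash construction are assumptions made here purely for illustration.

```python
# Toy Schnorr signature sketch (NOT secure): q | p - 1 and g has order q in Z_p^*.
import hashlib, random

p, q, g = 467, 233, 4                     # illustrative toy group

def H(r, m):                              # hash to Z_q (assumed construction)
    return int.from_bytes(hashlib.sha256(f"{r}|{m}".encode()).digest(), "big") % q

x = random.randrange(1, q)                # private key
y = pow(g, x, p)                          # public key

def sign(m):
    k = random.randrange(1, q)
    r = pow(g, k, p)
    e = H(r, m)
    return e, (k + x * e) % q

def verify(m, sig):
    e, s = sig
    r = pow(g, s, p) * pow(y, (-e) % q, p) % p    # g^s * y^{-e} = g^k
    return H(r, m) == e

assert verify("hello", sign("hello"))
```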
---
paper_title: A practical scheme for non-interactive verifiable secret sharing
paper_content:
This paper presents an extremely efficient, non-interactive protocol for verifiable secret sharing. Verifiable secret sharing (VSS) is a way of bequeathing information to a set of processors such that a quorum of processors is needed to access the information. VSS is a fundamental tool of cryptography and distributed computing. Seemingly difficult problems such as secret bidding, fair voting, leader election, and flipping a fair coin have simple one-round reductions to VSS. There is a constant-round reduction from Byzantine Agreement to non-interactive VSS. Non-interactive VSS provides asynchronous networks with a constant-round simulation of simultaneous broadcast networks whenever even a bare majority of processors are good. VSS is constantly repeated in the simulation of fault-free protocols by faulty systems. As verifiable secret sharing is a bottleneck for so many results, it is essential to find efficient solutions.
---
paper_title: Identity-based key distribution for mobile Ad Hoc networks
paper_content:
An identity-based cryptosystem can make a special contribution to building key distribution and management architectures in resource-constrained mobile ad hoc networks since it does not suffer from certificate management problems. In this paper, based on a lightweight cryptosystem, elliptic curve cryptography (ECC), we propose an identity-based distributed key-distribution protocol for mobile ad hoc networks. In this protocol, using secret sharing, we build a virtual private key generator which calculates one part of a user's secret key and sends it to the user via public channels, while the other part of the secret key is generated by the user. So, the secret key of the user is generated collaboratively by the virtual authority and the user. Each has half of the secret information about the secret key of the user. Thus there is no secret key distribution problem. In addition, the user's secret key is known only to the user itself; therefore, there is no key escrow.
---
paper_title: A Certificateless Key Management Scheme in Mobile Ad Hoc Networks
paper_content:
Key management plays an important role in the security of today's information technology, especially in wireless and mobile environments like mobile ad hoc networks (MANETs) in which key management has received more and more attention for the difficulty to be implemented in such dynamic network. Traditional key management schemes are mainly based on PKI and identity-based public key cryptography (ID-PKC), which suffers from the computational costs of certificate verification and the key escrow problem. In this paper, we present a novel distributed key management scheme, a combination of certificateless public key cryptography (CL-PKC) and threshold cryptography, which not only eliminates the need for certificate-based public key distribution and the key escrow problem but also prevents single point of failure.
---
paper_title: Virtual private key generator based escrow-free certificateless public key cryptosystem for mobile ad hoc networks
paper_content:
A certificateless public key cryptosystem can make a special contribution to building key distribution and management architecture in resource-constrained mobile ad hoc networks (MANETs) because it has no separate certificate and no complex certificate management problems. In this paper, we present a virtual private key generator (VPKG)-based escrow-free certificateless public key cryptosystem as a novel combination of certificateless and threshold cryptography. Using secret sharing, we build a VPKG whose members collaboratively calculate the partial private key and send it to the user via public channels. The private key of a user is generated jointly by the VPKG and the user. Each of them has “half” of the secret information about the private key of the user. In addition, binding a user's public key with its identity and partial private key, respectively, raises our schemes to the same trust level as is enjoyed in a traditional public key infrastructure. We also show that the proposed scheme is secure against public key replacement attacks and passive attacks. Copyright © 2012 John Wiley & Sons, Ltd.
---
paper_title: Hierarchical routing for multi-layer ad-hoc wireless networks with UAVs
paper_content:
Routing scalability in multi-hop wireless networks faces many challenges. The spatial concurrency constraint on nearby nodes sharing the same channel is the fundamental limitation. A previous theoretical study shows that the throughput furnished to each user is rapidly reduced as network size is increased. In order to solve this problem, we extended the hierarchical state routing scheme to a hierarchical multilayer environment. With the hierarchical approach, many problems caused by "flat" multi-hopping disappear. In the real battlefield, a multi-level physical heterogeneous network with UAVs provides an ideal support for the multi-area theater with a large number of fighting units. Extended hierarchical state routing (EHSR) shows very promising results in this hierarchical infrastructure.
---
paper_title: An efficient and non-interactive hierarchical key agreement protocol
paper_content:
The non-interactive identity-based key agreement schemes are believed to be applicable to mobile ad-hoc networks (MANETs) that have a hierarchical structure such as hierarchical military MANETs. It was observed by Gennaro et al. (2008) that there is still an open problem on the security of the existing schemes, i.e., how to achieve the desirable security against corrupted nodes in the higher levels of a hierarchy? In this paper, we propose a novel and very efficient non-interactive hierarchical identity-based key agreement scheme that solves the open problem and outperforms all existing schemes in terms of computational efficiency and data storage.
---
paper_title: Toward Hierarchical Identity-Based Encryption
paper_content:
We introduce the concept of hierarchical identity-based encryption (HIBE) schemes, give precise definitions of their security and mention some applications. A two-level HIBE (2-HIBE) scheme consists of a root private key generator (PKG), domain PKGs and users, all of which are associated with primitive IDs (PIDs) that are arbitrary strings. A user's public key consists of their PID and their domain's PID (in whole called an address). In a regular IBE (which corresponds to a 1-HIBE) scheme, there is only one PKG that distributes private keys to each user (whose public keys are their PID). In a 2-HIBE, users retrieve their private key from their domain PKG. Domain PKGs can compute the private key of any user in their domain, provided they have previously requested their domain secret key from the root PKG (who possesses a master secret). We can go beyond two levels by adding subdomains, subsubdomains, and so on. We present a two-level system with total collusion resistance at the upper (domain) level and partial collusion resistance at the lower (user) level, which has chosen-ciphertext security in the random-oracle model.
---
paper_title: New Strategies for Revocation in Ad-Hoc Networks
paper_content:
Responding to misbehavior in ad-hoc and sensor networks is difficult. We propose new techniques for deciding when to remove nodes in a decentralized manner. Rather than blackballing nodes that misbehave, a more efficient approach turns out to be reelection - requiring nodes to secure a majority or plurality of approval from their neighbors at regular intervals. This can be implemented in a standard model of voting in which the nodes form a club, or in a lightweight scheme where each node periodically broadcasts a 'buddy list' of neighbors it trusts. This allows much greater flexibility of trust strategies than a predetermined voting mechanism. We then consider an even more radical strategy still - suicide attacks - in which a node on perceiving another node to be misbehaving simply declares both of them to be dead. Other nodes thereafter ignore them both. Suicide attacks, found in a number of contexts in nature from bees to helper T-cells, turn out to be more efficient still for an interesting range of system parameters.
---
paper_title: The capacity of wireless networks
paper_content:
When n identical randomly located nodes, each capable of transmitting at W bits per second and using a fixed range, form a wireless network, the throughput λ(n) obtainable by each node for a randomly chosen destination is Θ(W/√(n log n)) bits per second under a noninterference protocol. If the nodes are optimally placed in a disk of unit area, traffic patterns are optimally assigned, and each transmission's range is optimally chosen, the bit-distance product that can be transported by the network per second is Θ(W√(An)) bit-meters per second. Thus even under optimal circumstances, the throughput is only Θ(W/√n) bits per second for each node for a destination nonvanishingly far away. Similar results also hold under an alternate physical model where a required signal-to-interference ratio is specified for successful receptions. Fundamentally, it is the need for every node all over the domain to share whatever portion of the channel it is utilizing with nodes in its local neighborhood that is the reason for the constriction in capacity. Splitting the channel into several subchannels does not change any of the results. Some implications may be worth considering by designers. Since the throughput furnished to each user diminishes to zero as the number of users is increased, perhaps networks connecting smaller numbers of users, or featuring connections mostly with nearby neighbors, may be more likely to find acceptance.
---
paper_title: Provably Secure Certificate-based Signature Scheme for Ad Hoc Networks
paper_content:
Certificate-based cryptography, proposed by Gentry at Eurocrypt 2003, combines the advantages of traditional public key cryptography (PKI) and identity-based cryptography, and removes the certificate management problem and the private key escrow security concern. Based on the computational Diffie-Hellman assumption, a certificate-based signature scheme is constructed to ensure the security of communication in mobile ad hoc networks. The security of the scheme is proved in the random oracle model. The scheme is also efficient, since the signing algorithm does not need the computation of the bilinear pairing and the verification algorithm needs that computation only once. Thus it is particularly useful in ad hoc networks.
---
paper_title: DSR : The Dynamic Source Routing Protocol for Multi-Hop Wireless Ad Hoc Networks
paper_content:
The Dynamic Source Routing protocol (DSR) is a simple and efficient routing protocol designed specifically for use in multi-hop wireless ad hoc networks of mobile nodes. DSR allows the network to be completely self-organizing and self-configuring, without the need for any existing network infrastructure or administration. The protocol is composed of the two mechanisms of Route Discovery and Route Maintenance, which work together to allow nodes to discover and maintain source routes to arbitrary destinations in the ad hoc network. The use of source routing allows packet routing to be trivially loop-free, avoids the need for up-to-date routing information in the intermediate nodes through which packets are forwarded, and allows nodes forwarding or overhearing packets to cache the routing information in them for their own future use. All aspects of the protocol operate entirely on-demand, allowing the routing packet overhead of DSR to scale automatically to only that needed to react to changes in the routes currently in use. We have evaluated the operation of DSR through detailed simulation on a variety of movement and communication patterns, and through implementation and significant experimentation in a physical outdoor ad hoc networking testbed we have constructed in Pittsburgh, and have demonstrated the excellent performance of the protocol. In this chapter, we describe the design of DSR and provide a summary of some of our simulation and testbed implementation results for the protocol.
---
paper_title: Strongly-Resilient and Non-Interactive Hierarchical Key-Agreement in MANETs
paper_content:
Key agreement is a fundamental security functionality by which pairs of nodes agree on shared keys to be used for protecting their pairwise communications. In this work we study key-agreement schemes that are well-suited for the mobile network environment. Specifically, we describe schemes with the following characteristics: Non-interactive: any two nodes can compute a unique shared secret key without interaction; Identity-based: to compute the shared secret key, each node only needs its own secret key and the identity of its peer; Hierarchical: the scheme is decentralized through a hierarchy where intermediate nodes in the hierarchy can derive the secret keys for each of their children without any limitations or prior knowledge on the number of such children or their identities; Resilient: the scheme is fully resilient against compromise of any number of leaves in the hierarchy, and of a threshold number of nodes in each of the upper levels of the hierarchy. Several schemes in the literature have three of these four properties, but the schemes in this work are the first to possess all four. This makes them well-suited for environments such as MANETs and tactical networks which are very dynamic, have significant bandwidth and energy constraints, and where many nodes are vulnerable to compromise. We provide rigorous analysis of the proposed schemes and discuss implementation aspects.
---
paper_title: A hierarchical identity based key management scheme in tactical Mobile Ad Hoc Networks
paper_content:
Hierarchical key management schemes would serve well for military applications where the organization of the network is already hierarchical in nature. Most of the existing key management schemes concentrate only on network structures and key allocation algorithms, ignoring attributes of the nodes themselves. Due to the distributed and dynamic nature of MANETs, it is possible to show that there is a security benefit to be attained when the node states are considered in the process of constructing a private key generator (PKG). In this paper, we propose a distributed hierarchical key management scheme in which nodes can get their keys updated either from their parent nodes or from a threshold of sibling nodes. The proposed scheme can select the best nodes to be used as PKGs from all available ones, considering their security conditions and energy states. Simulation results show that the proposed scheme can decrease the probability of network compromise and increase network lifetime.
---
paper_title: Routing Protocols for Self-Organizing Hierarchical Ad-Hoc Wireless Networks
paper_content:
A novel self-organizing hierarchical architecture is proposed for improving the scalability properties of ad-hoc wireless networks. This paper focuses on the design and evaluation of routing protocols applicable to this class of hierarchical ad-hoc networks. The performance of a hierarchical network with the popular dynamic source routing (DSR) protocol is evaluated and compared with that of a conventional "flat" ad-hoc network using an ns-2 simulation model. The results for an example sensor network scenario show significant capacity increases with the hierarchical architecture (∼4:1). Alternative routing metrics that account for energy efficiency are also considered briefly, and the effects on user performance and system capacity are given for a specific example.
---
paper_title: Halo: A Hierarchical Identity-Based Public Key Infrastructure for Peer-to-Peer Opportunistic Collaboration
paper_content:
The lack of information security protection for peer-to-peer systems has hampered the use of this robust and scalable technology in sensitive applications. The security weakness is rooted in the server-less architecture and the demand driven ad-hoc operation scenarios of peer-to-peer systems. Together, they prohibit scalable key management using traditional symmetric/ asymmetric cryptographic techniques. The advent of hierarchical identity-based cryptography and thresholded/joint secret sharing offers a possible solution to this problem. In this paper, we present the design of Halo, a hierarchical identity-based public key infrastructure that uses these novel technologies to perform recursive instantiation of private key generators and establish a trust hierarchy with unlimited number of levels. The PKI thus enables the employment of hierarchical identity-based public key encryption, signature, and signcryption for the protection of peer-to-peer applications. The effort to implement a proof-of-concept prototype as a JXTA service module was also discussed.
---
paper_title: An ad hoc network with mobile backbones
paper_content:
A mobile ad hoc network (MANET) is usually assumed to be homogeneous, where each mobile node shares the same radio capacity. However, a homogeneous ad hoc network suffers from poor scalability. Recent research has demonstrated its performance bottleneck both theoretically and through simulation experiments and testbed measurement Building a physically hierarchical ad hoc network is a very promising way to achieve good scalability. In this paper, we present a design methodology to build a hierarchical large-scale ad hoc network using different types of radio capabilities at different layers. In such a structure, nodes are first dynamically grouped into multihop clusters. Each group elects a cluster-head to be a backbone node (BN). Then higher-level links are established to connect the BN into a backbone network. Following this method recursively, a multilevel hierarchical network can be established. Three critical issues are addressed in this paper. We first analyze the optimal number of BN for a layer in theory. Then, we propose a new stable clustering scheme to deploy the BN. Finally LANMAR routing is extended to operate the physical hierarchy efficiently. Simulation results using GloMoSim show that our proposed schemes achieve good performance.
---
paper_title: Perfectly-Secure Key Distribution for Dynamic Conferences
paper_content:
A key distribution scheme for dynamic conferences is a method by which initially an (off-line) trusted server distributes private individual pieces of information to a set of users. Later any group of users of a given size (a dynamic conference) is able to compute a common secure key. In this paper we study the theory and applications of such perfectly secure systems. In this setting, any group of t users can compute a common key, each user computing it using only his private piece of information and the identities of the other t - 1 group users. Keys are secure against coalitions of up to k users, that is, even if k users pool together their pieces they cannot compute anything about a key of any t-size conference comprised of other users. First we consider a non-interactive model where users compute the common key without any interaction. We prove a lower bound on the size of the user's piece of information of C(k+t-1, t-1) times the size of the common key. We then establish the optimality of this bound, by describing and analyzing a scheme which exactly meets this limitation (the construction extends the one in [2]). Then, we consider the model where interaction is allowed in the common key computation phase, and show a gap between the models by exhibiting an interactive scheme in which the user's information is only k + t - 1 times the size of the common key. We further show various applications and useful modifications of our basic scheme. Finally, we present its adaptation to network topologies with neighborhood constraints.
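For the pairwise-key case (conferences of size t = 2) the symmetric-bivariate-polynomial construction can be sketched as follows; the modulus, the degree k and the user identifiers are illustrative assumptions, not parameters from the paper.

```python
# Toy Blundo-style pairwise key predistribution (conference size t = 2, collusion bound k).
import random

q = 2**61 - 1                               # prime modulus (illustrative)
k = 3                                       # coalitions of up to k users learn nothing

# Symmetric coefficient matrix a[i][j] = a[j][i] defines f(x, y) = sum a_ij x^i y^j.
a = [[0] * (k + 1) for _ in range(k + 1)]
for i in range(k + 1):
    for j in range(i, k + 1):
        a[i][j] = a[j][i] = random.randrange(q)

def user_share(u):                          # user u stores g_u(y) = f(u, y)
    return [sum(a[i][j] * pow(u, i, q) for i in range(k + 1)) % q
            for j in range(k + 1)]

def pairwise_key(share, v):                 # evaluate g_u at the peer's identity v
    return sum(c * pow(v, j, q) for j, c in enumerate(share)) % q

alice, bob = user_share(17), user_share(42)
assert pairwise_key(alice, 42) == pairwise_key(bob, 17)   # f(17, 42) = f(42, 17)
```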
---
paper_title: Lightweight Sybil Attack Detection in MANETs
paper_content:
Fully self-organized mobile ad hoc networks (MANETs) represent complex distributed systems that may also be part of a huge complex system, such as a complex system-of-systems used for crisis management operations. Due to the complex nature of MANETs and its resource constraint nodes, there has always been a need to develop lightweight security solutions. Since MANETs require a unique, distinct, and persistent identity per node in order for their security protocols to be viable, Sybil attacks pose a serious threat to such networks. A Sybil attacker can either create more than one identity on a single physical device in order to launch a coordinated attack on the network or can switch identities in order to weaken the detection process, thereby promoting lack of accountability in the network. In this research, we propose a lightweight scheme to detect the new identities of Sybil attackers without using centralized trusted third party or any extra hardware, such as directional antennae or a geographical positioning system. Through the help of extensive simulations and real-world testbed experiments, we are able to demonstrate that our proposed scheme detects Sybil identities with good accuracy even in the presence of mobility.
---
|
Title: A Survey on Key Management of Identity-based Schemes in Mobile Ad Hoc Networks
Section 1: PRELIMINARIES
Description 1: Discuss the background knowledge including brief history, basic concepts of IBC, identity-based encryption (IBE) and signature (IBS) schemes, and threshold cryptography.
Section 2: Identity-Based Cryptography
Description 2: Explain the concept of identity-based cryptography, its development, and key functions involved in IBE and IBS schemes.
Section 3: Threshold Cryptography
Description 3: Describe the principle of (t, n) threshold cryptography and its application in identity-based key management schemes.
Section 4: IDENTITY-BASED KEY MANAGEMENT SCHEMES
Description 4: Introduce multiple identity-based key management schemes developed to mitigate the key escrow problem in MANETs.
Section 5: Traditional Threshold Identity-Based Schemes
Description 5: Detail the operation, strengths, and weaknesses of traditional threshold identity-based schemes, including key generation and distribution processes.
Section 6: SSPK Identity-Based Schemes
Description 6: Discuss the SSPK identity-based scheme using Verifiable Secret Sharing (VSS) and how it differs from traditional threshold schemes.
Section 7: Certificateless Schemes
Description 7: Explain certificateless public key cryptography (CL-PKC) schemes, their benefits, and how they overcome the key escrow problem.
Section 8: Hierarchical Identity-Based Schemes
Description 8: Describe hierarchical identity-based schemes, their applications in hierarchical MANETs, and methods for key management in such environments.
Section 9: Techniques to Improve Identity-Based Scheme
Description 9: Summarize various techniques in the literature to enhance the security and availability of key management in identity-based schemes for MANETs.
Section 10: CONCLUSIONS
Description 10: Offer an overview of the findings, comparing various identity-based key management schemes, discussing their strengths and weaknesses, and proposing areas for future research.
|
Behavioral Systems Theory: A Survey
| 7 |
---
paper_title: Paradigms and puzzles in the theory of dynamical systems
paper_content:
A self-contained exposition is given of an approach to mathematical models, in particular, to the theory of dynamical systems. The basic ingredients form a triptych, with the behavior of a system in the center, and behavioral equations with latent variables as side panels. The author discusses a variety of representation and parametrization problems, in particular, questions related to input/output and state models. The proposed concept of a dynamical system leads to a new view of the notions of controllability and observability, and of the interconnection of systems, in particular, to what constitutes a feedback control law. The final issue addressed is that of system identification. It is argued that exact system identification leads to the question of computing the most powerful unfalsified model.
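In the notation that has become standard in this literature (used here for illustration, not quoted from the paper), a dynamical system is a triple with the behavior as its central object, and a linear time-invariant differential system is the kernel of a polynomial differential operator:

```latex
\Sigma = (\mathbb{T}, \mathbb{W}, \mathfrak{B}), \qquad \mathfrak{B} \subseteq \mathbb{W}^{\mathbb{T}},
\qquad\text{e.g.}\qquad
\mathfrak{B} = \Bigl\{\, w \in \mathcal{C}^{\infty}(\mathbb{R}, \mathbb{R}^{q}) \;:\; R\bigl(\tfrac{d}{dt}\bigr)\, w = 0 \Bigr\},
\quad R \in \mathbb{R}^{\bullet \times q}[\xi].
```

Behavioral controllability then asks that any two trajectories w_1, w_2 in the behavior can be concatenated: there exist T ≥ 0 and w in the behavior with w(t) = w_1(t) for t ≤ 0 and w(t) = w_2(t - T) for t ≥ T.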
---
paper_title: Multidimensional constant linear systems
paper_content:
A continuous resp. discrete r-dimensional (r≥1) system is the solution space of a system of linear partial differential resp. difference equations with constant coefficients for a vector of functions or distributions in r variables resp. of r-fold indexed sequences. Although such linear systems, both multidimensional and multivariable, have been used and studied in analysis and algebra for a long time, for instance by Ehrenpreis et al. thirty years ago, these systems have only recently been recognized as objects of special significance for system theory and for technical applications. Their introduction in this context in the discrete one-dimensional (r=1) case is due to J. C. Willems. The main duality theorem of this paper establishes a categorical duality between these multidimensional systems and finitely generated modules over the polynomial algebra in r indeterminates by making use of deep results in the areas of partial differential equations, several complex variables and algebra. This duality theorem makes many notions and theorems from algebra available for system theoretic considerations. This strategy is pursued here in several directions and is similar to the use of polynomial algebra in the standard one-dimensional theory, but mathematically more difficult. The following subjects are treated: input-output structures of systems and their transfer matrix, signal flow spaces and graphs of systems and block diagrams, transfer equivalence and (minimal) realizations, controllability and observability, rank singularities and their connection with the integral representation theorem, invertible systems, the constructive solution of the Cauchy problem and convolutional transfer operators for discrete systems. Several constructions on the basis of the Gröbner basis algorithms are executed. The connections with other approaches to multidimensional systems are established as far as possible (to the author).
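The duality referred to here is usually written as follows (the notation is assumed for illustration): with D = k[s_1, ..., s_r] and a signal module A that is a large injective cogenerator (e.g., C^∞(R^r), D'(R^r), or the space of r-fold indexed sequences in the discrete case), a behavior and its finitely generated system module determine each other:

```latex
\mathfrak{B} \;=\; \bigl\{\, w \in \mathcal{A}^{q} : R\,w = 0 \,\bigr\}
\;\cong\; \operatorname{Hom}_{D}(M, \mathcal{A}),
\qquad
M \;=\; D^{1\times q} / D^{1\times g} R, \quad R \in D^{g\times q},
```

and M ↦ Hom_D(M, A) is an exact, faithful contravariant functor, so system-theoretic properties of the behavior translate into module-theoretic properties of M.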
---
paper_title: When are linear differentiation-invariant spaces differential?
paper_content:
It is shown that a linear differentiation-invariant subspace of a C∞-trajectory space is differential (i.e., can be represented as the kernel of a linear constant-coefficient differential operator) if and only if its McMillan degree is finite.
---
paper_title: Modules and Behaviours in nD Systems Theory
paper_content:
This paper is intended both as an introduction to the behavioural theory of nD systems, in particular the duality of Oberst and its applications, and also as a bridge between the behavioural theory and the module-theoretic approach of Fliess, Pommaret and others. Our presentation centres on Pommaret's notion of a system observable, and uses this concept to provide new interpretations of known behavioural results. We discuss among other subjects autonomous systems, controllable systems, observability, transfer matrices, computation of trajectories, and system complexity.
---
paper_title: Controllable and Autonomous nD Linear Systems
paper_content:
The theory of multidimensional systems suffers in certain areas from a lack of development of fundamental concepts. Using the behavioural approach, the study of linear shift-invariant nD systems can be encompassed within the well-established framework of commutative algebra, as previously shown by Oberst. We consider here the discrete case. In this paper, we take two basic properties of discrete nD systems, controllability and autonomy, and show that they have simple algebraic characterizations. We make several non-trivial generalizations of previous results for the 2D case. In particular we analyse the controllable-autonomous decomposition and the controllable subsystem of autoregressive systems. We also show that a controllable nD subsystem of (k^q)^{(Z^n)} is precisely one which is minimal in its transfer class.
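The algebraic characterizations in question are commonly stated in terms of the system module; the version below is the one usually quoted (the precise statement depends on the signal space) and is given here as an illustration with assumed notation, for the behavior B = {w : R(σ)w = 0} and the module M = D^{1×q}/D^{1×g}R:

```latex
\mathfrak{B}\ \text{autonomous (no free variables)} \iff M \ \text{is a torsion module} \iff \operatorname{rank} R = q,
\qquad
\mathfrak{B}\ \text{controllable} \iff M \ \text{is torsion-free} \iff \mathfrak{B}\ \text{admits an image representation } w = V(\sigma)\,\ell.
```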
---
paper_title: A New Perspective on Controllability Properties for Dynamical Systems
paper_content:
In this paper we study the properties of weak and strong controllability as newly defined in (Rocha, 1995) for delay-differential systems in a behavioural setting, now for the multidimensional case. Further we give an overview of the relationships between these properties and the original behavioural controllability concept introduced in (Willems, 1988).
---
paper_title: A Behavioral Approach To Control Of Distributed Systems
paper_content:
This paper develops a theory of control for distributed systems (i.e., those defined by systems of constant coefficient partial differential operators) via the behavioral approach of Willems. The study here is algebraic in the sense that it relates behaviors of distributed systems to submodules of free modules over the polynomial ring in several indeterminates. As in the lumped case, behaviors of distributed ARMA systems can be reduced to AR behaviors. This paper first studies the notion of AR controllable distributed systems following the corresponding definition for lumped systems due to Willems. It shows that, as in the lumped case, the class of controllable AR systems is precisely the class of MA systems. It then shows that controllable 2-D distributed systems are necessarily given by free submodules, whereas this is not the case for n-D distributed systems, n ≥ 3. This therefore points out an important difference between these two cases. This paper then defines two notions of autonomous distributed systems which mimic different properties of lumped autonomous systems. Control is the process of restricting a behavior to a specific desirable autonomous subbehavior. A notion of stability generalizing bounded-input bounded-output stability of lumped systems is proposed and the pole placement problem is defined for distributed systems. This paper then solves this problem for a class of distributed behaviors.
---
paper_title: Structure and representation of 2-D systems
paper_content:
A closure cap of the screw-on or similar type having exertior knurls or the like at the periphery thereof, in combination with a safety overcap having complementary interior knurls or projections for selective engagement with the exterior knurls of the closure cap for turning the latter by rotation of the overcap, said overcap including a yieldable resilient depressed base engaging the base of the closure cap normally maintaining the overcap unengaged with respect to the closure cap. The interior knurls or projections on the interior of the overcap engage the exterior knurls at the periphery of the closure cap only upon the application of downward pressure on the safety overcap relative to the closure cap, said overcap rising to non-engaged position upon release of the pressure.
---
paper_title: Autonomicity and the absence of free variables for behaviors over finite rings
paper_content:
The equivalence between autonomicity and the absence of free variables is a well-known result for linear, shift-invariant and complete behaviors over a field. However, this no longer holds for a behavior that does not satisfy one of those properties. In this paper we consider linear, shift-invariant and complete behaviors over Z_{p^r} and study under which conditions such behaviors are autonomous and/or have no free variables.
---
paper_title: Linear Recurring Arrays, Linear Systems and Multidimensional Cyclic Codes over Quasi-Frobenius Rings
paper_content:
This paper generalizes the duality between polynomial modules and their inverse systems (Macaulay), behaviors (Willems) or zero sets of arrays or multi-sequences from the known case of base fields to that of commutative quasi-Frobenius (QF) base rings or even to QF-modules over arbitrary commutative Artinian rings. The latter generalization was inspired by the work of Nechaev et al. who studied linear recurring arrays over QF-rings and modules. Such a duality can be and has been suggestively interpreted as a Nullstellensatz for polynomial ideals or modules. We also give an algorithmic characterization of principal systems. We use these results to define and characterize n-dimensional cyclic codes and their dual codes over QF rings for n>1. If the base ring is an Artinian principal ideal ring and hence QF, we give a sufficient condition on the codeword lengths so that each such code is generated by just one codeword. Our result is the n-dimensional extension of the results by Calderbank and Sloane, Kanwar and Lopez-Permouth, Z. X. Wan, and Norton and Salagean for n=1.
---
paper_title: Reed-Solomon list decoding from a system-theoretic perspective
paper_content:
In this paper, the Sudan-Guruswami approach to list decoding of Reed-Solomon (RS) codes is cast in a system-theoretic framework. With the data, a set of trajectories or time series is associated which is then modeled as a so-called behavior. In this way, a connection is made with the behavioral approach to system theory. It is shown how a polynomial representation of the modeling behavior gives rise to the bivariate interpolating polynomials of the Sudan-Guruswami approach. The concept of "weighted row reduced" is introduced and used to achieve minimality. Two decoding methods are derived and a parametrization of all bivariate interpolating polynomials is given.
---
paper_title: Autonomy properties of multidimensional linear systems over rings
paper_content:
Based on the notions of rank and reduced rank from commutative algebra, we discuss several aspects of the concept of autonomy for multidimensional discrete linear systems over finite rings of the form Z/mZ. We review several algebraic characterizations of autonomy that are equivalent for systems over fields and investigate their relationship in the ring case. The strongest of these notions turns out to be equivalent to the non-existence of trajectories with finite support (besides the zero trajectory), and the weakest one amounts to the fact that the system has no free variables (inputs).
---
paper_title: Stability and Stabilization of Multidimensional Input/Output Systems
paper_content:
In this paper we discuss stability and stabilization of continuous and discrete multidimensional input/output (IO) behaviors (of dimension r) which are described by linear systems of complex partial differential (resp., difference) equations with constant coefficients, where the signals are taken from various function spaces, in particular from those of polynomial-exponential functions. Stability is defined with respect to a disjoint decomposition of the r-dimensional complex space into a stable and an unstable region, with the standard stable region in the one-dimensional continuous case being the set of complex numbers with negative real part. A rational function is called stable if it has no poles in the unstable region. An IO behavior is called stable if the characteristic variety of its autonomous part has no points in the unstable region. This is equivalent to the stability of its transfer matrix and an additional condition. The system is called stabilizable if there is a compensator IO system such that the output feedback system is well-posed and stable. We characterize stability and stabilizability and construct all stabilizing compensators of a stabilizable IO system (parametrization). The theorems and proofs are new but essentially inspired and influenced by and related to the stabilization theorems concerning multidimensional IO maps as developed, for instance, by Bose, Guiver, Shankar, Sule, Xu, Lin, Ying, Zerz, and Quadrat and, of course, the seminal papers of Vidyasagar, Youla, and others in the one-dimensional case. In contrast to the existing literature, the theorems and proofs of this paper do not need or employ the so-called fractional representation approach, i.e., various matrix fraction descriptions of the transfer matrix, thus avoiding the often lengthy matrix computations and seeming to be of interest even for one-dimensional systems (at least to the author). An important mathematical tool, new in systems theory, is Gabriel’s localization theory which, only in the case of ideal-convex (Shankar, Sule) unstable regions, coincides with the usual one. Algorithmic tests for stability, stabilizability, and ideal-convexity, and the algorithmic construction of stabilizing compensators, are addressed but still encounter many difficulties; see in particular the open problems listed by Xu et al.
---
paper_title: Representations and structural properties of periodic systems
paper_content:
We consider periodic behavioral systems as introduced in [Kuijper, M., & Willems, J. C. (1997). A behavioral framework for periodically time-varying systems. In Proceedings of the 36th conference on decision & control (Vol. 3, pp. 2013-2016). San Diego, California, USA, 10-12 December 1997] and analyze two main issues: behavioral representation/controllability and autonomy. More concretely, we study the equivalence and the minimality of kernel representations, and introduce latent variable (and, in particular, image) representations. Moreover we relate the controllability of a periodic system with the controllability of an associated time-invariant system known as lifted system, and derive a controllability test. Further, we prove the existence of an autonomous/controllable decomposition similar to the time-invariant case. Finally, we introduce a new concept of free variables and inputs, which can be regarded as a generalization of the one adopted for time-invariant systems, but appears to be more adequate for the periodic case.
---
paper_title: On the Parametrization of All Regularly Implementing and Stabilizing Controllers
paper_content:
In this paper we deal with problems of controller parametrization in the context of behavioral systems. Given a full plant behavior, a subbehavior of the manifest plant behavior is called regularly implementable if it can be achieved as the controlled behavior resulting from the interconnection of the full plant behavior with a suitable controller behavior, in such a way that the controller does not impose restrictions that are already present in the plant. We establish a parametrization of all controllers that regularly implement a given behavior. We also obtain a parametrization of all stabilizing controllers.
---
paper_title: State maps for linear systems
paper_content:
Modeling of physical systems consists of writing the equations describing a phenomenon and yields as a result a set of differential-algebraic equations. As such, state-space models are not a natural starting point for modeling, while they have utmost importance in the simulation and control phase. The paper addresses the problem of computing state variables for systems of linear differential-algebraic equations of various forms. The point of view from which the problem is considered is the behavioral one, as put forward in [J. C. Willems, Automatica J. IFAC, 22 (1986), pp. 561--580; Dynamics Reported, 2 (1989), pp. 171--269; IEEE Trans. Automat. Control, 36 (1991), pp. 259--294].
---
paper_title: Observer synthesis in the behavioral approach
paper_content:
This paper analyzes the observer design problem in the behavioral context. Observability and detectability notions are first introduced and fully characterized. Necessary and sufficient conditions for the existence of an observer, possibly an asymptotic or an exact one, are introduced, and a complete parameterization of all admissible observers is given. The problem of obtaining observers endowed with a (strictly) proper transfer matrix and the design of observer-based controllers are later addressed and solved. Finally, the above issues are particularized to the case of state-space systems, thus showing that they naturally generalize well-known theorems of traditional system theory.
---
paper_title: On the state of behaviors
paper_content:
The theme of the present paper is the study of the concept of state and the corresponding state maps in the context of Willems' behavioral theory. We concentrate on Markovian systems and their representation in terms of first order difference or differential systems. This is followed by a full analysis of the special case of state systems and the embedding of a linear system in a state system via the use of state maps, arriving at state representations or, equivalently, at a realization theory for behaviors. Minimality is defined and characterized, and a state space isomorphism theorem is established. Realization procedures based on the shift realization are developed, as well as a rigorous analysis of the construction of state maps. The paper owes much to Rapisarda and Willems [P. Rapisarda, J.C. Willems, State maps for linear systems, SIAM J. Contr. Optim. 35 (1997) 1053–1091].
---
|
Title: Behavioral Systems Theory: A Survey
Section 1: Introduction
Description 1: Provide an overview and background of behavioral systems theory, highlighting the contributions of J.C. Willems and the current state of research in this field.
Section 2: Starting point
Description 2: Introduce the reader to the basic concepts of viewing a system, explain Willems' definition of a dynamical system, and set the foundation for understanding behavioral systems.
Section 3: E. Zerz
Description 3: Discuss the modifications and alternative definitions of systems proposed by researchers following Willems' original definition, including key concepts and terminologies.
Section 4: First steps
Description 4: Explain the initial steps in the study of behavioral systems, including linearity, time-invariance, and the concept of kernel representations in both discrete and continuous time.
Section 5: Milestones
Description 5: Highlight significant contributions and fundamental theorems in the development of behavioral systems theory, focusing on concepts such as input-output relationships, controllability, and autonomy.
Section 6: Further paths
Description 6: Examine extensions of the basic theory into more complex areas such as multidimensional systems, continuous time-varying systems, and discrete systems over finite rings, illustrating their unique challenges and results.
Section 7: Where to go from here
Description 7: Provide a short outlook on recent developments, potential future research directions, and important topics within behavioral systems theory that were not covered in the survey.
|
Adaptation in cloud resource configuration: a survey
| 12 |
---
paper_title: Allocation of Virtual Machines in Cloud Data Centers—A Survey of Problem Models and Optimization Algorithms
paper_content:
Data centers in public, private, and hybrid cloud settings make it possible to provision virtual machines (VMs) with unprecedented flexibility. However, purchasing, operating, and maintaining the underlying physical resources incurs significant monetary costs and environmental impact. Therefore, cloud providers must optimize the use of physical resources by a careful allocation of VMs to hosts, continuously balancing between the conflicting requirements on performance and operational costs. In recent years, several algorithms have been proposed for this important optimization problem. Unfortunately, the proposed approaches are hardly comparable because of subtle differences in the used problem models. This article surveys the used problem formulations and optimization algorithms, highlighting their strengths and limitations, and pointing out areas that need further research.
---
paper_title: Resource Management in Clouds: Survey and Research Challenges
paper_content:
Resource management in a cloud environment is a hard problem, due to: the scale of modern data centers; the heterogeneity of resource types and their interdependencies; the variability and unpredictability of the load; as well as the range of objectives of the different actors in a cloud ecosystem. Consequently, both academia and industry began significant research efforts in this area. In this paper, we survey the recent literature, covering 250+ publications, and highlighting key results. We outline a conceptual framework for cloud resource management and use it to structure the state-of-the-art review. Based on our analysis, we identify five challenges for future investigation. These relate to: providing predictable performance for cloud-hosted applications; achieving global manageability for cloud systems; engineering scalable resource management systems; understanding economic behavior and cloud pricing; and developing solutions for the mobile cloud paradigm.
---
paper_title: Survey of Elasticity Management Solutions in Cloud Computing
paper_content:
Application Service Providers (ASPs) are increasingly adopting the cloud computing paradigm to provision remotely available resources for their applications. In this context, the ability of cloud computing to provision resources on-demand in an elastic manner is of the utmost practical interest for them. As a consequence, the field of cloud computing has witnessed the development of a large amount of elasticity management solutions deeply rooted in works from distributed systems and grid computing research communities. This chapter presents some solutions that differ in their goals, in the actions they are able to perform and in their architectures. In this chapter, we provide an overview of the concept of cloud elasticity and propose a classification of the mechanisms and techniques employed to manage elasticity. We also use this classification as a common ground to study and compare elasticity management solutions.
---
paper_title: QoS-Aware Autonomic Resource Management in Cloud Computing: A Systematic Review
paper_content:
As computing infrastructure expands, resource management in a large, heterogeneous, and distributed environment becomes a challenging task. In a cloud environment, with uncertainty and dispersion of resources, one encounters resource allocation problems caused by factors such as heterogeneity, dynamism, and failures. Unfortunately, existing resource management techniques, frameworks, and mechanisms are insufficient to handle these environments, applications, and resource behaviors. To provide efficient performance of workloads and applications, the aforementioned characteristics should be addressed effectively. This research presents a broad, methodical literature analysis of autonomic resource management in the cloud in general and QoS (Quality of Service)-aware autonomic resource management specifically. The current status of autonomic resource management in cloud computing is organized into various categories, and a methodical analysis of autonomic resource management in cloud computing and its techniques, as developed by various industry and academic groups, is described. Further, a taxonomy of autonomic resource management in the cloud is presented. This research work will help researchers find the important characteristics of autonomic resource management and will also help them select the most suitable technique for autonomic resource management in a specific application, along with significant future research directions.
---
paper_title: A Survey on Resource Allocation and Monitoring in Cloud Computing
paper_content:
The cloud provider plays a major role, especially in providing resources such as computing power for cloud subscribers to deploy their applications on multiple platforms anywhere, anytime. However, cloud users still face resource management problems in receiving guaranteed computing resources on time. This impacts the service time and the service level agreements for various users across multiple applications, so a new resolution to this problem is needed. This survey paper conducts a study of resource allocation and monitoring in the cloud computing environment. We describe cloud computing and its properties, research issues in resource management (mainly in resource allocation and monitoring), and finally solution approaches for resource allocation and monitoring. It is believed that this paper will benefit both cloud users and researchers seeking further knowledge on resource management in cloud computing. On the other hand, the resources in the cloud are pooled in order to serve multiple subscribers. The provider uses a multi-tenancy model where the resources (physical and virtual) are reassigned dynamically based on tenant requirements (5). The assignment of resources is based on the lease and SLA agreement, whereby different clients need larger or smaller amounts of virtual resources. Consequently, the growing demand for cloud services brings more challenges for the provider in delivering resources to client subscribers. Therefore, in this paper we provide a review of cloud computing that focuses on resource management: allocation and monitoring. Our methodology for this review is as follows: we provide a cloud computing taxonomy covering cloud definitions, characteristics and deployment models; we then analyze the literature and discuss resource management, its process and elements; we then concentrate on the literature on resource allocation and monitoring; and we derive the problems, challenges and solution approaches for resource allocation and monitoring in the cloud. The paper is organized as follows: Section II introduces an overview of cloud computing, Section III discusses resource management and its processes, Section IV discusses related work on resource management in the cloud, Section V describes solution approaches to resource allocation and monitoring and, finally, Section VI concludes the paper.
---
paper_title: Autonomic Computing
paper_content:
Systems that install, heal, protect themselves and adapt to your needs - automatically; using autonomic computing to reduce costs, improve services, and enhance agility; autonomic components, architectures, standards, and development tools; planning for and implementing autonomic technology; current autonomic solutions from IBM and other leading companies. Reducing IT costs, improving service, and enabling the "on-demand" business: IT operations costs are accelerating, and today's increasingly complex architectures and distributed computing infrastructures only make matters worse. The solution: autonomic computing. Autonomic systems are self-configuring, self-healing, self-optimizing, and self-protecting. They operate intelligently and dynamically, acting on your policies and service requirements. This book presents everything IT leaders and managers need to know to prepare for autonomic computing and to begin leveraging its benefits. Coverage includes: how autonomic computing can reduce costs, improve service levels, enhance agility, simplify management, and help deliver the "on demand" business; the key elements and attributes of autonomic computing systems; current autonomic technologies from IBM and many other leading suppliers; autonomic computing architectures, open standards, development tools, and enablers; implementation considerations, including a new assessment methodology; and the future of autonomic computing: business opportunities and research challenges.
---
paper_title: A Systematic Review of Service Level Management in the Cloud
paper_content:
Cloud computing make it possible to flexibly procure, scale, and release computational resources on demand in response to workload changes. Stakeholders in business and academia are increasingly exploring cloud deployment options for their critical applications. One open problem is that service level agreements (SLAs) in the cloud ecosystem are yet to mature to a state where critical applications can be reliably deployed in clouds. This article systematically surveys the landscape of SLA-based cloud research to understand the state of the art and identify open problems. The survey is particularly aimed at the resource allocation phase of the SLA life cycle while highlighting implications on other phases. Results indicate that (i) minimal number of SLA parameters are accounted for in most studies; (ii) heuristics, policies, and optimisation are the most commonly used techniques for resource allocation; and (iii) the monitor-analysis-plan-execute (MAPE) architecture style is predominant in autonomic cloud systems. The results contribute to the fundamentals of engineering cloud SLA and their autonomic management, motivating further research and industrial-oriented solutions.
---
paper_title: Cloud Elasticity: A Survey
paper_content:
Cloud elasticity is a unique feature of cloud environments, which allows for the on demand de-provisioning or reconfiguration of the resources of cloud deployments. The efficient handling of cloud elasticity is a challenge that attracts the interest of the research community. This work constitutes a survey of research efforts towards this direction. The main contribution of this work is an up-to-date review of the latest elasticity handling approaches and a detailed classification scheme, focusing on the elasticity decision making techniques. Finally, we discuss various research challenges and directions of further research, regarding all phases of cloud elasticity, which can be deemed as a special case of autonomic behavior of computing systems.
---
paper_title: A Taxonomy and Survey of Energy-Efficient Data Centers and Cloud Computing Systems
paper_content:
Traditionally, the development of computing systems has been focused on performance improvements driven by the demand of applications from consumer, scientific, and business domains. However, the ever-increasing energy consumption of computing systems has started to limit further performance growth due to overwhelming electricity bills and carbon dioxide footprints. Therefore, the goal of the computer system design has been shifted to power and energy efficiency. To identify open challenges in the area and facilitate future advancements, it is essential to synthesize and classify the research on power- and energy-efficient design conducted to date. In this study, we discuss causes and problems of high power/energy consumption, and present a taxonomy of energy-efficient design of computing systems covering the hardware, operating system, virtualization, and data center levels. We survey various key works in the area and map them onto our taxonomy to guide future design and development efforts. This chapter concludes with a discussion on advancements identified in energy-efficient computing and our vision for future research directions.
---
paper_title: Elasticity in Cloud Computing : What it is, and What it is Not
paper_content:
Originating from the field of physics and economics, the term elasticity is nowadays heavily used in the context of cloud computing. In this context, elasticity is commonly understood as the ability of a system to automatically provision and deprovision computing resources on demand as workloads change. However, elasticity still lacks a precise definition as well as representative metrics coupled with a benchmarking methodology to enable comparability of systems. Existing definitions of elasticity are largely inconsistent and unspecific, which leads to confusion in the use of the term and its differentiation from related terms such as scalability and efficiency; the proposed measurement methodologies do not provide means to quantify elasticity without mixing it with efficiency or scalability aspects. In this short paper, we propose a precise definition of elasticity and analyze its core properties and requirements explicitly distinguishing from related terms such as scalability and efficiency. Furthermore, we present a set of appropriate elasticity metrics and sketch a new elasticity tailored benchmarking methodology addressing the special requirements on workload design and calibration.
---
paper_title: Dynamic Frequency and Voltage Scaling for a Multiple-Clock-Domain Microprocessor
paper_content:
Multiple clock domains is one solution to the increasing problem of propagating the clock signal across increasingly larger and faster chips. The ability to independently scale frequency and voltage in each domain creates a powerful means of reducing power dissipation. A multiple clock domain (MCD) microarchitecture, which uses a globally asynchronous, locally synchronous (GALS) clocking style, permits future aggressive frequency increases, maintains a synchronous design methodology, and exploits the trend of making functional blocks more autonomous. In MCD, each processor domain is internally synchronous, but domains operate asynchronously with respect to one another. Designers still apply existing synchronous design techniques to each domain, but global clock skew is no longer a constraint. Moreover, domains can have independent voltage and frequency control, enabling dynamic voltage scaling at the domain level.
---
paper_title: Adaptive resource configuration for Cloud infrastructure management
paper_content:
To guarantee the vision of Cloud Computing, QoS goals between the Cloud provider and the customer have to be dynamically met. This so-called Service Level Agreement (SLA) enactment should involve little human-based interaction in order to guarantee the scalability and efficient resource utilization of the system. To achieve this we start from Autonomic Computing, examine the autonomic control loop and adapt it to govern Cloud Computing infrastructures. We first hierarchically structure all possible adaptation actions into so-called escalation levels. We then focus on one of these levels by analyzing monitored data from virtual machines and making decisions on their resource configuration with the help of knowledge management (KM). The monitored data stems both from synthetically generated workload categorized in different workload volatility classes and from a real-world scenario: scientific workflow applications in bioinformatics. As KM techniques, we investigate two methods, Case-Based Reasoning and a rule-based approach. We design and implement both of them and evaluate them with the help of a simulation engine. Simulation reveals the feasibility of the CBR approach and major improvements by the rule-based approach considering SLA violations, resource utilization, the number of necessary reconfigurations and time performance for both synthetically generated and real-world data. Highlights: we apply knowledge management to guarantee SLAs and low resource wastage in Clouds; escalation levels provide a hierarchical model to structure possible reconfiguration actions; Case-Based Reasoning and a rule-based approach prove feasible as KM techniques; an in-depth evaluation of the rule-based approach shows major improvements over CBR; KM is applied to real-world data gathered from scientific bioinformatics workflows.
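To make the escalation-level idea more tangible, here is a minimal rule-based sketch (an illustration only, not the authors' knowledge-management system): monitored utilisation is compared against SLA-derived thresholds and the cheapest action that can resolve the threat is chosen, escalating from local VM reconfiguration to migration only when the host has no spare capacity. The thresholds, action names and escalation order are assumptions.

```python
# Escalation levels ordered from cheapest to most disruptive (assumed ordering);
# level 3 (powering on an additional host) is omitted from this sketch.
ESCALATION_LEVELS = ["change_vm_configuration", "migrate_vm", "power_on_new_host"]

def recommend_action(utilisation, host_free_share, sla_high=0.85, sla_low=0.30):
    """Rule-based choice of the lowest escalation level that resolves an SLA threat."""
    if sla_low <= utilisation <= sla_high:
        return None                                # inside the target band: no action
    if utilisation > sla_high:
        if host_free_share > 0.0:
            return ESCALATION_LEVELS[0]            # level 1: grow the VM on its current host
        return ESCALATION_LEVELS[1]                # level 2: migrate to a host with headroom
    return "shrink_vm_configuration"               # under-utilised: release resources

print(recommend_action(utilisation=0.92, host_free_share=0.0))   # -> 'migrate_vm'
```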
---
paper_title: Towards energy-aware scheduling in data centers using machine learning
paper_content:
As energy-related costs have become a major economical factor for IT infrastructures and data-centers, companies and the research community are being challenged to find better and more efficient power-aware resource management strategies. There is a growing interest in "Green" IT and there is still a big gap in this area to be covered. In order to obtain an energy-efficient data center, we propose a framework that provides an intelligent consolidation methodology using different techniques such as turning on/off machines, power-aware consolidation algorithms, and machine learning techniques to deal with uncertain information while maximizing performance. For the machine learning approach, we use models learned from previous system behaviors in order to predict power consumption levels, CPU loads, and SLA timings, and improve scheduling decisions. Our framework is vertical, because it considers from watt consumption to workload features, and cross-disciplinary, as it uses a wide variety of techniques. We evaluate these techniques with a framework that covers the whole control cycle of a real scenario, using a simulation with representative heterogeneous workloads, and we measure the quality of the results according to a set of metrics focused toward our goals, besides traditional policies. The results obtained indicate that our approach is close to the optimal placement and behaves better when the level of uncertainty increases.
---
paper_title: Towards an adaptive human-centric computing resource management framework based on resource prediction and multi-objective genetic algorithm
paper_content:
The complexity, scale and dynamics of data sources in human-centric computing bring great challenges to maintainers. An open problem is how to reduce manual intervention in large-scale human-centric computing, such as cloud computing resource management, so that the system can manage itself automatically according to configuration strategies. To address this problem, a resource management framework based on resource prediction and multi-objective genetic algorithm resource allocation (RPMGA-RMF) was proposed. It searches for the optimal load cluster as a training sample based on load similarity, and a neural network (NN) algorithm is used to predict the resource load. The model also builds virtual machine migration requests in accordance with the predicted load values. A multi-objective genetic algorithm (GA) based on a hybrid group encoding scheme is introduced for virtual machine (VM) resource management, so as to provide an optimal VM migration strategy and thus achieve adaptive, optimized configuration management of resources. Experimental results based on the CloudSim platform show that RPMGA-RMF can decrease the number of VM migrations while simultaneously reducing the number of physical nodes in use. System energy consumption can be reduced and load balancing can be achieved as well.
---
paper_title: Mistral: Dynamically Managing Power, Performance, and Adaptation Cost in Cloud Infrastructures
paper_content:
Server consolidation based on virtualization is an important technique for improving power efficiency and resource utilization in cloud infrastructures. However, to ensure satisfactory performance on shared resources under changing application workloads, dynamic management of the resource pool via online adaptation is critical. The inherent tradeoffs between power and performance as well as between the cost of an adaptation and its benefits make such management challenging. In this paper, we present Mistral, a holistic controller framework that optimizes power consumption, performance benefits, and the transient costs incurred by various adaptations and the controller itself to maximize overall utility. Mistral can handle multiple distributed applications and large-scale infrastructures through a multi-level adaptation hierarchy and scalable optimization algorithm. We show that our approach outstrips other strategies that address the tradeoff between only two of the objectives (power, performance, and transient costs).
---
paper_title: CloudScale: elastic resource scaling for multi-tenant cloud systems
paper_content:
Elastic resource scaling lets cloud systems meet application service level objectives (SLOs) with minimum resource provisioning costs. In this paper, we present CloudScale, a system that automates fine-grained elastic resource scaling for multi-tenant cloud computing infrastructures. CloudScale employs online resource demand prediction and prediction error handling to achieve adaptive resource allocation without assuming any prior knowledge about the applications running inside the cloud. CloudScale can resolve scaling conflicts between applications using migration, and integrates dynamic CPU voltage/frequency scaling to achieve energy savings with minimal effect on application SLOs. We have implemented CloudScale on top of Xen and conducted extensive experiments using a set of CPU and memory intensive applications (RUBiS, Hadoop, IBM System S). The results show that CloudScale can achieve significantly higher SLO conformance than other alternatives with low resource and energy cost. CloudScale is non-intrusive and light-weight, and imposes negligible overhead.
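To give a flavour of the predict-then-correct pattern described above, the following is a simplified illustration (not CloudScale itself): the next resource demand is predicted with a short moving average and padded with the largest recent under-prediction error, so that transient under-estimates do not translate into SLO violations. The window length and padding policy are assumptions.

```python
from collections import deque

class PaddedPredictor:
    """Moving-average demand prediction padded by recent under-prediction errors."""
    def __init__(self, window=5):
        self.history = deque(maxlen=window)    # recent demand samples
        self.errors = deque(maxlen=window)     # recent under-prediction errors
        self.last_prediction = None

    def observe(self, demand):
        if self.last_prediction is not None:
            # Only under-predictions matter: over-provisioning is not an SLO risk.
            self.errors.append(max(0.0, demand - self.last_prediction))
        self.history.append(demand)

    def predict(self):
        base = sum(self.history) / len(self.history) if self.history else 0.0
        pad = max(self.errors) if self.errors else 0.0
        self.last_prediction = base
        return base + pad                      # allocate predicted demand plus headroom

p = PaddedPredictor(window=4)
for demand in [0.40, 0.45, 0.55, 0.70]:        # rising CPU demand (fraction of a core)
    p.observe(demand)
    print(round(p.predict(), 3))
```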
---
paper_title: Automated control for elastic storage
paper_content:
Elasticity - where systems acquire and release resources in response to dynamic workloads, while paying only for what they need - is a driving property of cloud computing. At the core of any elastic system is an automated controller. This paper addresses elastic control for multi-tier application services that allocate and release resources in discrete units, such as virtual server instances of predetermined sizes. It focuses on elastic control of the storage tier, in which adding or removing a storage node or "brick" requires rebalancing stored data across the nodes. The storage tier presents new challenges for elastic control: actuator delays (lag) due to rebalancing, interference with applications and sensor measurements, and the need to synchronize the multiple control elements, including rebalancing. We have designed and implemented a new controller for elastic storage systems to address these challenges. Using a popular distributed storage system - the Hadoop Distributed File System (HDFS) - under dynamic Web 2.0 workloads, we show how the controller adapts to workload changes to maintain performance objectives efficiently in a pay-as-you-go cloud computing environment.
---
paper_title: AROMA: automated resource allocation and configuration of mapreduce environment in the cloud
paper_content:
Distributed data processing framework MapReduce is increasingly deployed in Clouds to leverage the pay-per-usage cloud computing model. Popular Hadoop MapReduce environment expects that end users determine the type and amount of Cloud resources for reservation as well as the configuration of Hadoop parameters. However, such resource reservation and job provisioning decisions require in-depth knowledge of system internals and laborious but often ineffective parameter tuning. We propose and develop AROMA, a system that automates the allocation of heterogeneous Cloud resources and configuration of Hadoop parameters for achieving quality of service goals while minimizing the incurred cost. It addresses the significant challenge of provisioning ad-hoc jobs that have performance deadlines in Clouds through a novel two-phase machine learning and optimization framework. Its technical core is a support vector machine based performance model that enables the integration of various aspects of resource provisioning and auto-configuration of Hadoop jobs. It adapts to ad-hoc jobs by robustly matching their resource utilization signature with previously executed jobs and making provisioning decisions accordingly. We implement AROMA as an automated job provisioning system for Hadoop MapReduce hosted in virtualized HP ProLiant blade servers. Experimental results show AROMA's effectiveness in providing performance guarantee of diverse Hadoop benchmark jobs while minimizing the cost of Cloud resource usage.
---
paper_title: An adaptive hybrid elasticity controller for cloud infrastructures
paper_content:
Cloud elasticity is the ability of the cloud infrastructure to rapidly change the amount of resources allocated to a service in order to meet the actual varying demands on the service while enforcing SLAs. In this paper, we focus on horizontal elasticity, the ability of the infrastructure to add or remove virtual machines allocated to a service deployed in the cloud. We model a cloud service using queuing theory. Using that model we build two adaptive proactive controllers that estimate the future load on a service. We explore the different possible scenarios for deploying a proactive elasticity controller coupled with a reactive elasticity controller in the cloud. Using simulation with workload traces from the FIFA world-cup web servers, we show that a hybrid controller that incorporates a reactive controller for scale up coupled with our proactive controllers for scale down decisions reduces SLA violations by a factor of 2 to 10 compared to a regression based controller or a completely reactive controller.
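As a concrete illustration of the hybrid scheme (a minimal sketch, not the authors' controllers), the code below scales up reactively as soon as measured utilisation crosses a threshold and scales down proactively using a simple queueing-style estimate of how many servers the predicted arrival rate requires. The predictor, thresholds and per-VM service rate are assumed values.

```python
import math

def servers_needed(arrival_rate, service_rate, target_util=0.7):
    """Smallest server count keeping estimated utilisation below target (queueing-style sizing)."""
    return max(1, math.ceil(arrival_rate / (service_rate * target_util)))

def hybrid_decision(current_vms, measured_util, predicted_arrival_rate,
                    service_rate, scale_up_util=0.85):
    """Reactive scale-up, proactive scale-down (illustrative hybrid policy)."""
    if measured_util > scale_up_util:              # reactive: respond to the spike immediately
        return current_vms + 1
    proactive_target = servers_needed(predicted_arrival_rate, service_rate)
    if proactive_target < current_vms:             # proactive: release capacity only when the
        return current_vms - 1                     # predicted load no longer needs it
    return current_vms

# Example: 10 req/s predicted, each VM handles about 3 req/s.
print(hybrid_decision(current_vms=6, measured_util=0.60,
                      predicted_arrival_rate=10.0, service_rate=3.0))   # -> 5
```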
---
paper_title: Power and performance management of virtualized computing environments via lookahead control
paper_content:
There is growing incentive to reduce the power consumed by large-scale data centers that host online services such as banking, retail commerce, and gaming. Virtualization is a promising approach to consolidating multiple online services onto a smaller number of computing resources. A virtualized server environment allows computing resources to be shared among multiple performance-isolated platforms called virtual machines. By dynamically provisioning virtual machines, consolidating the workload, and turning servers on and off as needed, data center operators can maintain the desired quality-of-service (QoS) while achieving higher server utilization and energy efficiency. We implement and validate a dynamic resource provisioning framework for virtualized server environments wherein the provisioning problem is posed as one of sequential optimization under uncertainty and solved using a lookahead control scheme. The proposed approach accounts for the switching costs incurred while provisioning virtual machines and explicitly encodes the corresponding risk in the optimization problem. Experiments using the Trade6 enterprise application show that a server cluster managed by the controller conserves, on average, 22% of the power required by a system without dynamic control while still maintaining QoS goals. Finally, we use trace-based simulations to analyze controller performance on server clusters larger than our testbed, and show how concepts from approximation theory can be used to further reduce the computational burden of controlling large systems.
---
paper_title: Energy-Aware Resource Allocation Heuristics for Efficient Management of Data Centers for Cloud Computing
paper_content:
Cloud computing offers utility-oriented IT services to users worldwide. Based on a pay-as-you-go model, it enables hosting of pervasive applications from consumer, scientific, and business domains. However, data centers hosting Cloud applications consume huge amounts of electrical energy, contributing to high operational costs and carbon footprints to the environment. Therefore, we need Green Cloud computing solutions that can not only minimize operational costs but also reduce the environmental impact. In this paper, we define an architectural framework and principles for energy-efficient Cloud computing. Based on this architecture, we present our vision, open research challenges, and resource provisioning and allocation algorithms for energy-efficient management of Cloud computing environments. The proposed energy-aware allocation heuristics provision data center resources to client applications in a way that improves energy efficiency of the data center, while delivering the negotiated Quality of Service (QoS). In particular, in this paper we conduct a survey of research in energy-efficient computing and propose: (a) architectural principles for energy-efficient management of Clouds; (b) energy-efficient resource allocation policies and scheduling algorithms considering QoS expectations and power usage characteristics of the devices; and (c) a number of open research challenges, addressing which can bring substantial benefits to both resource providers and consumers. We have validated our approach by conducting a performance evaluation study using the CloudSim toolkit. The results demonstrate that Cloud computing model has immense potential as it offers significant cost savings and demonstrates high potential for the improvement of energy efficiency under dynamic workload scenarios.
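The sketch below illustrates the kind of power-aware placement heuristic surveyed above (it is not the paper's exact algorithm): each VM is assigned to the feasible host whose estimated power draw increases the least under a linear idle-to-peak power model. The wattage figures, capacities and single-resource (CPU) view are simplifying assumptions.

```python
def host_power(cpu_util, p_idle=175.0, p_peak=250.0):
    """Linear power model: draw grows with CPU utilisation (assumed wattage figures)."""
    return p_idle + (p_peak - p_idle) * cpu_util

def place_vm(vm_cpu, hosts):
    """Assign the VM to the host with the smallest estimated increase in power draw."""
    best, best_delta = None, float("inf")
    for host in hosts:
        new_used = host["used"] + vm_cpu
        if new_used > host["capacity"]:                     # skip hosts that cannot fit the VM
            continue
        delta = (host_power(new_used / host["capacity"])
                 - host_power(host["used"] / host["capacity"]))
        if delta < best_delta:
            best, best_delta = host, delta
    if best is not None:
        best["used"] += vm_cpu
    return best

hosts = [{"name": "h1", "capacity": 32.0, "used": 20.0},
         {"name": "h2", "capacity": 16.0, "used": 2.0}]
print(place_vm(4.0, hosts)["name"])   # -> 'h1' (smaller power increase on the larger host)
```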
---
paper_title: Autonomic Management of Cloud Service Centers with Availability Guarantees
paper_content:
Modern cloud infrastructures live in an open world, characterized by continuous changes in the environment and in the requirements they have to meet. Continuous changes occur autonomously and unpredictably, and they are out of control of the cloud provider. Therefore, advanced solutions have to be developed able to dynamically adapt the cloud infrastructure, while providing continuous service and performance guarantees. A number of autonomic computing solutions have been developed such that resources are dynamically allocated among running applications on the basis of short-term demand estimates. However, only performance and energy trade-off have been considered so far with a lower emphasis on the infrastructure dependability/availability which has been demonstrated to be the weakest link in the chain for early cloud providers. The aim of this paper is to fill this literature gap devising resource allocation policies for cloud virtualized environments able to identify performance and energy trade-offs, providing a priori availability guarantees for cloud end-users.
---
paper_title: Enabling cost-aware and adaptive elasticity of multi-tier cloud applications
paper_content:
Elasticity (on-demand scaling) of applications is one of the most important features of cloud computing. This elasticity is the ability to adaptively scale resources up and down in order to meet varying application demands. To date, most existing scaling techniques can maintain applications' Quality of Service (QoS) but do not adequately address issues relating to minimizing the costs of using the service. In this paper, we propose an elastic scaling approach that makes use of cost-aware criteria to detect and analyse the bottlenecks within multi-tier cloud-based applications. We present an adaptive scaling algorithm that reduces the costs incurred by users of cloud infrastructure services, allowing them to scale their applications only at bottleneck tiers, and present the design of an intelligent platform that automates the scaling process. Our approach is generic for a wide class of multi-tier applications, and we demonstrate its effectiveness against other approaches by studying the behaviour of an example e-commerce application using a standard workload benchmark.
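The following minimal sketch (an illustration, not the authors' platform) captures the core of the cost-aware idea: find the tier whose utilisation exceeds its threshold by the largest margin and add one instance there, leaving healthy tiers untouched. Tier names, metrics and thresholds are assumptions.

```python
def find_bottleneck(tiers, threshold=0.8):
    """Return the most saturated tier above the threshold, or None if none is saturated."""
    saturated = [t for t in tiers if t["util"] > threshold]
    return max(saturated, key=lambda t: t["util"]) if saturated else None

def scale_bottleneck_tier(tiers):
    """Add one instance only at the bottleneck tier; other tiers keep their current size."""
    bottleneck = find_bottleneck(tiers)
    if bottleneck is not None:
        bottleneck["instances"] += 1
    return bottleneck

tiers = [{"name": "web", "util": 0.55, "instances": 2},
         {"name": "app", "util": 0.91, "instances": 3},
         {"name": "db",  "util": 0.62, "instances": 1}]
print(scale_bottleneck_tier(tiers))   # only the 'app' tier grows to 4 instances
```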
---
paper_title: Q-clouds: managing performance interference effects for QoS-aware clouds
paper_content:
Cloud computing offers users the ability to access large pools of computational and storage resources on demand. Multiple commercial clouds already allow businesses to replace, or supplement, privately owned IT assets, alleviating them from the burden of managing and maintaining these facilities. However, there are issues that must be addressed before this vision of utility computing can be fully realized. In existing systems, customers are charged based upon the amount of resources used or reserved, but no guarantees are made regarding the application level performance or quality-of-service (QoS) that the given resources will provide. As cloud providers continue to utilize virtualization technologies in their systems, this can become problematic. In particular, the consolidation of multiple customer applications onto multicore servers introduces performance interference between collocated workloads, significantly impacting application QoS. To address this challenge, we advocate that the cloud should transparently provision additional resources as necessary to achieve the performance that customers would have realized if they were running in isolation. Accordingly, we have developed Q-Clouds, a QoS-aware control framework that tunes resource allocations to mitigate performance interference effects. Q-Clouds uses online feedback to build a multi-input multi-output (MIMO) model that captures performance interference interactions, and uses it to perform closed loop resource management. In addition, we utilize this functionality to allow applications to specify multiple levels of QoS as application Q-states. For such applications, Q-Clouds dynamically provisions underutilized resources to enable elevated QoS levels, thereby improving system efficiency. Experimental evaluations of our solution using benchmark applications illustrate the benefits: performance interference is mitigated completely when feasible, and system utilization is improved by up to 35% using Q-states.
---
paper_title: Divide the Task, Multiply the Outcome: Cooperative VM Consolidation
paper_content:
Efficient resource utilization is one of the main concerns of cloud providers, as it has a direct impact on energy costs and thus their revenue. Virtual machine (VM) consolidation is one the common techniques, used by infrastructure providers to efficiently utilize their resources. However, when it comes to large-scale infrastructures, consolidation decisions become computationally complex, since VMs are multi-dimensional entities with changing demand and unknown lifetime, and users often overestimate their actual demand. These uncertainties urges the system to take consolidation decisions continuously in a real time manner. In this work, we investigate a decentralized approach for VM consolidation using Peer to Peer (P2P) principles. We investigate the opportunities offered by P2P systems, as scalable and robust management structures, to address VM consolidation concerns. We present a P2P consolidation protocol, considering the dimensionality of resources and dynamicity of the environment. The protocol benefits from concurrency and decentralization of control and it uses a dimension aware decision function for efficient consolidation. We evaluate the protocol through simulation of 100,000 physical machines and 200,000 VM requests. Results demonstrate the potentials and advantages of using a P2P structure to make resource management decisions in large scale data centers. They show that the P2P approach is feasible and scalable and produces resource utilization of 75% when the consolidation aim is 90%.
---
paper_title: Joint admission control and resource allocation in virtualized servers
paper_content:
In service oriented architectures, Quality of Service (QoS) is a key issue. Service requestors evaluate QoS at run time to address their service invocation to the most suitable provider. Thus, QoS has a direct impact on the providers' revenues. However, QoS requirements are difficult to satisfy because of the high variability of Internet workloads. This paper presents a self-managing technique that jointly addresses the resource allocation and admission control optimization problems in virtualized servers. Resource allocation and admission control represent key components of an autonomic infrastructure and are responsible for the fulfillment of service level agreements. Our solution is designed taking into account the provider's revenues, the cost of resource utilization, and customers' QoS requirements, specified in terms of the response time of individual requests. The effectiveness of our joint resource allocation and admission control solution, compared to top performing state-of-the-art techniques, is evaluated using synthetic as well as realistic workloads, for a number of different scenarios of interest. Results show that our solution can satisfy QoS constraints while still yielding a significant gain in terms of profits for the provider, especially under high workload conditions, if compared to the alternative methods. Moreover, it is robust to service time variance, resource usage cost, and workload mispredictions.
---
paper_title: Automated control for elastic n-tier workloads based on empirical modeling
paper_content:
Elastic n-tier applications have non-stationary workloads that require adaptive control of resources allocated to them. This presents not only an opportunity in pay-as-you-use clouds, but also a challenge to dynamically allocate virtual machines appropriately. Previous approaches based on control theory, queuing networks, and machine learning work well for some situations, but each model has its own limitations due to inaccuracies in performance prediction. In this paper we propose a multi-model controller, which integrates adaptation decisions from several models, choosing the best. The focus of our work is an empirical model, based on detailed measurement data from previous application runs. The main advantage of the empirical model is that it returns high quality performance predictions based on measured data. For new application scenarios, we use other models or heuristics as a starting point, and all performance data are continuously incorporated into the empirical model's knowledge base. Using a prototype implementation of the multi-model controller, a cloud testbed, and an n-tier benchmark (RUBBoS), we evaluated and validated the advantages of the empirical model. For example, measured data show that it is more effective to add two nodes as a group, one for each tier, when two tiers approach saturation simultaneously.
---
paper_title: Autonomic resource provisioning in cloud systems with availability goals
paper_content:
The elasticity afforded by cloud computing allows consumers to dynamically request and relinquish computing and storage resources and pay for them on a pay-per-use basis. Cloud computing providers rely on virtualization techniques to manage the dynamic nature of their infrastructure allowing consumers to dynamically allocate and deallocate virtual machines of different capacities. Cloud providers need to optimally decide the best allocation of virtual machines to physical machines as the demand varies dynamically. When making such decisions, cloud providers can migrate VMs already allocated and/or use external cloud providers. This paper considers the problem in which the cloud provider wants to maximize its revenue, subject to capacity, availability SLA, and VM migration constraints. The paper presents a heuristic solution, called Near Optimal (NOPT), to this NP-hard problem and discusses the results of its experimental evaluation in comparison with a best fit (BF) allocation strategy. The results show that NOPT provides a 45% improvement in average revenue when compared with BF for the parameters used in the experiment. Moreover, the NOPT algorithm maintained the availability close to one for all classes of users while BF exhibited a lower availability and even failed to meet the availability SLA at times.
---
paper_title: Automated control of multiple virtualized resources
paper_content:
Virtualized data centers enable sharing of resources among hosted applications. However, it is difficult to satisfy service-level objectives(SLOs) of applications on shared infrastructure, as application workloads and resource consumption patterns change over time. In this paper, we present AutoControl, a resource control system that automatically adapts to dynamic workload changes to achieve application SLOs. AutoControl is a combination of an online model estimator and a novel multi-input, multi-output (MIMO) resource controller. The model estimator captures the complex relationship between application performance and resource allocations, while the MIMO controller allocates the right amount of multiple virtualized resources to achieve application SLOs. Our experimental evaluation with RUBiS and TPC-W benchmarks along with production-trace-driven workloads indicates that AutoControl can detect and mitigate CPU and disk I/O bottlenecks that occur over time and across multiple nodes by allocating each resource accordingly. We also show that AutoControl can be used to provide service differentiation according to the application priorities during resource contention.
---
paper_title: Autonomic resource management in virtualized data centers using fuzzy logic-based approaches
paper_content:
Data centers, as resource providers, are expected to deliver on performance guarantees while optimizing resource utilization to reduce cost. Virtualization techniques provide the opportunity of consolidating multiple separately managed containers of virtual resources on underutilized physical servers. A key challenge that comes with virtualization is the simultaneous on-demand provisioning of shared physical resources to virtual containers and the management of their capacities to meet service-quality targets at the least cost. This paper proposes a two-level resource management system to dynamically allocate resources to individual virtual containers. It uses local controllers at the virtual-container level and a global controller at the resource-pool level. An important advantage of this two-level control architecture is that it allows independent controller designs for separately optimizing the performance of applications and the use of resources. Autonomic resource allocation is realized through the interaction of the local and global controllers. A novelty of the local controller designs is their use of fuzzy logic-based approaches to efficiently and robustly deal with the complexity and uncertainties of dynamically changing workloads and resource usage. The global controller determines the resource allocation based on a proposed profit model, with the goal of maximizing the total profit of the data center. Experimental results obtained through a prototype implementation demonstrate that, for the scenarios under consideration, the proposed resource management system can significantly reduce resource consumption while still achieving application performance targets.
---
paper_title: Integrated and autonomic cloud resource scaling
paper_content:
A Cloud is a very dynamic environment where resources offered by a Cloud Service Provider (CSP), out of one or more Cloud Data Centers (DCs), are acquired or released by an enterprise (tenant) on-demand and at any scale. Typically a tenant will use Cloud service interfaces to acquire or release resources directly. This process can be automated by a CSP by providing an auto-scaling capability where a tenant sets policies indicating under what conditions resources should be auto-scaled. This is especially needed in a Cloud environment because of the huge scale at which a Cloud operates. Typical solutions are naive, causing spurious auto-scaling decisions; for example, they are based only on thresholding triggers, and the thresholding mechanisms themselves are not Cloud-ready. In a Cloud, resources from three separate domains - compute, storage and network - are acquired or released on-demand, but in typical solutions resources from these three domains are not auto-scaled in an integrated fashion. Integrated auto-scaling prevents further spurious scaling and reduces the number of auto-scaling systems to be supported in a Cloud management system. In addition, network resources typically are not auto-scaled. In this paper we describe a Cloud resource auto-scaling system that addresses and overcomes the above limitations.
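To illustrate why naive thresholding produces spurious decisions and how it can be hardened, the sketch below (illustrative only, not the paper's system) requires the threshold to be breached for several consecutive observations and enforces a cool-down period between actions. Window sizes, thresholds and the cool-down are assumptions.

```python
import time

class ThresholdScaler:
    """Threshold rule hardened with consecutive-breach confirmation and a cool-down period."""
    def __init__(self, up=0.8, down=0.3, breaches=3, cooldown_s=300):
        self.up, self.down = up, down
        self.breaches, self.cooldown_s = breaches, cooldown_s
        self.high_count = self.low_count = 0
        self.last_action = float("-inf")

    def decide(self, util, now=None):
        """Return +1 (scale out), -1 (scale in) or 0 (no action) for one observation."""
        now = time.time() if now is None else now
        self.high_count = self.high_count + 1 if util > self.up else 0
        self.low_count = self.low_count + 1 if util < self.down else 0
        if now - self.last_action < self.cooldown_s:
            return 0                                   # still cooling down: ignore the breach
        if self.high_count >= self.breaches:
            self.high_count, self.last_action = 0, now
            return +1
        if self.low_count >= self.breaches:
            self.low_count, self.last_action = 0, now
            return -1
        return 0

scaler = ThresholdScaler()
print([scaler.decide(u, now=i) for i, u in enumerate([0.85, 0.90, 0.88])])   # -> [0, 0, 1]
```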
---
paper_title: Autonomic resource provisioning for cloud-based software
paper_content:
Cloud elasticity provides a software system with the ability to maintain optimal user experience by automatically acquiring and releasing resources, while paying only for what has been consumed. The mechanism for automatically adding or removing resources on the fly is referred to as auto-scaling. The state-of-the-practice with respect to auto-scaling involves specifying threshold-based rules to implement elasticity policies for cloud-based applications. However, there are several shortcomings regarding this approach. Firstly, the elasticity rules must be specified precisely by quantitative values, which requires deep knowledge and expertise. Furthermore, existing approaches do not explicitly deal with uncertainty in cloud-based software, where noise and unexpected events are common. This paper exploits fuzzy logic to enable qualitative specification of elasticity rules for cloud-based software. In addition, this paper discusses a control theoretical approach using type-2 fuzzy logic systems to reason about elasticity under uncertainties. We conduct several experiments to demonstrate that cloud-based software enhanced with such elasticity controller can robustly handle unexpected spikes in the workload and provide acceptable user experience. This translates into increased profit for the cloud application owner.
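As a concrete, deliberately simplified illustration of qualitative elasticity rules (not the paper's type-2 fuzzy controller), the sketch below fuzzifies workload and response time with ramp membership functions, evaluates a tiny rule base, and defuzzifies with a weighted average to obtain a scaling increment. The membership breakpoints and the rules themselves are assumptions.

```python
def ramp_up(x, a, b):
    """Membership that grows linearly from 0 at a to 1 at b (and saturates)."""
    return 0.0 if x <= a else 1.0 if x >= b else (x - a) / (b - a)

def ramp_down(x, a, b):
    return 1.0 - ramp_up(x, a, b)

def fuzzy_scaling(workload_pct, resp_time_s):
    """Map (workload %, response time s) to a scaling increment via a small fuzzy rule base."""
    wl_high = ramp_up(workload_pct, 50, 90)
    wl_low = ramp_down(workload_pct, 30, 70)
    rt_bad = ramp_up(resp_time_s, 1.0, 2.5)
    rt_good = ramp_down(resp_time_s, 0.5, 1.5)
    # IF workload is high AND response time is bad THEN add two instances, etc.
    rules = [(min(wl_high, rt_bad), +2),
             (min(wl_high, rt_good), +1),
             (min(wl_low, rt_bad), +1),
             (min(wl_low, rt_good), -1)]
    firing = sum(strength for strength, _ in rules)
    if firing == 0:
        return 0
    # Weighted-average defuzzification of the rule outputs.
    return round(sum(strength * out for strength, out in rules) / firing)

print(fuzzy_scaling(workload_pct=85, resp_time_s=2.4))   # high load, degraded latency -> 2
```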
---
paper_title: Optimization of virtual resource management for cloud applications to cope with traffic burst
paper_content:
Being the latest computing paradigm, cloud computing has proliferated as many IT giants have started to deliver resources as services, freeing application providers from the burden of low-level implementation and system administration. Meanwhile, the era of information explosion brings certain challenges: some websites may encounter a sharply rising workload due to unexpected social concerns, which makes these websites unavailable or even causes them to fail to provide services in time. Currently, a post-action method based on human experience and system alarms is widely used to handle this scenario in industry, which has shortcomings such as reaction delay. In our paper, we solve this problem by deploying such websites on the cloud and using features of the cloud to tackle it. We present a framework of dynamic virtual resource management in clouds to cope with the traffic bursts that applications might encounter. The framework implements a whole workflow, from prediction of the sharply rising workload to a customized resource management module, which guarantees the high availability of web applications and the cost-effectiveness of the cloud service providers. Our experiments show the accuracy of our workload forecasting method by comparing it with other methods. The 1998 World Cup workload dataset used in our experiment reveals the applicability of our model in the specific scenario of traffic bursts. A simulation-based experiment also indicates that the proposed management framework detects changes in workload intensity that occur over time and allocates multiple virtualized IT resources accordingly to achieve high availability and cost-effectiveness targets. We present a framework of dynamic resource management to cope with traffic bursts. The prediction of traffic bursts is based on a Gompertz curve and moving average model. The VM scheduler involves VM provisioning, VM placement and VM recycling. High availability and cost-effectiveness are achieved by the proposed framework.
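As a small illustration of trend-based burst detection (a simplification of the Gompertz-plus-moving-average scheme described above, not the authors' model), the sketch below flags a traffic burst when a short-term moving average of the request rate rises a configurable factor above the long-term one, which would trigger pre-provisioning of VMs. Window lengths and the factor are assumptions.

```python
from collections import deque

class BurstDetector:
    """Flag a burst when the short-term average request rate outruns the long-term trend."""
    def __init__(self, short=5, long=60, factor=1.5):
        self.short = deque(maxlen=short)
        self.long = deque(maxlen=long)
        self.factor = factor

    def update(self, requests_per_s):
        self.short.append(requests_per_s)
        self.long.append(requests_per_s)
        short_avg = sum(self.short) / len(self.short)
        long_avg = sum(self.long) / len(self.long)
        return short_avg > self.factor * long_avg      # True -> pre-provision VMs now

detector = BurstDetector(short=3, long=10)
trace = [100, 110, 105, 120, 115, 400, 650, 900]       # request rate with a sudden spike
print([detector.update(r) for r in trace])             # burst flagged once the spike persists
```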
---
paper_title: SLA-Aware Virtual Resource Management for Cloud Infrastructures
paper_content:
Cloud platforms host several independent applications on a shared resource pool with the ability to allocate computing power to applications on a per-demand basis. The use of server virtualization techniques for such platforms provide great flexibility with the ability to consolidate several virtual machines on the same physical server, to resize a virtual machine capacity and to migrate virtual machine across physical servers. A key challenge for cloud providers is to automate the management of virtual servers while taking into account both high-level QoS requirements of hosted applications and resource management costs. This paper proposes an autonomic resource manager to control the virtualized environment which decouples the provisioning of resources from the dynamic placement of virtual machines. This manager aims to optimize a global utility function which integrates both the degree of SLA fulfillment and the operating costs. We resort to a Constraint Programming approach to formulate and solve the optimization problem. Results obtained through simulations validate our approach.
---
paper_title: Shares and utilities based power consolidation in virtualized server environments
paper_content:
Virtualization technologies like VMware and Xen provide features to specify the minimum and maximum amount of resources that can be allocated to a virtual machine (VM) and a shares based mechanism for the hypervisor to distribute spare resources among contending VMs. However much of the existing work on VM placement and power consolidation in data centers fails to take advantage of these features. One of our experiments on a real testbed shows that leveraging such features can improve the overall utility of the data center by 47% or even higher. Motivated by these, we present a novel suite of techniques for placement and power consolidation of VMs in data centers taking advantage of the min-max and shares features inherent in virtualization technologies. Our techniques provide a smooth mechanism for power-performance tradeoffs in modern data centers running heterogeneous applications, wherein the amount of resources allocated to a VM can be adjusted based on available resources, power costs, and application utilities. We evaluate our techniques on a range of large synthetic data center setups and a small real data center testbed comprising of VMware ESX servers. Our experiments confirm the end-to-end validity of our approach and demonstrate that our final candidate algorithm, PowerExpandMinMax, consistently yields the best overall utility across a broad spectrum of inputs - varying VM sizes and utilities, varying server capacities and varying power costs - thus providing a practical solution for administrators.
---
paper_title: Lightweight Resource Scaling for Cloud Applications
paper_content:
Elastic resource provisioning is a key feature of cloud computing, allowing users to scale resource allocation for their applications up or down at run-time. To date, most practical approaches to managing elasticity are based on allocating and de-allocating virtual machine (VM) instances to the application. This VM-level elasticity typically incurs both considerable overhead and extra costs, especially for applications with rapidly fluctuating demands. In this paper, we propose a lightweight approach to enable cost-effective elasticity for cloud applications. Our approach performs fine-grained scaling at the resource level itself (CPU, memory, I/O, etc.) in addition to VM-level scaling. We also present the design and implementation of an intelligent platform for lightweight resource management of cloud applications. We describe our algorithms for lightweight scaling and VM-level scaling and show their interaction. We then use an industry-standard benchmark to evaluate the effectiveness of our approach and compare its performance against traditional approaches.
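To make the two-level idea concrete (fine-grained resource scaling first, VM-level scaling only when the finer knob is exhausted), here is a minimal sketch; the thresholds, step size and capacity limits are illustrative assumptions rather than the paper's algorithm.

```python
def scale(cpu_util, vm_count, cpu_share, max_share=1.0, step=0.1,
          high=0.8, low=0.3):
    """Sketch of two-level elasticity: adjust the per-VM CPU share first
    (cheap, fast), and only add/remove a VM when the share is exhausted.
    Thresholds and step size are illustrative assumptions."""
    if cpu_util > high:
        if cpu_share + step <= max_share:
            cpu_share += step            # lightweight, resource-level scale-up
        else:
            vm_count += 1                # fall back to VM-level scale-up
            cpu_share = max_share / 2    # start the new configuration conservatively
    elif cpu_util < low:
        if cpu_share - step > 0:
            cpu_share -= step            # resource-level scale-down
        elif vm_count > 1:
            vm_count -= 1                # VM-level scale-down
    return vm_count, cpu_share

print(scale(cpu_util=0.9, vm_count=2, cpu_share=1.0))  # -> (3, 0.5)
```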
---
paper_title: 1000 Islands: Integrated Capacity and Workload Management for the Next Generation Data Center
paper_content:
Recent advances in hardware and software virtualization offer unprecedented management capabilities for the mapping of virtual resources to physical resources. It is highly desirable to further create a "service hosting abstraction" that allows application owners to focus on service level objectives (SLOs) for their applications. This calls for a resource management solution that achieves the SLOs for many applications in response to changing data center conditions and hides the complexity from both application owners and data center operators. In this paper, we describe an automated capacity and workload management system that integrates multiple resource controllers at three different scopes and time scales. Simulation and experimental results confirm that such an integrated solution ensures efficient and effective use of data center resources while reducing service level violations for high priority applications.
---
paper_title: Utility-driven workload management using nested control design
paper_content:
Virtualization and consolidation of IT resources have created a need for more effective workload management tools that dynamically control resource allocation to a hosted application to achieve quality of service (QoS) goals. These goals can in turn be driven by the utility of the service, typically based on the application's service level agreement (SLA) as well as the cost of the resources allocated. In this paper, we build on our earlier work on dynamic CPU allocation to applications on shared servers, and present a feedback control system consisting of two nested integral control loops for managing the QoS metric of the application along with the utilization of the allocated CPU resource. The control system was implemented on a lab testbed running an Apache Web server and using the 90th percentile of the response times as the QoS metric. Experiments using a synthetic workload based on an industry benchmark validated two important features of the nested control design. First, compared to a single loop for controlling response time only, the nested design is less sensitive to the bimodal behavior of the system, resulting in more robust performance. Second, compared to a single loop for controlling CPU utilization only, the new design provides a framework for dealing with the tradeoff between better QoS and lower cost of resources, therefore resulting in better overall utility of the service.
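A minimal sketch of the nested-loop pattern is given below: an outer integral loop maps the response-time error to a CPU-utilization set point, and an inner integral loop maps the utilization error to a CPU allocation. The gains, limits and initial values are invented for illustration and are not the controllers tuned in the paper.

```python
class IntegralController:
    """Plain integral (I) controller: output accumulates gain * error."""
    def __init__(self, gain, output, lo, hi):
        self.gain, self.output, self.lo, self.hi = gain, output, lo, hi

    def update(self, error):
        self.output = min(self.hi, max(self.lo, self.output + self.gain * error))
        return self.output

# Outer loop: response-time error -> target CPU utilization (illustrative gains).
outer = IntegralController(gain=0.05, output=0.7, lo=0.3, hi=0.9)
# Inner loop: utilization error -> CPU share allocated to the application.
inner = IntegralController(gain=0.5, output=0.5, lo=0.1, hi=1.0)

def control_step(resp_time_90th, resp_time_target, measured_util):
    util_target = outer.update(resp_time_target - resp_time_90th)  # slow response -> lower target util
    cpu_share = inner.update(measured_util - util_target)          # util above target -> more CPU
    return cpu_share

# One step: response time above target leads to a larger CPU share.
print(control_step(resp_time_90th=2.5, resp_time_target=2.0, measured_util=0.85))
```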
---
paper_title: VDC Planner: Dynamic migration-aware Virtual Data Center embedding for clouds
paper_content:
Cloud computing promises to provide computing resources to a large number of service applications in an on demand manner. Traditionally, cloud providers such as Amazon only provide guaranteed allocation for compute and storage resources, and fail to support bandwidth requirements and performance isolation among these applications. To address this limitation, recently, a number of proposals advocate providing both guaranteed server and network resources in the form of Virtual Data Centers (VDCs). This raises the problem of optimally allocating both servers and data center networks to multiple VDCs in order to maximize the total revenue, while minimizing the total energy consumption in the data center. However, despite recent studies on this problem, none of the existing solutions have considered the possibility of using VM migration to dynamically adjust the resource allocation, in order to meet the fluctuating resource demand of VDCs. In this paper, we propose VDC Planner, a migration-aware dynamic virtual data center embedding framework that aims at achieving high revenue while minimizing the total energy cost over-time. Our framework supports various usage scenarios, including VDC embedding, VDC scaling as well as dynamic VDC consolidation. Through experiments using realistic workload traces, we show our proposed approach achieves both higher revenue and lower average scheduling delay compared to existing migration-oblivious solutions.
---
paper_title: A Hierarchical Approach for the Resource Management of Very Large Cloud Platforms
paper_content:
Worldwide interest in the delivery of computing and storage capacity as a service continues to grow at a rapid pace. The complexities of such cloud computing centers require advanced resource management solutions that are capable of dynamically adapting the cloud platform while providing continuous service and performance guarantees. The goal of this paper is to devise resource allocation policies for virtualized cloud environments that satisfy performance and availability guarantees and minimize energy costs in very large cloud service centers. We present a scalable distributed hierarchical framework based on a mixed-integer nonlinear optimization of resource management acting at multiple timescales. Extensive experiments across a wide variety of configurations demonstrate the efficiency and effectiveness of our approach.
---
paper_title: A Model-free Learning Approach for Coordinated Configuration of Virtual Machines and Appliances
paper_content:
Cloud computing has a key requirement for resource configuration in a real-time manner. In such virtualized environments, both virtual machines (VMs) and hosted applications need to be configured on-the-fly to adapt to system dynamics. The interplay between the layers of VMs and applications further complicates the problem of cloud configuration. Independent tuning of each aspect may not lead to optimal system-wide performance. In this paper, we propose a framework, namely CoTuner, for coordinated configuration of VMs and resident applications. At the heart of the framework is a model-free hybrid reinforcement learning (RL) approach, which combines the advantages of Simplex and RL methods and is further enhanced by the use of system-knowledge-guided exploration policies. Experimental results on Xen-based virtualized environments with TPC-W and TPC-C benchmarks demonstrate that CoTuner is able to drive a virtual server system into an optimal or near-optimal configuration state dynamically, in response to changes in workload. It improves the system's throughput by more than 30% over independent tuning strategies. In comparison with coordinated tuning strategies based solely on the Simplex or basic RL algorithm, the hybrid RL algorithm gains 30% to 40% throughput improvement. Moreover, the algorithm is able to reduce SLA violations of the applications by more than 80%.
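CoTuner's hybrid Simplex/RL algorithm is more elaborate than can be shown here; as a rough, generic illustration of the reinforcement-learning ingredient only, the sketch below implements a plain tabular Q-learning update over an assumed discretized state (utilization and response-time buckets) and an assumed action set of VM/application knobs.

```python
import random
from collections import defaultdict

# Generic tabular Q-learning loop for configuration tuning -- a simplified
# stand-in for the hybrid RL approach described above, not CoTuner itself.
ACTIONS = ["vcpu+1", "vcpu-1", "mem+512", "mem-512", "noop"]   # assumed action set
Q = defaultdict(float)                                         # Q[(state, action)]
alpha, gamma, epsilon = 0.1, 0.9, 0.2                          # illustrative hyper-parameters

def choose_action(state):
    if random.random() < epsilon:                              # explore
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])           # exploit

def update(state, action, reward, next_state):
    """Standard Q-learning update; the reward could be throughput minus an SLA penalty."""
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])

# One illustrative step: state = discretized (cpu_util, response_time) bucket.
s = ("cpu_high", "rt_slow")
a = choose_action(s)
update(s, a, reward=-1.0, next_state=("cpu_mid", "rt_ok"))
```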
---
paper_title: Dynamic resource allocation with management objectives—Implementation for an OpenStack cloud
paper_content:
We report on design, implementation and evaluation of a resource management system that builds upon OpenStack, an open-source cloud platform for private and public clouds. Our implementation supports an Infrastructure-as-a-Service (IaaS) cloud and currently provides allocation for computational resources in support of both interactive and computationally intensive applications. The design supports an extensible set of management objectives between which the system can switch at runtime. We demonstrate through examples how management objectives related to load-balancing and energy efficiency can be mapped onto the controllers of the resource allocation subsystem, which attempts to achieve an activated management objective at all times. The design is extensible in the sense that additional objectives can be introduced by providing instantiations for generic functions in the controllers. Our implementation monitors the fulfillment of the relevant management metrics in real time. Testbed evaluation demonstrates the effectiveness of our approach in a dynamic environment. It further illustrates the trade-off between closely meeting a specific management objective and the associated cost of VM live-migration.
---
paper_title: Cloud-scale resource management: challenges and techniques
paper_content:
Managing resources at large scale while providing performance isolation and efficient use of underlying hardware is a key challenge for any cloud management software. Most virtual machine (VM) resource management systems like VMware DRS clusters, Microsoft PRO and Eucalyptus, do not currently scale to the number of hosts and VMs supported by cloud service providers. In addition to scale, other challenges include heterogeneity of systems, compatibility constraints between virtual machines and underlying hardware, islands of resources created due to storage and network connectivity and limited scale of storage resources. ::: ::: In this paper, we shed light on some of the key issues in building cloud-scale resource management systems, based on five years of research and shipping cluster resource management products. Furthermore, we discuss various techniques to provide large scale resource management, along with the pros and cons of each technique. We hope to motivate future research in this area to develop practical solutions to these issues.
---
paper_title: Towards energy-aware scheduling in data centers using machine learning
paper_content:
As energy-related costs have become a major economic factor for IT infrastructures and data centers, companies and the research community are being challenged to find better and more efficient power-aware resource management strategies. There is a growing interest in "Green" IT and there is still a big gap in this area to be covered. In order to obtain an energy-efficient data center, we propose a framework that provides an intelligent consolidation methodology using different techniques such as turning machines on/off, power-aware consolidation algorithms, and machine learning techniques to deal with uncertain information while maximizing performance. For the machine learning approach, we use models learned from previous system behaviors in order to predict power consumption levels, CPU loads, and SLA timings, and improve scheduling decisions. Our framework is vertical, because it considers everything from watt consumption to workload features, and cross-disciplinary, as it uses a wide variety of techniques. We evaluate these techniques with a framework that covers the whole control cycle of a real scenario, using a simulation with representative heterogeneous workloads, and we measure the quality of the results according to a set of metrics focused on our goals, alongside traditional policies. The results obtained indicate that our approach is close to the optimal placement and behaves better when the level of uncertainty increases.
---
paper_title: Mistral: Dynamically Managing Power, Performance, and Adaptation Cost in Cloud Infrastructures
paper_content:
Server consolidation based on virtualization is an important technique for improving power efficiency and resource utilization in cloud infrastructures. However, to ensure satisfactory performance on shared resources under changing application workloads, dynamic management of the resource pool via online adaptation is critical. The inherent tradeoffs between power and performance as well as between the cost of an adaptation and its benefits make such management challenging. In this paper, we present Mistral, a holistic controller framework that optimizes power consumption, performance benefits, and the transient costs incurred by various adaptations and the controller itself to maximize overall utility. Mistral can handle multiple distributed applications and large-scale infrastructures through a multi-level adaptation hierarchy and scalable optimization algorithm. We show that our approach outstrips other strategies that address the tradeoff between only two of the objectives (power, performance, and transient costs).
---
paper_title: CloudScale: elastic resource scaling for multi-tenant cloud systems
paper_content:
Elastic resource scaling lets cloud systems meet application service level objectives (SLOs) with minimum resource provisioning costs. In this paper, we present CloudScale, a system that automates fine-grained elastic resource scaling for multi-tenant cloud computing infrastructures. CloudScale employs online resource demand prediction and prediction error handling to achieve adaptive resource allocation without assuming any prior knowledge about the applications running inside the cloud. CloudScale can resolve scaling conflicts between applications using migration, and integrates dynamic CPU voltage/frequency scaling to achieve energy savings with minimal effect on application SLOs. We have implemented CloudScale on top of Xen and conducted extensive experiments using a set of CPU and memory intensive applications (RUBiS, Hadoop, IBM System S). The results show that CloudScale can achieve significantly higher SLO conformance than other alternatives with low resource and energy cost. CloudScale is non-intrusive and light-weight, and imposes negligible overhead.
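CloudScale's prediction and error-handling machinery is not reproduced here; the following sketch only illustrates the general pattern of forecasting short-term demand and padding the allocation by a high quantile of recent under-prediction errors. The naive moving-average forecast, the quantile and the sample numbers are assumptions.

```python
import statistics

def predict_with_padding(usage_history, recent_errors, quantile=0.9):
    """Sketch of prediction plus error handling: forecast next-interval demand
    (here, naively, with a short moving average) and pad it by a high quantile
    of recent under-prediction errors so that transient spikes are less likely
    to violate the SLO. All parameters are illustrative."""
    forecast = statistics.mean(usage_history[-5:])               # naive short-term forecast
    positive_errors = sorted(e for e in recent_errors if e > 0)  # past under-predictions
    if positive_errors:
        idx = min(len(positive_errors) - 1, int(quantile * len(positive_errors)))
        padding = positive_errors[idx]
    else:
        padding = 0.0
    return forecast + padding                                    # padded allocation target

print(predict_with_padding([0.5, 0.55, 0.6, 0.58, 0.62], [0.02, 0.1, -0.05, 0.07]))
```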
---
paper_title: Automated control for elastic storage
paper_content:
Elasticity - where systems acquire and release resources in response to dynamic workloads, while paying only for what they need - is a driving property of cloud computing. At the core of any elastic system is an automated controller. This paper addresses elastic control for multi-tier application services that allocate and release resources in discrete units, such as virtual server instances of predetermined sizes. It focuses on elastic control of the storage tier, in which adding or removing a storage node or "brick" requires rebalancing stored data across the nodes. The storage tier presents new challenges for elastic control: actuator delays (lag) due to rebalancing, interference with applications and sensor measurements, and the need to synchronize the multiple control elements, including rebalancing. We have designed and implemented a new controller for elastic storage systems to address these challenges. Using a popular distributed storage system - the Hadoop Distributed File System (HDFS) - under dynamic Web 2.0 workloads, we show how the controller adapts to workload changes to maintain performance objectives efficiently in a pay-as-you-go cloud computing environment.
---
paper_title: AROMA: automated resource allocation and configuration of mapreduce environment in the cloud
paper_content:
Distributed data processing framework MapReduce is increasingly deployed in Clouds to leverage the pay-per-usage cloud computing model. Popular Hadoop MapReduce environment expects that end users determine the type and amount of Cloud resources for reservation as well as the configuration of Hadoop parameters. However, such resource reservation and job provisioning decisions require in-depth knowledge of system internals and laborious but often ineffective parameter tuning. We propose and develop AROMA, a system that automates the allocation of heterogeneous Cloud resources and configuration of Hadoop parameters for achieving quality of service goals while minimizing the incurred cost. It addresses the significant challenge of provisioning ad-hoc jobs that have performance deadlines in Clouds through a novel two-phase machine learning and optimization framework. Its technical core is a support vector machine based performance model that enables the integration of various aspects of resource provisioning and auto-configuration of Hadoop jobs. It adapts to ad-hoc jobs by robustly matching their resource utilization signature with previously executed jobs and making provisioning decisions accordingly. We implement AROMA as an automated job provisioning system for Hadoop MapReduce hosted in virtualized HP ProLiant blade servers. Experimental results show AROMA's effectiveness in providing performance guarantee of diverse Hadoop benchmark jobs while minimizing the cost of Cloud resource usage.
---
paper_title: An adaptive hybrid elasticity controller for cloud infrastructures
paper_content:
Cloud elasticity is the ability of the cloud infrastructure to rapidly change the amount of resources allocated to a service in order to meet the actual varying demands on the service while enforcing SLAs. In this paper, we focus on horizontal elasticity, the ability of the infrastructure to add or remove virtual machines allocated to a service deployed in the cloud. We model a cloud service using queuing theory. Using that model we build two adaptive proactive controllers that estimate the future load on a service. We explore the different possible scenarios for deploying a proactive elasticity controller coupled with a reactive elasticity controller in the cloud. Using simulation with workload traces from the FIFA world-cup web servers, we show that a hybrid controller that incorporates a reactive controller for scale up coupled with our proactive controllers for scale down decisions reduces SLA violations by a factor of 2 to 10 compared to a regression based controller or a completely reactive controller.
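As a hedged illustration of coupling a queueing-style sizing rule with the reactive-up/proactive-down policy the abstract reports as most effective, consider the sketch below; the target utilization, the simple capacity model and the example rates are assumptions, not the paper's queueing model.

```python
import math

def required_vms(arrival_rate, service_rate_per_vm, target_util=0.7):
    """Queueing-flavoured sizing sketch: enough VMs to keep utilization
    rho = lambda / (n * mu) below a target. The target is an assumption."""
    return max(1, math.ceil(arrival_rate / (service_rate_per_vm * target_util)))

def hybrid_step(current_vms, current_rate, predicted_rate, mu):
    """Reactive for scale-up (react to the measured rate immediately),
    proactive for scale-down (only release VMs if the *predicted* rate
    also allows it)."""
    up_target = required_vms(current_rate, mu)
    if up_target > current_vms:
        return up_target                             # reactive scale-up
    down_target = required_vms(predicted_rate, mu)
    return min(current_vms, max(down_target, 1))     # cautious proactive scale-down

print(hybrid_step(current_vms=4, current_rate=40, predicted_rate=30, mu=20))  # -> 3
```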
---
paper_title: Agile dynamic provisioning of multi-tier Internet applications
paper_content:
Dynamic capacity provisioning is a useful technique for handling the multi-time-scale variations seen in Internet workloads. In this article, we propose a novel dynamic provisioning technique for multi-tier Internet applications that employs (1) a flexible queuing model to determine how much of the resources to allocate to each tier of the application, and (2) a combination of predictive and reactive methods that determine when to provision these resources, both at large and small time scales. We propose a novel data center architecture based on virtual machine monitors to reduce provisioning overheads. Our experiments on a forty-machine Xen/Linux-based hosting platform demonstrate the responsiveness of our technique in handling dynamic workloads. In one scenario where a flash crowd caused the workload of a three-tier application to double, our technique was able to double the application capacity within five minutes, thus maintaining response-time targets. Our technique also reduced the overhead of switching servers across applications from several minutes to less than a second, while meeting the performance targets of residual sessions.
---
paper_title: Power and performance management of virtualized computing environments via lookahead control
paper_content:
There is growing incentive to reduce the power consumed by large-scale data centers that host online services such as banking, retail commerce, and gaming. Virtualization is a promising approach to consolidating multiple online services onto a smaller number of computing resources. A virtualized server environment allows computing resources to be shared among multiple performance-isolated platforms called virtual machines. By dynamically provisioning virtual machines, consolidating the workload, and turning servers on and off as needed, data center operators can maintain the desired quality-of-service (QoS) while achieving higher server utilization and energy efficiency. We implement and validate a dynamic resource provisioning framework for virtualized server environments wherein the provisioning problem is posed as one of sequential optimization under uncertainty and solved using a lookahead control scheme. The proposed approach accounts for the switching costs incurred while provisioning virtual machines and explicitly encodes the corresponding risk in the optimization problem. Experiments using the Trade6 enterprise application show that a server cluster managed by the controller conserves, on average, 22% of the power required by a system without dynamic control while still maintaining QoS goals. Finally, we use trace-based simulations to analyze controller performance on server clusters larger than our testbed, and show how concepts from approximation theory can be used to further reduce the computational burden of controlling large systems.
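A toy receding-horizon version of lookahead control is sketched below: candidate server-count sequences over a short forecast horizon are enumerated and the plan minimizing power, SLA penalty and switching cost is chosen, with only the first step applied. The cost coefficients and the one-unit-of-load-per-server capacity model are illustrative assumptions; the paper's formulation of sequential optimization under uncertainty is considerably richer.

```python
from itertools import product

def plan_servers(current_n, forecast_load, max_servers=5,
                 power_per_server=1.0, sla_penalty=10.0, switch_cost=0.5):
    """Tiny receding-horizon sketch: enumerate server-count sequences over a
    short forecast horizon and pick the one minimising power + SLA penalty +
    switching cost. All cost coefficients are illustrative assumptions."""
    best_plan, best_cost = None, float("inf")
    for plan in product(range(1, max_servers + 1), repeat=len(forecast_load)):
        cost, prev = 0.0, current_n
        for n, load in zip(plan, forecast_load):
            cost += n * power_per_server                 # energy
            cost += sla_penalty * max(0.0, load - n)     # unmet demand
            cost += switch_cost * abs(n - prev)          # provisioning churn
            prev = n
        if cost < best_cost:
            best_plan, best_cost = plan, cost
    return best_plan[0]          # apply only the first step, then re-plan

print(plan_servers(current_n=2, forecast_load=[2.5, 3.2, 1.0]))
```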
---
paper_title: Energy-Aware Resource Allocation Heuristics for Efficient Management of Data Centers for Cloud Computing
paper_content:
Cloud computing offers utility-oriented IT services to users worldwide. Based on a pay-as-you-go model, it enables hosting of pervasive applications from consumer, scientific, and business domains. However, data centers hosting Cloud applications consume huge amounts of electrical energy, contributing to high operational costs and carbon footprints to the environment. Therefore, we need Green Cloud computing solutions that can not only minimize operational costs but also reduce the environmental impact. In this paper, we define an architectural framework and principles for energy-efficient Cloud computing. Based on this architecture, we present our vision, open research challenges, and resource provisioning and allocation algorithms for energy-efficient management of Cloud computing environments. The proposed energy-aware allocation heuristics provision data center resources to client applications in a way that improves energy efficiency of the data center, while delivering the negotiated Quality of Service (QoS). In particular, in this paper we conduct a survey of research in energy-efficient computing and propose: (a) architectural principles for energy-efficient management of Clouds; (b) energy-efficient resource allocation policies and scheduling algorithms considering QoS expectations and power usage characteristics of the devices; and (c) a number of open research challenges, addressing which can bring substantial benefits to both resource providers and consumers. We have validated our approach by conducting a performance evaluation study using the CloudSim toolkit. The results demonstrate that Cloud computing model has immense potential as it offers significant cost savings and demonstrates high potential for the improvement of energy efficiency under dynamic workload scenarios.
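One common heuristic in this line of work places each VM, largest first, on the host whose estimated power draw increases the least; the sketch below follows that spirit. The linear power model, its coefficients and the host capacities are illustrative assumptions rather than the exact policies evaluated in the paper.

```python
def least_power_increase_placement(vms, hosts):
    """Sketch of an energy-aware placement heuristic: place each VM (largest
    first) on the host whose estimated power draw increases the least.
    The power model and capacities are illustrative assumptions."""
    def power(util):                 # simple linear power model: idle + dynamic part
        return 175 + 75 * util       # watts; illustrative coefficients

    for vm in sorted(vms, key=lambda v: v["cpu"], reverse=True):
        best, best_delta = None, float("inf")
        for h in hosts:
            new_used = h["used"] + vm["cpu"]
            if new_used <= h["capacity"]:                # skip hosts without room
                delta = power(new_used / h["capacity"]) - power(h["used"] / h["capacity"])
                if delta < best_delta:
                    best, best_delta = h, delta
        if best is not None:
            best["used"] += vm["cpu"]
            best.setdefault("vms", []).append(vm["name"])
    return hosts

hosts = [{"capacity": 2000, "used": 0}, {"capacity": 1000, "used": 0}]
vms = [{"name": "vm1", "cpu": 900}, {"name": "vm2", "cpu": 300}]
print(least_power_increase_placement(vms, hosts))
```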
---
paper_title: Autonomic Management of Cloud Service Centers with Availability Guarantees
paper_content:
Modern cloud infrastructures live in an open world, characterized by continuous changes in the environment and in the requirements they have to meet. Continuous changes occur autonomously and unpredictably, and they are out of the control of the cloud provider. Therefore, advanced solutions that can dynamically adapt the cloud infrastructure, while providing continuous service and performance guarantees, have to be developed. A number of autonomic computing solutions have been developed such that resources are dynamically allocated among running applications on the basis of short-term demand estimates. However, only the performance and energy trade-off has been considered so far, with less emphasis on infrastructure dependability/availability, which has been demonstrated to be the weakest link in the chain for early cloud providers. The aim of this paper is to fill this literature gap by devising resource allocation policies for cloud virtualized environments that are able to identify performance and energy trade-offs while providing a priori availability guarantees for cloud end-users.
---
paper_title: Computational frameworks for the fast Fourier transform
paper_content:
Table of contents: 1. The Radix-2 Frameworks: Matrix Notation and Algorithms; The FFT Idea; The Cooley-Tukey Factorization; Weight and Butterfly Computations; Bit Reversal and Transposition; The Cooley-Tukey Framework; The Stockham Autosort Frameworks; The Pease Framework; Decimation in Frequency and Inverse FFTs. 2. General Radix Frameworks: The General Radix Ideas; Index Reversal and Transposition; Mixed-Radix Factorizations; Radix-4 and Radix-8 Frameworks; The Split-Radix Frameworks. 3. High Performance Frameworks: The Multiple DFT Problem; Matrix Transposition; The Large Single-Vector FFT Problem; Multi-Dimensional FFT Problem; Distributed Memory FFTs; Shared Memory FFTs. 4. Selected Topics: Prime Factor FFTs; Convolution; FFTs of Real Data; Cosine and Sine Transforms; Fast Poisson Solvers. Bibliography. Index.
---
paper_title: Q-clouds: managing performance interference effects for QoS-aware clouds
paper_content:
Cloud computing offers users the ability to access large pools of computational and storage resources on demand. Multiple commercial clouds already allow businesses to replace, or supplement, privately owned IT assets, alleviating them from the burden of managing and maintaining these facilities. However, there are issues that must be addressed before this vision of utility computing can be fully realized. In existing systems, customers are charged based upon the amount of resources used or reserved, but no guarantees are made regarding the application level performance or quality-of-service (QoS) that the given resources will provide. As cloud providers continue to utilize virtualization technologies in their systems, this can become problematic. In particular, the consolidation of multiple customer applications onto multicore servers introduces performance interference between collocated workloads, significantly impacting application QoS. To address this challenge, we advocate that the cloud should transparently provision additional resources as necessary to achieve the performance that customers would have realized if they were running in isolation. Accordingly, we have developed Q-Clouds, a QoS-aware control framework that tunes resource allocations to mitigate performance interference effects. Q-Clouds uses online feedback to build a multi-input multi-output (MIMO) model that captures performance interference interactions, and uses it to perform closed loop resource management. In addition, we utilize this functionality to allow applications to specify multiple levels of QoS as application Q-states. For such applications, Q-Clouds dynamically provisions underutilized resources to enable elevated QoS levels, thereby improving system efficiency. Experimental evaluations of our solution using benchmark applications illustrate the benefits: performance interference is mitigated completely when feasible, and system utilization is improved by up to 35% using Q-states.
---
paper_title: Automated control of multiple virtualized resources
paper_content:
Virtualized data centers enable sharing of resources among hosted applications. However, it is difficult to satisfy service-level objectives(SLOs) of applications on shared infrastructure, as application workloads and resource consumption patterns change over time. In this paper, we present AutoControl, a resource control system that automatically adapts to dynamic workload changes to achieve application SLOs. AutoControl is a combination of an online model estimator and a novel multi-input, multi-output (MIMO) resource controller. The model estimator captures the complex relationship between application performance and resource allocations, while the MIMO controller allocates the right amount of multiple virtualized resources to achieve application SLOs. Our experimental evaluation with RUBiS and TPC-W benchmarks along with production-trace-driven workloads indicates that AutoControl can detect and mitigate CPU and disk I/O bottlenecks that occur over time and across multiple nodes by allocating each resource accordingly. We also show that AutoControl can be used to provide service differentiation according to the application priorities during resource contention.
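As a much-simplified, single-application illustration of the estimate-then-allocate pattern, the sketch below fits a linear model from recent (allocation, performance) samples and rescales the last allocation toward a performance target; the linear model, uniform rescaling and bounds are assumptions, and the real AutoControl MIMO controller is substantially more sophisticated.

```python
import numpy as np

def estimate_and_allocate(history, perf_target, bounds=(0.1, 1.0)):
    """Sketch: fit perf ~= w_cpu*cpu + w_io*io from recent samples, then pick
    the smallest uniform scaling of the last allocation that the model predicts
    will reach the performance target. All modelling choices are illustrative."""
    X = np.array([(cpu, io) for cpu, io, _ in history])        # past allocations
    y = np.array([perf for _, _, perf in history])             # observed performance
    w, *_ = np.linalg.lstsq(X, y, rcond=None)                  # online model estimate

    last = np.array(history[-1][:2])
    predicted = float(w @ last)
    scale = perf_target / predicted if predicted > 0 else 1.0  # how much more is needed
    new_alloc = np.clip(last * scale, *bounds)                 # respect allocation bounds
    return tuple(new_alloc)

history = [(0.4, 0.3, 40.0), (0.5, 0.3, 45.0), (0.6, 0.4, 55.0)]  # (cpu, io, req/s)
print(estimate_and_allocate(history, perf_target=70.0))
```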
---
paper_title: Integrated and autonomic cloud resource scaling
paper_content:
A Cloud is a very dynamic environment where resources offered by a Cloud Service Provider (CSP), out of one or more Cloud Data Centers (DCs), are acquired or released by an enterprise (tenant) on demand and at any scale. Typically a tenant will use Cloud service interfaces to acquire or release resources directly. This process can be automated by a CSP by providing an auto-scaling capability, where a tenant sets policies indicating under what conditions resources should be auto-scaled. This is especially needed in a Cloud environment because of the huge scale at which a Cloud operates. Typical solutions are naive, causing spurious auto-scaling decisions. For example, they are based only on thresholding triggers, and the thresholding mechanisms themselves are not Cloud-ready. In a Cloud, resources from three separate domains, compute, storage and network, are acquired or released on demand, but in typical solutions resources from these three domains are not auto-scaled in an integrated fashion. Integrated auto-scaling prevents further spurious scaling and reduces the number of auto-scaling systems to be supported in a Cloud management system. In addition, network resources typically are not auto-scaled. In this paper we describe a Cloud resource auto-scaling system that addresses and overcomes the above limitations.
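A hedged sketch of a less naive, integrated thresholding trigger is shown below: a scale-out requires several consecutive breaches, and compute, storage and network triggers are evaluated together rather than by three independent auto-scalers. The thresholds and breach count are assumptions for illustration.

```python
def should_scale_out(samples, threshold=0.8, required_breaches=3):
    """Require several consecutive breaches before scaling, to filter out
    momentary spikes. Threshold and breach count are assumptions."""
    recent = samples[-required_breaches:]
    return len(recent) == required_breaches and all(s > threshold for s in recent)

def integrated_decision(cpu_util, storage_util, net_util):
    """Evaluate compute, storage and network triggers in one place instead of
    running three independent auto-scalers."""
    triggers = {
        "compute": should_scale_out(cpu_util),
        "storage": should_scale_out(storage_util),
        "network": should_scale_out(net_util),
    }
    return [domain for domain, fired in triggers.items() if fired]

cpu = [0.7, 0.85, 0.9, 0.88]
print(integrated_decision(cpu, [0.5, 0.6, 0.55, 0.5], [0.82, 0.85, 0.9, 0.95]))
# -> ['compute', 'network']
```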
---
paper_title: Autonomic resource provisioning for cloud-based software
paper_content:
Cloud elasticity provides a software system with the ability to maintain optimal user experience by automatically acquiring and releasing resources, while paying only for what has been consumed. The mechanism for automatically adding or removing resources on the fly is referred to as auto-scaling. The state of the practice with respect to auto-scaling involves specifying threshold-based rules to implement elasticity policies for cloud-based applications. However, there are several shortcomings to this approach. Firstly, the elasticity rules must be specified precisely by quantitative values, which requires deep knowledge and expertise. Furthermore, existing approaches do not explicitly deal with uncertainty in cloud-based software, where noise and unexpected events are common. This paper exploits fuzzy logic to enable qualitative specification of elasticity rules for cloud-based software. In addition, this paper discusses a control-theoretical approach using type-2 fuzzy logic systems to reason about elasticity under uncertainty. We conduct several experiments to demonstrate that cloud-based software enhanced with such an elasticity controller can robustly handle unexpected spikes in the workload and provide acceptable user experience. This translates into increased profit for the cloud application owner.
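To illustrate what a qualitative elasticity rule such as "IF the workload is high AND the response time is slow THEN add many VMs" might look like in code, here is a toy type-1 fuzzy sketch; the membership shapes, rule set and defuzzification are invented for the example, and the paper itself uses type-2 fuzzy logic systems.

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzy_scaling_delta(workload, response_time):
    """Toy fuzzy evaluation of qualitative elasticity rules.
    Membership shapes and rules are illustrative assumptions."""
    wl_high = tri(workload, 0.5, 1.0, 1.5)          # workload normalised to [0, 1]
    wl_low = tri(workload, -0.5, 0.0, 0.5)
    rt_slow = tri(response_time, 1.0, 2.0, 3.0)     # seconds
    rt_fast = tri(response_time, 0.0, 0.5, 1.0)

    rules = [
        (min(wl_high, rt_slow), +2),                # add many VMs
        (min(wl_high, rt_fast), +1),                # add one VM
        (min(wl_low, rt_fast), -1),                 # remove one VM
    ]
    total = sum(strength for strength, _ in rules)
    if total == 0:
        return 0
    # weighted-average defuzzification, rounded to a whole number of VMs
    return round(sum(strength * delta for strength, delta in rules) / total)

print(fuzzy_scaling_delta(workload=0.9, response_time=2.2))   # -> 2
```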
---
paper_title: Autonomous learning for efficient resource utilization of dynamic VM migration
paper_content:
Dynamic migration of virtual machines on a cluster of physical machines is designed to maximize resource utilization by balancing loads across the cluster. When the utilization of a physical machine is beyond a fixed threshold, the machine is deemed overloaded. A virtual machine is then selected within the overloaded physical machine for migration to a lightly loaded physical machine. Key to such threshold-based VM migration is to determine when to move which VM to what physical machine, since wrong or inadequate decisions can cause unnecessary migrations that would adversely affect the overall performance. We present in this paper a learning framework that autonomously finds and adjusts thresholds at runtime for different computing requirements. Central to our approach is the previous history of migrations and their effects before and after each migration in terms of standard deviation of utilization. We set up an experimental environment that consists of extensive real world benchmarking problems and a cluster of 16 physical machines each of which has on average eight virtual machines. We demonstrate through experimental results that our approach autonomously finds thresholds close to the optimal ones for different computing scenarios and that such varying thresholds yield an optimal number of VM migrations for maximizing resource utilization.
---
paper_title: Dynamically Weighted Load Evaluation Method Based on Self-adaptive Threshold in Cloud Computing
paper_content:
Cloud resources and their loads possess dynamic characteristics. Current research methods utilize certain physical indicators and fixed thresholds to evaluate cloud resources, which cannot meet the dynamic needs of cloud resources or accurately reflect their states. To address this challenge, this paper proposes a Self-adaptive threshold based Dynamically Weighted load evaluation Method (termed SDWM). It evaluates the load state of a resource through a dynamically weighted evaluation method. First, the work proposes some dynamic evaluation indicators in order to evaluate the resource state more accurately. Second, SDWM divides the resource load into three states, Overload, Normal and Idle, using the self-adaptive threshold. It then migrates overloaded resources to balance the load, and releases idle resources whose idle time exceeds a threshold to save energy, which can effectively improve system utilization. Finally, SDWM leverages an energy evaluation model to describe energy quantitatively using the migration amount of the resource request. The parameters of the energy model were obtained from a linear regression model fitted to the actual experimental environment. Experimental results show that SDWM is superior to other methods in energy conservation, task response time, and resource utilization, with improvements of 31.5%, 50%, and 50.8%, respectively. These results demonstrate the positive effect of the dynamic self-adaptive threshold. More specifically, SDWM shows great adaptability when resources dynamically join or exit.
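A minimal sketch of the weighted, self-adaptive idea is given below: several normalized indicators are combined with weights into a load score, and the Overload/Normal/Idle thresholds adapt to the recent score distribution. The indicator set, weights and the mean-plus-k-standard-deviations rule are assumptions, not SDWM's definitions.

```python
import statistics

def load_score(indicators, weights):
    """Weighted combination of several normalised load indicators
    (e.g. CPU, memory, bandwidth). Indicators and weights are illustrative."""
    return sum(weights[k] * indicators[k] for k in weights)

def classify(score, score_history, k=1.0):
    """Self-adaptive three-way classification: thresholds follow the recent
    score distribution (mean +/- k standard deviations) instead of being fixed."""
    mean = statistics.mean(score_history)
    std = statistics.pstdev(score_history)
    if score > mean + k * std:
        return "Overload"
    if score < mean - k * std:
        return "Idle"
    return "Normal"

weights = {"cpu": 0.5, "mem": 0.3, "net": 0.2}
history = [0.40, 0.45, 0.42, 0.48, 0.44, 0.46]
now = load_score({"cpu": 0.9, "mem": 0.7, "net": 0.6}, weights)
print(now, classify(now, history))   # high combined score -> "Overload"
```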
---
paper_title: Adaptive resource provisioning for read intensive multi-tier applications in the cloud
paper_content:
A Service-Level Agreement (SLA) provides surety for specific quality attributes to the consumers of services. However, current SLAs offered by cloud infrastructure providers do not address response time, which, from the user's point of view, is the most important quality attribute for Web applications. Satisfying a maximum average response time guarantee for Web applications is difficult for two main reasons: first, traffic patterns are highly dynamic and difficult to predict accurately; second, the complex nature of multi-tier Web applications increases the difficulty of identifying bottlenecks and resolving them automatically. This paper proposes a methodology and presents a working prototype system for automatic detection and resolution of bottlenecks in a multi-tier Web application hosted on a cloud in order to satisfy specific maximum response time requirements. It also proposes a method for identifying and retracting over-provisioned resources in multi-tier cloud-hosted Web applications. We demonstrate the feasibility of the approach in an experimental evaluation with a testbed EUCALYPTUS-based cloud and a synthetic workload. Automatic bottleneck detection and resolution under dynamic resource management has the potential to enable cloud infrastructure providers to provide SLAs for Web applications that guarantee specific response time requirements while minimizing resource utilization.
---
paper_title: Autonomic Workload and Resources Management of Cloud Computing Services
paper_content:
The power consumption of data centers and cloud systems has increased almost three times between 2007 and 2012. Over-provisioning techniques are typically used to meet peak workloads. In this paper we present an autonomic power and performance management method for cloud systems in order to dynamically match the application requirements with "just-enough" system resources at runtime, leading to significant power reduction while meeting the quality of service requirements of the cloud applications. Our solution offers the following capabilities: 1) real-time monitoring of the cloud resources and the workload behavior running on virtual machines (VMs), 2) determining the current operating point of both the workloads and the VMs running these workloads, 3) characterizing workload behavior and predicting the next operating point for the VMs, 4) dynamically managing the VM resources (scaling up and down the number of cores, CPU frequency, and memory amount) at run time, and 5) assigning available cloud resources in a way that guarantees optimal power consumption without sacrificing the QoS requirements of cloud workloads. We validate the performance of our approach using the RUBiS benchmark, an auction model emulating eBay transactions that generates a wide range of workloads (such as browsing and bidding with different numbers of clients). Our experimental results show that our approach can reduce power consumption by up to 87% when compared to a static resource allocation strategy, 72% compared to an adaptive frequency scaling strategy, and 66% compared to a similar multi-resource management strategy.
---
paper_title: Towards an adaptive human-centric computing resource management framework based on resource prediction and multi-objective genetic algorithm
paper_content:
The complexity, scale and dynamics of data sources in human-centric computing bring great challenges to maintainers. How to reduce manual intervention in large-scale human-centric computing, such as cloud computing resource management, so that the system can manage itself automatically according to configured strategies remains an open problem. To address this problem, a resource management framework based on resource prediction and multi-objective genetic algorithm resource allocation (RPMGA-RMF) is proposed. It searches for the optimal load cluster as a training sample based on load similarity. A neural network (NN) algorithm is used to predict the resource load. Meanwhile, the model also builds virtual machine migration requests in accordance with the predicted load values. A multi-objective genetic algorithm (GA) based on a hybrid group encoding algorithm is introduced for virtual machine (VM) resource management, so as to provide an optimal VM migration strategy and thus achieve adaptive, optimized configuration management of resources. Experimental results based on the CloudSim platform show that RPMGA-RMF can decrease the number of VM migrations while simultaneously reducing the number of active physical nodes. System energy consumption can be reduced and load balancing can be achieved as well.
---
paper_title: Divide the Task, Multiply the Outcome: Cooperative VM Consolidation
paper_content:
Efficient resource utilization is one of the main concerns of cloud providers, as it has a direct impact on energy costs and thus their revenue. Virtual machine (VM) consolidation is one of the common techniques used by infrastructure providers to efficiently utilize their resources. However, when it comes to large-scale infrastructures, consolidation decisions become computationally complex, since VMs are multi-dimensional entities with changing demand and unknown lifetime, and users often overestimate their actual demand. These uncertainties urge the system to take consolidation decisions continuously in a real-time manner. In this work, we investigate a decentralized approach for VM consolidation using Peer to Peer (P2P) principles. We investigate the opportunities offered by P2P systems, as scalable and robust management structures, to address VM consolidation concerns. We present a P2P consolidation protocol, considering the dimensionality of resources and the dynamicity of the environment. The protocol benefits from concurrency and decentralization of control, and it uses a dimension-aware decision function for efficient consolidation. We evaluate the protocol through simulation of 100,000 physical machines and 200,000 VM requests. Results demonstrate the potential and advantages of using a P2P structure to make resource management decisions in large-scale data centers. They show that the P2P approach is feasible and scalable and produces resource utilization of 75% when the consolidation aim is 90%.
---
paper_title: Delivering Energy Proportionality with Non Energy-Proportional Systems - Optimizing the Ensemble
paper_content:
With power having become a critical issue in the operation of data centers today, there has been an increased push towards the vision of "energy-proportional computing", in which no power is used by idle systems, very low power is used by lightly loaded systems, and proportionately higher power at higher loads. Unfortunately, given the state of the art of today's hardware, designing individual servers that exhibit this property remains an open challenge. However, even in the absence of redesigned hardware, we demonstrate how optimization-based techniques can be used to build systems with off-the-shelf hardware that, when viewed at the aggregate level, approximate the behavior of energy-proportional systems. This paper explores the viability and tradeoffs of optimization-based approaches using two different case studies. First, we show how different power-saving mechanisms can be combined to deliver an aggregate system that is proportional in its use of server power. Second, we show early results on delivering a proportional cooling system for these servers. When compared to the power consumed at 100% utilization, results from our testbed show that optimization-based systems can reduce the power consumed at 0% utilization to 15% for server power and 32% for cooling power.
---
paper_title: Joint admission control and resource allocation in virtualized servers
paper_content:
In service oriented architectures, Quality of Service (QoS) is a key issue. Service requestors evaluate QoS at run time to address their service invocation to the most suitable provider. Thus, QoS has a direct impact on the providers' revenues. However, QoS requirements are difficult to satisfy because of the high variability of Internet workloads. This paper presents a self-managing technique that jointly addresses the resource allocation and admission control optimization problems in virtualized servers. Resource allocation and admission control represent key components of an autonomic infrastructure and are responsible for the fulfillment of service level agreements. Our solution is designed taking into account the provider's revenues, the cost of resource utilization, and customers' QoS requirements, specified in terms of the response time of individual requests. The effectiveness of our joint resource allocation and admission control solution, compared to top performing state-of-the-art techniques, is evaluated using synthetic as well as realistic workloads, for a number of different scenarios of interest. Results show that our solution can satisfy QoS constraints while still yielding a significant gain in terms of profits for the provider, especially under high workload conditions, if compared to the alternative methods. Moreover, it is robust to service time variance, resource usage cost, and workload mispredictions.
---
paper_title: Automated control of multiple virtualized resources
paper_content:
Virtualized data centers enable sharing of resources among hosted applications. However, it is difficult to satisfy service-level objectives(SLOs) of applications on shared infrastructure, as application workloads and resource consumption patterns change over time. In this paper, we present AutoControl, a resource control system that automatically adapts to dynamic workload changes to achieve application SLOs. AutoControl is a combination of an online model estimator and a novel multi-input, multi-output (MIMO) resource controller. The model estimator captures the complex relationship between application performance and resource allocations, while the MIMO controller allocates the right amount of multiple virtualized resources to achieve application SLOs. Our experimental evaluation with RUBiS and TPC-W benchmarks along with production-trace-driven workloads indicates that AutoControl can detect and mitigate CPU and disk I/O bottlenecks that occur over time and across multiple nodes by allocating each resource accordingly. We also show that AutoControl can be used to provide service differentiation according to the application priorities during resource contention.
---
paper_title: SLA-Aware Virtual Resource Management for Cloud Infrastructures
paper_content:
Cloud platforms host several independent applications on a shared resource pool with the ability to allocate computing power to applications on a per-demand basis. The use of server virtualization techniques for such platforms provide great flexibility with the ability to consolidate several virtual machines on the same physical server, to resize a virtual machine capacity and to migrate virtual machine across physical servers. A key challenge for cloud providers is to automate the management of virtual servers while taking into account both high-level QoS requirements of hosted applications and resource management costs. This paper proposes an autonomic resource manager to control the virtualized environment which decouples the provisioning of resources from the dynamic placement of virtual machines. This manager aims to optimize a global utility function which integrates both the degree of SLA fulfillment and the operating costs. We resort to a Constraint Programming approach to formulate and solve the optimization problem. Results obtained through simulations validate our approach.
---
paper_title: Autonomic Resource Allocation for Cloud Data Centers: A Peer to Peer Approach
paper_content:
We address the problem of resource management for large-scale cloud data centers. We propose a Peer to Peer (P2P) resource management framework composed of a number of agents, overlaid as a scale-free network. The structural properties of the overlay, along with the division of management responsibilities among the agents, enable the management framework to scale in terms of both the number of physical servers and incoming Virtual Machine (VM) requests, while remaining computationally feasible. Although our framework is intended for different cloud management functionalities, e.g., admission control or fault tolerance, we focus on the problem of resource allocation in clouds. We evaluate our approach by simulating a data center with 2500 servers, striving to allocate resources to 20000 incoming VM placement requests. The simulation results indicate that by maintaining efficient request propagation, we can achieve promising levels of performance and scalability when dealing with large numbers of servers and placement requests.
---
paper_title: 1000 Islands: Integrated Capacity and Workload Management for the Next Generation Data Center
paper_content:
Recent advances in hardware and software virtualization offer unprecedented management capabilities for the mapping of virtual resources to physical resources. It is highly desirable to further create a "service hosting abstraction" that allows application owners to focus on service level objectives (SLOs) for their applications. This calls for a resource management solution that achieves the SLOs for many applications in response to changing data center conditions and hides the complexity from both application owners and data center operators. In this paper, we describe an automated capacity and workload management system that integrates multiple resource controllers at three different scopes and time scales. Simulation and experimental results confirm that such an integrated solution ensures efficient and effective use of data center resources while reducing service level violations for high priority applications.
---
paper_title: A Hierarchical Approach for the Resource Management of Very Large Cloud Platforms
paper_content:
Worldwide interest in the delivery of computing and storage capacity as a service continues to grow at a rapid pace. The complexities of such cloud computing centers require advanced resource management solutions that are capable of dynamically adapting the cloud platform while providing continuous service and performance guarantees. The goal of this paper is to devise resource allocation policies for virtualized cloud environments that satisfy performance and availability guarantees and minimize energy costs in very large cloud service centers. We present a scalable distributed hierarchical framework based on a mixed-integer nonlinear optimization of resource management acting at multiple timescales. Extensive experiments across a wide variety of configurations demonstrate the efficiency and effectiveness of our approach.
---
paper_title: A Model-free Learning Approach for Coordinated Configuration of Virtual Machines and Appliances
paper_content:
Cloud computing has a key requirement for resource configuration in a real-time manner. In such virtualized environments, both virtual machines (VMs) and hosted applications need to be configured on-the-fly to adapt to system dynamics. The interplay between the layers of VMs and applications further complicates the problem of cloud configuration, and independent tuning of each aspect may not lead to optimal system-wide performance. In this paper, we propose a framework, namely CoTuner, for coordinated configuration of VMs and resident applications. At the heart of the framework is a model-free hybrid reinforcement learning (RL) approach, which combines the advantages of Simplex and RL methods and is further enhanced by the use of system-knowledge-guided exploration policies. Experimental results on Xen-based virtualized environments with TPC-W and TPC-C benchmarks demonstrate that CoTuner is able to drive a virtual server system into an optimal or near-optimal configuration state dynamically, in response to workload changes. It improves system throughput by more than 30% over independent tuning strategies. In comparison with coordinated tuning strategies based solely on the Simplex or basic RL algorithm, the hybrid RL algorithm gains a 30% to 40% throughput improvement. Moreover, the algorithm is able to reduce SLA violations of the applications by more than 80%.
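CoTuner's hybrid Simplex-plus-RL algorithm and its guided exploration policies cannot be reconstructed from the abstract alone; the fragment below is a generic tabular Q-learning sketch of the RL half only. The action names are invented, and the reward is assumed to combine throughput with SLA penalties.

```python
import random
from collections import defaultdict

class ConfigTuner:
    """Minimal tabular Q-learning sketch for VM/application configuration
    tuning (illustrative only; not CoTuner's actual hybrid algorithm)."""

    ACTIONS = ["cpu+", "cpu-", "mem+", "mem-", "noop"]   # hypothetical knobs

    def __init__(self, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.q = defaultdict(float)          # (state, action) -> estimated value
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def choose(self, state):
        if random.random() < self.epsilon:   # epsilon-greedy exploration
            return random.choice(self.ACTIONS)
        return max(self.ACTIONS, key=lambda a: self.q[(state, a)])

    def update(self, state, action, reward, next_state):
        """Standard Q-learning update; `reward` could be throughput minus an
        SLA-violation penalty measured after applying the chosen action."""
        best_next = max(self.q[(next_state, a)] for a in self.ACTIONS)
        td_target = reward + self.gamma * best_next
        self.q[(state, action)] += self.alpha * (td_target - self.q[(state, action)])
```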
---
paper_title: Dynamic resource allocation with management objectives—Implementation for an OpenStack cloud
paper_content:
We report on design, implementation and evaluation of a resource management system that builds upon OpenStack, an open-source cloud platform for private and public clouds. Our implementation supports an Infrastructure-as-a-Service (IaaS) cloud and currently provides allocation for computational resources in support of both interactive and computationally intensive applications. The design supports an extensible set of management objectives between which the system can switch at runtime. We demonstrate through examples how management objectives related to load-balancing and energy efficiency can be mapped onto the controllers of the resource allocation subsystem, which attempts to achieve an activated management objective at all times. The design is extensible in the sense that additional objectives can be introduced by providing instantiations for generic functions in the controllers. Our implementation monitors the fulfillment of the relevant management metrics in real time. Testbed evaluation demonstrates the effectiveness of our approach in a dynamic environment. It further illustrates the trade-off between closely meeting a specific management objective and the associated cost of VM live-migration.
---
paper_title: Cloud-scale resource management: challenges and techniques
paper_content:
Managing resources at large scale while providing performance isolation and efficient use of underlying hardware is a key challenge for any cloud management software. Most virtual machine (VM) resource management systems like VMware DRS clusters, Microsoft PRO and Eucalyptus, do not currently scale to the number of hosts and VMs supported by cloud service providers. In addition to scale, other challenges include heterogeneity of systems, compatibility constraints between virtual machines and underlying hardware, islands of resources created due to storage and network connectivity and limited scale of storage resources. ::: ::: In this paper, we shed light on some of the key issues in building cloud-scale resource management systems, based on five years of research and shipping cluster resource management products. Furthermore, we discuss various techniques to provide large scale resource management, along with the pros and cons of each technique. We hope to motivate future research in this area to develop practical solutions to these issues.
---
paper_title: Heterogeneity and dynamicity of clouds at scale: Google trace analysis
paper_content:
To better understand the challenges in developing effective cloud-based resource schedulers, we analyze the first publicly available trace data from a sizable multi-purpose cluster. The most notable workload characteristic is heterogeneity: in resource types (e.g., cores:RAM per machine) and their usage (e.g., duration and resources needed). Such heterogeneity reduces the effectiveness of traditional slot- and core-based scheduling. Furthermore, some tasks are constrained as to the kind of machine types they can use, increasing the complexity of resource assignment and complicating task migration. The workload is also highly dynamic, varying over time and most workload features, and is driven by many short jobs that demand quick scheduling decisions. While few simplifying assumptions apply, we find that many longer-running jobs have relatively stable resource utilizations, which can help adaptive resource schedulers.
---
paper_title: Towards an adaptive human-centric computing resource management framework based on resource prediction and multi-objective genetic algorithm
paper_content:
The complexity, scale and dynamics of data sources in human-centric computing bring great challenges to maintainers. An open problem is how to reduce manual intervention in large-scale human-centric computing, such as cloud computing resource management, so that the system can manage itself automatically according to configuration strategies. To address this problem, a resource management framework based on resource prediction and multi-objective genetic algorithm resource allocation (RPMGA-RMF) is proposed. It searches for the optimal load cluster as a training sample based on load similarity, and a neural network (NN) algorithm is used to predict the resource load. The model then builds virtual machine migration requests in accordance with the predicted load values. A multi-objective genetic algorithm (GA) based on a hybrid group encoding scheme is introduced for virtual machine (VM) resource management, providing an optimal VM migration strategy and thus achieving adaptive, optimized configuration management of resources. Experimental results on the CloudSim platform show that RPMGA-RMF can decrease the number of VM migrations while simultaneously reducing the number of active physical nodes; system energy consumption is reduced and load balancing is achieved as well.
---
paper_title: AROMA: automated resource allocation and configuration of mapreduce environment in the cloud
paper_content:
Distributed data processing framework MapReduce is increasingly deployed in Clouds to leverage the pay-per-usage cloud computing model. Popular Hadoop MapReduce environment expects that end users determine the type and amount of Cloud resources for reservation as well as the configuration of Hadoop parameters. However, such resource reservation and job provisioning decisions require in-depth knowledge of system internals and laborious but often ineffective parameter tuning. We propose and develop AROMA, a system that automates the allocation of heterogeneous Cloud resources and configuration of Hadoop parameters for achieving quality of service goals while minimizing the incurred cost. It addresses the significant challenge of provisioning ad-hoc jobs that have performance deadlines in Clouds through a novel two-phase machine learning and optimization framework. Its technical core is a support vector machine based performance model that enables the integration of various aspects of resource provisioning and auto-configuration of Hadoop jobs. It adapts to ad-hoc jobs by robustly matching their resource utilization signature with previously executed jobs and making provisioning decisions accordingly. We implement AROMA as an automated job provisioning system for Hadoop MapReduce hosted in virtualized HP ProLiant blade servers. Experimental results show AROMA's effectiveness in providing performance guarantee of diverse Hadoop benchmark jobs while minimizing the cost of Cloud resource usage.
---
paper_title: An adaptive hybrid elasticity controller for cloud infrastructures
paper_content:
Cloud elasticity is the ability of the cloud infrastructure to rapidly change the amount of resources allocated to a service in order to meet the actual varying demands on the service while enforcing SLAs. In this paper, we focus on horizontal elasticity, the ability of the infrastructure to add or remove virtual machines allocated to a service deployed in the cloud. We model a cloud service using queuing theory. Using that model we build two adaptive proactive controllers that estimate the future load on a service. We explore the different possible scenarios for deploying a proactive elasticity controller coupled with a reactive elasticity controller in the cloud. Using simulation with workload traces from the FIFA world-cup web servers, we show that a hybrid controller that incorporates a reactive controller for scale up coupled with our proactive controllers for scale down decisions reduces SLA violations by a factor of 2 to 10 compared to a regression based controller or a completely reactive controller.
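The paper builds its proactive controllers on a queueing model of the service; the sketch below only illustrates the reactive-scale-up / proactive-scale-down combination that the abstract reports as most effective, using a deliberately simplified M/M/n-style sizing rule rather than the paper's actual estimators. The target utilization and parameter names are assumptions.

```python
import math

def servers_needed(arrival_rate, service_rate, target_util=0.7):
    """Minimum number of VMs keeping utilisation below target_util,
    using a simple M/M/n-style sizing rule (a simplification of the
    queueing model described above)."""
    return max(1, math.ceil(arrival_rate / (service_rate * target_util)))

def hybrid_controller(current_n, measured_rate, predicted_rate, service_rate):
    """Reactive scale-up, proactive scale-down: react immediately to observed
    overload, but release capacity only when the forecast agrees."""
    reactive_n  = servers_needed(measured_rate,  service_rate)   # current load
    proactive_n = servers_needed(predicted_rate, service_rate)   # forecast load
    if reactive_n > current_n:
        return reactive_n            # scale up now
    if proactive_n < current_n:
        return proactive_n           # scale down based on the prediction
    return current_n                 # no change
```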
---
paper_title: Agile dynamic provisioning of multi-tier Internet applications
paper_content:
Dynamic capacity provisioning is a useful technique for handling the multi-time-scale variations seen in Internet workloads. In this article, we propose a novel dynamic provisioning technique for multi-tier Internet applications that employs (1) a flexible queuing model to determine how much of the resources to allocate to each tier of the application, and (2) a combination of predictive and reactive methods that determine when to provision these resources, both at large and small time scales. We propose a novel data center architecture based on virtual machine monitors to reduce provisioning overheads. Our experiments on a forty-machine Xen/Linux-based hosting platform demonstrate the responsiveness of our technique in handling dynamic workloads. In one scenario where a flash crowd caused the workload of a three-tier application to double, our technique was able to double the application capacity within five minutes, thus maintaining response-time targets. Our technique also reduced the overhead of switching servers across applications from several minutes to less than a second, while meeting the performance targets of residual sessions.
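As an illustration of how a queueing model can translate per-tier request rates into server counts, here is a minimal sketch that bounds each server's mean response time with an M/M/1 formula. The paper's flexible queueing model and its predictive/reactive triggering are richer than this; the tier descriptions below are invented.

```python
import math

def servers_per_tier(arrival_rate, service_rate, resp_target):
    """Servers needed so an M/M/1 server's mean response time
    1/(mu - lambda/n) stays within resp_target (a simplification
    of the flexible queueing model described above)."""
    if service_rate <= 1.0 / resp_target:
        raise ValueError("target unattainable even on an idle server")
    return max(1, math.ceil(arrival_rate / (service_rate - 1.0 / resp_target)))

def provision(tiers, incoming_rate, resp_targets):
    """Per-tier allocation for a multi-tier application; each tier may see a
    different effective request rate (e.g. due to caching or fan-out).
    `tiers` maps a tier name to (rate_factor, per-server service rate)."""
    return {
        name: servers_per_tier(incoming_rate * factor, mu, resp_targets[name])
        for name, (factor, mu) in tiers.items()
    }

# Example (hypothetical numbers): web, app and database tiers.
plan = provision(
    {"web": (1.0, 100.0), "app": (0.8, 60.0), "db": (0.3, 40.0)},
    incoming_rate=500.0,
    resp_targets={"web": 0.05, "app": 0.1, "db": 0.2},
)
```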
---
paper_title: An Adaptive Policy to Minimize Energy and SLA Violations of Parallel Jobs on the Cloud
paper_content:
Energy consumption for Cloud providers and data centers is a major problem. Dynamic Power Management is a common solution to this problem, switching off and on idle servers as needed. However, failing to predict the impact of switching costs may adversely affect energy and/or SLA violations. This paper contributes a policy that adaptively decides when to switch servers on and off under a workload of parallel jobs. Its objective is to minimize both the energy consumption and the number of SLA violations. Experimental results using Cloud Sim show that our proactive policy strikes a good balance between consumed energy and the number of SLA violations and compares favorably with other policies from the literature.
---
paper_title: Joint admission control and resource allocation in virtualized servers
paper_content:
In service oriented architectures, Quality of Service (QoS) is a key issue. Service requestors evaluate QoS at run time to address their service invocation to the most suitable provider. Thus, QoS has a direct impact on the providers' revenues. However, QoS requirements are difficult to satisfy because of the high variability of Internet workloads. This paper presents a self-managing technique that jointly addresses the resource allocation and admission control optimization problems in virtualized servers. Resource allocation and admission control represent key components of an autonomic infrastructure and are responsible for the fulfillment of service level agreements. Our solution is designed taking into account the provider's revenues, the cost of resource utilization, and customers' QoS requirements, specified in terms of the response time of individual requests. The effectiveness of our joint resource allocation and admission control solution, compared to top performing state-of-the-art techniques, is evaluated using synthetic as well as realistic workloads, for a number of different scenarios of interest. Results show that our solution can satisfy QoS constraints while still yielding a significant gain in terms of profits for the provider, especially under high workload conditions, if compared to the alternative methods. Moreover, it is robust to service time variance, resource usage cost, and workload mispredictions.
---
paper_title: Automated control for elastic n-tier workloads based on empirical modeling
paper_content:
Elastic n-tier applications have non-stationary workloads that require adaptive control of resources allocated to them. This presents not only an opportunity in pay-as-you-use clouds, but also a challenge to dynamically allocate virtual machines appropriately. Previous approaches based on control theory, queuing networks, and machine learning work well for some situations, but each model has its own limitations due to inaccuracies in performance prediction. In this paper we propose a multi-model controller, which integrates adaptation decisions from several models, choosing the best. The focus of our work is an empirical model, based on detailed measurement data from previous application runs. The main advantage of the empirical model is that it returns high quality performance predictions based on measured data. For new application scenarios, we use other models or heuristics as a starting point, and all performance data are continuously incorporated into the empirical model's knowledge base. Using a prototype implementation of the multi-model controller, a cloud testbed, and an n-tier benchmark (RUBBoS), we evaluated and validated the advantages of the empirical model. For example, measured data show that it is more effective to add two nodes as a group, one for each tier, when two tiers approach saturation simultaneously.
---
paper_title: Automated control of multiple virtualized resources
paper_content:
Virtualized data centers enable sharing of resources among hosted applications. However, it is difficult to satisfy service-level objectives(SLOs) of applications on shared infrastructure, as application workloads and resource consumption patterns change over time. In this paper, we present AutoControl, a resource control system that automatically adapts to dynamic workload changes to achieve application SLOs. AutoControl is a combination of an online model estimator and a novel multi-input, multi-output (MIMO) resource controller. The model estimator captures the complex relationship between application performance and resource allocations, while the MIMO controller allocates the right amount of multiple virtualized resources to achieve application SLOs. Our experimental evaluation with RUBiS and TPC-W benchmarks along with production-trace-driven workloads indicates that AutoControl can detect and mitigate CPU and disk I/O bottlenecks that occur over time and across multiple nodes by allocating each resource accordingly. We also show that AutoControl can be used to provide service differentiation according to the application priorities during resource contention.
---
paper_title: Elastic Virtual Machine for Fine-Grained Cloud Resource Provisioning
paper_content:
Elasticity is one of the distinguishing characteristics associated with the emergence of Cloud computing. It enables cloud resources to auto-scale to cope with workload demand. Multi-instance horizontal scaling is the common scalability architecture in the Cloud; however, its current implementation is coarse-grained: since it considers the Virtual Machine (VM) as the scaling unit, it implies additional scaling-out overhead and limits applicability to specific applications. To overcome these limitations, we propose Elastic VM as a fine-grained vertical scaling architecture. Our results show that the Elastic VM architecture consumes fewer resources, mitigates Service Level Objective (SLO) violations, and avoids scaling-up overhead. Furthermore, it scales a broader range of applications, including databases.
---
paper_title: Efficient Autoscaling in the Cloud Using Predictive Models for Workload Forecasting
paper_content:
Large-scale component-based enterprise applications that leverage Cloud resources expect Quality of Service (QoS) guarantees in accordance with service level agreements between the customer and service providers. In the context of Cloud computing, auto-scaling mechanisms hold the promise of assuring QoS properties to the applications while simultaneously making efficient use of resources and keeping operational costs low for the service providers. Despite its perceived advantages, realizing the full potential of auto-scaling is hard due to multiple challenges stemming from the need to precisely estimate resource usage in the face of significant variability in client workload patterns. This paper makes three contributions to overcome the general lack of effective techniques for workload forecasting and optimal resource allocation. First, it discusses the challenges involved in auto-scaling in the cloud. Second, it develops a model-predictive algorithm for workload forecasting that is used for resource auto-scaling. Finally, empirical results are provided that demonstrate that resources can be allocated and deallocated by our algorithm in a way that satisfies application QoS while keeping operational costs low.
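The paper's model-predictive forecasting algorithm is not given in the abstract; the class below is a minimal stand-in that extrapolates a linear trend over a sliding window and sizes the VM pool for the predicted demand plus headroom. The window length, horizon, headroom factor and per-VM capacity figure are all assumptions.

```python
import math
from collections import deque

class PredictiveAutoscaler:
    """Sketch of forecast-driven autoscaling (not the paper's algorithm)."""

    def __init__(self, window=12, horizon=3, capacity_per_vm=100.0, headroom=1.2):
        self.history = deque(maxlen=window)   # recent request-rate samples
        self.horizon = horizon                # look-ahead in sampling intervals
        self.capacity_per_vm = capacity_per_vm
        self.headroom = headroom              # safety margin on the forecast

    def observe(self, request_rate):
        self.history.append(float(request_rate))

    def forecast(self):
        h = list(self.history)
        if len(h) < 2:
            return h[-1] if h else 0.0
        trend = (h[-1] - h[0]) / (len(h) - 1)            # average slope
        return max(0.0, h[-1] + trend * self.horizon)    # linear extrapolation

    def target_vms(self):
        predicted = self.forecast() * self.headroom
        return max(1, math.ceil(predicted / self.capacity_per_vm))
```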
---
paper_title: Optimization of virtual resource management for cloud applications to cope with traffic burst
paper_content:
As the latest computing paradigm, cloud computing has proliferated as many IT giants have started to deliver resources as services, freeing application providers from the burden of low-level implementation and system administration. Meanwhile, the era of information explosion brings certain challenges: some websites may encounter a sharply rising workload due to unexpected social events, which can make them unavailable or unable to provide services in time. Currently, a post-action method based on human experience and system alarms is widely used in industry to handle this scenario, with shortcomings such as reaction delay. In this paper, we address this problem by deploying such websites on the cloud and using cloud features to tackle it. We present a framework of dynamic virtual resource management in clouds to cope with the traffic bursts that applications might encounter. The framework implements a whole workflow, from prediction of the sharply rising workload to a customized resource management module that guarantees the high availability of web applications and cost-effectiveness for the cloud service providers. Our experiments show the accuracy of our workload forecasting method by comparing it with other methods. The 1998 World Cup workload dataset used in our experiments reveals the applicability of our model in traffic-burst scenarios. Also, a simulation-based experiment indicates that the proposed management framework detects changes in workload intensity that occur over time and allocates multiple virtualized IT resources accordingly to achieve high-availability and cost-effectiveness targets. In summary: we present a framework of dynamic resource management to cope with traffic bursts; the prediction of traffic bursts is based on a Gompertz curve and a moving average model; the VM scheduler involves VM provisioning, VM placement and VM recycling; and high availability and cost-effectiveness are achieved by the proposed framework.
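The abstract states that burst prediction is based on a Gompertz curve and a moving average model; the snippet below sketches only the curve-fitting part, assuming SciPy is available and that the ramp-up phase has already been detected. The initial parameter guesses are ad hoc and not from the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def gompertz(t, a, b, c):
    """Gompertz growth curve N(t) = a * exp(-b * exp(-c * t))."""
    return a * np.exp(-b * np.exp(-c * t))

def forecast_burst(times, workload, horizon):
    """Fit a Gompertz curve to the observed ramp-up and extrapolate
    `horizon` steps ahead (illustrative sketch of curve-based burst
    prediction; convergence depends on the ad hoc initial guess p0)."""
    times = np.asarray(times, dtype=float)
    workload = np.asarray(workload, dtype=float)
    p0 = (workload.max() * 2.0, 5.0, 0.1)
    params, _ = curve_fit(gompertz, times, workload, p0=p0, maxfev=10000)
    future = np.arange(times[-1] + 1, times[-1] + 1 + horizon)
    return gompertz(future, *params)
```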
---
paper_title: SLA-Aware Virtual Resource Management for Cloud Infrastructures
paper_content:
Cloud platforms host several independent applications on a shared resource pool with the ability to allocate computing power to applications on a per-demand basis. The use of server virtualization techniques for such platforms provide great flexibility with the ability to consolidate several virtual machines on the same physical server, to resize a virtual machine capacity and to migrate virtual machine across physical servers. A key challenge for cloud providers is to automate the management of virtual servers while taking into account both high-level QoS requirements of hosted applications and resource management costs. This paper proposes an autonomic resource manager to control the virtualized environment which decouples the provisioning of resources from the dynamic placement of virtual machines. This manager aims to optimize a global utility function which integrates both the degree of SLA fulfillment and the operating costs. We resort to a Constraint Programming approach to formulate and solve the optimization problem. Results obtained through simulations validate our approach.
---
paper_title: A Systematic Review of Service Level Management in the Cloud
paper_content:
Cloud computing make it possible to flexibly procure, scale, and release computational resources on demand in response to workload changes. Stakeholders in business and academia are increasingly exploring cloud deployment options for their critical applications. One open problem is that service level agreements (SLAs) in the cloud ecosystem are yet to mature to a state where critical applications can be reliably deployed in clouds. This article systematically surveys the landscape of SLA-based cloud research to understand the state of the art and identify open problems. The survey is particularly aimed at the resource allocation phase of the SLA life cycle while highlighting implications on other phases. Results indicate that (i) minimal number of SLA parameters are accounted for in most studies; (ii) heuristics, policies, and optimisation are the most commonly used techniques for resource allocation; and (iii) the monitor-analysis-plan-execute (MAPE) architecture style is predominant in autonomic cloud systems. The results contribute to the fundamentals of engineering cloud SLA and their autonomic management, motivating further research and industrial-oriented solutions.
---
paper_title: Adaptive resource provisioning for read intensive multi-tier applications in the cloud
paper_content:
A Service-Level Agreement (SLA) provides surety for specific quality attributes to the consumers of services. However, current SLAs offered by cloud infrastructure providers do not address response time, which, from the user's point of view, is the most important quality attribute for Web applications. Satisfying a maximum average response time guarantee for Web applications is difficult for two main reasons: first, traffic patterns are highly dynamic and difficult to predict accurately; second, the complex nature of multi-tier Web applications increases the difficulty of identifying bottlenecks and resolving them automatically. This paper proposes a methodology and presents a working prototype system for automatic detection and resolution of bottlenecks in a multi-tier Web application hosted on a cloud in order to satisfy specific maximum response time requirements. It also proposes a method for identifying and retracting over-provisioned resources in multi-tier cloud-hosted Web applications. We demonstrate the feasibility of the approach in an experimental evaluation with a testbed EUCALYPTUS-based cloud and a synthetic workload. Automatic bottleneck detection and resolution under dynamic resource management has the potential to enable cloud infrastructure providers to provide SLAs for Web applications that guarantee specific response time requirements while minimizing resource utilization.
---
paper_title: VDC Planner: Dynamic migration-aware Virtual Data Center embedding for clouds
paper_content:
Cloud computing promises to provide computing resources to a large number of service applications in an on-demand manner. Traditionally, cloud providers such as Amazon only provide guaranteed allocation for compute and storage resources, and fail to support bandwidth requirements and performance isolation among these applications. To address this limitation, a number of recent proposals advocate providing both guaranteed server and network resources in the form of Virtual Data Centers (VDCs). This raises the problem of optimally allocating both servers and data center networks to multiple VDCs in order to maximize the total revenue, while minimizing the total energy consumption in the data center. However, despite recent studies on this problem, none of the existing solutions have considered the possibility of using VM migration to dynamically adjust the resource allocation in order to meet the fluctuating resource demand of VDCs. In this paper, we propose VDC Planner, a migration-aware dynamic virtual data center embedding framework that aims at achieving high revenue while minimizing the total energy cost over time. Our framework supports various usage scenarios, including VDC embedding, VDC scaling, as well as dynamic VDC consolidation. Through experiments using realistic workload traces, we show that our proposed approach achieves both higher revenue and lower average scheduling delay compared to existing migration-oblivious solutions.
---
paper_title: Autonomic Workload and Resources Management of Cloud Computing Services
paper_content:
The power consumption of data centers and cloud systems increased almost three times between 2007 and 2012. Over-provisioning techniques are typically used to meet peak workloads. In this paper we present an autonomic power and performance management method for cloud systems that dynamically matches application requirements with "just-enough" system resources at runtime, leading to significant power reduction while meeting the quality of service requirements of the cloud applications. Our solution offers the following capabilities: 1) real-time monitoring of cloud resources and of the workload behavior running on virtual machines (VMs), 2) determining the current operating point of both the workloads and the VMs running them, 3) characterizing workload behavior and predicting the next operating point for the VMs, 4) dynamically managing VM resources (scaling the number of cores, CPU frequency, and memory amount up and down) at run time, and 5) assigning available cloud resources so as to guarantee optimal power consumption without sacrificing the QoS requirements of cloud workloads. We validate the performance of our approach using the RUBiS benchmark, an auction model emulating eBay transactions that generates a wide range of workloads (such as browsing and bidding with different numbers of clients). Our experimental results show that our approach can reduce power consumption by up to 87% compared to a static resource allocation strategy, 72% compared to an adaptive frequency scaling strategy, and 66% compared to a similar multi-resource management strategy.
---
paper_title: A Model-free Learning Approach for Coordinated Configuration of Virtual Machines and Appliances
paper_content:
Cloud computing has a key requirement for resource configuration in a real-time manner. In such virtualized environments, both virtual machines (VMs) and hosted applications need to be configured on-the-fly to adapt to system dynamics. The interplay between the layers of VMs and applications further complicates the problem of cloud configuration, and independent tuning of each aspect may not lead to optimal system-wide performance. In this paper, we propose a framework, namely CoTuner, for coordinated configuration of VMs and resident applications. At the heart of the framework is a model-free hybrid reinforcement learning (RL) approach, which combines the advantages of Simplex and RL methods and is further enhanced by the use of system-knowledge-guided exploration policies. Experimental results on Xen-based virtualized environments with TPC-W and TPC-C benchmarks demonstrate that CoTuner is able to drive a virtual server system into an optimal or near-optimal configuration state dynamically, in response to workload changes. It improves system throughput by more than 30% over independent tuning strategies. In comparison with coordinated tuning strategies based solely on the Simplex or basic RL algorithm, the hybrid RL algorithm gains a 30% to 40% throughput improvement. Moreover, the algorithm is able to reduce SLA violations of the applications by more than 80%.
---
paper_title: Mistral: Dynamically Managing Power, Performance, and Adaptation Cost in Cloud Infrastructures
paper_content:
Server consolidation based on virtualization is an important technique for improving power efficiency and resource utilization in cloud infrastructures. However, to ensure satisfactory performance on shared resources under changing application workloads, dynamic management of the resource pool via online adaptation is critical. The inherent tradeoffs between power and performance as well as between the cost of an adaptation and its benefits make such management challenging. In this paper, we present Mistral, a holistic controller framework that optimizes power consumption, performance benefits, and the transient costs incurred by various adaptations and the controller itself to maximize overall utility. Mistral can handle multiple distributed applications and large-scale infrastructures through a multi-level adaptation hierarchy and scalable optimization algorithm. We show that our approach outstrips other strategies that address the tradeoff between only two of the objectives (power, performance, and transient costs).
---
paper_title: CloudScale: elastic resource scaling for multi-tenant cloud systems
paper_content:
Elastic resource scaling lets cloud systems meet application service level objectives (SLOs) with minimum resource provisioning costs. In this paper, we present CloudScale, a system that automates fine-grained elastic resource scaling for multi-tenant cloud computing infrastructures. CloudScale employs online resource demand prediction and prediction error handling to achieve adaptive resource allocation without assuming any prior knowledge about the applications running inside the cloud. CloudScale can resolve scaling conflicts between applications using migration, and integrates dynamic CPU voltage/frequency scaling to achieve energy savings with minimal effect on application SLOs. We have implemented CloudScale on top of Xen and conducted extensive experiments using a set of CPU and memory intensive applications (RUBiS, Hadoop, IBM System S). The results show that CloudScale can achieve significantly higher SLO conformance than other alternatives with low resource and energy cost. CloudScale is non-intrusive and light-weight, and imposes negligible overhead (
---
paper_title: AROMA: automated resource allocation and configuration of mapreduce environment in the cloud
paper_content:
Distributed data processing framework MapReduce is increasingly deployed in Clouds to leverage the pay-per-usage cloud computing model. Popular Hadoop MapReduce environment expects that end users determine the type and amount of Cloud resources for reservation as well as the configuration of Hadoop parameters. However, such resource reservation and job provisioning decisions require in-depth knowledge of system internals and laborious but often ineffective parameter tuning. We propose and develop AROMA, a system that automates the allocation of heterogeneous Cloud resources and configuration of Hadoop parameters for achieving quality of service goals while minimizing the incurred cost. It addresses the significant challenge of provisioning ad-hoc jobs that have performance deadlines in Clouds through a novel two-phase machine learning and optimization framework. Its technical core is a support vector machine based performance model that enables the integration of various aspects of resource provisioning and auto-configuration of Hadoop jobs. It adapts to ad-hoc jobs by robustly matching their resource utilization signature with previously executed jobs and making provisioning decisions accordingly. We implement AROMA as an automated job provisioning system for Hadoop MapReduce hosted in virtualized HP ProLiant blade servers. Experimental results show AROMA's effectiveness in providing performance guarantee of diverse Hadoop benchmark jobs while minimizing the cost of Cloud resource usage.
---
paper_title: Power and performance management of virtualized computing environments via lookahead control
paper_content:
There is growing incentive to reduce the power consumed by large-scale data centers that host online services such as banking, retail commerce, and gaming. Virtualization is a promising approach to consolidating multiple online services onto a smaller number of computing resources. A virtualized server environment allows computing resources to be shared among multiple performance-isolated platforms called virtual machines. By dynamically provisioning virtual machines, consolidating the workload, and turning servers on and off as needed, data center operators can maintain the desired quality-of-service (QoS) while achieving higher server utilization and energy efficiency. We implement and validate a dynamic resource provisioning framework for virtualized server environments wherein the provisioning problem is posed as one of sequential optimization under uncertainty and solved using a lookahead control scheme. The proposed approach accounts for the switching costs incurred while provisioning virtual machines and explicitly encodes the corresponding risk in the optimization problem. Experiments using the Trade6 enterprise application show that a server cluster managed by the controller conserves, on average, 22% of the power required by a system without dynamic control while still maintaining QoS goals. Finally, we use trace-based simulations to analyze controller performance on server clusters larger than our testbed, and show how concepts from approximation theory can be used to further reduce the computational burden of controlling large systems.
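As a toy illustration of limited-lookahead (receding-horizon) control over server provisioning, the function below enumerates all provisioning plans over a short forecast horizon and returns only the first decision. The real controller in the paper encodes risk explicitly and uses approximation techniques to keep this search tractable; the cost coefficients here are invented.

```python
from itertools import product

def lookahead_control(current_servers, demand_forecast, max_servers,
                      power_per_server=200.0, sla_penalty=50.0,
                      switch_cost=30.0, capacity_per_server=100.0):
    """Exhaustive limited-lookahead sketch (only practical for small horizons
    and server counts): pick the plan minimising power + switching cost +
    estimated SLA penalty, then apply just its first step."""
    best_cost, best_first = float("inf"), current_servers
    for plan in product(range(1, max_servers + 1), repeat=len(demand_forecast)):
        cost, prev = 0.0, current_servers
        for n, demand in zip(plan, demand_forecast):
            cost += n * power_per_server                      # energy proxy
            cost += abs(n - prev) * switch_cost               # provisioning cost
            shortfall = max(0.0, demand - n * capacity_per_server)
            cost += shortfall * sla_penalty                   # SLA-violation risk
            prev = n
        if cost < best_cost:
            best_cost, best_first = cost, plan[0]
    return best_first

# Example (hypothetical numbers): 3-step forecast, at most 6 servers.
next_n = lookahead_control(current_servers=2,
                           demand_forecast=[180.0, 320.0, 250.0],
                           max_servers=6)
```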
---
paper_title: Energy-Aware Resource Allocation Heuristics for Efficient Management of Data Centers for Cloud Computing
paper_content:
Cloud computing offers utility-oriented IT services to users worldwide. Based on a pay-as-you-go model, it enables hosting of pervasive applications from consumer, scientific, and business domains. However, data centers hosting Cloud applications consume huge amounts of electrical energy, contributing to high operational costs and carbon footprints to the environment. Therefore, we need Green Cloud computing solutions that can not only minimize operational costs but also reduce the environmental impact. In this paper, we define an architectural framework and principles for energy-efficient Cloud computing. Based on this architecture, we present our vision, open research challenges, and resource provisioning and allocation algorithms for energy-efficient management of Cloud computing environments. The proposed energy-aware allocation heuristics provision data center resources to client applications in a way that improves energy efficiency of the data center, while delivering the negotiated Quality of Service (QoS). In particular, in this paper we conduct a survey of research in energy-efficient computing and propose: (a) architectural principles for energy-efficient management of Clouds; (b) energy-efficient resource allocation policies and scheduling algorithms considering QoS expectations and power usage characteristics of the devices; and (c) a number of open research challenges, addressing which can bring substantial benefits to both resource providers and consumers. We have validated our approach by conducting a performance evaluation study using the CloudSim toolkit. The results demonstrate that Cloud computing model has immense potential as it offers significant cost savings and demonstrates high potential for the improvement of energy efficiency under dynamic workload scenarios.
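In the spirit of the energy-aware allocation policies described above, here is a simplified power-aware best-fit-decreasing placement sketch: VMs are placed, largest first, on the feasible host whose estimated power draw increases least. The linear power model and the host/VM data layout are assumptions, not the paper's exact algorithm.

```python
def power_model(utilisation, p_idle=70.0, p_max=250.0):
    """Simple linear power model: idle power plus a utilisation-proportional part."""
    return p_idle + (p_max - p_idle) * utilisation

def power_increase(host, vm_cpu):
    """Estimated extra power drawn by `host` if it accepts the VM."""
    before = power_model(host["used"] / host["capacity"])
    after = power_model((host["used"] + vm_cpu) / host["capacity"])
    return after - before

def power_aware_placement(vms, hosts):
    """Place VMs, largest first, on the feasible host with the smallest
    power increase; returns {vm_id: host_name or None}."""
    placement = {}
    for vm_id, vm_cpu in sorted(vms.items(), key=lambda kv: kv[1], reverse=True):
        candidates = [h for h in hosts if h["used"] + vm_cpu <= h["capacity"]]
        if not candidates:
            placement[vm_id] = None          # no feasible host
            continue
        best = min(candidates, key=lambda h: power_increase(h, vm_cpu))
        best["used"] += vm_cpu               # commit the placement
        placement[vm_id] = best["name"]
    return placement
```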
---
paper_title: Autonomic Management of Cloud Service Centers with Availability Guarantees
paper_content:
Modern cloud infrastructures live in an open world, characterized by continuous changes in the environment and in the requirements they have to meet. Continuous changes occur autonomously and unpredictably, and they are out of control of the cloud provider. Therefore, advanced solutions have to be developed able to dynamically adapt the cloud infrastructure, while providing continuous service and performance guarantees. A number of autonomic computing solutions have been developed such that resources are dynamically allocated among running applications on the basis of short-term demand estimates. However, only performance and energy trade-off have been considered so far with a lower emphasis on the infrastructure dependability/availability which has been demonstrated to be the weakest link in the chain for early cloud providers. The aim of this paper is to fill this literature gap devising resource allocation policies for cloud virtualized environments able to identify performance and energy trade-offs, providing a priori availability guarantees for cloud end-users.
---
paper_title: Joint admission control and resource allocation in virtualized servers
paper_content:
In service oriented architectures, Quality of Service (QoS) is a key issue. Service requestors evaluate QoS at run time to address their service invocation to the most suitable provider. Thus, QoS has a direct impact on the providers' revenues. However, QoS requirements are difficult to satisfy because of the high variability of Internet workloads. This paper presents a self-managing technique that jointly addresses the resource allocation and admission control optimization problems in virtualized servers. Resource allocation and admission control represent key components of an autonomic infrastructure and are responsible for the fulfillment of service level agreements. Our solution is designed taking into account the provider's revenues, the cost of resource utilization, and customers' QoS requirements, specified in terms of the response time of individual requests. The effectiveness of our joint resource allocation and admission control solution, compared to top performing state-of-the-art techniques, is evaluated using synthetic as well as realistic workloads, for a number of different scenarios of interest. Results show that our solution can satisfy QoS constraints while still yielding a significant gain in terms of profits for the provider, especially under high workload conditions, if compared to the alternative methods. Moreover, it is robust to service time variance, resource usage cost, and workload mispredictions.
---
paper_title: An adaptive framework for utility-based optimization of scientific applications in the cloud
paper_content:
Cloud computing plays an increasingly important role in realizing scientific applications by offering virtualized compute and storage infrastructures that can scale on demand. This paper presents a self-configuring adaptive framework optimizing resource utilization for scientific applications on top of Cloud technologies. The proposed approach relies on the concept of utility, i.e., measuring the usefulness, and leverages the well-established principle from autonomic computing, namely the MAPE-K loop, in order to adaptively configure scientific applications. Therein, the process of maximizing the utility of specific configurations takes into account the Cloud stack: the application layer, the execution environment layer, and the resource layer, which is supported by the defined Cloud stack configuration model. The proposed framework self-configures the layers by evaluating monitored resources, analyzing their state, and generating an execution plan on a per job basis. Evaluating configurations is based on historical data and a utility function that ranks them according to the costs incurred. The proposed adaptive framework has been integrated into the Vienna Cloud Environment (VCE) and the evaluation by means of a data-intensive application is presented herein.
---
paper_title: Shares and utilities based power consolidation in virtualized server environments
paper_content:
Virtualization technologies like VMware and Xen provide features to specify the minimum and maximum amount of resources that can be allocated to a virtual machine (VM) and a shares based mechanism for the hypervisor to distribute spare resources among contending VMs. However much of the existing work on VM placement and power consolidation in data centers fails to take advantage of these features. One of our experiments on a real testbed shows that leveraging such features can improve the overall utility of the data center by 47% or even higher. Motivated by these, we present a novel suite of techniques for placement and power consolidation of VMs in data centers taking advantage of the min-max and shares features inherent in virtualization technologies. Our techniques provide a smooth mechanism for power-performance tradeoffs in modern data centers running heterogeneous applications, wherein the amount of resources allocated to a VM can be adjusted based on available resources, power costs, and application utilities. We evaluate our techniques on a range of large synthetic data center setups and a small real data center testbed comprising of VMware ESX servers. Our experiments confirm the end-to-end validity of our approach and demonstrate that our final candidate algorithm, PowerExpandMinMax, consistently yields the best overall utility across a broad spectrum of inputs - varying VM sizes and utilities, varying server capacities and varying power costs - thus providing a practical solution for administrators.
---
paper_title: Lightweight Resource Scaling for Cloud Applications
paper_content:
Elastic resource provisioning is a key feature of cloud computing, allowing users to scale up or down resource allocation for their applications at run-time. To date, most practical approaches to managing elasticity are based on allocation/de-allocation of the virtual machine (VM) instances to the application. This VM-level elasticity typically incurs both considerable overhead and extra costs, especially for applications with rapidly fluctuating demands. In this paper, we propose a lightweight approach to enable cost-effective elasticity for cloud applications. Our approach operates fine-grained scaling at the resource level itself (CPUs, memory, I/O, etc) in addition to VM-level scaling. We also present the design and implementation of an intelligent platform for light-weight resource management of cloud applications. We describe our algorithms for light-weight scaling and VM-level scaling and show their interaction. We then use an industry standard benchmark to evaluate the effectiveness of our approach and compare its performance against traditional approaches.
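The two-level idea, exhausting cheap resource-level (vertical) adjustments before falling back to VM-level (horizontal) scaling, can be sketched as below. The `app` handle and its methods (set_cpu_cap, add_instance, ...) are hypothetical, and the thresholds are illustrative, not taken from the paper.

```python
def scale(app, cpu_util, mem_util, high=0.8, low=0.3, step=0.25):
    """Two-level scaling sketch: adjust per-VM resource caps first (cheap,
    fine-grained), and change the number of VM instances only when the
    fine-grained head-room is exhausted."""
    if cpu_util > high or mem_util > high:
        if app.cpu_cap < app.cpu_cap_max:                    # room to grow in place
            app.set_cpu_cap(min(app.cpu_cap_max, app.cpu_cap + step))
        else:
            app.add_instance()                               # VM-level scale-out
    elif cpu_util < low and mem_util < low:
        if app.cpu_cap > app.cpu_cap_min:                    # shrink in place first
            app.set_cpu_cap(max(app.cpu_cap_min, app.cpu_cap - step))
        elif app.instances > 1:
            app.remove_instance()                            # VM-level scale-in
```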
---
paper_title: A Taxonomy and Survey of Energy-Efficient Data Centers and Cloud Computing Systems
paper_content:
Abstract Traditionally, the development of computing systems has been focused on performance improvements driven by the demand of applications from consumer, scientific, and business domains. However, the ever-increasing energy consumption of computing systems has started to limit further performance growth due to overwhelming electricity bills and carbon dioxide footprints. Therefore, the goal of the computer system design has been shifted to power and energy efficiency. To identify open challenges in the area and facilitate future advancements, it is essential to synthesize and classify the research on power- and energy-efficient design conducted to date. In this study, we discuss causes and problems of high power/energy consumption, and present a taxonomy of energy-efficient design of computing systems covering the hardware, operating system, virtualization, and data center levels. We survey various key works in the area and map them onto our taxonomy to guide future design and development efforts. This chapter concludes with a discussion on advancements identified in energy-efficient computing and our vision for future research directions.
---
paper_title: Dynamic resource allocation with management objectives—Implementation for an OpenStack cloud
paper_content:
We report on design, implementation and evaluation of a resource management system that builds upon OpenStack, an open-source cloud platform for private and public clouds. Our implementation supports an Infrastructure-as-a-Service (IaaS) cloud and currently provides allocation for computational resources in support of both interactive and computationally intensive applications. The design supports an extensible set of management objectives between which the system can switch at runtime. We demonstrate through examples how management objectives related to load-balancing and energy efficiency can be mapped onto the controllers of the resource allocation subsystem, which attempts to achieve an activated management objective at all times. The design is extensible in the sense that additional objectives can be introduced by providing instantiations for generic functions in the controllers. Our implementation monitors the fulfillment of the relevant management metrics in real time. Testbed evaluation demonstrates the effectiveness of our approach in a dynamic environment. It further illustrates the trade-off between closely meeting a specific management objective and the associated cost of VM live-migration.
---
paper_title: Cost of Virtual Machine Live Migration in Clouds: A Performance Evaluation
paper_content:
Virtualization has become commonplace in modern data centers, often referred as "computing clouds". The capability of virtual machine live migration brings benefits such as improved performance, manageability and fault tolerance, while allowing workload movement with a short service downtime. However, service levels of applications are likely to be negatively affected during a live migration. For this reason, a better understanding of its effects on system performance is desirable. In this paper, we evaluate the effects of live migration of virtual machines on the performance of applications running inside Xen VMs. Results show that, in most cases, migration overhead is acceptable but cannot be disregarded, especially in systems where availability and responsiveness are governed by strict Service Level Agreements. Despite that, there is a high potential for live migration applicability in data centers serving modern Internet applications. Our results are based on a workload covering the domain of multi-tier Web 2.0 applications.
---
paper_title: Impact of DVFS on n-tier application performance
paper_content:
Dynamic Voltage and Frequency Scaling (DVFS) has been widely deployed and proven to reduce energy consumption at low CPU utilization levels; however, our measurements of the n-tier application benchmark (RUBBoS) performance showed significant performance degradation at high utilization levels, with response time several times higher and throughput loss of up to 20%, when DVFS is turned on. Using a combination of benchmark measurements and simulation, we found two kinds of problems: large response time fluctuations due to push-back wave queuing in n-tier systems and throughput loss due to rapidly alternating bottlenecks. These problems arise from anti-synchrony between DVFS adjustment period and workload burst cycles (similar cycle length but out of phase). Simulation results (confirmed by extensive measurements) show the anti-synchrony happens routinely for a wide range of configurations. We show that a workload-sensitive DVFS adaptive control mechanism can disrupt the anti-synchrony and reduce the performance impact of DVFS at high utilization levels to 25% or less of the original.
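One plausible, purely illustrative way to make a DVFS governor "workload-sensitive" in the sense described above is to shorten its adjustment period when recent utilisation is bursty, so the adjustment cycle cannot settle anti-phase with the workload's burst cycle. The constants below are arbitrary.

```python
import statistics

def dvfs_period(util_samples, base_period=0.5, min_period=0.05):
    """Sketch: the burstier the recent utilisation (higher coefficient of
    variation), the more often the frequency is re-evaluated, reducing the
    chance of anti-synchrony between the DVFS period and workload bursts."""
    if len(util_samples) < 2 or statistics.mean(util_samples) == 0:
        return base_period
    cv = statistics.pstdev(util_samples) / statistics.mean(util_samples)
    return max(min_period, base_period / (1.0 + 4.0 * cv))
```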
---
paper_title: Shares and utilities based power consolidation in virtualized server environments
paper_content:
Virtualization technologies like VMware and Xen provide features to specify the minimum and maximum amount of resources that can be allocated to a virtual machine (VM) and a shares based mechanism for the hypervisor to distribute spare resources among contending VMs. However much of the existing work on VM placement and power consolidation in data centers fails to take advantage of these features. One of our experiments on a real testbed shows that leveraging such features can improve the overall utility of the data center by 47% or even higher. Motivated by these, we present a novel suite of techniques for placement and power consolidation of VMs in data centers taking advantage of the min-max and shares features inherent in virtualization technologies. Our techniques provide a smooth mechanism for power-performance tradeoffs in modern data centers running heterogeneous applications, wherein the amount of resources allocated to a VM can be adjusted based on available resources, power costs, and application utilities. We evaluate our techniques on a range of large synthetic data center setups and a small real data center testbed comprising of VMware ESX servers. Our experiments confirm the end-to-end validity of our approach and demonstrate that our final candidate algorithm, PowerExpandMinMax, consistently yields the best overall utility across a broad spectrum of inputs - varying VM sizes and utilities, varying server capacities and varying power costs - thus providing a practical solution for administrators.
---
paper_title: Towards energy-aware scheduling in data centers using machine learning
paper_content:
As energy-related costs have become a major economical factor for IT infrastructures and data-centers, companies and the research community are being challenged to find better and more efficient power-aware resource management strategies. There is a growing interest in "Green" IT and there is still a big gap in this area to be covered. In order to obtain an energy-efficient data center, we propose a framework that provides an intelligent consolidation methodology using different techniques such as turning on/off machines, power-aware consolidation algorithms, and machine learning techniques to deal with uncertain information while maximizing performance. For the machine learning approach, we use models learned from previous system behaviors in order to predict power consumption levels, CPU loads, and SLA timings, and improve scheduling decisions. Our framework is vertical, because it considers from watt consumption to workload features, and cross-disciplinary, as it uses a wide variety of techniques. We evaluate these techniques with a framework that covers the whole control cycle of a real scenario, using a simulation with representative heterogeneous workloads, and we measure the quality of the results according to a set of metrics focused toward our goals, besides traditional policies. The results obtained indicate that our approach is close to the optimal placement and behaves better when the level of uncertainty increases.
---
paper_title: Towards an adaptive human-centric computing resource management framework based on resource prediction and multi-objective genetic algorithm
paper_content:
The complexity, scale and dynamics of data sources in human-centric computing bring great challenges to maintainers. An open problem is how to reduce manual intervention in large-scale human-centric computing, such as cloud resource management, so that the system can manage itself according to configuration strategies. To address this problem, a resource management framework based on resource prediction and multi-objective genetic algorithm resource allocation (RPMGA-RMF) was proposed. It searches for the optimal load cluster as a training sample based on load similarity, and a neural network (NN) algorithm is used to predict resource load. The model also builds virtual machine migration requests in accordance with the predicted load values. A multi-objective genetic algorithm (GA) based on a hybrid group encoding scheme is introduced for virtual machine (VM) resource management, providing an optimal VM migration strategy and thus adaptive, optimized configuration management of resources. Experimental results on the CloudSim platform show that RPMGA-RMF can decrease the number of VM migrations while simultaneously reducing the number of physical nodes in use, so that system energy consumption is reduced and load balancing is achieved.
---
paper_title: Mistral: Dynamically Managing Power, Performance, and Adaptation Cost in Cloud Infrastructures
paper_content:
Server consolidation based on virtualization is an important technique for improving power efficiency and resource utilization in cloud infrastructures. However, to ensure satisfactory performance on shared resources under changing application workloads, dynamic management of the resource pool via online adaptation is critical. The inherent tradeoffs between power and performance as well as between the cost of an adaptation and its benefits make such management challenging. In this paper, we present Mistral, a holistic controller framework that optimizes power consumption, performance benefits, and the transient costs incurred by various adaptations and the controller itself to maximize overall utility. Mistral can handle multiple distributed applications and large-scale infrastructures through a multi-level adaptation hierarchy and scalable optimization algorithm. We show that our approach outstrips other strategies that address the tradeoff between only two of the objectives (power, performance, and transient costs).
---
paper_title: CloudScale: elastic resource scaling for multi-tenant cloud systems
paper_content:
Elastic resource scaling lets cloud systems meet application service level objectives (SLOs) with minimum resource provisioning costs. In this paper, we present CloudScale, a system that automates fine-grained elastic resource scaling for multi-tenant cloud computing infrastructures. CloudScale employs online resource demand prediction and prediction error handling to achieve adaptive resource allocation without assuming any prior knowledge about the applications running inside the cloud. CloudScale can resolve scaling conflicts between applications using migration, and integrates dynamic CPU voltage/frequency scaling to achieve energy savings with minimal effect on application SLOs. We have implemented CloudScale on top of Xen and conducted extensive experiments using a set of CPU and memory intensive applications (RUBiS, Hadoop, IBM System S). The results show that CloudScale can achieve significantly higher SLO conformance than other alternatives with low resource and energy cost. CloudScale is non-intrusive and light-weight, and imposes negligible overhead.
---
paper_title: Energy-Aware Resource Allocation Heuristics for Efficient Management of Data Centers for Cloud Computing
paper_content:
Cloud computing offers utility-oriented IT services to users worldwide. Based on a pay-as-you-go model, it enables hosting of pervasive applications from consumer, scientific, and business domains. However, data centers hosting Cloud applications consume huge amounts of electrical energy, contributing to high operational costs and carbon footprints to the environment. Therefore, we need Green Cloud computing solutions that can not only minimize operational costs but also reduce the environmental impact. In this paper, we define an architectural framework and principles for energy-efficient Cloud computing. Based on this architecture, we present our vision, open research challenges, and resource provisioning and allocation algorithms for energy-efficient management of Cloud computing environments. The proposed energy-aware allocation heuristics provision data center resources to client applications in a way that improves energy efficiency of the data center, while delivering the negotiated Quality of Service (QoS). In particular, in this paper we conduct a survey of research in energy-efficient computing and propose: (a) architectural principles for energy-efficient management of Clouds; (b) energy-efficient resource allocation policies and scheduling algorithms considering QoS expectations and power usage characteristics of the devices; and (c) a number of open research challenges, addressing which can bring substantial benefits to both resource providers and consumers. We have validated our approach by conducting a performance evaluation study using the CloudSim toolkit. The results demonstrate that Cloud computing model has immense potential as it offers significant cost savings and demonstrates high potential for the improvement of energy efficiency under dynamic workload scenarios.
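As an illustrative companion to the energy-aware allocation policies surveyed above, the following Python sketch (not the paper's algorithm) packs VMs onto as few active hosts as possible with a best-fit-decreasing heuristic; the VM demands and the host capacity are assumed values chosen only for the example.

    # Best-fit-decreasing VM placement: a hedged sketch of an energy-aware
    # consolidation heuristic. Demands and capacity are normalized, assumed values.
    def best_fit_decreasing(vm_demands, host_capacity):
        hosts = []          # remaining capacity of each powered-on host
        placement = {}
        for vm, demand in sorted(vm_demands.items(), key=lambda kv: kv[1], reverse=True):
            # choose the active host whose remaining capacity fits most tightly
            candidates = [(rem, i) for i, rem in enumerate(hosts) if rem >= demand]
            if candidates:
                _, i = min(candidates)
            else:
                hosts.append(host_capacity)   # power on a new host
                i = len(hosts) - 1
            hosts[i] -= demand
            placement[vm] = i
        return placement, len(hosts)

    if __name__ == "__main__":
        vms = {"vm1": 0.5, "vm2": 0.3, "vm3": 0.7, "vm4": 0.2, "vm5": 0.4}
        mapping, active_hosts = best_fit_decreasing(vms, host_capacity=1.0)
        print(mapping, f"-> {active_hosts} active hosts")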
---
paper_title: Autonomic Management of Cloud Service Centers with Availability Guarantees
paper_content:
Modern cloud infrastructures live in an open world, characterized by continuous changes in the environment and in the requirements they have to meet. These changes occur autonomously and unpredictably, and they are outside the control of the cloud provider. Therefore, advanced solutions have to be developed that can dynamically adapt the cloud infrastructure while providing continuous service and performance guarantees. A number of autonomic computing solutions have been developed in which resources are dynamically allocated among running applications on the basis of short-term demand estimates. However, only the performance and energy trade-off has been considered so far, with less emphasis on infrastructure dependability/availability, which has been demonstrated to be the weakest link in the chain for early cloud providers. The aim of this paper is to fill this literature gap by devising resource allocation policies for virtualized cloud environments that identify performance and energy trade-offs while providing a priori availability guarantees for cloud end-users.
---
paper_title: Divide the Task, Multiply the Outcome: Cooperative VM Consolidation
paper_content:
Efficient resource utilization is one of the main concerns of cloud providers, as it has a direct impact on energy costs and thus their revenue. Virtual machine (VM) consolidation is one of the common techniques used by infrastructure providers to utilize their resources efficiently. However, in large-scale infrastructures, consolidation decisions become computationally complex, since VMs are multi-dimensional entities with changing demand and unknown lifetime, and users often overestimate their actual demand. These uncertainties require the system to take consolidation decisions continuously, in real time. In this work, we investigate a decentralized approach for VM consolidation using Peer to Peer (P2P) principles. We investigate the opportunities offered by P2P systems, as scalable and robust management structures, to address VM consolidation concerns. We present a P2P consolidation protocol that considers the dimensionality of resources and the dynamicity of the environment. The protocol benefits from concurrency and decentralization of control, and it uses a dimension-aware decision function for efficient consolidation. We evaluate the protocol through simulation of 100,000 physical machines and 200,000 VM requests. Results demonstrate the potential and advantages of using a P2P structure to make resource management decisions in large-scale data centers. They show that the P2P approach is feasible and scalable and produces resource utilization of 75% when the consolidation aim is 90%.
---
paper_title: Delivering Energy Proportionality with Non Energy-Proportional Systems - Optimizing the Ensemble
paper_content:
With power having become a critical issue in the operation of data centers today, there has been an increased push towards the vision of "energy-proportional computing", in which no power is used by idle systems, very low power is used by lightly loaded systems, and proportionately higher power at higher loads. Unfortunately, given the state of the art of today's hardware, designing individual servers that exhibit this property remains an open challenge. However, even in the absence of redesigned hardware, we demonstrate how optimization-based techniques can be used to build systems with off-the-shelf hardware that, when viewed at the aggregate level, approximate the behavior of energy-proportional systems. This paper explores the viability and tradeoffs of optimization-based approaches using two different case studies. First, we show how different power-saving mechanisms can be combined to deliver an aggregate system that is proportional in its use of server power. Second, we show early results on delivering a proportional cooling system for these servers. When compared to the power consumed at 100% utilization, results from our testbed show that optimization-based systems can reduce the power consumed at 0% utilization to 15% for server power and 32% for cooling power.
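To make the ensemble-level optimization idea concrete, here is a minimal sketch that chooses how many servers to keep active for a given aggregate load so that total power is minimized while capacity is respected. The linear power model and the idle/peak wattage and per-server capacity figures are assumptions for illustration, not measurements from the paper.

    import math

    IDLE_W, PEAK_W, CAPACITY = 150.0, 250.0, 100.0   # watts, watts, req/s (assumed)

    def server_power(utilization):
        # simple linear interpolation between idle and peak power (an assumption)
        return IDLE_W + (PEAK_W - IDLE_W) * utilization

    def ensemble_power(load, n_active):
        util = min(1.0, load / (n_active * CAPACITY))
        return n_active * server_power(util)

    load = 240.0                                      # aggregate load in req/s
    candidates = range(math.ceil(load / CAPACITY), 11)
    best_n = min(candidates, key=lambda n: ensemble_power(load, n))
    print(f"keep {best_n} servers on ({ensemble_power(load, best_n):.0f} W)")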
---
paper_title: 1000 Islands: Integrated Capacity and Workload Management for the Next Generation Data Center
paper_content:
Recent advances in hardware and software virtualization offer unprecedented management capabilities for the mapping of virtual resources to physical resources. It is highly desirable to further create a "service hosting abstraction" that allows application owners to focus on service level objectives (SLOs) for their applications. This calls for a resource management solution that achieves the SLOs for many applications in response to changing data center conditions and hides the complexity from both application owners and data center operators. In this paper, we describe an automated capacity and workload management system that integrates multiple resource controllers at three different scopes and time scales. Simulation and experimental results confirm that such an integrated solution ensures efficient and effective use of data center resources while reducing service level violations for high priority applications.
---
paper_title: Dynamic resource allocation with management objectives—Implementation for an OpenStack cloud
paper_content:
We report on design, implementation and evaluation of a resource management system that builds upon OpenStack, an open-source cloud platform for private and public clouds. Our implementation supports an Infrastructure-as-a-Service (IaaS) cloud and currently provides allocation for computational resources in support of both interactive and computationally intensive applications. The design supports an extensible set of management objectives between which the system can switch at runtime. We demonstrate through examples how management objectives related to load-balancing and energy efficiency can be mapped onto the controllers of the resource allocation subsystem, which attempts to achieve an activated management objective at all times. The design is extensible in the sense that additional objectives can be introduced by providing instantiations for generic functions in the controllers. Our implementation monitors the fulfillment of the relevant management metrics in real time. Testbed evaluation demonstrates the effectiveness of our approach in a dynamic environment. It further illustrates the trade-off between closely meeting a specific management objective and the associated cost of VM live-migration.
---
paper_title: Optimizing Workload Category for Adaptive Workload Prediction in Service Clouds
paper_content:
Predicting the total workload is important for facilitating auto-scaling resource management in service cloud platforms. Currently, most prediction methods use a single model to predict workloads; however, they cannot achieve satisfactory prediction performance due to the varying workload patterns in service clouds. In this paper, we propose a novel prediction approach that categorizes workloads and assigns different prediction models according to the workload features. The key idea is to convert workload classification into a 0–1 programming problem. We formulate an optimization problem to maximize prediction precision and then present an optimization algorithm. We use real traces of typical online services to evaluate prediction accuracy. The experimental results indicate that optimizing the workload categorization is effective and that the proposed prediction method outperforms single-model approaches, especially in terms of the platform's cumulative absolute prediction error. Further, the uniformity of the prediction error is also improved.
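A toy rendering of the model-assignment idea above, cast as a brute-force 0-1 selection over a fabricated error matrix; the workload classes, candidate models, and error values are invented for illustration and are not taken from the paper.

    import itertools

    # err[i][j] = (made-up) prediction error of model j on workload class i
    err = [
        [0.30, 0.12, 0.25],   # class 0: bursty
        [0.10, 0.28, 0.22],   # class 1: periodic
        [0.18, 0.20, 0.08],   # class 2: steadily growing
    ]
    models = ["moving average", "linear regression", "exponential smoothing"]

    # pick one model per class so that the total error is minimal
    best = min(itertools.product(range(len(models)), repeat=len(err)),
               key=lambda choice: sum(err[i][m] for i, m in enumerate(choice)))
    for i, m in enumerate(best):
        print(f"class {i} -> {models[m]} (err {err[i][m]:.2f})")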
---
paper_title: An Approach for Characterizing Workloads in Google Cloud to Derive Realistic Resource Utilization Models
paper_content:
Analyzing behavioral patterns of workloads is critical to understanding Cloud computing environments. However, until now only a limited number of real-world Cloud data center trace logs have been available for analysis. This has led to a lack of methodologies to capture the diversity of patterns that exist in such datasets. This paper presents the first large-scale analysis of real-world Cloud data, using a recently released dataset that features traces from over 12,000 servers over the period of a month. Based on this analysis, we develop a novel approach for characterizing workloads that for the first time considers Cloud workload in the context of both user and task in order to derive a model to capture resource estimation and utilization patterns. The derived model assists in understanding the relationship between users and tasks within workload, and enables further work such as resource optimization, energy-efficiency improvements, and failure correlation. Additionally, it provides a mechanism to create patterns that randomly fluctuate based on realistic parameters. This is critical to emulating dynamic environments instead of statically replaying records in the trace log. Our approach is evaluated by contrasting the logged data against simulation experiments, and our results show that the derived model parameters correctly describe the operational environment within a 5% error margin, confirming the great variability of patterns that exist in Cloud computing.
---
paper_title: A cost-aware auto-scaling approach using the workload prediction in service clouds
paper_content:
Service clouds are distributed infrastructures which deploy communication services in clouds. Scalability is an important characteristic of service clouds: with it, a service cloud can offer on-demand computing power and storage capacity to different services. To achieve this scalability, we need to know when and how to scale the virtual resources assigned to different services. In this paper, a novel service cloud architecture is presented, and a linear regression model is used to predict the workload. Based on this predicted workload, an auto-scaling mechanism is proposed to scale virtual resources at different resource levels in service clouds. The auto-scaling mechanism combines real-time scaling and pre-scaling. Finally, experimental results are provided to demonstrate that our approach can satisfy the user Service Level Agreement (SLA) while keeping scaling costs low.
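A minimal sketch of the prediction-plus-scaling step described above, assuming a least-squares linear fit over a recent window of request rates and an assumed per-VM capacity and headroom factor; this is an illustration, not the authors' implementation.

    import numpy as np

    def predict_next_workload(history, window=12):
        # Fit a straight line to the last `window` samples and extrapolate one step.
        y = np.asarray(history[-window:], dtype=float)
        x = np.arange(len(y))
        slope, intercept = np.polyfit(x, y, 1)        # least-squares linear fit
        return max(0.0, slope * len(y) + intercept)   # one-step-ahead forecast

    def required_vms(predicted_load, capacity_per_vm=500.0, headroom=1.2):
        # Translate a predicted request rate into a VM count with safety headroom.
        return int(np.ceil(predicted_load * headroom / capacity_per_vm))

    requests_per_min = [400, 420, 450, 480, 520, 560, 590, 640, 700, 750, 820, 880]
    forecast = predict_next_workload(requests_per_min)
    print(f"forecast: {forecast:.0f} req/min -> scale to {required_vms(forecast)} VMs")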
---
paper_title: Dependable Horizontal Scaling Based on Probabilistic Model Checking
paper_content:
The focus of this work is on-demand resource provisioning in cloud computing, which is commonly referred to as cloud elasticity. Although a lot of effort has been invested in developing systems and mechanisms that enable elasticity, the elasticity decision policies tend to be designed without quantifying or guaranteeing the quality of their operation. We present an approach towards the development of more formalized and dependable elasticity policies. We make two distinct contributions. First, we propose an extensible approach to enforcing elasticity through the dynamic instantiation and online quantitative verification of Markov Decision Processes (MDPs) using probabilistic model checking. Second, various concrete elasticity models and elasticity policies are studied. We evaluate the decision policies using traces from a real NoSQL database cluster under constantly evolving external load. We reason about the behaviour of different modelling and elasticity policy options and we show that our proposal can improve upon the state-of-the-art in significantly decreasing under-provisioning while avoiding over-provisioning.
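For intuition, an elasticity decision of this kind can be cast as a small Markov Decision Process and solved with value iteration, as in the toy sketch below. The states (cluster sizes), the deterministic transitions, and the reward that trades SLO penalties against VM cost are all invented for illustration; the sketch does not reproduce the paper's probabilistic model checking setup.

    STATES = range(1, 6)                      # cluster sizes: 1..5 VMs
    ACTIONS = {-1: "remove VM", 0: "keep", 1: "add VM"}
    LOAD = 3.2                                # external load in "VMs worth" of work (assumed)
    GAMMA = 0.9

    def reward(n_vms):
        slo_penalty = 10.0 * max(0.0, LOAD - n_vms)   # under-provisioning hurts SLOs
        cost = 1.0 * n_vms                            # each VM has a running cost
        return -(slo_penalty + cost)

    def step(n_vms, action):
        return min(max(n_vms + action, 1), 5)         # deterministic toy transition

    V = {s: 0.0 for s in STATES}
    for _ in range(100):                              # value iteration
        V = {s: max(reward(step(s, a)) + GAMMA * V[step(s, a)] for a in ACTIONS)
             for s in STATES}
    policy = {s: max(ACTIONS, key=lambda a: reward(step(s, a)) + GAMMA * V[step(s, a)])
              for s in STATES}
    for s in STATES:
        print(f"{s} VMs -> {ACTIONS[policy[s]]}")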
---
paper_title: Quasar: resource-efficient and QoS-aware cluster management
paper_content:
Cloud computing promises flexibility and high performance for users and high cost-efficiency for operators. Nevertheless, most cloud facilities operate at very low utilization, hurting both cost effectiveness and future scalability. We present Quasar, a cluster management system that increases resource utilization while providing consistently high application performance. Quasar employs three techniques. First, it does not rely on resource reservations, which lead to underutilization as users do not necessarily understand workload dynamics and physical resource requirements of complex codebases. Instead, users express performance constraints for each workload, letting Quasar determine the right amount of resources to meet these constraints at any point. Second, Quasar uses classification techniques to quickly and accurately determine the impact of the amount of resources (scale-out and scale-up), type of resources, and interference on performance for each workload and dataset. Third, it uses the classification results to jointly perform resource allocation and assignment, quickly exploring the large space of options for an efficient way to pack workloads on available resources. Quasar monitors workload performance and adjusts resource allocation and assignment when needed. We evaluate Quasar over a wide range of workload scenarios, including combinations of distributed analytics frameworks and low-latency, stateful services, both on a local cluster and a cluster of dedicated EC2 servers. At steady state, Quasar improves resource utilization by 47% in the 200-server EC2 cluster, while meeting performance constraints for workloads of all types.
---
paper_title: Characterizing Cloud Applications on a Google Data Center
paper_content:
In this paper, we characterize Google applications based on a one-month Google trace with over 650k jobs running across over 12,000 heterogeneous hosts in a Google data center. On the one hand, we carefully compute statistics about task events and resource utilization for Google applications, based on various types of resources (such as CPU and memory) and execution types (e.g., whether they can run batch tasks or not). Resource utilization per application closely follows the Pareto principle. On the other hand, we classify applications via a K-means clustering algorithm with an optimized number of clusters, based on task events and resource usage. The number of applications in the K-means clusters follows a Pareto-like distribution. We believe this work is valuable for the further investigation of Cloud environments.
---
paper_title: Efficient Autoscaling in the Cloud Using Predictive Models for Workload Forecasting
paper_content:
Large-scale component-based enterprise applications that leverage Cloud resources expect Quality of Service (QoS) guarantees in accordance with service level agreements between the customer and service providers. In the context of Cloud computing, auto scaling mechanisms hold the promise of assuring QoS properties to the applications while simultaneously making efficient use of resources and keeping operational costs low for the service providers. Despite the perceived advantages of auto scaling, realizing the full potential of auto scaling is hard due to multiple challenges stemming from the need to precisely estimate resource usage in the face of significant variability in client workload patterns. This paper makes three contributions to overcome the general lack of effective techniques for workload forecasting and optimal resource allocation. First, it discusses the challenges involved in auto scaling in the cloud. Second, it develops a model-predictive algorithm for workload forecasting that is used for resource auto scaling. Finally, empirical results are provided that demonstrate that resources can be allocated and deallocated by our algorithm in a way that satisfies the application QoS while keeping operational costs low.
---
paper_title: Dynamic Heterogeneity-Aware Resource Provisioning in the Cloud
paper_content:
Data centers today consume a tremendous amount of energy in terms of power distribution and cooling. Dynamic capacity provisioning is a promising approach for reducing energy consumption by dynamically adjusting the number of active machines to match resource demands. However, despite extensive studies of the problem, existing solutions for dynamic capacity provisioning have not fully considered the heterogeneity of both workload and machine hardware found in production environments. In particular, production data centers often comprise several generations of machines with different capacities, capabilities and energy consumption characteristics. Meanwhile, the workloads running in these data centers typically consist of a wide variety of applications with different priorities, performance objectives and resource requirements. Failure to consider heterogeneous characteristics will lead to both sub-optimal energy-savings and long scheduling delays, due to incompatibility between workload requirements and the resources offered by the provisioned machines. To address this limitation, in this paper we present HARMONY, a Heterogeneity-Aware Resource Management System for dynamic capacity provisioning in cloud computing environments. Specifically, we first use the K-means clustering algorithm to divide the workload into distinct task classes with similar characteristics in terms of resource and performance requirements. Then we present a novel technique for dynamically adjusting the number of machines of each type to minimize total energy consumption and performance penalty in terms of scheduling delay. Through simulations using real traces from Google's compute clusters, we found that our approach can improve data center energy efficiency by up to 28% compared to heterogeneity-oblivious solutions.
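A sketch of the first HARMONY step described above, grouping tasks into classes with similar resource demands via K-means; the synthetic task features (normalized CPU, memory, duration) and the cluster count are assumptions for illustration only.

    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    # Synthetic tasks: columns are normalized CPU demand, memory demand, duration.
    tasks = np.vstack([
        rng.normal([0.1, 0.1, 0.05], 0.03, size=(100, 3)),   # short, small tasks
        rng.normal([0.6, 0.3, 0.40], 0.05, size=(40, 3)),    # medium tasks
        rng.normal([0.9, 0.8, 0.90], 0.05, size=(10, 3)),    # long, heavy tasks
    ])

    kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(tasks)
    for k, center in enumerate(kmeans.cluster_centers_):
        count = int(np.sum(kmeans.labels_ == k))
        print(f"class {k}: {count} tasks, mean (cpu, mem, dur) = {np.round(center, 2)}")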
---
paper_title: Cost-Aware Elastic Cloud Provisioning for Scientific Workloads
paper_content:
Cloud computing provides an efficient model to host and scale scientific applications. While cloud-based approaches can reduce costs as users pay only for the resources used, it is often challenging to scale execution both efficiently and cost-effectively. We describe here a cost-aware elastic cloud provisioner designed to elastically provision cloud infrastructure to execute analyses cost-effectively. The provisioner considers real-time spot instance prices across availability zones, leverages application profiles to optimize instance type selection, over-provisions resources to alleviate bottlenecks caused by oversubscribed instance types, and is capable of reverting to on-demand instances when spot prices exceed thresholds. We evaluate the usage of our cost-aware provisioner using four production scientific gateways and show that it can produce cost savings of up to 97.2% when compared to naive provisioning approaches.
---
paper_title: Optimization of virtual resource management for cloud applications to cope with traffic burst
paper_content:
As the latest computing paradigm, cloud computing has proliferated as many IT giants have started to deliver resources as services, freeing application providers from the burden of low-level implementation and system administration. Meanwhile, the era of information explosion brings certain challenges: some websites may encounter sharply rising workloads due to unexpected social events, which can make them unavailable or unable to provide services in time. Currently, a post-action method based on human experience and system alarms is widely used in industry to handle this scenario, but it suffers from shortcomings such as reaction delay. In this paper, we address the problem by deploying such websites on the cloud and using cloud features to tackle it. We present a framework for dynamic virtual resource management in clouds to cope with the traffic bursts that applications might encounter. The framework implements a complete workflow, from prediction of the sharply rising workload to a customized resource management module that guarantees high availability of web applications and cost-effectiveness for cloud service providers. Our experiments show the accuracy of our workload forecasting method by comparing it with other methods. The 1998 World Cup workload dataset used in our experiments demonstrates the applicability of our model in traffic-burst scenarios. A simulation-based experiment also indicates that the proposed management framework detects changes in workload intensity over time and allocates multiple virtualized IT resources accordingly to achieve high availability and cost-effectiveness. Highlights: we present a framework of dynamic resource management to cope with traffic bursts; the prediction of traffic bursts is based on a Gompertz curve and a moving average model; the VM scheduler involves VM provisioning, VM placement and VM recycling; high availability and cost-effectiveness are achieved by the proposed framework.
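As an illustration of burst-curve fitting in the spirit of the Gompertz-based prediction mentioned above, the sketch below fits a Gompertz growth curve to a rising request-rate series and extrapolates it; the sample series and starting parameters are assumptions, not the paper's data or code.

    import numpy as np
    from scipy.optimize import curve_fit

    def gompertz(t, a, b, c):
        # a: asymptotic peak load, b: displacement, c: growth rate
        return a * np.exp(-b * np.exp(-c * t))

    t = np.arange(12, dtype=float)
    observed = np.array([120, 150, 200, 280, 400, 580, 800, 1050,
                         1300, 1500, 1650, 1750], dtype=float)

    params, _ = curve_fit(gompertz, t, observed, p0=[2000.0, 5.0, 0.3], maxfev=10000)
    a, b, c = params
    print(f"estimated peak load ~ {a:.0f} req/s")
    print(f"forecast at t=15: {gompertz(15.0, a, b, c):.0f} req/s")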
---
paper_title: Automated, Elastic Resource Provisioning for NoSQL Clusters Using TIRAMOLA
paper_content:
This work presents TIRAMOLA, a cloud-enabled, open-source framework to perform automatic resizing of NoSQL clusters according to user-defined policies. Decisions on adding or removing worker VMs from a cluster are modeled as a Markov Decision Process and taken in real-time. The system automatically decides on the most advantageous cluster size according to user-defined policies, it then proceeds on requesting/releasing VM resources from the provider and orchestrating them inside a NoSQL cluster. TIRAMOLA's modular architecture and standard API support allows interaction with most current IaaS platforms and increased customization. An extensive experimental evaluation on an HBase cluster confirms our assertions: The system resizes clusters in real-time and adapts its performance through different optimization strategies, different permissible actions, different input and training loads. Besides the automation of the process, it exhibits a learning feature which allows it to make very close to optimal decisions even with input loads 130% larger or alternating 10 times faster compared to the accumulated information.
---
paper_title: Service workload patterns for Qos-driven cloud resource management
paper_content:
Cloud service providers negotiate SLAs for the customer services they offer based on the reliability of performance and availability of their lower-level platform infrastructure. While availability management is more mature, performance management is less reliable. To support a continuous approach covering the initial static infrastructure configuration as well as dynamic reconfiguration and auto-scaling, an accurate and efficient solution is required. We propose a prediction technique that combines a workload pattern mining approach with a traditional collaborative filtering solution to meet the accuracy and efficiency requirements. Service workload patterns abstract common infrastructure workloads from monitoring logs and act as part of a first-stage, high-performance configuration mechanism before more complex traditional methods are considered. This enhances current reactive rule-based scalability approaches and basic prediction techniques with a hybrid prediction solution. Uncertainty and noise are additional challenges that emerge in multi-layered, often federated cloud architectures. We specifically add log smoothing combined with a fuzzy logic approach to make the prediction solution more robust in the face of these challenges.
---
paper_title: Heterogeneity and dynamicity of clouds at scale: Google trace analysis
paper_content:
To better understand the challenges in developing effective cloud-based resource schedulers, we analyze the first publicly available trace data from a sizable multi-purpose cluster. The most notable workload characteristic is heterogeneity: in resource types (e.g., cores:RAM per machine) and their usage (e.g., duration and resources needed). Such heterogeneity reduces the effectiveness of traditional slot- and core-based scheduling. Furthermore, some tasks are constrained as to the kind of machine types they can use, increasing the complexity of resource assignment and complicating task migration. The workload is also highly dynamic, varying over time and most workload features, and is driven by many short jobs that demand quick scheduling decisions. While few simplifying assumptions apply, we find that many longer-running jobs have relatively stable resource utilizations, which can help adaptive resource schedulers.
---
paper_title: PRESS: PRedictive Elastic ReSource Scaling for cloud systems
paper_content:
Cloud systems require elastic resource allocation to minimize resource provisioning costs while meeting service level objectives (SLOs). In this paper, we present a novel PRedictive Elastic reSource Scaling (PRESS) scheme for cloud systems. PRESS unobtrusively extracts fine-grained dynamic patterns in application resource demands and adjusts their resource allocations automatically. Our approach leverages light-weight signal processing and statistical learning algorithms to achieve online predictions of dynamic application resource requirements. We have implemented the PRESS system on Xen and tested it using RUBiS and an application load trace from Google. Our experiments show that we can achieve good resource prediction accuracy with less than 5% over-estimation error and near zero under-estimation error, and elastic resource scaling can significantly reduce both resource waste and SLO violations.
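A minimal sketch of the signature-extraction idea: use an FFT to find the dominant period in a synthetic CPU-demand trace and replay the last full period as the short-term predictor. The trace, sampling interval, and forecasting rule are assumptions for illustration; this is not the PRESS implementation.

    import numpy as np

    rng = np.random.default_rng(1)
    t = np.arange(288)                                   # e.g. 5-minute samples over 24h
    cpu = 40 + 25 * np.sin(2 * np.pi * t / 144) + rng.normal(0, 3, t.size)

    spectrum = np.abs(np.fft.rfft(cpu - cpu.mean()))
    freqs = np.fft.rfftfreq(cpu.size, d=1.0)
    dominant = freqs[np.argmax(spectrum[1:]) + 1]        # skip the zero-frequency bin
    period = int(round(1.0 / dominant))
    print(f"dominant period: {period} samples")

    # Use the repeating pattern (the "signature") to predict the next sample.
    signature = cpu[-period:]
    print(f"forecast for next sample: {signature[0]:.1f}% CPU")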
---
|
Title: Adaptation in cloud resource configuration: a survey
Section 1: Introduction
Description 1: Introduce cloud computing and provide an overview of the importance of adapting cloud resource configurations. Outline the survey's goals and contributions.
Section 2: Cloud systems setup
Description 2: Define cloud computing and introduce the core constituents of cloud systems, including compute, storage, network resources, and management tools.
Section 3: Cloud systems adaptation
Description 3: Discuss IPs' objectives and various approaches for adapting cloud infrastructure to meet workload demands, including the decision-making process and dimensions of cloud systems adaptation.
Section 4: Adapted resource
Description 4: Survey different types of resource adaptations, such as VM level, node level, and cluster level adaptations, identifying techniques and methods used.
Section 5: Adaptation objective
Description 5: Discuss the various objectives for adapting resources, such as minimizing SLA violations, reducing power consumption, and maximizing IP revenue.
Section 6: Adaptation technique
Description 6: Outline the different analytical and modeling techniques used to achieve adaptation objectives, including heuristic, control theory, and machine learning techniques.
Section 7: Adaptation engagement
Description 7: Explain how and when adaptation processes are invoked, including reactive, proactive, and hybrid approaches.
Section 8: Decision engine architecture
Description 8: Describe the various architectures of decision engines and their impact on the scalability and efficiency of resource adaptation.
Section 9: Managed infrastructure
Description 9: Discuss considerations for managed infrastructure, including heterogeneity and resource capabilities, and how these are factored into the decision-making process.
Section 10: Adaptation in cloud resource configuration
Description 10: Survey literature focused on adapting cloud resource configurations, specifically on compute and storage resources, and compare different approaches based on several dimensions.
Section 11: Open research challenges
Description 11: Identify and discuss open challenges in the field of cloud systems adaptation, such as workload characterization, online profiling, and scalability.
Section 12: Conclusion
Description 12: Summarize the survey, highlighting key findings and techniques for cloud systems adaptation, and suggesting areas for further research.
|
Scholarly use of social media and altmetrics: a review of the literature
| 13 |
---
paper_title: Relationship between altmetric and bibliometric indicators across academic social sites: The case of CSIC's members
paper_content:
This study explores the connections between social and usage metrics (altmetrics) and bibliometric indicators at the author level. It studies to what extent these indicators, gathered from academic sites, can provide a proxy for research impact. Close to 10,000 author profiles belonging to the Spanish National Research Council were extracted from the principal scholarly social sites (ResearchGate, Academia.edu and Mendeley) and academic search engines (Microsoft Academic Search and Google Scholar Citations). The results show little overlap between sites because most researchers maintain only one profile (72%). Correlations indicate that there is scant relationship between altmetric and bibliometric indicators at the author level. This is because the altmetric indicators are site-dependent, while the bibliometric ones are more stable across web sites. It is concluded that altmetrics could reflect an alternative dimension of research performance, close, perhaps, to science popularization and networking abilities, but far from citation impact.
---
paper_title: User Participation in an Academic Social Networking Service: A Survey of Open Group Users on Mendeley
paper_content:
Although there are a number of social networking services that specifically target scholars, little has been published about the actual practices and the usage of these so-called academic social networking services (ASNSs). To fill this gap, we explore the populations of academics who engage in social activities using an ASNS; as an indicator of further engagement, we also determine their various motivations for joining a group in ASNSs. Using groups and their members in Mendeley as the platform for our case study, we obtained 146 participant responses from our online survey about users' common activities, usage habits, and motivations for joining groups. Our results show that (a) participants did not engage with social-based features as frequently and actively as they engaged with research-based features, and (b) users who joined more groups seemed to have a stronger motivation to increase their professional visibility and to contribute the research articles that they had read to the group reading list. Our results generate interesting insights into Mendeley's user populations, their activities, and their motivations relative to the social features of Mendeley. We also argue that further design of ASNSs is needed to take greater account of disciplinary differences in scholarly communication and to establish incentive mechanisms for encouraging user participation.
---
paper_title: Groups in Mendeley: Owners' descriptions and group outcomes
paper_content:
Using four factors borrowed from traditional social group theories, we examined owners' group descriptions in Mendeley to study the applicability of traditional social group theories for large, loosely-formed online groups. We manually annotated the descriptions for 529 Mendeley groups, and correlated the appearances of the factors with two measures of the groups' outcomes: the changes in the numbers of group members and the changes of the articles shared within the groups between 2011 and 2012. Our results suggest that, in general, all four factors are important in online groups, which indicates the usefulness of traditional group theories in the study of online groups. In addition, although a majority of the factors have helped the growth of group size being higher than average increase, several factors have caused the increase of the shared articles within the groups to be smaller than average increase.
---
paper_title: Do highly cited researchers successfully use the social web?
paper_content:
Academics can now use the web and the social websites to disseminate scholarly information in a variety of different ways. Although some scholars have taken advantage of these new online opportunities, it is not clear how widespread their uptake is or how much impact they can have. This study assesses the extent to which successful scientists have social web presences, focusing on one influential group: highly cited researchers working at European institutions. It also assesses the impact of these presences. We manually and systematically identified if the European highly cited researchers had profiles in Google Scholar, Microsoft Academic Search, Mendeley, Academia and LinkedIn or any content in SlideShare. We then used URL mentions and altmetric indicators to assess the impact of the web presences found. Although most of the scientists had an institutional website of some kind, few had created a profile in any social website investigated, and LinkedIn--the only non-academic site in the list--was the most popular. Scientists having one kind of social web profile were more likely to have another in many cases, especially in the life sciences and engineering. In most cases it was possible to estimate the relative impact of the profiles using a readily available statistic and there were disciplinary differences in the impact of the different kinds of profiles. Most social web profiles had some evidence of uptake, if not impact; nevertheless, the value of the indicators used is unclear.
---
paper_title: Academics and their online networks: Exploring the role of academic social networking sites
paper_content:
The rapid rise in popularity of online social networking has been followed by a slew of services aimed at an academic audience. This project sought to explore network structure in these sites, and to explore trends in network structure by surveying participants about their use of sites and motivations for making connections. Social network analysis revealed that discipline was influential in defining community structure, while academic seniority was linked to the position of nodes within the network. The survey revealed a contradiction between academics' use of the sites and their position within the networks the sites foster. Junior academics were found to be more active users of the sites, agreeing to a greater extent with the perceived benefits, yet having fewer connections and occupying a more peripheral position in the network.
---
paper_title: Coverage and adoption of altmetrics sources in the bibliometric community
paper_content:
Altmetrics, indices based on social media platforms and tools, have recently emerged as alternative means of measuring scholarly impact. Such indices assume that scholars in fact populate online social environments, and interact with scholarly products in the social web. We tested this assumption by examining the use and coverage of social media environments amongst a sample of bibliometricians, examining both their own use of online platforms and the use of their papers on social reference managers. As expected, coverage varied: 82 % of articles published by sampled bibliometricians were included in Mendeley libraries, while only 28 % were included in CiteULike. Mendeley bookmarking was moderately correlated (.45) with Scopus citation counts. We conducted a survey among the participants of the STI2012 conference. Over half of respondents asserted that social media tools were affecting their professional lives, although uptake of online tools varied widely. 68 % of those surveyed had LinkedIn accounts, while Academia.edu, Mendeley, and ResearchGate each claimed a fifth of respondents. Nearly half of those responding had Twitter accounts, which they used both personally and professionally. Surveyed bibliometricians had mixed opinions on altmetrics' potential; 72 % valued download counts, while a third saw potential in tracking articles' influence in blogs, Wikipedia, reference managers, and social media. Altogether, these findings suggest that some online tools are seeing substantial use by bibliometricians, and that they present a potentially valuable source of impact data.
---
paper_title: Post-Gutenberg Galaxy: The Fourth Revolution in the Means of Production of Knowledge.
paper_content:
The 4th revolution after speech, writing and print, is skywriting (email, hypermail, web-based archiving).
---
paper_title: Can Mendeley bookmarks reflect readership? A survey of user motivations
paper_content:
Although Mendeley bookmarking counts appear to correlate moderately with conventional citation metrics, it is not known whether academic publications are bookmarked in Mendeley in order to be read or not. Without this information, it is not possible to give a confident interpretation of altmetrics derived from Mendeley. In response, a survey of 860 Mendeley users shows that it is reasonable to use Mendeley bookmarking counts as an indication of readership because most (55%) users with a Mendeley library had read or intended to read at least half of their bookmarked publications. This was true across all broad areas of scholarship except for the arts and humanities (42%). About 85% of the respondents also declared that they bookmarked articles in Mendeley to cite them in their publications, but some also bookmark articles for use in professional (50%), teaching (25%), and educational activities (13%). Of course, it is likely that most readers do not record articles in Mendeley and so these data do not represent all readers. In conclusion, Mendeley bookmark counts seem to be indicators of readership leading to a combination of scholarly impact and wider professional impact.
---
paper_title: Mendeley readership altmetrics for the social sciences and humanities: Research evaluation and knowledge flows
paper_content:
Although there is evidence that counting the readers of an article in the social reference site, Mendeley, may help to capture its research impact, the extent to which this is true for different scientific fields is unknown. In this study, we compare Mendeley readership counts with citations for different social sciences and humanities disciplines. The overall correlation between Mendeley readership counts and citations for the social sciences was higher than for the humanities. Low and medium correlations between Mendeley bookmarks and citation counts in all the investigated disciplines suggest that these measures reflect different aspects of research impact. Mendeley data were also used to discover patterns of information flow between scientific fields. Comparing information flows based on Mendeley bookmarking data and cross-disciplinary citation analysis for the disciplines revealed substantial similarities and some differences. Thus, the evidence from this study suggests that Mendeley readership data could be used to help capture knowledge transfer across scientific disciplines, especially for people that read but do not author articles, as well as giving impact evidence at an earlier stage than is possible with citation counts.
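For readers who want to reproduce this kind of analysis, a minimal sketch of a Spearman rank correlation between readership and citation counts is shown below; the counts are fabricated purely for illustration.

    from scipy.stats import spearmanr

    readers   = [12, 45, 3, 88, 20, 7, 150, 33, 0, 61]   # Mendeley readership counts
    citations = [ 5, 30, 1, 70, 12, 2, 110, 25, 0, 40]   # citation counts (same papers)

    rho, p_value = spearmanr(readers, citations)
    print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")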
---
paper_title: Groups in Academic Social Networking Services--An Exploration of Their Potential as a Platform for Multi-disciplinary Collaboration
paper_content:
The importance of collaborations across geographical, institutional and/or disciplinary boundaries has been widely recognized in research communities, yet there exists a range of obstacles to such collaborations. This study is concerned with understanding the potential of academic social networking services (ASNS) as a medium or platform for cross-disciplinary or multi-disciplinary collaborations. Many ASNS sites allow scholars to form online groups as well as to build up their professional network individually. In this study, we look at the patterns of user participation in online groups in an ASNS site, Mendeley, with an emphasis on assessing the degree to which people from different disciplinary backgrounds gather in these groups. The results show that while there exists a need for better means to facilitate group formation and growth, the groups in Mendeley exhibit a great deal of diversity in their member composition in terms of disciplines. Overall, the findings of this study support the argument that online social networking, especially ASNS, may foster multi-disciplinary collaborations by providing a platform for researchers from diverse backgrounds to find one another and cooperate on issues of common interest.
---
paper_title: Assessing the Impact of Publications Saved by Mendeley Users: Is There Any Different Pattern Among Users?
paper_content:
The main focus of this paper is to investigate the impact of publications read (saved) by different types of users in Mendeley, in order to explore the extent to which their readership counts correlate with citation indicators. The potential of filtering highly cited papers by Mendeley readerships and its different user types is also explored. For the analysis of users, we considered the information on the top three Mendeley ‘users’ reported by Mendeley. Our results show that publications with Mendeley readerships tend to have higher citation and journal citation scores than publications without readerships. ‘Biomedical & health sciences’ and ‘Mathematics and computer science’ are the fields with, respectively, the most and the least readership activity in Mendeley. PhD students have the highest density of readerships per publication and Lecturers and Librarians have the lowest, across all the different fields. Our precision-recall analysis indicates that, in general, for publications with at least one reader in Mendeley, the capacity of readership counts to filter highly cited publications is better than (or at least as good as) that of Journal Citation Scores. We discuss the important limitation that Mendeley reports only the top three readers rather than all of them, which affects the potential development of indicators based on Mendeley and its users.
---
paper_title: Network Structure of Social Coding in GitHub
paper_content:
Social coding enables a different experience of software development as the activities and interests of one developer are easily advertised to other developers. Developers can thus track the activities relevant to various projects in one umbrella site. Such a major change in collaborative software development makes an investigation of networkings on social coding sites valuable. Furthermore, project hosting platforms promoting this development paradigm have been thriving, among which GitHub has arguably gained the most momentum. In this paper, we contribute to the body of knowledge on social coding by investigating the network structure of social coding in GitHub. We collect 100,000 projects and 30,000 developers from GitHub, construct developer-developer and project-project relationship graphs, and compute various characteristics of the graphs. We then identify influential developers and projects on this sub network of GitHub by using PageRank. Understanding how developers and projects are actually related to each other on a social coding site is the first step towards building tool supports to aid social programmers in performing their tasks more efficiently.
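A small sketch of the ranking step described above, computing PageRank over a made-up developer-developer follow graph with networkx; the edge list is hypothetical and edges point from a follower to the developer they follow.

    import networkx as nx

    edges = [
        ("alice", "bob"), ("carol", "bob"), ("dave", "bob"),
        ("bob", "carol"), ("dave", "carol"), ("erin", "alice"),
    ]
    g = nx.DiGraph(edges)

    scores = nx.pagerank(g, alpha=0.85)
    for dev, score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
        print(f"{dev}: {score:.3f}")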
---
paper_title: Social Networking Meets Software Development: Perspectives from GitHub, MSDN, Stack Exchange, and TopCoder
paper_content:
Many successful software companies use social networking as a way to improve the services or products they provide. To gain an understanding of the role social networking plays in today's software development world, the guest editors of the January/February 2013 issue conducted semistructured interviews with leaders from four successful companies: Brian Doll, an engineer who manages GitHub's marketing; Doug Laundry, a principal group program manager at Microsoft; David Fullerton, vice president of engineering at Stack Exchange; and Robert Hughes, the president and chief operating officer of TopCoder. The first Web extra at http://try.github.com is a video of Joel Spolsky discussing the structure, software, technology, and culture of Stack Exchange. The second Web extra at http://blip.tv/play/gvUBgqLbRgI.html is a video of Matthew McCullough and Tim Berglund demonstrating how Git not only incorporates the best features of existing source control systems but also includes unique distributed capabilities that make version control commands available without connectivity, allowing you to choose when to interact with a network. The third Web extra at http://blip.tv/play/gvUBgqLbRgI.html is a video of Matthew McCullough and Tim Berglund demonstrating how to leverage Git's powerful yet underused advanced features. The last Web extra at http://youtu.be/SK6TBI1bNLI is a video of Thomas Baden, Chief Information Officer, State of Minnesota, Department of Human Services, describing the experience of working on the TopCoder Platform and with the members of the TopCoder Community.
---
paper_title: Data Sharing by Scientists: Practices and Perceptions
paper_content:
Background: Scientific research in the 21st century is more data intensive and collaborative than in the past. It is important to study the data practices of researchers – data accessibility, discovery, re-use, preservation and, particularly, data sharing. Data sharing is a valuable part of the scientific method allowing for verification of results and extending research from prior results. Methodology/Principal Findings: A total of 1329 scientists participated in this survey exploring current data sharing practices and perceptions of the barriers and enablers of data sharing. Scientists do not make their data electronically available to others for various reasons, including insufficient time and lack of funding. Most respondents are satisfied with their current processes for the initial and short-term parts of the data or research lifecycle (collecting their research data; searching for, describing or cataloging, analyzing, and short-term storage of their data) but are not satisfied with long-term data preservation. Many organizations do not provide support to their researchers for data management both in the shortand long-term. If certain conditions are met (such as formal citation and sharing reprints) respondents agree they are willing to share their data. There are also significant differences and approaches in data management practices based on primary funding agency, subject discipline, age, work focus, and world region. Conclusions/Significance: Barriers to effective data sharing and preservation are deeply rooted in the practices and culture of the research process as well as the researchers themselves. New mandates for data management plans from NSF and other federal agencies and world-wide attention to the need to share and preserve data could lead to changes. Large scale programs, such as the NSF-sponsored DataNET (including projects like DataONE) will both bring attention and resources to the issue and make it easier for scientists to apply sound data management principles.
---
paper_title: Public sharing of research datasets: A pilot study of associations
paper_content:
The public sharing of primary research datasets potentially benefits the research community but is not yet common practice. In this pilot study, we analyzed whether data sharing frequency was associated with funder and publisher requirements, journal impact factor, or investigator experience and impact. Across 397 recent biomedical microarray studies, we found investigators were more likely to publicly share their raw dataset when their study was published in a high-impact journal and when the first or last authors had high levels of career experience and impact. We estimate the USA's National Institutes of Health (NIH) data sharing policy applied to 19% of the studies in our cohort; being subject to the NIH data sharing plan requirement was not found to correlate with increased data sharing behavior in multivariate logistic regression analysis. Studies published in journals that required a database submission accession number as a condition of publication were more likely to share their data, but this trend was not statistically significant. These early results will inform our ongoing larger analysis, and hopefully contribute to the development of more effective data sharing initiatives.
---
paper_title: How many citations are there in the Data Citation Index?
paper_content:
19th International Conference on Science and Technology Indicators (STI), Leiden (The Netherlands), 3-5 September 2014.
---
paper_title: Tracking citations and altmetrics for research data: Challenges and opportunities
paper_content:
Editor's Summary: Methods for determining research quality have long been debated but with little lasting agreement on standards, leading to the emergence of alternative metrics. Altmetrics are a useful supplement to traditional citation metrics, reflecting a variety of measurement points that give different perspectives on how a dataset is used and by whom. A positive development is the integration of a number of research datasets into the ISI Data Citation Index, making datasets searchable and linking them to published articles. Yet access to data resources and tracking the resulting altmetrics depend on specific qualities of the datasets and the systems where they are archived. Though research on altmetrics use is growing, the lack of standardization across datasets and system architecture undermines its generalizability. Without some standards, stakeholders' adoption of altmetrics will be limited.
---
paper_title: Indicators for the Data Usage Index (DUI): an incentive for publishing primary biodiversity data through global information infrastructure
paper_content:
Background: A professional recognition mechanism is required to encourage expedited publishing of an adequate volume of 'fit-for-use' biodiversity data. As a component of such a recognition mechanism, we propose the development of the Data Usage Index (DUI) to demonstrate to data publishers that their efforts of creating biodiversity datasets have impact by being accessed and used by a wide spectrum of user communities. Discussion: We propose and give examples of a range of 14 absolute and normalized biodiversity dataset usage indicators for the development of a DUI based on search events and dataset download instances. The DUI is proposed to include relative as well as species-profile-weighted comparative indicators. Conclusions: We believe that, in addition to the recognition of the data publisher and all players involved in the data life cycle, a DUI will also provide much needed yet novel insight into how users use primary biodiversity data. A DUI consisting of a range of usage indicators obtained from the GBIF network and other relevant access points is within reach. The usage of biodiversity datasets leads to the development of a family of indicators in line with well-known citation-based measurements of recognition.
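The abstract does not enumerate the 14 proposed indicators, so the sketch below only illustrates the general idea of pairing an absolute usage count with a normalized one; the indicator definitions, field names and numbers are assumptions, not the DUI specification.

```python
# Illustrative sketch of absolute vs. normalized dataset-usage indicators of the
# kind a Data Usage Index could build on. Definitions and figures are hypothetical.
from dataclasses import dataclass

@dataclass
class DatasetUsage:
    name: str
    searches: int    # search events that returned records from the dataset
    downloads: int   # download instances of records from the dataset
    records: int     # number of records the dataset publishes

def absolute_usage(u: DatasetUsage) -> int:
    """Raw interest: total search and download events."""
    return u.searches + u.downloads

def normalized_usage(u: DatasetUsage) -> float:
    """Usage per 1,000 published records, so large and small datasets are comparable."""
    return 1000 * (u.searches + u.downloads) / max(u.records, 1)

datasets = [
    DatasetUsage("herbarium-A", searches=52_000, downloads=8_000, records=1_200_000),
    DatasetUsage("bird-obs-B", searches=9_000, downloads=4_500, records=40_000),
]
for d in datasets:
    # The smaller dataset can rank higher once usage is normalized by its size.
    print(d.name, absolute_usage(d), round(normalized_usage(d), 1))
```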
---
paper_title: Social media and scholarly reading
paper_content:
Purpose – The purpose of this paper is to examine how often university academic staff members use and create various forms of social media for their work and how that use influences their use of traditional scholarly information sources.Design/methodology/approach – This article is based on a 2011 academic reading study conducted at six higher learning institutions in the United Kingdom. Approximately 2,000 respondents completed the web‐based survey. The study used the critical incident of last reading by academics to gather information on the purpose, outcomes, and values of scholarly readings and access to library collections. In addition, academics were asked about their use and creation of social media as part of their work activities. The authors looked at six categories of social media – blogs, videos/YouTube, RSS feeds, Twitter feeds, user comments in articles, podcasts, and other. This article focuses on the influence of social media on scholarly reading patterns.Findings – Most UK academics use o...
---
paper_title: The role of online videos in research communication: A content analysis of YouTube videos cited in academic publications
paper_content:
Although there is some evidence that online videos are increasingly used by academics for informal scholarly communication and teaching, the extent to which they are used in published academic research is unknown. This article explores the extent to which YouTube videos are cited in academic publications and whether there are significant broad disciplinary differences in this practice. To investigate, we extracted the URL citations to YouTube videos from academic publications indexed by Scopus. A total of 1,808 Scopus publications cited at least one YouTube video, and there was a steady upward growth in citing online videos within scholarly publications from 2006 to 2011, with YouTube citations being most common within arts and humanities (0.3%) and the social sciences (0.2%). A content analysis of 551 YouTube videos cited by research articles indicated that in science (78%) and in medicine and health sciences (77%), over three fourths of the cited videos had either direct scientific (e.g., laboratory experiments) or scientific-related contents (e.g., academic lectures or education) whereas in the arts and humanities, about 80% of the YouTube videos had art, culture, or history themes, and in the social sciences, about 63% of the videos were related to news, politics, advertisements, and documentaries. This shows both the disciplinary differences and the wide variety of innovative research communication uses found for videos within the different subject areas. © 2012 Wiley Periodicals, Inc.
---
paper_title: A Community of Curious Souls: An Analysis of Commenting Behavior on TED Talks Videos
paper_content:
The TED (Technology, Entertainment, Design) Talks website hosts video recordings of various experts, celebrities, academics, and others who discuss their topics of expertise. Funded by advertising and members but provided free online, TED Talks have been viewed over a billion times and are a science communication phenomenon. Although the organization has been derided for its populist slant and emphasis on entertainment value, no previous research has assessed audience reactions in order to determine the degree to which presenter characteristics and platform affect the reception of a video. This article addresses this issue via a content analysis of comments left on both the TED website and the YouTube platform (on which TED Talks videos are also posted). It was found that commenters were more likely to discuss the characteristics of a presenter on YouTube, whereas commenters tended to engage with the talk content on the TED website. In addition, people tended to be more emotional when the speaker was a woman (by leaving comments that were either positive or negative). The results can inform future efforts to popularize science amongst the public, as well as to provide insights for those looking to disseminate information via Internet videos.
---
paper_title: The State of Blogging
paper_content:
---
paper_title: Using altmetrics for assessing research impact in the humanities
paper_content:
The prospects of altmetrics are especially encouraging for research fields in the humanities that currently are difficult to study using established bibliometric methods. Yet, little is known about the altmetric impact of research fields in the humanities. Consequently, this paper analyses the altmetric coverage and impact of humanities-oriented articles and books published by Swedish universities during 2012. Some of the most common altmetric sources are examined using a sample of 310 journal articles and 54 books. Mendeley has the highest coverage of journal articles (61 %) followed by Twitter (21 %) while very few of the publications are mentioned in blogs or on Facebook. Books, on the other hand, are quite often tweeted while both Mendeley's and the novel data source Library Thing's coverage is low. Many of the problems of applying bibliometrics to the humanities are also relevant for altmetric approaches; the importance of non-journal publications, the reliance on print as well the limited coverage of non-English language publications. However, the continuing development and diversification of methods suggests that altmetrics could evolve into a valuable tool for assessing research in the humanities.
---
paper_title: Studying Scientific Discourse on the Web Using Bibliometrics : A Chemistry Blogging Case Study
paper_content:
In this work we study scientific discourse on the Web through its connection to the academic literature.
---
paper_title: How is research blogged? A content analysis approach
paper_content:
Blogs that cite academic articles have emerged as a potential source of alternative impact metrics for the visibility of the blogged articles. Nevertheless, to evaluate more fully the value of blog citations, it is necessary to investigate whether research blogs focus on particular types of articles or give new perspectives on scientific discourse. Therefore, we studied the characteristics of peer-reviewed references in blogs and the typical content of blog posts to gain insight into bloggers' motivations. The sample consisted of 391 blog posts from 2010 to 2012 in Researchblogging.org's health category. The bloggers mostly cited recent research articles or reviews from top multidisciplinary and general medical journals. Using content analysis methods, we created a general classification scheme for blog post content with 10 major topic categories, each with several subcategories. The results suggest that health research bloggers rarely self-cite and that the vast majority of their blog posts (90%) include a general discussion of the issue covered in the article, with more than one quarter providing health-related advice based on the article(s) covered. These factors suggest a genuine attempt to engage with a wider, nonacademic audience. Nevertheless, almost 30% of the posts included some criticism of the issues being discussed.
---
paper_title: Research Blogging: Indexing and Registering the Change in Science 2.0
paper_content:
Increasing public interest in science information in a digital and 2.0 science era promotes a dramatically, rapid and deep change in science itself. The emergence and expansion of new technologies and internet-based tools is leading to new means to improve scientific methodology and communication, assessment, promotion and certification. It allows methods of acquisition, manipulation and storage, generating vast quantities of data that can further facilitate the research process. It also improves access to scientific results through information sharing and discussion. Content previously restricted only to specialists is now available to a wider audience. This context requires new management systems to make scientific knowledge more accessible and useable, including new measures to evaluate the reach of scientific information. The new science and research quality measures are strongly related to the new online technologies and services based in social media. Tools such as blogs, social bookmarks and online reference managers, Twitter and others offer alternative, transparent and more comprehensive information about the active interest, usage and reach of scientific publications. Another of these new filters is the Research Blogging platform, which was created in 2007 and now has over 1,230 active blogs, with over 26,960 entries posted about peer-reviewed research on subjects ranging from Anthropology to Zoology. This study takes a closer look at RB, in order to get insights into its contribution to the rapidly changing landscape of scientific communication.
---
paper_title: Science blogs and public engagement with science: practices, challenges, and opportunities
paper_content:
Digital information and communication technologies (ICTs) are novelty tools that can be used to facilitate broader involvement of citizens in the discussions about science. The same tools can be used to reinforce the traditional top-down model of science communication. Empirical investigations of particular technologies can help to understand how these tools are used in the dissemination of information and knowledge as well as stimulate a dialog about better models and practices of science communication. This study focuses on one of the ICTs that have already been adopted in science communication, on science blogging. The findings from the analysis of eleven blogs are presented in an attempt to understand current practices of science blogging and to provide insight into the role of blogging in the promotion of more interactive forms of science communication.
---
paper_title: Altmetrics in the wild: Using social media to explore scholarly impact
paper_content:
In growing numbers, scholars are integrating social media tools like blogs, Twitter, and Mendeley into their professional communications. The online, public nature of these tools exposes and reifies scholarly processes once hidden and ephemeral. Metrics based on these activities could inform broader, faster measures of impact, complementing traditional citation metrics. This study explores the properties of these social media-based metrics or "altmetrics", sampling 24,331 articles published by the Public Library of Science. We find that different indicators vary greatly in activity. Around 5% of sampled articles are cited in Wikipedia, while close to 80% have been included in at least one Mendeley library. There is, however, an encouraging diversity; a quarter of articles have nonzero data from five or more different sources. Correlation and factor analysis suggest citation and altmetrics indicators track related but distinct impacts, with neither able to describe the complete picture of scholarly use alone. There are moderate correlations between Mendeley and Web of Science citation, but many altmetric indicators seem to measure impact mostly orthogonal to citation. Articles cluster in ways that suggest five different impact "flavors", capturing impacts of different types on different audiences; for instance, some articles may be heavily read and saved by scholars but seldom cited. Together, these findings encourage more research into altmetrics as complements to traditional citation measures.
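A minimal sketch of the kind of correlation analysis the abstract refers to is given below, using synthetic article-level counts rather than the PLOS sample; Spearman rank correlation is used here because count data of this kind are heavily skewed.

```python
# Sketch: correlating altmetric counts with citation counts on synthetic data.
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
n = 1000
citations = rng.negative_binomial(2, 0.1, size=n)                                   # skewed, like real citation data
mendeley = (0.8 * citations + rng.negative_binomial(2, 0.2, size=n)).astype(int)    # assumed related to citations
tweets = rng.negative_binomial(1, 0.3, size=n)                                      # assumed mostly unrelated

articles = pd.DataFrame({"citations": citations, "mendeley": mendeley, "tweets": tweets})
# Spearman rank correlation copes better with the skew than Pearson does.
print(articles.corr(method="spearman").round(2))
```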
---
paper_title: Blogging thoughts: personal publication as an online research tool
paper_content:
Once upon a time, weblogs were automatically collated overviews of data about visitors to a web server. That's changed. Nowadays the texts called weblogs are definitely not written by a computer. Weblogs today are subjective annotations to the web rather than statistics about it. Weblogs, or blogs as they are affectionately termed, are frequently updated websites, usually personal, with commentary and links. Link lists are as old as home pages, but a blog is far from a static link list or home page. A blog consists of many relatively short posts, usually time-stamped, and organised in reverse chronology so that a reader will always see the most recent post first. The first weblogs were seen as filters to the Internet; interesting links to sites the reader might not have seen, often with commentary from the blogger. Though weblogs have many different themes, looks and writing styles, formally the genre is clear. Brief, dated posts collected on one web page are the main formal criteria. Evan Williams, one of the creators of the popular blogging tool Blogger, is succinct in his definition:
---
paper_title: Social Media Release Increases Dissemination of Original Articles in the Clinical Pain Sciences
paper_content:
A barrier to dissemination of research is that it depends on the end-user searching for or ‘pulling’ relevant knowledge from the literature base. Social media instead ‘pushes’ relevant knowledge straight to the end-user, via blogs and sites such as Facebook and Twitter. That social media is very effective at improving dissemination seems well accepted, but, remarkably, there is no evidence to support this claim. We aimed to quantify the impact of social media release on views and downloads of articles in the clinical pain sciences. Sixteen PLOS ONE articles were blogged and released via Facebook, Twitter, LinkedIn and ResearchBlogging.org on one of two randomly selected dates. The other date served as a control. The primary outcomes were the rate of HTML views and PDF downloads of the article, over a seven-day period. The critical result was an increase in both outcome variables in the week after the blog post and social media release. The mean ± SD rate of HTML views in the week after the social media release was 18±18 per day, whereas the rate during the other three weeks was no more than 6±3 per day. The mean ± SD rate of PDF downloads in the week after the social media release was 4±4 per day, whereas the rate during the other three weeks was less than 1±1 per day. The size of these increases was not related to conventional social media metrics (p > 0.3 for all). We conclude that social media release of a research article in the clinical pain sciences increases the number of people who view or download that article, but conventional social media metrics are unrelated to the effect.
---
paper_title: We the Media: Grassroots Journalism by the People, for the People
paper_content:
"We the Media, has become something of a bible for those who believe the online medium will change journalism for the better." - "Financial Times". Big Media has lost its monopoly on the news, thanks to the Internet. Now that it's possible to publish in real time to a worldwide audience, a new breed of grassroots journalists are taking the news into their own hands. Armed with laptops, cell phones, and digital cameras, these readers-turned-reporters are transforming the news from a lecture into a conversation. In "We the Media", nationally acclaimed newspaper columnist and blogger Dan Gillmor tells the story of this emerging phenomenon and sheds light on this deep shift in how we make - and consume - the news. Gillmor shows how anyone can produce the news, using personal blogs, Internet chat groups, email, and a host of other tools. He sends a wake-up call to newsmakers - politicians, business executives, celebrities - and the marketers and PR flacks who promote them. He explains how to successfully play by the rules of this new era and shift from "control" to "engagement." And, he makes a strong case to his fell journalists that, in the face of a plethora of Internet-fueled news vehicles, they must change or become irrelevant. Journalism in the 21st century will be fundamentally different from the Big Media oligarchy that prevails today. "We the Media" casts light on the future of journalism, and invites us all to be part of it. Dan Gillmor is founder of Grassroots Media Inc., a project aimed at enabling grassroots journalism and expanding its reach. The company's first launch is Bayosphere.com, a site "of, by, and for the San Francisco Bay Area." From 1994-2004, Gillmor was a columnist at the "San Jose Mercury News", Silicon Valley's daily newspaper, and wrote a weblog for SiliconValley.com. He joined the "Mercury News" after six years with the Detroit Free Press. Before that, he was with the "Kansas City Times" and several newspapers in Vermont. He has won or shared in several regional and national journalism awards. Before becoming a journalist, he played music professionally for seven years.
---
paper_title: Blog-supported scientific communication: An exploratory analysis based on social hyperlinks in a Chinese blog community
paper_content:
As a new-style computer-mediated communication system, the blog has been gaining popularity among various Web users. Blog communities come into being in the process of self-organized communication between bloggers, and the community structures are reflected by the embedded social networks. This study examines the communication patterns of scientist bloggers with data from the largest Chinese-language scientific blog community specializing in computer and information sciences and technologies, i.e. the Csdn blog. The social network analysis of its blogroll link data suggests that the Csdn blog community is a small-world network. Many sub-communities exist in the blog community. The communication between the central and ordinary bloggers within the same sub-community is usually one-way and dense. The structure of the Csdn blog community indicates that distributed central actors are still important in the diffusion and communication of scientific knowledge.
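A small-world check of the sort described here can be sketched with networkx; the graph below is a synthetic stand-in for the blogroll network, and comparing against a same-density random graph is one common way (not necessarily the paper's exact procedure) to operationalise "small-world".

```python
# Sketch of a small-world check on a blogroll-style network. The graph is a
# synthetic stand-in; the paper analysed blogroll links from the Csdn community.
import networkx as nx

G = nx.connected_watts_strogatz_graph(n=200, k=6, p=0.1, seed=1)   # stand-in blogroll graph
R = nx.gnm_random_graph(G.number_of_nodes(), G.number_of_edges(), seed=1)
if not nx.is_connected(R):                     # path lengths are only defined on connected graphs
    R = R.subgraph(max(nx.connected_components(R), key=len)).copy()

C, C_rand = nx.average_clustering(G), nx.average_clustering(R)
L, L_rand = nx.average_shortest_path_length(G), nx.average_shortest_path_length(R)
sigma = (C / C_rand) / (L / L_rand)            # sigma > 1 is commonly read as small-world
print(f"C={C:.3f} (random {C_rand:.3f})  L={L:.2f} (random {L_rand:.2f})  sigma={sigma:.2f}")
```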
---
paper_title: Translating Research For Health Policy: Researchers’ Perceptions And Use Of Social Media
paper_content:
As the United States moves forward with health reform, the communication gap between researchers and policy makers will need to be narrowed to promote policies informed by evidence. Social media represent an expanding channel for communication. Academic journals, public health agencies, and health care organizations are increasingly using social media to communicate health information. For example, the Centers for Disease Control and Prevention now regularly tweets to 290,000 followers. We conducted a survey of health policy researchers about using social media and two traditional channels (traditional media and direct outreach) to disseminate research findings to policy makers. Researchers rated the efficacy of the three dissemination methods similarly but rated social media lower than the other two in three domains: researchers’ confidence in their ability to use the method, peers’ respect for its use, and how it is perceived in academic promotion. Just 14 percent of our participants reported tweeting, ...
---
paper_title: Science blogging: an exploratory study of motives, styles, and audience reactions
paper_content:
This paper presents results from three studies on science blogging, the use of blogs for science communication. A survey addresses the views and motives of science bloggers, a first content analysis examines material published in science blogging platforms, while a second content analysis looks at reader responses to controversial issues covered in science blogs. Bloggers determine to a considerable degree which communicative function their blog can realize and how accessible it will be to non-experts. Frequently, readers are interested in adding their views to a post, a form of involvement which is in turn welcomed by the majority of bloggers.
---
paper_title: Academic blogging, academic practice and academic identity
paper_content:
This paper describes a small-scale study which investigates the role of blogging in professional academic practice in higher education. It draws on interviews with a sample of academics (scholars, researchers and teachers) who have blogs and on the author's own reflections on blogging to investigate the function of blogging in academic practice and its contribution to academic identity. It argues that blogging offers the potential of a new genre of accessible academic production which could contribute to the creation of a new twenty-first century academic identity with more involvement as a public intellectual.
---
paper_title: Blogging and the Transformation of Legal Scholarship
paper_content:
Does blogging have anything to do with legal scholarship? Could blogging transform the legal academy? This paper suggests that these are the wrong questions. Blogs have plenty to do with legal scholarship - that's obvious. But what blogs have to do with legal scholarship isn't driven by anything special about blogs qua weblogs, qua collections of web pages that share the form of a journal or log. The relationship between blogging and the future of legal scholarship is a product of other forces - the emergence of the short form, the obsolescence of exclusive rights, and the trend towards the disintermediation of legal scholarship. Those forces and their relationship to blogging will be the primary focus of this paper. The transition from the "long form" to the "short form" involves movement from very long law review articles and multivolume treatises to new forms of legal scholarship, including the blog post, the idea piece, and the use of collaborative online authoring environments such as wikis. The transition from exclusive rights to open source requires publication in formats that provide full text searchability and the use of copyright to insure that scholarship can be freely downloaded and duplicated. The trend toward disintermediation reflects the diminished role of traditional intermediaries such as student and peer editorial boards and the growing role of search engines such as Google. These trends are the result of technology change and the fundamental forces that drive legal scholarship. Each of the three trends, the short form, open access, and disintermediation, reduces search costs and access costs to legal scholarship. Reducing costs has other important implications, including the facilitation of the globalization of legal scholarship and the reduction of lag times between the production and full-scale dissemination of new scholarship. Each of these important trends is facilitated by blogs and blogging, but the blog or weblog is only one form that these trends can take. Blogs express and facilitate the fundamental forces that are already transforming legal scholarship in fundamental ways.
---
paper_title: Why do academics blog? An analysis of audiences, purposes and challenges
paper_content:
Academics are increasingly being urged to blog in order to expand their audiences, create networks and to learn to write in more reader friendly style. This paper holds this advocacy up to empirical scrutiny. A content analysis of 100 academic blogs suggests that academics most commonly write about academic work conditions and policy contexts, share information and provide advice; the intended audience for this work is other higher education staff. We contend that academic blogging may constitute a community of practice in which a hybrid public/private academic operates in a ‘gift economy’. We note however that academic blogging is increasingly of interest to institutions and this may challenge some of the current practices we have recorded. We conclude that there is still much to learn about academic blogging practices.
---
paper_title: The roles, reasons and restrictions of science blogs.
paper_content:
Over the past few years, blogging ('web logging') has become a major social movement, and as such includes blogs by scientists about science. Blogs are highly idiosyncratic, personal and ephemeral means of public expression, and yet they contribute to the current practice and reputation of science as much as, if not more than, any popular scientific work or visual presentation. It is important, therefore, to understand this phenomenon.
---
paper_title: Bloggership, or is publishing a blog scholarship? A survey of academic librarians
paper_content:
Purpose – The aim of this paper is to gauge how academic libraries treat publishing a blog. Design/methodology/approach – As blogging becomes more popular, the question arises as to whether it should count as scholarship or a creative activity in academic promotion and tenure. To find out, the author sent a link to a questionnaire to several e‐mail lists, inviting academic librarians to answer a short survey. Findings – In total, 73.9 percent of respondents indicated that their institution expects them to engage in scholarly activities and/or publish scholarly articles, while 53.6 percent indicated that their performance review committees do not weigh a blog the same as an article published in a peer‐reviewed journal. Research limitations/implications – As technology changes, policies will need to change. Practical implications – Libraries may need to adapt to new forms of scholarship. Electronic scholarship needs a mechanism for peer‐review. Originality/value – The paper is original – the author did not find any ot...
---
paper_title: Examining the Medical Blogosphere: An Online Survey of Medical Bloggers
paper_content:
Background: Blogs are the major contributors to the large increase of new websites created each year. Most blogs allow readers to leave comments and, in this way, generate both conversation and encourage collaboration. Despite their popularity, however, little is known about blogs or their creators. ::: Objectives: To contribute to a better understanding of the medical blogosphere by investigating the characteristics of medical bloggers and their blogs, including bloggers’ Internet and blogging habits, their motivations for blogging, and whether or not they follow practices associated with journalism. ::: Methods: We approached 197 medical bloggers of English-language medical blogs which provided direct contact information, with posts published within the past month. The survey included 37 items designed to evaluate data about Internet and blogging habits, blog characteristics, blogging motivations, and, finally, the demographic data of bloggers. ::: Pearson’s Chi-Square test was used to assess the significance of an association between 2 categorical variables. Spearman’s rank correlation coefficient was utilized to reveal the relationship between participants’ ages, as well as the number of maintained blogs, and their motivation for blogging. The Mann-Whitney U test was employed to reveal relationships between practices associated with journalism and participants’ characteristics like gender and pseudonym use. ::: Results: A total of 80 (42%) of 197 eligible participants responded. The majority of responding bloggers were white (75%), highly educated (71% with a Masters degree or doctorate), male (59%), residents of the United States (72%), between the ages of 30 and 49 (58%), and working in the healthcare industry (67%). Most of them were experienced bloggers, with 23% (18/80) blogging for 4 or more years, 38% (30/80) for 2 or 3 years, 32% (26/80) for about a year, and only 7% (6/80) for 6 months or less. Those who received attention from the news media numbered 66% (53/80). When it comes to best practices associated with journalism, the participants most frequently reported including links to original source of material and spending extra time verifying facts, while rarely seeking permission to post copyrighted material. Bloggers who have published a scientific paper were more likely to quote other people or media than those who have never published such a paper (U= 506.5, n1= 41, n2= 35, P= .016). Those blogging under their real name more often included links to original sources than those writing under a pseudonym (U= 446.5, n1= 58, n2= 19, P= .01). Major motivations for blogging were sharing practical knowledge or skills with others, influencing the way others think, and expressing oneself creatively. ::: Conclusions: Medical bloggers are highly educated and devoted blog writers, faithful to their sources and readers. Sharing practical knowledge and skills, as well as influencing the way other people think, were major motivations for blogging among our medical bloggers. Medical blogs are frequently picked up by mainstream media; thus, blogs are an important vehicle to influence medical and health policy. [J Med Internet Res 2008;10(3):e28]
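The three tests named in this abstract (Pearson's chi-square, Spearman's rank correlation, Mann-Whitney U) are all available in SciPy; the sketch below runs them on invented survey-style variables, so the variable names and effects are assumptions, not the survey data.

```python
# Sketch of the statistical tests described in the abstract, on synthetic data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n = 80
uses_real_name = rng.integers(0, 2, size=n)                         # 0 = pseudonym, 1 = real name
links_sources = (rng.random(n) < np.where(uses_real_name == 1, 0.8, 0.5)).astype(int)
age = rng.integers(25, 65, size=n)
motivation_score = np.clip(5 - (age - 25) // 10 + rng.integers(-1, 2, size=n), 1, 5)
journalism_score = rng.integers(0, 10, size=n) + 3 * links_sources

# Association between two categorical variables (real name vs. linking practice).
table = np.histogram2d(uses_real_name, links_sources, bins=2)[0]    # 2x2 contingency table
chi2, p_chi2, _, _ = stats.chi2_contingency(table)
# Rank correlation between age and a blogging-motivation rating.
rho, p_rho = stats.spearmanr(age, motivation_score)
# Difference in journalism-practice scores between named and pseudonymous bloggers.
u, p_u = stats.mannwhitneyu(journalism_score[uses_real_name == 1],
                            journalism_score[uses_real_name == 0])
print(f"chi2 p={p_chi2:.3f}, Spearman rho={rho:.2f} (p={p_rho:.3f}), Mann-Whitney p={p_u:.3f}")
```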
---
paper_title: How Digital Are the Digital Humanities? An Analysis of Two Scholarly Blogging Platforms
paper_content:
In this paper we compare two academic networking platforms, HASTAC and Hypotheses, to show the distinct ways in which they serve specific communities in the Digital Humanities (DH) in different national and disciplinary contexts. After providing background information on both platforms, we apply co-word analysis and topic modeling to show thematic similarities and differences between the two sites, focusing particularly on how they frame DH as a new paradigm in humanities research. We encounter a much higher ratio of posts using humanities-related terms compared to their digital counterparts, suggesting a one-way dependency of digital humanities-related terms on the corresponding unprefixed labels. The results also show that the terms digital archive, digital literacy, and digital pedagogy are relatively independent from the respective unprefixed terms, and that digital publishing, digital libraries, and digital media show considerable cross-pollination between the specialization and the general noun. The topic modeling reproduces these findings and reveals further differences between the two platforms. Our findings also indicate local differences in how the emerging field of DH is conceptualized and show dynamic topical shifts inside these respective contexts.
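As a hedged illustration of the topic-modeling step mentioned above, the sketch below fits a tiny LDA model with scikit-learn; the four toy "posts" and the choice of two topics are assumptions for demonstration, not the HASTAC/Hypotheses corpora.

```python
# Tiny LDA sketch: extract topics from a handful of toy blog-post strings.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

posts = [
    "digital archive metadata preservation library collections",
    "digital pedagogy teaching classroom students learning",
    "text mining topic modeling corpus analysis visualization",
    "digital publishing open access journals peer review",
]
vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(posts)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

vocab = vec.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top_terms = [vocab[i] for i in weights.argsort()[-5:][::-1]]
    print(f"topic {k}: {', '.join(top_terms)}")
```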
---
paper_title: Science Blogging: Networks, Boundaries and Limitations
paper_content:
ABSTRACTThere is limited research into the realities of science blogging, how science bloggers themselves view their activity and what bloggers can achieve. The ‘badscience’ blogs analysed here show a number of interesting developments, with significant implications for understandings of science blogging and scientific cultures more broadly. A functioning and diverse online community (with offline elements) has been constructed, with a number of non-professional and anonymous members and with boundary work being used to establish a recognisable outgroup. The community has developed distinct norms alongside a type of distributed authority and has negotiated the authority, anonymity and varying status of many community members in some interesting and novel ways. Activist norms and initiatives have been actioned, with some prominent community campaigns and action. There are questions about what science blogging—both in the UK and internationally—may be able to achieve in future and about the fragility of the...
---
paper_title: Wired Academia: Why Social Science Scholars Are Using Social Media
paper_content:
Social media websites are having a significant impact on how collaborative relationships are formed and information is disseminated throughout society. While there is a large body of literature devoted to the ways in which the general public is making use of social media, there is little research regarding how such trends are impacting scholarly practices. This paper presents the results of a study on how academics, primarily in social sciences, are adopting these new sites.
---
paper_title: Beyond the Blog
paper_content:
This dissertation examines weblog community as a materially afforded and socially constructed space. In a set of three case studies, this dissertation examines three separate weblog communities bet ...
---
paper_title: Scholarly hyperwriting: The function of links in academic weblogs
paper_content:
Weblogs are gaining momentum as one of most versatile tools for online scholarly communication. Since academic weblogs tend to be used by scholars to position themselves in a disciplinary blogging community, links are essential to their construction. The aim of this article is to analyze the reasons for linking in academic weblogs and to determine how links are used for distribution of information, collaborative construction of knowledge, and construction of the blog's and the blogger's identity. For this purpose I analyzed types of links in 15 academic blogs, considering both sidebar links and in-post links. The results show that links are strategically used by academic bloggers for several purposes, among others to seek their place in a disciplinary community, to engage in hypertext conversations for collaborative construction of knowledge, to organize information in the blog, to publicize their research, to enhance the blog's visibility, and to optimize blog entries and the blog itself. © 2009 Wiley Periodicals, Inc.
---
paper_title: The Blogosphere in the Spanish Literary Field. Consequences and Challenges to Twenty-First Century Literature
paper_content:
This paper analyses the impact of blogs on the Hispanic literary field, and how the spreading of blogs concerned with literary critics and poetry is questioning traditional hierarchies. It looks at how literary reviews and critics on the Internet are superseding traditional criticism, which has been accused of an excessive interdependence between publishers, newspapers, poets, and critics. It will be argued that, in this competition between paper and Internet media, the former often functions as a conservative academic bastion whose prestige is being constantly contested. In addition, attention will be paid to websites like Las afinidades electivas, defined as ‘an attempt for virtual interconnection between contemporary Spanish poets’, social networks which recently have influenced other collective initiatives. In all these cases, the focus will be on the formal and thematic challenges which this new medium of diffusion offers for the development of a new literature.
---
paper_title: Research Blogs and the Discussion of Scholarly Information
paper_content:
The research blog has become a popular mechanism for the quick discussion of scholarly information. However, unlike peer-reviewed journals, the characteristics of this form of scientific discourse are not well understood, for example in terms of the spread of blogger levels of education, gender and institutional affiliations. In this paper we fill this gap by analyzing a sample of blog posts discussing science via an aggregator called ResearchBlogging.org (RB). ResearchBlogging.org aggregates posts based on peer-reviewed research and allows bloggers to cite their sources in a scholarly manner. We studied the bloggers, blog posts and referenced journals of bloggers who posted at least 20 items. We found that RB bloggers show a preference for papers from high-impact journals and blog mostly about research in the life and behavioral sciences. The most frequently referenced journal sources in the sample were: Science, Nature, PNAS and PLoS One. Most of the bloggers in our sample had active Twitter accounts connected with their blogs, and at least 90% of these accounts connect to at least one other RB-related Twitter account. The average RB blogger in our sample is male, either a graduate student or has been awarded a PhD and blogs under his own name.
---
paper_title: Science blogs as boundary layers: Creating and understanding new writer and reader interactions through science blogging
paper_content:
This study examines the affordances that journalistic science blogging offers at the boundaries between science communicators, researchers, non-scientists, and other readers. Taking a framework of boundary phenomena, it examines, as a case study, the blog Not Exactly Rocket Science and in particular two posts that spawned a collaboration between a scientist and a farmer. Two existing boundary phenomena, boundary objects and boundary organizations, are examined as possible models for understanding the interactions facilitated by this science blog. These existing phenomena are argued to not adequately account for and describe the interactions between people and information facilitated by the case study posts. To better understand science blogging boundaries, a new category of boundary phenomenon – the boundary layer – is proposed.
---
paper_title: Blogs and the Promotion and Tenure Letter
paper_content:
Should blogs count as legal scholarship for purposes of tenure? This essay looks at the line between legal scholarship and service. It considers the value of blogging and the role it should play in a person's tenure review.
---
paper_title: I am a blogging researcher: Motivations for blogging in a scholarly context
paper_content:
The number of scholarly blogs on the Web is increasing. In this article, a group of researchers are asked to describe the functions that their blogs serve for them as researchers. The results show that their blogging is motivated by the possibility to share knowledge, that the blog aids creativity, and that it provides a feeling of being connected in their work as researchers. The blog serves in particular as a creative catalyst in the work of researchers, where writing forms a large part, which is not as prominent as a motivation in other professional blogs. In addition, the analysis brings out the blogs' combination of functions and the possibility it offers to reach multiple audiences as a motivating factor that makes the blog different from other kinds of communication in scholarly contexts.
---
paper_title: Tweeting Links to Academic Articles
paper_content:
Academic articles are now frequently tweeted and so Twitter seems to be a useful tool for scholars to use to help keep up with publications and discussions in their fields. Perhaps as a result of this, tweet counts are increasingly used by digital libraries and journal websites as indicators of an article's interest or impact. Nevertheless, it is not known whether tweets are typically positive, neutral or critical, or how articles are normally tweeted. These are problems for those wishing to tweet articles effectively and for those wishing to know whether tweet counts in digital libraries should be taken seriously. In response, a pilot study content analysis was conducted of 270 tweets linking to articles in four journals, four digital libraries and two DOI URLs, collected over a period of eight months in 2012. The vast majority of the tweets echoed an article title (42%) or a brief summary (41%). One reason for summarising an article seemed to be to translate it for a general audience. Few tweets explicitly praised an article and none were critical. Most tweets did not directly refer to the article author, but some did and others were clearly self-citations. In summary, tweets containing links to scholarly articles generally provide little more than publicity, and so whilst tweet counts may provide evidence of the popularity of an article, the contents of the tweets themselves are unlikely to give deep insights into scientists' reactions to publications, except perhaps in special cases.
---
paper_title: Presenting professorship on social media: from content and strategy to evaluation
paper_content:
Technology has helped to reform class dynamics and teacher-student relationships. Although the phenomenon of online presentation has drawn considerable scholarly attention, academics seem to have an incomplete understanding about their own presentations online, especially in using social media. Without a thorough examination of how academics present themselves on social media, our understanding of the online learning environment is limited. To fill this void, this study aims to explore the following: (1) the content that college professors provide and the strategy they employ to present it on social media; and (2) how the public evaluates professors based on the content and strategy presented on social media. This study utilizes two methods. First, it conducts a content analysis of 2,783 pieces of microblog posts from 142 full-time communication professors' microblog accounts. Second, it conducts an online experiment based on a between-subject factorial design of 2 (gender: male vs. female) X 3 (topic: pe...
---
paper_title: Adapting sentiment analysis for tweets linking to scientific papers
paper_content:
Introduction: In the context of “altmetrics”, tweets have been discussed as potential indicators of immediate and broader societal impact of scientific documents (Thelwall et al., 2013a). However, it is not yet clear to what extent Twitter captures actual research impact. A small case study (Thelwall et al., 2013b) suggests that tweets to journal articles neither comment on nor express any sentiments towards the publication, which suggests that tweets merely disseminate bibliographic information, often even automatically (Haustein et al., in press). This study analyses the sentiments of tweets for a large representative set of scientific papers by specifically adapting different methods to academic articles distributed on Twitter. The aim is to improve the understanding of Twitter’s role in scholarly communication and the meaning of tweets as impact metrics.
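A toy lexicon-based scorer can illustrate what sentiment classification of paper-linking tweets involves; this is a simplified stand-in for the adapted methods discussed in the abstract, and the word lists, example tweets and neutrality rule are assumptions.

```python
# Toy lexicon-based sentiment scoring of tweets that link to papers.
import re

POSITIVE = {"great", "excellent", "important", "interesting", "nice", "impressive"}
NEGATIVE = {"flawed", "wrong", "misleading", "overhyped", "weak", "bad"}

def tweet_sentiment(text: str) -> int:
    """Return +1, -1 or 0 depending on which (toy) lexicon dominates."""
    words = set(re.findall(r"[a-z']+", text.lower()))
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    return (score > 0) - (score < 0)

tweets = [
    "Interesting new paper on altmetrics http://doi.org/xx",   # positive cue
    "New paper on altmetrics http://doi.org/xx",               # bare dissemination, no sentiment
    "This study looks badly flawed http://doi.org/xx",         # negative cue
]
counts = {-1: 0, 0: 0, 1: 0}
for t in tweets:
    counts[tweet_sentiment(t)] += 1
print(counts)   # in practice most paper-linking tweets land in the neutral bucket
```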
---
paper_title: Can Tweets Predict Citations? Metrics of Social Impact Based on Twitter and Correlation with Traditional Metrics of Scientific Impact
paper_content:
Background: Citations in peer-reviewed articles and the impact factor are generally accepted measures of scientific impact. Web 2.0 tools such as Twitter, blogs or social bookmarking tools provide the possibility to construct innovative article-level or journal-level metrics to gauge impact and influence. However, the relationship of these new metrics to traditional metrics such as citations is not known. Objective: (1) To explore the feasibility of measuring social impact of and public attention to scholarly articles by analyzing buzz in social media, (2) to explore the dynamics, content, and timing of tweets relative to the publication of a scholarly article, and (3) to explore whether these metrics are sensitive and specific enough to predict highly cited articles. Methods: Between July 2008 and November 2011, all tweets containing links to articles in the Journal of Medical Internet Research (JMIR) were mined. For a subset of 1573 tweets about 55 articles published between issues 3/2009 and 2/2010, different metrics of social media impact were calculated and compared against subsequent citation data from Scopus and Google Scholar 17 to 29 months later. A heuristic to predict the top-cited articles in each issue through tweet metrics was validated. Results: A total of 4208 tweets cited 286 distinct JMIR articles. The distribution of tweets over the first 30 days after article publication followed a power law (Zipf, Bradford, or Pareto distribution), with most tweets sent on the day when an article was published (1458/3318, 43.94% of all tweets in a 60-day period) or on the following day (528/3318, 15.9%), followed by a rapid decay. The Pearson correlations between tweetations and citations were moderate and statistically significant, with correlation coefficients ranging from .42 to .72 for the log-transformed Google Scholar citations, but were less clear for Scopus citations and rank correlations. A linear multivariate model with time and tweets as significant predictors (P < .001) could explain 27% of the variation of citations. Highly tweeted articles were 11 times more likely to be highly cited than less-tweeted articles (9/12 or 75% of highly tweeted articles were highly cited, while only 3/43 or 7% of less-tweeted articles were highly cited; rate ratio 0.75/0.07 = 10.75, 95% confidence interval 3.4–33.6). Top-cited articles can be predicted from top-tweeted articles with 93% specificity and 75% sensitivity. Conclusions: Tweets can predict highly cited articles within the first 3 days of article publication. Social media activity either increases citations or reflects the underlying qualities of the article that also predict citations, but the true use of these metrics is to measure the distinct concept of social impact. Social impact measures based on tweets are proposed to complement traditional citation metrics. The proposed twimpact factor may be a useful and timely metric to measure uptake of research findings and to filter research findings resonating with the public in real time. [J Med Internet Res 2011;13(4):e123]
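The two headline numbers in this abstract, the tweet-citation correlation and the 10.75 rate ratio, can be illustrated as follows; the per-article counts are synthetic stand-ins, while the 9/12 and 3/43 proportions are taken directly from the abstract.

```python
# Sketch: correlation between early tweets and later (log-transformed) citations,
# plus the rate-ratio arithmetic quoted in the abstract.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
tweets = rng.poisson(lam=10, size=55)                 # synthetic early tweet counts
citations = rng.poisson(lam=2 + 0.8 * tweets)         # hypothetical link between the two
r, p = stats.pearsonr(tweets, np.log1p(citations))
print(f"Pearson r (tweets vs. log citations) = {r:.2f}, p = {p:.3g}")

# Rate ratio from the abstract: highly tweeted vs. less-tweeted articles that were highly cited.
highly_tweeted_rate = 9 / 12      # 75%
less_tweeted_rate = 3 / 43        # ~7%
print(f"rate ratio = {highly_tweeted_rate / less_tweeted_rate:.2f}")   # ~10.75
```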
---
paper_title: International Urology Journal Club via Twitter: 12-Month Experience
paper_content:
Background: Online journal clubs have increasingly been utilised to overcome the limitations of the traditional journal club. However, to date, no reported online journal club is available for international participation. Objective: To present a 12-mo experience from the International Urology Journal Club, the world's first international journal club using Twitter, an online micro-blogging platform, and to demonstrate the viability and sustainability of such a journal club. Design, setting, and participants: #urojc is an asynchronous 48-h monthly journal club moderated by the Twitter account @iurojc. The open invitation discussions focussed on papers typically published within the previous 2–4 wk. Data were obtained via third-party Twitter analysis services. Outcome measurements and statistical analysis: Outcomes analysed included number of total and new users, number of tweets, and qualitative analysis of the relevance of tweets. Analysis was undertaken using GraphPad software, Microsoft Excel, and thematic qualitative analysis. Results and limitations: The first 12 mo saw a total of 189 unique users representing 19 countries and 6 continents. There was a mean of 39 monthly participants that included 14 first-time participants per month. The mean number of tweets per month was 195 of which 62% represented original tweets directly related to the topic of discussion and 22% represented retweets of original posts. A mean of 130 832 impressions, or reach, were created per month. The @iurojc moderator account has accumulated >1000 followers. The study is limited by potentially incomplete data extracted by third-party Twitter analysers. Conclusions: Social media provides a potential for enormous international communication that has not been possible in the past. We believe the pioneering #urojc is both viable and sustainable. There is unlimited scope for journal clubs in other fields to follow the example of #urojc and utilise online portals to revitalise the traditional journal club while fostering international relationships.
---
paper_title: Increased Use of Twitter at a Medical Conference: A Report and a Review of the Educational Opportunities
paper_content:
BACKGROUND ::: Most consider Twitter as a tool purely for social networking. However, it has been used extensively as a tool for online discussion at nonmedical and medical conferences, and the academic benefits of this tool have been reported. Most anesthetists still have yet to adopt this new educational tool. There is only one previously published report of the use of Twitter by anesthetists at an anesthetic conference. This paper extends that work. ::: ::: ::: OBJECTIVE ::: We report the uptake and growth in the use of Twitter, a microblogging tool, at an anesthetic conference and review the potential use of Twitter as an educational tool for anesthetists. ::: ::: ::: METHODS ::: A unique Twitter hashtag (#WSM12) was created and promoted by the organizers of the Winter Scientific Meeting held by The Association of Anaesthetists of Great Britain and Ireland (AAGBI) in London in January 2012. Twitter activity was compared with Twitter activity previously reported for the AAGBI Annual Conference (September 2011 in Edinburgh). All tweets posted were categorized according to the person making the tweet and the purpose for which they were being used. The categories were determined from a literature review. ::: ::: ::: RESULTS ::: A total of 227 tweets were posted under the #WSM12 hashtag representing a 530% increase over the previously reported anesthetic conference. Sixteen people joined the Twitter stream by using this hashtag (300% increase). Excellent agreement (κ = 0.924) was seen in the classification of tweets across the 11 categories. Delegates primarily tweeted to create and disseminate notes and learning points (55%), describe which session was attended, undertake discussions, encourage speakers, and for social reasons. In addition, the conference organizers, trade exhibitors, speakers, and anesthetists who did not attend the conference all contributed to the Twitter stream. The combined total number of followers of those who actively tweeted represented a potential audience of 3603 people. ::: ::: ::: CONCLUSIONS ::: This report demonstrates an increase in uptake and growth in the use of Twitter at an anesthetic conference and the review illustrates the opportunities and benefits for medical education in the future.
---
paper_title: Disciplinary differences in Twitter scholarly communication
paper_content:
This paper investigates disciplinary differences in how researchers use the microblogging site Twitter. Tweets from selected researchers in ten disciplines (astrophysics, biochemistry, digital humanities, economics, history of science, cheminformatics, cognitive science, drug discovery, social network analysis, and sociology) were collected and analyzed both statistically and qualitatively. The researchers tended to share more links and retweet more than the average Twitter users in earlier research and there were clear disciplinary differences in how they used Twitter. Biochemists retweeted substantially more than researchers in the other disciplines. Researchers in digital humanities and cognitive science used Twitter more for conversations, while researchers in economics shared the most links. Finally, whilst researchers in biochemistry, astrophysics, cheminformatics and digital humanities seemed to use Twitter for scholarly communication, scientific use of Twitter in economics, sociology and history of science appeared to be marginal.
---
paper_title: Tweeting biomedicine: an analysis of tweets and citations in the biomedical literature
paper_content:
Data collected by social media platforms have been introduced as new sources for indicators to help measure the impact of scholarly research in ways that are complementary to traditional citation analysis. Data generated from social media activities can be used to reflect broad types of impact. This article aims to provide systematic evidence about how often Twitter is used to disseminate information about journal articles in the biomedical sciences. The analysis is based on 1.4 million documents covered by both PubMed and Web of Science and published between 2010 and 2012. The number of tweets containing links to these documents was analyzed and compared to citations to evaluate the degree to which certain journals, disciplines, and specialties were represented on Twitter and how far tweets correlate with citation impact. With less than 10% of PubMed articles mentioned on Twitter, its uptake is low in general but differs between journals and specialties. Correlations between tweets and citations are low, implying that impact metrics based on tweets are different from those based on citations. A framework using the coverage of articles and the correlation between Twitter mentions and citations is proposed to facilitate the evaluation of novel social-media-based metrics.
---
paper_title: Social media in radiology: early trends in Twitter microblogging at radiology's largest international meeting.
paper_content:
PURPOSE ::: Twitter is a social media microblogging platform that allows rapid exchange of information between individuals. Despite its widespread acceptance and use at various other medical specialty meetings, there are no published data evaluating its use at radiology meetings. The purpose of this study is to quantitatively and qualitatively evaluate the use of Twitter as a microblogging platform at recent RSNA annual meetings. ::: ::: ::: METHODS ::: Twitter activity meta-data tagged with official meeting hashtags #RSNA11 and #RSNA12 were collected and analyzed. Multiple metrics were evaluated, including daily and hourly Twitter activity, frequency of microblogging activity over time, characteristics of the 100 most active Twitter users at each meeting, characteristics of meeting-related tweets, and the geographic origin of meeting microbloggers. ::: ::: ::: RESULTS ::: The use of Twitter microblogging increased by at least 30% by all identifiable meaningful metrics between the 2011 and 2012 RSNA annual meetings, including total tweets, tweets per day, activity of the most active microbloggers, and total number of microbloggers. Similar increases were observed in numbers of North American and international microbloggers. ::: ::: ::: CONCLUSION ::: Markedly increased use of the Twitter microblogging platform at recent RSNA annual meetings demonstrates the potential to leverage this technology to engage meeting attendees, improve scientific sessions, and promote improved collaboration at national radiology meetings.
---
paper_title: Identifying and analyzing researchers on twitter
paper_content:
For millions of users Twitter is an important communication platform, a social network, and a system for resource sharing. Likewise, scientists use Twitter to connect with other researchers, announce calls for papers, or share their thoughts. Filtering tweets, discovering other researchers, or finding relevant information on a topic of interest, however, is difficult since no directory of researchers on Twitter exists. In this paper we present an approach to identify Twitter accounts of researchers and demonstrate its utility for the discipline of computer science. Based on a seed set of computer science conferences, we collect relevant Twitter users, which we can partially map to ground-truth data. The mapping is leveraged to learn a model for classifying the remaining accounts. To gain first insights into how researchers use Twitter, we empirically analyze the identified users and compare their age, popularity, influence, and social network.
---
paper_title: Adoption and use of Web 2.0 in scholarly communications
paper_content:
Sharing research resources of different kinds, in new ways, and on an increasing scale, is a central element of the unfolding e-Research vision. Web 2.0 is seen as providing the technical platform to enable these new forms of scholarly communications. We report findings from a study of the use of Web 2.0 services by UK researchers and their use in novel forms of scholarly communication. We document the contours of adoption, the barriers and enablers, and the dynamics of innovation in Web services and scholarly practices. We conclude by considering the steps that different stakeholders might take to encourage greater experimentation and uptake.
---
paper_title: Astrophysicists on Twitter: An in-depth analysis of tweeting and scientific publication behavior
paper_content:
Purpose – The purpose of this paper is to analyze the tweeting behavior of 37 astrophysicists on Twitter and compare their tweeting behavior with their publication behavior and citation impact to show whether they tweet research-related topics or not. Design/methodology/approach – Astrophysicists on Twitter are selected to compare their tweets with their publications from Web of Science. Different user groups are identified based on tweeting and publication frequency. Findings – A moderate negative correlation (ρ=−0.339) is found between the number of publications and tweets per day, while retweet and citation rates do not correlate. The similarity between tweets and abstracts is very low (cos=0.081). User groups show different tweeting behavior such as retweeting and including hashtags, usernames and URLs. Research limitations/implications – The study is limited in terms of the small set of astrophysicists. Results are not necessarily representative of the entire astrophysicist community on Twitter and they most certainly do not apply to scientists in general. Future research should apply the methods to a larger set of researchers and other scientific disciplines. Practical implications – To a certain extent, this study helps to understand how researchers use Twitter. The results hint at the fact that impact on Twitter can neither be equated with nor replace traditional research impact metrics. However, tweets and other so-called altmetrics might be able to reflect other impact of scientists such as public outreach and science communication. Originality/value – To the best of the authors' knowledge, this is the first in-depth study comparing researchers' tweeting activity and behavior with scientific publication output in terms of quantity, content and impact.
---
paper_title: A Case Study in Serendipity: Environmental Researchers Use of Traditional and Social Media for Dissemination
paper_content:
In the face of demands for researchers to engage more actively with a wider range of publics and to capture different kinds of research impacts and engagements, we explored the ways a small number of environmental researchers use traditional and social media to disseminate research. A questionnaire was developed to investigate the impact of different media as a tool to broker contact between researchers and a variety of different stakeholders (for example, publics, other researchers, policymakers, journalists) as well as how researchers perceive that their use of these media has changed over the past five years. The questionnaire was sent to 504 researchers whose work had featured in a policy-oriented e-news service. 149 valid responses were received (29%). Coverage in traditional media (newspapers, broadcast) not only brokers contact with other journalists, but is a good source of contact from other researchers (n=47, 62%) and members of the public (n=36, 26%). Although the use of social media was limited amongst our sample, it did broker contact with other researchers (n=17, 47%) and the public (n=10, 28%). Nevertheless, few environmental researchers were actively using social media to disseminate their research findings, with many continuing to rely on academic journals and face-to-face communication to reach both academic and public audiences.
---
paper_title: How the Scientific Community Reacts to Newly Submitted Preprints: Article Downloads, Twitter Mentions, and Citations
paper_content:
We analyze the online response to the preprint publication of a cohort of 4,606 scientific articles submitted to the preprint database arXiv.org between October 2010 and May 2011. We study three forms of responses to these preprints: downloads on the arXiv.org site, mentions on the social media site Twitter, and early citations in the scholarly record. We perform two analyses. First, we analyze the delay and time span of article downloads and Twitter mentions following submission, to understand the temporal configuration of these reactions and whether one precedes or follows the other. Second, we run regression and correlation tests to investigate the relationship between Twitter mentions, arXiv downloads, and article citations. We find that Twitter mentions and arXiv downloads of scholarly articles follow two distinct temporal patterns of activity, with Twitter mentions having shorter delays and narrower time spans than arXiv downloads. We also find that the volume of Twitter mentions is statistically correlated with arXiv downloads and early citations just months after the publication of a preprint, with a possible bias that favors highly mentioned articles.
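For clarity, the "delay" and "time span" measures discussed above can be operationalised as in the brief sketch below; the timestamps are hypothetical placeholders, not data from the study.

# Hypothetical timestamps; one straightforward way to compute the delay from
# submission to the first mention and the span from first to last mention.
from datetime import datetime

submitted = datetime(2011, 3, 1)
mentions = [datetime(2011, 3, 2, 9), datetime(2011, 3, 2, 18), datetime(2011, 3, 6)]

delay = min(mentions) - submitted          # submission -> first mention
time_span = max(mentions) - min(mentions)  # first -> last mention
print(delay, time_span)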
---
paper_title: Geographic variation in social media metrics: an analysis of Latin American journal articles
paper_content:
Purpose – The purpose of this study is to contribute to the understanding of how the potential of altmetrics varies around the world by measuring the percentage of articles with non-zero metrics (coverage) for articles published from a developing region (Latin America). Design/methodology/approach – This study uses article metadata from a prominent Latin American journal portal, SciELO, and combines it with altmetrics data from Altmetric.com and with data collected by author-written scripts. The study is primarily descriptive, focusing on coverage levels disaggregated by year, country, subject area, and language. Findings – Coverage levels for most of the social media sources studied was zero or negligible. Only three metrics had coverage levels above 2 per cent – Mendeley, Twitter, and Facebook. Of these, Twitter showed the most significant differences with previous studies. Mendeley coverage levels reach those found by previous studies, but it takes up to two years longer for articles to be saved in the...
---
paper_title: Do highly cited researchers successfully use the social web?
paper_content:
Academics can now use the web and the social websites to disseminate scholarly information in a variety of different ways. Although some scholars have taken advantage of these new online opportunities, it is not clear how widespread their uptake is or how much impact they can have. This study assesses the extent to which successful scientists have social web presences, focusing on one influential group: highly cited researchers working at European institutions. It also assesses the impact of these presences. We manually and systematically identified if the European highly cited researchers had profiles in Google Scholar, Microsoft Academic Search, Mendeley, Academia and LinkedIn or any content in SlideShare. We then used URL mentions and altmetric indicators to assess the impact of the web presences found. Although most of the scientists had an institutional website of some kind, few had created a profile in any social website investigated, and LinkedIn--the only non-academic site in the list--was the most popular. Scientists having one kind of social web profile were more likely to have another in many cases, especially in the life sciences and engineering. In most cases it was possible to estimate the relative impact of the profiles using a readily available statistic and there were disciplinary differences in the impact of the different kinds of profiles. Most social web profiles had some evidence of uptake, if not impact; nevertheless, the value of the indicators used is unclear.
---
paper_title: Use of social media in urology: data from the American Urological Association (AUA)
paper_content:
Objective To characterise the use of social media among members of the American Urological Association (AUA), as the use of social media in medicine has greatly expanded in recent years. Subjects and Methods In December 2012 to January 2013, the AUA e-mailed a survey with 34 questions on social media use to 2000 randomly selected urologists and 2047 resident/fellow members. Additional data was collected from Symplur analytics on social media use surrounding the AUA Annual Meeting in May 2013. Results In all, 382 (9.4%) surveys were completed, indicating 74% of responders had an online social media account. The most commonly used social media platforms were Facebook (93%), followed in descending order by LinkedIn (46%), Twitter (36%) and Google+ (26%). Being aged <40 years was associated with social media use, and Twitter activity surrounding the 2013 AUA Annual Meeting generated >5000 tweets from >600 distinct contributors. Conclusion As of early 2013, among respondents to an e-mail survey, most urologists and urology trainees used some form of social media, and its use in urology conferences has greatly expanded.
---
paper_title: Mapping Physician Twitter Networks: Describing How They Work as a First Step in Understanding Connectivity, Information Flow, and Message Diffusion
paper_content:
Background: Twitter is becoming an important tool in medicine, but there is little information on Twitter metrics. In order to recommend best practices for information dissemination and diffusion, it is important to first study and analyze the networks. Objective: This study describes the characteristics of four medical networks, analyzes their theoretical dissemination potential, their actual dissemination, and the propagation and distribution of tweets. Methods: Open Twitter data was used to characterize four networks: the American Medical Association (AMA), the American Academy of Family Physicians (AAFP), the American Academy of Pediatrics (AAP), and the American College of Physicians (ACP). Data were collected between July 2012 and September 2012. Visualization was used to understand the follower overlap between the groups. Actual flow of the tweets for each group was assessed. Tweets were examined using Topsy, a Twitter data aggregator. Results: The theoretical information dissemination potential for the groups is large. A collective community is emerging, where large percentages of individuals are following more than one of the groups. The overlap across groups is small, indicating a limited amount of community cohesion and cross-fertilization. The AMA followers’ network is not as active as the other networks. The AMA posted the largest number of tweets while the AAP posted the fewest. The number of retweets for each organization was low indicating dissemination that is far below its potential. Conclusions: To increase the dissemination potential, medical groups should develop a more cohesive community of shared followers. Tweet content must be engaging to provide a hook for retweeting and reaching potential audience. Next steps call for content analysis, assessment of the behavior and actions of the messengers and the recipients, and a larger-scale study that considers other medical groups using Twitter. [J Med Internet Res 2014;16(4):e107]
---
paper_title: Analysis of emergency physicians' Twitter accounts
paper_content:
Background Twitter is one of the fastest growing social media networks for communication between users via short messages. Technology proficient physicians have demonstrated enthusiasm in adopting social media for their work. Objective To identify and create the largest directory of emergency physicians on Twitter, analyse their user accounts and reveal details behind their connections. Methods Several web search tools were used to identify emergency physicians on Twitter with biographies completely or partially written in English. NodeXL software was used to calculate emergency physicians' Twitter network metrics and create visualisation graphs. Results The authors found 672 Twitter accounts of self-identified emergency physicians. Protected accounts were excluded from the study, leaving 632 for further analysis. Most emergency physicians were located in USA (55.4%), had created their accounts in 2009 (43.4%), used their full personal name (77.5%) and provided a custom profile picture (92.2%). Based on at least one published tweet in the last 15 days, there were 345 (54.6%) active users on 31 December 2011. Active users mostly used mobile devices based on the Apple operating system to publish tweets (69.2%). Visualisation of emergency physicians' Twitter network revealed many users with no connections with their colleagues, and a small group of most influential users who were highly interconnected. Conclusions Only a small proportion of registered emergency physicians use Twitter. Among them exists a smaller inner network of emergency physicians with strong social bonds that is using Twitter's full potentials for professional development.
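To illustrate the kind of follower-network metrics such studies compute, the sketch below uses the networkx library as a stand-in for NodeXL (the tool actually used in the study); the account names and edges are hypothetical.

# Illustrative sketch: networkx stands in for NodeXL, and the account names
# and follower edges below are hypothetical, not data from the study.
import networkx as nx

edges = [("em_doc1", "em_doc2"), ("em_doc1", "em_doc3"),
         ("em_doc2", "em_doc3"), ("em_doc4", "em_doc3")]  # (follower, followed)

G = nx.DiGraph()
G.add_edges_from(edges)
G.add_node("em_doc5")  # an account with no connections to colleagues

in_degree = dict(G.in_degree())               # followers within the network
betweenness = nx.betweenness_centrality(G)    # who bridges subgroups
isolated = [n for n in G.nodes if G.degree(n) == 0]

print(in_degree, isolated)
print(max(betweenness, key=betweenness.get))  # most "bridging" account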
---
paper_title: What does Twitter Measure?: Influence of Diverse User Groups in Altmetrics
paper_content:
The most important goal for digital libraries is to ensure a high-quality search experience for all kinds of users. To attain this goal, it is necessary to have as much relevant metadata as possible at hand to assess the quality of publications. Recently, a new group of metrics has appeared that has the potential to raise the quality of publication metadata to the next level -- the altmetrics. These metrics try to reflect the impact of publications within the social web. However, it is currently still unclear if and how altmetrics should be used to assess the quality of a publication and how altmetrics are related to classical bibliographical metrics (e.g. citations). To gain more insights about what kind of concepts are reflected by altmetrics, we conducted an in-depth analysis on a real-world dataset crawled from the Public Library of Science (PLOS). In particular, we analyzed whether the common approach of regarding the users in the social web as one homogeneous group is sensible or whether users need to be divided into diverse groups in order to receive meaningful results.
---
paper_title: Twitter use at a family medicine conference: analyzing #STFM13.
paper_content:
BACKGROUND ::: The use of social media is expanding in medicine. A few articles sought to describe participant behavior using Twitter at scientific conferences. Family physicians are known as active participants in social media, but their behavior and practices at conferences have not been methodically described. ::: ::: ::: METHODS ::: We recorded all public tweets at the 2013 Society of Teachers of Family Medicine (STFM) Annual Spring Conference bearing the hashtag #STFM13, using commercially available services. We created a transcript of all tweets for the 5 days of the conference and 3 days before and after. We looked at the total number of tweets, number of original tweets and re-tweets, active users, most prolific users, and impressions. We categorized the content based on (1) Session related, (2) Social, (3) Logistics, (4) Ads, and (5) Other. We compared major metrics (but not content) to the 2012 STFM Annual Spring Conference. ::: ::: ::: RESULTS ::: There were a total of 1,818 tweets from 181 user accounts: 13% of the conference registrants. The top tweeter accounted for over 15% of the total tweets, and the top 10 accounted for over 50% of the total volume. Most original tweets (69.7%) were related to session content. Social content came in second (14.2%), followed by other, logistics, and advertisement (7.6%, 6.9%, 1.6%). ::: ::: ::: CONCLUSIONS ::: This preliminary analysis provides an initial snapshot of twitter activity at a family medicine conference. It may suggest avenues for further inquiry: trend identification, "influencer" identification, and qualitative analysis. Interdisciplinary research should focus on evaluation methods that can assess the quality, value, and impact of tweeting.
---
paper_title: The Kardashian index: a measure of discrepant social media profile for scientists
paper_content:
In the era of social media there are now many different ways that a scientist can build their public profile; the publication of high-quality scientific papers being just one. While social media is a valuable tool for outreach and the sharing of ideas, there is a danger that this form of communication is gaining too high a value and that we are losing sight of key metrics of scientific value, such as citation indices. To help quantify this, I propose the ‘Kardashian Index’, a measure of discrepancy between a scientist’s social media profile and publication record based on the direct comparison of numbers of citations and Twitter followers.
---
paper_title: Social media use in the research workflow
paper_content:
The paper reports on a major international survey, covering 2,000 researchers, which investigated the use of social media in the research workflow. The topic is the second to emerge from the Charleston Observatory, the research adjunct of the popular annual Charleston Conference (http://www.katina.info/conference/). The study shows that social media have found serious application at all points of the research lifecycle, from identifying research opportunities to disseminating findings at the end. The three most popular social media tools in a research setting were those for collaborative authoring, conferencing, and scheduling meetings. The most popular brands used tend to be mainstream anchor technologies or 'household brands', such as Twitter. Age is a poor predictor of social media use in a research context, and humanities and social science scholars avail themselves most of social media. Journals, conference proceedings, and edited books remain the core traditional means of disseminating research, with institutional repositories highly valued as well, but social media have become important complementary channels for disseminating and discovering research.
---
paper_title: Altmetrics in the wild: Using social media to explore scholarly impact
paper_content:
In growing numbers, scholars are integrating social media tools like blogs, Twitter, and Mendeley into their professional communications. The online, public nature of these tools exposes and reifies scholarly processes once hidden and ephemeral. Metrics based on these activities could inform broader, faster measures of impact, complementing traditional citation metrics. This study explores the properties of these social media-based metrics or "altmetrics", sampling 24,331 articles published by the Public Library of Science. ::: We find that different indicators vary greatly in activity. Around 5% of sampled articles are cited in Wikipedia, while close to 80% have been included in at least one Mendeley library. There is, however, an encouraging diversity; a quarter of articles have nonzero data from five or more different sources. Correlation and factor analysis suggest citation and altmetrics indicators track related but distinct impacts, with neither able to describe the complete picture of scholarly use alone. There are moderate correlations between Mendeley and Web of Science citation, but many altmetric indicators seem to measure impact mostly orthogonal to citation. Articles cluster in ways that suggest five different impact "flavors", capturing impacts of different types on different audiences; for instance, some articles may be heavily read and saved by scholars but seldom cited. Together, these findings encourage more research into altmetrics as complements to traditional citation measures.
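A minimal sketch of how the per-source coverage and the correlations between indicators described above can be computed is given below; the pandas code, column names, and values are illustrative assumptions rather than the study's actual analysis pipeline.

# Illustrative only: hypothetical counts per article; the study's own data
# collection and factor analysis are not reproduced here.
import pandas as pd

df = pd.DataFrame({
    "wos_citations":      [0, 3, 12, 1, 0, 7],
    "mendeley_readers":   [2, 10, 45, 0, 1, 20],
    "tweets":             [0, 0, 8, 0, 0, 3],
    "wikipedia_mentions": [0, 0, 1, 0, 0, 0],
})

coverage = (df > 0).mean()          # share of articles with nonzero counts per source
corr = df.corr(method="spearman")   # rank correlations between indicators

print(coverage.round(2))
print(corr.round(2))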
---
paper_title: Influence of study type on Twitter activity for medical research papers
paper_content:
Twitter has been identified as one of the most popular and promising altmetrics data sources, as it possibly reflects a broader use of research articles by the general public. Several factors, such as document age, scientific discipline, number of authors and document type, have been shown to affect the number of tweets received by scientific documents. The particular meaning of tweets mentioning scholarly papers is, however, not entirely understood and their validity as impact indicators debatable. This study contributes to the understanding of factors influencing Twitter popularity of medical papers investigating differences between medical study types. 162,830 documents indexed in Embase to a medical study type have been analysed for the study type specific tweet frequency. Meta-analyses, systematic reviews and clinical trials were found to be tweeted substantially more frequently than other study types, while all basic research received less attention than the average. The findings correspond well with clinical evidence hierarchies. It is suggested that interest from laymen and patients may be a factor in the observed effects.
---
paper_title: Citation Analysis in Twitter: Approaches for Defining and Measuring Information Flows within Tweets during Scientific Conferences
paper_content:
This paper investigates Twitter usage in scientific contexts, particularly the use of Twitter during scientific conferences. It proposes a methodology for capturing and analyzing citations/references in Twitter. First results are presented based on the analysis of tweets gathered for two conference hashtags.
---
paper_title: Do Altmetrics Work? Twitter and Ten Other Social Web Services
paper_content:
Altmetric measurements derived from the social web are increasingly advocated and used as early indicators of article impact and usefulness. Nevertheless, there is a lack of systematic scientific evidence that altmetrics are valid proxies of either impact or utility although a few case studies have reported medium correlations between specific altmetrics and citation rates for individual journals or fields. To fill this gap, this study compares 11 altmetrics with Web of Science citations for 76 to 208,739 PubMed articles with at least one altmetric mention in each case and up to 1,891 journals per metric. It also introduces a simple sign test to overcome biases caused by different citation and usage windows. Statistically significant associations were found between higher metric scores and higher citations for articles with positive altmetric scores in all cases with sufficient evidence (Twitter, Facebook wall posts, research highlights, blogs, mainstream media and forums) except perhaps for Google+ posts. Evidence was insufficient for LinkedIn, Pinterest, question and answer sites, and Reddit, and no conclusions should be drawn about articles with zero altmetric scores or the strength of any correlation between altmetrics and citations. Nevertheless, comparisons between citations and metric values for articles published at different times, even within the same year, can remove or reverse this association and so publishers and scientometricians should consider the effect of time when using altmetrics to rank articles. Finally, the coverage of all the altmetrics except for Twitter seems to be low and so it is not clear if they are prevalent enough to be useful in practice.
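The "simple sign test" mentioned above can take the generic form sketched below, in which each article with a positive altmetric score is paired with a comparable baseline article from the same publication window and a one-sided binomial test asks whether the mentioned articles attract more citations than chance would suggest; the pairing scheme and numbers are assumptions for illustration, not the paper's exact procedure.

# Hedged sketch of a paired sign test; pairs are hypothetical
# (citations of an altmetric-mentioned article, citations of a same-window baseline).
from scipy.stats import binomtest

pairs = [(14, 6), (3, 5), (9, 2), (7, 7), (11, 4), (2, 1), (0, 3)]

wins = sum(1 for a, b in pairs if a > b)
losses = sum(1 for a, b in pairs if a < b)   # ties are excluded from the test

result = binomtest(wins, wins + losses, p=0.5, alternative="greater")
print(wins, losses, round(result.pvalue, 3))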
---
paper_title: Tweets as impact indicators: Examining the implications of automated bot accounts on Twitter
paper_content:
This brief communication presents preliminary findings on automated Twitter accounts distributing links to scientific articles deposited on the preprint repository arXiv. It discusses the implication of the presence of such bots from the perspective of social media metrics (altmetrics), where mentions of scholarly documents on Twitter have been suggested as a means of measuring impact that is both broader and timelier than citations. Our results show that automated Twitter accounts create a considerable number of tweets to scientific articles and that they behave differently than common social bots, which has critical implications for the use of raw tweet counts in research evaluation and assessment. We discuss some definitions of Twitter cyborgs and bots in scholarly communication and propose distinguishing between different levels of engagement, that is, differentiating between tweeting only bibliographic information and discussing or commenting on the content of a scientific work.
---
paper_title: Do altmetrics follow the crowd or does the crowd follow altmetrics?
paper_content:
Changes are occurring in scholarly communication as scientific discourse and research activities spread across various social media platforms. In this paper, we study altmetrics on the article and journal levels, investigating whether the online attention received by research articles is related to scholarly impact or may be due to other factors. We define a new metric, Journal Social Impact (JSI), based on eleven data sources: CiteULike, Mendeley, F1000, blogs, Twitter, Facebook, mainstream news outlets, Google Plus, Pinterest, Reddit, and sites running Stack Exchange (Q&A). We compare JSI against diverse citation-based metrics, and find that JSI significantly correlates with a number of them. These findings indicate that online attention of scholarly articles is related to traditional journal rankings and favors journals with a longer history of scholarly impact. We also find that journal-level altmetrics have strong significant correlations among themselves, compared with the weak correlations among article-level altmetrics. Another finding is that Mendeley and Twitter have the highest usage and coverage of scholarly activities. Among individual altmetrics, we find that the readership of academic social networks have the highest correlations with citation-based metrics. Our findings deepen the overall understanding of altmetrics and can assist in validating them.
---
paper_title: How and why scholars cite on Twitter
paper_content:
Scholars are increasingly using the microblogging service Twitter as a communication platform. Since citing is a central practice of scholarly communication, we investigated whether and how scholars cite on Twitter. We conducted interviews and harvested 46,515 tweets from a sample of 28 scholars and found that they do cite on Twitter, though often indirectly. Twitter citations are part of a fast-moving conversation that participants believe reflects scholarly impact. Twitter citation metrics could augment traditional citation analysis, supporting a "scientometrics 2.0."
---
paper_title: Monitoring Academic Conferences: Real-Time Visualization and Retrospective Analysis of Backchannel Conversations
paper_content:
Social-media-supported academic conferences are becoming increasingly global as people anywhere can participate actively through backchannel conversation. It can be challenging for the conference organizers to integrate the use of social media, to take advantage of the connections between backchannel and front stage, and to encourage the participants to be a part of the broader discussion occurring through social media. The backchannel conversation during an academic conference can offer key insights on best practices, and specialized tools and methods are needed to analyze this data. In this paper we present our twofold contribution to enable organizers to gain such insights. First, we introduce Conference Monitor (CM), a real-time web-based tweet visualization dashboard to monitor the backchannel conversation during academic conferences. We demonstrate the features of CM, which are designed to help monitor academic conferences, and its application during the conference Theorizing the Web 2012 (TtW12). Its real-time visualizations helped identify the popular sessions, the active and important participants, and trending topics. Second, we report on our retrospective analysis of the tweets about the TtW12 conference and the conference-related follower-networks. The 4828 tweets from 593 participants resulted in 8.14 tweets per participant. The 1591 new follower-relations created among the participants during the conference confirmed the overall high volume of new connections created during academic conferences. On average a speaker got more new followers than a non-speaker. A few remote participants also gained a comparatively large number of new followers due to the content of their tweets and their perceived importance. There was a positive correlation between the number of new followers of a participant and the number of people who mentioned him/her. Remote participants had a significant level of participation in the backchannel and live streaming helped them to be more engaged.
---
paper_title: Collaborative authoring: a case study of the use of a wiki as a tool to keep systematic reviews up to date
paper_content:
BACKGROUND ::: Systematic reviews are recognized as the most effective means of summarizing research evidence. However, they are limited by the time and effort required to keep them up to date. Wikis present a unique opportunity to facilitate collaboration among many authors. The purpose of this study was to examine the use of a wiki as an online collaborative tool for the updating of a type of systematic review known as a scoping review. ::: ::: ::: METHODS ::: An existing peer-reviewed scoping review on asynchronous telehealth was previously published on an open, publicly available wiki. Log file analysis, user questionnaires and content analysis were used to collect descriptive and evaluative data on the use of the site from 9 June 2009 to 10 April 2010. Blog postings from referring sites were also analyzed. ::: ::: ::: RESULTS ::: During the 10-month study period, there were a total of 1222 visits to the site, 3996 page views and 875 unique visitors from around the globe. Five unique visitors (0.6% of the total number of visitors) submitted a total of 6 contributions to the site: 3 contributions were made to the article itself, and 3 to the discussion pages. None of the contributions enhanced the evidence base of the scoping review. The commentary about the project in the blogosphere was positive, tempered with some skepticism. ::: ::: ::: INTERPRETATIONS ::: Despite the fact that wikis provide an easy-to-use, free and powerful means to edit information, fewer than 1% of visitors contributed content to the wiki. These results may be a function of limited interest in the topic area, the review methodology itself, lack of familiarity with the wiki, and the incentive structure of academic publishing. Controversial and timely topics in addition to incentives and organizational support for Web 2.0 impact metrics might motivate greater participation in online collaborative efforts to keep scientific knowledge up to date.
---
paper_title: How well developed are altmetrics? A cross-disciplinary analysis of the presence of ‘alternative metrics’ in scientific publications
paper_content:
In this paper an analysis of the presence and possibilities of altmetrics for bibliometric and performance analysis is carried out. Using the web based tool Impact Story, we collected metrics for 20,000 random publications from the Web of Science. We studied both the presence and distribution of altmetrics in the set of publications, across fields, document types and over publication years, as well as the extent to which altmetrics correlate with citation indicators. The main result of the study is that the altmetrics source that provides the most metrics is Mendeley, with metrics on readerships for 62.6 % of all the publications studied, other sources only provide marginal information. In terms of relation with citations, a moderate spearman correlation (r = 0.49) has been found between Mendeley readership counts and citation indicators. Other possibilities and limitations of these indicators are discussed and future research lines are outlined.
---
paper_title: Altmetrics and Other Novel Measures for Scientific Impact
paper_content:
Impact assessment is one of the major drivers in scholarly communication, in particular since the number of available faculty positions and grants has far exceeded the number of applications. Peer review still plays a critical role in evaluating science, but citation-based bibliometric indicators are becoming increasingly important. This chapter looks at a novel set of indicators that can complement both citation analysis and peer review. Altmetrics use indicators gathered in the real-time Social Web to provide immediate feedback about scholarly works. We describe the most important altmetrics and provide a critical assessment of their value and limitations.
---
paper_title: Metadata Requirements for Repositories in Health Informatics Research: Evidence from the Analysis of Social Media Citations
paper_content:
Social media have transformed the way modern science is communicated. Although several studies have been focused on the use of social media for the dissemination of scientific knowledge and the measurement of the impact of academic output, we know very little about how academics cite social media in their publications. In order to address this gap, a content analysis was performed on a sample of 629 journal articles in medical informatics. The findings showed the presence of 109 citations to social media resources, the majority of which were blogs and wikis. Social media citations were used more frequently to support the literature review section of articles. However, a fair amount of citations was used in order to document various aspects of the methodology section, such as the data collection and analysis process. The paper concludes with the implications of these findings for metadata design for bibliographic databases (like PubMed and Medline).
---
paper_title: Scientific citations in Wikipedia
paper_content:
The Internet-based encyclopaedia Wikipedia has grown to become one of the most visited Web sites on the Internet, but critics have questioned the quality of entries. An empirical study of Wikipedia found errors in a 2005 sample of science entries. Biased coverage and lack of sources are among the "Wikipedia risks." This paper describes a simple assessment of these aspects by examining the outbound links from Wikipedia articles to articles in scientific journals with a comparison against journal statistics from Journal Citation Reports such as impact factors. The results show an increasing use of structured citation markup and good agreement with citation patterns seen in the scientific literature though with a slight tendency to cite articles in high-impact journals such as Nature and Science. These results increase confidence in Wikipedia as a reliable information resource for science in general.
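As a rough illustration of what counting "structured citation markup" can look like, the snippet below tallies journal names from {{cite journal}} templates in raw wikitext; the regular expression is deliberately simplistic and the sample markup is made up, so this is not the extraction method used in the paper.

# Simplified illustration: count journal names in {{cite journal}} templates.
# Real Wikipedia markup has many more variants than this regex handles.
import re
from collections import Counter

wikitext = """
{{cite journal | journal = Nature | title = Example A | doi = 10.1038/xxxxx }}
{{cite journal|journal=Science|title=Example B}}
{{cite journal | journal = PLoS ONE | title = Example C }}
"""

pattern = re.compile(r"\{\{cite journal[^}]*?\|\s*journal\s*=\s*([^|}]+)", re.IGNORECASE)
journal_counts = Counter(name.strip() for name in pattern.findall(wikitext))
print(journal_counts.most_common())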
---
paper_title: Wikis and Collaborative Writing Applications in Health Care: A Scoping Review
paper_content:
Background: Collaborative writing applications (eg, wikis and Google Documents) hold the potential to improve the use of evidence in both public health and health care. The rapid rise in their use has created the need for a systematic synthesis of the evidence of their impact as knowledge translation (KT) tools in the health care sector and for an inventory of the factors that affect their use. Objective: Through the Levac six-stage methodology, a scoping review was undertaken to explore the depth and breadth of evidence about the effective, safe, and ethical use of wikis and collaborative writing applications (CWAs) in health care. Methods: Multiple strategies were used to locate studies. Seven scientific databases and 6 grey literature sources were queried for articles on wikis and CWAs published between 2001 and September 16, 2011. In total, 4436 citations and 1921 grey literature items were screened. Two reviewers independently reviewed citations, selected eligible studies, and extracted data using a standardized form. We included any paper presenting qualitative or quantitative empirical evidence concerning health care and CWAs. We defined a CWA as any technology that enables the joint and simultaneous editing of a webpage or an online document by many end users. We performed qualitative content analysis to identify the factors that affect the use of CWAs using the Gagnon framework and their effects on health care using the Donabedian framework. Results: Of the 111 studies included, 4 were experimental, 5 quasi-experimental, 5 observational, 52 case studies, 23 surveys about wiki use, and 22 descriptive studies about the quality of information in wikis. We classified them by theme: patterns of use of CWAs (n=26), quality of information in existing CWAs (n=25), and CWAs as KT tools (n=73). A high prevalence of CWA use (ie, more than 50%) is reported in 58% (7/12) of surveys conducted with health care professionals and students. However, we found only one longitudinal study showing that CWA use is increasing in health care. Moreover, contribution rates remain low and the quality of information contained in different CWAs needs improvement. We identified 48 barriers and 91 facilitators in 4 major themes (factors related to the CWA, users’ knowledge and attitude towards CWAs, human environment, and organizational environment). We also found 57 positive and 23 negative effects that we classified into processes and outcomes. Conclusions: Although we found some experimental and quasi-experimental studies of the effectiveness and safety of CWAs as educational and KT interventions, the vast majority of included studies were observational case studies about CWAs being used by health professionals and patients. More primary research is needed to find ways to address the different barriers to their use and to make these applications more useful for different stakeholders. [J Med Internet Res 2013;15(10):e210]
---
paper_title: Wikipedia as Public Scholarship: Communicating Our Impact Online
paper_content:
To contribute to the forum asking “Has Communication Research Made a Difference?,” this essay examines whether communication scholarship makes a difference (a) to those who search for information online, (b) in the sense that a primary way our research can make a difference is through its accessibility, and (c) by using the criteria of its presence (or absence) on Wikipedia. In this essay, we reason that Wikipedia is a useful benchmark for online accessibility of public scholarship in that it provides immediate, freely available information to today's diverse global public seeking online answers to questions and relief from problems.
---
paper_title: Expert participation on Wikipedia: barriers and opportunities
paper_content:
On the occasion of Wikipedia's 10th anniversary, the Chronicle wrote that, nowadays, the project does not represent "the bottom layer of authority, nor the top, but in fact the highest layer without formal vetting" and, as such, it can serve as "an ideal bridge between the validated and unvalidated Web". An increasing number of university students use Wikipedia for "pre-research", as part of their course assignments or research projects. Yet many among academics, scientists and experts turn their noses up at the thought of contributing to Wikipedia, despite a growing number of calls from the expert community to join the project. The Association for Psychological Science launched an initiative to get the scientific psychology community involved in improving the coverage and quality of articles in their field; biomedical experts recently called upon their peers to help make public health information in Wikipedia rigorous and complete; historians have recently started to contribute references to Wikipedia in an effort to make their scholarly work more easily accessible to a broad readership; chemists are curating Wikipedia to include structured metadata in articles on chemical compounds. The Wikimedia Foundation itself is exploring strategies to engage with the expert community and with higher education at large, as part of initiatives such as USPP or the expert review proposal. ::: ::: These calls for participation, however, remain sporadic and most experts-- despite goodwill to contribute--still perceive major barriers to participation, which typically include issues of a technical, social and cultural nature, from the lack of incentives from the perspective of a professional career, to the poor recognition of one’s expertise within Wikipedia to issues of social interaction. In combination with the apparent anomaly of collaborative--and often anonymous--authorship and the resulting fluidity of Wikipedia articles, these factors create an environment that significantly differs from the ones experts are accustomed to. ::: ::: There has been so far only anecdotal evidence on what keeps experts (defined in the broadest possible sense to include academics, but also expert professionals in industry and in the public sector, as well as research students) from contributing to Wikipedia. The Wikimedia Research Committee ran a survey on expert participation between February and April 2011 with over 3K respondents to try and turn anecdotes about expert participation into data. The aim of this talk is to present the results of the survey and tackle questions such as: the different perception of participation in Wikipedia across academic fields; the effects of expertise, gender, discipline, wiki literacy on participation; the gap between shared attitudes and individual drivers of participation; the relation between participation in Wikipedia and attitudes towards open access and open science.
---
paper_title: Internet encyclopaedias go head to head
paper_content:
Jimmy Wales' Wikipedia comes close to Britannica in terms of the accuracy of its science entries, a Nature investigation finds. UPDATE: see details of how the data were collected for this article in the supplementary information. UPDATE 2 (28 March 2006): The results reported in this news story and their interpretation have been disputed by Encyclopaedia Britannica. Nature responded to these objections.
---
paper_title: Academic opinions of Wikipedia and Open Access publishing
paper_content:
Purpose – The purpose of this paper is to examine academics’ awareness of and attitudes towards Wikipedia and Open Access journals for academic publishing to better understand the perceived benefits and challenges of these models. Design/methodology/approach – Bases for analysis include comparison of the models, enumeration of their advantages and disadvantages, and investigation of Wikipedia's web structure in terms of potential for academic publishing. A web survey was administered via department-based invitations and listservs. Findings – The survey results show that: Wikipedia has perceived advantages and challenges in comparison to the Open Access model; the academic researchers’ increased familiarity is associated with increased comfort with these models; and the academic researchers’ attitudes towards these models are associated with their familiarity, academic environment, and professional status. Research limitations/implications – The major limitation of the study is sample size. The result of a...
---
paper_title: Proteopedia- a scientific 'wiki' bridging the rift between three-dimensional structure and function of biomacromolecules
paper_content:
Many scientists lack the background to fully utilize the wealth of solved three-dimensional biomacromolecule structures. Thus, a resource is needed to present structure/function information in a user-friendly manner to a broad scientific audience. Proteopedia http://www.proteopedia.org is an interactive, wiki web-resource whose pages have embedded three-dimensional structures surrounded by descriptive text containing hyperlinks that change the appearance (view, representations, colors, labels) of the adjacent three-dimensional structure to reflect the concept explained in the text.
---
paper_title: Wikipedia for academic publishing: advantages and challenges
paper_content:
Purpose – The purpose of this paper is to explore the potential of Wikipedia as a venue for academic publishing.Design/methodology/approach – By looking at other sources and studying Wikipedia structures, the paper compares the processes of publishing a peer‐reviewed article in Wikipedia and the open access journal model, discusses the advantages and challenges of adopting Wikipedia in academic publishing, and provides suggestions on how to address the challenges.Findings – Compared to an open access journal model, Wikipedia has several advantages for academic publishing: it is less expensive, quicker, more widely read, and offers a wider variety of articles. There are also several major challenges in adopting Wikipedia in the academic community: the web site structure is not well suited to academic publications; the site is not integrated with common academic search engines such as Google Scholar or with university libraries; and there are concerns among some members of the academic community about the s...
---
paper_title: The distorted mirror of Wikipedia: a quantitative analysis of Wikipedia coverage of academics
paper_content:
Activity of modern scholarship creates online footprints galore. Along with traditional metrics of research quality, such as citation counts, online images of researchers and institutions increasingly matter in evaluating academic impact, decisions about grant allocation, and promotion. We examined 400 biographical Wikipedia articles on academics from four scientific fields to test if being featured in the world’s largest online encyclopedia is correlated with higher academic notability (assessed through citation counts). We found no statistically significant correlation between Wikipedia articles metrics (length, number of edits, number of incoming links from other articles, etc.) and academic notability of the mentioned researchers. We also did not find any evidence that the scientists with better WP representation are necessarily more prominent in their fields. In addition, we inspected the Wikipedia coverage of notable scientists sampled from Thomson Reuters list of ‘highly cited researchers’. In each of the examined fields, Wikipedia failed in covering notable scholars properly. Both findings imply that Wikipedia might be producing an inaccurate image of academics on the front end of science. By shedding light on how public perception of academic progress is formed, this study alerts that a subjective element might have been introduced into the hitherto structured system of academic evaluation.
---
paper_title: Reinventing academic publishing online. Part II: A socio-technical vision
paper_content:
Part I of this paper outlined the limitations of feudal academic knowledge exchange and predicted its decline as cross-disciplinary research expands. Part II now suggests the next evolutionary step is democratic online knowledge exchange, run by the academic many rather than the few. Using socio-technical tools it is possible to accept all, evaluate all and publish all academic documents. Editors and reviewers will remain, but their role will change, from gatekeepers to guides. However, the increase in knowledge throughput can only be supported by activating the academic community as a whole. Yet that is what socio-technical systems do --- activate people to increase common gains. Part 1 argued that scholars must do this or be left behind in the dust of progress. The design proposed here is neither wiki, nor e-journal, nor electronic repository, nor reputation system, but a hybrid of these and other socio-technical functions. It supports print publishing as a permanent archive byproduct useful to a living, online knowledge exchange community. It could also track academic submissions, provide performance transcripts to promotion committees, enable hyperlinks, support attribution, allow data-source sharing, retain anonymous reviewing and support relevance and rigor in evaluation. Rather than a single "super" KES, a network of online systems united by a common vision of democratic knowledge exchange is proposed.
---
paper_title: Coverage and adoption of altmetrics sources in the bibliometric community
paper_content:
Altmetrics, indices based on social media platforms and tools, have recently emerged as alternative means of measuring scholarly impact. Such indices assume that scholars in fact populate online social environments, and interact with scholarly products in the social web. We tested this assumption by examining the use and coverage of social media environments amongst a sample of bibliometricians, examining both their own use of online platforms and the use of their papers on social reference managers. As expected, coverage varied: 82% of articles published by sampled bibliometricians were included in Mendeley libraries, while only 28% were included in CiteULike. Mendeley bookmarking was moderately correlated (.45) with Scopus citation counts. We conducted a survey among the STI2012 conference participants. Over half of respondents asserted that social media tools were affecting their professional lives, although uptake of online tools varied widely. 68% of those surveyed had LinkedIn accounts, while Academia.edu, Mendeley, and ResearchGate each claimed a fifth of respondents. Nearly half of those responding had Twitter accounts, which they used both personally and professionally. Surveyed bibliometricians had mixed opinions on altmetrics' potential; 72% valued download counts, while a third saw potential in tracking articles' influence in blogs, Wikipedia, reference managers, and social media. Altogether, these findings suggest that some online tools are seeing substantial use by bibliometricians, and that they present a potentially valuable source of impact data.
---
paper_title: A comparative study of academic and Wikipedia ranking
paper_content:
In addition to its broad popularity, Wikipedia is also widely used for scholarly purposes. Many Wikipedia pages pertain to academic papers, scholars and topics, providing a rich ecology for scholarly uses. Scholarly references and mentions on Wikipedia may thus shape the "societal impact" of a certain scholarly communication item, but it is not clear whether they shape actual "academic impact". In this paper we compare the impact of papers, scholars, and topics according to two different measures, namely scholarly citations and Wikipedia mentions. Our results show that academic and Wikipedia impact are positively correlated. Papers, authors, and topics that are mentioned on Wikipedia have higher academic impact than those that are not mentioned. Our findings validate the hypothesis that Wikipedia can help assess the impact of scholarly publications and underpin relevance indicators for scholarly retrieval or recommendation systems.
---
paper_title: Altmetrics in the wild: Using social media to explore scholarly impact
paper_content:
In growing numbers, scholars are integrating social media tools like blogs, Twitter, and Mendeley into their professional communications. The online, public nature of these tools exposes and reifies scholarly processes once hidden and ephemeral. Metrics based on these activities could inform broader, faster measures of impact, complementing traditional citation metrics. This study explores the properties of these social media-based metrics or "altmetrics", sampling 24,331 articles published by the Public Library of Science. We find that different indicators vary greatly in activity. Around 5% of sampled articles are cited in Wikipedia, while close to 80% have been included in at least one Mendeley library. There is, however, an encouraging diversity; a quarter of articles have nonzero data from five or more different sources. Correlation and factor analysis suggest citation and altmetrics indicators track related but distinct impacts, with neither able to describe the complete picture of scholarly use alone. There are moderate correlations between Mendeley and Web of Science citation, but many altmetric indicators seem to measure impact mostly orthogonal to citation. Articles cluster in ways that suggest five different impact "flavors", capturing impacts of different types on different audiences; for instance, some articles may be heavily read and saved by scholars but seldom cited. Together, these findings encourage more research into altmetrics as complements to traditional citation measures.
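Editorial illustration (not the study's own code or data): the kind of rank-correlation check between citation counts and altmetric indicators summarized above can be sketched as follows, with a hypothetical input file and hypothetical column names.

```python
# Hedged sketch (hypothetical data, not the cited study's pipeline):
# Spearman rank correlation between citation counts and altmetric indicators.
import pandas as pd
from scipy.stats import spearmanr

df = pd.read_csv("article_metrics.csv")  # hypothetical file with the columns below

for indicator in ["mendeley_readers", "twitter_mentions", "wikipedia_cites"]:
    rho, p = spearmanr(df["wos_citations"], df[indicator])
    print(f"citations vs {indicator}: rho={rho:.2f}, p={p:.3g}")
```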
---
paper_title: Wikipedia in the eyes of its beholders: A systematic review of scholarly research on Wikipedia readers and readership
paper_content:
Hundreds of scholarly studies have investigated various aspects of Wikipedia. Although a number of literature reviews have provided overviews of this vast body of research, none has specifically focused on the readers of Wikipedia and issues concerning its readership. In this systematic literature review, we review 99 studies to synthesize current knowledge regarding the readership of Wikipedia and provide an analysis of research methods employed. The scholarly research has found that Wikipedia is popular not only for lighter topics such as entertainment but also for more serious topics such as health and legal information. Scholars, librarians, and students are common users, and Wikipedia provides a unique opportunity for educating students in digital literacy. We conclude with a summary of key findings, implications for researchers, and implications for the Wikipedia community.
---
paper_title: Organizational identity, meaning, and values: analysis of social media guideline and policy documents
paper_content:
With the increasing use of social media by students, researchers, administrative staff, and faculty in post-secondary education (PSE), a number of institutions have developed guideline and policy documents to set standards for social media use. In this study we analyze social media guidelines and policies across 250 PSE institutions from 10 countries using latent semantic analysis. This initial analysis produced a list of 36 universal topics. Subsequently, chi-squared tests were employed to identify distribution differences of content-related factors between American and non-American PSE institutions. This analysis offered a high-level summary of unstructured text data on the topic of social media guidance. The results include a comprehensive list of recommendations for developing social media guidelines and policies, and a database of social media guideline and policy documents for the PSE sector and other related organizations.
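As a rough, hypothetical illustration of the latent semantic analysis step mentioned above (not the authors' pipeline or data), the sketch below extracts a couple of latent topics from a toy corpus of policy snippets using TF-IDF and truncated SVD.

```python
# Hedged sketch (toy corpus, not the cited study's data or code):
# latent semantic analysis of policy text via TF-IDF + truncated SVD.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD

documents = [
    "students must not share confidential information on social media",
    "faculty should identify personal opinions as their own online",
    "official accounts require approval from the communications office",
]  # placeholder policy snippets

tfidf = TfidfVectorizer(stop_words="english")
X = tfidf.fit_transform(documents)

svd = TruncatedSVD(n_components=2, random_state=0)  # the study reports 36 topics
svd.fit(X)

terms = tfidf.get_feature_names_out()
for i, component in enumerate(svd.components_):
    top_terms = [terms[j] for j in component.argsort()[::-1][:4]]
    print(f"topic {i}: {top_terms}")
```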
---
paper_title: Social Software in Academia: Three Studies on Users’ Acceptance of Web 2.0 Services
paper_content:
This paper presents a summary of the results of three surveys, questioning different groups of users on their usage of social software tools in academic settings. The first survey addressed students across various disciplines, the second one addressed only students in information science and related disciplines, and the third one addressed researchers and university teachers across several disciplines. The different studies had slightly different foci (and thus did not comprise the same set of questions), but all considered aspects such as 'which Web 2.0 services are known?' and 'how are they used?' In this paper, the different survey results related to the use of social software are summed up and compared.
---
paper_title: Exploring altmetrics in an emerging country context
paper_content:
The study of altmetrics is relatively new, and the little that is known about altmetrics is only known for journals and articles from a limited set of contexts (publication venues and subject areas). The use of PLOS, arXiv.org, PubMed, Web of Science, or of a few well-established journals like Nature and Science introduces a selection bias that calls into question the generalizability of reported results. For example, we already know that altmetrics related to mentions in blogs are heavily influenced by the makeup of bloggers themselves and the journals they tend to blog about, both of which introduce a strong bias in favour high-impact life science journals (Shema et al., 2012). There is therefore a need to study the altmetrics of journals and articles published and read in other contexts, including research that is published and read in regions of the world beyond the global North, as well as in other languages beyond English.
---
paper_title: PeerJ -- A case study in improving research collaboration at the journal level
paper_content:
PeerJ Inc. is the Open Access publisher of PeerJ, a peer-reviewed Open Access journal, and PeerJ PrePrints, an un-peer-reviewed or collaboratively reviewed preprint server, both serving the biological, medical and health sciences. The Editorial Criteria of PeerJ the journal are similar to those of PLOS ONE in that all submissions are judged only on their scientific and methodological soundness, not on subjective determinations of impact or degree of advance. PeerJ's peer-review process is managed by an Editorial Board of 800 and an Advisory Board of 20, including 5 Nobel Laureates. Editor listings by subject area are at: https://peerj.com/academic-boards/subjects/ and the Advisory Board is at: https://peerj.com/academic-boards/advisors/. In the context of Understanding Research Collaboration, there are several unique aspects of the PeerJ set-up which will be of interest to readers of this special issue.
---
paper_title: For what it’s worth – the open peer review landscape
paper_content:
Purpose – The purpose of this paper is twofold: first, to discuss the current and future issues around post-publication open peer review; second, to highlight some of the main protagonists and platforms that encourage open peer review, pre- and post-publication. Design/methodology/approach – The first part of the paper aims to discuss the facilitators and barriers that will enable and prevent academics engaging with the new and established platforms of scholarly communication and review. These issues are covered with the intention of proposing further dialogue within the academic community that ultimately addresses researchers' concerns, whilst continuing to nurture a progressive approach to scholarly communication and review. The paper will continue to look at the prominent open post-publication platforms and tools and discuss whether in the future it will become a standard model. Findings – The paper identifies several problems, not exclusive to open peer review, that could inhibit academics from being ope...
---
paper_title: Effect of open peer review on quality of reviews and on reviewers' recommendations: a randomised trial
paper_content:
Objectives: To examine the effect on peer review of asking reviewers to have their identity revealed to the authors of the paper. Design: Randomised trial. Consecutive eligible papers were sent to two reviewers who were randomised to have their identity revealed to the authors or to remain anonymous. Editors and authors were blind to the intervention. Main outcome measures: The quality of the reviews was independently rated by two editors and the corresponding author using a validated instrument. Additional outcomes were the time taken to complete the review and the recommendation regarding publication. A questionnaire survey was undertaken of the authors of a cohort of manuscripts submitted for publication to find out their views on open peer review. Results: Two editors' assessments were obtained for 113 out of 125 manuscripts, and the corresponding author's assessment was obtained for 105. Reviewers randomised to be asked to be identified were 12% (95% confidence interval 0.2% to 24%) more likely to decline to review than reviewers randomised to remain anonymous (35% v 23%). There was no significant difference in quality (scored on a scale of 1 to 5) between anonymous reviewers (3.06 (SD 0.72)) and identified reviewers (3.09 (0.68)) (P=0.68, 95% confidence interval for difference −0.19 to 0.12), and no significant difference in the recommendation regarding publication or time taken to review the paper. The editors' quality score for reviews (3.05 (SD 0.70)) was significantly higher than that of authors (2.90 (0.87)) (95% confidence interval for difference −0.26 to −0.03). Most authors were in favour of open peer review. Conclusions: Asking reviewers to consent to being identified to the author had no important effect on the quality of the review, the recommendation regarding publication, or the time taken to review, but it significantly increased the likelihood of reviewers declining to review.
---
paper_title: F1000 Recommendations as a Potential New Data Source for Research Evaluation: A Comparison With Citations
paper_content:
F1000 is a postpublication peer review service for biological and medical research. F1000 recommends important publications in the biomedical literature, and from this perspective F1000 could be an interesting tool for research evaluation. By linking the complete database of F1000 recommendations to the Web of Science bibliographic database, we are able to make a comprehensive comparison between F1000 recommendations and citations. We find that about 2% of the publications in the biomedical literature receive at least one F1000 recommendation. Recommended publications on average receive 1.30 recommendations, and more than 90% of the recommendations are given within half a year after a publication has appeared. There turns out to be a clear correlation between F1000 recommendations and citations. However, the correlation is relatively weak, at least weaker than the correlation between journal impact and citations. More research is needed to identify the main reasons for differences between recommendations and citations in assessing the impact of publications.
---
paper_title: Rewarding Peer Reviewers: Maintaining the Integrity of Science Communication
paper_content:
This article overviews currently available options for rewarding peer reviewers. Rewards and incentives may help maintain the quality and integrity of scholarly publications. Publishers around the world have implemented a variety of financial and nonfinancial mechanisms for incentivizing their best reviewers. None of these has proved effective on its own. A strategy of combined rewards and credits for the reviewers' creative contributions seems a workable solution. Opening access to reviews and assigning publication credits to the best reviews is one of the latest achievements of digitization. Reviews posted on academic networking platforms, such as Publons, add to the transparency of the whole system of peer review. Reviewer credits, properly counted and displayed on individual digital profiles, help distinguish the best contributors, invite them to review, and offer them responsible editorial posts.
---
paper_title: Defining and Characterizing Open Peer Review: A Review of the Literature
paper_content:
Changes in scholarly publishing have resulted in a move toward openness. To this end, new, open models of peer review are emerging. While the scholarly literature has examined and discussed open peer review, no established definition of it exists, nor are there uniform implementations of open peer review processes. This article examines the literature discussing open peer review, identifies common open peer review definitions, and describes eight common characteristics of open peer review: signed review, disclosed review, editor-mediated review, transparent review, crowd-sourced review, pre-publication review, synchronous review, and post-publication review. This article further discusses benefits and challenges to the scholarly publishing community posed by open peer review and concludes that open peer review can and should exist within the current scholarly publishing paradigm.
---
paper_title: Bias in peer review
paper_content:
Research on bias in peer review examines scholarly communication and funding processes to assess the epistemic and social legitimacy of the mechanisms by which knowledge communities vet and self-regulate their work. Despite vocal concerns, a closer look at the empirical and methodological limitations of research on bias raises questions about the existence and extent of many hypothesized forms of bias. In addition, the notion of bias is predicated on an implicit ideal that, once articulated, raises questions about the normative implications of research on bias in peer review. This review provides a brief description of the function, history, and scope of peer review; articulates and critiques the conception of bias unifying research on bias in peer review; characterizes and examines the empirical, methodological, and normative claims of bias in peer review research; and assesses possible alternatives to the status quo. We close by identifying ways to expand conceptions and studies of bias to contend with the complexity of social interactions among actors involved directly and indirectly in peer review. © 2013 Wiley Periodicals, Inc.
---
paper_title: Social Media and Marketing of Higher Education: A Review of the Literature
paper_content:
The emergence of social media has revolutionized the practice of communication in two fundamental ways. First, social media have made it possible for one person to send an instant message to millions of others worldwide. Second and perhaps more important, social media make it possible to establish a two-way communication channel between the sender and receivers or simply between receivers or “followers” outside the control of the original sender. Social media, therefore, transcend the traditional bureaucracy when it comes to marketing or seeking information from an institution. We conducted a review of the literature to find out how institutions of higher education are leveraging social media for recruitment and admissions purposes, and whether prospective students use social media in their college search process. Our findings indicate that social media use by institutions of higher education is on the rise, yet it is unclear whether content on university social media pages influences prospects’ choice-making processes.
---
paper_title: Social Networking as an Admission Tool: A Case Study in Success
paper_content:
The concept of social networking, the focus of this article, targets the development of online communities in higher education, and in particular, as part of the admission process. A successful case study is presented on how one university has used this tool to compete for students. A discussion, including suggestions on how to enhance the success of this tool in your recruitment process, is also provided.
---
paper_title: A Case Study of Israeli Higher-Education Institutes Sharing Scholarly Information with the Community via Social Networks.
paper_content:
Abstract The purpose of this study is to empirically examine cases in which Social Networking Sites (SNS) are being utilized for scholarly purposes by higher-education institutes in Israel. The research addresses questions regarding content patterns, activity patterns, and interactivity within Facebook and Twitter accounts of these institutes. Research population comprises of 47 Facebook accounts and 26 Twitter accounts of Israeli universities or colleges and/or sub-divisions within these institutes. In addition to descriptive statistics, all tweets within Twitter accounts were analyzed and classified into categories, based on their content, for better understanding of how they can facilitate informal learning. Research findings suggest that SNS promotes knowledge sharing, thereby facilitating informal learning within the community; SNS open academic institutes to the community altogether. Still, SNS were utilized in an assimilation mode, i.e. while the potential is high for using special features enabled by SNS as well as unique sharing of information modes, de facto use of these special features was extremely low. However, contrary to the relatively high dropout rates of SNS' personal accounts, many academic accounts were frequently active for long periods of time. This may indicate that SNS activity which is based on sharing of knowledge as well as on social interaction has better sustainability prospects. Usage and content patterns of these accounts corresponded to parallel patterns in the Israeli higher-education community in “real” life, hence reinforcing the role of these institutes within the community. Overall, this study implies that the potential of SNS as means of sharing academic knowledge in higher education institutes in Israel has not been actualized yet, but is indeed being explored by these organizations as well as by the community.
---
paper_title: Enriched Audience Engagement Through Twitter: Should More Academic Radiology Departments Seize the Opportunity?
paper_content:
PURPOSE: The aim of this study was to evaluate use of the microblogging social network Twitter by academic radiology departments (ARDs) in the United States. METHODS: Twitter was searched to identify all accounts corresponding with United States ARDs. All original tweets from identified accounts over a recent 3-month period (August to October 2014) were archived. Measures of account activity, as well as tweet and link content, were summarized. RESULTS: Fifteen ARDs (8.2%) had Twitter accounts. Ten (5.5%) had "active" accounts, with ≥1 tweet over the 3-month period. Active accounts averaged 711 ± 925 followers (maximum, 2,885) and 61 ± 93 tweets (maximum, 260) during the period. Among 612 tweets from active accounts, content most commonly related to radiology-related education (138), dissemination of departmental research (102), general departmental or hospital promotional material (62), departmental awards or accomplishments (60), upcoming departmental lectures (59), other hospital-related news (55), medical advice or information for patients (38), local community events or news (29), social media and medicine (27), and new departmental or hospital hires or expansion (19). Eighty percent of tweets (490 of 612) included 315 unique external links. Most frequent categories of link sources were picture-, video-, and music-sharing websites (89); the ARD's website or blog (83); peer-reviewed journal articles (40); the hospital's or university's website (34); the lay press (28); and Facebook (14). CONCLUSIONS: Twitter provides ARDs the opportunity to engage their own staff members, the radiology community, the department's hospital, and patients, through a broad array of content. ARDs frequently used Twitter for promotional and educational purposes. Because only a small fraction of ARDs actively use Twitter, more departments are encouraged to take advantage of this emerging communication tool.
---
paper_title: Friend or faculty: Social networking sites, dual relationships, and context collapse in higher education
paper_content:
Students and faculty have always interacted informally, on campus and off. Social networking sites (SNSs), like Facebook, introduce another space in which they interact, contributing to a blurring of boundaries between professional and personal personas. Communications on SNSs may be seen as simply an extension of on- and off-campus lives, and hence fall under the same policies governing institutional codes of conduct. The medium, however, allowing for the capture and broadcast of events in academic and everyday lives, merits special considerations for contemporary faculty and administrators. The objective of this critical review is to synthesize literature discussing dual relationships, context collapse, and digital persistence to contextualize and inform policy development regarding the use of SNSs in higher education. The review will contribute to discussions and decision-making on what should be done in response to these ubiquitous and novel channels for communication, connection and disclosure, and promote awareness of the implications for the connections fostered and the digital traces left behind in networked social spaces.
---
paper_title: Innovative online faculty development utilizing the power of social media.
paper_content:
Objective: Faculty development (FD) is important for continued professional development, but expense and distance remain challenging. These challenges could be minimized by the free and asynchronous nature of social media (SM). We sought to determine the utility and effectiveness of conducting a national online FD activity on Facebook by assessing participants' perceptions and use and facilitators' challenges. Methods: An educational activity of a national FD program was managed on a closed Facebook group. Activities included postings of educational technology goals, abstracting an article, and commenting on peers' postings. Sources of quantitative data included the Facebook postings and the survey responses. Surveys before, after, and 6 months after the activity assessed knowledge, attitudes and self-reported behaviors. Sources of qualitative data were the open-ended survey questions and the content of the Facebook postings. Results: All participants completed the FD activity and evaluations, yielding 38 postings and 115 comments. Before the activity, 88% had a personal Facebook account, 64% were somewhat/very confident using Facebook, 77% thought SM would be useful for professional networking, and 12% had used it professionally. Six months after the activity, professional usage had increased to 35%. Continued use of Facebook for future presentations of this FD activity was recommended by 76%. Qualitative analysis yielded 12 types of Facebook postings and 7 themes related to using SM for FD. Conclusions: Conducting a national FD activity on Facebook yielded excellent participation rates and positive participant impressions, and it affected professional usage. Facebook may become an additional tool in the educator's toolbox for FD as a result of its acceptability and accessibility.
---
paper_title: Riding the crest of the altmetrics wave: How librarians can help prepare faculty for the next generation of research impact metrics
paper_content:
As scholars migrate into online spaces like Mendeley, blogs, Twitter, and more, they leave new traces of once-invisible interactions like reading, saving, discussing, and recommending. Observing these traces can inform new metrics of scholarly influence and impact -- so-called "altmetrics." Stakeholders in academia are beginning to discuss how and where altmetrics can be useful towards evaluating a researcher's academic contribution. As this interest grows, libraries are in a unique position to help support an informed dialog on campus. We suggest that librarians can provide this support in three main ways: informing emerging conversations with the latest research, supporting experimentation with emerging altmetrics tools, and engaging in early altmetrics education and outreach. We include examples and lists of resources to help librarians fill these roles.
---
paper_title: A Look at Altmetrics and Its Growing Significance to Research Libraries
paper_content:
This document serves as an informational review of the emerging field and practices of alternative metrics or altmetrics. It is intended to be used by librarians and faculty members in research libraries and universities to better understand the trends and challenges associated with altmetrics in higher education. It is also intended to be used by research libraries to offer guidance on how to participate in shaping this emerging field.
---
paper_title: New opportunities for repositories in the age of altmetrics
paper_content:
Editor's Summary: For institutional repositories, alternative metrics reflecting online activity present valuable indicators of interest in their holdings that can supplement traditional usage statistics. A variable mix of built-in metrics is available through popular repository platforms: Digital Commons, DSpace and EPrints. These may include download counts at the collection and/or item level, search terms, total and unique visitors, page views and social media and bookmarking metrics; additional data may be available with special plug-ins. Data provide different types of information valuable for repository managers, university administrators and authors. They can reflect both scholarly and popular impact, show readership, reflect an institution's output, justify tenure and promotion and indicate direction for collection management. Practical considerations for implementing altmetrics include service costs, technical support, platform integration and user interest. Altmetrics should not be used for author ranking or comparison, and altmetrics sources should be regularly reevaluated for relevance.
---
paper_title: Altmetrics: Rethinking the Way We Measure
paper_content:
Altmetrics is the focus for this edition of "Balance Point." The column editor invited Finbar Galligan, who has gained considerable knowledge of altmetrics, to co-author the column. Altmetrics, their relationship to traditional metrics, their importance, uses, potential impacts, and possible future directions are examined. The authors conclude that altmetrics have an important future role to play and that they offer the potential to revolutionize the analysis of the value and impact of scholarly work.
---
paper_title: Social Networking Sites: Emerging and Essential Tools for Communication in Dermatology
paper_content:
IMPORTANCE: The use of social media by dermatology journals and professional and patient-centered dermatology organizations remains largely unknown and, to our knowledge, has yet to be fully evaluated. OBJECTIVE: To evaluate and quantify the extent of involvement of dermatology journals, professional dermatology organizations, and dermatology-related patient advocate groups on social networking sites. DESIGN, SETTING, AND PARTICIPANTS: We obtained an archived list of 102 current dermatology journals from SCImago on the World Wide Web and used the list to investigate Facebook, Twitter, and individual journal websites for the presence of social media accounts. We identified professional and patient-centered dermatology organization activity on social networks through queries of predetermined search terms on Google, Facebook, Twitter, and LinkedIn. The activity of each entity was documented by recording the following metrics of popularity: the numbers of Facebook "likes," Twitter "followers," and LinkedIn "members." MAIN OUTCOMES AND MEASURES: The numbers of Facebook likes, Twitter followers, and LinkedIn members corresponding to each dermatology journal and each professional and patient-related dermatology organization. RESULTS: On July 17, 2012, of the 102 dermatology journals ranked by SCImago, 12.7% were present on Facebook and 13.7% on Twitter. We identified popular dermatology journals based on Facebook likes and Twitter followers, led by the Journal of the American Academy of Dermatology and Dermatology Times, respectively. Popular professional dermatology organizations included dermRounds Dermatology Network (11 251 likes on Facebook and 2900 followers on Twitter). The most popular dermatology patient-centered organizations were the Skin Cancer Foundation (20 119 likes on Facebook), DermaTalk (21 542 followers on Twitter), and the National Psoriasis Foundation (200 members on LinkedIn). CONCLUSIONS AND RELEVANCE: Patient-centered and professional dermatology organizations use social networking sites; however, academic journals tend to lag behind significantly. Although some journals are active in social media, most have yet to recognize the potential benefits of fully embracing popular social networks.
---
paper_title: International Urology Journal Club via Twitter: 12-Month Experience
paper_content:
Background: Online journal clubs have increasingly been utilised to overcome the limitations of the traditional journal club. However, to date, no reported online journal club is available for international participation. Objective: To present a 12-mo experience from the International Urology Journal Club, the world's first international journal club using Twitter, an online micro-blogging platform, and to demonstrate the viability and sustainability of such a journal club. Design, setting, and participants: #urojc is an asynchronous 48-h monthly journal club moderated by the Twitter account @iurojc. The open invitation discussions focussed on papers typically published within the previous 2–4 wk. Data were obtained via third-party Twitter analysis services. Outcome measurements and statistical analysis: Outcomes analysed included number of total and new users, number of tweets, and qualitative analysis of the relevance of tweets. Analysis was undertaken using GraphPad software, Microsoft Excel, and thematic qualitative analysis. Results and limitations: The first 12 mo saw a total of 189 unique users representing 19 countries and 6 continents. There was a mean of 39 monthly participants that included 14 first-time participants per month. The mean number of tweets per month was 195, of which 62% represented original tweets directly related to the topic of discussion and 22% represented retweets of original posts. A mean of 130 832 impressions, or reach, were created per month. The @iurojc moderator account has accumulated >1000 followers. The study is limited by potentially incomplete data extracted by third-party Twitter analysers. Conclusions: Social media provides a potential for enormous international communication that has not been possible in the past. We believe the pioneering #urojc is both viable and sustainable. There is unlimited scope for journal clubs in other fields to follow the example of #urojc and utilise online portals to revitalise the traditional journal club while fostering international relationships.
---
paper_title: International palliative care journal club on twitter: experience so far
paper_content:
Introduction: @hpmJC (hospice and palliative medicine Journal Club, #hpmJC) was launched in February 2014 on the social networking service Twitter, as a regular international journal club for palliative care. The journal club aims to encourage critical analysis of research methods and findings, and to promote evidence-based practice, by providing a forum to discuss the latest research findings. Aim(s) and method(s): To analyse the use and reach of #hpmJC, from the first journal club in February 2014, to date. All data on Twitter posts (tweets) using #hpmJC were extracted from Twitter using the analytic tools Sysomos and Symplur. Outcomes included number of tweets, number of unique users, users' designated country, and impressions (potential number of accounts reached). Results: 7 journal clubs have taken place. 2360 tweets were sent, from 230 individual Twitter accounts and with contributions from people in 17 countries. For contributors whose country of origin is known (59%), most were based in the UK (41%) or USA (26%). Tweets from resource-poor countries were initially uncommon but increased over the time period. The mean number of contributors at each journal club was 32. The potential reach of #hpmJC varied, but for the most recent journal club was 290,802 unique users. Conclusion(s): Social media provides opportunities to share expertise and disseminate information globally, transcending geographical boundaries. @hpmJC has been used to start a viable and sustainable online multidisciplinary journal club with wider geographical spread and potential reach than a traditional journal club. Strategies to increase participation in resource-poor countries are being developed.
---
paper_title: Journal Club via social media: authors take note of the impact of #BlueJC
paper_content:
Journal Clubs inform clinicians, instil research literacy, and embed evidence-based practice. They also offer the opportunity for post-publication peer review to identify weaknesses in research, make suggestions for improvement, and discover implications for future research and clinical practice; however, the deliberations of Journal Clubs are rarely fed back for reflections from editors and authors.
---
paper_title: The emerging use of Twitter by urological journals.
paper_content:
Objective: To assess the emerging use of Twitter by urological journals. Methods: A search of the Journal of Citation Reports 2012 was performed to identify urological journals. These journals were then searched on Twitter.com. Each journal website was accessed for links to social media (SoMe). The number of 'tweets', followers and age of profile was determined. To evaluate the content, over a 6-month period (November 2013 to April 2014), all tweets were scrutinised on the journals' Twitter profiles. To assess SoMe influence, the Klout score of each journal was also calculated. Results: In all, 33 urological journals were identified. Eight (24.2%) had Twitter profiles. The mean (range) number of tweets and followers was 557 (19–1809) and 1845 (82–3692), respectively. The mean (range) age of the Twitter profiles was 952 (314–1758) days with an average 0.88 tweets/day. A Twitter profile was associated with a higher mean impact factor of the journal (mean [sd] 3.588 [3.05] vs 1.78 [0.99], P = 0.013). Over a 6-month period, November 2013 to April 2014, the median (range) number of tweets per profile was 82 (2–415) and the median (range) number of articles linked to tweets was 73 (0–336). Of these 710 articles, 152 were Level 1 evidence-based articles, 101 Level 2, 278 Level 3 and 179 Level 4. The median (range) Klout score was 47 (19–58). The Klout scores of major journals did not exactly mirror their impact factors. Conclusion: SoMe is increasingly becoming an adjunct to traditional teaching methods, due to its convenient and user-friendly platform. Recently, many of the leading urological journals have used Twitter to highlight significant articles of interest to readers.
---
paper_title: Modern medicine comes online: How putting Wikipedia articles through a medical journal's traditional process can put free, reliable information into as many hands as possible.
paper_content:
Despite its popularity in medical circles, Wikipedia endures skepticism. Often used to gather information, it is rarely considered accurate or complete enough to guide treatment decisions. In the face of this, clinicians and trainees turn to resources like UpToDate with greater frequency and confidence because in clinical medicine, a small error can make a big difference. In this issue of Open Medicine, we've published the first ever formally peer-reviewed, and edited, Wikipedia article. The clinical topic is Dengue Fever. Though there may be a need for shorter, more focused clinical articles published elsewhere as this one expands, it is anticipated that the Wikipedia page on Dengue will be a reference against which all others can be compared. Though it might be decades before we see an end to Dengue, perhaps the end to exhaustive or expensive searches about what yet needs to be done can bring it sooner.
---
paper_title: Social media, medicine and the modern journal club
paper_content:
Medical media is changing along with the rest of the media landscape. One of the more interesting ways that medical media is evolving is the increased role of social media in medical media's creation, curation and distribution. Twitter, a microblogging site, has become a central hub for finding, vetting, and spreading this content among doctors. We have created a Twitter journal club for nephrology that primarily provides post-publication peer review of high impact nephrology articles, but additionally helps Twitter users build a network of engaged people with interests in academic nephrology. By following participants in the nephrology journal club, users are able to stock their personal learning network. In this essay we discuss the history of medical media, the role of Twitter in the current state of media and summarize our initial experience with a Twitter journal club.
---
paper_title: Preliminary survey of leading general medicine journals' use of Facebook and Twitter
paper_content:
Aim: This study is the first to chart the use of Facebook and Twitter by peer-reviewed medical journals. Methods: We selected the top 25 general medicine journals on the Thomson Reuters Journal Citation Report (JCR) list. We surveyed their Facebook and Twitter presences and scanned their Web sites for any Facebook and (or) Twitter features as of November 2011. Results/Discussion: 20 of 25 journals had some sort of Facebook presence, with 11 also having a Twitter presence. Total 'Likes' across all of the Facebook pages for journals with a Facebook presence were 321,997, of which 259,902 came from the New England Journal of Medicine (NEJM) alone. The total numbers of Twitter 'Followers' were smaller by comparison when compiled across all surveyed journals. 'Likes' and 'Followers' are not the equivalents of total accesses but provide some proxy measure for impact and popularity. Those journals in our sample making best use of the open sharing nature of social media are closed-access, with the leading open access journals on the list lagging behind by comparison. We offer a partial interpretation for this and discuss other findings of our survey, provide some recommendations to journals wanting to use social media, and finally present some future research directions. Conclusions: Journals should not underestimate the potential of social media as a powerful means of reaching out to their readership.
---
paper_title: Twitter as a tool for ophthalmologists
paper_content:
Twitter is a social media web site created in 2006 that allows users to post Tweets, which are text-based messages containing up to 140 characters. It has grown exponentially in popularity; now more than 340 million Tweets are sent daily, and there are more than 140 million users. Twitter has become an important tool in medicine in a variety of contexts, allowing medical journals to engage their audiences, conference attendees to interact with one another in real time, and physicians to have the opportunity to interact with politicians, organizations, and the media in a manner that can be freely observed. There are also tremendous research opportunities since Twitter contains a database of public opinion that can be mined by keywords and hashtags. This article serves as an introduction to Twitter and surveys the peer-reviewed literature concerning its various uses and original studies. Opportunities for use in ophthalmology are outlined, and a recommended list of ophthalmology feeds on Twitter is presented. Overall, Twitter is an underutilized resource in ophthalmology and has the potential to enhance professional collegiality, advocacy, and scientific research.
---
paper_title: Global Emergency Medicine Journal Club: A Social Media Discussion About the Age-Adjusted D-Dimer Cutoff Levels to Rule Out Pulmonary Embolism Trial
paper_content:
Study objective: Annals of Emergency Medicine collaborated with an educational Web site, Academic Life in Emergency Medicine (ALiEM), to host an online discussion session featuring the 2014 Journal of the American Medical Association publication on the Age-Adjusted D-Dimer Cutoff Levels to Rule Out Pulmonary Embolism (ADJUST-PE) trial by Righini et al. The objective is to describe a 14-day (August 25 to September 7, 2014) worldwide academic dialogue among clinicians in regard to 4 preselected questions about the age-adjusted D-dimer cutoff to detect pulmonary embolism. Methods: Five online facilitators hosted the multimodal discussion on the ALiEM Web site, Twitter, and Google Hangout. Comments across the social media platforms were curated for this report, as framed by the 4 preselected questions, and engagement was tracked through various Web analytic tools. Results: Blog and Twitter comments, as well as video expert commentary involving the ADJUST-PE trial, are summarized. The dialogue resulted in 1,169 page views from 391 cities in 52 countries on the ALiEM Web site, 502,485 Twitter impressions, and 159 views of the video interview with experts. A postdiscussion summary on the Journal Jam podcast resulted in 3,962 downloads in its first week of publication during September 16 to 23, 2014. Conclusion: Common themes that arose in the multimodal discussions included the heterogeneity of practices, D-dimer assays, provider knowledge about these assays, and prevalence rates in different areas of the world. This educational approach using social media technologies demonstrates a free, asynchronous means to engage a worldwide audience in scholarly discourse.
---
paper_title: Increased Use of Twitter at a Medical Conference: A Report and a Review of the Educational Opportunities
paper_content:
BACKGROUND: Most consider Twitter as a tool purely for social networking. However, it has been used extensively as a tool for online discussion at nonmedical and medical conferences, and the academic benefits of this tool have been reported. Most anesthetists still have yet to adopt this new educational tool. There is only one previously published report of the use of Twitter by anesthetists at an anesthetic conference. This paper extends that work. OBJECTIVE: We report the uptake and growth in the use of Twitter, a microblogging tool, at an anesthetic conference and review the potential use of Twitter as an educational tool for anesthetists. METHODS: A unique Twitter hashtag (#WSM12) was created and promoted by the organizers of the Winter Scientific Meeting held by The Association of Anaesthetists of Great Britain and Ireland (AAGBI) in London in January 2012. Twitter activity was compared with Twitter activity previously reported for the AAGBI Annual Conference (September 2011 in Edinburgh). All tweets posted were categorized according to the person making the tweet and the purpose for which they were being used. The categories were determined from a literature review. RESULTS: A total of 227 tweets were posted under the #WSM12 hashtag representing a 530% increase over the previously reported anesthetic conference. Sixteen people joined the Twitter stream by using this hashtag (300% increase). Excellent agreement (κ = 0.924) was seen in the classification of tweets across the 11 categories. Delegates primarily tweeted to create and disseminate notes and learning points (55%), describe which session was attended, undertake discussions, encourage speakers, and for social reasons. In addition, the conference organizers, trade exhibitors, speakers, and anesthetists who did not attend the conference all contributed to the Twitter stream. The combined total number of followers of those who actively tweeted represented a potential audience of 3603 people. CONCLUSIONS: This report demonstrates an increase in uptake and growth in the use of Twitter at an anesthetic conference and the review illustrates the opportunities and benefits for medical education in the future.
---
paper_title: Social media: A tool to spread information: A case study analysis of Twitter conversation at the Cardiac Society of Australia & New Zealand 61st Annual Scientific Meeting 2013
paper_content:
Background: The World Wide Web has changed the way in which people communicate and consume information. More importantly, this innovation has increased the speed and spread of information. There has been a recent increase in the percentage of cardiovascular professionals, including journals and associations, using Twitter to engage with others and exchange ideas. Evaluating the reach and impact in scientific meetings is important in promoting the use of social media. Objective: This study evaluated Twitter use during the recent 61st Annual Scientific Meeting at the Cardiac Society of Australia and New Zealand. Methods: During the Cardiac Society of Australia and New Zealand 2013 61st Annual Scientific Meeting, Symplur was used to curate conversations that were publicly posted with the hashtag #CSANZ2013. The hashtag was monitored with analysis focused on the influencers, latest tweets, tweet statistics, activity comparisons, and tweet activity during the conference. Additionally, Radian6 social media listening software was used to collect data. A summary is provided. Results: There were 669 total tweets sent from 107 unique Twitter accounts during 8th August 9 a.m. to 11th August 1 p.m. This averaged nine tweets per hour and six tweets per participant. This assisted in the sharing of ideas and disseminating the findings and conclusions from presenters at the conference, with a total of 1,432,573 potential impressions in Twitter users' tweet streams. Conclusion: This analysis of Twitter conversations during a recent scientific meeting highlights the significance and place of social media within research dissemination and collaboration. Researchers and clinicians should consider using this technology to enhance timely communication of findings. The potential to engage with consumers and enhance shared decision-making should be explored further.
---
paper_title: Monitoring Academic Conferences: Real-Time Visualization and Retrospective Analysis of Backchannel Conversations
paper_content:
Social-media-supported academic conferences are becoming increasingly global as people anywhere can participate actively through backchannel conversation. It can be challenging for the conference organizers to integrate the use of social media, to take advantage of the connections between backchannel and front stage, and to encourage the participants to be a part of the broader discussion occurring through social media. The backchannel conversation during an academic conference can offer key insights on best practices, and specialized tools and methods are needed to analyze this data. In this paper we present our twofold contribution to enable organizers to gain such insights. First, we introduce Conference Monitor (CM), a real-time web-based tweet visualization dashboard to monitor the backchannel conversation during academic conferences. We demonstrate the features of CM, which are designed to help monitor academic conferences, and its application during the conference Theorizing the Web 2012 (TtW12). Its real-time visualizations helped identify the popular sessions, the active and important participants, and trending topics. Second, we report on our retrospective analysis of the tweets about the TtW12 conference and the conference-related follower-networks. The 4828 tweets from 593 participants resulted in 8.14 tweets per participant. The 1591 new follower-relations created among the participants during the conference confirmed the overall high volume of new connections created during academic conferences. On average a speaker got more new followers than a non-speaker. A few remote participants also gained a comparatively large number of new followers due to the content of their tweets and their perceived importance. There was a positive correlation between the number of new followers of a participant and the number of people who mentioned him/her. Remote participants had a significant level of participation in the backchannel and live streaming helped them to be more engaged.
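An editorial sketch of the kind of follower-network comparison described in this abstract is given below; the participants, edges, and snapshot structure are hypothetical and do not come from the TtW12 data.

```python
# Hedged sketch (hypothetical snapshots, not the paper's dataset): count the
# new follower relations each participant gained between two snapshots of a
# conference follower network.
import networkx as nx

before = nx.DiGraph([("alice", "bob"), ("carol", "bob")])  # follower -> followee
after = nx.DiGraph([
    ("alice", "bob"), ("carol", "bob"),
    ("dave", "alice"), ("erin", "alice"), ("dave", "bob"),
])

new_edges = set(after.edges()) - set(before.edges())
new_followers = {}
for follower, followee in new_edges:
    new_followers[followee] = new_followers.get(followee, 0) + 1

print(new_followers)  # e.g. {'alice': 2, 'bob': 1}
```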
---
paper_title: Social media and scholarly reading
paper_content:
Purpose – The purpose of this paper is to examine how often university academic staff members use and create various forms of social media for their work and how that use influences their use of traditional scholarly information sources. Design/methodology/approach – This article is based on a 2011 academic reading study conducted at six higher learning institutions in the United Kingdom. Approximately 2,000 respondents completed the web-based survey. The study used the critical incident of last reading by academics to gather information on the purpose, outcomes, and values of scholarly readings and access to library collections. In addition, academics were asked about their use and creation of social media as part of their work activities. The authors looked at six categories of social media – blogs, videos/YouTube, RSS feeds, Twitter feeds, user comments in articles, podcasts, and other. This article focuses on the influence of social media on scholarly reading patterns. Findings – Most UK academics use o...
---
paper_title: Identifying and analyzing researchers on twitter
paper_content:
For millions of users, Twitter is an important communication platform, a social network, and a system for resource sharing. Likewise, scientists use Twitter to connect with other researchers, announce calls for papers, or share their thoughts. Filtering tweets, discovering other researchers, or finding relevant information on a topic of interest, however, is difficult since no directory of researchers on Twitter exists. In this paper we present an approach to identify Twitter accounts of researchers and demonstrate its utility for the discipline of computer science. Based on a seed set of computer science conferences we collect relevant Twitter users which we can partially map to ground-truth data. The mapping is leveraged to learn a model for classifying the remaining users. To gain first insights into how researchers use Twitter, we empirically analyze the identified users and compare their age, popularity, influence, and social network.
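A minimal, hypothetical sketch of the kind of supervised classification described here is shown below; the features, numbers, and model choice are illustrative assumptions, not the paper's actual implementation.

```python
# Hedged sketch (hypothetical features and labels, not the paper's code):
# classify Twitter accounts as researcher vs non-researcher from simple
# numeric profile features, trained on a small labelled seed set.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Columns: followers, friends, tweet count, account age in days (made up)
X = np.array([
    [320, 150, 900, 1200],
    [45, 80, 120, 300],
    [1500, 400, 5000, 2000],
    [60, 300, 80, 150],
    [700, 220, 2500, 1800],
    [30, 50, 40, 100],
])
y = np.array([1, 0, 1, 0, 1, 0])  # 1 = researcher account (seed-set label)

clf = LogisticRegression(max_iter=1000).fit(X, y)

new_account = np.array([[800, 250, 3000, 1600]])  # hypothetical unlabelled account
print("researcher probability:", clf.predict_proba(new_account)[0, 1])
```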
---
paper_title: Adoption and use of Web 2.0 in scholarly communications
paper_content:
Sharing research resources of different kinds, in new ways, and on an increasing scale, is a central element of the unfolding e-Research vision. Web 2.0 is seen as providing the technical platform to enable these new forms of scholarly communications. We report findings from a study of the use of Web 2.0 services by UK researchers and their use in novel forms of scholarly communication. We document the contours of adoption, the barriers and enablers, and the dynamics of innovation in Web services and scholarly practices. We conclude by considering the steps that different stakeholders might take to encourage greater experimentation and uptake.
---
paper_title: Social media use in the research workflow
paper_content:
The paper reports on a major international survey, covering 2,000 researchers, which investigated the use of social media in the research workflow. The topic is the second to emerge from the Charleston Observatory, the research adjunct of the popular annual Charleston Conference (http://www.katina.info/conference/). The study shows that social media have found serious application at all points of the research lifecycle, from identifying research opportunities to disseminating findings at the end. The three most popular social media tools in a research setting were those for collaborative authoring, conferencing, and scheduling meetings. The most popular brands used tend to be mainstream anchor technologies or 'household brands', such as Twitter. Age is a poor predictor of social media use in a research context, and humanities and social science scholars avail themselves most of social media. Journals, conference proceedings, and edited books remain the core traditional means of disseminating research, with institutional repositories highly valued as well, but social media have become important complementary channels for disseminating and discovering research.
---
paper_title: Translating Research For Health Policy: Researchers’ Perceptions And Use Of Social Media
paper_content:
As the United States moves forward with health reform, the communication gap between researchers and policy makers will need to be narrowed to promote policies informed by evidence. Social media represent an expanding channel for communication. Academic journals, public health agencies, and health care organizations are increasingly using social media to communicate health information. For example, the Centers for Disease Control and Prevention now regularly tweets to 290,000 followers. We conducted a survey of health policy researchers about using social media and two traditional channels (traditional media and direct outreach) to disseminate research findings to policy makers. Researchers rated the efficacy of the three dissemination methods similarly but rated social media lower than the other two in three domains: researchers’ confidence in their ability to use the method, peers’ respect for its use, and how it is perceived in academic promotion. Just 14 percent of our participants reported tweeting, ...
---
paper_title: Adoption and use of Web 2.0 in scholarly communications
paper_content:
Sharing research resources of different kinds, in new ways, and on an increasing scale, is a central element of the unfolding e-Research vision. Web 2.0 is seen as providing the technical platform to enable these new forms of scholarly communications. We report findings from a study of the use of Web 2.0 services by UK researchers and their use in novel forms of scholarly communication. We document the contours of adoption, the barriers and enablers, and the dynamics of innovation in Web services and scholarly practices. We conclude by considering the steps that different stakeholders might take to encourage greater experimentation and uptake.
---
paper_title: Who reads research articles? An altmetrics analysis of Mendeley user categories
paper_content:
Little detailed information is known about who reads research articles and the contexts in which research articles are read. Using data about people who register in Mendeley as readers of articles, this article explores different types of users of Clinical Medicine, Engineering and Technology, Social Science, Physics, and Chemistry articles inside and outside academia. The majority of readers for all disciplines were PhD students, postgraduates, and postdocs but other types of academics were also represented. In addition, many Clinical Medicine articles were read by medical professionals. The highest correlations between citations and Mendeley readership counts were found for types of users who often authored academic articles, except for associate professors in some sub-disciplines. This suggests that Mendeley readership can reflect usage similar to traditional citation impact if the data are restricted to readers who are also authors without the delay of impact measured by citation counts. At the same time, Mendeley statistics can also reveal the hidden impact of some research articles, such as educational value for nonauthor users inside academia or the impact of research articles on practice for readers outside academia.
---
paper_title: Connected scholars: Examining the role of social media in research practices of faculty using the UTAUT model
paper_content:
Social media has become mainstream in recent years, and its adoption has skyrocketed. Following this trend among the general public, scholars are also increasingly adopting these tools for their professional work. The current study seeks to learn if, why and how scholars are using social media for communication and information dissemination, as well as validate and update the results of previous scholarship in this area. The study is based on the content analysis of 51 semi-structured interviews of scholars in the Information Science and Technology field. Unlike previous studies, the current work aims not only to highlight the specific social media tools used, but also discover factors that influence intention and use of social media by scholars. To achieve this, the paper uses the Unified Theory of Acceptance and Use of Technology (UTAUT), a widely adopted technology acceptance theory. This paper contributes new knowledge to methodological discussions as it is the first known study to employ UTAUT to interpret scholarly use of social media. It also offers recommendations about how UTAUT can be expanded to better fit examinations of social media use within scholarly practices.
---
paper_title: Assessing the Impact of Publications Saved by Mendeley Users: Is There Any Different Pattern Among Users?
paper_content:
The main focus of this paper is to investigate the impact of publications read (saved) by different users in Mendeley, in order to explore the extent to which their readership counts correlate with their citation indicators. The potential of filtering highly cited papers by Mendeley readerships and its different users has also been explored. For the analysis of the users, we have considered the information on the top three Mendeley ‘users’ reported by Mendeley. Our results show that publications with Mendeley readerships tend to have higher citation and journal citation scores than publications without readerships. ‘Biomedical & health sciences’ and ‘Mathematics and computer science’ are the fields with, respectively, the most and the least readership activity in Mendeley. PhD students have the highest density of readerships per publication, and Lecturers and Librarians have the lowest, across all the different fields. Our precision-recall analysis indicates that, in general, for publications with at least one reader in Mendeley, the capacity of readership counts to filter highly cited publications is better than (or at least as good as) that of Journal Citation Scores. We discuss the important limitation that Mendeley reports only the top three readers, rather than all of them, for the potential development of indicators based on Mendeley and its users.
---
paper_title: Presenting professorship on social media: from content and strategy to evaluation
paper_content:
Technology has helped to reform class dynamics and teacher-student relationships. Although the phenomenon of online presentation has drawn considerable scholarly attention, academics seem to have an incomplete understanding about their own presentations online, especially in using social media. Without a thorough examination of how academics present themselves on social media, our understanding of the online learning environment is limited. To fill this void, this study aims to explore the following: (1) the content that college professors provide and the strategy they employ to present it on social media; and (2) how the public evaluates professors based on the content and strategy presented on social media. This study utilizes two methods. First, it conducts a content analysis of 2,783 pieces of microblog posts from 142 full-time communication professors' microblog accounts. Second, it conducts an online experiment based on a between-subject factorial design of 2 (gender: male vs. female) X 3 (topic: pe...
---
paper_title: Science blogging: an exploratory study of motives, styles, and audience reactions
paper_content:
This paper presents results from three studies on science blogging, the use of blogs for science communication. A survey addresses the views and motives of science bloggers, a first content analysis examines material published in science blogging platforms, while a second content analysis looks at reader responses to controversial issues covered in science blogs. Bloggers determine to a considerable degree which communicative function their blog can realize and how accessible it will be to non-experts. Frequently, readers are interested in adding their views to a post, a form of involvement which is in turn welcomed by the majority of bloggers.
---
paper_title: Who Tweets about Science?
paper_content:
Twitter is currently one of the primary venues for online information dissemination. Although its detractors portray it as nothing more than an exercise in narcissism and banality, Twitter is also used to share news stories and other information that may be of interest to a person’s followers. The current study sampled tweeters who had tweeted at least one link to an article in one of four leading journals, with a focus on studying who, precisely, these tweeters were. The results showed that approximately 76% of the sampled accounts were maintained by individuals (rather than organizations), 67% of these accounts were maintained by a single man, and 34.4% of the individuals were identified as possessing a Ph.D, suggesting that the population of Twitter users who tweet links to academic articles does not reflect the demographics of the general public. In addition, the vast majority of students and academics were associated with some form of science, indicating that interest in scientific journals is limited to individuals in related fields of study. Conference Topic Altmetrics
---
paper_title: Adoption and use of Web 2.0 in scholarly communications
paper_content:
Sharing research resources of different kinds, in new ways, and on an increasing scale, is a central element of the unfolding e-Research vision. Web 2.0 is seen as providing the technical platform to enable these new forms of scholarly communications. We report findings from a study of the use of Web 2.0 services by UK researchers and their use in novel forms of scholarly communication. We document the contours of adoption, the barriers and enablers, and the dynamics of innovation in Web services and scholarly practices. We conclude by considering the steps that different stakeholders might take to encourage greater experimentation and uptake.
---
paper_title: Examining the Medical Blogosphere: An Online Survey of Medical Bloggers
paper_content:
Background: Blogs are the major contributors to the large increase of new websites created each year. Most blogs allow readers to leave comments and, in this way, generate both conversation and encourage collaboration. Despite their popularity, however, little is known about blogs or their creators. Objectives: To contribute to a better understanding of the medical blogosphere by investigating the characteristics of medical bloggers and their blogs, including bloggers’ Internet and blogging habits, their motivations for blogging, and whether or not they follow practices associated with journalism. Methods: We approached 197 medical bloggers of English-language medical blogs which provided direct contact information, with posts published within the past month. The survey included 37 items designed to evaluate data about Internet and blogging habits, blog characteristics, blogging motivations, and, finally, the demographic data of bloggers. Pearson’s Chi-Square test was used to assess the significance of an association between 2 categorical variables. Spearman’s rank correlation coefficient was utilized to reveal the relationship between participants’ ages, as well as the number of maintained blogs, and their motivation for blogging. The Mann-Whitney U test was employed to reveal relationships between practices associated with journalism and participants’ characteristics like gender and pseudonym use. Results: A total of 80 (42%) of 197 eligible participants responded. The majority of responding bloggers were white (75%), highly educated (71% with a Masters degree or doctorate), male (59%), residents of the United States (72%), between the ages of 30 and 49 (58%), and working in the healthcare industry (67%). Most of them were experienced bloggers, with 23% (18/80) blogging for 4 or more years, 38% (30/80) for 2 or 3 years, 32% (26/80) for about a year, and only 7% (6/80) for 6 months or less. Those who received attention from the news media numbered 66% (53/80). When it comes to best practices associated with journalism, the participants most frequently reported including links to original source of material and spending extra time verifying facts, while rarely seeking permission to post copyrighted material. Bloggers who have published a scientific paper were more likely to quote other people or media than those who have never published such a paper (U= 506.5, n1= 41, n2= 35, P= .016). Those blogging under their real name more often included links to original sources than those writing under a pseudonym (U= 446.5, n1= 58, n2= 19, P= .01). Major motivations for blogging were sharing practical knowledge or skills with others, influencing the way others think, and expressing oneself creatively. Conclusions: Medical bloggers are highly educated and devoted blog writers, faithful to their sources and readers. Sharing practical knowledge and skills, as well as influencing the way other people think, were major motivations for blogging among our medical bloggers. Medical blogs are frequently picked up by mainstream media; thus, blogs are an important vehicle to influence medical and health policy. [J Med Internet Res 2008;10(3):e28]
---
paper_title: Drivers of Higher Education Institutions' Visibility: A Study of UK HEIs Social Media Use vs. Organizational Characteristics
paper_content:
Social media is increasingly used in higher education settings by researchers, students and institutions. Whether it is researchers conversing with other researchers, or universities seeking to communicate to a wider audience, social media platforms serve as a tool for users to communicate and increase visibility. Scholarly communication in social media and investigations of social media metrics are of increasing interest to scientometric researchers and have contributed to the emergence of altmetrics. Less understood is the role of organizational characteristics in garnering social media visibility, through, for instance, liking and following mechanisms. In this study we aim to contribute to the understanding of the effect of specific social media use by investigating higher education institutions’ presence on Twitter. We investigate the possible connections between followers on Twitter, the use of Twitter, and the organizational characteristics of the HEIs. We find that HEIs’ social media visibility on Twitter is only partly explained by social media use and that organizational characteristics also play a role in garnering these followers. There is, however, an advantage in garnering followers for early adopters of Twitter. These findings emphasize the importance of considering a range of factors to understand online impact for organizations, and HEIs in particular.
---
paper_title: Research Blogs and the Discussion of Scholarly Information
paper_content:
The research blog has become a popular mechanism for the quick discussion of scholarly information. However, unlike peer-reviewed journals, the characteristics of this form of scientific discourse are not well understood, for example in terms of the spread of blogger levels of education, gender and institutional affiliations. In this paper we fill this gap by analyzing a sample of blog posts discussing science via an aggregator called ResearchBlogging.org (RB). ResearchBlogging.org aggregates posts based on peer-reviewed research and allows bloggers to cite their sources in a scholarly manner. We studied the bloggers, blog posts and referenced journals of bloggers who posted at least 20 items. We found that RB bloggers show a preference for papers from high-impact journals and blog mostly about research in the life and behavioral sciences. The most frequently referenced journal sources in the sample were: Science, Nature, PNAS and PLoS One. Most of the bloggers in our sample had active Twitter accounts connected with their blogs, and at least 90% of these accounts connect to at least one other RB-related Twitter account. The average RB blogger in our sample is male, either a graduate student or has been awarded a PhD and blogs under his own name.
---
paper_title: Discovering value in academic social networks: A case study in ResearchGate
paper_content:
The research presented in this paper is about detecting collaborative networks inside the structure of a research social network. As a case study we consider ResearchGate and SEE University academic staff. First, we describe the methodology used to crawl and create an academic-academic network based on the academics' fields of interest. We then calculate and discuss four social network analysis centrality measures (closeness, betweenness, degree, and PageRank) for entities in this network. In addition to these metrics, we have also investigated the grouping of individuals, based on automatic clustering of their reciprocal relationships.
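As a rough illustration of the four centrality measures named in this abstract, the sketch below computes them with networkx on a small invented graph; the crawled ResearchGate network and its exact construction are not reproduced here.

```python
# Toy illustration of the four centrality measures mentioned above
# (closeness, betweenness, degree, PageRank) on a small undirected graph.
# The graph is invented; it stands in for the crawled academic network.
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("A", "B"), ("A", "C"), ("B", "C"),   # a tightly knit group
    ("C", "D"), ("D", "E"), ("E", "F"),   # a chain reaching the periphery
])

measures = {
    "degree": nx.degree_centrality(G),
    "closeness": nx.closeness_centrality(G),
    "betweenness": nx.betweenness_centrality(G),
    "pagerank": nx.pagerank(G),
}

# Report the most central node under each measure.
for name, scores in measures.items():
    top = max(scores, key=scores.get)
    print(f"{name:12s} top node: {top} ({scores[top]:.3f})")
```

On a real academic network the same calls apply unchanged; only the graph construction (who is connected to whom, and whether edges are weighted) differs.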
---
paper_title: Social media and scholarly reading
paper_content:
Purpose – The purpose of this paper is to examine how often university academic staff members use and create various forms of social media for their work and how that use influences their use of traditional scholarly information sources.Design/methodology/approach – This article is based on a 2011 academic reading study conducted at six higher learning institutions in the United Kingdom. Approximately 2,000 respondents completed the web‐based survey. The study used the critical incident of last reading by academics to gather information on the purpose, outcomes, and values of scholarly readings and access to library collections. In addition, academics were asked about their use and creation of social media as part of their work activities. The authors looked at six categories of social media – blogs, videos/YouTube, RSS feeds, Twitter feeds, user comments in articles, podcasts, and other. This article focuses on the influence of social media on scholarly reading patterns.Findings – Most UK academics use o...
---
paper_title: Disciplinary differences in Twitter scholarly communication
paper_content:
This paper investigates disciplinary differences in how researchers use the microblogging site Twitter. Tweets from selected researchers in ten disciplines (astrophysics, biochemistry, digital humanities, economics, history of science, cheminformatics, cognitive science, drug discovery, social network analysis, and sociology) were collected and analyzed both statistically and qualitatively. The researchers tended to share more links and retweet more than the average Twitter users in earlier research and there were clear disciplinary differences in how they used Twitter. Biochemists retweeted substantially more than researchers in the other disciplines. Researchers in digital humanities and cognitive science used Twitter more for conversations, while researchers in economics shared the most links. Finally, whilst researchers in biochemistry, astrophysics, cheminformatics and digital humanities seemed to use Twitter for scholarly communication, scientific use of Twitter in economics, sociology and history of science appeared to be marginal.
---
paper_title: Adoption and use of Web 2.0 in scholarly communications
paper_content:
Sharing research resources of different kinds, in new ways, and on an increasing scale, is a central element of the unfolding e-Research vision. Web 2.0 is seen as providing the technical platform to enable these new forms of scholarly communications. We report findings from a study of the use of Web 2.0 services by UK researchers and their use in novel forms of scholarly communication. We document the contours of adoption, the barriers and enablers, and the dynamics of innovation in Web services and scholarly practices. We conclude by considering the steps that different stakeholders might take to encourage greater experimentation and uptake.
---
paper_title: Research Blogging: Indexing and Registering the Change in Science 2.0
paper_content:
Increasing public interest in science information in a digital and 2.0 science era promotes a dramatic, rapid and deep change in science itself. The emergence and expansion of new technologies and internet-based tools is leading to new means to improve scientific methodology and communication, assessment, promotion and certification. It allows methods of acquisition, manipulation and storage, generating vast quantities of data that can further facilitate the research process. It also improves access to scientific results through information sharing and discussion. Content previously restricted only to specialists is now available to a wider audience. This context requires new management systems to make scientific knowledge more accessible and useable, including new measures to evaluate the reach of scientific information. The new science and research quality measures are strongly related to the new online technologies and services based in social media. Tools such as blogs, social bookmarks and online reference managers, Twitter and others offer alternative, transparent and more comprehensive information about the active interest, usage and reach of scientific publications. Another of these new filters is the Research Blogging platform, which was created in 2007 and now has over 1,230 active blogs, with over 26,960 entries posted about peer-reviewed research on subjects ranging from Anthropology to Zoology. This study takes a closer look at RB, in order to get insights into its contribution to the rapidly changing landscape of scientific communication.
---
paper_title: Who reads research articles? An altmetrics analysis of Mendeley user categories
paper_content:
Little detailed information is known about who reads research articles and the contexts in which research articles are read. Using data about people who register in Mendeley as readers of articles, this article explores different types of users of Clinical Medicine, Engineering and Technology, Social Science, Physics, and Chemistry articles inside and outside academia. The majority of readers for all disciplines were PhD students, postgraduates, and postdocs but other types of academics were also represented. In addition, many Clinical Medicine articles were read by medical professionals. The highest correlations between citations and Mendeley readership counts were found for types of users who often authored academic articles, except for associate professors in some sub-disciplines. This suggests that Mendeley readership can reflect usage similar to traditional citation impact if the data are restricted to readers who are also authors without the delay of impact measured by citation counts. At the same time, Mendeley statistics can also reveal the hidden impact of some research articles, such as educational value for nonauthor users inside academia or the impact of research articles on practice for readers outside academia.
---
paper_title: Mendeley readership altmetrics for the social sciences and humanities: Research evaluation and knowledge flows
paper_content:
Although there is evidence that counting the readers of an article in the social reference site, Mendeley, may help to capture its research impact, the extent to which this is true for different scientific fields is unknown. In this study, we compare Mendeley readership counts with citations for different social sciences and humanities disciplines. The overall correlation between Mendeley readership counts and citations for the social sciences was higher than for the humanities. Low and medium correlations between Mendeley bookmarks and citation counts in all the investigated disciplines suggest that these measures reflect different aspects of research impact. Mendeley data were also used to discover patterns of information flow between scientific fields. Comparing information flows based on Mendeley bookmarking data and cross-disciplinary citation analysis for the disciplines revealed substantial similarities and some differences. Thus, the evidence from this study suggests that Mendeley readership data could be used to help capture knowledge transfer across scientific disciplines, especially for people that read but do not author articles, as well as giving impact evidence at an earlier stage than is possible with citation counts.
---
paper_title: Research Blogging: Indexing and Registering the Change in Science 2.0
paper_content:
Increasing public interest in science information in a digital and 2.0 science era promotes a dramatic, rapid and deep change in science itself. The emergence and expansion of new technologies and internet-based tools is leading to new means to improve scientific methodology and communication, assessment, promotion and certification. It allows methods of acquisition, manipulation and storage, generating vast quantities of data that can further facilitate the research process. It also improves access to scientific results through information sharing and discussion. Content previously restricted only to specialists is now available to a wider audience. This context requires new management systems to make scientific knowledge more accessible and useable, including new measures to evaluate the reach of scientific information. The new science and research quality measures are strongly related to the new online technologies and services based in social media. Tools such as blogs, social bookmarks and online reference managers, Twitter and others offer alternative, transparent and more comprehensive information about the active interest, usage and reach of scientific publications. Another of these new filters is the Research Blogging platform, which was created in 2007 and now has over 1,230 active blogs, with over 26,960 entries posted about peer-reviewed research on subjects ranging from Anthropology to Zoology. This study takes a closer look at RB, in order to get insights into its contribution to the rapidly changing landscape of scientific communication.
---
paper_title: Geographic variation in social media metrics: an analysis of Latin American journal articles
paper_content:
Purpose – The purpose of this study is to contribute to the understanding of how the potential of altmetrics varies around the world by measuring the percentage of articles with non-zero metrics (coverage) for articles published from a developing region (Latin America). Design/methodology/approach – This study uses article metadata from a prominent Latin American journal portal, SciELO, and combines it with altmetrics data from Altmetric.com and with data collected by author-written scripts. The study is primarily descriptive, focusing on coverage levels disaggregated by year, country, subject area, and language. Findings – Coverage levels for most of the social media sources studied was zero or negligible. Only three metrics had coverage levels above 2 per cent – Mendeley, Twitter, and Facebook. Of these, Twitter showed the most significant differences with previous studies. Mendeley coverage levels reach those found by previous studies, but it takes up to two years longer for articles to be saved in the...
---
paper_title: Exploring altmetrics in an emerging country context
paper_content:
The study of altmetrics is relatively new, and the little that is known about altmetrics is only known for journals and articles from a limited set of contexts (publication venues and subject areas). The use of PLOS, arXiv.org, PubMed, Web of Science, or of a few well-established journals like Nature and Science introduces a selection bias that calls into question the generalizability of reported results. For example, we already know that altmetrics related to mentions in blogs are heavily influenced by the makeup of bloggers themselves and the journals they tend to blog about, both of which introduce a strong bias in favour high-impact life science journals (Shema et al., 2012). There is therefore a need to study the altmetrics of journals and articles published and read in other contexts, including research that is published and read in regions of the world beyond the global North, as well as in other languages beyond English.
---
paper_title: The metric tide: report of the independent review of the role of metrics in research assessment and management
paper_content:
This report presents the findings and recommendations of the Independent Review of the Role of Metrics in Research Assessment and Management. The review was chaired by Professor James Wilsdon, supported by an independent and multidisciplinary group of experts in scientometrics, research funding, research policy, publishing, university management and administration. This review has gone beyond earlier studies to take a deeper look at potential uses and limitations of research metrics and indicators. It has explored the use of metrics across different disciplines, and assessed their potential contribution to the development of research excellence and impact. It has analysed their role in processes of research assessment, including the next cycle of the Research Excellence Framework (REF). It has considered the changing ways in which universities are using quantitative indicators in their management systems, and the growing power of league tables and rankings. And it has considered the negative or unintended effects of metrics on various aspects of research culture. The report starts by tracing the history of metrics in research management and assessment, in the UK and internationally. It looks at the applicability of metrics within different research cultures, compares the peer review system with metric-based alternatives, and considers what balance might be struck between the two. It charts the development of research management systems within institutions, and examines the effects of the growing use of quantitative indicators on different aspects of research culture, including performance management, equality, diversity, interdisciplinarity, and the ‘gaming’ of assessment systems. The review looks at how different funders are using quantitative indicators, and considers their potential role in research and innovation policy. Finally, it examines the role that metrics played in REF2014, and outlines scenarios for their contribution to future exercises.
---
paper_title: Academic sell-out: How an obsession with metrics and rankings is damaging academia
paper_content:
Increasingly, academics have to demonstrate that their research has academic impact. Universities normally use journal rankings and journal impact factors to assess the research impact of individual academics. More recently, citation counts for individual articles and the h-index have also been used to measure the academic impact of academics. There are, however, several serious problems with relying on journal rankings, journal impact factors and citation counts. For example, articles without any impact may be published in highly ranked journals or journals with high impact factor, whereas articles with high impact could be published in lower ranked journals or journals with low impact factor. Citation counts can also be easily gamed and manipulated, and the h-index disadvantages early career academics. This paper discusses these and several other problems and suggests alternatives such as post-publication peer review and open-access journals.
---
paper_title: The Altmetrics Collection
paper_content:
What paper should I read next? Who should I talk to at a conference? Which research group should get this grant? Researchers and funders alike must make daily judgments on how to best spend their limited time and money, judgments that are becoming increasingly difficult as the volume of scholarly communication increases. Not only does the number of scholarly papers continue to grow, it is joined by new forms of communication from data publications to microblog posts. To deal with incoming information, scholars have always relied upon filters. At first these filters were manually compiled compendia and corpora of the literature. But by the mid-20th century, filters built on manual indexing began to break under the weight of booming postwar science production. Garfield [1] and others pioneered a solution: automated filters that leveraged scientists' own impact judgments, aggregating citations as “pellets of peer recognition” [2]. These citation-based filters have dramatically grown in importance and have become the tenet of how research impact is measured. But, like manual indexing 60 years ago, they may today be failing to keep up with the literature’s growing volume, velocity, and diversity [3]. Citations are heavily gamed [4]–[6], are painfully slow to accumulate [7], and overlook increasingly important societal and clinical impacts [8]. Most importantly, they overlook new scholarly forms like datasets, software, and research blogs that fall outside of the scope of citable research objects. In sum, citations only reflect formal acknowledgment and thus they provide only a partial picture of the science system [9]. Scholars may discuss, annotate, recommend, refute, comment, read, and teach a new finding before it ever appears in the formal citation registry. We need new mechanisms to create a subtler, higher-resolution picture of the science system. The Quest for Better Filters: The scientometrics community has not been blind to the limitations of citation measures, and has collectively proposed methods to gather evidence of broader impacts and provide more detail about the science system: tracking acknowledgements [10], patents [11], mentorships [12], news articles [8], usage in syllabuses [13], and many others, separately and in various combinations [14]. The emergence of the Web, a “nutrient-rich space for scholars” [15], has held particular promise for new filters and lenses on scholarly output. Webometrics researchers have uncovered evidence of informal impact by examining networks of hyperlinks and mentions on the broader Web [16]–[18]. An important strand of webometrics has also examined the properties of article download data [7], [19], [20]. The last several years, however, have presented a promising new approach to gathering fine-grained impact data: tracking large-scale activity around scholarly products in online tools and environments. These tools and environments include, among others: social media like Twitter and Facebook; online reference managers like CiteULike, Zotero, and Mendeley; collaborative encyclopedias like Wikipedia; blogs, both scholarly and general-audience; scholarly social networks, like ResearchGate or Academia.edu; and conference organization sites like Lanyrd.com. Growing numbers of scholars are using these and similar tools to mediate their interaction with the literature. In doing so, they are leaving valuable tracks behind them, tracks with potential to show informal paths of influence with unprecedented speed and resolution. Many of these tools offer open APIs, supporting large-scale, automated mining of online activities and conversations around research objects [21]. Altmetrics [22], [23] is the study and use of scholarly impact measures based on activity in online tools and environments. The term has also been used to describe the metrics themselves; one could propose in plural a “set of new altmetrics.” Altmetrics is in most cases a subset of both scientometrics and webometrics; it is a subset of the latter in that it focuses more narrowly on scholarly influence as measured in online tools and environments, rather than on the Web more generally. Altmetrics may support finer-grained maps of science, broader and more equitable evaluations, and improvements to the peer-review system [24]. On the other hand, the use and development of altmetrics should be pursued with appropriate scientific caution. Altmetrics may face attempts at manipulation similar to what Google must deal with in web search ranking. Addressing such manipulation may, in turn, impact the transparency of altmetrics. New and complex measures may distort our picture of the science system if not rigorously assessed and correctly understood. Finally, altmetrics may promote an evaluation system for scholarship that many argue has become overly focused on metrics.
---
paper_title: What Can Article-Level Metrics Do for You?
paper_content:
Article-level metrics (ALMs) provide a wide range of metrics about the uptake of an individual journal article by the scientific community after publication. They include citations, usage statistics, discussions in online comments and social media, social bookmarking, and recommendations. In this essay, we describe why article-level metrics are an important extension of traditional citation-based journal metrics and provide a number of examples from ALM data collected for PLOS Biology.
---
paper_title: A multi-metric approach for research evaluation
paper_content:
Background information is provided about the Web 2.0 related term altmetrics. This term is placed in the context of the broader field of informetrics. The term influmetrics is proposed as a better term for altmetrics. The importance of considering research products and not just scientific publications is highlighted. Issues related to peer review and making funding decisions within a multi-metric approach are discussed and brought in relation with the new metrics field.
---
paper_title: Ask not what altmetrics can do for you, but what altmetrics can do for developing countries
paper_content:
Editor's Summary: Traditional citation counting for evaluating scholarly impact unfairly benefits those in North America and Europe and shortchanges the alternative scholars of the developing world. Alternative metrics more accurately measure the impact of scholarly writings, better serve all scholars and can foster a research culture that supports national development goals. The current system favors dominant journals and topics of interest to the prevailing scientific community, captured by the leading bibliographic databases. Yet publishing on platforms more open to underrepresented journals and scholars in developing nations would promote a greater range of ideas and scholarly exchange. With facilitating international development in mind, scholarly communication should encourage research on topics of local and national relevance and be presented through globally accessible channels, disseminated by social media. Publishing technology barriers to participation must be lowered. The value of altmetrics will be evident, providing advantages to alternative scholars, serving public needs and revealing scientific contributions long underrepresented in the standard literature.
---
paper_title: Tweeting biomedicine: an analysis of tweets and citations in the biomedical literature
paper_content:
Data collected by social media platforms have been introduced as new sources for indicators to help measure the impact of scholarly research in ways that are complementary to traditional citation analysis. Data generated from social media activities can be used to reflect broad types of impact. This article aims to provide systematic evidence about how often Twitter is used to disseminate information about journal articles in the biomedical sciences. The analysis is based on 1.4 million documents covered by both PubMed and Web of Science and published between 2010 and 2012. The number of tweets containing links to these documents was analyzed and compared to citations to evaluate the degree to which certain journals, disciplines, and specialties were represented on Twitter and how far tweets correlate with citation impact. With less than 10% of PubMed articles mentioned on Twitter, its uptake is low in general but differs between journals and specialties. Correlations between tweets and citations are low, implying that impact metrics based on tweets are different from those based on citations. A framework using the coverage of articles and the correlation between Twitter mentions and citations is proposed to facilitate the evaluation of novel social-media-based metrics.
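The proposed framework rests on two quantities, coverage and the tweet-citation correlation. The sketch below shows how both could be computed for a toy set of articles; the counts are invented, and Spearman's rank correlation is used here as one reasonable choice consistent with the skewed distributions of such data.

```python
# Sketch of the two quantities combined in the proposed framework:
# coverage (share of papers with at least one tweet) and the rank
# correlation between tweet counts and citation counts. Data are made up.
from scipy.stats import spearmanr

tweets    = [0, 0, 3, 1, 0, 12, 0, 2, 0, 5]
citations = [4, 1, 9, 2, 0, 30, 7, 3, 1, 11]

coverage = sum(t > 0 for t in tweets) / len(tweets)
rho, p_value = spearmanr(tweets, citations)

print(f"Twitter coverage: {coverage:.0%}")
print(f"Spearman rho between tweets and citations: {rho:.2f} (p={p_value:.3f})")
```

Low coverage combined with a low correlation, as reported in the abstract, would indicate that tweet-based indicators capture something different from citation impact.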
---
paper_title: Who Tweets about Science?
paper_content:
Twitter is currently one of the primary venues for online information dissemination. Although its detractors portray it as nothing more than an exercise in narcissism and banality, Twitter is also used to share news stories and other information that may be of interest to a person’s followers. The current study sampled tweeters who had tweeted at least one link to an article in one of four leading journals, with a focus on studying who, precisely, these tweeters were. The results showed that approximately 76% of the sampled accounts were maintained by individuals (rather than organizations), 67% of these accounts were maintained by a single man, and 34.4% of the individuals were identified as possessing a Ph.D, suggesting that the population of Twitter users who tweet links to academic articles does not reflect the demographics of the general public. In addition, the vast majority of students and academics were associated with some form of science, indicating that interest in scientific journals is limited to individuals in related fields of study. Conference Topic Altmetrics
---
paper_title: How consistent are altmetrics providers? Study of 1000 PLOS ONE publications using the PLOS ALM, Mendeley and Altmetric.com APIs
paper_content:
Introduction: Altmetrics track the impact of scholarly works on the social web. The term was introduced in 2010 (Priem, et al.) as an alternative way of measuring the broader research impact of scholarly outputs using the social web, aimed at enhancing and complementing the more traditional ways of impact assessment via citations. The initial phase of altmetrics has been characterized by the development of a diversity of tools that aim to track the ‘real-time’ impact of scientific outputs (Wouters & Costas, 2012). Several studies have started to analyze the presence of altmetrics across scientific publications (Priem, Piwowar, & Hemminger, 2012; Zahedi, Costas & Wouters, 2014; Costas, Zahedi, & Wouters, 2014; Thelwall et al., 2013). However, little is still known about the quality of the altmetric data obtained from these providers. Similar metrics appear to differ across providers due to differences in collection time, data sources and methods of collection (Chamberlain, 2013). Hence, assessing the quality, reliability and consistency of altmetric data is crucial before altmetrics can be introduced for research assessment purposes. This study investigates three main altmetrics providers (PLOS ALM, Altmetric.com and Mendeley) and tests the accuracy and quality of their metrics for the same set of publications. The research questions are as follows: 1. Are there differences across these three altmetrics providers in the metrics for the same set of publications? 2. If there are differences, what are possible factors that explain them? Data and Methodology: This study is based on all PLOS ONE publications from 2013 (31,408 articles), retrieved from the full PLOS ALM Dataset on 14 Jan 2014. A random sample of 1,000 publications was extracted from this data set. DOIs were used to collect the metrics automatically from the three providers of altmetrics data (PLOS ALM, Altmetric.com and Mendeley) via their REST APIs. The data collection was performed at the same date and time (11 AM CET on February 11, 2014). The R statistical analysis software version 3.0.2 and the rOpenSci alm package were used to obtain the data from the PLOS ALM REST API v3 and to generate a CSV report. For the Mendeley and Altmetric.com data, the responses to search requests based on individual DOIs were downloaded separately in JavaScript Object Notation (JSON) format and parsed using an additional Java library from within the SAS software. Finally, the data were transformed into comma-separated value (CSV) format and imported into SQL in order to join the files from the three altmetrics providers and to perform further analysis. Results (coverage of PLOS ONE publications across altmetrics providers): Table 1 shows the coverage of the 1,000 PLOS ONE publications by these altmetrics providers. We focused on only three altmetric indicators: Mendeley readerships, Twitter counts and Facebook counts. The number of publications with at least one metric (Mendeley readers, tweet and Facebook counts) shows that Mendeley has the highest coverage, followed by PLOS ALM and Altmetric.com. There are more publications with at least one tweet in Altmetric.com than in PLOS ALM, and the other way around for Facebook counts (PLOS ALM has higher coverage than Altmetric.com).
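As a rough present-day analogue of the per-DOI collection step (the study itself used R's rOpenSci alm package together with SAS/Java tooling), the sketch below queries Altmetric.com's public v1 DOI endpoint and writes a small CSV. The endpoint, response field names, and example DOIs are stated as assumptions; availability, rate limits, and fields may differ from what the original study encountered.

```python
# Hedged Python analogue of the per-DOI collection step. The endpoint below
# is Altmetric.com's public v1 DOI lookup; field names follow its commonly
# documented response and are assumptions here, as are the example DOIs.
import csv
import requests

dois = [
    "10.1371/journal.pone.0057611",   # placeholder DOIs; replace with the sample
    "10.1371/journal.pone.0064841",
]

rows = []
for doi in dois:
    resp = requests.get(f"https://api.altmetric.com/v1/doi/{doi}", timeout=10)
    if resp.status_code == 200:
        data = resp.json()
        rows.append({
            "doi": doi,
            "twitter": int(data.get("cited_by_tweeters_count", 0)),
            "facebook": int(data.get("cited_by_fbwalls_count", 0)),
            "mendeley": int(data.get("readers", {}).get("mendeley", 0)),
        })
    else:  # a 404 typically means the provider has no record for this DOI
        rows.append({"doi": doi, "twitter": 0, "facebook": 0, "mendeley": 0})

with open("altmetric_counts.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["doi", "twitter", "facebook", "mendeley"])
    writer.writeheader()
    writer.writerows(rows)
```

Repeating the same per-DOI collection against each provider at the same point in time, as the study did, is what makes the resulting counts comparable across sources.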
---
paper_title: Research Blogging: Indexing and Registering the Change in Science 2.0
paper_content:
Increasing public interest in science information in a digital and 2.0 science era promotes a dramatic, rapid and deep change in science itself. The emergence and expansion of new technologies and internet-based tools is leading to new means to improve scientific methodology and communication, assessment, promotion and certification. It allows methods of acquisition, manipulation and storage, generating vast quantities of data that can further facilitate the research process. It also improves access to scientific results through information sharing and discussion. Content previously restricted only to specialists is now available to a wider audience. This context requires new management systems to make scientific knowledge more accessible and useable, including new measures to evaluate the reach of scientific information. The new science and research quality measures are strongly related to the new online technologies and services based in social media. Tools such as blogs, social bookmarks and online reference managers, Twitter and others offer alternative, transparent and more comprehensive information about the active interest, usage and reach of scientific publications. Another of these new filters is the Research Blogging platform, which was created in 2007 and now has over 1,230 active blogs, with over 26,960 entries posted about peer-reviewed research on subjects ranging from Anthropology to Zoology. This study takes a closer look at RB, in order to get insights into its contribution to the rapidly changing landscape of scientific communication.
---
paper_title: Drivers of Higher Education Institutions' Visibility: A Study of UK HEIs Social Media Use vs. Organizational Characteristics
paper_content:
Social media is increasingly used in higher education settings by researchers, students and institutions. Whether it is researchers conversing with other researchers, or universities seeking to communicate to a wider audience, social media platforms serve as a tool for users to communicate and increase visibility. Scholarly communication in social media and investigations of social media metrics are of increasing interest to scientometric researchers and have contributed to the emergence of altmetrics. Less understood is the role of organizational characteristics in garnering social media visibility, through, for instance, liking and following mechanisms. In this study we aim to contribute to the understanding of the effect of specific social media use by investigating higher education institutions’ presence on Twitter. We investigate the possible connections between followers on Twitter, the use of Twitter, and the organizational characteristics of the HEIs. We find that HEIs’ social media visibility on Twitter is only partly explained by social media use and that organizational characteristics also play a role in garnering these followers. There is, however, an advantage in garnering followers for early adopters of Twitter. These findings emphasize the importance of considering a range of factors to understand online impact for organizations, and HEIs in particular.
---
paper_title: Altmetrics for large, multidisciplinary research groups: Comparison of current tools
paper_content:
Most altmetric studies compare how often a publication has been cited or mentioned on the Web. Yet, a closer look at altmetric analyses reveals that the altmetric tools employed and the social media platforms considered may have a significant effect on the available information and ensuing interpretation. Therefore, it is warranted to investigate and compare the various tools currently available for altmetric analyses and the social media platforms they draw upon. This paper presents results from a comparative altmetric analysis conducted with four well-established altmetric services on a broad, multidisciplinary sample of scientific publications. Our study reveals that for several data sources the coverage of findable publications on social media platforms and the metric counts (impact) can vary across altmetric data providers. Full text: http://www.bibliometrie-pf.de/article/viewFile/205/258
---
paper_title: Five challenges in altmetrics: A toolmaker's perspective
paper_content:
Jean Liu is data curator and blog editor at Altmetric LLP, and Euan Adie is the founder. They can be reached at http://altmetric.com. Driven by the development of new tools for measuring scholarly attention, altmetrics constitute a burgeoning new area of information science. It is an exciting time to be involved in the field since there are so many opportunities to contribute in innovative ways. We develop altmetrics tools and related services at Altmetric LLP, a small London-based start-up founded in 2011 [1]. Like all developers of new altmetrics tools, we frequently encounter challenges in defining what should be measured, accurately collecting attention from disparate sources and making sense of the huge amount of compiled data. We outline five of these challenges in this piece, illustrating them with examples from our experience. It is worth noting that the altmetrics community as a whole comes together regularly to discuss these and other issues, with two open workshops held in 2012 and more planned for the future.
---
paper_title: The metric tide: report of the independent review of the role of metrics in research assessment and management
paper_content:
This report presents the findings and recommendations of the Independent Review of the Role of Metrics in Research Assessment and Management. The review was chaired by Professor James Wilsdon, supported by an independent and multidisciplinary group of experts in scientometrics, research funding, research policy, publishing, university management and administration. This review has gone beyond earlier studies to take a deeper look at potential uses and limitations of research metrics and indicators. It has explored the use of metrics across different disciplines, and assessed their potential contribution to the development of research excellence and impact. It has analysed their role in processes of research assessment, including the next cycle of the Research Excellence Framework (REF). It has considered the changing ways in which universities are using quantitative indicators in their management systems, and the growing power of league tables and rankings. And it has considered the negative or unintended effects of metrics on various aspects of research culture. The report starts by tracing the history of metrics in research management and assessment, in the UK and internationally. It looks at the applicability of metrics within different research cultures, compares the peer review system with metric-based alternatives, and considers what balance might be struck between the two. It charts the development of research management systems within institutions, and examines the effects of the growing use of quantitative indicators on different aspects of research culture, including performance management, equality, diversity, interdisciplinarity, and the ‘gaming’ of assessment systems. The review looks at how different funders are using quantitative indicators, and considers their potential role in research and innovation policy. Finally, it examines the role that metrics played in REF2014, and outlines scenarios for their contribution to future exercises.
---
paper_title: Tweets as impact indicators: Examining the implications of automated bot accounts on Twitter
paper_content:
This brief communication presents preliminary findings on automated Twitter accounts distributing links to scientific articles deposited on the preprint repository arXiv. It discusses the implications of the presence of such bots from the perspective of social media metrics (altmetrics), where mentions of scholarly documents on Twitter have been suggested as a means of measuring impact that is both broader and timelier than citations. Our results show that automated Twitter accounts create a considerable amount of tweets to scientific articles and that they behave differently than common social bots, which has critical implications for the use of raw tweet counts in research evaluation and assessment. We discuss some definitions of Twitter cyborgs and bots in scholarly communication and propose distinguishing between different levels of engagement, that is, differentiating between tweeting only bibliographic information and discussing or commenting on the content of a scientific work.
---
paper_title: Altmetrics: New Indicators for Scientific Communication in Web 2.0
paper_content:
In this paper we review the so-called altmetrics or alternative metrics. This concept arises from the development of new indicators, based on Web 2.0, for the evaluation of research and academic activity. The basic assumption is that variables such as mentions in blogs, the number of tweets, or the number of researchers bookmarking a research paper may be legitimate indicators for measuring the use and impact of scientific publications. In this sense, these indicators are currently the focus of the bibliometric community and are being discussed and debated. We describe the main platforms and indicators, and we analyze as a sample the Spanish research output in Communication Studies, comparing traditional indicators such as citations with these new indicators. The results show that the most cited papers are also the ones with the highest impact according to the altmetrics. We conclude by pointing out the main shortcomings these metrics present and the role they may play in measuring research impact through 2.0 platforms.
---
paper_title: Assessing the Impact of Publications Saved by Mendeley Users: Is There Any Different Pattern Among Users?
paper_content:
The main focus of this paper is to investigate the impact of publications read (saved) by different users in Mendeley, in order to explore the extent to which their readership counts correlate with their citation indicators. The potential of filtering highly cited papers by Mendeley readerships and its different users has also been explored. For the analysis of the users, we have considered the information on the top three Mendeley ‘users’ reported by Mendeley. Our results show that publications with Mendeley readerships tend to have higher citation and journal citation scores than publications without readerships. ‘Biomedical & health sciences’ and ‘Mathematics and computer science’ are the fields with, respectively, the most and the least readership activity in Mendeley. PhD students have the highest density of readerships per publication, and Lecturers and Librarians have the lowest, across all the different fields. Our precision-recall analysis indicates that, in general, for publications with at least one reader in Mendeley, the capacity of readership counts to filter highly cited publications is better than (or at least as good as) that of Journal Citation Scores. We discuss the important limitation that Mendeley reports only the top three readers, rather than all of them, for the potential development of indicators based on Mendeley and its users.
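The precision-recall analysis mentioned above can be made concrete with a toy calculation: treat "highly cited" as the ground truth and "has at least one Mendeley reader" as the filter. The threshold and counts below are invented for illustration and do not reproduce the paper's data.

```python
# Toy precision/recall calculation in the spirit of the paper's analysis:
# "highly cited" is the ground truth, "at least one Mendeley reader" the filter.
readers   = [5, 0, 2, 0, 9, 1, 0, 3]          # Mendeley readership counts
citations = [20, 1, 15, 0, 40, 2, 12, 18]     # citation counts

highly_cited = [c >= 15 for c in citations]   # assumed threshold for "highly cited"
flagged      = [r >= 1 for r in readers]      # filter: at least one reader

tp = sum(f and h for f, h in zip(flagged, highly_cited))
precision = tp / sum(flagged)                 # how many flagged papers are highly cited
recall    = tp / sum(highly_cited)            # how many highly cited papers are flagged

print(f"precision={precision:.2f} recall={recall:.2f}")
```

Comparing these values against the same calculation using journal citation scores as the filter is, in essence, what the paper's precision-recall comparison does.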
---
paper_title: Altmetrics – a complement to conventional metrics
paper_content:
Emerging article-level metrics do not exclude traditional metrics based on citations to the journal, but complement them. Both can be employed in conjunction to offer a richer picture of an article's use, from the immediate to the long term. Article-level metrics (ALM) are the result of the aggregation of different data sources and the collection of content from multiple social network services. The sources used for the aggregation can be broken down into five categories: usage, captures, mentions, social media and citations. Data sources depend on the tool, but they include classic citation-based indicators, academic social networks (Mendeley, CiteULike, Delicious) and social media (Facebook, Twitter, blogs, or YouTube, among others). Altmetrics is not synonymous with alternative metrics. Altmetrics are normally available early and allow the social impact of scholarly outputs to be assessed almost in real time. This paper briefly overviews the meaning of altmetrics and describes some of the existing tools used to apply these new metrics: Public Library of Science - Article-Level Metrics, Altmetric, Impactstory and Plum.
---
paper_title: Altmetrics: Rethinking the Way We Measure
paper_content:
Altmetrics is the focus of this edition of “Balance Point.” The column editor invited Finbar Galligan, who has gained considerable knowledge of altmetrics, to co-author the column. Altmetrics, their relationship to traditional metrics, their importance, uses, potential impacts, and possible future directions are examined. The authors conclude that altmetrics have an important future role to play and that they offer the potential to revolutionize the analysis of the value and impact of scholarly work.
---
paper_title: Do altmetrics follow the crowd or does the crowd follow altmetrics?
paper_content:
Changes are occurring in scholarly communication as scientific discourse and research activities spread across various social media platforms. In this paper, we study altmetrics on the article and journal levels, investigating whether the online attention received by research articles is related to scholarly impact or may be due to other factors. We define a new metric, Journal Social Impact (JSI), based on eleven data sources: CiteULike, Mendeley, F1000, blogs, Twitter, Facebook, mainstream news outlets, Google Plus, Pinterest, Reddit, and sites running Stack Exchange (Q&A). We compare JSI against diverse citation-based metrics, and find that JSI significantly correlates with a number of them. These findings indicate that online attention of scholarly articles is related to traditional journal rankings and favors journals with a longer history of scholarly impact. We also find that journal-level altmetrics have strong significant correlations among themselves, compared with the weak correlations among article-level altmetrics. Another finding is that Mendeley and Twitter have the highest usage and coverage of scholarly activities. Among individual altmetrics, we find that the readership of academic social networks have the highest correlations with citation-based metrics. Our findings deepen the overall understanding of altmetrics and can assist in validating them.
---
paper_title: Relationship between altmetric and bibliometric indicators across academic social sites: The case of CSIC's members
paper_content:
This study explores the connections between social and usage metrics (altmetrics) and bibliometric indicators at the author level. It studies to what extent these indicators, gathered from academic sites, can provide a proxy for research impact. Close to 10,000 author profiles belonging to the Spanish National Research Council were extracted from the principal scholarly social sites (ResearchGate, Academia.edu and Mendeley) and academic search engines (Microsoft Academic Search and Google Scholar Citations). Results show little overlap between sites because most of the researchers manage only one profile (72%). Correlations indicate that there is scant relationship between altmetric and bibliometric indicators at the author level, because the altmetric ones are site-dependent while the bibliometric ones are more stable across web sites. It is concluded that altmetrics could reflect an alternative dimension of research performance, close, perhaps, to science popularization and networking abilities, but far from citation impact.
---
paper_title: Geographic variation in social media metrics: an analysis of Latin American journal articles
paper_content:
Purpose – The purpose of this study is to contribute to the understanding of how the potential of altmetrics varies around the world by measuring the percentage of articles with non-zero metrics (coverage) for articles published from a developing region (Latin America). Design/methodology/approach – This study uses article metadata from a prominent Latin American journal portal, SciELO, and combines it with altmetrics data from Altmetric.com and with data collected by author-written scripts. The study is primarily descriptive, focusing on coverage levels disaggregated by year, country, subject area, and language. Findings – Coverage levels for most of the social media sources studied were zero or negligible. Only three metrics had coverage levels above 2 per cent – Mendeley, Twitter, and Facebook. Of these, Twitter showed the most significant differences with previous studies. Mendeley coverage levels reach those found by previous studies, but it takes up to two years longer for articles to be saved in the...
---
paper_title: What does Twitter Measure?: Influence of Diverse User Groups in Altmetrics
paper_content:
The most important goal for digital libraries is to ensure a high-quality search experience for all kinds of users. To attain this goal, it is necessary to have as much relevant metadata as possible at hand to assess the quality of publications. Recently, a new group of metrics appeared that has the potential to raise the quality of publication metadata to the next level -- the altmetrics. These metrics try to reflect the impact of publications within the social web. However, it is currently still unclear if and how altmetrics should be used to assess the quality of a publication, and how altmetrics are related to classical bibliographic metrics (such as citations). To gain more insight into what kinds of concepts are reflected by altmetrics, we conducted an in-depth analysis of a real-world dataset crawled from the Public Library of Science (PLOS). In particular, we analyzed whether the common approach of regarding users in the social web as one homogeneous group is sensible, or whether users need to be divided into diverse groups in order to obtain meaningful results.
---
paper_title: Altmetrics in the wild: Using social media to explore scholarly impact
paper_content:
In growing numbers, scholars are integrating social media tools like blogs, Twitter, and Mendeley into their professional communications. The online, public nature of these tools exposes and reifies scholarly processes once hidden and ephemeral. Metrics based on these activities could inform broader, faster measures of impact, complementing traditional citation metrics. This study explores the properties of these social media-based metrics or "altmetrics", sampling 24,331 articles published by the Public Library of Science. We find that different indicators vary greatly in activity. Around 5% of sampled articles are cited in Wikipedia, while close to 80% have been included in at least one Mendeley library. There is, however, an encouraging diversity; a quarter of articles have nonzero data from five or more different sources. Correlation and factor analysis suggest citation and altmetrics indicators track related but distinct impacts, with neither able to describe the complete picture of scholarly use alone. There are moderate correlations between Mendeley and Web of Science citation, but many altmetric indicators seem to measure impact mostly orthogonal to citation. Articles cluster in ways that suggest five different impact "flavors", capturing impacts of different types on different audiences; for instance, some articles may be heavily read and saved by scholars but seldom cited. Together, these findings encourage more research into altmetrics as complements to traditional citation measures.
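The correlation part of an analysis like the one above can be sketched as follows; the indicator columns and values are synthetic, and Spearman rank correlation is used here because altmetric counts are heavily skewed (the study's exact statistical choices may differ).

```python
# Sketch of a pairwise correlation analysis between altmetric indicators and
# citations. All values are synthetic; only the general procedure is illustrated.
import numpy as np
import pandas as pd
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
n = 500
df = pd.DataFrame({
    "wos_citations": rng.poisson(8, n),
    "mendeley_readers": rng.poisson(20, n),
    "twitter_mentions": rng.poisson(2, n),
    "wikipedia_cites": rng.binomial(1, 0.05, n),
})

# Spearman correlation matrix (rank-based, robust to skewed count data).
rho, _ = spearmanr(df)
print(pd.DataFrame(rho, index=df.columns, columns=df.columns).round(2))
```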
---
paper_title: Evaluating altmetrics
paper_content:
The rise of the social web and its uptake by scholars has led to the creation of altmetrics, which are social web metrics for academic publications. These new metrics can, in theory, be used in an evaluative role, to give early estimates of the impact of publications or to give estimates of non-traditional types of impact. They can also be used as an information seeking aid: to help draw a digital library user's attention to papers that have attracted social web mentions. If altmetrics are to be trusted then they must be evaluated to see if the claims made about them are reasonable. Drawing upon previous citation analysis debates and web citation analysis research, this article discusses altmetric evaluation strategies, including correlation tests, content analyses, interviews and pragmatic analyses. It recommends that a range of methods are needed for altmetric evaluations, that the methods should focus on identifying the relative strengths of influences on altmetric creation, and that such evaluations should be prioritised in a logical order.
---
paper_title: Using altmetrics for assessing research impact in the humanities
paper_content:
The prospects of altmetrics are especially encouraging for research fields in the humanities that currently are difficult to study using established bibliometric methods. Yet, little is known about the altmetric impact of research fields in the humanities. Consequently, this paper analyses the altmetric coverage and impact of humanities-oriented articles and books published by Swedish universities during 2012. Some of the most common altmetric sources are examined using a sample of 310 journal articles and 54 books. Mendeley has the highest coverage of journal articles (61 %), followed by Twitter (21 %), while very few of the publications are mentioned in blogs or on Facebook. Books, on the other hand, are quite often tweeted, while both Mendeley's and the novel data source Library Thing's coverage is low. Many of the problems of applying bibliometrics to the humanities are also relevant for altmetric approaches: the importance of non-journal publications, the reliance on print, as well as the limited coverage of non-English language publications. However, the continuing development and diversification of methods suggests that altmetrics could evolve into a valuable tool for assessing research in the humanities.
---
paper_title: Astrophysics publications on arXiv, Scopus and Mendeley: a case study
paper_content:
In this study we examined a sample of 100 European astrophysicists and their publications indexed by the citation database Scopus, submitted to the arXiv repository and bookmarked by readers in the reference manager Mendeley. Although it is believed that astrophysicists use arXiv widely and extensively, the results show that on average more items are indexed by Scopus than submitted to arXiv. A considerable proportion of the items indexed by Scopus appear also on Mendeley, but on average the number of readers who bookmarked the item on Mendeley is much lower than the number of citations reported in Scopus. The comparisons between the data sources were done based on the authors and the titles of the publications.
---
paper_title: Validating online reference managers for scholarly impact measurement
paper_content:
This paper investigates whether CiteULike and Mendeley are useful for measuring scholarly influence, using a sample of 1,613 papers published in Nature and Science in 2007. Traditional citation counts from the Web of Science (WoS) were used as benchmarks to compare with the number of users who bookmarked the articles in one of the two free online reference manager sites. Statistically significant correlations were found between the user counts and the corresponding WoS citation counts, suggesting that this type of influence is related in some way to traditional citation-based scholarly impact but the number of users of these systems seems to be still too small for them to challenge traditional citation indexes.
---
paper_title: Applying social bookmarking data to evaluate journal usage
paper_content:
Web 2.0 technologies are finding their way into academia: specialized social bookmarking services allow researchers to store and share scientific literature online. By bookmarking and tagging articles, academic prosumers generate new information about resources, i.e. usage statistics and content descriptions of scientific journals. Given the lack of global download statistics, the authors propose the application of social bookmarking data to journal evaluation. For a set of 45 physics journals, all 13,608 bookmarks from CiteULike, Connotea and BibSonomy to documents published between 2004 and 2008 were analyzed. This article explores bookmarking data in STM and examines to what extent it can be used to describe the perception of periodicals by the readership. Four basic indicators are defined, which analyze different aspects of usage: Usage Ratio, Usage Diffusion, Article Usage Intensity and Journal Usage Intensity. Tags are analyzed to describe a reader-specific view on journal content.
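As a purely hypothetical illustration of how such journal-level usage indicators might be computed from bookmarking data, the sketch below uses formulas inferred from the indicator names; they are assumptions, not the definitions given in the paper.

```python
# Hypothetical sketch of the four journal usage indicators named above.
# The formulas are plausible readings of the indicator names, NOT the
# definitions given in the paper itself.
from collections import defaultdict

# bookmarks: (journal, article_id, user_id) tuples from a bookmarking export (toy data).
bookmarks = [
    ("J.Phys.A", "a1", "u1"), ("J.Phys.A", "a1", "u2"), ("J.Phys.A", "a2", "u1"),
    ("Phys.Rev.B", "b1", "u3"),
]
published = {"J.Phys.A": 120, "Phys.Rev.B": 300}   # articles published per journal (toy)

per_journal = defaultdict(lambda: {"bookmarks": 0, "articles": set(), "users": set()})
for journal, article, user in bookmarks:
    d = per_journal[journal]
    d["bookmarks"] += 1
    d["articles"].add(article)
    d["users"].add(user)

for journal, d in per_journal.items():
    usage_ratio = len(d["articles"]) / published[journal]          # share of articles bookmarked
    usage_diffusion = len(d["users"])                              # distinct bookmarking users
    article_usage_intensity = d["bookmarks"] / len(d["articles"])  # bookmarks per bookmarked article
    journal_usage_intensity = d["bookmarks"] / published[journal]  # bookmarks per published article
    print(journal, usage_ratio, usage_diffusion,
          article_usage_intensity, journal_usage_intensity)
```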
---
paper_title: Visualizing readership activity of Mendeley users using VOSviewer
paper_content:
Introduction: Mendeley is a popular reference manager and academic social network that helps users to organize their publications and collaborate with others online. For each publication that is included in Mendeley, a variety of readership statistics are collected. Therefore, besides being a useful tool, Mendeley has also become an interesting and rich altmetrics data source (Zahedi, Costas & Wouters, 2014). This paper builds on a previous study by Zahedi, Costas & Wouters (2013) in which a sample of 200,000 publications was used to study the readership activity of Mendeley users based on their career stages across seven broad disciplines of science. In this paper, our aim is to analyze the readership activity of Mendeley users at a more detailed level. Based on all 2011 publications that are included in the Web of Science (WoS) database and the readership statistics that can be collected from Mendeley, we try to answer the following research questions: 1. What are the differences in readership activity across research fields? In which fields are Mendeley users most and least active? What are the topics of interest within research fields? 2. What are the fields of interest of the users in different career stages (i.e. Students, PhDs, PostDocs, Researchers, Professors, Librarians, Lecturers & other Professionals)? Are there any differences between types of users? Data & Methodology: For this study, we collected all publications from the WoS database that are classified as article or review, that were published in 2011, and for which a DOI is available. In total, we ended up with 1,114,776 publications. The DOIs of the collected publications were used to extract the readership statistics of these publications from Mendeley by using the Mendeley REST API in November 2013. Out of the 1,114,776 publications, a total of 847,587 publications (76%) were saved in Mendeley. The data from Mendeley was matched with the in-house WoS database of CWTS in order to add citation data. For each publication, citations were counted until the end of 2013. The VOSviewer software tool (Van Eck & Waltman, 2010) was used to create so-called overlay visualizations. These visualizations can be used to show additional information on top of a base map (e.g. Van Eck et al., 2013; Leydesdorff & Rafols, 2012). Two types of base maps were used. A base map containing the 250 subject categories in the WoS database was used to analyze differences in readership activity across research fields and to analyze differences in interest between types of users. Base maps containing terms extracted from titles and abstracts using the text mining functionality of VOSviewer (Van Eck & Waltman, 2011) were used to analyze differences in readership activity within research fields.
---
paper_title: Mendeley readership altmetrics for medical articles: An analysis of 45 fields
paper_content:
Medical research is highly funded and often expensive and so is particularly important to evaluate effectively. Nevertheless, citation counts may accrue too slowly for use in some formal and informal evaluations. It is therefore important to investigate whether alternative metrics could be used as substitutes. This article assesses whether one such altmetric, Mendeley readership counts, correlates strongly with citation counts across all medical fields, whether the relationship is stronger if student readers are excluded, and whether they are distributed similarly to citation counts. Based on a sample of 332,975 articles from 2009 in 45 medical fields in Scopus, citation counts correlated strongly (about 0.7; 78% of articles had at least one reader) with Mendeley readership counts from the new version 1 applications programming interface (API) in almost all fields, with one minor exception, and the correlations tended to decrease slightly when student readers were excluded. Readership followed either a lognormal or a hooked power law distribution, whereas citations always followed a hooked power law, showing that the two may have underlying differences.
---
paper_title: Who reads research articles? An altmetrics analysis of Mendeley user categories
paper_content:
Little detailed information is known about who reads research articles and the contexts in which research articles are read. Using data about people who register in Mendeley as readers of articles, this article explores different types of users of Clinical Medicine, Engineering and Technology, Social Science, Physics, and Chemistry articles inside and outside academia. The majority of readers for all disciplines were PhD students, postgraduates, and postdocs but other types of academics were also represented. In addition, many Clinical Medicine articles were read by medical professionals. The highest correlations between citations and Mendeley readership counts were found for types of users who often authored academic articles, except for associate professors in some sub-disciplines. This suggests that Mendeley readership can reflect usage similar to traditional citation impact if the data are restricted to readers who are also authors without the delay of impact measured by citation counts. At the same time, Mendeley statistics can also reveal the hidden impact of some research articles, such as educational value for nonauthor users inside academia or the impact of research articles on practice for readers outside academia.
---
paper_title: Social tagging in the life sciences: characterizing a new metadata resource for bioinformatics
paper_content:
BackgroundAcademic social tagging systems, such as Connotea and CiteULike, provide researchers with a means to organize personal collections of online references with keywords (tags) and to share these collections with others. One of the side-effects of the operation of these systems is the generation of large, publicly accessible metadata repositories describing the resources in the collections. In light of the well-known expansion of information in the life sciences and the need for metadata to enhance its value, these repositories present a potentially valuable new resource for application developers. Here we characterize the current contents of two scientifically relevant metadata repositories created through social tagging. This investigation helps to establish how such socially constructed metadata might be used as it stands currently and to suggest ways that new social tagging systems might be designed that would yield better aggregate products.ResultsWe assessed the metadata that users of CiteULike and Connotea associated with citations in PubMed with the following metrics: coverage of the document space, density of metadata (tags) per document, rates of inter-annotator agreement, and rates of agreement with MeSH indexing. CiteULike and Connotea were very similar on all of the measurements. In comparison to PubMed, document coverage and per-document metadata density were much lower for the social tagging systems. Inter-annotator agreement within the social tagging systems and the agreement between the aggregated social tagging metadata and MeSH indexing was low though the latter could be increased through voting.ConclusionThe most promising uses of metadata from current academic social tagging repositories will be those that find ways to utilize the novel relationships between users, tags, and documents exposed through these systems. For more traditional kinds of indexing-based applications (such as keyword-based search) to benefit substantially from socially generated metadata in the life sciences, more documents need to be tagged and more tags are needed for each document. These issues may be addressed both by finding ways to attract more users to current systems and by creating new user interfaces that encourage more collectively useful individual tagging behaviour.
---
paper_title: Coverage and adoption of altmetrics sources in the bibliometric community
paper_content:
Altmetrics, indices based on social media platforms and tools, have recently emerged as alternative means of measuring scholarly impact. Such indices assume that scholars in fact populate online social environments and interact with scholarly products in the social web. We tested this assumption by examining the use and coverage of social media environments amongst a sample of bibliometricians, examining both their own use of online platforms and the use of their papers on social reference managers. As expected, coverage varied: 82 % of articles published by sampled bibliometricians were included in Mendeley libraries, while only 28 % were included in CiteULike. Mendeley bookmarking was moderately correlated (.45) with Scopus citation counts. We conducted a survey among the participants of the STI2012 conference. Over half of respondents asserted that social media tools were affecting their professional lives, although uptake of online tools varied widely. 68 % of those surveyed had LinkedIn accounts, while Academia.edu, Mendeley, and ResearchGate each claimed a fifth of respondents. Nearly half of those responding had Twitter accounts, which they used both personally and professionally. Surveyed bibliometricians had mixed opinions on altmetrics' potential; 72 % valued download counts, while a third saw potential in tracking articles' influence in blogs, Wikipedia, reference managers, and social media. Altogether, these findings suggest that some online tools are seeing substantial use by bibliometricians, and that they present a potentially valuable source of impact data.
---
paper_title: Altmetrics in the wild: Using social media to explore scholarly impact
paper_content:
In growing numbers, scholars are integrating social media tools like blogs, Twitter, and Mendeley into their professional communications. The online, public nature of these tools exposes and reifies scholarly processes once hidden and ephemeral. Metrics based on these activities could inform broader, faster measures of impact, complementing traditional citation metrics. This study explores the properties of these social media-based metrics or "altmetrics", sampling 24,331 articles published by the Public Library of Science. We find that different indicators vary greatly in activity. Around 5% of sampled articles are cited in Wikipedia, while close to 80% have been included in at least one Mendeley library. There is, however, an encouraging diversity; a quarter of articles have nonzero data from five or more different sources. Correlation and factor analysis suggest citation and altmetrics indicators track related but distinct impacts, with neither able to describe the complete picture of scholarly use alone. There are moderate correlations between Mendeley and Web of Science citation, but many altmetric indicators seem to measure impact mostly orthogonal to citation. Articles cluster in ways that suggest five different impact "flavors", capturing impacts of different types on different audiences; for instance, some articles may be heavily read and saved by scholars but seldom cited. Together, these findings encourage more research into altmetrics as complements to traditional citation measures.
---
paper_title: Mendeley readership altmetrics for the social sciences and humanities: Research evaluation and knowledge flows
paper_content:
Although there is evidence that counting the readers of an article in the social reference site, Mendeley, may help to capture its research impact, the extent to which this is true for different scientific fields is unknown. In this study, we compare Mendeley readership counts with citations for different social sciences and humanities disciplines. The overall correlation between Mendeley readership counts and citations for the social sciences was higher than for the humanities. Low and medium correlations between Mendeley bookmarks and citation counts in all the investigated disciplines suggest that these measures reflect different aspects of research impact. Mendeley data were also used to discover patterns of information flow between scientific fields. Comparing information flows based on Mendeley bookmarking data and cross-disciplinary citation analysis for the disciplines revealed substantial similarities and some differences. Thus, the evidence from this study suggests that Mendeley readership data could be used to help capture knowledge transfer across scientific disciplines, especially for people that read but do not author articles, as well as giving impact evidence at an earlier stage than is possible with citation counts.
---
paper_title: Do altmetrics follow the crowd or does the crowd follow altmetrics?
paper_content:
Changes are occurring in scholarly communication as scientific discourse and research activities spread across various social media platforms. In this paper, we study altmetrics on the article and journal levels, investigating whether the online attention received by research articles is related to scholarly impact or may be due to other factors. We define a new metric, Journal Social Impact (JSI), based on eleven data sources: CiteULike, Mendeley, F1000, blogs, Twitter, Facebook, mainstream news outlets, Google Plus, Pinterest, Reddit, and sites running Stack Exchange (Q&A). We compare JSI against diverse citation-based metrics, and find that JSI significantly correlates with a number of them. These findings indicate that online attention of scholarly articles is related to traditional journal rankings and favors journals with a longer history of scholarly impact. We also find that journal-level altmetrics have strong significant correlations among themselves, compared with the weak correlations among article-level altmetrics. Another finding is that Mendeley and Twitter have the highest usage and coverage of scholarly activities. Among individual altmetrics, we find that the readership of academic social networks have the highest correlations with citation-based metrics. Our findings deepen the overall understanding of altmetrics and can assist in validating them.
---
paper_title: Beyond citations: Scholars' visibility on the social Web
paper_content:
Traditionally, scholarly impact and visibility have been measured by counting publications and citations in the scholarly literature. However, increasingly scholars are also visible on the Web, establishing presences in a growing variety of social ecosystems. But how wide and established is this presence, and how do measures of social Web impact relate to their more traditional counterparts? To answer this, we sampled 57 presenters from the 2010 Leiden STI Conference, gathering publication and citation counts as well as data from the presenters' Web "footprints." We found Web presence widespread and diverse: 84% of scholars had homepages, 70% were on LinkedIn, 23% had public Google Scholar profiles, and 16% were on Twitter. For sampled scholars' publications, social reference manager bookmarks were compared to Scopus and Web of Science citations; we found that Mendeley covers more than 80% of sampled articles, and that Mendeley bookmarks are significantly correlated (r=.45) to Scopus citation counts.
---
paper_title: Altmetrics: New Indicators for Scientific Communication in Web 2.0
paper_content:
In this paper we review the so-called altmetrics or alternative metrics. This concept arises from the development of new indicators based on Web 2.0 for the evaluation of research and academic activity. The basic assumption is that variables such as mentions in blogs, the number of tweets, or the number of researchers bookmarking a research paper may be legitimate indicators for measuring the use and impact of scientific publications. In this sense, these indicators are currently the focus of the bibliometric community and are being discussed and debated. We describe the main platforms and indicators, and we analyze as a sample the Spanish research output in Communication Studies, comparing traditional indicators such as citations with these new indicators. The results show that the most cited papers are also the ones with the highest impact according to the altmetrics. We conclude by pointing out the main shortcomings these metrics present and the role they may play when measuring research impact through 2.0 platforms.
---
paper_title: Assessing the Impact of Publications Saved by Mendeley Users: Is There Any Different Pattern Among Users?
paper_content:
The main focus of this paper is to investigate the impact of publications read (saved) by the different users in Mendeley in order to explore the extent to which their readership counts correlate with their citation indicators. The potential of filtering highly cited papers by Mendeley readerships and its different user types has also been explored. For the analysis of the users, we have considered the information on the top three Mendeley 'users' reported by Mendeley. Our results show that publications with Mendeley readerships tend to have higher citation and journal citation scores than publications without readerships. 'Biomedical & health sciences' and 'Mathematics and computer science' are the fields with, respectively, the most and the least readership activity in Mendeley. PhD students have the highest density of readerships per publication, and Lecturers and Librarians have the lowest across all the different fields. Our precision-recall analysis indicates that, in general, for publications with at least one reader in Mendeley, the capacity of readerships to filter highly cited publications is better than (or at least as good as) Journal Citation Scores. We discuss the important limitation that Mendeley reports only the top three readers, rather than all of them, for the potential development of indicators based on Mendeley and its users.
---
paper_title: Mendeley readership counts: An investigation of temporal and disciplinary differences
paper_content:
Scientists and managers using citation-based indicators to help evaluate research cannot evaluate recent articles because of the time needed for citations to accrue. Reading occurs before citing, however, and so it makes sense to count readers rather than citations for recent publications. To assess this, Mendeley readers and citations were obtained for articles from 2004 to late 2014 in five broad categories (agriculture, business, decision science, pharmacy, and the social sciences) and 50 subcategories. In these areas, citation counts tended to increase with every extra year since publication, and readership counts tended to increase faster initially but then stabilize after about 5 years. The correlation between citations and readers was also higher for longer time periods, stabilizing after about 5 years. Although there were substantial differences between broad fields and smaller differences between subfields, the results confirm the value of Mendeley reader counts as early scientific impact indicators.
---
paper_title: The Vacuum Shouts Back: Postpublication Peer Review on Social Media
paper_content:
Social media has created new pathways for postpublication peer review, which regularly leads to corrections. Such online discussions are often resisted by authors and editors, however, and efforts to formalize postpublication peer review have not yet resonated with scientific communities.
---
paper_title: Post-publication filtering and evaluation: Faculty of 1000
paper_content:
Faculty of 1000 (www.facultyof1000.com) is a new on-line literature awareness and assessment service of research papers, on the basis of selections by 1400 of the world's top biologists, that combines metrics with judgement. The service offers a systematic and comprehensive form of post-publication peer review that focuses on the best papers regardless of the journal in which they are published. It is now possible to draw some conclusions about how this new form of post-publication peer review meets the needs of scientists, and the organizations that fund them, in practice. In addition, inferences about the relative importance of journals are set out, which should also interest publishers and librarians.
---
paper_title: Looking for Landmarks: The Role of Expert Review and Bibliometric Analysis in Evaluating Scientific Publication Outputs
paper_content:
OBJECTIVE: To compare expert assessment with bibliometric indicators as tools to assess the quality and importance of scientific research papers. METHODS AND MATERIALS: Shortly after their publication in 2005, the quality and importance of a cohort of nearly 700 Wellcome Trust (WT) associated research papers were assessed by expert reviewers; each paper was reviewed by two WT expert reviewers. After 3 years, we compared this initial assessment with other measures of paper impact. RESULTS: Shortly after publication, 62 (9%) of the 687 research papers were determined to describe at least a 'major addition to knowledge'; 6 were thought to be 'landmark' papers. At an aggregate level, after 3 years, there was a strong positive association between expert assessment and impact as measured by number of citations and F1000 rating. However, there were some important exceptions, indicating that bibliometric measures may not be sufficient in isolation as measures of research quality and importance, and especially not for assessing single papers or small groups of research publications. CONCLUSION: When attempting to assess the quality and importance of research papers, we found that sole reliance on bibliometric indicators would have led us to miss papers containing important results as judged by expert review. In particular, some papers that were highly rated by experts were not highly cited during the first three years after publication. Tools that link expert peer reviews of research paper quality and importance to more quantitative indicators, such as citation analysis, would be valuable additions to the field of research assessment and evaluation.
---
paper_title: F1000 Recommendations as a Potential New Data Source for Research Evaluation: A Comparison With Citations
paper_content:
F1000 is a postpublication peer review service for biological and medical research. F1000 recommends important publications in the biomedical literature, and from this perspective F1000 could be an interesting tool for research evaluation. By linking the complete database of F1000 recommendations to the Web of Science bibliographic database, we are able to make a comprehensive comparison between F1000 recommendations and citations. We find that about 2% of the publications in the biomedical literature receive at least one F1000 recommendation. Recommended publications on average receive 1.30 recommendations, and more than 90% of the recommendations are given within half a year after a publication has appeared. There turns out to be a clear correlation between F1000 recommendations and citations. However, the correlation is relatively weak, at least weaker than the correlation between journal impact and citations. More research is needed to identify the main reasons for differences between recommendations and citations in assessing the impact of publications.
---
paper_title: The Assessment of Science: The Relative Merits of Post-Publication Review, the Impact Factor, and the Number of Citations
paper_content:
The assessment of scientific publications is an integral part of the scientific process. Here we investigate three methods of assessing the merit of a scientific paper: subjective post-publication peer review, the number of citations gained by a paper, and the impact factor of the journal in which the article was published. We investigate these methods using two datasets in which subjective post-publication assessments of scientific publications have been made by experts. We find that there are moderate, but statistically significant, correlations between assessor scores, when two assessors have rated the same paper, and between assessor score and the number of citations a paper accrues. However, we show that assessor score depends strongly on the journal in which the paper is published, and that assessors tend to over-rate papers published in journals with high impact factors. If we control for this bias, we find that the correlation between assessor scores and between assessor score and the number of citations is weak, suggesting that scientists have little ability to judge either the intrinsic merit of a paper or its likely impact. We also show that the number of citations a paper receives is an extremely error-prone measure of scientific merit. Finally, we argue that the impact factor is likely to be a poor measure of merit, since it depends on subjective assessment. We conclude that the three measures of scientific merit considered here are poor; in particular subjective assessments are an error-prone, biased, and expensive method by which to assess merit. We argue that the impact factor may be the most satisfactory of the methods we have considered, since it is a form of pre-publication review. However, we emphasise that it is likely to be a very error-prone measure of merit that is qualitative, not quantitative.
---
paper_title: Altmetrics in the wild: Using social media to explore scholarly impact
paper_content:
In growing numbers, scholars are integrating social media tools like blogs, Twitter, and Mendeley into their professional communications. The online, public nature of these tools exposes and reifies scholarly processes once hidden and ephemeral. Metrics based on these activities could inform broader, faster measures of impact, complementing traditional citation metrics. This study explores the properties of these social media-based metrics or "altmetrics", sampling 24,331 articles published by the Public Library of Science. We find that different indicators vary greatly in activity. Around 5% of sampled articles are cited in Wikipedia, while close to 80% have been included in at least one Mendeley library. There is, however, an encouraging diversity; a quarter of articles have nonzero data from five or more different sources. Correlation and factor analysis suggest citation and altmetrics indicators track related but distinct impacts, with neither able to describe the complete picture of scholarly use alone. There are moderate correlations between Mendeley and Web of Science citation, but many altmetric indicators seem to measure impact mostly orthogonal to citation. Articles cluster in ways that suggest five different impact "flavors", capturing impacts of different types on different audiences; for instance, some articles may be heavily read and saved by scholars but seldom cited. Together, these findings encourage more research into altmetrics as complements to traditional citation measures.
---
paper_title: Assessing non-standard article impact using F1000 labels
paper_content:
Faculty of 1000 (F1000) is a post-publication peer review web site where experts evaluate and rate biomedical publications. F1000 reviewers also assign labels to each paper from a standard list of article types. This research examines the relationship between article types, citation counts and F1000 article factors (FFa). For this purpose, a random sample of F1000 medical articles from the years 2007 and 2008 was studied. In seven out of the nine cases, there were no significant differences between the article types in terms of citation counts and FFa scores. Nevertheless, citation counts and FFa scores were significantly different for two article types, "New finding" and "Changes clinical practice": FFa scores value the appropriateness of medical research for clinical practice, and "New finding" articles are more highly cited. It seems that highlighting key features of medical articles alongside ratings by Faculty members of F1000 could help to reveal the hidden value of some medical papers.
---
paper_title: Do ‘Faculty of 1000’ (F1000) ratings of ecological publications serve as reasonable predictors of their future impact?
paper_content:
There is an increasing demand for an effective means of post-publication evaluation of ecological work that avoids pitfalls associated with using the impact factor of the journal in which the work was published. One approach that has been gaining momentum is the 'Faculty of 1000' (hereafter F1000) evaluation procedure, in which panel members identify what they believe to be the most 'important' recent publications they have read. Here I focused on 1530 publications from 7 major ecological journals that appeared in 2005, and compared the F1000 rating of each publication with the frequency with which it was subsequently cited. The mean and median citation frequencies of the 103 publications highlighted by F1000 were higher than for all 1530 publications, but not substantially so. Further, the F1000 procedure did not highlight any of the 11 publications that were each cited over 130 (and up to 497) times, while it did highlight 14 publications that were each cited between 4 and 9 times. Further, 46% and 31% of all manuscripts highlighted by F1000 were cited less often than the mean and median, respectively, of all 1530 publications. Possible reasons for the F1000 process failing to identify high impact publications may include uneven coverage by F1000 of different ecological topics, cronyism, and geographical bias favoring North American publications. As long as the F1000 process cannot identify those publications that subsequently have the greatest impact, it cannot be reliably used as a means of post-publication evaluation of the ecological literature.
---
|
Title: Scholarly use of social media and altmetrics: a review of the literature
Section 1: Introduction
Description 1: Provide an overview of the shifts and trends in scholarly communication, the rise of social media and altmetrics in academia, and introduce key themes of visibility and heterogeneity.
Section 2: Social data sharing
Description 2: Discuss the requirement for data sharing by funders and journals, the establishment of data sharing platforms, usage patterns, and the early stages of metrics based on data sharing.
Section 3: Video
Description 3: Explore the role of video in scholarly communication, the popularity of platforms like YouTube and TED, and the passive versus active use of video content by scholars.
Section 4: Blogging
Description 4: Review the history and impact of scholarly blogging, its formats, content, and issues related to blog citations, blog coverage, and the influence of blogs on scholarly communication.
Section 5: Microblogging
Description 5: Examine the development of microblogging, particularly Twitter, its usage in academic contexts, content, networking benefits, coverage, correlations with citations, and influences on scholarly communication.
Section 6: Wikis
Description 6: Analyze the use of wikis for academic collaboration, the role of Wikipedia in scholarly research, contribution rates, and the comparison between traditional publishing and wikis.
Section 7: Social recommending, rating, and reviewing
Description 7: Detail the platforms developed for filtering scientific content through recommendations and ratings, the effectiveness of open peer review, and the use of tools like F1000Prime and Pubpeer.
Section 8: Scholarly use by institutions and organizations
Description 8: Describe how universities, libraries, journals, publishers, and professional associations use social media to disseminate research, engage audiences, and promote scholarly communication.
Section 9: Factors affecting social media use
Description 9: Identify demographic and contextual factors that influence social media use among scholars, including age, academic rank, gender, discipline, country, and language.
Section 10: Social media and research evaluation
Description 10: Discuss the growing importance of demonstrating societal impact, the conceptualization and classification of altmetrics, criticisms, and limitations of using social media metrics for research evaluation.
Section 11: Data collection and methodological limitations
Description 11: Address the tools for collecting and aggregating social media metrics, issues of data quality, the impact of bots, demographic analyses, and comparisons with citation-based metrics.
Section 12: Social media metrics
Description 12: Review how social media metrics are used for research evaluation, coverage and correlations with traditional metrics, and the diverse utility of different social media platforms.
Section 13: Conclusion and outlook
Description 13: Summarize the key findings, the impact of digital and social media on scholarly communication, and the potential future directions for social media and altmetrics in academic research dissemination and evaluation.
|
Deep Learning for Genomics: A Concise Overview
| 10 |
---
paper_title: Finding Structure in Time
paper_content:
Time underlies many interesting human behaviors. Thus, the question of how to represent time in connectionist models is very important. One approach is to represent time implicitly by its effects on processing rather than explicitly (as in a spatial representation). The current report develops a proposal along these lines first described by Jordan (1986) which involves the use of recurrent links in order to provide networks with a dynamic memory. In this approach, hidden unit patterns are fed back to themselves; the internal representations which develop thus reflect task demands in the context of prior internal states. A set of simulations is reported which range from relatively simple problems (temporal version of XOR) to discovering syntactic/semantic features for words. The networks are able to learn interesting internal representations which incorporate task demands with memory demands; indeed, in this approach the notion of memory is inextricably bound up with task processing. These representations reveal a rich structure, which allows them to be highly context-dependent while also expressing generalizations across classes of items. These representations suggest a method for representing lexical categories and the type/token distinction.
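A minimal NumPy sketch of the simple recurrent (Elman-style) network described above, in which the previous hidden state is fed back as context input at each time step; all sizes and weights are illustrative.

```python
# Minimal Elman-style recurrent network: the hidden state at time t-1 is fed
# back as extra input ("context units") at time t. Sizes are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden, n_out = 4, 8, 4
W_xh = rng.normal(0, 0.1, (n_hidden, n_in))      # input -> hidden
W_hh = rng.normal(0, 0.1, (n_hidden, n_hidden))  # context (previous hidden) -> hidden
W_hy = rng.normal(0, 0.1, (n_out, n_hidden))     # hidden -> output

def forward(sequence):
    h = np.zeros(n_hidden)                       # context units start at zero
    outputs = []
    for x in sequence:
        h = np.tanh(W_xh @ x + W_hh @ h)         # internal state depends on prior state
        outputs.append(W_hy @ h)
    return np.array(outputs)

xs = rng.normal(size=(10, n_in))                 # a toy sequence of 10 time steps
print(forward(xs).shape)                         # (10, 4)
```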
---
paper_title: Functional annotation of a full-length mouse cDNA collection
paper_content:
The RIKEN Mouse Gene Encyclopaedia Project, a systematic approach to determining the full coding potential of the mouse genome, involves collection and sequencing of full-length complementary DNAs and physical mapping of the corresponding genes to the mouse genome. We organized an international functional annotation meeting (FANTOM) to annotate the first 21,076 cDNAs to be analysed in this project. Here we describe the first RIKEN clone collection, which is one of the largest described for any organism. Analysis of these cDNAs extends known gene families and identifies new ones.
---
paper_title: Initial sequencing and analysis of the human genome.
paper_content:
The human genome holds an extraordinary trove of information about human development, physiology, medicine and evolution. Here we report the results of an international collaboration to produce and make freely available a draft sequence of the human genome. We also present an initial analysis of the data, describing some of the insights that can be gleaned from the sequence.
---
paper_title: Long Short-Term Memory
paper_content:
Learning to store information over extended time intervals by recurrent backpropagation takes a very long time, mostly because of insufficient, decaying error backflow. We briefly review Hochreiter's (1991) analysis of this problem, then address it by introducing a novel, efficient, gradient-based method called long short-term memory (LSTM). Truncating the gradient where this does not do harm, LSTM can learn to bridge minimal time lags in excess of 1000 discrete-time steps by enforcing constant error flow through constant error carousels within special units. Multiplicative gate units learn to open and close access to the constant error flow. LSTM is local in space and time; its computational complexity per time step and weight is O(1). Our experiments with artificial data involve local, distributed, real-valued, and noisy pattern representations. In comparisons with real-time recurrent learning, back propagation through time, recurrent cascade correlation, Elman nets, and neural sequence chunking, LSTM leads to many more successful runs, and learns much faster. LSTM also solves complex, artificial long-time-lag tasks that have never been solved by previous recurrent network algorithms.
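The gating mechanism described above can be sketched as a single LSTM time step; note that the forget gate included below was introduced in later work, while the original formulation used only input and output gates around the constant error carousel, and all dimensions here are illustrative.

```python
# One time step of a standard LSTM cell. The forget gate shown here postdates
# the original 1997 formulation, which used only input and output gates around
# the constant-error carousel. Sizes and random weights are illustrative.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

n_in, n_hidden = 4, 8
rng = np.random.default_rng(0)
# One weight matrix per gate, acting on [x_t, h_{t-1}] concatenated.
W = {g: rng.normal(0, 0.1, (n_hidden, n_in + n_hidden)) for g in "ifoc"}

def lstm_step(x, h_prev, c_prev):
    z = np.concatenate([x, h_prev])
    i = sigmoid(W["i"] @ z)          # input gate: what to write
    f = sigmoid(W["f"] @ z)          # forget gate: what to keep in the cell
    o = sigmoid(W["o"] @ z)          # output gate: what to expose
    g = np.tanh(W["c"] @ z)          # candidate cell update
    c = f * c_prev + i * g           # cell state: near-constant error flow
    h = o * np.tanh(c)               # hidden state read out through the gate
    return h, c

h = c = np.zeros(n_hidden)
for x in rng.normal(size=(20, n_in)):   # a toy 20-step sequence
    h, c = lstm_step(x, h, c)
print(h.shape, c.shape)
```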
---
paper_title: Learning internal representations by error propagation
paper_content:
This chapter contains sections titled: The Problem, The Generalized Delta Rule, Simulation Results, Some Further Generalizations, Conclusion
---
paper_title: Cognitron: A self-organizing multilayered neural network
paper_content:
A new hypothesis for the organization of synapses between neurons is proposed: "The synapse from neuron x to neuron y is reinforced when x fires provided that no neuron in the vicinity of y is firing stronger than y". By introducing this hypothesis, a new algorithm with which a multilayered neural network is effectively organized can be deduced. A self-organizing multilayered neural network, which is named "cognitron", is constructed following this algorithm, and is simulated on a digital computer. Unlike the organization of a usual brain models such as a three-layered perceptron, the self-organization of a cognitron progresses favorably without having a "teacher" which instructs in all particulars how the individual cells respond. After repetitive presentations of several stimulus patterns, the cognitron is self-organized in such a way that the receptive fields of the cells become relatively larger in a deeper layer. Each cell in the final layer integrates the information from whole parts of the first layer and selectively responds to a specific stimulus pattern or a feature.
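A toy sketch of the competitive reinforcement rule quoted above: a synapse from an active input is strengthened only for the neuron that responds more strongly than its neighbours. The neighbourhood (here the whole layer), the learning rate and the weight normalisation are simplifying assumptions, not the paper's exact formulation.

```python
# Toy winner-take-all reinforcement: the synapse from x to y is strengthened
# only when x is active and y fires more strongly than every other neuron in
# its neighbourhood (here, the whole layer). All parameters are assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out, lr = 16, 8, 0.1
W = np.abs(rng.normal(0, 0.01, (n_out, n_in)))     # excitatory weights

def update(x):
    y = W @ x                                      # responses of the layer
    winner = np.argmax(y)                          # no neighbour fires stronger
    W[winner] += lr * x                            # reinforce synapses from active inputs
    W[winner] /= np.linalg.norm(W[winner]) + 1e-8  # keep weights bounded (assumption)

for _ in range(100):
    x = (rng.random(n_in) > 0.7).astype(float)     # sparse binary input pattern
    update(x)
print(W.shape)
```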
---
paper_title: Learning and Relearning in Boltzmann Machines
paper_content:
This chapter contains sections titled: Relaxation Searches, Easy and Hard Learning, The Boltzmann Machine Learning Algorithm, An Example of Hard Learning, Achieving Reliable Computation with Unreliable Hardware, An Example of the Effects of Damage, Conclusion, Acknowledgments, Appendix: Derivation of the Learning Algorithm, References
---
paper_title: Integrative analysis of 111 reference human epigenomes
paper_content:
The reference human genome sequence set the stage for studies of genetic variation and its association with human disease, but epigenomic studies lack a similar reference. To address this need, the NIH Roadmap Epigenomics Consortium generated the largest collection so far of human epigenomes for primary cells and tissues. Here we describe the integrative analysis of 111 reference human epigenomes generated as part of the programme, profiled for histone modification patterns, DNA accessibility, DNA methylation and RNA expression. We establish global maps of regulatory elements, define regulatory modules of coordinated activity, and their likely activators and repressors. We show that disease- and trait-associated genetic variants are enriched in tissue-specific epigenomic marks, revealing biologically relevant cell types for diverse human traits, and providing a resource for interpreting the molecular basis of human disease. Our results demonstrate the central role of epigenomic information for understanding gene regulation, cellular differentiation and human disease.
---
paper_title: Deep learning
paper_content:
Deep learning is a form of machine learning that enables computers to learn from experience and understand the world in terms of a hierarchy of concepts. Because the computer gathers knowledge from experience, there is no need for a human computer operator to formally specify all the knowledge that the computer needs. The hierarchy of concepts allows the computer to learn complicated concepts by building them out of simpler ones; a graph of these hierarchies would be many layers deep. This book introduces a broad range of topics in deep learning. The text offers mathematical and conceptual background, covering relevant concepts in linear algebra, probability theory and information theory, numerical computation, and machine learning. It describes deep learning techniques used by practitioners in industry, including deep feedforward networks, regularization, optimization algorithms, convolutional networks, sequence modeling, and practical methodology; and it surveys such applications as natural language processing, speech recognition, computer vision, online recommendation systems, bioinformatics, and videogames. Finally, the book offers research perspectives, covering such theoretical topics as linear factor models, autoencoders, representation learning, structured probabilistic models, Monte Carlo methods, the partition function, approximate inference, and deep generative models. Deep Learning can be used by undergraduate or graduate students planning careers in either industry or research, and by software engineers who want to begin using deep learning in their products or platforms. A website offers supplementary material for both readers and instructors.
---
paper_title: Neocognitron: A Self-Organizing Neural Network Model for a Mechanism of Visual Pattern Recognition
paper_content:
A neural network model, called a “neocognitron”, is proposed for a mechanism of visual pattern recognition. It is demonstrated by computer simulation that the neocognitron has characteristics similar to those of visual systems of vertebrates.
---
paper_title: An integrated encyclopedia of DNA elements in the human genome
paper_content:
The human genome encodes the blueprint of life, but the function of the vast majority of its nearly three billion bases is unknown. The Encyclopedia of DNA Elements (ENCODE) project has systematically mapped regions of transcription, transcription factor association, chromatin structure and histone modification. These data enabled us to assign biochemical functions for 80% of the genome, in particular outside of the well-studied protein-coding regions. Many discovered candidate regulatory elements are physically associated with one another and with expressed genes, providing new insights into the mechanisms of gene regulation. The newly identified elements also show a statistical correspondence to sequence variants linked to human disease, and can thereby guide interpretation of this variation. Overall, the project provides new insights into the organization and regulation of our genes and genome, and is an expansive resource of functional annotations for biomedical research.
---
paper_title: ImageNet classification with deep convolutional neural networks
paper_content:
We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 37.5% and 17.0%, respectively, which is considerably better than the previous state-of-the-art. The neural network, which has 60 million parameters and 650,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully connected layers with a final 1000-way softmax. To make training faster, we used non-saturating neurons and a very efficient GPU implementation of the convolution operation. To reduce overfitting in the fully connected layers we employed a recently developed regularization method called "dropout" that proved to be very effective. We also entered a variant of this model in the ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3%, compared to 26.2% achieved by the second-best entry.
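A simplified PyTorch sketch of the architecture described above (five convolutional layers followed by three fully connected layers, with ReLU, max pooling, dropout and a 1000-way classifier); the two-GPU weight grouping and local response normalisation of the original network are omitted, so this is an approximation rather than the exact published model.

```python
# Simplified AlexNet-style network: five convolutional layers, three fully
# connected layers, ReLU non-linearities, max pooling and dropout, ending in a
# 1000-way classifier. Layer sizes are close to, but not identical with, the
# original two-GPU model.
import torch
import torch.nn as nn

class AlexNetLike(nn.Module):
    def __init__(self, num_classes=1000):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=11, stride=4, padding=2), nn.ReLU(),
            nn.MaxPool2d(3, stride=2),
            nn.Conv2d(64, 192, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool2d(3, stride=2),
            nn.Conv2d(192, 384, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(384, 256, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(256, 256, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(3, stride=2),
        )
        self.classifier = nn.Sequential(
            nn.Dropout(0.5), nn.Linear(256 * 6 * 6, 4096), nn.ReLU(),
            nn.Dropout(0.5), nn.Linear(4096, 4096), nn.ReLU(),
            nn.Linear(4096, num_classes),
        )

    def forward(self, x):
        x = self.features(x)
        return self.classifier(torch.flatten(x, 1))

logits = AlexNetLike()(torch.randn(2, 3, 224, 224))
print(logits.shape)   # torch.Size([2, 1000])
```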
---
paper_title: DanQ: a hybrid convolutional and recurrent deep neural network for quantifying the function of DNA sequences
paper_content:
Modeling the properties and functions of DNA sequences is an important, but challenging task in the broad field of genomics. This task is particularly difficult for non-coding DNA, the vast majority of which is still poorly understood in terms of function. A powerful predictive model for the function of non-coding DNA can have enormous benefit for both basic science and translational research because over 98% of the human genome is non-coding and 93% of disease-associated variants lie in these regions. To address this need, we propose DanQ, a novel hybrid convolutional and bi-directional long short-term memory recurrent neural network framework for predicting non-coding function de novo from sequence. In the DanQ model, the convolution layer captures regulatory motifs, while the recurrent layer captures long-term dependencies between the motifs in order to learn a regulatory ‘grammar’ to improve predictions. DanQ improves considerably upon other models across several metrics. For some regulatory markers, DanQ can achieve over a 50% relative improvement in the area under the precision-recall curve metric compared to related models. We have made the source code available at the github repository http://github.com/uci-cbcl/DanQ.
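The hybrid architecture described above can be sketched in PyTorch as a 1-D convolution over one-hot encoded DNA (motif detection), followed by a bidirectional LSTM (dependencies between motifs) and a sigmoid multi-label output; the layer sizes and the use of only the final LSTM step are illustrative simplifications, not the exact published configuration.

```python
# Sketch of a DanQ-style hybrid for one-hot encoded DNA (A,C,G,T -> 4 channels):
# a 1-D convolution scans for motif-like patterns, max pooling downsamples, a
# bidirectional LSTM models dependencies between motifs, and a sigmoid layer
# predicts multiple chromatin marks. Sizes are illustrative.
import torch
import torch.nn as nn

class DanQLike(nn.Module):
    def __init__(self, n_targets=919):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(4, 320, kernel_size=26),   # motif scanner over the sequence
            nn.ReLU(),
            nn.MaxPool1d(13),
            nn.Dropout(0.2),
        )
        self.bilstm = nn.LSTM(320, 320, batch_first=True, bidirectional=True)
        self.head = nn.Sequential(
            nn.Dropout(0.5),
            nn.Linear(2 * 320, n_targets),
            nn.Sigmoid(),                        # multi-label chromatin-mark probabilities
        )

    def forward(self, x):                        # x: (batch, 4, sequence_length)
        h = self.conv(x).transpose(1, 2)         # -> (batch, steps, channels) for the LSTM
        out, _ = self.bilstm(h)
        return self.head(out[:, -1, :])          # use the final time step (a simplification)

probs = DanQLike()(torch.randn(2, 4, 1000))      # toy input of length 1000
print(probs.shape)                               # torch.Size([2, 919])
```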
---
paper_title: Bidirectional recurrent neural networks
paper_content:
In the first part of this paper, a regular recurrent neural network (RNN) is extended to a bidirectional recurrent neural network (BRNN). The BRNN can be trained without the limitation of using input information just up to a preset future frame. This is accomplished by training it simultaneously in positive and negative time direction. Structure and training procedure of the proposed network are explained. In regression and classification experiments on artificial data, the proposed structure gives better results than other approaches. For real data, classification experiments for phonemes from the TIMIT database show the same tendency. In the second part of this paper, it is shown how the proposed bidirectional structure can be easily modified to allow efficient estimation of the conditional posterior probability of complete symbol sequences without making any explicit assumption about the shape of the distribution. For this part, experiments on real data are reported.
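A small PyTorch illustration of the structure: the same sequence is processed in the positive and negative time directions and the two hidden states are concatenated at every step, so a per-frame prediction can use both past and future context. The feature dimension and the 61 phoneme classes are illustrative choices, not the paper's exact setup.

import torch
import torch.nn as nn

# Bidirectional RNN: forward and backward hidden states are concatenated at each step,
# so the output at time t depends on past and future inputs alike.
birnn = nn.RNN(input_size=13, hidden_size=64, batch_first=True, bidirectional=True)
frame_classifier = nn.Linear(2 * 64, 61)        # e.g. per-frame phoneme posteriors (illustrative)

x = torch.randn(8, 100, 13)                     # (batch, time, acoustic features)
states, _ = birnn(x)                            # (8, 100, 128): forward ++ backward states
posteriors = torch.softmax(frame_classifier(states), dim=-1)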
---
paper_title: Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation
paper_content:
In this paper, we propose a novel neural network model called RNN Encoder-Decoder that consists of two recurrent neural networks (RNN). One RNN encodes a sequence of symbols into a fixed-length vector representation, and the other decodes the representation into another sequence of symbols. The encoder and decoder of the proposed model are jointly trained to maximize the conditional probability of a target sequence given a source sequence. The performance of a statistical machine translation system is empirically found to improve by using the conditional probabilities of phrase pairs computed by the RNN Encoder-Decoder as an additional feature in the existing log-linear model. Qualitatively, we show that the proposed model learns a semantically and syntactically meaningful representation of linguistic phrases.
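A compact sketch of the encoder-decoder pattern, with standard GRU units standing in for the paper's gated hidden units: the encoder compresses the source sequence into a fixed-length vector and the decoder, initialized with that vector, produces the target sequence. Vocabulary sizes and dimensions are arbitrary.

import torch
import torch.nn as nn

class EncoderDecoder(nn.Module):
    """Encoder GRU summarizes the source into a fixed-length vector; decoder GRU,
    initialized with that vector, emits logits for the next target symbol."""
    def __init__(self, src_vocab, tgt_vocab, dim=256):
        super().__init__()
        self.src_emb = nn.Embedding(src_vocab, dim)
        self.tgt_emb = nn.Embedding(tgt_vocab, dim)
        self.encoder = nn.GRU(dim, dim, batch_first=True)
        self.decoder = nn.GRU(dim, dim, batch_first=True)
        self.proj = nn.Linear(dim, tgt_vocab)

    def forward(self, src_ids, tgt_ids):
        _, context = self.encoder(self.src_emb(src_ids))        # (1, batch, dim) summary vector
        dec_out, _ = self.decoder(self.tgt_emb(tgt_ids), context)
        return self.proj(dec_out)                               # (batch, tgt_len, tgt_vocab)

model = EncoderDecoder(src_vocab=10000, tgt_vocab=10000)
logits = model(torch.randint(0, 10000, (4, 20)), torch.randint(0, 10000, (4, 18)))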
---
paper_title: DeepNano: Deep Recurrent Neural Networks for Base Calling in MinION Nanopore Reads
paper_content:
The MinION device by Oxford Nanopore produces very long reads (reads over 100 kBp were reported); however it suffers from high sequencing error rate. We present an open-source DNA base caller based on deep recurrent neural networks and show that the accuracy of base calling is much dependent on the underlying software and can be improved by considering modern machine learning methods. By employing carefully crafted recurrent neural networks, our tool significantly improves base calling accuracy on data from R7.3 version of the platform compared to the default base caller supplied by the manufacturer. On R9 version, we achieve results comparable to Nanonet base caller provided by Oxford Nanopore. Availability of an open source tool with high base calling accuracy will be useful for development of new applications of the MinION device, including infectious disease detection and custom target enrichment during sequencing.
---
paper_title: Long Short-Term Memory
paper_content:
Learning to store information over extended time intervals by recurrent backpropagation takes a very long time, mostly because of insufficient, decaying error backflow. We briefly review Hochreiter's (1991) analysis of this problem, then address it by introducing a novel, efficient, gradient based method called long short-term memory (LSTM). Truncating the gradient where this does not do harm, LSTM can learn to bridge minimal time lags in excess of 1000 discrete-time steps by enforcing constant error flow through constant error carousels within special units. Multiplicative gate units learn to open and close access to the constant error flow. LSTM is local in space and time; its computational complexity per time step and weight is O(1). Our experiments with artificial data involve local, distributed, real-valued, and noisy pattern representations. In comparisons with real-time recurrent learning, back propagation through time, recurrent cascade correlation, Elman nets, and neural sequence chunking, LSTM leads to many more successful runs, and learns much faster. LSTM also solves complex, artificial long-time-lag tasks that have never been solved by previous recurrent network algorithms.
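A numpy sketch of a single LSTM step in the now-standard formulation (including a forget gate, a later refinement of the original design): multiplicative gates open and close access to the cell state, whose largely additive update is what carries error over long time lags.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One LSTM step; gates control access to the cell state c ('constant error carousel')."""
    z = W @ x + U @ h_prev + b                      # stacked pre-activations for the four blocks
    i, f, o, g = np.split(z, 4)
    i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)    # input, forget, output gates
    g = np.tanh(g)                                  # candidate cell input
    c = f * c_prev + i * g                          # mostly additive cell-state update
    h = o * np.tanh(c)                              # gated hidden output
    return h, c

d_in, d_hid = 10, 32
rng = np.random.default_rng(0)
W = rng.normal(size=(4 * d_hid, d_in))
U = rng.normal(size=(4 * d_hid, d_hid))
b = np.zeros(4 * d_hid)
h = c = np.zeros(d_hid)
for x in rng.normal(size=(100, d_in)):              # unroll over a length-100 sequence
    h, c = lstm_step(x, h, c, W, U, b)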
---
paper_title: Learning structure in gene expression data using deep architectures, with an application to gene clustering
paper_content:
Genes play a central role in all biological processes. DNA microarray technology has made it possible to study the expression behavior of thousands of genes in one go. Often, gene expression data is used to generate features for supervised and unsupervised learning tasks. At the same time, advances in the field of deep learning have made available a plethora of architectures. In this paper, we use deep architectures pre-trained in an unsupervised manner using denoising autoencoders as a preprocessing step for a popular unsupervised learning task. Denoising autoencoders (DA) can be used to learn a compact representation of input, and have been used to generate features for further supervised learning tasks. We propose that our deep architectures can be treated as empirical versions of Deep Belief Networks (DBNs). We use our deep architectures to regenerate gene expression time series data for two different data sets. We test our hypothesis on two popular datasets for the unsupervised learning task of clustering and find promising improvements in performance.
---
paper_title: The Cancer Cell Line Encyclopedia enables predictive modelling of anticancer drug sensitivity
paper_content:
The systematic translation of cancer genomic data into knowledge of tumour biology and therapeutic possibilities remains challenging. Such efforts should be greatly aided by robust preclinical model systems that reflect the genomic diversity of human cancers and for which detailed genetic and pharmacological annotation is available. Here we describe the Cancer Cell Line Encyclopedia (CCLE): a compilation of gene expression, chromosomal copy number and massively parallel sequencing data from 947 human cancer cell lines. When coupled with pharmacological profiles for 24 anticancer drugs across 479 of the cell lines, this collection allowed identification of genetic, lineage, and gene-expression-based predictors of drug sensitivity. In addition to known predictors, we found that plasma cell lineage correlated with sensitivity to IGF1 receptor inhibitors; AHR expression was associated with MEK inhibitor efficacy in NRAS-mutant lines; and SLFN11 expression predicted sensitivity to topoisomerase inhibitors. Together, our results indicate that large, annotated cell-line collections may help to enable preclinical stratification schemata for anticancer agents. The generation of genetic predictions of drug response in the preclinical setting and their incorporation into cancer clinical trial design could speed the emergence of 'personalized' therapeutic regimens.
---
paper_title: Unsupervised Feature Construction and Knowledge Extraction from Genome-Wide Assays of Breast Cancer with Denoising Autoencoders
paper_content:
Big data bring new opportunities for methods that efficiently summarize and automatically extract knowledge from such compendia. While both supervised learning algorithms and unsupervised clustering algorithms have been successfully applied to biological data, they are either dependent on known biology or limited to discerning the most significant signals in the data. Here we present denoising autoencoders (DAs), which employ a data-defined learning objective independent of known biology, as a method to identify and extract complex patterns from genomic data. We evaluate the performance of DAs by applying them to a large collection of breast cancer gene expression data. Results show that DAs successfully construct features that contain both clinical and molecular information. There are features that represent tumor or normal samples, estrogen receptor (ER) status, and molecular subtypes. Features constructed by the autoencoder generalize to an independent dataset collected using a distinct experimental platform. By integrating data from ENCODE for feature interpretation, we discover a feature representing ER status through association with key transcription factors in breast cancer. We also identify a feature highly predictive of patient survival and it is enriched by FOXM1 signaling pathway. The features constructed by DAs are often bimodally distributed with one peak near zero and another near one, which facilitates discretization. In summary, we demonstrate that DAs effectively extract key biological principles from gene expression data and summarize them into constructed features with convenient properties.
---
paper_title: ADAGE-Based Integration of Publicly Available Pseudomonas aeruginosa Gene Expression Data with Denoising Autoencoders Illuminates Microbe-Host Interactions
paper_content:
The increasing number of genome-wide assays of gene expression available from public databases presents opportunities for computational methods that facilitate hypothesis generation and biological interpretation of these data. We present an unsupervised machine learning approach, ADAGE (analysis using denoising autoencoders of gene expression), and apply it to the publicly available gene expression data compendium for Pseudomonas aeruginosa. In this approach, the machine-learned ADAGE model contained 50 nodes which we predicted would correspond to gene expression patterns across the gene expression compendium. While no biological knowledge was used during model construction, cooperonic genes had similar weights across nodes, and genes with similar weights across nodes were significantly more likely to share KEGG pathways. By analyzing newly generated and previously published microarray and transcriptome sequencing data, the ADAGE model identified differences between strains, modeled the cellular response to low oxygen, and predicted the involvement of biological processes based on low-level gene expression differences. ADAGE compared favorably with traditional principal component analysis and independent component analysis approaches in its ability to extract validated patterns, and based on our analyses, we propose that these approaches differ in the types of patterns they preferentially identify. We provide the ADAGE model with analysis of all publicly available P. aeruginosa GeneChip experiments and open source code for use with other species and settings. Extraction of consistent patterns across large-scale collections of genomic data using methods like ADAGE provides the opportunity to identify general principles and biologically important patterns in microbial biology. This approach will be particularly useful in less-well-studied microbial species. IMPORTANCE The quantity and breadth of genome-scale data sets that examine RNA expression in diverse bacterial and eukaryotic species are increasing more rapidly than for curated knowledge. Our ADAGE method integrates such data without requiring gene function, gene pathway, or experiment labeling, making practical its application to any large gene expression compendium. We built a Pseudomonas aeruginosa ADAGE model from a diverse set of publicly available experiments without any prespecified biological knowledge, and this model was accurate and predictive. We provide ADAGE results for the complete P. aeruginosa GeneChip compendium for use by researchers studying P. aeruginosa and source code that facilitates ADAGE's application to other species and data types. Author Video: An author video summary of this article is available.
---
paper_title: Unsupervised extraction of stable expression signatures from public compendia with eADAGE
paper_content:
Cross experiment comparisons in public data compendia are challenged by unmatched conditions and technical noise. The ADAGE method, which performs unsupervised integration with neural networks, can effectively identify biological patterns, but because ADAGE models, like many neural networks, are over-parameterized, different ADAGE models perform equally well. To enhance model robustness and better build signatures consistent with biological pathways, we developed an ensemble ADAGE (eADAGE) that integrated stable signatures across models. We applied eADAGE to a Pseudomonas aeruginosa compendium containing experiments performed in 78 media. eADAGE revealed a phosphate starvation response controlled by PhoB. While we expected PhoB activity in limiting phosphate conditions, our analyses found PhoB activity in other media with moderate phosphate and predicted that a second stimulus provided by the sensor kinase, KinB, is required for PhoB activation in this setting. We validated this relationship using both targeted and unbiased genetic approaches. eADAGE, which captures stable biological patterns, enables cross-experiment comparisons that can highlight measured but undiscovered relationships.
---
paper_title: Greedy layer-wise training of deep networks
paper_content:
Complexity theory of circuits strongly suggests that deep architectures can be much more efficient (sometimes exponentially) than shallow architectures, in terms of computational elements required to represent some functions. Deep multi-layer neural networks have many levels of non-linearities allowing them to compactly represent highly non-linear and highly-varying functions. However, until recently it was not clear how to train such deep networks, since gradient-based optimization starting from random initialization appears to often get stuck in poor solutions. Hinton et al. recently introduced a greedy layer-wise unsupervised learning algorithm for Deep Belief Networks (DBN), a generative model with many layers of hidden causal variables. In the context of the above optimization problem, we study this algorithm empirically and explore variants to better understand its success and extend it to cases where the inputs are continuous or where the structure of the input distribution is not revealing enough about the variable to be predicted in a supervised task. Our experiments also confirm the hypothesis that the greedy layer-wise unsupervised training strategy mostly helps the optimization, by initializing weights in a region near a good local minimum, giving rise to internal distributed representations that are high-level abstractions of the input, bringing better generalization.
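A toy sketch of the greedy layer-wise idea using plain autoencoders as the unsupervised building block (the paper also studies RBM-based Deep Belief Networks): each layer is trained to reconstruct the codes produced by the layer below, and the stacked encoders then initialize a deep network for supervised fine-tuning. Dimensions and epochs are illustrative.

import torch
import torch.nn as nn

def pretrain_layer(data, in_dim, hid_dim, epochs=10, lr=1e-3):
    """Train one autoencoder layer to reconstruct its input; return the trained encoder."""
    enc = nn.Sequential(nn.Linear(in_dim, hid_dim), nn.Sigmoid())
    dec = nn.Linear(hid_dim, in_dim)
    opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = nn.functional.mse_loss(dec(enc(data)), data)
        loss.backward()
        opt.step()
    return enc

# Greedy layer-wise scheme: pretrain each layer on the codes of the layer below,
# then stack the encoders to initialize a deep network for supervised fine-tuning.
x, dims = torch.randn(512, 100), [100, 64, 32]
encoders, codes = [], x
for d_in, d_out in zip(dims[:-1], dims[1:]):
    enc = pretrain_layer(codes, d_in, d_out)
    encoders.append(enc)
    codes = enc(codes).detach()
deep_net = nn.Sequential(*encoders, nn.Linear(dims[-1], 10))   # supervised fine-tuning follows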
---
paper_title: Contractive Auto-Encoders: Explicit Invariance During Feature Extraction
paper_content:
We present in this paper a novel approach for training deterministic auto-encoders. We show that by adding a well chosen penalty term to the classical reconstruction cost function, we can achieve results that equal or surpass those attained by other regularized auto-encoders as well as denoising auto-encoders on a range of datasets. This penalty term corresponds to the Frobenius norm of the Jacobian matrix of the encoder activations with respect to the input. We show that this penalty term results in a localized space contraction which in turn yields robust features on the activation layer. Furthermore, we show how this penalty term is related to both regularized auto-encoders and denoising auto-encoders and how it can be seen as a link between deterministic and non-deterministic auto-encoders. We find empirically that this penalty helps to carve a representation that better captures the local directions of variation dictated by the data, corresponding to a lower-dimensional non-linear manifold, while being more invariant to the vast majority of directions orthogonal to the manifold. Finally, we show that by using the learned features to initialize a MLP, we achieve state of the art classification error on a range of datasets, surpassing other methods of pretraining.
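A sketch of the contractive penalty for the common case of a sigmoid encoder, where the squared Frobenius norm of the Jacobian of the encoder activations has a simple closed form; shapes and the weight lam are illustrative.

import torch

torch.manual_seed(0)
W = (0.1 * torch.randn(64, 100)).requires_grad_()      # encoder weights: 100 inputs -> 64 hidden units
b = torch.zeros(64, requires_grad=True)
W_dec = (0.1 * torch.randn(100, 64)).requires_grad_()
b_dec = torch.zeros(100, requires_grad=True)

def cae_loss(x, lam=0.1):
    h = torch.sigmoid(x @ W.t() + b)                    # encoder activations, shape (batch, 64)
    x_hat = h @ W_dec.t() + b_dec                       # linear decoder reconstruction
    recon = ((x_hat - x) ** 2).sum(dim=1).mean()
    # Closed-form ||J_f(x)||_F^2 for a sigmoid encoder: sum_j (h_j (1 - h_j))^2 * ||W_j||^2
    contractive = ((h * (1 - h)) ** 2 @ (W ** 2).sum(dim=1)).mean()
    return recon + lam * contractive

loss = cae_loss(torch.rand(32, 100))
loss.backward()                                          # gradients reach W, b, W_dec, b_dec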
---
paper_title: Extracting and composing robust features with denoising autoencoders
paper_content:
Previous work has shown that the difficulties in learning deep generative or discriminative models can be overcome by an initial unsupervised learning step that maps inputs to useful intermediate representations. We introduce and motivate a new training principle for unsupervised learning of a representation based on the idea of making the learned representations robust to partial corruption of the input pattern. This approach can be used to train autoencoders, and these denoising autoencoders can be stacked to initialize deep architectures. The algorithm can be motivated from a manifold learning and information theoretic perspective or from a generative model perspective. Comparative experiments clearly show the surprising advantage of corrupting the input of autoencoders on a pattern classification benchmark suite.
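A minimal denoising autoencoder sketch with masking corruption: the network sees a partially zeroed-out input but is trained to reconstruct the clean version. The corruption level and layer sizes are arbitrary.

import torch
import torch.nn as nn

class DenoisingAutoencoder(nn.Module):
    """Corrupt the input (random masking here), then reconstruct the uncorrupted input."""
    def __init__(self, n_in=500, n_hid=100, corruption=0.2):
        super().__init__()
        self.corruption = corruption
        self.encoder = nn.Sequential(nn.Linear(n_in, n_hid), nn.Sigmoid())
        self.decoder = nn.Linear(n_hid, n_in)

    def forward(self, x):
        noisy = x * (torch.rand_like(x) > self.corruption).float()   # masking noise
        return self.decoder(self.encoder(noisy))

dae = DenoisingAutoencoder()
x = torch.rand(64, 500)
loss = nn.functional.mse_loss(dae(x), x)   # target is the clean input, not the corrupted one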
---
paper_title: Genomics of Drug Sensitivity in Cancer (GDSC): a resource for therapeutic biomarker discovery in cancer cells
paper_content:
Alterations in cancer genomes strongly influence clinical responses to treatment and in many instances are potent biomarkers for response to drugs. The Genomics of Drug Sensitivity in Cancer (GDSC) database (www.cancerRxgene.org) is the largest public resource for information on drug sensitivity in cancer cells and molecular markers of drug response. Data are freely available without restriction. GDSC currently contains drug sensitivity data for almost 75 000 experiments, describing response to 138 anticancer drugs across almost 700 cancer cell lines. To identify molecular markers of drug response, cell line drug sensitivity data are integrated with large genomic datasets obtained from the Catalogue of Somatic Mutations in Cancer database, including information on somatic mutations in cancer genes, gene amplification and deletion, tissue type and transcriptional data. Analysis of GDSC data is through a web portal focused on identifying molecular biomarkers of drug sensitivity based on queries of specific anticancer drugs or cancer genes. Graphical representations of the data are used throughout with links to related resources and all datasets are fully downloadable. GDSC provides a unique resource incorporating large drug sensitivity and genomic datasets to facilitate the discovery of new therapeutic biomarkers for cancer therapies.
---
paper_title: Deep Spatio-Temporal Architectures and Learning for Protein Structure Prediction
paper_content:
Residue-residue contact prediction is a fundamental problem in protein structure prediction. However, despite considerable research efforts, contact prediction methods are still largely unreliable. Here we introduce a novel deep machine-learning architecture which consists of a multidimensional stack of learning modules. For contact prediction, the idea is implemented as a three-dimensional stack of Neural Networks NN^k_ij, where i and j index the spatial coordinates of the contact map and k indexes "time". The temporal dimension is introduced to capture the fact that protein folding is not an instantaneous process, but rather a progressive refinement. Networks at level k in the stack can be trained in supervised fashion to refine the predictions produced by the previous level, hence addressing the problem of vanishing gradients, typical of deep architectures. Increased accuracy and generalization capabilities of this approach are established by rigorous comparison with other classical machine learning approaches for contact prediction. The deep approach leads to an accuracy for difficult long-range contacts of about 30%, roughly 10% above the state-of-the-art. Many variations in the architectures and the training algorithms are possible, leaving room for further improvements. Furthermore, the approach is applicable to other problems with strong underlying spatial and temporal components.
---
paper_title: Accurate De Novo Prediction of Protein Contact Map by Ultra-Deep Learning Model
paper_content:
Motivation: Protein contacts contain key information for the understanding of protein structure and function and thus, contact prediction from sequence is an important problem. Recently exciting progress has been made on this problem, but the predicted contacts for proteins without many sequence homologs is still of low quality and not very useful for de novo structure prediction. Method: This paper presents a new deep learning method that predicts contacts by integrating both evolutionary coupling (EC) and sequence conservation information through an ultra-deep neural network formed by two deep residual neural networks. The first residual network conducts a series of 1-dimensional convolutional transformation of sequential features; the second residual network conducts a series of 2-dimensional convolutional transformation of pairwise information including output of the first residual network, EC information and pairwise potential. By using very deep residual networks, we can accurately model contact occurrence patterns and complex sequence-structure relationship and thus, obtain higher-quality contact prediction regardless of how many sequence homologs are available for proteins in question. Results: Our method greatly outperforms existing methods and leads to much more accurate contact-assisted folding. Tested on 105 CASP11 targets, 76 past CAMEO hard targets, and 398 membrane proteins, the average top L long-range prediction accuracy obtained by our method, one representative EC method CCMpred and the CASP11 winner MetaPSICOV is 0.47, 0.21 and 0.30, respectively; the average top L/10 long-range accuracy of our method, CCMpred and MetaPSICOV is 0.77, 0.47 and 0.59, respectively. Ab initio folding using our predicted contacts as restraints but without any force fields can yield correct folds (i.e., TMscore>0.6) for 203 of the 579 test proteins, while that using MetaPSICOV- and CCMpred-predicted contacts can do so for only 79 and 62 of them, respectively. Our contact-assisted models also have much better quality than template-based models especially for membrane proteins. The 3D models built from our contact prediction have TMscore>0.5 for 208 of the 398 membrane proteins, while those from homology modeling have TMscore>0.5 for only 10 of them. Further, even if trained mostly by soluble proteins, our deep learning method works very well on membrane proteins. In the recent blind CAMEO benchmark, our fully-automated web server implementing this method successfully folded 6 targets with a new fold and only 0.3L-2.3L effective sequence homologs, including one β protein of 182 residues, one α+β protein of 125 residues, one α protein of 140 residues, one α protein of 217 residues, one α/β of 260 residues and one α protein of 462 residues. Our method also achieved the highest F1 score on free-modeling targets in the latest CASP (Critical Assessment of Structure Prediction), although it was not fully implemented back then. Availability: http://raptorx.uchicago.edu/ContactMap/.
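A hedged sketch of the core building block, not the authors' full two-branch network: 2-D residual blocks applied to an L x L pairwise feature map, with a sigmoid head scoring each residue pair as contact or non-contact. Channel counts and depth are illustrative.

import torch
import torch.nn as nn

class ResBlock2D(nn.Module):
    """One 2-D residual block: the identity shortcut lets very deep stacks train stably."""
    def __init__(self, channels=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels), nn.ReLU(),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
        )

    def forward(self, x):
        return torch.relu(x + self.body(x))

# Pairwise features for a protein of length L form an (L x L) "image" with C channels;
# a stack of residual blocks plus a 1x1 convolution and sigmoid scores every residue pair.
L, C = 120, 64
pairwise = torch.randn(1, C, L, L)
net = nn.Sequential(*[ResBlock2D(C) for _ in range(8)], nn.Conv2d(C, 1, kernel_size=1), nn.Sigmoid())
contact_probs = net(pairwise)   # (1, 1, L, L)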
---
paper_title: DanQ: a hybrid convolutional and recurrent deep neural network for quantifying the function of DNA sequences
paper_content:
Modeling the properties and functions of DNA sequences is an important, but challenging task in the broad field of genomics. This task is particularly difficult for non-coding DNA, the vast majority of which is still poorly understood in terms of function. A powerful predictive model for the function of non-coding DNA can have enormous benefit for both basic science and translational research because over 98% of the human genome is non-coding and 93% of disease-associated variants lie in these regions. To address this need, we propose DanQ, a novel hybrid convolutional and bi-directional long short-term memory recurrent neural network framework for predicting non-coding function de novo from sequence. In the DanQ model, the convolution layer captures regulatory motifs, while the recurrent layer captures long-term dependencies between the motifs in order to learn a regulatory ‘grammar’ to improve predictions. DanQ improves considerably upon other models across several metrics. For some regulatory markers, DanQ can achieve over a 50% relative improvement in the area under the precision-recall curve metric compared to related models. We have made the source code available at the github repository http://github.com/uci-cbcl/DanQ.
---
paper_title: DeepEnhancer: Predicting enhancers by convolutional neural networks
paper_content:
Enhancers are crucial to the understanding of mechanisms underlying gene transcriptional regulation. Although having been successfully applied in such projects as ENCODE and Roadmap to generate landscape of enhancers in human cell lines, high-throughput biological experimental techniques are still costly and time consuming for even larger scale identification of enhancers across a variety of tissues under different disease status, making computational identification of enhancers indispensable. In this paper, we propose a computational framework, named DeepEnhancer, to classify enhancers from background genomic sequences. We construct convolutional neural networks of various architectures and compare the classification performance with traditional sequence-based classifiers. We first train the deep learning model on the FANTOM5 permissive enhancer dataset, and then fine-tune the model on ENCODE cell type-specific enhancer datasets by adopting the transfer learning strategy. Experimental results demonstrate that DeepEnhancer has superior efficiency and effectiveness in classification tasks, and the use of max-pooling and batch normalization is beneficial to higher accuracy. To make our approach more understandable, we propose a strategy to visualize the convolutional kernels as sequence logos and compare them against the JASPAR database using TOMTOM. In summary, DeepEnhancer allows researchers to train highly accurate deep models and will be broadly applicable in computational biology.
---
paper_title: Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps
paper_content:
This paper addresses the visualisation of image classification models, learnt using deep Convolutional Networks (ConvNets). We consider two visualisation techniques, based on computing the gradient of the class score with respect to the input image. The first one generates an image, which maximises the class score [5], thus visualising the notion of the class, captured by a ConvNet. The second technique computes a class saliency map, specific to a given image and class. We show that such maps can be employed for weakly supervised object segmentation using classification ConvNets. Finally, we establish the connection between the gradient-based ConvNet visualisation methods and deconvolutional networks [13].
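A short sketch of the class saliency computation: take the gradient of a class score with respect to the input pixels and reduce over colour channels. The torchvision backbone and class index are stand-ins for illustration, not the networks used in the paper.

import torch
import torchvision.models as models

# Class saliency map: gradient of an (unnormalized) class score w.r.t. the input pixels,
# max-pooled over colour channels to give one influence value per pixel.
model = models.resnet18().eval()                     # untrained backbone, for illustration only
image = torch.randn(1, 3, 224, 224, requires_grad=True)
score = model(image)[0, 285]                         # score of one class (arbitrary index here)
score.backward()
saliency = image.grad.abs().max(dim=1)[0]            # (1, 224, 224) per-pixel saliency map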
---
paper_title: Deep generative models of genetic variation capture mutation effects
paper_content:
The functions of proteins and RNAs are determined by a myriad of interactions between their constituent residues, but most quantitative models of how molecular phenotype depends on genotype must approximate this by simple additive effects. While recent models have relaxed this constraint to also account for pairwise interactions, these approaches do not provide a tractable path towards modeling higher-order epistasis. Here, we show how latent variable models with nonlinear dependencies can be applied to capture beyond-pairwise constraints in biomolecules. We present a new probabilistic model for sequence families, DeepSequence, that can predict the effects of mutations across a variety of deep mutational scanning experiments significantly better than site independent or pairwise models that are based on the same evolutionary data. The model, learned in an unsupervised manner solely from sequence information, is grounded with biologically motivated priors, reveals latent organization of sequence families, and can be used to extrapolate to new parts of sequence space.
---
paper_title: Distributed Representations of Words and Phrases and their Compositionality
paper_content:
The recently introduced continuous Skip-gram model is an efficient method for learning high-quality distributed vector representations that capture a large number of precise syntactic and semantic word relationships. In this paper we present several extensions that improve both the quality of the vectors and the training speed. By subsampling of the frequent words we obtain significant speedup and also learn more regular word representations. We also describe a simple alternative to the hierarchical softmax called negative sampling. An inherent limitation of word representations is their indifference to word order and their inability to represent idiomatic phrases. For example, the meanings of "Canada" and "Air" cannot be easily combined to obtain "Air Canada". Motivated by this example, we present a simple method for finding phrases in text, and show that learning good vector representations for millions of phrases is possible.
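An illustrative sketch of skip-gram with negative sampling: the dot product of a (centre, context) embedding pair is pushed up, and dot products with randomly drawn "noise" words are pushed down, through a logistic loss. The paper draws negatives from a smoothed unigram distribution; uniform sampling is used here only to keep the sketch short, and the sizes are arbitrary.

import torch
import torch.nn as nn
import torch.nn.functional as F

V, D, K = 50_000, 300, 5                               # vocab size, embedding dim, negatives per pair
in_emb, out_emb = nn.Embedding(V, D), nn.Embedding(V, D)

def sgns_loss(center, context):
    """Skip-gram negative-sampling loss for a batch of (centre, context) word-id pairs."""
    pos = (in_emb(center) * out_emb(context)).sum(-1)                  # (batch,)
    noise = torch.randint(0, V, (center.size(0), K))                   # uniform negatives (simplified)
    neg = torch.bmm(out_emb(noise), in_emb(center).unsqueeze(-1)).squeeze(-1)   # (batch, K)
    return -(F.logsigmoid(pos).mean() + F.logsigmoid(-neg).sum(-1).mean())

loss = sgns_loss(torch.randint(0, V, (128,)), torch.randint(0, V, (128,)))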
---
paper_title: Predicting the sequence specificities of DNA- and RNA-binding proteins by deep learning
paper_content:
Knowing the sequence specificities of DNA- and RNA-binding proteins is essential for developing models of the regulatory processes in biological systems and for identifying causal disease variants. Here we show that sequence specificities can be ascertained from experimental data with 'deep learning' techniques, which offer a scalable, flexible and unified computational approach for pattern discovery. Using a diverse array of experimental data and evaluation metrics, we find that deep learning outperforms other state-of-the-art methods, even when training on in vitro data and testing on in vivo data. We call this approach DeepBind and have built a stand-alone software tool that is fully automatic and handles millions of sequences per experiment. Specificities determined by DeepBind are readily visualized as a weighted ensemble of position weight matrices or as a 'mutation map' that indicates how variations affect binding within a specific sequence.
---
paper_title: A Survey of Transfer and Multitask Learning in Bioinformatics
paper_content:
Machine learning and data mining have found many applications in biological domains, where we look to build predictive models based on labeled training data. However, in practice, high-quality labeled data is scarce, and labeling new data incurs high costs. Transfer and multitask learning offer an attractive alternative: by allowing useful knowledge to be extracted and transferred from data in auxiliary domains, they help counter the lack of data in the target domain. In this article, we survey recent advances in transfer and multitask learning for bioinformatics applications. In particular, we survey several key bioinformatics application areas, including sequence classification, gene expression data analysis, biological network reconstruction and biomedical applications.
---
paper_title: Transfer learning for Latin and Chinese characters with Deep Neural Networks
paper_content:
We analyze transfer learning with Deep Neural Networks (DNN) on various character recognition tasks. DNN trained on digits are perfectly capable of recognizing uppercase letters with minimal retraining. They are on par with DNN fully trained on uppercase letters, but train much faster. DNN trained on Chinese characters easily recognize uppercase Latin letters. Learning Chinese characters is accelerated by first pretraining a DNN on a small subset of all classes and then continuing to train on all classes. Furthermore, pretrained nets consistently outperform randomly initialized nets on new tasks with few labeled data.
---
paper_title: An Empirical Analysis of Domain Adaptation Algorithms for Genomic Sequence Analysis
paper_content:
We study the problem of domain transfer for a supervised classification task in mRNA splicing. We consider a number of recent domain transfer methods from machine learning, including some that are novel, and evaluate them on genomic sequence data from model organisms of varying evolutionary distance. We find that in cases where the organisms are not closely related, the use of domain adaptation methods can help improve classification performance.
---
paper_title: Probability Weighted Ensemble Transfer Learning for Predicting Interactions between HIV-1 and Human Proteins
paper_content:
Reconstruction of host-pathogen protein interaction networks is of great significance to reveal the underlying microbial pathogenesis. However, the current experimentally-derived networks are generally small and should be augmented by computational methods for less-biased biological inference. From the point of view of computational modelling, data scarcity, data unavailability and negative data sampling are the three major problems for host-pathogen protein interaction network reconstruction. In this work, we are motivated to address the three concerns and propose a probability weighted ensemble transfer learning model for HIV-human protein interaction prediction (PWEN-TLM), where support vector machine (SVM) is adopted as the individual classifier of the ensemble model. In the model, data scarcity and data unavailability are tackled by homolog knowledge transfer. The importance of homolog knowledge is measured by the ROC-AUC metric of the individual classifiers, whose outputs are probability weighted to yield the final decision. In addition, we further validate the assumption that the homolog knowledge alone is sufficient to train a satisfactory model for host-pathogen protein interaction prediction, which makes the model more robust against data unavailability with a less demanding data constraint. As regards negative data construction, experiments show that exclusiveness of subcellular co-localized proteins is unbiased and more reliable than random sampling. Lastly, we analyze the overlapped predictions between our model and the existing models, and apply the model to the recognition of novel host-pathogen PPIs for further biological research.
---
paper_title: Deep Model Based Transfer and Multi-Task Learning for Biological Image Analysis
paper_content:
A central theme in learning from image data is to develop appropriate image representations for the specific task at hand. Traditional methods used handcrafted local features combined with high-level image representations to generate image-level representations. Thus, a practical challenge is to determine what features are appropriate for specific tasks. For example, in the study of gene expression patterns in Drosophila melanogaster, texture features based on wavelets were particularly effective for determining the developmental stages from in situ hybridization (ISH) images. Such image representation is however not suitable for controlled vocabulary (CV) term annotation because each CV term is often associated with only a part of an image. Here, we developed problem-independent feature extraction methods to generate hierarchical representations for ISH images. Our approach is based on the deep convolutional neural networks (CNNs) that can act on image pixels directly. To make the extracted features generic, the models were trained using a natural image set with millions of labeled examples. These models were transferred to the ISH image domain and used directly as feature extractors to compute image representations. Furthermore, we employed multi-task learning method to fine-tune the pre-trained models with labeled ISH images, and also extracted features from the fine-tuned models. Experimental results showed that feature representations computed by deep models based on transfer and multi-task learning significantly outperformed other methods for annotating gene expression patterns at different stage ranges. We also demonstrated that the intermediate layers of deep models produced the best gene expression pattern representations.
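A generic sketch of the transfer pattern described above, assuming a recent torchvision release: a backbone pretrained on natural images is frozen and used as a feature extractor, and only a new task-specific head is trained on the target images (e.g. controlled-vocabulary annotation of ISH images). The backbone choice and the 60-way output are placeholders, not the models used in the paper.

import torch
import torch.nn as nn
import torchvision.models as models

backbone = models.resnet18(weights="IMAGENET1K_V1")     # pretrained on natural images
for p in backbone.parameters():
    p.requires_grad = False                             # freeze the transferred features
backbone.fc = nn.Linear(backbone.fc.in_features, 60)    # new task-specific head (60 is illustrative)

optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)   # train only the new head
logits = backbone(torch.randn(4, 3, 224, 224))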
---
paper_title: Semi-supervised multi-task learning for predicting interactions between HIV-1 and human proteins
paper_content:
Motivation: Protein–protein interactions (PPIs) are critical for virtually every biological function. Recently, researchers suggested to use supervised learning for the task of classifying pairs of proteins as interacting or not. However, its performance is largely restricted by the availability of truly interacting proteins (labeled). Meanwhile, there exists a considerable amount of protein pairs where an association appears between two partners, but not enough experimental evidence to support it as a direct interaction (partially labeled). ::: ::: Results: We propose a semi-supervised multi-task framework for predicting PPIs from not only labeled, but also partially labeled reference sets. The basic idea is to perform multi-task learning on a supervised classification task and a semi-supervised auxiliary task. The supervised classifier trains a multi-layer perceptron network for PPI predictions from labeled examples. The semi-supervised auxiliary task shares network layers of the supervised classifier and trains with partially labeled examples. Semi-supervision could be utilized in multiple ways. We tried three approaches in this article, (i) classification (to distinguish partial positives with negatives); (ii) ranking (to rate partial positive more likely than negatives); (iii) embedding (to make data clusters get similar labels). We applied this framework to improve the identification of interacting pairs between HIV-1 and human proteins. Our method improved upon the state-of-the-art method for this task indicating the benefits of semi-supervised multi-task learning using auxiliary information. ::: ::: Availability: http://www.cs.cmu.edu/~qyj/HIVsemi ::: ::: Contact: [email protected]
---
paper_title: Multimodal Transfer Deep Learning with Applications in Audio-Visual Recognition
paper_content:
We propose a transfer deep learning (TDL) framework that can transfer the knowledge obtained from a single-modal neural network to a network with a different modality. Specifically, we show that we can leverage speech data to fine-tune the network trained for video recognition, given an initial set of audio-video parallel dataset within the same semantics. Our approach first learns the analogy-preserving embeddings between the abstract representations learned from intermediate layers of each network, allowing for semantics-level transfer between the source and target modalities. We then apply our neural network operation that fine-tunes the target network with the additional knowledge transferred from the source network, while keeping the topology of the target network unchanged. While we present an audio-visual recognition task as an application of our approach, our framework is flexible and thus can work with any multimodal dataset, or with any already-existing deep networks that share the common underlying semantics. In this work in progress report, we aim to provide comprehensive results of different configurations of the proposed approach on two widely used audio-visual datasets, and we discuss potential applications of the proposed approach.
---
paper_title: Multitask Learning
paper_content:
Multitask Learning is an approach to inductive transfer that improves generalization by using the domain information contained in the training signals of related tasks as an inductive bias. It does this by learning tasks in parallel while using a shared representation; what is learned for each task can help other tasks be learned better. This paper reviews prior work on MTL, presents new evidence that MTL in backprop nets discovers task relatedness without the need of supervisory signals, and presents new results for MTL with k-nearest neighbor and kernel regression. In this paper we demonstrate multitask learning in three domains. We explain how multitask learning works, and show that there are many opportunities for multitask learning in real domains. We present an algorithm and results for multitask learning with case-based methods like k-nearest neighbor and kernel regression, and sketch an algorithm for multitask learning in decision trees. Because multitask learning works, can be applied to many different kinds of domains, and can be used with different learning algorithms, we conjecture there will be many opportunities for its use on real-world problems.
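A minimal hard-parameter-sharing sketch of the idea: one shared trunk feeds several task-specific heads, so each task's training signal shapes the shared representation and acts as an inductive bias for the others. Input size, trunk width and the per-task outputs are arbitrary.

import torch
import torch.nn as nn

class SharedTrunkMultiTask(nn.Module):
    """Hard parameter sharing: a shared representation with one output head per task."""
    def __init__(self, n_in=50, n_shared=128, task_outputs=(1, 1, 3)):
        super().__init__()
        self.shared = nn.Sequential(nn.Linear(n_in, n_shared), nn.ReLU(),
                                    nn.Linear(n_shared, n_shared), nn.ReLU())
        self.heads = nn.ModuleList([nn.Linear(n_shared, k) for k in task_outputs])

    def forward(self, x):
        h = self.shared(x)
        return [head(h) for head in self.heads]

model = SharedTrunkMultiTask()
outs = model(torch.randn(16, 50))
loss = sum(out.pow(2).mean() for out in outs)   # in practice: one suitable loss per task, summed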
---
paper_title: Methods for biological data integration: perspectives and challenges
paper_content:
Rapid technological advances have led to the production of different types of biological data and enabled construction of complex networks with various types of interactions between diverse biologi...
---
paper_title: A review on machine learning principles for multi-view biological data integration
paper_content:
Driven by high-throughput sequencing techniques, modern genomic and clinical studies are in a strong need of integrative machine learning models for better use of vast volumes of heterogeneous information in the deep understanding of biological systems and the development of predictive models. How data from multiple sources (called multi-view data) are incorporated in a learning system is a key step for successful analysis. In this article, we provide a comprehensive review on omics and clinical data integration techniques, from a machine learning perspective, for various analyses such as prediction, clustering, dimension reduction and association. We shall show that Bayesian models are able to use prior information and model measurements with various distributions; tree-based methods can either build a tree with all features or collectively make a final decision based on trees learned from each view; kernel methods fuse the similarity matrices learned from individual views together for a final similarity matrix or learning model; network-based fusion methods are capable of inferring direct and indirect associations in a heterogeneous network; matrix factorization models have potential to learn interactions among features from different views; and a range of deep neural networks can be integrated in multi-modal learning for capturing the complex mechanism of biological systems.
---
paper_title: Pixels that sound
paper_content:
People and animals fuse auditory and visual information to obtain robust perception. A particular benefit of such cross-modal analysis is the ability to localize visual events associated with sound sources. We aim to achieve this using computer-vision aided by a single microphone. Past efforts encountered problems stemming from the huge gap between the dimensions involved and the available data. This has led to solutions suffering from low spatio-temporal resolutions. We present a rigorous analysis of the fundamental problems associated with this task. Then, we present a stable and robust algorithm which overcomes past deficiencies. It grasps dynamic audio-visual events with high spatial resolution, and derives a unique solution. The algorithm effectively detects pixels that are associated with the sound, while filtering out other dynamic pixels. It is based on canonical correlation analysis (CCA), where we remove inherent ill-posedness by exploiting the typical spatial sparsity of audio-visual events. The algorithm is simple and efficient thanks to its reliance on linear programming and is free of user-defined parameters. To quantitatively assess the performance, we devise a localization criterion. The algorithm capabilities were demonstrated in experiments, where it overcame substantial visual distractions and audio noise.
---
paper_title: Select-additive learning: Improving generalization in multimodal sentiment analysis
paper_content:
Multimodal sentiment analysis is drawing an increasing amount of attention these days. It enables mining of opinions in video reviews which are now available aplenty on online platforms. However, multimodal sentiment analysis has only a few high-quality data sets annotated for training machine learning algorithms. These limited resources restrict the generalizability of models, where, for example, the unique characteristics of a few speakers (e.g., wearing glasses) may become a confounding factor for the sentiment classification task. In this paper, we propose a Select-Additive Learning (SAL) procedure that improves the generalizability of trained neural networks for multimodal sentiment analysis. In our experiments, we show that our SAL approach improves prediction accuracy significantly in all three modalities (verbal, acoustic, visual), as well as in their fusion. Our results show that SAL, even when trained on one dataset, achieves good generalization across two new test datasets.
---
paper_title: Integrative data analysis of multi-platform cancer data with a multimodal deep learning approach
paper_content:
Identification of cancer subtypes plays an important role in revealing useful insights into disease pathogenesis and advancing personalized therapy. The recent development of high-throughput sequencing technologies has enabled the rapid collection of multi-platform genomic data (e.g., gene expression, miRNA expression, and DNA methylation) for the same set of tumor samples. Although numerous integrative clustering approaches have been developed to analyze cancer data, few of them are particularly designed to exploit both deep intrinsic statistical properties of each input modality and complex cross-modality correlations among multi-platform input data. In this paper, we propose a new machine learning model, called multimodal deep belief network (DBN), to cluster cancer patients from multi-platform observation data. In our integrative clustering framework, relationships among inherent features of each single modality are first encoded into multiple layers of hidden variables, and then a joint latent model is employed to fuse common features derived from multiple input modalities. A practical learning algorithm, called contrastive divergence (CD), is applied to infer the parameters of our multimodal DBN model in an unsupervised manner. Tests on two available cancer datasets show that our integrative data analysis approach can effectively extract a unified representation of latent features to capture both intra- and cross-modality correlations, and identify meaningful disease subtypes from multi-platform cancer data. In addition, our approach can identify key genes and miRNAs that may play distinct roles in the pathogenesis of different cancer subtypes. Among those key miRNAs, we found that the expression level of miR-29a is highly correlated with survival time in ovarian cancer patients. These results indicate that our multimodal DBN based data analysis approach may have practical applications in cancer pathogenesis studies and provide useful guidelines for personalized cancer therapy.
---
paper_title: Accurate De Novo Prediction of Protein Contact Map by Ultra-Deep Learning Model
paper_content:
Motivation: Protein contacts contain key information for the understanding of protein structure and function and thus, contact prediction from sequence is an important problem. Recently exciting progress has been made on this problem, but the predicted contacts for proteins without many sequence homologs is still of low quality and not very useful for de novo structure prediction. Method: This paper presents a new deep learning method that predicts contacts by integrating both evolutionary coupling (EC) and sequence conservation information through an ultra-deep neural network formed by two deep residual neural networks. The first residual network conducts a series of 1-dimensional convolutional transformation of sequential features; the second residual network conducts a series of 2-dimensional convolutional transformation of pairwise information including output of the first residual network, EC information and pairwise potential. By using very deep residual networks, we can accurately model contact occurrence patterns and complex sequence-structure relationship and thus, obtain higher-quality contact prediction regardless of how many sequence homologs are available for proteins in question. Results: Our method greatly outperforms existing methods and leads to much more accurate contact-assisted folding. Tested on 105 CASP11 targets, 76 past CAMEO hard targets, and 398 membrane proteins, the average top L long-range prediction accuracy obtained by our method, one representative EC method CCMpred and the CASP11 winner MetaPSICOV is 0.47, 0.21 and 0.30, respectively; the average top L/10 long-range accuracy of our method, CCMpred and MetaPSICOV is 0.77, 0.47 and 0.59, respectively. Ab initio folding using our predicted contacts as restraints but without any force fields can yield correct folds (i.e., TMscore>0.6) for 203 of the 579 test proteins, while that using MetaPSICOV- and CCMpred-predicted contacts can do so for only 79 and 62 of them, respectively. Our contact-assisted models also have much better quality than template-based models especially for membrane proteins. The 3D models built from our contact prediction have TMscore>0.5 for 208 of the 398 membrane proteins, while those from homology modeling have TMscore>0.5 for only 10 of them. Further, even if trained mostly by soluble proteins, our deep learning method works very well on membrane proteins. In the recent blind CAMEO benchmark, our fully-automated web server implementing this method successfully folded 6 targets with a new fold and only 0.3L-2.3L effective sequence homologs, including one β protein of 182 residues, one α+β protein of 125 residues, one α protein of 140 residues, one α protein of 217 residues, one α/β of 260 residues and one α protein of 462 residues. Our method also achieved the highest F1 score on free-modeling targets in the latest CASP (Critical Assessment of Structure Prediction), although it was not fully implemented back then. Availability: http://raptorx.uchicago.edu/ContactMap/.
---
paper_title: Learning structure in gene expression data using deep architectures, with an application to gene clustering
paper_content:
Genes play a central role in all biological processes. DNA microarray technology has made it possible to study the expression behavior of thousands of genes in one go. Often, gene expression data is used to generate features for supervised and unsupervised learning tasks. At the same time, advances in the field of deep learning have made available a plethora of architectures. In this paper, we use deep architectures pre-trained in an unsupervised manner using denoising autoencoders as a preprocessing step for a popular unsupervised learning task. Denoising autoencoders (DA) can be used to learn a compact representation of input, and have been used to generate features for further supervised learning tasks. We propose that our deep architectures can be treated as empirical versions of Deep Belief Networks (DBNs). We use our deep architectures to regenerate gene expression time series data for two different data sets. We test our hypothesis on two popular datasets for the unsupervised learning task of clustering and find promising improvements in performance.
---
paper_title: Gene expression inference with deep learning
paper_content:
Motivation: Large-scale gene expression profiling has been widely used to characterize cellular states in response to various disease conditions, genetic perturbations, etc. Although the cost of whole-genome expression profiles has been dropping steadily, generating a compendium of expression profiling over thousands of samples is still very expensive. Recognizing that gene expressions are often highly correlated, researchers from the NIH LINCS program have developed a cost-effective strategy of profiling only ~1,000 carefully selected landmark genes and relying on computational methods to infer the expression of remaining target genes. However, the computational approach adopted by the LINCS program is currently based on linear regression, limiting its accuracy since it does not capture complex nonlinear relationship between expression of genes. Results: We present a deep learning method (abbreviated as D-GEX) to infer the expression of target genes from the expression of landmark genes. We used the microarray-based GEO dataset, consisting of 111K expression profiles, to train our model and compare its performance to those from other methods. In terms of mean absolute error averaged across all genes, deep learning significantly outperforms linear regression with 15.33% relative improvement. A gene-wise comparative analysis shows that deep learning achieves lower error than linear regression in 99.97% of the target genes. We also tested the performance of our learned model on an independent RNA-Seq-based GTEx dataset, which consists of 2,921 expression profiles. Deep learning still outperforms linear regression with 6.57% relative improvement, and achieves lower error in 81.31% of the target genes. Availability: D-GEX is available at https://github.com/uci-cbcl/D-GEX. Contact: [email protected]
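A sketch of the regression setting described above as a fully connected network that maps landmark-gene expression to target-gene expression and is evaluated with mean absolute error; the layer widths and gene counts are illustrative, not the exact D-GEX configuration.

import torch
import torch.nn as nn

n_landmark, n_target = 943, 9520                 # illustrative counts of landmark and target genes
mlp = nn.Sequential(
    nn.Linear(n_landmark, 3000), nn.Tanh(), nn.Dropout(0.1),
    nn.Linear(3000, 3000), nn.Tanh(), nn.Dropout(0.1),
    nn.Linear(3000, n_target),                   # one regression output per target gene
)

landmarks = torch.randn(64, n_landmark)          # a batch of landmark-gene expression profiles
targets = torch.randn(64, n_target)
loss = nn.functional.l1_loss(mlp(landmarks), targets)   # mean absolute error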
---
paper_title: Extracting compact representation of knowledge from gene expression data for protein-protein interaction
paper_content:
DNA microarrays help measure the expression levels of thousands of genes concurrently. A major challenge is to extract biologically relevant information and knowledge from massive amounts of microarray data. In this paper, we explore learning a compact representation of gene expression profiles by using a multi-task neural network model, so that further analyses can be carried out more efficiently on the data. The proposed network is trained with prediction tasks for Protein-Protein Interactions (PPIs), predicting Gene Ontology (GO) similarities as well as geometrical constrains, while simultaneously learning a high-level representation of gene expression data. We argue that deep networks can extract more information from expression data as compared to standard statistical models. We tested the utility of our method by comparing its performance with famous feature extraction and dimensionality reduction methods on the task of PPI prediction, and found the results to be promising.
---
paper_title: The Connectivity Map: Using Gene-Expression Signatures to Connect Small Molecules, Genes, and Disease
paper_content:
To pursue a systematic approach to the discovery of functional connections among diseases, genetic perturbation, and drug action, we have created the first installment of a reference collection of gene-expression profiles from cultured human cells treated with bioactive small molecules, together with pattern-matching software to mine these data. We demonstrate that this "Connectivity Map" resource can be used to find connections among small molecules sharing a mechanism of action, chemicals and physiological processes, and diseases and drugs. These results indicate the feasibility of the approach and suggest the value of a large-scale community Connectivity Map project.
---
paper_title: Unsupervised Feature Construction and Knowledge Extraction from Genome-Wide Assays of Breast Cancer with Denoising Autoencoders
paper_content:
Big data bring new opportunities for methods that efficiently summarize and automatically extract knowledge from such compendia. While both supervised learning algorithms and unsupervised clustering algorithms have been successfully applied to biological data, they are either dependent on known biology or limited to discerning the most significant signals in the data. Here we present denoising autoencoders (DAs), which employ a data-defined learning objective independent of known biology, as a method to identify and extract complex patterns from genomic data. We evaluate the performance of DAs by applying them to a large collection of breast cancer gene expression data. Results show that DAs successfully construct features that contain both clinical and molecular information. There are features that represent tumor or normal samples, estrogen receptor (ER) status, and molecular subtypes. Features constructed by the autoencoder generalize to an independent dataset collected using a distinct experimental platform. By integrating data from ENCODE for feature interpretation, we discover a feature representing ER status through association with key transcription factors in breast cancer. We also identify a feature highly predictive of patient survival and it is enriched by FOXM1 signaling pathway. The features constructed by DAs are often bimodally distributed with one peak near zero and another near one, which facilitates discretization. In summary, we demonstrate that DAs effectively extract key biological principles from gene expression data and summarize them into constructed features with convenient properties.
---
paper_title: ADAGE-Based Integration of Publicly Available Pseudomonas aeruginosa Gene Expression Data with Denoising Autoencoders Illuminates Microbe-Host Interactions
paper_content:
The increasing number of genome-wide assays of gene expression available from public databases presents opportunities for computational methods that facilitate hypothesis generation and biological interpretation of these data. We present an unsupervised machine learning approach, ADAGE (analysis using denoising autoencoders of gene expression), and apply it to the publicly available gene expression data compendium for Pseudomonas aeruginosa. In this approach, the machine-learned ADAGE model contained 50 nodes which we predicted would correspond to gene expression patterns across the gene expression compendium. While no biological knowledge was used during model construction, cooperonic genes had similar weights across nodes, and genes with similar weights across nodes were significantly more likely to share KEGG pathways. By analyzing newly generated and previously published microarray and transcriptome sequencing data, the ADAGE model identified differences between strains, modeled the cellular response to low oxygen, and predicted the involvement of biological processes based on low-level gene expression differences. ADAGE compared favorably with traditional principal component analysis and independent component analysis approaches in its ability to extract validated patterns, and based on our analyses, we propose that these approaches differ in the types of patterns they preferentially identify. We provide the ADAGE model with analysis of all publicly available P. aeruginosa GeneChip experiments and open source code for use with other species and settings. Extraction of consistent patterns across large-scale collections of genomic data using methods like ADAGE provides the opportunity to identify general principles and biologically important patterns in microbial biology. This approach will be particularly useful in less-well-studied microbial species. IMPORTANCE The quantity and breadth of genome-scale data sets that examine RNA expression in diverse bacterial and eukaryotic species are increasing more rapidly than for curated knowledge. Our ADAGE method integrates such data without requiring gene function, gene pathway, or experiment labeling, making practical its application to any large gene expression compendium. We built a Pseudomonas aeruginosa ADAGE model from a diverse set of publicly available experiments without any prespecified biological knowledge, and this model was accurate and predictive. We provide ADAGE results for the complete P. aeruginosa GeneChip compendium for use by researchers studying P. aeruginosa and source code that facilitates ADAGE's application to other species and data types.
---
paper_title: Unsupervised extraction of stable expression signatures from public compendia with eADAGE
paper_content:
Cross experiment comparisons in public data compendia are challenged by unmatched conditions and technical noise. The ADAGE method, which performs unsupervised integration with neural networks, can effectively identify biological patterns, but because ADAGE models, like many neural networks, are over-parameterized, different ADAGE models perform equally well. To enhance model robustness and better build signatures consistent with biological pathways, we developed an ensemble ADAGE (eADAGE) that integrated stable signatures across models. We applied eADAGE to a Pseudomonas aeruginosa compendium containing experiments performed in 78 media. eADAGE revealed a phosphate starvation response controlled by PhoB. While we expected PhoB activity in limiting phosphate conditions, our analyses found PhoB activity in other media with moderate phosphate and predicted that a second stimulus provided by the sensor kinase, KinB, is required for PhoB activation in this setting. We validated this relationship using both targeted and unbiased genetic approaches. eADAGE, which captures stable biological patterns, enables cross-experiment comparisons that can highlight measured but undiscovered relationships.
---
paper_title: Principal Component Analysis for clustering gene expression data
paper_content:
Motivation: There is a great need to develop analytical methodology to analyze and to exploit the information contained in gene expression data. Because of the large number of genes and the complexity of biological networks, clustering is a useful exploratory technique for analysis of gene expression data. Other classical techniques, such as principal component analysis (PCA), have also been applied to analyze gene expression data. Using different data analysis techniques and different clustering algorithms to analyze the same data set can lead to very different conclusions. Our goal is to study the effectiveness of principal components (PCs) in capturing cluster structure. Specifically, using both real and synthetic gene expression data sets, we compared the quality of clusters obtained from the original data to the quality of clusters obtained after projecting onto subsets of the principal component axes. Results: Our empirical study showed that clustering with the PCs instead of the original variables does not necessarily improve, and often degrades, cluster quality. In particular, the first few PCs (which contain most of the variation in the data) do not necessarily capture most of the cluster structure. We also showed that clustering with PCs has different impact on different algorithms and different similarity metrics. Overall, we would not recommend PCA before clustering except in special circumstances.
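A small sketch of the comparison this study performs, assuming synthetic blob data and the silhouette score as a cluster-quality proxy (the paper uses its own datasets and metrics): k-means on the original variables versus k-means on the first few principal components.

```python
# Compare k-means cluster quality on original features vs. PCA projections.
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

X, _ = make_blobs(n_samples=300, n_features=50, centers=4, random_state=0)

km_orig = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)
print("original features:", silhouette_score(X, km_orig.labels_))

for n_pcs in (2, 5, 10):
    Z = PCA(n_components=n_pcs).fit_transform(X)
    km_pca = KMeans(n_clusters=4, n_init=10, random_state=0).fit(Z)
    # score in the original space so the comparison is on equal footing
    print(f"first {n_pcs} PCs:", silhouette_score(X, km_pca.labels_))
```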
---
paper_title: Gene Expression Differences Among Primates Are Associated With Changes in a Histone Epigenetic Modification
paper_content:
Changes in gene regulation are thought to play an important role in speciation and adaptation, especially in primates. However, we still know relatively little about the mechanisms underlying regulatory evolution. In particular, the extent to which epigenetic modifications underlie gene expression differences between primates is not yet known. Our study focuses on an epigenetic histone modification, H3K4me3, which is thought to promote transcription. To investigate the contribution of H3K4me3 to regulatory differences between species, we collected gene expression data and identified H3K4me3-associated genomic regions in lymphoblastoid cell lines (LCLs) from humans, chimpanzees, and rhesus macaques, using three cell lines from each species. We found strong evidence for conservation of H3K4me3 localization in primates. Moreover, regardless of species, H3K4me3 is consistently enriched near annotated transcription start sites (TSS), and highly expressed genes are more likely than lowly expressed genes to have the histone modification near their TSS. Interestingly, we observed an enrichment of interspecies differences in H3K4me3 at the TSS of genes that are differentially expressed between species. We estimate that as much as 7% of gene expression differences between the LCLs of humans, chimpanzees, and rhesus macaques may be explained, at least in part, by changes in the status of H3K4me3 histone modifications. Our results suggest a modest, yet important role for epigenetic changes in gene expression differences between primates.
---
paper_title: The correlation between histone modifications and gene expression
paper_content:
In the nuclei of eukaryotic cells, DNA wraps around the octamer of histone proteins to form the nucleosome, in a structure like ‘beads on a string’, which makes up the basic unit of chromatin. Chromatin further folds into higher-level structures, loosely or tightly, which helps to determine the accessibility of the DNA. For instance, actively transcribed regions tend to be in looser chromatin structures so that transcription factors and RNA polymerases can access the genes. Chromatin structure can be altered by various post-translational modifications of the N-terminal tail residues of histone proteins. For example, acetylation of a lysine residue can neutralize its positive charge and weaken the binding between the histone and the negatively charged DNA, which exposes the DNA to regulatory proteins. Methylation is another common type of histone modification; for example, the lysine at the fourth position of the H3 histone can be mono-, di- or tri-methylated (denoted as H3K4me1, H3K4me2 and H3K4me3, respectively). By examining histone modification patterns at highly conserved noncoding regions in mouse embryonic stem cells, Bernstein et al. found ‘bivalent domains’ of histone modifications (i.e., harboring both the repressive mark H3K27me3 and the active mark H3K4me3) near genes with poised transcription [1]. When embryonic stem cells differentiate into more specialized cells (e.g., neural precursor cells), a subset of the bivalent domains are resolved (i.e., H3K27me3 becomes weaker, while H3K4me3 becomes stronger, and these loci coincide with genes that are actively transcribed in neural precursor cells). Thus, combinations of histone marks are indicative of transcriptional states. Barski et al. mapped 20 histone methylations of lysine and arginine residues in human CD4 T cells using chromatin immunoprecipitation followed by sequencing (ChIP-seq) [2]. They found that monomethylated H3K27, H3K9, H4K20, H3K79 and H2BK5 were linked to gene activation, while trimethylated H3K27, H3K9 and H3K79 were linked to gene repression. In a later study, the group profiled 39 additional histone modifications in human CD4 T cells [3]. They identified more than 3000 genes that were highly expressed in these cells and the promoters of these genes showed high levels of 17 histone modifications (called a histone modification module). Other studies also investigated the correlation between individual histone marks and gene expression, although not in a quantitative way [4,5].
---
paper_title: Gene expression inference with deep learning
paper_content:
Motivation: Large-scale gene expression profiling has been widely used to characterize cellular states in response to various disease conditions, genetic perturbations, etc. Although the cost of whole-genome expression profiles has been dropping steadily, generating a compendium of expression profiling over thousands of samples is still very expensive. Recognizing that gene expressions are often highly correlated, researchers from the NIH LINCS program have developed a cost-effective strategy of profiling only ~1,000 carefully selected landmark genes and relying on computational methods to infer the expression of remaining target genes. However, the computational approach adopted by the LINCS program is currently based on linear regression, limiting its accuracy since it does not capture complex nonlinear relationship between expression of genes. Results: We present a deep learning method (abbreviated as D-GEX) to infer the expression of target genes from the expression of landmark genes. We used the microarray-based GEO dataset, consisting of 111K expression profiles, to train our model and compare its performance to those from other methods. In terms of mean absolute error averaged across all genes, deep learning significantly outperforms linear regression with 15.33% relative improvement. A gene-wise comparative analysis shows that deep learning achieves lower error than linear regression in 99.97% of the target genes. We also tested the performance of our learned model on an independent RNA-Seq-based GTEx dataset, which consists of 2,921 expression profiles. Deep learning still outperforms linear regression with 6.57% relative improvement, and achieves lower error in 81.31% of the target genes. Availability: D-GEX is available at https://github.com/uci-cbcl/D-GEX. Contact: [email protected]
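A toy sketch of the landmark-to-target inference setup on synthetic data, assuming scikit-learn's LinearRegression and MLPRegressor as stand-ins for the LINCS linear baseline and a D-GEX-style network; the sizes and the nonlinear generator are illustrative, not the paper's.

```python
# Predict "target" gene expression from "landmark" gene expression with a
# linear model and a small multilayer perceptron, then compare test MAE.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
landmarks = rng.normal(size=(2000, 50))                                # 50 landmark genes
W = rng.normal(size=(50, 20))
targets = np.tanh(landmarks @ W) + 0.1 * rng.normal(size=(2000, 20))   # nonlinear targets

X_tr, X_te, y_tr, y_te = train_test_split(landmarks, targets, random_state=0)

lin = LinearRegression().fit(X_tr, y_tr)
mlp = MLPRegressor(hidden_layer_sizes=(200,), max_iter=500,
                   random_state=0).fit(X_tr, y_tr)

print("linear MAE:", mean_absolute_error(y_te, lin.predict(X_te)))
print("MLP MAE:   ", mean_absolute_error(y_te, mlp.predict(X_te)))
```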
---
paper_title: Histone modification levels are predictive for gene expression
paper_content:
Histones are frequently decorated with covalent modifications. These histone modifications are thought to be involved in various chromatin-dependent processes including transcription. To elucidate the relationship between histone modifications and transcription, we derived quantitative models to predict the expression level of genes from histone modification levels. We found that histone modification levels and gene expression are very well correlated. Moreover, we show that only a small number of histone modifications are necessary to accurately predict gene expression. We show that different sets of histone modifications are necessary to predict gene expression driven by high CpG content promoters (HCPs) or low CpG content promoters (LCPs). Quantitative models involving H3K4me3 and H3K79me1 are the most predictive of the expression levels in LCPs, whereas HCPs require H3K27ac and H4K20me1. Finally, we show that the connections between histone modifications and gene expression seem to be general, as we were able to predict gene expression levels of one cell type using a model trained on another one.
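A hedged sketch of a quantitative marks-to-expression model, assuming synthetic promoter signals and an L1-penalized regression (LassoCV) as one simple way to select a small subset of predictive marks; the mark names are real, the coefficients and data are not.

```python
# Regress log expression on log-transformed histone-mark signals and let the
# L1 penalty shrink uninformative marks to zero.
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(1)
marks = ["H3K4me3", "H3K27ac", "H3K79me1", "H4K20me1", "H3K9me3", "H3K27me3"]
X = rng.lognormal(size=(1000, len(marks)))          # promoter signal per mark (toy)
log_expr = (1.2 * np.log1p(X[:, 0]) + 0.8 * np.log1p(X[:, 1])
            - 0.5 * np.log1p(X[:, 5]) + rng.normal(scale=0.3, size=1000))

model = LassoCV(cv=5).fit(np.log1p(X), log_expr)
for name, coef in zip(marks, model.coef_):
    print(f"{name:10s} {coef:+.3f}")                # selected marks keep nonzero weights
print("R^2:", model.score(np.log1p(X), log_expr))
```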
---
paper_title: A statistical framework for modeling gene expression using chromatin features and application to modENCODE datasets
paper_content:
We develop a statistical framework to study the relationship between chromatin features and gene expression. This can be used to predict gene expression of protein coding genes, as well as microRNAs. We demonstrate the prediction in a variety of contexts, focusing particularly on the modENCODE worm datasets. Moreover, our framework reveals the positional contribution around genes (upstream or downstream) of distinct chromatin features to the overall prediction of expression levels.
---
paper_title: Combinatorial Roles of DNA Methylation and Histone Modifications on Gene Expression
paper_content:
Gene regulation, despite being investigated in a large number of works, is still not well understood. The mechanisms that control gene expression remain one of the open problems. Epigenetic factors, among others, are assumed to play a role. In this work, we focus on DNA methylation and post-translational histone modifications (PTMs). Each has individually been shown to contribute to the control of gene expression. However, neither can fully account for expression levels, i.e. low or high. Therefore, the hypothesis of their combinatorial role, as two of the most influential factors, has been established and discussed in the literature. Taking a computational approach based on rule induction, we derived 83 rules and identified some key PTMs that have considerable effects, such as H2BK5ac, H3K79me123, H4K91ac, and H3K4me3. We also found interesting patterns of DNA methylation and PTMs that can explain the low expression of genes in CD4+ T cells. The results include previously reported patterns as well as some new valid ones which could give new insights into the process in question.
---
paper_title: Predicting Gene Expression from Sequence: A Reexamination
paper_content:
Although much of the information regarding gene expression is encoded in the genome, deciphering such information has been very challenging. We reexamined Beer and Tavazoie's (BT) approach to predict mRNA expression patterns of 2,587 genes in Saccharomyces cerevisiae from the information in their respective promoter sequences. Instead of fitting complex Bayesian network models, we trained naive Bayes classifiers using only the sequence-motif matching scores provided by BT. Our simple models correctly predict expression patterns for 79% of the genes, based on the same criterion and the same cross-validation (CV) procedure as BT, which compares favorably to the 73% accuracy of BT. The fact that our approach did not use position and orientation information of the predicted binding sites but achieved higher prediction accuracy motivated us to investigate a few biological predictions made by BT. We found that some of their predictions, especially those related to motif orientations and positions, are at best circumstantial. For example, the combinatorial rules suggested by BT for the PAC and RRPE motifs are not unique to the cluster of genes from which the predictive model was inferred, and there are simpler rules that are statistically more significant than BT's. We also show that the CV procedure used by BT to estimate their method's prediction accuracy is inappropriate and may have overestimated the prediction accuracy by about 10%.
---
paper_title: Defining the chromatin signature of inducible genes in T cells
paper_content:
Background: Specific chromatin characteristics, especially the modification status of the core histone proteins, are associated with active and inactive genes. There is growing evidence that genes that respond to environmental or developmental signals may possess distinct chromatin marks. Using a T cell model and both genome-wide and gene-focused approaches, we examined the chromatin characteristics of genes that respond to T cell activation. Results: To facilitate comparison of genes with similar basal expression levels, we used expression-profiling data to bin genes according to their basal expression levels. We found that inducible genes in the lower basal expression bins, especially rapidly induced primary response genes, were more likely than their non-responsive counterparts to display the histone modifications of active genes, have RNA polymerase II (Pol II) at their promoters and show evidence of ongoing basal elongation. There was little or no evidence for the presence of active chromatin marks in the absence of promoter Pol II on these inducible genes. In addition, we identified a subgroup of genes with active promoter chromatin marks and promoter Pol II but no evidence of elongation. Following T cell activation, we find little evidence for a major shift in the active chromatin signature around inducible gene promoters but many genes recruit more Pol II and show increased evidence of elongation. Conclusions: These results suggest that the majority of inducible genes are primed for activation by having an active chromatin signature and promoter Pol II with or without ongoing elongation.
---
paper_title: Modeling gene expression using chromatin features in various cellular contexts
paper_content:
Background: Previous work has demonstrated that chromatin feature levels correlate with gene expression. The ENCODE project enables us to further explore this relationship using an unprecedented volume of data. Expression levels from more than 100,000 promoters were measured using a variety of high-throughput techniques applied to RNA extracted by different protocols from different cellular compartments of several human cell lines. ENCODE also generated the genome-wide mapping of eleven histone marks, one histone variant, and DNase I hypersensitivity sites in seven cell lines. Results: We built a novel quantitative model to study the relationship between chromatin features and expression levels. Our study not only confirms that the general relationships found in previous studies hold across various cell lines, but also makes new suggestions about the relationship between chromatin features and gene expression levels. We found that expression status and expression levels can be predicted by different groups of chromatin features, both with high accuracy. We also found that expression levels measured by CAGE are better predicted than by RNA-PET or RNA-Seq, and different categories of chromatin features are the most predictive of expression for different RNA measurement methods. Additionally, PolyA+ RNA is overall more predictable than PolyA- RNA among different cell compartments, and PolyA+ cytosolic RNA measured with RNA-Seq is more predictable than PolyA+ nuclear RNA, while the opposite is true for PolyA- RNA. Conclusions: Our study provides new insights into transcriptional regulation by analyzing chromatin features in different cellular contexts.
---
paper_title: Deep Feature Selection: Theory and Application to Identify Enhancers and Promoters
paper_content:
Sparse linear models approximate target variable(s) by a sparse linear combination of input variables. Since they are simple, fast, and able to select features, they are widely used in classification and regression. Essentially they are shallow feed-forward neural networks that have three limitations: (1) inability to model nonlinearity of features, (2) inability to learn high-level features, and (3) unnatural extensions to select features in a multiclass case. Deep neural networks are models structured by multiple hidden layers with nonlinear activation functions. Compared with linear models, they have two distinctive strengths: the capability to (1) model complex systems with nonlinear structures and (2) learn high-level representation of features. Deep learning has been applied in many large and complex systems where deep models significantly outperform shallow ones. However, feature selection at the input level, which is very helpful for understanding the nature of a complex system, is still...
---
paper_title: Detection of RNA polymerase II promoters and polyadenylation sites in human DNA sequence
paper_content:
Detection of RNA polymerase II promoters and polyadenylation sites helps to locate gene boundaries and can enhance accurate gene recognition and modeling in genomic DNA sequence. We describe a system which can be used to detect polyadenylation sites and thus delineate the 3′ boundary of a gene, and discuss improvements to a system first described in Matis et al. (1995) [Matis S., Shah M., Mural R. J. & Uberbacher E. C. (1995) Proc. First Wrld Conf. Computat. Med., Public Hlth, Biotechnol. (Wrld Sci.) (in press).], which predicts a large subset of RNA polymerase II promoters. The promoter system used statistical matrices and distance information as inputs for a neural network which was trained to provide initial promoter recognition. The output of the network was further refined by applying rules which use the gene context information predicted by GRAIL. We have reconstructed the rule-based system which uses gene context information and significantly improved the sensitivity and selectivity of promoter detection.
---
paper_title: The identification of cis-regulatory elements: A review from a machine learning perspective
paper_content:
The majority of the human genome consists of non-coding regions that have been called junk DNA. However, recent studies have unveiled that these regions contain cis-regulatory elements, such as promoters, enhancers, silencers, insulators, etc. These regulatory elements can play crucial roles in controlling gene expressions in specific cell types, conditions, and developmental stages. Disruption to these regions could contribute to phenotype changes. Precisely identifying regulatory elements is key to deciphering the mechanisms underlying transcriptional regulation. Cis-regulatory events are complex processes that involve chromatin accessibility, transcription factor binding, DNA methylation, histone modifications, and the interactions between them. The development of next-generation sequencing techniques has allowed us to capture these genomic features in depth. Applied analysis of genome sequences for clinical genetics has increased the urgency for detecting these regions. However, the complexity of cis-regulatory events and the deluge of sequencing data require accurate and efficient computational approaches, in particular, machine learning techniques. In this review, we describe machine learning approaches for predicting transcription factor binding sites, enhancers, and promoters, primarily driven by next-generation sequencing data. Data sources are provided in order to facilitate testing of novel methods. The purpose of this review is to attract computational experts and data scientists to advance this field.
---
paper_title: Enhanced Regulatory Sequence Prediction Using Gapped k-mer Features
paper_content:
Oligomers of length k, or k-mers, are convenient and widely used features for modeling the properties and functions of DNA and protein sequences. However, k-mers suffer from the inherent limitation that if the parameter k is increased to resolve longer features, the probability of observing any specific k-mer becomes very small, and k-mer counts approach a binary variable, with most k-mers absent and a few present once. Thus, any statistical learning approach using k-mers as features becomes susceptible to noisy training set k-mer frequencies once k becomes large. To address this problem, we introduce alternative feature sets using gapped k-mers, a new classifier, gkm-SVM, and a general method for robust estimation of k-mer frequencies. To make the method applicable to large-scale genome wide applications, we develop an efficient tree data structure for computing the kernel matrix. We show that compared to our original kmer-SVM and alternative approaches, our gkm-SVM predicts functional genomic regulatory elements and tissue specific enhancers with significantly improved accuracy, increasing the precision by up to a factor of two. We then show that gkm-SVM consistently outperforms kmer-SVM on human ENCODE ChIP-seq datasets, and further demonstrate the general utility of our method using a Naïve-Bayes classifier. Although developed for regulatory sequence analysis, these methods can be applied to any sequence classification problem.
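A sketch of explicit gapped k-mer counting followed by a linear SVM on toy sequences; gkm-SVM itself computes this kernel with an efficient tree structure rather than materializing the features, so this only illustrates the feature definition.

```python
# Gapped k-mer features: every length-l window contributes counts for all ways
# of keeping k informative positions (the rest treated as gaps), and a linear
# SVM is trained on those counts.
from itertools import combinations
from collections import Counter
from sklearn.feature_extraction import DictVectorizer
from sklearn.svm import LinearSVC

def gapped_kmer_counts(seq, l=6, k=4):
    counts = Counter()
    for i in range(len(seq) - l + 1):
        window = seq[i:i + l]
        for kept in combinations(range(l), k):
            key = tuple((p, window[p]) for p in kept)   # (position, base) pairs kept
            counts[str(key)] += 1
    return counts

pos = ["ACGTGACGTCAGTTACGTGA", "TTACGTGATTACGTCAGGTA"]   # toy "bound" sequences
neg = ["AAAAATTTTTCCCCCGGGGG", "ATATATATATCGCGCGCGCG"]   # toy "unbound" sequences
X_dicts = [gapped_kmer_counts(s) for s in pos + neg]
y = [1, 1, 0, 0]

vec = DictVectorizer()
X = vec.fit_transform(X_dicts)
clf = LinearSVC().fit(X, y)
print(clf.predict(vec.transform([gapped_kmer_counts("ACGTGACGTCAGTT")])))
```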
---
paper_title: Boosted Categorical Restricted Boltzmann Machine for Computational Prediction of Splice Junctions
paper_content:
Splicing refers to the elimination of noncoding regions in transcribed pre-messenger ribonucleic acid (RNA). Discovering splice sites is an important machine learning task that helps us not only to identify the basic units of genetic heredity but also to understand how different proteins are produced. Existing methods for splicing prediction have produced promising results, but often show limited robustness and accuracy. In this paper, we propose a deep belief network-based methodology for computational splice junction prediction. Our proposal includes a novel method for training restricted Boltzmann machines for class-imbalanced prediction. The proposed method addresses the limitations of conventional contrastive divergence and provides regularization for datasets that have categorical features. We tested our approach using public human genome datasets and obtained significantly improved accuracy and reduced runtime compared to state-of-the-art alternatives. The proposed approach was less sensitive to the length of input sequences and more robust for handling false splicing signals. Furthermore, we could discover noncanonical splicing patterns that were otherwise difficult to recognize using conventional methods. Given the efficiency and robustness of our methodology, we anticipate that it can be extended to the discovery of primary structural patterns of other subtle genomic elements.
---
paper_title: Opportunities and obstacles for deep learning in biology and medicine
paper_content:
Deep learning describes a class of machine learning algorithms that are capable of combining raw inputs into layers of intermediate features. These algorithms have recently shown impressive results...
---
paper_title: Bayesian prediction of tissue-regulated splicing using RNA sequence and cellular context
paper_content:
Motivation: Alternative splicing is a major contributor to cellular diversity in mammalian tissues and relates to many human diseases. An important goal in understanding this phenomenon is to infer a ‘splicing code’ that predicts how splicing is regulated in different cell types by features derived from RNA, DNA and epigenetic modifiers. Methods: We formulate the assembly of a splicing code as a problem of statistical inference and introduce a Bayesian method that uses an adaptively selected number of hidden variables to combine subgroups of features into a network, allows different tissues to share feature subgroups and uses a Gibbs sampler to hedge predictions and ascertain the statistical significance of identified features. Results: Using data for 3665 cassette exons, 1014 RNA features and 4 tissue types derived from 27 mouse tissues (http://genes.toronto.edu/wasp), we benchmarked several methods. Our method outperforms all others, and achieves relative improvements of 52% in splicing code quality and up to 22% in classification error, compared with the state of the art. Novel combinations of regulatory features and novel combinations of tissues that share feature subgroups were identified using our method. Contact: [email protected] Supplementary information: Supplementary data are available at Bioinformatics online.
---
paper_title: Deep learning of the tissue-regulated splicing code
paper_content:
Motivation: Alternative splicing (AS) is a regulated process that directs the generation of different transcripts from single genes. A computational model that can accurately predict splicing patterns based on genomic features and cellular context is highly desirable, both in understanding this widespread phenomenon, and in exploring the effects of genetic variations on AS. Methods: Using a deep neural network, we developed a model inferred from mouse RNA-Seq data that can predict splicing patterns in individual tissues and differences in splicing patterns across tissues. Our architecture uses hidden variables that jointly represent features in genomic sequences and tissue types when making predictions. A graphics processing unit was used to greatly reduce the training time of our models with millions of parameters. Results: We show that the deep architecture surpasses the performance of the previous Bayesian method for predicting AS patterns. With the proper optimization procedure and selection of hyperparameters, we demonstrate that deep architectures can be beneficial, even with a moderately sparse dataset. An analysis of what the model has learned in terms of the genomic features is presented. Contact: [email protected] Supplementary information: Supplementary data are available at Bioinformatics online.
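A minimal sketch of a tissue-aware splicing predictor, assuming genomic features concatenated with a one-hot tissue indicator feeding a small feed-forward network that outputs an inclusion level in [0, 1]; the sizes and data are illustrative, not the published model.

```python
# Tissue-aware splicing sketch: exon features plus a one-hot tissue code go
# through a feed-forward network predicting an inclusion level.
import torch
import torch.nn as nn

n_features, n_tissues = 1014, 4
model = nn.Sequential(
    nn.Linear(n_features + n_tissues, 256), nn.ReLU(), nn.Dropout(0.5),
    nn.Linear(256, 64), nn.ReLU(),
    nn.Linear(64, 1), nn.Sigmoid(),                 # predicted inclusion in [0, 1]
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

features = torch.randn(128, n_features)             # stands in for RNA/DNA features
tissue = nn.functional.one_hot(torch.randint(0, n_tissues, (128,)),
                               n_tissues).float()
target = torch.rand(128, 1)                          # stands in for measured inclusion
pred = model(torch.cat([features, tissue], dim=1))
loss = nn.functional.binary_cross_entropy(pred, target)
opt.zero_grad(); loss.backward(); opt.step()
```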
---
paper_title: Deciphering the splicing code
paper_content:
The coding capacity of the genome is greatly expanded by the process of alternative splicing, which enables a single gene to produce more than one distinct protein. Can the expression of these different proteins be predicted from sequence data? Here, modelling based on information theory has been used to develop a 'splicing code', which can predict, with good accuracy, tissue-dependent changes in alternative splicing.
---
paper_title: Reverse-complement parameter sharing improves deep learning models for genomics
paper_content:
Deep learning approaches that have produced breakthrough predictive models in computer vision, speech recognition and machine translation are now being successfully applied to problems in regulatory genomics. However, deep learning architectures used thus far in genomics are often directly ported from computer vision and natural language processing applications with few, if any, domain-specific modifications. In double-stranded DNA, the same pattern may appear identically on one strand and its reverse complement due to complementary base pairing. Here, we show that conventional deep learning models that do not explicitly model this property can produce substantially different predictions on forward and reverse-complement versions of the same DNA sequence. We present four new convolutional neural network layers that leverage the reverse-complement property of genomic DNA sequence by sharing parameters between forward and reverse-complement representations in the model. These layers guarantee that forward and reverse-complement sequences produce identical predictions within numerical precision. Using experiments on simulated and in vivo transcription factor binding data, we show that our proposed architectures lead to improved performance, faster learning and cleaner internal representations compared to conventional architectures trained on the same data.
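One simple way to obtain the reverse-complement property this paper discusses: score a one-hot sequence and its reverse complement with the same convolution and combine the two scores symmetrically, which guarantees identical outputs for the two orientations. The paper proposes dedicated RC-sharing layers; this sketch only illustrates the invariance.

```python
# The same convolution scans the forward strand and the reverse complement,
# and a symmetric (elementwise max) combination makes the output orientation-
# invariant. Channel order is assumed to be A, C, G, T.
import torch
import torch.nn as nn

class RCSharedConvNet(nn.Module):
    def __init__(self, n_filters=16, width=12):
        super().__init__()
        self.conv = nn.Conv1d(4, n_filters, kernel_size=width)
        self.fc = nn.Linear(n_filters, 1)

    @staticmethod
    def reverse_complement(x):            # x: (batch, 4, length)
        # flipping the channel dim maps A<->T and C<->G; flipping the last dim
        # reverses the sequence
        return torch.flip(x, dims=[1, 2])

    def score(self, x):
        h = torch.relu(self.conv(x))
        return self.fc(h.max(dim=2).values)          # max over positions

    def forward(self, x):
        return torch.maximum(self.score(x), self.score(self.reverse_complement(x)))

net = RCSharedConvNet()
seq = nn.functional.one_hot(torch.randint(0, 4, (8, 100)), 4).float().permute(0, 2, 1)
assert torch.allclose(net(seq), net(RCSharedConvNet.reverse_complement(seq)), atol=1e-6)
```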
---
paper_title: RNA-protein binding motifs mining with a new hybrid deep learning based cross-domain knowledge integration approach
paper_content:
Background: RNAs play key roles in cells through their interactions with proteins known as RNA-binding proteins (RBPs), and their binding motifs enable crucial understanding of the post-transcriptional regulation of RNAs. How RBPs correctly recognize the target RNAs and why they bind specific positions is still far from clear. Machine learning-based algorithms are widely acknowledged to be capable of speeding up this process. Although many automatic tools have been developed to predict RNA-protein binding sites from the rapidly growing multi-resource data, e.g. sequence and structure, their domain-specific features and formats have posed significant computational challenges. One current difficulty is that the cross-source shared common knowledge is at a higher abstraction level beyond the observed data, resulting in a low efficiency of direct integration of observed data across domains. The other difficulty is how to interpret the prediction results. Existing approaches tend to terminate after outputting the potential discrete binding sites on the sequences, but how to assemble them into meaningful binding motifs is a topic worth further investigation. Results: In view of these challenges, we propose a deep learning-based framework (iDeep) that uses a novel hybrid convolutional neural network and deep belief network to predict RBP interaction sites and motifs on RNAs. This new protocol is featured by transforming the original observed data into a high-level abstraction feature space using multiple layers of learning blocks, where the shared representations across different domains are integrated. To validate our iDeep method, we performed experiments on 31 large-scale CLIP-seq datasets, and our results show that by integrating multiple sources of data, the average AUC can be improved by 8% compared to the best single-source-based predictor; and through cross-domain knowledge integration at an abstraction level, it outperforms the state-of-the-art predictors by 6%. Besides the overall enhanced prediction performance, the convolutional neural network module embedded in iDeep is also able to automatically capture interpretable binding motifs for RBPs. Large-scale experiments demonstrate that these mined binding motifs agree well with the experimentally verified results, suggesting that iDeep is a promising approach in real-world applications. Conclusion: The iDeep framework not only achieves better performance than the state-of-the-art predictors, but also easily captures interpretable binding motifs. iDeep is available at http://www.csbio.sjtu.edu.cn/bioinf/iDeep
---
paper_title: DNA binding sites: representation and discovery
paper_content:
The purpose of this article is to provide a brief history of the development and application of computer algorithms for the analysis and prediction of DNA binding sites. This problem can be conveniently divided into two subproblems. The first is, given a collection of known binding sites, develop a representation of those sites that can be used to search new sequences and reliably predict where additional binding sites occur. The second is, given a set of sequences known to contain binding sites for a common factor, but not knowing where the sites are, discover the location of the sites in each sequence and a representation for the specificity of the protein.
---
paper_title: SCLpred: protein subcellular localization prediction by N-to-1 neural networks
paper_content:
Summary: Knowledge of the subcellular location of a protein provides valuable information about its function and possible interaction with other proteins. In the post-genomic era, fast and accurate predictors of subcellular location are required if this abundance of sequence data is to be fully exploited. We have developed a subcellular localization predictor (SCLpred), which predicts the location of a protein into four classes for animals and fungi and five classes for plants (secreted, cytoplasm, nucleus, mitochondrion and chloroplast) using machine learning models trained on large non-redundant sets of protein sequences. The algorithm powering SCLpred is a novel Neural Network (N-to-1 Neural Network, or N1-NN) we have developed, which is capable of mapping whole sequences into single properties (a functional class, in this work) without resorting to predefined transformations, but rather by adaptively compressing the sequence into a hidden feature vector. We benchmark SCLpred against other publicly available predictors using two benchmarks including a new subset of Swiss-Prot Release 2010_06. We show that SCLpred surpasses the state of the art. The N1-NN algorithm is fully general and may be applied to a host of problems of similar shape, that is, in which a whole sequence needs to be mapped into a fixed-size array of properties, and the adaptive compression it operates may shed light on the space of protein sequences. Availability: The predictive systems described in this article are publicly available as a web server at http://distill.ucd.ie/distill/. Contact: [email protected].
---
paper_title: Predicting Subcellular Localization of Proteins Based on their N-terminal Amino Acid Sequence
paper_content:
A neural network-based tool, TargetP, for large-scale subcellular location prediction of newly identified proteins has been developed. Using N-terminal sequence information only, it discriminates between proteins destined for the mitochondrion, the chloroplast, the secretory pathway, and "other" localizations with a success rate of 85% (plant) or 90% (nonplant) on redundancy-reduced test sets. From a TargetP analysis of the recently sequenced Arabidopsis thaliana chromosomes 2 and 4 and the Ensembl Homo sapiens protein set, we estimate that 10% of all plant proteins are mitochondrial and 14% chloroplastic, and that the abundance of secretory proteins, in both Arabidopsis and Homo, is around 10%. TargetP also predicts cleavage sites with levels of correctly predicted sites ranging from approximately 40% to 50% (chloroplastic and mitochondrial presequences) to above 70% (secretory signal peptides). TargetP is available as a web-server at http://www.cbs.dtu.dk/services/TargetP/.
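A toy sketch of the input encoding idea, assuming a fixed-length one-hot encoding of the N-terminal residues and a plain multinomial classifier as a stand-in for the neural networks used by TargetP; the sequences and labels below are made up for illustration.

```python
# One-hot encode the first N residues and train a simple localization classifier.
import numpy as np
from sklearn.linear_model import LogisticRegression

AA = "ACDEFGHIKLMNPQRSTVWY"

def encode_nterm(seq, n=70):
    seq = (seq[:n] + "A" * n)[:n]                 # pad/truncate to a fixed length
    x = np.zeros((n, len(AA)))
    for i, aa in enumerate(seq):
        if aa in AA:
            x[i, AA.index(aa)] = 1.0
    return x.ravel()

# toy training set: (N-terminal sequence, compartment label)
data = [("MLSRAVCGTSRQLAPALA", "mitochondrion"),
        ("MASSMLSSATMVASPAQA", "chloroplast"),
        ("MKWVTFISLLFLFSSAYS", "secreted"),
        ("MEEPQSDPSVEPPLSQET", "other")]
X = np.array([encode_nterm(s) for s, _ in data])
y = [label for _, label in data]

clf = LogisticRegression(max_iter=1000).fit(X, y)
print(clf.predict([encode_nterm("MLSRAVCGTSRQLA")]))
```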
---
paper_title: Creating a universal SNP and small indel variant caller with deep neural networks
paper_content:
Next-generation sequencing (NGS) is a rapidly evolving set of technologies that can be used to determine the sequence of an individual’s genome [1] by calling genetic variants present in an individual using billions of short, errorful sequence reads [2]. Despite more than a decade of effort and thousands of dedicated researchers, the hand-crafted and parameterized statistical models used for variant calling still produce thousands of errors and missed variants in each genome [3,4]. Here we show that a deep convolutional neural network [5] can call genetic variation in aligned next-generation sequencing read data by learning statistical relationships (likelihoods) between images of read pileups around putative variant sites and ground-truth genotype calls. This approach, called DeepVariant, outperforms existing tools, even winning the “highest performance” award for SNPs in an FDA-administered variant calling challenge. The learned model generalizes across genome builds and even to other mammalian species, allowing non-human sequencing projects to benefit from the wealth of human ground truth data. We further show that, unlike existing tools which perform well on only a specific technology, DeepVariant can learn to call variants in a variety of sequencing technologies and experimental designs, from deep whole genomes from 10X Genomics to Ion Ampliseq exomes. DeepVariant represents a significant step from expert-driven statistical modeling towards more automatic deep learning approaches for developing software to interpret biological instrumentation data.
---
paper_title: Continuous Distributed Representation of Biological Sequences for Deep Proteomics and Genomics
paper_content:
We introduce a new representation and feature extraction method for biological sequences. Named bio-vectors (BioVec) to refer to biological sequences in general with protein-vectors (ProtVec) for proteins (amino-acid sequences) and gene-vectors (GeneVec) for gene sequences, this representation can be widely used in applications of deep learning in proteomics and genomics. In the present paper, we focus on protein-vectors that can be utilized in a wide array of bioinformatics investigations such as family classification, protein visualization, structure prediction, disordered protein identification, and protein-protein interaction prediction. In this method, we adopt artificial neural network approaches and represent a protein sequence with a single dense n-dimensional vector. To evaluate this method, we apply it in classification of 324,018 protein sequences obtained from Swiss-Prot belonging to 7,027 protein families, where an average family classification accuracy of 93%±0.06% is obtained, outperforming existing family classification methods. In addition, we use ProtVec representation to predict disordered proteins from structured proteins. Two databases of disordered sequences are used: the DisProt database as well as a database featuring the disordered regions of nucleoporins rich with phenylalanine-glycine repeats (FG-Nups). Using support vector machine classifiers, FG-Nup sequences are distinguished from structured protein sequences found in Protein Data Bank (PDB) with a 99.8% accuracy, and unstructured DisProt sequences are differentiated from structured DisProt sequences with 100.0% accuracy. These results indicate that by only providing sequence data for various proteins into this model, accurate information about protein structure can be determined. Importantly, this model needs to be trained only once and can then be applied to extract a comprehensive set of information regarding proteins of interest. Moreover, this representation can be considered as pre-training for various applications of deep learning in bioinformatics. The related data is available at Life Language Processing Website: http://llp.berkeley.edu and Harvard Dataverse: http://dx.doi.org/10.7910/DVN/JMFHTN.
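A sketch of the ProtVec construction, assuming gensim >= 4 for the skip-gram model: each protein is split into three shifted reading frames of non-overlapping 3-mers, an embedding is trained over those "sentences", and a protein vector is obtained by summing its 3-mer vectors. The corpus and hyperparameters below are toy values, not those of the paper.

```python
# ProtVec-style 3-mer embeddings with a skip-gram model (assumes gensim >= 4).
from gensim.models import Word2Vec

def protein_sentences(seq):
    # three shifted, non-overlapping 3-mer decompositions of one sequence
    return [[seq[i:i + 3] for i in range(start, len(seq) - 2, 3)]
            for start in range(3)]

proteins = ["MKWVTFISLLFLFSSAYSRGVFRRDAHKSEVAHRFKDLGEE",
            "MEEPQSDPSVEPPLSQETFSDLWKLLPENNVLSPLPSQAMD"]
corpus = [sentence for p in proteins for sentence in protein_sentences(p)]

model = Word2Vec(corpus, vector_size=100, window=5, min_count=1, sg=1, epochs=50)

# A protein vector is the sum of its 3-mer vectors (here, first reading frame).
vec = sum(model.wv[kmer] for kmer in protein_sentences(proteins[0])[0])
print(vec.shape)
```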
---
paper_title: Gene Ontology: tool for the unification of biology
paper_content:
Genomic sequencing has made it clear that a large fraction of the genes specifying the core biological functions are shared by all eukaryotes. Knowledge of the biological role of such shared proteins in one organism can often be transferred to other organisms. The goal of the Gene Ontology Consortium is to produce a dynamic, controlled vocabulary that can be applied to all eukaryotes even as knowledge of gene and protein roles in cells is accumulating and changing. To this end, three independent ontologies accessible on the World-Wide Web (http://www.geneontology.org) are being constructed: biological process, molecular function and cellular component.
---
paper_title: Profile based direct kernels for remote homology detection and fold recognition
paper_content:
Motivation: Protein remote homology detection is a central problem in computational biology. Supervised learning algorithms based on support vector machines are currently one of the most effective methods for remote homology detection. The performance of these methods depends on how the protein sequences are modeled and on the method used to compute the kernel function between them. ::: ::: Results: We introduce two classes of kernel functions that are constructed by combining sequence profiles with new and existing approaches for determining the similarity between pairs of protein sequences. These kernels are constructed directly from these explicit protein similarity measures and employ effective profile-to-profile scoring schemes for measuring the similarity between pairs of proteins. Experiments with remote homology detection and fold recognition problems show that these kernels are capable of producing results that are substantially better than those produced by all of the existing state-of-the-art SVM-based methods. In addition, the experiments show that these kernels, even when used in the absence of profiles, produce results that are better than those produced by existing non-profile-based schemes. ::: ::: Availability: The programs for computing the various kernel functions are available on request from the authors. ::: ::: Contact: [email protected]
---
paper_title: Protein function classification based on gene ontology
paper_content:
Most proteins interact with other proteins, cells, tissues or diseases. They have biological functions and can be classified according to their functions. With the functions and the functional relations of proteins, we can explain many biological phenomena and obtain answers in solving biological problems. Therefore, it is important to determine the functions of proteins. In this paper we present a protein function classification method for the function prediction of proteins. With human proteins assigned to GO molecular function terms, we measure the similarity of proteins to function classes using the functional distribution.
---
paper_title: Structural classification of proteins and structural genomics: new insights into protein folding and evolution
paper_content:
During the past decade, the Protein Structure Initiative (PSI) centres have become major contributors of new families, superfamilies and folds to the Structural Classification of Proteins (SCOP) database. The PSI results have increased the diversity of protein structural space and accelerated our understanding of it. This review article surveys a selection of protein structures determined by the Joint Center for Structural Genomics (JCSG). It presents previously undescribed β-sheet architectures such as the double barrel and spiral β-roll and discusses new examples of unusual topologies and peculiar structural features observed in proteins characterized by the JCSG and other Structural Genomics centres.
---
paper_title: SCOP: a Structural Classification of Proteins database
paper_content:
The Structural Classification of Proteins (SCOP) database provides a detailed and comprehensive description of the relationships of known protein structures. The classification is on hierarchical levels: the first two levels, family and superfamily, describe near and distant evolutionary relationships; the third, fold, describes geometrical relationships. The distinction between evolutionary relationships and those that arise from the physics and chemistry of proteins is a feature that is unique to this database so far. The sequences of proteins in SCOP provide the basis of the ASTRAL sequence libraries that can be used as a source of data to calibrate sequence search algorithms and for the generation of statistics on, or selections of, protein structures. Links can be made from SCOP to PDB-ISL: a library containing sequences homologous to proteins of known structure. Sequences of proteins of unknown structure can be matched to distantly related proteins of known structure by using pairwise sequence comparison methods to find homologues in PDB-ISL. The database and its associated files are freely accessible from a number of WWW sites mirrored from URL http://scop.mrc-lmb.cam.ac.uk/scop/
---
paper_title: A comprehensive review and comparison of different computational methods for protein remote homology detection
paper_content:
Protein remote homology detection is one of the most fundamental and central problems for the studies of protein structures and functions, aiming to detect the distantly evolutionary relationships among proteins via computational methods. During the past decades, many computational approaches have been proposed to solve this important task. These methods have made a substantial contribution to protein remote homology detection. Therefore, it is necessary to give a comprehensive review and comparison on these computational methods. In this article, we divide these computational approaches into three categories, including alignment methods, discriminative methods and ranking methods. Their advantages and disadvantages are discussed in a comprehensive perspective, and their performance is compared on widely used benchmark data sets. Finally, some open questions in this field are further explored and discussed.
---
paper_title: Improved tools for biological sequence comparison.
paper_content:
We have developed three computer programs for comparisons of protein and DNA sequences. They can be used to search sequence data bases, evaluate similarity scores, and identify periodic structures based on local sequence similarity. The FASTA program is a more sensitive derivative of the FASTP program, which can be used to search protein or DNA sequence data bases and can compare a protein sequence to a DNA sequence data base by translating the DNA data base as it is searched. FASTA includes an additional step in the calculation of the initial pairwise similarity score that allows multiple regions of similarity to be joined to increase the score of related sequences. The RDF2 program can be used to evaluate the significance of similarity scores using a shuffling method that preserves local sequence composition. The LFASTA program can display all the regions of local similarity between two sequences with scores greater than a threshold, using the same scoring parameters and a similar alignment algorithm; these local similarities can be displayed as a "graphic matrix" plot or as individual alignments. In addition, these programs have been generalized to allow comparison of DNA or protein sequences based on a variety of alternative scoring matrices.
---
paper_title: A topological approach for protein classification
paper_content:
Protein function and dynamics are closely related to their sequence and structure. However, prediction of protein function and dynamics from sequence and structure is still a fundamental challenge in molecular biology. Protein classification, which is typically done through measuring the similarity between proteins based on protein sequence or physical information, serves as a crucial step toward the understanding of protein function and dynamics. Persistent homology is a new branch of algebraic topology that has found its success in the topological data analysis in a variety of disciplines, including molecular biology. The present work explores the potential of using persistent homology as an independent tool for protein classification. To this end, we propose a molecular topological fingerprint based support vector machine (MTF-SVM) classifier. Specifically, we construct machine learning feature vectors solely from protein topological fingerprints, which are topological invariants generated during the filtration process. To validate the present MTF-SVM approach, we consider four types of problems. First, we study protein-drug binding by using the M2 channel protein of influenza A virus. We achieve 96% accuracy in discriminating drug bound and unbound M2 channels. Additionally, we examine the use of MTF-SVM for the classification of hemoglobin molecules in their relaxed and taut forms and obtain about 80% accuracy. The identification of all alpha, all beta, and alpha-beta protein domains is carried out in our next study using 900 proteins. We have found an 85% success in this identification. Finally, we apply the present technique to 55 classification tasks of protein superfamilies over 1357 samples. An average accuracy of 82% is attained. The present study establishes computational topology as an independent and effective alternative for protein classification.
---
paper_title: Fast model-based protein homology detection without alignment
paper_content:
Motivation: As more genomes are sequenced, the demand for fast gene classification techniques is increasing. To analyze a newly sequenced genome, first the genes are identified and translated into amino acid sequences which are then classified into structural or functional classes. The best-performing protein classification methods are based on protein homology detection using sequence alignment methods. Alignment methods have recently been enhanced by discriminative methods like support vector machines (SVMs) as well as by position-specific scoring matrices (PSSM) as obtained from PSI-BLAST. However, alignment methods are time consuming if a new sequence must be compared to many known sequences; the same holds for SVMs. Even more time consuming is to construct a PSSM for the new sequence. The best-performing methods would take about 25 days on present-day computers to classify the sequences of a new genome (20,000 genes) as belonging to just one specific class; however, there are hundreds of classes. Another shortcoming of alignment algorithms is that they do not build a model of the positive class but measure the mutual distance between sequences or profiles. Only multiple alignments and hidden Markov models are popular classification methods which build a model of the positive class, but they show low classification performance. The advantage of a model is that it can be analyzed for chemical properties common to the class members to obtain new insights into protein function and structure. We propose a fast model-based recurrent neural network for protein homology detection, the ‘Long Short-Term Memory’ (LSTM). LSTM automatically extracts indicative patterns for the positive class, but in contrast to profile methods it also extracts negative patterns and uses correlations between all detected patterns for classification. LSTM is capable of automatically extracting useful local and global sequence statistics like hydrophobicity, polarity, volume and polarizability and combining them with a pattern. These properties make LSTM complementary to alignment-based approaches as it does not use predefined similarity measures like BLOSUM or PAM matrices. Results: We have applied LSTM to a well known benchmark for remote protein homology detection, where a protein must be classified as belonging to a SCOP superfamily. LSTM reaches state-of-the-art classification performance but is considerably faster for classification than other approaches with comparable classification performance. LSTM is five orders of magnitude faster than methods which perform slightly better in classification and two orders of magnitude faster than the fastest SVM-based approaches (which, however, have lower classification performance than LSTM). Only PSI-BLAST and HMM-based methods show comparable time complexity to LSTM, but they cannot compete with LSTM in classification performance. To test the modeling capabilities of LSTM, we applied LSTM to PROSITE classes and interpreted the extracted patterns. In 8 out of 15 classes, LSTM automatically extracted the PROSITE motif. In the remaining 7 cases alternative motifs are generated which give better classification results on average than the PROSITE motifs. Availability: The LSTM algorithm is available from http://www.bioinf.jku.at/software/LSTM_protein/ Contact: [email protected]
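A minimal sketch of an alignment-free recurrent classifier in the spirit of this abstract, assuming an embedding layer, a single LSTM and a binary superfamily-membership output; it is not the cited model's exact architecture, and the two toy sequences stand in for real SCOP data.

```python
# Embed amino acids, run an LSTM over the sequence, and classify the final
# hidden state as superfamily member / non-member.
import torch
import torch.nn as nn

AA = "ACDEFGHIKLMNPQRSTVWY"
to_idx = {a: i for i, a in enumerate(AA)}

class LSTMClassifier(nn.Module):
    def __init__(self, emb=16, hidden=64):
        super().__init__()
        self.embed = nn.Embedding(len(AA), emb)
        self.lstm = nn.LSTM(emb, hidden, batch_first=True)
        self.out = nn.Linear(hidden, 1)
    def forward(self, x):                       # x: (batch, length) residue indices
        _, (h, _) = self.lstm(self.embed(x))
        return self.out(h[-1])                  # logit for "belongs to superfamily"

def encode(seq, length=120):
    idx = [to_idx.get(a, 0) for a in seq[:length]]
    return idx + [0] * (length - len(idx))      # pad with index 0 for simplicity

model = LSTMClassifier()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

seqs = torch.tensor([encode("MKWVTFISLLFLFSSAYS"), encode("MEEPQSDPSVEPPLSQET")])
labels = torch.tensor([[1.0], [0.0]])
loss = nn.functional.binary_cross_entropy_with_logits(model(seqs), labels)
opt.zero_grad(); loss.backward(); opt.step()
```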
---
paper_title: Combining Pairwise Sequence Similarity and Support Vector Machines for Detecting Remote Protein Evolutionary and Structural Relationships
paper_content:
One key element in understanding the molecular machinery of the cell is to understand the structure and function of each protein encoded in the genome. A very successful means of inferring the structure or function of a previously unannotated protein is via sequence similarity with one or more proteins whose structure or function is already known. Toward this end, we propose a means of representing proteins using pairwise sequence similarity scores. This representation, combined with a discriminative classification algorithm known as the support vector machine (SVM), provides a powerful means of detecting subtle structural and evolutionary relationships among proteins. The algorithm, called SVM-pairwise, when tested on its ability to recognize previously unseen families from the SCOP database, yields significantly better performance than SVM-Fisher, profile HMMs, and PSI-BLAST.
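For orientation, a minimal sketch of the SVM-pairwise idea follows (not the authors' implementation): each protein is represented by its vector of similarity scores against a fixed set of reference sequences, and a standard SVM is trained on those vectors. The similarity matrix, labels, and parameter choices below are placeholders; in practice the scores would come from Smith-Waterman or BLAST runs.

```python
# Hypothetical sketch of the SVM-pairwise representation: each protein is
# described by its vector of similarity scores against a fixed set of
# "vectorization" proteins, and an SVM is trained on those vectors.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Placeholder for real pairwise alignment scores (n_proteins x n_anchors).
n_proteins, n_anchors = 200, 50
similarity_scores = rng.random((n_proteins, n_anchors))

# Binary labels: 1 if the protein belongs to the target SCOP superfamily (toy labels).
labels = rng.integers(0, 2, size=n_proteins)

# An RBF-kernel SVM on the similarity-score feature vectors.
clf = SVC(kernel="rbf", C=10.0, gamma="scale")
print(cross_val_score(clf, similarity_scores, labels, cv=5).mean())
```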
---
paper_title: Prediction of Protein Secondary Structure at Better than 70% Accuracy
paper_content:
We have trained a two-layered feed-forward neural network on a non-redundant data base of 130 protein chains to predict the secondary structure of water-soluble proteins. A new key aspect is the use of evolutionary information in the form of multiple sequence alignments that are used as input in place of single sequences. The inclusion of protein family information in this form increases the prediction accuracy by six to eight percentage points. A combination of three levels of networks results in an overall three-state accuracy of 70.8% for globular proteins (sustained performance). If four membrane protein chains are included in the evaluation, the overall accuracy drops to 70.2%. The prediction is well balanced between alpha-helix, beta-strand and loop: 65% of the observed strand residues are predicted correctly. The accuracy in predicting the content of three secondary structure types is comparable to that of circular dichroism spectroscopy. The performance accuracy is verified by a sevenfold cross-validation test, and an additional test on 26 recently solved proteins. Of particular practical importance is the definition of a position-specific reliability index. For half of the residues predicted with a high level of reliability the overall accuracy increases to better than 82%. A further strength of the method is the more realistic prediction of segment length. The protein family prediction method is available for testing by academic researchers via an electronic mail server.
---
paper_title: Protein secondary structure prediction with a neural network
paper_content:
A method is presented for protein secondary structure prediction based on a neural network. A training phase was used to teach the network to recognize the relation between secondary structure and amino acid sequences on a sample set of 48 proteins of known structure. On a separate test set of 14 proteins of known structure, the method achieved a maximum overall predictive accuracy of 63% for three states: helix, sheet, and coil. A numerical measure of helix and sheet tendency for each residue was obtained from the calculations. When predictions were filtered to include only the strongest 31% of predictions, the predictive accuracy rose to 79%.
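A rough sketch of the sliding-window scheme underlying this kind of predictor is given below, assuming one-hot encoding of residues and a small feed-forward classifier; the toy sequence, labels, and network size are illustrative only.

```python
# Hypothetical sliding-window encoder plus a small feed-forward classifier,
# in the spirit of window-based secondary structure predictors.
import numpy as np
from sklearn.neural_network import MLPClassifier

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
AA_INDEX = {aa: i for i, aa in enumerate(AMINO_ACIDS)}

def encode_windows(sequence, window=13):
    """One-hot encode each residue with `window` residues of context (zero-padded)."""
    half = window // 2
    n_aa = len(AMINO_ACIDS)
    features = []
    for i in range(len(sequence)):
        vec = np.zeros(window * n_aa)
        for w in range(-half, half + 1):
            j = i + w
            if 0 <= j < len(sequence):
                vec[(w + half) * n_aa + AA_INDEX[sequence[j]]] = 1.0
        features.append(vec)
    return np.vstack(features)

# Toy example: one short sequence with made-up per-residue labels (H/E/C).
X = encode_windows("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ")
y = list("CCHHHHHHHHCCCEEEECCCCHHHHHHHHHCCC")  # placeholder labels
clf = MLPClassifier(hidden_layer_sizes=(40,), max_iter=500, random_state=0)
clf.fit(X, y)
print(clf.predict(X[:5]))
```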
---
paper_title: A Deep Learning Network Approach to ab initio Protein Secondary Structure Prediction
paper_content:
Ab initio protein secondary structure (SS) predictions are utilized to generate tertiary structure predictions, which are increasingly demanded due to the rapid discovery of proteins. Although recent developments have slightly exceeded previous methods of SS prediction, accuracy has stagnated around 80 percent and many wonder if prediction cannot be advanced beyond this ceiling. Disciplines that have traditionally employed neural networks are experimenting with novel deep learning techniques in attempts to stimulate progress. Since neural networks have historically played an important role in SS prediction, we wanted to determine whether deep learning could contribute to the advancement of this field as well. We developed an SS predictor that makes use of the position-specific scoring matrix generated by PSI-BLAST and deep learning network architectures, which we call DNSS. Graphical processing units and CUDA software optimize the deep network architecture and efficiently train the deep networks. Optimal parameters for the training process were determined, and a workflow comprising three separately trained deep networks was constructed in order to make refined predictions. This deep learning network approach was used to predict SS for a fully independent test dataset of 198 proteins, achieving a Q3 accuracy of 80.7 percent and a Sov accuracy of 74.2 percent.
---
paper_title: Bayesian Segmentation of Protein Secondary Structure
paper_content:
We present a novel method for predicting the secondary structure of a protein from its amino acid sequence. Most existing methods predict each position in turn based on a local window of residues, sliding this window along the length of the sequence. In contrast, we develop a probabilistic model of protein sequence/structure relationships in terms of structural segments, and formulate secondary structure prediction as a general Bayesian inference problem. A distinctive feature of our approach is the ability to develop explicit probabilistic models for alpha-helices, beta-strands, and other classes of secondary structure, incorporating experimentally and empirically observed aspects of protein structure such as helical capping signals, side chain correlations, and segment length distributions. Our model is Markovian in the segments, permitting efficient exact calculation of the posterior probability distribution over all possible segmentations of the sequence using dynamic programming. The optimal segmentation is computed and compared to a predictor based on marginal posterior modes, and the latter is shown to provide significant improvement in predictive accuracy. The marginalization procedure provides exact secondary structure probabilities at each sequence position, which are shown to be reliable estimates of prediction uncertainty. We apply this model to a database of 452 nonhomologous structures, achieving accuracies as high as the best currently available methods. We conclude by discussing an extension of this framework to model nonlocal interactions in protein structures, providing a possible direction for future improvements in secondary structure prediction accuracy.
---
paper_title: Protein secondary structure and homology by neural networks: The α-helices in rhodopsin
paper_content:
Neural networks provide a basis for semiempirical studies of pattern matching between the primary and secondary structures of proteins. Networks of the perceptron class have been trained to classify the amino-acid residues into two categories for each of three types of secondary feature: α-helix or not, β-sheet or not, and random coil or not. The explicit prediction for the helices in rhodopsin is compared with both electron microscopy results and those of the Chou-Fasman method. A new measure of homology between proteins is provided by the network approach, which thereby leads to quantification of the differences between the primary structures of proteins.
---
paper_title: A graphical model for protein secondary structure prediction
paper_content:
In this paper, we present a graphical model for protein secondary structure prediction. This model extends segmental semi-Markov models (SSMM) to exploit multiple sequence alignment profiles which contain information from evolutionarily related sequences. A novel parameterized model is proposed as the likelihood function for the SSMM to capture the segmental conformation. By incorporating the information from long range interactions in β-sheets, this model is capable of carrying out inference on contact maps. The numerical results on benchmark data sets show that incorporating the profiles results in substantial improvements and the generalization performance is promising.
---
paper_title: Improvements in protein secondary structure prediction by an enhanced neural network.
paper_content:
Computational neural networks have recently been used to predict the mapping between protein sequence and secondary structure. They have proven adequate for determining the first-order dependence between these two sets, but have, until now, been unable to garner higher-order information that helps determine secondary structure. By adding neural network units that detect periodicities in the input sequence, we have modestly increased the secondary structure prediction accuracy. The use of tertiary structural class causes a marked increase in accuracy. The best case prediction was 79% for the class of all-alpha proteins. A scheme for employing neural networks to validate and refine structural hypotheses is proposed. The operational difficulties of applying a learning algorithm to a dataset where sequence heterogeneity is under-represented and where local and global effects are inadequately partitioned are discussed.
---
paper_title: Exploiting the Past and the Future in Protein Secondary Structure Prediction
paper_content:
Motivation: Predicting the secondary structure of a protein (alpha-helix, beta-sheet, coil) is an important step towards elucidating its three-dimensional structure, as well as its function. Presently, the best predictors are based on machine learning approaches, in particular neural network architectures with a fixed, and relatively short, input window of amino acids, centered at the prediction site. Although a fixed small window avoids overfitting problems, it does not permit capturing variable long-range information. Results: We introduce a family of novel architectures which can learn to make predictions based on variable ranges of dependencies. These architectures extend recurrent neural networks, introducing non-causal bidirectional dynamics to capture both upstream and downstream information. The prediction algorithm is completed by the use of mixtures of estimators that leverage evolutionary information, expressed in terms of multiple alignments, both at the input and output levels. While our system currently achieves an overall performance close to 76% correct prediction ‐ at least comparable to the best existing systems ‐ the main emphasis here is on the development of new algorithmic
---
paper_title: Improving the Prediction of Protein Secondary Structure in Three and Eight Classes Using Recurrent Neural Networks and Profiles
paper_content:
Secondary structure predictions are increasingly becoming the workhorse for several methods aiming at predicting protein structure and function. Here we use ensembles of bidirectional recurrent neural network architectures, PSI-BLAST-derived profiles, and a large nonredundant training set to derive two new predictors: (a) the second version of the SSpro program for secondary structure classification into three categories and (b) the first version of the SSpro8 program for secondary structure classification into the eight classes produced by the DSSP program. We describe the results of three different test sets on which SSpro achieved a sustained performance of about 78% correct prediction. We report confusion matrices, compare PSI-BLAST to BLAST-derived profiles, and assess the corresponding performance improvements. SSpro and SSpro8 are implemented as web servers, available together with other structural feature predictors at: http://promoter.ics.uci.edu/BRNN-PRED/. Proteins 2002;47:228–235. © 2002 Wiley-Liss, Inc.
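As a minimal illustration of the bidirectional recurrent idea (a generic sketch, not the SSpro architecture), per-residue profile features can be passed through a bidirectional LSTM followed by a per-position classifier over the three states; the dimensions and data below are arbitrary placeholders.

```python
# Minimal bidirectional RNN tagger sketch for per-residue 3-state prediction.
import torch
import torch.nn as nn

class BRNNTagger(nn.Module):
    def __init__(self, in_dim=20, hidden=64, n_states=3):
        super().__init__()
        self.rnn = nn.LSTM(in_dim, hidden, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, n_states)

    def forward(self, x):                      # x: (batch, length, in_dim)
        h, _ = self.rnn(x)                     # (batch, length, 2 * hidden)
        return self.out(h)                     # per-residue logits

# Toy batch: 4 "proteins" of length 120 with 20-dimensional profile columns.
profiles = torch.randn(4, 120, 20)
labels = torch.randint(0, 3, (4, 120))
model = BRNNTagger()
logits = model(profiles)
loss = nn.CrossEntropyLoss()(logits.reshape(-1, 3), labels.reshape(-1))
print(logits.shape, float(loss))
```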
---
paper_title: SSpro/ACCpro 5: almost perfect prediction of protein secondary structure and relative solvent accessibility using profiles, machine learning and structural similarity
paper_content:
Motivation: Accurately predicting protein secondary structure and relative solvent accessibility is important for the study of protein evolution, structure and function and as a component of protein 3D structure prediction pipelines. Most predictors use a combination of machine learning and profiles, and thus must be retrained and assessed periodically as the number of available protein sequences and structures continues to grow. Results: We present newly trained modular versions of the SSpro and ACCpro predictors of secondary structure and relative solvent accessibility together with their multi-class variants SSpro8 and ACCpro20. We introduce a sharp distinction between the use of sequence similarity alone, typically in the form of sequence profiles at the input level, and the additional use of sequence-based structural similarity, which uses similarity to sequences in the Protein Data Bank to infer annotations at the output level, and study their relative contributions to modern predictors. Using sequence similarity alone, SSpro’s accuracy is between 79 and 80% (79% for ACCpro) and no other predictor seems to exceed 82%. However, when sequence-based structural similarity is added, the accuracy of SSpro rises to 92.9% (90% for ACCpro). Thus, by combining both approaches, these problems appear now to be essentially solved, as an accuracy of 100% cannot be expected for several well-known reasons. These results point also to several open technical challenges, including (i) achieving on the order of 80% accuracy, without using any similarity with known proteins and (ii) achieving on the order of 85% accuracy, using sequence similarity alone. Availability and implementation: SSpro, SSpro8, ACCpro and ACCpro20 programs, data and web servers are available through the SCRATCH suite of protein structure predictors at http://scratch.
---
paper_title: Dictionary of protein secondary structure: Pattern recognition of hydrogen-bonded and geometrical features
paper_content:
For a successful analysis of the relation between amino acid sequence and protein structure, an unambiguous and physically meaningful definition of secondary structure is essential. We have developed a set of simple and physically motivated criteria for secondary structure, programmed as a pattern-recognition process of hydrogen-bonded and geometrical features extracted from x-ray coordinates. Cooperative secondary structure is recognized as repeats of the elementary hydrogen-bonding patterns “turn” and “bridge.” Repeating turns are “helices,” repeating bridges are “ladders,” connected ladders are “sheets.” Geometric structure is defined in terms of the concepts torsion and curvature of differential geometry. Local chain “chirality” is the torsional handedness of four consecutive Cα positions and is positive for right-handed helices and negative for ideal twisted β-sheets. Curved pieces are defined as “bends.” Solvent “exposure” is given as the number of water molecules in possible contact with a residue. The end result is a compilation of the primary structure, including SS bonds, secondary structure, and solvent exposure of 62 different globular proteins. The presentation is in linear form: strip graphs for an overall view and strip tables for the details of each of 10,925 residues. The dictionary is also available in computer-readable form for protein structure prediction work.
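The hydrogen-bond criterion at the heart of this dictionary is an electrostatic energy computed from four backbone atom-pair distances; the helper below restates that published formula (distances in angstroms, energy in kcal/mol, with a bond accepted below -0.5 kcal/mol). The example geometry is illustrative only.

```python
# DSSP-style electrostatic hydrogen-bond energy (Kabsch & Sander, 1983).
# Distances are in angstroms; the bond is accepted when E < -0.5 kcal/mol.
Q1, Q2, F = 0.42, 0.20, 332.0  # partial charges (in e) and dimensional factor

def hbond_energy(r_on, r_ch, r_oh, r_cn):
    """Energy of a putative C=O ... H-N hydrogen bond from four atom-pair distances."""
    return Q1 * Q2 * F * (1.0 / r_on + 1.0 / r_ch - 1.0 / r_oh - 1.0 / r_cn)

def is_hbond(r_on, r_ch, r_oh, r_cn, cutoff=-0.5):
    return hbond_energy(r_on, r_ch, r_oh, r_cn) < cutoff

# Example with plausible geometry for a backbone hydrogen bond (toy distances).
print(hbond_energy(r_on=3.0, r_ch=3.1, r_oh=2.0, r_cn=4.1), is_hbond(3.0, 3.1, 2.0, 4.1))
```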
---
paper_title: Deep Supervised and Convolutional Generative Stochastic Network for Protein Secondary Structure Prediction
paper_content:
Predicting protein secondary structure is a fundamental problem in protein structure prediction. Here we present a new supervised generative stochastic network (GSN) based method to predict local secondary structure with deep hierarchical representations. GSN is a recently proposed deep learning technique (Bengio & Thibodeau-Laufer, 2013) to globally train deep generative model. We present the supervised extension of GSN, which learns a Markov chain to sample from a conditional distribution, and applied it to protein structure prediction. To scale the model to full-sized, high-dimensional data, like protein sequences with hundreds of amino acids, we introduce a convolutional architecture, which allows efficient learning across multiple layers of hierarchical representations. Our architecture uniquely focuses on predicting structured low-level labels informed with both low and high-level representations learned by the model. In our application this corresponds to labeling the secondary structure state of each amino-acid residue. We trained and tested the model on separate sets of non-homologous proteins sharing less than 30% sequence identity. Our model achieves 66.4% Q8 accuracy on the CB513 dataset, better than the previously reported best performance 64.9% (Wang et al., 2011) for this challenging secondary structure prediction problem.
---
paper_title: Hidden-Unit Conditional Random Fields
paper_content:
The paper explores a generalization of conditional random fields (CRFs) in which binary stochastic hidden units appear between the data and the labels. Hidden-unit CRFs are potentially more powerful than standard CRFs because they can represent nonlinear dependencies at each frame. The hidden units in these models also learn to discover latent distributed structure in the data that improves classification. We derive efficient algorithms for inference and learning in these models by observing that the hidden units are conditionally independent given the data and the labels. Finally, we show that hidden-unit CRFs perform well in experiments on a range of tasks, including optical character recognition, text classification, protein structure prediction, and part-of-speech tagging.
---
paper_title: Predicting the secondary structure of globular proteins using neural networks models
paper_content:
We present a new method for predicting the secondary structure of globular proteins based on non-linear neural network models. Network models learn from existing protein structures how to predict the secondary structure of local sequences of amino acids. The average success rate of our method on a testing set of proteins non-homologous with the corresponding training set was 64.3% on three types of secondary structure (alpha-helix, beta-sheet, and coil), with correlation coefficients of C alpha = 0.41, C beta = 0.31 and Ccoil = 0.41. These quality indices are all higher than those of previous methods. The prediction accuracy for the first 25 residues of the N-terminal sequence was significantly better. We conclude from computational experiments on real and artificial structures that no method based solely on local information in the protein sequence is likely to produce significantly better results for non-homologous proteins. The performance of our method of homologous proteins is much better than for non-homologous proteins, but is not as good as simply assuming that homologous sequences have identical structures.
---
paper_title: Protein Secondary Structure Prediction Using Cascaded Convolutional and Recurrent Neural Networks
paper_content:
Protein secondary structure prediction is an important problem in bioinformatics. Inspired by the recent successes of deep neural networks, in this paper, we propose an end-to-end deep network that predicts protein secondary structures from integrated local and global contextual features. Our deep architecture leverages convolutional neural networks with different kernel sizes to extract multiscale local contextual features. In addition, considering long-range dependencies existing in amino acid sequences, we set up a bidirectional neural network consisting of gated recurrent unit to capture global contextual features. Furthermore, multi-task learning is utilized to predict secondary structure labels and amino-acid solvent accessibility simultaneously. Our proposed deep network demonstrates its effectiveness by achieving state-of-the-art performance, i.e., 69.7% Q8 accuracy on the public benchmark CB513, 76.9% Q8 accuracy on CASP10 and 73.1% Q8 accuracy on CASP11. Our model and results are publicly available.
---
paper_title: A Novel Method of Protein Secondary Structure Prediction with High Segment Overlap Measure: Support Vector Machine Approach
paper_content:
We have introduced a new method of protein secondary structure prediction which is based on the theory of support vector machine (SVM). SVM represents a new approach to supervised pattern classification which has been successfully applied to a wide range of pattern recognition problems, including object recognition, speaker identification, gene function prediction with microarray expression profile, etc. In these cases, the performance of SVM either matches or is significantly better than that of traditional machine learning approaches, including neural networks.The first use of the SVM approach to predict protein secondary structure is described here. Unlike the previous studies, we first constructed several binary classifiers, then assembled a tertiary classifier for three secondary structure states (helix, sheet and coil) based on these binary classifiers. The SVM method achieved a good performance of segment overlap accuracy SOV=76.2 % through sevenfold cross validation on a database of 513 non-homologous protein chains with multiple sequence alignments, which out-performs existing methods. Meanwhile three-state overall per-residue accuracy Q(3) achieved 73.5 %, which is at least comparable to existing single prediction methods. Furthermore a useful "reliability index" for the predictions was developed. In addition, SVM has many attractive features, including effective avoidance of overfitting, the ability to handle large feature spaces, information condensing of the given data set, etc. The SVM method is conveniently applied to many other pattern classification tasks in biology.
---
paper_title: A modified definition of Sov, a segment-based measure for protein secondary structure prediction assessment
paper_content:
We present a measure for the evaluation of secondary structure prediction methods that is based on secondary structure segments rather than individual residues. The algorithm is an extension of the segment overlap measure Sov, originally defined by Rost et al. (J Mol Biol 1994;235:13-26). The new definition of Sov corrects the normalization procedure and improves Sov's ability to discriminate between similar and dissimilar segment distributions. The method has been comprehensively tested during the second Critical Assessment of Techniques for Protein Structure Prediction (CASP2). Here, we describe the underlying concepts, modifications to the original definition, and their significance.
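A compact, unofficial sketch of the per-state Sov computation, following our reading of the segment-based definition above (segment extraction, overlap terms, and the delta allowance), is given below; the published evaluation code should be preferred for benchmarking.

```python
# Unofficial sketch of the Sov'99 segment overlap score for one conformational state.
def segments(ss, state):
    """Return (start, end) index pairs (inclusive) of runs of `state` in string ss."""
    segs, start = [], None
    for i, c in enumerate(ss + "."):          # sentinel to close the last run
        if c == state and start is None:
            start = i
        elif c != state and start is not None:
            segs.append((start, i - 1))
            start = None
    return segs

def sov_state(observed, predicted, state):
    num, norm = 0.0, 0
    for s1 in segments(observed, state):
        len1 = s1[1] - s1[0] + 1
        overlaps = [s2 for s2 in segments(predicted, state)
                    if not (s2[1] < s1[0] or s2[0] > s1[1])]
        if not overlaps:
            norm += len1                       # observed segment with no matching prediction
            continue
        for s2 in overlaps:
            len2 = s2[1] - s2[0] + 1
            minov = min(s1[1], s2[1]) - max(s1[0], s2[0]) + 1   # actual overlap
            maxov = max(s1[1], s2[1]) - min(s1[0], s2[0]) + 1   # total extent
            delta = min(maxov - minov, minov, len1 // 2, len2 // 2)
            num += (minov + delta) / maxov * len1
            norm += len1
    return 100.0 * num / norm if norm else 0.0

print(sov_state("CCHHHHHHCCEEEECC", "CCCHHHHHCCEEEECC", "H"))
```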
---
paper_title: Deep Spatio-Temporal Architectures and Learning for Protein Structure Prediction
paper_content:
Residue-residue contact prediction is a fundamental problem in protein structure prediction. However, despite considerable research efforts, contact prediction methods are still largely unreliable. Here we introduce a novel deep machine-learning architecture which consists of a multidimensional stack of learning modules. For contact prediction, the idea is implemented as a three-dimensional stack of neural networks NN(k, i, j), where i and j index the spatial coordinates of the contact map and k indexes "time". The temporal dimension is introduced to capture the fact that protein folding is not an instantaneous process, but rather a progressive refinement. Networks at level k in the stack can be trained in supervised fashion to refine the predictions produced by the previous level, hence addressing the problem of vanishing gradients, typical of deep architectures. Increased accuracy and generalization capabilities of this approach are established by rigorous comparison with other classical machine learning approaches for contact prediction. The deep approach leads to an accuracy for difficult long-range contacts of about 30%, roughly 10% above the state-of-the-art. Many variations in the architectures and the training algorithms are possible, leaving room for further improvements. Furthermore, the approach is applicable to other problems with strong underlying spatial and temporal components.
---
paper_title: Accurate De Novo Prediction of Protein Contact Map by Ultra-Deep Learning Model
paper_content:
MOTIVATION: Protein contacts contain key information for the understanding of protein structure and function and thus, contact prediction from sequence is an important problem. Recently exciting progress has been made on this problem, but the predicted contacts for proteins without many sequence homologs are still of low quality and not very useful for de novo structure prediction.
METHOD: This paper presents a new deep learning method that predicts contacts by integrating both evolutionary coupling (EC) and sequence conservation information through an ultra-deep neural network formed by two deep residual neural networks. The first residual network conducts a series of 1-dimensional convolutional transformation of sequential features; the second residual network conducts a series of 2-dimensional convolutional transformation of pairwise information including output of the first residual network, EC information and pairwise potential. By using very deep residual networks, we can accurately model contact occurrence patterns and complex sequence-structure relationship and thus, obtain higher-quality contact prediction regardless of how many sequence homologs are available for proteins in question.
RESULTS: Our method greatly outperforms existing methods and leads to much more accurate contact-assisted folding. Tested on 105 CASP11 targets, 76 past CAMEO hard targets, and 398 membrane proteins, the average top L long-range prediction accuracy obtained by our method, one representative EC method CCMpred and the CASP11 winner MetaPSICOV is 0.47, 0.21 and 0.30, respectively; the average top L/10 long-range accuracy of our method, CCMpred and MetaPSICOV is 0.77, 0.47 and 0.59, respectively. Ab initio folding using our predicted contacts as restraints but without any force fields can yield correct folds (i.e., TMscore>0.6) for 203 of the 579 test proteins, while that using MetaPSICOV- and CCMpred-predicted contacts can do so for only 79 and 62 of them, respectively. Our contact-assisted models also have much better quality than template-based models especially for membrane proteins. The 3D models built from our contact prediction have TMscore>0.5 for 208 of the 398 membrane proteins, while those from homology modeling have TMscore>0.5 for only 10 of them. Further, even if trained mostly by soluble proteins, our deep learning method works very well on membrane proteins. In the recent blind CAMEO benchmark, our fully-automated web server implementing this method successfully folded 6 targets with a new fold and only 0.3L-2.3L effective sequence homologs, including one β protein of 182 residues, one α+β protein of 125 residues, one α protein of 140 residues, one α protein of 217 residues, one α/β protein of 260 residues and one α protein of 462 residues. Our method also achieved the highest F1 score on free-modeling targets in the latest CASP (Critical Assessment of Structure Prediction), although it was not fully implemented back then.
AVAILABILITY: http://raptorx.uchicago.edu/ContactMap/.
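As a generic illustration of the second-stage architecture described above (not the authors' exact network), the sketch below stacks a few 2D residual blocks over pairwise L x L features and ends with a per-pair contact logit; channel counts, normalization, and the 1D-to-2D feature conversion are assumptions or omitted.

```python
# Minimal 2D residual block sketch for pairwise (L x L) contact features.
import torch
import torch.nn as nn

class ResBlock2D(nn.Module):
    def __init__(self, channels, kernel_size=3, dilation=1):
        super().__init__()
        pad = dilation * (kernel_size - 1) // 2
        self.conv1 = nn.Conv2d(channels, channels, kernel_size, padding=pad, dilation=dilation)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size, padding=pad, dilation=dilation)
        self.norm1 = nn.InstanceNorm2d(channels)
        self.norm2 = nn.InstanceNorm2d(channels)
        self.act = nn.ReLU()

    def forward(self, x):
        out = self.act(self.norm1(self.conv1(x)))
        out = self.norm2(self.conv2(out))
        return self.act(out + x)             # identity shortcut

# Pairwise input: batch of 1 protein, 64 feature channels, L x L with L = 100 (toy numbers).
pairwise = torch.randn(1, 64, 100, 100)
trunk = nn.Sequential(*[ResBlock2D(64) for _ in range(4)])
head = nn.Conv2d(64, 1, kernel_size=1)       # per-pair contact logit
logits = head(trunk(pairwise))
print(logits.shape)                          # torch.Size([1, 1, 100, 100])
```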
---
paper_title: Boosted Categorical Restricted Boltzmann Machine for Computational Prediction of Splice Junctions
paper_content:
Splicing refers to the elimination of noncoding regions in transcribed pre-messenger ribonucleic acid (RNA). Discovering splice sites is an important machine learning task that helps us not only to identify the basic units of genetic heredity but also to understand how different proteins are produced. Existing methods for splicing prediction have produced promising results, but often show limited robustness and accuracy. In this paper, we propose a deep belief network-based methodology for computational splice junction prediction. Our proposal includes a novel method for training restricted Boltzmann machines for class-imbalanced prediction. The proposed method addresses the limitations of conventional contrastive divergence and provides regularization for datasets that have categorical features. We tested our approach using public human genome datasets and obtained significantly improved accuracy and reduced runtime compared to state-of-the-art alternatives. The proposed approach was less sensitive to the length of input sequences and more robust for handling false splicing signals. Furthermore, we could discover noncanonical splicing patterns that were otherwise difficult to recognize using conventional methods. Given the efficiency and robustness of our methodology, we anticipate that it can be extended to the discovery of primary structural patterns of other subtle genomic elements.
---
paper_title: Opportunities and obstacles for deep learning in biology and medicine
paper_content:
Deep learning describes a class of machine learning algorithms that are capable of combining raw inputs into layers of intermediate features. These algorithms have recently shown impressive results...
---
paper_title: DEEP: a general computational framework for predicting enhancers
paper_content:
Transcription regulation in multicellular eukaryotes is orchestrated by a number of DNA functional elements located at gene regulatory regions. Some regulatory regions (e.g. enhancers) are located far away from the gene they affect. Identification of distal regulatory elements is a challenge for the bioinformatics research. Although existing methodologies increased the number of computationally predicted enhancers, performance inconsistency of computational models across different cell-lines, class imbalance within the learning sets and ad hoc rules for selecting enhancer candidates for supervised learning, are some key questions that require further examination. In this study we developed DEEP, a novel ensemble prediction framework. DEEP integrates three components with diverse characteristics that streamline the analysis of enhancer’s properties in a great variety of cellular conditions. In our method we train many individual classification models that we combine to classify DNA regions as enhancers or non-enhancers. DEEP uses features derived from histone modification marks or attributes coming from sequence characteristics. Experimental results indicate that DEEP performs better than four state-of-the-art methods on the ENCODE data. We report the first computational enhancer prediction results on FANTOM5 data where DEEP achieves 90.2% accuracy and 90% geometric mean (GM) of specificity and sensitivity across 36 different tissues. We further present results derived using in vivo-derived enhancer data from VISTA database. DEEP-VISTA, when tested on an independent test set, achieved GM of 80.1% and accuracy of 89.64%. DEEP framework is publicly available at http://cbrc.kaust.edu.sa/deep/.
---
paper_title: Discover regulatory DNA elements using chromatin signatures and artificial neural network
paper_content:
Motivation: Recent large-scale chromatin states mapping efforts have revealed characteristic chromatin modification signatures for various types of functional DNA elements. Given the important influence of chromatin states on gene regulation and the rapid accumulation of genome-wide chromatin modification data, there is a pressing need for computational methods to analyze these data in order to identify functional DNA elements. However, existing computational tools do not exploit data transformation and feature extraction as a means to achieve a more accurate prediction.
Results: We introduce a new computational framework for identifying functional DNA elements using chromatin signatures. The framework consists of a data transformation and a feature extraction step followed by a classification step using time-delay neural network. We implemented our framework in a software tool CSI-ANN (chromatin signature identification by artificial neural network). When applied to predict transcriptional enhancers in the ENCODE region, CSI-ANN achieved a 65.5% sensitivity and 66.3% positive predictive value, a 5.9% and 11.6% improvement, respectively, over the previously best approach.
Availability and Implementation: CSI-ANN is implemented in Matlab. The source code is freely available at http://www.medicine.uiowa.edu/Labs/tan/CSIANNsoft.zip
Contact: [email protected]
Supplementary Information: Supplementary Materials are available at Bioinformatics online.
---
paper_title: Imbalanced Class Learning in Epigenetics
paper_content:
In machine learning, one of the important criteria for higher classification accuracy is a balanced dataset. Datasets with a large ratio between minority and majority classes face hindrance in learning using any classifier. Datasets having a magnitude difference in number of instances between the target concept result in an imbalanced class distribution. Such datasets can range from biological data, sensor data, medical diagnostics, or any other domain where labeling any instances of the minority class can be time-consuming or costly or the data may not be easily available. The current study investigates a number of imbalanced class algorithms for solving the imbalanced class distribution present in epigenetic datasets. Epigenetic (DNA methylation) datasets inherently come with few differentially DNA methylated regions (DMR) and with a higher number of non-DMR sites. For this class imbalance problem, a number of algorithms are compared, including the TAN+AdaBoost algorithm. Experiments performed on four epigenetic datasets and several known datasets show that an imbalanced dataset can have similar accuracy as a regular learner on a balanced dataset.
---
paper_title: An unsupervised learning approach to resolving the data imbalanced issue in supervised learning problems in functional genomics
paper_content:
Learning from imbalanced data occurs very frequently in functional genomic applications. One positive example to thousands of negative instances is common in scientific applications. Unfortunately, traditional machine learning treats the extremely small instances as noise. The standard approach for this difficulty is balancing training data by resampling them. However, this results in high false positive predictions. Hence, we propose preprocessing majority instances by partitioning them into clusters. This greatly reduces the ambiguity between minority instances and instances in each cluster. For moderately high imbalance ratio and low in-class complexity, our technique gives better prediction accuracy than undersampling method. For extreme imbalance ratio like splice site prediction problem, we demonstrate that this technique serves as a good filter with almost perfect recall that reduces the amount of imbalance so that traditional classification techniques can be deployed and yield significant improvements over previous predictor. We also show that the technique works for sub cellular localization and post-translational modification site prediction problems.
---
paper_title: Image-level and group-level models for Drosophila gene expression pattern annotation
paper_content:
Background: Drosophila melanogaster has been established as a model organism for investigating the developmental gene interactions. The spatio-temporal gene expression patterns of Drosophila melanogaster can be visualized by in situ hybridization and documented as digital images. Automated and efficient tools for analyzing these expression images will provide biological insights into the gene functions, interactions, and networks. To facilitate pattern recognition and comparison, many web-based resources have been created to conduct comparative analysis based on the body part keywords and the associated images. With the fast accumulation of images from high-throughput techniques, manual inspection of images will impose a serious impediment on the pace of biological discovery. It is thus imperative to design an automated system for efficient image annotation and comparison.
Results: We present a computational framework to perform anatomical keywords annotation for Drosophila gene expression images. The spatial sparse coding approach is used to represent local patches of images in comparison with the well-known bag-of-words (BoW) method. Three pooling functions including max pooling, average pooling and Sqrt (square root of mean squared statistics) pooling are employed to transform the sparse codes to image features. Based on the constructed features, we develop both an image-level scheme and a group-level scheme to tackle the key challenges in annotating Drosophila gene expression pattern images automatically. To deal with the imbalanced data distribution inherent in image annotation tasks, the undersampling method is applied together with majority vote. Results on Drosophila embryonic expression pattern images verify the efficacy of our approach.
Conclusion: In our experiment, the three pooling functions perform comparably well in feature dimension reduction. The undersampling with majority vote is shown to be effective in tackling the problem of imbalanced data. Moreover, combining sparse coding and image-level scheme leads to consistent performance improvement in keywords annotation.
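A bare-bones sketch of the undersampling-with-majority-vote strategy mentioned above (not the paper's pipeline): several balanced subsets are drawn from the majority class, one classifier is trained per subset, and predictions are combined by voting. The data, base classifier, and number of models are placeholders.

```python
# Hypothetical sketch of undersampling plus majority vote for an imbalanced
# binary annotation task: train one classifier per balanced subsample and vote.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def undersample_vote(X, y, n_models=9):
    pos, neg = np.where(y == 1)[0], np.where(y == 0)[0]   # minority, majority
    models = []
    for _ in range(n_models):
        sampled_neg = rng.choice(neg, size=len(pos), replace=False)
        idx = np.concatenate([pos, sampled_neg])
        models.append(LogisticRegression(max_iter=1000).fit(X[idx], y[idx]))
    return models

def predict_vote(models, X):
    votes = np.stack([m.predict(X) for m in models])
    return (votes.mean(axis=0) >= 0.5).astype(int)

# Toy imbalanced data: 1000 negatives, 50 positives, 20 features.
X = rng.normal(size=(1050, 20))
y = np.array([0] * 1000 + [1] * 50)
X[y == 1] += 1.0                                   # shift positives so they are learnable
models = undersample_vote(X, y)
print(predict_vote(models, X[:10]))
```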
---
paper_title: Imbalanced Learning: Foundations, Algorithms, and Applications
paper_content:
The first book of its kind to review the current status and future direction of the exciting new branch of machine learning/data mining called imbalanced learning. Imbalanced learning focuses on how an intelligent system can learn when it is provided with imbalanced data. Solving imbalanced learning problems is critical in numerous data-intensive networked systems, including surveillance, security, Internet, finance, biomedical, defense, and more. Due to the inherent complex characteristics of imbalanced data sets, learning from such data requires new understandings, principles, algorithms, and tools to transform vast amounts of raw data efficiently into information and knowledge representation. The first comprehensive look at this new branch of machine learning, this book offers a critical review of the problem of imbalanced learning, covering the state of the art in techniques, principles, and real-world applications. Featuring contributions from experts in both academia and industry, Imbalanced Learning: Foundations, Algorithms, and Applications provides chapter coverage on: Foundations of Imbalanced Learning; Imbalanced Datasets: From Sampling to Classifiers; Ensemble Methods for Class Imbalance Learning; Class Imbalance Learning Methods for Support Vector Machines; Class Imbalance and Active Learning; Nonstationary Stream Data Learning with Imbalanced Class Distribution; and Assessment Metrics for Imbalanced Learning. Imbalanced Learning: Foundations, Algorithms, and Applications will help scientists and engineers learn how to tackle the problem of learning from imbalanced datasets, and gain insight into current developments in the field as well as future research directions.
---
paper_title: Causal Effect Inference with Deep Latent-Variable Models
paper_content:
Learning individual-level causal effects from observational data, such as inferring the most effective medication for a specific patient, is a problem of growing importance for policy makers. The most important aspect of inferring causal effects from observational data is the handling of confounders, factors that affect both an intervention and its outcome. A carefully designed observational study attempts to measure all important confounders. However, even if one does not have direct access to all confounders, there may exist noisy and uncertain measurement of proxies for confounders. We build on recent advances in latent variable modeling to simultaneously estimate the unknown latent space summarizing the confounders and the causal effect. Our method is based on Variational Autoencoders (VAE) which follow the causal structure of inference with proxies. We show our method is significantly more robust than existing methods, and matches the state-of-the-art on previous benchmarks focused on individual treatment effects.
---
paper_title: Population Structure and Cryptic Relatedness in Genetic Association Studies
paper_content:
We review the problem of confounding in genetic association studies, which arises principally because of population structure and cryptic relatedness. Many treatments of the problem consider only a simple "island" model of population structure. We take a broader approach, which views population structure and cryptic relatedness as different aspects of a single confounder: the unobserved pedigree defining the (often distant) relationships among the study subjects. Kinship is therefore a central concept, and we review methods of defining and estimating kinship coefficients, both pedigree-based and marker-based. In this unified framework we review solutions to the problem of population structure, including family-based study designs, genomic control, structured association, regression control, principal components adjustment and linear mixed models. The last solution makes the most explicit use of the kinships among the study subjects, and has an established role in the analysis of animal and plant breeding studies. Recent computational developments mean that analyses of human genetic association data are beginning to benefit from its powerful tests for association, which protect against population structure and cryptic kinship, as well as intermediate levels of confounding by the pedigree.
---
paper_title: Implicit Causal Models for Genome-wide Association Studies
paper_content:
Progress in probabilistic generative models has accelerated, developing richer models with neural architectures, implicit densities, and with scalable algorithms for their Bayesian inference. However, there has been limited progress in models that capture causal relationships, for example, how individual genetic factors cause major human diseases. In this work, we focus on two challenges in particular: How do we build richer causal models, which can capture highly nonlinear relationships and interactions between multiple causes? How do we adjust for latent confounders, which are variables influencing both cause and effect and which prevent learning of causal relationships? To address these challenges, we synthesize ideas from causality and modern probabilistic modeling. For the first, we describe implicit causal models, a class of causal models that leverages neural architectures with an implicit density. For the second, we describe an implicit causal model that adjusts for confounders by sharing strength across examples. In experiments, we scale Bayesian inference on up to a billion genetic measurements. We achieve state of the art accuracy for identifying causal factors: we significantly outperform existing genetics methods by an absolute difference of 15-45.3%.
---
paper_title: A unified mixed-model method for association mapping that accounts for multiple levels of relatedness
paper_content:
As population structure can result in spurious associations, it has constrained the use of association studies in human and plant genetics. Association mapping, however, holds great promise if true signals of functional association can be separated from the vast number of false signals generated by population structure. We have developed a unified mixed-model approach to account for multiple levels of relatedness simultaneously as detected by random genetic markers. We applied this new approach to two samples: a family-based sample of 14 human families, for quantitative gene expression dissection, and a sample of 277 diverse maize inbred lines with complex familial relationships and population structure, for quantitative trait dissection. Our method demonstrates improved control of both type I and type II error rates over other methods. As this new method crosses the boundary between family-based and structured association samples, it provides a powerful complement to currently available methods for association mapping.
---
paper_title: Multiple confounders correction with regularized linear mixed effect models, with application in biological processes
paper_content:
In this paper, we inspect the performance of regularized linear mixed effect models, as an extension of the linear mixed effect model, when multiple confounding factors coexist. We first review its parameter estimation algorithms before introducing three different methods for correcting multiple confounding factors, namely concatenation, sequence, and interpolation. Then we investigate performance on a variable selection task and a predictive task on three different data sets: a synthetic data set, a semi-empirical synthetic data set based on genome sequences, and a brain wave data set connected to confused mental states. Our results suggest that the sequence correction of multiple confounding factors behaves best when different confounders contribute equally to the response variable. On the other hand, when the confounders affect the response variable unevenly, results mainly depend on how well the major confounder is corrected.
---
paper_title: Advantages and pitfalls in the application of mixed-model association methods
paper_content:
Mixed linear models are emerging as a method of choice for conducting genetic association studies in humans and other organisms. The advantages of the mixed-linear-model association (MLMA) method include the prevention of false positive associations due to population or relatedness structure and an increase in power obtained through the application of a correction that is specific to this structure. An underappreciated point is that MLMA can also increase power in studies without sample structure by implicitly conditioning on associated loci other than the candidate locus. Numerous variations on the standard MLMA approach have recently been published, with a focus on reducing computational cost. These advances provide researchers applying MLMA methods with many options to choose from, but we caution that MLMA methods are still subject to potential pitfalls. Here we describe and quantify the advantages and pitfalls of MLMA methods as a function of study design and provide recommendations for the application of these methods in practical settings.
---
paper_title: FaST linear mixed models for genome-wide association studies
paper_content:
We describe factored spectrally transformed linear mixed models (FaST-LMM), an algorithm for genome-wide association studies (GWAS) that scales linearly with cohort size in both run time and memory use. On Wellcome Trust data for 15,000 individuals, FaST-LMM ran an order of magnitude faster than current efficient algorithms. Our algorithm can analyze data for 120,000 individuals in just a few hours, whereas current algorithms fail on data for even 20,000 individuals (http://mscompbio.codeplex.com/).
---
paper_title: Variance component model to account for sample structure in genome-wide association studies
paper_content:
Although genome-wide association studies (GWASs) have identified numerous loci associated with complex traits, imprecise modeling of the genetic relatedness within study samples may cause substantial inflation of test statistics and possibly spurious associations. Variance component approaches, such as efficient mixed-model association (EMMA), can correct for a wide range of sample structures by explicitly accounting for pairwise relatedness between individuals, using high-density markers to model the phenotype distribution; but such approaches are computationally impractical. We report here a variance component approach implemented in publicly available software, EMMA eXpedited (EMMAX), that reduces the computational time for analyzing large GWAS data sets from years to hours. We apply this method to two human GWAS data sets, performing association analysis for ten quantitative traits from the Northern Finland Birth Cohort and seven common diseases from the Wellcome Trust Case Control Consortium. We find that EMMAX outperforms both principal component analysis and genomic control in correcting for sample structure.
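To make the variance-component idea concrete, the sketch below performs an EMMAX-like scan under the assumption that the variance components are already estimated (the published methods estimate them by restricted maximum likelihood): phenotype and genotypes are rotated by the eigendecomposition of the kinship matrix so that each SNP can then be tested by ordinary least squares. All data, sizes, and the kinship estimate here are synthetic placeholders.

```python
# Illustrative EMMAX-like association scan (not the published implementation).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, p = 300, 50                            # individuals, SNPs (toy sizes)
genotypes = rng.integers(0, 3, size=(n, p)).astype(float)
K = np.cov(genotypes)                     # crude marker-based kinship placeholder (n x n)
y = rng.normal(size=n)                    # placeholder phenotype

sigma_g2, sigma_e2 = 0.5, 0.5             # variance components assumed already estimated
vals, vecs = np.linalg.eigh(K)
d = sigma_g2 * np.clip(vals, 0.0, None) + sigma_e2   # eigenvalues of V = s_g2*K + s_e2*I
W = vecs / np.sqrt(d)                     # rotation so that W.T @ V @ W is (nearly) identity

y_t = W.T @ y
ones_t = W.T @ np.ones(n)
p_values = []
for j in range(p):
    X = np.column_stack([ones_t, W.T @ genotypes[:, j]])   # intercept + SNP, rotated
    beta, *_ = np.linalg.lstsq(X, y_t, rcond=None)
    resid = y_t - X @ beta
    dof = n - 2
    se = np.sqrt((resid @ resid / dof) * np.linalg.inv(X.T @ X)[1, 1])
    p_values.append(2 * stats.t.sf(abs(beta[1] / se), dof))
print(min(p_values))
```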
---
paper_title: 3D deep convolutional neural networks for amino acid environment similarity analysis
paper_content:
Central to protein biology is the understanding of how structural elements give rise to observed function. The surfeit of protein structural data enables development of computational methods to systematically derive rules governing structural-functional relationships. However, performance of these methods depends critically on the choice of protein structural representation. Most current methods rely on features that are manually selected based on knowledge about protein structures. These are often general-purpose but not optimized for the specific application of interest. In this paper, we present a general framework that applies 3D convolutional neural network (3DCNN) technology to structure-based protein analysis. The framework automatically extracts task-specific features from the raw atom distribution, driven by supervised labels. As a pilot study, we use our network to analyze local protein microenvironments surrounding the 20 amino acids, and predict the amino acids most compatible with environments within a protein structure. To further validate the power of our method, we construct two amino acid substitution matrices from the prediction statistics and use them to predict effects of mutations in T4 lysozyme structures. Our deep 3DCNN achieves a two-fold increase in prediction accuracy compared to models that employ conventional hand-engineered features and successfully recapitulates known information about similar and different microenvironments. Models built from our predictions and substitution matrices achieve an 85% accuracy predicting outcomes of the T4 lysozyme mutation variants. Our substitution matrices contain rich information relevant to mutation analysis compared to well-established substitution matrices. Finally, we present a visualization method to inspect the individual contributions of each atom to the classification decisions. End-to-end trained deep learning networks consistently outperform methods using hand-engineered features, suggesting that the 3DCNN framework is well suited for analysis of protein microenvironments and may be useful for other protein structural analyses.
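A minimal 3D convolutional sketch in the spirit of this approach (not the authors' network): voxelized atom-type channels around a residue are passed through a small Conv3d stack to produce 20 amino-acid compatibility scores. The grid size, channel counts, and layer sizes are assumptions.

```python
# Minimal 3D CNN sketch for voxelized protein microenvironments:
# input channels could be atom-type occupancy grids around a residue.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv3d(4, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool3d(2),
    nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool3d(2),
    nn.Flatten(),
    nn.Linear(32 * 5 * 5 * 5, 20),       # 20-way amino-acid compatibility scores
)

# One box sampled on a 20x20x20 grid with 4 atom-type channels (toy numbers).
voxels = torch.randn(8, 4, 20, 20, 20)
print(model(voxels).shape)               # torch.Size([8, 20])
```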
---
paper_title: Continuous Distributed Representation of Biological Sequences for Deep Proteomics and Genomics
paper_content:
We introduce a new representation and feature extraction method for biological sequences. Named bio-vectors (BioVec) to refer to biological sequences in general with protein-vectors (ProtVec) for proteins (amino-acid sequences) and gene-vectors (GeneVec) for gene sequences, this representation can be widely used in applications of deep learning in proteomics and genomics. In the present paper, we focus on protein-vectors that can be utilized in a wide array of bioinformatics investigations such as family classification, protein visualization, structure prediction, disordered protein identification, and protein-protein interaction prediction. In this method, we adopt artificial neural network approaches and represent a protein sequence with a single dense n-dimensional vector. To evaluate this method, we apply it in classification of 324,018 protein sequences obtained from Swiss-Prot belonging to 7,027 protein families, where an average family classification accuracy of 93%±0.06% is obtained, outperforming existing family classification methods. In addition, we use ProtVec representation to predict disordered proteins from structured proteins. Two databases of disordered sequences are used: the DisProt database as well as a database featuring the disordered regions of nucleoporins rich with phenylalanine-glycine repeats (FG-Nups). Using support vector machine classifiers, FG-Nup sequences are distinguished from structured protein sequences found in Protein Data Bank (PDB) with a 99.8% accuracy, and unstructured DisProt sequences are differentiated from structured DisProt sequences with 100.0% accuracy. These results indicate that by only providing sequence data for various proteins into this model, accurate information about protein structure can be determined. Importantly, this model needs to be trained only once and can then be applied to extract a comprehensive set of information regarding proteins of interest. Moreover, this representation can be considered as pre-training for various applications of deep learning in bioinformatics. The related data is available at Life Language Processing Website: http://llp.berkeley.edu and Harvard Dataverse: http://dx.doi.org/10.7910/DVN/JMFHTN.
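A toy sketch of the ProtVec idea using the gensim word2vec implementation: sequences are split into 3-mers, a skip-gram model is trained on them, and a protein is embedded by summing its 3-mer vectors. This simplified variant uses overlapping 3-mers rather than the paper's three shifted non-overlapping splittings, and the corpus and hyperparameters below are placeholders.

```python
# Toy ProtVec-style embedding: overlapping 3-mers as "words", skip-gram
# training with gensim, and protein vectors obtained by summing 3-mer vectors.
# (gensim >= 4.0 uses `vector_size`; older versions call this parameter `size`.)
import numpy as np
from gensim.models import Word2Vec

def three_mers(seq):
    return [seq[i:i + 3] for i in range(len(seq) - 2)]

sequences = [
    "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ",
    "MSHHWGYGKHNGPEHWHKDFPIAKGERQSPVDI",
    "MAHHHHHHVGTGSNDDDDKSPDLGTLVPRGSAM",
]  # placeholder sequences; the paper trains on Swiss-Prot
corpus = [three_mers(s) for s in sequences]

model = Word2Vec(sentences=corpus, vector_size=32, window=5, sg=1, min_count=1, epochs=50)

def protein_vector(seq, model):
    vecs = [model.wv[k] for k in three_mers(seq) if k in model.wv]
    return np.sum(vecs, axis=0)

print(protein_vector(sequences[0], model).shape)   # (32,)
```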
---
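The ProtVec entry above boils down to training a skip-gram model over overlapping 3-mers and summing the learned 3-mer vectors per protein. The following hedged sketch does exactly that with gensim (4.x API assumed); the two toy sequences and all hyper-parameters are placeholders rather than the paper's Swiss-Prot setup.

```python
# Hedged sketch of a ProtVec-style embedding: proteins are tokenized into
# overlapping 3-mers and fed to a skip-gram word2vec model. Sequences and
# hyper-parameters are illustrative assumptions, not the paper's setup.
from gensim.models import Word2Vec
import numpy as np

def to_kmers(seq, k=3):
    return [seq[i:i + k] for i in range(len(seq) - k + 1)]

proteins = [
    "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ",
    "MAHHHHHHVGTGSNDDDDKSPDLGTGGGSGIEGR",
]
corpus = [to_kmers(p) for p in proteins]

model = Word2Vec(
    sentences=corpus,
    vector_size=100,   # embedding dimension (assumed)
    window=25,         # wide context window over the 3-mer sequence
    sg=1,              # skip-gram
    negative=5,        # negative sampling
    min_count=1,
    epochs=20,
)

def protein_vector(seq, model, k=3):
    """Sum the 3-mer vectors to get a single fixed-length protein vector."""
    vecs = [model.wv[kmer] for kmer in to_kmers(seq, k) if kmer in model.wv]
    return np.sum(vecs, axis=0)

print(protein_vector(proteins[0], model).shape)   # (100,)
```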
paper_title: Distributed Representations of Words and Phrases and their Compositionality
paper_content:
The recently introduced continuous Skip-gram model is an efficient method for learning high-quality distributed vector representations that capture a large number of precise syntactic and semantic word relationships. In this paper we present several extensions that improve both the quality of the vectors and the training speed. By subsampling of the frequent words we obtain significant speedup and also learn more regular word representations. We also describe a simple alternative to the hierarchical softmax called negative sampling. An inherent limitation of word representations is their indifference to word order and their inability to represent idiomatic phrases. For example, the meanings of "Canada" and "Air" cannot be easily combined to obtain "Air Canada". Motivated by this example, we present a simple method for finding phrases in text, and show that learning good vector representations for millions of phrases is possible.
---
paper_title: MUST-CNN: A Multilayer Shift-and-Stitch Deep Convolutional Architecture for Sequence-based Protein Structure Prediction
paper_content:
Predicting protein properties such as solvent accessibility and secondary structure from its primary amino acid sequence is an important task in bioinformatics. Recently, a few deep learning models have surpassed the traditional window based multilayer perceptron. Taking inspiration from the image classification domain we propose a deep convolutional neural network architecture, MUST-CNN, to predict protein properties. This architecture uses a novel multilayer shift-and-stitch (MUST) technique to generate fully dense per-position predictions on protein sequences. Our model is significantly simpler than the state-of-the-art, yet achieves better results. By combining MUST and the efficient convolution operation, we can consider far more parameters while retaining very fast prediction speeds. We beat the state-of-the-art performance on two large protein property prediction datasets.
---
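To make the per-position prediction idea in the MUST-CNN entry concrete, here is a deliberately simplified 1D-convolutional labeller in PyTorch. It omits the multilayer shift-and-stitch mechanism that defines MUST-CNN; layer widths, kernel size, and the random toy batch are assumptions.

```python
# Simplified sketch of per-position protein property prediction with 1D
# convolutions. This is NOT the MUST-CNN architecture (no shift-and-stitch);
# layer sizes and the toy input are assumptions for illustration only.
import torch
import torch.nn as nn

class PerPositionCNN(nn.Module):
    def __init__(self, vocab=21, embed=32, hidden=64, classes=3, kernel=9):
        super().__init__()
        self.embed = nn.Embedding(vocab, embed, padding_idx=0)
        pad = kernel // 2  # keep the output length equal to the input length
        self.conv = nn.Sequential(
            nn.Conv1d(embed, hidden, kernel, padding=pad), nn.ReLU(),
            nn.Conv1d(hidden, hidden, kernel, padding=pad), nn.ReLU(),
        )
        self.head = nn.Conv1d(hidden, classes, kernel_size=1)  # per-position logits

    def forward(self, x):                    # x: (batch, length) of residue ids
        emb = self.embed(x).transpose(1, 2)  # (batch, embed, length)
        return self.head(self.conv(emb))     # (batch, classes, length)

model = PerPositionCNN()
tokens = torch.randint(1, 21, (4, 120))      # 4 toy sequences of length 120
logits = model(tokens)
print(logits.shape)                          # torch.Size([4, 3, 120])
# Cross-entropy over positions: targets shaped (batch, length).
targets = torch.randint(0, 3, (4, 120))
loss = nn.CrossEntropyLoss()(logits, targets)
print(float(loss))
```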
|
Title: Deep Learning for Genomics: A Concise Overview
Section 1: Introduction
Description 1: This section provides a background on the intersection of genomics and deep learning, detailing the evolution of genomic data acquisition and the emergence of deep learning as a powerful tool for analyzing this data.
Section 2: Deep Learning Architectures: Genomic Perspective
Description 2: This section reviews various deep learning algorithms and their specific advantages for resolving different types of problems in genomic applications.
Section 3: Convolutional Neural Networks
Description 3: This section discusses the application of Convolutional Neural Networks (CNNs) in genomics, focusing on the adaptation of CNNs to genomic sequences and their success in tasks such as protein-binding specificity and motif identification.
Section 4: Recurrent Neural Networks
Description 4: This section explores the use of Recurrent Neural Networks (RNNs) for handling sequential genomic data, with emphasis on modeling DNA sequences and predicting protein localization.
Section 5: Autoencoders
Description 5: This section covers the application of Autoencoders in genomics, particularly for feature extraction, data denoising, and dimension reduction.
Section 6: Emergent Deep Architectures
Description 6: This section reviews recent innovative deep learning models that go beyond classical architectures, including hybrid models and architectures tailored to specific genomic problems.
Section 7: Deep Learning Architectures: Insights and Remarks
Description 7: This section offers insights into the interpretation and visualization of deep learning models, as well as remarks on model design considerations for real-world genomic applications.
Section 8: Genomic Applications
Description 8: This section discusses specific genomic problems that can benefit from deep learning methods, including gene expression characterization, gene expression prediction, regulatory genomics, and more.
Section 9: Obstacles and Opportunities
Description 9: This section identifies current obstacles in the application of deep learning to genomics, such as data limitation and heterogeneity, and explores potential opportunities to advance the field.
Section 10: Conclusion and Outlook
Description 10: This section summarizes the current state of deep learning applications in genomics and offers a perspective on future directions and challenges in the field.
|
Collective Intelligence in Humans: A Literature Review
| 6 |
---
paper_title: Swarm intelligence in animals and humans.
paper_content:
Electronic media have unlocked a hitherto largely untapped potential for swarm intelligence (SI; generally, the realisation that group living can facilitate solving cognitive problems that go beyond the capacity of single animals) in humans with relevance for areas such as company management, prediction of elections, product development and the entertainment industry. SI is a rapidly developing topic that has become a hotbed for both innovative research and wild speculation. Here, we tie together approaches from seemingly disparate areas by means of a general definition of SI to unite SI work on both animal and human groups. Furthermore, we identify criteria that are important for SI to operate and propose areas in which further progress with SI research can be made.
---
paper_title: Toward collective intelligence of online communities: A primitive conceptual model
paper_content:
Inspired by the ideas of Swarm Intelligence and the “global brain”, a concept of “community intelligence” is suggested in the present paper, reflecting that some “intelligent” features may emerge in a Web-mediated online community from interactions and knowledge-transmissions between the community members. This possible research field of community intelligence is then examined under the backgrounds of “community” and “intelligence” researches. Furthermore, a conceptual model of community intelligence is developed from two views. From the structural view, the community intelligent system is modeled as a knowledge supernetwork that is comprised of triple interwoven networks of the media network, the human network, and the knowledge network. Furthermore, based on a dyad of knowledge in two forms of “knowing” and “knoware”, the dynamic view describes the basic mechanics of the formation and evolution of “community intelligence”. A few relevant research issues are shortly discussed on the basis of the proposed conceptual model.
---
paper_title: From social computing to reflexive collective intelligence: The IEML research program
paper_content:
The IEML research program promotes a radical innovation in the notation and processing of semantics. IEML (Information Economy MetaLanguage) is a regular language that provides new methods for semantic interoperability, semantic navigation, collective categorization and self-referential collective intelligence. This research program is compatible with the major standards of the Web of data and is in tune with the current trends in social computing. The paper explains the philosophical relevance of this new language, expounds its syntactic and semantic structures and ponders its possible implications for the growth of collective intelligence in cyberspace.
---
paper_title: Collective Intelligence in the Organization of Science
paper_content:
Whereas some suggest that consensus is the desirable end goal in fields of science, this paper suggests that the existing literature on collective intelligence offers key alternative insights into the evolution of knowledge in scientific communities. Drawing on the papers in this special issue, we find that the papers fall across a spectrum of convergent, divergent, and reflective activities. In addition, we find there to be a set of ongoing theoretical tensions common across the papers. We suggest that this diversity of activities and ongoing theoretical tensions—both signs of collective intelligence— may be a far more appropriate measure than consensus of the health of a scientific community.
---
paper_title: On model design for simulation of collective intelligence
paper_content:
The study of collective intelligence (CI) systems is increasingly gaining interest in a variety of research and application domains. Those domains range from existing research areas such as computer networks and collective robotics to upcoming areas of agent-based and insect-based computing; also including applications on the internet and in games and movies. CI systems are complex by nature and (1) are effectively adaptive in uncertain and unknown environments, (2) can organise themselves autonomously, and (3) exhibit 'emergent' behaviour. Among others, multi-agent systems, complex adaptive systems, swarm intelligence and self-organising systems are considered to be such systems. The explosive wild growth of research studies of CI systems has not yet led to a systematic approach for model design of these kinds of systems. Although there have been recent efforts on the issue of system design (the complete design trajectory from identifying system requirements up to implementation), the problem of choosing and specifying a good model of a CI system is often done implicitly and sometimes even completely ignored. The aim of this article is to bring to the attention that model design is an essential as well as an integral part of system design. We present a constructive approach to systematically design, build and test models of CI systems. Because simulation is often used as a way to research CI systems, we particularly focus on models that can be used for simulation. Additionally, we show that it is not necessary to re-invent the wheel: here, we show how existing models and algorithms can be used for CI model design. The approach is illustrated by means of two example studies on a (semi-automated) multi-player game and collaborative robotics.
---
paper_title: A Learning Framework for Knowledge Building and Collective Wisdom Advancement in Virtual Learning Communities
paper_content:
This study represents an effort to construct a learning framework for knowledge building and collective wisdom advancement in a virtual learning community (VLC) from the perspectives of system wholeness, intelligence wholeness and dynamics, learning models, and knowledge management. It also tries to construct the zone of proximal development (ZPD) of VLCs based on the combination of Vygotsky’s theory of zone of proximal development and the trajectories of knowledge building. The aim of a VLC built on the theories of constructivism, situated learning, and knowledge building, etc., is to apply individual intelligence to online learning, bring the advantages of collaborative learning and collective wisdom into play, solve difficult problems in independent learning, and lead to the integration and sublimation of collective wisdom through long-term individual interactions, collaborative learning and knowledge building.
---
paper_title: The Origin of Mind: Evolution of Brain, Cognition, and General Intelligence
paper_content:
Darwin considered an understanding of the evolution of the human mind and brain to be of major importance to the evolutionary sciences. This groundbreaking book sets out a comprehensive, integrated theory of why and how the human mind has developed to function as it does. Geary proposes that human motivational, affective, behavioral, and cognitive systems have evolved to process social and ecological information (e.g., facial expressions) that covaried with survival or reproductive options during human evolution. Further, he argues that the ultimate focus of all of these systems is to support our attempts to gain access to and control of resources - more specifically, the social (e.g., mates), biological (e.g., food), and physical (e.g., territory) resources that supported successful survival and reproduction over time. In this view, Darwin's conceptualization of natural selection as a "struggle for existence" becomes, for us, a struggle with other human beings for control of the available resources. This struggle provides a means of integrating modular brain and cognitive systems such as language with those brain and cognitive systems that support general intelligence. To support his arguments, Geary draws on an impressive array of recent findings in cognitive science and neuroscience as well as primatology, anthropology, and sociology. The book also explores a number of issues that are of interest in modern society, including how general intelligence relates to academic achievement, occupational status, and income. Readers will find this book a thought-provoking read and an impetus for new theories of mind.
---
paper_title: The Business Model: Recent Developments and Future Research
paper_content:
This article provides a broad and multifaceted review of the received literature on business models in which the authors examine the business model concept through multiple subject-matter lenses. The review reveals that scholars do not agree on what a business model is and that the literature is developing largely in silos, according to the phenomena of interest of the respective researchers. However, the authors also found emerging common themes among scholars of business models. Specifically, (1) the business model is emerging as a new unit of analysis; (2) business models emphasize a system-level, holistic approach to explaining how firms “do business”; (3) firm activities play an important role in the various conceptualizations of business models that have been proposed; and (4) business models seek to explain how value is created, not just how it is captured. These emerging themes could serve as catalysts for a more unified study of business models.
---
paper_title: Globalization, information and communication technologies, and the prospect of a ‘global village’: promises of inclusion or electronic colonization?
paper_content:
This paper discusses the reciprocal relationships among globalization, information and communication technologies (ICT), and the prospect of a ‘global village’. The current metaphor of a ‘global village’ (regardless of physical access to ICT) is problematic, and can be interpreted as a form of electronic colonization. However, through such concepts as blurred identity, nomadism, and hybridity, a distinctly (post‐modern) ICT landscape can be redrawn in a way that accepts the global identity of the ICT, but denies the colonial erasure associated with the global‐village narrative. ICT, in themselves, cannot serve as an end in education, but the demand for critical education involving ICT is pressing as the effects of globalization are experienced. Three methods of promoting decolonizing criticality are proposed: critical emotional literacy, collective witnessing, and collective intelligence.
---
paper_title: On the Collective Nature of Human Intelligence
paper_content:
A fundamental assumption of cognitive science is that the individual is the correct unit of analysis for understanding human intelligence. I present evidence that this assumption may have limited utility, that the social networks containing the individuals are an important additional unit of analysis, and that this “network intelligence” is significantly mediated by non-linguistic processes. Across a broad range of situations these network effects typically predict 40% or more of the variation in human behavior.
---
paper_title: Groups of diverse problem solvers can outperform groups of high-ability problem solvers
paper_content:
We introduce a general framework for modeling functionally diverse problem-solving agents. In this framework, problem-solving agents possess representations of problems and algorithms that they use to locate solutions. We use this framework to establish a result relevant to group composition. We find that when selecting a problem-solving team from a diverse population of intelligent agents, a team of randomly selected agents outperforms a team comprised of the best-performing agents. This result relies on the intuition that, as the initial pool of problem solvers becomes large, the best-performing agents necessarily become similar in the space of problem solvers. Their relatively greater ability is more than offset by their lack of problem-solving diversity.
---
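A toy simulation in the spirit of the diversity-versus-ability framework above: agents are ordered tuples of step sizes searching a random circular value landscape, and a team searches in relay. Landscape size, heuristic space, and team size are invented; on many random seeds the randomly drawn team matches or beats the team of individually best agents, though any single run can go either way.

```python
# Toy simulation loosely in the spirit of the diversity-vs-ability result
# summarized above; it illustrates the mechanism, not the paper's exact model.
import random

N = 200                                     # points on a circular landscape
random.seed(1)
VALUES = [random.random() for _ in range(N)]

def climb(start, heuristic):
    """Greedy search: try each step size in order, move whenever value improves."""
    pos, improved = start, True
    while improved:
        improved = False
        for step in heuristic:
            cand = (pos + step) % N
            if VALUES[cand] > VALUES[pos]:
                pos, improved = cand, True
    return pos

def solo_ability(heuristic, starts):
    return sum(VALUES[climb(s, heuristic)] for s in starts) / len(starts)

def team_performance(team, starts):
    """Relay search: agents keep handing the current point around until stuck."""
    total = 0.0
    for s in starts:
        pos, improved = s, True
        while improved:
            improved = False
            for h in team:
                new = climb(pos, h)
                if VALUES[new] > VALUES[pos]:
                    pos, improved = new, True
        total += VALUES[pos]
    return total / len(starts)

# Each agent is an ordered triple of distinct step sizes drawn from 1..12.
population = [random.sample(range(1, 13), 3) for _ in range(50)]
starts = list(range(N))
ranked = sorted(population, key=lambda h: solo_ability(h, starts), reverse=True)

best_team = ranked[:8]                        # eight individually best agents
random_team = random.sample(population, 8)    # eight randomly chosen agents
print("best-agent team:", round(team_performance(best_team, starts), 4))
print("random team    :", round(team_performance(random_team, starts), 4))
```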
paper_title: How social influence can undermine the wisdom of crowd effect
paper_content:
Social groups can be remarkably smart and knowledgeable when their averaged judgements are compared with the judgements of individuals. Already Galton [Galton F (1907) Nature 75:7] found evidence that the median estimate of a group can be more accurate than estimates of experts. This wisdom of crowd effect was recently supported by examples from stock markets, political elections, and quiz shows [Surowiecki J (2004) The Wisdom of Crowds]. In contrast, we demonstrate by experimental evidence (N = 144) that even mild social influence can undermine the wisdom of crowd effect in simple estimation tasks. In the experiment, subjects could reconsider their response to factual questions after having received average or full information of the responses of other subjects. We compare subjects’ convergence of estimates and improvements in accuracy over five consecutive estimation periods with a control condition, in which no information about others’ responses was provided. Although groups are initially “wise,” knowledge about estimates of others narrows the diversity of opinions to such an extent that it undermines the wisdom of crowd effect in three different ways. The “social influence effect” diminishes the diversity of the crowd without improvements of its collective error. The “range reduction effect” moves the position of the truth to peripheral regions of the range of estimates so that the crowd becomes less reliable in providing expertise for external observers. The “confidence effect” boosts individuals’ confidence after convergence of their estimates despite lack of improved accuracy. Examples of the revealed mechanism range from misled elites to the recent global financial crisis.
---
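The qualitative mechanism described above (averaging is accurate, while social influence shrinks diversity without improving the collective error) can be mimicked with a few lines of simulation. The true value, the log-normal noise, and the 30% pull toward the group mean are arbitrary assumptions.

```python
# Toy illustration of the wisdom-of-crowds effect and of how social influence
# shrinks opinion diversity. The true value, noise model, and update rule are
# invented; this only mimics the qualitative mechanism discussed above.
import random
import statistics

random.seed(7)
TRUE_VALUE = 1000.0
crowd = [random.lognormvariate(0, 0.4) * TRUE_VALUE for _ in range(100)]

def report(label, estimates):
    med = statistics.median(estimates)
    spread = statistics.pstdev(estimates)
    print(f"{label:>22}: median error={abs(med - TRUE_VALUE):7.1f}  diversity={spread:7.1f}")

report("independent crowd", crowd)

# Five rounds in which everyone moves 30% of the way toward the group mean,
# a crude stand-in for receiving information about others' responses.
influenced = list(crowd)
for _ in range(5):
    mean = statistics.fmean(influenced)
    influenced = [x + 0.3 * (mean - x) for x in influenced]
    report("after influence round", influenced)
```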
paper_title: Swarm intelligence in humans: diversity can trump ability
paper_content:
We identify some of the possibilities and limitations of human swarm intelligence (SI) using the response of the public to two types of cognitive problems. Furthermore, we propose a simple measure for the quantification of collective information that could form the basis for SI in study populations for specific tasks. Our three main results are (1) that the potential benefits of SI depend on the type of problem, (2) that individual performance and collective performance can be uncorrelated and that a group of individually high performers can be outcompeted by a same-size group of individually low performers, and (3) that adding diversity to a group can be more beneficial than adding expertise. Our results question the emphasis that societies and organizations can put on individual performance to the detriment of diversity as far as teams are concerned. Nevertheless, it is important to point out that while diversity is a necessary condition for effective SI, diversity alone is clearly not sufficient. Finally, we discuss the potential implications of our findings for the evolution of group composition and the maintenance of personality diversity in animals.
---
paper_title: Collective intelligence for idea management with Internet‐based information aggregation markets
paper_content:
Purpose – The purpose of this paper is to explore the use of information aggregation markets (IAMs) for community‐based idea management and to present IDeM, a novel Internet‐based software tool that can be used for generating and evaluating new ideas utilizing the concept of IAMs. Design/methodology/approach – Starting with a review of existing methods for collective intelligence, IAMs are identified as a prominent method for collective intelligence. Specific requirements for exploring IAMs for idea management are derived. Based on these requirements, a software tool for implementing IAMs in the context of idea management is developed (IDeM). IDeM has been evaluated and evaluation results are used to identify IDeM's benefits and limitations. A review of related work points out the innovative characteristics of IDeM. Findings – Evaluation results indicate that IAMs is an efficient method for idea generation and evaluation. Moreover IDeM is perceived both as easy to use and efficient in supporting idea genera...
---
paper_title: Towards knowledge management based on harnessing collective intelligence on the web
paper_content:
The Web has acquired immense value as an active, evolving repository of knowledge. It is now entering a new era, which has been called “Web 2.0”. One of the essential elements of Web 2.0 is harnessing the collective intelligence of Web users. Large groups of people are remarkably intelligent, and are often smarter than the smartest people in them. Knowledge as collective intelligence is socially constructed from the common understandings of people. It works as a filter for selecting highly regarded information with collective annotation based on bottom-up consensus and the unifying force of Web-supported social networks. The rising interest in harnessing the collective intelligence of Web users entails changes in managing the knowledge of individual users. In this paper, we introduce a concept of knowledge management based on harnessing the collective intelligence of Web users, and explore the technical issues involved in implementing it.
---
paper_title: Holistic Sense Making: Conflicting Opinions, Creative Ideas, and Collective Intelligence
paper_content:
Purpose – The purpose of this work is to introduce a generic conceptual and methodological framework for the study of emergent social and intellectual patterns and trends in a diverse range of sense‐ and decision‐making activities. Design/methodology/approach – The development of the framework is driven by three motivating challenges: capturing the collective intelligence of science, fostering scientific discoveries in science and e‐Science, and facilitating evidence‐based librarianship (EBL). The framework is built on concepts such as structural holes and intellectual turning points, methodologies and techniques for progressive knowledge domain visualization and differentiation of conflicting opinions, and information integration models to achieve coherent transitions between different conceptual scales. Findings – Structural holes and turning points are detected and validated with the domain of terrorism research as an example. Conflicting opinions are differentiated in the form of a decision tree of phras...
---
paper_title: Self-Organization in Biological Systems
paper_content:
"Broad in scope, thorough yet accessible, this book is a self-contained introduction to self-organization and complexity in biology - a field of study at the forefront of life sciences research." (publisher's description)
---
paper_title: Toward collective intelligence of online communities: A primitive conceptual model
paper_content:
Inspired by the ideas of Swarm Intelligence and the “global brain”, a concept of “community intelligence” is suggested in the present paper, reflecting that some “intelligent” features may emerge in a Web-mediated online community from interactions and knowledge-transmissions between the community members. This possible research field of community intelligence is then examined under the backgrounds of “community” and “intelligence” researches. Furthermore, a conceptual model of community intelligence is developed from two views. From the structural view, the community intelligent system is modeled as a knowledge supernetwork that is comprised of triple interwoven networks of the media network, the human network, and the knowledge network. Furthermore, based on a dyad of knowledge in two forms of “knowing” and “knoware”, the dynamic view describes the basic mechanics of the formation and evolution of “community intelligence”. A few relevant research issues are shortly discussed on the basis of the proposed conceptual model.
---
paper_title: From social computing to reflexive collective intelligence: The IEML research program
paper_content:
The IEML research program promotes a radical innovation in the notation and processing of semantics. IEML (Information Economy MetaLanguage) is a regular language that provides new methods for semantic interoperability, semantic navigation, collective categorization and self-referential collective intelligence. This research program is compatible with the major standards of the Web of data and is in tune with the current trends in social computing. The paper explains the philosophical relevance of this new language, expounds its syntactic and semantic structures and ponders its possible implications for the growth of collective intelligence in cyberspace.
---
paper_title: On model design for simulation of collective intelligence
paper_content:
The study of collective intelligence (CI) systems is increasingly gaining interest in a variety of research and application domains. Those domains range from existing research areas such as computer networks and collective robotics to upcoming areas of agent-based and insect-based computing; also including applications on the internet and in games and movies. CI systems are complex by nature and (1) are effectively adaptive in uncertain and unknown environments, (2) can organise themselves autonomously, and (3) exhibit 'emergent' behaviour. Among others, multi-agent systems, complex adaptive systems, swarm intelligence and self-organising systems are considered to be such systems. The explosive wild growth of research studies of CI systems has not yet led to a systematic approach for model design of these kinds of systems. Although there have been recent efforts on the issue of system design (the complete design trajectory from identifying system requirements up to implementation), the problem of choosing and specifying a good model of a CI system is often done implicitly and sometimes even completely ignored. The aim of this article is to bring to the attention that model design is an essential as well as an integral part of system design. We present a constructive approach to systematically design, build and test models of CI systems. Because simulation is often used as a way to research CI systems, we particularly focus on models that can be used for simulation. Additionally, we show that it is not necessary to re-invent the wheel: here, we show how existing models and algorithms can be used for CI model design. The approach is illustrated by means of two example studies on a (semi-automated) multi-player game and collaborative robotics.
---
paper_title: Swarm Intelligent Surfing in the Web
paper_content:
Traditional ranking models used in Web search engines rely on a static snapshot of the Web graph, basically the link structure of the Web documents. However, visitors' browsing activities indicate the importance of a document. In the traditional static models, the information on document importance conveyed by interactive browsing is neglected. The nowadays Web server/surfer model lacks the ability to take advantage of user interaction for document ranking. We enhance the ordinary Web server/surfer model with a mechanism inspired by swarm intelligence to make it possible for the Web servers to interact with Web surfers and thus obtain a proper local ranking of Web documents. The proof-of-concept implementation of our idea demonstrates the potential of our model. The mechanism can be used directly in deployed Web servers which enable on-the-fly creation of rankings for Web documents local to a Web site. The local rankings can also be used as input for the generation of global Web rankings in a decentralized way.
---
paper_title: Termites as models of swarm cognition
paper_content:
Eusociality has evolved independently at least twice among the insects: among the Hymenoptera (ants and bees), and earlier among the Isoptera (termites). Studies of swarm intelligence, and by inference, swarm cognition, have focused largely on the bees and ants, while the termites have been relatively neglected. Yet, termites are among the world’s premier animal architects, and this betokens a sophisticated swarm intelligence capability. In this article, I review new findings on the workings of the mound of Macrotermes which clarify how these remarkable structures work, and how they come to be built. Swarm cognition in these termites is in the form of “extended” cognition, whereby the swarm’s cognitive abilities arise both from interaction amongst the individual agents within a swarm, and from the interaction of the swarm with the environment, mediated by the mound’s dynamic architecture. The latter provides large scale “cognitive maps” which enable termite swarms to assess the functional state of their structure and to guide repair efforts where necessary. The crucial role of the built environment in termite swarm cognition also points to certain “swarm cognitive disorders”, where swarms can be pushed into anomalous activities by manipulating crucial structural and functional attributes of the termite system of “extended cognition.”
---
paper_title: Group decision making in swarms of honey bees
paper_content:
This study renews the analysis of honey bee swarms as decision-making units. We repeated Lindauer's observations of swarms choosing future home sites but used modern videorecording and bee-labelling techniques to produce a finer-grained description of the decision-making process than was possible 40 years ago. Our results both confirm Lindauer's findings and reveal several new features of the decision-making process. Viewing the process at the group level, we found: (1) the scout bees in a swarm find potential nest sites in all directions and at distances of up to several kilometers; (2) initially, the scouts advertise a dozen or more sites with their dances on the swarm, but eventually they advertise just one site; (3) within about an hour of the appearance of unanimity among the dancers, the swarm lifts off to fly to the chosen site; (4) there is a crescendo of dancing just before liftoff, and (5) the chosen site is not necessarily the one that is first advertised on the swarm. Viewing the process at the individual level, we found: (1) the dances of individual scout bees tend to taper off and eventually cease, so that many dancers drop out each day; (2) some scout bees switch their allegiance from one site to another, and (3) the principal means of consensus building among the dancing bees is for bees that dance initially for a non-chosen site to cease their dancing altogether, not to switch their dancing to the chosen site. We hypothesize that scout bees are programmed to gradually quit dancing and that this reduces the possibility of the decision-making process coming to a standstill with groups of unyielding dancers deadlocked over two or more sites. We point out that a swarm's overall strategy of decision making is a “weighted additive strategy.” This strategy is the most accurate but also the most demanding in terms of information processing, because it takes account of all of the information relevant to a decision problem. Despite being composed of small-brained bees, swarms are able to use the weighted additive strategy by distributing among many bees both the task of evaluating the alternative sites and the task of identifying the best of these sites.
---
paper_title: Holistic Sense Making: Conflicting Opinions, Creative Ideas, and Collective Intelligence
paper_content:
Purpose – The purpose of this work is to introduce a generic conceptual and methodological framework for the study of emergent social and intellectual patterns and trends in a diverse range of sense‐ and decision‐making activities. Design/methodology/approach – The development of the framework is driven by three motivating challenges: capturing the collective intelligence of science, fostering scientific discoveries in science and e‐Science, and facilitating evidence‐based librarianship (EBL). The framework is built on concepts such as structural holes and intellectual turning points, methodologies and techniques for progressive knowledge domain visualization and differentiation of conflicting opinions, and information integration models to achieve coherent transitions between different conceptual scales. Findings – Structural holes and turning points are detected and validated with the domain of terrorism research as an example. Conflicting opinions are differentiated in the form of a decision tree of phras...
---
paper_title: CorpWiki: A self-regulating wiki to promote corporate collective intelligence through expert peer matching
paper_content:
One of the main challenges that organizations face nowadays, is the efficient use of individual employee intelligence, through machine-facilitated understanding of the collected corporate knowledge, to develop their collective intelligence. Web 2.0 technologies, like wikis, can be used to address the above issue. Nevertheless, their application in corporate environments is limited, mainly due to their inability to ensure knowledge creation and assessment in a timely and reliable manner. In this study we propose CorpWiki, a self-regulating wiki system for effective acquisition of high-quality knowledge content. Inserted articles undergo a quality assessment control by a large number of corporate peer employees. In case the quality is inadequate, CorpWiki uses a novel expert peer matching algorithm (EPM), based on feed-forward neural networks, that searches the human network of the organization to select the most appropriate peer employee who will improve the quality of the article. Performance evaluation results, obtained through simulation modeling, indicate that CorpWiki improves the final quality levels of the inserted articles as well as the time and effort required to reach them. The proposed system, combining machine-learning intelligence with the individual intelligence of peer employees, aims to create new inferences regarding corporate issues, thus promoting the collective organizational intelligence.
---
paper_title: Decisions 2.0: the power of collective intelligence
paper_content:
Information markets, wikis and other applications that tap into the collective intelligence of groups have recently generated tremendous interest. But what’s the reality behind the hype?
---
paper_title: Regional intelligence: distributed localised information systems for innovation and development
paper_content:
Technological information is recognised as an important factor shaping regional systems of innovation and innovative regions. However, little has been written on how regions set up and manage this vital resource. This paper focuses on regional intelligence: distributed information systems localised over a region allowing continuous update and learning on technologies, competitors, markets, and the environment. We start by defining regional intelligence with respect to the concepts of business intelligence, organisational, and collective intelligence. We look at a number of case studies and experiences gained in the context of EU regional innovation and regional economic strategies, which highlight early forms of regional intelligence. We examine the fundamental information modules making up regional intelligence, including, R&D dissemination, technology and market watch, company benchmarking and competition analysis, regional foresight, and regional performance. We discuss the integration of distributed information systems and solutions which may be given to consolidate public content applications with the internal information systems of companies, and the role of information integration in the continuous making and remaking of innovative regions.
---
paper_title: Collective Intelligence and its Implementation on the Web: algorithms to develop a collective mental map
paper_content:
Collective intelligence is defined as the ability of a group to solve more problems than its individual members. It is argued that the obstacles created by individual cognitive limits and the difficulty of coordination can be overcome by using a collective mental map (CMM). A CMM is defined as an external memory with shared read/write access, that represents problem states, actions and preferences for actions. It can be formalized as a weighted, directed graph. The creation of a network of pheromone trails by ant colonies points us to some basic mechanisms of CMM development: averaging of individual preferences, amplification of weak links by positive feedback, and integration of specialised subnetworks through division of labor. Similar mechanisms can be used to transform the World-Wide Web into a CMM, by supplementing it with weighted links. Two types of algorithms are explored: 1) the co-occurrence of links in web pages or user selections can be used to compute a matrix of link strengths, thus generalizing the technique of “collaborative filtering”; 2) learning web rules extract information from a user’s sequential path through the web in order to change link strengths and create new links. The resulting weighted web can be used to facilitate problem-solving by suggesting related links to the user, or, more powerfully, by supporting a software agent that discovers relevant documents through spreading activation.
---
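One of the two algorithm families sketched in the entry above, co-occurrence-based link strengths, is easy to prototype: count how often two documents are selected together and normalise. The sessions and the normalisation rule below are invented placeholders.

```python
# Sketch of the co-occurrence idea from the entry above: link strengths are
# derived from how often two documents are selected together in a session.
# Sessions, documents, and the normalisation are invented for illustration.
from collections import defaultdict
from itertools import combinations

sessions = [                       # hypothetical user selections per session
    ["a", "b", "c"],
    ["a", "c"],
    ["b", "c", "d"],
    ["a", "b"],
]

co_counts = defaultdict(int)
doc_counts = defaultdict(int)
for sel in sessions:
    for doc in set(sel):
        doc_counts[doc] += 1
    for x, y in combinations(sorted(set(sel)), 2):
        co_counts[(x, y)] += 1

def strength(x, y):
    """Normalised co-occurrence: fraction of x's sessions that also contain y."""
    pair = tuple(sorted((x, y)))
    return co_counts.get(pair, 0) / doc_counts[x] if doc_counts[x] else 0.0

def suggest(doc, k=2):
    others = [d for d in doc_counts if d != doc]
    return sorted(others, key=lambda d: strength(doc, d), reverse=True)[:k]

print(suggest("a"))   # documents most strongly linked to "a"
```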
paper_title: Swarm intelligence in animals and humans.
paper_content:
Electronic media have unlocked a hitherto largely untapped potential for swarm intelligence (SI; generally, the realisation that group living can facilitate solving cognitive problems that go beyond the capacity of single animals) in humans with relevance for areas such as company management, prediction of elections, product development and the entertainment industry. SI is a rapidly developing topic that has become a hotbed for both innovative research and wild speculation. Here, we tie together approaches from seemingly disparate areas by means of a general definition of SI to unite SI work on both animal and human groups. Furthermore, we identify criteria that are important for SI to operate and propose areas in which further progress with SI research can be made.
---
paper_title: The Allocation of Collaborative Efforts in Open-Source Software
paper_content:
The article investigates the allocation of collaborative efforts among core developers (maintainers) of open-source software by analyzing on-line development traces (logs) for a set of 10 large projects. Specifically, we investigate whether the division of labor within open-source projects is influenced by characteristics of software code. We suggest that the collaboration among maintainers tends to be influenced by different measures of code complexity. We interpret these findings by providing preliminary evidence that the organization of open-source software development would self-adapt to characteristics of the code base, in a ‘stigmergic’ manner.
---
paper_title: Toward collective intelligence of online communities: A primitive conceptual model
paper_content:
Inspired by the ideas of Swarm Intelligence and the “global brain”, a concept of “community intelligence” is suggested in the present paper, reflecting that some “intelligent” features may emerge in a Web-mediated online community from interactions and knowledge-transmissions between the community members. This possible research field of community intelligence is then examined under the backgrounds of “community” and “intelligence” researches. Furthermore, a conceptual model of community intelligence is developed from two views. From the structural view, the community intelligent system is modeled as a knowledge supernetwork that is comprised of triple interwoven networks of the media network, the human network, and the knowledge network. Furthermore, based on a dyad of knowledge in two forms of “knowing” and “knoware”, the dynamic view describes the basic mechanics of the formation and evolution of “community intelligence”. A few relevant research issues are shortly discussed on the basis of the proposed conceptual model.
---
paper_title: On model design for simulation of collective intelligence
paper_content:
The study of collective intelligence (CI) systems is increasingly gaining interest in a variety of research and application domains. Those domains range from existing research areas such as computer networks and collective robotics to upcoming areas of agent-based and insect-based computing; also including applications on the internet and in games and movies. CI systems are complex by nature and (1) are effectively adaptive in uncertain and unknown environments, (2) can organise themselves autonomously, and (3) exhibit 'emergent' behaviour. Among others, multi-agent systems, complex adaptive systems, swarm intelligence and self-organising systems are considered to be such systems. The explosive wild growth of research studies of CI systems has not yet led to a systematic approach for model design of these kinds of systems. Although there have been recent efforts on the issue of system design (the complete design trajectory from identifying system requirements up to implementation), the problem of choosing and specifying a good model of a CI system is often done implicitly and sometimes even completely ignored. The aim of this article is to bring to the attention that model design is an essential as well as an integral part of system design. We present a constructive approach to systematically design, build and test models of CI systems. Because simulation is often used as a way to research CI systems, we particularly focus on models that can be used for simulation. Additionally, we show that it is not necessary to re-invent the wheel: here, we show how existing models and algorithms can be used for CI model design. The approach is illustrated by means of two example studies on a (semi-automated) multi-player game and collaborative robotics.
---
paper_title: Group performance and decision making.
paper_content:
Theory and research on small group performance and decision making is reviewed. Recent trends in group performance research have found that process gains as well as losses are possible, and both are frequently explained by situational and procedural contexts that differentially affect motivation and resource coordination. Research has continued on classic topics (e.g., brainstorming, group goal setting, stress, and group performance) and relatively new areas (e.g., collective induction). Group decision making research has focused on preference combination for continuous response distributions and group information processing. New approaches (e.g., group-level signal detection) and traditional topics (e.g., groupthink) are discussed. New directions, such as nonlinear dynamic systems, evolutionary adaptation, and technological advances, should keep small group research vigorous well into the future.
---
paper_title: Decisions 2.0: the power of collective intelligence
paper_content:
Information markets, wikis and other applications that tap into the collective intelligence of groups have recently generated tremendous interest. But what’s the reality behind the hype?
---
|
Title: Collective Intelligence in Humans: A Literature Review
Section 1: INTRODUCTION
Description 1: Introduce the study of collective intelligence in humans, define terminology, and set the scope and objectives of the review.
Section 2: METHODS
Description 2: Explain the approach taken for literature selection, keyword search criteria, and the process of analyzing the selected literature.
Section 3: RESULTS
Description 3: Present the identified pattern in the literature and discuss the three levels of abstraction: micro-level, macro-level, and the level of emergence.
Section 4: The micro-level: Enabling factors of human beings
Description 4: Discuss collective intelligence at the micro-level, focusing on psychological, cognitive, and behavioral elements that enable collective intelligence.
Section 5: The macro-level: Output of the System
Description 5: Explain collective intelligence at the macro-level, particularly in the context of the "wisdom of crowds" effect and the impact of diversity, independence, and aggregation.
Section 6: The Level of Emergence: From Local Interactions to Global Patterns
Description 6: Explore how macro-level behavior emerges from micro-level interactions, using theories of complex adaptive systems, self-organization, and emergence.
Section 7: DISCUSSION AND CONCLUSIONS
Description 7: Summarize the review findings, propose a conceptual framework for studying collective intelligence, and suggest directions for future research.
|
Overview of Multi-Agent Approach for Micro-Grid Energy Management
| 6 |
---
paper_title: Toward a model integration methodology for advanced applications in power engineering
paper_content:
This paper discusses a novel approach to model integration applied within electrical power engineering. A model integration methodology, exploiting the novel approach developed and underpinned by agent technology, is proposed.
---
paper_title: An autonomous agent for reliable operation of power market and systems including microgrids
paper_content:
This paper proposes a multi-agent approach to microgrid power system operation. The proposed method consists of several Loads Agents (LAGs), Generator Agents (GAGs) and a single Microgrid Control Agent (MAG). The target of this study is to maximize revenue from the microgrid. To demonstrate its capability, the proposed electricity trading algorithm is applied to a model system. The simulation results show that the proposed multi-agent approach is promising.
---
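To give a feel for the load-agent / generator-agent / control-agent split described above, the sketch below has a control agent perform a naive merit-order dispatch and shed the lowest-priority loads when capacity is short. All figures and the dispatch rule are invented and are not the paper's trading algorithm.

```python
# Toy sketch of the agent roles in the entry above: load agents announce demand,
# generator agents offer capacity at a cost, and a control agent dispatches the
# cheapest generation first and sheds the lowest-priority loads when short.
from dataclasses import dataclass

@dataclass
class GeneratorAgent:
    name: str
    capacity_kw: float
    cost_per_kwh: float

@dataclass
class LoadAgent:
    name: str
    demand_kw: float
    priority: int                      # higher = more important to keep supplied

def control_agent(generators, loads):
    capacity = sum(g.capacity_kw for g in generators)
    served = sorted(loads, key=lambda l: l.priority, reverse=True)
    shed = []
    while served and sum(l.demand_kw for l in served) > capacity:
        shed.append(served.pop())      # drop the lowest-priority load
    remaining = sum(l.demand_kw for l in served)
    dispatch = {}
    for g in sorted(generators, key=lambda g: g.cost_per_kwh):   # merit order
        used = min(g.capacity_kw, remaining)
        if used > 0:
            dispatch[g.name] = used
        remaining -= used
    return dispatch, [l.name for l in shed]

gens = [GeneratorAgent("diesel", 40, 0.30),
        GeneratorAgent("pv", 25, 0.05),
        GeneratorAgent("battery", 15, 0.10)]
lds = [LoadAgent("hospital", 35, 3),
       LoadAgent("offices", 30, 2),
       LoadAgent("lighting", 25, 1)]
dispatch, shed = control_agent(gens, lds)
print("dispatch:", dispatch)   # cheapest sources first; diesel only tops up
print("shed    :", shed)       # lighting dropped because demand exceeded capacity
```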
paper_title: An integration facility to accelerate deployment of distributed energy resources in microgrids
paper_content:
Microgrids are intentional islands formed at a facility or in an electrical distribution system that contain at least one distributed energy resource and associated loads. One of the most challenging aspects in the commercial deployment of microgrids is the ability to test various microgrid technologies and systems and to evaluate their effectiveness. This paper discusses current testing of microgrid applications and the development of a new integration facility designed to accelerate the deployment of distributed resources including renewable energy technologies in advanced distribution system operations including microgrids.
---
paper_title: Defining control strategies for MicroGrids islanded operation
paper_content:
This paper describes and evaluates the feasibility of control strategies to be adopted for the operation of a microgrid when it becomes isolated. Normally, the microgrid operates in interconnected mode with the medium voltage network; however, scheduled or forced isolation can take place. In such conditions, the microgrid must have the ability to operate stably and autonomously. An evaluation of the need of storage devices and load shedding strategies is included in this paper.
---
paper_title: Self-Reconfigurable Electric Power Distribution System using Multi-Agent Systems
paper_content:
Electric power distribution systems (EPDS) can be found almost everywhere, from ship power systems to data centers. In many critical applications, there is a need to maintain minimal operating capability under fault conditions. Therefore, it is necessary to develop energy distribution control techniques, which allow the implementation of a self-reconfigurable EPDS. This research project focuses on the application of multi-agent systems (MAS) to develop a self-reconfigurable EPDS. MAS are composed of multiple interacting software elements, known as agents. An agent is an abstraction that describes autonomous software that "acts on behalf of a user or another program". A prototype of a MAS is proposed to reconfigure an electric system in order to maximize the number of served loads with highest priority. We used a shipboard simulation test system based on a zonal architecture. The test system is simulated using the Matlab™ Simulink™ SimPowerSystems toolbox, and the MAS was implemented using the Java™ programming language and the JADE platform.
---
paper_title: Software agents: An overview
paper_content:
Agent software is a rapidly developing area of research. However, the overuse of the word ‘agent’ has tended to mask the fact that, in reality, there is a truly heterogeneous body of research being carried out under this banner. This overview paper presents a typology of agents. Next, it places agents in context, defines them and then goes on, inter alia, to overview critically the rationales, hypotheses, goals, challenges and state-of-the-art demonstrators of the various agent types in our typology. Hence, it attempts to make explicit much of what is usually implicit in the agents literature. It also proceeds to overview some other general issues which pertain to all the types of agents in the typology. This paper largely reviews software agents, and it also contains some strong opinions that are not necessarily widely accepted by the agent community.
---
paper_title: Agent-oriented programming
paper_content:
A new computational framework is presented, called agent-oriented programming (AOP), which can be viewed as a specialization of object-oriented programming. The state of an agent consists of components such as beliefs, decisions, capabilities, and obligations; for this reason the state of an agent is called its mental state. The mental state of agents is described formally in an extension of standard epistemic logics: besides temporalizing the knowledge and belief operators, AOP introduces operators for obligation, decision, and capability. Agents are controlled by agent programs, which include primitives for communicating with other agents. In the spirit of speech act theory, each communication primitive is of a certain type: informing, requesting, offering, and so on. This article presents the concept of AOP, discusses the concept of mental state and its formal underpinning, defines a class of agent interpreters, and then describes in detail a specific interpreter that has been implemented.
---
paper_title: Intelligent Agents in Design
paper_content:
In this paper we argue that learning or adaptation ability should be included in the basic set of features characterizing an intelligent agent in design. We propose a collection of attributes describing agents, which are grouped into several categories. Next, we present the results of a detailed study of all agents for design, which were discussed during the First International Workshop on Agents in Design at MIT in August of 2002. A statistical analysis of their attributes has been conducted and its results are reported to suggest future evolution of agents in design. Finally, we briefly overview the topic of Directed Evolution and use its paradigms to predict further development of agents in design. The paper also provides our initial conclusions and suggests further research.
---
paper_title: A survey of Agent-Oriented Software Engineering
paper_content:
Agent-Oriented Software Engineering is one of the most recent contributions to the field of Software Engineering. It has several benefits compared to existing development approaches, in particular the ability to let agents represent high-level abstractions of active entities in a software system. This paper gives an overview of recent research and industrial applications of both general high-level methodologies and more specific design methodologies for industry-strength software engineering.
---
paper_title: Intelligent Distributed Autonomous Power Systems (IDAPS)
paper_content:
The electric power system is an enabling infrastructure that supports the operation of other critical infrastructures and thus the economic well-being of a nation. It is, therefore, very important to design for resiliency and autonomous reconfigurability in the electric power grid to guard against man-made and natural disasters. One way to assure such self-healing characteristics in an electric power system is to design for small and autonomous subsets of the larger grid. This paper presents the concept of a specialized microgrid called an intelligent distributed autonomous power system (IDAPS). The IDAPS microgrid aims at intelligently managing customer-owned distributed energy resources such that these assets can be shared in an autonomous grid both during normal and outage operations. The proposed concept is expected to make significant contributions during emergency conditions, as well as creating a new market for electricity transaction among customers.
---
|
Title: Overview of Multi-Agent Approach for Micro-Grid Energy Management
Section 1: INTRODUCTION
Description 1: This section introduces how Multi-Agent Systems (MAS) are used for energy management on distributed control and explains the aim and scope of the paper.
Section 2: RENEWABLE ENERGY MICRO-GRID
Description 2: This section elaborates on the concept of Micro-Grids, their operation, control, and the role of renewable energy technologies within them.
Section 3: MULTI-AGENT SYSTEMS
Description 3: This section provides a general overview of MAS, their concepts, frameworks, environments, communication, categorization, methodologies, and architectures.
Section 4: MAS MICRO-GRID ENERGY MANAGEMENT
Description 4: This section discusses the application of MAS specifically in the context of Micro-Grid energy management, highlighting the advantages and functions of MAS in this domain.
Section 5: Advantages of MAS in Micro-Grids
Description 5: This section outlines the benefits of implementing MAS within Micro-Grids, including flexibility, fault-tolerance, and efficiency in solving complex problems.
Section 6: CONCLUSION
Description 6: This section summarizes the paper, highlighting the key points discussed and emphasizing the importance of MAS in future energy management systems and smart grids.
|
Efficient Data Dissemination Techniques in VANETs: A Review
| 8 |
---
|
Title: Efficient Data Dissemination Techniques in VANETs: A Review
Section 1: Introduction
Description 1: Introduce the concept of Vehicular Ad Hoc Networks (VANETs) and their significance in Intelligent Transportation Systems (ITS).
Section 2: Background
Description 2: Provide background information on VANETs, including their characteristics, architecture, and communication capabilities.
Section 3: Data Dissemination Challenges
Description 3: Discuss the unique challenges associated with data dissemination in VANETs due to their dynamic topology and frequent disconnections.
Section 4: Existing Protocols
Description 4: Review various categories of protocols designed for efficient and reliable working of VANETs.
Section 5: Efficient Data Dissemination Techniques
Description 5: Present an overview of simple and robust dissemination techniques that handle data dissemination effectively in both dense and sparse vehicular networks.
Section 6: Categorization and Cases
Description 6: Explain the proposed technique that divides users into two categories and considers three different cases for improving data dissemination.
Section 7: Applications
Description 7: Describe the various attractive applications in VANETs that benefit from efficient data dissemination.
Section 8: Conclusion
Description 8: Summarize the key points discussed and highlight the importance of efficient data dissemination in VANETs.
|
A Survey on Time-aware Business Process Modeling
| 7 |
---
paper_title: Formal verification of business processes with temporal and resource constraints
paper_content:
The correctness of business process models is critical for IT system development. The properties of business processes need to be analyzed when they are designed. In particular, business processes usually have various constraints on time and resources, which may cause serious problems like bottlenecks and deadlocks. In this paper, we propose an approach based on the model checking technique for verifying business process models with temporal and resource constraints. First, we extend Business Process Modeling Notation (BPMN) to handle these constraints. Then, we provide a mapping of the business process models described with this extended BPMN onto timed automata that can be verified by the UPPAAL model checker. This approach helps to eliminate various problems with time and resources in the early phase of development, and enables the quality assurance of business process models.
---
paper_title: Representation, Verification, and Computation of Timed Properties in Web
paper_content:
In this paper we address the problem of qualitative and quantitative analysis of timing aspects of Web service compositions defined as a set of BPEL4WS processes. We introduce a formalism, called Web Service Timed State Transition Systems (WSTTS), to capture the timed behavior of the composite web services. We also exploit an interval temporal logic to express complex timed assumptions and requirements on the system's behavior. Building on top of this formalization, we provide techniques and tools for model-checking BPEL4WS compositions against time-related requirements. We also present a symbolic algorithm that can be used to compute duration bounds of behavioral intervals that satisfy such requirements. We perform a preliminary experimental evaluation of our approach and tools with the help of an e-Government case study.
---
paper_title: A Survey on Web Services Composition
paper_content:
Due to the web services' heterogeneous nature, which stems from the definition of several XML-based standards to overcome platform and language dependence, web services have become an emerging and promising technology to design and build complex inter-enterprise business applications out of single web-based software components. To establish a global component market and to enforce extensive software reuse, service composition has attracted increasing research interest and effort. This paper discusses the urgent need for service composition and the technologies required to perform it. It also presents several different composition strategies, based on some currently existing composition platforms and frameworks representing first implementations of state-of-the-art technologies, and gives an outlook on essential future research work.
---
paper_title: Dynamic Checking and Solution to Temporal Violations in Concurrent Workflow Processes
paper_content:
Current methods that deal with concurrent workflow temporal violations only focus on checking whether there are any temporal violations. They are not able to point out the path where the temporal violation happens and thus cannot provide specific solutions. This paper presents an approach based on a sprouting graph to find out the temporal violation paths in concurrent workflow processes as well as possible solutions to resolve the temporal violations. First, we model concurrent workflow processes with time workflow net and a sprouting graph. Second, we update the sprouting graph at the checking point. Finally, we find out the temporal violation paths and provide solutions. We apply the approach in a real business scenario to illustrate its advantages: 1) It can dynamically check temporal constraints of multiple concurrent workflow processes with resource constraints; 2) it can give the path information in the workflow processes where the temporal violation happens; and 3) it can provide solution to the temporal violation based on the analysis.
---
paper_title: Conceptual Modeling of Temporal Clinical Workflows
paper_content:
The diffusion of clinical guidelines to describe the proper way to deal with patients' situations is spreading out and opens new issues in the context of modeling and managing (temporal) information about medical activities. Guidelines can be seen as processes describing the sequence of activities to be executed, and thus approaches proposed in the business context can be used to model them. In this paper, we propose a general conceptual workflow model, considering both activities and their temporal properties, and focus on the representation of clinical guidelines by the proposed model.
---
paper_title: Formal verification of business processes with temporal and resource constraints
paper_content:
The correctness of business process models is critical for IT system development. The properties of business processes need to be analyzed when they are designed. In particular, business processes usually have various constraints on time and resources, which may cause serious problems like bottlenecks and deadlocks. In this paper, we propose an approach based on the model checking technique for verifying business process models with temporal and resource constraints. First, we extend Business Process Modeling Notation (BPMN) to handle these constraints. Then, we provide a mapping of the business process models described with this extended BPMN onto timed automata that can be verified by the UPPAAL model checker. This approach helps to eliminate various problems with time and resources in the early phase of development, and enables the quality assurance of business process models.
---
paper_title: Time Petri nets for workflow modelling and analysis
paper_content:
Time management in workflow processes is crucial in determining and controlling the life cycle of business activities. In our model, a temporal interval as an execution duration is assigned to every workflow task. While the real time taken by the task is nondeterministic and unpredictable, it may be between the bounds thus specified. We extend workflow nets (WF-nets) with time intervals and call the new nets Time WF-nets (TWF-nets). Extending our previous results on timed Petri nets, we show that certain behavioural properties of workflow processes modelled in TWF-nets can be verified. Using a clinical health care process as a case study, we also illustrate the modelling of shared resources available at different times.
---
paper_title: Towards Trustworthy Composite Service Through Business Process Model Verification
paper_content:
The Business Process Modeling Notation (BPMN) is a standard for modeling business processes in the early phases of systems development. Model verification is an important means to guarantee the trustworthiness of composite services. The verification of a model, especially a model with strict time constraints, is a challenge in the field of trusted composite services. Whether a model is trustworthy depends not only on the model structure but also on quantitative properties such as time properties. In this paper, we propose a mapping from BPMN to time Petri nets, and apply verification techniques on that basis. The algorithm we present can be used to check the model structure and the time choreography.
---
paper_title: Controllability in Temporal Conceptual Workflow Schemata
paper_content:
Workflow technology has emerged as one of the leading technologies in modelling, redesigning, and executing business processes. Currently available workflow management systems (WfMSs) and research prototypes offer very limited support for the definition, detection, and management of temporal constraints over business processes. In this paper, we propose a new advanced workflow conceptual model for expressing time constraints in business processes and, in particular, we introduce and discuss the concept of controllability for workflow schemata and its evaluation at process design time. Controllability refers to the capability of executing a workflow for any possible duration of tasks. Since in several situations durations of tasks cannot be decided by WfMSs, even though the minimum and the maximum durations for each task are known, checking controllability is stronger than verifying the consistency of the workflow temporal constraints.
---
paper_title: A relative timed semantics for BPMN
paper_content:
We describe a relative-timed semantic model for Business Process Modelling Notation (BPMN). We define the semantics in the language of Communicating Sequential Processes (CSP). This model augments our untimed model by introducing the notion of relative time in the form of delays chosen non-deterministically from a range. We illustrate the application by an example. We also show some properties relating the timed semantics and BPMN's untimed process semantics by exploiting CSP refinement. Our timed semantics allows behavioural properties of BPMN diagrams to be mechanically verified via automatic model-checking as provided by the FDR tool.
---
paper_title: The tool TINA – Construction of abstract state spaces for petri nets and time petri nets
paper_content:
In addition to the graphic-editing facilities, the software tool Tina proposes the construction of a number of representations for the behaviour of Petri nets or Time Petri nets. Various techniques are used to extract views of the behaviour of nets, preserving certain classes of properties of their state spaces. For Petri nets, these abstractions help prevent combinatorial explosion, relying on so-called partial order techniques such as covering steps and/or persistent sets. For Time Petri nets, which have, in general, infinite state spaces, they provide a finite symbolic representation of their behaviour in terms of state classes.
---
paper_title: Timed modelling and analysis in Web service compositions
paper_content:
In this paper we present an approach for modelling and analyzing time-related properties of Web service compositions defined as a set of BPEL4WS processes. We introduce a formalism, called Web service timed state transition systems (WSTTS), to capture the timed behavior of the composite Web services. We also exploit an interval temporal logic to express complex timed assumptions and requirements on the system's behavior. Building upon this formalization, we provide techniques and tools for model checking BPEL4WS compositions against time-related requirements. We perform a preliminary experimental evaluation of our approach and tools with the help of the e-government case study.
---
paper_title: On Temporal Abstractions of Web Service Protocols
paper_content:
Web services are increasingly gaining acceptance as a framework for facilitating application-to-application interactions within and across enterprises. They provide abstractions and technologies for exposing enterprise applications as services and make them accessible programmatically through standardized interfaces. However, tools supporting service development today provide little support for high-level modeling and analysis of abstractions at higher levels of the services stack, and in particular there is little support for protocol modeling and management. We believe that protocol modeling and management will indeed be key in supporting Web service development and interaction, and that developing formal models and a protocol algebra will have a positive impact similar to the one that the relational model and the relational algebra had in database technology. When developing our framework for service protocol modeling, analysis, and management [1,2], we identified the need for representing temporal abstractions in protocol descriptions. In particular, in our analysis of the characteristics and requirements of service protocols in terms of description languages, we found that, in addition to message choreography constraints, protocol specification languages need to cater for time-sensitive conversations (i.e., conversations that are characterized by temporal constraints on when an operation must or can be invoked). For example, a protocol may specify that a purchase order message is accepted only if it is received within 24 hours after a quotation has been received. In this paper, we discuss the augmentation of business protocols with specifications of temporal abstractions (called timed protocols). Then we motivate, through examples, a need for analyzing timed protocol specifications, and specifically for identifying if and under what conditions two services, characterized by certain timed protocols, can interact. Technical details are given in an extended version of this paper [3] where a formal timed business protocol model is presented and operators that enable characterizing compatibility and replaceability classes for timed protocols are described.
---
paper_title: Analysis and management of Web Service protocols
paper_content:
In the area of Web services and service-oriented architectures, business protocols are rapidly gaining importance and mindshare as a necessary part of Web service descriptions. Their immediate benefit is that they provide developers with information on how to write clients that can correctly interact with a given service or with a set of services. In addition, once protocols become an accepted practice and service descriptions become endowed with protocol information, the middleware can be significantly extended to better support service development, binding, and execution in a number of ways, considerably simplifying the whole service life-cycle. This paper discusses the different ways in which the middleware can leverage protocol descriptions, and focuses in particular on the notions of protocol compatibility, equivalence, and replaceability. They characterise whether two services can interact based on their protocol definition, whether a service can replace another in general or when interacting with specific clients, and which are the set of possible interactions among two services.
---
paper_title: Representation, Verification, and Computation of Timed Properties in Web
paper_content:
In this paper we address the problem of qualitative and quantitative analysis of timing aspects of Web service compositions defined as a set of BPEL4WS processes. We introduce a formalism, called Web Service Timed State Transition Systems (WSTTS), to capture the timed behavior of the composite web services. We also exploit an interval temporal logic to express complex timed assumptions and requirements on the system's behavior. Building on top of this formalization, we provide techniques and tools for model-checking BPEL4WS compositions against time-related requirements. We also present a symbolic algorithm that can be used to compute duration bounds of behavioral intervals that satisfy such requirements. We perform a preliminary experimental evaluation of our approach and tools with the help of an e-Government case study.
---
paper_title: Specifying and Monitoring Temporal Properties in Web Services Compositions
paper_content:
Current Web service composition approaches and languages such as WS-BPEL do not allow temporal constraints to be defined in a declarative and separate way. Also, it is not possible to verify if there are contradictions between the temporal constraints implemented in the composition. These limitations lead to maintainability and correctness problems. In this paper, we tackle these problems through a novel approach to temporal constraints in Web service compositions, which combines formal methods and aspect-oriented programming. In this approach, we use a powerful and expressive formal language, called XTUS-Automata, for specifying time-related properties and we introduce specification patterns that ease the definition of such constraints. The formal specifications are translated automatically into AO4BPEL aspects, which ensure the runtime monitoring of the temporal constraints. Our approach enables a declarative, separate, and verifiable specification of temporal properties and it automatically generates modular enforcement code for those properties.
---
paper_title: Towards Timed Requirement Verification for Service Choreographies
paper_content:
In this paper, we propose an approach for analyzing and validating a composition of services with respect to real time properties. We consider services defined using an extension of the Business Process Execution Language (BPEL) where timing constraints can be associated to the execution of an activity or define delays between events. The goal is to check whether a choreography of timed services satisfies given complex real time requirements. Our approach is based on a formal interpretation of timed choreographies in the Fiacre verification language that defines a precise model for the behavior of services and their timed interactions. We also rely on a logic-based language for property definition to formalize complex real-time requirements and on specific tooling for model-checking Fiacre specifications.
---
paper_title: Negotiating Deadline Constraints in Inter-organizational Logistic Systems: A Healthcare Case Study
paper_content:
Current logistics methods are more focused on strategic goals and do not deal with short term objectives, such as, reactivity and real-time constraints. Automated logistics management systems tend to facilitate information sharing between companies, in order to support cooperative strategies, improve productivity, control service quality and reduce administrative costs. In this paper, we discuss the application of Inter-Organizational Workflows (IOW) for automating logistic procedures in a collaborative context. A case study of healthcare process is presented, and focuses on the negotiations aspects of temporal constraints in critical situations. We show how our proposed temporal extension of the CoopFlow approach, brings advantages to automating logistics operational procedures, by providing real-time data knowledge and decision routing for the case of emergency healthcare.
---
paper_title: Temporal Consistency of View Based Interorganizational Workflows
paper_content:
Interorganizational workflows are a major step in automating B2B electronic commerce and to support collaborations of organizations on the technical level. Process views are an important conceptual modelling approach for interorganizational workflows as they allow interaction and communication while internal and private parts of the process can be hidden. However, it is essential to guarantee that an interorganizational workflow is free of conflicts and the overall quality assurances of the whole workflow can be achieved. This paper proposes an approach for checking temporal consistency of interorganizational workflows crossing boundaries of organizations.
---
paper_title: Temporal Conformance of Federated Choreographies
paper_content:
Web service composition is a new way for implementing business processes. In particular, a choreography supports modeling and enactment of interorganizational business processes consisting of autonomous organizations. Temporal constraints are important quality criteria. We propose a technique for modeling temporal constraints in choreographies and orchestrations, checking whether the orchestrations satisfy the temporal constraints of a choreography and compute internal deadlines for the activities in an interorganizational workflow.
---
paper_title: Satisfaction and coherence of deadline constraints in inter-organizational workflows
paper_content:
The integration of time constraints in Inter-Organizational Workflows (IOWs) is an important issue in the workflow research field. Since each partner exposes a limited version of his business process, some information is kept hidden and not visible to all partners. The inter-enterprise business process, however, is obtained by joining all activities and control flows that have relevant roles within the context of the global operation. It should be noted that this composition process does not intrinsically guarantee the satisfaction of any critical deadline constraints that may be imposed by the partners. Obviously, expressing and satisfying time deadlines is important for modern business processes that need to be optimized for efficiency and extreme competitiveness. In this paper, we propose a temporal extension to CoopFlow, an existing approach for designing and modeling IOW, based on Time Petri Net models. A method for expressing and publishing sensible time deadlines, by the partners, is given. We also give a systematic method assuring the verification and the consistency of the published time constraints within the context of the global business process, while maintaining the core advantage of CoopFlow, that each partner can keep the critical part of his business process private.
---
paper_title: Dynamic Checking and Solution to Temporal Violations in Concurrent Workflow Processes
paper_content:
Current methods that deal with concurrent workflow temporal violations only focus on checking whether there are any temporal violations. They are not able to point out the path where the temporal violation happens and thus cannot provide specific solutions. This paper presents an approach based on a sprouting graph to find out the temporal violation paths in concurrent workflow processes as well as possible solutions to resolve the temporal violations. First, we model concurrent workflow processes with time workflow net and a sprouting graph. Second, we update the sprouting graph at the checking point. Finally, we find out the temporal violation paths and provide solutions. We apply the approach in a real business scenario to illustrate its advantages: 1) It can dynamically check temporal constraints of multiple concurrent workflow processes with resource constraints; 2) it can give the path information in the workflow processes where the temporal violation happens; and 3) it can provide solution to the temporal violation based on the analysis.
---
paper_title: Negotiating Deadline Constraints in Inter-organizational Logistic Systems: A Healthcare Case Study
paper_content:
Current logistics methods are more focused on strategic goals and do not deal with short term objectives, such as, reactivity and real-time constraints. Automated logistics management systems tend to facilitate information sharing between companies, in order to support cooperative strategies, improve productivity, control service quality and reduce administrative costs. In this paper, we discuss the application of Inter-Organizational Workflows (IOW) for automating logistic procedures in a collaborative context. A case study of healthcare process is presented, and focuses on the negotiations aspects of temporal constraints in critical situations. We show how our proposed temporal extension of the CoopFlow approach, brings advantages to automating logistics operational procedures, by providing real-time data knowledge and decision routing for the case of emergency healthcare.
---
paper_title: Temporal Consistency of View Based Interorganizational Workflows
paper_content:
Interorganizational workflows are a major step in automating B2B electronic commerce and to support collaborations of organizations on the technical level. Process views are an important conceptual modelling approach for interorganizational workflows as they allow interaction and communication while internal and private parts of the process can be hidden. However, it is essential to guarantee that an interorganizational workflow is free of conflicts and the overall quality assurances of the whole workflow can be achieved. This paper proposes an approach for checking temporal consistency of interorganizational workflows crossing boundaries of organizations.
---
paper_title: Conceptual Modeling of Temporal Clinical Workflows
paper_content:
The diffusion of clinical guidelines to describe the proper way to deal with patients' situations is spreading out and opens new issues in the context of modeling and managing (temporal) information about medical activities. Guidelines can be seen as processes describing the sequence of activities to be executed, and thus approaches proposed in the business context can be used to model them. In this paper, we propose a general conceptual workflow model, considering both activities and their temporal properties, and focus on the representation of clinical guidelines by the proposed model.
---
paper_title: Formal verification of business processes with temporal and resource constraints
paper_content:
The correctness of business process models is critical for IT system development. The properties of business processes need to be analyzed when they are designed. In particular, business processes usually have various constraints on time and resources, which may cause serious problems like bottlenecks and deadlocks. In this paper, we propose an approach based on the model checking technique for verifying business process models with temporal and resource constraints. First, we extend Business Process Modeling Notation (BPMN) to handle these constraints. Then, we provide a mapping of the business process models described with this extended BPMN onto timed automata that can be verified by the UPPAAL model checker. This approach helps to eliminate various problems with time and resources in the early phase of development, and enables the quality assurance of business process models.
---
paper_title: Temporal Conformance of Federated Choreographies
paper_content:
Web service composition is a new way for implementing business processes. In particular, a choreography supports modeling and enactment of interorganizational business processes consisting of autonomous organizations. Temporal constraints are important quality criteria. We propose a technique for modeling temporal constraints in choreographies and orchestrations, checking whether the orchestrations satisfy the temporal constraints of a choreography and compute internal deadlines for the activities in an interorganizational workflow.
---
paper_title: Towards Trustworthy Composite Service Through Business Process Model Verification
paper_content:
The Business Process Modeling Notation (BPMN) is a standard for modeling business processes in the early phases of systems development. Model verification is an important means to guarantee the trustworthiness of composite services. The verification of a model, especially a model with strict time constraints, is a challenge in the field of trusted composite services. Whether a model is trustworthy depends not only on the model structure but also on quantitative properties such as time properties. In this paper, we propose a mapping from BPMN to time Petri nets, and apply verification techniques on that basis. The algorithm we present can be used to check the model structure and the time choreography.
---
paper_title: Representation, Verification, and Computation of Timed Properties in Web
paper_content:
In this paper we address the problem of qualitative and quantitative analysis of timing aspects of Web service compositions defined as a set of BPEL4WS processes. We introduce a formalism, called Web Service Timed State Transition Systems (WSTTS), to capture the timed behavior of the composite web services. We also exploit an interval temporal logic to express complex timed assumptions and requirements on the system's behavior. Building on top of this formalization, we provide techniques and tools for model-checking BPEL4WS compositions against time-related requirements. We also present a symbolic algorithm that can be used to compute duration bounds of behavioral intervals that satisfy such requirements. We perform a preliminary experimental evaluation of our approach and tools with the help of an e-Government case study.
---
paper_title: Specifying and Monitoring Temporal Properties in Web Services Compositions
paper_content:
Current Web service composition approaches and languages such as WS-BPEL do not allow temporal constraints to be defined in a declarative and separate way. Also, it is not possible to verify if there are contradictions between the temporal constraints implemented in the composition. These limitations lead to maintainability and correctness problems. In this paper, we tackle these problems through a novel approach to temporal constraints in Web service compositions, which combines formal methods and aspect-oriented programming. In this approach, we use a powerful and expressive formal language, called XTUS-Automata, for specifying time-related properties and we introduce specification patterns that ease the definition of such constraints. The formal specifications are translated automatically into AO4BPEL aspects, which ensure the runtime monitoring of the temporal constraints. Our approach enables a declarative, separate, and verifiable specification of temporal properties and it automatically generates modular enforcement code for those properties.
---
paper_title: Satisfaction and coherence of deadline constraints in inter-organizational workflows
paper_content:
The integration of time constraints in Inter-Organizational Workflows (IOWs) is an important issue in the workflow research field. Since each partner exposes a limited version of his business process, some information is kept hidden and not visible to all partners. The inter-enterprise business process, however, is obtained by joining all activities and control flows that have relevant roles within the context of the global operation. It should be noted that this composition process does not intrinsically guarantee the satisfaction of any critical deadline constraints that may be imposed by the partners. Obviously, expressing and satisfying time deadlines is important for modern business processes that need to be optimized for efficiency and extreme competitiveness. In this paper, we propose a temporal extension to CoopFlow, an existing approach for designing and modeling IOW, based on Time Petri Net models. A method for expressing and publishing sensible time deadlines, by the partners, is given. We also give a systematic method assuring the verification and the consistency of the published time constraints within the context of the global business process, while maintaining the core advantage of CoopFlow, that each partner can keep the critical part of his business process private.
---
paper_title: Towards Timed Requirement Verification for Service Choreographies
paper_content:
In this paper, we propose an approach for analyzing and validating a composition of services with respect to real time properties. We consider services defined using an extension of the Business Process Execution Language (BPEL) where timing constraints can be associated to the execution of an activity or define delays between events. The goal is to check whether a choreography of timed services satisfies given complex real time requirements. Our approach is based on a formal interpretation of timed choreographies in the Fiacre verification language that defines a precise model for the behavior of services and their timed interactions. We also rely on a logic-based language for property definition to formalize complex real-time requirements and on specific tooling for model-checking Fiacre specifications.
---
paper_title: Constraint-based workflow models: Change made easy
paper_content:
The degree of flexibility of workflow management systems heavily influences the way business processes are executed. Constraint-based models are considered to be more flexible than traditional models because of their semantics: everything that does not violate constraints is allowed. Although constraint-based models are flexible, changes to process definitions might be needed to comply with evolving business domains and exceptional situations. Flexibility can be increased by run-time support for dynamic changes - transferring instances to a new model - and ad-hoc changes - changing the process definition for one instance. In this paper we propose a general framework for a constraint-based process modeling language and its implementation. Our approach supports both ad-hoc and dynamic change, and the transfer of instances can be done easier than in traditional approaches.
---
paper_title: Controllability in Temporal Conceptual Workflow Schemata
paper_content:
Workflow technology has emerged as one of the leading technologies in modelling, redesigning, and executing business processes. Currently available workflow management systems (WfMSs) and research prototypes offer very limited support for the definition, detection, and management of temporal constraints over business processes. In this paper, we propose a new advanced workflow conceptual model for expressing time constraints in business processes and, in particular, we introduce and discuss the concept of controllability for workflow schemata and its evaluation at process design time. Controllability refers to the capability of executing a workflow for any possible duration of tasks. Since in several situations durations of tasks cannot be decided by WfMSs, even though the minimum and the maximum durations for each task are known, checking controllability is stronger than verifying the consistency of the workflow temporal constraints.
---
paper_title: A relative timed semantics for BPMN
paper_content:
We describe a relative-timed semantic model for Business Process Modelling Notation (BPMN). We define the semantics in the language of Communicating Sequential Processes (CSP). This model augments our untimed model by introducing the notion of relative time in the form of delays chosen non-deterministically from a range. We illustrate the application by an example. We also show some properties relating the timed semantics and BPMN's untimed process semantics by exploiting CSP refinement. Our timed semantics allows behavioural properties of BPMN diagrams to be mechanically verified via automatic model-checking as provided by the FDR tool.
---
|
Title: A Survey on Time-aware Business Process Modeling
Section 1: INTRODUCTION
Description 1: Provide an introduction to the context of business process modeling, the importance of addressing temporal constraints in inter-organisational business processes, and the goals of this paper.
Section 2: OVERVIEW ON THE EXISTING TEMPORAL CONSTRAINTS SPECIFICATION METHODS
Description 2: Offer a classification of the existing models for specifying temporal constraints in business processes, including workflows, web service composition, and inter-organisational domain.
Section 3: Temporal constraints in the workflow research area
Description 3: Detail the approaches to specifying and verifying temporal constraints within the workflow research area, including notable methods and their limitations.
Section 4: Temporal constraints in web service composition research field
Description 4: Discuss the methods for temporal constraint specification and verification in the web service composition research field, highlighting key approaches and their strengths and limitations.
Section 5: Temporal constraints in the inter-organisational research field
Description 5: Explain the efforts and methods used to specify and verify temporal constraints in inter-organisational business processes, addressing challenges and limitations.
Section 6: EVALUATION AND DISCUSSION
Description 6: Provide an evaluation of the surveyed methods, highlight the effectiveness of different approaches, and discuss observed strengths, weaknesses, and gaps in current research.
Section 7: RESEARCH CHALLENGES AND CONCLUSION
Description 7: Summarize the key findings and insights from the survey, outline the major research challenges that need to be addressed, and conclude with remarks on future directions in time-aware business process modeling.
|
Telecommuting's Past and Future: A Literature Review and Research Agenda
| 11 |
---
paper_title: Remote office work: changing work patterns in space and time
paper_content:
Remote work refers to organizational work that is performed outside of the normal organizational confines of space and time. The term telecommuting refers to the substitution of communications capabilities for travel to a central work location. Office automation technology permits many office workers to be potential telecommuters in that their work can be performed remotely with computer and communications support. This paper examines some behavioral, organizational, and social issues surrounding remote work, particularly work at home. An exploratory study was conducted of 32 organizational employees who were working at home. Important characteristics of jobs that can be performed at home were: minimum physical requirements, individual control over work pace, defined deliverables, a need for concentration, and a relatively low need for communication. The individuals who worked at home successfully were found to be highly self-motivated and self-disciplined and to have skills which provided them with bargaining power. They also made the arrangement either because of family requirements or because they preferred few social contacts beyond family.
---
paper_title: The alternative workplace: changing where and how people work.
paper_content:
Today many organizations, including AT&T and IBM, are pioneering the alternative workplace--the combination of nontraditional work practices, settings, and locations that is beginning to supplement traditional offices. This is not a fad. Although estimates vary widely, it is safe to say that some 30 million to 40 million people in the United States are now either telecommuters or home-based workers. What motivates managers to examine how people spend their time at the office and where else they might do their work? Among the potential benefits for companies are reduced costs, increased productivity, and an edge in vying for and keeping talented employees. They can also capture government incentives and avoid costly sanctions. But at the same time, alternative workplace programs are not for everyone. Indeed, such programs can be difficult to adopt, even for those organizations that seem to be most suited to them. Ingrained behaviors and practical hurdles are hard to overcome. And the challenges of managing both the cultural changes and systems improvements required by an alternative workplace initiative are substantial. How should senior managers think about alternative workplace programs? What are the criteria for determining whether the alternative workplace is right for a given organization? What are the most common pitfalls in implementing alternative workplace programs? The author provides the answers to these questions in his examination of this new frontier of where and how people work.
---
paper_title: Home Is Where the Work Is
paper_content:
Some work full-time and others part-time. Some make a lot of money, and others barely scrape by. But home-based workers are beginning to account for more and more of New York State's economy. Mark Levine and Deirdre Martin have commutes most of us would envy. His is about thirty paces, up two flights of stairs, and takes all of fifteen seconds. Hers is even shorter, up just one flight of stairs. While the rest of us are running around looking for the car keys and trying to get the kids out the door, they're already at work at their computers or on the phone with colleagues in New York City. Levine and Martin, freelance writers of books and magazine articles, work in separate rooms of their home in Ithaca, New York: he in a third-floor attic office and she in an office on the second floor. The two are among the more than 20 million Americans who spend at least part of the week working in or out of their homes. According to Ramona Heck, an associate professor of consumer economics and housing and the J. Thomas Clark Professor of Entrepreneurship and Personal Enterprise, they're part of a trend that has grown enormously in the last decade. "Home-based work has increased for three reasons," Heck says. "First, many women in the eighties wanted to work and bring in an income and still have time for their children. Second, the availability of technologies such as computers, printers, fax machines, and high-speed modems has made it possible to do work at home that before could only have been done at the office. And third, employers are realizing that letting employees work at home cuts down on office expenses and other costs." Heck recently completed a multi-year study of 103 home-based workers in New York State to determine the value of home-based work to the state's economy. Her study was part of a larger project involving researchers from eight other universities who studied 796 home-based workers in Hawaii, Iowa, Michigan, Missouri, Ohio, Pennsylvania, Utah, and Vermont. Working with graduate student Likwang Chen, Heck estimated the value of home-based work to New York's economy, developed a profile of the average home-based worker in New York State, and compared it with home-based workers in the eight other states. Home-based workers have been the subject of just a few prior studies, none of which focused on New York State workers. The U.S. Census Bureau included information about home-based work in its 1980 census, and the Department of Labor conducted a study in 1985. A follow-up study was conducted in 1991. "One problem is that the definition of what constitutes home-based work differed in all these studies," Heck says. "For our study, we defined home-based workers as people who worked at the home or from the home at least six hours a week throughout the year. They had to have been involved in the activity at least twelve months, and they had to have had no other office from which the work was conducted. We did not include farmers unless they performed a value-added activity to what they grew, such as producing maple syrup or operating a retail market on the farm." The study divided the subjects into two groups: business owners, who operated their businesses in the home or from the home, and wage workers, who worked at home but were employed by someone else. In the New York portion of the study, Heck interviewed 75 business owners and 28 wage workers. The business owners included contractors, arts and crafts retailers, sales and marketing people, and professional and technical people. 
Among the wage workers, more than half were involved in sales and marketing and another 20 percent were involved in professional or technical occupations. The results of the study indicate that home-based workers have a significant impact on New York State's economy. Based on the sampling scheme, Heck and Chen estimate there are approximately 250,000 home-based workers in the state, exclusive of New York City, and that around 63,000 of them are in rural areas. …
---
paper_title: Patterns of telecommuting and their consequences: Framing the research agenda
paper_content:
Abstract While there are over 7 million telecommuters in the U.S. today, there has been little empirical research and virtually no theoretical work on telecommuting. Drawing from the literatures on contingent employment, job design and social isolation, this article presents a theoretical framework for understanding how different constellations of telecommuting arrangements and job characteristics lead to different patterns of employee attitudes and behaviors. After presenting a series of propositions, the article concludes with suggestions for the empirical testing of these propositions and a discussion of the implications for management practice.
---
paper_title: Inhibitors and motivators for telework: some Finnish experiences
paper_content:
Nordic countries have traditionally been the forerunners in both usage of telecommunication and restructuring of working life. Both elements are strongly involved in telework so there might be a lesson to be learnt from Nordic telework projects. In this article four Finnish telework initiatives are studied. The reasons for starting them are sought and their results are evaluated. Further factors making the daily telework easier or more difficult are explicated. Conclusions are drawn from the cases, and their characteristics are compared with those of other European telework initiatives.
---
paper_title: The experience of teleworking: an annotated review
paper_content:
The paper reviews the contemporary literature on the experience of teleworking. Particular attention is paid to the socializing aspects of work and its comparative absence when working from home; economic considerations, both for homeworkers and for the firms; work satisfaction and motivation; supervision; roles and gender issues in homeworking; the organization of time and space; and, lastly, questions of self-discipline. The evidence reviewed is based on various teleworking trials conducted mainly during the 1980s; this information is supplemented by original research conducted by the authors which investigated the pros and cons raised by British Telecom operators who were due to take part in a teleworking trial. These operators anticipated many of the issues faced by those who actually had teleworking experience.
---
paper_title: The alternative workplace: changing where and how people work.
paper_content:
Today many organizations, including AT&T and IBM, are pioneering the alternative workplace--the combination of nontraditional work practices, settings, and locations that is beginning to supplement traditional offices. This is not a fad. Although estimates vary widely, it is safe to say that some 30 million to 40 million people in the United States are now either telecommuters or home-based workers. What motivates managers to examine how people spend their time at the office and where else they might do their work? Among the potential benefits for companies are reduced costs, increased productivity, and an edge in vying for and keeping talented employees. They can also capture government incentives and avoid costly sanctions. But at the same time, alternative workplace programs are not for everyone. Indeed, such programs can be difficult to adopt, even for those organizations that seem to be most suited to them. Ingrained behaviors and practical hurdles are hard to overcome. And the challenges of managing both the cultural changes and systems improvements required by an alternative workplace initiative are substantial. How should senior managers think about alternative workplace programs? What are the criteria for determining whether the alternative workplace is right for a given organization? What are the most common pitfalls in implementing alternative workplace programs? The author provides the answers to these questions in his examination of this new frontier of where and how people work.
---
paper_title: An empirical evaluation of the impacts of telecommuting on intra-organizational communication
paper_content:
This study represents a preliminary step towards developing an understanding of how telework arrangements affect intra-organizational communication. The following general research questions are addressed: (1) Do telework arrangements change the way in which teleworkers communicate with their superiors, their subordinates, their colleagues and their clients?; and (2) Do telework arrangements change the way in which managers communicate with subordinates who telework? The study, which was conducted at two Canadian federal government departments, was designed to collect information from four groups: (1) teleworkers (n = 36 at Time 2); (2) managers of teleworkers (n = 28 at Time 2); (3) co-workers of teleworkers (n = 27 at Time 2); and (4) a control group (n = 25 at Time 2). Three data collection techniques were used in this study: paper and pencil questionnaires, telephone interviews, and focus group interviews. Data were collected at three points in time: (1) two weeks prior to the start of the telework pilot; (2) three months after the telework pilot had begun; and (3) six months after the start of the telework pilot. Analysis of the data suggests that, with a few important exceptions, part-time telework arrangements have little impact on intra-organizational communication.
---
paper_title: A review of telework research : findings , new directions , and lessons for the study of modern work
paper_content:
Telework has inspired research in disciplines ranging from transportation and urban planning to ethics, law, sociology, and organizational studies. In our review of this literature, we seek answers to three questions: who participates in telework, why they do, and what happens when they do? Who teleworks remains elusive, but research suggests that male professionals and female clerical workers predominate. Notably, work-related factors like managers’ willingness are most predictive of which employees will telework. Employees’ motivations for teleworking are also unclear, as commonly perceived reasons such as commute reduction and family obligations do not appear instrumental. On the firms’ side, managers’ reluctance, forged by concerns about cost and control and bolstered by little perceived need, inhibits the creation of telework programmes. As for outcomes, little clear evidence exists that telework increases job satisfaction and productivity, as it is often asserted to do. We suggest three steps for future research that may provide richer insights: consider group and organizational level impacts to understand who telework affects, reconsider why people telework, and emphasize theory-building and links to existing organizational theories. We conclude with lessons learned from the telework literature that may be relevant to research on new work forms and workplaces.
---
paper_title: Information technology as an enabler of telecommuting
paper_content:
One of the most interesting changes in business practices is telecommuting, namely doing work in places other than the corporate offices. The extent of telecommuting has been on the rise during the 1990s and it is expected to rise rapidly during the next few years. A major driving force in the spread of telecommuting is the increased availability of cost-effective supportive information technologies. The tasks performed by telecommuters are expanding. While the early telecommuters performed repeated transactions (such as processing insurance claims at home), today's telecommuters can perform at home, or on the road, almost any task that they do at the office. Thus, their information needs have been changed. This paper examines the various tasks performed by telecommuters and surveys the major supporting information technologies. Special attention is given to electronic mail, accessibility to databases and networks, desk top teleconferencing, personal digital assistants (PDAs), screen sharing, workflow systems, idea generation, and distributed group decision making. Also, Lotus Notes is viewed as a major computing environment that will facilitate telecommuting. Technological developments in an integrated services digital network (ISDN), an asynchronous transfer mode (ATM), wireless communication, and local area network (LAN) connectivity will have a major impact on the growth of telecommuting and so will the resolution of managerial issues such as appropriate controls and security, cost-benefit justification, training and ownership and maintenance of the necessary equipment at home.
---
paper_title: Work-At-Home and the Quality of Working Life.
paper_content:
Innovations in telecommunications technology increase the possibilities of working from the home. Implications of work-at-home arrangements for the individual's quality of working life are discussed. Included are discussions of several major aspects of the work experience relevant to quality of working life, analyses of the differences along these aspects between working at home and working at a normal workplace, and speculation about the possible consequences for the individual of the transfer of jobs from employers' premises to employees' homes.
---
paper_title: Telecommuting in the Public Sector An Overview and a Survey of the States
paper_content:
Every day a growing number of Americans engage in home-based work instead of commuting to a central workplace. This article examines the development of telecommuting and its effect on productivity...
---
paper_title: Patterns of telecommuting and their consequences: Framing the research agenda
paper_content:
Abstract While there are over 7 million telecommuters in the U.S. today, there has been little empirical research and virtually no theoretical work on telecommuting. Drawing from the literatures on contingent employment, job design and social isolation, this article presents a theoretical framework for understanding how different constellations of telecommuting arrangements and job characteristics lead to different patterns of employee attitudes and behaviors. After presenting a series of propositions, the article concludes with suggestions for the empirical testing of these propositions and a discussion of the implications for management practice.
---
paper_title: Environmental effects of the computer age
paper_content:
This article reviews the effects of the computer age on our environment. Although the usefulness of computer technology is inarguably an asset in today's world, the environmental implications are not yet fully understood by the majority of computer users. The subjects discussed in this article fall in three general areas: the direct effects of computers on the computer user and the workplace (ergonomics and telecommuting); the effects of the use of computers on the environment (consumption of electrical energy and solid waste disposal); and the environmental hazards of producing computers.
---
paper_title: Knowledge creation in the telework context
paper_content:
This paper suggests that 'telework' alters the context within which teleworkers acquire knowledge. Because of sophisticated information technologies, teleworkers have greater access to on-line information and documentation. This access creates the potential for higher explicit knowledge in comparison with the traditional work environment. However, telework increases the physical distance from work and decreases the ability to socialize. Increased physical distance creates a challenge in the teleworkers' abilities to acquire tacit knowledge. Socialization, mentoring, training and documentation practices, therefore, become important for maintaining knowledge in the organization.
---
paper_title: Exploring differences in employee turnover intentions and its determinants among telecommuters and non-telecommuters
paper_content:
As telecommuting programs proliferate, a better understanding of the relationship between telecommuting and career success outcomes is required to provide human resources managers, telecommuters, and information systems managers with information to decide the future of telecommuting arrangements. This paper addresses this need by exploring whether turnover intentions and their determinants differ for telecommuters and non-telecommuters. Four hundred salespeople from one large company in the southeastern United States were asked to participate in the study. The organization entry point was the marketing director. One hundred and four telecommuting employees and one hundred and twenty-one regular employees responded, with a total of 225 usable questionnaires. Telecommuters seemed to face less role conflict and role ambiguity and tended to be happier with their supervisors and more committed to their organizations. They also showed lower satisfaction with peers and with promotion. Based on the results, recommendations are proposed for managing the implementation of telecommuting programs and their impact on the rest of the organization's employee population.
---
paper_title: Information technology as an enabler of telecommuting
paper_content:
One of the most interesting changes in business practices is telecommuting, namely doing work in places other than the corporate offices. The extent of telecommuting has been on the rise during the 1990s and it is expected to rise rapidly during the next few years. A major driving force in the spread of telecommuting is the increased availability of cost-effective supportive information technologies. The tasks performed by telecommuters are expanding. While the early telecommuters performed repeated transactions (such as processing insurance claims at home), today's telecommuters can perform at home, or on the road, almost any task that they do at the office. Thus, their information needs have been changed. This paper examines the various tasks performed by telecommuters and surveys the major supporting information technologies. Special attention is given to electronic mail, accessibility to databases and networks, desk top teleconferencing, personal digital assistants (PDAs), screen sharing, workflow systems, idea generation, and distributed group decision making. Also, Lotus Notes is viewed as a major computing environment that will facilitate telecommuting. Technological developments in an integrated services digital network (ISDN), an asynchronous transfer mode (ATM), wireless communication, and local area network (LAN) connectivity will have a major impact on the growth of telecommuting and so will the resolution of managerial issues such as appropriate controls and security, cost-benefit justification, training and ownership and maintenance of the necessary equipment at home.
---
paper_title: TELECOMMUTING: A TRANSPORTATION PLANNER'S VIEW
paper_content:
Although telecommuting may have some role in reducing traffic congestion and air pollution, a transportation planner advises promoters of telecommuting to look beyond broad social benefits and focus instead on telecommuting's role as a contributor to a full and financially sound system of distributed work. This way of thinking requires a market-based approach that focuses on the concerns of customers (i.e., employers), particularly their business needs.
---
paper_title: Relationships Between Telecommuting Workers and Their Managers: An Exploratory Study:
paper_content:
Employees who had become telecommuters at several corporations in the mid- Atlantic region of the United States were surveyed and interviewed. In inter views, telecommuters consistently reported th...
---
paper_title: Business process re‐engineering and teleworking
paper_content:
Summarizes the results of a European Commission project led by the author, which has examined business re‐structuring across Europe and the relationship between business process re‐engineering (BPR) and new ways of working. Found that there are many ways other than BPR for achieving fundamental change and that most exercises being undertaken in the name of BPR are of an improvement nature and in some cases more radical improvements are being achieved by those adopting new patterns of work. Argues that BPR is failing to harness enough of the potential of people. Business processes rather than management support or learning processes are being re‐engineered. People are working harder rather than smarter.
---
paper_title: The effect of environmental factors on the adoption and diffusion of telework
paper_content:
A stream of air whose pollution is to be measured is forced between the plates of a first and second air capacitor having a suitable length and applied voltage for furnishing a first and second measurement signal varying, respectively, as a function of small and large positive ion concentration in the air. A divider circuit divides the second by the first measurement signal. The resultant first output signal is added to a second output signal similarly derived from the measurement of negative ions to furnish the final pollution measuring signal.
---
paper_title: The alternative workplace: changing where and how people work.
paper_content:
Today many organizations, including AT&T and IBM, are pioneering the alternative workplace--the combination of nontraditional work practices, settings, and locations that is beginning to supplement traditional offices. This is not a fad. Although estimates vary widely, it is safe to say that some 30 million to 40 million people in the United States are now either telecommuters or home-based workers. What motivates managers to examine how people spend their time at the office and where else they might do their work? Among the potential benefits for companies are reduced costs, increased productivity, and an edge in vying for and keeping talented employees. They can also capture government incentives and avoid costly sanctions. But at the same time, alternative workplace programs are not for everyone. Indeed, such programs can be difficult to adopt, even for those organizations that seem to be most suited to them. Ingrained behaviors and practical hurdles are hard to overcome. And the challenges of managing both the cultural changes and systems improvements required by an alternative workplace initiative are substantial. How should senior managers think about alternative workplace programs? What are the criteria for determining whether the alternative workplace is right for a given organization? What are the most common pitfalls in implementing alternative workplace programs? The author provides the answers to these questions in his examination of this new frontier of where and how people work.
---
paper_title: The telecommuting innovation opportunity
paper_content:
Discusses and attempts to anticipate the changes in consumer attitudes and behaviors which may result from the growing importance of digital information technology. Based on a survey conducted among early adopters of the technology, the “telecommuters”, finds a high incidence of pet ownership among telecommuters, rejection of some forms of computer shopping, and long working hours interlaced with long breaks, etc. Suggests the need to further investigate ways to capitalize on these future trends for the businesses of: banking, finances, travel, video rental, pet supply, grocery and retail trade.
---
paper_title: Work-At-Home and the Quality of Working Life.
paper_content:
Innovations in telecommunications technology increase the possibilities of working from the home. Implications of work-at-home arrangements for the individual's quality of working life are discussed. Included are discussions of several major aspects of the work experience relevant to quality of working life, analyses of the differences along these aspects between working at home and working at a normal workplace, and speculation about the possible consequences for the individual of the transfer of jobs from employers' premises to employees' homes.
---
paper_title: A Gendered Perspective on Access to the Information Infrastructure
paper_content:
This article provides a gendered perspective on access to the emerging information infrastructure. It examines access issues as they affect women; discusses public policy work on gender equity in national information infrastructure initiatives; and provides recommended reforms towards increasing gender equity in access to the information infrastructure.
---
paper_title: What does telework really do to us?
paper_content:
In this paper the results of surveys of about 400 telecommuters in the USA, including transportation impacts, are presented, and whether telecommuting is actually related to any net reduction in travel in general and in car-use in particular is discussed. Findings from trip logs completed by driving age household members for an entire week are given. Teleworking is also shown to have no severe negative socio-psychological effects on either teleworkers or telemanagers, at least short term, provided all parties are properly selected and trained and do not telework full time. Brief mention is made of the differences between teleworkers in the USA and elsewhere.
---
paper_title: An empirical evaluation of the impacts of telecommuting on intra-organizational communication
paper_content:
Abstract This study represents a preliminary step towards developing an understanding of how telework arrangements affect intra-organizational communication. The following general research questions are addressed: (1) Do telework arrangements change the way in which teleworkers communicate with their superiors, their subordinates, their colleagues and their clients?; and (2) Do telework arrangements change the way in which managers communicate with subordinates who telework? The study, which was conducted at two Canadian federal government departments, was designed to collect information from four groups: (1) teleworkers ( n =36 at Time 2); (2) managers of teleworkers ( n =28 at Time 2); (3) co-workers of teleworkers ( n =27 at Time 2); and (4) a control group ( n =25 at Time 2). Three data collection techniques were used in this study: paper and pencil questionnaires, telephone interviews, and focus group interviews. Data were collected at three points in time: (1) two weeks prior to the start of the telework pilot; (2) three months after the telework pilot had begun; and (3) six months after the start of the telework pilot. Analysis of the data suggests that, with a few important exceptions, part-time telework arrangements have little impact on intra-organizational communication.
---
paper_title: INFLUENCES OF THE VIRTUAL OFFICE ON ASPECTS OF WORK AND WORK/LIFE BALANCE
paper_content:
Millions of employees now use portable electronic tools to do their jobs from a “virtual office” with extensive flexibility in the timing and location of work. However, little scholarly research exists about the effects of this burgeoning work form. This study of IBM employees explored influences of the virtual office on aspects of work and work/life balance as reported by virtual office teleworkers (n = 157) and an equivalent group of traditional office workers (n= 89). Qualitative analyses revealed the perception of greater productivity, higher morale, increased flexibility and longer work hours due to telework, as well as an equivocal influence on work/life balance and a negative influence on teamwork. Using a quasi-experimental design, quantitative multivariate analyses supported the qualitative findings related to productivity, flexibility and work/life balance. However, multivariate analyses failed to support the qualitative findings for morale, teamwork and work hours. This study highlights the need for a multi-method approach, including both qualitative and quantitative elements, when studying telework.
---
paper_title: The Expert’s Opinion
paper_content:
Keynote address to the 1994 Information Resources Management Association International Conference in San Antonio, Texas, by Karen D. Walker, Compaq Computer Corporation.
---
paper_title: The impact of gender, occupation, and presence of children on telecommuting motivations and constraints
paper_content:
Accurate forecasts of the adoption and impacts of telecommuting depend on an understanding of what motivates individuals to adopt telecommuting and what constraints prevent them from doing so, since these motivations and constraints offer insight into who is likely to telecommute under what circumstances. Telecommuting motivations and constraints are likely to differ by various segments of society. In this study, we analyze differences in these variables due to gender, occupation, and presence of children for 583 employees of the City of San Diego. Numerous differences are identified, which can be used to inform policies (public or organizational) intended to support telecommuting. Most broadly, women on average rated the advantages of telecommuting more highly than men – both overall and within each occupation group. Women were more likely than men to have family, personal benefits, and stress reduction as potential motivations for telecommuting, and more likely to possess the constraints of supervisor unwillingness, risk aversion, and concern about lack of visibility to management. Clerical workers were more likely than managers or professionals to see the family, personal, and office stress-reduction benefits of telecommuting as important, whereas managers and professionals were more likely to cite getting more work done as the most important advantage of telecommuting. Constraints present more strongly for clerical workers than for other occupations included misunderstanding, supervisor unwillingness, job unsuitability, risk aversion, and (together with professional workers) perceived reduced social interaction. Constraints operating more strongly for professional workers included fear of household distractions, reduced social and (together with managers) professional interaction, the need for discipline, and lack of visibility to management. Key constraints present for managers included reduced professional interaction and household distractions. Lack of awareness, cost, and lack of technology or other resources did not differ significantly by gender or occupation. Respondents with children rated the stress reduction and family benefits of telecommuting more highly than did those with no children at home. Those with children were more likely than those without children to be concerned about the lack of visibility to management, and (especially managers) were more likely to cite household distractions as a constraint.
---
paper_title: The ‘greening’ of organizational change: A case study
paper_content:
Abstract There are many emerging corporate strategies designed to make large, complex business enterprises more responsive to environmental concerns. One major corporate innovation that has a direct environmental impact is the increased use of telework options for employees. These programmes significantly reduce the amount of employee travel, thereby reducing air pollution. However, adoption of telework programmes requires a change in organizational management strategies. The prevailing attitude of “If I can't see them, how do I know they are working” must be changed. This attitudinal change, coupled with the structural move towards the ‘virtual corporation’ can be managed using existing organizational development strategies and tactics. The paper reports the results of several field studies in California which examined the phenomenon of telework. The studies consistently report increases in worker productivity of 16 per cent and a significant reduction of personal automobile travel of between 20–40 per c...
---
paper_title: Applying the triple bottom line: Telework and the environment
paper_content:
The “triple bottom line” approach emphasizes not only economic goals, but social and environmental objectives as well. The “telework” option—in which employees work from home or a satellite office rather than from a central location—would appear to advance all three of these aims. But a close look at the specifics of teleworking makes clear that more data are needed to determine its ultimate impact. Moreover, despite its apparent advantages, teleworking is gaining acceptance less quickly than might be expected. For these reasons, telework offers a fascinating case study in the difficulties of applying the triple bottom line concept.© 1999 John Wiley & Sons, Inc.
---
paper_title: Patterns of telecommuting and their consequences: Framing the research agenda
paper_content:
Abstract While there are over 7 million telecommuters in the U.S. today, there has been little empirical research and virtually no theoretical work on telecommuting. Drawing from the literatures on contingent employment, job design and social isolation, this article presents a theoretical framework for understanding how different constellations of telecommuting arrangements and job characteristics lead to different patterns of employee attitudes and behaviors. After presenting a series of propositions, the article concludes with suggestions for the empirical testing of these propositions and a discussion of the implications for management practice.
---
paper_title: Telecommuting: a test of trust, competing values, and relative advantage
paper_content:
The advent of technologies that enable virtual work arrangements brings with it a challenge to managers: do they trust their employees to work outside of their presence? A perceived loss of control and a sense of being taken advantage of may be experienced by a manager as employees disappear from the manager's daily gaze. To enable the transition of employees to virtual work arrangements, managers who work in bureaucratic organizations that value a high degree of control and stability may need to change their management style to accommodate new methods of employee communication and interaction. Alternately, corporate cultures well suited for the transition value results and are characterized as having the atmosphere of trust (a shared emotional understanding about who is to be trusted, based on compatible values and open communications/attitudes). Telecommuting, as one form of virtual work arrangement, provides a prime opportunity to look into the management attitudes and corporate cultures that may hinder the transition of workers into remote settings. The study of telecommuting among information technology (IT) professionals suggests that management trust of employees, the ability to secure the technology involved, a rational culture, and a group culture, which emphasizes human resources and member participation, facilitate telecommuting implementation. Thus the study offers strong support for the important role of trust, security, and culture in the implementation of virtual work arrangements.
---
paper_title: Telecommuting innovation and organization: a contingency theory of labor process change
paper_content:
This paper develops a « contingency theory » of technological work reorganization that addresses organizational, managerial, and job-characteristic contingencies in the reorganization of the work process. Substantively, the focus is on the rationales of top decision-makers in a sample of firms for adopting and designing telecommuting jobs. Following the theoretical model developed in the paper, the authors find that telecommuting innovation is primarily contingent on organizational constraints (such as bureaucratic inertia) and, on the other hand, upon managerial goals (such as control of the labor process) in interaction with the relative power and status of the target employee group.
---
paper_title: A Demand-Side Approach to Telecommuting: The Integrated Workplace Strategies Concept
paper_content:
The juxtaposition of available enabling technologies and low demand for telecommuting focuses attention on the need for businesses to understand how more flexible and innovative workplace strategies can help them gain competitive advantage. The integrated workplace strategies approach leverages settings, technologies, and management practices to support more effective ways of working and achieve the seemingly contradictory goals of reduced costs, improved performance, enhanced flexibility, improved air quality, and reduced traffic congestion.
---
paper_title: A review of telework research : findings , new directions , and lessons for the study of modern work
paper_content:
Telework has inspired research in disciplines ranging from transportation and urban planning to ethics, law, sociology, and organizational studies. In our review of this literature, we seek answers to three questions: who participates in telework, why they do, and what happens when they do? Who teleworks remains elusive, but research suggests that male professionals and female clerical workers predominate. Notably, work-related factors like managers' willingness are most predictive of which employees will telework. Employees' motivations for teleworking are also unclear, as commonly perceived reasons such as commute reduction and family obligations do not appear instrumental. On the firms' side, managers' reluctance, forged by concerns about cost and control and bolstered by little perceived need, inhibits the creation of telework programmes. As for outcomes, little clear evidence exists that telework increases job satisfaction and productivity, as it is often asserted to do. We suggest three steps for future research that may provide richer insights: consider group and organizational level impacts to understand who telework affects, reconsider why people telework, and emphasize theory-building and links to existing organizational theories. We conclude with lessons learned from the telework literature that may be relevant to research on new work forms and workplaces. Copyright © 2002 John Wiley & Sons, Ltd.
---
paper_title: A study on the usage of computer and communication technologies for telecommuting
paper_content:
Today, with the increasing proliferation of telecommuting in firms, information technology managers are confronted with yet another challenge of what telecommuting technologies and services to offer and to whom these technologies and services should be offered. This study intends to identify the telecommuters' patterns of usage of computer and communication technologies based on their background, employment, residential, and occupation characteristics. Based on a sample of 375 responses, this study finds that all of these factors can help to explain the usage of computer and communications technologies. The implications of the findings for researchers and technology managers are discussed.
---
paper_title: Telecommuting and organizational change: a middle‐managers’ perspective
paper_content:
Telecommuting programs transform communication patterns, performance management, corporate culture, and potentially the work itself. This study addresses middle managers’ views concerning the introduction of telecommuting programs in their organizations. Middle management views are important, because telecommuting directly impacts their positions, and their support is vital to ensure its successful implementation. The findings indicate that the majority of managers perceived cultural change as the most difficult issue to resolve when introducing a telecommuting program.
---
paper_title: COMPARISON OF THE JOB SATISFACTION AND PRODUCTIVITY OF TELECOMMUTERS VERSUS IN-HOUSE EMPLOYEES: A RESEARCH NOTE ON WORK IN PROGRESS
paper_content:
Job satisfaction and productivity were compared for 34 in-house employees and 34 telecommuters performing data-entry and coding. Job satisfaction was measured on the Minnesota Job Satisfaction Ques...
---
paper_title: Comparing Employees in Traditional Job Structures vs Telecommuting Jobs Using Herzberg's Hygienes & Motivators
paper_content:
Are the factors motivating present telecommuting employees the same motivating factors found for the workers of the Industrial Revolution according to Herzberg's Two-Factor theory? Herzberg concluded that the motivating factors increasing job satisfaction are: achievement, recognition, work itself, responsibility, advancement, and growth. Through a survey of telecommuters, this article shows that telecommuters are motivated by the same Herzberg factors. Additionally, it shows that their newfound flexibility and control over their work, schedule, and personal life motivates telecommuters, and that work overload has become a serious "dissatisfier" for the telecommuter.
---
paper_title: Communication and coordination in the virtual office
paper_content:
As information technology becomes more pervasive, the structure of the traditional work environment is changing. A number of alternatives are emerging where work is performed at remote locations. Existing work practices and managerial strategies are often not appropriate in this environment. In particular, traditional office communication with coworkers and management, which is often dependent on physical proximity, is disrupted. In this study, individual satisfaction with office communication in the telecommuting and conventional work environments is compared through a study of telecommuters and a comparison group of non-telecommuters in nine firms. We investigate the influence of certain organizational factors, such as job characteristics, IT support, and coordination methods, on satisfaction with office communication in the two work environments. We find telecommuters report higher satisfaction with office communication. Our findings indicate that task predictability, IT support, and electronic coordination have similar influences for both groups. We discuss implications of these findings for research and practice.
---
paper_title: Distributed work arrangements : A research framework
paper_content:
Various distributed work arrangements have been enabled by advances in information system and communication technologies. To date, these new arrangements have met with varying success, and it is unclear what outcomes society, organizations, and individuals expect from such new work settings. Moreover, we do not understand how aspects of the work environment, tasks, employees, management, and technology might interact to result in different outcomes. This article attempts to provide an integrative view of research on distributed work arrangements and provides a framework for exploring the impacts of these arrangements.
---
paper_title: A review of telework research : findings , new directions , and lessons for the study of modern work
paper_content:
Telework has inspired research in disciplines ranging from transportation and urban planning to ethics, law, sociology, and organizational studies. In our review of this literature, we seek answers to three questions: who participates in telework, why they do, and what happens when they do? Who teleworks remains elusive, but research suggests that male professionals and female clerical workers predominate. Notably, work-related factors like managers' willingness are most predictive of which employees will telework. Employees' motivations for teleworking are also unclear, as commonly perceived reasons such as commute reduction and family obligations do not appear instrumental. On the firms' side, managers' reluctance, forged by concerns about cost and control and bolstered by little perceived need, inhibits the creation of telework programmes. As for outcomes, little clear evidence exists that telework increases job satisfaction and productivity, as it is often asserted to do. We suggest three steps for future research that may provide richer insights: consider group and organizational level impacts to understand who telework affects, reconsider why people telework, and emphasize theory-building and links to existing organizational theories. We conclude with lessons learned from the telework literature that may be relevant to research on new work forms and workplaces. Copyright © 2002 John Wiley & Sons, Ltd.
---
paper_title: A study on the usage of computer and communication technologies for telecommuting
paper_content:
Today, with the increasing proliferation of telecommuting in firms, information technology managers are confronted with yet another challenge of what telecommuting technologies and services to offer and to whom these technologies and services should be offered. This study intends to identify the telecommuters' patterns of usage of computer and communication technologies based on their background, employment, residential, and occupation characteristics. Based on a sample of 375 responses, this study finds that all of these factors can help to explain the usage of computer and communications technologies. The implications of the findings for researchers and technology managers are discussed.
---
paper_title: COMPARISON OF THE JOB SATISFACTION AND PRODUCTIVITY OF TELECOMMUTERS VERSUS IN-HOUSE EMPLOYEES: A RESEARCH NOTE ON WORK IN PROGRESS
paper_content:
Job satisfaction and productivity were compared for 34 in-house employees and 34 telecommuters performing data-entry and coding. Job satisfaction was measured on the Minnesota Job Satisfaction Ques...
---
paper_title: INFLUENCES OF THE VIRTUAL OFFICE ON ASPECTS OF WORK AND WORK/LIFE BALANCE
paper_content:
Millions of employees now use portable electronic tools to do their jobs from a “virtual office” with extensive flexibility in the timing and location of work. However, little scholarly research exists about the effects of this burgeoning work form. This study of IBM employees explored influences of the virtual office on aspects of work and work/life balance as reported by virtual office teleworkers (n = 157) and an equivalent group of traditional office workers (n= 89). Qualitative analyses revealed the perception of greater productivity, higher morale, increased flexibility and longer work hours due to telework, as well as an equivocal influence on work/life balance and a negative influence on teamwork. Using a quasi-experimental design, quantitative multivariate analyses supported the qualitative findings related to productivity, flexibility and work/life balance. However, multivariate analyses failed to support the qualitative findings for morale, teamwork and work hours. This study highlights the need for a multi-method approach, including both qualitative and quantitative elements, when studying telework.
---
|
Title: Telecommuting's Past and Future: A Literature Review and Research Agenda
Section 1: Introduction
Description 1: This section should introduce the topic of telecommuting, its promises, and the distinctions among different forms of telecommuting.
Section 2: Background
Description 2: This section should provide historical context, key milestones, and the growth of telecommuting along with relevant studies and legislations.
Section 3: Scope of Study and Research Methodology
Description 3: This section should describe the criteria for selecting articles, databases used, and the methodology for screening the literature.
Section 4: Classification by Orientation
Description 4: This section should categorize the articles by their orientation (descriptive, conceptual, empirical, and case study) and provide examples for each category.
Section 5: Description of Schema
Description 5: This section should explain the schema used to categorize issues into workforce, organizational, technological, and environmental issues.
Section 6: Workforce Issues
Description 6: This section should explore topics from the employee's perspective, including work/life balance, productivity, job satisfaction, and worker attitudes.
Section 7: Organizational Issues
Description 7: This section should discuss topics affecting organizations, including telecommuting adoption, employee retention, intra-organizational communication, and management practices.
Section 8: Technological Issues
Description 8: This section should cover the role of technology in telecommuting, appropriate technologies for telecommuters, and their impact on productivity and communication.
Section 9: Environmental Issues
Description 9: This section should address the environmental impact of telecommuting, including effects on traffic, air quality, and the regulatory landscape.
Section 10: Results and Discussion
Description 10: This section should summarize the findings of the reviewed articles, including the predominant topics and the implications for organizations, workers, and researchers.
Section 11: Directions for Future Research
Description 11: This section should outline unresolved issues and areas for future research, including standard definitions, measurement difficulties, and the impact on business processes and the environment.
Section 12: Conclusion
Description 12: This section should provide a summary of the overall findings and suggest the importance of continued research on telecommuting, highlighting potential contributions to both scholarship and practice.
|
A survey on phrase structure learning methods for text classification
| 17 |
---
paper_title: Nearest Neighbor Pattern Classification
paper_content:
The nearest neighbor decision rule assigns to an unclassified sample point the classification of the nearest of a set of previously classified points. This rule is independent of the underlying joint distribution on the sample points and their classifications, and hence the probability of error R of such a rule must be at least as great as the Bayes probability of error R^{\ast}, the minimum probability of error over all decision rules taking underlying probability structure into account. However, in a large sample analysis, we will show in the M-category case that R^{\ast} \leq R \leq R^{\ast}(2 - MR^{\ast}/(M-1)), where these bounds are the tightest possible, for all suitably smooth underlying distributions. Thus for any number of categories, the probability of error of the nearest neighbor rule is bounded above by twice the Bayes probability of error. In this sense, it may be said that half the classification information in an infinite sample set is contained in the nearest neighbor.
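As a concrete illustration of the rule and of the bound quoted above, the following minimal Python sketch (not from the paper; the toy points and error values are invented) implements a 1-nearest-neighbor classifier and evaluates the Cover-Hart upper bound R^{\ast}(2 - MR^{\ast}/(M-1)) for sample numbers.
```python
# Minimal 1-nearest-neighbor classifier and the asymptotic error bound from
# the abstract above. Illustrative sketch only; data and values are invented.
import numpy as np

def nn_classify(train_X, train_y, x):
    """Assign x the label of its nearest previously classified point."""
    dists = np.linalg.norm(train_X - x, axis=1)
    return train_y[np.argmin(dists)]

def nn_error_upper_bound(bayes_error, num_classes):
    """Upper bound R <= R*(2 - M R*/(M-1)) on the asymptotic NN error."""
    return bayes_error * (2 - num_classes * bayes_error / (num_classes - 1))

X = np.array([[0.0, 0.0], [0.2, 0.1], [1.0, 1.0], [0.9, 1.1]])
y = np.array(["a", "a", "b", "b"])
print(nn_classify(X, y, np.array([0.8, 0.9])))   # 'b'
print(nn_error_upper_bound(0.1, 2))              # 0.18: at most twice the Bayes error
```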
---
paper_title: Support-Vector Networks
paper_content:
The support-vector network is a new learning machine for two-group classification problems. The machine conceptually implements the following idea: input vectors are non-linearly mapped to a very high-dimension feature space. In this feature space a linear decision surface is constructed. Special properties of the decision surface ensure high generalization ability of the learning machine. The idea behind the support-vector network was previously implemented for the restricted case where the training data can be separated without errors. We here extend this result to non-separable training data. High generalization ability of support-vector networks utilizing polynomial input transformations is demonstrated. We also compare the performance of the support-vector network to various classical learning algorithms that all took part in a benchmark study of Optical Character Recognition.
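A hedged, minimal usage sketch of the idea (a soft-margin SVM with a polynomial kernel) follows; it relies on scikit-learn's SVC rather than the authors' original implementation, and the toy two-class data are invented.
```python
# Toy soft-margin SVM with a polynomial kernel, in the spirit of the
# support-vector network above. Requires scikit-learn; data are invented.
import numpy as np
from sklearn.svm import SVC

X = np.array([[0.0, 0.0], [0.5, 0.5], [3.0, 3.0], [3.5, 2.5]])
y = np.array([0, 0, 1, 1])

clf = SVC(kernel="poly", degree=3, C=1.0)   # non-linear mapping via the kernel
clf.fit(X, y)
print(clf.predict(np.array([[3.2, 3.1]])))  # expected: class 1 (near the second cluster)
```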
---
paper_title: Induction of decision trees
paper_content:
The technology for building knowledge-based systems by inductive inference from examples has been demonstrated successfully in several practical applications. This paper summarizes an approach to synthesizing decision trees that has been used in a variety of systems, and it describes one such system, ID3, in detail. Results from recent studies show ways in which the methodology can be modified to deal with information that is noisy and/or incomplete. A reported shortcoming of the basic algorithm is discussed and two means of overcoming it are compared. The paper concludes with illustrations of current research directions.
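To make the attribute-selection criterion concrete, here is a small, hypothetical sketch of the entropy-based information gain that ID3-style tree induction uses when choosing which attribute to split on (an illustration only, not the system described in the paper; the toy table is invented).
```python
# Entropy and information gain, the core quantities in ID3-style induction.
import math
from collections import Counter

def entropy(labels):
    total = len(labels)
    return -sum((c / total) * math.log2(c / total) for c in Counter(labels).values())

def information_gain(rows, labels, attribute_index):
    """Gain of splitting (rows, labels) on the attribute at attribute_index."""
    base = entropy(labels)
    by_value = {}
    for row, label in zip(rows, labels):
        by_value.setdefault(row[attribute_index], []).append(label)
    remainder = sum(len(subset) / len(labels) * entropy(subset)
                    for subset in by_value.values())
    return base - remainder

# Toy example: attribute 0 perfectly predicts the label, attribute 1 does not.
rows = [("sunny", "hot"), ("sunny", "cool"), ("rain", "hot"), ("rain", "cool")]
labels = ["no", "no", "yes", "yes"]
print(information_gain(rows, labels, 0))  # 1.0
print(information_gain(rows, labels, 1))  # 0.0
```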
---
paper_title: Automated learning of decision rules for text categorization
paper_content:
We describe the results of extensive experiments using optimized rule-based induction methods on large document collections. The goal of these methods is to discover automatically classification patterns that can be used for general document categorization or personalized filtering of free text. Previous reports indicate that human-engineered rule-based systems, requiring many man-years of developmental efforts, have been successfully built to “read” documents and assign topics to them. We show that machine-generated decision rules appear comparable to human performance, while using the identical rule-based representation. In comparison with other machine-learning techniques, results on a key benchmark from the Reuters collection show a large gain in performance, from a previously reported 67% recall/precision breakeven point to 80.5%. In the context of a very high-dimensional feature space, several methodological alternatives are examined, including universal versus local dictionaries, and binary versus frequency-related features.
---
paper_title: Kazakh Noun Phrase Extraction Based on N-gram and Rules
paper_content:
The aim of the work is to extract Kazakh phrase and basic noun phrase from corpus. For the phrase extraction, N-gram model methods were used, specifically bigram and trigram methods were applied. For basic noun phrase extraction, rule-based methods were used. We started from the grammar structure of basic noun phrase structure model, established a set of rules using the part-of-speech tag and the additional component information of Kazakh basic noun phrase, and extracted the basic noun phrase by rule matching. We have realized the extraction of phrase and basic noun phrase based on corpus of 31 days’ Xinjiang Daily. Experimental results showed that the two methods are feasible, and the extraction accuracies are 50.8% and 79.1% respectively.
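A minimal sketch of the two ingredients described above, with an invented tag set and rule list (the actual Kazakh POS tags and rules are not reproduced here): bigram counting for candidate phrases, and POS-pattern matching for basic noun phrases.
```python
# Illustrative N-gram candidate extraction and rule-based NP matching.
from collections import Counter

def bigram_candidates(tokens, min_count=2):
    """Word bigrams that occur at least min_count times are phrase candidates."""
    counts = Counter(zip(tokens, tokens[1:]))
    return [bg for bg, c in counts.items() if c >= min_count]

NP_PATTERNS = [("ADJ", "NOUN"), ("NOUN", "NOUN")]   # hypothetical rule set

def rule_based_nps(tagged_tokens):
    """tagged_tokens: list of (word, pos) pairs; returns matched two-word NPs."""
    phrases = []
    for (w1, t1), (w2, t2) in zip(tagged_tokens, tagged_tokens[1:]):
        if (t1, t2) in NP_PATTERNS:
            phrases.append((w1, w2))
    return phrases

print(rule_based_nps([("red", "ADJ"), ("car", "NOUN"), ("runs", "VERB")]))
# [('red', 'car')]
```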
---
paper_title: Kazakh Noun Phrase Extraction Based on N-gram and Rules
paper_content:
The aim of the work is to extract Kazakh phrase and basic noun phrase from corpus. For the phrase extraction, N-gram model methods were used, specifically bigram and trigram methods were applied. For basic noun phrase extraction, rule-based methods were used. We started from the grammar structure of basic noun phrase structure model, established a set of rules using the part-of-speech tag and the additional component information of Kazakh basic noun phrase, and extracted the basic noun phrase by rule matching. We have realized the extraction of phrase and basic noun phrase based on corpus of 31 days’ Xinjiang Daily. Experimental results showed that the two methods are feasible, and the extraction accuracies are 50.8% and 79.1% respectively.
---
paper_title: Text Chunking Using Transformation-Based Learning
paper_content:
Transformation-based learning, a technique introduced by Eric Brill (1993b), has been shown to do part-of-speech tagging with fairly high accuracy. This same method can be applied at a higher level of textual interpretation for locating chunks in the tagged text, including non-recursive “baseNP” chunks. For this purpose, it is convenient to view chunking as a tagging problem by encoding the chunk structure in new tags attached to each word. In automatic tests using Treebank-derived data, this technique achieved recall and precision rates of roughly 93% for baseNP chunks (trained on 950K words) and 88% for somewhat more complex chunks that partition the sentence (trained on 200K words). Working in this new application and with larger template and training sets has also required some interesting adaptations to the transformation-based learning approach.
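The key move, viewing chunking as tagging, can be illustrated with a short, hypothetical encoder that turns chunk spans into per-word tags; a B/I/O-style encoding is used here purely for illustration and is not claimed to be the paper's exact tag scheme.
```python
# Encode chunk structure as per-word tags so chunking becomes a tagging task.
def chunks_to_bio(tokens, chunk_spans):
    """chunk_spans: list of (start, end) index pairs, end exclusive."""
    tags = ["O"] * len(tokens)
    for start, end in chunk_spans:
        tags[start] = "B-NP"
        for i in range(start + 1, end):
            tags[i] = "I-NP"
    return list(zip(tokens, tags))

print(chunks_to_bio(["the", "big", "dog", "barked"], [(0, 3)]))
# [('the', 'B-NP'), ('big', 'I-NP'), ('dog', 'I-NP'), ('barked', 'O')]
```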
---
paper_title: Automated learning of decision rules for text categorization
paper_content:
We describe the results of extensive experiments using optimized rule-based induction methods on large document collections. The goal of these methods is to discover automatically classification patterns that can be used for general document categorization or personalized filtering of free text. Previous reports indicate that human-engineered rule-based systems, requiring many man-years of developmental efforts, have been successfully built to “read” documents and assign topics to them. We show that machine-generated decision rules appear comparable to human performance, while using the identical rule-based representation. In comparison with other machine-learning techniques, results on a key benchmark from the Reuters collection show a large gain in performance, from a previously reported 67% recall/precision breakeven point to 80.5%. In the context of a very high-dimensional feature space, several methodological alternatives are examined, including universal versus local dictionaries, and binary versus frequency-related features.
---
paper_title: Application of translation knowledge acquired by hierarchical phrase alignment for pattern-based MT
paper_content:
Hierarchical phrase alignment is a method for extracting equivalent phrases from bilingual sentences, even though they belong to different language families. The method automatically extracts transfer knowledge from about 125K English and Japanese bilingual sentences and then applies it to a pattern-based MT system. The translation quality is then evaluated. The knowledge needs to be cleaned, since the corpus contains various translations and the phrase alignment contains errors. Various cleaning methods are applied in this paper. The results indicate that when the best cleaning method is used, the knowledge acquired by hierarchical phrase alignment is comparable to manually acquired knowledge.
---
paper_title: Integrated phrase segmentation and alignment algorithm for statistical machine translation
paper_content:
We present an integrated phrase segmentation/alignment algorithm (ISA) for statistical machine translation. Without the need of building an initial word-to-word alignment or initially segmenting the monolingual text into phrases as other methods do, this algorithm segments the sentences into phrases and finds their alignments simultaneously. For each sentence pair, ISA builds a two-dimensional matrix to represent a sentence pair where the value of each cell corresponds to the point-wise mutual information (MI) between the source and target words. Based on the similarities of MI values among cells, we identify the aligned phrase pairs. Once all the phrase pairs are found, we know both how to segment one sentence into phrases and also the alignments between the source and target sentences. We use monolingual bigram language models to estimate the joint probabilities of the identified phrase pairs. The joint probabilities are then normalized to conditional probabilities, which are used by the decoder. Despite its simplicity, this approach yields phrase-to-phrase translations with significant higher precisions than our baseline system where phrase translations are extracted from the HMM word alignment. When we combine the phrase-to-phrase translations generated by this algorithm with the baseline system, the improvement on translation quality is even larger.
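A rough sketch of the matrix at the core of the method: each cell holds the point-wise mutual information between a source and a target word, and blocks of similarly high cells suggest aligned phrase pairs. The probability tables below stand in for corpus estimates and are purely illustrative.
```python
# Point-wise mutual information (PMI) matrix for a toy sentence pair.
# The probability tables stand in for corpus-estimated values (invented here).
import math

def pmi(p_joint, p_src, p_tgt):
    return math.log2(p_joint / (p_src * p_tgt))

def pmi_matrix(src_tokens, tgt_tokens, joint, src_marg, tgt_marg):
    return [[pmi(joint.get((s, t), 1e-9), src_marg[s], tgt_marg[t])
             for t in tgt_tokens] for s in src_tokens]

joint = {("maison", "house"): 0.008, ("la", "the"): 0.02}
src_marg = {"la": 0.05, "maison": 0.01}
tgt_marg = {"the": 0.06, "house": 0.01}
matrix = pmi_matrix(["la", "maison"], ["the", "house"], joint, src_marg, tgt_marg)
print(matrix)   # high values on the diagonal suggest aligned word pairs
```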
---
paper_title: Effective Phrase Translation Extraction From Alignment Models
paper_content:
Phrase level translation models are effective in improving translation quality by addressing the problem of local re-ordering across language boundaries. Methods that attempt to fundamentally modify the traditional IBM translation model to incorporate phrases typically do so at a prohibitive computational cost. We present a technique that begins with improved IBM models to create phrase level knowledge sources that effectively represent local as well as global phrasal context. Our method is robust to noisy alignments at both the sentence and corpus level, delivering high quality phrase level translation pairs that contribute to significant improvements in translation quality (as measured by the BLEU metric) over word based lexica as well as a competing alignment based method.
---
paper_title: A Generalized Alignment-Free Phrase Extraction
paper_content:
In this paper, we present a phrase extraction algorithm using a translation lexicon, a fertility model, and a simple distortion model. Except these models, we do not need explicit word alignments for phrase extraction. For each phrase pair (a block), a bilingual lexicon based score is computed to estimate the translation quality between the source and target phrase pairs; a fertility score is computed to estimate how good the lengths are matched between phrase pairs; a center distortion score is computed to estimate the relative position divergence between the phrase pairs. We presented the results and our experience in the shared tasks on French-English.
---
paper_title: A Generalized Alignment-Free Phrase Extraction
paper_content:
In this paper, we present a phrase extraction algorithm using a translation lexicon, a fertility model, and a simple distortion model. Except these models, we do not need explicit word alignments for phrase extraction. For each phrase pair (a block), a bilingual lexicon based score is computed to estimate the translation quality between the source and target phrase pairs; a fertility score is computed to estimate how good the lengths are matched between phrase pairs; a center distortion score is computed to estimate the relative position divergence between the phrase pairs. We presented the results and our experience in the shared tasks on French-English.
---
paper_title: LOOSE PHRASE EXTRACTION WITH n-BEST ALIGNMENTS
paper_content:
A loose phrase extraction method is proposed and applied for phrase-based statistical machine translation. The method extracts phrase pairs that are not strictly consistent with word alignments. Two types of constraints on word positions are investigated for this method. Furthermore, n-best alignments are introduced for phrase extraction instead of the one-best. Experimental results show that the proposed approach outperforms the baseline Pharaoh system for both one-best and n-best alignments.
---
paper_title: Integrating a Rule-based with a Hierarchical Translation System.
paper_content:
Recent developments on hybrid systems that combine rule-based machine translation (RBMT) systems with statistical machine translation (SMT) generally neglect the fact that RBMT systems tend to produce more syntactically well-formed translations than data-driven systems. This paper proposes a method that alleviates this issue by preserving more useful structures produced by RBMT systems and utilizing them in a SMT system that operates on hierarchical structures instead of flat phrases alone. For our experiments, we use Joshua as the decoder (Li et al., 2009). It is the first attempt towards a tighter integration of MT systems from different paradigms that both support hierarchical analyses. Preliminary results show consistent improvements over the previous approach.
---
paper_title: Using Moses to Integrate Multiple Rule-Based Machine Translation Engines into a Hybrid System
paper_content:
Based on an architecture that allows to combine statistical machine translation (SMT) with rule-based machine translation (RBMT) in a multi-engine setup, we present new results that show that this type of system combination can actually increase the lexical coverage of the resulting hybrid system, at least as far as this can be measured via BLEU score.
---
paper_title: Phrase Extraction for Japanese Predictive Input Method as Post-Processing
paper_content:
We propose a novel phrase extraction system to generate a phrase dictionary for predictive input methods from a large corpus. This system extracts phrases after counting n-grams so that it can be easily maintained, tuned, and re-executed independently. We developed a rule-based filter based on part-of-speech (POS) patterns to extract Japanese phrases. Our experiment shows usefulness of our system, which achieved a precision of 0.90 and a recall of 0.81, outperforming the N-gram baseline by a large margin.
---
paper_title: Integrated phrase segmentation and alignment algorithm for statistical machine translation
paper_content:
We present an integrated phrase segmentation/alignment algorithm (ISA) for statistical machine translation. Without the need of building an initial word-to-word alignment or initially segmenting the monolingual text into phrases as other methods do, this algorithm segments the sentences into phrases and finds their alignments simultaneously. For each sentence pair, ISA builds a two-dimensional matrix to represent a sentence pair where the value of each cell corresponds to the point-wise mutual information (MI) between the source and target words. Based on the similarities of MI values among cells, we identify the aligned phrase pairs. Once all the phrase pairs are found, we know both how to segment one sentence into phrases and also the alignments between the source and target sentences. We use monolingual bigram language models to estimate the joint probabilities of the identified phrase pairs. The joint probabilities are then normalized to conditional probabilities, which are used by the decoder. Despite its simplicity, this approach yields phrase-to-phrase translations with significant higher precisions than our baseline system where phrase translations are extracted from the HMM word alignment. When we combine the phrase-to-phrase translations generated by this algorithm with the baseline system, the improvement on translation quality is even larger.
---
|
Title: A Survey on Phrase Structure Learning Methods for Text Classification
Section 1: INTRODUCTION
Description 1: Introduce the topic of text classification, its applications, and the importance of phrase structure learning in improving text classification tasks.
Section 2: SURVEYED TECHNIQUES
Description 2: Summarize various phrase structure extraction techniques for text classification, detailing methods such as N-gram based approach, Rule based method, Word alignment based method, and others.
Section 3: Basic N-gram based approach
Description 3: Describe the N-gram based approach, including its statistical methods, applications, accuracy, and limitations.
Section 4: Rule based method
Description 4: Detail the rule-based method for phrase extraction, its approaches, accuracy, and comparison with N-gram based methods.
Section 5: Word alignment based method
Description 5: Discuss the word alignment based method along with its statistical approach, techniques, advantages, and measuring scores.
Section 6: Phrase alignment based method
Description 6: Explain the phrase alignment based method, including joint probability models, initial steps, performance, and improvements over other methods.
Section 7: Syntactic approach
Description 7: Outline the syntactic approach technique, including parsing of sentences, syntactic phrases, and BLEU score measurement.
Section 8: Mutual Information based method
Description 8: Describe the mutual information based method, including integrating phrase segmentation and alignment, its advantages, and measuring scores.
Section 9: Bilingual N-gram based approach
Description 9: Provide details on the bilingual N-gram based approach, including its phases, methods, benefits, and performance measures.
Section 10: Block based method
Description 10: Explain the block based method for phrase translation extraction, models used, and computational considerations.
Section 11: Clustering method
Description 11: Discuss the clustering method, its statistical approach, steps, and performance comparison.
Section 12: Loose phrase extraction method
Description 12: Describe the loose phrase extraction method with n-best alignments and constraints applied to extracted phrases.
Section 13: Word alignment and Rule based approach
Description 13: Provide an overview of the hybrid method integrating rule-based and hierarchical translation systems.
Section 14: N-gram and Rule based approach
Description 14: Explain the hybrid N-gram and Rule based approach, its methodology, errors observed, and performance measures.
Section 15: CLASSIFICATION
Description 15: Detail the classification of different phrase structure learning methods into statistical, rule-based, and hybrid methods.
Section 16: OBSERVATIONS AND DISCUSSION
Description 16: Compare and contrast various methods based on several factors and discuss their efficiency and performance.
Section 17: CONCLUSION
Description 17: Summarize the survey findings, emphasize the importance of phrases in text classification, and conclude with the most promising technique identified.
|
A Survey of Multicasting in Optical Burst Switched Networks: Future Research Directions
| 8 |
---
paper_title: Optical burst switching: a new area in optical networking research
paper_content:
In this tutorial, we give an introduction to optical burst switching and compare it with other existing optical switching paradigms. Basic burst assembly algorithms and their effect on assembled burst traffic characteristics are described first. Then a brief review of the early work on burst transmission is provided, followed by a description of a prevailing protocol for OBS networks called just-enough-time (JET). Algorithms used as an OBS core node for burst scheduling as well as contention resolution strategies are presented next. Trade-offs between their performance and implementation complexities are discussed. Recent work on QoS support, IP/WDM multicast, TCP performance in OBS networks, and labeled OBS is also described, and several open issues are mentioned.
---
paper_title: High-speed protocol for bursty traffic in optical networks
paper_content:
An optical backbone network based on WDM (or OTDM) technology may become an economical choice for providing future broadband services. To achieve a balance between the coarse-grain optical circuit switching (via wavelength routing) and fine-grain optical packet/cell switching, optical burst switching is proposed. We study a one-way reservation protocol called just-enough-time (JET), which is suitable for switching bursty traffic in a high-speed optical backbone network. The JET protocol has two unique, integrated features, namely, the use of delayed reservation (DR) and buffered burst multiplexers (BBM). By virtue of DR, the JET protocol not only increases the bandwidth utilization, but also facilitates intelligent buffer management in BBMs, and consequently results in a high throughput. Both analysis and simulation results show that the JET protocol can significantly outperform other one-way reservation protocols lacking one or both of these features.
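To make the delayed-reservation (DR) idea concrete, here is a toy sketch (not the protocol itself; times and lengths are invented): a control packet arriving at time t with offset T and burst length L asks for the interval [t+T, t+T+L) only, so earlier gaps on the wavelength stay usable for other bursts.
```python
# JET-style delayed reservation on a single wavelength, as a toy interval check.
class Channel:
    def __init__(self):
        self.reservations = []          # list of (start, end) intervals

    def try_reserve(self, control_arrival, offset, burst_len):
        start = control_arrival + offset
        end = start + burst_len
        if any(s < end and start < e for s, e in self.reservations):
            return False                 # overlaps an existing reservation -> burst dropped
        self.reservations.append((start, end))
        return True

ch = Channel()
print(ch.try_reserve(0.0, 5.0, 2.0))    # True: reserves [5.0, 7.0)
print(ch.try_reserve(1.0, 2.0, 1.5))    # True: [3.0, 4.5) fits in the gap before the first burst
print(ch.try_reserve(2.0, 4.0, 2.0))    # False: [6.0, 8.0) overlaps [5.0, 7.0)
```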
---
paper_title: Efficient multicast schemes for optical burst-switched WDM networks
paper_content:
In this paper, we study several multicast schemes in optical burst-switched WDM networks taking into consideration of the overheads due to control packets and guard bands (GBs) of bursts on separate channels (wavelengths). A straightforward scheme is called separate multicasting (S-MCAST) where each source node constructs separate bursts for its multicast (per each multicast session) and unicast traffic. To reduce the overhead due to GBs (and control packets), one may piggyback the multicast traffic in bursts containing unicast traffic using a scheme called multiple unicasting (M-UCAST). The third scheme is called tree-shared multicasting (TS-MCAST) whereby multicast traffic belonging to multiple multicast sessions can be mixed together in a burst, which is delivered via a shared multicast tree. The multicast schemes (M-UCAST and TS-MCAST) are compared with S-MCAST in terms of bandwidth consumed and processing load.
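A back-of-the-envelope illustration (not taken from the paper) of why piggybacking multicast traffic onto unicast bursts reduces guard-band overhead: fewer bursts means fewer guard bands and fewer control packets. All numbers below are invented.
```python
# Guard-band overhead: separate multicast bursts (S-MCAST-like) versus
# piggybacking multicast traffic onto existing unicast bursts (M-UCAST-like).
def total_overhead(num_bursts, guard_band):
    return num_bursts * guard_band

GB = 1.0                      # guard-band length per burst (arbitrary units)
unicast_bursts = 100
multicast_sessions = 20

s_mcast = total_overhead(unicast_bursts + multicast_sessions, GB)   # separate bursts
m_ucast = total_overhead(unicast_bursts, GB)                        # piggybacked
print(s_mcast, m_ucast)       # 120.0 vs 100.0 guard-band units
```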
---
paper_title: WDM multicasting in IP over WDM networks
paper_content:
Supporting WDM multicasting in an IP over WDM network poses interesting problems because some WDM switches may be incapable of switching an incoming signal to more than one output interface. An approach to WDM multicasting based on wavelength-routing, which constructs a multicast forest for each multicast session so that multicast-incapable WDM switches do not need to multicast, was proposed and evaluated previously. Such an approach requires global knowledge of the WDM layer. In this paper, we study WDM multicasting in an IP over WDM network under the framework of multiprotocol label switching (MPLS) using optical burst/label switching (OBS/OLS). We propose a protocol which modifies a multicast tree constructed by distance vector multicast routing protocol (DVMRP) into a multicast forest based on the local information only.
---
paper_title: QoS performance of optical burst switching in IP-over-WDM networks
paper_content:
We address the issue of how to provide basic quality of service (QoS) in optical burst-switched WDM networks with limited fiber delay lines (FDLs). Unlike existing buffer-based QoS schemes, the novel offset-time-based QoS scheme we study in this paper does not mandate any buffer for traffic isolation, but nevertheless can take advantage of FDLs to improve the QoS. This makes the proposed QoS scheme suitable for the next generation optical Internet. The offset times required for class isolation when making wavelength and FDL reservations are quantified, and the upper and lower bounds on the burst loss probability are analyzed. Simulations are also conducted to evaluate the QoS performance in terms of burst loss probability and queuing delay. We show that with limited FDLs, the offset-time-based QoS scheme can be very efficient in supporting basic QoS.
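A tiny, hypothetical sketch of the offset-time-based isolation described above: a higher-priority class adds an extra offset, so its reservation targets a point further in the future than same-time requests from lower-priority classes. Class labels and offset values are invented.
```python
# Offset-time-based service differentiation, reduced to an arithmetic sketch.
BASE_OFFSET = 2.0                       # processing offset used by all classes
EXTRA_OFFSET = {0: 0.0, 1: 5.0}         # class 1 = high priority (extra offset)

def reservation_start(control_arrival, service_class):
    """Time from which the wavelength is reserved for this burst."""
    return control_arrival + BASE_OFFSET + EXTRA_OFFSET[service_class]

# Two control packets arriving at t=10: the class-1 burst reserves from t=17,
# the class-0 burst from t=12, which helps isolate the classes from each other.
print(reservation_start(10.0, 1), reservation_start(10.0, 0))   # 17.0 12.0
```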
---
paper_title: Terabit burst switching
paper_content:
Demand for network bandwidth is growing at unprecedented rates, placing growing demands on switching and transmission technologies. Wavelength division multiplexing will soon make it possible to combine hundreds of gigabit channels on a single fiber. This paper presents an architecture for Burst Switching Systems designed to switch data among WDM links, treating each link as a shared resource rather than just a collection of independent channels. The proposed network architecture separates burst level data and control, allowing major simplifications in the data path in order to facilitate all-optical implementations. To handle short data bursts efficiently, the burst level control mechanisms in burst switching systems must keep track of future resource availability when assigning arriving data bursts to channels or storage locations. The resulting Lookahead Resource Management problems raise new issues and require the invention of completely new types of high speed control mechanisms. This paper introduces these problems and describes approaches to burst level resource management that attempt to strike an appropriate balance between high speed operation and efficiency of resource usage.
---
paper_title: On fundamental issues in IP over WDM multicast
paper_content:
As WDM technology matures, IP over WDM multicast will become a challenging new topic. Supporting multicast at the WDM layer provides additional advantages, but also raises many new issues that do not exist in IP multicast. For example, the limitation on the light splitting capability of switches is one major difficulty in WDM multicast, and in addition, the limitations on both the wavelength conversion capability and optical buffer space may affect multicast routing as well. In this paper, we focus on the IP over WDM multicast routing problem, i.e. how to construct multicast trees at the WDM layer based on IP multicast routing protocols. More specifically, we study how label switched paths for optical label switching can be set up for multicast traffic. We propose two approaches, one without modification of existing IP multicast routing protocols, and the other with modification of existing IP multicast routing protocols.
---
paper_title: Assembling TCP/IP packets in optical burst switched networks
paper_content:
Optical burst switching (OBS) is a promising paradigm for the next-generation Internet infrastructure. We study the performance of TCP traffic in OBS networks and in particular, the effect of assembly algorithms on TCP traffic. We describe three assembly algorithms in this paper and compare them using the same TCP traffic input. The results show that the performance of the proposed adaptive-assembly-period (AAP) algorithm is better than that of the min-burstlength-max-assembly-period (MBMAP) algorithm and the fixed-assembly-period (FAP) algorithm in terms of goodput and data loss rate. The results also indicate that burst assembly mechanisms affect the behavior of TCP in that the assembled TCP traffic becomes smoother in the short term, and more suitable for transmission in optical networks.
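A minimal sketch of timer- and size-driven burst assembly at an OBS edge node (the thresholds and names are illustrative, not those of the FAP, MBMAP or AAP algorithms themselves): packets are buffered per egress and flushed into a burst when either the assembly period expires or a maximum burst length is reached.

import time

class BurstAssembler:
    """Per-egress buffer flushed into a burst on either a size threshold or timer expiry."""

    def __init__(self, assembly_period=0.005, max_burst_bytes=16000):
        self.assembly_period = assembly_period
        self.max_burst_bytes = max_burst_bytes
        self.buffer, self.size, self.started = [], 0, None

    def add_packet(self, packet, now):
        if self.started is None:
            self.started = now                    # first packet starts the assembly timer
        self.buffer.append(packet)
        self.size += len(packet)
        if self.size >= self.max_burst_bytes:
            return self._flush()                  # size-triggered burst
        return None

    def tick(self, now):
        if self.started is not None and now - self.started >= self.assembly_period:
            return self._flush()                  # timer-triggered burst
        return None

    def _flush(self):
        burst = b"".join(self.buffer)
        self.buffer, self.size, self.started = [], 0, None
        return burst

if __name__ == "__main__":
    asm, t0 = BurstAssembler(), time.time()
    for _ in range(12):                           # 12 x 1500 B packets: the 11th triggers a burst
        burst = asm.add_packet(b"x" * 1500, time.time() - t0)
        if burst:
            print("size-triggered burst:", len(burst), "bytes")
    leftover = asm.tick(time.time() - t0 + 1.0)   # force the timer for the remaining packet
    print("timer-triggered burst:", len(leftover) if leftover else 0, "bytes")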
---
paper_title: Efficient multicast schemes for optical burst-switched WDM networks
paper_content:
In this paper, we study several multicast schemes in optical burst-switched WDM networks, taking into consideration the overheads due to control packets and guard bands (GBs) of bursts on separate channels (wavelengths). A straightforward scheme is called separate multicasting (S-MCAST), where each source node constructs separate bursts for its multicast (one per multicast session) and unicast traffic. To reduce the overhead due to GBs (and control packets), one may piggyback the multicast traffic in bursts containing unicast traffic using a scheme called multiple unicasting (M-UCAST). The third scheme is called tree-shared multicasting (TS-MCAST), whereby multicast traffic belonging to multiple multicast sessions can be mixed together in a burst, which is delivered via a shared multicast tree. The multicast schemes (M-UCAST and TS-MCAST) are compared with S-MCAST in terms of bandwidth consumed and processing load.
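A back-of-the-envelope Python sketch of the trade-off described above (the cost model is an illustrative assumption, not the one used in the cited paper): each burst costs one control packet and one guard band, M-UCAST replicates the multicast payload once per destination, and TS-MCAST merges all sessions into a single burst on a shared tree.

def per_cycle_cost(n_unicast_bursts, session_sizes, payload, guard_band=100, ctrl_packet=50):
    """session_sizes: destination count per multicast session; payload: multicast bytes per session.
    Unicast payload is identical under all three schemes and therefore omitted."""
    def cost(n_bursts, data_bytes):
        return {"bursts": n_bursts,
                "data+GB bytes": data_bytes + n_bursts * guard_band,
                "control bytes": n_bursts * ctrl_packet}

    s_mcast = cost(n_unicast_bursts + len(session_sizes), len(session_sizes) * payload)
    m_ucast = cost(n_unicast_bursts, payload * sum(session_sizes))       # one copy per destination
    ts_mcast = cost(n_unicast_bursts + 1, len(session_sizes) * payload)  # all sessions share one tree
    return {"S-MCAST": s_mcast, "M-UCAST": m_ucast, "TS-MCAST": ts_mcast}

if __name__ == "__main__":
    from pprint import pprint
    pprint(per_cycle_cost(n_unicast_bursts=20, session_sizes=[5, 5, 8], payload=4000))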
---
paper_title: On fundamental issues in IP over WDM multicast
paper_content:
As WDM technology matures, IP over WDM multicast will become a challenging new topic. Supporting multicast at the WDM layer provides additional advantages, but also raises many new issues that do not exist in IP multicast. For example, the limitation on the light splitting capability of switches is one major difficulty in WDM multicast, and in addition, the limitations on both the wavelength conversion capability and optical buffer space may affect multicast routing as well. In this paper, we focus on the IP over WDM multicast routing problem, i.e. how to construct multicast trees at the WDM layer based on IP multicast routing protocols. More specifically, we study how label switched paths for optical label switching can be set up for multicast traffic. We propose two approaches, one without modification of existing IP multicast routing protocols, and the other with modification of existing IP multicast routing protocols.
---
paper_title: Control architecture in optical burst-switched WDM networks
paper_content:
Optical burst switching (OBS) is a promising solution for building terabit optical routers and realizing IP over WDM. In this paper, we describe the basic concept of OBS and present a general architecture of optical core routers and electronic edge routers in the OBS network. The key design issues related to the OBS are also discussed, namely, burst assembly (burstification), channel scheduling, burst offset-time management, and some dimensioning rules. A nonperiodic time-interval burst assembly mechanism is described. A class of data channel scheduling algorithms with void filling is proposed for optical routers using a fiber delay line buffer. The LAUC-VF (latest available unused channel with void filling) channel scheduling algorithm is studied in detail. Initial results on the burst traffic characteristics and on the performance of optical routers in the OBS network with self-similar traffic as inputs are reported in the paper.
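A simplified sketch of the void-filling channel selection idea behind LAUC-VF (a minimal illustration, not the production algorithm): each data channel keeps its reservations sorted, and the burst is placed on the channel whose idle gap containing the burst starts latest, which minimises the void created in front of the burst.

import bisect

def lauc_vf(channels, t_start, t_end):
    """channels: list of sorted, non-overlapping (start, end) reservation lists.
    Returns the index of the chosen channel, or None if the burst is blocked."""
    best, best_gap_start = None, -1.0
    for idx, res in enumerate(channels):
        i = bisect.bisect_right(res, (t_start, float("inf")))   # first reservation after t_start
        prev_end = res[i - 1][1] if i > 0 else 0.0
        next_start = res[i][0] if i < len(res) else float("inf")
        if prev_end <= t_start and t_end <= next_start:          # the burst fits in this gap
            if prev_end > best_gap_start:
                best, best_gap_start = idx, prev_end
    if best is not None:
        bisect.insort(channels[best], (t_start, t_end))
    return best

if __name__ == "__main__":
    chans = [[(0.0, 4.0), (9.0, 12.0)], [(0.0, 6.0)]]
    # Fits on both channels; channel 1 is chosen because its gap starts later (6.0 > 4.0).
    print(lauc_vf(chans, 6.5, 8.5))
    print(chans)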
---
paper_title: Constrained multicast routing in WDM networks with sparse light splitting
paper_content:
As WDM technology matures and multicast applications become increasingly popular, supporting multicast at the WDM layer becomes an important and yet challenging topic. In this paper, we study constrained multicast routing in WDM networks with sparse light splitting, i.e., where some switches are incapable of splitting light (or copying data in the optical domain). Specifically, we propose four WDM multicast routing algorithms, namely, Re-route-to-Source, Re-route-to-Any, Member-First, and Member-Only. Given the network topology, multicast membership information, and the light splitting capability of the switches, these algorithms construct a source-based multicast light-forest (consisting of one or more multicast trees) for each multicast session. The performance of these algorithms is compared in terms of the average number of wavelengths used per forest (or multicast session), the average number of branches involved (bandwidth) per forest, as well as the average number of hops encountered (delay) from a multicast source to a multicast member.
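A greedy Python sketch in the spirit of the Member-Only idea (simplified and with illustrative data structures, not the exact algorithm of the cited paper): members are attached one at a time via their shortest path to a tree node that may still branch, i.e. a splitting-capable node or a node with no downstream branch yet.

import heapq

def shortest_paths(graph, src, exclude=frozenset()):
    """Dijkstra from src, never entering nodes in `exclude`."""
    dist, prev, pq = {src: 0.0}, {}, [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph[u].items():
            if v in exclude:
                continue
            if d + w < dist.get(v, float("inf")):
                dist[v], prev[v] = d + w, u
                heapq.heappush(pq, (d + w, v))
    return dist, prev

def member_only_tree(graph, source, members, splitters):
    """Attach each member via its cheapest path to a tree node that may still branch."""
    children = {source: []}                  # tree stored as parent -> list of children
    remaining = set(members) - {source}
    while remaining:
        best = None                          # (cost, member, path from attachment node)
        for node in list(children):
            if node not in splitters and children[node]:
                continue                     # a splitting-incapable node may not branch again
            dist, prev = shortest_paths(graph, node, exclude=set(children) - {node})
            for m in remaining:
                if m in dist and (best is None or dist[m] < best[0]):
                    path = [m]
                    while path[-1] != node:
                        path.append(prev[path[-1]])
                    best = (dist[m], m, path[::-1])
        if best is None:
            raise ValueError("some members cannot be reached under the splitting constraint")
        _, member, path = best
        for a, b in zip(path, path[1:]):     # graft the path onto the tree
            children.setdefault(a, [])
            if b not in children[a]:
                children[a].append(b)
            children.setdefault(b, [])
        remaining.discard(member)
    return children

if __name__ == "__main__":
    g = {'s': {'a': 1, 'b': 1}, 'a': {'s': 1, 'c': 1}, 'b': {'s': 1, 'd': 1},
         'c': {'a': 1}, 'd': {'b': 1}}
    print(member_only_tree(g, 's', members={'c', 'd'}, splitters={'s'}))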
---
paper_title: Efficient multicast schemes for optical burst-switched WDM networks
paper_content:
In this paper, we study several multicast schemes in optical burst-switched WDM networks, taking into consideration the overheads due to control packets and guard bands (GBs) of bursts on separate channels (wavelengths). A straightforward scheme is called separate multicasting (S-MCAST), where each source node constructs separate bursts for its multicast (one per multicast session) and unicast traffic. To reduce the overhead due to GBs (and control packets), one may piggyback the multicast traffic in bursts containing unicast traffic using a scheme called multiple unicasting (M-UCAST). The third scheme is called tree-shared multicasting (TS-MCAST), whereby multicast traffic belonging to multiple multicast sessions can be mixed together in a burst, which is delivered via a shared multicast tree. The multicast schemes (M-UCAST and TS-MCAST) are compared with S-MCAST in terms of bandwidth consumed and processing load.
---
paper_title: Evaluation of multicast schemes in optical burst-switched networks: the case with dynamic sessions
paper_content:
In this paper, we evaluate the performance of several multicast schemes in optical burst-switched WDM networks, taking into account the overheads due to control packets and guard bands (GBs) of bursts on separate channels (wavelengths). A straightforward scheme is called Separate Multicasting (S-MCAST), where each source node constructs separate bursts for its multicast (one per multicast session) and unicast traffic. To reduce the overhead due to GBs (and control packets), one may piggyback the multicast traffic in bursts containing unicast traffic using a scheme called Multiple Unicasting (M-UCAST). The third scheme is called Tree-Shared Multicasting (TS-MCAST), whereby multicast traffic belonging to multiple multicast sessions can be mixed together in a burst, which is delivered via a shared multicast tree. In [1], we have evaluated several multicast schemes with static sessions at the flow level. In this paper, we perform a simple analysis of the multicast schemes and evaluate the performance of the three multicast schemes, focusing on the case with dynamic sessions, in terms of link utilization, bandwidth consumption, blocking (loss) probability, goodput and processing load.
---
paper_title: Distributed shared multicast tree construction protocols for tree-shared multicasting in OBS networks
paper_content:
Tree-shared multicasting in OBS networks can achieve bandwidth savings, less processing load, and lower burst blocking (loss) probability. In this paper, we propose several distributed shared multicast tree construction protocols, namely greedy-prune, non-member-join, all-member-join, closest-member on-tree (CMOT), and closest-node on-tree (CNOT), for tree-shared multicasting in OBS networks. For performance comparison, we also consider an optimal shared tree, which is modeled as a Steiner minimal tree. We evaluate the proposed protocols using simulations in terms of the cost of the shared tree relative to that of the optimal shared tree. Simulations show that the CNOT and CMOT protocols outperform the other three proposed protocols in terms of the cost of the shared tree, and perform close to the cost of the optimal shared tree.
---
paper_title: Efficient multicast schemes for optical burst-switched WDM networks
paper_content:
In this paper, we study several multicast schemes in optical burst-switched WDM networks, taking into consideration the overheads due to control packets and guard bands (GBs) of bursts on separate channels (wavelengths). A straightforward scheme is called separate multicasting (S-MCAST), where each source node constructs separate bursts for its multicast (one per multicast session) and unicast traffic. To reduce the overhead due to GBs (and control packets), one may piggyback the multicast traffic in bursts containing unicast traffic using a scheme called multiple unicasting (M-UCAST). The third scheme is called tree-shared multicasting (TS-MCAST), whereby multicast traffic belonging to multiple multicast sessions can be mixed together in a burst, which is delivered via a shared multicast tree. The multicast schemes (M-UCAST and TS-MCAST) are compared with S-MCAST in terms of bandwidth consumed and processing load.
---
paper_title: On a new multicasting approach in optical burst switched networks
paper_content:
We introduce a new multicasting approach, called tree-shared multicast (TS-MCAST), in order to alleviate overheads due to control packets and guard bands associated with data bursts when transporting multicast IP traffic in optical burst-switched WDM networks. We describe three tree sharing strategies and discuss implementation issues in constructing shared multicast trees for supporting TS-MCAST. Finally, we show the efficiency of TS-MCAST using simulation results.
---
paper_title: Efficient multicast schemes for optical burst-switched WDM networks
paper_content:
In this paper, we study several multicast schemes in optical burst-switched WDM networks, taking into consideration the overheads due to control packets and guard bands (GBs) of bursts on separate channels (wavelengths). A straightforward scheme is called separate multicasting (S-MCAST), where each source node constructs separate bursts for its multicast (one per multicast session) and unicast traffic. To reduce the overhead due to GBs (and control packets), one may piggyback the multicast traffic in bursts containing unicast traffic using a scheme called multiple unicasting (M-UCAST). The third scheme is called tree-shared multicasting (TS-MCAST), whereby multicast traffic belonging to multiple multicast sessions can be mixed together in a burst, which is delivered via a shared multicast tree. The multicast schemes (M-UCAST and TS-MCAST) are compared with S-MCAST in terms of bandwidth consumed and processing load.
---
paper_title: On a new multicasting approach in optical burst switched networks
paper_content:
We introduce a new multicasting approach, called tree-shared multicast (TS-MCAST), in order to alleviate overheads due to control packets and guard bands associated with data bursts when transporting multicast IP traffic in optical burst-switched WDM networks. We describe three tree sharing strategies and discuss implementation issues in constructing shared multicast trees for supporting TS-MCAST. Finally, we show the efficiency of TS-MCAST using simulation results.
---
paper_title: Tree-shared multicast in optical burst-switched WDM networks
paper_content:
In this paper, we propose a new multicast scheme called tree-shared multicasting (TS-MCAST) in optical burst-switched wavelength-division-multiplexing networks, taking into consideration overheads due to control packets and guard bands (GBs) associated with data bursts. In TS-MCAST, multicast traffic belonging to multiple multicast sessions from the same source-edge node to possibly different destination-edge nodes can be multiplexed together in a data burst, which is delivered via a shared multicast tree. To support TS-MCAST, we propose three tree-sharing strategies based on equal coverage, super coverage, and overlapping coverage, and present a simple shared multicast tree-construction algorithm. For performance comparison, we consider two other multicast schemes: separate multicasting (S-MCAST) and multiple unicasting (M-UCAST). We show that TS-MCAST outperforms S-MCAST and M-UCAST in terms of bandwidth consumed and processing load (i.e., number of control packets) incurred for a given amount of multicast traffic under the same unicast traffic load with static multicast sessions and membership.
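A small Python sketch of how sessions from the same ingress might be grouped for tree sharing (the grouping rule and threshold are illustrative assumptions, not the exact equal/super/overlapping-coverage definitions of the cited paper):

def classify(a, b, overlap_threshold=0.5):
    """a, b: sets of egress nodes of two sessions with the same ingress."""
    if a == b:
        return "equal"
    if a <= b or b <= a:
        return "super"
    jaccard = len(a & b) / len(a | b)
    return "overlapping" if jaccard >= overlap_threshold else "disjoint-enough"

def share_groups(sessions, overlap_threshold=0.5):
    """sessions: dict session_id -> (ingress, frozenset of egresses).
    Greedy grouping: a session joins the first group it can share a tree with."""
    groups = []                                   # each group: [ingress, covered egresses, session ids]
    for sid, (ingress, egresses) in sessions.items():
        for g in groups:
            if g[0] == ingress and classify(g[1], egresses, overlap_threshold) != "disjoint-enough":
                g[1].update(egresses)             # the shared tree must cover the union
                g[2].append(sid)
                break
        else:
            groups.append([ingress, set(egresses), [sid]])
    return groups

if __name__ == "__main__":
    sessions = {1: ("E1", frozenset({"E3", "E4"})),
                2: ("E1", frozenset({"E3", "E4", "E5"})),
                3: ("E1", frozenset({"E7"})),
                4: ("E2", frozenset({"E3", "E4"}))}
    for ingress, cover, ids in share_groups(sessions):
        print(ingress, sorted(cover), "shared by sessions", ids)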
---
paper_title: Efficient multicast schemes for optical burst-switched WDM networks
paper_content:
In this paper, we study several multicast schemes in optical burst-switched WDM networks, taking into consideration the overheads due to control packets and guard bands (GBs) of bursts on separate channels (wavelengths). A straightforward scheme is called separate multicasting (S-MCAST), where each source node constructs separate bursts for its multicast (one per multicast session) and unicast traffic. To reduce the overhead due to GBs (and control packets), one may piggyback the multicast traffic in bursts containing unicast traffic using a scheme called multiple unicasting (M-UCAST). The third scheme is called tree-shared multicasting (TS-MCAST), whereby multicast traffic belonging to multiple multicast sessions can be mixed together in a burst, which is delivered via a shared multicast tree. The multicast schemes (M-UCAST and TS-MCAST) are compared with S-MCAST in terms of bandwidth consumed and processing load.
---
paper_title: On a new multicasting approach in optical burst switched networks
paper_content:
We introduce a new multicasting approach, called tree-shared multicast (TS-MCAST), in order to alleviate overheads due to control packets and guard bands associated with data bursts when transporting multicast IP traffic in optical burst-switched WDM networks. We describe three tree sharing strategies and discuss implementation issues in constructing shared multicast trees for supporting TS-MCAST. Finally, we show the efficiency of TS-MCAST using simulation results.
---
paper_title: Tree-shared multicast in optical burst-switched WDM networks
paper_content:
In this paper, we propose a new multicast scheme called tree-shared multicasting (TS-MCAST) in optical burst-switched wavelength-division-multiplexing networks, taking into consideration overheads due to control packets and guard bands (GBs) associated with data bursts. In TS-MCAST, multicast traffic belonging to multiple multicast sessions from the same source-edge node to possibly different destination-edge nodes can be multiplexed together in a data burst, which is delivered via a shared multicast tree. To support TS-MCAST, we propose three tree-sharing strategies based on equal coverage, super coverage, and overlapping coverage, and present a simple shared multicast tree-construction algorithm. For performance comparison, we consider two other multicast schemes: separate multicasting (S-MCAST) and multiple unicasting (M-UCAST). We show that TS-MCAST outperforms S-MCAST and M-UCAST in terms of bandwidth consumed and processing load (i.e., number of control packets) incurred for a given amount of multicast traffic under the same unicast traffic load with static multicast sessions and membership.
---
paper_title: Efficient multicast schemes for optical burst-switched WDM networks
paper_content:
In this paper, we study several multicast schemes in optical burst-switched WDM networks, taking into consideration the overheads due to control packets and guard bands (GBs) of bursts on separate channels (wavelengths). A straightforward scheme is called separate multicasting (S-MCAST), where each source node constructs separate bursts for its multicast (one per multicast session) and unicast traffic. To reduce the overhead due to GBs (and control packets), one may piggyback the multicast traffic in bursts containing unicast traffic using a scheme called multiple unicasting (M-UCAST). The third scheme is called tree-shared multicasting (TS-MCAST), whereby multicast traffic belonging to multiple multicast sessions can be mixed together in a burst, which is delivered via a shared multicast tree. The multicast schemes (M-UCAST and TS-MCAST) are compared with S-MCAST in terms of bandwidth consumed and processing load.
---
paper_title: On a new multicasting approach in optical burst switched networks
paper_content:
We introduce a new multicasting approach, called tree-shared multicast (TS-MCAST), in order to alleviate overheads due to control packets and guard bands associated with data bursts when transporting multicast IP traffic in optical burst-switched WDM networks. We describe three tree sharing strategies and discuss implementation issues in constructing shared multicast trees for supporting TS-MCAST. Finally, we show the efficiency of TS-MCAST using simulation results.
---
paper_title: Tree-shared multicast in optical burst-switched WDM networks
paper_content:
In this paper, we propose a new multicast scheme called tree-shared multicasting (TS-MCAST) in optical burst-switched wavelength-division-multiplexing networks, taking into consideration overheads due to control packets and guard bands (GBs) associated with data bursts. In TS-MCAST, multicast traffic belonging to multiple multicast sessions from the same source-edge node to possibly different destination-edge nodes can be multiplexed together in a data burst, which is delivered via a shared multicast tree. To support TS-MCAST, we propose three tree-sharing strategies based on equal coverage, super coverage, and overlapping coverage, and present a simple shared multicast tree-construction algorithm. For performance comparison, we consider two other multicast schemes: separate multicasting (S-MCAST) and multiple unicasting (M-UCAST). We show that TS-MCAST outperforms S-MCAST and M-UCAST in terms of bandwidth consumed and processing load (i.e., number of control packets) incurred for a given amount of multicast traffic under the same unicast traffic load with static multicast sessions and membership.
---
paper_title: Efficient multicast schemes for optical burst-switched WDM networks
paper_content:
In this paper, we study several multicast schemes in optical burst-switched WDM networks, taking into consideration the overheads due to control packets and guard bands (GBs) of bursts on separate channels (wavelengths). A straightforward scheme is called separate multicasting (S-MCAST), where each source node constructs separate bursts for its multicast (one per multicast session) and unicast traffic. To reduce the overhead due to GBs (and control packets), one may piggyback the multicast traffic in bursts containing unicast traffic using a scheme called multiple unicasting (M-UCAST). The third scheme is called tree-shared multicasting (TS-MCAST), whereby multicast traffic belonging to multiple multicast sessions can be mixed together in a burst, which is delivered via a shared multicast tree. The multicast schemes (M-UCAST and TS-MCAST) are compared with S-MCAST in terms of bandwidth consumed and processing load.
---
paper_title: A fast algorithm for Steiner trees
paper_content:
Given an undirected distance graph G=(V, E, d) and a set S, where V is the set of vertices in G, E is the set of edges in G, d is a distance function which maps E into the set of nonnegative numbers and S ⊆ V is a subset of the vertices of V, the Steiner tree problem is to find a tree of G that spans S with minimal total distance on its edges. In this paper, we analyze a heuristic algorithm for the Steiner tree problem. The heuristic algorithm has a worst case time complexity of O(|S| |V|^2) on a random access computer and it guarantees to output a tree that spans S with total distance on its edges no more than 2(1 - 1/l) times that of the optimal tree, where l is the number of leaves in the optimal tree.
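A compact Python sketch in the spirit of this distance-network heuristic (simplified: the final re-MST and pruning refinements are omitted, so this illustrates the idea rather than reproducing the exact algorithm): build the complete distance graph over the terminal set S, take its minimum spanning tree, and expand each MST edge back into a shortest path in G.

import heapq

def dijkstra(graph, src):
    dist, prev, pq = {src: 0.0}, {}, [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph[u].items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    return dist, prev

def steiner_heuristic(graph, terminals):
    terminals = list(terminals)
    sp = {t: dijkstra(graph, t) for t in terminals}          # shortest paths from every terminal
    in_tree, mst_edges = {terminals[0]}, []                   # Prim's MST on the distance graph over S
    while len(in_tree) < len(terminals):
        u, v = min(((a, b) for a in in_tree for b in terminals if b not in in_tree),
                   key=lambda e: sp[e[0]][0].get(e[1], float("inf")))
        mst_edges.append((u, v))
        in_tree.add(v)
    steiner_edges = set()                                     # expand MST edges into real paths
    for u, v in mst_edges:
        prev = sp[u][1]
        node = v
        while node != u:
            steiner_edges.add(tuple(sorted((prev[node], node))))
            node = prev[node]
    return steiner_edges

if __name__ == "__main__":
    g = {'a': {'x': 1, 'b': 4}, 'b': {'x': 1, 'a': 4, 'c': 4},
         'c': {'x': 1, 'b': 4}, 'x': {'a': 1, 'b': 1, 'c': 1}}
    print(steiner_heuristic(g, {'a', 'b', 'c'}))              # the star through 'x' is returned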
---
paper_title: Tree-shared multicast in optical burst-switched WDM networks
paper_content:
In this paper, we propose a new multicast scheme called tree-shared multicasting (TS-MCAST) in optical burst-switched wavelength-division-multiplexing networks, taking into consideration overheads due to control packets and guard bands (GBs) associated with data bursts. In TS-MCAST, multicast traffic belonging to multiple multicast sessions from the same source-edge node to possibly different destination-edge nodes can be multiplexed together in a data burst, which is delivered via a shared multicast tree. To support TS-MCAST, we propose three tree-sharing strategies based on equal coverage, super coverage, and overlapping coverage, and present a simple shared multicast tree-construction algorithm. For performance comparison, we consider two other multicast schemes: separate multicasting (S-MCAST) and multiple unicasting (M-UCAST). We show that TS-MCAST outperforms S-MCAST and M-UCAST in terms of bandwidth consumed and processing load (i.e., number of control packets) incurred for a given amount of multicast traffic under the same unicast traffic load with static multicast sessions and membership.
---
paper_title: Evaluation of multicast schemes in optical burst-switched networks: the case with dynamic sessions
paper_content:
In this paper, we evaluate the performance of several multicast schemes in optical burst-switched WDM networks, taking into account the overheads due to control packets and guard bands (GBs) of bursts on separate channels (wavelengths). A straightforward scheme is called Separate Multicasting (S-MCAST), where each source node constructs separate bursts for its multicast (one per multicast session) and unicast traffic. To reduce the overhead due to GBs (and control packets), one may piggyback the multicast traffic in bursts containing unicast traffic using a scheme called Multiple Unicasting (M-UCAST). The third scheme is called Tree-Shared Multicasting (TS-MCAST), whereby multicast traffic belonging to multiple multicast sessions can be mixed together in a burst, which is delivered via a shared multicast tree. In [1], we have evaluated several multicast schemes with static sessions at the flow level. In this paper, we perform a simple analysis of the multicast schemes and evaluate the performance of the three multicast schemes, focusing on the case with dynamic sessions, in terms of link utilization, bandwidth consumption, blocking (loss) probability, goodput and processing load.
---
|
Title: A Survey of Multicasting in Optical Burst Switched Networks: Future Research Directions
Section 1: INTRODUCTION
Description 1: Summarize the introduction to Optical Burst Switching (OBS) technology, its benefits, and the relevance of multicast services over OBS networks.
Section 2: Optical Burst Switched Networks
Description 2: Discuss the characteristics and operational principles of Optical Burst Switched Networks, including the burstification process, control packet creation, and the separation of control functions and data transmission.
Section 3: Multicasting in Optical Burst Switched Networks
Description 3: Provide an overview of multicasting in OBS networks, outlining the increasing demand for multimedia distribution, multicast tree construction, control packet transmission, and guard band considerations.
Section 4: MULTICASTING SCHEMES IN OPTICAL BURST SWITCHED NETWORKS
Description 4: Detail the various multicasting schemes in OBS, including Separate Multicasting (S-MCAST), Multiple Unicasting (M-UCAST), and Tree Shared Multicasting (TS-MCAST), and evaluate their performance.
Section 5: TREE SHARING MULTICASTING
Description 5: Explain the concept of tree sharing in multicasting, including different strategies for tree sharing and the construction of shared trees using algorithms like Greedy, Breadth First Search, and Member Initiated.
Section 6: MULTICAST SCHEMES FOR DYNAMIC SESSIONS AND MEMBERSHIP
Description 6: Discuss the extension of multicast schemes to support dynamic sessions and membership changes, including approaches for dynamic sessions and re-grooming strategies for dynamic membership.
Section 7: Small Group Multicast with Deflection Routing
Description 7: Describe the OXCast multicast scheme for small group multicasting in OBS, including its approach to deflection routing and its impact on burst loss probability and delay.
Section 8: CONCLUSION AND FUTURE RESEARCH DIRECTIONS
Description 8: Summarize the key findings and identify the serious problems and areas that require further research, such as QoS-aware multicast sessions, optimal resource utilization, and the impact on business models.
|
A Review of Network Based Mobility Management Schemes, WSN Mobility in 6LoWPAN Domain and Open Challenges
| 6 |
---
paper_title: Performance evaluation of multihomed NEMO
paper_content:
Mobile networks can be formed in buses, trains, aircraft and satellites with a wide variety of on-board IP-enabled devices, and Network Mobility (NEMO) protocols are required to support uninterrupted services to ongoing sessions. Earlier works have not demonstrated seamless handover for the NEMO architecture. In this work, we propose a handover scheme for NEMO that exploits the multi-homing feature of the Mobile Router and uses a make-before-break strategy to ensure seamless handover for NEMO. Using an experimental testbed, we present a thorough handoff performance evaluation of multihomed NEMO and compare it with basic NEMO. Results demonstrate that the proposed multihomed NEMO outperforms the basic NEMO while achieving seamless handover.
---
paper_title: Proxy Mobile IPv6
paper_content:
Network-based mobility management enables IP mobility for a host without requiring its participation in any mobility-related signaling. The network is responsible for managing IP mobility on behalf of the host. The mobility entities in the network are responsible for tracking the movements of the host and initiating the required mobility signaling on its behalf. This specification describes a network-based mobility management protocol and is referred to as Proxy Mobile IPv6. [STANDARDS-TRACK]
---
paper_title: Routing and mobility approaches in IPv6 over LoWPAN mesh networks
paper_content:
It is foreseeable that any object in the near future will have an Internet connection—this is the Internet of Things vision. All these objects will be able to exchange and process information, most of them characterized by small size, power constrained, small computing and storage resources. In fact, connecting embedded low-power devices to the Internet is considered the biggest challenge and opportunity for the Internet. There is a strong trend of convergence towards an Internet-based solution and the 6LoWPAN may be the convergence solution to achieve the Internet of Things vision. Wireless mesh networks have attracted the interest of the scientific community in recent years. One of the key characteristics of wireless mesh networks is the ability to self-organize and self-configure. Mesh networking and mobility support are considered crucial to the Internet of Things success. This paper surveys the available solutions proposed to support routing and mobility over 6LoWPAN mesh networks.
---
paper_title: Comparative Handover Performance Analysis of IPv6 Mobility Management Protocols
paper_content:
IPv6 mobility management is one of the most challenging research topics for enabling mobility service in the forthcoming mobile wireless ecosystems. The Internet Engineering Task Force has been working for developing efficient IPv6 mobility management protocols. As a result, Mobile IPv6 and its extensions such as Fast Mobile IPv6 and Hierarchical Mobile IPv6 have been developed as host-based mobility management protocols. While the host-based mobility management protocols were being enhanced, the network-based mobility management protocols such as Proxy Mobile IPv6 (PMIPv6) and Fast Proxy Mobile IPv6 (FPMIPv6) have been standardized. In this paper, we analyze and compare existing IPv6 mobility management protocols including the recently standardized PMIPv6 and FPMIPv6. We identify each IPv6 mobility management protocol's characteristics and performance indicators by examining handover operations. Then, we analyze the performance of the IPv6 mobility management protocols in terms of handover latency, handover blocking probability, and packet loss. Through the conducted numerical results, we summarize considerations for handover performance.
---
paper_title: A 6LoWPAN Sensor Node Mobility Scheme Based on Proxy Mobile IPv6
paper_content:
In this paper, we focus on a scheme that supports mobility for IPv6 over Low power Wireless Personal Area Network (6LoWPAN) sensor nodes. We define a protocol for 6LoWPAN mobile sensor node, named 6LoMSN, based on Proxy Mobile IPv6 (PMIPv6). The conventional PMIPv6 standard supports only single-hop networks and cannot be applied to multihop-based 6LoWPAN. It does not support the mobility of 6LoMSNs and 6LoWPAN gateways, named 6LoGW, cannot detect the PAN attachment of the 6LoMSN. Therefore, we define the movement notification of a 6LoMSN in order to support its mobility in multihop-based 6LoWPAN environments. The attachment of 6LoMSNs reduces signaling costs over the wireless link by using router solicitation (RS) and router advertisement (RA) messages. Performance results show that our proposed scheme can minimize the total signaling costs and handoff latency. Additionally, we present the design and implementation of the 6LoMSN mobility based on PMIPv6 for a healthcare system. According to the experimental results, the 6LoMSN of the proposed PMIPv6-based 6LoWPAN can be expected to use more of the battery lifetime. We also verify that the 6LoMSN can maintain connectivity, even though it has the freedom of being able to move between PANs without a mobility protocol stack.
---
paper_title: eHealth service support in IPv6 vehicular networks
paper_content:
Recent vehicular networking activities include public vehicle to vehicle/infrastructure (V2X) large scale deployment, machine-to-machine (M2M) integration scenarios and more automotive applications. eHealth is about the use of the Internet to disseminate health related information, and is one of the promising Internet of Things (IoT) applications. Combining vehicular networking and eHealth to record and transmit a patient's vital signs is a special telemedicine application that helps hospital resident health professionals to optimally prepare the patient's admittance. From the automotive perspective, this is a typical Vehicle-to-Infrastructure (V2I) communication scenario. This proposal provides an IPv6 vehicular platform which integrates eHealth devices and allows sending captured health-related data to a Personal Health Record (PHR) application server in the IPv6 Internet. The collected data is viewed remotely by a doctor and supports a diagnostic decision. This paper introduces the integration of vehicular and eHealth testbeds, describes related work and presents a lightweight auto-configuration method based on a DHCPv6 extension to provide IPv6 connectivity for resource constrained devices.
---
paper_title: F-LQE: a fuzzy link quality estimator for wireless sensor networks
paper_content:
Radio Link Quality Estimation (LQE) is a fundamental building block for Wireless Sensor Networks, namely for reliable deployment, resource management and routing. Existing LQEs (e.g. PRR, ETX, Four-bit, and LQI) are based on a single link property, thus leading to inaccurate estimation. In this paper, we propose F-LQE, which estimates link quality on the basis of four link quality properties: packet delivery, asymmetry, stability, and channel quality. Each of these properties is defined in linguistic terms, the natural language of Fuzzy Logic. The overall quality of the link is specified as a fuzzy rule whose evaluation returns the membership of the link in the fuzzy subset of good links. Values of the membership function are smoothed using an EWMA filter to improve stability. An extensive experimental analysis shows that F-LQE outperforms existing estimators.
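A minimal Python sketch of a fuzzy link-quality score in the spirit of F-LQE (the membership functions, weights and smoothing factor are illustrative assumptions, not the values of the cited estimator):

def membership(value, low, high):
    """Piecewise-linear membership: 0 at or below `low`, 1 at or above `high`."""
    if value <= low:
        return 0.0
    if value >= high:
        return 1.0
    return (value - low) / (high - low)

def fuzzy_link_quality(prr, asymmetry, prr_variability, snr_db):
    m_delivery = membership(prr, 0.3, 0.95)                     # packet delivery
    m_symmetry = 1.0 - membership(asymmetry, 0.0, 0.4)          # low asymmetry is good
    m_stability = 1.0 - membership(prr_variability, 0.0, 0.5)   # a stable PRR is good
    m_channel = membership(snr_db, 0.0, 20.0)                   # channel quality
    memberships = (m_delivery, m_symmetry, m_stability, m_channel)
    beta = 0.6                                                  # fuzzy AND: mix of min and mean
    return beta * min(memberships) + (1 - beta) * sum(memberships) / len(memberships)

class SmoothedLQE:
    """EWMA smoothing of the raw fuzzy score."""
    def __init__(self, alpha=0.9):
        self.alpha, self.value = alpha, None

    def update(self, raw):
        self.value = raw if self.value is None else self.alpha * self.value + (1 - self.alpha) * raw
        return self.value

if __name__ == "__main__":
    lqe = SmoothedLQE()
    for prr in (0.90, 0.95, 0.60, 0.92):
        score = fuzzy_link_quality(prr, asymmetry=0.1, prr_variability=0.2, snr_db=15.0)
        print(round(lqe.update(score), 3))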
---
paper_title: Mobile multimedia in wireless sensor networks
paper_content:
One of the most cited and promising Wireless Sensor Network (WSN) applications is health monitoring. The small size and portability of nodes have made WSNs the perfect tool to easily monitor a person's health condition. In this type of application, as well as in several other critical applications, reliability and mobility are paramount. In this paper we propose a method, based on WSNs and mobile intra-body sensors, to accurately detect the fertile period of women in time, and other applications based on intra-vaginal temperature monitoring. In addition, our proposal also introduces intra-body micro-cameras to monitor the woman's cervix, capable of detecting related pathologies. To efficiently support this mobile multimedia application, guaranteeing reliability in a continuous monitoring mode, we make use of a new WSN paradigm based on mobility proxies.
---
paper_title: Review: Mobility management for IP-based next generation mobile networks: Review, challenge and perspective
paper_content:
IP Mobility management protocols are divided into two kinds of category: host-based and network-based mobility protocol. The former category, such as MIPv6 protocol and its enhancements (e.g., HMIPv6 and FMIPv6), supports the mobility of a Mobile Node (MN) to roam across network domains. This is done through the involvement of MN in the mobility-related signalling, which requires protocol stack modification and IP address changes on the MN. The latter category, such as PMIPv6 protocol, handles mobility management on behalf of the MN thereby enabling it to connect and roam within localized domains, which requires neither protocol stack modification nor IP address change of the MN. PMIPv6 attracts attention in the Internet and telecommunication societies by improving the performance of the MN's communication to fulfil the requirements of QoS for real-time services. In this article, we present IPv6 features to support mobile systems and survey the mobility management services along with their techniques, strategies and protocol categories, and elaborate upon the classification and comparison among various mobility management protocols. Furthermore, it identifies and discusses several issues and challenges facing mobility management along with an evaluation and comparison of several relevant mobility studies.
---
paper_title: Mobility Support for Health Monitoring at Home Using Wearable Sensors
paper_content:
We present a simple but effective handoff protocol that enables continuous monitoring of ambulatory patients at home by means of resource-limited sensors. Our proposed system implements a 2-tier network: one created by wearable sensors used for vital signs collection, and another by a point-to-point link established between the body sensor network coordinator device and a fixed access point (AP). Upon experiencing poor signal reception in the latter network tier when the patient moves, the AP may instruct the sensor network coordinator to forward vital signs data through one of the wearable sensor nodes acting as a temporary relay if the sensor-AP link has a stronger signal. Our practical implementation of the proposed scheme reveals that this relayed data operation decreases the packet loss rate down to 20% of the value otherwise obtained when solely using the point-to-point coordinator-AP link. In particular, the wrist location yields the best results over alternative body sensor positions when patients walk at 0.5 m/s.
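A toy Python decision rule illustrating the relayed-uplink idea (the threshold and names are invented for the example, not taken from the cited system):

def choose_uplink(coordinator_ap_rssi, sensor_ap_rssi, threshold_dbm=-85.0):
    """sensor_ap_rssi: dict sensor_id -> RSSI (dBm) of that sensor's link to the AP.
    Keep the direct coordinator-AP link while it is acceptable, otherwise relay via
    the wearable sensor currently reporting the strongest sensor-to-AP link."""
    if coordinator_ap_rssi >= threshold_dbm or not sensor_ap_rssi:
        return "direct"
    best_sensor, best_rssi = max(sensor_ap_rssi.items(), key=lambda kv: kv[1])
    return best_sensor if best_rssi > coordinator_ap_rssi else "direct"

if __name__ == "__main__":
    print(choose_uplink(-80.0, {"wrist": -70.0, "chest": -88.0}))   # direct link kept
    print(choose_uplink(-92.0, {"wrist": -70.0, "chest": -88.0}))   # relay via the wrist sensor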
---
paper_title: Mobile IP-Based Protocol for Wireless Personal Area Networks in Critical Environments
paper_content:
Low-power Wireless Personal Area Networks (LoWPANs) are still in their early stage of development, but the range of conceivable usage scenarios and applications is tremendous. That range is extended by its inclusion in Internet with IPv6 Low-Power Personal Area Networks (6LoWPANs). This makes it obvious that multi-technology topologies, security and mobility support will be prevalent in 6LoWPAN. Mobility based communication increases the connectivity, and allows extending and adapting LoWPANs to changes in their location and environment infrastructure. However, the required mobility is heavily dependent on the individual service scenario and the LoWPAN architecture. In this context, an optimized solution is proposed for critical applications, such as military, fire rescue or healthcare, where people need to frequently change their position. Our scenario is health monitoring in an oil refinery where many obstacles have been found to the effective use of LoWPANs in these scenarios, mainly due to transmission medium features i.e. high losses, high latency and low reliability. Therefore, it is very difficult to provide continuous health monitoring with such stringent requirements on mobility. In this paper, a paradigm is proposed for mobility over 6LoWPAN for critical environments. On the one hand the intra-mobility is supported by GinMAC, which is an extension of IEEE 802.15.4 to support a topology control algorithm, which offers intra-mobility transparently, and Movement Direction Determination (MDD) of the Mobile Node (MN). On the other hand, the inter-mobility is based on pre-set-up of the network parameters in the visited networks, such as Care of Address and channel, to reach a fast and smooth handoff. Pre-set-up is reached since MDD allows discovering the next 6LoWPAN network towards which MN is moving. The proposed approach has been simulated, prototyped, evaluated, and is being studied in a scenario of wearable physiological monitoring in hazardous industrial areas, specifically oil refineries, in the scope of the GinSeng European project.
---
paper_title: An Enhanced Group Mobility Protocol for 6LoWPAN-Based Wireless Body Area Networks
paper_content:
The IPv6 over low power wireless personal area network (6LoWPAN) has attracted lots of attention recently because it can be used for the communications of the Internet of Things. In this paper, the concept of group-based network roaming in the proxy mobile IPv6 (PMIPv6) domain is considered in 6LoWPAN-based wireless body area networks. PMIPv6 is a standard to manage network-based mobility in all-IP wireless networks. However, it does not perform well in group-based body area networks. To further reduce the handoff delay and signaling cost, an enhanced group mobility scheme is proposed in this paper that reduces the number of control messages, including router solicitation and router advertisement messages, compared to the group-based PMIPv6 protocol. Simulation results illustrate that the proposed handoff scheme can reduce the handoff delay and signaling cost. The packet loss ratio and the overhead can also be reduced.
---
paper_title: A network-based mobility management scheme for future Internet
paper_content:
The current Internet was originally designed for “fixed” terminals and can hardly support mobility. It is necessary to develop new mobility management schemes for the future Internet. This paper proposes an Identifiers Separating and Mapping Scheme (ISMS), which is a candidate for future Internet mobility management, and discusses its basic principles and detailed message flow. ISMS is a network-based mobility management scheme that takes advantage of the identity and location separation. The mobility entities in the core network are responsible for the location management. ISMS is designed to satisfy the requirements of faster handover, route optimization, advanced management, location privacy and security. The average handover delay of ISMS is on the order of milliseconds only, which is far smaller than that of Mobile IPv6. Analyses show that ISMS can reduce packet overhead on wireless channels. We build a prototype and perform some experiments. Results verify the feasibility of ISMS.
---
paper_title: Inter-MARIO: A Fast and Seamless Mobility Protocol to Support Inter-Pan Handover in 6LoWPAN
paper_content:
Mobility management is one of the most important research issues in 6LoWPAN, an IP-based Wireless Sensor Network (IP-WSN) protocol under standardization. Since the IP-WSN application domain has expanded to real-time applications such as healthcare and surveillance systems, a fast and seamless handover becomes an important criterion for mobility support in 6LoWPAN. Unfortunately, existing mobility protocols for 6LoWPAN have not solved how to reduce handover delay, so we propose a new fast and seamless mobility protocol to support inter-PAN handover in 6LoWPAN, named inter-MARIO. In our protocol, a partner node, which serves as an access point for a mobile node, preconfigures the future handover of the mobile node by sending the mobile node's information to candidate neighbor PANs and providing neighbor PAN information, such as channel information, to the mobile node. Also, the preconfigured information enables the foreign agent to send a surrogate binding update message to a home agent instead of the mobile node. By the preconfiguration and surrogate binding update, inter-MARIO decreases channel scan delay and binding message exchange delay, which are elements of handover delay. Additionally, we define a compression method for binding messages, which achieves more compression than existing methods by reducing redundant fields. We compare signaling cost and binding message exchange delay with existing mobility protocols analytically, and we evaluate handover delay by simulation. Analysis and simulation results indicate that our approach has promising fast, seamless, and lightweight properties.
---
paper_title: Sensor Proxy Mobile IPv6 (SPMIPv6) - A framework of mobility supported IP-WSN
paper_content:
IP-based Wireless Sensor Networks (IP-WSN) are gaining importance for their broad range of applications in health-care, home automation, environmental monitoring, security and safety, and industrial automation. In all of these applications, mobility in the sensor network, with special attention to energy efficiency, is a major issue to be addressed. Host-based mobility management protocols are energy-inefficient and hence inherently unsuitable for IP-WSN, so network-based mobility management protocols can be an alternative for mobility-supported IP-WSN. In this paper we propose a mobility-supported IP-WSN protocol based on PMIPv6, called Sensor Proxy Mobile IPv6 (SPMIPv6). We present its architecture and message formats, and also analyze its performance considering signaling cost and mobility cost. Our analyses show that the proposed scheme reduces the signaling cost by 67% and 60%, as well as the mobility cost by 55% and 60%, in comparison with MIPv6 and PMIPv6, respectively.
---
paper_title: Group mobility in 6LoWPAN-based WSN
paper_content:
Group mobility in wireless sensor networks (WSNs) is of particular importance in many practical application scenarios. In this paper, group mobility management in IPv6 over Low power Wireless Personal Area Network (6LoWPAN) based WSNs is considered, and the application of the network mobility (NEMO) protocol to support group mobility in WSNs is discussed. A new network architecture supporting the integration of NEMO and 6LoWPAN-based WSNs is proposed, and the group mobility management mechanism and the corresponding signaling flow are discussed. Simulation results demonstrate that, compared to MIPv6, the application of the NEMO protocol in 6LoWPAN reduces both the handoff latency and the energy consumption of the sensor node.
---
paper_title: A mobility support scheme for 6LoWPAN
paper_content:
This paper proposes a mobility support scheme for 6LoWPAN. In the scheme, the control information interaction for mobile handoff is carried out in the link layer, and the routing of the control information is performed automatically through the network topology, which saves the power and the delay time consumed by route establishment. In addition, the mobile entity neither needs a care-of address during the mobility process nor is involved in the mobile handoff process, which reduces the mobile entity's power consumption and prolongs its life span. From the theoretical and simulation perspectives, the paper analyzes the performance parameters, including the mobility handoff cost, the mobility handoff delay time and the packet loss rate, and the analytical results show that the performance of the scheme is better than that of other schemes.
---
paper_title: Mobile IPv6 in Internet of Things: Analysis, experimentations and optimizations
paper_content:
The IPv6 over Low-Power Wireless Personal Area Networks (6LoWPAN) standard allows heavily constrained devices to connect to IPv6 networks. This is an important step towards the Internet of Things, in which most of the physical objects will be connected to the Internet. Among them, a large number is likely to be mobile and therefore requires a mobility management protocol to maintain IP connectivity. Layer 3 mobility is commonly managed by Mobile IPv6, but this protocol is categorized as too complex for constrained devices in the literature. Such conclusions are based on simulations or experimentations in which several aspects of the protocol remain insufficiently detailed or evaluated. In this article, we propose a complete evaluation of Mobile IPv6 over 6LoWPAN. For this, we have implemented Mobile IPv6 in the Contiki operating system and have performed intensive experimentations on a real testbed. We also propose a new mechanism for movement detection, as the standard procedure cannot be applied as is. This new mechanism, referred to as Mobinet, is based on passive overhearing. The results highlight that Mobile IPv6 can be a practical solution to manage layer 3 mobility on 6LoWPAN.
---
paper_title: Proxy Mobile IPv6
paper_content:
Network-based mobility management enables IP mobility for a host without requiring its participation in any mobility-related signaling. The network is responsible for managing IP mobility on behalf of the host. The mobility entities in the network are responsible for tracking the movements of the host and initiating the required mobility signaling on its behalf. This specification describes a network-based mobility management protocol and is referred to as Proxy Mobile IPv6. [STANDARDS-TRACK]
---
paper_title: A proposal for proxy-based mobility in WSNs
paper_content:
Inability to meet the key requirement of efficient mobility support is becoming a major impairment of wireless sensor networks (WSNs). Many critical WSN applications need not only reliability, but also the ability to adequately cope with the movement of nodes between different sub-networks. Despite the work of the IETF's 6LoWPAN WG and work on the use of MIPv6 (and many of its variants) in WSNs, no practical mobility support solution exists for this type of network. In this paper we start by assessing the use of MIPv6 in WSNs, considering soft and hard handoff, showing that, although feasible in small networks, MIPv6 complexity leads to long handoff times and high energy consumption. In order to solve these problems, we propose a proxy-based mobility approach which, by relieving resource-constrained sensor nodes from heavy mobility management tasks, drastically reduces time and energy expenditure during handoff. The evaluation of both MIPv6 and the proposed solution is done by implementation and simulation, with a varying number of nodes, sinks and mobility strategies.
---
paper_title: Performance analysis of fast handover for proxy Mobile IPv6
paper_content:
In Proxy Mobile IPv6 (PMIPv6), no involvement by the Mobile Node (MN) is required, so that tunneling overhead can be removed from the air interface. However, during the PMIPv6 handover process, there still exists a period when the MN is unable to send or receive packets because of PMIPv6 protocol operations, suffering from handover latency and data loss. Thus, to reduce the handover latency and data loss in PMIPv6, Fast Handover for PMIPv6 (PFMIPv6) is being standardized in the IETF. Nevertheless, PFMIPv6 has a few weaknesses: (1) handover initiation can be false, rendering the PFMIPv6 handover processing done so far unnecessary; (2) extra signaling is introduced in setting up an IP-in-IP tunnel between the serving and the new Mobile Access Gateways (MAGs). Therefore, in this paper, we present our study on the protocol overhead and performance aspects of PFMIPv6 in comparison with PMIPv6. We quantify the signaling overhead and the enhanced handover latency and data loss by conducting a thorough analysis of the performance aspects. The analysis provides important insights into how PFMIPv6 improves the handover performance over PMIPv6, especially in a highway vehicular traffic scenario where Base Stations (BSs)/Access Points (APs) can be placed in one-dimensional space and MN movements are quasi one-dimensional, so that the degree of certainty for an anticipated handover is increased. Further, our analytical study is verified by simulation results.
---
paper_title: Adaptive mobility anchor point to reduce regional registration and packets delivery costs
paper_content:
We propose an AMAP (Adaptive Mobility Anchor Point) to minimize the regional registration cost and packet delivery cost in IPv6 networks. The AMAP is a special mobility anchor point that is selected based on the activity rate (ARate) of mobile users (MUs). MIPv6 (Mobile IPv6) has been developed as a macro-mobility management protocol to support mobility of MUs over the Internet, while Hierarchical Mobile IPv6 (HMIPv6) has been developed as a micro-mobility management protocol. Many other mobility management protocols have been proposed so far, such as Fast Mobile IPv6, Proxy Mobile IPv6, Optimal Choice of Mobility management, and Fast Proxy Mobile IPv6. These are based on MIPv6 and HMIPv6 and have their own advantages and limitations, but they do not consider the fixed mobility pattern of MUs. Many MUs have a fixed mobility pattern on a daily basis, so there is scope for a further reduction in the regional registration cost, which the proposed AMAP exploits.
---
paper_title: LoWMob: Intra-PAN Mobility Support Schemes for 6LoWPAN
paper_content:
Mobility in 6LoWPAN (IPv6 over Low Power Personal Area Networks) is being utilized in realizing many applications where sensor nodes, while moving, sense and transmit the gathered data to a monitoring server. By employing IEEE802.15.4 as a baseline for the link layer technology, 6LoWPAN implies low data rate and low power consumption with periodic sleep and wakeups for sensor nodes, without requiring them to incorporate complex hardware. Also enabling sensor nodes with IPv6 ensures that the sensor data can be accessed anytime and anywhere from the world. Several existing mobility-related schemes like HMIPv6, MIPv6, HAWAII, and Cellular IP require active participation of mobile nodes in the mobility signaling, thus leading to the mobility-related changes in the protocol stack of mobile nodes. In this paper, we present LoWMob, which is a network-based mobility scheme for mobile 6LoWPAN nodes in which the mobility of 6LoWPAN nodes is handled at the network-side. LoWMob ensures multi-hop communication between gateways and mobile nodes with the help of the static nodes within a 6LoWPAN. In order to reduce the signaling overhead of static nodes for supporting mobile nodes, LoWMob proposes a mobility support packet format at the adaptation layer of 6LoWPAN. Also we present a distributed version of LoWMob, named as DLoWMob (or Distributed LoWMob), which employs Mobility Support Points (MSPs) to distribute the traffic concentration at the gateways and to optimize the multi-hop routing path between source and destination nodes in a 6LoWPAN. Moreover, we have also discussed the security considerations for our proposed mobility schemes. The performance of our proposed schemes is evaluated in terms of mobility signaling costs, end-to-end delay, and packet success ratio.
---
paper_title: A Group-Based Handoff Scheme for Correlated Mobile Nodes in Proxy Mobile IPv6
paper_content:
Proxy Mobile IPv6 (PMIPv6), a network-based IP mobility solution, is a promising approach for mobility management in all-IP wireless networks. How to enhance its handoff-related performance, such as handoff delay and signaling cost, is an important issue. Current solutions rely on approaches such as fast handoff, routing optimization and paging extension. However, the case of many correlated Mobile Nodes (MNs) moving together and taking handoffs at the same time has not been considered. In this paper, we propose a group-based handoff scheme for correlated MNs to enhance the performance of PMIPv6. We first propose a correlated MNs detection algorithm to detect MNs as groups. Based on this algorithm, we propose a groupbased handoff procedure, and discuss its benefits and limitations. Furthermore, we evaluate the performance of PMIPv6 and our proposal through the analysis and simulation. The results show that the proposed scheme is very efficient in reducing both the handoff delay and signaling cost.
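The abstract does not give the detection algorithm itself, so the sketch below is only a plausible, hypothetical rendering of the idea: MNs whose recent attachment histories coincide are treated as one group, so a single group handoff procedure could cover them.

```python
# Hypothetical sketch of correlated-MN detection: MNs with matching recent
# attachment histories (sequence of MAGs) form a group, so one group handoff
# could replace many individual binding updates. Names and data are invented.
from collections import defaultdict

def detect_groups(attachment_history, window=3):
    """attachment_history: dict of MN id -> list of MAG ids, most recent last."""
    groups = defaultdict(list)
    for mn, mags in attachment_history.items():
        key = tuple(mags[-window:])          # recent movement pattern
        groups[key].append(mn)
    # Only patterns shared by two or more MNs count as a correlated group.
    return [sorted(mns) for mns in groups.values() if len(mns) > 1]

if __name__ == "__main__":
    history = {
        "mn1": ["mag1", "mag2", "mag3"],
        "mn2": ["mag1", "mag2", "mag3"],
        "mn3": ["mag4", "mag2", "mag3"],
        "mn4": ["mag7", "mag8", "mag9"],
    }
    print(detect_groups(history))            # [['mn1', 'mn2']]
```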
---
paper_title: A novel network mobility management scheme supporting seamless handover for high-speed trains
paper_content:
Automotive telematics has become an important technology for high-speed rail systems, which are becoming increasingly popular in this era of green technology. As the train speed increases, however, communications between the train and infrastructure encounter major difficulties in maintaining high-quality communication. Handovers on high-speed trains occur more frequently and have shorter permissible handling times than for traditional vehicles. In this paper, the proposed 2MR network mobility scheme takes advantage of the physical size of high-speed trains to deploy two mobile routers (MRs) in the first and last carriages. This scheme provides a protocol to allow the two MRs to cooperate with a wireless network infrastructure in facilitating seamless handovers. Our simulation results demonstrate that compared to the traditional single MR schemes, the 2MR scheme noticeably improves the communication quality during handover by significantly reducing handover latency as well as packet loss for high-speed trains.
---
paper_title: The costs and benefits of combining different IP mobility standards
paper_content:
Several IP mobility support protocols have been standardized. Each solution provides a specific functionality and/or requires operations of particular nodes. The current trend is towards the co-existence of these solutions, though the impact of doing so has not been yet fully understood. This article reviews key standards for providing IP mobility support, the functionality achieved by combining them, and the performance cost of each combination in terms of protocol overhead and handover latency. We show that combining different mobility mechanisms has a non-negligible cost. Finally we identify a strategy for combining mobility protocols and properties that facilitate this combination.
---
paper_title: Mobility in WSNs for critical applications
paper_content:
Recent critical application sectors of sensor networks like military, health care, and industry require the use of mobile sensor nodes, something that poses unique challenges in aspects like handoff delay, packet loss, and reliability. In this paper we propose a novel mobility model that handles those challenges effectively by providing on-time mobility detection and handoff triggering. In that way soft handoffs and controlled disconnections are assured. The proposed solution uses cross-layer information from the MAC and Network layers. Our solution was implemented and evaluated in an experimental testbed, in the context of the European FP7 GINSENG project.
---
paper_title: Mobility solutions for wireless sensor and actuator networks with performance guarantees
paper_content:
Wireless sensor and actuator networks (WSANs) have been studied for about ten years now. However, a gap between research and real applications and implementations remains. The lack of an integrated solution, capable of providing the reliability levels of monitoring and actuation required by critical applications, has postponed the replacement and extension of the existing inflexible and expensive wired solutions with the low-cost, easy-to-deploy, and portable wireless options. In order to assist this transition, this paper presents a new method for supporting mobility in WSANs specifically designed for time-critical scenarios. The method is being targeted at a critical application located in a real oil refinery, in which a WSAN has been implemented in the scope of a European research project.
---
paper_title: A Network Mobility Solution Based on 6LoWPAN Hospital Wireless Sensor Network (NEMO-HWSN)
paper_content:
IPv6 Low-power Personal Area Networks (6LoWPANs) have recently found renewed interest because of the emergence of the Internet of Things (IoT). However, mobility support in 6LoWPANs is still in its infancy for large-scale IP-based sensor technology in the future IoT. The hospital wireless network is one important 6LoWPAN application of the IoT, where patients' vital signs are continuously monitored while the patients are on the move. Proper mobility management is needed to maintain connectivity between patient nodes and the hospital network to monitor their exact locations. It should also support fault tolerance and optimize energy consumption of the devices. In this paper, we survey IPv6 mobility protocols and propose some solutions which make them more suitable for a hospital architecture based on 6LoWPAN technology. Our initial numerical results show a reduction of the handoff costs on the mobile router, which normally constitutes a bottleneck in such a system. We also discuss important metrics such as signaling overload, bandwidth efficiency and power consumption, and how they can be optimized through the mobility management.
---
paper_title: A Comparative Analysis on the Signaling Load of Proxy Mobile IPv6 and Hierarchical Mobile IPv6
paper_content:
In this paper, we investigate the performance of the proxy mobile IPv6 and compare it with that of the hierarchical mobile IPv6. It is well known that the performance of proxy mobile IPv6 is better than that of hierarchical mobile IPv6. For a more detailed performance analysis, we propose an analytic mobility model based on the random walk to take into account various mobility conditions. Based on the analytic models, we formulate the location management cost and handoff management cost. Then, we analyze the performance of the proxy mobile IPv6 and hierarchical mobile IPv6, respectively. The numerical results show that the proxy mobile IPv6 can have superior performance to hierarchical mobile IPv6 by reducing the latencies for location update and handoff.
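A toy rendering of the kind of cost formulation described above: location-update cost per unit time as the subnet-crossing rate times the per-handoff signalling cost, with over-the-air messages weighted more heavily than wired hops. The unit costs, hop counts, and crossing rate are assumptions for illustration, not figures from the paper.

```python
# Toy signaling-cost comparison in the spirit of the analysis above.
# All unit costs and the subnet-crossing rate are illustrative assumptions.

def location_update_cost(crossing_rate, wireless_hop_cost, wired_hop_cost,
                         wireless_msgs, wired_hops):
    """Cost per unit time = crossing rate x (over-the-air messages x wireless
    cost + wired hops traversed by the binding signalling x wired cost)."""
    per_handoff = wireless_msgs * wireless_hop_cost + wired_hops * wired_hop_cost
    return crossing_rate * per_handoff

if __name__ == "__main__":
    rate = 2.0                       # subnet crossings per minute (assumed)
    # PMIPv6-style: the MN sends nothing; the MAG exchanges PBU/PBA with the LMA.
    pmip = location_update_cost(rate, wireless_hop_cost=10, wired_hop_cost=1,
                                wireless_msgs=0, wired_hops=2)
    # HMIPv6-style: the MN itself sends a local binding update to the MAP over the air.
    hmip = location_update_cost(rate, wireless_hop_cost=10, wired_hop_cost=1,
                                wireless_msgs=2, wired_hops=2)
    print(f"PMIPv6-style cost: {pmip}, HMIPv6-style cost: {hmip}")
```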
---
paper_title: On a Reliable Handoff Procedure for Supporting Mobility in Wireless Sensor Networks
paper_content:
Wireless sensor network (WSN) applications such as patients' health monitoring in hospitals, location-aware ambient intelligence, industrial monitoring/maintenance or homeland security require the support of mobile nodes or node groups. In many of these applications, the lack of network connectivity is not admissible or should at least be time bounded, i.e. mobile nodes cannot be disconnected from the rest of the WSN for an undefined period of time. In this context, we aim at reliable and real-time mobility support in WSNs, for which appropriate handoff and re-routing decisions are mandatory. This paper drafts a mechanism and correspondent heuristics for taking reliable handoff decisions in WSNs. Fuzzy logic is used to incorporate the inherent imprecision and uncertainty of the physical quantities at stake.
---
paper_title: Performance Analysis of PMIPv6-Based NEtwork MObility for Intelligent Transportation Systems
paper_content:
While host mobility support for individual mobile hosts (MHs) has been widely investigated and developed over the past years, there has been relatively less attention to NEtwork MObility (NEMO). Since NEMO Basic Support (NEMO-BS) was developed, it has been the central pillar in Intelligent Transport Systems (ITS) communication architectures for maintaining the vehicle's Internet connectivity. As the vehicle moves around, it attaches to a new access network and is required to register a new address obtained from the new access network to a home agent (HA). This location update of NEMO-BS often results in unacceptable long handover latency and increased traffic load to the vehicle. To address these issues, in this paper, we introduce new NEMO support protocols, which rely on mobility service provisioning entities introduced in Proxy Mobile IPv6 (PMIPv6), as possible mobility support protocols for ITS. As a base protocol, we present PMIPv6-based NEMO (P-NEMO) to maintain the vehicle's Internet connectivity while moving and without participating in the location update management. In P-NEMO, the mobility management for the vehicle is supported by mobility service provisioning entities residing in a given PMIPv6 domain. To further improve handover performance, fast P-NEMO (FP-NEMO) has been developed as an extension protocol. FP-NEMO utilizes wireless L2 events to anticipate the vehicle's handovers. The mobility service provisioning entities prepare the vehicle's handover prior to the attachment of the vehicle to the new access network. Detailed handover procedures for P-NEMO and FP-NEMO are provided, and handover timing diagrams are presented to evaluate the performance of the proposed protocols. P-NEMO and FP-NEMO are compared with NEMO-BS in terms of traffic cost and handover latency.
---
paper_title: End to End Security and Path Security in Network Mobility
paper_content:
In RFC 3776, the IP security protocol (IPsec) has been implemented in Mobile IP for securing IP datagrams at the IP layer. Previous research only considered the traffic between the mobile node (MN) and the home agent (HA), but the traffic from the HA to the correspondent node (CN) was not considered. Network Mobility (NEMO) is based on Mobile IPv6 (MIPv6), so it inherits the same problem of only providing protection between the mobile router (MR) and the MR_HA. This paper aims to address this security vulnerability by proposing a nested IPsec Encapsulating Security Payload (ESP) scheme capable of establishing nested IPsec ESP from the MN to the CN. The proposed scheme clearly enhances security with confidentiality and integrity in NEMO.
---
paper_title: A survey of mobility management in next-generation all-IP-based wireless systems
paper_content:
Next-generation wireless systems are envisioned to have an IP-based infrastructure with the support of heterogeneous access technologies. One of the research challenges for next generation all-IP-based wireless systems is the design of intelligent mobility management techniques that take advantage of IP-based technologies to achieve global roaming among various access technologies. Next-generation wireless systems call for the integration and interoperation of mobility management techniques in heterogeneous networks. In this article the current state of the art for mobility management in next-generation all-IP-based wireless systems is presented. The previously proposed solutions based on different layers are reviewed, and their qualitative comparisons are given. A new wireless network architecture for mobility management is introduced, and related open research issues are discussed in detail.
---
paper_title: Wireless sensor networks mobility management using fuzzy logic
paper_content:
This paper presents a novel, intelligent controller to support mobility in wireless sensor networks. In particular, the focus is on the deployment of such a mobility solution in critical applications, like personnel safety in an industrial environment. A Fuzzy Logic-based mobility controller is proposed to aid sensor Mobile Nodes (MN) to decide whether they have to trigger the handoff procedure and perform the handoff to a new connection position or not. To do so, we use a combination of two locally available metrics, the RSSI and the Link Loss, in order to "predict" the End-to-End losses and support the handoff triggering procedure. As a performance evaluation environment, a real industrial setting (oil refinery) is used. Based on on-site experiments run in the oil refinery testbed area, the proposed mobility controller has shown significant benefits compared to other conventional solutions, in terms of packet loss, packet delivery delay, energy consumption, and ratio of successful handoff triggers.
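An illustrative sketch, in the spirit of the controller described above, of combining RSSI and link loss through fuzzy membership functions into a handoff-trigger decision; the membership breakpoints, the single OR-style rule, and the threshold are assumptions, not the paper's tuned controller.

```python
# Illustrative fuzzy handoff trigger combining RSSI and link loss; the
# membership breakpoints, the OR-style rule, and the threshold are assumptions.

def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def handoff_score(rssi_dbm, link_loss):
    weak_rssi = tri(rssi_dbm, -100.0, -90.0, -75.0)   # degree "signal is weak"
    high_loss = tri(link_loss, 0.1, 0.5, 1.01)        # degree "losses are high"
    return max(weak_rssi, high_loss)                  # single OR rule

def should_trigger(rssi_dbm, link_loss, threshold=0.6):
    return handoff_score(rssi_dbm, link_loss) >= threshold

if __name__ == "__main__":
    print(should_trigger(-92, 0.40))   # weak signal, moderate loss -> True
    print(should_trigger(-70, 0.05))   # healthy link               -> False
```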
---
paper_title: Evaluation of Fast PMIPv6 and Transient Binding PMIPv6 in Vertical Handover Environment
paper_content:
Recently, the IETF MIPSHOP working group has proposed Fast PMIPv6 (FPMIPv6) and Transient Binding PMIPv6 (TPMIPv6) to reduce the handover latency and packet loss of PMIPv6. The research and standardization of FPMIPv6 and TPMIPv6 are just in the initial stage. The system performance analysis of them is beneficial to the protocol design and deployment. In this paper, through theoretical analysis and system simulation, we evaluate the handover latency of PMIPv6, FPMIPv6 and TPMIPv6 in a vertical handover environment. Furthermore, in order to reflect handover performance more comprehensively, in system simulation, we also evaluate the UDP packet loss rate and the degree of TCP throughput decline of such protocols. The results of theoretical analysis and simulation show that: (1) in vertical handover, the handover latency of FPMIPv6 is much larger than that of TPMIPv6 and PMIPv6, but the UDP packet loss rate of FPMIPv6 is smaller than that of TPMIPv6 and PMIPv6; (2) the handover performance of FPMIPv6-pre and TPMIPv6 depends largely on the MN's residence time in the signal-overlapped area.
---
paper_title: Selective Channel Scanning for Fast Handoff in Wireless LAN Using Neighbor Graph
paper_content:
Handoff at the link layer 2 (L2) consists of three phases: scanning, authentication, and reassociation. Among the three phases, scanning is dominant in terms of time delay. Thus, in this paper, we propose an improved scanning mechanism to minimize the disconnected time while the wireless station (STA) changes the associated access points (APs). According to IEEE 802.11 standard, the STA has to scan all channels in the scanning phase. In this paper, based on the neighbor graph (NG), we introduce a selective channel scanning method for fast handoff in which the STA scans only channels selected by the NG. Experimental results show that the proposed method reduces the scanning delay drastically.
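A minimal sketch of neighbor-graph-driven selective scanning: the STA probes only the channels used by the neighbours of its current AP, falling back to a full scan when the graph has no entry. The example graph and channel numbers are invented.

```python
# Sketch of neighbor-graph (NG) based selective scanning: instead of probing
# every channel, the STA probes only the channels of APs adjacent to its
# current AP in the NG. The graph below is a made-up example.

NEIGHBOR_GRAPH = {
    # current AP -> {neighbour AP: operating channel}
    "ap1": {"ap2": 1, "ap3": 6},
    "ap2": {"ap1": 6, "ap4": 11},
}

def channels_to_scan(current_ap, all_channels=range(1, 12)):
    neighbours = NEIGHBOR_GRAPH.get(current_ap)
    if not neighbours:                     # no knowledge yet: full scan fallback
        return sorted(all_channels)
    return sorted(set(neighbours.values()))

if __name__ == "__main__":
    print(channels_to_scan("ap1"))         # [1, 6] instead of channels 1..11
    print(channels_to_scan("ap9"))         # full scan fallback
```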
---
paper_title: A 6LoWPAN Sensor Node Mobility Scheme Based on Proxy Mobile IPv6
paper_content:
In this paper, we focus on a scheme that supports mobility for IPv6 over Low power Wireless Personal Area Network (6LoWPAN) sensor nodes. We define a protocol for 6LoWPAN mobile sensor node, named 6LoMSN, based on Proxy Mobile IPv6 (PMIPv6). The conventional PMIPv6 standard supports only single-hop networks and cannot be applied to multihop-based 6LoWPAN. It does not support the mobility of 6LoMSNs and 6LoWPAN gateways, named 6LoGW, cannot detect the PAN attachment of the 6LoMSN. Therefore, we define the movement notification of a 6LoMSN in order to support its mobility in multihop-based 6LoWPAN environments. The attachment of 6LoMSNs reduces signaling costs over the wireless link by using router solicitation (RS) and router advertisement (RA) messages. Performance results show that our proposed scheme can minimize the total signaling costs and handoff latency. Additionally, we present the design and implementation of the 6LoMSN mobility based on PMIPv6 for a healthcare system. According to the experimental results, the 6LoMSN of the proposed PMIPv6-based 6LoWPAN can be expected to use more of the battery lifetime. We also verify that the 6LoMSN can maintain connectivity, even though it has the freedom of being able to move between PANs without a mobility protocol stack.
---
paper_title: Rapid IPv6 address autoconfiguration for heterogeneous mobile technologies
paper_content:
This paper proposes a novel IPv6 address autoconfiguration that works with multiple types of mobile networks, such as MANET, NEMO, MANEMO, as well as regular IPv6 networks. Our proposed algorithm assigns unique addresses to mobile devices without performing duplicate address detection. As a result, address autoconfiguration can be done rapidly. This is suitable for systems that require dynamic movement and quick handover, such as a disaster relief system or car-to-car communication.
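The paper's own assignment algorithm is not reproduced in the abstract; as background, the sketch below shows the standard modified EUI-64 construction, which illustrates how an interface identifier can be derived deterministically from a MAC address so that, under a MAC-uniqueness assumption, duplicate address detection can be skipped.

```python
# Background sketch (not the paper's algorithm): the standard modified EUI-64
# construction derives an IPv6 interface identifier from a 48-bit MAC address,
# illustrating address formation without duplicate address detection when MAC
# uniqueness is assumed.
import ipaddress

def eui64_interface_id(mac: str) -> bytes:
    octets = bytes(int(part, 16) for part in mac.split(":"))
    assert len(octets) == 6, "expects a 48-bit MAC such as 00:1a:2b:3c:4d:5e"
    # Flip the universal/local bit of the first octet and insert 0xFFFE.
    return bytes([octets[0] ^ 0x02]) + octets[1:3] + b"\xff\xfe" + octets[3:6]

def slaac_address(prefix: str, mac: str) -> ipaddress.IPv6Address:
    network = ipaddress.IPv6Network(prefix)
    iid = int.from_bytes(eui64_interface_id(mac), "big")
    return ipaddress.IPv6Address(int(network.network_address) | iid)

if __name__ == "__main__":
    print(slaac_address("2001:db8::/64", "00:1a:2b:3c:4d:5e"))
    # -> 2001:db8::21a:2bff:fe3c:4d5e
```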
---
paper_title: DeuceScan: Deuce-Based Fast Handoff Scheme in IEEE 802.11 Wireless Networks
paper_content:
The IEEE 802.11 standard has enabled low-cost and effective wireless local area network (WLAN) services. It is widely believed that WLANs will become a major portion of the fourth-generation cellular system. The seamless handoff problem in WLANs is a very important design issue to support the new astounding and amazing applications in WLANs, particularly for a user in a mobile vehicle. The entire delay time of a handoff is divided into probe, authentication, and reassociation delay times. Because the probe delay occupies most of the handoff delay time, efforts have mainly focused on reducing the probe delay to develop faster handoff schemes. This paper presents a new fast handoff scheme (i.e., the DeuceScan scheme) to further reduce the probe delay for IEEE-802.11-based WLANs. The proposed scheme can be useful to improve wireless communication qualities on vehicles. A spatiotemporal approach is developed in this paper to utilize a spatiotemporal graph to provide spatiotemporal information for making accurate handoff decisions by correctly searching for the next access point. The DeuceScan scheme is a prescan approach that efficiently reduces the layer-2 handoff latency. Two factors of stable signal strength and variable signal strength are used in our developed DeuceScan scheme. Finally, simulation results illustrate the performance achievements of the DeuceScan scheme in reducing handoff delay time and packet loss rate and improving link quality.
---
paper_title: An Enhanced Group Mobility Protocol for 6LoWPAN-Based Wireless Body Area Networks
paper_content:
The IPv6 over low power wireless personal area network (6LoWPAN) has attracted lots of attention recently because it can be used for the communications of Internet of things. In this paper, the concept of group-based network roaming in proxy mobile IPv6 (PMIPv6) domain is considered in the 6LoWPAN-based wireless body area networks. PMIPv6 is a standard to manage the network-based mobility in all-IP wireless network. However, it does not perform well in group-based body area networks. To further reduce the handoff delay and signaling cost, an enhanced group mobility scheme is proposed in this paper to reduce the number of control messages, including router solicitation and router advertisement messages as opposed to the group-based PMIPv6 protocol. Simulation results illustrate that the proposed handoff scheme can reduce the handoff delay and signaling cost. The packet loss ratio and the overhead can also be reduced.
---
paper_title: Sensor Proxy Mobile IPv6 (SPMIPv6) - A framework of mobility supported IP-WSN
paper_content:
IP based Wireless Sensor Networks (IP-WSN) are gaining importance for their broad range of applications in health-care, home automation, environmental monitoring, security & safety and industrial automation. In all of these applications, mobility in the sensor network, with special attention to energy efficiency, is a major issue to be addressed. Host-based mobility management protocols are inherently energy-inefficient and thus unsuitable for IP-WSN, so a network-based mobility management protocol can be an alternative for mobility-supported IP-WSN. In this paper we propose a mobility supported IP-WSN protocol based on PMIPv6 called Sensor Proxy Mobile IPv6 (SPMIPv6). We present its architecture, message formats and also analyze its performance considering signaling cost and mobility cost. Our analyses show that the proposed scheme reduces the signaling cost by 67% and 60%, as well as reducing the mobility cost by 55% and 60%, in comparison with MIPv6 and PMIPv6 respectively.
---
paper_title: Group mobility in 6LoWPAN-based WSN
paper_content:
Group mobility in wireless sensor networks (WSN) is of particular importance in many practical application scenarios. In this paper, the group mobility management in IPv6 over Low power Wireless Personal Area Networks (6LoWPAN) based WSN is considered and the application of the network mobility (NEMO) protocol to support group mobility in WSN is discussed. A new network architecture supporting the integration of NEMO and 6LoWPAN-based WSN is proposed, and the group mobility management mechanism and the corresponding signaling flow are discussed. Simulation results demonstrate that, compared to MIPv6, the application of the NEMO protocol in 6LoWPAN reduces both the handoff latency and the energy consumption of sensor nodes.
---
paper_title: Enhanced handoff latency reduction mechanism in layer 2 and layer 3 of mobile IPv6 (MIPv6) network
paper_content:
Next Generation Networks (NGN), both static and mobile, are expected to be fully Internet Protocol Version 6 (IPv6) based. Mobility in IPv6 (MIPv6) networks was designed to provide Internet services to end users anytime and anywhere. However, MIPv6 is not widely deployed yet due to handoff latency and other limitations leading to packet loss and Quality of Service (QoS) degradation for real-time applications such as audio and video streaming. MIPv6 handoff latency can be categorized into layer 2 (L2) and layer 3 (L3) delays that include link layer establishment delay, movement detection delay, address configuration delay and binding update or registration delay. Movement detection delay and address configuration including Duplicate Address Detection (DAD), in L2 and L3 respectively, consume the largest share of the total delay. In order to reduce these handoff latencies, two solutions are proposed that address the delays in both L2 and L3. The first solution is a fuzzy logic based network awareness technique to reduce movement detection delay, especially the scanning time in L2, in heterogeneous networks. The second solution is Parallel DAD (PDAD) to reduce address configuration time in L3. Both solutions, benchmarked with the OMNeT++ simulator, show improvements over standard MIPv6 networks. The handoff latency was reduced by more than 50% and packet loss improved by around 55% in L2. Moreover, in L3 the handoff latency reduction accounts for 70% and packet loss improved by approximately 60%. The handoff latency is reduced from 1300 ms to 500 ms by applying the fuzzy logic technique at L2 and the PDAD mechanism in L3, leading to an overall delay reduction of 60%.
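A simple latency-budget sketch of the decomposition described above: total handoff latency as the sum of L2 and L3 components, with scanning and DAD as the dominant terms. The per-component millisecond values are illustrative assumptions, chosen only to be consistent with the overall 1300 ms to 500 ms figures quoted in the abstract.

```python
# Illustrative handoff-latency budget: total latency is the sum of L2 and L3
# components, so shrinking scanning (L2) and DAD (L3) dominates the saving.
# All per-component millisecond values are assumptions for illustration.

BASELINE_MS = {
    "l2_scanning": 350, "l2_authentication": 10, "l2_reassociation": 10,
    "l3_movement_detection": 300, "l3_dad": 500, "l3_binding_update": 130,
}

IMPROVED_MS = dict(BASELINE_MS,
                   l2_scanning=150,            # selective scanning via network awareness
                   l3_movement_detection=160,  # faster movement detection
                   l3_dad=40)                  # parallelised DAD

def total_latency(components):
    return sum(components.values())

if __name__ == "__main__":
    print("baseline:", total_latency(BASELINE_MS), "ms")   # 1300 ms
    print("improved:", total_latency(IMPROVED_MS), "ms")   # 500 ms
```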
---
paper_title: Reducing MAC layer handoff latency in IEEE 802.11 wireless LANs
paper_content:
With the growth of IEEE 802.11-based wireless LANs, VoIP and similar applications are now commonly used over wireless networks. Mobile station performs a handoff whenever it moves out of the range of one access point (AP) and tries to connect to a different one. This takes a few hundred milliseconds, causing interruptions in VoIP sessions. We developed a new handoff procedure which reduces the MAC layer handoff latency, in most cases, to a level where VoIP communication becomes seamless. This new handoff procedure reduces the discovery phase using a selective scanning algorithm and a caching mechanism.
---
paper_title: Fast Handovers for Proxy Mobile IPv6
paper_content:
This document specifies the usage of Fast Mobile IPv6 (FMIPv6) when Proxy Mobile IPv6 is used as the mobility management protocol. Necessary extensions are specified for FMIPv6 to support the scenario when the mobile node does not have IP mobility functionality and hence is not involved with either MIPv6 or FMIPv6 operations.
---
paper_title: Sensor fast proxy mobile IPv6 (SFPMIPv6)-A framework for mobility supported IP-WSN for improving QoS and building IoT
paper_content:
Recently it has been observed that Internet of Things (IoT) technology is being introduced in medical environments to achieve global connectivity with the patient, sensors and everything around them. The main goal of this global connectivity is to provide context awareness to make the patient's life easier and the clinical process more effective. IPv6, the new Internet Protocol, has made every grain of sand on earth addressable and thus led to the emergence of a new technology called IoT. IoT is simply machine-to-machine (M2M) communication, and sensor nodes have proved to be best suited for this new technology. IPv6 over low power wireless personal area network (6LoWPAN) has attracted lots of attention recently as it can be used for communication in the IoT. This paper provides a framework for the medical environment for reducing handoff (HO) latency and signaling overhead while the patient is on the move inside the hospital premises; it can further be applied to other domains where applicable. Sensor nodes can be used for collecting particular parameters of the human body, which constitutes a wireless body area network (WBAN). The proposed framework surely reduces the signaling overhead and thus the HO latency, and hence there will be an improvement in QoS.
---
paper_title: Mobility support in IP: a survey of related protocols
paper_content:
This article presents an overview of a set of IP-based mobility protocols - mobile IP, HAWAII, cellular IP, hierarchical MIP, TeleMIP, dynamic mobility agent, and terminal independent MIP - that will play an important role in the forthcoming convergence of IP and legacy wireless networks. A comparative analysis with respect to system parameters such as location update, handoff latency and signaling overhead exposes their ability in managing micro/macro/global-level mobility. We use this observation to relate their features against a number of key design issues identified for seamless IP-based mobility as envisioned for future 4G networks.
---
paper_title: Hierarchical mobile IPv6 mobility management
paper_content:
This document introduces extensions to Mobile IPv6 and IPv6 Neighbour Discovery to allow for local mobility handling. Hierarchical mobility management for Mobile IPv6 is designed to reduce the amount of signalling between the Mobile Node, its Correspondent Nodes, and its Home Agent. The Mobility Anchor Point (MAP) described in this document can also be used to improve the performance of Mobile IPv6 in terms of handover speed.
---
|
Title: A Review of Network Based Mobility Management Schemes, WSN Mobility in 6LoWPAN Domain and Open Challenges
Section 1: Introduction
Description 1: Introduce the paper by giving an overview of the topics including network based mobility management, WSN mobility in 6LoWPAN domain, and the open challenges identified.
Section 2: Network Based Mobility Management Schemes
Description 2: Discuss the network based mobility management schemes, their principles and protocols developed over time.
Section 3: NEMO-BS
Description 3: Detail the NEMO Basic Support Protocol for IPv6 and IPv4, including its operations, benefits, and the communication flow.
Section 4: Survey of Network Based Mobility Management Schemes and 6LoWPAN WSN Mobility
Description 4: Survey and compare some of the mobility management support protocols for network and data link layers, and cross-layer for 6LoWPAN WSN mobility.
Section 5: Open Challenges
Description 5: Examine the open challenges in network based mobility management schemes and WSN mobility in 6LoWPAN domain, focusing on issues like signaling cost, packet loss, HO latency, and the impact on healthcare.
Section 6: Conclusion
Description 6: Summarize the findings of the survey, highlighting the importance and impact of advanced mobility management schemes. Suggest future directions and practical implementation for further research.
|
A survey of schema versioning issues for database systems
| 25 |
---
paper_title: Versions and change notification in an object-oriented database system
paper_content:
The authors have built a prototype object-oriented database system called ORION to support applications from the CAD/CAM (computer-aided-design/computer-aided-manufacturing), AI (artificial-intelligence), and office-information-system domains. Advanced functions supported in ORION include versions, change notification, composite objects, dynamic schema evolution, and multimedia data. The versions and change notification features are based on a model that the authors developed earlier. They have integrated their model of versions and change notification into the ORION object-oriented data model, and also provide an insight into system overhead that versions and change notification incur.
---
paper_title: Issues in Software Maintenance
paper_content:
Up to a few years ago the area of software maintenance was largely ignored. Interest has increased in the last few years due to several factors. First, the increased volume of enhancement and maintenance, with more systems than ten years ago, has restricted the resources available for new development. Second, there has been a growing awareness that tools and aids which assist development of information systems may have little effect on operational systems. Third, the management of information systems has come under increasing scrutiny. In this report we highlight some of the major issues that surfaced during several extensive operational software studies. These sources have pointed to significant questions that must be addressed concerning the roles of the users in operations and maintenance, the management of maintenance, and the types of tools and techniques that are needed in maintenance.
---
paper_title: Data model issues for object-oriented applications
paper_content:
Presented in this paper is the data model for ORION, a prototype database system that adds persistence and sharability to objects created and manipulated in object-oriented applications. The ORION data model consolidates and modifies a number of major concepts found in many object-oriented systems, such as objects, classes, class lattice, methods, and inheritance. These concepts are reviewed and three major enhancements to the conventional object-oriented data model, namely, schema evolution, composite objects, and versions, are elaborated upon. Schema evolution is the ability to dynamically make changes to the class definitions and the structure of the class lattice. Composite objects are recursive collections of exclusive components that are treated as units of storage, retrieval, and integrity enforcement. Versions are variations of the same object that are related by the history of their derivation. These enhancements are strongly motivated by the data management requirements of the ORION applications from the domains of artificial intelligence, computer-aided design and manufacturing, and office information systems with multimedia documents.
---
paper_title: Semantic heterogeneity as a result of domain evolution
paper_content:
We describe examples of problems of semantic heterogeneity in databases due to “domain evolution”, as it occurs in both single- and multidatabase systems. These problems occur when the semantics of values of a particular domain change over time in ways that are not amenable to applying simple mappings between “old” and “new” values. The paper also proposes facilities and strategies for solving such problems.
---
paper_title: Temporally oriented data definitions: managing schema evolution in temporally oriented databases
paper_content:
A simplifying — yet unrealistic — assumption widely held throughout the research of Temporally Oriented Data Models (TODM) is that the associated schema never changes. The implications of allowing data structures to evolve over time within a TODM and related databases are examined in this paper, and key issues and concepts are identified. Specifically, Temporally Oriented Data Definition (TODD) raises questions with respect to (1) the evolution of meanings in databases, (2) the nature of the temporal prevalence of database schema, and (3) the general principles that may guide the implementation of a TODM database with TODD.
---
paper_title: Managing Schema Versions in a Time-Versioned Non-First-Normal-Form Relational Database
paper_content:
Support of time versions is a very advanced feature in a DBMS. However, full flexibility of history processing is achieved only if we can also change the database schema dynamically, without touching the history. A technique for achieving this goal is presented here, in the framework of the Non-First-Normal-Form (NF2) relational data model. The environment is a pilot DBMS supporting this model, developed by the Advanced Information Management (AIM) project at the IBM Heidelberg Scientific Center. The technical solution aims to minimize the storage space and the number of data versions. One way to achieve this is to avoid the immediate update of all data instances in the context of a schema change. Transformations between versions enable the correct interpretation of data. The management of time-related queries becomes complicated when schema changes are involved. The paper describes a technique of applying global views over different schema versions when formulating queries and their results.
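A toy illustration of the idea of interpreting data across schema versions instead of eagerly rewriting it: each tuple stays stored under the schema version in force when it was recorded, and a per-version transformation presents it through the current (global) view. The attributes, version numbers, and default value below are invented for the example.

```python
# Sketch of "interpret old data under a newer schema": tuples stored under an
# old schema version are presented through the current view via a per-version
# transformation, so no stored data has to be rewritten. All names are invented.

SCHEMA_V1 = ("name", "salary")                 # the two signatures, for reference
SCHEMA_V2 = ("name", "salary", "currency")     # an attribute was added later

def v1_to_v2(row, default_currency="DEM"):
    name, salary = row
    return (name, salary, default_currency)    # fill the new attribute

def view_under_v2(stored):
    """stored: list of (schema_version, tuple) pairs as recorded over time."""
    current_view = []
    for version, row in stored:
        current_view.append(row if version == 2 else v1_to_v2(row))
    return current_view

if __name__ == "__main__":
    data = [(1, ("ada", 5000)), (2, ("bob", 5200, "EUR"))]
    print(view_under_v2(data))   # [('ada', 5000, 'DEM'), ('bob', 5200, 'EUR')]
```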
---
paper_title: Predictions and Challenges for Database Systems in the Year 2000
paper_content:
---
paper_title: SQL/SE: a query language extension for databases supporting schema evolution
paper_content:
The incorporation of a knowledge of time within database systems allows for temporally related information to be modelled more naturally and consistently. Adding this support to the metadatabase further enhances its semantic capability and allows elaborate interrogation of data. This paper presents SQL/SE, an SQL extension capable of handling schema evolution in relational database systems.
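A minimal sketch (in Python rather than SQL) of one ingredient such a language needs: resolving which schema version was in force at the time a query refers to. The version intervals and attribute lists are invented; the bisection lookup is just one possible implementation.

```python
# Sketch of schema-version resolution for a time-referenced query: find the
# schema version in force at the requested time. All data below is invented.
import bisect

# (start_time, attributes) pairs, sorted by start time; each version remains
# valid until the next one begins.
SCHEMA_VERSIONS = [
    (0,  ("emp", "dept")),
    (10, ("emp", "dept", "salary")),
    (25, ("emp", "division", "salary")),
]

def schema_at(t):
    starts = [start for start, _ in SCHEMA_VERSIONS]
    i = bisect.bisect_right(starts, t) - 1
    if i < 0:
        raise ValueError("no schema version defined at that time")
    return SCHEMA_VERSIONS[i][1]

if __name__ == "__main__":
    print(schema_at(5))    # ('emp', 'dept')
    print(schema_at(30))   # ('emp', 'division', 'salary')
```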
---
paper_title: Schema evolution and the relational algebra
paper_content:
In this paper we discuss extensions to the conventional relational algebra to support both aspects of transaction time, evolution of a database's contents and evolution of a database's schema. We define a relation's schema to be the relation's temporal signature, a function mapping the relation's attribute names onto their value domains, and class, indicating the extent of support for time. We also introduce commands to change a relation, now defined as a triple consisting of a sequence of classes, a sequence of signatures, and a sequence of states. A semantic type system is required to identify semantically incorrect expressions and to enforce consistency constraints among a relation's class, signature, and state following update. We show that these extensions are applicable, without change, to historical algebras that support valid time, yielding an algebraic language for the query and update of temporal databases. The additions preserve the useful properties of the conventional algebra. A database's schema describes the structure of the database; the contents of the database must adhere to that structure [Date 1976, Ullman 1982]. Schema evolution refers to changes to the database's schema over time. Conventional databases allow only one schema to be in force at a time, requiring restructuring (also termed logical reorganization [Sockut & Goldberg 1979]) when the schema is modified. With the advent of databases storing past states [McKenzie 1986], it becomes desirable to accommodate multiple schemas, each in effect for an interval in the past. Schema versioning refers to retention of past schemas resulting from schema evolution. In an earlier paper [McKenzie & Snodgrass 1987A] we proposed extensions to the conventional relational algebra [Codd 1970] that model the evolution of a database's contents. We did not, however, consider the evolution of a database's schema. In this paper, we provide further extensions to the conventional relational algebra that model the evolution of a database's schema. The extensions that support evolution of a database's contents are repeated here for completeness and because the extensions supporting schema evolution are best explained in concert with those earlier extensions.
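A toy data structure suggesting how the relation-as-triple view could be represented: parallel sequences of classes, signatures, and states, where a schema change appends a new signature instead of overwriting the old one. The field names, the empty initial state for a new version, and the example domains are assumptions for illustration only.

```python
# Sketch of a relation as parallel sequences of classes, signatures and states;
# a schema change appends rather than overwrites. Names and domains are invented.
from dataclasses import dataclass, field

@dataclass
class Relation:
    classes: list = field(default_factory=list)     # e.g. "snapshot", "rollback"
    signatures: list = field(default_factory=list)  # attribute name -> domain maps
    states: list = field(default_factory=list)      # sets of tuples per version

    def change_schema(self, new_class, new_signature):
        self.classes.append(new_class)
        self.signatures.append(dict(new_signature))
        # A fuller model could carry the previous state forward under the new
        # signature; here the new state simply starts empty.
        self.states.append(set())

if __name__ == "__main__":
    r = Relation()
    r.change_schema("snapshot", {"name": "string", "age": "int"})
    r.change_schema("rollback", {"name": "string", "age": "int", "dept": "string"})
    print(len(r.signatures), r.signatures[-1])
```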
---
paper_title: Versions of Schema for Object-Oriented Databases
paper_content:
Version control is one of the important database requirements for design environments. Various models of versions have been proposed and implemented. However, research in versions has been focused exclusively on versioning single design objects. In a multi-user design environment where the schema (definition) of the design objects may undergo dynamic changes, it is important to be able to version the schema, as well as version the single design objects. In this paper, we propose a model of versions of schema by extending our model of versions of single objects. In particular, we present the semantics of our model of versions of schema for object-oriented databases, explore issues in implementing the model, and examine a few alternatives to our model of versions of schema.
---
paper_title: The Historical Relational Data Model (HRDM) and Algebra Based on Lifespans
paper_content:
Critical to the design of an historical database model is the representation of the “existence” of objects across the temporal dimension — for example, the “birth,” “death,” or “rebirth” of an individual, or the establishment or dis-establishment of a relationship. The notion of the “lifespan” of a database object is proposed as a simple framework for expressing these concepts. An object's lifespan is simply those periods of time during which the database models the properties of that object. In this paper we propose the historical relational data model (HRDM) and algebra that is based upon lifespans and that views the values of all attributes as functions from time points to simple domains. The model that we obtain is a consistent extension of the relational data model, and provides a simple mechanism for providing both time-varying data and time-varying schemes.
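A small sketch of the lifespan idea: an attribute is a function from time points to a simple domain, defined only while the object is in its lifespan. The timeline granularity, the example attribute, and the "latest assignment not after t" lookup are illustrative choices, not the model's formal definition.

```python
# Sketch of a time-varying attribute restricted to an object's lifespan; the
# example timeline and values are invented.

class HistoricalAttribute:
    def __init__(self, lifespan, assignments):
        self.lifespan = set(lifespan)            # time points the object exists
        self.values = {t: v for t, v in assignments}   # assignment time -> value

    def at(self, t):
        if t not in self.lifespan:
            return None                          # object not modelled at time t
        # Value in force at t = latest assignment not after t (a modelling choice).
        candidates = [u for u in self.values if u <= t]
        return self.values[max(candidates)] if candidates else None

if __name__ == "__main__":
    salary = HistoricalAttribute(lifespan=range(1990, 2000),
                                 assignments=[(1990, 30000), (1995, 42000)])
    print(salary.at(1993), salary.at(1997), salary.at(2005))   # 30000 42000 None
```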
---
paper_title: Quantifying Schema Evolution
paper_content:
Achieving correct changes is the dominant activity in the application software industry. Modification of database schemata is one kind of change which may have severe consequences for database applications. The paper presents a method for measuring modifications to database schemata and their consequences by using a thesaurus tool. Measurements of the evolution of a large-scale database application currently running in several hospitals in the UK are presented and interpreted. The kind of measurements provided by this in-depth study is useful input to the design of change management tools.
---
paper_title: Semantics and implementation of schema evolution in object-oriented databases
paper_content:
Object-oriented programming is well-suited to such data-intensive application domains as CAD/CAM, AI, and OIS (office information systems) with multimedia documents. At MCC we have built a prototype object-oriented database system, called ORION. It adds persistence and sharability to objects created and manipulated in applications implemented in an object-oriented programming environment. One of the important requirements of these applications is schema evolution, that is, the ability to dynamically make a wide variety of changes to the database schema. In this paper, following a brief review of the object-oriented data model that we support in ORION, we establish a framework for supporting schema evolution, define the semantics of schema evolution, and discuss its implementation.
---
paper_title: Version Management in an Object-Oriented Database
paper_content:
We describe a database system that includes a built-in version control mechanism that can be used in the definition of any new object types. This database system is object-oriented in the sense that it supports data abstraction, object types, and inheritance.
---
paper_title: Temporal semantics in information systems: a survey
paper_content:
If a computer system is to deal with temporal semantics, it must understand the nature of time and have the ability to accept and reason with time-related facts. This reasoning ranges from the knowledge and use of the chronological nature of a given calendar system, to the more complex nature of inductive reasoning between related events and time periods. This paper investigates the handling of time as it has been applied to the fields of data modelling and artificial intelligence. Systems using the techniques are investigated. Significant features and properties are then extracted and examined where they are pertinent to systems capable of modelling temporal data.
---
paper_title: A consensus glossary of temporal database concepts
paper_content:
This document contains definitions of a wide range of concepts specific to and widely used within temporal databases. In addition to providing definitions, the document also includes separate explanations of many of the defined concepts. Two sets of criteria are included. First, all included concepts were required to satisfy four relevance criteria, and, second, the naming of the concepts was resolved using a set of evaluation criteria. The concepts are grouped into three categories: concepts of general database interest, of temporal database interest, and of specialized interest. This document is a digest of a full version of the glossary. In addition to the material included here, the full version includes substantial discussions of the naming of the concepts. The consensus effort that led to this glossary was initiated in early 1992. Earlier status documents appeared in March 1993 and December 1992 and included terms proposed after an initial glossary appeared in SIGMOD Record in September 1992. The present glossary subsumes all the previous documents. It was most recently discussed at the "ARPA/NSF International Workshop on an Infrastructure for Temporal Databases," in Arlington, TX, June 1993, and is recommended by a significant part of the temporal database community. The glossary meets a need for creating a higher degree of consensus on the definition and naming of temporal database concepts.
---
paper_title: Management Of Schema Evolution In Databases
paper_content:
This paper presents a version model which handles database schema changes and which takes evolution into account. Its originality is in allowing the development of partial schema versions, or views of a schema. These versions are created in the same database from a common schema. We define the set of authorised modifications on a schema and the rules which guarantee its coherence after transformation. Mechanisms allowing data to be associated with each version are also integrated in the model.
---
paper_title: An incremental mechanism for schema evolution in engineering domains
paper_content:
The authors focus on one class of schema revisions necessitated by a very basic phenomenon: a given individual object evolves into a family of objects which are similar to it in many ways. This is commonly called the version problem. In theoretical terms, one can handle the above schema change in the standard, object-oriented database models by the interposition of suitable abstractions into the existing type lattice. There are practical and engineering difficulties with such schema changes. The authors propose an incremental mechanism called instance inheritance which is well suited to handling the schema changes without the attendant practical costs. The authors formally characterize this augmentation to the standard database models, and show examples of its applications.
---
paper_title: The Use of Information Capacity in Schema Integration and Translation
paper_content:
In this paper, we carefully explore the assumptions behind using information capacity equivalence as a measure of correctness for judging transformed schemas in schema integration and translation methodologies. We present a classification of common integration and translation tasks based on their operational goals and derive from them the relative information capacity requirements of the original and transformed schemas. We show that for many tasks, information capacity equivalence of the schemas is not strictly required. Based on this, we present a new definition of correctness that reflects each undertaken task. We then examine existing methodologies and show how anomalies can arise when using those that do not meet the proposed correctness criteria.
---
paper_title: Structural schema integration with full and partial correspondence using the dual model
paper_content:
The integration of views and schemas is an important part of database design and evolution and permits the sharing of data across complex applications. The view and schema integration methodologies used to date are driven purely by semantic considerations, and allow integration of objects only if that is valid from both semantic and structural view points. We discuss a new integration method called structural integration that has the advantage of being able to integrate objects that have structural similarities, even if they differ semantically. This is possible by using the object-oriented Dual Model which allows separate representation of structure and semantics. Structural integration has several advantages, including the identification of shared common structures that is important for sharing of data and methods.
---
paper_title: A theory of attributed equivalence in databases with application to schema integration
paper_content:
The authors present a common foundation for integrating pairs of entity sets, pairs of relationship sets, and an entity set with a relationship set. This common foundation is based on the basic principle of integrating attributes. Any pair of objects whose identifying attributes can be integrated can themselves be integrated. Several definitions of attribute equivalence are presented. These definitions can be used to specify the exact nature of the relationship between a pair of attributes. Based on these definitions, several strategies for attribute integration are presented and evaluated.
---
paper_title: Schema evolution in database systems: an annotated bibliography
paper_content:
Schema Evolution is the ability of a database system to respond to changes in the real world by allowing the schema to evolve. In many systems this property also implies a retaining of past states of the schema. This latter property is necessary if data recorded during the lifetime of one version of the schema is not to be made obsolete as the schema changes. This annotated bibliography investigates current published research with respect to the handling of changing schemas in database systems.
---
paper_title: A Taxonomy for Schema Versioning Based on the Relational and Entity Relationship Models
paper_content:
Recently there has been increasing interest in both the problems and the potential of accommodating evolving schema in databases, especially in systems which necessitate a high volume of structural changes or where structural change is difficult. This paper presents a taxonomy of changes applicable to the Entity-Relationship Model together with their effects on the underlying relational model expressed in terms of a second taxonomy relevant to the relational model.
---
paper_title: An architecture for automatic relational database system conversion
paper_content:
Changes in requirements for database systems necessitate schema restructuring, database translation, and application or query program conversion. An alternative to the lengthy manual revision process is proposed by offering a set of 15 transformations keyed to the relational model of data and the relational algebra. Motivations, examples, and detailed descriptions are provided.
---
paper_title: Extending the relational algebra to support transaction time
paper_content:
In this paper we discuss extensions to the conventional relational algebra to support transaction time. We show that these extensions are applicable to historical algebras that support valid time, yielding a temporal algebraic language. Since transaction time concerns the storage of information in the database, the notion of state is central. The extensions are formalized using denotational semantics. The additions preserve the useful properties of the conventional relational algebra.
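A toy illustration may help fix the idea of rollback over transaction time. The Python sketch below is not the paper's formalism; it simply stores every committed state of a relation together with its commit (transaction) time and returns the state that was current at a requested time. The class and function names (`TransactionTimeRelation`, `rollback`) are illustrative assumptions.

```python
from bisect import bisect_right

class TransactionTimeRelation:
    """Append-only store of every committed state of a relation, each
    stamped with the transaction time at which it was committed."""

    def __init__(self):
        self.commit_times = []   # strictly increasing transaction times
        self.states = []         # states[i] became current at commit_times[i]

    def commit(self, time, tuples):
        assert not self.commit_times or time > self.commit_times[-1]
        self.commit_times.append(time)
        self.states.append(frozenset(tuples))

    def rollback(self, time):
        """Return the state that was current at transaction time `time`."""
        i = bisect_right(self.commit_times, time) - 1
        return self.states[i] if i >= 0 else frozenset()

# The relation as seen at transaction time 5 ignores the later update.
r = TransactionTimeRelation()
r.commit(1, {("Alice", "Sales")})
r.commit(7, {("Alice", "Sales"), ("Bob", "R&D")})
assert r.rollback(5) == frozenset({("Alice", "Sales")})
```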
---
paper_title: Algebra and query language for a historical data model
paper_content:
We propose a «state» oriented view of historical databases. We propose an algebra for historical relations which contains classical as well as some new operators. The operators are simple to comprehend, unlike in other research proposals. We are also able to formulate a completeness criterion for the proposed model. Finally, we extend the popular SQL query language for use with historical databases. Again, the extensions are consistent with the simple basis of standard SQL.
---
paper_title: An algebraic language for query and update of temporal databases
paper_content:
Although time is a property of events and objects in the real world, conventional relational database management systems (RDBM's) can't model the evolution of either the objects being modeled or the database itself. Relational databases can be viewed as snapshot databases in that they record only the current database state, which represents the state of the enterprise being modeled at some particular time. We extend the relational algebra to support two orthogonal aspects of time: valid time, which concerns the modeling of time-varying reality, and transaction time, which concerns the recording of information in databases. In so doing, we define an algebraic language for query and update of temporal databases. The relational algebra is first extended to support valid time. Historical versions of nine relational operators (i.e., union, difference, cartesian product, selection, projection, intersection, $\Theta$-join, natural join, and quotient) are defined and three new operators (i.e., historical derivation, non-unique aggregation, and unique aggregation) are introduced. Both the relational algebra and this new historical algebra are then encapsulated within a language of commands to support transaction time. The language's semantics is formalized using denotational semantics. Rollback operators are added to the algebras to allow relations to be rolled back in time. The language accommodates scheme and contents evolution, handles single-command and multiple-command transactions, and supports queries on valid time. The language is shown to have the expressive power of the temporal query language TQuel. The language supports both unmaterialized and materialized views and accommodates a spectrum of view maintenance strategies, including incremental, recomputed, and immediate view materialization. Incremental versions of the snapshot and historical operators are defined to support incremental view materialization. A prototype query processor was built for TQuel to study incremental view materialization in temporal databases. Problems that arise when materialized views are maintained incrementally are discussed, and solutions to those problems are proposed. Criteria for evaluating temporal algebras are presented. Incompatibilities among the criteria are identified and a maximal set of compatible evaluation criteria is proposed. Our language and other previously proposed temporal extensions of the relational algebra are evaluated against these criteria.
---
paper_title: A Temporal Relational Algebra as Basis for Temporal Relational Completeness
paper_content:
We define a temporal algebra that is applicable to any temporal relational data model supporting discrete linear bounded time. This algebra has the five basic relational algebra operators extended to the temporal domain and an operator of linear recursion. We show that this algebra has the expressive power of a safe temporal calculus based on the predicate temporal logic with the until and since temporal operators. In [CrC189], a historical calculus was proposed as a basis for historical relational completeness. We propose the temporal algebra defined in this paper and the equivalent temporal calculus as an alternative basis for temporal relational completeness.
---
paper_title: Extending the database relational model to capture more meaning
paper_content:
During the last three or four years several investigators have been exploring “semantic models” for formatted databases. The intent is to capture (in a more or less formal way) more of the meaning of the data so that database design can become more systematic and the database system itself can behave more intelligently. Two major thrusts are clear: (1) the search for meaningful units that are as small as possible—atomic semantics; (2) the search for meaningful units that are larger than the usual n-ary relation—molecular semantics. In this paper we propose extensions to the relational model to support certain atomic and molecular semantics. These extensions represent a synthesis of many ideas from the published work in semantic modeling plus the introduction of new rules for insertion, update, and deletion, as well as new algebraic operators.
---
paper_title: Adding time dimension to relational model and extending relational algebra
paper_content:
Abstract A methodology for adding the time dimension to the relational model is proposed and relational algebra is extended for this purpose. We propose time-stamping attributes instead of adding time to tuples. Each attribute value is stored along with a time interval over which it is valid. Non-first normal form relations are used. A relation can have atomic, set-valued, triplet-valued, or set triplet-valued attributes. The last two types of attributes preserve the time (history). Furthermore, new algebraic operations are defined to extract information from historical relations. These operations convert one attribute type to another and do selection over the time dimension. Algebraic rules and identities for the new operations are also included.
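To make the attribute time-stamping concrete, here is a minimal Python sketch, assuming a triplet-valued attribute stored as (value, valid_from, valid_to) entries with non-overlapping intervals; `value_at` and `when` are hypothetical names standing in for the paper's time-slice and temporal-selection operations.

```python
# Attribute history: list of (value, valid_from, valid_to) triplets,
# with valid_to exclusive; intervals are assumed non-overlapping.
salary_history = [
    (30000, 1990, 1993),
    (34000, 1993, 1996),
    (39000, 1996, 2000),
]

def value_at(history, t):
    """Time-slice: the atomic value valid at time t, or None."""
    for value, start, end in history:
        if start <= t < end:
            return value
    return None

def when(history, predicate):
    """Selection over the time dimension: intervals whose value
    satisfies the predicate (a simple analogue of temporal selection)."""
    return [(start, end) for value, start, end in history if predicate(value)]

assert value_at(salary_history, 1994) == 34000
assert when(salary_history, lambda v: v >= 34000) == [(1993, 1996), (1996, 2000)]
```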
---
paper_title: A homogeneous relational model and query languages for temporal databases
paper_content:
In a temporal database, time values are associated with data item to indicate their periods of validity. We propose a model for temporal databases within the framework of the classical database theory. Our model is realized as a temporal parameterization of static relations. We do not impose any restrictions upon the schemes of temporal relations. The classical concepts of normal forms and dependencies are easily extended to our model, allowing a suitable design for a database scheme. We present a relational algebra and a tuple calculus for our model and prove their equivalence. Our data model is homogeneous in the sense that the periods of validity of all the attributes in a given tuple of a temporal relation are identical. We discuss how to relax the homogeneity requirement to extend the application domain of our approach.
---
paper_title: The Historical Relational Data Model (HRDM) and Algebra Based on Lifespans
paper_content:
Critical to the design of an historical database model is the representation of the “existence” of objects across the temporal dimension — for example, the “birth,” “death,” or “rebirth” of an individual, or the establishment or dis-establishment of a relationship. The notion of the “lifespan” of a database object is proposed as a simple framework for expressing these concepts. An object's lifespan is simply those periods of time during which the database models the properties of that object. In this paper we propose the historical relational data model (HRDM) and algebra that is based upon lifespans and that views the values of all attributes as functions from time points to simple domains. The model that we obtain is a consistent extension of the relational data model, and provides a simple mechanism for providing both time-varying data and time-varying schemes.
---
paper_title: Evaluation of relational algebras incorporating the time dimension in databases
paper_content:
The relational algebra is a procedural query language for relational databases. In this paper we survey extensions of the relational algebra that can query databases recording time-varying data. Such an algebra is a critical part of a temporal DBMS. We identify 26 criteria that provide an objective basis for evaluating temporal algebras. Seven of the criteria are shown to be mutually unsatisfiable, implying there can be no perfect temporal algebra. Choices made as to which of the incompatible criteria are satisfied characterize existing algebras. Twelve time-oriented algebras are summarized and then evaluated against the criteria. We demonstrate that the design space has in some sense been explored in that all combinations of basic design decisions have at least one representative algebra. Coverage of the remaining criteria provides one measure of the quality of each algebra. We argue that all of the criteria are independent and that the criteria identified as compatible are indeed so. Finally, we list plausible properties proposed by others that are either subsumed by other criteria, are not well defined, or have no objective basis for being evaluated. The algebras realize many different approaches to what appears initially to be a straightforward design task.
---
paper_title: Relational completeness of data base sublanguages
paper_content:
In the near future, we can expect a great variety of languages to be proposed for interrogating and updating data bases. This paper attempts to provide a theoretical basis which may be used to determine how complete a selection capability is provided in a proposed data sublanguage independently of any host language in which the sublanguage may be embedded. A relational algebra and a relational calculus are defined. Then, an algorithm is presented for reducing an arbitrary relation-defining expression (based on the calculus) into a semantically equivalent expression of the relational algebra. Finally, some opinions are stated regarding the relative merits of calculus-oriented versus algebra-oriented data sublanguages from the standpoint of optimal search and highly discriminating authorization schemes.
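Since the relational algebra serves as the yardstick here, a toy Python rendering of three of its operators (selection, projection, natural join) follows. It models relations as sets of attribute/value tuples and is only meant to convey the algebra's flavour, not the reduction algorithm of the paper.

```python
def select(relation, predicate):
    """Restriction: keep tuples satisfying a predicate."""
    return {t for t in relation if predicate(dict(t))}

def project(relation, attrs):
    """Projection onto a subset of attribute names (duplicates removed)."""
    return {tuple((a, dict(t)[a]) for a in attrs) for t in relation}

def natural_join(r, s):
    """Join on all attribute names the two relations share."""
    out = set()
    for t1 in r:
        for t2 in s:
            d1, d2 = dict(t1), dict(t2)
            common = set(d1) & set(d2)
            if all(d1[a] == d2[a] for a in common):
                out.add(tuple(sorted({**d1, **d2}.items())))
    return out

emp = {(("name", "Alice"), ("dept", "Sales")), (("name", "Bob"), ("dept", "R&D"))}
dept = {(("dept", "Sales"), ("floor", 2)), (("dept", "R&D"), ("floor", 3))}
joined = natural_join(emp, dept)
assert project(select(joined, lambda t: t["floor"] == 2), ["name"]) == {(("name", "Alice"),)}
```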
---
paper_title: Architecture of the ORION next-generation database system
paper_content:
Various architectural components of ORION-1 and ORION-1SX are described and a review of the current implementation is provided. The message handler receives all messages sent to the ORION system. The object subsystem provides high-level data management functions, including query optimization, schema management, long data management (including text search) and support for versionable objects, composite objects, and multimedia objects. The transaction management subsystem coordinates concurrent object accesses and provides recovery capabilities. The storage subsystem manages persistent storage of objects and controls the flow of objects between the secondary storage device and main memory buffers. In ORION-1, all subsystems reside in one computer. The ORION-1SX architecture is significantly different from ORION-1 in the management of shared data structures and distribution of these subsystems and their components.
---
paper_title: Management Of Schema Evolution In Databases
paper_content:
This paper presents a version model which handles database schema changes and which takes evolution into account. Its originality is in allowing the development of partial schema versions, or views of a schema. These versions are created in the same database from a common schema. We define the set of authorised modifications on a schema and the rules which guarantee its coherence after transformation. Mechanisms allowing data to be associated with each version are also integrated in the model.
---
paper_title: Versions of Schema for Object-Oriented Databases
paper_content:
Version control is one of the important database requirements for design environments. Various models of versions have been proposed and implemented. However, research in versions has been focused exclusively on versioning single design objects. In a multi-user design environment where the schema (definition) of the design objects may undergo dynamic changes, it is important to be able to version the schema, as well as version the single design objects. In this paper, we propose a model of versions of schema by extending our model of versions of single objects. In particular, we present the semantics of our model of versions of schema for object-oriented databases, explore issues in implementing the model, and examine a few alternatives to our model of versions of schema.
---
paper_title: Version Management in an Object-Oriented Database
paper_content:
We describe a database system that includes a built-in version control mechanism that can be used in the definition of any new object types. This database system is object-oriented in the sense that it supports data abstraction, object types, and inheritance.
---
paper_title: Versions of Schema for Object-Oriented Databases
paper_content:
Version control is one of the important database requirements for design environments. Various models of versions have been proposed and implemented. However, research in versions has been focused exclusively on versioning single design objects. In a multi-user design environment where the schema (definition) of the design objects may undergo dynamic changes, it is important to be able to version the schema, as well as version the single design objects. In this paper, we propose a model of versions of schema by extending our model of versions of single objects. In particular, we present the semantics of our model of versions of schema for object-oriented databases, explore issues in implementing the model, and examine a few alternatives to our model of versions of schema.
---
paper_title: Semantics and implementation of schema evolution in object-oriented databases
paper_content:
Object-oriented programming is well-suited to such data-intensive application domains as CAD/CAM, AI, and OIS (office information systems) with multimedia documents. At MCC we have built a prototype object-oriented database system, called ORION. It adds persistence and sharability to objects created and manipulated in applications implemented in an object-oriented programming environment. One of the important requirements of these applications is schema evolution, that is, the ability to dynamically make a wide variety of changes to the database schema. In this paper, following a brief review of the object-oriented data model that we support in ORION, we establish a framework for supporting schema evolution, define the semantics of schema evolution, and discuss its implementation.
---
paper_title: Version Management in an Object-Oriented Database
paper_content:
We describe a database system that includes a built-in version control mechanism that can be used in the definition of any new object types. This database system is object-oriented in the sense that it supports data abstraction, object types, and inheritance.
---
paper_title: Meta Operations for Type Management in Object-Oriented Databases: — A Lazy Mechanism for Schema Evolution
paper_content:
In object-oriented database systems, type definitions are used as the basis of object manipulation. They may change, causing the systems' schemata to evolve dynamically. In this paper, we first clarify the concept of schema evolution in databases and discuss its existing solutions. Next we propose a lazy evaluation method for schema evolution which minimizes the amount of object manipulation. It is realized in a system which incorporates the concept of persistent meta-objects, where a meta-object interprets meta-messages and maintains type validity while the schema evolves. Our method obtains a better balance between the availability of objects and the speed of accessing them.
---
paper_title: The time relational model
paper_content:
Existing data base management systems (DBMSs) allow users to operate upon the latest (committed) data base state. However, many real world applications require storing and accessing historical information. Furthermore, there are no provisions in current DBMSs for the distinction between the physical time a given data item is entered, and the time period to which it pertains. Many data base updates in real applications are either "retroactive", i.e., their effectiveness takes place sometime in the past, or "proactive", i.e., their effectiveness will take place sometime in the future. The Time-Relational Model is an architecture which integrates comprehensive time processing capabilities into the Relational Model of data bases for managing changes of data values, data manipulation rules, and data structure. However, those who are not concerned about Time in their particular applications may perceive this model as a regular relational model. A fundamental concept is the time-view of data and transaction programs. Its major objective is to automatically and dynamically provide multiple and complete data base views reflecting different points in time, including all system components that may change over time. The time-view theorem provides necessary and sufficient conditions for achieving any desirable, dynamic time-view. The time-consistency theorem states the conditions for which a given computation can be reproduced again and again in the face of data base changes over time. The basic relational algebra operators are extended to include the time-view concept, resulting in the time relational algebra. An implementation architecture is developed which demonstrates the feasibility of such a model. The central idea is to gain more functions and solutions based on the presence of time related information and, thus, spread the overall operational and implementation cost across multiple sources. Recovery Manager and Concurrency Control Procedures are developed. The recovery manager uses only the data itself for its own operation rather than a logging facility; there is no need for a locking mechanism to provide each user with an appropriate isolation level from other concurrent users. A real-life case study is described based on the Informatics Inc. Manufacturing Planning System - PRODUCTION-IV.
---
paper_title: The temporal query language TQuel
paper_content:
Recently, attention has been focused on temporal databases, representing an enterprise over time. We have developed a new language, TQuel, to query a temporal database. TQuel was designed to be a minimal extension, both syntactically and semantically, of Quel, the query language in the Ingres relational database management system. This paper discusses the language informally, then provides a tuple relational calculus semantics for the TQuel statements that differ from their Quel counterparts, including the modification statements. The three additional temporal constructs defined in TQuel are shown to be direct semantic analogues of Quel's where clause and target list. We also discuss reducibility of the semantics to Quel's semantics when applied to a static database. TQuel is compared with ten other query languages supporting time.
---
paper_title: Formal semantics for time in databases
paper_content:
The concept of a historical database is introduced as a tool for modeling the dynamic nature of some part of the real world. Just as first-order logic has been shown to be a useful formalism for expressing and understanding the underlying semantics of the relational database model, intensional logic is presented as an analogous formalism for expressing and understanding the temporal semantics involved in a historical database. The various components of the relational model, as extended to include historical relations, are discussed in terms of the model theory for the logic ILs, a variation of the logic IL formulated by Richard Montague. The modal concepts of intensional and extensional data constraints and queries are introduced and contrasted. Finally, the potential application of these ideas to the problem of natural language database querying is discussed.
---
paper_title: Quantifying Schema Evolution
paper_content:
Abstract Achieving correct changes is the dominant activity in the application software industry. Modification of database schemata is one kind of change which may have severe consequences for database applications. The paper presents a method for measuring modifications to database schemata and their consequences by using a thesaurus tool. Measurements of the evolution of a large-scale database application currently running in several hospitals in the UK are presented and interpreted. The kind of measurements provided by this in-depth study is useful input to the design of change management tools.
---
paper_title: Formal semantics for time in databases
paper_content:
The concept of a historical database is introduced as a tool for modeling the dynamic nature of some part of the real world. Just as first-order logic has been shown to be a useful formalism for expressing and understanding the underlying semantics of the relational database model, intensional logic is presented as an analogous formalism for expressing and understanding the temporal semantics involved in a historical database. The various components of the relational model, as extended to include historical relations, are discussed in terms of the model theory for the logic ILs, a variation of the logic IL formulated by Richard Montague. The modal concepts of intensional and extensional data constraints and queries are introduced and contrasted. Finally, the potential application of these ideas to the problem of natural language database querying is discussed.
---
paper_title: Database relations with null values
paper_content:
Abstract A new formal approach is proposed for modeling incomplete database information by means of null values. The basis of our approach is an interpretation of nulls which obviates the need for more than one type of null. The conceptual soundness of this approach is demonstrated by generalizing the formal framework of the relational data model to include null values. In particular, the set-theoretical properties of relations with nulls are studied and the definitions of set inclusion, set union, and set difference are generalized. A simple and efficient strategy for evaluating queries in the presence of nulls is provided. The operators of relational algebra are then generalized accordingly. Finally, the deep-rooted logical and computational problems of previous approaches are reviewed to emphasize the superior practicability of the solution.
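As an informal illustration of working with a single "no-information" null, the sketch below (an assumption-laden simplification, not the paper's exact definitions) implements tuple subsumption and a generalized union that drops tuples carrying no additional information.

```python
NULL = None   # a single "no information" marker (assumption for this sketch)

def subsumes(t1, t2):
    """t1 carries at least as much information as t2: wherever t2 has a
    value, t1 has the same value; elsewhere t2 is NULL."""
    return all(v2 is NULL or v1 == v2 for v1, v2 in zip(t1, t2))

def reduce_relation(tuples):
    """Drop tuples that are subsumed by (add nothing beyond) another tuple."""
    tuples = list(tuples)
    return {t for t in tuples
            if not any(s != t and subsumes(s, t) for s in tuples)}

def generalized_union(r, s):
    """Set union followed by removal of subsumed tuples."""
    return reduce_relation(set(r) | set(s))

r = {("Smith", NULL), ("Jones", "Sales")}
s = {("Smith", "R&D")}
# ("Smith", NULL) adds no information once ("Smith", "R&D") is present.
assert generalized_union(r, s) == {("Smith", "R&D"), ("Jones", "Sales")}
```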
---
paper_title: Null values in nested relational databases
paper_content:
The desire to extend the applicability of the relational model beyond traditional data-processing applications has stimulated interest in nested or non-first normal form relations in which the attributes of a relation can take on values which are sets or even relations themselves. In this paper, we study the role of null values in the nested relational model using an open world assumption. We extend the traditional theory and study the properties of extended operators for nested relations containing nulls. The no-information, unknown, and non-existent interpretation of nulls are discussed and the meaning of “empty set” is clarified. Finally, contrary to several previous results, we determine that the traditional axiomatization of functional and multivalued dependencies is valid in the presence of nulls.
---
paper_title: Schema evolution and the relational algebra
paper_content:
In this paper we discuss extensions to the conventional relational algebra to support both aspects of transaction time, evolution of a database’s contents and evolution of a database’s schema. We define a relation’s schema to be the relation’s temporal signature, a function mapping the relation’s attribute names onto their value domains, and class, indicating the extent of support for time. We also introduce commands to change a relation, now defined as a triple consisting of a sequence of classes, a sequence of signatures, and a sequence of states. A semantic type system is required to identify semantically incorrect expressions and to enforce consistency constraints among a relation’s class, signature, and state following update. We show that these extensions are applicable, without change, to historical algebras that support valid time, yielding an algebraic language for the query and update of temporal databases. The additions preserve the useful properties of the conventional algebra. A database’s schema describes the structure of the database; the contents of the database must adhere to that structure [Date 1976, Ullman 1982]. Schema evolution refers to changes to the database’s schema over time. Conventional databases allow only one schema to be in force at a time, requiring restructuring (also termed logical reorganization [Sockut & Goldberg 1979]) when the schema is modified. With the advent of databases storing past states [McKenzie 1986], it becomes desirable to accommodate multiple schemas, each in effect for an interval in the past. Schema versioning refers to retention of past schemas resulting from schema evolution. In an earlier paper [McKenzie & Snodgrass 1987A] we proposed extensions to the conventional relational algebra [Codd 1970] that model the evolution of a database’s contents. We did not, however, consider the evolution of a database’s schema. In this paper, we provide further extensions to the conventional relational algebra that model the evolution of a database’s schema. The extensions that support evolution of a database’s contents are repeated here for completeness and because the extensions supporting schema evolution are best explained in concert with those earlier extensions.
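A rough data-structure sketch of the "relation as a sequence of signatures and states" idea follows; it is my own simplification (the class component is omitted, and names such as `VersionedRelation` are invented), but it shows how a schema change and a state change can each be stamped with transaction time and checked for consistency.

```python
class VersionedRelation:
    """A relation kept as parallel, transaction-time-stamped sequences of
    signatures (attribute name -> domain) and states (lists of tuples)."""

    def __init__(self):
        self.signatures = []   # list of (commit_time, {attr: domain})
        self.states = []       # list of (commit_time, list_of_dict_tuples)

    def change_schema(self, time, signature):
        self.signatures.append((time, dict(signature)))

    def update_state(self, time, tuples):
        sig = self.signature_at(time)
        for t in tuples:                    # consistency constraint:
            assert set(t) == set(sig)       # tuples must fit the signature
            for attr, dom in sig.items():   # in force at this time
                assert isinstance(t[attr], dom)
        self.states.append((time, list(tuples)))

    def _latest(self, seq, time):
        current = None
        for stamp, item in seq:
            if stamp <= time:
                current = item
        return current

    def signature_at(self, time):
        return self._latest(self.signatures, time) or {}

    def state_at(self, time):
        return self._latest(self.states, time) or []

r = VersionedRelation()
r.change_schema(1, {"name": str})
r.update_state(2, [{"name": "widget"}])
r.change_schema(3, {"name": str, "weight": int})
r.update_state(4, [{"name": "widget", "weight": 5}])
assert r.signature_at(2) == {"name": str}
assert r.state_at(2) == [{"name": "widget"}]
```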
---
paper_title: Quantifying Schema Evolution
paper_content:
Abstract Achieving correct changes is the dominant activity in the application software industry. Modification of database schemata is one kind of change which may have severe consequences for database applications. The paper presents a method for measuring modifications to database schemata and their consequences by using a thesaurus tool. Measurements of the evolution of a large-scale database application currently running in several hospitals in the UK are presented and interpreted. The kind of measurements provided by this in-depth study is useful input to the design of change management tools.
---
paper_title: Handling discovered structure in database systems
paper_content:
Most database systems research assumes that the database schema is determined by a database administrator. With the recent increase in interest in knowledge discovery from databases and the predicted increase in the volume of data expected to be stored, it is appropriate to reexamine this assumption and investigate how derived or induced, rather than database administrator supplied, structure can be accommodated and used by database systems. The paper investigates some of the characteristics of inductive learning and knowledge discovery as they pertain to database systems, and the constraints that would be imposed on appropriate inductive learning algorithms are discussed. A formal method of defining induced dependencies (both static and temporal) is proposed as the inductive analogue to functional dependencies. The Boswell database system exemplifying some of these characteristics is also briefly discussed.
---
|
Title: A Survey of Schema Versioning Issues for Database Systems
Section 1: Background
Description 1: Provide an introduction to the problems of schema modifications and their implications for database management and administration.
Section 2: Pragmatic considerations
Description 2: Discuss the practical constraints and considerations for proposed solutions to schema versioning and evolution.
Section 3: Outline of this paper
Description 3: Summarize the structure and main sections of the paper.
Section 4: Handling heterogeneous schemata
Description 4: Define terms related to heterogeneous schemata and discuss their relationship with schema evolution.
Section 5: Schema modification, evolution and versioning
Description 5: Provide definitions and distinctions between schema modification, schema evolution, and schema versioning.
Section 6: Data and view integration
Description 6: Discuss associated research areas like data and view integration and their relevance to schema evolution.
Section 7: Temporal database systems
Description 7: Offer a brief overview of temporal database systems and their concepts related to schema versioning.
Section 8: Domain/type evolution
Description 8: Explore the issues and solutions related to the evolution of domains in data models.
Section 9: Relation/class evolution
Description 9: Examine the challenges and methodologies for evolving relation and class structures.
Section 10: Algebras supporting schema evolution
Description 10: Review different algebras proposed for supporting schema evolution in database systems.
Section 11: Schema conversion mechanisms
Description 11: Outline various proposed mechanisms for converting schemas at the physical level.
Section 12: Data conversion mechanisms
Description 12: Discuss approaches for converting data to align with new schema versions.
Section 13: Access right considerations
Description 13: Address the potential violations of access rights due to schema evolution changes.
Section 14: Concurrency considerations and concurrent schemata
Description 14: Discuss the concurrency issues that arise with schema modifications in a multi-user environment.
Section 15: Issues in query language support
Description 15: Explore the challenges and potential solutions for supporting schema evolution in database query languages.
Section 16: Levels of support for schema evolution in query languages
Description 16: Propose different approaches for handling schema changes in query languages.
Section 17: Completed schemata
Description 17: Discuss the concept of completed schemata and its application for data retrieval and backup purposes.
Section 18: Problems presented by null values
Description 18: Examine how null values can present issues in the context of an evolving schema.
Section 19: Schema valid-time support
Description 19: Discuss the potential merits and implementation considerations for schema valid-time support.
Section 20: Schema-time projection
Description 20: Define schema-time projection and methods for specifying and constructing effective schemas.
Section 21: Schema-time selection
Description 21: Address how schema-time selection helps access data based on schema formats and considerations related to data update.
Section 22: Version naming
Description 22: Present methods for naming schema versions to track and manage schema changes effectively.
Section 23: Casting of output attribute domains
Description 23: Discuss the relevance of casting or converting attribute domains for stability in applications using evolving schemas.
Section 24: Other related research issues
Description 24: Identify other research areas related to schema versioning and evolution, including pragmatic limitations and automated schema evolution.
Section 25: Further Research
Description 25: Summarize the current research landscape and outline directions for future research in schema versioning and evolution.
|
A survey on evaluation methods for image segmentation
| 10 |
---
paper_title: Performance Characterization in Computer Vision
paper_content:
Computer vision algorithms are composed of different sub-algorithms often applied in sequence. Determination of the performance of a total computer vision algorithm is possible if the performance of each of the sub-algorithm constituents is given. The problem, however, is that for most published algorithms, there is no performance characterization which has been established in the research literature. This is an awful state of affairs for the engineers whose job it is to design and build image analysis or machine vision systems.
---
paper_title: Computational Techniques in the Visual Segmentation of Static Scenes.
paper_content:
A wide range of segmentation techniques continues to evolve in the literature on scene analysis. Many of these approaches have been constrained to limited applications or goals. This survey analyzes the complexities encountered in applying these techniques to color images of natural scenes involving complex textured objects. It also explores new ways of using the techniques to overcome some of the problems which are described. An outline of considerations in the development of a general image segmentation system which can provide input to a semantic interpretation process is distributed throughout the paper. In particular, the problems of feature selection and extraction in images with textural variations are discussed. The approaches to segmentation are divided into two broad categories, boundary formation and region formation. The tools for extraction of boundaries involve spatial differentiation, nonmaxima suppression, relaxation processes, and grouping of local edges into segments. Approaches to region formation include region growing under local spatial guidance, histograms for analysis of global feature activity, and finally an integration of the strengths of each by a spatial analysis of feature activity. A brief discussion of attempts by others to integrate the segmentation and interpretation phrases is also provided. The discussion is supported by a variety of experimental results.
---
paper_title: A survey of threshold selection techniques
paper_content:
Abstract The use of thresholding as a tool in image segmentation has been extensively studied, and a variety of techniques have been proposed for automatic threshold selection. This paper presents a review of these techniques, including global, local, and dynamic methods.
---
paper_title: Segmentation evaluation using ultimate measurement accuracy
paper_content:
As a wide range of segmentation techniques have been developed in the last two decades, the evaluation and comparison of segmentation techniques becomes indispensable. In this paper, after a thorough review of previous work, we present a general approach for evaluation and comparison of segmentation techniques. More specifically, under this general framework, we propose to use the ultimate measurement accuracy to assess the performance of different algorithms. In image analysis, the ultimate goals of segmentation and other processing are often to obtain measurements of the object features in the image. Therefore, the accuracy of those ultimate measurements over segmented images would be a good index revealing the performance of segmentation techniques. We feel this measure is of much greater importance than, e.g., error probabilities on pixel labeling, or even specially developed figure of merit. There exist many features describing the properties of the objects in the image. Some of them are discussed here and their applicability and performance in the context of segmentation evaluation are studied. Based on experimental results, we provide some useful guidelines for choosing specific measurements for different evaluation situations and for selecting adequate techniques in particular segmentation applications.
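Because this criterion judges a segmentation by the accuracy of measurements made on it rather than by pixel labels, a minimal Python illustration may help. It takes object area as the feature and reports the relative error between a reference mask and a machine segmentation; the function name is illustrative, and other features such as perimeter or form factor could be substituted.

```python
import numpy as np

def relative_ultimate_measurement_error(reference_mask, segmented_mask):
    """Relative error of an object feature (here: area) measured on the
    segmented image with respect to the same feature on the reference."""
    ref_area = int(np.count_nonzero(reference_mask))
    seg_area = int(np.count_nonzero(segmented_mask))
    return abs(ref_area - seg_area) / ref_area

# Reference object: a 4x4 square; the segmentation misses one row.
reference = np.zeros((8, 8), dtype=bool)
reference[2:6, 2:6] = True
segmented = np.zeros((8, 8), dtype=bool)
segmented[3:6, 2:6] = True
print(relative_ultimate_measurement_error(reference, segmented))  # 0.25
```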
---
paper_title: A survey on image segmentation
paper_content:
Abstract For the past decade, many image segmentation techniques have been proposed. These segmentation techniques can be categorized into three classes, (1) characteristic feature thresholding or clustering, (2) edge detection, and (3) region extraction. This survey summarizes some of these techniques. In the area of biomedical image segmentation, most proposed techniques fall into the categories of characteristic feature thresholding or clustering and edge detection.
---
paper_title: Objective and quantitative segmentation evaluation and comparison
paper_content:
Abstract A general framework for segmentation evaluation is introduced after a brief review of previous work. The accuracy of object feature measurement is proposed as a criterion for judging the quality of segmentation results and assessing the performance of applied algorithms. This goal-oriented approach has been shown useful for an objective and quantitative study of segmentation techniques.
---
paper_title: A survey of thresholding techniques
paper_content:
Abstract In digital image processing, thresholding is a well-known technique for image segmentation. Because of its wide applicability to other areas of the digital image processing, quite a number of thresholding methods have been proposed over the years. In this paper, we present a survey of thresholding techniques and update the earlier survey work by Weszka (Comput. Vision Graphics & Image Process 7, 1978, 259–265) and Fu and Mui (Pattern Recognit. 13, 1981, 3–16). We attempt to evaluate the performance of some automatic global thresholding methods using the criterion functions such as uniformity and shape measures. The evaluation is based on some real world images.
---
paper_title: Transition region determination based thresholding
paper_content:
Abstract We present a newly developed thresholding technique which is not based on the image's gray-level histogram. This technique is fully automatic and quite robust in the presence of noise and unexpected structures. Moreover, no empirical parameters are used, and no limitations on shape and size of objects are imposed. A comparison with histogram based threshold selection is also discussed.
---
paper_title: Image Structure Representation and Processing: A Discussion of Some Segmentation Methods in Cytology
paper_content:
Image processing methods (segmentation) are presented in connection with a modeling of image structure. An image is represented as a set of primitives, characterized by their type, abstraction level, and a list of attributes. Entities (regions for example) are then described as a subset of primitives obeying particular rules. Image segmentation methods are discussed, according to the associated image modeling level. Their potential efficacy is compared when applied to cytologic image analysis.
---
paper_title: Quantitative design and evaluation of enhancement/thresholding edge detectors
paper_content:
Quantitative design and performance evaluation techniques are developed for the enhancement/thresholding class of image edge detectors. The design techniques are based on statistical detection theory and deterministic pattern-recognition classification procedures. The performance evaluation methods developed include: a) deterministic measurement of the edge gradient amplitude; b) comparison of the probabilities of correct and false edge detection; and c) figure of merit computation. The design techniques developed are used to optimally design a variety of small and large mask edge detectors. Theoretical and experimental comparisons of edge detectors are presented.
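One of the evaluation methods listed above is a figure of merit computation; a commonly quoted form, often attributed to Pratt, is sketched below under the assumptions of a scaling constant alpha and squared distances from each detected edge pixel to the nearest ideal edge pixel.

```python
import numpy as np

def pratt_figure_of_merit(ideal_edges, detected_edges, alpha=1.0 / 9.0):
    """Pratt-style figure of merit: rewards detected edge pixels that lie
    close to the ideal edge map; 1.0 indicates a perfect match."""
    ideal_pts = np.argwhere(ideal_edges)
    detected_pts = np.argwhere(detected_edges)
    if len(ideal_pts) == 0 or len(detected_pts) == 0:
        return 0.0
    total = 0.0
    for p in detected_pts:
        d2 = np.min(np.sum((ideal_pts - p) ** 2, axis=1))  # squared distance
        total += 1.0 / (1.0 + alpha * d2)
    return total / max(len(ideal_pts), len(detected_pts))

ideal = np.zeros((10, 10), dtype=bool)
ideal[:, 5] = True                      # a vertical ideal edge
detected = np.zeros((10, 10), dtype=bool)
detected[:, 6] = True                   # detected one pixel to the right
print(round(pratt_figure_of_merit(ideal, detected), 3))  # 0.9
```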
---
paper_title: Image segmentation and image models
paper_content:
This paper discusses image segmentation techniques from the standpoint of the assumptions that an image should satisfy in order for a particular technique to be applicable to it. These assumptions, which are often not stated explicitly, can be regarded as (perhaps informal) "models" for classes of images. The paper emphasizes two basic classes of models: statistical models that describe the pixel population in an image or region, and spatial models that describe the decomposition of an image into regions.
---
paper_title: Segmentation of microscopic cell scenes.
paper_content:
Different methods for the automated segmentation of microscopic cell scenes are presented with examples. The techniques discussed include edge detection by thresholding, "blob" detection by split-and-merge algorithm, global thresholding using gray-level histograms, hierarchic thresholding using color information, global thresholding using two-dimensional histograms and segmentation by "blob" labeling. Methods are more robust against insignificant changes in the scene and perform more reliably as more a priori knowledge about the scene is incorporated in the segmentation algorithm. The inclusion of both photometric and geometric a priori knowledge can result in a high level of correct segmentations, the cost of which is increased computation time.
---
paper_title: Computer and Robot Vision
paper_content:
From the Publisher: This two-volume set is an authoritative, comprehensive, modern work on computer vision that covers all of the different areas of vision with a balanced and unified approach. The discussion in "Volume I" focuses on image in, and image out or feature set out. "Volume II" covers the higher level techniques of illumination, perspective projection, analytical photogrammetry, motion, image matching, consistent labeling, model matching, and knowledge-based vision systems.
---
paper_title: Dynamic Measurement of Computer Generated Image Segmentations
paper_content:
This paper introduces a general purpose performance measurement scheme for image segmentation algorithms. Performance parameters that function in real-time distinguish this method from previous approaches that depended on an a priori knowledge of the correct segmentation. A low level, context independent definition of segmentation is used to obtain a set of optimization criteria for evaluating performance. Uniformity within each region and contrast between adjacent regions serve as parameters for region analysis. Contrast across lines and connectivity between them represent measures for line analysis. Texture is depicted by the introduction of focus of attention areas as groups of regions and lines. The performance parameters are then measured separately for each area. The usefulness of this approach lies in the ability to adjust the strategy of a system according to the varying characteristics of different areas. This feedback path provides the means for more efficient and error-free processing. Results from areas with dissimilar properties show a diversity in the measurements that is utilized for dynamic strategy setting.
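The region uniformity and inter-region contrast parameters described above can be computed without any reference segmentation. The sketch below is one plausible formulation rather than the paper's exact definitions: uniformity from the normalized within-region gray-level variance, and contrast from the relative difference of mean gray levels of two adjacent regions.

```python
import numpy as np

def region_uniformity(image, region_mask):
    """1 minus the within-region gray-level variance, normalized by the
    maximum variance possible for the image's dynamic range (1.0 = homogeneous)."""
    values = image[region_mask].astype(float)
    norm = ((image.max() - image.min()) ** 2) / 4.0
    if norm == 0 or values.size == 0:
        return 1.0
    return 1.0 - values.var() / norm

def region_contrast(image, mask_a, mask_b):
    """Gray-level contrast between two adjacent regions: |ma - mb| / (ma + mb)."""
    ma = float(image[mask_a].mean())
    mb = float(image[mask_b].mean())
    return abs(ma - mb) / (ma + mb) if (ma + mb) > 0 else 0.0

image = np.array([[10, 10, 200, 200],
                  [10, 12, 198, 200]], dtype=float)
left = np.zeros_like(image, dtype=bool); left[:, :2] = True
right = ~left
print(region_uniformity(image, left))        # close to 1.0 (homogeneous region)
print(region_contrast(image, left, right))   # 0.9 (strong contrast)
```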
---
paper_title: Image thresholding: Some new techniques
paper_content:
Abstract Some of the existing threshold selection techniques have been critically reviewed. Two algorithms based on a new conditional entropy measure of a partitioned image have been formulated. The approximate minimum error thresholding algorithm of Kittler and Illingworth has been implemented considering the Poisson distribution for the gray level instead of the commonly used normal distribution. Justification in support of the Poisson distribution has also been given. This method is found to be much better both from the point of view of convergence and segmented output. The proposed methods have been applied on a number of images and are found to produce good results. Objective evaluation of the thresholds has been done using divergence, region uniformity, correlation between original image and the segmented image, and second order entropy.
---
paper_title: Threshold Evaluation Techniques
paper_content:
Threshold selection techniques have been used as a basic tool in image segmentation, but little work has been done on the problem of evaluating a threshold of an image. The problem of threshold evaluation is addressed, and two methods are proposed for measuring the "goodness" of a thresholded image, one based on a busyness criterion and the other based on a discrepancy or error criterion. These evaluation techniques are applied to a set of infrared images and are shown to be useful in facilitating threshold selection. In fact, both methods usually result in similar or identical thresholds which yield good segmentations of the images.
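The two "goodness" measures above can be read, roughly, as a busyness score and a discrepancy score. The sketch below gives one plausible reading rather than the authors' exact formulas: busyness as the fraction of 4-adjacent label disagreements in the thresholded image, and discrepancy as the mean deviation of each pixel from the mean gray level of its assigned class.

```python
import numpy as np

def busyness(binary):
    """Fraction of 4-adjacent pixel pairs whose labels disagree; a noisy
    threshold produces a 'busy' image and a high score."""
    horiz = binary[:, 1:] != binary[:, :-1]
    vert = binary[1:, :] != binary[:-1, :]
    return (horiz.sum() + vert.sum()) / float(horiz.size + vert.size)

def discrepancy(image, threshold):
    """Mean absolute difference between each pixel and the mean gray level
    of the class (object or background) it is assigned to."""
    image = image.astype(float)
    fg = image > threshold
    err = np.zeros_like(image)
    if fg.any():
        err[fg] = np.abs(image[fg] - image[fg].mean())
    if (~fg).any():
        err[~fg] = np.abs(image[~fg] - image[~fg].mean())
    return err.mean()

img = np.array([[20, 22, 200, 205],
                [18, 25, 195, 210]], dtype=float)
print(busyness(img > 100))       # 0.2: one clean object/background boundary
print(discrepancy(img, 100))     # small: both classes are internally homogeneous
```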
---
paper_title: Textural Features for Image Classification
paper_content:
Texture is one of the important characteristics used in identifying objects or regions of interest in an image, whether the image be a photomicrograph, an aerial photograph, or a satellite image. This paper describes some easily computable textural features based on gray-tone spatial dependencies, and illustrates their application in category-identification tasks of three different kinds of image data: photomicrographs of five kinds of sandstones, 1:20 000 panchromatic aerial photographs of eight land-use categories, and Earth Resources Technology Satellite (ERTS) multispectral imagery containing seven land-use categories. We use two kinds of decision rules: one for which the decision regions are convex polyhedra (a piecewise linear decision rule), and one for which the decision regions are rectangular parallelepipeds (a min-max decision rule). In each experiment the data set was divided into two parts, a training set and a test set. Test set identification accuracy is 89 percent for the photomicrographs, 82 percent for the aerial photographic imagery, and 83 percent for the satellite imagery. These results indicate that the easily computable textural features probably have a general applicability for a wide variety of image-classification applications.
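The textural features are derived from gray-tone spatial-dependence (co-occurrence) matrices; the sketch below builds such a matrix for a single displacement and computes one representative contrast-style feature. The offset, symmetric counting and normalization choices are assumptions of this illustration.

```python
import numpy as np

def cooccurrence_matrix(image, levels, offset=(0, 1)):
    """Gray-tone spatial-dependence (co-occurrence) matrix for one offset,
    counted symmetrically; P[i, j] is the normalized count of (i, j) pairs."""
    dr, dc = offset
    P = np.zeros((levels, levels), dtype=float)
    rows, cols = image.shape
    for r in range(rows):
        for c in range(cols):
            r2, c2 = r + dr, c + dc
            if 0 <= r2 < rows and 0 <= c2 < cols:
                P[image[r, c], image[r2, c2]] += 1
                P[image[r2, c2], image[r, c]] += 1   # symmetric counting
    return P / P.sum()

def contrast(P):
    """One Haralick-style feature: sum of (i - j)^2 weighted by P[i, j]."""
    i, j = np.indices(P.shape)
    return float(np.sum(P * (i - j) ** 2))

img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [2, 2, 3, 3]], dtype=int)
P = cooccurrence_matrix(img, levels=4)
print(round(contrast(P), 3))  # 0.333 for this small example
```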
---
paper_title: Low Level Image Segmentation: An Expert System
paper_content:
A major problem in robotic vision is the segmentation of images of natural scenes in order to understand their content. This paper presents a new solution to the image segmentation problem that is based on the design of a rule-based expert system. General knowledge about low level image properties is encoded in knowledge rules, which processes employ to segment the image into uniform regions and connected lines. In addition to the knowledge rules, a set of control rules is also employed. These include metarules that embody inferences about the order in which the knowledge rules are matched. They also incorporate focus of attention rules that determine the path of processing within the image. Furthermore, an additional set of higher level rules dynamically alters the processing strategy. This paper discusses the structure and content of the knowledge and control rules for image segmentation.
---
paper_title: Dynamic Measurement of Computer Generated Image Segmentations
paper_content:
This paper introduces a general purpose performance measurement scheme for image segmentation algorithms. Performance parameters that function in real-time distinguish this method from previous approaches that depended on an a priori knowledge of the correct segmentation. A low level, context independent definition of segmentation is used to obtain a set of optimization criteria for evaluating performance. Uniformity within each region and contrast between adjacent regions serve as parameters for region analysis. Contrast across lines and connectivity between them represent measures for line analysis. Texture is depicted by the introduction of focus of attention areas as groups of regions and lines. The performance parameters are then measured separately for each area. The usefulness of this approach lies in the ability to adjust the strategy of a system according to the varying characteristics of different areas. This feedback path provides the means for more efficient and error-free processing. Results from areas with dissimilar properties show a diversity in the measurements that is utilized for dynamic strategy setting.
---
paper_title: A survey of thresholding techniques
paper_content:
Abstract In digital image processing, thresholding is a well-known technique for image segmentation. Because of its wide applicability to other areas of the digital image processing, quite a number of thresholding methods have been proposed over the years. In this paper, we present a survey of thresholding techniques and update the earlier survey work by Weszka (Comput. Vision Graphics & Image Process 7, 1978, 259–265) and Fu and Mui (Pattern Recognit. 13, 1981, 3–16). We attempt to evaluate the performance of some automatic global thresholding methods using the criterion functions such as uniformity and shape measures. The evaluation is based on some real world images.
---
paper_title: Dynamic Measurement of Computer Generated Image Segmentations
paper_content:
This paper introduces a general purpose performance measurement scheme for image segmentation algorithms. Performance parameters that function in real-time distinguish this method from previous approaches that depended on an a priori knowledge of the correct segmentation. A low level, context independent definition of segmentation is used to obtain a set of optimization criteria for evaluating performance. Uniformity within each region and contrast between adjacent regions serve as parameters for region analysis. Contrast across lines and connectivity between them represent measures for line analysis. Texture is depicted by the introduction of focus of attention areas as groups of regions and lines. The performance parameters are then measured separately for each area. The usefulness of this approach lies in the ability to adjust the strategy of a system according to the varying characteristics of different areas. This feedback path provides the means for more efficient and error-free processing. Results from areas with dissimilar properties show a diversity in the measurements that is utilized for dynamic strategy setting.
---
paper_title: A survey of thresholding techniques
paper_content:
Abstract In digital image processing, thresholding is a well-known technique for image segmentation. Because of its wide applicability to other areas of the digital image processing, quite a number of thresholding methods have been proposed over the years. In this paper, we present a survey of thresholding techniques and update the earlier survey work by Weszka (Comput. Vision Graphics & Image Process 7, 1978, 259–265) and Fu and Mui (Pattern Recognit. 13, 1981, 3–16). We attempt to evaluate the performance of some automatic global thresholding methods using the criterion functions such as uniformity and shape measures. The evaluation is based on some real world images.
---
paper_title: Validation of the interleaved pyramid for the segmentation of 3D vector images
paper_content:
Abstract A multiresolution pyramid with double scale space sampling (compared to the Burt & Hong scheme) for the segmentation of 3D images, of which the elements are multiple valued, is described. Evaluation is carried out by quality constrained cost analysis (QCCA).
---
paper_title: Performance Characterization in Computer Vision
paper_content:
Computer vision algorithms are composed of different sub-algorithms often applied in sequence. Determination of the performance of a total computer vision algorithm is possible if the performance of each of the sub-algorithm constituents is given. The problem, however, is that for most published algorithms, there is no performance characterization which has been established in the research literature. This is an awful state of affairs for the engineers whose job it is to design and build image analysis or machine vision systems.
---
paper_title: Performance Characterization in Computer Vision
paper_content:
Computer vision algorithms are composed of different sub-algorithms often applied in sequence. Determination of the performance of a total computer vision algorithm is possible if the performance of each of the sub-algorithm constituents is given. The problem, however, is that for most published algorithms, there is no performance characterization which has been established in the research literature. This is an awful state of affairs for the engineers whose job it is to design and build image analysis or machine vision systems.
---
paper_title: Threshold Evaluation Techniques
paper_content:
Threshold selection techniques have been used as a basic tool in image segmentation, but little work has been done on the problem of evaluating a threshold of an image. The problem of threshold evaluation is addressed, and two methods are proposed for measuring the "goodness" of a thresholded image, one based on a busyness criterion and the other based on a discrepancy or error criterion. These evaluation techniques are applied to a set of infrared images and are shown to be useful in facilitating threshold selection. In fact, both methods usually result in similar or identical thresholds which yield good segmentations of the images.
---
paper_title: Error measures for scene segmentation
paper_content:
Abstract Scene segmentation is an important problem in pattern recognition. Current subjective methods for evaluation and comparison of scene segmentation techniques are inadequate and objective quantitative measures are desirable. Two error measures, the percentage area misclassified ( p ) and a new pixel distance error (ϵ) were defined and evaluated in terms of their correlation with human observation for comparison of multiple segmentations of the same scene and multiple scenes segmented by the same technique. The results indicate that both these measures can be helpful in the evaluation and comparison of scene segmentation procedures.
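A minimal sketch of the two error measures follows, with the caveat that the exact normalization of the pixel distance error in the paper may differ: p as the percentage of pixels whose label disagrees with the reference, and a distance-weighted error in which misclassified pixels far from any reference pixel of their assigned class are penalized more.

```python
import numpy as np

def percent_misclassified(reference, segmented):
    """p: percentage of pixels whose label disagrees with the reference."""
    return 100.0 * np.count_nonzero(reference != segmented) / reference.size

def pixel_distance_error(reference, segmented):
    """A distance-weighted variant: each misclassified pixel contributes its
    squared distance to the nearest reference pixel of its assigned class,
    so errors far from the true region boundary are penalized more."""
    total = 0.0
    wrong = np.argwhere(reference != segmented)
    for r, c in wrong:
        assigned = segmented[r, c]
        targets = np.argwhere(reference == assigned)
        d2 = np.min(np.sum((targets - [r, c]) ** 2, axis=1)) if len(targets) else 0.0
        total += d2
    return 100.0 * np.sqrt(total) / reference.size

reference = np.zeros((8, 8), dtype=int)
reference[2:6, 2:6] = 1
segmented = reference.copy()
segmented[0, 0] = 1                      # an isolated false object pixel
print(percent_misclassified(reference, segmented))   # 1.5625
print(pixel_distance_error(reference, segmented))    # grows with distance from the object
```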
---
paper_title: Segmentation evaluation using ultimate measurement accuracy
paper_content:
As a wide range of segmentation techniques have been developed in the last two decades, the evaluation and comparison of segmentation techniques becomes indispensable. In this paper, after a thorough review of previous work, we present a general approach for evaluation and comparison of segmentation techniques. More specifically, under this general framework, we propose to use the ultimate measurement accuracy to assess the performance of different algorithms. In image analysis, the ultimate goals of segmentation and other processing are often to obtain measurements of the object features in the image. Therefore, the accuracy of those ultimate measurements over segmented images would be a good index revealing the performance of segmentation techniques. We feel this measure is of much greater importance than, e.g., error probabilities on pixel labeling, or even specially developed figure of merit. There exist many features describing the properties of the objects in the image. Some of them are discussed here and their applicability and performance in the context of segmentation evaluation are studied. Based on experimental results, we provide some useful guidelines for choosing specific measurements for different evaluation situations and for selecting adequate techniques in particular segmentation applications.
---
paper_title: Error measures for scene segmentation
paper_content:
Abstract Scene segmentation is an important problem in pattern recognition. Current subjective methods for evaluation and comparison of scene segmentation techniques are inadequate and objective quantitative measures are desirable. Two error measures, the percentage area misclassified ( p ) and a new pixel distance error (ϵ) were defined and evaluated in terms of their correlation with human observation for comparison of multiple segmentations of the same scene and multiple scenes segmented by the same technique. The results indicate that both these measures can be helpful in the evaluation and comparison of scene segmentation procedures.
---
paper_title: Computer and Robot Vision
paper_content:
From the Publisher: ::: This two-volume set is an authoritative, comprehensive, modern work on computer vision that covers all of the different areas of vision with a balanced and unified approach. The discussion in "Volume I" focuses on image in, and image out or feature set out. "Volume II" covers the higher level techniques of illumination, perspective projection, analytical photogrammetry, motion, image matching, consistent labeling, model matching, and knowledge-based vision systems.
---
paper_title: Evaluation of edge detection algorithms
paper_content:
In the past two decades several algorithms have been developed to extract the contour of homogeneous regions within digital images. A lot of the attention is focused to edge detection, being a crucial part in most of the algorithms. The classical edge operators emphasize the high frequency components in the image and therefore act poorly in cases of moderate low SNR and/or low spatial resolution of the imaging device. The awareness of this has lead to new approaches in which balanced trade-offs are sought between noise suppression, image deblurring and the ability to resolve interfering edges, altogether resulting in operators acting like bandpass filters. The ultimate goal of this work is to arrive at an evaluation scheme with criteria reflecting the requirements issuing the major application of edge detectors: contour extraction.
---
paper_title: Image segmentation and image models
paper_content:
This paper discusses image segmentation techniques from the standpoint of the assumptions that an image should satisfy in order for a particular technique to be applicable to it. These assumptions, which are often not stated explicitly, can be regarded as (perhaps informal) "models" for classes of images. The paper emphasizes two basic classes of models: statistical models that describe the pixel population in an image or region, and spatial models that describe the decomposition of an image into regions.
---
paper_title: Segmentation of microscopic cell scenes.
paper_content:
Different methods for the automated segmentation of microscopic cell scenes are presented with examples. The techniques discussed include edge detection by thresholding, "blob" detection by split-and-merge algorithm, global thresholding using gray-level histograms, hierarchic thresholding using color information, global thresholding using two-dimensional histograms and segmentation by "blob" labeling. Methods are more robust against insignificant changes in the scene and perform more reliably as more a priori knowledge about the scene is incorporated in the segmentation algorithm. The inclusion of both photometric and geometric a priori knowledge can result in a high level of correct segmentations, the cost of which is increased computation time.
---
paper_title: Three-dimensional image segmentation using a split, merge and group approach
paper_content:
A 3-D segmentation algorithm is presented, based on a split, merge and group approach. It uses a mixed (oct/quad)tree implementation. A number of homogeneity criteria is discussed and evaluated. An example shows the segmentation of mythramycin stained cell nuclei.
---
paper_title: Image Structure Representation and Processing: A Discussion of Some Segmentation Methods in Cytology
paper_content:
Image processing methods (segmentation) are presented in connection with a modeling of image structure. An image is represented as a set of primitives, characterized by their type, abstraction level, and a list of attributes. Entities (regions for example) are then described as a subset of primitives obeying particular rules. Image segmentation methods are discussed, according to the associated image modeling level. Their potential efficacity is compared, when applied to cytologic image analysis.
---
paper_title: Quantitative design and evaluation of enhancement/thresholding edge detectors
paper_content:
Quantitative design and performance evaluation techniques are developed for the enhancement/thresholding class of image edge detectors. The design techniques are based on statistical detection theory and deterministic pattern-recognition classification procedures. The performance evaluation methods developed include: a)deterministic measurement of the edge gradient amplitude; b)comparison of the probabilities of correct and false edge detection; and c) figure of merit computation. The design techniques developed are used to optimally design a variety of small and large mask edge detectors. Theoretical and experimental comparisons of edge detectors are presented.
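The "figure of merit computation" mentioned above is commonly associated with the Pratt figure of merit; the sketch below is a hedged reconstruction of that measure (the scaling constant alpha = 1/9 is a conventional assumption, not taken from the paper).
```python
# Hedged sketch of a Pratt-style edge-detection figure of merit.
import numpy as np
from scipy.ndimage import distance_transform_edt

def pratt_figure_of_merit(ideal_edges, detected_edges, alpha=1.0 / 9.0):
    ideal = np.asarray(ideal_edges, dtype=bool)
    detected = np.asarray(detected_edges, dtype=bool)
    n_ideal, n_detected = int(ideal.sum()), int(detected.sum())
    if n_detected == 0:
        return 0.0
    # Distance from every pixel to the nearest ideal edge pixel.
    dist_to_ideal = distance_transform_edt(~ideal)
    d = dist_to_ideal[detected]
    return float(np.sum(1.0 / (1.0 + alpha * d ** 2)) / max(n_ideal, n_detected))
```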
---
paper_title: Three-dimensional image segmentation using a split, merge and group approach
paper_content:
A 3-D segmentation algorithm is presented, based on a split, merge and group approach. It uses a mixed (oct/quad)tree implementation. A number of homogeneity criteria is discussed and evaluated. An example shows the segmentation of mythramycin stained cell nuclei.
---
paper_title: Scene-segmentation algorithm development using error measures.
paper_content:
Development of scene-segmentation algorithms has generally been an ad hoc process. This paper presents a systematic technique for developing these algorithms using error-measure minimization. If scene segmentation is regarded as a problem of pixel classification whereby each pixel of a scene is assigned to a particular object class, development of a scene-segmentation algorithm becomes primarily a process of feature selection. In this study, four methods of feature selection were used to develop segmentation techniques for cervical cytology images: (1) random selection, (2) manual selection (best features in the subjective judgment of the investigator), (3) eigenvector selection (ranking features according to the largest contribution to each eigenvector of the feature covariance matrix) and (4) selection using the scene-segmentation error measure A2. Four features were selected by each method from a universe of 35 features consisting of gray level, color, texture and special pixel neighborhood features in 40 cervical cytology images . Evaluation of the results was done with a composite of the scene-segmentation error measure A2, which depends on the percentage of scenes with measurable error, the agreement of pixel class proportions, the agreement of number of objects for each pixel class and the distance of each misclassified pixel to the nearest pixel of the misclassified class. Results indicate that random and eigenvector feature selection were the poorest methods, manual feature selection somewhat better and error-measure feature selection best. The error-measure feature selection method provides a useful, systematic method of developing and evaluating scene-segmentation algorithms.
---
paper_title: Segmentation evaluation using ultimate measurement accuracy
paper_content:
As a wide range of segmentation techniques have been developed in the last two decades, the evaluation and comparison of segmentation techniques becomes indispensable. In this paper, after a thorough review of previous work, we present a general approach for evaluation and comparison of segmentation techniques. More specifically, under this general framework, we propose to use the ultimate measurement accuracy to assess the performance of different algorithms. In image analysis, the ultimate goals of segmentation and other processing are often to obtain measurements of the object features in the image. Therefore, the accuracy of those ultimate measurements over segmented images would be a good index revealing the performance of segmentation techniques. We feel this measure is of much greater importance than, e.g., error probabilities on pixel labeling, or even specially developed figure of merit. There exist many features describing the properties of the objects in the image. Some of them are discussed here and their applicability and performance in the context of segmentation evaluation are studied. Based on experimental results, we provide some useful guidelines for choosing specific measurements for different evaluation situations and for selecting adequate techniques in particular segmentation applications.
---
paper_title: Sampling density and quantitative microscopy.
paper_content:
The sampling densities required for the quantitative analysis of digitized microscope images is discussed. It is shown that the Nyquist sampling theorem is not the proper reference point for determining the sampling density when the goal is measurement, although it may be a proper reference point when the goal is image filtering and reconstruction. The problems associated with signal truncation--the use of a finite amount of data--and the finite amount of time available for computation make it impossible to reconstruct an arbitrary image, even if it is bandlimited. Two examples taken from straightforward measurement problems exhibit the fundamental problems associated with the measurement of analog quantities from digital data and the role played by the sampling density.
---
paper_title: Objective and quantitative segmentation evaluation and comparison
paper_content:
Abstract A general framework for segmentation evaluation is introduced after a brief review of previous work. The accuracy of object feature measurement is proposed as a criterion for judging the quality of segmentation results and assessing the performance of applied algorithms. This goal-oriented approach has been shown useful for an objective and quantitative study of segmentation techniques.
---
paper_title: Evaluating quality of compressed medical images: SNR, subjective rating, and diagnostic accuracy
paper_content:
Compressing a digital image can facilitate its transmission, storage, and processing. As radiology departments become increasingly digital, the quantities of their imaging data are forcing consideration of compression in picture archiving and communication systems. Significant compression is achievable only by lossy algorithms, which do not permit the exact recovery of the original images.
---
paper_title: Objective and quantitative segmentation evaluation and comparison
paper_content:
Abstract A general framework for segmentation evaluation is introduced after a brief review of previous work. The accuracy of object feature measurement is proposed as a criterion for judging the quality of segmentation results and assessing the performance of applied algorithms. This goal-oriented approach has been shown useful for an objective and quantitative study of segmentation techniques.
---
paper_title: Image thresholding: Some new techniques
paper_content:
Abstract Some of the existing threshold selection techniques have been critically reviewed. Two algorithms based on a new conditional entropy measure of a partitioned image have been formulated. The approximate minimum error thresholding algorithm of Kittler and Illingworth has been implemented considering the Poisson distribution for the gray level instead of the commonly used normal distribution. Justification in support of the Poisson distribution has also been given. This method is found to be much better both from the point of view of convergence and segmented output. The proposed methods have been applied on a number of images and are found to produce good results. Objective evaluation of the thresholds has been done using divergence, region uniformity, correlation between original image and the segmented image, and second order entropy.
---
paper_title: Three-dimensional image segmentation using a split, merge and group approach
paper_content:
A 3-D segmentation algorithm is presented, based on a split, merge and group approach. It uses a mixed (oct/quad)tree implementation. A number of homogeneity criteria is discussed and evaluated. An example shows the segmentation of mythramycin stained cell nuclei.
---
paper_title: Threshold Evaluation Techniques
paper_content:
Threshold selection techniques have been used as a basic tool in image segmentation, but little work has been done on the problem of evaluating a threshold of an image. The problem of threshold evaluation is addressed, and two methods are proposed for measuring the "goodness" of a thresholded image, one based on a busyness criterion and the other based on a discrepancy or error criterion. These evaluation techniques are applied to a set of infrared images and are shown to be useful in facilitating threshold selection. In fact, both methods usually result in similar or identical thresholds which yield good segmentations of the images.
---
paper_title: Quantitative design and evaluation of enhancement/thresholding edge detectors
paper_content:
Quantitative design and performance evaluation techniques are developed for the enhancement/thresholding class of image edge detectors. The design techniques are based on statistical detection theory and deterministic pattern-recognition classification procedures. The performance evaluation methods developed include: a)deterministic measurement of the edge gradient amplitude; b)comparison of the probabilities of correct and false edge detection; and c) figure of merit computation. The design techniques developed are used to optimally design a variety of small and large mask edge detectors. Theoretical and experimental comparisons of edge detectors are presented.
---
paper_title: Segmentation of microscopic cell scenes.
paper_content:
Different methods for the automated segmentation of microscopic cell scenes are presented with examples. The techniques discussed include edge detection by thresholding, "blob" detection by split-and-merge algorithm, global thresholding using gray-level histograms, hierarchic thresholding using color information, global thresholding using two-dimensional histograms and segmentation by "blob" labeling. Methods are more robust against insignificant changes in the scene and perform more reliably as more a priori knowledge about the scene is incorporated in the segmentation algorithm. The inclusion of both photometric and geometric a priori knowledge can result in a high level of correct segmentations, the cost of which is increased computation time.
---
paper_title: Segmentation evaluation and comparison: a study of various algorithms
paper_content:
ABSTRACT An objective and quantitative study of several representative segmentation algorithms is presented. In this study, the measurement accuracy of object features from the segmented images is taken to judge the quality of segmentation results and to assess the performance of applied algorithms. Moreover, some synthetic images are specially generated and used in test experiments. This evaluation and comparison study reveals the behaviour of those algorithms within various situations, provides their performance ranking under real-like conditions, gives some limits and/or constraints for employing those algorithms in different applications as well as indicates several potential directions for improving their performance and initiating new developments. Since the investigated algorithms are selected from different technique groups, this study also shows that the presented approach would be valid and effective for treating a wide range of segmentation algorithms.
---
paper_title: Low Level Image Segmentation: An Expert System
paper_content:
A major problem in robotic vision is the segmentation of images of natural scenes in order to understand their content. This paper presents a new solution to the image segmentation problem that is based on the design of a rule-based expert system. General knowledge about low level properties of processes employ the rules to segment the image into uniform regions and connected lines. In addition to the knowledge rules, a set of control rules are also employed. These include metarules that embody inferences about the order in which the knowledge rules are matched. They also incorporate focus of attention rules that determine the path of processing within the image. Furthermore, an additional set of higher level rules dynamically alters the processing strategy. This paper discusses the structure and content of the knowledge and control rules for image segmentation.
---
paper_title: A survey of thresholding techniques
paper_content:
Abstract In digital image processing, thresholding is a well-known technique for image segmentation. Because of its wide applicability to other areas of the digital image processing, quite a number of thresholding methods have been proposed over the years. In this paper, we present a survey of thresholding techniques and update the earlier survey work by Weszka (Comput. Vision Graphics & Image Process 7, 1978 , 259–265) and Fu and Mu (Pattern Recognit. 13, 1981 , 3–16). We attempt to evaluate the performance of some automatic global thresholding methods using the criterion functions such as uniformity and shape measures. The evaluation is based on some real world images.
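As a hedged example of the kind of uniformity-based goodness criterion such surveys use to score a threshold, the following sketch computes one plausible region-uniformity measure for a binary threshold; the normalisation and weighting are assumptions, since formulations differ between papers.
```python
# Hedged sketch of a region-uniformity "goodness" criterion for a threshold.
import numpy as np

def uniformity(image, threshold):
    img = np.asarray(image, dtype=float)
    fg, bg = img[img > threshold], img[img <= threshold]
    within_class_var = 0.0
    for region in (fg, bg):
        if region.size:
            within_class_var += region.size * region.var()
    # Normalise so the result lies in [0, 1]; 1.0 means both classes are uniform.
    norm = img.size * ((img.max() - img.min()) ** 2) / 4.0
    if norm == 0:
        return 1.0
    return 1.0 - within_class_var / norm
```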
---
paper_title: Objective and quantitative segmentation evaluation and comparison
paper_content:
Abstract A general framework for segmentation evaluation is introduced after a brief review of previous work. The accuracy of object feature measurement is proposed as a criterion for judging the quality of segmentation results and assessing the performance of applied algorithms. This goal-oriented approach has been shown useful for an objective and quantitative study of segmentation techniques.
---
paper_title: Validation of the interleaved pyramid for the segmentation of 3D vector images
paper_content:
Abstract A multiresolution pyramid with double scale space sampling (compared to the Burt & Hong scheme) for the segmentation of 3D images, of which the elements are multiple valued, is described. Evaluation is carried out by quality constrained cost analysis (QCCA).
---
paper_title: Sampling density and quantitative microscopy.
paper_content:
The sampling densities required for the quantitative analysis of digitized microscope images is discussed. It is shown that the Nyquist sampling theorem is not the proper reference point for determining the sampling density when the goal is measurement, although it may be a proper reference point when the goal is image filtering and reconstruction. The problems associated with signal truncation--the use of a finite amount of data--and the finite amount of time available for computation make it impossible to reconstruct an arbitrary image, even if it is bandlimited. Two examples taken from straightforward measurement problems exhibit the fundamental problems associated with the measurement of analog quantities from digital data and the role played by the sampling density.
---
paper_title: Performance Characterization in Computer Vision
paper_content:
Computer vision algorithms are composed of different sub-algorithms often applied in sequence. Determination of the performance of a total computer vision algorithm is possible if the performance of each of the sub-algorithm constituents is given. The problem, however, is that for most published algorithms, there is no performance characterization which has been established in the research literature. This is an awful state of affairs for the engineers whose job it is to design and build image analysis or machine vision systems.
---
paper_title: Dynamic Measurement of Computer Generated Image Segmentations
paper_content:
This paper introduces a general purpose performance measurement scheme for image segmentation algorithms. Performance parameters that function in real-time distinguish this method from previous approaches that depended on an a priori knowledge of the correct segmentation. A low level, context independent definition of segmentation is used to obtain a set of optimization criteria for evaluating performance. Uniformity within each region and contrast between adjacent regions serve as parameters for region analysis. Contrast across lines and connectivity between them represent measures for line analysis. Texture is depicted by the introduction of focus of attention areas as groups of regions and lines. The performance parameters are then measured separately for each area. The usefulness of this approach lies in the ability to adjust the strategy of a system according to the varying characteristics of different areas. This feedback path provides the means for more efficient and error-free processing. Results from areas with dissimilar properties show a diversity in the measurements that is utilized for dynamic strategy setting.
---
paper_title: Objective and quantitative segmentation evaluation and comparison
paper_content:
Abstract A general framework for segmentation evaluation is introduced after a brief review of previous work. The accuracy of object feature measurement is proposed as a criterion for judging the quality of segmentation results and assessing the performance of applied algorithms. This goal-oriented approach has been shown useful for an objective and quantitative study of segmentation techniques.
---
paper_title: Validation of the interleaved pyramid for the segmentation of 3D vector images
paper_content:
Abstract A multiresolution pyramid with double scale space sampling (compared to the Burt & Hong scheme) for the segmentation of 3D images, of which the elements are multiple valued, is described. Evaluation is carried out by quality constrained cost analysis (QCCA).
---
paper_title: Performance Characterization in Computer Vision
paper_content:
Computer vision algorithms are composed of different sub-algorithms often applied in sequence. Determination of the performance of a total computer vision algorithm is possible if the performance of each of the sub-algorithm constituents is given. The problem, however, is that for most published algorithms, there is no performance characterization which has been established in the research literature. This is an awful state of affairs for the engineers whose job it is to design and build image analysis or machine vision systems.
---
paper_title: Comparison of thresholding techniques using synthetic images and ultimate measurement accuracy
paper_content:
Presents an objective and quantitative study of several thresholding techniques. For this study, three sets of test images to simulate various real situations have been designed. The accuracy of object feature measurement is used to assess the performance. The authors not only provide an insight into these techniques but also show the usefulness of synthetic images and ultimate measurement accuracy in segmentation evaluation and comparison.
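A minimal sketch of the evaluation style described above, assuming a synthetic disc image and object area as the measured feature: segment a noisy rendering, measure the area on the result, and report the relative error against the known truth. The test images and features in the cited study are richer; this only illustrates the idea.
```python
# Hedged sketch of an "ultimate measurement accuracy" style check on a
# synthetic image with a known ground-truth object.
import numpy as np

def area_measurement_error(true_mask, segmented_mask):
    true_area = np.count_nonzero(true_mask)
    measured_area = np.count_nonzero(segmented_mask)
    return abs(measured_area - true_area) / true_area

# Example: a synthetic disc, thresholded from a noisy rendering of itself.
yy, xx = np.mgrid[:128, :128]
true_mask = (yy - 64) ** 2 + (xx - 64) ** 2 <= 30 ** 2
noisy = true_mask * 1.0 + np.random.normal(0.0, 0.3, true_mask.shape)
segmented = noisy > 0.5
print("relative area error:", area_measurement_error(true_mask, segmented))
```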
---
paper_title: Segmentation evaluation using ultimate measurement accuracy
paper_content:
As a wide range of segmentation techniques have been developed in the last two decades, the evaluation and comparison of segmentation techniques becomes indispensable. In this paper, after a thorough review of previous work, we present a general approach for evaluation and comparison of segmentation techniques. More specifically, under this general framework, we propose to use the ultimate measurement accuracy to assess the performance of different algorithms. In image analysis, the ultimate goals of segmentation and other processing are often to obtain measurements of the object features in the image. Therefore, the accuracy of those ultimate measurements over segmented images would be a good index revealing the performance of segmentation techniques. We feel this measure is of much greater importance than, e.g., error probabilities on pixel labeling, or even specially developed figure of merit. There exist many features describing the properties of the objects in the image. Some of them are discussed here and their applicability and performance in the context of segmentation evaluation are studied. Based on experimental results, we provide some useful guidelines for choosing specific measurements for different evaluation situations and for selecting adequate techniques in particular segmentation applications.
---
paper_title: Entropic thresholding using a block source model
paper_content:
Since the pioneer work of Frieden (J. Opt. Soc. Am. 62, 1972, 511-518; Comput. Graphics Image Process. 12, 1980, 40-59), the entropy concept is increasingly used in image analysis, especially in image reconstruction, image segmentation, and image compression. In the present paper a new entropic thresholding method based on a block source model is presented. This new approach is based on a distribution-free local analysis of the image and does not use higher order entropy. Our method is compared to the existing entropic thresholding methods.
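The block-source model itself is not reproduced here, but as a hedged example of the entropic-thresholding family this work builds on, the sketch below selects a threshold by maximising the sum of foreground and background entropies (a Kapur-style criterion).
```python
# Hedged sketch of a maximum-entropy (Kapur-style) threshold selector.
import numpy as np

def max_entropy_threshold(image, levels=256):
    hist, _ = np.histogram(np.asarray(image).ravel(), bins=levels)
    p = hist.astype(float) / hist.sum()
    best_t, best_h = 0, -np.inf
    for t in range(1, levels - 1):
        p_a, p_b = p[:t], p[t:]
        w_a, w_b = p_a.sum(), p_b.sum()
        if w_a <= 0 or w_b <= 0:
            continue
        q_a = p_a[p_a > 0] / w_a
        q_b = p_b[p_b > 0] / w_b
        # Total entropy of the two classes induced by threshold bin t.
        h = -(q_a * np.log(q_a)).sum() - (q_b * np.log(q_b)).sum()
        if h > best_h:
            best_t, best_h = t, h
    return best_t  # index of the selected histogram bin
```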
---
paper_title: Objective and quantitative segmentation evaluation and comparison
paper_content:
Abstract A general framework for segmentation evaluation is introduced after a brief review of previous work. The accuracy of object feature measurement is proposed as a criterion for judging the quality of segmentation results and assessing the performance of applied algorithms. This goal-oriented approach has been shown useful for an objective and quantitative study of segmentation techniques.
---
paper_title: Validation of the interleaved pyramid for the segmentation of 3D vector images
paper_content:
Abstract A multiresolution pyramid with double scale space sampling (compared to the Burt & Hong scheme) for the segmentation of 3D images, of which the elements are multiple valued, is described. Evaluation is carried out by quality constrained cost analysis (QCCA).
---
paper_title: Task-directed evaluation of image segmentation methods
paper_content:
In the image processing literature many methods to segment 2D and 3D images have been presented. However, relatively little effort has been spent on the validation of the results of these methods. The goal of the paper is to explore a validation methodology that is based on developing a task-directed quality norm that can be used as a constraint in cost analysis. In this methodology a segmentation method is evaluated by the cost reduction it provides relative to the cost of a fully interactive (manual) segmentation. This cost is constrained by a quality threshold, so that less-than-perfect segmentations are allowed. In this way segmentation methods that are designed for the same task, but are different in nature, can be compared.
---
paper_title: Image thresholding: Some new techniques
paper_content:
Abstract Some of the existing threshold selection techniques have been critically reviewed. Two algorithms based on a new conditional entropy measure of a partitioned image have been formulated. The approximate minimum error thresholding algorithm of Kittler and Illingworth has been implemented considering the Poisson distribution for the gray level instead of the commonly used normal distribution. Justification in support of the Poisson distribution has also been given. This method is found to be much better both from the point of view of convergence and segmented output. The proposed methods have been applied on a number of images and are found to produce good results. Objective evaluation of the thresholds has been done using divergence, region uniformity, correlation between original image and the segmented image, and second order entropy.
---
paper_title: Computational Techniques in the Visual Segmentation of Static Scenes.
paper_content:
A wide range of segmentation techniques continues to evolve in the literature on scene analysis. Many of these approaches have been constrained to limited applications or goals. This survey analyzes the complexities encountered in applying these techniques to color images of natural scenes involving complex textured objects. It also explores new ways of using the techniques to overcome some of the problems which are described. An outline of considerations in the development of a general image segmentation system which can provide input to a semantic interpretation process is distributed throughout the paper. In particular, the problems of feature selection and extraction in images with textural variations are discussed. The approaches to segmentation are divided into two broad categories, boundary formation and region formation. The tools for extraction of boundaries involve spatial differentiation, nonmaxima suppression, relaxation processes, and grouping of local edges into segments. Approaches to region formation include region growing under local spatial guidance, histograms for analysis of global feature activity, and finally an integration of the strengths of each by a spatial analysis of feature activity. A brief discussion of attempts by others to integrate the segmentation and interpretation phrases is also provided. The discussion is supported by a variety of experimental results.
---
paper_title: A survey of threshold selection techniques
paper_content:
Abstract The use of thresholding as a tool in image segmentation has been extensively studied, and a variety of techniques have been proposed for automatic threshold selection. This paper presents a review of these techniques, including global, local, and dynamic methods.
---
paper_title: Entropic thresholding using a block source model
paper_content:
Since the pioneer work of Frieden (J. Opt. Soc. Am. 62, 1972, 511-518; Comput. Graphics Image Process. 12, 1980, 40-59), the entropy concept is increasingly used in image analysis, especially in image reconstruction, image segmentation, and image compression. In the present paper a new entropic thresholding method based on a block source model is presented. This new approach is based on a distribution-free local analysis of the image and does not use higher order entropy. Our method is compared to the existing entropic thresholding methods.
---
paper_title: Dynamic Measurement of Computer Generated Image Segmentations
paper_content:
This paper introduces a general purpose performance measurement scheme for image segmentation algorithms. Performance parameters that function in real-time distinguish this method from previous approaches that depended on an a priori knowledge of the correct segmentation. A low level, context independent definition of segmentation is used to obtain a set of optimization criteria for evaluating performance. Uniformity within each region and contrast between adjacent regions serve as parameters for region analysis. Contrast across lines and connectivity between them represent measures for line analysis. Texture is depicted by the introduction of focus of attention areas as groups of regions and lines. The performance parameters are then measured separately for each area. The usefulness of this approach lies in the ability to adjust the strategy of a system according to the varying characteristics of different areas. This feedback path provides the means for more efficient and error-free processing. Results from areas with dissimilar properties show a diversity in the measurements that is utilized for dynamic strategy setting.
---
paper_title: Sampling density and quantitative microscopy.
paper_content:
The sampling densities required for the quantitative analysis of digitized microscope images is discussed. It is shown that the Nyquist sampling theorem is not the proper reference point for determining the sampling density when the goal is measurement, although it may be a proper reference point when the goal is image filtering and reconstruction. The problems associated with signal truncation--the use of a finite amount of data--and the finite amount of time available for computation make it impossible to reconstruct an arbitrary image, even if it is bandlimited. Two examples taken from straightforward measurement problems exhibit the fundamental problems associated with the measurement of analog quantities from digital data and the role played by the sampling density.
---
paper_title: Comments on gray-level thresholding of images using a correlation criterion
paper_content:
In our comments we deal with two automatic thresholding algorithms proposed by Otsu and Brink, respectively. These algorithms, in spite of their different approach, lead to one and the same function to be maximized.
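For reference, a hedged sketch of Otsu-style threshold selection by maximising the between-class variance of the grey-level histogram, since the comment above concerns the equivalence of Otsu's and Brink's criteria; the return value is the index of the selected histogram bin.
```python
# Hedged sketch of Otsu's method: maximise between-class variance.
import numpy as np

def otsu_threshold(image, levels=256):
    hist, _ = np.histogram(np.asarray(image).ravel(), bins=levels)
    p = hist.astype(float) / hist.sum()
    omega = np.cumsum(p)                    # class-0 probability up to each bin
    mu = np.cumsum(p * np.arange(levels))   # cumulative mean
    mu_total = mu[-1]
    # Between-class variance for every candidate threshold (guard against /0).
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_total * omega - mu) ** 2 / (omega * (1.0 - omega))
    sigma_b = np.nan_to_num(sigma_b)
    return int(np.argmax(sigma_b))
```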
---
paper_title: Image thresholding: Some new techniques
paper_content:
Abstract Some of the existing threshold selection techniques have been critically reviewed. Two algorithms based on a new conditional entropy measure of a partitioned image have been formulated. The approximate minimum error thresholding algorithm of Kittler and Illingworth has been implemented considering the Poisson distribution for the gray level instead of the commonly used normal distribution. Justification in support of the Poisson distribution has also been given. This method is found to be much better both from the point of view of convergence and segmented output. The proposed methods have been applied on a number of images and are found to produce good results. Objective evaluation of the thresholds has been done using divergence, region uniformity, correlation between original image and the segmented image, and second order entropy.
---
paper_title: Image segmentation and image models
paper_content:
This paper discusses image segmentation techniques from the standpoint of the assumptions that an image should satisfy in order for a particular technique to be applicable to it. These assumptions, which are often not stated explicitly, can be regarded as (perhaps informal) "models" for classes of images. The paper emphasizes two basic classes of models: statistical models that describe the pixel population in an image or region, and spatial models that describe the decomposition of an image into regions.
---
paper_title: Dynamic Measurement of Computer Generated Image Segmentations
paper_content:
This paper introduces a general purpose performance measurement scheme for image segmentation algorithms. Performance parameters that function in real-time distinguish this method from previous approaches that depended on an a priori knowledge of the correct segmentation. A low level, context independent definition of segmentation is used to obtain a set of optimization criteria for evaluating performance. Uniformity within each region and contrast between adjacent regions serve as parameters for region analysis. Contrast across lines and connectivity between them represent measures for line analysis. Texture is depicted by the introduction of focus of attention areas as groups of regions and lines. The performance parameters are then measured separately for each area. The usefulness of this approach lies in the ability to adjust the strategy of a system according to the varying characteristics of different areas. This feedback path provides the means for more efficient and error-free processing. Results from areas with dissimilar properties show a diversity in the measurements that is utilized for dynamic strategy setting.
---
paper_title: Three-dimensional image segmentation using a split, merge and group approach
paper_content:
A 3-D segmentation algorithm is presented, based on a split, merge and group approach. It uses a mixed (oct/quad)tree implementation. A number of homogeneity criteria is discussed and evaluated. An example shows the segmentation of mythramycin stained cell nuclei.
---
paper_title: Image segmentation as an estimation problem
paper_content:
Abstract Picture segmentation is expressed as a sequence of decision problems within the framework of a split-and-merge algorithm. First, regions of an arbitrary initial segmentation are tested for uniformity and, if not uniform, they are subdivided into smaller regions, or set aside if their size is below a given threshold. Next, regions classified as uniform are subject to a cluster analysis to identify similar types, which are merged. At this point there exist reliable estimates of the parameters of the random field of each type of region and they are used to classify some of the remaining small regions. Any regions remaining after this step are considered part of a boundary ambiguity zone. The location of the boundary is then estimated by interpolation between the existing uniform regions. Experimental results on artificial pictures are also included.
---
paper_title: Objective and quantitative segmentation evaluation and comparison
paper_content:
Abstract A general framework for segmentation evaluation is introduced after a brief review of previous work. The accuracy of object feature measurement is proposed as a criterion for judging the quality of segmentation results and assessing the performance of applied algorithms. This goal-oriented approach has been shown useful for an objective and quantitative study of segmentation techniques.
---
paper_title: A survey of thresholding techniques
paper_content:
Abstract In digital image processing, thresholding is a well-known technique for image segmentation. Because of its wide applicability to other areas of the digital image processing, quite a number of thresholding methods have been proposed over the years. In this paper, we present a survey of thresholding techniques and update the earlier survey work by Weszka (Comput. Vision Graphics & Image Process 7, 1978 , 259–265) and Fu and Mu (Pattern Recognit. 13, 1981 , 3–16). We attempt to evaluate the performance of some automatic global thresholding methods using the criterion functions such as uniformity and shape measures. The evaluation is based on some real world images.
---
paper_title: Image thresholding: Some new techniques
paper_content:
Abstract Some of the existing threshold selection techniques have been critically reviewed. Two algorithms based on a new conditional entropy measure of a partitioned image have been formulated. The approximate minimum error thresholding algorithm of Kittler and Illingworth has been implemented considering the Poisson distribution for the gray level instead of the commonly used normal distribution. Justification in support of the Poisson distribution has also been given. This method is found to be much better both from the point of view of convergence and segmented output. The proposed methods have been applied on a number of images and are found to produce good results. Objective evaluation of the thresholds has been done using divergence, region uniformity, correlation between original image and the segmented image, and second order entropy.
---
paper_title: Error measures for objective assessment of scene segmentation algorithms.
paper_content:
Scene segmentation is an important element in pattern recognition problems. Previous efforts to evaluate and compare scene segmentation procedures have been largely subjective. Quantitative error measures would facilitate objective comparison of scene segmentation algorithms. A theoretical discussion leading to a new generalized quantitative error measure, G2, based on comparison of both pixel class proportions and spatial distributions of "true" and test segmentations, is presented. This error measure was tested on 14 manual segmentations and 40 gynecologic cytology specimens segmented with five different scene segmentation techniques. Results indicate that G2 seems to have the desirable properties of correlation with human observation, categorization of error allowing for weighting, invariance with picture size and ease of computation necessary for a useful scene segmentation error measure.
---
paper_title: Segmentation evaluation and comparison: a study of various algorithms
paper_content:
ABSTRACT An objective and quantitative study of several representative segmentation algorithms is presented. In this study, the measurement accuracy of object features from the segmented images is taken to judge the quality of segmentation results and to assess the performance of applied algorithms. Moreover, some synthetic images are specially generated and used in test experiments. This evaluation and comparison study reveals the behaviour of those algorithms within various situations, provides their performance ranking under real-like conditions, gives some limits and/or constraints for employing those algorithms in different applications as well as indicates several potential directions for improving their performance and initiating new developments. Since the investigated algorithms are selected from different technique groups, this study also shows that the presented approach would be valid and effective for treating a wide range of segmentation algorithms.
---
paper_title: Low Level Image Segmentation: An Expert System
paper_content:
A major problem in robotic vision is the segmentation of images of natural scenes in order to understand their content. This paper presents a new solution to the image segmentation problem that is based on the design of a rule-based expert system. General knowledge about low level properties of processes employ the rules to segment the image into uniform regions and connected lines. In addition to the knowledge rules, a set of control rules are also employed. These include metarules that embody inferences about the order in which the knowledge rules are matched. They also incorporate focus of attention rules that determine the path of processing within the image. Furthermore, an additional set of higher level rules dynamically alters the processing strategy. This paper discusses the structure and content of the knowledge and control rules for image segmentation.
---
paper_title: Error measures for scene segmentation
paper_content:
Abstract Scene segmentation is an important problem in pattern recognition. Current subjective methods for evaluation and comparison of scene segmentation techniques are inadequate and objective quantitative measures are desirable. Two error measures, the percentage area misclassified ( p ) and a new pixel distance error (ϵ) were defined and evaluated in terms of their correlation with human observation for comparison of multiple segmentations of the same scene and multiple scenes segmented by the same technique. The results indicate that both these measures can be helpful in the evaluation and comparison of scene segmentation procedures.
---
paper_title: Three-dimensional image segmentation using a split, merge and group approach
paper_content:
A 3-D segmentation algorithm is presented, based on a split, merge and group approach. It uses a mixed (oct/quad)tree implementation. A number of homogeneity criteria is discussed and evaluated. An example shows the segmentation of mythramycin stained cell nuclei.
---
paper_title: Scene-segmentation algorithm development using error measures.
paper_content:
Development of scene-segmentation algorithms has generally been an ad hoc process. This paper presents a systematic technique for developing these algorithms using error-measure minimization. If scene segmentation is regarded as a problem of pixel classification whereby each pixel of a scene is assigned to a particular object class, development of a scene-segmentation algorithm becomes primarily a process of feature selection. In this study, four methods of feature selection were used to develop segmentation techniques for cervical cytology images: (1) random selection, (2) manual selection (best features in the subjective judgment of the investigator), (3) eigenvector selection (ranking features according to the largest contribution to each eigenvector of the feature covariance matrix) and (4) selection using the scene-segmentation error measure A2. Four features were selected by each method from a universe of 35 features consisting of gray level, color, texture and special pixel neighborhood features in 40 cervical cytology images . Evaluation of the results was done with a composite of the scene-segmentation error measure A2, which depends on the percentage of scenes with measurable error, the agreement of pixel class proportions, the agreement of number of objects for each pixel class and the distance of each misclassified pixel to the nearest pixel of the misclassified class. Results indicate that random and eigenvector feature selection were the poorest methods, manual feature selection somewhat better and error-measure feature selection best. The error-measure feature selection method provides a useful, systematic method of developing and evaluating scene-segmentation algorithms.
---
|
Title: A Survey on Evaluation Methods for Image Segmentation
Section 1: INTRODUCTION
Description 1: Write about the significance of image segmentation, the need for evaluation methods, and provide an overview of the paper's purpose and structure.
Section 2: ANALYTICAL METHODS
Description 2: Discuss the analytical methods used for evaluating segmentation algorithms, highlighting their principles, advantages, and limitations.
Section 3: EMPIRICAL GOODNESS METHODS
Description 3: Describe the empirical goodness methods that evaluate segmentation algorithms based on the quality of segmented images using various goodness measures.
Section 4: EMPIRICAL DISCREPANCY METHODS
Description 4: Explain empirical discrepancy methods that use reference images to measure the disparity between segmented results and the ideal segmentation to evaluate the performance of algorithms.
Section 5: COMPARISON OF METHOD GROUPS
Description 5: Compare the analytical methods, empirical goodness methods, and empirical discrepancy methods, discussing their generality, quantitative and objective nature, complexity, and application considerations.
Section 6: COMPARISON OF SOME EMPIRICAL METHODS
Description 6: Present an experimental comparison of several commonly used empirical methods, detailing their performance and rankings.
Section 7: SPECIAL EVALUATION METHODS
Description 7: Discuss special evaluation methods that do not fit neatly into the previous categories, describing how they work and their unique aspects.
Section 8: COMMON PROBLEMS FOR MOST EXISTING METHODS
Description 8: Highlight common issues associated with existing evaluation methods, such as bias in criteria and the influence of subjective factors.
Section 9: CONCLUDING REMARKS
Description 9: Summarize the findings of the survey, emphasizing the need for continued research in segmentation evaluation and proposing future directions.
Section 10: ACKNOWLEDGEMENT
Description 10: Acknowledge contributions and suggestions from reviewers and collaborators.
|
A Survey of Self-protected Mobile Agents
| 10 |
---
paper_title: Secure mobile multiagent systems in virtual marketplaces : a case study on comparison shopping
paper_content:
The growth of the Internet has deeply influenced our daily lives as well as our commercial structures. Agents and multiagent systems will play a major role in the further development of Internet-based applications like virtual marketplaces. However, there is an increasing awareness of the security problems involved. These systems will not be successful until their problems are solved. This report examines comparison shopping, a virtual marketplace scenario and an application domain for a mobile multiagent system, with respect to its security issues. The interests of the participants in the scenario, merchants and clients, are investigated. Potential security threats are identified and security objectives counteracting those threats are established. These objectives are refined into building blocks a secure multiagent system should provide. The building blocks are transformed into features of agents and executing platforms. Originating from this analysis, solutions for the actual implementation of these building blocks are suggested. It is pointed out under which assumptions it is possible to achieve the security goals, if at all.
---
paper_title: Multi-Agent System Security for Mobile Communication
paper_content:
This thesis investigates security in multi-agent systems for mobile communication. Mobile as well as non-mobile agent technology is addressed. A general security analysis based on properties of agents and multi-agent systems is presented along with an overview of security measures applicable to multi-agent systems, and in particular to mobile agent systems. A security architecture, designed for deployment of agent technology in a mobile communication environment, is presented. The security architecture allows modelling of interactions at all levels within a mobile communication system. This architecture is used as the basis for describing security services and mechanisms for a multi-agent system. It is shown how security mechanisms can be used in an agent system, with emphasis on secure agent communication. Mobile agents are vulnerable to attacks from the hosts on which they are executing. Two methods for dealing with threats posed by malicious hosts to a trading agent are presented. The first approach uses a threshold scheme and multiple mobile agents to minimise the effect of malicious hosts. The second introduces trusted nodes into the infrastructure. Undetachable signatures have been proposed as a way to limit the damage a malicious host can do by misusing a signature key carried by a mobile agent. This thesis proposes an alternative scheme based on conventional signatures and public key certificates. Threshold signatures can be used in a mobile agent scenario to spread the risk between several agents and thereby overcome the threats posed by individual malicious hosts. An alternative to threshold signatures, based on conventional signatures, achieving comparable security guarantees with potential practical advantages compared to a threshold scheme is proposed in this thesis. Undetachable signatures and threshold signatures are both concepts applicable to mobile agents. This thesis proposes a technique combining the two schemes to achieve undetachable threshold signatures. This thesis defines the concept of certificate translation, which allows an agent to have one certificate translated into another format if so required, and thereby save storage space as well as being able to cope with a certificate format not foreseen at the time the agent was created.
---
paper_title: Classification of malicious host threats in mobile agent computing
paper_content:
Full-scale adoption of mobile agent technology in untrustworthy network environments, such as the Internet, has been delayed by several security complexities [Montanari, 2001]. Presently, there is a large array of security issues and sub-issues within mobile agent computing that makes it tough to distinguish between different types of problems, and therefore also interfere with the definition of suitable solutions. Literature addressing the full range of problems is limited and mostly discusses single security threats or subsets of security problems and their possible solutions. The purpose of this paper is to analyse the different security threats that can possibly be imposed on agents by malicious hosts, and then provide a classification of these threats before we describe the current solution approaches that are implemented to address the identified problems. By providing such a classification we were able to identify specific gaps in current research efforts, and enable researchers to systematically focus their attention on different classes of solutions to remedy these threats.
---
paper_title: Two-Party Computing with Encrypted Data
paper_content:
We consider a new model for online secure computation on encrypted inputs in the presence of malicious adversaries. The inputs are independent of the circuit computed in the sense that they can be contributed by separate third parties. The model attempts to emulate as closely as possible the model of "Computing with Encrypted Data" that was put forth in 1978 by Rivest, Adleman and Dertouzos which involved a single online message. In our model, two parties publish their public keys in an offline stage, after which any party (i.e., any of the two and any third party) can publish encryption of their local inputs. Then in an on-line stage, given any common input circuit C and its set of inputs from among the published encryptions, the first party sends a single message to the second party, who completes the computation.
---
paper_title: Computing arbitrary functions of encrypted data
paper_content:
Suppose that you want to delegate the ability to process your data, without giving away access to it. We show that this separation is possible: we describe a "fully homomorphic" encryption scheme that keeps data private, but that allows a worker that does not have the secret decryption key to compute any (still encrypted) result of the data, even when the function of the data is very complex. In short, a third party can perform complicated processing of data without being able to see it. Among other things, this helps make cloud computing compatible with privacy.
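The fully homomorphic scheme described above is far beyond a short sketch, but the underlying idea of computing on ciphertexts can be illustrated with a toy, insecure example: textbook RSA is multiplicatively homomorphic, so a party holding only ciphertexts can produce an encryption of the product of the plaintexts. The tiny parameters below are purely illustrative.
```python
# Toy demonstration of a homomorphic property (NOT the cited fully
# homomorphic scheme, and not secure): textbook RSA with tiny parameters.
p, q, e = 61, 53, 17
n = p * q
d = pow(e, -1, (p - 1) * (q - 1))  # private exponent (Python 3.8+)

def enc(m):
    return pow(m, e, n)

def dec(c):
    return pow(c, d, n)

m1, m2 = 7, 11
c_product = (enc(m1) * enc(m2)) % n      # computed without seeing m1, m2
assert dec(c_product) == (m1 * m2) % n
print("decrypted product:", dec(c_product))
```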
---
paper_title: Time Limited Blackbox Security: Protecting Mobile Agents From Malicious Hosts
paper_content:
In this paper, an approach to partially solve one of the most difficult aspects of security of mobile agents systems is presented, the problem of malicious hosts. This problem consists in the possibility of attacks against a mobile agent by the party that maintains an agent system node, a host. The idea to solve this problem is to create a blackbox out of an original agent. A blackbox is an agent that performs the same work as the original agent, but is of a different structure. This difference allows to assume a certain agent protection time interval, during which it is impossible for an attacker to discover relevant data or to manipulate the execution of the agent. After that time interval the agent and some associated data get invalid and the agent cannot migrate or interact anymore, which prevents the exploitation of attacks after the protection interval.
---
paper_title: TAMAP: a new trust-based approach for mobile agent protection
paper_content:
Human activities are increasingly based on the use of distant resources and services, and on the interaction between remotely located parties that may know little about each other. Mobile agents are the most suited technology. They must therefore be prepared to execute on different hosts with various environmental security conditions. This paper introduces a trust-based mechanism to improve the security of mobile agents against malicious hosts and to allow their execution in various environments. It is based on the dynamic interaction between the agent and the host. Information collected during the interaction enables generation of an environment key. This key allows then to deduce the host’s trust degree and permits the mobile agent to adapt its execution accordingly to the host trustworthiness, its behavior history and the provided Quality of Service (QoS). An adaptive mobile agent architecture is therefore proposed. It endows the mobile agent with the ability to react with an unexpected behavior.
---
paper_title: Strong Cryptography Armoured Computer Viruses Forbidding Code Analysis: the Bradley Virus
paper_content:
Imagining what the nature of future viral attacks might look like is the key to successfully protecting against them. This paper discusses how cryptography and key management techniques may definitively checkmate antiviral analysis and mechanisms. We present a generic virus, denoted Bradley, which protects its code with a very secure, ultra-fast symmetric encryption. Since the main drawback of using encryption in that case lies in the existence of the secret key or information about it within the viral code, we show how to bypass this limitation by using suitable key management techniques. Finally, we show that the complexity of the Bradley code analysis is at least as high as that of the cryptanalysis of its underlying encryption algorithm.
---
paper_title: An Approach to the Sensitive Information Protection for Mobile Code
paper_content:
Environmental key generation can be used when mobile code producer (MCP) needs mobile code consumer (MCC) to decrypt the code correctly only if some special environmental conditions are true. In this paper, we introduce a new approach, which is based on environmental key generation, to protect sensitive information within mobile code. It is achieved through introduction of trusted computing technology-sealing. Our approach uses the combination of hardware and software technology, so it is tamper-resistant to attackers.
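To make the environmental-key idea described above concrete, here is a minimal sketch assuming a generic hash-based key derivation; it does not model the paper's TPM-style sealing step, and the function names and the toy XOR cipher are illustrative only.

```python
# Hedged sketch of generic environmental key generation (not the paper's
# sealing protocol): the mobile code carries only a hash of the expected key;
# the key itself is re-derived from an observation of the host environment,
# so the sensitive payload decrypts only where the expected conditions hold.
import hashlib

def derive_key(environment_observation: bytes) -> bytes:
    # Illustrative: in practice the observation would be a canonical encoding
    # of the environmental conditions the code producer expects.
    return hashlib.sha256(environment_observation).digest()

def try_decrypt(ciphertext: bytes, expected_key_hash: bytes, observation: bytes):
    key = derive_key(observation)
    if hashlib.sha256(key).digest() != expected_key_hash:
        return None  # wrong environment: the sensitive code stays opaque
    # Toy XOR "cipher" purely for illustration; a real scheme would use an
    # authenticated cipher.
    return bytes(c ^ key[i % len(key)] for i, c in enumerate(ciphertext))
```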
---
paper_title: Computing Functions of a Shared Secret
paper_content:
In this work we introduce and study threshold (t-out-of-n) secret sharing schemes for families of functions ${\cal F}$. Such schemes allow any set of at least t parties to compute privately the value f(s) of a (previously distributed) secret s, for any $f\in {\cal F}$. Smaller sets of players get no more information about the secret than what follows from the value f(s). The goal is to make the shares as short as possible. Results are obtained for two different settings: we study the case when the evaluation is done on a broadcast channel without interaction, and we examine what can be gained by allowing evaluations to be done interactively via private channels.
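For readers unfamiliar with the t-out-of-n setting this paper builds on, the sketch below shows a plain Shamir-style threshold sharing of a secret; it is only the baseline primitive, not the paper's function-evaluation schemes, and the prime and all names are illustrative.

```python
# Minimal Shamir-style t-out-of-n sharing over a small prime field, as a
# baseline illustration of the threshold setting (not the paper's scheme).
import random

PRIME = 2_147_483_647  # toy field size; real deployments use larger primes

def share(secret, t, n):
    """Split `secret` into n shares; any t of them reconstruct it."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(t - 1)]
    poly = lambda x: sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, poly(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret from any t shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret

shares = share(12345, t=3, n=5)
assert reconstruct(shares[:3]) == 12345
```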
---
paper_title: TAMAP: a new trust-based approach for mobile agent protection
paper_content:
Human activities are increasingly based on the use of distant resources and services, and on the interaction between remotely located parties that may know little about each other. Mobile agents are the most suited technology. They must therefore be prepared to execute on different hosts with various environmental security conditions. This paper introduces a trust-based mechanism to improve the security of mobile agents against malicious hosts and to allow their execution in various environments. It is based on the dynamic interaction between the agent and the host. Information collected during the interaction enables generation of an environment key. This key allows then to deduce the host’s trust degree and permits the mobile agent to adapt its execution accordingly to the host trustworthiness, its behavior history and the provided Quality of Service (QoS). An adaptive mobile agent architecture is therefore proposed. It endows the mobile agent with the ability to react with an unexpected behavior.
---
paper_title: Secure Internet programming: security issues for mobile and distributed objects
paper_content:
Foundations.- Trust: Benefits, Models, and Mechanisms.- Protection in Programming-Language Translations.- Reflective Authorization Systems: Possibilities, Benefits, and Drawbacks.- Abstractions for Mobile Computation.- Type-Safe Execution of Mobile Agents in Anonymous Networks.- Types as Specifications of Access Policies.- Security Properties of Typed Applets.- Concepts.- The Role of Trust Management in Distributed Systems Security.- Distributed Access-Rights Management with Delegation Certificates.- A View-Based Access Control Model for CORBA.- Apoptosis - the Programmed Death of Distributed Services.- A Sanctuary for Mobile Agents.- Mutual Protection of Co-operating Agents.- Implementations.- Access Control in Configurable Systems.- Providing Policy-Neutral and Transparent Access Control in Extensible Systems.- Interposition Agents: Transparently Interposing User Code at the System Interface.- J-Kernel: A Capability-Based Operating System for Java.- Secure Network Objects.- History-Based Access Control for Mobile Code.- Security in Active Networks.- Using Interfaces to Specify Access Rights.- Introducing Trusted Third Parties to the Mobile Agent Paradigm.
---
paper_title: Protecting mobile agents’ data using trusted computing technology
paper_content:
A technique efficiently creates a database used to correlate information identifying data session traffic exchanged between entities of a computer network with information relating to multi-protocol intermediate devices configured to carry the session traffic throughout the network. The intermediate devices are preferably routers configured to carry System Network Architecture (SNA) session traffic between SNA entities comprising a host network connection and a physical unit (PU) station. The host network connection utilizes a channel-attached router for connectivity with a host computer. The technique allows a network management station (NMS) program to establish a NMS database by obtaining a PU name associated with an active SNA session without requiring a presence on or continual communication with a virtual telecommunication access method on the host.
---
paper_title: Public Protection of Software
paper_content:
One of the overwhelming problems that software producers must contend with is the unauthorized use and distribution of their products. Copyright laws concerning software are rarely enforced, thereby causing major losses to the software companies. Technical means of protecting software from illegal duplication are required, but the available means are imperfect. We present protocols that enable software protection without causing overhead in distribution and maintenance. The protocols may be implemented by a conventional cryptosystem, such as the DES, or by a public key cryptosystem, such as the RSA. Both implementations are proved to satisfy the required security criteria.
---
paper_title: On the Problem of Trust in Mobile Agent Systems
paper_content:
Systems that support mobile agents are increasingly being used on the global Internet. Security concerns dealing with the protection of the execution environment from malicious agents are extensively being tackled. We concentrate on the reverse problem, namely how a mobile agent can be protected from malicious behaviour of the execution environment, which is largely ignored. We will identify the problem of trust as the major issue in this context and describe a trusted and tamper-proof hardware that can be used to divide this problem among several principals, each of which has to be trusted with a special task. We show that the presented approach can be used to mitigate an important problem in the design of open systems.
---
paper_title: Classification of malicious host threats in mobile agent computing
paper_content:
Full-scale adoption of mobile agent technology in untrustworthy network environments, such as the Internet, has been delayed by several security complexities [Montanari, 2001]. Presently, there is a large array of security issues and sub-issues within mobile agent computing that makes it tough to distinguish between different types of problems, and therefore also interfere with the definition of suitable solutions. Literature addressing the full range of problems is limited and mostly discusses single security threats or subsets of security problems and their possible solutions. The purpose of this paper is to analyse the different security threats that can possibly be imposed on agents by malicious hosts, and then provide a classification of these threats before we describe the current solution approaches that are implemented to address the identified problems. By providing such a classification we were able to identify specific gaps in current research efforts, and enable researchers to systematically focus their attention on different classes of solutions to remedy these threats.
---
paper_title: Using trust for secure collaboration in uncertain environments
paper_content:
The SECURE project investigates the design of security mechanisms for pervasive computing based on trust. It addresses how entities in unfamiliar pervasive computing environments can overcome initial suspicion to provide secure collaboration.
---
paper_title: A service-oriented trust management framework
paper_content:
In this paper we present and analyse a service-oriented trust management framework based on the integration of role-based modelling and risk assessment in order to support trust management solutions. We also survey recent definitions of trust and subsequently introduce a service-oriented definition of trust, and analyse some general properties of trust in e-services, emphasising properties underpinning the propagation and transferability of trust.
---
paper_title: TAMAP: a new trust-based approach for mobile agent protection
paper_content:
Human activities are increasingly based on the use of distant resources and services, and on the interaction between remotely located parties that may know little about each other. Mobile agents are the most suited technology. They must therefore be prepared to execute on different hosts with various environmental security conditions. This paper introduces a trust-based mechanism to improve the security of mobile agents against malicious hosts and to allow their execution in various environments. It is based on the dynamic interaction between the agent and the host. Information collected during the interaction enables generation of an environment key. This key allows then to deduce the host’s trust degree and permits the mobile agent to adapt its execution accordingly to the host trustworthiness, its behavior history and the provided Quality of Service (QoS). An adaptive mobile agent architecture is therefore proposed. It endows the mobile agent with the ability to react with an unexpected behavior.
---
paper_title: Modelling and evaluating trust relationships in mobile agents based systems
paper_content:
This paper considers a trust framework for mobile agent based systems. It introduces the concept of trust reflection and presents a novel method using views of the trustor to derive the trust symmetry. This new approach addresses the trust initialization issues and allows for different levels of initial trust to be established based on trustor's initial assessment and then enables different trust dynamics to evolve in the course of future interactions in situations where full trust appraisal is not possible at the beginning of interactions. This is an important aspect for security in distributed systems, especially in mobile agent based systems due to the fact that the agent owner principals may not have all the information to appraise full trust in other entities(e.g. foreign hosts to be visited by the mobile agent) when composing an itinerary prior to the deployment of mobile agent. Our framework proposes a new formalism to capture and reason about trust reflection and presents an algorithm for updating trust during system operation. This is then applied to a simulated mobile agent system and an analysis of the results is presented.
---
paper_title: A Pessimistic Approach to Trust in Mobile Agent Platforms
paper_content:
The problem of protecting an execution environment from possibly malicious mobile agents has been studied extensively, but the reverse problem, protecting the agent from malicious execution environments, has not. We propose an approach that relies on trusted and tamper-resistant hardware to prevent breaches of trust, rather than correcting them after the fact. We address the question of how to base trust on technical reasoning. We present a pessimistic approach to trust that tries to prevent malicious behavior from occurring in the first place, rather than correcting it after it has occurred.
---
paper_title: E-Commerce Trust Metrics and Models
paper_content:
Traditional models of trust between vendors and buyers fall short of requirements for an electronic marketplace, where anonymous transactions cross territorial and legal boundaries as well as traditional value-chain structures. Alternative quantifications of trust may offer better evaluations of transaction risk in this environment. This article introduces a notion of quantifiable trust and then develops models that can use these metrics to verify e-commerce transactions in ways that might be able to satisfy the requirements of mutual trust. The article uses two examples in illustrating these concepts: one for an e-commerce printing enterprise and the other for Internet stock trading.
---
paper_title: Trust Relationships in a Mobile Agent System
paper_content:
The notion of trust is presented as an important component in a security infrastructure for mobile agents. A trust model that can be used in tackling the aspect of protecting mobile agents from hostile platforms is proposed. We define several trust relationships in our model, and present a trust derivation algorithm that can be used to infer new relationships from existing ones. An example of how such a model can be utilized in a practical system is provided.
---
paper_title: Trust revelation in multiagent interaction
paper_content:
We analyze untrustworthy interactions, that is, interactions in which a party may fail to carry out its obligations. Such interactions pose agents with the problem of how to estimate the trustworthiness of the other party. The efficiency of untrustworthy interactions critically depends on the amount and the nature of information about untrustworthy agents. We propose a solution to the problem of learning and estimating trustworthiness. Instead of relying on a third party for providing information or for backing up multiagent interaction, we propose an incentive-compatible interaction mechanism in which agents truthfully reveal their trustworthiness at the beginning of every interaction. In such a mechanism agents always report their true level of trustworthiness, even if they are untrustworthy.
---
paper_title: Trust is much more than subjective probability: mental components and sources of trust
paper_content:
In this paper we claim the importance of a cognitive view of trust (its articulate, analytic and founded view) in contrast with a mere quantitative and opaque view of trust supported by economics and game theory (GT). We argue in favour of a cognitive view of trust as a complex structure of beliefs and goals, implying that the truster must have a "theory of the mind" of the trustee. Such a structure of beliefs determines a "degree of trust" and an estimation of risk, and then a decision to rely or not on the other, which is also based on a personal threshold of risk acceptance/avoidance. Finally, we also explain rational and irrational components and uses of trust.
---
paper_title: On the Problem of Trust in Mobile Agent Systems
paper_content:
Systems that support mobile agents are increasingly being used on the global Internet. Security concerns dealing with the protection of the execution environment from malicious agents are extensively being tackled. We concentrate on the reverse problem, namely how a mobile agent can be protected from malicious behaviour of the execution environment, which is largely ignored. We will identify the problem of trust as the major issue in this context and describe a trusted and tamper-proof hardware that can be used to divide this problem among several principals, each of which has to be trusted with a special task. We show that the presented approach can be used to mitigate an important problem in the design of open systems.
---
paper_title: Trust metrics, models and protocols for electronic commerce transactions
paper_content:
The paper introduces the notion of quantifiable trust for electronic commerce. It describes metrics and models for the measurement of trust variables and fuzzy verification of transactions. Trust metrics help preserve system availability by determining risk on transactions. Furthermore, when several entities are involved in electronic transactions, previously known techniques are applied for trust propagation. Malicious transacting entities may try to illegitimately gain access to private trust information. Suitable protocols are developed to minimize breach of privacy and incorporate a non-repudiable context using cryptographic techniques.
---
|
Title: A Survey of Self-protected Mobile Agents
Section 1: INTRODUCTION
Description 1: Introduce the concept of mobile agents and outline the security risks they encounter, focusing on the need for self-protection mechanisms.
Section 2: MOBILE AGENTS' SELF-PROTECTION
Description 2: Present classical techniques for mobile agent self-protection and emphasize the importance of trusted execution environments.
Section 3: Mobile agent security
Description 3: Discuss different security threats mobile agents face and common approaches to protect themselves, including cryptographic methods and obfuscation techniques.
Section 4: Cryptographic Approaches
Description 4: Detail cryptographic methods such as hiding functions used in mobile agent self-protection and their limitations.
Section 5: Obfuscation Techniques
Description 5: Explain obfuscation techniques for protecting mobile agent code from reverse engineering and their practical constraints.
Section 6: Environmental Key
Description 6: Explore the concept of using environmental keys for agent self-protection and estimating host trustworthiness.
Section 7: The k-out-of-n Threshold Scheme Approach
Description 7: Describe the k-out-of-n threshold scheme and other related methods for ensuring secure distributed computing with mobile agents.
Section 8: Trusted execution environments
Description 8: Review trusted hardware and software approaches for creating trusted execution environments to protect mobile agents.
Section 9: Synthesis
Description 9: Synthesize previous findings on trusted hardware and software security approaches and discuss the necessity of dynamic trust estimation for mobile agents.
Section 10: TRUST ESTIMATION
Description 10: Elaborate on the concept of trust in mobile agent security, reviewing various trust models and their application in dynamically assessing host trustworthiness.
|
Overview on initial METIS D2D concept
| 6 |
---
paper_title: Smart mobility management for D2D communications in 5G networks
paper_content:
Direct device-to-device (D2D) communication is regarded as a promising technology to provide low-power, high-data-rate and low-latency services between end-users in the future 5G networks. However, it may not always be feasible to provide low-latency reliable communication between end-users due to the nature of mobility. For instance, the latency could be increased when several controlling nodes have to exchange D2D-related information among each other. Moreover, the signaling overhead introduced by D2D operation needs to be minimized. Therefore, in this paper, we propose several mobility management solutions with their technical challenges and expected gains under the assumptions of 5G small cell networks.
---
paper_title: 5G small cell optimized radio design
paper_content:
The 5th generation (5G) of mobile radio access technologies is expected to become available for commercial launch around 2020. In this paper, we present our envisioned 5G system design optimized for small cell deployment taking a clean slate approach, i.e. removing most compatibility constraints with the previous generations of mobile radio access technologies. This paper mainly covers the physical layer aspects of the 5G concept design.
---
paper_title: On the performance gain of flexible UL/DL TDD with centralized and decentralized resource allocation in dense 5G deployments
paper_content:
Ultra dense small cell deployments and a very large number of applications are expected to be the essential aspects of the newly emerging 5th generation (5G) wireless communication system. To match the diverse quality of service requirements imposed by a variety of applications, dynamic TDD is proposed as a solution by enabling flexible utilization of the spectrum for uplink and downlink of each cell. In this paper, the system performance of flexible (dynamic) TDD is compared to a fixed portioning of resources for uplink and downlink. Further, the degree of centralization for resource management is investigated in the context of dynamic TDD, because multi-cell scheduling will be important for the design of 5G ultra-dense network architecture. The results show that dynamic TDD is indeed a very promising option for 5G networks, and substantially decreases packet outage delays. We find that the performance gap between centralized and decentralized scheduling is small in case of planned deployments. However, centralized scheduling may be beneficial in certain dynamic TDD deployment scenarios with a very asymmetric access point distribution.
---
|
Title: Overview on initial METIS D2D concept
Section 1: Introduction
Description 1: Provide an introduction to the METIS project and its objectives, especially focusing on the goals for the 5G system concept and its relevance to beyond-2020 scenarios.
Section 2: METIS Scenarios and Horizontal Topics
Description 2: Discuss the identified scenarios and applications for the connected information society and detail the selected Horizontal Topics (HTs) including Direct Device-to-Device (D2D) communication.
Section 3: Technical Challenges
Description 3: Outline the key technical challenges for D2D communication including device discovery, communication mode selection, coexistence and interference management, and multi-operator D2D operation.
Section 4: METIS HT D2D Concept
Description 4: Introduce the initial METIS D2D concept and its core technology components designed to address the identified technical challenges, such as flexible air interface, device discovery, mobility management, D2D relay, and spectrum management/sharing.
Section 5: D2D Performance Evaluation
Description 5: Analyze the system performance of specific D2D technology components, including enhanced Inter-Cell Interference Coordination (ICIC) in D2D enabled HetNets and multi-cell coordinated and flexible mode selection and resource allocation for D2D.
Section 6: Conclusions
Description 6: Summarize the METIS D2D concept's ability to address various technical challenges and highlight the substantial gains achieved through the proposed technology components. Discuss potential future evaluations and analyses.
|
A review of the characteristics of 108 author-level bibliometric indicators
| 15 |
---
paper_title: Quality, quantity, and impact in academic publication
paper_content:
Publication records of 85 social-personality psychologists were tracked from the time of their doctoral studies until 10 years post-PhD. Associations between publication quantity (number of articles), quality (mean journal impact factor and article influence score), and impact (citations, h-index, g-index, webpage visits) were examined. Publication quantity and quality were only modestly related, and there was evidence of a quality-quantity trade-off. Impact was more strongly associated with quantity than quality. Authors whose records weighed quality over quantity tended to be associated with more prestigious institutions, but had lesser impact. Quantity- and quality-favoring publication strategies may have important implications for the shape and success of scientific careers.
---
paper_title: Assessing basic research : Some partial indicators of scientific progress in radio astronomy
paper_content:
As the costs of certain types of scientific research have escalated and as growth rates in overall national science budgets have declined, so the need for an explicit science policy has grown more urgent. In order to establish priorities between research groups competing for scarce funds, one of the most important pieces of information needed by science policy-makers is an assessment of those groups' recent scientific performance. This paper suggests a method for evaluating that performance. After reviewing the literature on scientific assessment, we argue that, while there are no simple measures of the contributions to scientific knowledge made by scientists, there are a number of ‘partial indicators’ — that is, variables determined partly by the magnitude of the particular contributions, and partly by ‘other factors’. If the partial indicators are to yield reliable results, then the influence of these ‘other factors’ must be minimised. This is the aim of the method of ‘converging partial indicators’ proposed in this paper. We argue that the method overcomes many of the problems encountered in previous work on scientific assessment by incorporating the following elements: (1) the indicators are applied to research groups rather than individual scientists; (2) the indicators based on citations are seen as reflecting the impact, rather than the quality or importance, of the research work; (3) a range of indicators are employed, each of which focusses on different aspects of a group's performance; (4) the indicators are applied to matched groups, comparing ‘like’ with ‘like’ as far as possible; (5) because of the imperfect or partial nature of the indicators, only in those cases where they yield convergent results can it be assumed that the influence of the ‘other factors’ has been kept relatively small (i.e. the matching of the groups has been largely successful), and that the indicators therefore provide a reasonably reliable estimate of the contribution to scientific progress made by different research groups. In an empirical study of four radio astronomy observatories, the method of converging partial indicators is tested, and several of the indicators (publications per researcher, citations per paper, numbers of highly cited papers, and peer evaluation) are found to give fairly consistent results. The results are of relevance to two questions: (a) can basic research be assessed? (b) more specifically, can significant differences in the research performance of radio astronomy centres be identified? We would maintain that the evidence presented in this paper is sufficient to justify a positive answer to both these questions, and hence to show that the method of converging partial indicators can yield information useful to science policy-makers.
---
paper_title: The w-index: A significant improvement of the h-index
paper_content:
I propose a new measure, the w-index, as a particularly simple and useful way to assess the integrated impact of a researcher's work, especially his or her excellent papers. The w-index can be defined as follows: If w of a researcher's papers have at least 10w citations each and the other papers have fewer than 10(w+1) citations, his/her w-index is w. It is a significant improvement of the h-index.
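The definition above translates into a short computation; the sketch below assumes the equivalent operational reading that w is the largest integer for which at least w papers have 10w or more citations each.

```python
# Sketch of Wu's w-index from a list of per-paper citation counts.
def w_index(citations):
    cites = sorted(citations, reverse=True)
    w = 0
    while w < len(cites) and cites[w] >= 10 * (w + 1):
        w += 1
    return w

print(w_index([120, 90, 45, 30, 12, 3]))  # -> 3 (three papers with >= 30 citations)
```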
---
paper_title: Towards objectivity in research evaluation using bibliometric indicators – A protocol for incorporating complexity
paper_content:
Abstract Publications are thought to be an integrative indicator best suited to measure the multifaceted nature of scientific performance. Therefore, indicators based on the publication record (citation analysis) are the primary tool for rapid evaluation of scientific performance. Nevertheless, it has to be questioned whether the indicators really do measure what they are intended to measure because people adjust to the indicator value system by optimizing their indicator rather than their performance. Thus, no matter how sophisticated an indicator may be, it will never be proof against manipulation. A literature review identifies the most critical problems of citation analysis: database-related problems, inflated citation records, bias in citation rates and crediting of multi-author papers. We present a step-by-step protocol to address these problems. By applying this protocol, reviewers can avoid most of the pitfalls associated with the pure numbers of indicators and achieve a fast but fair evaluation of a scientist's performance. We as ecologists should accept complexity not only in our research but also in our research evaluation and should encourage scientists of other disciplines to do so as well.
---
paper_title: The Holy Grail of science policy: Exploring and combining bibliometric tools in search of scientific excellence
paper_content:
Evaluation studies of scientific performance conducted during the past years more and more focus on the identification of research of the 'highest quality', 'top' research, or 'scientific excellence'. This shift in focus has led to the development of new bibliometric methodologies and indicators. Technically, it meant a shift from bibliometric impact scores based on average values, such as the average impact of all papers published by some unit to be evaluated, towards indicators reflecting the top of the citation distribution, such as the number of 'highly cited' or 'top' articles. In this study we present a comparative analysis of a number of standard and new indicators of research performance or 'scientific excellence', using techniques applied in studies conducted by CWTS in recent years. It will be shown that each type of indicator reflects a particular dimension of the general concept of research performance. Consequently, the application of one single indicator only may provide an incomplet...
---
paper_title: h-Index : A review focused in its variants , computation and standardization for different scientific fields
paper_content:
The h-index and some related bibliometric indices have received a lot of attention from the scientific community in the last few years due to some of their good properties (easiness of computation, balance between quantity of publications and their impact and so on). Many different indicators have been developed in order to extend and overcome the drawbacks of the original Hirsch proposal. In this contribution we present a comprehensive review on the h-index and related indicators field. From the initial h-index proposal we study their main advantages, drawbacks and the main applications that we can find in the literature. A description of many of the h-related indices that have been developed along with their main characteristics and some of the works that analyze and compare them are presented. We also review the most up to date standardization studies that allow a fair comparison by means of the h-index among scientists from different research areas and finally, some works that analyze the computation of the h-index and related indices by using different citation databases (ISI Citation Indexes, Google Scholar and Scopus) are introduced.
---
paper_title: Scaling the h-index for different scientific ISI fields
paper_content:
We propose a simple way to put in a common scale the h values of researchers working in different scientific ISI fields, so that the foreseeable misuse of this index for inter-areas comparison might be prevented, or at least, alleviated.
---
paper_title: UK Research Assessment Exercises: Informed judgments on research quality or quantity?
paper_content:
A longitudinal analysis of UK science covering almost 20 years revealed in the years prior to a Research Assessment Exercise (RAE 1992, 1996 and 2001) three distinct bibliometric patterns, that can be interpreted in terms of scientists’ responses to the principal evaluation criteria applied in a RAE. When in the RAE 1992 total publications counts were requested, UK scientists substantially increased their article production. When a shift in evaluation criteria in the RAE 1996 was announced from ‘quantity’ to ‘quality’, UK authors gradually increased their number of papers in journals with a relatively high citation impact. And during 1997–2000, institutions raised their number of active research staff by stimulating their staff members to collaborate more intensively, or at least to co-author more intensively, although their joint paper productivity did not. This finding suggests that, along the way towards the RAE 2001, evaluated units in a sense shifted back from ‘quality’ to ‘quantity’. The analysis also observed a slight upward trend in overall UK citation impact, corroborating conclusions from an earlier study. The implications of the findings for the use of citation analysis in the RAE are briefly discussed.
---
paper_title: The Four Literatures of Social Science
paper_content:
This chapter reviews bibliometric studies of the social sciences and humanities. SSCI bibliometrics will work reasonably well in economics and psychology, whose literatures share many characteristics with science, and less well in sociology, characterised by a typical social science literature. The premise of the chapter is that quantitative evaluation of research output faces severe methodological difficulties in fields whose literature differs in nature from scientific literature. Bibliometric evaluations are based on international journal literature indexed in the SSCI, but social scientists also publish books, write for national journals and for the non-scholarly press. These literatures form distinct, yet partially overlapping worlds, each serving a different purpose. For example, national journals communicate with a local scholarly community, and the non-scholarly press represents research in interaction with contexts of application. Each literature is more trans-disciplinary than its scientific counterpart, which itself poses methodological challenges. The nature and role of each of the literatures will be explored here, and the chapter will argue that by ignoring the three other literatures of social science bibliometric evaluation produces a distorted picture of social science fields.
---
paper_title: Is it possible to compare researchers with different scientific interests?
paper_content:
The number h of papers with at least h citations has been proposed to evaluate individuals' scientific research production. This index is robust in several ways, yet strongly dependent on the research field. We propose a complementary index $h_I = h^2 / N_a^{(T)}$, with $N_a^{(T)}$ being the total number of authors in the considered h papers. A researcher with index $h_I$ has $h_I$ papers with at least $h_I$ citations if he/she had published alone. We have obtained the rank plots of $h$ and $h_I$ for four Brazilian scientific communities. In contrast with the h-index, the $h_I$ index rank plots collapse into a single curve allowing comparison among different research areas.
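A minimal sketch of this computation, assuming papers are supplied as (citations, number of authors) pairs:

```python
# h_I as described above: compute the standard h-index, then divide h^2 by the
# total number of authors (counted with multiplicity) over the h core papers.
def h_i_index(papers):
    """papers: list of (citations, number_of_authors) tuples."""
    papers = sorted(papers, key=lambda p: p[0], reverse=True)
    h = sum(1 for rank, (cites, _) in enumerate(papers, start=1) if cites >= rank)
    total_authors = sum(n_auth for _, n_auth in papers[:h])
    return h * h / total_authors if total_authors else 0.0

print(h_i_index([(50, 2), (20, 4), (10, 3), (2, 1)]))  # h = 3, h_I = 9 / 9 = 1.0
```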
---
paper_title: Exploratory factor analysis for the Hirsch index, 17 h-type variants, and some traditional bibliometric indicators
paper_content:
The purpose of this article is to come up with a valid categorization and to examine the performance and properties of a wide range of h-type indices presented recently in the relevant literature. By exploratory factor analysis (EFA) we study the relationship between the h-index, its variants, and some standard bibliometric indicators of 26 physicists compiled from the Science Citation Index in the Web of Science.
---
paper_title: Lost in publication: how measurement harms science
paper_content:
Measurement of scientific productivity is difficult. The measures used (impact factor of the journal, citations to the paper being measured) are crude. But these measures are now so univer- sally adopted that they determine most things that matter: tenure or unemployment, a postdoctoral grant or none, success or failure. As a result, scientists have been forced to downgrade their primary aim from making discoveries to publishing as many papers as possible — and trying to work them into high impact factor journals. Consequently, scientific behaviour has become distorted and the utility, quality and objectivity of articles has deteriorated. Changes to the way scientists are assessed are urgently needed, and I suggest some here.
---
paper_title: How good is research really?
paper_content:
Bibliometrics increasingly determine the allocation of jobs and funding in science. Bibliometricians must therefore develop and adopt reliable measures of quality that truly reflect a scientist's contribution to his or her field.
---
paper_title: The h-index and its alternatives: An application to the 100 most prolific economists
paper_content:
The h-index is a recent but already quite popular way of measuring research quality and quantity. However, it discounts highly-cited papers. The g-index corrects for this, but it is sensitive to the number of never-cited papers. Besides, h- or g-index-based rankings have a large number of ties. Therefore, this paper introduces two new indices, and tests their performance for the 100 most prolific economists. A researcher has a t-number (f-number) of t (f) if t (f) is the largest number for which it holds that she has t (f) publications for which the geometric (harmonic) average number of citations is at least t (f). The new indices overcome the shortcomings of the old indices.
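A sketch of the two definitions, assuming the t (f) most-cited papers are the ones considered, since they maximise the geometric (harmonic) mean:

```python
# t-number (geometric mean) and f-number (harmonic mean) as defined above.
import math

def _largest_rank(citations, mean_fn):
    cites = sorted(citations, reverse=True)
    best = 0
    for k in range(1, len(cites) + 1):
        top = cites[:k]
        if 0 in top:
            break  # both means collapse once an uncited paper enters the set
        if mean_fn(top) >= k:
            best = k
    return best

def t_number(citations):
    geometric = lambda xs: math.exp(sum(math.log(x) for x in xs) / len(xs))
    return _largest_rank(citations, geometric)

def f_number(citations):
    harmonic = lambda xs: len(xs) / sum(1.0 / x for x in xs)
    return _largest_rank(citations, harmonic)

cites = [100, 40, 20, 10, 5, 1]
print(t_number(cites), f_number(cites))  # -> 6 5
```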
---
paper_title: New developments related to the Hirsch index
paper_content:
It is shown that the h-index on the one hand, and the A- and g-indices on the other, measure different things. The A-index, however, seems overly sensitive to one extremely highly cited article. For this reason it would seem that the g-index is the more useful of the two. As to the h- and the g-index: they do measure different aspects of a scientist’s publication list. Certainly the h-index does not tell the full story, and, although a more sensitive indicator than the h-index, neither does the g-index. Taken together, g and h present a concise picture of a scientist’s achievements in terms of publications and citations.
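Since the abstract compares the g- and A-indices without restating their definitions, here is a brief sketch using the usual ones: g is the largest rank whose top-g papers jointly hold at least g^2 citations, and A is the mean citation count of the h papers in the Hirsch core.

```python
# Usual definitions of the g-index (capped at the number of papers) and the
# A-index (average citations over the Hirsch core), for reference.
def g_index(citations):
    cites = sorted(citations, reverse=True)
    total, g = 0, 0
    for rank, c in enumerate(cites, start=1):
        total += c
        if total >= rank * rank:
            g = rank
    return g

def a_index(citations):
    cites = sorted(citations, reverse=True)
    h = sum(1 for rank, c in enumerate(cites, start=1) if c >= rank)
    return sum(cites[:h]) / h if h else 0.0

cites = [50, 30, 10, 5, 2, 1, 0]
print(g_index(cites), a_index(cites))  # -> 7 23.75
```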
---
paper_title: Quantifying Scholarly Impact: IQp Versus the Hirsch h
paper_content:
Hirsch's (2005) h index of scholarly output has generated substantial interest and wide acceptance because of its apparent ability to quantify scholarly impact simply and accurately. We show that the excitement surrounding h is premature for three reasons: h stagnates with increasing scientific age; it is highly dependent on publication quantity; and it is highly dependent on field-specific citation rates. Thus, it is not useful for comparing scholars across disciplines. We propose the scholarly “index of quality and productivity” (IQp) as an alternative to h. The new index takes into account a scholar's total impact and also corrects for field-specific citation rates, scholarly productivity, and scientific age. The IQp accurately predicts group membership on a common metric, as tested on a sample of 80 scholars from three populations: (a) Nobel winners in physics (n = 10), chemistry (n = 10), medicine (n = 10), and economics (n = 10), and towering psychologists (n = 10); and scholars who have made more modest contributions to science including randomly selected (b) fellows (n = 15) and (c) members (n = 15) of the Society of Industrial and Organizational Psychology. The IQp also correlates better with expert ratings of greatness than does the h index.
---
paper_title: On measuring the relation between social science research activity and research publication
paper_content:
Using data from the ESRC Research Activity and Publications Information Database (RAPID), this is a report of an investigation that differs from ‘traditional bibliometrics’. With the aid of a data model that prompts an initial focus on the research project rather than those research publications that are easy to see, evidence was uncovered on the varied type of publication used to disseminate social science findings, and on the considerable time intervals involved between funded research activity and the date of publication. The former suggests shortcomings in the performance indicators generated by bibliometric methods; the latter has important implications for the research evaluation practices of funding bodies.
---
paper_title: Meeting the Micro-Level Challenges : Bibliometrics at the Individual Level
paper_content:
The aim of this paper is to demonstrate a method for bibliometric evaluation of individuals, i.e. research staff currently employed within a university department or other knowledge organisations w ...
---
paper_title: Approaches to understanding and measuring interdisciplinary scientific research (IDR): A review of the literature
paper_content:
Interdisciplinary scientific research (IDR) extends and challenges the study of science on a number of fronts, including creating output science and engineering (S&E) indicators. This literature review began with a narrow search for quantitative measures of the output of IDR that could contribute to indicators, but the authors expanded the scope of the review as it became clear that differing definitions, assessment tools, evaluation processes, and measures all shed light on different aspects of IDR. Key among these broader aspects is (a) the importance of incorporating the concept of knowledge integration, and (b) recognizing that integration can occur within a single mind as well as among a team. Existing output measures alone cannot adequately capture this process. Among the quantitative measures considered, bibliometrics (co-authorships, co-inventors, collaborations, references, citations and co-citations) are the most developed, but leave considerable gaps in understanding of the social dynamics that lead to knowledge integration. Emerging measures in network dynamics (particularly betweenness centrality and diversity), and entropy are promising as indicators, but their use requires sophisticated interpretations. Combinations of quantitative measures and qualitative assessments being applied within evaluation studies appear to reveal IDR processes but carry burdens of expense, intrusion, and lack of reproducibility year-upon-year. This review is a first step toward providing a more holistic view of measuring IDR, although research and development is needed before metrics can adequately reflect the actual phenomenon of IDR.
---
paper_title: An index to quantify an individual's scientific research output
paper_content:
I propose the index $h$, defined as the number of papers with citation number higher or equal to $h$, as a useful index to characterize the scientific output of a researcher.
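The definition translates directly into code; a minimal sketch:

```python
# h-index: the largest rank h at which the h-th most cited paper still has at
# least h citations.
def h_index(citations):
    cites = sorted(citations, reverse=True)
    return sum(1 for rank, c in enumerate(cites, start=1) if c >= rank)

print(h_index([10, 8, 5, 4, 3]))  # -> 4
```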
---
paper_title: Meeting the Micro-Level Challenges : Bibliometrics at the Individual Level
paper_content:
The aim of this paper is to demonstrate a method for bibliometric evaluation of individuals, i.e. research staff currently employed within a university department or other knowledge organisations w ...
---
paper_title: Assessing basic research : Some partial indicators of scientific progress in radio astronomy
paper_content:
As the costs of certain types of scientific research have escalated and as growth rates in overall national science budgets have declined, so the need for an explicit science policy has grown more urgent. In order to establish priorities between research groups competing for scarce funds, one of the most important pieces of information needed by science policy-makers is an assessment of those groups' recent scientific performance. This paper suggests a method for evaluating that performance. After reviewing the literature on scientific assessment, we argue that, while there are no simple measures of the contributions to scientific knowledge made by scientists, there are a number of ‘partial indicators’ — that is, variables determined partly by the magnitude of the particular contributions, and partly by ‘other factors’. If the partial indicators are to yield reliable results, then the influence of these ‘other factors’ must be minimised. This is the aim of the method of ‘converging partial indicators’ proposed in this paper. We argue that the method overcomes many of the problems encountered in previous work on scientific assessment by incorporating the following elements: (1) the indicators are applied to research groups rather than individual scientists; (2) the indicators based on citations are seen as reflecting the impact, rather than the quality or importance, of the research work; (3) a range of indicators are employed, each of which focusses on different aspects of a group's performance; (4) the indicators are applied to matched groups, comparing ‘like’ with ‘like’ as far as possible; (5) because of the imperfect or partial nature of the indicators, only in those cases where they yield convergent results can it be assumed that the influence of the ‘other factors’ has been kept relatively small (i.e. the matching of the groups has been largely successful), and that the indicators therefore provide a reasonably reliable estimate of the contribution to scientific progress made by different research groups. In an empirical study of four radio astronomy observatories, the method of converging partial indicators is tested, and several of the indicators (publications per researcher, citations per paper, numbers of highly cited papers, and peer evaluation) are found to give fairly consistent results. The results are of relevance to two questions: (a) can basic research be assessed? (b) more specifically, can significant differences in the research performance of radio astronomy centres be identified? We would maintain that the evidence presented in this paper is sufficient to justify a positive answer to both these questions, and hence to show that the method of converging partial indicators can yield information useful to science policy-makers.
---
paper_title: Assessing basic research : Some partial indicators of scientific progress in radio astronomy
paper_content:
As the costs of certain types of scientific research have escalated and as growth rates in overall national science budgets have declined, so the need for an explicit science policy has grown more urgent. In order to establish priorities between research groups competing for scarce funds, one of the most important pieces of information needed by science policy-makers is an assessment of those groups' recent scientific performance. This paper suggests a method for evaluating that performance. After reviewing the literature on scientific assessment, we argue that, while there are no simple measures of the contributions to scientific knowledge made by scientists, there are a number of ‘partial indicators’ — that is, variables determined partly by the magnitude of the particular contributions, and partly by ‘other factors’. If the partial indicators are to yield reliable results, then the influence of these ‘other factors’ must be minimised. This is the aim of the method of ‘converging partial indicators’ proposed in this paper. We argue that the method overcomes many of the problems encountered in previous work on scientific assessment by incorporating the following elements: (1) the indicators are applied to research groups rather than individual scientists; (2) the indicators based on citations are seen as reflecting the impact, rather than the quality or importance, of the research work; (3) a range of indicators are employed, each of which focusses on different aspects of a group's performance; (4) the indicators are applied to matched groups, comparing ‘like’ with ‘like’ as far as possible; (5) because of the imperfect or partial nature of the indicators, only in those cases where they yield convergent results can it be assumed that the influence of the ‘other factors’ has been kept relatively small (i.e. the matching of the groups has been largely successful), and that the indicators therefore provide a reasonably reliable estimate of the contribution to scientific progress made by different research groups. In an empirical study of four radio astronomy observatories, the method of converging partial indicators is tested, and several of the indicators (publications per researcher, citations per paper, numbers of highly cited papers, and peer evaluation) are found to give fairly consistent results. The results are of relevance to two questions: (a) can basic research be assessed? (b) more specifically, can significant differences in the research performance of radio astronomy centres be identified? We would maintain that the evidence presented in this paper is sufficient to justify a positive answer to both these questions, and hence to show that the method of converging partial indicators can yield information useful to science policy-makers.
---
paper_title: Google Scholar duped and deduped – the aura of “robometrics”
paper_content:
Purpose – The purpose of this paper is to discuss some of the problems that exist with Google Scholar, particularly regarding content spam and citation spam. Design/methodology/approach – The paper provides an analysis of how Google Scholar has been duped by real but manipulated documents and reference lists, as well as by fake documents and references. Details of research regarding the duping of Google Scholar are presented and a possible solution is offered. Findings – Researchers showed how easy it was to dupe Google Scholar. In one case, the researchers added invisible words to the first page of one of their conference papers (using the well-known white letter on white screen/paper technique), and modified the content and bibliography of some of their already published papers, then posted them on the web to see if Google Scholar would bite, i.e. would improve their rank position, and increase the number of citations that the targeted papers received, and the number of papers published by the authors. Go...
---
paper_title: On the h-index - A mathematical approach to a new measure of publication activity and citation impact
paper_content:
The author analyses the basic properties of the h-index, an indicator developed by J. E. Hirsch, on the basis of a probability distribution model widely used in bibliometrics, namely Pareto distributions. The h-index, based on the number of citations received, measures both publication activity and citation impact. It is a useful indicator with interesting mathematical properties, but it cannot substitute for the more sophisticated standard bibliometric indicators.
---
paper_title: Is scientific literature subject to a sell-by-date? A general methodology to analyze the durability of scientific documents
paper_content:
The study of the citation histories and ageing of documents are topics that have been addressed from several perspectives, especially in the analysis of documents with delayed recognition or sleeping beauties. However, there is no general methodology that can be extensively applied for different time periods and/or research fields. In this paper a new methodology for the general analysis of the ageing and durability of scientific papers is presented. This methodology classifies documents into three general types: Delayed documents, which receive the main part of their citations later than normal documents; Flash in the pans, which receive citations immediately after their publication but they are not cited in the long term; and Normal documents, documents with a typical distribution of citations over time. These three types of durability have been analyzed considering the whole population of documents in the Web of Science with at least 5 external citations (i.e. not considering self-citations). Several patterns related to the three types of durability have been found and the potential for further research of the developed methodology is discussed.
---
paper_title: A bibliometric analysis of NOAA’s Office of Ocean Exploration and Research
paper_content:
Bibliometric analysis techniques are increasingly being used to analyze and evaluate scientific research produced by institutions and grant funding agencies. This article uses bibliometric methods to analyze journal articles funded by NOAA’s Office of Ocean Exploration and Research (OER), an extramural grant-funding agency focused on the scientific exploration of the world’s oceans. OER-supported articles in this analysis were identified through grant reports, personal communication, and acknowledgement of OER support or grant numbers. The articles identified were analyzed to determine the number of publications and citations received per year, subject, and institution. The productivity and citation impact of institutions in the US receiving OER grant funding were mapped geographically. Word co-occurrence and bibliographic coupling networks were created and visualized to identify the research topics of OER-supported articles. Finally, article citation counts were evaluated by means of percentile ranks. This article demonstrates that bibliometric analysis can be useful for summarizing and evaluating the research performance of a grant funding agency.
---
paper_title: The h-index: Advantages, limitations and its relation with other bibliometric indicators at the micro level
paper_content:
The relationship of the h-index with other bibliometric indicators at the micro level is analysed for Spanish CSIC scientists in Natural Resources, using publications downloaded from the Web of Science (1994–2004). Different activity and impact indicators were obtained to describe the research performance of scientists in different dimensions, being the h-index located through factor analysis in a quantitative dimension highly correlated with the absolute number of publications and citations. The need to include the remaining dimensions in the analysis of research performance of scientists and the risks of relying only on the h-index are stressed. The hypothesis that the achievement of some highly visible but intermediate-productive authors might be underestimated when compared with other scientists by means of the h-index is tested.
---
paper_title: The Holy Grail of science policy: Exploring and combining bibliometric tools in search of scientific excellence
paper_content:
Evaluation studies of scientific performance conducted during the past years more and more focus on the identification of research of the 'highest quality', 'top' research, or 'scientific excellence'. This shift in focus has led to the development of new bibliometric methodologies and indicators. Technically, it meant a shift from bibliometric impact scores based on average values, such as the average impact of all papers published by some unit to be evaluated, towards indicators reflecting the top of the citation distribution, such as the number of 'highly cited' or 'top' articles. In this study we present a comparative analysis of a number of standard and new indicators of research performance or 'scientific excellence', using techniques applied in studies conducted by CWTS in recent years. It will be shown that each type of indicator reflects a particular dimension of the general concept of research performance. Consequently, the application of one single indicator only may provide an incomplet...
---
paper_title: Scaling the h-index for different scientific ISI fields
paper_content:
We propose a simple way to put in a common scale the h values of researchers working in different scientific ISI fields, so that the foreseeable misuse of this index for inter-areas comparison might be prevented, or at least, alleviated.
---
paper_title: How good is research really?
paper_content:
Bibliometrics increasingly determine the allocation of jobs and funding in science. Bibliometricians must therefore develop and adopt reliable measures of quality that truly reflect a scientist's contribution to his or her field.
---
paper_title: Quantifying Scholarly Impact: IQp Versus the Hirsch h
paper_content:
Hirsch's (2005) h index of scholarly output has generated substantial interest and wide acceptance because of its apparent ability to quantify scholarly impact simply and accurately. We show that the excitement surrounding h is premature for three reasons: h stagnates with increasing scientific age; it is highly dependent on publication quantity; and it is highly dependent on field-specific citation rates. Thus, it is not useful for comparing scholars across disciplines. We propose the scholarly “index of quality and productivity” (IQp) as an alternative to h. The new index takes into account a scholar's total impact and also corrects for field-specific citation rates, scholarly productivity, and scientific age. The IQp accurately predicts group membership on a common metric, as tested on a sample of 80 scholars from three populations: (a) Nobel winners in physics (n = 10), chemistry (n = 10), medicine (n = 10), and economics (n = 10), and towering psychologists (n = 10); and scholars who have made more modest contributions to science including randomly selected (b) fellows (n = 15) and (c) members (n = 15) of the Society of Industrial and Organizational Psychology. The IQp also correlates better with expert ratings of greatness than does the h index.
---
paper_title: The publication-citation matrix and its derived quantities
paper_content:
We give an overview of the main data of a publication-citation matrix. We show how impact factors are defined, and, in particular, point out the difference between the synchronous and the diachronous impact factor. The advantages and disadvantages of using both as tools in research evaluation are discussed.
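For readers unfamiliar with the distinction drawn here, one common way to write the two quantities (a hedged, generic formulation; the paper's exact notation and citation windows may differ, and a k-year window is assumed) is:

```latex
% CIT(y' | y): citations given in year y' to items published in year y;
% PUB(y): number of citable items published in year y.
\[
  \mathrm{IF}_{\mathrm{sync}}(y) =
    \frac{\sum_{i=1}^{k} \mathrm{CIT}(y \mid y-i)}{\sum_{i=1}^{k} \mathrm{PUB}(y-i)},
  \qquad
  \mathrm{IF}_{\mathrm{dia}}(y) =
    \frac{\sum_{i=0}^{k} \mathrm{CIT}(y+i \mid y)}{\mathrm{PUB}(y)} .
\]
```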
---
paper_title: On the calculation of percentile-based bibliometric indicators
paper_content:
A percentile-based bibliometric indicator is an indicator that values publications based on their position within the citation distribution of their field. The most straightforward percentile-based indicator is the proportion of frequently cited publications, for instance, the proportion of publications that belong to the top 10% most frequently cited of their field. Recently, more complex percentile-based indicators have been proposed. A difficulty in the calculation of percentile-based indicators is caused by the discrete nature of citation distributions combined with the presence of many publications with the same number of citations. We introduce an approach to calculating percentile-based indicators that deals with this difficulty in a more satisfactory way than earlier approaches suggested in the literature. We show in a formal mathematical framework that our approach leads to indicators that do not suffer from biases in favor of or against particular fields of science.
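For orientation, a naive Python sketch of the simplest indicator mentioned above, the share of a researcher's papers that reach the field's top 10% by citations; it deliberately ignores the tie-handling problem this paper actually addresses, so it only illustrates the baseline computation:

```python
def proportion_top10(my_citations, field_citations):
    """Naive share of a researcher's papers in the field's top 10% by citations.
    Ties at the cutoff count as 'top', which is exactly the bias the
    percentile-based literature tries to correct."""
    ranked = sorted(field_citations, reverse=True)
    cutoff = ranked[max(0, int(0.1 * len(ranked)) - 1)]  # citation count at the 10% boundary
    return sum(1 for c in my_citations if c >= cutoff) / len(my_citations)

field = list(range(1, 101))                   # 100 field papers with 1..100 citations
print(proportion_top10([95, 40, 91], field))  # 2 of 3 papers at or above the cutoff of 91
```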
---
paper_title: Discovering author impact: A PageRank perspective
paper_content:
This article provides an alternative perspective for measuring author impact by applying PageRank algorithm to a coauthorship network. A weighted PageRank algorithm considering citation and coauthorship network topology is proposed. We test this algorithm under different damping factors by evaluating author impact in the informetrics research community. In addition, we also compare this weighted PageRank with the h-index, citation, and program committee (PC) membership of the International Society for Scientometrics and Informetrics (ISSI) conferences. Findings show that this weighted PageRank algorithm provides reliable results in measuring author impact.
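To make the idea concrete, a plain power-iteration PageRank over a weighted coauthorship matrix is sketched below; the toy matrix, the damping factor and the uniform treatment of dangling nodes are illustrative assumptions, and the citation-aware weighting scheme proposed in the paper is not reproduced:

```python
import numpy as np

def pagerank(W, d=0.85, tol=1e-9, max_iter=200):
    """Power-iteration PageRank on a weighted adjacency matrix W (row i = out-links of node i)."""
    n = W.shape[0]
    row_sums = W.sum(axis=1, keepdims=True)
    # Row-normalize; nodes without out-links get a uniform transition distribution.
    P = np.where(row_sums > 0, W / np.where(row_sums == 0, 1, row_sums), 1.0 / n)
    r = np.full(n, 1.0 / n)
    for _ in range(max_iter):
        r_new = (1 - d) / n + d * (r @ P)
        if np.abs(r_new - r).sum() < tol:
            break
        r = r_new
    return r

# Toy coauthorship network among three authors, weighted by number of joint papers.
W = np.array([[0, 3, 1], [3, 0, 2], [1, 2, 0]], dtype=float)
print(pagerank(W))
```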
---
paper_title: Quality, quantity, and impact in academic publication
paper_content:
Publication records of 85 social-personality psychologists were tracked from the time of their doctoral studies until 10 years post-PhD. Associations between publication quantity (number of articles), quality (mean journal impact factor and article influence score), and impact (citations, h-index, g-index, webpage visits) were examined. Publication quantity and quality were only modestly related, and there was evidence of a quality-quantity trade-off. Impact was more strongly associated with quantity than quality. Authors whose records weighed quality over quantity tended to be associated with more prestigious institutions, but had lesser impact. Quantity- and quality-favoring publication strategies may have important implications for the shape and success of scientific careers. Copyright © 2009 John Wiley & Sons, Ltd.
---
paper_title: Exploratory factor analysis for the Hirsch index, 17 h-type variants, and some traditional bibliometric indicators
paper_content:
The purpose of this article is to come up with a valid categorization and to examine the performance and properties of a wide range of h-type indices presented recently in the relevant literature. By exploratory factor analysis (EFA) we study the relationship between the h-index, its variants, and some standard bibliometric indicators of 26 physicists compiled from the Science Citation Index in the Web of Science.
---
paper_title: A RATIONAL, SUCCESSIVE G-INDEX APPLIED TO ECONOMICS DEPARTMENTS IN IRELAND
paper_content:
A rational, successive g-index is proposed, and applied to economics departments in Ireland. The successive g-index has greater discriminatory power than the successive h-index, and the rational index performs better still. The rational, successive g-index is also more robust to differences in department size.
---
paper_title: Development of bibliometric indicators for utility of research to users in society: Measurement of external knowledge transfer via publications in trade journals
paper_content:
The development of a set of bibliometric tools to contribute to the assessment and monitoring of the utility of university and non-university research institutes to society is described. Trade publications were weighted according to the utility of the journals for relevant non-scientific user groups. Furthermore, one indicator captures the extent to which a general or a specific type of audience is addressed. Results are shown for one university and one university department. In general, validation interviews show that the indicators provide a good first estimate of the potential effectiveness of knowledge transfer to practice and policy bodies by means of publications in trade journals.
---
paper_title: An index to quantify an individual's scientific research output
paper_content:
I propose the index $h$, defined as the number of papers with citation number higher or equal to $h$, as a useful index to characterize the scientific output of a researcher.
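A minimal Python sketch (not from the cited paper) of the computation this definition implies, given a researcher's per-paper citation counts:

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(ranked, start=1):
        if c >= rank:
            h = rank       # the rank-th most cited paper still has >= rank citations
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))  # 4: four papers with at least 4 citations each
```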
---
paper_title: Theory and practise of the g-index
paper_content:
The g-index is introduced as an improvement of the h-index of Hirsch to measure the global citation performance of a set of articles. If this set is ranked in decreasing order of the number of citations that they received, the g-index is the (unique) largest number such that the top g articles received (together) at least $g^2$ citations. We prove the unique existence of g for any set of articles and we have that g ≥ h. The general Lotkaian theory of the g-index is presented and we show that $g = \left(\frac{\alpha-1}{\alpha-2}\right)^{(\alpha-1)/\alpha} T^{1/\alpha}$, where $\alpha > 2$ is the Lotkaian exponent and where T denotes the total number of sources. We then present the g-index of the (still active) Price medallists for their complete careers up to 1972 and compare it with the h-index. It is shown that the g-index inherits all the good properties of the h-index and, in addition, better takes into account the citation scores of the top articles. This yields a better distinction between and order of the scientists from the point of view of visibility.
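A minimal sketch of the basic g-index computation as defined above, with g capped at the number of papers in the set (variants that let g exceed the paper count are not covered):

```python
def g_index(citations):
    """Largest g such that the g most cited papers together have at least g^2 citations."""
    ranked = sorted(citations, reverse=True)
    total, g = 0, 0
    for rank, c in enumerate(ranked, start=1):
        total += c                     # cumulative citations of the top `rank` papers
        if total >= rank * rank:
            g = rank
    return g

print(g_index([10, 8, 5, 4, 3]))  # top 5 papers collect 30 >= 25 citations, so g = 5
```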
---
paper_title: h-Index : A review focused in its variants , computation and standardization for different scientific fields
paper_content:
The h-index and some related bibliometric indices have received a lot of attention from the scientific community in the last few years due to some of their good properties (easiness of computation, balance between quantity of publications and their impact and so on). Many different indicators have been developed in order to extend and overcome the drawbacks of the original Hirsch proposal. In this contribution we present a comprehensive review on the h-index and related indicators field. From the initial h-index proposal we study their main advantages, drawbacks and the main applications that we can find in the literature. A description of many of the h-related indices that have been developed along with their main characteristics and some of the works that analyze and compare them are presented. We also review the most up to date standardization studies that allow a fair comparison by means of the h-index among scientists from different research areas and finally, some works that analyze the computation of the h-index and related indices by using different citation databases (ISI Citation Indexes, Google Scholar and Scopus) are introduced.
---
paper_title: Scaling the h-index for different scientific ISI fields
paper_content:
We propose a simple way to put in a common scale the h values of researchers working in different scientific ISI fields, so that the foreseeable misuse of this index for inter-areas comparison might be prevented, or at least, alleviated.
---
paper_title: Exploratory factor analysis for the Hirsch index, 17 h-type variants, and some traditional bibliometric indicators
paper_content:
The purpose of this article is to come up with a valid categorization and to examine the performance and properties of a wide range of h-type indices presented recently in the relevant literature. By exploratory factor analysis (EFA) we study the relationship between the h-index, its variants, and some standard bibliometric indicators of 26 physicists compiled from the Science Citation Index in the Web of Science.
---
paper_title: A multilevel meta-analysis of studies reporting correlations between the h index and 37 different h index variants
paper_content:
This paper presents the first meta-analysis of studies that computed correlations between the h index and variants of the h index (such as the g index; in total 37 different variants) that have been proposed and discussed in the literature. A high correlation between the h index and its variants would indicate that the h index variants hardly provide added information to the h index. This meta-analysis included 135 correlation coefficients from 32 studies. The studies were based on a total sample size of N=9005; on average, each study had a sample size of n=257. The results of a three-level cross-classified mixed-effects meta-analysis show a high correlation between the h index and its variants: Depending on the model, the mean correlation coefficient varies between .8 and .9. This means that there is redundancy between most of the h index variants and the h index. There is a statistically significant study-to-study variation of the correlation coefficients in the information they yield. The lowest correlation coefficients with the h index are found for the h index variants MII and m index. Hence, these h index variants make a non-redundant contribution to the h index.
---
paper_title: Are there better indices for evaluation purposes than the h index? A comparison of nine different variants of the h index using data from biomedicine
paper_content:
In this study, we examined empirical results on the h index and its most important variants in order to determine whether the variants developed are associated with an incremental contribution for evaluation purposes. The results of a factor analysis using bibliographic data on postdoctoral researchers in biomedicine indicate that regarding the h index and its variants, we are dealing with two types of indices that load on one factor each. One type describes the most productive core of a scientist's output and gives the number of papers in that core. The other type of indices describes the impact of the papers in the core. Because an index for evaluative purposes is a useful yardstick for comparison among scientists if the index corresponds strongly with peer assessments, we calculated a logistic regression analysis with the two factors resulting from the factor analysis as independent variables and peer assessment of the postdoctoral researchers as the dependent variable. The results of the regression analysis show that peer assessments can be predicted better using the factor ‘impact of the productive core’ than using the factor ‘quantity of the productive core.’ © 2008 Wiley Periodicals, Inc.
---
paper_title: The e-Index, Complementing the h-Index for Excess Citations
paper_content:
BACKGROUND ::: The h-index has already been used by major citation databases to evaluate the academic performance of individual scientists. Although effective and simple, the h-index suffers from some drawbacks that limit its use in accurately and fairly comparing the scientific output of different researchers. These drawbacks include information loss and low resolution: the former refers to the fact that, in addition to the $h^2$ citations for papers in the h-core, excess citations are completely ignored, whereas the latter means that it is common for a group of researchers to have an identical h-index. ::: ::: METHODOLOGY/PRINCIPAL FINDINGS ::: To solve these problems, I here propose the e-index, where $e^2$ represents the ignored excess citations, in addition to the $h^2$ citations for h-core papers. Citation information can be completely depicted by using the h-index together with the e-index, which are independent of each other. Some other h-type indices, such as a and R, are h-dependent, have information redundancy with h, and therefore, when used together with h, mask the real differences in excess citations of different researchers. ::: ::: CONCLUSIONS/SIGNIFICANCE ::: Although simple, the e-index is a necessary h-index complement, especially for evaluating highly cited scientists or for precisely comparing the scientific output of a group of scientists having an identical h-index.
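A small sketch of the e-index as described above, assuming $e^2$ equals the citations collected by the h-core papers in excess of $h^2$:

```python
import math

def e_index(citations):
    """e = sqrt(excess citations of the h-core beyond h^2)."""
    ranked = sorted(citations, reverse=True)
    h = sum(1 for rank, c in enumerate(ranked, start=1) if c >= rank)
    core_citations = sum(ranked[:h])          # citations of the h most cited papers
    return math.sqrt(core_citations - h * h)

print(e_index([10, 8, 5, 4, 3]))  # h = 4, core = 27 citations, e = sqrt(27 - 16) ≈ 3.32
```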
---
paper_title: The h-index: Advantages, limitations and its relation with other bibliometric indicators at the micro level
paper_content:
The relationship of the h-index with other bibliometric indicators at the micro level is analysed for Spanish CSIC scientists in Natural Resources, using publications downloaded from the Web of Science (1994–2004). Different activity and impact indicators were obtained to describe the research performance of scientists in different dimensions, being the h-index located through factor analysis in a quantitative dimension highly correlated with the absolute number of publications and citations. The need to include the remaining dimensions in the analysis of research performance of scientists and the risks of relying only on the h-index are stressed. The hypothesis that the achievement of some highly visible but intermediate-productive authors might be underestimated when compared with other scientists by means of the h-index is tested.
---
paper_title: On the robustness of the h-index
paper_content:
The h-index (Hirsch, 2005) is robust, remaining relatively unaffected by errors in the long tails of the citations-rank distribution, such as typographic errors that short-change frequently cited articles and create bogus additional records. This robustness, and the ease with which h-indices can be verified, support the use of a Hirsch-type index over alternatives such as the journal impact factor. These merits of the h-index apply both to individuals and to journals.
---
paper_title: h-Index : A review focused in its variants , computation and standardization for different scientific fields
paper_content:
The h-index and some related bibliometric indices have received a lot of attention from the scientific community in the last few years due to some of their good properties (easiness of computation, balance between quantity of publications and their impact and so on). Many different indicators have been developed in order to extend and overcome the drawbacks of the original Hirsch proposal. In this contribution we present a comprehensive review on the h-index and related indicators field. From the initial h-index proposal we study their main advantages, drawbacks and the main applications that we can find in the literature. A description of many of the h-related indices that have been developed along with their main characteristics and some of the works that analyze and compare them are presented. We also review the most up to date standardization studies that allow a fair comparison by means of the h-index among scientists from different research areas and finally, some works that analyze the computation of the h-index and related indices by using different citation databases (ISI Citation Indexes, Google Scholar and Scopus) are introduced.
---
paper_title: Generalized Hirsch h-index for disclosing latent facts in citation networks
paper_content:
What is the value of a scientist and its impact upon the scientific thinking? How can we measure the prestige of a journal or a conference? The evaluation of the scientific work of a scientist and the estimation of the quality of a journal or conference has long attracted significant interest, due to the benefits by obtaining an unbiased and fair criterion. Although it appears to be simple, defining a quality metric is not an easy task. To overcome the disadvantages of the present metrics used for ranking scientists and journals, J. E. Hirsch proposed a pioneering metric, the now famous h-index. In this article we demonstrate several inefficiencies of this index and develop a pair of generalizations and effective variants of it to deal with scientist ranking and publication forum ranking. The new citation indices are able to disclose trendsetters in scientific research, as well as researchers that constantly shape their field with their influential work, no matter how old they are. We exhibit the effectiveness and the benefits of the new indices to unfold the full potential of the h-index, with extensive experimental results obtained from the DBLP, a widely known on-line digital library.
---
paper_title: h-index sequence and h-index matrix: Constructions and applications
paper_content:
The calculation of Hirsch's h-index ignores many details; therefore, a single h-index cannot reflect the different time spans over which scientists accumulate their papers and citations. In this study the h-index sequence and the h-index matrix are constructed, which supply the details missing from a single h-index, reveal different growth patterns and the underlying growth mechanism of the h-index, and make scientists of different scientific ages comparable.
---
paper_title: A made-to-measure indicator for cross-disciplinary bibliometric ranking of researchers performance
paper_content:
This paper presents and discusses a new bibliometric indicator of research performance, designed with the fundamental concern of enabling cross-disciplinary comparisons. The indicator, called x-index, compares a researcher's output to a reference set of research output from top researchers, identified in the journals where the researcher has published. It reflects publication quantity and quality, uses a moderately sized data set, and works with a more refined definition of scientific fields. x-index was developed to rank researchers in a scientific excellence award in the Faculty of Engineering of the University of Porto. The data set collected for the 2009 edition of the award is used to study the indicator's features and design choices, and provides the basis for a discussion of its advantages and limitations.
---
paper_title: Universality of citation distributions: towards an objective measure of scientific impact
paper_content:
We study the distributions of citations received by a single publication within several disciplines, spanning broad areas of science. We show that the probability that an article is cited c times has large variations between different disciplines, but all distributions are rescaled on a universal curve when the relative indicator $c_f = c/c_0$ is considered, where $c_0$ is the average number of citations per article for the discipline. In addition we show that the same universal behavior occurs when citation distributions of articles published in the same field, but in different years, are compared. These findings provide a strong validation of $c_f$ as an unbiased indicator for citation performance across disciplines and years. Based on this indicator, we introduce a generalization of the h index suitable for comparing scientists working in different fields.
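A trivial illustration of the rescaling step described above; the paper's generalized h-index built on $c_f$ is not reproduced here:

```python
def rescale_citations(citations, field_mean):
    """Relative indicator c_f = c / c_0, with c_0 the field's mean citations per paper."""
    return [c / field_mean for c in citations]

# 30 citations in a field averaging 10 per paper and 6 citations in a field averaging 2
# are equally visible in relative terms: c_f = 3.0 in both cases.
print(rescale_citations([30], 10), rescale_citations([6], 2))
```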
---
paper_title: The h-index: Advantages, limitations and its relation with other bibliometric indicators at the micro level
paper_content:
The relationship of the h-index with other bibliometric indicators at the micro level is analysed for Spanish CSIC scientists in Natural Resources, using publications downloaded from the Web of Science (1994–2004). Different activity and impact indicators were obtained to describe the research performance of scientists in different dimensions, being the h-index located through factor analysis in a quantitative dimension highly correlated with the absolute number of publications and citations. The need to include the remaining dimensions in the analysis of research performance of scientists and the risks of relying only on the h-index are stressed. The hypothesis that the achievement of some highly visible but intermediate-productive authors might be underestimated when compared with other scientists by means of the h-index is tested.
---
paper_title: The pure h-index: calculating an author’s h- index by taking co-authors into account
paper_content:
The article introduces a new Hirsch-type index for a scientist. This so-called pure h-index, denoted by $h_P$, takes the actual number of coauthors and the scientist's relative position in the byline into account. The transformation from h to $h_P$ can also be applied to the R-index, leading to the pure R-index, denoted as $R_P$. This index takes the number of collaborators, possibly the rank in the byline, and the actual number of citations into account.
---
paper_title: Is it possible to compare researchers with different scientific interests?
paper_content:
The number h of papers with at least h citations has been proposed to evaluate individuals' scientific research production. This index is robust in several ways but still strongly dependent on the research field. We propose a complementary index $h_I = h^2 / N_a^{(T)}$, with $N_a^{(T)}$ being the total number of authors in the considered h papers. A researcher with index $h_I$ has $h_I$ papers with at least $h_I$ citations each if he/she had published alone. We have obtained the rank plots of h and $h_I$ for four Brazilian scientific communities. In contrast with the h-index, the $h_I$ rank plots collapse into a single curve, allowing comparison among different research areas.
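A small sketch of the $h_I$ computation as given above, taking each paper as a (citations, number_of_authors) pair:

```python
def h_i_index(papers):
    """papers: list of (citations, n_authors) tuples.
    Returns h_I = h^2 / N_a, with N_a the total author count over the h core papers."""
    ranked = sorted(papers, key=lambda p: p[0], reverse=True)
    h = sum(1 for rank, (c, _) in enumerate(ranked, start=1) if c >= rank)
    total_authors = sum(n for _, n in ranked[:h])
    return h * h / total_authors if total_authors else 0.0

# h = 3 and the three core papers have 9 authors in total, so h_I = 9 / 9 = 1.0
print(h_i_index([(10, 3), (5, 3), (3, 3), (1, 2)]))
```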
---
paper_title: The w-index: A significant improvement of the h-index
paper_content:
I propose a new measure, the w-index, as a particularly simple and useful way to assess the integrated impact of a researcher's work, especially his or her excellent papers. The w-index can be defined as follows: If w of a researcher's papers have at least 10w citations each and the other papers have fewer than 10(w+1) citations, his/her w-index is w. It is a significant improvement of the h-index.
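Reading the definition operationally as an h-type rule with a tenfold threshold, a minimal sketch:

```python
def w_index(citations):
    """Largest w such that w papers have at least 10*w citations each."""
    ranked = sorted(citations, reverse=True)
    w = 0
    for rank, c in enumerate(ranked, start=1):
        if c >= 10 * rank:
            w = rank
        else:
            break
    return w

print(w_index([55, 34, 31, 8, 3]))  # three papers with at least 30 citations each -> w = 3
```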
---
paper_title: Theory and practise of the g-index
paper_content:
The g-index is introduced as an improvement of the h-index of Hirsch to measure the global citation performance of a set of articles. If this set is ranked in decreasing order of the number of citations that they received, the g-index is the (unique) largest number such that the top g articles received (together) at least $g^2$ citations. We prove the unique existence of g for any set of articles and we have that g ≥ h. The general Lotkaian theory of the g-index is presented and we show that $g = \left(\frac{\alpha-1}{\alpha-2}\right)^{(\alpha-1)/\alpha} T^{1/\alpha}$, where $\alpha > 2$ is the Lotkaian exponent and where T denotes the total number of sources. We then present the g-index of the (still active) Price medallists for their complete careers up to 1972 and compare it with the h-index. It is shown that the g-index inherits all the good properties of the h-index and, in addition, better takes into account the citation scores of the top articles. This yields a better distinction between and order of the scientists from the point of view of visibility.
---
paper_title: h-Index : A review focused in its variants , computation and standardization for different scientific fields
paper_content:
The h-index and some related bibliometric indices have received a lot of attention from the scientific community in the last few years due to some of their good properties (easiness of computation, balance between quantity of publications and their impact and so on). Many different indicators have been developed in order to extend and overcome the drawbacks of the original Hirsch proposal. In this contribution we present a comprehensive review on the h-index and related indicators field. From the initial h-index proposal we study their main advantages, drawbacks and the main applications that we can find in the literature. A description of many of the h-related indices that have been developed along with their main characteristics and some of the works that analyze and compare them are presented. We also review the most up to date standardization studies that allow a fair comparison by means of the h-index among scientists from different research areas and finally, some works that analyze the computation of the h-index and related indices by using different citation databases (ISI Citation Indexes, Google Scholar and Scopus) are introduced.
---
paper_title: The h-index and its alternatives: An application to the 100 most prolific economists
paper_content:
The h-index is a recent but already quite popular way of measuring research quality and quantity. However, it discounts highly-cited papers. The g-index corrects for this, but it is sensitive to the number of never-cited papers. Besides, h- or g-index-based rankings have a large number of ties. Therefore, this paper introduces two new indices, and tests their performance for the 100 most prolific economists. A researcher has a t-number (f-number) of t (f) if t (f) is the largest number for which it holds that she has t (f) publications for which the geometric (harmonic) average number of citations is at least t (f). The new indices overcome the shortcomings of the old indices.
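A sketch of the t-number and f-number as defined above, using the geometric and harmonic means from the Python standard library; prefixes that contain zero-citation papers are skipped to keep the means defined:

```python
from statistics import geometric_mean, harmonic_mean

def threshold_index(citations, mean_fn):
    """Largest k such that the k most cited papers have mean_fn(citations) >= k."""
    ranked = sorted(citations, reverse=True)
    best = 0
    for k in range(1, len(ranked) + 1):
        top = ranked[:k]
        if all(c > 0 for c in top) and mean_fn(top) >= k:
            best = k
    return best

cites = [40, 20, 10, 6, 2]
print(threshold_index(cites, geometric_mean))  # t-number
print(threshold_index(cites, harmonic_mean))   # f-number
```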
---
paper_title: The h-index: Advantages, limitations and its relation with other bibliometric indicators at the micro level
paper_content:
The relationship of the h-index with other bibliometric indicators at the micro level is analysed for Spanish CSIC scientists in Natural Resources, using publications downloaded from the Web of Science (1994–2004). Different activity and impact indicators were obtained to describe the research performance of scientists in different dimensions, being the h-index located through factor analysis in a quantitative dimension highly correlated with the absolute number of publications and citations. The need to include the remaining dimensions in the analysis of research performance of scientists and the risks of relying only on the h-index are stressed. The hypothesis that the achievement of some highly visible but intermediate-productive authors might be underestimated when compared with other scientists by means of the h-index is tested.
---
paper_title: Harmonic publication and citation counting: sharing authorship credit equitably – not equally, geometrically or arithmetically
paper_content:
Bibliometric counting methods need to be validated against perceived notions of authorship credit allocation, and standardized by rejecting methods with poor fit or questionable ethical implications. Harmonic counting meets these concerns by exhibiting a robust fit to previously published empirical data from medicine, psychology and chemistry, and by complying with three basic ethical criteria for the equitable sharing of authorship credit. Harmonic counting can also incorporate additional byline information about equal contribution, or the elevated status of a corresponding last author. By contrast, several previously proposed counting schemes from the bibliometric literature including arithmetic, geometric and fractional counting, do not fit the empirical data as well and do not consistently meet the ethical criteria. In conclusion, harmonic counting would seem to provide unrivalled accuracy, fairness and flexibility to the long overdue task of standardizing bibliometric allocation of publication and citation credit.
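A minimal sketch of the harmonic credit shares for a byline of n authors, without the extensions for equal contribution or an elevated corresponding last author:

```python
def harmonic_credit(n_authors):
    """Harmonic share of authorship credit per byline position (position 1 = first author)."""
    weights = [1.0 / i for i in range(1, n_authors + 1)]
    total = sum(weights)
    return [w / total for w in weights]

# Four authors: the first author receives 1 / (1 + 1/2 + 1/3 + 1/4) = 0.48 of the credit.
print([round(share, 2) for share in harmonic_credit(4)])  # [0.48, 0.24, 0.16, 0.12]
```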
---
paper_title: Usage impact factor: The effects of sample characteristics on usage-based impact metrics
paper_content:
There exist ample demonstrations that indicators of scholarly impact analogous to the citation-based ISI Impact Factor can be derived from usage data; however, so far, usage can practically be recorded only at the level of distinct information services. This leads to community-specific assessments of scholarly impact that are difficult to generalize to the global scholarly community. In contrast, the ISI Impact Factor is based on citation data and thereby represents the global community of scholarly authors. The objective of this study is to examine the effects of community characteristics on assessments of scholarly impact from usage. We define a journal Usage Impact Factor that mimics the definition of the Thomson Scientific ISI Impact Factor. Usage Impact Factor rankings are calculated on the basis of a large-scale usage dataset recorded by the linking servers of the California State University system from 2003 to 2005. The resulting journal rankings are then compared to the Thomson Scientific ISI Impact Factor that is used as a reference indicator of general impact. Our results indicate that the particular scientific and demographic characteristics of a discipline have a strong effect on resulting usage-based assessments of scholarly impact. In particular, we observed that as the number of graduate students and faculty increases in a particular discipline, Usage Impact Factor rankings will converge more strongly with the ISI Impact Factor. © 2008 Wiley Periodicals, Inc.
---
paper_title: Unethical practices in authorship of scientific papers
paper_content:
Over the past few decades, there has been an increase in the number of multi-author papers within scientific journals. This increase, in combination with the pressure to publish within academia, has precipitated various unethical authorship practices within biomedical research. These include dilution of authorship responsibility, ‘guest’, ‘pressured’ and ‘ghost’ authorship, and obfuscation of authorship credit within by-lines. Other authorship irregularities include divided and duplicate publication. This article discusses these problems and why the International Committee of Medical Journal Editors guidelines are failing to control them.
---
paper_title: Harmonic publication and citation counting: sharing authorship credit equitably – not equally, geometrically or arithmetically
paper_content:
Bibliometric counting methods need to be validated against perceived notions of authorship credit allocation, and standardized by rejecting methods with poor fit or questionable ethical implications. Harmonic counting meets these concerns by exhibiting a robust fit to previously published empirical data from medicine, psychology and chemistry, and by complying with three basic ethical criteria for the equitable sharing of authorship credit. Harmonic counting can also incorporate additional byline information about equal contribution, or the elevated status of a corresponding last author. By contrast, several previously proposed counting schemes from the bibliometric literature including arithmetic, geometric and fractional counting, do not fit the empirical data as well and do not consistently meet the ethical criteria. In conclusion, harmonic counting would seem to provide unrivalled accuracy, fairness and flexibility to the long overdue task of standardizing bibliometric allocation of publication and citation credit.
---
paper_title: Discovering author impact: A PageRank perspective
paper_content:
This article provides an alternative perspective for measuring author impact by applying PageRank algorithm to a coauthorship network. A weighted PageRank algorithm considering citation and coauthorship network topology is proposed. We test this algorithm under different damping factors by evaluating author impact in the informetrics research community. In addition, we also compare this weighted PageRank with the h-index, citation, and program committee (PC) membership of the International Society for Scientometrics and Informetrics (ISSI) conferences. Findings show that this weighted PageRank algorithm provides reliable results in measuring author impact.
---
|
Title: A Review of the Characteristics of 108 Author-Level Bibliometric Indicators
Section 1: Introduction
Description 1: Introduce the importance of bibliometric indicators for assessing researchers' reputations and the explosion in their use.
Section 2: Methodology
Description 2: Explain the selection of indicators, sources of supplementary information, and the criterion for analysis.
Section 3: Categories of Publication Indicators
Description 3: Describe the typology of publication and effect indicators used in the study.
Section 4: Judgement of Complexity
Description 4: Detail the complexity assessment criteria used for evaluating each indicator.
Section 5: Results
Description 5: Provide an overview of the identified indicators, their complexity scores, and the detailed results of the complexity analysis for each indicator.
Section 6: Overview of the Identified Indicators
Description 6: Summarize the identified indicators, including their complexity scores and the number of indicators in each category.
Section 7: Summary of Complexity Scores
Description 7: Discuss the overall complexity scores and their implications for end-user usability.
Section 8: Publication Count
Description 8: Present and discuss indicators related to the count of publications.
Section 9: Qualifying Output as Journal Impact
Description 9: Explore indicators that measure journal impact at the author-level.
Section 10: Effect of Output
Description 10: Detail indicators measuring the effect of output through citations and field normalization.
Section 11: Indicators that Rank the Publications in an Individual Portfolio
Description 11: Discuss indicators that rank the publications in an individual's portfolio, including h-dependent and h-independent indicators.
Section 12: Indicators of Impact Over Time
Description 12: Examine indicators assessing the impact of a researcher’s work over time, normalized to the portfolio and field.
Section 13: Discussion
Description 13: Reflect on challenges and implications of using multiple indicators, and the necessity of combining them for comprehensive assessment.
Section 14: Methodological Considerations
Description 14: Discuss the limitations of the study, methodological choices, and areas for future research.
Section 15: Conclusions
Description 15: Summarize the main findings, emphasizing the inadequacy of single indicators and recommending combinations of simple indicators for practical use.
|
A Review of Three Different Studies on Hidden Markov Models for Epigenetic Problems: A Computational Perspective
| 7 |
---
paper_title: An HMM approach to genome-wide identification of differential histone modification sites from ChIP-seq data
paper_content:
Motivation: Epigenetic modifications are one of the critical factors to regulate gene expression and genome function. Among different epigenetic modifications, the differential histone modification sites (DHMSs) are of great interest to study the dynamic nature of epigenetic and gene expression regulations among various cell types, stages or environmental responses. To capture the histone modifications at the whole-genome scale, ChIP-seq technology is becoming a robust and comprehensive approach. Thus the DHMSs are potentially identifiable by comparing two ChIP-seq libraries. However, this issue has received little attention in the literature. ::: ::: Results: Aiming at identifying DHMSs, we propose an approach called ChIPDiff for the genome-wide comparison of histone modification sites identified by ChIP-seq. Based on the observations of ChIP fragment counts, the proposed approach employs a hidden Markov model (HMM) to infer the states of histone modification changes at each genomic location. We evaluated the performance of ChIPDiff by comparing the H3K27me3 modification sites between mouse embryonic stem cell (ESC) and neural progenitor cell (NPC). We demonstrated that the H3K27me3 DHMSs identified by our approach are of high sensitivity, specificity and technical reproducibility. ChIPDiff was further applied to uncover the differential H3K4me3 and H3K36me3 sites between different cell states. Interesting biological discoveries were achieved from such comparison in our study. ::: ::: Availability: http://cmb.gis.a-star.edu.sg/ChIPSeq/tools.htm ::: ::: Contact:[email protected]; [email protected] ::: ::: Supplementary information: Supplementary methods and data are available at Bioinformatics online.
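To make the HMM machinery concrete, a generic three-state Viterbi decoder over per-bin emission scores is sketched below; the states, emission values and transition matrix are toy placeholders for illustration, not ChIPDiff's trained model:

```python
import numpy as np

def viterbi(log_emit, log_trans, log_init):
    """Most likely state path given log_emit[t, k] = log P(observation_t | state k)."""
    T, K = log_emit.shape
    dp = np.zeros((T, K))
    back = np.zeros((T, K), dtype=int)
    dp[0] = log_init + log_emit[0]
    for t in range(1, T):
        scores = dp[t - 1][:, None] + log_trans   # (K, K): previous state -> current state
        back[t] = scores.argmax(axis=0)
        dp[t] = scores.max(axis=0) + log_emit[t]
    path = [int(dp[-1].argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# Toy states: 0 = enriched in library A, 1 = no change, 2 = enriched in library B.
# Emissions are made-up likelihoods of the per-bin fragment-count ratio under each state.
log_emit = np.log(np.array([[0.7, 0.2, 0.1],
                            [0.6, 0.3, 0.1],
                            [0.1, 0.3, 0.6],
                            [0.1, 0.2, 0.7]]))
log_trans = np.log(np.full((3, 3), 0.1) + np.eye(3) * 0.7)  # sticky transitions
log_init = np.log(np.full(3, 1.0 / 3))
print(viterbi(log_emit, log_trans, log_init))  # [0, 0, 2, 2]
```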
---
paper_title: Biological Sequence Analysis: Probabilistic Models of Proteins and Nucleic Acids
paper_content:
Probabilistic models are becoming increasingly important in analyzing the huge amount of data being produced by large-scale DNA-sequencing efforts such as the Human Genome Project. For example, hidden Markov models are used for analyzing biological sequences, linguistic-grammar-based probabilistic models for identifying RNA secondary structure, and probabilistic evolutionary models for inferring phylogenies of sequences from different organisms. This book gives a unified, up-to-date and self-contained account, with a Bayesian slant, of such methods and, more generally, of probabilistic methods of sequence analysis. Written by an interdisciplinary team of authors, it is accessible to molecular biologists, computer scientists, and mathematicians with no formal knowledge of the other fields, and at the same time presents the state of the art in this new and important field.
---
paper_title: Efficient pairwise RNA structure prediction using probabilistic alignment constraints in Dynalign
paper_content:
Background ::: Joint alignment and secondary structure prediction of two RNA sequences can significantly improve the accuracy of the structural predictions. Methods addressing this problem, however, are forced to employ constraints that reduce computation by restricting the alignments and/or structures (i.e. folds) that are permissible. In this paper, a new methodology is presented for the purpose of establishing alignment constraints based on nucleotide alignment and insertion posterior probabilities. Using a hidden Markov model, posterior probabilities of alignment and insertion are computed for all possible pairings of nucleotide positions from the two sequences. These alignment and insertion posterior probabilities are additively combined to obtain probabilities of co-incidence for nucleotide position pairs. A suitable alignment constraint is obtained by thresholding the co-incidence probabilities. The constraint is integrated with Dynalign, a free energy minimization algorithm for joint alignment and secondary structure prediction. The resulting method is benchmarked against the previous version of Dynalign and against other programs for pairwise RNA structure prediction.
---
paper_title: An evolutionary method for learning HMM structure: prediction of protein secondary structure
paper_content:
Background: The prediction of the secondary structure of proteins is one of the most studied problems in bioinformatics. Despite their success in many problems of biological sequence analysis, Hidden Markov Models (HMMs) have not been used much for this problem, as the complexity of the task makes manual design of HMMs difficult. Therefore, we have developed a method for evolving the structure of HMMs automatically, using Genetic Algorithms (GAs). Results: In the GA procedure, populations of HMMs are assembled from biologically meaningful building blocks. Mutation and crossover operators were designed to explore the space of such Block-HMMs. After each step of the GA, the standard HMM estimation algorithm (the Baum-Welch algorithm) was used to update model parameters. The final HMM captures several features of protein sequence and structure, with its own HMM grammar. In contrast to neural network based predictors, the evolved HMM also calculates the probabilities associated with the predictions. We carefully examined the performance of the HMM based predictor, both under the multiple- and single-sequence condition. Conclusion: We have shown that the proposed evolutionary method can automatically design the topology of HMMs. The method reads the grammar of protein sequences and converts it into the grammar of an HMM. It improved previously suggested evolutionary methods and increased the prediction quality. Especially, it shows good performance under the single-sequence condition and provides probabilistic information on the prediction result. The protein secondary structure predictor using HMMs (P.S.HMM) is available online at http://www.binf.ku.dk/~won/pshmm.htm. It runs under the single-sequence condition.
---
paper_title: Modeling sequencing errors by combining Hidden Markov models
paper_content:
Among the largest resources for biological sequence data is the large amount of expressed sequence tags (ESTs) available in public and proprietary databases. ESTs provide information on transcripts but for technical reasons they often contain sequencing errors. Therefore, when analyzing EST sequences computationally, such errors must be taken into account. Earlier attempts to model error prone coding regions have shown good performance in detecting and predicting these while correcting sequencing errors using codon usage frequencies. In the research presented here, we improve the detection of translation start and stop sites by integrating a more complex mRNA model with codon usage bias based error correction into one hidden Markov model (HMM), thus generalizing this error correction approach to more complex HMMs. We show that our method maintains the performance in detecting coding sequences.
---
paper_title: The language of genes
paper_content:
Linguistic metaphors have been woven into the fabric of molecular biology since its inception. The determination of the human genome sequence has brought these metaphors to the forefront of the popular imagination, with the natural extension of the notion of DNA as language to that of the genome as the 'book of life'. But do these analogies go deeper and, if so, can the methods developed for analysing languages be applied to molecular biology? In fact, many techniques used in bioinformatics, even if developed independently, may be seen to be grounded in linguistics. Further interweaving of these fields will be instrumental in extending our understanding of the language of life.
---
paper_title: Methods of DNA methylation analysis
paper_content:
PURPOSE OF REVIEW ::: To provide guidance for investigators who are new to the field of DNA methylation analysis. ::: ::: ::: RECENT FINDINGS ::: Epigenetics is the study of mitotically heritable alterations in gene expression potential that are not mediated by changes in DNA sequence. Recently, it has become clear that nutrition can affect epigenetic mechanisms, causing long-term changes in gene expression. This review focuses on methods for studying the epigenetic mechanism DNA methylation. Recent advances include improvement in high-throughput methods to obtain quantitative data on locus-specific DNA methylation and development of various approaches to study DNA methylation on a genome-wide scale. ::: ::: ::: SUMMARY ::: No single method of DNA methylation analysis will be appropriate for every application. By understanding the type of information provided by, and the inherent potential for bias and artifact associated with, each method, investigators can select the method most appropriate for their specific research needs.
---
paper_title: Automatic generation of gene finders for eukaryotic species
paper_content:
Background: The number of sequenced eukaryotic genomes is rapidly increasing. This means that over time it will be hard to keep supplying customised gene finders for each genome. This calls for procedures to automatically generate species-specific gene finders and to re-train them as the quantity and quality of reliable gene annotation grows. Results: We present a procedure, Agene, that automatically generates a species-specific gene predictor from a set of reliable mRNA sequences and a genome. We apply a Hidden Markov model (HMM) that implements explicit length distribution modelling for all gene structure blocks using acyclic discrete phase type distributions. The state structure of each HMM is generated dynamically from an array of sub-models to include only gene features represented in the training set. Conclusion: Acyclic discrete phase type distributions are well suited to model sequence length distributions. The performance of each individual gene predictor on each individual genome is comparable to the best of the manually optimised species-specific gene finders. It is shown that species-specific gene finders are superior to gene finders trained on other species.
---
paper_title: A hidden Markov model for analyzing ChIP-chip experiments on genome tiling arrays and its application to p53 binding sequences
paper_content:
Motivation: Transcription factors (TFs) regulate gene expression by recognizing and binding to specific regulatory regions on the genome, which in higher eukaryotes can occur far away from the regulated genes. Recently, Affymetrix developed the high-density oligonucleotide arrays that tile all the non-repetitive sequences of the human genome at 35 bp resolution. This new array platform allows for the unbiased mapping of in vivo TF binding sequences (TFBSs) using Chromatin ImmunoPrecipitation followed by microarray experiments (ChIP-chip). The massive dataset generated from these experiments pose great challenges for data analysis. ::: ::: Results: We developed a fast, scalable and sensitive method to extract TFBSs from ChIP-chip experiments on genome tiling arrays. Our method takes advantage of tiling array data from many experiments to normalize and model the behavior of each individual probe, and identifies TFBSs using a hidden Markov model (HMM). When applied to the data of p53 ChIP-chip experiments from an earlier study, our method discovered many new high confidence p53 targets including all the regions verified by quantitative PCR. Using a de novo motif finding algorithm MDscan, we also recovered the p53 motif from our HMM identified p53 target regions. Furthermore, we found substantial p53 motif enrichment in these regions comparing with both genomic background and the TFBSs identified earlier. Several of the newly identified p53 TFBSs are in the promoter region of known genes or associated with previously characterized p53-responsive genes. ::: ::: Contact: [email protected] ::: ::: Supplementary information: Available at the following URL http://genome.dfci.harvard.edu/~xsliu/HMMTiling/index.html
---
paper_title: A sequence-based filtering method for ncRNA identification and its application to searching for riboswitch elements
paper_content:
Motivation: Recent studies have uncovered an "RNA world", in which non-coding RNA (ncRNA) sequences play a central role in the regulation of gene expression. Computational studies on ncRNA have been directed toward developing detection methods for ncRNAs. State-of-the-art methods for the problem, like covariance models, suffer from high computational cost, underscoring the need for efficient filtering approaches that can identify promising sequence segments and speed up the detection process. Results: In this paper we make several contributions toward this goal. First, we formalize the concept of a filter and provide figures of merit that allow comparison between filters. Second, we design efficient sequence-based filters that dominate the current state-of-the-art HMM filters. Third, we provide a new formulation of the covariance model that allows speeding up RNA alignment. We demonstrate the power of our approach on both synthetic data and real bacterial genomes. We then apply our algorithm to the detection of novel riboswitch elements from the whole bacterial and archaeal genomes. Our results point to a number of novel riboswitch candidates, and include genomes that were not previously known to contain riboswitches. Availability: The program is available upon request from the authors. Contact: [email protected]
---
paper_title: An integrated encyclopedia of DNA elements in the human genome
paper_content:
The human genome encodes the blueprint of life, but the function of the vast majority of its nearly three billion bases is unknown. The Encyclopedia of DNA Elements (ENCODE) project has systematically mapped regions of transcription, transcription factor association, chromatin structure and histone modification. These data enabled us to assign biochemical functions for 80% of the genome, in particular outside of the well-studied protein-coding regions. Many discovered candidate regulatory elements are physically associated with one another and with expressed genes, providing new insights into the mechanisms of gene regulation. The newly identified elements also show a statistical correspondence to sequence variants linked to human disease, and can thereby guide interpretation of this variation. Overall, the project provides new insights into the organization and regulation of our genes and genome, and is an expansive resource of functional annotations for biomedical research.
---
paper_title: Structural Alignment of RNAs Using Profile-csHMMs and Its Application to RNA Homology Search: Overview and New Results
paper_content:
Systematic research on noncoding RNAs (ncRNAs) has revealed that many ncRNAs are actively involved in various biological networks. Therefore, in order to fully understand the mechanisms of these networks, it is crucial to understand the roles of ncRNAs. Unfortunately, the annotation of ncRNA genes that give rise to functional RNA molecules has begun only recently, and it is far from being complete. Considering the huge amount of genome sequence data, we need efficient computational methods for finding ncRNA genes. One effective way of finding ncRNA genes is to look for regions that are similar to known ncRNA genes. As many ncRNAs have well-conserved secondary structures, we need statistical models that can represent such structures for this purpose. In this paper, we propose a new method for representing RNA sequence profiles and finding structural alignment of RNAs based on profile context-sensitive hidden Markov models (profile-csHMMs). Unlike existing models, the proposed approach can handle any kind of RNA secondary structures, including pseudoknots. We show that profile-csHMMs can provide an effective framework for the computational analysis of RNAs and the identification of ncRNA genes.
---
paper_title: Discovery and characterization of chromatin states for systematic annotation of the human genome
paper_content:
A plethora of epigenetic modifications have been described in the human genome and shown to play diverse roles in gene regulation, cellular differentiation and the onset of disease. Although individual modifications have been linked to the activity levels of various genetic functional elements, their combinatorial patterns are still unresolved and their potential for systematic de novo genome annotation remains untapped. Here, we use a multivariate Hidden Markov Model to reveal 'chromatin states' in human T cells, based on recurrent and spatially coherent combinations of chromatin marks. We define 51 distinct chromatin states, including promoter-associated, transcription-associated, active intergenic, large-scale repressed and repeat-associated states. Each chromatin state shows specific enrichments in functional annotations, sequence motifs and specific experimentally observed characteristics, suggesting distinct biological roles. This approach provides a complementary functional annotation of the human genome that reveals the genome-wide locations of diverse classes of epigenetic function.
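As a small illustration of the kind of emission model such a multivariate HMM can use, the sketch below scores a binarized mark vector under one state with independent Bernoulli emissions; the independence assumption and the 'promoter-like' frequencies are illustrative assumptions, not parameters taken from the paper:

```python
import numpy as np

def log_emission(marks, p_state):
    """Log probability of a binary chromatin-mark vector under one state,
    treating the marks as independent Bernoulli variables."""
    marks = np.asarray(marks, dtype=float)
    p = np.asarray(p_state, dtype=float)
    return float(np.sum(marks * np.log(p) + (1 - marks) * np.log(1 - p)))

# A hypothetical promoter-like state: high frequency for the first two marks, low for the third.
p_promoter = [0.9, 0.8, 0.05]
print(log_emission([1, 1, 0], p_promoter))  # well explained by this state (≈ -0.38)
print(log_emission([0, 0, 1], p_promoter))  # poorly explained (≈ -6.91)
```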
---
paper_title: Recent Progresses in the Linguistic Modeling of Biological Sequences Based on Formal Language Theory
paper_content:
Treating genomes just as languages raises the possibility of producing concise generalizations about information in biological sequences. Grammars used in this way would constitute a model of underlying biological processes or structures, and that grammars may, in fact, serve as an appropriate tool for theory formation. The increasing number of biological sequences that have been yielded further highlights a growing need for developing grammatical systems in bioinformatics. The intent of this review is therefore to list some bibliographic references regarding the recent progresses in the field of grammatical modeling of biological sequences. This review will also contain some sections to briefly introduce basic knowledge about formal language theory, such as the Chomsky hierarchy, for non-experts in computational linguistics, and to provide some helpful pointers to start a deeper investigation into this field.
---
paper_title: Advances in Brief Genome-wide Loss of Heterozygosity Analysis from Laser Capture Microdissected Prostate Cancer Using Single Nucleotide Polymorphic Allele (SNP) Arrays and a Novel Bioinformatics Platform dChipSNP 1,2
paper_content:
Oligonucleotide arrays that detect single nucleotide polymorphisms were used to generate genome-wide loss of heterozygosity (LOH) maps from laser capture microdissected paraffin-embedded samples using as little as 5 ng of DNA. The allele detection rate from such samples was comparable with that obtained with standard amounts of DNA prepared from frozen tissues. A novel informatics platform, dChipSNP, was used to automate the definition of statistically valid regions of LOH, assign LOH genotypes to prostate cancer samples, and organize by hierarchical clustering prostate cancers based on the pattern of LOH. This organizational strategy revealed apparently distinct genetic subsets of prostate cancer.
---
paper_title: Methods of DNA methylation analysis
paper_content:
PURPOSE OF REVIEW ::: To provide guidance for investigators who are new to the field of DNA methylation analysis. ::: ::: ::: RECENT FINDINGS ::: Epigenetics is the study of mitotically heritable alterations in gene expression potential that are not mediated by changes in DNA sequence. Recently, it has become clear that nutrition can affect epigenetic mechanisms, causing long-term changes in gene expression. This review focuses on methods for studying the epigenetic mechanism DNA methylation. Recent advances include improvement in high-throughput methods to obtain quantitative data on locus-specific DNA methylation and development of various approaches to study DNA methylation on a genome-wide scale. ::: ::: ::: SUMMARY ::: No single method of DNA methylation analysis will be appropriate for every application. By understanding the type of information provided by, and the inherent potential for bias and artifact associated with, each method, investigators can select the method most appropriate for their specific research needs.
---
paper_title: An HMM approach to genome-wide identification of differential histone modification sites from ChIP-seq data
paper_content:
Motivation: Epigenetic modifications are one of the critical factors to regulate gene expression and genome function. Among different epigenetic modifications, the differential histone modification sites (DHMSs) are of great interest to study the dynamic nature of epigenetic and gene expression regulations among various cell types, stages or environmental responses. To capture the histone modifications at the whole-genome scale, ChIP-seq technology is becoming a robust and comprehensive approach. Thus the DHMSs are potentially identifiable by comparing two ChIP-seq libraries. However, this issue has received little attention in the literature. ::: ::: Results: Aiming at identifying DHMSs, we propose an approach called ChIPDiff for the genome-wide comparison of histone modification sites identified by ChIP-seq. Based on the observations of ChIP fragment counts, the proposed approach employs a hidden Markov model (HMM) to infer the states of histone modification changes at each genomic location. We evaluated the performance of ChIPDiff by comparing the H3K27me3 modification sites between mouse embryonic stem cell (ESC) and neural progenitor cell (NPC). We demonstrated that the H3K27me3 DHMSs identified by our approach are of high sensitivity, specificity and technical reproducibility. ChIPDiff was further applied to uncover the differential H3K4me3 and H3K36me3 sites between different cell states. Interesting biological discoveries were achieved from such comparison in our study. ::: ::: Availability: http://cmb.gis.a-star.edu.sg/ChIPSeq/tools.htm ::: ::: Contact:[email protected]; [email protected] ::: ::: Supplementary information: Supplementary methods and data are available at Bioinformatics online.
---
paper_title: Advances in Brief Genome-wide Loss of Heterozygosity Analysis from Laser Capture Microdissected Prostate Cancer Using Single Nucleotide Polymorphic Allele (SNP) Arrays and a Novel Bioinformatics Platform dChipSNP 1,2
paper_content:
Oligonucleotide arrays that detect single nucleotide polymorphisms were used to generate genome-wide loss of heterozygosity (LOH) maps from laser capture microdissected paraffin-embedded samples using as little as 5 ng of DNA. The allele detection rate from such samples was comparable with that obtained with standard amounts of DNA prepared from frozen tissues. A novel informatics platform, dChipSNP, was used to automate the definition of statistically valid regions of LOH, assign LOH genotypes to prostate cancer samples, and organize by hierarchical clustering prostate cancers based on the pattern of LOH. This organizational strategy revealed apparently distinct genetic subsets of prostate cancer.
---
paper_title: The language of genes
paper_content:
Linguistic metaphors have been woven into the fabric of molecular biology since its inception. The determination of the human genome sequence has brought these metaphors to the forefront of the popular imagination, with the natural extension of the notion of DNA as language to that of the genome as the 'book of life'. But do these analogies go deeper and, if so, can the methods developed for analysing languages be applied to molecular biology? In fact, many techniques used in bioinformatics, even if developed independently, may be seen to be grounded in linguistics. Further interweaving of these fields will be instrumental in extending our understanding of the language of life.
---
paper_title: Additive inheritance of histone modifications in Arabidopsis thaliana intra-specific hybrids
paper_content:
Plant genomes are earmarked with defined patterns of chromatin marks. Little is known about the stability of these epigenomes when related, but distinct genomes are brought together by intra-species hybridization. Arabidopsis thaliana accessions and their reciprocal hybrids were used as a model system to investigate the dynamics of histone modification patterns. The genome-wide distribution of histone modifications H3K4me2 and H3K27me3 in the inbred parental accessions Col-0, C24 and Cvi and their hybrid offspring was compared by chromatin immunoprecipitation in combination with genome tiling array hybridization. The analysis revealed that, in addition to DNA sequence polymorphisms, chromatin modification variations exist among accessions of A. thaliana. The range of these variations was higher for H3K27me3 (typically a repressive mark) than for H3K4me2 (typically an active mark). H3K4me2 and H3K27me3 were rather stable in response to intra-species hybridization, with mainly additive inheritance in hybrid offspring. In conclusion, intra-species hybridization does not result in gross changes to chromatin modifications.
---
paper_title: Comparing genome-wide chromatin profiles using ChIP-chip or ChIP-seq
paper_content:
Motivation: ChIP-chip and ChIP-seq technologies provide genome-wide measurements of various types of chromatin marks at an unprecedented resolution. With ChIP samples collected from different tissue types and/or individuals, we can now begin to characterize stochastic or systematic changes in epigenetic patterns during development (intra-individual) or at the population level (inter-individual). This requires statistical methods that permit a simultaneous comparison of multiple ChIP samples on a global as well as locus-specific scale. Current analytical approaches are mainly geared toward single sample investigations, and therefore have limited applicability in this comparative setting. This shortcoming presents a bottleneck in biological interpretations of multiple sample data. ::: ::: Results: To address this limitation, we introduce a parametric classification approach for the simultaneous analysis of two (or more) ChIP samples. We consider several competing models that reflect alternative biological assumptions about the global distribution of the data. Inferences about locus-specific and genome-wide chromatin differences are reached through the estimation of multivariate mixtures. Parameter estimates are obtained using an incremental version of the Expectation–Maximization algorithm (IEM). We demonstrate efficient scalability and application to three very diverse ChIP-chip and ChIP-seq experiments. The proposed approach is evaluated against several published ChIP-chip and ChIP-seq software packages. We recommend its use as a first-pass algorithm to identify candidate regions in the epigenome, possibly followed by some type of second-pass algorithm to fine-tune detected peaks in accordance with biological or technological criteria. ::: ::: Availability: R source code is available at http://gbic.biol.rug.nl/supplementary/2009/ChromatinProfiles/ ::: ::: Access to Chip-seq data: GEO repository GSE17937 ::: ::: Contact: [email protected] ::: ::: Supplementary information:Supplementary data are available at Bioinformatics online.
---
paper_title: A hidden Markov model for analyzing ChIP-chip experiments on genome tiling arrays and its application to p53 binding sequences
paper_content:
Motivation: Transcription factors (TFs) regulate gene expression by recognizing and binding to specific regulatory regions on the genome, which in higher eukaryotes can occur far away from the regulated genes. Recently, Affymetrix developed the high-density oligonucleotide arrays that tile all the non-repetitive sequences of the human genome at 35 bp resolution. This new array platform allows for the unbiased mapping of in vivo TF binding sequences (TFBSs) using Chromatin ImmunoPrecipitation followed by microarray experiments (ChIP-chip). The massive dataset generated from these experiments pose great challenges for data analysis. ::: ::: Results: We developed a fast, scalable and sensitive method to extract TFBSs from ChIP-chip experiments on genome tiling arrays. Our method takes advantage of tiling array data from many experiments to normalize and model the behavior of each individual probe, and identifies TFBSs using a hidden Markov model (HMM). When applied to the data of p53 ChIP-chip experiments from an earlier study, our method discovered many new high confidence p53 targets including all the regions verified by quantitative PCR. Using a de novo motif finding algorithm MDscan, we also recovered the p53 motif from our HMM identified p53 target regions. Furthermore, we found substantial p53 motif enrichment in these regions comparing with both genomic background and the TFBSs identified earlier. Several of the newly identified p53 TFBSs are in the promoter region of known genes or associated with previously characterized p53-responsive genes. ::: ::: Contact: [email protected] ::: ::: Supplementary information: Available at the following URL http://genome.dfci.harvard.edu/~xsliu/HMMTiling/index.html
---
paper_title: ChIPmix: mixture model of regressions for two-color ChIP–chip analysis
paper_content:
Chromatin immunoprecipitation (ChIP) combined with DNA microarray is a high-throughput technology to investigate DNA-protein binding or chromatin/histone modifications. ChIP-chip data require adapted statistical methods in order to identify enriched regions. All methods already proposed are based on the analysis of the log ratio (Ip/Input). Nevertheless, the assumption that the log ratio is a pertinent quantity to assess the probe status is not always verified, and this leads to poor data interpretation. ::: Instead of working on the log ratio, we directly work with the Ip and Input signals of each probe by modeling the distribution of the Ip signal conditional on the Input signal. We propose a method named ChIPmix, based on a linear regression mixture model, to identify actual binding targets of the protein under study. Moreover, we are able to control the proportion of false positives. The efficiency of ChIPmix is illustrated on several datasets obtained from different organisms and hybridized either on tiling or promoter arrays. This validation shows that ChIPmix is suitable for any two-color array regardless of its density and provides promising results. ::: The ChIPmix method is implemented in R and is available at http://www.agroparistech.fr/mia/outil_A.html.
---
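The core idea here, regressing the IP signal on the Input signal and letting a two-component mixture separate enriched from non-enriched probes, is easy to prototype. The sketch below is an illustrative EM fit of a two-component mixture of simple linear regressions with Gaussian residuals; it is not the ChIPmix R implementation, and the initialisation and variable names are illustrative assumptions.

```python
import numpy as np

def mixture_of_regressions_em(x, y, n_iter=200, seed=0):
    """Fit y ~ a[j] + b[j] * x for two components j with mixing weights
    mix[j]; returns parameters and per-probe responsibilities gamma."""
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    rng = np.random.default_rng(seed)
    n = len(x)
    mix = np.array([0.5, 0.5])
    a = rng.normal(scale=0.1, size=2)         # intercepts
    b = np.array([1.0, 1.5])                  # slopes (second = "enriched")
    s2 = np.array([np.var(y), np.var(y)])     # residual variances
    for _ in range(n_iter):
        # E-step: posterior probability of each component for each probe.
        resid = y[None, :] - (a[:, None] + b[:, None] * x[None, :])
        logp = (np.log(mix)[:, None]
                - 0.5 * np.log(2 * np.pi * s2)[:, None]
                - 0.5 * resid ** 2 / s2[:, None])
        logp -= logp.max(axis=0)              # numerical stability
        gamma = np.exp(logp)
        gamma /= gamma.sum(axis=0)
        # M-step: weighted least squares per component.
        for j in range(2):
            w, W = gamma[j], gamma[j].sum()
            xm, ym = (w * x).sum() / W, (w * y).sum() / W
            b[j] = (w * (x - xm) * (y - ym)).sum() / (w * (x - xm) ** 2).sum()
            a[j] = ym - b[j] * xm
            s2[j] = (w * (y - a[j] - b[j] * x) ** 2).sum() / W
            mix[j] = W / n
    return mix, a, b, s2, gamma
```

A probe would then be declared a binding target when its responsibility for the steeper-slope component exceeds a chosen threshold, which is where the control of false positives mentioned in the abstract comes in.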
paper_title: TileMap: create chromosomal map of tiling array hybridizations
paper_content:
Motivation: Tiling array is a new type of microarray that can be used to survey genomic transcriptional activities and transcription factor binding sites at high resolution. The goal of this paper is to develop effective statistical tools to identify genomic loci that show transcriptional or protein binding patterns of interest. ::: ::: Results: A two-step approach is proposed and is implemented in TileMap. In the first step, a test-statistic is computed for each probe based on a hierarchical empirical Bayes model. In the second step, the test-statistics of probes within a genomic region are used to infer whether the region is of interest or not. Hierarchical empirical Bayes model shrinks variance estimates and increases sensitivity of the analysis. It allows complex multiple sample comparisons that are essential for the study of temporal and spatial patterns of hybridization across different experimental conditions. Neighboring probes are combined through a moving average method (MA) or a hidden Markov model (HMM). Unbalanced mixture subtraction is proposed to provide approximate estimates of false discovery rate for MA and model parameters for HMM. ::: ::: Availability: TileMap is freely available at http://biogibbs.stanford.edu/~jihk/TileMap/index.htm ::: ::: Contact: [email protected] ::: ::: Supplementary information:http://biogibbs.stanford.edu/~jihk/TileMap/index.htm (includes coloured versions of all figures)
---
paper_title: MeDIP-HMM: genome-wide identification of distinct DNA methylation states from high-density tiling arrays
paper_content:
Motivation: Methylation of cytosines in DNA is an important epigenetic mechanism involved in transcriptional regulation and preservation of genome integrity in a wide range of eukaryotes. Immunoprecipitation of methylated DNA followed by hybridization to genomic tiling arrays (MeDIP-chip) is a cost-effective and sensitive method for methylome analyses. However, existing bioinformatics methods only enable a binary classification into unmethylated and methylated genomic regions, which limit biological interpretations. Indeed, DNA methylation levels can vary substantially within a given DNA fragment depending on the number and degree of methylated cytosines. Therefore, a method for the identification of more than two methylation states is highly desirable. ::: ::: Results: Here, we present a three-state hidden Markov model (MeDIP-HMM) for analyzing MeDIP-chip data. MeDIP-HMM uses a higher-order state-transition process improving modeling of spatial dependencies between chromosomal regions, allows a simultaneous analysis of replicates and enables a differentiation between unmethylated, methylated and highly methylated genomic regions. We train MeDIP-HMM using a Bayesian Baum–Welch algorithm, integrating prior knowledge on methylation levels. We apply MeDIP-HMM to the analysis of the Arabidopsis root methylome and systematically investigate the benefit of using higher-order HMMs. Moreover, we also perform an in-depth comparison study with existing methods and demonstrate the value of using MeDIP-HMM by comparisons to current knowledge on the Arabidopsis methylome. We find that MeDIP-HMM is a fast and precise method for the analysis of methylome data, enabling the identification of distinct DNA methylation levels. Finally, we provide evidence for the general applicability of MeDIP-HMM by analyzing promoter DNA methylation data obtained for chicken. ::: ::: Availability: MeDIP-HMM is available as part of the open-source Java library Jstacs ( www.jstacs.de/index.php/MeDIP-HMM). Data files are available from the Jstacs website. ::: ::: Contact: [email protected] ::: ::: Supplementary information:Supplementary data are available at Bioinformatics online.
---
paper_title: Discovery and characterization of chromatin states for systematic annotation of the human genome
paper_content:
A plethora of epigenetic modifications have been described in the human genome and shown to play diverse roles in gene regulation, cellular differentiation and the onset of disease. Although individual modifications have been linked to the activity levels of various genetic functional elements, their combinatorial patterns are still unresolved and their potential for systematic de novo genome annotation remains untapped. Here, we use a multivariate Hidden Markov Model to reveal 'chromatin states' in human T cells, based on recurrent and spatially coherent combinations of chromatin marks. We define 51 distinct chromatin states, including promoter-associated, transcription-associated, active intergenic, large-scale repressed and repeat-associated states. Each chromatin state shows specific enrichments in functional annotations, sequence motifs and specific experimentally observed characteristics, suggesting distinct biological roles. This approach provides a complementary functional annotation of the human genome that reveals the genome-wide locations of diverse classes of epigenetic function.
---
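What makes this HMM multivariate is its emission model over many chromatin marks at once. A common way to write such a model, given here as an illustration rather than a restatement of the paper's exact likelihood, treats the binarized marks in each genomic bin as conditionally independent Bernoulli variables given the hidden chromatin state:

```latex
P\bigl(o_t \mid s\bigr) \;=\; \prod_{m=1}^{M} p_{s,m}^{\,o_{t,m}} \, \bigl(1 - p_{s,m}\bigr)^{1 - o_{t,m}},
\qquad o_{t,m} \in \{0, 1\},
```

where o_{t,m} indicates whether mark m is observed in bin t and p_{s,m} is the emission probability of mark m in state s. States are then recovered with standard forward-backward or Viterbi inference, and each learned state is interpreted through its enrichment profile, as the abstract describes.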
paper_title: Spatial Clustering of Multivariate Genomic and Epigenomic Information
paper_content:
The combination of fully sequenced genomes and new technologies for high-density arrays and ultra-rapid sequencing enables the mapping of gene-regulatory and epigenetic marks on a global scale. This new experimental methodology was recently applied to map multiple histone marks and genomic factors, characterizing patterns of genome organization and discovering interactions among processes of epigenetic reprogramming during cellular differentiation. The new data poses a significant computational challenge in both size and statistical heterogeneity. Understanding it collectively and without bias remains an open problem. Here we introduce spatial clustering - a new unsupervised clustering methodology for dissection of large, multi-track genomic and epigenomic data sets into a spatially organized set of distinct combinatorial behaviors. We develop a probabilistic algorithm that finds spatial clustering solutions by learning an HMM model and inferring the most likely genomic layout of clusters. Application of our methods to meta-analysis of combined ChIP-seq and ChIP-chip epigenomic datasets in mouse and human reveals known and novel patterns of local co-occurrence among histone modifications and related factors. Moreover, the model weaves together these local patterns into a coherent global model that reflects the higher-level organization of the epigenome. Spatial clustering constitutes a powerful and scalable analysis methodology for dissecting the even larger-scale genomic datasets that will soon become available.
---
paper_title: Mapping and analysis of chromatin state dynamics in nine human cell types
paper_content:
Chromatin profiling has emerged as a powerful means of genome annotation and detection of regulatory activity. The approach is especially well suited to the characterization of non-coding portions of the genome, which critically contribute to cellular phenotypes yet remain largely uncharted. Here we map nine chromatin marks across nine cell types to systematically characterize regulatory elements, their cell-type specificities and their functional interactions. Focusing on cell-type-specific patterns of promoters and enhancers, we define multicell activity profiles for chromatin state, gene expression, regulatory motif enrichment and regulator expression. We use correlations between these profiles to link enhancers to putative target genes, and predict the cell-type-specific activators and repressors that modulate them. The resulting annotations and regulatory predictions have implications for the interpretation of genome-wide association studies. Top-scoring disease single nucleotide polymorphisms are frequently positioned within enhancer elements specifically active in relevant cell types, and in some cases affect a motif instance for a predicted regulator, thus suggesting a mechanism for the association. Our study presents a general framework for deciphering cis-regulatory connections and their roles in disease.
---
|
Title: A Review of Three Different Studies on Hidden Markov Models for Epigenetic Problems: A Computational Perspective
Section 1: Introduction
Description 1: Describe the relevance of formal language theory in biological sequence analysis, the significance of HMMs, and an overview of the surveyed studies.
Section 2: HMMs and Their Design Issues
Description 2: Explain the structure and components of HMMs, their application in bioinformatics, and the differences in model designs for epigenetic studies.
Section 3: Different HMM Designs for Identifying DNA Methylation Patterns
Description 3: Outline the general design process for mapping epigenetic information and introduce the three major studies reviewed.
Section 4: Two-State HMMs to Differentiate Non-Enriched Genomic Regions from Enriched Ones
Description 4: Detail the study by Li et al. on using two-state HMM for identifying transcription factor binding sites and other related methods for differentiating enriched regions.
Section 5: Three-State HMMs for ChIP Analysis
Description 5: Discuss the study by Xu et al. on three-state HMMs for identifying differential histone modification sites and other methods extending this approach.
Section 6: Multiple-State and Multivariate HMMs for Analyzing Systematic State Dynamics of Human Cells
Description 6: Explain the approach by Ernst and Kellis utilizing multi-state multivariate HMMs for comprehensive analysis of chromatin state dynamics.
Section 7: Conclusion
Description 7: Summarize the key points reviewed, the advantages of using HMMs in epigenetic data analysis, and suggest future directions for research.
|
A Survey Paper on Voice over Internet Protocol (VOIP)
| 9 |
---
|
Title: A Survey Paper on Voice over Internet Protocol (VOIP)
Section 1: Introduction
Description 1: This section should present an overview and the importance of Voice over Internet Protocol (VoIP), including its basic principles and definition.
Section 2: Background and History
Description 2: This section should cover the historical development of VoIP technology, key milestones, and the evolution of the technology over time.
Section 3: Technical Architecture
Description 3: This section should describe the underlying technical architecture of VoIP, including protocols, frameworks, and network components involved in its operation.
Section 4: Key Technologies and Protocols
Description 4: This section should delve into the core technologies and protocols that enable VoIP, such as SIP, H.323, RTP, and others.
Section 5: Benefits and Applications
Description 5: This section should highlight the advantages of VoIP over traditional telephony and discuss various applications and use cases in different industries.
Section 6: Challenges and Limitations
Description 6: This section should discuss the technical and operational challenges associated with VoIP, including issues like latency, jitter, security, and quality of service.
Section 7: Security Issues and Solutions
Description 7: This section should focus on the security threats that VoIP faces and the countermeasures or solutions to mitigate these risks.
Section 8: Future Trends and Directions
Description 8: This section should speculate on future developments in VoIP technology, emerging trends, and the potential direction of research and innovation.
Section 9: Conclusion
Description 9: This section should summarize the key points discussed in the paper and offer final thoughts on the impact and future of VoIP.
|
Resilient Wireless Sensor Networks Using Topology Control: A Review
| 10 |
---
paper_title: Research on Key Technology and Applications for Internet of Things
paper_content:
The Internet of Things (IOT) has received increasing attention from academia, industry, and government all over the world. The concept of IOT and the architecture of IOT are discussed. The key technologies of IOT, including Radio Frequency Identification technology, Electronic Product Code technology, and ZigBee technology, are analyzed. A framework for digital agriculture applications based on IOT is proposed.
---
paper_title: The Internet of Things: A survey
paper_content:
This paper addresses the Internet of Things. Main enabling factor of this promising paradigm is the integration of several technologies and communications solutions. Identification and tracking technologies, wired and wireless sensor and actuator networks, enhanced communication protocols (shared with the Next Generation Internet), and distributed intelligence for smart objects are just the most relevant. As one can easily imagine, any serious contribution to the advance of the Internet of Things must necessarily be the result of synergetic activities conducted in different fields of knowledge, such as telecommunications, informatics, electronics and social science. In such a complex scenario, this survey is directed to those who want to approach this complex discipline and contribute to its development. Different visions of this Internet of Things paradigm are reported and enabling technologies reviewed. What emerges is that still major issues shall be faced by the research community. The most relevant among them are addressed in details.
---
paper_title: Wireless sensor networks: a survey
paper_content:
This paper describes the concept of sensor networks which has been made viable by the convergence of micro-electro-mechanical systems technology, wireless communications and digital electronics. First, the sensing tasks and the potential sensor networks applications are explored, and a review of factors influencing the design of sensor networks is provided. Then, the communication architecture for sensor networks is outlined, and the algorithms and protocols developed for each layer in the literature are explored. Open research issues for the realization of sensor networks are also discussed.
---
paper_title: Strategies and Techniques for Node Placement in Wireless Sensor Networks: A Survey
paper_content:
The major challenge in designing wireless sensor networks (WSNs) is the support of the functional, such as data latency, and the non-functional, such as data integrity, requirements while coping with the computation, energy and communication constraints. Careful node placement can be a very effective optimization means for achieving the desired design goals. In this paper, we report on the current state of the research on optimized node placement in WSNs. We highlight the issues, identify the various objectives and enumerate the different models and formulations. We categorize the placement strategies into static and dynamic depending on whether the optimization is performed at the time of deployment or while the network is operational, respectively. We further classify the published techniques based on the role that the node plays in the network and the primary performance objective considered. The paper also highlights open problems in this area of research.
---
paper_title: Resilience is more than availability
paper_content:
In applied sciences there is a tendency to rely on terminology that is either ill-defined or applied inconsistently across areas of research and application domains. Examples in information assurance include the terms resilience, robustness and survivability, where there exist subtle shades of meaning among researchers. These nuances can result in confusion and misinterpretations of goals and results, hampering communication and complicating collaboration. In this paper, we propose security-related definitions for these terms. Using this terminology, we argue that research in these areas must consider the functionality of the system holistically, beginning with a careful examination of what we actually want the system to do. We note that much of the published research focuses on a single aspect of a system -- availability -- as opposed to the system's ability to complete its function without disclosing confidential information or, to a lesser extent, with the correct output. Finally, we discuss ways in which researchers can explore resilience with respect to integrity, availability and confidentiality.
---
paper_title: Resilience and survivability in communication networks: Strategies, principles, and survey of disciplines
paper_content:
The Internet has become essential to all aspects of modern life, and thus the consequences of network disruption have become increasingly severe. It is widely recognised that the Internet is not sufficiently resilient, survivable, and dependable, and that significant research, development, and engineering is necessary to improve the situation. This paper provides an architectural framework for resilience and survivability in communication networks and provides a survey of the disciplines that resilience encompasses, along with significant past failures of the network infrastructure. A resilience strategy is presented to defend against, detect, and remediate challenges, a set of principles for designing resilient networks is presented, and techniques are described to analyse network resilience.
---
paper_title: Network resilience: a measure of network fault tolerance
paper_content:
A probabilistic measure of network fault tolerance expressed as the probability of a disconnection is proposed. Qualitative evaluation of this measure is presented. As expected, the single-node disconnection probability is the dominant factor irrespective of the topology under consideration. The authors derive an analytical approximation to the disconnection probability and verify it with a Monte Carlo simulation. On the basis of this model, the measures of network resilience and relative network resilience are proposed as probabilistic measures of network fault tolerance. These are used to evaluate the effects of the disconnection probability on the reliability of the system. >
---
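The disconnection-probability view of resilience proposed here is easy to reproduce numerically, which is useful when an analytical approximation is not at hand. The sketch below estimates, by Monte Carlo simulation, the probability that a topology becomes disconnected when every node fails independently with probability q; the random geometric graph in the usage lines is only a stand-in for a deployed WSN topology, not a model taken from the paper.

```python
import random
import networkx as nx

def disconnection_probability(G, q, trials=2000, seed=1):
    """Estimate P(surviving subgraph is disconnected) when each node
    fails independently with probability q (illustrative Monte Carlo,
    not the paper's analytical approximation)."""
    rng = random.Random(seed)
    disconnected = 0
    for _ in range(trials):
        survivors = [v for v in G.nodes if rng.random() > q]
        H = G.subgraph(survivors)
        if H.number_of_nodes() == 0 or not nx.is_connected(H):
            disconnected += 1
    return disconnected / trials

# Usage: a 100-node random geometric graph as a stand-in deployment.
G = nx.random_geometric_graph(100, 0.18, seed=7)
print(disconnection_probability(G, q=0.05))
```

Sweeping q and plotting the estimate yields the kind of resilience curve this line of work uses to compare candidate topologies.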
paper_title: Designing evolvable systems in a framework of robust, resilient and sustainable engineering analysis
paper_content:
''Evolvability'' is a concept normally associated with biology or ecology, but recent work on control of interdependent critical infrastructures reveals that network informatics systems can be designed to enable artificial, human systems to ''evolve''. To explicate this finding, we draw on an analogy between disruptive behavior and stable variation in the history of science and the adaptive patterns of robustness and resilience in engineered systems. We present a definition of an evolvable system in the context of a model of robust, resilient and sustainable systems. Our review of this context and standard definitions indicates that many analysts in engineering (as well as in biology and ecology) do not differentiate Resilience from Robustness. Neither do they differentiate overall dependable system adaptability from a multi-phase process that includes graceful degradation and time-constrained recovery, restabilization, and prevention of catastrophic failure. We analyze how systemic Robustness, Resilience, and Sustainability are related to Evolvability. Our analysis emphasizes the importance of Resilience as an adaptive capability that integrates Sustainability and Robustness to achieve Evolvability. This conceptual framework is used to discuss nine engineering principles that should frame systems thinking about developing evolvable systems. These principles are derived from Kevin Kelly's book: Out of Control, which describes living and artificial self-sustaining systems. Kelly's last chapter, ''The Nine Laws of God,'' distills nine principles that govern all life-like systems. We discuss how these principles could be applied to engineering evolvability in artificial systems. This discussion is motivated by a wide range of practical problems in engineered artificial systems. Our goal is to analyze a few examples of system designs across engineering disciplines to explicate a common framework for designing and testing artificial systems. This framework highlights managing increasing complexity, intentional evolution, and resistance to disruptive events. From this perspective, we envision a more imaginative and time-sensitive appreciation of the evolution and operation of ''reliable'' artificial systems. We conclude with a short discussion of two hypothetical examples of engineering evolvable systems in network-centric communications using Error Resilient Data Fusion (ERDF) and cognitive radio.
---
paper_title: Highly-resilient, energy-efficient multipath routing in wireless sensor networks
paper_content:
Previously proposed sensor network data dissemination schemes require periodic low-rate flooding of data in order to allow recovery from failure. We consider constructing two kinds of multipaths to enable energy efficient recovery from failure of the shortest path between source and sink. Disjoint multipath has been studied in the literature. We propose a novel braided multipath scheme, which results in several partially disjoint multipath schemes. We find that braided multipaths are a viable alternative for energy-efficient recovery from isolated and patterned failures.
---
paper_title: Graph-theoretic analysis of structured peer-to-peer systems: routing distances and fault resilience
paper_content:
This paper examines graph-theoretic properties of existing peer-to-peer architectures and proposes a new infrastructure based on optimal diameter de Bruijn graphs. Since generalized de Bruijn graphs possess very short average routing distances and high resilience to node failure, they are well suited for structured peer-to-peer networks. Using the example of Chord, CAN, and de Bruijn, we first study routing performance, graph expansion, and clustering properties of each graph. We then examine bisection width, path overlap, and several other properties that affect routing and resilience of peer-to-peer networks. Having confirmed that de Bruijn graphs offer the best diameter and highest connectivity among the existing peer-to-peer structures, we offer a very simple incremental building process that preserves optimal properties of de Bruijn graphs under uniform user joins/departures. We call the combined peer-to-peer architecture ODRI -- Optimal Diameter Routing Infrastructure.
---
paper_title: Conceptualizing and measuring resilience: A key to disaster loss reduction
paper_content:
A multidisciplinary research project has examined ways to improve resilience, which can be measured by the functionality of an infrastructure system after a disaster and also by the time it takes for a system to return to previous levels of performance. In this article two project leaders present the components and dimensions of resilience and the implications for disaster response strategies.
---
paper_title: Integrated coverage and connectivity configuration in wireless sensor networks
paper_content:
An effective approach for energy conservation in wireless sensor networks is scheduling sleep intervals for extraneous nodes, while the remaining nodes stay active to provide continuous service. For the sensor network to operate successfully, the active nodes must maintain both sensing coverage and network connectivity. Furthermore, the network must be able to configure itself to any feasible degrees of coverage and connectivity in order to support different applications and environments with diverse requirements. This paper presents the design and analysis of novel protocols that can dynamically configure a network to achieve guaranteed degrees of coverage and connectivity. This work differs from existing connectivity or coverage maintenance protocols in several key ways: 1) We present a Coverage Configuration Protocol (CCP) that can provide different degrees of coverage requested by applications. This flexibility allows the network to self-configure for a wide range of applications and (possibly dynamic) environments. 2) We provide a geometric analysis of the relationship between coverage and connectivity. This analysis yields key insights for treating coverage and connectivity in a unified framework: this is in sharp contrast to several existing approaches that address the two problems in isolation. 3) Finally, we integrate CCP with SPAN to provide both coverage and connectivity guarantees. We demonstrate the capability of our protocols to provide guaranteed coverage and connectivity configurations, through both geometric analysis and extensive simulations.
---
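The geometric relationship between coverage and connectivity analyzed here is commonly cited in the following form; the exact preconditions belong to the paper, so treat this as a summary rather than a verbatim statement: if the communication range R_c is at least twice the sensing range R_s, coverage of a convex region implies connectivity of the covering nodes, and the same condition carries degrees of coverage over to degrees of connectivity:

```latex
R_c \;\ge\; 2 R_s
\quad\Longrightarrow\quad
\bigl(\text{$K$-coverage of a convex region} \;\Rightarrow\; \text{$K$-connectivity of the active nodes}\bigr).
```

This is what allows a protocol such as CCP to reason about connectivity by maintaining coverage alone when the ranges satisfy the condition, and to fall back on an explicit connectivity mechanism (here, the integration with SPAN) otherwise.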
paper_title: On secure and reliable communications in wireless sensor networks: Towards k-connectivity under a random pairwise key predistribution scheme
paper_content:
We study the secure and reliable connectivity of wireless sensor networks. Security is assumed to be ensured by the random pairwise key predistribution scheme of Chan, Perrig, and Song, and unreliable wireless links are represented by independent on/off channels. Modeling the network by an intersection of a random K-out graph and an Erdős-Renyi graph, we present scaling conditions (on the number of nodes, the scheme parameter K, and the probability of a wireless channel being on) such that the resulting graph contains no nodes with degree less than k with high probability, when the number of nodes gets large. Results are given in the form of zero-one laws and are shown to improve the previous results by Yagan and Makowski on the absence of isolated nodes (i.e., absence of nodes with degree zero). Via simulations, the established zero-one laws are shown to hold also for the property of k-connectivity; i.e., the property that the graph remains connected despite the deletion of any k − 1 nodes or edges.
---
paper_title: On distributed fault-tolerant detection in wireless sensor networks
paper_content:
In this paper, we consider two important problems for distributed fault-tolerant detection in wireless sensor networks: 1) how to address both the noise-related measurement error and sensor fault simultaneously in fault-tolerant detection and 2) how to choose a proper neighborhood size n for a sensor node in fault correction such that the energy could be conserved. We propose a fault-tolerant detection scheme that explicitly introduces the sensor fault probability into the optimal event detection process. We mathematically show that the optimal detection error decreases exponentially with the increase of the neighborhood size. Experiments with both Bayesian and Neyman-Pearson approaches in simulated sensor networks demonstrate that the proposed algorithm is able to achieve better detection and better balance between detection accuracy and energy usage. Our work makes it possible to perform energy-efficient fault-tolerant detection in a wireless sensor network.
---
paper_title: Exact analysis of k-connectivity in secure sensor networks with unreliable links
paper_content:
The Eschenauer-Gligor (EG) random key predistribution scheme has been widely recognized as a typical approach to secure communications in wireless sensor networks (WSNs). However, there is a lack of precise probability analysis on the reliable connectivity of WSNs under the EG scheme. To address this, we rigorously derive the asymptotically exact probability of k-connectivity in WSNs employing the EG scheme with unreliable links represented by independent on/off channels, where k-connectivity ensures that the network remains connected despite the failure of any (k-1) sensors or links. Our analytical results are confirmed via numerical experiments, and they provide precise guidelines for the design of secure WSNs that exhibit a desired level of reliability against node and link failures.
---
paper_title: Designing secure and reliable wireless sensor networks under a pairwise key predistribution scheme
paper_content:
We investigate k-connectivity in secure wireless sensor networks under the random pairwise key predistribution scheme with unreliable links; a network is said to be k-connected if it remains connected despite the failure of any of its (k − 1) nodes or links. With wireless communication links modeled as independent on-off channels, this amounts to analyzing a random graph model formed by intersecting a random K-out graph and an Erdős-Renyi graph. We present conditions on how to scale the parameters of this intersection model so that the resulting graph is k-connected with probability approaching to one (resp. zero) as the number of nodes gets large. The resulting zero-one law is shown to improve and sharpen the previous result on the 1-connectivity of the same model. We also provide numerical results to support our analysis and show that even in the finite node regime, our results can provide useful guidelines for designing sensor networks that are secure and reliable.
---
paper_title: Towards k-connectivity of the random graph induced by a pairwise key predistribution scheme with unreliable links
paper_content:
We study the secure and reliable connectivity of wireless sensor networks. Security is assumed to be ensured by the random pairwise key predistribution scheme of Chan, Perrig, and Song, and unreliable wireless links are represented by independent ON/OFF channels. Modeling the network by an intersection of a random K-out graph and an Erdős–Renyi graph, we present scaling conditions (on the number of nodes n, the scheme parameter K, and the probability p of a wireless channel being on) such that the resulting graph contains no nodes with a degree less than k with high probability. Results are given in the form of zero–one laws with n getting large, and are shown to improve the previous results by Yagan and Makowski on the absence of isolated nodes (i.e., absence of nodes with degree zero) in the same model. Through simulations, the established zero–one laws are also shown to hold for the property of k-connectivity, i.e., the property that the graph remains connected despite the deletion of any k-1 nodes or edges.
---
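The random-graph model shared by these papers, a random K-out graph from the pairwise key predistribution intersected with an Erdős–Rényi graph of independent on/off links, is simple to simulate, and simulation is exactly how the authors check their asymptotic laws against finite networks. The sketch below draws one sample of the intersection model and tests minimum degree and k-connectivity; all parameter values in the usage lines are arbitrary.

```python
import random
import networkx as nx

def kout_intersect_er(n, K, p, seed=0):
    """One sample of a random K-out graph (every node picks K distinct
    partners, edges taken as undirected) thinned by independent on/off
    channels, each link surviving with probability p."""
    rng = random.Random(seed)
    edges = set()
    for u in range(n):
        others = [v for v in range(n) if v != u]
        for v in rng.sample(others, K):
            edges.add((min(u, v), max(u, v)))
    G = nx.Graph()
    G.add_nodes_from(range(n))
    for u, v in edges:
        if rng.random() < p:               # the wireless link is "on"
            G.add_edge(u, v)
    return G

# Usage: minimum degree and k-connectivity of one sample (varies with seed).
G = kout_intersect_er(n=200, K=5, p=0.6, seed=3)
k = 2
print(min(d for _, d in G.degree()) >= k, nx.node_connectivity(G) >= k)
```

Averaging such indicator outcomes over many seeds, while scaling K and p with n as in the theorems, reproduces the zero-one behaviour described in these abstracts.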
paper_title: On Topological Properties of Wireless Sensor Networks under the q-Composite Key Predistribution Scheme with On/Off Channels
paper_content:
The q-composite key predistribution scheme [1] is used prevalently for secure communications in large-scale wireless sensor networks (WSNs). Prior work [2]-[4] explores topological properties of WSNs employing the q-composite scheme for q = 1 with unreliable communication links modeled as independent on/off channels. In this paper, we investigate topological properties related to the node degree in WSNs operating under the q-composite scheme and the on/off channel model. Our results apply to general q and are stronger than those reported for the node degree in prior work even for the case of q being 1. Specifically, we show that the number of nodes with certain degree asymptotically converges in distribution to a Poisson random variable, present the asymptotic probability distribution for the minimum degree of the network, and establish the asymptotically exact probability for the property that the minimum degree is at least an arbitrary value. Numerical experiments confirm the validity of our analytical findings.
---
paper_title: Connectivity in Secure Wireless Sensor Networks under Transmission Constraints
paper_content:
In wireless sensor networks (WSNs), the Eschenauer-Gligor (EG) key pre-distribution scheme is a widely recognized way to secure communications. Although connectivity properties of secure WSNs with the EG scheme have been extensively investigated, few results address physical transmission constraints. These constraints reflect real-world implementations of WSNs in which two sensors have to be within a certain distance from each other to communicate. In this paper, we present zero-one laws for connectivity in WSNs employing the EG scheme under transmission constraints. These laws help specify the critical transmission ranges for connectivity. Our analytical findings are confirmed via numerical experiments. In addition to secure WSNs, our theoretical results are also applied to frequency hopping in wireless networks.
---
paper_title: THE MAXIMUM CONNECTIVITY OF A GRAPH
paper_content:
The paper solves the problem of the maximum connectivity of any graph with a given number of points and lines. In addition, the minimum connectivity, the maximum diameter, and the minimum diameter are obtained. Two unsolved problems concerning the distribution of the values of the connectivity and the diameter are included.
---
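The answer this classical paper provides is compact enough to quote, and it reads as a hard ceiling on how much fault tolerance a fixed link budget can buy in any topology, WSN backbones included: among all graphs with n nodes and e edges (e at least n - 1), the largest achievable vertex connectivity is

```latex
\kappa_{\max}(n, e) \;=\; \left\lfloor \frac{2e}{n} \right\rfloor ,
\qquad n - 1 \;\le\; e \;\le\; \binom{n}{2}.
```

For example, a backbone with n = 8 nodes and e = 12 links can be at most \lfloor 24/8 \rfloor = 3-connected, no matter how the links are arranged; Harary's constructions attain this bound.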
paper_title: Secure k-connectivity in wireless sensor networks under an on/off channel model
paper_content:
Random key predistribution scheme of Eschenauer and Gligor (EG) is a typical solution for ensuring secure communications in a wireless sensor network (WSN). Connectivity of the WSNs under this scheme has received much interest over the last decade, and most of the existing work is based on the assumption of unconstrained sensor-to-sensor communications. In this paper, we study the k-connectivity of WSNs under the EG scheme with physical link constraints; k-connectivity is defined as the property that the network remains connected despite the failure of any (k - 1) sensors. We use a simple communication model, where unreliable wireless links are modeled as independent on/off channels, and derive zero-one laws for the properties that i) the WSN is k-connected, and ii) each sensor is connected to at least k other sensors. These zero-one laws improve the previous results by Rybarczyk on the k-connectivity under a fully connected communication model. Moreover, under the on/off channel model, we provide a stronger form of the zero-one law for the 1-connectivity as compared to that given by Yagan.
---
paper_title: Deploying sensor networks with guaranteed capacity and fault tolerance
paper_content:
We consider the problem of deploying or repairing a sensor network to guarantee a specified level of multi-path connectivity (k-connectivity) between all nodes. Such a guarantee simultaneously provides fault tolerance against node failures and high capacity through multi-path routing. We design and analyze the first algorithms that place an almost-minimum number of additional sensors to augment an existing network into a k-connected network, for any desired parameter k. Our algorithms have provable guarantees on the quality of the solution. Specifically, we prove that the number of additional sensors is within a constant factor of the absolute minimum, for any fixed k. We have implemented greedy and distributed versions of this algorithm, and demonstrate in simulation that they produce high-quality placements for the additional sensors. We are also in the process of using our algorithms to deploy nodes in a physical sensor network using a mobile robot.
---
paper_title: Poly: A reliable and energy efficient topology control protocol for wireless sensor networks
paper_content:
Energy efficiency and reliability are the two important requirements for mission-critical wireless sensor networks. In the context of sensor topology control for routing and dissemination, Connected Dominating Set (CDS) based techniques proposed in prior literature provide the most promising efficiency and reliability. In a CDS-based topology control technique, a backbone - comprising a set of highly connected nodes - is formed which allows communication between any arbitrary pair of nodes in the network. In this paper, we show that formation of a polygon in the network provides a reliable and energy-efficient topology. Based on this observation, we propose Poly, a novel topology construction protocol based on the idea of polygons. We compare the performance of Poly with three prominent CDS-based topology construction protocols namely CDS-Rule K, Energy-efficient CDS (EECDS) and A3. Our simulation results demonstrate that Poly performs consistently better in terms of message overhead and other selected metrics. We also model the reliability of Poly and compare it with other CDS-based techniques to show that it achieves better connectivity under highly dynamic network topologies.
---
paper_title: An energy-efficient topology construction algorithm for wireless sensor networks
paper_content:
Topology management schemes have emerged as promising approaches for prolonging the lifetime of wireless sensor networks (WSNs). The connected dominating set (CDS) concept has also emerged as the most popular method for energy-efficient topology control in WSNs. A sparse CDS-based network topology is highly susceptible to partitioning, while a dense CDS leads to excessive energy consumption due to overlapped sensing areas. Therefore, finding an optimal-size CDS with which a good trade-off between the network lifetime and network coverage can be made is a crucial problem in CDS-based topology control. In this paper, a degree-constrained minimum-weight version of the CDS problem, seeking the load-balanced network topology with the maximum energy, is presented to model the energy-efficient topology control problem in WSNs. A learning automata-based heuristic is proposed for finding a near-optimal solution to the proxy-equivalent degree-constrained minimum-weight CDS problem in WSNs. A strong theorem is presented to show the convergence of the proposed algorithm. Superiority of the proposed topology control algorithm over the prominent existing methods is shown through simulation experiments in terms of the number of active nodes (network topology size), control message overhead, residual energy level, and network lifetime.
---
paper_title: Fast distributed algorithms for (weakly) connected dominating sets and linear-size skeletons
paper_content:
Motivated by routing issues in ad hoc networks, we present polylogarithmic-time distributed algorithms for two problems. Given a network, we first show how to compute connected and weakly connected dominating sets whose size is at most O(log Δ) times the optimum, Δ being the maximum degree of the input network. This is best-possible if NP is not contained in DTIME[n^{O(log log n)}] and if the processors are required to run in polynomial time. We then show how to construct dominating sets that have the above properties, as well as the "low stretch" property that any two adjacent nodes in the network have their dominators at a distance of at most O(log n) in the output network. (Given a dominating set S, a dominator of a vertex u is any v in S such that the distance between u and v is at most one.) We also show our time bounds to be essentially optimal.
---
paper_title: Impact of random failures and attacks on Poisson and power-law random networks
paper_content:
It appeared recently that the underlying degree distribution of networks may play a crucial role concerning their robustness. Empiric and analytic results have been obtained, based on asymptotic and mean-field approximations. Previous work insisted on the fact that power-law degree distributions induce high resilience to random failure but high sensitivity to attack strategies, while Poisson degree distributions are quite sensitive in both cases. Then, much work has been done to extend these results. ::: We aim here at studying in depth these results, their origin, and limitations. We review in detail previous contributions and give full proofs in a unified framework, and identify the approximations on which these results rely. We then present new results aimed at enlightening some important aspects. We also provide extensive rigorous experiments which help evaluate the relevance of the analytic results. ::: We reach the conclusion that, even if the basic results of the field are clearly true and important, they are in practice much less striking than generally thought. The differences between random failures and attacks are not so huge and can be explained with simple facts. Likewise, the differences in the behaviors induced by power-law and Poisson distributions are not as striking as often claimed.
---
paper_title: Fault-Tolerant Relay Node Placement in Heterogeneous Wireless Sensor Networks
paper_content:
Existing work on placing additional relay nodes in wireless sensor networks to improve network connectivity typically assumes homogeneous wireless sensor nodes with an identical transmission radius. In contrast, this paper addresses the problem of deploying relay nodes to provide fault-tolerance with higher network connectivity in heterogeneous wireless sensor networks, where sensor nodes possess different transmission radii. Depending on the level of desired fault-tolerance, such problems can be categorized as: (1) full fault-tolerance relay node placement, which aims to deploy a minimum number of relay nodes to establish k (k ≥ 1) vertex-disjoint paths between every pair of sensor and/or relay nodes; (2) partial fault-tolerance relay node placement, which aims to deploy a minimum number of relay nodes to establish k (k ≥ 1) vertex-disjoint paths only between every pair of sensor nodes. Due to the different transmission radii of sensor nodes, these problems are further complicated by the existence of two different kinds of communication paths in heterogeneous wireless sensor networks, namely two-way paths, along which wireless communications exist in both directions; and one-way paths, along which wireless communications exist in only one direction. Assuming that sensor nodes have different transmission radii, while relay nodes use the same transmission radius, this paper comprehensively analyzes the range of problems introduced by the different levels of fault-tolerance (full or partial) coupled with the different types of path (one-way or two-way). Since each of these problems is NP-hard, we develop O(σk^2)-approximation algorithms for both one-way and two-way partial fault-tolerance relay node placement, as well as O(σk^3)-approximation algorithms for both one-way and two-way full fault-tolerance relay node placement (σ is the best performance ratio of existing approximation algorithms for finding a minimum k-vertex connected spanning graph). To facilitate the applications in higher dimensions, we also extend these algorithms and derive their performance ratios in d-dimensional heterogeneous wireless sensor networks (d ≥ 3). Finally, heuristic implementations of these algorithms are evaluated via simulations.
---
paper_title: Approximation algorithms for connected dominating sets
paper_content:
The dominating set problem in graphs asks for a minimum size subset of vertices with the following property: each vertex is required to either be in the dominating set, or adjacent to at least one node in the dominating set. We focus on the question of finding a connected dominating set of minimum size, where the graph induced by vertices in the dominating set is required to be connected. This problem arises in network testing, as well as in wireless communication.
---
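Because several of the protocols surveyed in these references (CDS-Rule K, EECDS, A3, Poly) build their routing backbone on a connected dominating set, a small greedy construction helps fix ideas. The sketch below grows a CDS from a maximum-degree node by always adding the neighbour of the current backbone that newly dominates the most vertices; it is a generic illustrative heuristic, not the Guha-Khuller algorithm nor any of the named protocols, and it carries no approximation guarantee.

```python
import networkx as nx

def greedy_cds(G):
    """Greedy connected dominating set for a connected graph G:
    start at a maximum-degree node and repeatedly add the frontier
    vertex that dominates the most not-yet-dominated vertices."""
    root = max(G.nodes, key=G.degree)
    cds = {root}
    dominated = {root} | set(G[root])
    while len(dominated) < G.number_of_nodes():
        frontier = {v for u in cds for v in G[u]} - cds
        best = max(frontier, key=lambda v: len(set(G[v]) - dominated))
        cds.add(best)
        dominated |= {best} | set(G[best])
    return cds

# Usage on a random geometric graph standing in for a WSN topology.
G = nx.random_geometric_graph(80, 0.2, seed=4)
if nx.is_connected(G):
    backbone = greedy_cds(G)
    print(len(backbone),
          nx.is_dominating_set(G, backbone),
          nx.is_connected(G.subgraph(backbone)))
```

Energy-aware variants typically reweight the greedy criterion (residual energy, degree constraints) while keeping a similar growth structure.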
paper_title: Radio irregularity problem in wireless sensor networks: New experimental results
paper_content:
A key design issue in wireless networks is represented by the irregular and dynamic radio coverage at each node. This is especially true for wireless sensor networks, which usually employ low quality radio modules to reduce the cost. It results in irregularity in radio coverage and variations in packet reception in different directions. Due to its likely impact on the upper layer protocols, many services, such as localization, routing and others, needs to be resilient to the irregular and dynamic radio propagation, and to include mechanisms to deal with these problems. As such, accurate models of radio propagation patterns are important for protocol design and evaluation. In this paper, measurements of radio propagation patterns have been carried out using the motes themselves. With empirical data obtained from the Mica2 platforms we were able to observe and further quantify such phenomena. The results demonstrate that the radio pattern is largely random; however, radio signal attenuation varies along different direction, and more importantly, is time-varying while stationary.
---
paper_title: Impact of radio irregularity on wireless sensor networks
paper_content:
In this paper, we investigate the impact of radio irregularity on the communication performance in wireless sensor networks. Radio irregularity is a common phenomenon which arises from multiple factors, such as variance in RF sending power and different path losses depending on the direction of propagation. From our experiments, we discover that the variance in received signal strength is largely random; however, it exhibits a continuous change with incremental changes in direction. With empirical data obtained from the MICA2 platform, we establish a radio model for simulation, called the Radio Irregularity Model (RIM). This model is the first to bridge the discrepancy between spherical radio models used by simulators and the physical reality of radio signals. With this model, we are able to analyze the impact of radio irregularity on some of the well-known MAC and routing protocols. Our results show that radio irregularity has a significant impact on routing protocols, but a relatively small impact on MAC protocols. Finally, we propose six solutions to deal with radio irregularity. We evaluate two of them in detail. The results obtained from both the simulation and a running testbed demonstrate that our solutions greatly improve communication performance in the presence of radio irregularity.
---
paper_title: Finding Minimum Energy Disjoint Paths in Wireless Ad-Hoc Networks
paper_content:
We develop algorithms for finding minimum energy disjoint paths in an all-wireless network, for both the node and link-disjoint cases. Our major results include a novel polynomial time algorithm that optimally solves the minimum energy 2 link-disjoint paths problem, as well as a polynomial time algorithm for the minimum energy k node-disjoint paths problem. In addition, we present efficient heuristic algorithms for both problems. Our results show that link-disjoint paths consume substantially less energy than node-disjoint paths. We also found that the incremental energy of additional link-disjoint paths is decreasing. This finding is some what surprising due to the fact that in general networks additional paths are typically longer than the shortest path. However, in a wireless network, additional paths can be obtained at lower energy due to the broadcast nature of the wireless medium. Finally, we discuss issues regarding distributed implementation and present distributed versions of the optimal centralized algorithms presented in the paper.
---
paper_title: Fault tolerance measures for large-scale wireless sensor networks
paper_content:
Connectivity, primarily a graph-theoretic concept, helps define the fault tolerance of wireless sensor networks (WSNs) in the sense that it enables the sensors to communicate with each other so their sensed data can reach the sink. On the other hand, sensing coverage, an intrinsic architectural feature of WSNs plays an important role in meeting application-specific requirements, for example, to reliably extract relevant data about a sensed field. Sensing coverage and network connectivity are not quite orthogonal concepts. In fact, it has been proven that connectivity strongly depends on coverage and hence considerable attention has been paid to establish tighter connection between them although only loose lower bound on network connectivity of WSNs is known. In this article, we investigate connectivity based on the degree of sensing coverage by studying k-covered WSNs, where every location in the field is simultaneously covered (or sensed) by at least k sensors (property known as k-coverage, where k is the degree of coverage). We observe that to derive network connectivity of k-covered WSNs, it is necessary to compute the sensor spatial density required to guarantee k-coverage. More precisely, we propose to use a model, called the Reuleaux Triangle, to characterize k-coverage with the help of Helly's Theorem and the analysis of the intersection of sensing disks of k sensors. Using a deterministic approach, we show that the sensor spatial density to guarantee k-coverage of a convex field is proportional to k and inversely proportional to the sensing range of the sensors. We also prove that network connectivity of k-covered WSNs is higher than their sensing coverage k. Furthermore, we propose a new measure of fault tolerance for k-covered WSNs, called conditional fault tolerance, based on the concepts of conditional connectivity and forbidden faulty sensor set that includes all the neighbors of a given sensor. We prove that k-covered WSNs can sustain a large number of sensor failures provided that the faulty sensor set does not include a forbidden faulty sensor set.
---
paper_title: Power optimization in fault-tolerant topology control algorithms for wireless multi-hop networks
paper_content:
In ad hoc wireless networks, it is crucial to minimize power consumption while maintaining key network properties. This work studies power assignments of wireless devices that minimize power while maintaining k-fault tolerance. Specifically, we require all links established by this power setting be symmetric and form a k-vertex connected subgraph of the network graph. This problem is known to be NP-hard. We show current heuristic approaches can use arbitrarily more power than the optimal solution. Hence, we seek approximation algorithms for this problem. We present three approximation algorithms. The first algorithm gives an O(kalpha)-approximation where is the best approximation factor for the related problem in wired networks (the best alpha so far is O(log k)). With a more careful analysis, we show our second (slightly more complicated) algorithm is an O(k)-approximation. Our third algorithm assumes that the edge lengths of the network graph form a metric. In this case, we present simple and practical distributed algorithms for the cases of 2- and 3-connectivity with constant approximation factors. We generalize this algorithm to obtain an O(k2c+2)-approximation for general k-connectivity (2 les c les 4 is the power attenuation exponent). Finally, we show that these approximation algorithms compare favorably with existing heuristics. We note that all algorithms presented in this paper can be used to minimize power while maintaining -edge connectivity with guaranteed approximation factors. Recently, different set of authors used the notion of k-connectivity and the results of this paper to deal with the fault-tolerance issues for static wireless network settings.
---
paper_title: Robust topology control for indoor wireless sensor networks
paper_content:
Topology control can reduce power consumption and channel contention in wireless sensor networks by adjusting the transmission power. However, topology control for wireless sensor networks faces significant challenges, especially in indoor environments where wireless characteristics are extremely complex and dynamic. We first provide insights on the design of robust topology control schemes based on an empirical study in an office building. For example, our analysis shows that Received Signal Strength Indicator and Link Quality Indicator are not always robust indicators of Packet Reception Rate in indoor environments due to significant multi-path effects. We then present Adaptive and Robust Topology control (ART), a novel and practical topology control algorithm with several salient features: (1) ART is robust in indoor environments as it does not rely on simplifying assumptions about the wireless properties; (2) ART can adapt to variations in both link quality and contention; (3) ART introduces zero communication overhead for applications which already use acknowledgements. We have implemented ART as a topology layer in TinyOS 2.x. Our topology layer only adds 12 bytes of RAM per neighbor and 1.5 kilobytes of ROM, and requires minimal changes to upper-layer routing protocols. The advantages of ART have been demonstrated through empirical results on a 28-node indoor testbed.
---
paper_title: Efficient topology control for ad-hoc wireless networks with non-uniform transmission ranges
paper_content:
Wireless network topology control has drawn considerable attention recently. Priori arts assumed that the wireless ad hoc networks are modeled by unit disk graphs (UDG), i.e., two mobile hosts can communicate as long as their Euclidean distance is no more than a threshold. However, practically, the networks are never so perfect as unit disk graphs: the transmission ranges may vary due to various reasons such as the device differences, the network control necessity, and the perturbation of the transmission ranges even the transmission ranges are set as the same originally. Thus, we assume that each mobile host has its own transmission range. The networks are modeled by mutual inclusion graphs (MG), where two nodes are connected iff they are within the transmission range of each other. Previously, no method is known for topology control when the networks are modeled as mutual inclusion graphs.The paper proposes the first distributed mechanism to build a sparse power efficient network topology for ad hoc wireless networks with non-uniform transmission ranges. We first extend the Yao structure to build a spanner with a constant length and power stretch factor for mutual inclusion graph. We then propose two efficient localized algorithms to construct connected sparse network topologies. The first structure, called extended Yao-Yao, has node degree at most O(log γ), where γ = maxu maxuv∈MG ru/rv. The second structure, called extended Yao and Sink, has node degree bounded by O(log γ), and is a length and power spanner. The methods are based on a novel partition strategy of the space surrounded each mobile host. Both algorithms have communication cost O(n) under a local broadcasting communication model, where each message has O(log n) bits.
---
paper_title: Deploying sensor networks with guaranteed capacity and fault tolerance
paper_content:
We consider the problem of deploying or repairing a sensor network to guarantee a specified level of multi-path connectivity (k-connectivity) between all nodes. Such a guarantee simultaneously provides fault tolerance against node failures and high capacity through multi-path routing. We design and analyze the first algorithms that place an almost-minimum number of additional sensors to augment an existing network into a k-connected network, for any desired parameter k. Our algorithms have provable guarantees on the quality of the solution. Specifically, we prove that the number of additional sensors is within a constant factor of the absolute minimum, for any fixed k. We have implemented greedy and distributed versions of this algorithm, and demonstrate in simulation that they produce high-quality placements for the additional sensors. We are also in the process of using our algorithms to deploy nodes in a physical sensor network using a mobile robot.
---
paper_title: An effective approach for tolerating simultaneous failures in wireless sensor and actor networks
paper_content:
Wireless Sensor and Actor Networks (WSANs) engage mobile nodes with actuators to respond to certain events in inhospitable environments. The harsh surroundings make actors susceptible to failure. Tolerating occasional actor failure is very important to avoid degrading the performance. In some cases, multiple actors simultaneously fail which makes the recovery very challenging. In this paper, we propose a new recovery approach which can handle simultaneous failures in WSANs. The approach is based on ranking network nodes relevant to a pre-assigned root actor. Ranking creates a tree to be used for coordinating the recovery among nodes. To minimize the recovery overhead, the nodes are virtually grouped to clusters and each node is assigned a recovery weight as well as a nearby cluster member which serves as a gateway to lost nodes. Designation of cluster heads is based on the number of children in the recovery tree. The simulation results confirm the correctness and effectiveness of the proposed approach in restoring network connectivity and also show that on the average the incurred overhead is low compared to contemporary single-failure recovery approaches.
---
paper_title: Octopus: a fault-tolerant and efficient ad-hoc routing protocol
paper_content:
Mobile ad-hoc networks (MANETs) are failure-prone environments; it is common for mobile wireless nodes to intermittently disconnect from the network, e.g., due to signal blockage. This paper focuses on withstanding such failures in large MANETs: we present Octopus, a fault-tolerant and efficient position-based routing protocol. Fault-tolerance is achieved by employing redundancy, i.e., storing the location of each node at many other nodes, and by keeping frequently refreshed soft state. At the same time, Octopus achieves a low location update overhead by employing a novel aggregation technique, whereby a single packet updates the location of many nodes at many other nodes. Octopus is highly scalable: for a fixed node density, the number of location update packets sent does not grow with the network size. And when the density increases, the overhead drops. Thorough empirical evaluation using the ns2 simulator with up to 675 mobile nodes shows that Octopus achieves excellent fault-tolerance at a modest overhead: when all nodes intermittently disconnect and reconnect, Octopus achieves the same high reliability as when all nodes are constantly up.
---
paper_title: Impact of random failures and attacks on Poisson and power-law random networks
paper_content:
It appeared recently that the underlying degree distribution of networks may play a crucial role concerning their robustness. Empiric and analytic results have been obtained, based on asymptotic and mean-field approximations. Previous work insisted on the fact that power-law degree distributions induce high resilience to random failure but high sensitivity to attack strategies, while Poisson degree distributions are quite sensitive in both cases. Then, much work has been done to extend these results. ::: We aim here at studying in depth these results, their origin, and limitations. We review in detail previous contributions and give full proofs in a unified framework, and identify the approximations on which these results rely. We then present new results aimed at enlightening some important aspects. We also provide extensive rigorous experiments which help evaluate the relevance of the analytic results. ::: We reach the conclusion that, even if the basic results of the field are clearly true and important, they are in practice much less striking than generally thought. The differences between random failures and attacks are not so huge and can be explained with simple facts. Likewise, the differences in the behaviors induced by power-law and Poisson distributions are not as striking as often claimed.
---
paper_title: Localized and Energy-Efficient Topology Control in Wireless Sensor Networks Using Fuzzy-Logic Control Approaches
paper_content:
The sensor nodes in the Wireless Sensor Networks (WSNs) are prone to failures due to many reasons, for example, running out of battery or harsh environment deployment; therefore, the WSNs are expected to be able to maintain network connectivity and tolerate certain amount of node failures. By applying fuzzy-logic approach to control the network topology, this paper aims at improving the network connectivity and fault-tolerant capability in response to node failures, while taking into account that the control approach has to be localized and energy efficient. Two fuzzy controllers are proposed in this paper: one is Learning-based Fuzzy-logic Topology Control (LFTC), of which the fuzzy controller is learnt from a training data set; another one is Rules-based Fuzzy-logic Topology Control (RFTC), of which the fuzzy controller is obtained through designing if-then rules and membership functions. Both LFTC and RFTC do not rely on location information, and they are localized. Comparing them with other three representative algorithms (LTRT, List-based, and NONE) through extensive simulations, our two proposed fuzzy controllers have been proved to be very energy efficient to achieve desired node degree and improve the network connectivity when sensor nodes run out of battery or are subject to random attacks.
---
paper_title: Highly-resilient, energy-efficient multipath routing in wireless sensor networks
paper_content:
Previously proposed sensor network data dissemination schemes require periodic low-rate flooding of data in order to allow recovery from failure. We consider constructing two kinds of multipaths to enable energy efficient recovery from failure of the shortest path between source and sink. Disjoint multipath has been studied in the literature. We propose a novel braided multipath scheme, which results in several partially disjoint multipath schemes. We find that braided multipaths are a viable alternative for energy-efficient recovery from isolated and patterned failures.
---
paper_title: Fault tolerance measures for large-scale wireless sensor networks
paper_content:
Connectivity, primarily a graph-theoretic concept, helps define the fault tolerance of wireless sensor networks (WSNs) in the sense that it enables the sensors to communicate with each other so their sensed data can reach the sink. On the other hand, sensing coverage, an intrinsic architectural feature of WSNs plays an important role in meeting application-specific requirements, for example, to reliably extract relevant data about a sensed field. Sensing coverage and network connectivity are not quite orthogonal concepts. In fact, it has been proven that connectivity strongly depends on coverage and hence considerable attention has been paid to establish tighter connection between them although only loose lower bound on network connectivity of WSNs is known. In this article, we investigate connectivity based on the degree of sensing coverage by studying k-covered WSNs, where every location in the field is simultaneously covered (or sensed) by at least k sensors (property known as k-coverage, where k is the degree of coverage). We observe that to derive network connectivity of k-covered WSNs, it is necessary to compute the sensor spatial density required to guarantee k-coverage. More precisely, we propose to use a model, called the Reuleaux Triangle, to characterize k-coverage with the help of Helly's Theorem and the analysis of the intersection of sensing disks of k sensors. Using a deterministic approach, we show that the sensor spatial density to guarantee k-coverage of a convex field is proportional to k and inversely proportional to the sensing range of the sensors. We also prove that network connectivity of k-covered WSNs is higher than their sensing coverage k. Furthermore, we propose a new measure of fault tolerance for k-covered WSNs, called conditional fault tolerance, based on the concepts of conditional connectivity and forbidden faulty sensor set that includes all the neighbors of a given sensor. We prove that k-covered WSNs can sustain a large number of sensor failures provided that the faulty sensor set does not include a forbidden faulty sensor set.
---
paper_title: Recovery from multiple simultaneous failures in wireless sensor networks using minimum Steiner tree
paper_content:
In some applications, wireless sensor networks (WSNs) operate in very harsh environments and nodes become subject to increased risk of damage. Sometimes a WSN suffers from the simultaneous failure of multiple sensors and gets partitioned into disjoint segments. Restoring network connectivity in such a case is crucial in order to avoid negative effects on the application. Given that WSNs often operate unattended in remote areas, the recovery should be autonomous. This paper promotes an effective strategy for restoring the connectivity among these segments by populating the least number of relay nodes. Finding the optimal count and position of relay nodes is NP-hard and heuristics are thus pursued. We propose a Distributed algorithm for Optimized Relay node placement using Minimum Steiner tree (DORMS). Since in autonomously operating WSNs it is infeasible to perform a network-wide analysis to diagnose where segments are located, DORMS moves relay nodes from each segment toward the center of the deployment area. As soon as those relays become in range of each other, the partitioned segments resume operation. DORMS further model such initial inter-segment topology as Steiner tree in order to minimize the count of required relays. Disengaged relays can return to their respective segments to resume their pre-failure duties. We analyze DORMS mathematically and explain the beneficial aspects of the resulting topology with respect to connectivity, and traffic balance. The performance of DORMS is validated through extensive simulation experiments.
---
paper_title: Network resilience: a measure of network fault tolerance
paper_content:
A probabilistic measure of network fault tolerance expressed as the probability of a disconnection is proposed. Qualitative evaluation of this measure is presented. As expected, the single-node disconnection probability is the dominant factor irrespective of the topology under consideration. The authors derive an analytical approximation to the disconnection probability and verify it with a Monte Carlo simulation. On the basis of this model, the measures of network resilience and relative network resilience are proposed as probabilistic measures of network fault tolerance. These are used to evaluate the effects of the disconnection probability on the reliability of the system. >
---
paper_title: Impact of random failures and attacks on Poisson and power-law random networks
paper_content:
It appeared recently that the underlying degree distribution of networks may play a crucial role concerning their robustness. Empiric and analytic results have been obtained, based on asymptotic and mean-field approximations. Previous work insisted on the fact that power-law degree distributions induce high resilience to random failure but high sensitivity to attack strategies, while Poisson degree distributions are quite sensitive in both cases. Then, much work has been done to extend these results. ::: We aim here at studying in depth these results, their origin, and limitations. We review in detail previous contributions and give full proofs in a unified framework, and identify the approximations on which these results rely. We then present new results aimed at enlightening some important aspects. We also provide extensive rigorous experiments which help evaluate the relevance of the analytic results. ::: We reach the conclusion that, even if the basic results of the field are clearly true and important, they are in practice much less striking than generally thought. The differences between random failures and attacks are not so huge and can be explained with simple facts. Likewise, the differences in the behaviors induced by power-law and Poisson distributions are not as striking as often claimed.
---
paper_title: Modelling wireless challenges
paper_content:
A thorough understanding of the network behaviour when exposed to challenges is of paramount importance to construct a resilient MANET (mobile ad hoc network). However, modelling mobile and wireless networks as well as challenges against them is non-trivial due to dynamic and intermittent connectivity caused by channel fading and mobility of the nodes. We treat MANETs as time-varying graphs (TVGs) represented as a weighted adjacency matrix, in which the weights denote the link availability. We present how centrality-based attacks could affect network performance for different routing protocols. Furthermore, we model propagation loss models that represent realistic area-based challenges in wireless networks.
---
paper_title: Octopus: a fault-tolerant and efficient ad-hoc routing protocol
paper_content:
Mobile ad-hoc networks (MANETs) are failure-prone environments; it is common for mobile wireless nodes to intermittently disconnect from the network, e.g., due to signal blockage. This paper focuses on withstanding such failures in large MANETs: we present Octopus, a fault-tolerant and efficient position-based routing protocol. Fault-tolerance is achieved by employing redundancy, i.e., storing the location of each node at many other nodes, and by keeping frequently refreshed soft state. At the same time, Octopus achieves a low location update overhead by employing a novel aggregation technique, whereby a single packet updates the location of many nodes at many other nodes. Octopus is highly scalable: for a fixed node density, the number of location update packets sent does not grow with the network size. And when the density increases, the overhead drops. Thorough empirical evaluation using the ns2 simulator with up to 675 mobile nodes shows that Octopus achieves excellent fault-tolerance at a modest overhead: when all nodes intermittently disconnect and reconnect, Octopus achieves the same high reliability as when all nodes are constantly up.
---
paper_title: On distributed fault-tolerant detection in wireless sensor networks
paper_content:
In this paper, we consider two important problems for distributed fault-tolerant detection in wireless sensor networks: 1) how to address both the noise-related measurement error and sensor fault simultaneously in fault-tolerant detection and 2) how to choose a proper neighborhood size n for a sensor node in fault correction such that the energy could be conserved. We propose a fault-tolerant detection scheme that explicitly introduces the sensor fault probability into the optimal event detection process. We mathematically show that the optimal detection error decreases exponentially with the increase of the neighborhood size. Experiments with both Bayesian and Neyman-Pearson approaches in simulated sensor networks demonstrate that the proposed algorithm is able to achieve better detection and better balance between detection accuracy and energy usage. Our work makes it possible to perform energy-efficient fault-tolerant detection in a wireless sensor network.
---
paper_title: Sensor network data fault types
paper_content:
Little of the work in sensor network studies related to data quality has presented a detailed study of sensor faults and fault models. We provide a comprehensive look at sensor network data fault types and a unified basis for describing sensor faults backed up by real world deployment examples. We also identify several considerations one must take into account when developing a fault detection or diagnosis system. We suggest a broad framework of important considerations when developing a data fault detection system and discuss some assumptions that can be made in this context. Based upon experience and previous work we define a series of features to consider when modeling sensor data for either fault detection or fault correction. We define three main headings of types of features and explore the effect that each type has on a sensor. Then we list all common faults that we have observed in actual sensor network deployments. We provide an example of how one may use such a taxonomy of faults.
---
paper_title: Fault-tolerant target detection in sensor networks
paper_content:
Fault-tolerant target detection and localization is a challenging task in collaborative sensor networks. The paper introduces our exploratory work toward identifying a stationary target in sensor networks with faulty sensors. We explore both spatial and temporal dimensions for data aggregation to decrease the false alarm rate and improve the target position accuracy. To filter out extreme measurements, the median of all readings in the close neighborhood is used to approximate the local observation to the target. The sensor whose observation is a local maximum computes a position estimate at each epoch. Results from multiple epochs are combined to decrease the false alarm rate further and improve the target localization accuracy. Our algorithms have low computation and communication overheads. A simulation study demonstrates the validity and efficiency of our design.
---
paper_title: Localized fault-tolerant event boundary detection in sensor networks
paper_content:
This paper targets the identification of faulty sensors and detection of the reach of events in sensor networks with faulty sensors. Typical applications include the detection of the transportation front line of a contamination and the diagnosis of network health. We propose and analyze two novel algorithms for faulty sensor identification and fault-tolerant event boundary detection. These algorithms are purely localized and thus scale well to large sensor networks. Their computational overhead is low, since only simple numerical operations are involved. Simulation results indicate that these algorithms can clearly detect the event boundary and can identify faulty sensors with a high accuracy and a low false alarm rate when as many as 20% sensors become faulty. Our work is exploratory in that the proposed algorithms can accept any kind of scalar values as inputs, a dramatic improvement over existing works that take only 0/1 decision predicates. Therefore, our algorithms are generic. They can be applied as long as the "events" can be modelled by numerical numbers. Though designed for sensor networks, our algorithms can be applied to the outlier detection and regional data analysis in spatial data mining.
---
paper_title: Fault detection of wireless sensor networks
paper_content:
This paper presents a distributed fault detection algorithm for wireless sensor networks. Faulty sensor nodes are identified based on comparisons between neighboring nodes and dissemination of the decision made at each node. Time redundancy is used to tolerate transient faults in sensing and communication. To eliminate delay involved in time redundancy scheme a sliding window is employed with some storage for previous comparison results. Simulation results show that sensor nodes with permanent faults are identified with high accuracy for a wide range of fault rates, while most of the transient faults are tolerated with negligible performance degradation.
---
paper_title: Random coverage with guaranteed connectivity: joint scheduling for wireless sensor networks
paper_content:
Sensor scheduling plays a critical role for energy efficiency of wireless sensor networks. Traditional methods for sensor scheduling use either sensing coverage or network connectivity, but rarely both. In this paper, we deal with a challenging task: without accurate location information, how do we schedule sensor nodes to save energy and meet both constraints of sensing coverage and network connectivity? Our approach utilizes an integrated method that provides statistical sensing coverage and guaranteed network connectivity. We use random scheduling for sensing coverage and then turn on extra sensor nodes, if necessary, for network connectivity. Our method is totally distributed, is able to dynamically adjust sensing coverage with guaranteed network connectivity, and is resilient to time asynchrony. We present analytical results to disclose the relationship among node density, scheduling parameters, coverage quality, detection probability, and detection delay. Analytical and simulation results demonstrate the effectiveness of our joint scheduling method
---
paper_title: Distributed protocols for ensuring both coverage and connectivity of a wireless sensor network
paper_content:
Wireless sensor networks have attracted a lot of attention recently. Such environments may consist of many inexpensive nodes, each capable of collecting, storing, and processing environmental information, and communicating with neighboring nodes through wireless links. For a sensor network to operate successfully, sensors must maintain both sensing coverage and network connectivity. This issue has been studied in wang et al. [2003] and Zhang and Hou [2004a], both of which reach a similar conclusion that coverage can imply connectivity as long as sensors' communication ranges are no less than twice their sensing ranges. In this article, without relying on this strong assumption, we investigate the issue from a different angle and develop several necessary and sufficient conditions for ensuring coverage and connectivity of a sensor network. Hence, the results significantly generalize the results in Wang et al. [2003] and Zhang and Hou [2004a]. This work is also a significant extension of our earlier work [Huang and Tseng 2003; Huang et al. 2004], which addresses how to determine the level of coverage of a given sensor network but does not consider the network connectivity issue. Our work is the first work allowing an arbitrary relationship between sensing ranges and communication distances of sensor nodes. We develop decentralized solutions for determining, or even adjusting, the levels of coverage and connectivity of a given network. Adjusting levels of coverage and connectivity is necessary when sensors are overly deployed, and we approach this problem by putting sensors to sleep mode and tuning their transmission powers. This results in prolonged network lifetime.
---
paper_title: Integrated coverage and connectivity configuration in wireless sensor networks
paper_content:
An effective approach for energy conservation in wireless sensor networks is scheduling sleep intervals for extraneous nodes, while the remaining nodes stay active to provide continuous service. For the sensor network to operate successfully, the active nodes must maintain both sensing coverage and network connectivity. Furthermore, the network must be able to configure itself to any feasible degrees of coverage and connectivity in order to support different applications and environments with diverse requirements. This paper presents the design and analysis of novel protocols that can dynamically configure a network to achieve guaranteed degrees of coverage and connectivity. This work differs from existing connectivity or coverage maintenance protocols in several key ways: 1) We present a Coverage Configuration Protocol (CCP) that can provide different degrees of coverage requested by applications. This flexibility allows the network to self-configure for a wide range of applications and (possibly dynamic) environments. 2) We provide a geometric analysis of the relationship between coverage and connectivity. This analysis yields key insights for treating coverage and connectivity in a unified framework: this is in sharp contrast to several existing approaches that address the two problems in isolation. 3) Finally, we integrate CCP with SPAN to provide both coverage and connectivity guarantees. We demonstrate the capability of our protocols to provide guaranteed coverage and connectivity configurations, through both geometric analysis and extensive simulations.
---
paper_title: Coverage problems in sensor networks: A survey
paper_content:
Sensor networks, which consist of sensor nodes each capable of sensing environment and transmitting data, have lots of applications in battlefield surveillance, environmental monitoring, industrial diagnostics, etc. Coverage which is one of the most important performance metrics for sensor networks reflects how well a sensor field is monitored. Individual sensor coverage models are dependent on the sensing functions of different types of sensors, while network-wide sensing coverage is a collective performance measure for geographically distributed sensor nodes. This article surveys research progress made to address various coverage problems in sensor networks. We first provide discussions on sensor coverage models and design issues. The coverage problems in sensor networks can be classified into three categories according to the subject to be covered. We state the basic coverage problems in each category, and review representative solution approaches in the literature. We also provide comments and discussions on some extensions and variants of these basic coverage problems.
---
paper_title: Fault tolerance measures for large-scale wireless sensor networks
paper_content:
Connectivity, primarily a graph-theoretic concept, helps define the fault tolerance of wireless sensor networks (WSNs) in the sense that it enables the sensors to communicate with each other so their sensed data can reach the sink. On the other hand, sensing coverage, an intrinsic architectural feature of WSNs plays an important role in meeting application-specific requirements, for example, to reliably extract relevant data about a sensed field. Sensing coverage and network connectivity are not quite orthogonal concepts. In fact, it has been proven that connectivity strongly depends on coverage and hence considerable attention has been paid to establish tighter connection between them although only loose lower bound on network connectivity of WSNs is known. In this article, we investigate connectivity based on the degree of sensing coverage by studying k-covered WSNs, where every location in the field is simultaneously covered (or sensed) by at least k sensors (property known as k-coverage, where k is the degree of coverage). We observe that to derive network connectivity of k-covered WSNs, it is necessary to compute the sensor spatial density required to guarantee k-coverage. More precisely, we propose to use a model, called the Reuleaux Triangle, to characterize k-coverage with the help of Helly's Theorem and the analysis of the intersection of sensing disks of k sensors. Using a deterministic approach, we show that the sensor spatial density to guarantee k-coverage of a convex field is proportional to k and inversely proportional to the sensing range of the sensors. We also prove that network connectivity of k-covered WSNs is higher than their sensing coverage k. Furthermore, we propose a new measure of fault tolerance for k-covered WSNs, called conditional fault tolerance, based on the concepts of conditional connectivity and forbidden faulty sensor set that includes all the neighbors of a given sensor. We prove that k-covered WSNs can sustain a large number of sensor failures provided that the faulty sensor set does not include a forbidden faulty sensor set.
---
paper_title: Highly connected random geometric graphs
paper_content:
Let P be a Poisson process of intensity 1 in a square S"n of area n. We construct a random geometric graph G"n","k by joining each point of P to its k nearest neighbours. For many applications it is desirable that G"n","k is highly connected, that is, it remains connected even after the removal of a small number of its vertices. In this paper we relate the study of the s-connectivity of G"n","k to our previous work on the connectivity of G"n","k. Roughly speaking, we show that for s=o(logn), the threshold (in k) for s-connectivity is asymptotically the same as that for connectivity, so that, as we increase k, G"n","k becomes s-connected very shortly after it becomes connected.
---
paper_title: Connectivity properties of a random radio network
paper_content:
Pure and routing-algorithm-based connectivities of random radio networks are studied by theoretical analyses and computer simulation respectively. The 'magic number' of average neighbours of each station from connectivity point of view is obtained.
---
paper_title: Error and attack tolerance of complex networks
paper_content:
Many complex systems display a surprising degree of tolerance against errors. For example, relatively simple organisms grow, persist and reproduce despite drastic pharmaceutical or environmental interventions, an error tolerance attributed to the robustness of the underlying metabolic network1. Complex communication networks2 display a surprising degree of robustness: although key components regularly malfunction, local failures rarely lead to the loss of the global information-carrying ability of the network. The stability of these and other complex systems is often attributed to the redundant wiring of the functional web defined by the systems' components. Here we demonstrate that error tolerance is not shared by all redundant systems: it is displayed only by a class of inhomogeneously wired networks, called scale-free networks, which include the World-Wide Web3,4,5, the Internet6, social networks7 and cells8. We find that such networks display an unexpected degree of robustness, the ability of their nodes to communicate being unaffected even by unrealistically high failure rates. However, error tolerance comes at a high price in that these networks are extremely vulnerable to attacks (that is, to the selection and removal of a few nodes that play a vital role in maintaining the network's connectivity). Such error tolerance and attack vulnerability are generic properties of communication networks.
---
paper_title: Connectivity of the mutual k-nearest-neighbor graph in clustering and outlier detection
paper_content:
For multivariate data sets, we study the relationship between the connectivity of a mutual k-nearest-neighbor graph, and the presence of clustering structure and outliers in the data. A test for detection of clustering structure and outliers is proposed and its performance is evaluated in simulated data.
---
paper_title: Error and Attack Tolerance of Complex Networks
paper_content:
Communication/transportation systems are often subjected to failures and attacks. Here we represent such systems as networks and we study their ability to resist failures (attacks) simulated as the breakdown of a group of nodes of the network chosen at random (chosen accordingly to degree or load). We consider and compare the results for two different network topologies: the Erdos–Renyi random graph and the Barabasi–Albert scale-free network. We also discuss briefly a dynamical model recently proposed to take into account the dynamical redistribution of loads after the initial damage of a single node of the network.
---
paper_title: Network robustness and fragility: percolation on random graphs.
paper_content:
Recent work on the Internet, social networks, and the power grid has addressed the resilience of these networks to either random or targeted deletion of network nodes or links. Such deletions include, for example, the failure of Internet routers or power transmission lines. Percolation models on random graphs provide a simple representation of this process but have typically been limited to graphs with Poisson degree distribution at their vertices. Such graphs are quite unlike real-world networks, which often possess power-law or other highly skewed degree distributions. In this paper we study percolation on graphs with completely general degree distribution, giving exact solutions for a variety of cases, including site percolation, bond percolation, and models in which occupation probabilities depend on vertex degree. We discuss the application of our theory to the understanding of network resilience.
---
paper_title: Critical Sensor Density for Partial Connectivity in Large Area Wireless Sensor Networks
paper_content:
Assume sensor deployment follows the Poisson distribution. For a given partial connectivity requirement ρ, 0.5 < ρ < 1, we prove, for a hexagon model, that there exists a critical sensor density λ0, around which the probability that at least 100ρ% of sensors are connected in the network increases sharply from ε to 1-ε within a short interval of sensor density ρ. The location of ρ0 is at the sensor density where the above probability is about 1/2. We also extend the results to the disk model. Simulations are conducted to confirm the theoretical results.
---
paper_title: A Novel Topology Control Approach to Maintain the Node Degree in Dynamic Wireless Sensor Networks
paper_content:
Topology control is an important technique to improve the connectivity and the reliability of Wireless Sensor Networks (WSNs) by means of adjusting the communication range of wireless sensor nodes. In this paper, a novel Fuzzy-logic Topology Control (FTC) is proposed to achieve any desired average node degree by adaptively changing communication range, thus improving the network connectivity, which is the main target of FTC. FTC is a fully localized control algorithm, and does not rely on location information of neighbors. Instead of designing membership functions and if-then rules for fuzzy-logic controller, FTC is constructed from the training data set to facilitate the design process. FTC is proved to be accurate, stable and has short settling time. In order to compare it with other representative localized algorithms (NONE, FLSS, k-Neighbor and LTRT), FTC is evaluated through extensive simulations. The simulation results show that: firstly, similar to k-Neighbor algorithm, FTC is the best to achieve the desired average node degree as node density varies; secondly, FTC is comparable to FLSS and k-Neighbor in terms of energy-efficiency, but is better than LTRT and NONE; thirdly, FTC has the lowest average maximum communication range than other algorithms, which indicates that the most energy-consuming node in the network consumes the lowest power.
---
paper_title: Impact of random failures and attacks on Poisson and power-law random networks
paper_content:
It appeared recently that the underlying degree distribution of networks may play a crucial role concerning their robustness. Empiric and analytic results have been obtained, based on asymptotic and mean-field approximations. Previous work insisted on the fact that power-law degree distributions induce high resilience to random failure but high sensitivity to attack strategies, while Poisson degree distributions are quite sensitive in both cases. Then, much work has been done to extend these results. ::: We aim here at studying in depth these results, their origin, and limitations. We review in detail previous contributions and give full proofs in a unified framework, and identify the approximations on which these results rely. We then present new results aimed at enlightening some important aspects. We also provide extensive rigorous experiments which help evaluate the relevance of the analytic results. ::: We reach the conclusion that, even if the basic results of the field are clearly true and important, they are in practice much less striking than generally thought. The differences between random failures and attacks are not so huge and can be explained with simple facts. Likewise, the differences in the behaviors induced by power-law and Poisson distributions are not as striking as often claimed.
---
paper_title: The Japan earthquake: the impact on traffic and routing observed by a local ISP
paper_content:
The Great East Japan Earthquake and Tsunami on March 11, 2011, disrupted a significant part of communications infrastructures both within the country and in connectivity to the rest of the world. Nonetheless, many users, especially in the Tokyo area, reported experiences that voice networks did not work yet the Internet did. At a macro level, the Internet was impressively resilient to the disaster, aside from the areas directly hit by the quake and ensuing tsunami. However, little is known about how the Internet was running during this period. We investigate the impact of the disaster to one major Japanese Internet Service Provider (ISP) by looking at measurements of traffic volumes and routing data from within the ISP, as well as routing data from an external neighbor ISP. Although we can clearly see circuit failures and subsequent repairs within the ISP, surprisingly little disruption was observed from outside.
---
paper_title: Localized and Energy-Efficient Topology Control in Wireless Sensor Networks Using Fuzzy-Logic Control Approaches
paper_content:
The sensor nodes in the Wireless Sensor Networks (WSNs) are prone to failures due to many reasons, for example, running out of battery or harsh environment deployment; therefore, the WSNs are expected to be able to maintain network connectivity and tolerate certain amount of node failures. By applying fuzzy-logic approach to control the network topology, this paper aims at improving the network connectivity and fault-tolerant capability in response to node failures, while taking into account that the control approach has to be localized and energy efficient. Two fuzzy controllers are proposed in this paper: one is Learning-based Fuzzy-logic Topology Control (LFTC), of which the fuzzy controller is learnt from a training data set; another one is Rules-based Fuzzy-logic Topology Control (RFTC), of which the fuzzy controller is obtained through designing if-then rules and membership functions. Both LFTC and RFTC do not rely on location information, and they are localized. Comparing them with other three representative algorithms (LTRT, List-based, and NONE) through extensive simulations, our two proposed fuzzy controllers have been proved to be very energy efficient to achieve desired node degree and improve the network connectivity when sensor nodes run out of battery or are subject to random attacks.
---
paper_title: The Number of Neighbors Needed for Connectivity of Wireless Networks
paper_content:
Unlike wired networks, wireless networks do not come with links. Rather, links have to be fashioned out of the ether by nodes choosing neighbors to connect to. Moreover the location of the nodes may be random.The question that we resolve is: How many neighbors should each node be connected to in order that the overall network is connected in a multi-hop fashion? We show that in a network with n randomly placed nodes, each node should be connected to Θ(log n) nearest neighbors. If each node is connected to less than 0.074 log n nearest neighbors then the network is asymptotically disconnected with probability one as n increases, while if each node is connected to more than 5.1774 log n nearest neighbors then the network is asymptotically connected with probability approaching one as n increases. It appears that the critical constant may be close to one, but that remains an open problem.These results should be contrasted with some works in the 1970s and 1980s which suggested that the "magic number" of nearest neighbors should be six or eight.
---
paper_title: Deploying wireless sensors to achieve both coverage and connectivity
paper_content:
It is well-known that placing disks in the triangular lattice pattern is optimal for achieving full coverage on a plane. With the emergence of wireless sensor networks, however, it is now no longer enough to consider coverage alone when deploying a wireless sensor network; connectivity must also be con-sidered. While moderate loss in coverage can be tolerated by applications of wireless sensor networks, loss in connectivity can be fatal. Moreover, since sensors are subject to unanticipated failures after deployment, it is not enough to have a wireless sensor network just connected, it should be k-connected (for k > 1 ). In this paper, we propose an optimal deployment pattern to achieve both full coverage and 2-connectivity, and prove its optimality for all values of rc/rs, where rc is the communication radius, and rs is the sensing radius. We also prove the optimality of a previously proposed deployment pattern for achieving both full coverage and 1-connectivity, when rc/rs < √3 .Finally, we compare the efficiency of some popular regular deployment patterns such as the square grid and triangular lattice, in terms of the number of sensors needed to provide coverage and connectivity.
---
paper_title: Deploying sensor networks with guaranteed capacity and fault tolerance
paper_content:
We consider the problem of deploying or repairing a sensor network to guarantee a specified level of multi-path connectivity (k-connectivity) between all nodes. Such a guarantee simultaneously provides fault tolerance against node failures and high capacity through multi-path routing. We design and analyze the first algorithms that place an almost-minimum number of additional sensors to augment an existing network into a k-connected network, for any desired parameter k. Our algorithms have provable guarantees on the quality of the solution. Specifically, we prove that the number of additional sensors is within a constant factor of the absolute minimum, for any fixed k. We have implemented greedy and distributed versions of this algorithm, and demonstrate in simulation that they produce high-quality placements for the additional sensors. We are also in the process of using our algorithms to deploy nodes in a physical sensor network using a mobile robot.
---
paper_title: Integrated coverage and connectivity configuration in wireless sensor networks
paper_content:
An effective approach for energy conservation in wireless sensor networks is scheduling sleep intervals for extraneous nodes, while the remaining nodes stay active to provide continuous service. For the sensor network to operate successfully, the active nodes must maintain both sensing coverage and network connectivity. Furthermore, the network must be able to configure itself to any feasible degrees of coverage and connectivity in order to support different applications and environments with diverse requirements. This paper presents the design and analysis of novel protocols that can dynamically configure a network to achieve guaranteed degrees of coverage and connectivity. This work differs from existing connectivity or coverage maintenance protocols in several key ways: 1) We present a Coverage Configuration Protocol (CCP) that can provide different degrees of coverage requested by applications. This flexibility allows the network to self-configure for a wide range of applications and (possibly dynamic) environments. 2) We provide a geometric analysis of the relationship between coverage and connectivity. This analysis yields key insights for treating coverage and connectivity in a unified framework: this is in sharp contrast to several existing approaches that address the two problems in isolation. 3) Finally, we integrate CCP with SPAN to provide both coverage and connectivity guarantees. We demonstrate the capability of our protocols to provide guaranteed coverage and connectivity configurations, through both geometric analysis and extensive simulations.
---
paper_title: Strong minimum energy topology in wireless sensor networks: NPcompleteness and heuristics
paper_content:
Wireless sensor networks have recently attracted lots of research effort due to the wide range of applications. These networks must operate for months or years. However, the sensors are powered by battery, which may not be able to be recharged after they are deployed. Thus, energy-aware network management is extremely important. In this paper, we study the following problem: Given a set of sensors in the plane, assign transmit power to each sensor such that the induced topology containing only bidirectional links is strongly connected. This problem is significant in both theory and application. We prove its NP-completeness and propose two heuristics: power assignment based on minimum spanning tree (denoted by MST) and incremental power. We also show that the MST heuristic has a performance ratio of 2. Simulation study indicates that the performance of these two heuristics does not differ very much, but; on average, the incremental power heuristic is always better than MST.
---
paper_title: A Topology Reorganization Scheme for Reliable Communication in Underwater Wireless Sensor Networks Affected by Shadow Zones
paper_content:
Effective solutions should be devised to handle the effects of shadow zones in Underwater Wireless Sensor Networks (UWSNs). An adaptive topology reorganization scheme that maintains connectivity in multi-hop UWSNs affected by shadow zones has been developed in the context of two Spanish-funded research projects. A mathematical model has been proposed to find the optimal location for sensors with two objectives: the minimization of the transmission loss and the maintenance of network connectivity. The theoretical analysis and the numerical evaluations reveal that our scheme reduces the transmission loss under all propagation phenomena scenarios for all water depths in UWSNs and improves the signal-to-noise ratio.
---
paper_title: A fast distributed approximation algorithm for minimum spanning trees
paper_content:
We give a distributed algorithm that constructs a O(logn)- approximate minimum spanning tree (MST) in arbitrary networks. Our algorithm runs in time $\tilde{O}(D(G) + L(G,w))$ where L(G,w) is a parameter called the local shortest path diameter and D(G) is the (unweighted) diameter of the graph. Our algorithm is existentially optimal (up to polylogarithmic factors), i.e., there exists graphs which need Ω(D(G)+ L(G,w)) time to compute an H-approximation to the MST for any H ∈[1, Θ(logn)]. Our result also shows that there can be a significant time gap between exact and approximate MST computation: there exists graphs in which the running time of our approximation algorithm is exponentially faster than the time-optimal distributed algorithm that computes the MST. Finally, we show that our algorithm can be used to find an approximate MST in wireless networks and in random weighted networks in almost optimal $\tilde{O}(D(G))$ time.
---
paper_title: Fault tolerance measures for large-scale wireless sensor networks
paper_content:
Connectivity, primarily a graph-theoretic concept, helps define the fault tolerance of wireless sensor networks (WSNs) in the sense that it enables the sensors to communicate with each other so their sensed data can reach the sink. On the other hand, sensing coverage, an intrinsic architectural feature of WSNs plays an important role in meeting application-specific requirements, for example, to reliably extract relevant data about a sensed field. Sensing coverage and network connectivity are not quite orthogonal concepts. In fact, it has been proven that connectivity strongly depends on coverage and hence considerable attention has been paid to establish tighter connection between them although only loose lower bound on network connectivity of WSNs is known. In this article, we investigate connectivity based on the degree of sensing coverage by studying k-covered WSNs, where every location in the field is simultaneously covered (or sensed) by at least k sensors (property known as k-coverage, where k is the degree of coverage). We observe that to derive network connectivity of k-covered WSNs, it is necessary to compute the sensor spatial density required to guarantee k-coverage. More precisely, we propose to use a model, called the Reuleaux Triangle, to characterize k-coverage with the help of Helly's Theorem and the analysis of the intersection of sensing disks of k sensors. Using a deterministic approach, we show that the sensor spatial density to guarantee k-coverage of a convex field is proportional to k and inversely proportional to the sensing range of the sensors. We also prove that network connectivity of k-covered WSNs is higher than their sensing coverage k. Furthermore, we propose a new measure of fault tolerance for k-covered WSNs, called conditional fault tolerance, based on the concepts of conditional connectivity and forbidden faulty sensor set that includes all the neighbors of a given sensor. We prove that k-covered WSNs can sustain a large number of sensor failures provided that the faulty sensor set does not include a forbidden faulty sensor set.
---
paper_title: Optimal Sensor Location Design for Reliable Fault Detection in Presence of False Alarms
paper_content:
To improve fault detection reliability, sensor location should be designed according to an optimization criterion with constraints imposed by issues of detectability and identifiability. Reliability requires the minimization of undetectability and false alarm probability due to random factors on sensor readings, which is not only related with sensor readings but also affected by fault propagation. This paper introduces the reliability criteria expression based on the missed/false alarm probability of each sensor and system topology or connectivity derived from the directed graph. The algorithm for the optimization problem is presented as a heuristic procedure. Finally, a boiler system is illustrated using the proposed method.
---
paper_title: Efficient fault—tolerant routings in networks
paper_content:
We analyze the problem of constructing a network with a given number of nodes which has a fixed routing and which is highly fault tolerant. A construction is presented which forms a “product route graph” from two or more constituent “route graphs.” The analysis involves the surviving route graph, which consists of all nonfaulty nodes in the network with two nodes being connected by a directed edge iff the route from the first to the second is still intact after a set of component failures. The diameter of the surviving route graph is a measure of the worst-case performance degradation caused by the faults. The number of faults tolerated, the diameter, and the degree of the product graph are related in a simple way to the corresponding parameters of the constituent graphs. In addition, there is a “padding theorem” which allows one to add nodes to a graph and to extend a previous routing.
---
paper_title: Biconnectivity approximations and graph carvings
paper_content:
A spanning tree in a graph is the smallest connected spanning subgraph. Given a graph, how does one find the smallest (i.e., least number of edges) 2-connected spanning subgraph (connectivity refers to both edge and vertex connectivity, if not specified)? Unfortunately, the problem is known to be NP-hard. We consider the problem of finding an approximation to the smallest 2-connected subgraph, by an efficient algorithm. For 2-edge connectivity our algorithm guarantees a solution that is no more than 3/2 times the optimal. For 2-vertex connectivity our algorithm guarantees a solution that is no more than 5/3 times the optimal. The previous best approximation factor is 2 for each of these problems. The new algorithms (and their analyses) depend upon a structure called a carving of a graph, which is of independent interest. We show that approximating the optimal solution to within an additive constant is NP-hard as well. We also consider the case where the graph has edge weights. We show that an approximation factor of 2 is possible in polynomial time for finding a k-edge connected spanning subgraph. This improves an approximation factor of 3 for k = 2 due to [FJ81], and extends it for any k (with an increased running time though).
---
paper_title: A randomized linear-time algorithm to find minimum spanning trees
paper_content:
We present a randomized linear-time algorithm to find a minimum spanning tree in a connected graph with edge weights. The algorithm uses random sampling in combination with a recently discovered linear-time algorithm for verifying a minimum spanning tree. Our computational model is a unit-cost random-access machine with the restriction that the only operations allowed on edge weights are binary comparisons.
---
paper_title: Relay Placement for Higher Order Connectivity in Wireless Sensor Networks
paper_content:
Sensors typically use wireless transmitters to communicate with each other. However, sensors may be located in a way that they cannot even form a connected network (e.g, due to failures of some sensors, or loss of battery power). In this paper we consider the problem of adding the smallest number of additional (relay) nodes so that the induced communication graph is 2-connected. The problem is NP -hard. In this paper we develop O(1)-approximation algorithms that find close to optimal solutions in time O((kn)) for achieving k-edge connectivity of n nodes. The worst case approximation guarantee is 10, but the algorithm produces solutions that are far better than this bound suggests. We also consider extensions to higher dimensions, and the scheme that we develop for points in the plane, yields a bound of 2dMST where dMST is the maximum degree of a minimum-degree Minimum Spanning Tree in d dimensions using Euclidean metrics. In addition, our methods extend with the same approximation guarantees to a generalization when the locations of relays are required to avoid certain polygonal regions (obstacles). We also prove that if the sensors are uniformly and identically distributed in a unit square, the expected number of relay nodes required goes to zero as the number of sensors goes to infinity.
---
paper_title: Complete optimal deployment patterns for full-coverage and k-connectivity (k≤6) wireless sensor networks
paper_content:
In this paper, we propose deployment patterns to achieve full coverage and three-connectivity, and full coverage and five-connectivity under different ratios of sensor communication range (denoted by Rc) over sensing range (denoted by Rs) for wireless sensor networks (WSNs). We also discover that there exists a hexagon-based universally elemental pattern which can generate all known optimal patterns. The previously proposed Voronoi-based approach can not be applied to prove the optimality of the new patterns due to their special features. We propose a new deployment-polygon based methodology, and prove their optimality among regular patterns when Rc/Rs ≥ 1. We conjecture that our patterns are globally optimal to achieve full coverage and three-connectivity, and full coverage and five-connectivity, under all ranges of Rc/Rs. With these new results, the set of optimal patterns to achieve full coverage and k-connectivity (k≤6) is complete, for the first time.
---
paper_title: A reliable node-disjoint multipath routing with low overhead in wireless ad hoc networks
paper_content:
Wireless ad hoc networks are characterized by the use of wireless links with limited bandwidth, dynamically varying network topology and multi-hop connectivity. AODV and DSR are the two most widely studied on-demand ad hoc routing protocols. Previous work has shown some limitations of the two protocols: whenever there is a link break on the active route, each of the two routing protocols has to invoke a route discovery process. This leads to an increase in both delay and control overhead, as well as a decrease in packet delivery ratio. To alleviate these problems, we modify and extend AODV to include the path accumulation feature of DSR in route request/reply packets so that much lower route overhead is employed to discover multiple node-disjoint routing paths. The extended AODV is called Reliable Node-Disjoint Multipath Routing Protocol (NDMR), which has two novel aspects compared to the other on-demand multipath protocols: it reduces routing overhead dramatically and achieves multiple node-disjoint routing paths. Simulation results show that the performance of NDMR is much better than that of AODV and DSR.
---
paper_title: Resilience is more than availability
paper_content:
In applied sciences there is a tendency to rely on terminology that is either ill-defined or applied inconsistently across areas of research and application domains. Examples in information assurance include the terms resilience, robustness and survivability, where there exists subtle shades of meaning between researchers. These nuances can result in confusion and misinterpretations of goals and results, hampering communication and complicating collaboration. In this paper, we propose security-related definitions for these terms. Using this terminology, we argue that research in these areas must consider the functionality of the system holistically, beginning with a careful examination of what we actually want the system to do. We note that much of the published research focuses on a single aspect of a system -- availability -- as opposed to the system's ability to complete its function without disclosing confidential information or, to a lesser extent, with the correct output. Finally, we discuss ways in which researchers can explore resilience with respect to integrity, availability and confidentiality.
---
paper_title: Random coverage with guaranteed connectivity: joint scheduling for wireless sensor networks
paper_content:
Sensor scheduling plays a critical role for energy efficiency of wireless sensor networks. Traditional methods for sensor scheduling use either sensing coverage or network connectivity, but rarely both. In this paper, we deal with a challenging task: without accurate location information, how do we schedule sensor nodes to save energy and meet both constraints of sensing coverage and network connectivity? Our approach utilizes an integrated method that provides statistical sensing coverage and guaranteed network connectivity. We use random scheduling for sensing coverage and then turn on extra sensor nodes, if necessary, for network connectivity. Our method is totally distributed, is able to dynamically adjust sensing coverage with guaranteed network connectivity, and is resilient to time asynchrony. We present analytical results to disclose the relationship among node density, scheduling parameters, coverage quality, detection probability, and detection delay. Analytical and simulation results demonstrate the effectiveness of our joint scheduling method
---
paper_title: On k-coverage in a mostly sleeping sensor network
paper_content:
Sensor networks are often desired to last many times longer than the active lifetime of individual sensors. This is usually achieved by putting sensors to sleep for most of their lifetime. On the other hand, event monitoring applications require guaranteed k-coverage of the protected region at all times. As a result, determining the appropriate number of sensors to deploy that achieves both goals simultaneously becomes a challenging problem. In this paper, we consider three kinds of deployments for a sensor network on a unit square--a √n × √n grid, random uniform (for all n points), and Poisson (with density n). In all three deployments, each sensor is active with probability p, independently from the others. Then, we claim that the critical value of the function npπr²/log(np) is 1 for the event of k-coverage of every point. We also provide an upper bound on the window of this phase transition. Although the conditions for the three deployments are similar, we obtain sharper bounds for the random deployments than the grid deployment, which occurs due to the boundary condition. In this paper, we also provide corrections to previously published results. Finally, we use simulation to show the usefulness of our analysis in real deployment scenarios.
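The quoted threshold can be turned into a rough sizing rule: for a unit-area region, the critical sensing radius is the r at which npπr²/log(np) equals 1. The short Python snippet below solves that equality for r for a few made-up (n, p) pairs; it ignores the boundary effects and the k > 1 cases analyzed in the paper and is only an illustration of the formula.

import math

def critical_radius(n, p):
    """Radius r at which n*p*pi*r^2 / log(n*p) == 1 for a unit-area region.

    Illustrative only: ignores boundary effects and assumes 1-coverage,
    following the threshold function quoted in the abstract.
    """
    np_ = n * p
    if np_ <= 1:
        raise ValueError("need n*p > 1 for the threshold to be meaningful")
    return math.sqrt(math.log(np_) / (np_ * math.pi))

if __name__ == "__main__":
    for n, p in [(1000, 0.1), (1000, 0.3), (5000, 0.1)]:
        print(n, p, round(critical_radius(n, p), 4))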
---
paper_title: CLTC: a cluster-based topology control for ad hoc networks
paper_content:
The topology of an ad hoc network has a significant impact on its performance in that a dense topology may induce high interference and low capacity, while a sparse topology is vulnerable to link failure and network partitioning. Topology control aims to maintain a topology that optimizes network performance while minimizing energy consumption. Existing topology control algorithms utilize either a purely centralized or a purely distributed approach. A centralized approach, although able to achieve strong connectivity (k-connectivity for k /spl ges/ 2), suffers from scalability problems. In contrast, a distributed approach, although scalable, lacks strong connectivity guarantees. We propose a hybrid topology control framework, cluster-based topology control (CLTC) that achieves both scalability and strong connectivity. By varying the algorithms utilized in each of the three phases of the framework, a variety of optimization objectives and topological properties can be achieved. In this paper, we present the CLTC framework; describe topology control algorithms based on CLTC and prove that k-connectivity is achieved using those algorithms; analyze the message complexity of an implementation of CLTC, namely, CLTC-A, and present simulation studies that evaluate the effectiveness of CLTC-A for a range of networks.
---
paper_title: Topology Control for Fault-Tolerant Communication in Wireless Ad Hoc Networks ∗
paper_content:
Fault-tolerant communication and energy efficiency are important requirements for future-generation wireless ad hoc networks, which are increasingly being considered also for critical application domains like embedded systems in automotive and aerospace. Topology control, which enables multi-hop communication between any two network nodes via a suitably constructed overlay network, is the primary target for increasing connectivity and saving energy here. In this paper, we present a fault-tolerant distributed topology control algorithm that constructs and continuously maintains a k-regular and k-node-connected overlay for energy-efficient multi-hop communication. As a by-product, it also builds a hierarchy of clusters that reflects the node density in the network, with guaranteed and localized fault-tolerant communication between any pair of cluster members. The construction algorithm automatically adapts to a dynamically changing environment, is guaranteed to converge, and exhibits good performance as well.
---
paper_title: Low-Energy Fault-Tolerant Bounded-Hop Broadcast in Wireless Networks
paper_content:
This paper studies asymmetric power assignments in wireless ad hoc networks. The temporary, unfixed physical topology of wireless ad hoc networks is determined by the distribution of the wireless nodes as well as the transmission power (range) assignment of each node. We consider the problem of bounded-hop broadcast under k-fault resilience criterion for linear and planar layout of nodes. The topology that results from our power assignment allows a broadcast operation from a wireless node r to any other node in at most h hops and is k-fault resistant. We develop simple approximation algorithms for the two cases and obtain the following approximation ratios: linear case: O(k); planar case: we first prove a factor of O(k³), which is later decreased to O(k²) by a finer analysis. Finally, we show a trivial power assignment with a cost O(h) times the optimum. To the best of our knowledge, these are the first nontrivial results for this problem.
---
paper_title: Span: an energy-efficient coordination algorithm for topology maintenance in ad hoc wireless networks
paper_content:
This paper presents Span, a power saving technique for multi-hop ad hoc wireless networks that reduces energy consumption without significantly diminishing the capacity or connectivity of the network. Span builds on the observation that when a region of a shared-channel wireless network has a sufficient density of nodes, only a small number of them need be on at any time to forward traffic for active connections. Span is a distributed, randomized algorithm where nodes make local decisions on whether to sleep, or to join a forwarding backbone as a coordinator. Each node bases its decision on an estimate of how many of its neighbors will benefit from it being awake, and the amount of energy available to it. We give a randomized algorithm where coordinators rotate with time, demonstrating how localized node decisions lead to a connected, capacity-preserving global topology. Improvement in system lifetime due to Span increases as the ratio of idle-to-sleep energy consumption increases. Our simulations show that with a practical energy model, system lifetime of an 802.11 network in power saving mode with Span is a factor of two better than without. Additionally, Span also improves communication latency and capacity.
---
paper_title: Fault-tolerant and 3-dimensional distributed topology control algorithms in wireless multi-hop networks
paper_content:
We can control the topology of a multi-hop wireless network by varying the transmission power at each node. The life-time of such networks depends on battery power at each node. This paper presents a distributed fault-tolerant topology control algorithm for minimum energy consumption in these networks. More precisely, we present algorithms which preserve the connectivity of a network upon failing of, at most, k nodes (k is constant) and simultaneously minimize the transmission power at each node to some extent. In addition, we present simulations to support the effectiveness of our algorithm. We also demonstrate some optimizations to further minimize the power at each node. Finally, we show how our algorithms can be extended to 3-dimensions.
---
paper_title: Finding Minimum Energy Disjoint Paths in Wireless Ad-Hoc Networks
paper_content:
We develop algorithms for finding minimum energy disjoint paths in an all-wireless network, for both the node and link-disjoint cases. Our major results include a novel polynomial time algorithm that optimally solves the minimum energy 2 link-disjoint paths problem, as well as a polynomial time algorithm for the minimum energy k node-disjoint paths problem. In addition, we present efficient heuristic algorithms for both problems. Our results show that link-disjoint paths consume substantially less energy than node-disjoint paths. We also found that the incremental energy of additional link-disjoint paths is decreasing. This finding is somewhat surprising due to the fact that in general networks additional paths are typically longer than the shortest path. However, in a wireless network, additional paths can be obtained at lower energy due to the broadcast nature of the wireless medium. Finally, we discuss issues regarding distributed implementation and present distributed versions of the optimal centralized algorithms presented in the paper.
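Ignoring the wireless broadcast advantage that the paper exploits, the wired-style version of the minimum-energy k node-disjoint paths problem reduces to a minimum-cost flow with node splitting. The sketch below illustrates that reduction with networkx; the coordinates, communication range, path-loss exponent and cost scaling are made-up values, and this is not the paper's algorithm.

# Sketch: minimum-total-energy k node-disjoint s-t paths via node splitting
# and min-cost flow (networkx). This is the "wired-style" reduction only; it
# ignores the wireless broadcast advantage analyzed in the paper, and the
# coordinates, range and alpha below are made-up illustrative values.
import math
import networkx as nx

def min_energy_node_disjoint(pos, s, t, k, comm_range, alpha=2, scale=1000):
    D = nx.DiGraph()
    for v in pos:
        if v not in (s, t):
            D.add_edge((v, "in"), (v, "out"), capacity=1, weight=0)  # node split

    def out(v):
        return v if v in (s, t) else (v, "out")

    def inn(v):
        return v if v in (s, t) else (v, "in")

    nodes = list(pos)
    for i, u in enumerate(nodes):
        for v in nodes[i + 1:]:
            d = math.dist(pos[u], pos[v])
            if d <= comm_range:
                cost = int(round(scale * d ** alpha))  # integer energy cost
                D.add_edge(out(u), inn(v), capacity=1, weight=cost)
                D.add_edge(out(v), inn(u), capacity=1, weight=cost)

    D.nodes[s]["demand"] = -k   # push k units of flow from s ...
    D.nodes[t]["demand"] = k    # ... to t; each unit is one disjoint path
    flow = nx.min_cost_flow(D)
    return nx.cost_of_flow(D, flow) / scale, flow

if __name__ == "__main__":
    pos = {"s": (0, 0), "a": (1, 1), "b": (1, -1), "c": (2, 0), "t": (3, 0)}
    energy, _ = min_energy_node_disjoint(pos, "s", "t", k=2, comm_range=2.5)
    print("total energy of 2 node-disjoint paths:", energy)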
---
paper_title: On minimizing the total power of k-strongly connected wireless networks
paper_content:
Given a wireless network, we want to assign each node a transmission power which will enable transmission between any two nodes (via other nodes). Moreover, due to possible faults, we want to have at least k vertex-disjoint paths from any node to any other, where k is some fixed integer depending on the reliability of the nodes. The goal is to achieve this directed k-strong connectivity with a minimal overall power assignment. The problem is NP-hard for any k ≥ 1 already for planar networks. Here we first present an optimal power assignment for uniformly spaced nodes on a line for any k ≥ 1. We also prove a number of useful properties of power assignments which are of independent interest. Based on these, we design an approximation algorithm for linear radio networks with factor $\min\{2, (\Delta/\delta)^{\alpha}\}$, where Δ and δ are the maximal and minimal distances between adjacent nodes, respectively, and α ≥ 1 is the distance-power gradient. We then extend it to the weighted version. Finally, we develop an approximation algorithm with factor O(k²) for the planar case, which is, to the best of our knowledge, the first non-trivial result for this problem.
---
paper_title: Highly-resilient, energy-efficient multipath routing in wireless sensor networks
paper_content:
Previously proposed sensor network data dissemination schemes require periodic low-rate flooding of data in order to allow recovery from failure. We consider constructing two kinds of multipaths to enable energy efficient recovery from failure of the shortest path between source and sink. Disjoint multipath has been studied in the literature. We propose a novel braided multipath scheme, which results in several partially disjoint multipath schemes. We find that braided multipaths are a viable alternative for energy-efficient recovery from isolated and patterned failures.
---
paper_title: Distributed fault-tolerant topology control in wireless multi-hop networks
paper_content:
In wireless multi-hop and ad-hoc networks, minimizing power consumption and at the same time maintaining desired properties of the network topology is of prime importance. In this work, we present a distributed algorithm for assigning minimum possible power to all the nodes in a static wireless network such that the resultant network topology is k-connected. In this algorithm, a node collects the location and maximum power information from all nodes in its vicinity, and then adjusts the power of these nodes in such a way that it can reach all of them through k optimal vertex-disjoint paths. The algorithm ensures k-connectivity in the final topology provided the topology induced when all nodes transmit with their maximum power is k-connected. We extend our topology control algorithm from static networks to networks having mobile nodes. We present proof of correctness for our algorithm for both static and mobile scenarios, and through extensive simulation we present its behavior.
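The distributed per-node assignment of the paper cannot be reproduced in a few lines, but the underlying feasibility question can be illustrated: given node positions and a target k, what is the smallest common transmission radius whose induced graph is k-connected? The sketch below answers this by binary search over the sorted pairwise distances, using networkx for the connectivity test; it is a centralized baseline with made-up positions, not the paper's algorithm.

# Centralized baseline: smallest common transmission radius giving a
# k-connected topology (binary search over the sorted pairwise distances).
# Not the distributed per-node assignment of the paper; positions are made up.
import itertools
import math
import random
import networkx as nx

def graph_for_radius(pos, r):
    G = nx.Graph()
    G.add_nodes_from(pos)
    for u, v in itertools.combinations(pos, 2):
        if math.dist(pos[u], pos[v]) <= r:
            G.add_edge(u, v)
    return G

def min_radius_for_k_connectivity(pos, k):
    dists = sorted({math.dist(pos[u], pos[v])
                    for u, v in itertools.combinations(pos, 2)})
    lo, hi = 0, len(dists) - 1
    if nx.node_connectivity(graph_for_radius(pos, dists[hi])) < k:
        return None  # even maximum power is not enough
    while lo < hi:   # connectivity is monotone in the radius
        mid = (lo + hi) // 2
        if nx.node_connectivity(graph_for_radius(pos, dists[mid])) >= k:
            hi = mid
        else:
            lo = mid + 1
    return dists[lo]

if __name__ == "__main__":
    random.seed(1)
    pos = {i: (random.random(), random.random()) for i in range(25)}
    print("radius for 2-connectivity:", min_radius_for_k_connectivity(pos, 2))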
---
paper_title: A survey of game-theoretic approaches in wireless sensor networks
paper_content:
Wireless sensor networks (WSNs) comprising of tiny, power-constrained nodes are gaining popularity due to their potential for use in a wide variety of environments like monitoring of environmental attributes, intrusion detection, and various military and civilian applications. While the sensing objectives of these environments are unique and application-dependent, a common performance criteria for wireless sensor networks is prolonging network lifetime while satisfying coverage and connectivity in the deployment region. Security is another important performance parameter in wireless sensor networks, where adverse and remote environments pose various kinds of threats to reliable network operation. In this paper, we look at the problems of security and energy efficiency and different formulations of these problems based on the approach of game theory. The potential applicability of WSNs to intruder detection environments also lends itself to game-theoretic formulation of these environments, where pursuit-evasion games provide a relevant framework to model detection, tracking and surveillance applications. The suitability of using game theory to study security and energy efficiency problems and pursuit-evasion scenarios using WSNs stems from the nature of strategic interactions between nodes. Approaches from game theory can be used to optimize node-level as well as network-wide performance by exploiting the distributed decision-making capabilities of WSNs. The use of game theory has proliferated, with a wide range of applications in wireless sensor networking. In the wake of this proliferation, we survey the use of game-theoretic approaches to formulate problems related to security and energy efficiency in wireless sensor networks.
---
paper_title: ATPC: adaptive transmission power control for wireless sensor networks
paper_content:
Extensive empirical studies presented in this paper confirm that the quality of radio communication between low power sensor devices varies significantly with time and environment. This phenomenon indicates that the previous topology control solutions, which use static transmission power, transmission range, and link quality, might not be effective in the physical world. To address this issue, online transmission power control that adapts to external changes is necessary. This paper presents ATPC, a lightweight algorithm of Adaptive Transmission Power Control for wireless sensor networks. In ATPC, each node builds a model for each of its neighbors, describing the correlation between transmission power and link quality. With this model, we employ a feedback-based transmission power control algorithm to dynamically maintain individual link quality over time. The intellectual contribution of this work lies in a novel pairwise transmission power control, which is significantly different from existing node-level or network-level power control methods. Also different from most existing simulation work, the ATPC design is guided by extensive field experiments of link quality dynamics at various locations and over a long period of time. The results from the real-world experiments demonstrate that 1) with pairwise adjustment, ATPC achieves more energy savings with a finer tuning capability and 2) with online control, ATPC is robust even with environmental changes over time.
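A minimal sketch of the feedback idea is given below: fit a per-neighbor linear model between transmission power and RSSI from recent samples, then choose the lowest power level predicted to meet a target RSSI. The power range, target value and interface are made-up placeholders, not the actual ATPC protocol or its TinyOS implementation.

# Illustrative per-neighbor feedback power control in the spirit of the
# abstract: fit RSSI ~ a*power + b from recent samples, then pick the lowest
# power level predicted to meet a target RSSI. All constants and the power
# range are made-up placeholders, not the real ATPC implementation.
from collections import deque

POWER_LEVELS = list(range(3, 32))      # hypothetical radio power settings
TARGET_RSSI = -85.0                    # hypothetical target (dBm)

class NeighborModel:
    def __init__(self, window=20):
        self.samples = deque(maxlen=window)   # recent (power, rssi) pairs

    def report(self, power, rssi):
        self.samples.append((power, rssi))

    def _fit(self):
        n = len(self.samples)
        if n < 2:
            return None
        sx = sum(p for p, _ in self.samples)
        sy = sum(r for _, r in self.samples)
        sxx = sum(p * p for p, _ in self.samples)
        sxy = sum(p * r for p, r in self.samples)
        denom = n * sxx - sx * sx
        if denom == 0:
            return None
        a = (n * sxy - sx * sy) / denom       # least-squares slope
        b = (sy - a * sx) / n                 # least-squares intercept
        return a, b

    def next_power(self):
        """Lowest power level whose predicted RSSI meets the target."""
        model = self._fit()
        if model is None or model[0] <= 0:
            return POWER_LEVELS[-1]           # no usable model: be conservative
        a, b = model
        for p in POWER_LEVELS:
            if a * p + b >= TARGET_RSSI:
                return p
        return POWER_LEVELS[-1]

if __name__ == "__main__":
    m = NeighborModel()
    for p, rssi in [(31, -70), (25, -78), (20, -84), (15, -91)]:
        m.report(p, rssi)
    print("suggested power level:", m.next_power())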
---
paper_title: Transmission Power Control in Body Area Sensor Networks for Healthcare Monitoring
paper_content:
This paper investigates the opportunities and challenges in the use of dynamic radio transmit power control for prolonging the lifetime of body-wearable sensor devices used in continuous health monitoring. We first present extensive empirical evidence that the wireless link quality can change rapidly in body area networks, and a fixed transmit power results in either wasted energy (when the link is good) or low reliability (when the link is bad). We quantify the potential gains of dynamic power control in body-worn devices by benchmarking off-line the energy savings achievable for a given level of reliability. We then propose a class of schemes feasible for practical implementation that adapt transmit power in real-time based on feedback information from the receiver. We profile their performance against the offline benchmark, and provide guidelines on how the parameters can be tuned to achieve the desired trade-off between energy savings and reliability within the chosen operating environment. Finally, we implement and profile our scheme on a MicaZ mote based platform, and also report preliminary results from the ultra-low-power integrated healthcare monitoring platform we are developing at Toumaz Technology.
---
paper_title: Localized and Configurable Topology Control in Lossy Wireless Sensor Networks
paper_content:
Wireless sensor networks (WSNs) introduce new challenges to topology control due to the prevalence of lossy links. We propose a new topology control formulation for lossy WSNs that captures the stochastic nature of lossy links and quantifies the worst-case path quality in a network. We develop a novel localized scheme called Configurable Topology Control (CTC). The key feature of CTC is its capability of flexibly configuring the topology of a lossy WSN to achieve desired path quality bounds in a localized fashion. Furthermore, CTC can incorporate different control strategies (per-node/per-link) and optimization criteria. Simulations using a realistic radio model of Mica2 motes show that CTC significantly outperforms a representative traditional topology control algorithm called LMST in terms of both communication performance and energy efficiency. Our results demonstrate the importance of incorporating lossy links of WSNs in the design of topology control algorithms.
---
paper_title: Practical control of transmission power for Wireless Sensor Networks
paper_content:
Transmission power control (TPC) has the potential to reduce power consumption in Wireless Sensor Networks (WSNs). However, despite a multitude of existing protocols, they still face significant challenges in real-world deployments. A practical TPC protocol must be robust against complex and dynamic wireless properties, and efficient for resource-constrained sensors. This paper presents P-TPC, a practical TPC protocol designed on control-theoretic techniques. P-TPC features a highly efficient controller designed on a dynamic model that combines a theoretical link model with online parameter estimation. P-TPC's robustness and energy savings are demonstrated through trace-driven simulations and real-world experiments in a campus building and residential environments.
---
paper_title: Robust topology control for indoor wireless sensor networks
paper_content:
Topology control can reduce power consumption and channel contention in wireless sensor networks by adjusting the transmission power. However, topology control for wireless sensor networks faces significant challenges, especially in indoor environments where wireless characteristics are extremely complex and dynamic. We first provide insights on the design of robust topology control schemes based on an empirical study in an office building. For example, our analysis shows that Received Signal Strength Indicator and Link Quality Indicator are not always robust indicators of Packet Reception Rate in indoor environments due to significant multi-path effects. We then present Adaptive and Robust Topology control (ART), a novel and practical topology control algorithm with several salient features: (1) ART is robust in indoor environments as it does not rely on simplifying assumptions about the wireless properties; (2) ART can adapt to variations in both link quality and contention; (3) ART introduces zero communication overhead for applications which already use acknowledgements. We have implemented ART as a topology layer in TinyOS 2.x. Our topology layer only adds 12 bytes of RAM per neighbor and 1.5 kilobytes of ROM, and requires minimal changes to upper-layer routing protocols. The advantages of ART have been demonstrated through empirical results on a 28-node indoor testbed.
---
paper_title: Deploying sensor networks with guaranteed capacity and fault tolerance
paper_content:
We consider the problem of deploying or repairing a sensor network to guarantee a specified level of multi-path connectivity (k-connectivity) between all nodes. Such a guarantee simultaneously provides fault tolerance against node failures and high capacity through multi-path routing. We design and analyze the first algorithms that place an almost-minimum number of additional sensors to augment an existing network into a k-connected network, for any desired parameter k. Our algorithms have provable guarantees on the quality of the solution. Specifically, we prove that the number of additional sensors is within a constant factor of the absolute minimum, for any fixed k. We have implemented greedy and distributed versions of this algorithm, and demonstrate in simulation that they produce high-quality placements for the additional sensors. We are also in the process of using our algorithms to deploy nodes in a physical sensor network using a mobile robot.
---
paper_title: Integrated coverage and connectivity configuration in wireless sensor networks
paper_content:
An effective approach for energy conservation in wireless sensor networks is scheduling sleep intervals for extraneous nodes, while the remaining nodes stay active to provide continuous service. For the sensor network to operate successfully, the active nodes must maintain both sensing coverage and network connectivity. Furthermore, the network must be able to configure itself to any feasible degrees of coverage and connectivity in order to support different applications and environments with diverse requirements. This paper presents the design and analysis of novel protocols that can dynamically configure a network to achieve guaranteed degrees of coverage and connectivity. This work differs from existing connectivity or coverage maintenance protocols in several key ways: 1) We present a Coverage Configuration Protocol (CCP) that can provide different degrees of coverage requested by applications. This flexibility allows the network to self-configure for a wide range of applications and (possibly dynamic) environments. 2) We provide a geometric analysis of the relationship between coverage and connectivity. This analysis yields key insights for treating coverage and connectivity in a unified framework: this is in sharp contrast to several existing approaches that address the two problems in isolation. 3) Finally, we integrate CCP with SPAN to provide both coverage and connectivity guarantees. We demonstrate the capability of our protocols to provide guaranteed coverage and connectivity configurations, through both geometric analysis and extensive simulations.
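The coverage/connectivity relationship discussed above can also be probed empirically: estimate the minimum coverage degree over a sampling grid and compare it with the connectivity of the communication graph when the communication range is at least twice the sensing range. The snippet below does exactly that for a random made-up deployment; it is an empirical illustration only, not the CCP or SPAN protocol logic.

# Empirical illustration: minimum coverage degree over a sampling grid vs.
# connectivity of the communication graph when Rc >= 2*Rs. Random deployment
# with made-up parameters; not the CCP/SPAN protocol logic itself.
import itertools
import math
import random
import networkx as nx

def min_coverage_degree(pos, rs, grid=50):
    worst = math.inf
    for i in range(grid + 1):
        for j in range(grid + 1):
            p = (i / grid, j / grid)
            deg = sum(1 for q in pos.values() if math.dist(p, q) <= rs)
            worst = min(worst, deg)
    return worst

def comm_graph(pos, rc):
    G = nx.Graph()
    G.add_nodes_from(pos)
    for u, v in itertools.combinations(pos, 2):
        if math.dist(pos[u], pos[v]) <= rc:
            G.add_edge(u, v)
    return G

if __name__ == "__main__":
    random.seed(7)
    rs, rc = 0.15, 0.30                     # Rc = 2*Rs
    pos = {i: (random.random(), random.random()) for i in range(120)}
    k = min_coverage_degree(pos, rs)
    G = comm_graph(pos, rc)
    print("minimum coverage degree:", k)
    print("communication graph connected:", nx.is_connected(G))
    if k >= 1:
        print("node connectivity:", nx.node_connectivity(G))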
---
paper_title: PEAS: a robust energy conserving protocol for long-lived sensor networks
paper_content:
Small, inexpensive sensors with limited memory, computing power and short battery lifetimes are turning into reality. Due to adverse conditions such as high noise levels, extreme humidity or temperatures, or even destructions from unfriendly entities, sensor node failures may become norms rather than exceptions in real environments. To be practical, sensor networks must last for much longer times than that of individual nodes, and have yet to be robust against potentially frequent node failures. This paper presents the design of PEAS, a simple protocol that can build a long-lived sensor network and maintain robust operations using large quantities of economical, short-lived sensor nodes. PEAS extends system functioning time by keeping only a necessary set of sensors working and putting the rest into sleep mode. Sleeping ones wake up now and then, probing the local environment and replacing failed ones. The sleeping periods are self-adjusted dynamically, so as to keep the sensors' wakeup rate roughly constant, thus adapting to high node densities.
---
paper_title: A new approach to the maximum flow problem
paper_content:
All previously known efficient maximum-flow algorithms work by finding augmenting paths, either one path at a time (as in the original Ford and Fulkerson algorithm) or all shortest-length augmenting paths at once (using the layered network approach of Dinic). An alternative method based on the preflow concept of Karzanov is introduced. A preflow is like a flow, except that the total amount flowing into a vertex is allowed to exceed the total amount flowing out. The method maintains a preflow in the original network and pushes local flow excess toward the sink along what are estimated to be shortest paths. The algorithm and its analysis are simple and intuitive, yet the algorithm runs as fast as any other known method on dense graphs, achieving an O(n³) time bound on an n-vertex graph. By incorporating the dynamic tree data structure of Sleator and Tarjan, we obtain a version of the algorithm running in O(nm log(n²/m)) time on an n-vertex, m-edge graph. This is as fast as any known method for any graph density and faster on graphs of moderate density. The algorithm also admits efficient distributed and parallel implementations. A parallel implementation running in O(n² log n) time using n processors and O(m) space is obtained. This time bound matches that of the Shiloach-Vishkin algorithm.
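The preflow-push method described above is available off the shelf; the snippet below runs it on a tiny made-up capacity network with networkx and cross-checks the result through the generic maximum-flow interface.

# Using an off-the-shelf preflow-push (push-relabel) implementation on a tiny
# made-up capacity network, and cross-checking with the generic interface.
import networkx as nx
from networkx.algorithms.flow import preflow_push

G = nx.DiGraph()
G.add_edge("s", "a", capacity=10)
G.add_edge("s", "b", capacity=5)
G.add_edge("a", "b", capacity=15)
G.add_edge("a", "t", capacity=10)
G.add_edge("b", "t", capacity=10)

R = preflow_push(G, "s", "t")          # residual network carrying the flow
print("max flow value:", R.graph["flow_value"])

flow_value, flow_dict = nx.maximum_flow(G, "s", "t", flow_func=preflow_push)
print("cross-check:", flow_value, flow_dict["s"])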
---
paper_title: Survey: A survey on relay placement with runtime and approximation guarantees
paper_content:
We discuss aspects and variants of the fundamental problem of relay placement: given a set of immobile terminals in the Euclidean plane, place a number of relays with limited viewing range such that the result is a low-cost communication infrastructure between the terminals. We first consider the problem from a global point of view. The problem here is similar to forming discrete Steiner tree structures. Then we investigate local variants of the problem, assuming mobile relays that must decide where to move based only on information from their local environment. We give a local algorithm for the general problem, but we show that no local algorithm can achieve good approximation factors for the number of relays. The following two restricted variants each address different aspects of locality. First we provide each relay with knowledge of two fixed neighbors, such that the relays form a chain between two terminals. The goal here is to let the relays move to the line segment between the terminals as fast as possible. Then we focus on the aspect of neighbors that are not fixed, but which may change over time. In return, we relax the objective function from geometric structures to just forming a single point. The goal in all our local formation problems is to use relays that are as limited as possible with respect to memory, sensing capabilities and so on. We focus on algorithms for which we can prove performance guarantees such as upper bounds on the required runtime, maximum traveled distances of the relays and approximation factors for the solution.
---
paper_title: Strategies and Techniques for Node Placement in Wireless Sensor Networks : A Survey
paper_content:
The major challenge in designing wireless sensor networks (WSNs) is the support of the functional, such as data latency, and the non-functional, such as data integrity, requirements while coping with the computation, energy and communication constraints. Careful node placement can be a very effective optimization means for achieving the desired design goals. In this paper, we report on the current state of the research on optimized node placement in WSNs. We highlight the issues, identify the various objectives and enumerate the different models and formulations. We categorize the placement strategies into static and dynamic depending on whether the optimization is performed at the time of deployment or while the network is operational, respectively. We further classify the published techniques based on the role that the node plays in the network and the primary performance objective considered. The paper also highlights open problems in this area of research.
---
paper_title: Modelling and Solving Optimal Placement problems in Wireless Sensor Networks
paper_content:
In this work, the optimal sensor displacement problem in wireless sensor networks is addressed. It is assumed that a network, consisting of independent, collaborative and mobile nodes, is available. Starting from an initial configuration, the aim is to define a specific sensors displacement, which allows the network to achieve high performance, in terms of energy consumption and travelled distance. To mathematically represent the problem under study, different innovative optimization models are proposed and defined, by taking into account different performance objectives. An extensive computational phase is carried out in order to assess the behaviour of the developed models in terms of solution quality and computational effort. A comparison with distributed approaches is also given, by considering different scenarios.
---
paper_title: Design of fault tolerant wireless sensor networks satisfying survivability and lifetime requirements
paper_content:
Sensor networks are deployed to accomplish certain specific missions over a period of time. It is essential that the network continues to operate, even if some of its nodes fail. It is also important that the network is able to support the mission for a minimum specified period of time. Hence, the design of a sensor network should not only provide some guarantees that all data from the sensor nodes are gathered at the base station, even in the presence of some faults, but should also allow the network to remain functional for a specified duration. This paper considers a two-tier, hierarchical sensor network architecture, where some relay nodes, provisioned with higher power and other capabilities, are used as cluster heads. Given a distribution of sensor nodes in a sensor network, finding the locations to place a minimum number of relay nodes such that each sensor node is covered by at least one relay node is known to be a computationally difficult problem. In addition, for successful and reliable data communication, the relay node network needs to be connected, as well as resilient to node failures. In this paper, a novel integrated Integer Linear Program (ILP) formulation is proposed, which, unlike existing techniques, not only finds a suitable placement strategy for the relay nodes, but also assigns the sensor nodes to the clusters and determines a load-balanced routing scheme. Therefore, in addition to the desired levels of fault tolerance for both the sensor nodes and the relay nodes, the proposed approach also meets specified performance guarantees with respect to network lifetime by limiting the maximum energy consumption of the relay nodes.
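The full integrated formulation (placement, clustering, load-balanced routing and lifetime constraints) is too large to reproduce here, but its placement core is essentially a covering problem: choose the fewest candidate relay locations so that every sensor has at least one relay within range. The sketch below formulates only that sub-problem with PuLP, assumed here as the modeling library; coordinates and the range are made-up values, and the remaining constraints of the paper are deliberately omitted.

# Simplified placement core of the integrated design problem: choose the
# minimum number of candidate relay locations so every sensor is covered by
# at least one relay within range. Clustering, routing and lifetime
# constraints from the paper are deliberately omitted; coordinates and the
# range below are made-up illustrative values.
import math
import pulp

sensors = {"s1": (0.1, 0.2), "s2": (0.4, 0.8), "s3": (0.9, 0.3), "s4": (0.6, 0.6)}
candidates = {"r1": (0.2, 0.4), "r2": (0.5, 0.7), "r3": (0.8, 0.4), "r4": (0.5, 0.2)}
RANGE = 0.45   # hypothetical sensor-to-relay communication range

covers = {r: [s for s, sp in sensors.items()
              if math.dist(sp, candidates[r]) <= RANGE]
          for r in candidates}

prob = pulp.LpProblem("relay_cover", pulp.LpMinimize)
y = {r: pulp.LpVariable(f"y_{r}", cat="Binary") for r in candidates}
prob += pulp.lpSum(y.values())                       # minimize relay count
for s in sensors:                                     # every sensor covered
    prob += pulp.lpSum(y[r] for r in candidates if s in covers[r]) >= 1

prob.solve(pulp.PULP_CBC_CMD(msg=False))
chosen = [r for r in candidates if (y[r].value() or 0) > 0.5]
print("chosen relays:", chosen)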
---
paper_title: Power optimization in fault-tolerant topology control algorithms for wireless multi-hop networks
paper_content:
In ad hoc wireless networks, it is crucial to minimize power consumption while maintaining key network properties. This work studies power assignments of wireless devices that minimize power while maintaining k-fault tolerance. Specifically, we require all links established by this power setting to be symmetric and to form a k-vertex connected subgraph of the network graph. This problem is known to be NP-hard. We show current heuristic approaches can use arbitrarily more power than the optimal solution. Hence, we seek approximation algorithms for this problem. We present three approximation algorithms. The first algorithm gives an O(kα)-approximation, where α is the best approximation factor for the related problem in wired networks (the best α so far is O(log k)). With a more careful analysis, we show our second (slightly more complicated) algorithm is an O(k)-approximation. Our third algorithm assumes that the edge lengths of the network graph form a metric. In this case, we present simple and practical distributed algorithms for the cases of 2- and 3-connectivity with constant approximation factors. We generalize this algorithm to obtain an O(k^{2c+2})-approximation for general k-connectivity (2 ≤ c ≤ 4 is the power attenuation exponent). Finally, we show that these approximation algorithms compare favorably with existing heuristics. We note that all algorithms presented in this paper can be used to minimize power while maintaining k-edge connectivity with guaranteed approximation factors. Recently, a different set of authors used the notion of k-connectivity and the results of this paper to deal with fault-tolerance issues in static wireless network settings.
---
paper_title: IPSD: new coverage preserving and connectivity maintenance scheme for improving lifetime of wireless sensor networks
paper_content:
In many applications it is necessary to have some guarantees on the coverage, connectivity and lifetime of a Wireless Sensor Network (WSN). The coverage problem concerns how to ensure that each point in the region to be monitored is covered by the sensors. To maximize coverage, the sensors should not be placed too close to each other, so that the sensing capability of the network is fully utilized, and at the same time they must not be located too far from each other, to avoid the formation of coverage holes. From the connectivity point of view, on the other hand, the sensors need to be placed close enough to be within each other's communication range so that connectivity is ensured. Once coverage and connectivity are ensured, the overall lifetime of the network increases, thereby improving the quality of service (QoS) of the WSN. An Integer Programmed Sensor Deployment (IPSD) scheme is proposed in which a triangular lattice of relay nodes is formed using a grid-based approach, providing maximum coverage and connectivity. Integer Linear Programming (ILP) is then used to eliminate unused relay nodes, thereby achieving coverage and connectivity with a minimum number of relay nodes. Simulations performed using NS-2 show that the proposed scheme provides better results in large-scale WSNs, with improved coverage and connectivity.
---
paper_title: Steiner tree problem with minimum number of Steiner points and bounded edge-length
paper_content:
In this paper, we study the Steiner tree problem with minimum number of Steiner points and bounded edge-length (STPMSPBEL), which asks for a tree interconnecting a given set of n terminal points and a minimum number of Steiner points such that the Euclidean length of each edge is no more than a given positive constant. This problem has applications in VLSI design, WDM optimal networks and wireless communications. We prove that this problem is NP-complete and present a polynomial time approximation algorithm whose worst-case performance ratio is 5.
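A common baseline for this problem is the steinerized MST: build a minimum spanning tree over the terminals and subdivide every edge longer than the bound R with evenly spaced Steiner points. The sketch below implements that baseline with networkx on made-up terminals; it is offered as an illustration of the problem, not necessarily the exact algorithm analyzed in the paper.

# Steinerized-MST baseline for the bounded-edge-length Steiner point problem:
# build an MST over the terminals, then subdivide each MST edge longer than R
# with ceil(d/R) - 1 evenly spaced Steiner (relay) points. Terminals and R are
# made-up; this is a common baseline, not necessarily the paper's algorithm.
import itertools
import math
import networkx as nx

def steinerized_mst(terminals, R):
    G = nx.Graph()
    for u, v in itertools.combinations(terminals, 2):
        G.add_edge(u, v, weight=math.dist(terminals[u], terminals[v]))
    mst = nx.minimum_spanning_tree(G)

    steiner_points = []
    for u, v, data in mst.edges(data=True):
        d = data["weight"]
        segments = math.ceil(d / R)          # pieces of length <= R
        (x1, y1), (x2, y2) = terminals[u], terminals[v]
        for i in range(1, segments):         # segments - 1 interior points
            t = i / segments
            steiner_points.append((x1 + t * (x2 - x1), y1 + t * (y2 - y1)))
    return steiner_points

if __name__ == "__main__":
    terminals = {"A": (0.0, 0.0), "B": (3.2, 0.0), "C": (1.5, 2.9)}
    relays = steinerized_mst(terminals, R=1.0)
    print(len(relays), "Steiner points:",
          [tuple(round(c, 2) for c in p) for p in relays])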
---
paper_title: An improved LP-based approximation for steiner tree
paper_content:
The Steiner tree problem is one of the most fundamental NP-hard problems: given a weighted undirected graph and a subset of terminal nodes, find a minimum-cost tree spanning the terminals. In a sequence of papers, the approximation ratio for this problem was improved from 2 to the current best 1.55 [Robins,Zelikovsky-SIDMA'05]. All these algorithms are purely combinatorial. A long-standing open problem is whether there is an LP-relaxation for Steiner tree with integrality gap smaller than 2 [Vazirani,Rajagopalan-SODA'99]. In this paper we improve the approximation factor for Steiner tree, developing an LP-based approximation algorithm. Our algorithm is based on a seemingly novel iterative randomized rounding technique. We consider a directed-component cut relaxation for the k-restricted Steiner tree problem. We sample one of these components with probability proportional to the value of the associated variable in the optimal fractional solution and contract it. We iterate this process for an appropriate number of times and finally output the sampled components together with a minimum-cost terminal spanning tree in the remaining graph. Our algorithm delivers a solution of cost at most ln(4) times the cost of an optimal k-restricted Steiner tree. This directly implies a ln(4)+ε (< 1.39) approximation for the Steiner tree problem.
---
paper_title: An effective approach for tolerating simultaneous failures in wireless sensor and actor networks
paper_content:
Wireless Sensor and Actor Networks (WSANs) engage mobile nodes with actuators to respond to certain events in inhospitable environments. The harsh surroundings make actors susceptible to failure. Tolerating occasional actor failure is very important to avoid degrading the performance. In some cases, multiple actors simultaneously fail which makes the recovery very challenging. In this paper, we propose a new recovery approach which can handle simultaneous failures in WSANs. The approach is based on ranking network nodes relevant to a pre-assigned root actor. Ranking creates a tree to be used for coordinating the recovery among nodes. To minimize the recovery overhead, the nodes are virtually grouped to clusters and each node is assigned a recovery weight as well as a nearby cluster member which serves as a gateway to lost nodes. Designation of cluster heads is based on the number of children in the recovery tree. The simulation results confirm the correctness and effectiveness of the proposed approach in restoring network connectivity and also show that on the average the incurred overhead is low compared to contemporary single-failure recovery approaches.
---
paper_title: Recovery from multiple simultaneous failures in wireless sensor networks using minimum Steiner tree
paper_content:
In some applications, wireless sensor networks (WSNs) operate in very harsh environments and nodes become subject to increased risk of damage. Sometimes a WSN suffers from the simultaneous failure of multiple sensors and gets partitioned into disjoint segments. Restoring network connectivity in such a case is crucial in order to avoid negative effects on the application. Given that WSNs often operate unattended in remote areas, the recovery should be autonomous. This paper promotes an effective strategy for restoring the connectivity among these segments by populating the least number of relay nodes. Finding the optimal count and position of relay nodes is NP-hard and heuristics are thus pursued. We propose a Distributed algorithm for Optimized Relay node placement using Minimum Steiner tree (DORMS). Since in autonomously operating WSNs it is infeasible to perform a network-wide analysis to diagnose where segments are located, DORMS moves relay nodes from each segment toward the center of the deployment area. As soon as those relays come in range of each other, the partitioned segments resume operation. DORMS further models the initial inter-segment topology as a Steiner tree in order to minimize the count of required relays. Disengaged relays can return to their respective segments to resume their pre-failure duties. We analyze DORMS mathematically and explain the beneficial aspects of the resulting topology with respect to connectivity and traffic balance. The performance of DORMS is validated through extensive simulation experiments.
---
paper_title: Relay sensor placement in wireless sensor networks
paper_content:
This paper addresses the following relay sensor placement problem: given the set of duty sensors in the plane and the upper bound of the transmission range, compute the minimum number of relay sensors such that the topology induced by all sensors is globally connected. This problem is motivated by practically considering the tradeoff among performance, lifetime, and cost when designing sensor networks. In our study, this problem is modelled as an NP-hard network optimization problem named Steiner Minimum Tree with Minimum number of Steiner Points and bounded edge length (SMT-MSP). In this paper, we propose two approximation algorithms and conduct a detailed performance analysis. The first algorithm has a performance ratio of 3 and the second has a performance ratio of 2.5.
---
paper_title: Relay node placement in structurally damaged wireless sensor networks via triangular steiner tree approximation
paper_content:
Wireless sensor networks (WSNs) have many applications which operate in hostile environments. Due to the harsh surroundings, WSNs may suffer from large-scale damage that causes many nodes to fail simultaneously and the network to get partitioned into multiple disjoint segments. In such a case, restoring the network connectivity is very important in order to avoid negative effects on the applications. In this paper, we pursue the placement of the least number of relay nodes to re-establish a strongly connected network topology. The problem of finding the minimum count and the position of relay nodes is NP-hard and hence we pursue heuristics. We present a novel three-step algorithm called FeSTA which is based on steinerizing appropriate triangles. Each segment is represented by a terminal. Each subset of 3 terminals forms a triangle. Finding the optimal solution for a triangle (i.e. connecting 3 segments) is a relatively easier problem. In the first step, FeSTA finds the best triangles and forms islands of segments by establishing intra-triangle connectivity. Then, in the second step, disjoint islands of segments are federated. In the final step, the steinerized edges are optimized. The performance of FeSTA is validated through simulation.
---
paper_title: Optimal relay node placement in delay constrained wireless sensor network design
paper_content:
The Delay Constrained Relay Node Placement Problem (DCRNPP) frequently arises in the Wireless Sensor Network (WSN) design. In WSN, Sensor Nodes are placed across a target geographical region to detect relevant signals. These signals are communicated to a central location, known as the Base Station, for further processing. The DCRNPP aims to place the minimum number of additional Relay Nodes at a subset of Candidate Relay Node locations in such a manner that signals from various Sensor Nodes can be communicated to the Base Station within a pre-specified delay bound. In this paper, we study the structure of the projection polyhedron of the problem and develop valid inequalities in form of the node-cut inequalities. We also derive conditions under which these inequalities are facet defining for the projection polyhedron. We formulate a branch-and-cut algorithm, based upon the projection formulation, to solve DCRNPP optimally. A Lagrangian relaxation based heuristic is used to generate a good initial solution for the problem that is used as an initial incumbent solution in the branch-and-cut approach. Computational results are reported on several randomly generated instances to demonstrate the efficacy of the proposed algorithm.
---
paper_title: Augmenting the Connectivity of Geometric Graphs
paper_content:
Let G be a connected plane geometric graph with n vertices. In this paper, we study bounds on the number of edges required to be added to G to obtain 2-vertex or 2-edge connected plane geometric graphs. In particular, we show that for G to become 2-edge connected, 2n/3 additional edges are required in some cases and that 6n/7 additional edges are always sufficient. For the special case of plane geometric trees, these bounds decrease to n/2 and 2n/3, respectively.
---
paper_title: Independence free graphs and vertex connectivity augmentation
paper_content:
Given an undirected graph G and a positive integer k, the k-vertex-connectivity augmentation problem is to find a smallest set F of new edges for which G + F is k-vertex-connected. Polynomial algorithms for this problem have been found only for k ≤ 4 and a major open question in graph connectivity is whether this problem is solvable in polynomial time in general.In this paper, we develop an algorithm which delivers an optimal solution in polynomial time for every fixed k. In the case when the size of an optimal solution is large compared to k, our algorithm is polynomial for all k. We also derive a min-max formula for the size of a smallest augmenting set in this case. A key step in our proofs is a complete solution of the augmentation problem for a new family of graphs which we call k-independence free graphs. We also prove new splitting off theorems for vertex connectivity.
---
paper_title: Relay Node Placement for Multi-Path Connectivity in Heterogeneous Wireless Sensor Networks
paper_content:
In this paper, we study the problem of deploying additional relay nodes to provide multi-path connectivity (k-connectivity) between all nodes in a wireless sensor network. Specifically, given a disconnected wireless network, a few additional nodes are to be deployed so that the augmented network has a higher degree of connectivity for fault tolerance. We propose an algorithm based on Particle Swarm Optimization that places an optimal number of energy-constrained relay nodes to achieve the desired connectivity between heterogeneous wireless sensor nodes, in which the communication range of each sensor node is different. In simulation, the proposed algorithm is compared with a heuristic algorithm in terms of the number of relay nodes required to obtain the desired k-connectivity and the average node degree in the resulting network. Results show that the Particle Swarm Optimization based algorithm outperforms the heuristic algorithm.
---
paper_title: Relay placement for restoring connectivity in partitioned wireless sensor networks under limited information
paper_content:
Several factors such as initial deployment, battery depletion or hardware failures can partition wireless sensor networks (WSNs). This results in most of the sensors losing connectivity with the sink node and thus disrupting data delivery. To restore connectivity, one possible solution is populating relay nodes to connect the partitions. However, this solution requires information regarding the availability of the damaged area, the number of partitions in the network and the location of the remaining nodes, which may not be obtainable for all applications. Thus, a distributed self-deployment strategy may better fit the application requirements. In this paper, we propose two distributed relay node positioning approaches to guarantee network recovery for partitioned WSNs by minimizing the movement cost of the relay nodes. The first approach is based on virtual force-based movements of relays while the second exploits Game Theory among the leaders of the partitions. The force-based approach stretches the network gradually with the deployment of additional relays. In the game-theoretic approach, the partition to be connected with is determined by the leader relay nodes based on the probability distribution function (pdf) of the partitions. Partitions with a higher pdf have priority over other partitions for recovery. Once a partition is connected with the relay nodes, it becomes part of the connected network. Recovery proceeds with the partition having the next highest priority until the network is completely recovered by reaching the system-wide unique Nash equilibrium. Both approaches are analyzed and evaluated extensively through simulation. The game-theoretic approach has been shown to outperform the force-based approach as well as a centralized approach under most of the conditions.
---
paper_title: Fault-Tolerant Relay Node Placement in Heterogeneous Wireless Sensor Networks
paper_content:
Existing work on placing additional relay nodes in wireless sensor networks to improve network connectivity typically assumes homogeneous wireless sensor nodes with an identical transmission radius. In contrast, this paper addresses the problem of deploying relay nodes to provide fault-tolerance with higher network connectivity in heterogeneous wireless sensor networks, where sensor nodes possess different transmission radii. Depending on the level of desired fault-tolerance, such problems can be categorized as: (1) full fault-tolerance relay node placement, which aims to deploy a minimum number of relay nodes to establish k (k ≥ 1) vertex-disjoint paths between every pair of sensor and/or relay nodes; (2) partial fault-tolerance relay node placement, which aims to deploy a minimum number of relay nodes to establish k (k ≥ 1) vertex-disjoint paths only between every pair of sensor nodes. Due to the different transmission radii of sensor nodes, these problems are further complicated by the existence of two different kinds of communication paths in heterogeneous wireless sensor networks, namely two-way paths, along which wireless communications exist in both directions; and one-way paths, along which wireless communications exist in only one direction. Assuming that sensor nodes have different transmission radii, while relay nodes use the same transmission radius, this paper comprehensively analyzes the range of problems introduced by the different levels of fault-tolerance (full or partial) coupled with the different types of path (one-way or two-way). Since each of these problems is NP-hard, we develop O(σk²)-approximation algorithms for both one-way and two-way partial fault-tolerance relay node placement, as well as O(σk³)-approximation algorithms for both one-way and two-way full fault-tolerance relay node placement (σ is the best performance ratio of existing approximation algorithms for finding a minimum k-vertex connected spanning graph). To facilitate the applications in higher dimensions, we also extend these algorithms and derive their performance ratios in d-dimensional heterogeneous wireless sensor networks (d ≥ 3). Finally, heuristic implementations of these algorithms are evaluated via simulations.
---
paper_title: Fault-tolerant relay node placement in wireless sensor networks: formulation and approximation
paper_content:
A two-tiered network model has been proposed for prolonging lifetime and improving scalability in wireless sensor networks (Gupta, G. and Younis, M., Proc. IEEE WCNC'03, p.1579-84, 2003; Proc. IEEE ICC'03, p.1848-52, 2003). This two-tiered network is a cluster-based network. Relay nodes are placed in the playing field to act as cluster heads and to form a connected topology for data transmission in the higher tier. They are able to fuse data packets from sensor nodes in their clusters and send them to sinks through wireless multi-hop paths. However, this model is not fault-tolerant as the network may be disconnected if a relay node fails. We formulate and study a fault-tolerant relay node placement problem in wireless sensor networks. In this problem, we want to place a minimum number of relay nodes in the playing field of a sensor network such that (1) each sensor node can communicate with at least two relay nodes and (2) the relay node network is 2-connected. We present a polynomial time approximation algorithm for this problem and prove the worst-case performance given by our algorithm is bounded within O(D log n) times of the size of an optimal solution, where n is the number of sensor nodes in the network, D is the (2, 1) diameter of the network formed by a sufficient set of possible positions for relay nodes.
---
paper_title: Survey: A survey on relay placement with runtime and approximation guarantees
paper_content:
We discuss aspects and variants of the fundamental problem of relay placement: given a set of immobile terminals in the Euclidean plane, place a number of relays with limited viewing range such that the result is a low-cost communication infrastructure between the terminals. We first consider the problem from a global point of view. The problem here is similar to forming discrete Steiner tree structures. Then we investigate local variants of the problem, assuming mobile relays that must decide where to move based only on information from their local environment. We give a local algorithm for the general problem, but we show that no local algorithm can achieve good approximation factors for the number of relays. The following two restricted variants each address different aspects of locality. First we provide each relay with knowledge of two fixed neighbors, such that the relays form a chain between two terminals. The goal here is to let the relays move to the line segment between the terminals as fast as possible. Then we focus on the aspect of neighbors that are not fixed, but which may change over time. In return, we relax the objective function from geometric structures to just forming a single point. The goal in all our local formation problems is to use relays that are as limited as possible with respect to memory, sensing capabilities and so on. We focus on algorithms for which we can prove performance guarantees such as upper bounds on the required runtime, maximum traveled distances of the relays and approximation factors for the solution.
---
paper_title: Relay sensor placement in wireless sensor networks
paper_content:
This paper addresses the following relay sensor placement problem: given the set of duty sensors in the plane and the upper bound of the transmission range, compute the minimum number of relay sensors such that the topology induced by all sensors is globally connected. This problem is motivated by practical consideration of the tradeoff among performance, lifetime, and cost when designing sensor networks. In our study, this problem is modelled as an NP-hard network optimization problem named Steiner Minimum Tree with Minimum number of Steiner Points and bounded edge length (SMT-MSP). In this paper, we propose two approximation algorithms and conduct detailed performance analysis. The first algorithm has a performance ratio of 3 and the second has a performance ratio of 2.5.
---
paper_title: REER: Robust and Energy Efficient Multipath Routing Protocol for Wireless Sensor Networks
paper_content:
Wireless Sensor Networks (WSNs) are subject to node failures because of energy constraints, and nodes can be added to or removed from the network upon application demands, resulting in unpredictable topology changes. Furthermore, due to the limited transmission range of wireless sensor nodes, multiple hops are usually needed for a node to exchange information with other nodes or sink node(s). This makes the design of routing protocols in such networks a challenging task. In all proposed single-path routing schemes, periodic low-rate flooding of data is required to recover from path failures, which consumes the scarce resources of the sensor nodes. Thus, multipath routing schemes are an optimal alternative to maximize the network lifetime. Multipath routing schemes distribute the traffic across multiple paths instead of routing all the traffic along a single path, which spreads the consumed energy evenly across the nodes within the network, potentially resulting in longer lifetimes. In this paper, we propose a robust and energy efficient multipath routing protocol (abbreviated as REER). REER uses the residual energy, node available buffer size, and Signal-to-Noise Ratio (SNR) to predict the best next hop during the path construction phase. REER examines two methods of traffic allocation; the first method uses a single path among the discovered paths to transfer the data message, and when this path's cost falls below a certain threshold, it switches to the next alternative path. The second method is to split the transmitted message into a number of segments of equal size, add XOR-based error correction codes, and then transmit them across multiple paths simultaneously to increase the probability that an essential portion of the packet is received at the destination without incurring excessive delay. Through computer simulation, we evaluate and study the performance of our routing protocol and compare it with other protocols. Simulation results show that our protocol achieves more energy savings, lower average delay and higher packet delivery ratio than other protocols.
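REER's second traffic-allocation method splits a message into equal-size segments, adds XOR-based error-correction data, and sends the pieces over different paths. The snippet below is a minimal sketch of that idea with a single XOR parity segment, which lets the receiver rebuild any one lost segment; the segment count, padding, and framing are illustrative assumptions rather than the protocol's exact encoding.

```python
def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))


def encode(message: bytes, n_segments: int):
    """Split message into n equal segments plus one XOR parity segment."""
    seg_len = -(-len(message) // n_segments)           # ceiling division
    padded = message.ljust(seg_len * n_segments, b"\x00")
    segments = [padded[i * seg_len:(i + 1) * seg_len] for i in range(n_segments)]
    parity = segments[0]
    for seg in segments[1:]:
        parity = xor_bytes(parity, seg)
    return segments, parity                             # send each piece on its own path


def recover(segments, parity, lost_index):
    """Rebuild a single lost segment from the parity and the surviving segments."""
    rebuilt = parity
    for i, seg in enumerate(segments):
        if i != lost_index:
            rebuilt = xor_bytes(rebuilt, seg)
    return rebuilt


if __name__ == "__main__":
    segs, par = encode(b"sensor reading 42.7C", n_segments=4)
    assert recover(segs, par, lost_index=2) == segs[2]
```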
---
paper_title: Survey paper: A survey on routing algorithms for wireless Ad-Hoc and mesh networks
paper_content:
Wireless networking technology is evolving as an inexpensive alternative for building federated and community networks (relative to the traditional wired networking approach). Besides its cost-effectiveness, a wireless network brings operational efficiencies, namely mobility and untethered convenience to the end user. A wireless network can operate in both the ''Ad-Hoc'' mode, where users are self-managed, and the ''Infrastructure'' mode, where an authority manages the network with some Infrastructure such as fixed wireless routers, base stations, access points, etc. An Ad-Hoc network generally supports multi-hopping, where a data packet may travel over multiple hops to reach its destination. Among the Infrastructure-based networks, a Wireless Mesh Network (with a set of wireless routers located at strategic points to provide overall network connectivity) also provides the flexibility of multi-hopping. Therefore, how to route packets efficiently in wireless networks is a very important problem. A variety of wireless routing solutions have been proposed in the literature. This paper presents a survey of the routing algorithms proposed for wireless networks. Unlike routing in a wired network, wireless routing introduces new paradigms and challenges such as interference from other transmissions, varying channel characteristics, etc. In a wireless network, routing algorithms are classified into various categories such as Geographical, Geo-casting, Hierarchical, Multi-path, Power-aware, and Hybrid routing algorithms. Due to the large number of surveys that study different routing-algorithm categories, we select a limited but representative number of these surveys to be reviewed in our work. This survey offers a comprehensive review of these categories of routing algorithms. In the early stages of development of wireless networks, basic routing algorithms, such as Dynamic Source Routing (DSR) and Ad-Hoc On-demand Distance Vector (AODV) routing, were designed to control traffic on the network. However, it was found that applying these basic routing algorithms directly on wireless networks could lead to some issues such as large area of flooding, Greedy Forwarding empty set of neighbors, flat addressing, widely-distributed information, large power consumption, interference, and load-balancing problems. Therefore, a number of routing algorithms have been proposed as extensions to these basic routing algorithms to enhance their performance in wireless networks. Hence, we study the features of routing algorithms, which are compatible with the wireless environment and which can overcome these problems.
---
paper_title: RELAX: An Energy Efficient Multipath Routing Protocol for Wireless Sensor Networks
paper_content:
This paper presents an energy efficient multipath routing protocol specifically designed for wireless sensor networks (referred to as RELAX). The RELAX protocol tries to utilize the relaxation phenomenon of certain batteries to increase the battery lifetime and hence the overall lifetime of the sensor network. Relaxation periods enable the battery to recover a portion of its lost power; it has been proven that the intermittent operation of some alkaline batteries increases their lifespan by about 28%. RELAX uses a link cost function that depends on current residual energy, available buffer size, and link quality (in terms of Signal-to-Noise ratio) to predict the best next hop during the path construction phase. RELAX routes data across multiple paths to balance the energy consumed across multiple nodes, increase the throughput, and minimize packet end-to-end delay. Before transmitting the data, the RELAX protocol adds data redundancy through a lightweight Forward Error Correction (FEC) technique to increase the protocol's reliability and resiliency to path failures. Many simulation experiments have been carried out to evaluate the protocol performance. Results show that the RELAX protocol achieves lower energy consumption, lower packet delay, higher throughput, and longer node lifetime compared to other protocols.
---
paper_title: LIEMRO: A Low-Interference Energy-Efficient Multipath Routing Protocol for Improving QoS in Event-Based Wireless Sensor Networks
paper_content:
In recent years, multipath routing techniques have been recognized as an effective approach to improve QoS in Wireless Sensor Networks (WSNs). However, in most of the previously proposed protocols either the effects of inter-path interference are ignored, or establishing low-interference paths is very costly. In this paper, we propose a Low-Interference Energy-efficient Multipath ROuting protocol (LIEMRO) for WSNs. This protocol is mainly designed to improve packet delivery ratio, lifetime, and latency through discovering multiple interference-minimized node-disjoint paths between the source node and the sink node. In addition, LIEMRO includes a load balancing algorithm to distribute the source node's traffic over multiple paths based on the relative quality of each path. Simulation results show that using LIEMRO in high traffic load conditions can increase the data reception rate and network lifetime by more than 1.5x compared with the single-path routing approach, while end-to-end latency is reduced significantly. Accordingly, LIEMRO is a multipath solution for event-driven applications in which lifetime, reliability, and latency are of great importance.
---
paper_title: PEAS: a robust energy conserving protocol for long-lived sensor networks
paper_content:
Small, inexpensive sensors with limited memory, computing power and short battery lifetimes are turning into reality. Due to adverse conditions such as high noise levels, extreme humidity or temperatures, or even destruction by unfriendly entities, sensor node failures may become the norm rather than the exception in real environments. To be practical, sensor networks must last for much longer times than individual nodes, and yet be robust against potentially frequent node failures. This paper presents the design of PEAS, a simple protocol that can build a long-lived sensor network and maintain robust operations using large quantities of economical, short-lived sensor nodes. PEAS extends system functioning time by keeping only a necessary set of sensors working and putting the rest into sleep mode. Sleeping nodes wake up now and then, probing the local environment and replacing failed ones. The sleeping periods are self-adjusted dynamically so as to keep the sensors' wakeup rate roughly constant, thus adapting to high node densities.
---
paper_title: Design of an enhanced energy conserving routing protocol based on route diversity in wireless sensor networks
paper_content:
This paper presents a new Energy Conserving Routing Protocol (ECRP) that aims to optimize the transmission cost over a path from a source to a defined destination in a Wireless Sensor Network. Energy consumption remains a significant metric related to the service lifetime of a Wireless Sensor Network. This parameter is included in our model in order to generate a set of maximally disjoint paths between each sensor and the sink, so as to avoid or partially reduce the appearance of congestion in the network. We aim to spread the data traffic over non-overlapping paths in order to increase the global network lifetime. Simulations have shown that our ECRP is more efficient than existing protocols in terms of congestion avoidance and energy saving.
---
paper_title: An Energy Efficient Fault Tolerant Multipath (EEFTM) Routing Protocol for Wireless Sensor Networks
paper_content:
Currently, there is very little research that aims at handling QoS requirements using multipath routing in a very energy constrained environment like sensor networks. In this paper, an energy efficient fault-tolerant multipath routing technique which utilizes multiple paths between the source and the sink has been proposed. This protocol is intended to provide a reliable transmission environment with low energy consumption, by efficiently utilizing the energy availability and the available bandwidth of the nodes to identify multiple routes to the destination. To achieve reliability and fault tolerance, this protocol selects reliable paths based on the average reliability rank (ARR) of the paths. The average reliability rank of a path is based on each node's reliability rank (RR), which represents the probability that a node correctly delivers data to the destination. In case the existing route encounters some unexpected link or route failure, the algorithm selects the path with the next highest ARR from the list of selected paths. Simulation results show that the proposed protocol minimizes energy consumption and latency and maximizes the delivery ratio.
---
paper_title: Research on Key Technology and Applications for Internet of Things
paper_content:
The Internet of Things (IoT) has received more and more attention from academia, industry, and government all over the world. The concept of IoT and the architecture of IoT are discussed. The key technologies of IoT, including Radio Frequency Identification technology, Electronic Product Code technology, and ZigBee technology, are analyzed. The framework of a digital agriculture application based on IoT is proposed.
---
paper_title: Octopus: a fault-tolerant and efficient ad-hoc routing protocol
paper_content:
Mobile ad-hoc networks (MANETs) are failure-prone environments; it is common for mobile wireless nodes to intermittently disconnect from the network, e.g., due to signal blockage. This paper focuses on withstanding such failures in large MANETs: we present Octopus, a fault-tolerant and efficient position-based routing protocol. Fault-tolerance is achieved by employing redundancy, i.e., storing the location of each node at many other nodes, and by keeping frequently refreshed soft state. At the same time, Octopus achieves a low location update overhead by employing a novel aggregation technique, whereby a single packet updates the location of many nodes at many other nodes. Octopus is highly scalable: for a fixed node density, the number of location update packets sent does not grow with the network size. And when the density increases, the overhead drops. Thorough empirical evaluation using the ns2 simulator with up to 675 mobile nodes shows that Octopus achieves excellent fault-tolerance at a modest overhead: when all nodes intermittently disconnect and reconnect, Octopus achieves the same high reliability as when all nodes are constantly up.
---
paper_title: Ad-hoc on-demand distance vector routing
paper_content:
An ad-hoc network is the cooperative engagement of a collection of mobile nodes without the required intervention of any centralized access point or existing infrastructure. We present Ad-hoc On Demand Distance Vector Routing (AODV), a novel algorithm for the operation of such ad-hoc networks. Each mobile host operates as a specialized router, and routes are obtained as needed (i.e., on-demand) with little or no reliance on periodic advertisements. Our new routing algorithm is quite suitable for a dynamic self starting network, as required by users wishing to utilize ad-hoc networks. AODV provides loop-free routes even while repairing broken links. Because the protocol does not require global periodic routing advertisements, the demand on the overall bandwidth available to the mobile nodes is substantially less than in those protocols that do necessitate such advertisements. Nevertheless we can still maintain most of the advantages of basic distance vector routing mechanisms. We show that our algorithm scales to large populations of mobile nodes wishing to form ad-hoc networks. We also include an evaluation methodology and simulation results to verify the operation of our algorithm.
---
paper_title: The complexity of finding two disjoint paths with min-max objective function
paper_content:
Given a network G = (V,E) and two vertices s and t, we consider the problem of finding two disjoint paths from s to t such that the length of the longer path is minimized. The problem has several variants: the paths may be vertex-disjoint or arc-disjoint and the network may be directed or undirected. We show that all four versions as well as some related problems are strongly NP-complete. We also give a pseudo-polynomial-time algorithm for the acyclic directed case.
---
paper_title: Highly-resilient, energy-efficient multipath routing in wireless sensor networks
paper_content:
Previously proposed sensor network data dissemination schemes require periodic low-rate flooding of data in order to allow recovery from failure. We consider constructing two kinds of multipaths to enable energy efficient recovery from failure of the shortest path between source and sink. Disjoint multipath has been studied in the literature. We propose a novel braided multipath scheme, which results in several partially disjoint multipath schemes. We find that braided multipaths are a viable alternative for energy-efficient recovery from isolated and patterned failures.
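As a rough illustration of the idealized braided construction described above, the sketch below builds a primary shortest path and then, for each intermediate node on it, an alternate path that avoids only that node; the union of the alternates forms the braid. It assumes a networkx graph and hop-count metrics rather than the paper's energy model.

```python
import networkx as nx  # assumed available


def idealized_braid(G, source, sink):
    """Primary shortest path plus one alternate path around each primary node."""
    primary = nx.shortest_path(G, source, sink)
    alternates = {}
    for node in primary[1:-1]:                 # skip the endpoints
        H = G.copy()
        H.remove_node(node)
        try:
            alternates[node] = nx.shortest_path(H, source, sink)
        except nx.NetworkXNoPath:
            alternates[node] = None            # no detour exists around this node
    return primary, alternates


if __name__ == "__main__":
    G = nx.grid_2d_graph(4, 4)                 # toy sensor field
    p, alts = idealized_braid(G, (0, 0), (3, 3))
    print("primary:", p)
    for n, alt in alts.items():
        print("around", n, "->", alt)
```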
---
paper_title: Constructing disjoint paths for failure recovery and multipath routing
paper_content:
Applications such as Voice over IP and video streaming require continuous network service, requiring fast failure recovery mechanisms. Proactive failure recovery schemes have been recently proposed to improve network performance during the failure transients. These proactive failure recovery schemes need extra infrastructural support in the form of routing table entries, extra addresses etc. In this paper, we study if the extra infrastructure support can be exploited to build disjoint paths in those frameworks, while keeping the lengths of the recovery paths close to those of the primary paths. Our evaluations show that it is possible to extend the proactive failure recovery schemes to provide support for nearly-disjoint paths which can be employed in multipath routing for load balancing and QoS.
---
paper_title: Disjoint paths in a network
paper_content:
Routes between two given nodes of a network are called diversified if they are node-disjoint, except at the terminals. Diversified routes are required for reliability in communication, and an additional criterion is that their total cost, assumed to be the sum of individual arc lengths or costs, is minimum. An algorithm and related theory is described for a general number K of node-disjoint paths with minimum total length. The algorithm applies shortest path labeling algorithms familiar in the literature. K node-disjoint paths are found in K iterations of a single shortest path algorithm.
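A standard way to obtain K node-disjoint paths of minimum total length (the objective described here) is to split each node into an "in" and an "out" copy joined by a unit-capacity arc and solve a min-cost flow of value K. The sketch below does this with networkx's min_cost_flow; it illustrates the objective and is not a reimplementation of the paper's shortest-path labeling algorithm. Integer edge weights are assumed, as required by the underlying network simplex routine.

```python
import networkx as nx  # assumed available


def k_node_disjoint_min_total_length(G, s, t, k):
    """Min-cost flow formulation with node splitting; returns the flow dict."""
    D = nx.DiGraph()
    for v in G.nodes:
        cap = k if v in (s, t) else 1           # intermediate nodes used at most once
        D.add_edge((v, "in"), (v, "out"), capacity=cap, weight=0)
    for u, v, data in G.edges(data=True):
        w = data.get("weight", 1)
        D.add_edge((u, "out"), (v, "in"), capacity=1, weight=w)
        D.add_edge((v, "out"), (u, "in"), capacity=1, weight=w)
    D.nodes[(s, "in")]["demand"] = -k
    D.nodes[(t, "out")]["demand"] = k
    return nx.min_cost_flow(D)                  # raises if k disjoint paths do not exist


if __name__ == "__main__":
    G = nx.cycle_graph(6)                       # two disjoint routes around the ring
    flow = k_node_disjoint_min_total_length(G, 0, 3, k=2)
    used = [(u, v) for u, d in flow.items() for v, f in d.items() if f > 0]
    print(used)
```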
---
paper_title: Strategies and Techniques for Node Placement in Wireless Sensor Networks : A Survey
paper_content:
The major challenge in designing wireless sensor networks (WSNs) is to support the functional requirements, such as data latency, and the non-functional requirements, such as data integrity, while coping with the computation, energy and communication constraints. Careful node placement can be a very effective optimization means for achieving the desired design goals. In this paper, we report on the current state of the research on optimized node placement in WSNs. We highlight the issues, identify the various objectives and enumerate the different models and formulations. We categorize the placement strategies into static and dynamic depending on whether the optimization is performed at the time of deployment or while the network is operational, respectively. We further classify the published techniques based on the role that the node plays in the network and the primary performance objective considered. The paper also highlights open problems in this area of research.
---
paper_title: How Reliable Can Two-Path Protection Be?
paper_content:
This paper investigates the subject of reliability via two link-disjoint paths in mesh networks. We address the issues of how reliable two-path protection can be and how to achieve the maximum reliability. This work differs from traditional studies, such as MIN-SUM, MIN-MAX, and MIN-MIN, in that the objective in this paper is to maximize the reliability of the two-path connection given the link reliability, or equivalently, to minimize the end-to-end failure probability. We refer to this problem as MAX-REL. Solving MAX-REL provides 100% protection against a single failure while maximizing the reliability regardless of how many link failures occur in the network. We prove that this problem is NP-complete and derive a corresponding upper bound, which is the theoretical maximum reliability for a source-destination pair, and a lower bound, which is the worst case of the proposed algorithm. The time efficiency of the algorithms is analyzed, and the performance of the algorithms is evaluated through simulation. We demonstrate that our heuristic algorithms not only achieve a low computing complexity, but also achieve nearly equivalent performance to the upper bound.
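The MAX-REL objective is easy to evaluate for a given pair of link-disjoint paths: each path succeeds with the product of its link reliabilities, and the connection fails only if both paths fail. The snippet below computes that value for a candidate path pair; it does not attempt to solve the (NP-complete) path-selection problem itself.

```python
from math import prod


def path_reliability(path, link_rel):
    """Product of link reliabilities along a path (given as a list of nodes)."""
    return prod(link_rel[(u, v)] for u, v in zip(path, path[1:]))


def two_path_reliability(path_a, path_b, link_rel):
    """Connection succeeds unless both link-disjoint paths fail."""
    fail_a = 1.0 - path_reliability(path_a, link_rel)
    fail_b = 1.0 - path_reliability(path_b, link_rel)
    return 1.0 - fail_a * fail_b


if __name__ == "__main__":
    rel = {("s", "a"): 0.95, ("a", "t"): 0.9, ("s", "b"): 0.8, ("b", "t"): 0.99}
    print(two_path_reliability(["s", "a", "t"], ["s", "b", "t"], rel))
```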
---
paper_title: A greedy-based stable multi-path routing protocol in mobile ad hoc networks
paper_content:
With the increasing popularity of multimedia, there is a growing tendency in mobile ad hoc networks (MANETs) to establish stable routes with long route lifetimes, low control overhead and high packet delivery ratios. According to recent analytical result, the lifetime of a route, which can reflect the route stability, depends on the length of the route and the lifetime of each link in the route. This paper presents a Greedy-based Backup Routing (GBR) protocol that considers both route length and link lifetime to achieve high route stability. In GBR, the primary path is constructed primarily based on a greedy forwarding mechanism, whereas the local-backup path for each link is established according to the link lifetime. Both analytical and simulation results demonstrate that GBR has excellent performance in terms of route lifetime, packet delivery ratio, and control overhead.
---
paper_title: An energy efficient and QoS aware multipath routing protocol for wireless sensor networks
paper_content:
Enabling real-time applications in Wireless Sensor Networks (WSNs) imposes certain delay and bandwidth requirements which pose additional challenges in the design of networking protocols. Therefore, enabling such applications in this type of network requires energy and Quality of Service (QoS) awareness in different layers of the protocol stack. In many of these applications (such as multimedia applications, or real-time and mission-critical applications), the network traffic is a mix of delay-sensitive and reliability-demanding data. Hence, QoS routing becomes an important issue. In this paper, we propose an Energy Efficient and QoS aware multipath routing protocol (named EQSR for short) that maximizes the network lifetime through balancing energy consumption across multiple nodes, uses the concept of service differentiation to allow highly important traffic (or delay-sensitive traffic) to reach the sink node within an acceptable delay, reduces the end-to-end delay through spreading out the traffic across multiple paths, and increases the throughput through introducing data redundancy. EQSR uses the residual energy, node available buffer size, and Signal-to-Noise Ratio (SNR) to predict the best next hop during the path construction phase. Based on the concept of service differentiation, the EQSR protocol employs a queuing model to handle both real-time and non-real-time traffic. By means of computer simulations, we evaluated and studied the performance of our routing protocol and compared it with another protocol. Simulation results have shown that our protocol achieves lower average delay and higher packet delivery ratio than the other protocol.
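EQSR scores candidate next hops using residual energy, free buffer size, and SNR. The abstract does not give the exact weighting, so the sketch below uses a generic normalized weighted sum as an assumed stand-in for that cost function; the weights and limits are placeholders, not EQSR's actual parameters.

```python
def next_hop_score(residual_energy, free_buffer, snr_db,
                   max_energy, max_buffer, max_snr_db,
                   w_e=0.4, w_b=0.3, w_s=0.3):
    """Normalized weighted score; the weights are illustrative, not EQSR's."""
    e = residual_energy / max_energy
    b = free_buffer / max_buffer
    s = snr_db / max_snr_db
    return w_e * e + w_b * b + w_s * s


def choose_next_hop(candidates, **limits):
    """candidates: dict of node -> (residual_energy, free_buffer, snr_db)."""
    return max(candidates,
               key=lambda n: next_hop_score(*candidates[n], **limits))


if __name__ == "__main__":
    nbrs = {"n1": (0.8, 40, 22.0), "n2": (0.5, 60, 30.0), "n3": (0.9, 10, 15.0)}
    print(choose_next_hop(nbrs, max_energy=1.0, max_buffer=64, max_snr_db=35.0))
```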
---
paper_title: Fibonacci Heaps And Their Uses In Improved Network Optimization Algorithms
paper_content:
In this paper we develop a new data structure for implementing heaps (priority queues). Our structure, Fibonacci heaps (abbreviated F-heaps), extends the binomial queues proposed by Vuillemin and studied further by Brown. F-heaps support arbitrary deletion from an n-item heap in O(log n) amortized time and all other standard heap operations in O(1) amortized time. Using F-heaps we are able to obtain improved running times for several network optimization algorithms.
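F-heaps matter for network optimization mainly because they make decrease-key O(1) amortized inside algorithms such as Dijkstra's. The sketch below uses Python's built-in binary heap (heapq) with lazy deletion, the usual practical stand-in for a Fibonacci heap; it shows where the heap operations occur rather than implementing the F-heap structure itself.

```python
import heapq


def dijkstra(adj, source):
    """adj: dict u -> list of (v, weight). Returns dict of shortest distances."""
    dist = {source: 0}
    heap = [(0, source)]                       # decrease-key emulated by re-pushing
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                           # stale entry (lazy deletion)
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))  # an F-heap would decrease-key in O(1)
    return dist


if __name__ == "__main__":
    g = {"s": [("a", 2), ("b", 5)], "a": [("b", 1), ("t", 6)], "b": [("t", 2)]}
    print(dijkstra(g, "s"))   # {'s': 0, 'a': 2, 'b': 3, 't': 5}
```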
---
paper_title: A new approach to the maximum flow problem
paper_content:
All previously known efficient maximum-flow algorithms work by finding augmenting paths, either one path at a time (as in the original Ford and Fulkerson algorithm) or all shortest-length augmenting paths at once (using the layered network approach of Dinic). An alternative method based on the preflow concept of Karzanov is introduced. A preflow is like a flow, except that the total amount flowing into a vertex is allowed to exceed the total amount flowing out. The method maintains a preflow in the original network and pushes local flow excess toward the sink along what are estimated to be shortest paths. The algorithm and its analysis are simple and intuitive, yet the algorithm runs as fast as any other known method on dense graphs, achieving an O(n³) time bound on an n-vertex graph. By incorporating the dynamic tree data structure of Sleator and Tarjan, we obtain a version of the algorithm running in O(nm log(n²/m)) time on an n-vertex, m-edge graph. This is as fast as any known method for any graph density and faster on graphs of moderate density. The algorithm also admits efficient distributed and parallel implementations. A parallel implementation running in O(n² log n) time using n processors and O(m) space is obtained. This time bound matches that of the Shiloach-Vishkin algorithm.
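The preflow-based method described here is what graph libraries now ship as preflow-push (push-relabel). A minimal usage sketch with networkx's preflow_push implementation is shown below; it exercises the algorithm on a toy graph rather than re-deriving the stated bounds.

```python
import networkx as nx
from networkx.algorithms.flow import preflow_push  # push-relabel implementation


def max_flow_value(G, s, t):
    """Run preflow-push and return the maximum flow value."""
    R = preflow_push(G, s, t, capacity="capacity")
    return R.graph["flow_value"]


if __name__ == "__main__":
    G = nx.DiGraph()
    G.add_edge("s", "a", capacity=3)
    G.add_edge("s", "b", capacity=2)
    G.add_edge("a", "b", capacity=1)
    G.add_edge("a", "t", capacity=2)
    G.add_edge("b", "t", capacity=3)
    print(max_flow_value(G, "s", "t"))   # expected: 5
```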
---
paper_title: Finding Minimum Energy Disjoint Paths in Wireless Ad-Hoc Networks
paper_content:
We develop algorithms for finding minimum energy disjoint paths in an all-wireless network, for both the node and link-disjoint cases. Our major results include a novel polynomial time algorithm that optimally solves the minimum energy 2 link-disjoint paths problem, as well as a polynomial time algorithm for the minimum energy k node-disjoint paths problem. In addition, we present efficient heuristic algorithms for both problems. Our results show that link-disjoint paths consume substantially less energy than node-disjoint paths. We also found that the incremental energy of additional link-disjoint paths is decreasing. This finding is somewhat surprising due to the fact that in general networks additional paths are typically longer than the shortest path. However, in a wireless network, additional paths can be obtained at lower energy due to the broadcast nature of the wireless medium. Finally, we discuss issues regarding distributed implementation and present distributed versions of the optimal centralized algorithms presented in the paper.
---
paper_title: A proactive maintaining algorithm for dynamic topology control in wireless sensor networks
paper_content:
A proactive topology control algorithm named PMD (Proactive Maintaining Algorithm for Dynamic Topology Control) is proposed for solving the problem of network partitioning. The algorithm controls the start of BFS (Breadth-First Search) by recognizing the addition of invalid nodes and monitoring the network structure dynamically. A measure called 'Communication Quality' is proposed to quantify the quality of a communication link. Only after network partitioning happens does PMD start the link-rebuilding mechanism to maintain the topology. The algorithm restrains the generation of isolated nodes and makes energy use more efficient. The results show that PMD not only improves energy efficiency, but also constructs a robust topology.
---
paper_title: Local heuristic for the refinement of multi-path routing in wireless mesh networks
paper_content:
We consider wireless mesh networks and the problem of routing end-to-end traffic over multiple paths for the same origin-destination pair with minimal interference. We introduce a heuristic for path determination with two distinguishing characteristics. First, it works by refining an extant set of paths, determined previously by a single- or multi-path routing algorithm. Second, it is totally local, in the sense that it can be run by each of the origins on information that is available no farther than the node's immediate neighborhood. We have conducted extensive computational experiments with the new heuristic, using AODV and OLSR, as well as their multi-path variants, as underlying routing methods. For two different CSMA settings (as implemented by 802.11) and one TDMA setting running a path-oriented link scheduling algorithm, we have demonstrated that the new heuristic is capable of improving the average throughput network-wide. When working from the paths generated by the multi-path routing algorithms, the heuristic is also capable to provide a more evenly distributed traffic pattern.
---
paper_title: Strong minimum energy topology in wireless sensor networks: NP-completeness and heuristics
paper_content:
Wireless sensor networks have recently attracted lots of research effort due to the wide range of applications. These networks must operate for months or years. However, the sensors are battery-powered, and the batteries may not be rechargeable after the sensors are deployed. Thus, energy-aware network management is extremely important. In this paper, we study the following problem: given a set of sensors in the plane, assign transmit power to each sensor such that the induced topology containing only bidirectional links is strongly connected. This problem is significant in both theory and application. We prove its NP-completeness and propose two heuristics: power assignment based on minimum spanning tree (denoted by MST) and incremental power. We also show that the MST heuristic has a performance ratio of 2. Simulation study indicates that the performance of these two heuristics does not differ very much, but, on average, the incremental power heuristic is always better than MST.
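The MST heuristic discussed here has a simple operational form: build a minimum spanning tree over the sensors, with edge cost equal to the energy needed to bridge the two endpoints, and give each node just enough power to reach its farthest MST neighbour. Below is a sketch under the common distance-to-the-alpha energy model; alpha and the coordinates are assumptions for illustration.

```python
import math
from itertools import combinations
import networkx as nx  # assumed available


def mst_power_assignment(nodes, alpha=2.0):
    """nodes: dict name -> (x, y). Returns name -> transmit power (d**alpha model)."""
    G = nx.Graph()
    for (a, pa), (b, pb) in combinations(nodes.items(), 2):
        G.add_edge(a, b, weight=math.dist(pa, pb) ** alpha)
    mst = nx.minimum_spanning_tree(G)
    # Each node must reach its farthest MST neighbour, so bidirectional links survive.
    return {v: max(d["weight"] for _, _, d in mst.edges(v, data=True))
            for v in nodes}


if __name__ == "__main__":
    field = {"A": (0, 0), "B": (2, 0), "C": (2, 3), "D": (6, 3)}
    print(mst_power_assignment(field))
```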
---
|
Title: Resilient Wireless Sensor Networks Using Topology Control: A Review
Section 1: Introduction
Description 1: This section provides an overview of WSNs, their applications, challenges, and importance for IoT. It also sets the stage for discussing network resilience and outlines the paper's organization.
Section 2: Network Resilience
Description 2: This section defines network resilience, differentiates it from related concepts like robustness and survivability, and explains its significance in WSNs. It introduces stages of deployment impacting resilience: pre-deployment, post-deployment, and re-deployment.
Section 3: Resilient WSNs: k-Connected Network
Description 3: The section explains k-connected networks, introduces related terminologies and definitions, and discusses the importance of k-connectivity for resilient WSNs.
Section 4: Network Model
Description 4: This section delves into various network models, including homogeneous, heterogeneous, ideal, and irregular radio models, and the significance of asymmetric links in WSNs.
Section 5: Failure Model
Description 5: This section discusses various failure models affecting WSNs, including node failures, link failures, and attack-based failures, and how these influence network design and resilience.
Section 6: Topology Control
Description 6: This section explores topology control methods to preserve or improve WSN connectivity, focusing on three deployment stages: pre-deployment, post-deployment, and re-deployment.
Section 7: Multi-Path Routing Protocol
Description 7: This section introduces multi-path routing protocols, their importance for resilient WSNs, and popular routing strategies, including complete disjointed and brained paths.
Section 8: NP-Complete and NP-Hard Problems in Resilient WSNs
Description 8: This section summarizes the NP-complete and NP-hard problems related to resilient WSNs and discusses the significance of approximation algorithms and heuristics.
Section 9: Open Issues
Description 9: This section highlights remaining challenges and areas for future research, including mobility, distributed algorithms, and integration of approaches across deployment stages.
Section 10: Conclusions
Description 10: This section summarizes the paper's findings on resilient WSNs, emphasizing the importance of k-connectivity, realistic modeling, and multi-path routing. It offers guidelines for designing resilient WSNs and highlights the need for future research.
|
Hand Gesture Recognition Systems: A Survey
| 12 |
---
paper_title: A Survey of Hand Posture and Gesture Recognition Techniques and Technology
paper_content:
This paper surveys the use of hand postures and gestures as a mechanism for interaction with computers, describing both the various techniques for performing accurate recognition and the technological aspects inherent to posture- and gesture-based interaction. First, the technological requirements and limitations for using hand postures and gestures are described by discussing both glove-based and vision-based recognition systems along with advantages and disadvantages of each. Second, the various types of techniques used in recognizing hand postures and gestures are compared and contrasted. Third, the applications that have used hand posture and gesture interfaces are examined. The survey concludes with a summary and a discussion of future research directions.
---
paper_title: A Fast Algorithm for Vision-Based Hand Gesture Recognition for Robot Control
paper_content:
We propose a fast algorithm for automatically recognizing a limited set of gestures from hand images for a robot control application. Hand gesture recognition is a challenging problem in its general form. We consider a fixed set of manual commands and a reasonably structured environment, and develop a simple, yet effective, procedure for gesture recognition. Our approach contains steps for segmenting the hand region, locating the fingers, and finally classifying the gesture. The algorithm is invariant to translation, rotation, and scale of the hand. We demonstrate the effectiveness of the technique on real imagery.
---
paper_title: 3-Draw: a three dimensional computer aided design tool
paper_content:
3-Draw is a tool for computer-aided design targeted at the early concept-forming stages of design. 3-Draw is intended to preserve the benefits of paper and pencil while taking advantage of the computer to develop, manipulate, and display 3-D representations of objects interactively. Designers using 3-Draw sketch out their initial ideas directly in three dimensions, using the computer to display objects in perspective and undergoing real-time motion. The hardware consists of two six-degrees-of-freedom sensors and a Silicon Graphics IRIS-4D/70GT graphics workstation. One sensor is configured to control an object's position and orientation. The other sensor is a multiconfigurable 3-D drawing/editing tool. Since the sketching is done directly in 3-D, the computer can generate limitless perspective views of models and the designer does not need to perform the image projection mentally.
---
paper_title: Hand modeling, analysis and recognition
paper_content:
Analyzing hand gestures is a comprehensive task involving motion modeling, motion analysis, pattern recognition, machine learning and even psycholinguistic studies. A comprehensive review of various techniques in hand modeling, analysis, and recognition is needed. Due to the multidisciplinary nature of this research topic, we cannot include all the works in the literature. Rather than function as a thorough review paper, this article serves as a tutorial to this research topic. We study 3-D hand models, various articulated motion analysis methods, and gesture recognition techniques employed in current research. We conclude with some thoughts about future research directions. We also include some of our own research results, some of which are shown as examples.
---
paper_title: Vision-based hand pose estimation: A review
paper_content:
Direct use of the hand as an input device is an attractive method for providing natural human-computer interaction (HCI). Currently, the only technology that satisfies the advanced requirements of hand-based input for HCI is glove-based sensing. This technology, however, has several drawbacks including that it hinders the ease and naturalness with which the user can interact with the computer-controlled environment, and it requires long calibration and setup procedures. Computer vision (CV) has the potential to provide more natural, non-contact solutions. As a result, there have been considerable research efforts to use the hand as an input device for HCI. In particular, two types of research directions have emerged. One is based on gesture classification and aims to extract high-level abstract information corresponding to motion patterns or postures of the hand. The second is based on pose estimation systems and aims to capture the real 3D motion of the hand. This paper presents a literature review on the latter research direction, which is a very challenging problem in the context of HCI.
---
paper_title: Real-time tracking of multiple fingertips and gesture recognition for augmented desk interface systems
paper_content:
We propose a fast and robust method for tracking a user's hand and multiple fingertips; we then demonstrate gesture recognition based on measured fingertip trajectories for augmented desk interface systems. Our tracking method is capable of tracking multiple fingertips in a reliable manner even in a complex background under a dynamically changing lighting condition without any markers. First, based on its geometrical features, the location of each fingertip is located in each input infrared image frame. Then, correspondences of detected fingertips between successive image frames are determined based on a prediction technique. Our gesture recognition system is particularly advantageous for human-computer interaction (HCI) in that users can achieve interactions based on symbolic gestures at the same time that they perform direct manipulation with their own hands and fingers. The effectiveness of our proposed method has been successfully demonstrated via a number of experiments.
---
paper_title: Dynamic hand gesture recognition using hidden Markov models
paper_content:
Hand gestures have become a powerful means for human-computer interaction. Traditional gesture recognition considers only the hand trajectory. For some specific applications, such as virtual reality, more natural gestures are needed, which are complex and contain movement in 3-D space. In this paper, we introduce an HMM-based method to recognize complex single-hand gestures. Gesture images are captured by a common web camera. Skin color is used to segment the hand area from the image to form a hand image sequence. Then we put forward a state-based spotting algorithm to split continuous gestures. After that, feature extraction is executed on each gesture. Features used in the system include hand position, velocity, size, and shape. We propose a data-aligning algorithm to align feature vector sequences for training. Then an HMM is trained separately for each gesture. The recognition results demonstrate that our methods are effective and accurate.
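A minimal sketch of the train-one-HMM-per-gesture idea described above, using the third-party hmmlearn package (assumed installed) with Gaussian emissions over simple per-frame feature vectors. The feature set, the number of hidden states, and the toy trajectories are placeholders, not the paper's configuration.

```python
import numpy as np
from hmmlearn import hmm   # third-party package, assumed installed


def train_gesture_models(sequences_by_label, n_states=4):
    """sequences_by_label: label -> list of (T_i, D) feature arrays."""
    models = {}
    for label, seqs in sequences_by_label.items():
        X = np.vstack(seqs)
        lengths = [len(s) for s in seqs]
        m = hmm.GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=50)
        m.fit(X, lengths)
        models[label] = m
    return models


def classify(models, seq):
    """Pick the gesture whose HMM gives the highest log-likelihood."""
    return max(models, key=lambda lbl: models[lbl].score(seq))


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy 2-D trajectories standing in for real hand features (position per frame).
    circle = [np.cumsum(rng.normal(0, 0.1, (30, 2)), axis=0) + 1.0 for _ in range(5)]
    swipe = [np.cumsum(rng.normal(0.3, 0.1, (30, 2)), axis=0) for _ in range(5)]
    models = train_gesture_models({"circle": circle, "swipe": swipe})
    print(classify(models, swipe[0]))
```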
---
paper_title: Hand Gesture Modeling and Recognition using Geometric Features: A Review
paper_content:
The use of gestures in daily life as a natural form of human-human interaction has inspired researchers to bring this capability to human-machine interaction, where it is appealing and can replace the tedious interfaces of existing devices such as televisions, radios, and various home appliances, and can help virtual reality live up to its name. This kind of interaction promises satisfying outcomes if applied systematically, since it lets the unadorned human hand convey the message to these devices, which is easier, more comfortable, and more desirable than communication that requires extra equipment. Gesturing is also important in human-human interaction, especially with hearing-impaired, deaf, and mute people. In this study, we present different research efforts in this area concerning geometric features, which can be regarded as 'live' features compared with non-geometric 'blind' features, and we focus on the work gathered to achieve this important link between humans and their machines. We also provide our own algorithms to overcome shortcomings in some of the reviewed approaches, in order to offer a robust gesture recognition algorithm that does not suffer from the rotation problems most current algorithms have.
---
paper_title: Fingertip Detection for Hand Pose Recognition
paper_content:
In this paper, a novel algorithm is proposed for fingertip detection and finger type recognition. The algorithm is applied to locate fingertips in the hand region extracted by Bayesian rule based skin color segmentation. Morphological operations are performed in the segmented hand region by observing key geometric features. A probabilistic modeling of the geometric features of finger movement has made the finger type recognition process significantly robust. The proposed method can be employed in a variety of applications like sign language recognition and human-robot interaction.
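The stages named in this abstract can be approximated with off-the-shelf OpenCV calls: a fixed HSV skin-colour threshold stands in for the Bayesian rule, morphological opening and closing clean the mask, and convexity defects on the largest contour yield fingertip candidates. Thresholds, kernel sizes, and the input file name are assumptions for illustration.

```python
import cv2          # OpenCV 4.x assumed
import numpy as np


def fingertip_candidates(bgr_image):
    """Return a list of (x, y) fingertip candidates from a BGR frame."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (0, 30, 60), (25, 180, 255))     # crude skin range
    kernel = np.ones((5, 5), np.uint8)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)     # remove speckle noise
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)    # fill small holes

    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return []
    hand = max(contours, key=cv2.contourArea)                 # assume hand = largest blob
    hull = cv2.convexHull(hand, returnPoints=False)
    defects = cv2.convexityDefects(hand, hull)
    tips = []
    if defects is not None:
        for s, e, _, depth in defects[:, 0]:
            if depth > 10000:                  # deep valleys separate the fingers
                tips.append(tuple(hand[s][0]))
                tips.append(tuple(hand[e][0]))
    return list(dict.fromkeys(tips))           # de-duplicate, keep order


if __name__ == "__main__":
    frame = cv2.imread("hand.jpg")             # hypothetical input image
    if frame is not None:
        print(fingertip_candidates(frame))
```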
---
paper_title: Hand Gesture Recognition Using Neural Networks.
paper_content:
Gestural interfaces have the potential of enhancing control operations in numerous applications. For Air Force systems, machine-recognition of whole-hand gestures may be useful as an alternative controller, especially when conventional controls are less accessible. The objective of this effort was to explore the utility of a neural network-based approach to the recognition of whole-hand gestures. Using a fiber-optic instrumented glove, gesture data were collected for a set of static gestures drawn from the manual alphabet used by the deaf. Two types of neural networks (multilayer perceptron and Kohonen self-organizing feature map) were explored. Both showed promise, but the perceptron model was quicker to implement and classification is inherent in the model. The high gesture recognition rates and quick network retraining times found in the present study suggest that a neural network approach to gesture recognition be further evaluated.
---
paper_title: A static hand gesture recognition algorithm using k-mean based radial basis function neural network
paper_content:
The accurate classification of static hand gestures plays a vital role in developing a hand gesture recognition system, which is used for human-computer interaction (HCI) and for human alternative and augmentative communication (HAAC) applications. A vision-based static hand gesture recognition algorithm consists of three stages: preprocessing, feature extraction and classification. The preprocessing stage involves the following three sub-stages: segmentation, which segments the hand region from its background using a histogram-based thresholding algorithm and transforms it into a binary silhouette; rotation, which rotates the segmented gesture to make the algorithm rotation invariant; and filtering, which effectively removes background noise and object noise from the binary image by a morphological filtering technique. To obtain a rotation invariant gesture image, a novel technique is proposed in this paper by aligning the 1st principal component of the segmented hand gesture with the vertical axis. A localized contour sequence (LCS) based feature is used here to classify the hand gestures. A k-means based radial basis function neural network (RBFNN) is also proposed here for classification of hand gestures from the LCS based feature set. The experiment is conducted on 500 training images and 500 test images of a 25-class grayscale static hand gesture image dataset of the Danish/international sign language hand alphabet. The proposed method achieves 99.6% classification accuracy, which is better than earlier reported techniques.
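A compact way to reproduce the classifier structure described above (k-means to pick RBF centres, Gaussian radial-basis features, then a linear output layer) using scikit-learn building blocks. Feature extraction (the localized contour sequence) is outside the scope of the sketch, which assumes precomputed feature vectors; the logistic-regression output layer is a stand-in for the paper's RBFNN training.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression


class KMeansRBFNet:
    """RBF network sketch: k-means centres + Gaussian features + linear classifier."""

    def __init__(self, n_centers=20, gamma=None):
        self.n_centers = n_centers
        self.gamma = gamma

    def _features(self, X):
        d2 = ((X[:, None, :] - self.centers_[None, :, :]) ** 2).sum(-1)
        return np.exp(-self.gamma * d2)

    def fit(self, X, y):
        self.centers_ = KMeans(n_clusters=self.n_centers, n_init=10,
                               random_state=0).fit(X).cluster_centers_
        if self.gamma is None:
            self.gamma = 1.0 / X.shape[1]          # simple default kernel width
        self.out_ = LogisticRegression(max_iter=1000).fit(self._features(X), y)
        return self

    def predict(self, X):
        return self.out_.predict(self._features(X))


if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = np.vstack([rng.normal(0, 1, (50, 8)), rng.normal(3, 1, (50, 8))])
    y = np.array([0] * 50 + [1] * 50)              # two toy gesture classes
    print(KMeansRBFNet(n_centers=6).fit(X, y).predict(X[:5]))
```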
---
paper_title: Visual Interpretation of Hand Gestures for Human-Computer Interaction: A Review
paper_content:
The use of hand gestures provides an attractive alternative to cumbersome interface devices for human-computer interaction (HCI). In particular, visual interpretation of hand gestures can help in achieving the ease and naturalness desired for HCI. This has motivated a very active research area concerned with computer vision-based analysis and interpretation of hand gestures. We survey the literature on visual interpretation of hand gestures in the context of its role in HCI. This discussion is organized on the basis of the method used for modeling, analyzing, and recognizing gestures. Important differences in the gesture interpretation approaches arise depending on whether a 3D model of the human hand or an image appearance model of the human hand is used. 3D hand models offer a way of more elaborate modeling of hand gestures but lead to computational hurdles that have not been overcome given the real-time requirements of HCI. Appearance-based models lead to computationally efficient "purposive" approaches that work well under constrained situations but seem to lack the generality desirable for HCI. We also discuss implemented gestural systems as well as other potential applications of vision-based gesture recognition. Although the current progress is encouraging, further theoretical as well as computational advances are needed before gestures can be widely used for HCI. We discuss directions of future research in gesture recognition, including its integration with other natural modes of human-computer interaction.
---
paper_title: A REVIEW OF VISION BASED HAND GESTURES RECOGNITION
paper_content:
With the ever-increasing diffusion of computers into society, it is widely believed that the present popular modes of interaction with computers (mouse and keyboard) will become a bottleneck in the effective utilization of information flow between computers and humans. Vision-based gesture recognition has the potential to be a natural and powerful tool supporting efficient and intuitive interaction between the human and the computer. Visual interpretation of hand gestures can help in achieving the ease and naturalness desired for Human Computer Interaction (HCI). This has motivated many researchers in computer vision-based analysis and interpretation of hand gestures as a very active research area. We surveyed the literature on visual interpretation of hand gestures in the context of its role in HCI, and various seminal works of researchers are emphasized. The purpose of this review is to introduce the field of gesture recognition as a mechanism for interaction with computers.
---
paper_title: Two-hand gesture recognition using coupled switching linear model
paper_content:
We present a method coupling multiple switching linear models. The coupled switching linear model is an interactive process of two switching linear models. Coupling is given through causal influence between their hidden discrete states. The parameters of this model are learned via the EM algorithm. Tracking is performed through the coupled-forward algorithm based on Kalman filtering and a collapsing method. A model with maximum likelihood is selected out of a few learned models during tracking. We demonstrate the application of the proposed model to tracking and recognizing two-hand gestures.
---
paper_title: Hand Gesture Modeling and Recognition using Geometric Features: A Review
paper_content:
Abstract — The use of gestures in our daily life as a natural form of human-human interaction has inspired researchers to simulate and utilize this ability in human-machine interaction. Such interaction is appealing and can take the place of the tedious interfaces of existing devices such as televisions, radios and various home appliances, and it is needed if virtual reality is to deserve its name. Gesture-based interaction ensures promising and satisfying outcomes if applied in a systematic approach, and it supports the unadorned human hand when transferring a message to these devices, which is easier, more comfortable and more desirable than communication that requires extra equipment to deliver the message. Gesturing is also important in human-human interaction, especially with hearing-impaired, deaf and mute people. In this study, we present the different researches done in this area regarding geometric features, which are considered live features, as compared with non-geometric features, which are considered blind features. We focus on the research gathered to achieve this important link between humans and the machines they have made, and we also provide our own algorithms to overcome some shortcomings in the mentioned algorithms, in order to provide a robust gesture recognition algorithm that does not suffer from the rotation hindrance that most current algorithms have.
---
paper_title: Fingertip Detection for Hand Pose Recognition
paper_content:
In this paper, a novel algorithm is proposed for fingertip detection and finger type recognition. The algorithm is applied for locating fingertips in hand region extracted by Bayesian rule based skin color segmentation. Morphological operations are performed in the segmented hand region by observing key geometric features. A probabilistic modeling of the geometric features of finger movement has made the finger type recognition process significantly robust. Proposed method can be employed in a variety of applications like sign language recognition and human robot interactions.
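The Bayesian skin-colour segmentation mentioned above can be sketched roughly as follows (a minimal illustration under my own assumptions about the histogram representation; the function names, prior and threshold are not taken from the paper):

    import numpy as np

    def bayes_skin_mask(img_quantized, skin_hist, nonskin_hist,
                        prior_skin=0.3, thresh=0.5):
        # img_quantized : HxW integer array of colour-bin indices per pixel.
        # skin_hist, nonskin_hist : per-bin likelihoods P(c|skin), P(c|not skin),
        #   estimated beforehand from labelled training pixels.
        p_c_skin = skin_hist[img_quantized]
        p_c_non = nonskin_hist[img_quantized]
        posterior = (p_c_skin * prior_skin) / (
            p_c_skin * prior_skin + p_c_non * (1.0 - prior_skin) + 1e-12)
        return posterior > thresh          # True on pixels classified as skin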
---
paper_title: Hand gesture recognition using morphological principal component analysis and an improved CombNET-II
paper_content:
A new neural network structure dedicated to time series recognition, T-CombNET, is presented. The model is developed from a large scale neural network CombNet-II, designed to deal with a very large vocabulary for character recognition. Our specific modifications of the original CombNet-II model allows it to do temporal analysis, and to be used in a large set of human movement recognition systems. This paper also presents a feature extraction method based on morphological principal component analysis that completely describes a hand gesture in 2-dimensional time varying vector. The proposed feature extraction method along with the T-CombNET structure were then used to develop a complete Japanese Kana hand alphabet recognition system consisting of 42 static postures and 34 hand motions. We obtained a superior recognition rate of 99.4% in the gesture recognition experiments when compared to different neural network structures like multi-layer perceptron, learning vector quantization (LVQ), Elman and Jordan partially recurrent neural networks, CombNET-II and the proposed T-CombNET structure.
---
paper_title: Dynamic hand gesture recognition using hidden Markov models
paper_content:
Hand gestures have become a powerful means for human-computer interaction. Traditional gesture recognition considers only the hand trajectory. For some specific applications, such as virtual reality, more natural gestures are needed, which are complex and contain movement in 3-D space. In this paper, we introduce an HMM-based method to recognize complex single-hand gestures. Gesture images are captured by a common web camera. Skin color is used to segment the hand area from the image to form a hand image sequence. Then we put forward a state-based spotting algorithm to split continuous gestures. After that, feature extraction is executed on each gesture. Features used in the system contain hand position, velocity, size, and shape. We propose a data-aligning algorithm to align feature vector sequences for training. Then an HMM is trained separately for each gesture. The recognition results demonstrate that our methods are effective and accurate.
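A per-gesture HMM classifier of the kind described can be sketched as below; this is not the authors' implementation, and it assumes the third-party hmmlearn library plus per-frame feature vectors (position, velocity, size, shape) already extracted:

    import numpy as np
    from hmmlearn.hmm import GaussianHMM   # third-party HMM library (assumed choice)

    def train_gesture_models(train_data, n_states=5):
        # train_data: dict gesture_name -> list of (T_i, D) feature sequences.
        models = {}
        for name, seqs in train_data.items():
            X = np.concatenate(seqs)                 # stack sequences for hmmlearn
            lengths = [len(s) for s in seqs]
            m = GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=50)
            m.fit(X, lengths)
            models[name] = m
        return models

    def classify_gesture(models, seq):
        # Return the gesture whose HMM assigns the sequence the highest log-likelihood.
        return max(models, key=lambda name: models[name].score(seq))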
---
paper_title: A Study on Hand Gesture Recognition Technique
paper_content:
Hand gesture recognition system can be used for interfacing between computer and human using hand gesture. This work presents a technique for a human computer interface through hand gesture recognition that is able to recognize 25 static gestures from the American Sign Language hand alphabet. The objective of this thesis is to develop an algorithm for recognition of hand gestures with reasonable accuracy. The segmentation of gray scale image of a hand gesture is performed using Otsu thresholding algorithm. Otsu algorithm treats any segmentation problem as classification problem. Total image level is divided into two classes one is hand and other is background. The optimal threshold value is determined by computing the ratio between class variance and total class variance. A morphological filtering method is used to effectively remove background and object noise in the segmented image. Morphological method consists of dilation, erosion, opening, and closing operation. Canny edge detection technique is used to find the boundary of hand gesture in image. A contour tracking algorithm is applied to track the contour in clockwise direction. Contour of a gesture is represented by a Localized Contour Sequence (L.C.S) whose samples are the perpendicular distances between the contour pixels and the chord connecting the end-points of a window centered on the contour pixels. These extracted features are applied as input to classifier. Linear classifier discriminates the images based on dissimilarity between two images. Multi Class Support Vector Machine (MCSVM) and Least Square Support Vector Machine (LSSVM) is also implemented for the classification purpose. Experimental result shows that 94.2% recognition accuracy is achieved by using linear classifier and 98.6% recognition accuracy is achieved using Multiclass Support Vector machine classifier. Least Square Support Vector Machine (LSSVM) classifier is also used for classification purpose and shows 99.2% recognition accuracy.
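Since the abstract leans on Otsu thresholding for segmentation, a minimal NumPy version of that step is sketched here (a generic Otsu implementation, not the thesis code):

    import numpy as np

    def otsu_threshold(gray):
        # Return the Otsu threshold of an 8-bit grayscale image: the level that
        # maximizes the between-class variance of "hand" vs. "background" pixels.
        hist, _ = np.histogram(gray, bins=256, range=(0, 256))
        p = hist.astype(float) / hist.sum()
        omega = np.cumsum(p)                        # class probability of background
        mu = np.cumsum(p * np.arange(256))          # cumulative mean
        mu_t = mu[-1]                               # global mean
        denom = omega * (1.0 - omega)
        denom[denom == 0] = np.nan                  # avoid division by zero
        sigma_b2 = (mu_t * omega - mu) ** 2 / denom # between-class variance
        return int(np.nanargmax(sigma_b2))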
---
paper_title: Edge Detection Techniques - An Overview
paper_content:
In computer vision and image processing, edge detection concerns the localization of significant variations of the grey level image and the identification of the physical phenomena that originated them. This information is very useful for applications in 3D reconstruction, motion, recognition, image enhancement and restoration, image registration, image compression, and so on. Usually, edge detection requires smoothing and differentiation of the image. Differentiation is an ill-conditioned problem and smoothing results in a loss of information. It is difficult to design a general edge detection algorithm which performs well in many contexts and captures the requirements of subsequent processing stages. Consequently, over the history of digital image processing a variety of edge detectors have been devised which differ in their mathematical and algorithmic properties. This paper is an account of the current state of our understanding of edge detection. We propose an overview of research in edge detection: edge definition, properties of detectors, the methodology of edge detection, the mutual influence between edges and detectors, and existing edge detectors and their implementation.
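As a concrete instance of the smoothing-plus-differentiation pipeline this overview describes, a gradient-based (Sobel) edge detector might look like the following sketch (my own illustrative choice of filter and threshold):

    import numpy as np
    from scipy import ndimage

    def sobel_edges(gray, thresh=None):
        # Gradient-based edge detection: smooth, differentiate, threshold.
        smoothed = ndimage.gaussian_filter(gray.astype(float), sigma=1.0)
        gx = ndimage.sobel(smoothed, axis=1)        # horizontal derivative
        gy = ndimage.sobel(smoothed, axis=0)        # vertical derivative
        magnitude = np.hypot(gx, gy)
        if thresh is None:
            thresh = magnitude.mean() + magnitude.std()   # arbitrary illustrative choice
        return magnitude > thresh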
---
paper_title: Edge Detection by Morphological Operations and Fuzzy Reasoning
paper_content:
This article presents a new edge detection method for gray level images based on morphological operators and fuzzy reasoning. The method estimates the edge gradient magnitude in images using various morphological operators as fuzzy membership functions. A fuzzy reasoning approach coupled with a few rules is then applied for edge sharpening. The method is compared against criteria for good edge detection, and the experimental results show its efficiency in sharpening and edge detection.
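The morphological ingredient of such a detector is typically a grey-level morphological gradient; the sketch below shows that building block only (the paper's fuzzy-reasoning stage is not reproduced, and SciPy is an assumed choice):

    from scipy import ndimage

    def morphological_gradient(gray, size=3):
        # Edge strength as the difference between grey-level dilation and erosion
        # over a size x size neighbourhood (the standard morphological gradient).
        dil = ndimage.grey_dilation(gray, size=(size, size))
        ero = ndimage.grey_erosion(gray, size=(size, size))
        return dil.astype(float) - ero.astype(float)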
---
paper_title: REAL TIME HAND GESTURE RECOGNITION USING SIFT
paper_content:
The objective of gesture recognition is to identify and distinguish human gestures and utilize these identified gestures for applications in a specific domain. In this paper we propose a new approach to build a real-time system to identify the standard gestures given by American Sign Language, or ASL, the dominant sign language of Deaf Americans, including deaf communities in the United States, in the English-speaking parts of Canada, and in some regions of Mexico. We propose a new method of improvised scale invariant feature transform (SIFT) and use the same to extract the features. The objective of the paper is to decode a gesture video into the appropriate alphabets.
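The standard SIFT step that such a pipeline builds on can be called as below (the paper's modified SIFT is not reproduced; this assumes an OpenCV build in which SIFT is available):

    import cv2   # OpenCV; SIFT availability depends on the installed build

    def sift_descriptors(gray_frame):
        # Detect keypoints and compute SIFT descriptors for one video frame.
        sift = cv2.SIFT_create()
        keypoints, descriptors = sift.detectAndCompute(gray_frame, None)
        return keypoints, descriptors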
---
paper_title: Detection and tracking of pianist hands and fingers
paper_content:
Current MIDI recording and transmitting technology allows teachers to teach piano playing remotely (or off-line): a teacher plays a MIDI-keyboard at one place and a student observes the played piano keys on another MIDI-keyboard at another place. What this technology does not allow is to see how the piano keys are played, namely: which hand and finger was used to play a key. In this paper we present a video recognition tool that makes it possible to provide this information. A video-camera is mounted on top of the piano keyboard and video recognition techniques are then used to calibrate piano image with MIDI sound, then to detect and track pianist hands and then to annotate the fingers that play the piano. The result of the obtained video annotation of piano playing can then be shown on a computer screen for further perusal by a piano teacher or a student.
---
paper_title: Hand gesture recognition using a neural network shape fitting technique
paper_content:
A new method for hand gesture recognition that is based on a hand gesture fitting procedure via a new Self-Growing and Self-Organized Neural Gas (SGONG) network is proposed. Initially, the region of the hand is detected by applying a color segmentation technique based on a skin color filtering procedure in the YCbCr color space. Then, the SGONG network is applied on the hand area so as to approach its shape. Based on the output grid of neurons produced by the neural network, palm morphologic characteristics are extracted. These characteristics, in accordance with powerful finger features, allow the identification of the raised fingers. Finally, the hand gesture recognition is accomplished through a likelihood-based classification technique. The proposed system has been extensively tested with success.
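The YCbCr skin-colour filtering used ahead of the SGONG fitting can be sketched as follows; the Cb/Cr ranges shown are commonly cited illustrative values, not necessarily those of the paper:

    import cv2
    import numpy as np

    def skin_mask_ycbcr(bgr_img, cb_range=(77, 127), cr_range=(133, 173)):
        # OpenCV orders the converted channels as Y, Cr, Cb.
        ycrcb = cv2.cvtColor(bgr_img, cv2.COLOR_BGR2YCrCb)
        cr = ycrcb[:, :, 1]
        cb = ycrcb[:, :, 2]
        mask = ((cb >= cb_range[0]) & (cb <= cb_range[1]) &
                (cr >= cr_range[0]) & (cr <= cr_range[1]))
        return mask.astype(np.uint8) * 255        # binary skin mask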
---
paper_title: Detection of Fingertips in Human Hand Movement Sequences
paper_content:
This paper presents an hierarchical approach with neural networks to locate the positions of the fingertips in grey-scale images of human hands. The first chapters introduce and sum up the research done in this area. Afterwards, our hierarchical approach and the preprocessing of the grey-scale images are described. A low-dimensional encoding of the images is done by the means of Gabor-Filters and a special kind of artificial neural net, the LLM-net, is employed to find the positions of the fingertips. The capabilities of the system are demonstrated on three tasks: locating the tip of the forefinger and of the thumb, finding the pointing-direction regardless of the operator’s pointing style, and detecting all 5 fingertips in hand movement sequences. The system is able to perform these tasks even when the fingertips are in an area with low contrast.
---
paper_title: Real-time tracking of multiple fingertips and gesture recognition for augmented desk interface systems
paper_content:
We propose a fast and robust method for tracking a user's hand and multiple fingertips; we then demonstrate gesture recognition based on measured fingertip trajectories for augmented desk interface systems. Our tracking method is capable of tracking multiple fingertips in a reliable manner even in a complex background under a dynamically changing lighting condition without any markers. First, based on its geometrical features, the location of each fingertip is located in each input infrared image frame. Then, correspondences of detected fingertips between successive image frames are determined based on a prediction technique. Our gesture recognition system is particularly advantageous for human-computer interaction (HCI) in that users can achieve interactions based on symbolic gestures at the same time that they perform direct manipulation with their own hands and fingers. The effectiveness of our proposed method has been successfully demonstrated via a number of experiments.
---
|
Title: Hand Gesture Recognition Systems: A Survey
Section 1: INTRODUCTION
Description 1: This section provides an overview of the significance of gestures in human communication, their historical context, and the various applications of hand gesture recognition systems.
Section 2: HAND MODELING FOR GESTURE RECOGNITION
Description 2: This section discusses the human hand's structure, degrees of freedom, and the different modeling techniques used for gesture recognition, including both 2D and 3D models.
Section 3: SYSTEM ARCHITECTURE
Description 3: This section outlines the phases of a hand gesture recognition system, including data acquisition, hand segmentation, pre-processing, feature extraction, and recognition.
Section 4: Gesture Modeling
Description 4: This section describes the steps involved in gesture modeling, including hand segmentation, noise removal, edge detection, and normalization.
Section 5: Hand Segmentation
Description 5: This section details various methods for hand segmentation, such as skin-based approaches, background subtraction, statistical models, and color normalization.
Section 6: Noise Removal
Description 6: This section explains different noise removal techniques, like salt and pepper noise filters, morphological erosion, and multidimensional mean methods.
Section 7: Edge Detection
Description 7: This section covers techniques for edge detection in images, including gradient-based and Laplacian-based methods.
Section 8: Normalization
Description 8: This section discusses normalization techniques aimed at feature space reduction, such as cropping, dimension unification, and significant features location.
Section 9: Feature Extraction
Description 9: This section covers the extraction of features crucial for hand gesture recognition, discussing model-based, view-based, and low-level features based approaches.
Section 10: Hand Gesture Recognition
Description 10: This section explains how gestures are recognized using machine learning techniques and classifiers, and examines supervised and unsupervised methods.
Section 11: Research Results
Description 11: This section provides a summary of research works in the domain, presenting a comparative study of backgrounds, segmentation techniques, features, and recognition methods used along with key criteria such as robustness, computational efficiency, user's tolerance, and scalability.
Section 12: CONCLUSION
Description 12: This section summarizes the findings of the survey, highlights the importance of hand gesture recognition, and discusses the current limitations and future research directions.
Section 13: ACKNOWLEDGMENTS
Description 13: This section acknowledges contributions from individuals and organizations that assisted in the research.
|
A Review on Key Management Schemes in MANET
| 5 |
---
paper_title: An efficient group key management scheme for mobile ad hoc networks
paper_content:
Group key management is one of the basic building blocks in collaborative and group-oriented applications in Mobile AdHoc Networks (MANETs). Group key establishment involves creating and distributing a common secret for all group members. However, key management for a large and dynamic group is a difficult problem because of scalability and security. Modification of membership requires the group key to be refreshed to ensure backward and forward secrecy. In this paper, we propose a Simple and Efficient Group Key (SEGK) management scheme for MANETs. Group members compute the group key in a distributed manner.
---
paper_title: Distributed symmetric key management for mobile ad hoc networks
paper_content:
Key management is an essential cryptographic primitive upon which other security primitives are built. However, none of the existing key management schemes are suitable for ad hoc networks. They are either too inefficient, not functional on an arbitrary or unknown network topology, or not tolerant to a changing network topology or link failures. Recent research on distributed sensor networks suggests that key pre-distribution schemes (KPS) are the only practical option for scenarios where the network topology is not known prior to deployment. However, all of the existing KPS schemes rely on trusted third parties (TTP) rendering them inapplicable in many ad hoc networking scenarios and thus restricting them from wide-spread use in ad hoc networks. To eliminate this reliance on TTP, we introduce distributed key pre-distribution scheme (DKPS) and construct the first DKPS prototype to realize fully distributed and self-organized key pre-distribution without relying on any infrastructure support. DKPS overcomes the main limitations of the previous schemes, namely the needs of TTP and an established routing infrastructure. It minimizes the requirements posed on the underlying networks and can be easily applied to the ad hoc networking scenarios where key pre-distribution schemes were previously inapplicable. Finally, DKPS is robust to changing topology and broken links and can work before any routing infrastructure has been established, thus facilitating the widespread deployment of secure ad hoc networks.
---
paper_title: Self-Organized Public-Key Management for Mobile Ad-Hoc Networks
paper_content:
Ad-hoc networks are networks that do not rely on a fixed infrastructure. In such networks, all networking functions are performed by the nodes themselves in a self-organized manner. Due to the infrastructure-less architecture, security in communication is a major concern. Ad-hoc networks are an evolving technology in wireless networking. The main objective is to develop a fully self-organized public-key management system for ad-hoc networks that realizes a public-key cryptographic method to perform authentication regardless of network partitions and without any centralized services. The security method to be realized is based on self-organization among network nodes by updating information.
---
paper_title: Secure and highly efficient three level key management scheme for MANET
paper_content:
A MANET (Mobile Ad hoc Network) is a convenient infrastructure-less communication network which is commonly susceptible to various attacks. Many key management schemes for MANETs have been presented to solve various security problems. Identity (ID)-based cryptography with threshold secret sharing, ECC and bilinear pairing computation is a popular approach for key management design. In this article, we adopt these approaches to construct a tree-structured and cluster-structured ad hoc network that has a three-level secure communication framework. After constructing the security structure, we evaluate the security performance and efficiency of the scheme in detail.
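Threshold secret sharing, one of the building blocks named above, is commonly realized with Shamir's scheme; a toy Python sketch is given below (my own generic illustration over a prime field, requiring Python 3.8+ for the modular inverse; it does not reproduce the paper's ID-based or pairing machinery):

    import random

    PRIME = 2**127 - 1   # a Mersenne prime, large enough for a toy example

    def make_shares(secret, k, n, prime=PRIME):
        # Split `secret` into n shares so that any k of them reconstruct it.
        coeffs = [secret] + [random.randrange(prime) for _ in range(k - 1)]
        def f(x):
            acc = 0
            for c in reversed(coeffs):            # Horner evaluation mod prime
                acc = (acc * x + c) % prime
            return acc
        return [(x, f(x)) for x in range(1, n + 1)]

    def reconstruct(shares, prime=PRIME):
        # Lagrange interpolation at x = 0 over GF(prime), using k shares.
        secret = 0
        for i, (xi, yi) in enumerate(shares):
            num, den = 1, 1
            for j, (xj, _) in enumerate(shares):
                if i != j:
                    num = (num * -xj) % prime
                    den = (den * (xi - xj)) % prime
            secret = (secret + yi * num * pow(den, -1, prime)) % prime
        return secret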
---
paper_title: URSA: ubiquitous and robust access control for mobile ad hoc networks
paper_content:
Restricting network access of routing and packet forwarding to well-behaving nodes and denying access from misbehaving nodes are critical for the proper functioning of a mobile ad-hoc network where cooperation among all networking nodes is usually assumed. However, the lack of a network infrastructure, the dynamics of the network topology and node membership, and the potential attacks from inside the network by malicious and/or noncooperative selfish nodes make the conventional network access control mechanisms not applicable. We present URSA, a ubiquitous and robust access control solution for mobile ad hoc networks. URSA implements ticket certification services through multiple-node consensus and fully localized instantiation. It uses tickets to identify and grant network access to well-behaving nodes. In URSA, no single node monopolizes the access decision or is completely trusted. Instead, multiple nodes jointly monitor a local node and certify/revoke its ticket. Furthermore, URSA ticket certification services are fully localized into each node's neighborhood to ensure service ubiquity and resilience. Through analysis, simulations, and experiments, we show that our design effectively enforces access control in the highly dynamic, mobile ad hoc network.
---
paper_title: Secure ad hoc networks
paper_content:
A microfiche containing a plurality of microimages is positioned relative to the imaging station of a microimage display device by a pivoted member journaled for rotation and a sliding traversing member.
---
paper_title: Secure and Efficient Key Management in Mobile Ad Hoc Networks
paper_content:
In mobile ad hoc networks, due to unreliable wireless media, host mobility and lack of infrastructure, providing secure communications is a big challenge in this unique network environment. Usually cryptography techniques are used for secure communications in wired and wireless networks. The asymmetric cryptography is widely used because of its versatileness (authentication, integrity, and confidentiality) and simplicity for key distribution. However, this approach relies on a centralized framework of public key infrastructure (PKI). The symmetric approach has computation efficiency, yet it suffers from potential attacks on key agreement or key distribution. In fact, any cryptographic means is ineffective if the key management is weak. Key management is a central aspect for security in mobile ad hoc networks. In mobile ad hoc networks, the computational load and complexity for key management is strongly subject to restriction of the node's available resources and the dynamic nature of network topology. In this paper, we propose a secure and efficient key management framework (SEKM) for mobile ad hoc networks. SEKM builds PKI by applying a secret sharing scheme and an underlying multicast server group. In SEKM, the server group creates a view of the certification authority (CA) and provides certificate update service for all nodes, including the servers themselves. A ticket scheme is introduced for efficient certificate service. In addition, an efficient server group updating scheme is proposed.
---
|
Title: A Review on Key Management Schemes in MANET
Section 1: Introduction
Description 1: Provide an overview of the importance of key management schemes in MANETs, discussing various cryptographic methods and their unique challenges due to the dynamic and resource-constrained nature of MANETs.
Section 2: Symmetric Key Management Schemes in MANET
Description 2: Discuss different symmetric key management schemes such as Distributed Key Pre-distribution Scheme (DKPS) and Peer Intermediaries for Key Establishment (PIKE), including their phases, features, and efficiencies.
Section 3: Asymmetric Key Management Schemes in MANET
Description 3: Outline several asymmetric key management schemes, including Secure Routing Protocol (SRP), Ubiquitous and Robust Access Control (URSA), Mobile Certificate Authority (MOCA), and others, explaining how they operate and their benefits and limitations.
Section 4: Group Key Management Schemes in MANET
Description 4: Detail methods for managing group keys, focusing on schemes like Simple and Efficient Group Key Management (SEGK), and explain their approaches for maintaining security in group communications.
Section 5: Hybrid or Composite Key Management Schemes in MANET
Description 5: Describe schemes that combine elements of symmetric and asymmetric key management, like the Zone-Based Key Management Scheme, detailing how they use hybrid approaches to improve overall security and efficiency.
Section 6: Conclusion & Future Work
Description 6: Summarize the different key management schemes discussed, highlighting the strengths and weaknesses of each. Suggest areas for future research to develop new or improved key management schemes.
|
An Overview of Localization for Wireless Sensor Networks
| 9 |
---
paper_title: Localization in cooperative Wireless Sensor Networks: A review
paper_content:
Localization in Wireless Sensor Networks has become a significant research challenge, attracting many researchers in the past decade. This paper provides a review of basic techniques and the state-of-the-art approaches for wireless sensors localization. The challenges and future research opportunities are discussed in relation to the design of the collaborative workspaces based on cooperative wireless sensor networks.
---
paper_title: GPS-less Low Cost Outdoor Localization For Very Small Devices
paper_content:
Instrumenting the physical world through large networks of wireless sensor nodes, particularly for applications like environmental monitoring of water and soil, requires that these nodes be very small, lightweight, untethered, and unobtrusive. The problem of localization, that is, determining where a given node is physically located in a network, is a challenging one, and yet extremely crucial for many of these applications. Practical considerations such as the small size, form factor, cost and power constraints of nodes preclude the reliance on GPS of all nodes in these networks. We review localization techniques and evaluate the effectiveness of a very simple connectivity metric method for localization in outdoor environments that makes use of the inherent RF communications capabilities of these devices. A fixed number of reference points in the network with overlapping regions of coverage transmit periodic beacon signals. Nodes use a simple connectivity metric, which is more robust to environmental vagaries, to infer proximity to a given subset of these reference points. Nodes localize themselves to the centroid of their proximate reference points. The accuracy of localization is then dependent on the separation distance between two-adjacent reference points and the transmission range of these reference points. Initial experimental results show that the accuracy for 90 percent of our data points is within one-third of the separation distance. However, future work is needed to extend the technique to more cluttered environments.
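The connectivity-based centroid rule described here is simple enough to state directly in code; the sketch below assumes the node has already collected the coordinates broadcast by the reference points it can hear:

    def centroid_localize(heard_beacons):
        # heard_beacons: list of (x, y) coordinates of in-range reference points,
        # taken from their periodic beacon messages.
        if not heard_beacons:
            return None                      # no connectivity information available
        xs = [x for x, _ in heard_beacons]
        ys = [y for _, y in heard_beacons]
        return (sum(xs) / len(xs), sum(ys) / len(ys))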
---
paper_title: Convex position estimation in wireless sensor networks
paper_content:
A method for estimating unknown node positions in a sensor network based exclusively on connectivity-induced constraints is described. Known peer-to-peer communication in the network is modeled as a set of geometric constraints on the node positions. The global solution of a feasibility problem for these constraints yields estimates for the unknown positions of the nodes in the network. Providing that the constraints are tight enough, simulation illustrates that this estimate becomes close to the actual node positions. Additionally, a method for placing rectangular bounds around the possible positions for all unknown nodes in the network is given. The area of the bounding rectangles decreases as additional or tighter constraints are included in the problem. Specific models are suggested and simulated for isotropic and directional communication, representative of broadcast-based and optical transmission respectively, though the methods presented are not limited to these simple cases.
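A heavily simplified, box-shaped relaxation of the connectivity constraints gives a flavour of the bounding step described above; note that the paper itself solves the full convex feasibility problem rather than using axis-aligned boxes, so the following is only an illustrative approximation:

    def connectivity_bounding_box(anchor_positions, radio_range):
        # Each heard anchor at (x, y) constrains the node to a disc of radius
        # radio_range; intersecting the discs' bounding boxes yields a rectangle
        # that contains all feasible positions under this relaxation.
        xmin = max(x - radio_range for x, _ in anchor_positions)
        xmax = min(x + radio_range for x, _ in anchor_positions)
        ymin = max(y - radio_range for _, y in anchor_positions)
        ymax = min(y + radio_range for _, y in anchor_positions)
        if xmin > xmax or ymin > ymax:
            return None                      # constraints inconsistent under the box relaxation
        return (xmin, xmax, ymin, ymax)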
---
paper_title: Range-free localization schemes for large scale sensor networks
paper_content:
Wireless Sensor Networks have been proposed for a multitude of location-dependent applications. For such systems, the cost and limitations of the hardware on sensing nodes prevent the use of range-based localization schemes that depend on absolute point-to-point distance estimates. Because coarse accuracy is sufficient for most sensor network applications, solutions in range-free localization are being pursued as a cost-effective alternative to more expensive range-based approaches. In this paper, we present APIT, a novel localization algorithm that is range-free. We show that our APIT scheme performs best when an irregular radio pattern and random node placement are considered, and low communication overhead is desired. We compare our work via extensive simulation, with three state-of-the-art range-free localization schemes to identify the preferable system configurations of each. In addition, we study the effect of location error on routing and tracking performance. We show that routing performance and tracking accuracy are not significantly affected by localization error when the error is less than 0.4 times the communication radio radius.
---
paper_title: CFL: A clustering algorithm for localization in Wireless Sensor Networks
paper_content:
Wireless sensor networks (WSNs) are a new type of communication network consisting of small sensor nodes that are distributed over an area and are responsible for gathering information from their environment. The collected information is transferred to a base station, which is called the sink node. Minimum energy consumption and maximum lifetime are two important objectives in WSNs. Clustering is a standard approach for achieving efficient and scalable performance in this type of network. It facilitates the distribution of control over the network and, hence, enables locality of communication. In this paper, we propose a clustering algorithm called CFL (clustering for localization). CFL is designed to respect the principles of clustering algorithm design while also providing an environment for designing a localization algorithm based on clustering. The proposed algorithm uses a combined weight function and tries to group the sensor nodes so that a minimum number of clusters with a maximum number of nodes in each cluster is achieved. The simulation results confirm that the proposed CFL algorithm performs better than existing algorithms.
---
paper_title: Research on the Self-localization of Wireless Sensor Networks
paper_content:
Self-localization is a key function in wireless sensor networks (WSNs). Many applications and internal mechanisms require nodes to know their location. Based on two different application environments and from the view of anchor nodes density, this paper proposes two new algorithms for distributed cooperative localization: Centroid-based with Preplaced Beacon Localization and Centroid-based with Scalable Node Localization. Both of them are simulated and analyzed in a two-dimensional (2-D) space simulation model of Matlab.
---
paper_title: Dynamic-Anchor Distributed Localization in Wireless Sensor Networks
paper_content:
In recent years there has been growing interest in wireless sensor network (WSN) applications. Such sensor networks can be used to control temperature, humidity, contamination, pollution, etc. Position information of individual nodes is useful in implementing functions such as routing and querying in ad-hoc networks. DV-Hop is a mainstream localization algorithm. Since the DV-Hop algorithm loses accuracy when nodes are scattered unevenly, the number of nodes is huge, and the topology changes dynamically, we propose a distributed localization algorithm called DA, which can overcome these disadvantages of DV-Hop. In the DA algorithm, the most appropriate node is selected to substitute for the centroid of the area through measurement of the RSSI, and the selected node is dynamically promoted to be an anchor in order to enhance the anchor density. The simulation results demonstrate that DA is more precise than DV-Hop.
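For context, the baseline DV-Hop estimate that DA improves on can be sketched as follows (a simplified version using one global average hop distance instead of per-anchor corrections; the variable names and the least-squares step are my own, and at least three anchors are assumed):

    import numpy as np

    def dv_hop_estimate(anchors, hops_to_anchors, hops_between_anchors):
        # anchors: list of (x, y) anchor coordinates.
        # hops_to_anchors: hop count from the unknown node to each anchor.
        # hops_between_anchors: matrix of hop counts between anchor pairs.
        # 1. Average hop distance, estimated from inter-anchor geometry and hop counts.
        dist_sum, hop_sum = 0.0, 0
        for i, (xi, yi) in enumerate(anchors):
            for j, (xj, yj) in enumerate(anchors):
                if i < j:
                    dist_sum += np.hypot(xi - xj, yi - yj)
                    hop_sum += hops_between_anchors[i][j]
        avg_hop_dist = dist_sum / hop_sum
        # 2. Convert hop counts to distance estimates.
        dists = [avg_hop_dist * h for h in hops_to_anchors]
        # 3. Least-squares multilateration, linearized against the last anchor.
        (xn, yn), dn = anchors[-1], dists[-1]
        A, b = [], []
        for (xi, yi), di in zip(anchors[:-1], dists[:-1]):
            A.append([2 * (xn - xi), 2 * (yn - yi)])
            b.append(di**2 - dn**2 + xn**2 - xi**2 + yn**2 - yi**2)
        pos, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
        return tuple(pos)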
---
paper_title: Distributed localization of wireless sensor networks using self-organizing maps
paper_content:
As larger sets of wireless sensor networks are being deployed, an important characteristic of the network which could enhance its capabilities is position awareness. While several approaches have been proposed for localization, that is, position awareness without using GPS, most techniques are either centralized or rely on anchor nodes. In this paper, a decentralized localization method is developed, based upon self-organizing maps. The algorithm is implemented for different size networks and the simulation results show the algorithm is efficient when compared to single processor or centralized localization methods; further the approach does not require anchor nodes. An error analysis shows that the proposed approach is a feasible method for computing the localization of sensor networks using a distributed architecture.
---
paper_title: Localization in wireless sensor networks using a mobile anchor node
paper_content:
In wireless sensor networks (WSN), sensor location plays a critical role in many applications. Having a GPS receiver on every sensor node is costly. In the past, several approaches, including range-based and range-free, have been proposed to calculate positions for randomly deployed sensor nodes. Most of them use some special nodes, called anchor nodes, which are assumed to know their own locations. Other sensors compute their locations based on the information provided by these anchor nodes. This paper describes MACL, a mobile anchor centroid localization method, which uses a single mobile anchor node that moves through the sensing field and broadcasts its current position periodically. The proposed method is radio-frequency based, so no extra hardware or data communication is needed between the sensor nodes. We use simulations and tests from an indoor deployment using the Cricket location system to investigate the localization accuracy of MACL, and find that the method is simple in principle, has low computation and communication overhead, is low cost, and offers flexible accuracy.
---
|
Title: An Overview of Localization for Wireless Sensor Networks
Section 1: Introduction
Description 1: Introduce the basic concept of wireless sensor networks (WSNs) and the importance of localization within these networks. Highlight the design factors of WSNs and provide an overview of the protocol stack.
Section 2: Need of Localization
Description 2: Discuss the necessity of localization in WSNs, illustrating its importance through various applications such as object tracking, location-based routing, and surveillance.
Section 3: Basic Localization Techniques
Description 3: Present an overview of basic localization techniques, dividing them into range-based and range-free categories, and explain their fundamental principles and examples.
Section 4: Range-Based Localization Schemes
Description 4: Describe the various range-based localization schemes, including methodologies such as Received Signal Strength Indication (RSSI), Time Difference of Arrival (TDoA), and Angle of Arrival (AoA).
Section 5: Range-Free Localization Schemes
Description 5: Outline range-free localization schemes like Centroid algorithm, DV-Hop scheme, and Amorphous Localization algorithm. Explain their mechanisms and effectiveness in WSNs.
Section 6: Advanced Range-Based Approaches
Description 6: Detail advanced range-based localization approaches that build on basic techniques, such as self-localization methods and clustering algorithms for localization.
Section 7: Advanced Range-Free Approaches
Description 7: Highlight advanced range-free localization techniques, including Dynamic Anchor distributed Localization, Mobile anchor Centroid localization, and distributed SOM-based localization schemes.
Section 8: Estimators for Localization
Description 8: Discuss the importance of estimators to improve accuracy in localization by addressing error-prone metrics such as RSS measurements.
Section 9: Conclusion
Description 9: Summarize the discussed range-based and range-free localization schemes and their respective advancements. Reflect on the future research directions to enhance localization accuracy and security in WSNs.
|
A survey of assistive technologies and applications for blind users on mobile platforms: a review and foundation for research
| 12 |
---
paper_title: Pygmalion: A COMPUTER PROGRAM TO Model and Stimulate Creative Thought
paper_content:
Preface: The following is a map of this document. Chapters 1,2 --A psychological model of creative thought, forming the basis for the PYGMALION design principles. Chapter 3 --Other projects which adhere to some of the same principles. Chapters 4,5 --The PYGMALION programming environment in detail. Chapter 6 --Examples of PYGMALION programs and data structures. Chapter 7 --Conclusions and suggestions for the future. This paper places equal emphasis on presenting a psychological model of thought and using the model in a computer environment. Readers interested in aspects of creative thought which can be assisted by a computer should read chapters 1 and 2. Readers interested in how the PYGMALION system attempts to stimulate creative thought should look at chapter 6 (mostly pictures) to get the flavor, then read chapters 4 and 5. The works of others which deal with the same aspects are described in chapter 3. Chapter 7 suggests areas for future exploration. Thorough readers will read the chapters in order. Chapter 6 and sections 4-A through 4-D are a minimal set for readers in a hurry. There are three parts to this report.
---
paper_title: Overview of auditory representations in human-machine interfaces
paper_content:
In recent years, a large number of research projects have focused on the use of auditory representations in a broadened scope of application scenarios. Results in such projects have shown that auditory elements can effectively complement other modalities not only in the traditional desktop computer environment but also in virtual and augmented reality, mobile platforms, and other kinds of novel computing environments. The successful use of auditory representations in this growing number of application scenarios has in turn prompted researchers to rediscover the more basic auditory representations and extend them in various directions. The goal of this article is to survey both classical auditory representations (e.g., auditory icons and earcons) and those auditory representations that have been created as extensions to earlier approaches, including speech-based sounds (e.g., spearcons and spindex representations), emotionally grounded sounds (e.g., auditory emoticons and spemoticons), and various other sound types used to provide sonifications in practical scenarios. The article concludes by outlining the latest trends in auditory interface design and providing examples of these trends.
---
paper_title: Tactons: Structured Tactile Messages for Non-Visual Information Display
paper_content:
Tactile displays are now becoming available in a form that can be easily used in a user interface. This paper describes a new form of tactile output. Tactons, or tactile icons, are structured, abstract messages that can be used to communicate messages non-visually. A range of different parameters can be used for Tacton construction including: frequency, amplitude and duration of a tactile pulse, plus other parameters such as rhythm and location. Tactons have the potential to improve interaction in a range of different areas, particularly where the visual display is overloaded, limited in size or not available, such as interfaces for blind people or in mobile and wearable devices. This paper describes Tactons, the parameters used to construct them and some possible ways to design them. Examples of where Tactons might prove useful in user interfaces are given.
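As a rough illustration of how the Tacton parameters listed above (frequency, amplitude, pulse duration, rhythm and body location) could be captured in software, the following Python sketch defines a simple data structure; the field names, value ranges and default body location are illustrative assumptions rather than anything specified in the paper.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Tacton:
        """Abstract tactile message assembled from the parameters discussed above."""
        frequency_hz: float                  # vibration frequency of each pulse
        amplitude: float                     # normalised drive level, 0..1
        pulse_durations_ms: List[int]        # rhythm expressed as successive pulse lengths
        gaps_ms: List[int] = field(default_factory=list)  # silences between pulses
        body_location: str = "wrist"         # actuator placement label (assumed)

    # Example: a three-pulse "new message" pattern delivered at the waist
    new_message = Tacton(250.0, 0.8, [100, 100, 300], [50, 50], "waist")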
---
paper_title: Perceptual design of haptic icons
paper_content:
The bulk of applications for haptic feedback employ direct rendering approaches wherein a user touches a virtual model of some "real" thing, often displayed graphically as well. We propose a new class of applications based on abstract messages, ranging from "haptic icons" - brief signals conveying an ob- ject's or event's state, function or content - to an expressive haptic language for interpersonal communication. Building this language requires us to understand how synthetic haptic signals are perceived, and what they can mean to us. Experiments presented here address the perception question by using an effi- cient version of Multidimensional Scaling (MDS) to extract perceptual axes for complex haptic icons: once this space is mapped, icons can be designed to maximize both differentiability and individual salience. Results show that a set of icons constructed by varying the frequency, magnitude and shape of 2-sec, time-invariant wave shapes map to two perceptual axes, which differ depending on the signals' frequency range; and suggest that expressive capability is maxi- mized in one frequency subspace.
---
paper_title: Spectral discrimination thresholds comparing audio and haptics for complex stimuli
paper_content:
Individuals with normal hearing are generally able to discriminate auditory stimuli that have the same fundamental frequency but different spectral content. This study concerns the extent to which it is possible to perform the same differentiation considering vibratory tactile stimuli. Three perceptual experiments have been carried out in an attempt to compare discrimination thresholds in terms of spectral differences between auditory and vibratory tactile stimulations. The first test consists of assessing the subject's ability to discriminate between three signals with distinct spectral content. The second test focuses on the measurement of the discrimination threshold between a pure tone signal and a signal composed of two pure tones, varying the amplitude and frequency of the second tone. Finally, in the third test the discrimination threshold is measured between a tone with even harmonic components and a tone with odd ones. The results show that it is indeed possible to discriminate between haptic signals having the same fundamental frequency but different spectral content. The threshold of sensitivity for detection is markedly less than for audio stimuli.
---
paper_title: Tone-2 tones discrimination task comparing audio and haptics
paper_content:
To investigate the capabilities of human beings to differentiate between tactile-vibratory stimuli with the same fundamental frequency but with different spectral content, this study concerns discrimination tasks comparing audio and haptic performances. Using an up-down 1 dB step adaptive procedure, the experimental protocol consists of measuring the discrimination threshold between a pure tone signal and a stimulus composed of two concurrent pure tones, changing the amplitude and frequency of the second tone. The task is performed employing exactly the same experimental apparatus (computer, AD-DA converters, amplifiers and drivers) for both audio and tactile modalities. The results show that it is indeed possible to discriminate between signals having the same fundamental frequency but different spectral content for both haptic and audio modalities, the latter being notably more sensitive. Furthermore, particular correlations have been found between the frequency of the second tone and the discrimination threshold values, for both audio and tactile modalities.
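The up-down 1 dB step adaptive procedure mentioned above can be sketched in a few lines of Python; the simulated listener, starting level and stopping rule below are illustrative assumptions, not parameters taken from the study.

    import random

    def run_staircase(respond, start_db=-10.0, step_db=1.0, max_reversals=8):
        """Simple 1-up/1-down adaptive staircase with a fixed 1 dB step.
        respond(level_db) returns True if the listener correctly discriminates
        the two-tone stimulus from the pure tone at that level of the second tone."""
        level = start_db
        last_direction = None            # -1 = made harder, +1 = made easier
        reversals = []
        while len(reversals) < max_reversals:
            direction = -1 if respond(level) else +1
            if last_direction is not None and direction != last_direction:
                reversals.append(level)  # record the level at each reversal
            level += direction * step_db
            last_direction = direction
        return sum(reversals) / len(reversals)   # threshold estimate

    # Hypothetical listener whose accuracy grows with the level of the second tone
    def fake_listener(level_db, true_threshold=-15.0):
        p_correct = 1.0 / (1.0 + 10 ** ((true_threshold - level_db) / 4.0))
        return random.random() < p_correct

    print("Estimated discrimination threshold (dB):", run_staircase(fake_listener))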
---
paper_title: Auditory Icons: Using Sound in Computer Interfaces
paper_content:
Carroll and Campbell have exercised themselves over a straw man not subscribed to by us. In doing so, they have misrepresented our position and even the statements in our paper. In reply, we restate as clearly as we can the position for which we actually did and do argue and give examples of their misrepresentations. The underlying issue seems to concern the advantages of using technical psychological theories to identify underlying mechanisms in human-computer interaction. We argue that such theories are an important part of a science of human-computer interaction. We argue further that technical theories must be considered in the context of the uses to which they are put. Such considerations help the theorist to determine what is a good approximation, the degree of formalization that is justified, the appropriate commingling of qualitative and quantitative techniques, and encourages cumulative progress through the heuristic of divide and conquer.
---
paper_title: Earcons and icons: their structure and common design principles
paper_content:
In this article we examine earcons, which are audio messages used in the user-computer interface to provide information and feedback to the user about computer entities. (Earcons include messages and functions, as well as states and labels.) We identify some design principles that are common to both visual symbols and auditory messages, and discuss the use of representational and abstract icons and earcons. We give some examples of audio patterns that may be used to design modules for earcons, which then may be assembled into larger groupings called families. The modules are single pitches or rhythmicized sequences of pitches called motives. The families are constructed about related motives that serve to identify a family of related messages. Issues concerned with learning and remembering earcons are discussed.
---
paper_title: The SonicFinder: an interface that uses auditory icons
paper_content:
The appropriate use of nonspeech sounds has the potential to add a great deal to the functionality of computer interfaces. Sound is a largely unexploited medium of output, even though it plays an integral role in our everyday encounters with the world, a role that is complementary to vision. Sound should be used in computers as it is in the world, where it conveys information about the nature of sound-producing events. Such a strategy leads to auditory icons, which are everyday sounds meant to convey information about computer events by analogy with everyday events. Auditory icons are an intuitively accessible way to use sound to provide multidimensional, organized information to users. ::: ::: These ideas are instantiated in the SonicFinder, which is an auditory interface I developed at Apple Computer, Inc. In this interface, information is conveyed using auditory icons as well as standard graphical feedback. I discuss how events are mapped to auditory icons in the SonicFinder, and illustrate how sound is used by describing a typical interaction with this interface. ::: ::: Two major gains are associated with using sound in this interface: an increase in direct engagement with the model world of the computer and an added flexibility for users in getting information about that world. These advantages seem to be due to the iconic nature of the mappings used between sound and the information it is to convey. I discuss sound effects and source metaphors as methods of extending auditory icons beyond the limitations implied by literal mappings, and I speculate on future directions for such interfaces.
---
paper_title: The hapticon editor: a tool in support of haptic communication research
paper_content:
We define haptic icons, or "hapticons", as brief programmed forces applied to a user through a haptic interface, with the role of communicating a simple idea in manner similar to visual or auditory icons. In this paper we present the design and implementation of an innovative software tool and graphical interface for the creation and editing of hapticons. The tool's features include various methods for creating new icons including direct recording of manual trajectories and creation from a choice of basis waveforms; novel direct-manipulation icon editing mechanisms, integrated playback and convenient storage of icons to file. We discuss some ways in which the tool has aided our research in the area of haptic iconography and present an innovative approach for generating and rendering simple textures on a low degree of freedom haptic device using what we call terrain display.
---
paper_title: Human Spatial Navigation via a Visuo-Tactile Sensory Substitution System
paper_content:
Spatial navigation within a real 3-D maze was investigated to study space perception on the sole basis of tactile information transmitted by means of a ‘tactile vision substitution system' (TVSS) allowing the conversion of optical images—collected by a micro camera—into ‘tactile images’ via a matrix in contact with the skin. The development of such a device is based on concepts of cerebral and functional plasticity, enabling subjective reproduction of visual images from tactile data processing. Blindfolded sighted subjects had to remotely control the movements of a robot on which the TVSS camera was mounted. Once familiarised with the cues in the maze, the subjects were given two exploration sessions. Performance was analysed according to an objective point of view (exploration time, discrimination capacity), as well as a subjective one (speech). The task was successfully carried out from the very first session. As the subjects took a different path during each navigation, a gradual improvement in perform...
---
paper_title: Brain plasticity: ‘visual’ acuity of blind persons via the tongue
paper_content:
The ‘visual’ acuity of blind persons perceiving information through a newly developed human–machine interface, with an array of electrical stimulators on the tongue, has been quantified using a standard ophthalmological test (Snellen Tumbling E). Acuity without training averaged 20/860. This doubled with 9 h of training. The interface may lead to practical devices for persons with sensory loss such as blindness, and offers a means of exploring late brain plasticity.
---
paper_title: Characteristics of reading rate and manual scanning patterns of blind Optacon readers.
paper_content:
Tactual reading is slow compared with sighted reading, and the rate-limiting constraints imposed on tactile readers are only vaguely understood. Like the eye movements of sighted reading, the text-scanning hand movements of tactual reading provide a means to investigate the operative system. We examined the reading hand movements of 10 blind readers using the Optacon, the electronic reading aid used most commonly by blind readers. Subjects read texts of graded difficulty, and their reading hand movements were recorded. Rate and scanning measures were used to characterize reading performance. The group mean reading rate was found to be 28.2 words/min. Reading rate measured in letter spaces per minute was independent of text difficulty.
---
paper_title: Waypoint navigation with a vibrotactile waist belt
paper_content:
Presenting waypoint navigation on a visual display is not suited for all situations. The present experiments investigate if it is feasible to present the navigation information on a tactile display. Important design issue of the display is how direction and distance information must be coded. Important usability issues are the resolution of the display and its usefulness in vibrating environments. In a pilot study with 12 pedestrians, different distance-coding schemes were compared. The schemes translated distance to vibration rhythm while the direction was translated into vibration location. The display consisted of eight tactors around the user's waist. The results show that mapping waypoint direction on the location of vibration is an effective coding scheme that requires no training, but that coding for distance does not improve performance compared to a control condition with no distance information. In Experiment 2, the usefulness of the tactile display was shown in two case studies with a helicopter and a fast boat.
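As an illustration of the coding scheme described above (waypoint direction mapped to the location of vibration on an eight-tactor belt, distance mapped to vibration rhythm), here is a minimal Python sketch; the tactor numbering, the clamping range and the distance-to-interval rule are assumptions made for the example.

    NUM_TACTORS = 8   # evenly spaced around the waist; index 0 assumed to be straight ahead

    def select_tactor(bearing_deg):
        """Map a waypoint bearing (degrees clockwise from the user's heading)
        to the nearest of eight tactors spaced 45 degrees apart."""
        return int(round((bearing_deg % 360.0) / (360.0 / NUM_TACTORS))) % NUM_TACTORS

    def pulse_interval_s(distance_m):
        """Illustrative rhythm coding: pulse faster as the waypoint gets closer,
        clamped between 0.2 s and 2.0 s."""
        return max(0.2, min(2.0, distance_m / 25.0))

    # Waypoint 100 degrees to the right and 40 m away
    print(select_tactor(100.0), pulse_interval_s(40.0))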
---
paper_title: Vibrotactile pattern perception: extraordinary observers.
paper_content:
Two sighted people showed a remarkable ability to perceive vibrotactile patterns generated by the Optacon, a reading aid for the blind. These individuals were able to read at very high rates, 70 to 100 words per minute, through their fingertips. Additional testing showed them to be much better than other people at discriminating and recognizing vibrotactile patterns.
---
paper_title: Tactile Guidance of Movement
paper_content:
In some prototype mobility aids for the blind information about the environment is obtained with the aid of patterns within a matrix of tactile point stimuli. The aim of this report is to summarize some experiments on the possibilities of guiding movements in 3-D space by devices of this kind, the movements studied being: (1) batting a ball, (2) walking and pointing to a target, and (3) slalom walking. The results were that movements could be guided by such a matrix with reasonable precision and time consumption. There are many remaining problems, especially in a cluttered environment, but they can be expected to be decreased if we are able to increase our knowledge about how touch, or rather the haptic system, is functioning, and if we can utilize this knowledge in constructing more effective tactile displays.
---
paper_title: Seeing with the brain
paper_content:
We see with the brain, not the eyes (Bach-y-Rita, 1972); images that pass through our pupils go no further than the retina. From there image information travels to the rest of the brain by means of coded pulse trains, and the brain, being highly plastic, can learn to interpret them in visual terms. Perceptual levels of the brain interpret the spatially encoded neural activity, modified and augmented by nonsynaptic and other brain plasticity mechanisms (Bach-y-Rita, 1972, 1995, 1999, in press). However, the cognitive value of that information is not merely a process of image analysis. Perception of the image relies on memory, learning, contextual interpretation (e.g., we perceive intent of the driver in the slight lateral movements of a car in front of us on the highway), cultural, and other social factors that are probably exclusively human characteristics that provide “qualia” (Bach-y-Rita, 1996b). This is the basis for our tactile vision substitution system (TVSS) studies that, starting in 1963, have demonstrated that visual information and the subjective qualities of seeing can be obtained tactually using sensory substitution systems.
---
paper_title: Braille Display by Lateral Skin Deformation with the STReSS2 Tactile Transducer
paper_content:
Earlier work with a 1D tactile transducer demonstrated that lateral skin deformation is sufficient to produce sensations similar to those felt when brushing a finger against a line of Braille dots. Here, we extend this work to the display of complete 6-dot Braille characters using a general purpose 2D tactile transducer called STReSS2. The legibility of the produced Braille was evaluated by asking seven expert Braille readers to identify meaningless 5-letter strings as well as familiar words. Results indicate that reading was difficult but possible for most individuals. The superposition of texture to the sensation of a dot improved performance. The results contain much information to guide the design of a specialized Braille display operating by lateral skin deformation. They also suggest that rendering for contrast rather than realism may facilitate Braille reading when using a weak tactile transducer
---
paper_title: Tactile letter recognition: Pattern duration and modes of pattern generation
paper_content:
Measurements were made of the ability of subjects to identify vibrotactile patterns presented to their fingertips. The patterns were letters of the alphabet generated on the tactile display of the Optacon. Five different modes of pattern generation were examined. Two of the modes, static and scan, involved full-field presentations of the letters. In the remaining three modes, patterns were generated by presenting parts of the letters sequentially. In one mode, the letters were exposed by a slit passing across them. In the other two modes, the patterns were generated as though the letter were being drawn on the skin. Performance in all five modes was examined as a function of pattern duration, with durations ranging from 4 to 1,000 msec. Increasing duration, up to 400 msec, resulted in generally improved performance, although the functions relating performance and duration differed according to the mode of presentation. Contrary to previous results, the static mode produced the best overall performance level. Some possible reasons for the disagreement between the present results and previous results and some models of cutaneous pattern recognition are discussed.
---
paper_title: SWAN: System for Wearable Audio Navigation
paper_content:
Wearable computers can certainly support audio-only presentation of information; a visual interface need not be present for effective user interaction. A system for wearable audio navigation (SWAN) is being developed to serve as a navigation and orientation aid for persons temporarily or permanently visually impaired. SWAN is a wearable computer consisting of audio-only output and tactile input via a handheld interface. SWAN aids a user in safe pedestrian navigation and includes the ability for the user to author new GIS data relevant to their needs of wayfinding, obstacle avoidance, and situational awareness support. Emphasis is placed on representing pertinent data with non-speech sounds through a process of sonification. SWAN relies on a geographic information system (GIS) infrastructure for supporting geocoding and spatialization of data. Furthermore, SWAN utilizes novel tracking technology.
---
paper_title: Generalized learning of visual-to-auditory substitution in sighted individuals
paper_content:
Abstract Visual-to-auditory substitution involves delivering information about the visual world using auditory input. Although the potential suitability of sound as visual substitution has previously been demonstrated, the basic mechanism behind crossmodal learning is largely unknown; particularly, the degree to which learning generalizes to new stimuli has not been formally tested. We examined learning processes involving the use of the image-to-sound conversion system developed by Meijer [Meijer, P., 1992. An experimental system for auditory image representations. IEEE Trans Biom Eng. 39 (2), 112-121.] that codes visual vertical and horizontal axes into frequency and time representations, respectively. Two behavioral experiments provided training to sighted individuals in a controlled environment. The first experiment explored the early learning stage, comparing performance of individuals who received short-term training and those who were only explicitly given the conversion rules. Both groups performed above chance, suggesting an intuitive understanding of the image–sound relationship; the lack of group difference indicates that this intuition could be acquired simply on the basis of explicit knowledge. The second experiment involved training over a three-week period using a larger variety of stimuli. Performance on both previously trained and novel items was examined over time. Performance on the familiar items was higher than on the novel items, but performance on the latter improved over time. While the lack of improvement with the familiar items suggests memory-based performance, the improvement with novel items demonstrated generalized learning, indicating abstraction of the conversion rules such that they could be applied to interpret auditory patterns coding new visual information. Such generalization could provide a basis for the substitution in a constantly changing visual environment.
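The vertical-to-frequency, horizontal-to-time conversion rule described above can be sketched as a simple column scan in Python; the frequency range, scan duration and sample rate below are arbitrary choices for illustration and are not the parameters of Meijer's actual system.

    import numpy as np

    def image_to_sound(image, duration_s=1.0, fs=16000, f_lo=200.0, f_hi=5000.0):
        """Scan a grayscale image (2-D array, values 0..1, row 0 at the top) from left
        to right: each column becomes a short audio frame in which the pixel row maps to
        sinusoid frequency (row 0 gets the highest frequency) and brightness to amplitude."""
        n_rows, n_cols = image.shape
        samples_per_col = int(duration_s * fs / n_cols)
        t = np.arange(samples_per_col) / fs
        freqs = np.linspace(f_hi, f_lo, n_rows)          # top row -> highest frequency
        frames = []
        for c in range(n_cols):
            weights = image[:, c][:, None]               # brightness of each pixel in the column
            frames.append((weights * np.sin(2 * np.pi * freqs[:, None] * t)).sum(axis=0))
        audio = np.concatenate(frames)
        peak = np.abs(audio).max()
        return audio / peak if peak > 0 else audio

    # A diagonal line running from bottom-left to top-right is heard as a rising sweep
    waveform = image_to_sound(np.eye(64)[::-1])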
---
paper_title: A real-time experimental prototype for enhancement of vision rehabilitation using auditory substitution
paper_content:
The rehabilitation of blindness, using noninvasive methods, requires sensory substitution. A theoretical frame for sensory substitution has been proposed (C. Veraart, 1989) which consists of a model of the deprived sensory system connected to an inverse model of the substitutive sensory system. This paper addresses the feasibility of this conceptual model in the case of auditory substitution, and its implementation as a rough model of the retina connected to an inverse linear model of the cochlea. The authors have developed an experimental prototype. It aims at allowing optimization of the sensory substitution process. This prototype is based on a personal computer which is connected to a miniature head-fixed video camera and to headphones. A visual scene is captured. Image processing achieves edge detection and graded resolution. Each picture element (pixel) of the processed image is assigned a sinusoidal tone; weighted summation of these sinewaves builds up a complex auditory signal which is transduced by the headphones. On-line selection of various parameters and real-time functioning of the device allow optimization of parameters during psychophysical experimentations. Assessment of this implementation has been initiated, and has so far demonstrated prototype usefulness for pattern recognition. An integrated circuit of this system is to be developed.
---
paper_title: Navigation Performance With a Virtual Auditory Display: Effects of Beacon Sound, Capture Radius, and Practice
paper_content:
OBJECTIVE: We examined whether spatialized nonspeech beacons could guide navigation and how sound timbre, waypoint capture radius, and practice affect performance. BACKGROUND: Auditory displays may assist mobility and wayfinding for those with temporary or permanent visual impairment, but they remain understudied. Previous systems have used speech-based interfaces. METHOD: Participants (108 undergraduates) navigated three maps, guided by one of three beacons (pink noise, sonar ping, or 1000-Hz pure tone) spatialized by a virtual reality engine. Dependent measures were efficiency of time and path length. RESULTS: Overall navigation was very successful, with significant effects of practice and capture radius, and interactions with beacon sound. Overshooting and subsequent hunting for waypoints was exacerbated for small radius conditions. A human-scale capture radius (1.5 m) and sonar-like beacon yielded the optimal combination for safety and efficiency. CONCLUSION: The selection of beacon sound and capture radius depends on the specific application, including whether speed of travel or adherence to path are of primary concern. Extended use affects sound preferences and quickly leads to improvements in both speed and accuracy. APPLICATION: These findings should lead to improved wayfinding systems for the visually impaired as well as for first responders (e.g., firefighters) and soldiers.
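A minimal Python sketch of the waypoint-capture logic studied above: the active beacon advances to the next waypoint once the listener comes within the capture radius. Two-dimensional coordinates in metres and the 1.5 m default radius are used purely for illustration.

    import math

    def navigate(position_stream, waypoints, capture_radius_m=1.5):
        """Yield the index of the active waypoint after every position update,
        advancing whenever the user enters the capture radius of the current one."""
        current = 0
        for (x, y) in position_stream:
            if current < len(waypoints):
                wx, wy = waypoints[current]
                if math.hypot(wx - x, wy - y) <= capture_radius_m:
                    current += 1            # waypoint captured: switch the beacon
            yield current

    # Example: a straight walk past two waypoints placed 5 m apart
    path = [(i * 0.5, 0.0) for i in range(25)]
    print(list(navigate(path, [(5.0, 0.0), (10.0, 0.0)])))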
---
paper_title: An application of bio-feedback in the rehabilitation of the blind.
paper_content:
The long term blind exhibit diminished awareness of limb position; kinaesthetic feedback alone providing insufficient positional information. Experiments to evaluate simple hand held electronic travel aids for the blind have shown that failure to hold the aid in the correct orientation leads to a failure to detect important hazards. Consequently some potential users may come to reject the aid. Experimental results show that by providing an auditory alarm signal whenever an aid is not held horizontally it is possible to train subjects to maintain the correct holding position. The positive effects of short periods of training with feedback are retained when the feedback is removed. It is proposed to make available a small number of modified (with feedback) aids for use during the early stages of client training.
---
paper_title: A sonar aid to enhance spatial perception of the blind: engineering design and evaluation
paper_content:
The design of an air sonar device with a new form of binaural display is described which aids the blind in perceiving their environment. Some of the limitations of knowledge of human perception and the influence this has on a specification for the device are discussed. Inherent limitations in the binaural aid both in terms of technology development and performance are also explained. The paper describes what is expected of the man-machine control system in a mobility setting and discusses the technique of evaluating a man-machine system so as to assess the machine performance.
---
paper_title: Real-time assistance prototype — A new navigation aid for blind people
paper_content:
This paper presents a new prototype for use as a travel aid for blind people. The system is developed to complement traditional navigation systems such as the white cane and guide dogs. The system consists of two stereo cameras and a portable computer for processing the environmental information. The aim of the system is to detect the static and dynamic objects from the surrounding environment and transform them into acoustical signals. Through stereophonic headphones, the user perceives the acoustic image of the environment, the volume of the objects, moving object direction and trajectory, its distance relative to the user and the free paths in a range of 5 m to 15 m. The acoustic signals represent a short train of delta sounds externalized with non-individual Head-Related Transfer Functions generated in an anechoic chamber. Experimental results show that users were able to control and navigate with the system safely both in familiar and unfamiliar environments.
---
paper_title: CREATING AND ACCESSING AUDIOTACTILE IMAGES WITH "HFVE" VISION SUBSTITUTION SOFTWARE
paper_content:
The HFVE (Heard and Felt Vision Effects) vision substitution system uses moving speech-like sounds and tactile effects to present aspects of visual images. This paper describes several audio- and interaction-related improvements. A separate "buzz track" allows more accurate perception of shape, and additional sound cues can be added to this new track, instead of distorting the speech. Details are given of improved ways of presenting image “layout”, and the HFVE approach is compared to other audio vision substitution systems. Blind users can create or add to images using a standard computer mouse (or joystick), by hearing similar sound cues. Finally, a facility for defining and capturing material visible on a computer screen is described.
---
paper_title: Auditory Information Design
paper_content:
The prospect of computer applications making “noises” is disconcerting to some. Yet the soundscape of the real world does not usually bother us. Perhaps we only notice a nuisance? This thesis is an approach for designing sounds that are useful information rather than distracting “noise”. The approach is called TaDa because the sounds are designed to be useful in a Task and true to the Data. ::: Previous researchers in auditory display have identified issues that need to be addressed for the field to progress. The TaDa approach is an integrated approach that addresses an array of these issues through a multifaceted system of methods drawn from HCI, visualisation, graphic design and sound design. A task-analysis addresses the issue of usefulness. A data characterisation addresses perceptual faithfulness. A case-based method provides semantic linkage to the application domain. A rule-based method addresses psychoacoustic control. A perceptually linearised sound space allows transportable auditory specifications. Most of these methods have not been used to design auditory displays before, and each has been specially adapted for this design domain. ::: The TaDa methods have been built into computer-aided design tools that can assist the design of a more effective display, and may allow less than experienced designers to make effective use of sounds. The case-based method is supported by a database of examples that can be searched by an information analysis of the design scenario. The rule-based method is supported by a direct manipulation interface which shows the available sound gamut of an audio device as a 3D coloured object that can be sliced and picked with the mouse. These computer-aided tools are the first of their kind to be developed in auditory display. ::: The approach, methods and tools are demonstrated in scenarios from the domains of mining exploration, resource monitoring and climatology. These practical applications show that sounds can be useful in a wide variety of information processing activities which have not been explored before. The sounds provide information that is difficult to obtain visually, and improve the directness of interactions by providing additional affordances.
---
paper_title: Accessing Audiotactile Images with HFVE Silooet
paper_content:
In this paper, recent developments of the HFVE vision-substitution system are described; and the initial results of a trial of the "Silooet" software are reported. The system uses audiotactile methods to present features of visual images to blind people. Included are details of presenting objects found in prepared media and live images; object-related layouts and moving effects (including symbolic paths); and minor enhancements that make the system more practical to use. Initial results are reported from a pilot study that tests the system with untrained users.
---
paper_title: A computer-vision based sensory substitution device for the visually impaired (See ColOr)
paper_content:
Audio-based Sensory Substitution Devices (SSDs) perform adequately when sensing and mapping low-level visual features into sound. Yet, their limitations become apparent when it comes to represent high-level or conceptual information involved in vision. We introduce See ColOr as an SSD that senses color and depth to convert them into musical instrument sounds. In addition and unlike any other approach, our SSD extends beyond a sensing prototype, by integrating computer vision methods to produce reliable knowledge about the physical world (effortlessly for the user). Experiments reported in this thesis reveal that our See ColOr SSD is learnable, functional, and provides easy interaction. In moderate time, participants were able to grasp visual information from the environment out of which they could derive: spatial awareness, ability to find someone, location of daily objects, and skill to walk safely avoiding obstacles. Our encouraging results open a door towards autonomous mobility of the blind.
---
paper_title: From software product lines to software ecosystems
paper_content:
Software product line companies increasingly expand their platform outside their organizational boundaries, in effect transitioning to a software ecosystem approach. In this paper, we discuss the emerging trend of software ecosystems and provide an overview of the key concepts and implications of adopting a software ecosystem approach. We define the notion of software ecosystems and introduce a taxonomy. Finally, we explore the implications of software ecosystems for the way companies build software.
---
paper_title: From continuous improvement to collaborative innovation: the next challenge in supply chain management
paper_content:
This paper considers the growing importance of inter-company collaboration, and develops the concept of intra-company continuous improvement through to what may be termed collaborative innovation between members of an extended manufacturing enterprise (EME). The importance of ICTs to such company networks is considered but research has shown that no amount of technology can overcome a lack of trust and ineffective goal setting between key partners involved in the cross-company projects. Different governance models may also impact on the success or otherwise of the network. This paper provides an overview of the main topics considered in this Special Issue.
---
paper_title: Gyrophone: recognizing speech from gyroscope signals
paper_content:
We show that the MEMS gyroscopes found on modern smart phones are sufficiently sensitive to measure acoustic signals in the vicinity of the phone. The resulting signals contain only very low-frequency information (<200Hz). Nevertheless we show, using signal processing and machine learning, that this information is sufficient to identify speaker information and even parse speech. Since iOS and Android require no special permissions to access the gyro, our results show that apps and active web content that cannot access the microphone can nevertheless eavesdrop on speech in the vicinity of the phone.
---
paper_title: Towards Real-time Emergency Response using Crowd Supported Analysis of Social Media
paper_content:
This position paper outlines an ongoing research project that aims to incorporate crowdsourcing as part of an emergency response system. The proposed system's novelty is that it integrates crowdsourcing into its architecture to analyze and structure social media content posted by microbloggers and service users, including emergency response coordinators and victims, during the event or disaster. An important challenge in this approach is identifying appropriate tasks to crowdsource, and adopting effective motivation strategies.
---
paper_title: A crowdsourcing platform for the construction of accessibility maps
paper_content:
We present in this article a crowdsourcing platform that enables the collaborative creation of accessibility maps. The platform provides means for integration of different kinds of data, collected automatically or with user intervention, to augment standard maps with accessibility information. The article shows the architecture of the platform, dedicating special attention to the smartphone applications we developed for data collection. The article also describes a preliminary experiment conducted in the field, showing how the analysis of data produced by our solution can bring novel insights into accessibility challenges that can be found in cities.
---
paper_title: Medusa: a programming framework for crowd-sensing applications
paper_content:
The ubiquity of smartphones and their on-board sensing capabilities motivates crowd-sensing, a capability that harnesses the power of crowds to collect sensor data from a large number of mobile phone users. Unlike previous work on wireless sensing, crowd-sensing poses several novel requirements: support for humans-in-the-loop to trigger sensing actions or review results, the need for incentives, as well as privacy and security. Beyond existing crowd-sourcing systems, crowd-sensing exploits sensing and processing capabilities of mobile devices. In this paper, we design and implement Medusa, a novel programming framework for crowd-sensing that satisfies these requirements. Medusa provides high-level abstractions for specifying the steps required to complete a crowd-sensing task, and employs a distributed runtime system that coordinates the execution of these tasks between smartphones and a cluster on the cloud. We have implemented ten crowd-sensing tasks on a prototype of Medusa. We find that Medusa task descriptions are two orders of magnitude smaller than standalone systems required to implement those crowd-sensing tasks, and the runtime has low overhead and is robust to dynamics and resource attacks.
---
paper_title: Combining crowdsourcing and google street view to identify street-level accessibility problems
paper_content:
Poorly maintained sidewalks, missing curb ramps, and other obstacles pose considerable accessibility challenges; however, there are currently few, if any, mechanisms to determine accessible areas of a city a priori. In this paper, we investigate the feasibility of using untrained crowd workers from Amazon Mechanical Turk (turkers) to find, label, and assess sidewalk accessibility problems in Google Street View imagery. We report on two studies: Study 1 examines the feasibility of this labeling task with six dedicated labelers including three wheelchair users; Study 2 investigates the comparative performance of turkers. In all, we collected 13,379 labels and 19,189 verification labels from a total of 402 turkers. We show that turkers are capable of determining the presence of an accessibility problem with 81% accuracy. With simple quality control methods, this number increases to 93%. Our work demonstrates a promising new, highly scalable method for acquiring knowledge about sidewalk accessibility.
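The simple quality-control step reported above (raising accuracy from 81% to 93%) can be approximated by majority voting over redundant labels; the data layout and the minimum-vote threshold in this Python sketch are illustrative assumptions, not the authors' exact procedure.

    from collections import Counter

    def aggregate_labels(labels_per_item, min_votes=3):
        """Majority-vote aggregation of crowd labels.
        labels_per_item maps an item id to the labels given by different workers;
        returns item id -> (winning label, agreement ratio), skipping under-labelled items."""
        results = {}
        for item, labels in labels_per_item.items():
            if len(labels) < min_votes:
                continue
            label, count = Counter(labels).most_common(1)[0]
            results[item] = (label, count / len(labels))
        return results

    votes = {
        "curb_ramp_001": ["missing", "missing", "ok"],
        "sidewalk_017": ["obstacle", "obstacle", "obstacle"],
    }
    print(aggregate_labels(votes))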
---
paper_title: Dissemination in opportunistic mobile ad-hoc networks: The power of the crowd
paper_content:
Opportunistic ad-hoc communication enables portable devices such as smartphones to effectively exchange information, taking advantage of their mobility and locality. The nature of human interaction makes information dissemination using such networks challenging. We use three different experimental traces to study fundamental properties of human interactions. We break our traces down in multiple areas and classify mobile users in each area according to their social behavior: Socials are devices that show up frequently or periodically, while Vagabonds represent the rest of the population. We find that in most cases the majority of the population consists of Vagabonds. We evaluate the relative role of these two groups of users in data dissemination. Surprisingly, we observe that under certain circumstances, which appear to be common in real life situations, the effectiveness of dissemination predominantly depends on the number of users in each class rather than their social behavior, contradicting some of the previous observations. We validate and extend the findings of our experimental study through a mathematical analysis.
---
paper_title: Real-time emergency response: improved management of real-time information during crisis situations
paper_content:
The decision-making process during crisis and emergency scenarios intertwines human intelligence with infocommunications. In such scenarios, the tasks of data acquisition, manipulation, and analysis involve a combination of cognitive processes and information and communications technologies, all of which are vital to effective situational awareness and response capability. To support such capabilities, we describe our real time emergency response (rtER) system, implemented with the intention of helping to manage the potential torrents of data that are available during a crisis, and that could easily overwhelm human cognitive capacity in the absence of technological mediation. Specifically, rtER seeks to address the research challenges surrounding the real-time collection of relevant data, especially live video, making this information rapidly available to a team of humans, and giving them the tools to manipulate, tag, and filter the most critical information of relevance to the situation.
---
paper_title: Collaborative navigation of visually impaired
paper_content:
A navigation system for visually impaired users can be much more efficient if it is based on collaboration among visually impaired persons and on utilising distributed knowledge about the environment in which the navigation task takes place. To design a new system of this kind, it is necessary to make a study of communication among visually impaired users while navigating in a given environment and on their regularly walked routes. A qualitative study was conducted to gain insight into the issue of communication among visually impaired persons while they are navigating in an unknown environment, and our hypotheses were validated by a quantitative study with a sample of 54 visually impaired respondents. A qualitative study was conducted with 20 visually impaired participants aimed at investigating regularly walked routes used by visually impaired persons. The results show that most visually impaired users already collaborate on navigation, and consider an environment description from other visually impaired persons to be adequate for safe and efficient navigation. It seems that the proposed collaborative navigation system is based on the natural behaviour of visually impaired persons. In addition, it has been shown that a network of regularly walked routes can significantly expand the urban area in which visually impaired persons are able to navigate safely and efficiently.
---
paper_title: Supporting Accessibility for Blind and Vision-impaired People With a Localized Gazetteer and Open Source Geotechnology
paper_content:
Disabled people, especially the blind and vision-impaired, are challenged by many transitory hazards in urban environments such as construction barricades, temporary fencing across walkways, and obstacles along curbs. These hazards present a problem for navigation, because they typically appear in an unplanned manner and are seldom included in databases used for accessibility mapping. Tactile maps are a traditional tool used by blind and vision-impaired people for navigation through urban environments, but such maps are not automatically updated with transitory hazards. As an alternative approach to static content on tactile maps, we use volunteered geographic information (VGI) and an Open Source system to provide updates of local infrastructure. These VGI updates, contributed via voice, text message, and e-mail, use geographic descriptions containing place names to describe changes to the local environment. After they have been contributed and stored in a database, we georeference VGI updates with a detailed gazetteer of local place names including buildings, administrative offices, landmarks, roadways, and dormitories. We publish maps and alerts showing transitory hazards, including location-based alerts delivered to mobile devices. Our system is built with several technologies including PHP, JavaScript, AJAX, Google Maps API, PostgreSQL, an Open Source database, and PostGIS, the PostgreSQL's spatial extension. This article provides insight into the integration of user-contributed geospatial information into a comprehensive system for use by the blind and vision-impaired, focusing on currently developed methods for geoparsing and georeferencing using a gazetteer.
---
paper_title: THE WALKING STRAIGHT MOBILE APPLICATION: HELPING THE VISUALLY IMPAIRED AVOID VEERING
paper_content:
The visually impaired community still faces many challenges with safely navigating their environment. They rely heavily on speech-based GPS in addition to their usual guiding help. However, GPS-based systems do not help with veering issues, which affect the ability of the visually impaired to maintain a straight path. Some research systems provide feedback intended to correct veering, but these tend to employ bulky, custom hardware. In response, we implemented our “Walking Straight” application on an existing consumer device, taking advantage of the built-in sensors on smartphones. First, we investigated whether a continuous or discrete form of non-speech audio feedback was more effective in keeping participants on a straight path. The most effective form was then tested with nine blind participants. The promising results demonstrate that Walking Straight significantly reduced the participants’ deviation from a straight path as compared to their usual behaviour, e.g., with a guide dog or cane, without affecting their pace.
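One plausible way to turn compass readings into the discrete veering feedback described above is sketched below in Python; the dead-zone width and the ear-specific cues are assumptions for illustration and are not taken from the application itself.

    def heading_error_deg(current_heading, target_heading):
        """Signed smallest-angle difference in degrees (-180..180);
        positive means the walker has veered to the right of the target heading."""
        return (current_heading - target_heading + 180.0) % 360.0 - 180.0

    def discrete_feedback(error_deg, dead_zone_deg=10.0):
        """Emit a non-speech cue only when the deviation leaves the dead zone."""
        if error_deg > dead_zone_deg:
            return "beep_left_ear"    # assumed cue nudging the walker back to the left
        if error_deg < -dead_zone_deg:
            return "beep_right_ear"   # assumed cue nudging the walker back to the right
        return None

    error = heading_error_deg(350.0, 10.0)   # walker drifted 20 degrees to the left
    print(error, discrete_feedback(error))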
---
paper_title: Navigational 3D audio-based game-training towards rich auditory spatial representation of the environment
paper_content:
As the number of people suffering from visual impairments continuously increases, there is a strong need for efficient sensory substitution devices that can support creating a rich mental spatial depiction of the environment. The use of the auditory sense has proved to be an effective approach towards creating a method of interaction with the elements of the surrounding space in a way which resembles the natural 3D visual representation of normal sighted people. Training is an essential component in the process of employing an auditory-based visual substitution device for blind people, as it helps them learn and become proficient at processing and decoding the audio information and converting it into a spatial mental representation. Taking into account the well-known advantages of game-based learning, we propose a new method of training, consisting of a navigational 3D audio-based game. In this exploratory, goal-directed application, the player has to perform route-navigational tasks under different conditions, with the purpose of training and testing their orientation and mobility skills, relying exclusively on the perception of 3D audio cues. Experimental results showed that this game-based learning strategy leads to substantial improvements and can be a starting point for developing more enhanced sound-based navigational applications. The ludic-oriented, motivational training approach achieved straightforward immersion and concentration on the cognitive depiction of the environment, ensuring behavioral gains in sound-directed spatial orientation.
---
paper_title: THE DESIGN OF AN AUDIO FILM FOR THE VISUALLY IMPAIRED
paper_content:
Nowadays, Audio Description is used to enable visually impaired people to access films. However, it presents an important limitation: visually impaired audiences must rely on a describer and cannot access the work directly. The aim of this project was to design a format of sonic art called audio film that eliminates the need for visual elements and for a describer, by providing information solely through sound, sound processing and spatialization, and which might be considered as an alternative to Audio Description. In order to explore the viability of this format, an example has been designed based on Roald Dahl’s Lamb to the Slaughter (1954) using a 6.1 surround sound configuration. Through the design of this example it became apparent that this format can successfully convey a story without the need for either visual elements or a narrator.
---
paper_title: Definition and Synergies of Cognitive Infocommunications
paper_content:
In this paper, we provide the finalized definition of Cognitive Infocommunications (CogInfoCom). Following the definition, we briefly describe the scope and goals of CogInfoCom, and discuss the common interests between CogInfoCom and the various research disciplines which contribute to this new field in a synergistic way.
---
paper_title: Sensory dominance in combinations of audio, visual and haptic stimuli
paper_content:
Participants presented with auditory, visual, or bi-sensory audio–visual stimuli in a speeded discrimination task fail to respond to the auditory component of the bi-sensory trials significantly more often than they fail to respond to the visual component (a ‘visual dominance’ effect). The current study further investigated the sensory dominance phenomenon in all combinations of auditory, visual and haptic stimuli. We found a similar visual dominance effect also in bi-sensory trials of combined haptic–visual stimuli, but no bias towards either sensory modality in bi-sensory trials of haptic–auditory stimuli. When presented with tri-sensory trials of combined auditory–visual–haptic stimuli, participants made more errors of responding only to two corresponding sensory signals than errors of responding only to a single sensory modality; however, there were no biases towards either sensory modality (or sensory pairs) in the distribution of both types of errors (i.e. responding only to a single stimulus or to pairs of stimuli). These results suggest that while vision can dominate both the auditory and the haptic sensory modalities, it is limited to bi-sensory combinations in which the visual signal is combined with another single stimulus. However, in a tri-sensory combination when a visual signal is presented simultaneously with both the auditory and the haptic signals, the probability of missing two signals is much smaller than of missing only one signal and therefore the visual dominance disappears.
---
paper_title: Visual Touch in Virtual Environments: An Exploratory Study of Presence, Multimodal Interfaces, and Cross-Modal Sensory Illusions
paper_content:
How do users generate an illusion of presence in a rich and consistent virtual environment from an impoverished, incomplete, and often inconsistent set of sensory cues? We conducted an experiment to explore how multimodal perceptual cues are integrated into a coherent experience of virtual objects and spaces. Specifically, we explored whether inter-modal integration contributes to generating the illusion of presence in virtual environments. ::: ::: To discover whether intermodal integration might play a role in presence, we looked for evidence of intermodal integration in the form of cross-modal interactions---perceptual illusions in which users use sensory cues in one modality to “fill in” the “missing” components of perceptual experience. One form of cross-modal interaction, a cross-modal transfer, is defined as a form of synesthesia, that is, a perceptual illusion in which stimulation to a sensory modality connected to the interface (such as the visual modality) is accompanied by perceived stimulation to an unconnected sensory modality that receives no apparent stimulation from the virtual environment (such as the haptic modality). Users of our experimental virtual environment who manipulated the visual analog of a physical force, a virtual spring, reported haptic sensations of “physical resistance”, even though the interface included no haptic displays. A path model of the data suggested that this cross-modal illusion was correlated with and dependent upon the sensation of spatial and sensory presence. ::: ::: We conclude that this is evidence that presence may derive from the process of multi-modal integration and, therefore, may be associated with other illusions, such as cross-modal transfers, that result from the process of creating a coherent mental model of the space. Finally, we suggest that this perceptual phenomenon might be used to improve user experiences with multimodal interfaces, specifically by supporting limited sensory displays (such as haptic displays) with appropriate synesthetic stimulation to other sensory modalities (such as visual and auditory analogs of haptic forces).
---
paper_title: The Spiral Discovery Method: An Interpretable Tuning Model for CogInfoCom Channels
paper_content:
Cognitive Infocommunications (CogInfoCom) messages that are used to carry information on the state of the same high-level concept can be regarded as belonging to a CogInfoCom channel. Such channels can be generated using any kind of parametric model. By changing the values of the parameters, it is possible to arrive at a large variety of CogInfoCom messages, a subset of which can belong to a CogInfoCom channel, provided they are perceptually well-suited to the purpose of conveying information on the same high-level concept. Thus, for any CogInfoCom channel, we may speak of a parameter space and a perceptual space that is created by the totality of messages in the CogInfoCom channel. In this paper, we argue that in general, the relationship between the parameter space and the perceptual space is highly non-linear. For this reason, it is extremely difficult for the designer of a CogInfoCom channel to tune the parameters in such a way that the resulting CogInfoCom messages are perceptually continuous, and suitable to carry information on a single high-level concept. To address this problem, we propose a cognitive artifact that uses a rank concept available in tensor algebra to provide the designer of CogInfoCom channels with practical tradeoffs between complexity and interpretability. We refer to the artifact as the Spiral Discovery Method (SDM).
---
paper_title: Oversketching and associated audio-based feedback channels for a virtual sketching application
paper_content:
With the growing relevance of human interaction with infocommunications in general and augmented/virtual environments in particular, it is becoming increasingly important to provide users with a “virtual physics” capable of emulating the richness and subtle informativeness of multimodal feedback in the physical world. In this paper, we describe an extension to an existing immersive virtual sketching application which consists of an oversketching interaction mode, and a set of associated audio-based feedback signals. The implemented oversketching functionality allows users to make incremental corrections to the drawing, helping them to emphasize certain aspects while deemphasizing others. The audio-based feedback signals, in turn, support a better understanding of the progress of oversketching, when the goal is to transition from curved to straight line segments, or vice versa.
---
paper_title: Visual cues and virtual touch: Role of visual stimuli and intersensory integration in cross-modal haptic illusions and the sense of presence.
paper_content:
Intermodal integration (sometimes referred to as intersensory integration) may be a key psychological mechanism contributing to a sense of presence in virtual environments. Sensorimotor processes associated with multimodal integration may integrate perceptual cues and motor actions into a coherent experience and relatively consistent model of objects and spaces. When the cues come from virtual environments, intermodal integration may generate a sense of presence in a coherent virtual world. Because stimuli from virtual environments frequently fail to provide coherent and consistent cues, evidence of the role of intermodal integration might be found in intersensory illusions, the results of the user’s attempt to integrate an inconsistent environment. Secondly, if intermodal integration plays a role in the generation of presence, then intersensory illusions should be correlated with the illusion of presence in a coherent virtual world.
---
|
Title: A survey of assistive technologies and applications for blind users on mobile platforms: a review and foundation for research
Section 1: Introduction
Description 1: Introduce the prevalence of state-of-the-art technology among visually impaired users, discuss the challenges in designing user interfaces, and present the focus and structure of the paper.
Section 2: Overview of auditory and haptic feedback methods
Description 2: Describe basic auditory and haptic feedback techniques, including historical context, theoretical underpinnings, and various representations used in these domains.
Section 3: Systems in assistive engineering based on tactile solutions
Description 3: Discuss assistive engineering systems that rely on tactile solutions, covering specific examples, historical evolution, and current trends.
Section 4: Systems in assistive engineering based on auditory solutions
Description 4: Examine assistive engineering systems that utilize auditory solutions, detailing early systems, technological advancements, and significant examples in the field.
Section 5: Systems in assistive engineering based on auditory and tactile solutions
Description 5: Explore systems that combine auditory and tactile modalities, focusing on recent developments, key examples, and the integration of multimodal feedback.
Section 6: Summary of key observations
Description 6: Summarize the key findings from previous sections, emphasizing the range of feedback solutions, their usability, and effectiveness.
Section 7: Generic capabilities of mobile computing platforms
Description 7: Highlight the general-purpose computing, advanced sensory capabilities, and data integration features of mobile platforms that support assistive technologies.
Section 8: State-of-the-art applications for mobile platforms
Description 8: Provide an overview of current applications available on Android and iOS platforms to assist visually impaired users in performing everyday tasks.
Section 9: Applications for Android
Description 9: Detail various existing assistive solutions specifically developed for Android, including text-to-speech, typing aids, and navigation support.
Section 10: Applications for iOS
Description 10: Examine assistive applications available on iOS, covering similar categories as Android, and highlighting unique features and solutions.
Section 11: Towards a research agenda for mobile assistive technology for visually impaired users
Description 11: Propose a research agenda for the further development of mobile assistive technology, identifying key areas and issues that need to be addressed.
Section 12: Summary
Description 12: Conclude the paper with a brief summary of discussed topics, emphasizing the significance and potential of assistive technologies on mobile platforms.
|
The Importance of System Integration in Intensive Care Units - A Review.
| 8 |
---
paper_title: Computerized physician order entry in the critical care and general inpatient setting: a narrative review.
paper_content:
Computerized physician order entry (CPOE) is an increasingly used technologic tool for entering clinician orders, especially for medications and laboratory and diagnostic tests. Studies in hospitalized patients, including critically ill patients, have demonstrated that CPOE, especially with decision support, improves several outcomes. These improved outcomes include clinical measures such as reductions in serious medication errors and enhanced antimicrobial management of critically ill patients resulting in reduced length of stay. Additionally, several process outcomes have improved with CPOE such as increased compliance with evidence-based practices, reductions in unnecessary laboratory tests and cost savings in pharmacotherapeutics. Future studies are needed to demonstrate the benefits of more patient specific decision support interventions and the seamless integration of CPOE into a wireless, computerized medication administration system.
---
paper_title: New approaches toward the fully digital integrated management of a burn unit
paper_content:
In this paper, the design of an application that allows the integrated management of a burn unit is reported. Starting with the problems associated with the current procedures, technical solutions are found from the requirements demanded by the specialists. The major design considerations and implementation details are outlined. Special attention is devoted to the prescription of drugs and inventory control, as well as reducing the time that healthcare professionals spend in administrative tasks. The developed implementation is an example of a low-cost system suitable for adoption in a wide range of units in a hospital organization.
---
paper_title: Computerized clinical documentation system in the pediatric intensive care unit
paper_content:
Background: To determine whether a computerized clinical documentation system (CDS): 1) decreased time spent charting and increased time spent in patient care; 2) decreased medication errors; 3) improved clinical decision making; 4) improved quality of documentation; and/or 5) improved shift to shift nursing continuity. Methods: Before and after implementation of CDS, a time study involving nursing care, medication delivery, and normalization of serum calcium and potassium values was performed. In addition, an evaluation of completeness of documentation and a clinician survey of shift to shift reporting were also completed. This was a modified one group, pretest-posttest design. Results: With the CDS there was: improved legibility and completeness of documentation, data with better accessibility and accuracy, and no change in time spent in direct patient care or charting by nursing staff. Incidental observations from the study included improved management functions of our nurse manager; improved JCAHO documentation compliance; timely access to clinical data (labs, vitals, etc.); a decrease in time and resource use for audits; improved reimbursement because of the ability to reconstruct lost charts; limited human data entry by automatic data logging; and eliminated costs of printing forms. CDS cost was reasonable. Conclusions: When compared to a paper chart, the CDS provided a more legible, complete, and accessible patient record without affecting time spent in direct patient care. The availability of the CDS improved shift to shift reporting. Other observations showed that the CDS improved management capabilities; helped physicians deliver care; improved reimbursement; limited data entry errors; and reduced costs.
---
paper_title: The Impact of a Clinical Information System in an Intensive Care Unit
paper_content:
PURPOSE: Although clinical information systems (CISs) have been available and implemented in many intensive care units (ICUs) for more than a decade, there is little objective evidence of their impact on the quality of care and staff perceptions. This study was performed to compare time spent charting patient data with pen and paper versus time spent with the new electronic CIS, and to evaluate staff perceptions of a CIS in an ICU. MATERIALS AND METHODS: Time spent each day was calculated for each patient, for 7 days, for recording vital signs and physician therapeutic orders on paper and for computing fluid balance and scores. This time was then compared with the time required to perform the same activities by means of the CIS, 10 months after its introduction in the ICU. Four years after the installation of the CIS, a questionnaire was given to all staff attending the ICU to evaluate their opinions of the CIS. RESULTS: The CIS took less staff time to record common ICU data than paper records (3 +/- 2 minutes/day versus 37 +/- 7 minutes/day, respectively, P < 0.001). Perceptions of the CIS were that computers improved charting quality. CONCLUSIONS: The implementation of a CIS was associated with reduced time spent on daily activities and a positive perception among medical and nursing staff.
---
paper_title: Comparison of a commercially available clinical information system with other methods of measuring critical care outcomes data.
paper_content:
PURPOSE: To compare the quality of data recorded by a commercially available clinical information system (CIS) to other commonly used methods for obtaining large amounts of patient data. MATERIALS AND METHODS: Five sets of clinical patient data were chosen as a cross-section of all the data collected by a CIS in our intensive care unit (ICU): 1) length of stay in the ICU, 2) vital signs, 3) days of mechanical ventilation, 4) medications, and 5) diagnoses. Data generated by our ICU CIS were compared with other parallel data sets commonly used to obtain the same data for clinical research. RESULTS: When compared with our CIS, the hospital database recorded a length of stay at least 1 day longer than the actual length of stay 53% of the time. A search of 139,387 sets of vital signs showed less than a 0.1% rate of suspected artifact. When compared to direct observation, our CIS correctly recorded days of mechanical ventilation in 23 of 26 patients (88%). Two other data sets, medical diagnoses and medications given, showed significant differences from other commonly used databases of the same information collected outside the ICU (billing codes and pharmacy records, respectively). CONCLUSIONS: Compared to other commonly used data sources for clinical research, a commercially available CIS is an acceptable source of ICU patient data.
---
paper_title: Web-based remote monitoring of infant incubators in the ICU
paper_content:
A web-based real-time operating, management, and monitoring system for checking temperature and humidity within infant incubators using the Intranet has been developed and installed in the infant Intensive Care Unit (ICU). We have created a pilot system which has a temperature and humidity sensor and a measuring module in each incubator, which is connected to a web-server board via an RS485 port. The system transmits signals using standard web-based TCP/IP so that users can access the system from any Internet-connected personal computer in the hospital. Using this method, the system gathers temperature and humidity data transmitted from the measuring modules via the RS485 port on the web-server board and creates a web document containing these data. The system manager can maintain centralized supervisory monitoring of the situations in all incubators while sitting within the infant ICU at a work space equipped with a personal computer. The system can be set to monitor unusual circumstances and to emit an alarm signal expressed as a sound or a light on a measuring module connected to the related incubator. If the system is configured with a large number of incubators connected to a centralized supervisory monitoring station, it will improve convenience and assure meaningful improvement in response to incidents that require intervention.
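To make the alarm logic concrete, the following is a minimal illustrative sketch (not the paper's implementation) of a central monitor that checks readings against acceptable ranges and serves them over HTTP; the sensor-reading function, threshold values, incubator count and port are assumptions for illustration only.

    # Minimal sketch of centralized incubator monitoring (hypothetical thresholds and I/O).
    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    TEMP_RANGE = (36.0, 37.5)   # assumed acceptable air temperature, deg C
    HUMID_RANGE = (40.0, 60.0)  # assumed acceptable relative humidity, %

    def read_incubator(incubator_id):
        # Placeholder for the RS485 measuring-module query used in the paper.
        return {"id": incubator_id, "temp_c": 36.8, "humidity": 55.0}

    def check_alarm(reading):
        # Flag readings outside the configured ranges (the paper's "unusual circumstances").
        t_ok = TEMP_RANGE[0] <= reading["temp_c"] <= TEMP_RANGE[1]
        h_ok = HUMID_RANGE[0] <= reading["humidity"] <= HUMID_RANGE[1]
        return not (t_ok and h_ok)

    class MonitorHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            # Serve current readings and alarm flags for all incubators as JSON.
            readings = [read_incubator(i) for i in range(1, 5)]
            for r in readings:
                r["alarm"] = check_alarm(r)
            body = json.dumps(readings).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        HTTPServer(("", 8080), MonitorHandler).serve_forever()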
---
|
Title: The Importance of System Integration in Intensive Care Units - A Review
Section 1: INTRODUCTION
Description 1: Write about the ICU setup, the necessity of detailed information for treatment, and introduce the purpose and questions of the study on system integration in ICUs.
Section 2: METHODS
Description 2: Describe the literature search process, inclusion criteria, and the steps taken to select and analyze relevant articles for the review.
Section 3: RESULTS
Description 3: Summarize the findings from the analyzed articles on the impact of system integration, CIS, and computerized medical records on patient outcomes.
Section 4: Computerized Medical Records and Outcome and Process Assessment
Description 4: Discuss the results from studies that focused on computerized medical records, including time spent, accuracy, and the effect on patient outcomes.
Section 5: Records and Computerized Decision Making
Description 5: Explore findings related to the impact of centralized clinical decision systems on decision-making speed, error reduction, and overall patient care.
Section 6: System Integration and Computerized Decision Making
Description 6: Highlight studies that examine the role of system integration in continuous monitoring and decision making within ICU settings.
Section 7: DISCUSSION
Description 7: Evaluate the importance of centralizing data from various sources within the ICU, review study limitations, and suggest implications for clinical practice.
Section 8: CONCLUSIONS
Description 8: Summarize the key points regarding the clinical use of integrated systems, weigh the benefits against the challenges, and propose directions for future research.
|
A Survey of Spatio-Temporal Grouping Techniques
| 17 |
---
paper_title: Video segmentation based on multiple features for interactive and automatic multimedia applications
paper_content:
In this thesis, a novel method for the segmentation of video sequences based on the analysis of multiple image features is presented. A key feature of the system is the distinction between two levels of segmentation, namely region and object segmentation. Regions are homogeneous areas of the images, which are extracted automatically by the computer. Semantically meaningful objects are obtained by grouping regions, automatically or through user interaction, according to the specific application. This splitting relieves the computer of ill-posed semantic problems, and allows a higher level of flexibility in the use of the results. The extraction of the regions is based on the multidimensional analysis of several image features by a spatially constrained Fuzzy C-Means algorithm. The relative weighting of the different features is achieved by means of an adaptive system that takes into account the local level of reliability of each feature. The temporal tracking of the obtained regions is performed by means of a dual strategy in which the motion-compensated projection of the segmentation mask from previous frames is used to influence the segmentation of the current frame so as to achieve higher temporal coherence and stability.
---
paper_title: Motion Segmentation by Subspace Separation: Model Selection and Reliability Evaluation
paper_content:
Reformulating the Costeira–Kanade algorithm as a pure mathematical theorem, we present a robust segmentation procedure, which we call subspace separation, by incorporating model selection using the geometric AIC. We then study the problem of estimating the number of independent motions using model selection. Finally, we present criteria for evaluating the reliability of individual segmentation results. Again, model selection plays an important role. We confirm the effectiveness of our method by experiments using synthetic and real images.
---
paper_title: A probabilistic framework for spatio-temporal video representation and indexing
paper_content:
In this work we describe a novel statistical video representation and modeling scheme. Video representation schemes are needed to enable segmenting a video stream into meaningful video-objects, useful for later indexing and retrieval applications. In the proposed methodology, unsupervised clustering via Gaussian mixture modeling extracts coherent space-time regions in feature space, and corresponding coherent segments (video-regions) in the video content. A key feature of the system is the analysis of video input as a single entity as opposed to a sequence of separate frames. Space and time are treated uniformly. The extracted space-time regions allow for the detection and recognition of video events. Results of segmenting video content into static vs. dynamic video regions and video content editing are presented.
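As an illustration of the general idea (not the paper's exact features or pipeline), a rough sketch of clustering space-time voxels with a Gaussian mixture might look like the following; the feature choice, subsampling rate and number of components are assumptions.

    # Rough sketch: space-time region extraction by Gaussian mixture clustering of voxels.
    import numpy as np
    from sklearn.mixture import GaussianMixture

    def spacetime_segments(frames, n_components=6):
        """frames: (T, H, W, 3) float array in [0, 1]; returns (T, H, W) region labels."""
        T, H, W, _ = frames.shape
        t, y, x = np.meshgrid(np.arange(T), np.arange(H), np.arange(W), indexing="ij")
        # Each voxel becomes a 6-D feature: normalized space-time position plus color.
        feats = np.column_stack([
            x.ravel() / W, y.ravel() / H, t.ravel() / max(T, 1),
            frames[..., 0].ravel(), frames[..., 1].ravel(), frames[..., 2].ravel(),
        ])
        gmm = GaussianMixture(n_components=n_components, covariance_type="full",
                              random_state=0)
        gmm.fit(feats[::100])              # fit on a voxel subsample to keep it tractable
        return gmm.predict(feats).reshape(T, H, W)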
---
paper_title: Unsupervised video segmentation based on watersheds and temporal tracking
paper_content:
This paper presents a technique for unsupervised video segmentation. This technique consists of two phases: initial segmentation and temporal tracking, similar to a number of existing techniques. However, new algorithms for spatial segmentation, marker extraction, and modified watershed transformation are proposed for the present technique. The new algorithms make this technique differ from existing techniques by the following features: (1) it can effectively track fast moving objects, (2) it can detect the appearance of new objects as well as the disappearance of existing objects, and (3) it is computationally efficient because of the use of watershed transformations and a fast motion estimation algorithm. Simulation results demonstrate that the proposed technique can efficiently segment video sequences with fast moving, newly appearing, or disappearing objects in the scene.
---
paper_title: Concerning Bayesian Motion Segmentation, Model Averaging, Matching and the Trifocal Tensor
paper_content:
Motion segmentation involves identifying regions of the image that correspond to independently moving objects. The number of independently moving objects, and the type of motion model for each object, are unknown a priori.
---
paper_title: Efficient spatiotemporal grouping using the Nystrom method
paper_content:
Spectral graph theoretic methods have recently shown great promise for the problem of image segmentation, but due to the computational demands, applications of such methods to spatiotemporal data have been slow to appear. For even a short video sequence, the set of all pairwise voxel similarities is a huge quantity of data: one second of a 256×384 sequence captured at 30 Hz entails on the order of 10^13 pairwise similarities. The contribution of this paper is a method that substantially reduces the computational requirements of grouping algorithms based on spectral partitioning, making it feasible to apply them to very large spatiotemporal grouping problems. Our approach is based on a technique for the numerical solution of eigenfunction problems known as the Nystrom method. This method allows extrapolation of the complete grouping solution using only a small number of "typical" samples. In doing so, we successfully exploit the fact that there are far fewer coherent groups in an image sequence than pixels.
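A minimal sketch of the basic Nystrom extension is given below for illustration; it omits the density normalization and orthogonalization steps of the full method, and the Gaussian affinity, bandwidth and sample size are assumptions.

    # Sketch of the Nystrom extension: approximate leading eigenvectors of a large
    # affinity matrix from a small random sample of points (no orthogonalization step).
    import numpy as np

    def affinity(X, Y, sigma=0.1):
        # Gaussian affinity between two point sets (rows are feature vectors).
        d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * sigma ** 2))

    def nystrom_eigvecs(X, n_samples=100, sigma=0.1, rng=np.random.default_rng(0)):
        n = X.shape[0]
        idx = rng.choice(n, size=n_samples, replace=False)
        rest = np.setdiff1d(np.arange(n), idx)
        A = affinity(X[idx], X[idx], sigma)    # sample-sample affinities, (m x m)
        B = affinity(X[idx], X[rest], sigma)   # sample-rest affinities, (m x (n-m))
        vals, U = np.linalg.eigh(A)            # eigendecomposition of the small block only
        # Extend the eigenvectors to the unsampled points: U_rest = B^T U diag(1/vals).
        U_rest = B.T @ U @ np.diag(1.0 / np.maximum(vals, 1e-12))
        full = np.zeros((n, n_samples))
        full[idx], full[rest] = U, U_rest
        return vals, full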
---
paper_title: A Compact and Retrieval-Oriented Video Representation Using Mosaics
paper_content:
Compact yet intuitive representations of digital videos are required to combine high quality storage with interactive video indexing and retrieval capabilities. The advent of video mosaicing has provided a natural way to obtain content-based video representations which are both retrieval-oriented and compression-efficient. In this paper, an algorithm for extracting a robust mosaic representation of video content from sparse interest image points is described. The representation, which is obtained via visual motion clustering and segmentation, features the geometric and kinematic description of all salient objects in the scene, being thus well suited for video browsing, indexing and retrieval by visual content. Results of experiments on several TV sequences provide an insight into the main characteristics of the approach.
---
paper_title: Unsupervised Segmentation of Color-Texture Regions in Images and Video
paper_content:
A method for unsupervised segmentation of color-texture regions in images and video is presented. This method, which we refer to as JSEG, consists of two independent steps: color quantization and spatial segmentation. In the first step, colors in the image are quantized to several representative classes that can be used to differentiate regions in the image. The image pixels are then replaced by their corresponding color class labels, thus forming a class-map of the image. The focus of this work is on spatial segmentation, where a criterion for "good" segmentation using the class-map is proposed. Applying the criterion to local windows in the class-map results in the "J-image," in which high and low values correspond to possible boundaries and interiors of color-texture regions. A region growing method is then used to segment the image based on the multiscale J-images. A similar approach is applied to video sequences. An additional region tracking scheme is embedded into the region growing process to achieve consistent segmentation and tracking results, even for scenes with nonrigid object motion. Experiments show the robustness of the JSEG algorithm on real images and video.
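For illustration, the J criterion on a single window of quantized color labels could be sketched as below; the sliding-window computation of the J-image and the multiscale region growing are omitted, and the helper is an assumption rather than the paper's code.

    # Sketch of the JSEG "J" value for one local window of color-class labels:
    # high J suggests the window straddles a region boundary, low J a region interior.
    import numpy as np

    def j_value(labels):
        """labels: 2-D array of quantized color-class labels for one local window."""
        ys, xs = np.indices(labels.shape)
        z = np.column_stack([xs.ravel(), ys.ravel()]).astype(float)
        s_total = ((z - z.mean(axis=0)) ** 2).sum()        # total spatial scatter
        s_within = 0.0
        for c in np.unique(labels):
            zc = z[labels.ravel() == c]
            s_within += ((zc - zc.mean(axis=0)) ** 2).sum()  # per-class spatial scatter
        return (s_total - s_within) / max(s_within, 1e-12)

    # The J-image is obtained by evaluating j_value in a window around each pixel.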
---
paper_title: Segmentation using eigenvectors: a unifying view
paper_content:
Automatic grouping and segmentation of images remains a challenging problem in computer vision. Recently, a number of authors have demonstrated good performance on this task using methods that are based on eigenvectors of the affinity matrix. These approaches are extremely attractive in that they are based on simple eigendecomposition algorithms whose stability is well understood. Nevertheless, the use of eigendecompositions in the context of segmentation is far from well understood. In this paper we give a unified treatment of these algorithms, and show the close connections between them while highlighting their distinguishing features. We then prove results on eigenvectors of block matrices that allow us to analyze the performance of these algorithms in simple grouping settings. Finally, we use our analysis to motivate a variation on the existing methods that combines aspects from different eigenvector segmentation algorithms. We illustrate our analysis with results on real and synthetic images.
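As a concrete example of one such eigenvector-based pipeline (close in spirit to the Ng-Jordan-Weiss variant, not this paper's own analysis), a sketch is given below; the Gaussian affinity, bandwidth and k-means step are assumptions.

    # Sketch: affinity matrix -> normalized affinity -> leading eigenvectors -> k-means.
    import numpy as np
    from sklearn.cluster import KMeans

    def spectral_segment(features, k, sigma=0.1):
        """features: (N, d) per-pixel feature vectors; returns N cluster labels."""
        d2 = ((features[:, None, :] - features[None, :, :]) ** 2).sum(-1)
        W = np.exp(-d2 / (2 * sigma ** 2))                 # pairwise affinities
        d = W.sum(1)
        D_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))
        M = D_inv_sqrt @ W @ D_inv_sqrt                    # symmetrically normalized affinity
        vals, vecs = np.linalg.eigh(M)
        V = vecs[:, -k:]                                   # k leading eigenvectors
        V = V / (np.linalg.norm(V, axis=1, keepdims=True) + 1e-12)
        return KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(V)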
---
paper_title: A region-level motion-based graph representation and labeling for tracking a spatial image partition
paper_content:
This paper addresses two image sequence analysis issues under a common framework. These tasks are, first, motion-based segmentation and, second, updating and tracking over time of a spatial partition of an image. By spatial partition, we mean that constituent regions display an intensity, color or texture-based homogeneity criterion. Several issues in dynamic scene analysis or in image sequence coding can motivate this kind of development. A general-purpose methodology involving a region-level motion-based graph representation of the partition is presented. This graph is built from the topology of the spatial segmentation map. A statistical motion-based labeling of its nodes is carried out and formalized within a Markovian approach. Groups of spatial regions with consistent motion are identified using this labeling framework, leading to a motion-based segmentation that is both useful in itself and for propagating the spatial partition over time. Results on synthetic and real-world image sequences are shown, and provide a validation of the proposed approach.
---
paper_title: A subspace approach to layer extraction
paper_content:
Representing images with layers has many important applications, such as video compression, motion analysis, and 3D scene analysis. This paper presents an approach to reliably extracting layers from images by taking advantage of the fact that homographies induced by planar patches in the scene form a low dimensional linear subspace. Layers in the input images are mapped into the subspace, where it is proven that they form well-defined clusters and can be reliably identified by a simple mean-shift based clustering algorithm. Global optimality is achieved since all valid regions are simultaneously taken into account, and noise can be effectively reduced by enforcing the subspace constraint. Good layer descriptions are shown to be extracted in the experimental results.
---
paper_title: Computing occluding and transparent motions
paper_content:
Computing the motions of several moving objects in image sequences involves simultaneous motion analysis and segmentation. This task can become complicated when image motion changes significantly between frames, as with camera vibrations. Such vibrations make tracking in longer sequences harder, as temporal motion constancy cannot be assumed. The problem becomes even more difficult in the case of transparent motions.
---
paper_title: Spatiotemporal Segmentation Based on Region Merging
paper_content:
This paper proposes a technique for spatio-temporal segmentation to identify the objects present in the scene represented in a video sequence. This technique processes two consecutive frames at a time. A region-merging approach is used to identify the objects in the scene. Starting from an oversegmentation of the current frame, the objects are formed by iteratively merging regions together. Regions are merged based on their mutual spatio-temporal similarity. We propose a modified Kolmogorov-Smirnov test for estimating the temporal similarity. The region-merging process is based on a weighted, directed graph. Two complementary graph-based clustering rules are proposed, namely, the strong rule and the weak rule. These rules take advantage of the natural structures present in the graph. Experimental results on different types of scenes demonstrate the ability of the proposed technique to automatically partition the scene into its constituent objects.
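For illustration, the core merging test could be sketched as follows; note the paper uses a modified Kolmogorov-Smirnov test for temporal similarity, whereas this sketch uses the standard two-sample test and an assumed significance level.

    # Sketch: two adjacent regions are merge candidates when a two-sample
    # Kolmogorov-Smirnov test finds their pixel-value distributions similar.
    import numpy as np
    from scipy.stats import ks_2samp

    def should_merge(values_a, values_b, alpha=0.05):
        """values_a, values_b: 1-D arrays of pixel values (or residuals) for two regions."""
        stat, p_value = ks_2samp(values_a, values_b)
        return p_value > alpha   # distributions not significantly different -> candidate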
---
paper_title: Learning flexible sprites in video layers
paper_content:
We propose a technique for automatically learning layers of "flexible sprites" (probabilistic 2-dimensional appearance maps and masks of moving, occluding objects). The model explains each input image as a layered composition of flexible sprites. A variational expectation maximization algorithm is used to learn a mixture of sprites from a video sequence. For each input image, probabilistic inference is used to infer the sprite class, translation, mask values and pixel intensities (including obstructed pixels) in each layer. Exact inference is intractable, but we show how a variational inference technique can be used to process 320×240 images at 1 frame/second. The only inputs to the learning algorithm are the video sequence, the number of layers and the number of flexible sprites. We give results on several tasks, including summarizing a video sequence with sprites, point-and-click video stabilization, and point-and-click object removal.
---
paper_title: Smoothness in layers: Motion segmentation using nonparametric mixture estimation
paper_content:
Grouping based on common motion, or "common fate" provides a powerful cue for segmenting image sequences. Recently a number of algorithms have been developed that successfully perform motion segmentation by assuming that the motion of each group can be described by a low dimensional parametric model (e.g. affine). Typically the assumption is that motion segments correspond to planar patches in 3D undergoing rigid motion. Here we develop an alternative approach, where the motion of each group is described by a smooth dense flow field and the stability of the estimation is ensured by means of a prior distribution on the class of flow fields. We present a variant of the EM algorithm that can segment image sequences by fitting multiple smooth flow fields to the spatiotemporal data. Using the method of Green's functions, we show how the estimation of a single smooth flow field can be performed in closed form, thus making the multiple model estimation computationally feasible. Furthermore, the number of models is estimated automatically using similar methods to those used in the parametric approach. We illustrate the algorithm's performance on synthetic and real image sequences.
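A heavily simplified sketch of the EM idea is shown below; it replaces the paper's smooth nonparametric flow fields with constant per-layer motions, assumes per-pixel flow vectors are already available, and uses an assumed noise level and layer count.

    # Simplified EM-style layer assignment: each layer modeled as a constant flow.
    import numpy as np

    def em_flow_layers(flow, n_layers=2, n_iter=20, sigma=1.0,
                       rng=np.random.default_rng(0)):
        """flow: (N, 2) per-pixel flow vectors; returns soft assignments of shape (N, K)."""
        N = flow.shape[0]
        means = flow[rng.choice(N, n_layers, replace=False)]   # initial layer motions
        for _ in range(n_iter):
            # E-step: responsibility of each layer for each pixel (Gaussian residuals).
            r2 = ((flow[:, None, :] - means[None, :, :]) ** 2).sum(-1)
            log_p = -r2 / (2 * sigma ** 2)
            log_p -= log_p.max(axis=1, keepdims=True)
            resp = np.exp(log_p)
            resp /= resp.sum(axis=1, keepdims=True)
            # M-step: re-estimate each layer's motion as a responsibility-weighted mean.
            means = (resp.T @ flow) / resp.sum(axis=0)[:, None]
        return resp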
---
paper_title: Mise en correspondance de partitions en vue du suivi d'objets
paper_content:
In the field of multimedia applications, the incoming standards promote the creation of new ways of communication, access and manipulation of audiovisual information that go far beyond the plain compression obtained by the preceding coding norms. Among the new functionalities, it is expected that the user will be allowed to access the image content by editing and manipulating the objects of interest. Nevertheless, standards are restricted to object representation and coding, leaving open a large field of development concerning the problem of object extraction and tracking as objects move along a video sequence. In a first step, we have proceeded to the study and fine tuning of widely applied algorithms for image filtering and segmentation, these tools being the basis of all content-based image and video analysis systems. More particularly, we have focused on a novel class of morphological filters known as levelings, as well as on a variant of the segmentation algorithms based on the constrained flooding of a gradient image. Segmentation techniques aim at yielding a partition image as close as possible to the one produced by the human eye, with a view to later object recognition. Nevertheless, in most cases this last task needs human interaction. However, when we would like to retrieve an object from a large collection of images, or when we would like to track an object through a long sequence, the supervision of each image becomes infeasible. To face these situations, the development of matching algorithms able to propagate the information through a series of images becomes essential, with human interaction limited to an initialization step. Going from still images to sequences, the core of this thesis is devoted to the study of the partition matching problem. The method we have developed, named the Joint Segmentation and Matching (JSM) technique, can be defined as being of a hybrid nature. It combines classical algorithms of graph matching with new editing techniques based on the hierarchy of partitions resulting from morphological segmentation. This mix provides a very robust algorithm, in spite of the instability classically associated with segmentation processes. The result of segmenting two images can differ strongly if the segmentation process produces a single partition image; however, we have shown that results are much more stable when producing a hierarchy of nested partitions, in which all contours are present and ranked through a weighted value. The JSM technique is considered a very promising approach according to the obtained results. Being flexible and powerful, it allows the recognition of an object when it reappears after occlusion thanks to the management of a memory graph. Although we have particularly focused our interest on the tracking problem, the developed algorithms can be extended to a large field of applications, being especially suited to object retrieval from image or video databases. Finally, in the framework of the European project M4M (MPEG f(o)ur mobiles), we have focused on the development and implementation of a real-time demonstrator for detecting, segmenting and tracking the speaker in videophone sequences. In view of this application, the real-time constraint has become the greatest challenge to overcome, forcing us to simplify and optimize our algorithms.
The main interest in terms of new services is twofold: on one hand, the automatic segmentation of the speaker permits object-based coding, reducing the bitrate without loss of quality in the regions of interest; on the other hand, it allows the user to edit the sequences by changing the scene composition, for example by introducing a new background, or grouping several speakers in a virtual meeting room.
---
paper_title: Video retrieval based on dynamics of color flows
paper_content:
Content-based video retrieval is particularly challenging because the huge amount of data associated with videos complicates the extraction of salient information content descriptors. Commercials are a video category where a large part of the content depends on low-level perceptual features such as colors and color dynamics. These are related to the evolution, in terms of shrinking, growth and translation, of colored regions along consecutive frames. Each colored region, during its evolution, defines a 3D volume: a color flow. In the paper, a system is presented that supports description of color flows based on 3D wavelet decomposition and retrieval of commercials based on color flow similarity.
---
paper_title: NeTra-V: toward an object-based video representation
paper_content:
We present a prototype video analysis and retrieval system, called NeTra-V, that is being developed to build an object-based video representation for functionalities such as search and retrieval of video objects. A region-based content description scheme using low-level visual descriptors is proposed. In order to obtain regions for local feature extraction, a new spatio-temporal segmentation and region-tracking scheme is employed. The segmentation algorithm uses all three visual features: color, texture, and motion in the video data. A group processing scheme similar to the one in the MPEG-2 standard is used to ensure the robustness of the segmentation. The proposed approach can handle complex scenes with large motion. After segmentation, regions are tracked through the video sequence using extracted local features. The results of tracking are sequences of coherent regions, called "subobjects". Subobjects are the fundamental elements in our low-level content description scheme, which can be used to obtain meaningful physical objects in a high-level content description scheme. Experimental results illustrating segmentation and retrieval are provided.
---
paper_title: Video segmentation by MAP labeling of watershed segments
paper_content:
This paper addresses the problem of spatio-temporal segmentation of video sequences. An initial intensity segmentation method (watershed segmentation) provides a number of initial segments which are subsequently labeled, with a known number of labels, according to motion information. The label field is modeled as a Markov random field where the statistical spatial and temporal interactions are expressed on the basis of the initial watershed segments. The labeling criterion is the maximization of the conditional a posteriori probability of the label field given the motion hypotheses, the estimate of the label field of the previous frame, and the image intensities. For the optimization, an iterative motion estimation-labeling algorithm is proposed and experimental results are presented.
---
paper_title: Object tracking with Bayesian estimation of dynamic layer representations
paper_content:
Decomposing video frames into coherent 2D motion layers is a powerful method for representing videos. Such a representation provides an intermediate description that enables applications such as object tracking, video summarization and visualization, video insertion, and sprite-based video compression. Previous work on motion layer analysis has largely concentrated on two-frame or multi-frame batch formulations. The temporal coherency of motion layers and the domain constraints on shapes have not been exploited. This paper introduces a complete dynamic motion layer representation in which spatial and temporal constraints on shape, motion and layer appearance are modeled and estimated in a maximum a-posteriori (MAP) framework using the generalized expectation-maximization (EM) algorithm. In order to limit the computational complexity of tracking arbitrarily shaped layer ownership, we propose a shape prior that parameterizes the representation of shape and prevents motion layers from evolving into arbitrary shapes. In this work, a Gaussian shape prior is chosen to specifically develop a near-real-time tracker for vehicle tracking in aerial videos. However, the general idea of using a parametric shape representation as part of the state of a tracker is a powerful one that can be extended to other domains as well. Based on the dynamic layer representation, an iterative algorithm is developed for continuous object tracking over time. The proposed method has been successfully applied in an airborne vehicle tracking system. Its performance is compared with that of a correlation-based tracker and a motion change-based tracker to demonstrate the advantages of the new method. Examples of tracking when the backgrounds are cluttered and the vehicles undergo various rigid motions and complex interactions such as passing, turning, and stop-and-go demonstrate the strength of the complete dynamic layer representation.
---
paper_title: Object detection and tracking using an EM-based motion estimation and segmentation framework
paper_content:
This paper addresses the segmentation and subsequent tracking of the moving objects within a video sequence. The approach is to jointly produce segmentations and motion parameters which minimise the interframe coding rate of the video sequence. This minimisation is performed using a previously proposed framework based on the expectation-maximisation (EM) algorithm and the minimum description length (MDL) estimate. The paper concentrates on extensions of this framework which have been incorporated to ensure spatial and temporal coherence in the output segmentation sequence. The work has been conducted considering video coding applications and in particular, the MPEG-4 standardisation effort.
---
paper_title: Experimental Comparative Evaluation of Feature Point Tracking Algorithms
paper_content:
We consider dynamic scenes with multiple, independently moving objects. The objects are represented by feature points whose motion is tracked in long image sequences. The feature points may temporarily disappear, enter and leave the view field. This situation is typical for surveillance, scene monitoring (Courtney, 1997) and some other applications.
---
paper_title: Good features to track
paper_content:
No feature-based vision system can work unless good features can be identified and tracked from frame to frame. Although tracking itself is by and large a solved problem, selecting features that can be tracked well and correspond to physical points in the world is still hard. We propose a feature selection criterion that is optimal by construction because it is based on how the tracker works, and a feature monitoring method that can detect occlusions, disocclusions, and features that do not correspond to points in the world. These methods are based on a new tracking algorithm that extends previous Newton-Raphson style search methods to work under affine image transformations. We test performance with several simulations and experiments.
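For illustration, the selection criterion can be sketched as computing the smaller eigenvalue of the local gradient structure matrix and keeping its strong responses; the window size and quality threshold below are assumptions, not the paper's settings.

    # Sketch: keep points where the smaller eigenvalue of the 2x2 structure matrix
    # is large, so the tracker's linear system is well conditioned.
    import numpy as np
    from scipy.ndimage import convolve

    def min_eigenvalue_map(gray, win=3):
        gy, gx = np.gradient(gray.astype(float))
        Ixx, Iyy, Ixy = gx * gx, gy * gy, gx * gy
        k = np.ones((win, win)) / (win * win)
        # Box-filter the gradient products to accumulate the structure matrix per pixel.
        Sxx, Syy, Sxy = (convolve(I, k) for I in (Ixx, Iyy, Ixy))
        # Closed-form smaller eigenvalue of [[Sxx, Sxy], [Sxy, Syy]].
        trace = Sxx + Syy
        root = np.sqrt((Sxx - Syy) ** 2 + 4 * Sxy ** 2)
        return 0.5 * (trace - root)

    def good_features(gray, quality=0.05):
        lam = min_eigenvalue_map(gray)
        ys, xs = np.nonzero(lam > quality * lam.max())
        return np.column_stack([xs, ys])   # candidate feature coordinates (x, y)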
---
paper_title: Concerning Bayesian Motion Segmentation, Model Averaging, Matching and the Trifocal Tensor
paper_content:
Motion segmentation involves identifying regions of the image that correspond to independently moving objects. The number of independently moving objects, and the type of motion model for each of the objects, are unknown a priori.
---
paper_title: A Compact and Retrieval-Oriented Video Representation Using Mosaics
paper_content:
Compact yet intuitive representations of digital videos are required to combine high quality storage with interactive video indexing and retrieval capabilities. The advent of video mosaicing has provided a natural way to obtain content-based video representations which are both retrieval-oriented and compression-efficient. In this paper, an algorithm for extracting a robust mosaic representation of video content from sparse interest image points is described. The representation, which is obtained via visual motion clustering and segmentation, features the geometric and kinematic description of all salient objects in the scene, being thus well suited for video browsing, indexing and retrieval by visual content. Results of experiments on several TV sequences provide an insight into the main characteristics of the approach.
---
paper_title: A multi-body factorization method for motion analysis
paper_content:
The structure from motion problem has been extensively studied in the field of computer vision. Yet, the bulk of the existing work assumes that the scene contains only a single moving object. The more realistic case where an unknown number of objects move in the scene has received little attention, especially for its theoretical treatment. We present a new method for separating and recovering the motion and shape of multiple independently moving objects in a sequence of images. The method does not require prior knowledge of the number of objects, nor is dependent on any grouping of features into an object at the image level. For this purpose, we introduce a mathematical construct of object shapes, called the shape interaction matrix, which is invariant to both the object motions and the selection of coordinate systems. This invariant structure is computable solely from the observed trajectories of image features without grouping them into individual objects. Once the structure is computed, it allows for segmenting features into objects by the process of transforming it into a canonical form, as well as recovering the shape and motion of each object.
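A rough sketch of the shape interaction matrix idea follows (an illustrative reading, not the authors' implementation): the matrix is built from the row space of the tracked-trajectory measurement matrix, and its entries between features on independently moving objects are ideally zero.

```python
import numpy as np

def shape_interaction_matrix(W, rank):
    """W is a 2F x P measurement matrix of P feature trajectories over F frames.
    `rank` is the assumed total dimension of the combined motion subspaces
    (e.g. up to 4 per independently moving object under affine cameras)."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    V = Vt[:rank].T                      # P x rank basis of the row space
    Q = V @ V.T                          # shape interaction matrix
    return np.abs(Q)

# Features i, j belonging to different rigid motions should give Q[i, j] ~ 0,
# so thresholding or sorting |Q| can be used to group trajectories into objects.
```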
---
paper_title: Multibody grouping via orthogonal subspace decomposition
paper_content:
Multibody structure from motion could be solved by the factorization approach. However, the noise measurements would make the segmentation difficult when analyzing the shape interaction matrix. This paper presents an orthogonal subspace decomposition and grouping technique to approach such a problem. We decompose the object shape spaces into signal subspaces and noise subspaces. We show that the signal subspaces of the object shape spaces are orthogonal to each other. Instead of using the shape interaction matrix contaminated by noise, we introduce the shape signal subspace distance matrix for shape space grouping. Outliers could be easily identified by this approach. The robustness of the proposed approach lies in the fact that the shape space decomposition alleviates the influence of noise, and has been verified with extensive experiments.
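One way to realise the shape signal subspace distance described above is through principal angles between the estimated signal subspaces of two trajectory groups. The snippet below is a hedged illustration using SciPy; the assumed rank per motion and the particular distance taken from the angles are choices made here, not prescriptions from the paper.

```python
import numpy as np
from scipy.linalg import subspace_angles

def signal_subspace(trajectories, rank=4):
    """Return an orthonormal basis of the dominant `rank`-dimensional
    signal subspace of a 2F x P block of trajectories (noise discarded)."""
    U, s, Vt = np.linalg.svd(trajectories, full_matrices=False)
    return U[:, :rank]

def subspace_distance(block_a, block_b, rank=4):
    """Distance based on principal angles: near zero when the two groups
    span the same motion subspace, close to one when they are (near-)orthogonal."""
    A = signal_subspace(block_a, rank)
    B = signal_subspace(block_b, rank)
    angles = subspace_angles(A, B)       # radians, largest first
    return np.sin(angles).max()
```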
---
paper_title: Motion Segmentation by Subspace Separation: Model Selection and Reliability Evaluation
paper_content:
Reformulating the Costeira–Kanade algorithm as a pure mathematical theorem, we present a robust segmentation procedure, which we call subspace separation, by incorporating model selection using the geometric AIC. We then study the problem of estimating the number of independent motions using model selection. Finally, we present criteria for evaluating the reliability of individual segmentation results. Again, model selection plays an important role. We confirm the effectiveness of our method by experiments using synthetic and real images.
---
paper_title: Principal component analysis with missing data and its application to polyhedral object modeling
paper_content:
Observation-based object modeling often requires integration of shape descriptions from different views. To overcome the problems of errors and their accumulation, we have developed a weighted least-squares (WLS) approach which simultaneously recovers object shape and transformation among different views without recovering interframe motion. We show that object modeling from a range image sequence is a problem of principal component analysis with missing data (PCAMD), which can be generalized as a WLS minimization problem. An efficient algorithm is devised. After we have segmented planar surface regions in each view and tracked them over the image sequence, we construct a normal measurement matrix of surface normals, and a distance measurement matrix of normal distances to the origin for all visible regions over the whole sequence of views, respectively. These two matrices, which have many missing elements due to noise, occlusion, and mismatching, enable us to formulate multiple view merging as a combination of two WLS problems. A two-step algorithm is presented. After surface equations are extracted, spatial connectivity among the surfaces is established to enable the polyhedral object model to be constructed. Experiments using synthetic data and real range images show that our approach is robust against noise and mismatching and generates accurate polyhedral object models.
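To make the weighted least-squares flavour of the approach concrete, a generic alternating least-squares fit of a low-rank factorisation to a matrix with missing entries is sketched below; it is only loosely inspired by the paper and is not its PCAMD algorithm.

```python
import numpy as np

def low_rank_fit_missing(M, mask, rank, n_iter=100, eps=1e-6):
    """Fit M ~ A @ B with A (m x r) and B (r x n), using only entries where
    mask is True. Alternates regularised least squares for A and B."""
    m, n = M.shape
    rng = np.random.default_rng(0)
    A = rng.standard_normal((m, rank))
    B = rng.standard_normal((rank, n))
    I = eps * np.eye(rank)
    for _ in range(n_iter):
        for i in range(m):               # update each row of A
            cols = mask[i]
            Bi = B[:, cols]
            A[i] = np.linalg.solve(Bi @ Bi.T + I, Bi @ M[i, cols])
        for j in range(n):               # update each column of B
            rows = mask[:, j]
            Aj = A[rows]
            B[:, j] = np.linalg.solve(Aj.T @ Aj + I, Aj.T @ M[rows, j])
    return A, B
```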
---
paper_title: Motion segmentation based on factorization method and discriminant criterion
paper_content:
A motion segmentation algorithm based on the factorization method and a discriminant criterion is proposed. This method uses the feature with the most useful similarities for grouping, selected using motion information calculated by the factorization method and the discriminant criterion. A group is extracted based on discriminant analysis of the selected feature's similarities. The same procedure is applied recursively to the remaining features to extract other groups. This grouping is robust against noise and outliers because features with no useful information are automatically rejected. The numerical computation is simple and stable. No prior knowledge of the number of objects is needed. Experimental results are shown for synthetic data and real image sequences.
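The discriminant criterion used for grouping is, in spirit, an Otsu-style threshold that splits one feature's similarity values into two classes by maximising between-class variance. The sketch below shows only that generic thresholding step; it is an assumed reading rather than the paper's code.

```python
import numpy as np

def discriminant_threshold(similarities, n_bins=64):
    """Return the threshold that maximises between-class variance
    (Otsu's criterion) for a 1-D array of similarity values."""
    s = np.asarray(similarities, dtype=float)
    edges = np.linspace(s.min(), s.max(), n_bins + 1)
    best_t, best_score = edges[0], -np.inf
    for t in edges[1:-1]:
        low, high = s[s <= t], s[s > t]
        if len(low) == 0 or len(high) == 0:
            continue
        w0, w1 = len(low) / len(s), len(high) / len(s)
        score = w0 * w1 * (low.mean() - high.mean()) ** 2
        if score > best_score:
            best_t, best_score = t, score
    return best_t

# Features whose similarity to the selected feature exceeds the threshold form one
# group; the procedure is then repeated on the remaining features.
```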
---
paper_title: Linear fitting with missing data: applications to structure-from-motion and to characterizing intensity images
paper_content:
Several vision problems can be reduced to the problem of fitting a linear surface of low dimension to data, including the problems of structure-from-affine-motion, and of characterizing the intensity images of a Lambertian scene by constructing the intensity manifold. For these problems, one must deal with a data matrix with some missing elements. In structure-from-motion, missing elements will occur if some point features are not visible in some frames. To construct the intensity manifold missing matrix elements will arise when the surface normals of some scene points do not face the light source in some images. We propose a novel method for fitting a low rank matrix to a matrix with missing elements. We show experimentally that our method produces good results in the presence of noise. These results can be either used directly, or can serve as an excellent starting point for an iterative method.
---
paper_title: Feature grouping in moving objects
paper_content:
We address the problem of grouping points or features common to a single object. In this paper we consider the processing of a sequence of two-dimensional orthogonal projections of a three-dimensional scene containing an unknown number of independently-moving rigid objects. We describe a computationally inexpensive algorithm that can determine the number of bodies and which points belong to which body.
---
paper_title: Segmentation using eigenvectors: a unifying view
paper_content:
Automatic grouping and segmentation of images remains a challenging problem in computer vision. Recently, a number of authors have demonstrated good performance on this task using methods that are based on eigenvectors of the affinity matrix. These approaches are extremely attractive in that they are based on simple eigendecomposition algorithms whose stability is well understood. Nevertheless, the use of eigendecompositions in the context of segmentation is far from well understood. In this paper we give a unified treatment of these algorithms, and show the close connections between them while highlighting their distinguishing features. We then prove results on eigenvectors of block matrices that allow us to analyze the performance of these algorithms in simple grouping settings. Finally, we use our analysis to motivate a variation on the existing methods that combines aspects from different eigenvector segmentation algorithms. We illustrate our analysis with results on real and synthetic images.
---
paper_title: A multi-body factorization method for motion analysis
paper_content:
The structure from motion problem has been extensively studied in the field of computer vision. Yet, the bulk of the existing work assumes that the scene contains only a single moving object. The more realistic case where an unknown number of objects move in the scene has received little attention, especially for its theoretical treatment. We present a new method for separating and recovering the motion and shape of multiple independently moving objects in a sequence of images. The method does not require prior knowledge of the number of objects, nor is dependent on any grouping of features into an object at the image level. For this purpose, we introduce a mathematical construct of object shapes, called the shape interaction matrix, which is invariant to both the object motions and the selection of coordinate systems. This invariant structure is computable solely from the observed trajectories of image features without grouping them into individual objects. Once the structure is computed, it allows for segmenting features into objects by the process of transforming it into a canonical form, as well as recovering the shape and motion of each object.
---
paper_title: Concerning Bayesian Motion Segmentation, Model Averaging, Matching and the Trifocal Tensor
paper_content:
Motion segmentation involves identifying regions of the image that correspond to independently moving objects. The number of independently moving objects, and the type of motion model for each of the objects, are unknown a priori.
---
paper_title: A Compact and Retrieval-Oriented Video Representation Using Mosaics
paper_content:
Compact yet intuitive representations of digital videos are required to combine high quality storage with interactive video indexing and retrieval capabilities. The advent of video mosaicing has provided a natural way to obtain content-based video representations which are both retrieval-oriented and compression-efficient. In this paper, an algorithm for extracting a robust mosaic representation of video content from sparse interest image points is described. The representation, which is obtained via visual motion clustering and segmentation, features the geometric and kinematic description of all salient objects in the scene, being thus well suited for video browsing, indexing and retrieval by visual content. Results of experiments on several TV sequences provide an insight into the main characteristics of the approach.
---
paper_title: Concerning Bayesian Motion Segmentation, Model Averaging, Matching and the Trifocal Tensor
paper_content:
Motion segmentation involves identifying regions of the image that correspond to independently moving objects. The number of independently moving objects, and the type of motion model for each of the objects, are unknown a priori.
---
paper_title: A probabilistic framework for spatio-temporal video representation and indexing
paper_content:
In this work we describe a novel statistical video representation and modeling scheme. Video representation schemes are needed to enable segmenting a video stream into meaningful video-objects, useful for later indexing and retrieval applications. In the proposed methodology, unsupervised clustering via Gaussian mixture modeling extracts coherent space-time regions in feature space, and corresponding coherent segments (video-regions) in the video content. A key feature of the system is the analysis of video input as a single entity as opposed to a sequence of separate frames. Space and time are treated uniformly. The extracted space-time regions allow for the detection and recognition of video events. Results of segmenting video content into static vs. dynamic video regions and video content editing are presented.
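To make the modelling idea concrete, the snippet below clusters the pixels of a short clip in a joint feature space of colour, normalised position and time with a Gaussian mixture, treating the clip as a single space-time entity. The feature choices and parameters are illustrative assumptions, not those of the paper.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def spacetime_gmm_segments(video, n_regions=6):
    """video: (T, H, W, 3) float array. Returns per-pixel region labels of shape (T, H, W)."""
    T, H, W, _ = video.shape
    t, y, x = np.meshgrid(np.arange(T), np.arange(H), np.arange(W), indexing='ij')
    feats = np.column_stack([
        video.reshape(-1, 3),                        # colour
        x.ravel() / W, y.ravel() / H, t.ravel() / T  # normalised space-time coordinates
    ])
    gmm = GaussianMixture(n_components=n_regions, covariance_type='full',
                          random_state=0).fit(feats)
    return gmm.predict(feats).reshape(T, H, W)
```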
---
paper_title: Motion segmentation and tracking using normalized cuts
paper_content:
We propose a motion segmentation algorithm that aims to break a scene into its most prominent moving groups. A weighted graph is constructed on the image sequence by connecting pixels that are in the spatiotemporal neighborhood of each other. At each pixel, we define motion profile vectors which capture the probability distribution of the image velocity. The distance between motion profiles is used to assign a weight on the graph edges. Using normalised cuts we find the most salient partitions of the spatiotemporal graph formed by the image sequence. For segmenting long image sequences, we have developed a recursive update procedure that incorporates knowledge of segmentation in previous frames for efficiently finding the group correspondence in the new frame.
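A hedged sketch of the weighting idea above: each pixel carries a motion profile (a normalised distribution over candidate velocities), and the weight of an edge between space-time neighbours decays with the distance between their profiles. The particular distance and scale used here are assumptions for illustration.

```python
import numpy as np

def motion_profile_weight(profile_a, profile_b, sigma=0.5):
    """Edge weight between two pixels given their motion profiles
    (probability distributions over a common set of candidate velocities)."""
    p = np.asarray(profile_a, dtype=float)
    q = np.asarray(profile_b, dtype=float)
    p = p / p.sum()
    q = q / q.sum()
    # chi-squared distance between the two distributions (one common choice)
    chi2 = 0.5 * np.sum((p - q) ** 2 / (p + q + 1e-12))
    return np.exp(-chi2 / (2.0 * sigma ** 2))

# Weights computed this way over a spatiotemporal neighbourhood populate the
# affinity matrix W that normalized cuts then partitions.
```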
---
paper_title: The emergence of visual objects in space–time
paper_content:
It is natural to think that in perceiving dynamic scenes, vision takes a series of snapshots. Motion perception can ensue when the snapshots are different. The snapshot metaphor suggests two questions: (i) How does the visual system put together elements within each snapshot to form objects? This is the spatial grouping problem. (ii) When the snapshots are different, how does the visual system know which element in one snapshot corresponds to which element in the next? This is the temporal grouping problem. The snapshot metaphor is a caricature of the dominant model in the field—the sequential model—according to which spatial and temporal grouping are independent. The model we propose here is an interactive model, according to which the two grouping mechanisms are not separable. Currently, the experiments that support the interactive model are not conclusive because they use stimuli that are excessively specialized. To overcome this weakness, we created a new type of stimulus—spatiotemporal dot lattices—which allow us to independently manipulate the strength of spatial and temporal groupings. For these stimuli, sequential models make one fundamental assumption: if the spatial configuration of the stimulus remains constant, the perception of spatial grouping cannot be affected by manipulations of the temporal configuration of the stimulus. Our data are inconsistent with this assumption.
---
paper_title: Efficient spatiotemporal grouping using the Nystrom method
paper_content:
Spectral graph theoretic methods have recently shown great promise for the problem of image segmentation, but due to the computational demands, applications of such methods to spatiotemporal data have been slow to appear. For even a short video sequence, the set of all pairwise voxel similarities is a huge quantity of data: one second of a 256×384 sequence captured at 30 Hz entails on the order of 10^13 pairwise similarities. The contribution of this paper is a method that substantially reduces the computational requirements of grouping algorithms based on spectral partitioning, making it feasible to apply them to very large spatiotemporal grouping problems. Our approach is based on a technique for the numerical solution of eigenfunction problems known as the Nystrom method. This method allows extrapolation of the complete grouping solution using only a small number of "typical" samples. In doing so, we successfully exploit the fact that there are far fewer coherent groups in an image sequence than pixels.
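The Nystrom idea can be sketched in a few lines: eigenvectors computed on a small set of sampled pixels are extrapolated to all remaining pixels using only the affinities between the two sets. This bare-bones illustration uses assumed variable names and omits the orthogonalisation step of the full method.

```python
import numpy as np

def nystrom_eigenvectors(A, B, n_vectors=5):
    """A: (m, m) affinities among m sampled pixels.
    B: (m, n) affinities between the samples and the n remaining pixels.
    Returns approximate leading eigenvectors for all m + n pixels."""
    vals, vecs = np.linalg.eigh(A)
    order = np.argsort(vals)[::-1][:n_vectors]
    vals, vecs = vals[order], vecs[:, order]
    # extrapolate rows for the unsampled pixels via B^T U diag(1/lambda)
    rest = B.T @ vecs @ np.diag(1.0 / vals)
    return np.vstack([vecs, rest])       # (m + n, n_vectors)
```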
---
paper_title: A probabilistic framework for spatio-temporal video representation and indexing
paper_content:
In this work we describe a novel statistical video representation and modeling scheme. Video representation schemes are needed to enable segmenting a video stream into meaningful video-objects, useful for later indexing and retrieval applications. In the proposed methodology, unsupervised clustering via Gaussian mixture modeling extracts coherent space-time regions in feature space, and corresponding coherent segments (video-regions) in the video content. A key feature of the system is the analysis of video input as a single entity as opposed to a sequence of separate frames. Space and time are treated uniformly. The extracted space-time regions allow for the detection and recognition of video events. Results of segmenting video content into static vs. dynamic video regions and video content editing are presented.
---
paper_title: Motion segmentation and tracking using normalized cuts
paper_content:
We propose a motion segmentation algorithm that aims to break a scene into its most prominent moving groups. A weighted graph is constructed on the image sequence by connecting pixels that are in the spatiotemporal neighborhood of each other. At each pixel, we define motion profile vectors which capture the probability distribution of the image velocity. The distance between motion profiles is used to assign a weight on the graph edges. Using normalised cuts we find the most salient partitions of the spatiotemporal graph formed by the image sequence. For segmenting long image sequences, we have developed a recursive update procedure that incorporates knowledge of segmentation in previous frames for efficiently finding the group correspondence in the new frame.
---
paper_title: Normalized cuts and image segmentation
paper_content:
We propose a novel approach for solving the perceptual grouping problem in vision. Rather than focusing on local features and their consistencies in the image data, our approach aims at extracting the global impression of an image. We treat image segmentation as a graph partitioning problem and propose a novel global criterion, the normalized cut, for segmenting the graph. The normalized cut criterion measures both the total dissimilarity between the different groups as well as the total similarity within the groups. We show that an efficient computational technique based on a generalized eigenvalue problem can be used to optimize this criterion. We have applied this approach to segmenting static images and found results very encouraging.
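The computational core of the approach is the generalized eigenvalue problem (D - W)y = lambda*D*y on the affinity matrix W with degree matrix D. A compact two-way cut using SciPy is sketched below; the splitting point on the second eigenvector is a simplification (one of several options discussed in the literature).

```python
import numpy as np
from scipy.linalg import eigh

def two_way_ncut(W):
    """W: symmetric (n, n) affinity matrix with positive row sums.
    Returns a boolean partition from the second-smallest generalized eigenvector."""
    d = W.sum(axis=1)
    D = np.diag(d)
    vals, vecs = eigh(D - W, D)          # solves (D - W) y = lambda D y
    y = vecs[:, 1]                       # eigenvector of the second-smallest eigenvalue
    return y > np.median(y)              # simple splitting point
```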
---
paper_title: Efficient spatiotemporal grouping using the Nystrom method
paper_content:
Spectral graph theoretic methods have recently shown great promise for the problem of image segmentation, but due to the computational demands, applications of such methods to spatiotemporal data have been slow to appear For even a short video sequence, the set of all pairwise voxel similarities is a huge quantity of data: one second of a 256/spl times/384 sequence captured at 30 Hz entails on the order of 10/sup 13/ pairwise similarities. The contribution of this paper is a method that substantially reduces the computational requirements of grouping algorithms based on spectral partitioning, making it feasible to apply them to very large spatiotemporal grouping problems. Our approach is based on a technique for the numerical solution of eigenfunction problems known as the Nystrom method This method allows extrapolation of the complete grouping solution using only a small number of "typical" samples. In doing so, we successfully exploit the fact that there are far fewer coherent groups in an image sequence than pixels.
---
paper_title: Segmentation using eigenvectors: a unifying view
paper_content:
Automatic grouping and segmentation of images remains a challenging problem in computer vision. Recently, a number of authors have demonstrated good performance on this task using methods that are based on eigenvectors of the affinity matrix. These approaches are extremely attractive in that they are based on simple eigendecomposition algorithms whose stability is well understood. Nevertheless, the use of eigendecompositions in the context of segmentation is far from well understood. In this paper we give a unified treatment of these algorithms, and show the close connections between them while highlighting their distinguishing features. We then prove results on eigenvectors of block matrices that allow us to analyze the performance of these algorithms in simple grouping settings. Finally, we use our analysis to motivate a variation on the existing methods that combines aspects from different eigenvector segmentation algorithms. We illustrate our analysis with results on real and synthetic images.
---
paper_title: The Visual Analysis of Human Movement: A Survey
paper_content:
The ability to recognize humans and their activities by vision is key for a machine to interact intelligently and effortlessly with a human-inhabited environment. Because of many potentially important applications, “looking at people” is currently one of the most active application domains in computer vision. This survey identifies a number of promising applications and provides an overview of recent developments in this domain. The scope of this survey is limited to work on whole-body or hand motion; it does not include work on human faces. The emphasis is on discussing the various methodologies; they are grouped in 2-D approaches with or without explicit shape models and 3-D approaches. Where appropriate, systems are reviewed. We conclude with some thoughts about future directions.
---
paper_title: Probabilistic Data Association Methods for Tracking Complex Visual Objects
paper_content:
We describe a framework that explicitly reasons about data association to improve tracking performance in many difficult visual environments. A hierarchy of tracking strategies results from ascribing ambiguous or missing data to: 1) noise-like visual occurrences, 2) persistent, known scene elements (i.e., other tracked objects), or 3) persistent, unknown scene elements. First, we introduce a randomized tracking algorithm adapted from an existing probabilistic data association filter (PDAF) that is resistant to clutter and follows agile motion. The algorithm is applied to three different tracking modalities-homogeneous regions, textured regions, and snakes-and extensibly defined for straightforward inclusion of other methods. Second, we add the capacity to track multiple objects by adapting to vision a joint PDAF which oversees correspondence choices between same-modality trackers and image features. We then derive a related technique that allows mixed tracker modalities and handles object overlaps robustly. Finally, we represent complex objects as conjunctions of cues that are diverse both geometrically (e.g., parts) and qualitatively (e.g., attributes). Rigid and hinge constraints between part trackers and multiple descriptive attributes for individual parts render the whole object more distinctive, reducing susceptibility to mistracking. Results are given for diverse objects such as people, microscopic cells, and chess pieces.
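To ground the PDAF machinery in code, below is a minimal single-target PDAF measurement update: each gated measurement, plus the hypothesis that none of them originated from the target, receives an association probability before a combined Kalman-style update. The clutter model and constants are illustrative assumptions, not the authors' settings.

```python
import numpy as np

def pdaf_update(x, P, H, R, measurements, p_detect=0.9, clutter_density=1e-4):
    """One probabilistic-data-association update for state x with covariance P.
    measurements: (m, dz) array of gated candidate measurements."""
    z_pred = H @ x
    S = H @ P @ H.T + R                              # innovation covariance
    S_inv = np.linalg.inv(S)
    K = P @ H.T @ S_inv
    nu = measurements - z_pred                       # one innovation per candidate
    dz = z_pred.size
    norm = 1.0 / np.sqrt((2 * np.pi) ** dz * np.linalg.det(S))
    lik = norm * np.exp(-0.5 * np.einsum('mi,ij,mj->m', nu, S_inv, nu))
    w = p_detect * lik
    w0 = (1.0 - p_detect) * clutter_density          # "all candidates are clutter"
    beta = np.append(w, w0)
    beta /= beta.sum()
    nu_comb = (beta[:-1, None] * nu).sum(axis=0)     # combined innovation
    x_new = x + K @ nu_comb
    # covariance: blend of updated / not-updated parts plus spread-of-innovations term
    P_upd = P - K @ S @ K.T
    spread = K @ ((beta[:-1, None, None] * np.einsum('mi,mj->mij', nu, nu)).sum(axis=0)
                  - np.outer(nu_comb, nu_comb)) @ K.T
    P_new = beta[-1] * P + (1 - beta[-1]) * P_upd + spread
    return x_new, P_new
```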
---
|
Title: A Survey of Spatio-Temporal Grouping Techniques
Section 1: Introduction
Description 1: Write an introduction explaining the importance of spatio-temporal grouping in video processing, including applications and challenges.
Section 2: Overview of Classification
Description 2: Describe the different approaches to spatio-temporal grouping, including segregation based on spatial and temporal dimensions, along with the order of operations.
Section 3: Building Blocks of the Grouping Process
Description 3: Discuss the fundamental elements used by various grouping methods, such as individual pixels or regions.
Section 4: Segmentation with Spatial Priority
Description 4: Explain methods that prioritize spatial segmentation before temporal grouping, including both motion and color/texture-based segmentation.
Section 5: Sequential Motion Segmentation
Description 5: Detail how spatio-temporal segmentation based on instantaneous motion works, including framewise motion segmentation and how temporal coherence is maintained.
Section 6: Framewise Motion Segmentation
Description 6: Discuss the techniques used in motion similarity and motion model fitting, emphasizing different motion parameter spaces and contextual similarities.
Section 7: Spatial Segmentation using Color/Texture
Description 7: Describe methods that rely on visual features such as color and texture for segmentation, and how these are merged temporally.
Section 8: Temporal Coherence
Description 8: Explain techniques for maintaining temporal coherence in spatio-temporal segmentation.
Section 9: Trajectory Grouping
Description 9: Describe methods that use long-term motion information by estimating and grouping trajectories.
Section 10: Direct Comparison of Trajectories
Description 10: Elaborate on methods that directly compare trajectories to define similarities and group them accordingly.
Section 11: Subspace Factorization
Description 11: Explain how subspace factorization is used to group trajectories based on their motion in matrix space.
Section 12: Hypothesize and Test
Description 12: Describe hypothesize and test methods, including RANSAC, and how they are used to validate trajectory groupings.
Section 13: Motion Mixture Models
Description 13: Discuss motion mixture models for associating trajectories with parametric motion models using an EM approach.
Section 14: Joint Spatial and Temporal Segmentation
Description 14: Overview methods that simultaneously consider spatial and temporal dimensions for video segmentation, including different clustering and graph-based approaches.
Section 15: Clustering in Feature Space
Description 15: Describe the approach of clustering pixels based on a multi-dimensional feature space that includes color, spatial, and temporal information.
Section 16: Graph-Based Segmentation
Description 16: Explain graph-based segmentation techniques and their application to spatio-temporal volumes using graph cut methods.
Section 17: Concluding Remarks
Description 17: Summarize the survey, discussing the categories of methods, their strengths and weaknesses, and potential areas for future research.
|
An Overview of OMG/CORBA
| 3 |
---
|
Title: An Overview of OMG/CORBA
Section 1: Motivation
Description 1: This section should discuss the reasons and background for integrating legacy systems with new technologies in corporate IT systems, highlighting the challenges and the need for a standardized distributed system framework like CORBA.
Section 2: OMG CORBA
Description 2: This section should provide a detailed explanation of the Object Management Architecture of OMG CORBA, including the core components such as CORBAservices, CORBAfacilities, Domain Interfaces, and Application Objects.
Section 3: CORBA Services
Description 3: This section should outline the various CORBA services that have been standardized by the OMG Technical Committee, including Naming, Event, Life Cycle, Persistent Object, Relationship, Externalization, Concurrency Control, Transaction, Security, Time, Collection, Property, Licensing, Query, and Trading services.
Section 4: Summary and Future Adoptions
Description 4: This section should offer a brief summary of the key points covered in the paper and discuss future directions for the OMG CORBA standard, including planned service adoptions and ongoing work on Domain Interfaces and other task forces.
|
Requirements engineering: a review and research agenda
| 24 |
---
paper_title: Understanding and controlling software costs
paper_content:
A discussion is presented of the two primary ways of understanding software costs. The black-box or influence-function approach provides useful experimental and observational insights on the relative software productivity and quality leverage of various management, technical, environmental, and personnel options. The glass-box or cost distribution approach helps identify strategies for integrated software productivity and quality improvement programs using such structures as the value chain and the software productivity opportunity tree. The individual strategies for improving software productivity are identified. Issues related to software costs and controlling them are examined and discussed. It is pointed out that a good framework of techniques exists for controlling software budgets, schedules, and work completed, but that a great deal of further progress is needed to provide an overall set of planning and control techniques covering software product qualities and end-user system objectives.
---
paper_title: Viewpoints for requirements definition
paper_content:
This paper is a survey of the current viewpoint-oriented requirements approaches and a description of an alternative object-oriented viewpoint-based approach. The paper sets out a case for a multiple viewpoint-oriented approach in requirements definition and, using a simple case study, examines the viewpoint approach adopted by three requirements methodologies. The paper concludes by proposing an alternative object-oriented viewpoint-based approach.
---
paper_title: Review of design methodology
paper_content:
The paper surveys design methodology, the science of methods of design. It discusses the aims of design methodology as well as objections to it. The various sources of design methodology are reviewed. The nature and structure of the design process are outlined. An organised presentation is given of methods of design concept generation. Finally the evaluation and decision steps in design are briefly analysed. In conclusion the authors state their contention that design methodology is a useful contribution to design.
---
paper_title: Understanding and controlling software costs
paper_content:
A discussion is presented of the two primary ways of understanding software costs. The black-box or influence-function approach provides useful experimental and observational insights on the relative software productivity and quality leverage of various management, technical, environmental, and personnel options. The glass-box or cost distribution approach helps identify strategies for integrated software productivity and quality improvement programs using such structures as the value chain and the software productivity opportunity tree. The individual strategies for improving software productivity are identified. Issues related to software costs and controlling them are examined and discussed. It is pointed out that a good framework of techniques exists for controlling software budgets, schedules, and work completed, but that a great deal of further progress is needed to provide an overall set of planning and control techniques covering software product qualities and end-user system objectives.
---
paper_title: Reviewing and correcting specifications
paper_content:
We outline a scheme for marking suggested edits and annotations on software specifications, a particularly complex class of structured document, during the process of review and correction. The scheme is based on a formal model of document construction and review and on typographic marking methods. The scheme permits precise and interpretable marking and annotation of documents which use many different notations. It supports and guides the process of correction. Some examples and a sample visual notation are given. Tool support for using this scheme is briefly discussed.
---
|
Title: Requirements Engineering: A Review and Research Agenda
Section 1: Introduction
Description 1: Introduce the purpose of the paper, provide a definition of requirements engineering, and explain the motivation for research in this field.
Section 2: Organisational setting
Description 2: Discuss the various organisational contexts in which requirements engineering takes place and their implications on the development process.
Section 3: Contract and procurement procedures
Description 3: Explain how contractual and procurement issues frame the requirements engineering process and their impact on development.
Section 4: Personnel and staffing the requirements engineering process
Description 4: Highlight the importance of skills, especially communication skills, among engineers and client representatives in the requirements engineering process.
Section 5: Bounding
Description 5: Describe the process of establishing the scope and boundaries of the requirements and design space.
Section 6: Feasibility and risk
Description 6: Identify and assess the feasibility of requirements and the primary risks involved in the system development process.
Section 7: Stakeholder analysis
Description 7: Explain the process of identifying and understanding the stakeholders involved in the requirements engineering process.
Section 8: Participation
Description 8: Discuss the group process of requirements engineering, emphasizing cooperation, consensus building, and negotiation.
Section 9: Information gathering
Description 9: Outline the challenges and techniques involved in gathering information on needs and the environment in which these needs exist.
Section 10: Value modelling
Description 10: Describe the process of building a model that documents and relates the valued attributes of a system.
Section 11: Modelling goals and required services
Description 11: Focus on identifying the goals and required services that a projected system is required to satisfy.
Section 12: Domain modelling
Description 12: Explain the importance of constructing models of the environment in which the system will interact.
Section 13: Task analysis
Description 13: Discuss methods for understanding the tasks performed by users of the system and strategies for fitting tasks with system properties.
Section 14: Reuse
Description 14: Discuss the reuse of requirements engineering products and processes from previous projects to optimize future ones.
Section 15: Validation
Description 15: Describe methods for ensuring that the products of the requirements engineering process accurately embody stakeholder requirements.
Section 16: Exploration
Description 16: Explain the use of prototypes and system simulations to explore and refine requirements.
Section 17: Verification
Description 17: Highlight the importance of ensuring that subsequent development products reflect documented requirements accurately.
Section 18: Inspection
Description 18: Discuss the role of systematic inspection in eliminating errors and misconceptions early in the requirements engineering process.
Section 19: Metrics
Description 19: Explain the significance of measuring the products and process of requirements engineering to ensure predictability and control.
Section 20: Estimation
Description 20: Outline the responsibility of requirements engineering to supply preliminary estimates of development cost, effort, and schedule.
Section 21: Information management
Description 21: Discuss the importance of managing large volumes of technical information and documentation produced during requirements engineering.
Section 22: Recording rationale and argumentation
Description 22: Emphasize the need for recording decisions, assumptions, and alternative solutions during the requirements engineering process.
Section 23: Traceability
Description 23: Explain the importance of maintaining the ability to trace requirements both forward and backward through the development process.
Section 24: Standards and Conformance
Description 24: Discuss the need for conformance to external standards and codes of practice in documenting requirements and organizing the process.
|
Nanobiomechanics of living cells: a review
| 24 |
---
paper_title: Comparative study on the differential mechanical properties of human liver cancer and normal cells
paper_content:
Although cancerous cells and normal cells are known to have different elasticity values, there have been inconsistent reports in terms of the actual and relative values for these two cell types depending on the experimental conditions. This paper investigated the mechanical characterization of normal hepatocytes (THLE-2) and hepatocellular carcinoma cells (HepG2) using atomic force microscopy indentation experiments and the Hertz–Sneddon model, and the results were confirmed by an independent de-adhesion assay. To improve the reliability of the data, we considered the effects of tip geometry and indentation depth on the measured elasticity of the cells. This study demonstrated that THLE-2 cells had a higher elastic modulus compared with the HepG2 cells and that this difference was more significant when a conical tip was used. The inhibitor study indicated that this difference in the mechanical properties of THLE-2 and HepG2 cells was mainly attributed to differential arrangements in the cytoskeleton.
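For context on how indentation data of this kind yield a modulus, the textbook Hertz (spherical tip) and Sneddon (conical tip) relations can be inverted for the Young's modulus; the short sketch below implements those standard formulas and is a generic illustration rather than the authors' fitting code.

```python
import numpy as np

def youngs_modulus_sphere(force, depth, tip_radius, poisson=0.5):
    """Hertz model for a spherical indenter: F = (4/3) * E* * sqrt(R) * d^1.5,
    with E* = E / (1 - nu^2). Inputs in SI units; returns E in Pa."""
    e_star = 3.0 * force / (4.0 * np.sqrt(tip_radius) * depth ** 1.5)
    return e_star * (1.0 - poisson ** 2)

def youngs_modulus_cone(force, depth, half_angle_rad, poisson=0.5):
    """Sneddon model for a conical indenter: F = (2/pi) * E* * tan(alpha) * d^2."""
    e_star = np.pi * force / (2.0 * np.tan(half_angle_rad) * depth ** 2)
    return e_star * (1.0 - poisson ** 2)

# Example with assumed numbers: 0.5 nN at 500 nm depth with a 5 um bead
# print(youngs_modulus_sphere(0.5e-9, 500e-9, 5e-6))
```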
---
paper_title: Confocal microscopy indentation system for studying in situ chondrocyte mechanics
paper_content:
Chondrocytes synthesize extracellular matrix molecules, thus they are essential for the development, adaptation and maintenance of articular cartilage. Furthermore, it is well accepted that the biosynthetic activity of chondrocytes is influenced by the mechanical environment. Therefore, their response to mechanical stimuli has been studied extensively. Much of the knowledge in this area of research has been derived from testing of isolated cells, cartilage explants, and fixed cartilage specimens: systems that differ in important aspects from chondrocytes embedded in articular cartilage and observed during loading conditions. In this study, current model systems have been improved by working with the intact cartilage in real time. An indentation system was designed on a confocal microscope that allows for simultaneous loading and observation of chondrocytes in their native environment. Cell mechanics were then measured under precisely controlled loading conditions. The indentation system is based on a light transmissible cylindrical glass indentor of 0.17 mm thickness and 1.64 mm diameter that is aligned along the focal axis of the microscope and allows for real time observation of live cells in their native environment. The system can be used to study cell deformation and biological responses, such as calcium sparks, while applying prescribed loads on the cartilage surface. It can also provide novel information on the relationship between cell loading and cartilage adaptive/degenerative processes in the intact tissue.
---
paper_title: Transient adhesion mediated by ligand–receptor interaction on surfaces of variable nanotopography
paper_content:
Surface microtopography and nanotopography have been shown to influence cell adhesion and function, including proliferation and differentiation, leading both to fundamental questions and practical applications in the field of biomaterials and nanomedicine. However, the mechanisms of how cells sense topography remain obscure. In this study, we measured directly the effect of nanotopography on the kinetics of association and dissociation of ligand–receptor bonds, which are critically involved in the first steps of cell adhesion. We designed models of biological functionalised surfaces with controlled roughness varying from 2 to 400 nm of root mean square, and controlled ligand density. Tests of transient adhesion of receptor–coated microspheres on these surfaces were performed, using a laminar flow chamber assay. We probed Intercellular Adhesion Molecule ICAM–1–anti–ICAM–1 bond adhesion kinetics in the single molecule limit on smooth and rough substrates. Frequency of adhesion did not exhibit any noticeable dependence on roughness parameter, except at high bead velocity. Detachment rate was also independent of roughness. Finally, leucocyte transient adhesion tests were performed on similar substrates, using variable activating incubating media. Here also, no strong effect of roughness was observed in these conditions. Results are rationalised in terms of the role of local geometry on the access of ligands to receptors.
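As one way to quantify the detachment rates discussed above, single-bond lifetimes measured in a flow chamber are commonly summarised by an exponential fit (off-rate as the inverse mean lifetime) and, under force, by the Bell model. The snippet is a generic, hedged illustration of those standard relations, not the authors' analysis pipeline.

```python
import numpy as np

def off_rate_from_lifetimes(lifetimes_s):
    """Maximum-likelihood off-rate (1/s) assuming exponentially
    distributed single-bond lifetimes."""
    return 1.0 / np.mean(lifetimes_s)

def bell_off_rate(force_pN, k_off_0, x_beta_nm, temperature_K=298.0):
    """Bell model: k_off(F) = k_off_0 * exp(F * x_beta / (kB * T))."""
    kB = 1.380649e-23                          # J/K
    F = force_pN * 1e-12                       # N
    x_beta = x_beta_nm * 1e-9                  # m
    return k_off_0 * np.exp(F * x_beta / (kB * temperature_K))
```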
---
paper_title: Cellular Tensegrity and Mechanochemical Transduction
paper_content:
To explain how biological tissues form and function, we must first understand how different types of regulatory signals, both chemical and mechanical, integrate inside the cell. A clue to the mechanism of signal integration comes from recognition that the action of a force on any mass, regardless of scale, will result in a change in three dimensional structure. This is critical because recent studies reveal that many of the molecules that mediate signal transduction and stimulus-response coupling are physically bound to insoluble structural scaffoldings within the cytoskeleton and nucleus (Ingber 1993a). In this type of “solid-state” regulatory system, mechanically-induced structural arrangements could provide a mechanism for regulating cellular biochemistry and hence, efficiently integrating structure and function. However, this is a difficult question to address using conventional molecular biological approaches because this problem is not based on changes in chemical composition or local binding interactions. Rather, it is a question of architecture. As a result of this challenge, a new scientific discipline of “Molecular Cell Engineering” is beginning to emerge which combines elements of molecular cell biology, bioengineering, architecture, and biomechanics.
---
paper_title: MECHANICS OF THE HUMAN RED BLOOD CELL DEFORMED BY OPTICAL TWEEZERS
paper_content:
The mechanical deformation characteristics of living cells are known to influence strongly their chemical and biological functions and the onset, progression and consequences of a number of human diseases. The mechanics of the human red blood cell (erythrocyte) subjected to large deformation by optical tweezers forms the subject of this paper. Video photography of the cell deformed in a phosphate buffered saline solution at room temperature during the imposition of controlled stretching forces, in the tens to several hundreds picoNewton range, is used to assess experimentally the deformation characteristics. The mechanical responses of the cell during loading and upon release of the optical force are then analysed to extract the elastic properties of the cell membrane by recourse to several different constitutive formulations of the elastic and viscoelastic behavior within the framework of a fully three-dimensional finite element analysis. A parametric study of various geometric, loading and structural factors is also undertaken in order to develop quantitative models for the mechanics of deformation by means of optical tweezers. The outcome of the experimental and computational analyses is then compared with the information available on the mechanical response of the red blood cell from other independent experimental techniques. Potential applications of the optical tweezers method described in this paper to the study of mechanical deformation of living cells under different stress states and in response to the progression of some diseases are also highlighted.
---
paper_title: Nanotopographical control of human osteoprogenitor differentiation
paper_content:
Current load-bearing orthopaedic implants are produced in ‘bio-inert’ materials such as titanium alloys. When inserted into the reamed bone during hip or knee replacement surgery the implants interact with mesenchymal populations including the bone marrow. Bio-inert materials are shielded from the body by differentiation of the cells along the fibroblastic lineage producing scar tissue and inferior healing. This is exacerbated by implant micromotion, which can lead to capsule formation. Thus, next-generation implant materials will have to elicit influence over osteoprogenitor differentiation and mesenchymal populations in order to recruit osteoblastic cells and produce direct bone apposition onto the implant. A powerful method of delivering cues to cells is via topography. Micro-scale topography has been shown to affect cell adhesion, migration, cytoskeleton, proliferation and differentiation of a large range of cell types (thus far all cell types tested have been shown to be responsive to topographical cues). More recent research with nanotopography has also shown a broad range of cell response, with fibroblastic cells sensing down to 10 nm in height. Initial studies with human mesenchymal populations and osteoprogenitor populations have again shown strong cell responses to nanofeatures with increased levels of osteocalcin and osteopontin production from the cells on certain topographies. This is indicative of increased osteoblastic activity on the nanotextured materials. Looking at preliminary data, it is tempting to speculate that progenitor cells are, in fact, more responsive to topography than more mature cell types and that they are actively seeking cues from their environment. This review will investigate the range of nanotopographies available to researchers and our present understanding of mechanisms of progenitor cell response. Finally, it will make some speculations of the future of nanomaterials and progenitor cells in tissue engineering.
---
paper_title: Signal transduction pathways involved in mechanical regulation of HB-GAM expression in osteoblastic cells.
paper_content:
Protein kinase C (PKC), protein kinase A (PKA), prostaglandin synthesis, and various mitogen-activated protein kinases (MAPKs) have been reported to be activated in bone cells by mechanical loading. We studied the involvement of these signal transduction pathways in the downregulation of HB-GAM expression in osteoblastic cells after cyclic stretching. Specific antagonists and agonists of these signal transduction pathways were added to cells before loading and to non-loaded control cells. Quantitative RT-PCR was used to evaluate gene expression. The data demonstrated that the extracellular signal-regulated kinase (ERK) 1/2 pathway, PKC, PKA, p38, and c-Jun N-terminal kinase MAPK participated in the mechanical downregulation of HB-GAM expression, whereas prostaglandin synthesis did not seem to be involved.
---
paper_title: Cellular mechanotransduction: putting all the pieces together again
paper_content:
Analysis of cellular mechanotransduction, the mechanism by which cells convert mechanical signals into biochemical responses, has focused on identification of critical mechanosensitive molecules and cellular components. Stretch-activated ion channels, caveolae, integrins, cadherins, growth factor receptors, myosin motors, cytoskeletal filaments, nuclei, extracellular matrix, and numerous other structures and signaling molecules have all been shown to contribute to the mechanotransduction response. However, little is known about how these different molecules function within the structural context of living cells, tissues, and organs to produce the orchestrated cellular behaviors required for mechanosensation, embryogenesis, and physiological control. Recent work from a wide range of fields reveals that organ, tissue, and cell anatomy are as important for mechanotransduction as individual mechanosensitive proteins and that our bodies use structural hierarchies (systems within systems) composed of interconne...
---
paper_title: Mechanical models for living cells—a review☆
paper_content:
As physical entities, living cells possess structural and physical properties that enable them to withstand the physiological environment as well as mechanical stimuli occurring within and outside the body. Any deviation from these properties will not only undermine the physical integrity of the cells, but also their biological functions. As such, a quantitative study in single cell mechanics needs to be conducted. In this review, we will examine some mechanical models that have been developed to characterize mechanical responses of living cells when subjected to both transient and dynamic loads. The mechanical models include the cortical shell–liquid core (or liquid drop) models which are widely applied to suspended cells; the solid model which is generally used for adherent cells; the power-law structural damping model which is more suited for studying the dynamic behavior of adherent cells; and finally, the biphasic model which has been widely used to study musculoskeletal cell mechanics. Based upon these models, future attempts can be made to develop even more detailed and accurate mechanical models of living cells once these three factors are adequately addressed: structural heterogeneity, appropriate constitutive relations for each of the distinct subcellular regions and components, and active forces acting within the cell. More realistic mechanical models of living cells can further contribute towards the study of mechanotransduction in cells.
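To illustrate the power-law structural damping model mentioned above, the complex modulus is often written as G*(w) = G0*(w/w0)^a*(1 + i*tan(pi*a/2)) + i*w*mu, with the real part as the storage modulus and the imaginary part as the loss modulus. The sketch below evaluates that commonly used form; the parameter names, reference frequency and the added Newtonian viscosity term are assumptions here.

```python
import numpy as np

def structural_damping_modulus(omega, g0, alpha, mu=0.0, omega0=1.0):
    """Complex modulus of the power-law structural damping model:
    G*(w) = g0 * (w / w0)^alpha * (1 + i*tan(pi*alpha/2)) + i*w*mu."""
    hysteresivity = np.tan(np.pi * alpha / 2.0)
    return g0 * (omega / omega0) ** alpha * (1.0 + 1j * hysteresivity) + 1j * omega * mu

# Example: storage/loss moduli over three frequency decades (assumed parameters)
w = 2 * np.pi * np.logspace(-1, 2, 50)
G = structural_damping_modulus(w, g0=500.0, alpha=0.2, mu=1.0)
g_storage, g_loss = G.real, G.imag
```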
---
paper_title: Nanosize and vitality : TiO2 nanotube diameter directs cell fate
paper_content:
We generated, on titanium surfaces, self-assembled layers of vertically oriented TiO2 nanotubes with defined diameters between 15 and 100 nm and show that adhesion, spreading, growth, and differentiation of mesenchymal stem cells are critically dependent on the tube diameter. A spacing less than 30 nm with a maximum at 15 nm provided an effective length scale for accelerated integrin clustering/focal contact formation and strongly enhances cellular activities compared to smooth TiO2 surfaces. Cell adhesion and spreading were severely impaired on nanotube layers with a tube diameter larger than 50 nm, resulting in dramatically reduced cellular activity and a high extent of programmed cell death. Thus, on a TiO2 nanotube surface, a lateral spacing geometry with openings of 30−50 nm represents a critical borderline for cell fate.
---
paper_title: Manipulation and isolation of single cells and nuclei.
paper_content:
The heterogeneous behavior of cells within a cell population makes measurements at the multicellular level insensitive to changes in single cells. Single-cell and single-nucleus analyses are therefore important to address this deficiency which will aid in the understanding of fundamental biology at both the cellular and subcellular levels. Recent technological advancements have enabled the development of new methodologies capable of handling these new challenges. This review highlights various techniques used in single-cell and single-nucleus manipulation and isolation. In particular, the applications related to microfluidics, electrical, optical, and physical methods will be discussed. Ultimately, it is hoped that these techniques will enable fundamental tests to be conducted on single cells and nuclei. One important potential outcome is that this will contribute not only towards detection and isolation of diseased cells but also more accurate diagnosis and prognosis of human diseases.
---
paper_title: Unconfined creep compression of chondrocytes
paper_content:
The study of single cell mechanics offers a valuable tool for understanding cellular milieus. Specific knowledge of chondrocyte biomechanics could lead to elucidation of disease etiologies and the biomechanical factors most critical to stimulating regenerative processes in articular cartilage. Recent studies in our laboratory have suggested that it may be acceptable to approximate the shape of a single chondrocyte as a disc. This geometry is easily utilized for generating models of unconfined compression. In this study, three continuum mechanics models of increasing complexity were formulated and used to fit unconfined compression creep data. Creep curves were obtained from middle/deep zone chondrocytes (n = 15) and separately fit using the three continuum models. The linear elastic solid model yielded a Young's modulus of 2.55±0.85 kPa. The viscoelastic model (adapted from the Kelvin model) generated an instantaneous modulus of 2.47±0.85 kPa, a relaxed modulus of 1.48±0.35 kPa, and an apparent viscosity of 1.92±1.80 kPa-s. Finally, a linear biphasic model produced an aggregate modulus of 2.58±0.87 kPa, a permeability of 2.57×10^−12 ± 3.09 m^4/N-s, and a Poisson's ratio of 0.069±0.021. The results of this study demonstrate that similar values for the cell modulus can be obtained from three models of increasing complexity. The elastic model provides an easy method for determining the cell modulus, however, the viscoelastic and biphasic models generate additional material properties that are important for characterizing the transient response of compressed chondrocytes.
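As a concrete counterpart to the viscoelastic (Kelvin-type standard linear solid) fit described above, creep under a step stress is commonly fitted with an instantaneous modulus, a relaxed modulus and a time constant. The sketch below fits that standard creep form with SciPy; it is an illustrative assumption, not the authors' fitting code.

```python
import numpy as np
from scipy.optimize import curve_fit

def sls_creep_strain(t, e_instant, e_relaxed, tau, stress=1.0):
    """Creep of a standard linear solid under a step stress: jumps to
    stress/e_instant at t = 0 and relaxes exponentially to stress/e_relaxed."""
    j0, jinf = 1.0 / e_instant, 1.0 / e_relaxed
    return stress * (jinf - (jinf - j0) * np.exp(-t / tau))

def fit_sls(time_s, strain, stress):
    p0 = (2.0e3, 1.0e3, 10.0)                     # Pa, Pa, s (rough initial guess)
    popt, _ = curve_fit(lambda t, e0, einf, tau:
                        sls_creep_strain(t, e0, einf, tau, stress),
                        time_s, strain, p0=p0, maxfev=10000)
    return dict(zip(("E_instantaneous_Pa", "E_relaxed_Pa", "tau_s"), popt))
```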
---
paper_title: Viscoelastic properties of chondrocytes from normal and osteoarthritic human cartilage
paper_content:
The deformation behavior and mechanical properties of articular chondrocytes are believed to play an important role in their response to mechanical loading of the extracellular matrix. This study utilized the micropipette aspiration test to measure the viscoelastic properties of chondrocytes isolated from macroscopically normal or end-stage osteoarthritic cartilage. A three-parameter standard linear solid was used to model the viscoelastic behavior of the cells. Significant differences were found between the mechanical properties of chondrocytes isolated from normal and osteoarthritic cartilage. Specifically, osteoarthritic chondrocytes exhibited a significantly higher equilibrium modulus (0.33 +/- 0.23 compared with 0.24 +/- 0.11 kPa), instantaneous modulus (0.63 +/- 0.51 compared with 0.41 +/- 0.17 kPa), and apparent viscosity (5.8 +/- 6.5 compared with 3.0 +/- 1.8 kPa-s) compared with chondrocytes isolated from macroscopically normal, nonosteoarthritic cartilage. The elastic moduli and relaxation time constant determined experimentally in this study were used to estimate the apparent biphasic properties of the chondrocyte on the basis of the equation for the gel relaxation time of a biphasic material. The differences in viscoelastic properties may reflect alterations in the structure and composition of the chondrocyte cytoskeleton that have previously been associated with osteoarthritic cartilage. Coupled with earlier theoretical models of cell-matrix interactions in articular cartilage, the increased elastic and viscous properties suggest that the mechanical environment of the chondrocyte may be altered in osteoarthritic cartilage.
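For reference, micropipette aspiration data of this kind are often converted to an equilibrium Young's modulus with the elastic half-space (punch) model, E ≈ 3*a*Δp*Φ/(2π*L), where a is the pipette inner radius, Δp the aspiration pressure, L the aspirated length and Φ ≈ 2.1 a wall-geometry factor. The snippet encodes that commonly cited relation as an illustration; it is not taken from the paper and the factor value is an assumption.

```python
import math

def aspiration_youngs_modulus(pipette_radius_m, pressure_Pa, aspirated_length_m,
                              wall_function=2.1):
    """Half-space (punch) model often used for micropipette aspiration:
    E = 3 * a * dP * phi / (2 * pi * L). Returns E in Pa."""
    return (3.0 * pipette_radius_m * pressure_Pa * wall_function
            / (2.0 * math.pi * aspirated_length_m))

# Example with assumed values: a = 2 um, dP = 200 Pa, L = 3 um
# print(aspiration_youngs_modulus(2e-6, 200.0, 3e-6))
```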
---
paper_title: Mechanobiology of the Intervertebral Disc and Relevance to Disc Degeneration
paper_content:
Mechanical loading of the intervertebral disc may contribute to disc degeneration by initiating degeneration or by regulating cell-mediated remodeling events that occur in response to the mechanical stimuli of daily activity. This article is a review of the current knowledge of the role of mechanical stimuli in regulating intervertebral disc cellular responses to loading and the cellular changes that occur with degeneration. ::: ::: Intervertebral disc cells exhibit diverse biologic responses to mechanical stimuli, depending on the loading type, magnitude, duration, and anatomic zone of cell origin. The innermost cells respond to low-to-moderate magnitudes of static compression, osmotic pressure, or hydrostatic pressure with increases in anabolic cell responses. Higher magnitudes of loading may give rise to catabolic responses marked by elevated protease gene or protein expression or activity. The key regulators of these mechanobiologic responses for intervertebral disc cells will be the micromechanical stimuli experienced at the cellular level, which are predicted to differ from that measured for the extracellular matrix. Large hydrostatic pressures, but little volume change, are predicted to occur for cells of the nucleus pulposus during compression, while the highly oriented cells of the anulus fibrosus may experience deformations in tension or compression during matrix deformations. In general, the pattern of biologic response to applied loads suggests that the cells of the nucleus pulposus and inner portion of the anulus fibrosus experience comparable micromechanical stimuli in situ and may respond more similarly than cells of the outer portion of the anulus fibrosus. Changes in these features with degeneration are critically understudied, particularly degeneration-associated changes in cell-level mechanical stimuli and the associated mechanobiology. ::: ::: Little is known of the mechanisms that regulate cellular responses to intervertebral mechanobiology, nor is much known with regard to the precise mechanical stimuli experienced by cells during loading. Mechanical factors appear to regulate responses of the intervertebral disc cells through mechanisms involving intracellular Ca2+ transients and cytoskeletal remodeling that may regulate downstream effects such as gene expression and posttranslational biosynthesis. Future studies should address the broader biologic responses to mechanical stimuli in intervertebral disc mechanobiology, the involved signaling mechanisms, and the apparently important interactions among mechanical factors, genetic factors, cytokines, and inflammatory mediators that may be critical in the regulation of intervertebral disc degeneration.
---
paper_title: The Deformation of an Erythrocyte Under the Radiation Pressure by Optical Stretch
paper_content:
, p. 149, 1993) for liposomes. The case under study is the erythrocyte stretched by a pair of laser beams in opposite directions within buffer solutions. The study aims to elucidate the effect of radiation pressure from the optical laser because up to now little is known about its influence on the cell deformation. Following an earlier study by Guck et al. (Phys. Rev. Lett.,
---
paper_title: Viscoelasticity of the human red blood cell
paper_content:
We report here the first measurements of the complex modulus of the isolated red blood cell (RBC). Because the RBC is often larger than capillary diameter, important determinants of microcirculatory function are RBC deformability and its changes with pathologies, such as sickle cell disease and malaria. A functionalized ferrimagnetic microbead was attached to the membrane of healthy RBC and then subjected to an oscillatory magnetic field. The resulting torque caused cell deformation. From the oscillatory forcing and resulting bead motions, which were tracked optically, we computed elastic and frictional moduli, g' and g", respectively, from 0.1 to 100 Hz. The g' was nearly frequency independent and dominated the response at all but the highest frequencies measured. Over three frequency decades, g" increased as a power law with an exponent of 0.64, a result not predicted by any simple model. These data suggest that RBC relaxation times that have been reported previously, and any models that rest upon them, are artifactual; the artifact, we suggest, arises from forcing to an exponential fit data of limited temporal duration. A linear range of response was observed, but, as forcing amplitude increased, nonlinearities became clearly apparent. A finite element model suggests that membrane bending was localized to the vicinity of the bead and dominated membrane shear. While the mechanisms accounting for these RBC dynamics remain unclear, methods described here establish new avenues for the exploration of connections among the mechanical, chemical, and biological characteristics of the RBC in health and disease.
---
paper_title: Biomechanics approaches to studying human diseases
paper_content:
Nanobiomechanics has recently been identified as an emerging field that can potentially make significant contributions in the study of human diseases. Research into biomechanics at the cellular and molecular levels of some human diseases has not only led to a better elucidation of the mechanisms behind disease progression, because diseased cells differ physically from healthy ones, but has also provided important knowledge in the fight against these diseases. This article highlights some of the cell and molecular biomechanics research carried out on human diseases such as malaria, sickle cell anemia and cancer and aims to provide further important insights into the pathophysiology of such diseases. It is hoped that this can lead to new methods of early detection, diagnosis and treatment.
---
paper_title: Geometry as a Factor for Tissue Growth: Towards Shape Optimization of Tissue Engineering Scaffolds
paper_content:
Scaffolds for tissue engineering are usually designed to support cell viability with large adhesion surfaces and high permeability to nutrients and oxygen. Recent experiments support the idea that, in addition to surface roughness, elasticity and chemistry, the macroscopic geometry of the substrate also contributes to control the kinetics of tissue deposition. In this study, a previously proposed model for the behavior of osteoblasts on curved surfaces is used to predict the growth of bone matrix tissue in pores of different shapes. These predictions are compared to in vitro experiments with MC3T3-E1 pre-osteoblast cells cultivated in two-millimeter thick hydroxyapatite plates containing prismatic pores with square- or cross-shaped sections. The amount and shape of the tissue formed in the pores measured by phase contrast microscopy confirms the predictions of the model. In cross-shaped pores, the initial overall tissue deposition is twice as fast as in square-shaped pores. These results suggest that the optimization of pore shapes may improve the speed of ingrowth of bone tissue into porous scaffolds.
---
paper_title: The enhancement of chondrogenic differentiation of human mesenchymal stem cells by enzymatically regulated RGD functionalities
paper_content:
A thiol-acrylate photopolymerization was used to incorporate enzymatically cleavable peptide sequences into PEG hydrogels to induce chondrogenic differentiation of encapsulated human mesenchymal stem cells (hMSCs). An adhesive sequence, RGD, was designed with an MMP-13 specific cleavable linker. RGD promotes survival of hMSCs encapsulated in PEG gels and has shown to induce early stages of chondrogenesis, while its persistence can limit complete differentiation. Therefore, an MMP-13 cleavage site was incorporated into the peptide sequence to release RGD mimicking the native differentiation timeline. Active MMP-13 production of encapsulated hMSCs was seen to increase from days 9–14 and only in chondrogenic differentiating cultures. Seeded hMSCs attached to the material prior to enzymatic cleavage, but a significant population of the cells detach after cleavage and release of RGD. Finally, hMSCs encapsulated in RGD-releasing gels produce 10 times as much glycosaminoglycan as cells with uncleavable RGD functionalities, by day 21 of culture. Furthermore, 75% of the cells stain positive for collagen type II deposition where RGD is cleavable, as compared to 19% for cultures where RGD persists. Collectively, this data provides evidence that temporal regulation of integrin-binding peptides is important in the design of niches in differentiating hMSCs to chondrocytes.
---
paper_title: Cell mechanics, structure, and function are regulated by the stiffness of the three-dimensional microenvironment.
paper_content:
This study adopts a combined computational and experimental approach to determine the mechanical, structural, and metabolic properties of isolated chondrocytes cultured within three-dimensional hydrogels. A series of linear elastic and hyperelastic finite-element models demonstrated that chondrocytes cultured for 24 h in gels for which the relaxation modulus is <5 kPa exhibit a cellular Young's modulus of ∼5 kPa. This is notably greater than that reported for isolated chondrocytes in suspension. The increase in cell modulus occurs over a 24-h period and is associated with an increase in the organization of the cortical actin cytoskeleton, which is known to regulate cell mechanics. However, there was a reduction in chromatin condensation, suggesting that changes in the nucleus mechanics may not be involved. Comparison of cells in 1% and 3% agarose showed that cells in the stiffer gels rapidly develop a higher Young's modulus of ∼20 kPa, sixfold greater than that observed in the softer gels. This was associated with higher levels of actin organization and chromatin condensation, but only after 24 h in culture. Further studies revealed that cells in stiffer gels synthesize less extracellular matrix over a 28-day culture period. Hence, this study demonstrates that the properties of the three-dimensional microenvironment regulate the mechanical, structural, and metabolic properties of living cells.
---
paper_title: Cytoindentation for obtaining cell biomechanical properties.
paper_content:
A novel biomechanical testing methodology was developed to obtain the intrinsic material properties of an individual cell attached to a rigid substrate. With use of a newly designed cell-indentation apparatus (cytoindenter), displacement-controlled indentation tests were conducted on the surface of individual MG63 cells and the corresponding surface reaction force of each cell was measured. The cells were modeled with a linear elasticity solution of half-space indentation and the linear biphasic theory on the assumption that the viscoelastic behavior of each cell was due to the interaction between the solid cytoskeletal matrix and the cytoplasmic fluid. To obtain the intrinsic material properties (aggregate modulus, Poisson's ratio, and permeability), the data for experimental surface reaction force and deformation were curve-fitted with use of solutions predicted with a linear biphasic finite element code in conjunction with optimization routines. The MG63 osteoblast-like cells had a compressive aggregate modulus of 2.05 +/- 0.89 kPa, which is two to three orders of magnitude smaller than that of articular cartilage, six to seven orders smaller than that of compact bone, and quite similar to that of leukocytes. The permeability was 1.18 +/- 0.65 x 10^-10 m^4/(N s), which is four to six orders of magnitude larger than that of cartilage. The Poisson's ratio was 0.37 +/- 0.03. The intrinsic material properties of the individual cell in this study can be useful in precisely quantifying mechanical stimuli acting on cells. This information is also needed for theories attempting to establish mechanotransductional relationships.
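For readers unfamiliar with the linear biphasic framework invoked here, its standard small-strain, isotropic statement (a generic summary, not the paper's own formulation) is

\[
\boldsymbol{\sigma} = -p\,\mathbf{I} + \lambda_s \operatorname{tr}(\boldsymbol{\varepsilon})\,\mathbf{I} + 2\mu_s \boldsymbol{\varepsilon}, \qquad \nabla\cdot\boldsymbol{\sigma} = \mathbf{0}, \qquad \nabla\cdot\mathbf{v}^{s} + \nabla\cdot\mathbf{w} = 0, \qquad \mathbf{w} = -k\,\nabla p,
\]

where p is the interstitial fluid pressure, \(\boldsymbol{\varepsilon}\) the solid-matrix strain, \(\mathbf{w}\) the relative fluid flux and k the hydraulic permeability. The aggregate modulus reported above is \(H_A = \lambda_s + 2\mu_s\), and the characteristic relaxation time of a loaded region of size a scales as \(a^2/(H_A k)\).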
---
paper_title: AFM indentation study of breast cancer cells
paper_content:
Mechanical properties of individual living cells are known to be closely related to the health and function of the human body. Here, atomic force microscopy (AFM) indentation using a micro-sized spherical probe was carried out to characterize the elasticity of benign (MCF-10A) and cancerous (MCF-7) human breast epithelial cells. AFM imaging and confocal fluorescence imaging were also used to investigate their corresponding sub-membrane cytoskeletal structures. Malignant (MCF-7) breast cells were found to have an apparent Young's modulus significantly lower (1.4-1.8 times) than that of their non-malignant (MCF-10A) counterparts at physiological temperature (37 degrees C), and their apparent Young's modulus increases with loading rate. Both confocal and AFM images showed a significant difference in the organization of their sub-membrane actin structures which directly contribute to their difference in cell elasticity. This change may have facilitated easy migration and invasion of malignant cells during metastasis.
---
paper_title: Determination of the Poisson's ratio of the cell: recovery properties of chondrocytes after release from complete micropipette aspiration.
paper_content:
Chondrocytes in articular cartilage are regularly subjected to compression and recovery due to dynamic loading of the joint. Previous studies have investigated the elastic and viscoelastic properties of chondrocytes using micropipette aspiration techniques, but in order to calculate cell properties, these studies have generally assumed that cells are incompressible with a Poisson's ratio of 0.5. The goal of this study was to measure the Poisson's ratio and recovery properties of the chondrocyte by combining theoretical modeling with experimental measures of complete cellular aspiration and release from a micropipette. Chondrocytes isolated from non-osteoarthritic and osteoarthritic cartilage were fully aspirated into a micropipette and allowed to reach mechanical equilibrium. Cells were then extruded from the micropipette and cell volume and morphology were measured throughout the experiment. This experimental procedure was simulated with finite element analysis, modeling the chondrocyte as either a compressible two-mode viscoelastic solid, or as a biphasic viscoelastic material. By fitting the experimental data to the theoretically predicted cell response, the Poisson's ratio and the viscoelastic recovery properties of the cell were determined. The Poisson's ratio of chondrocytes was found to be 0.38 for non-osteoarthritic cartilage and 0.36 for osteoarthritic chondrocytes (no significant difference). Osteoarthritic chondrocytes showed an increased recovery time following full aspiration. In contrast to previous assumptions, these findings suggest that chondrocytes are compressible, consistent with previous studies showing cell volume changes with compression of the extracellular matrix.
---
paper_title: Micropipette aspiration of living cells.
paper_content:
The mechanical behavior of living cells is studied with micropipette suction in which the surface of a cell is aspirated into a small glass tube while tracking the leading edge of its surface. Such edges can be tracked in a light microscope to an accuracy of +/-25 nm and suction pressures as small as 0.1-0.2 pN/microm2 can be imposed on the cell. Both soft cells, such as neutrophils and red cells, and more rigid cells, such as chondrocytes and endothelial cells, are studied with this technique. Interpretation of the measurements with basic continuum models leads to values for a cell's elastic and viscous properties. In particular, neutrophils are found to behave as a liquid drop with a cortical (surface) tension of about 30 pN/microm and a viscosity on the order of 100 Pa s. On the other hand, chondrocytes and endothelial cells behave as solids with an elastic modulus of the order of 500 pN/microm2 (0.5 kPa).
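Two standard reductions from this literature (quoted here as general background, not as equations taken from the abstract) make the measurement concrete. For a cell treated as an elastic half-space, the Young's modulus follows from the aspirated projection length L at suction pressure \(\Delta p\) as

\[
E = \frac{3\,\Phi_p\, a\,\Delta p}{2\pi L},
\]

with a the inner pipette radius and \(\Phi_p \approx 2.1\) a wall-geometry function; for liquid-drop-like cells such as neutrophils, the cortical tension T follows from the law of Laplace at the critical pressure at which the aspirated projection forms a hemisphere of radius a,

\[
\Delta p_{cr} = 2T\left(\frac{1}{a} - \frac{1}{R_c}\right),
\]

where \(R_c\) is the radius of the cell portion outside the pipette.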
---
paper_title: Measuring the Mechanical Properties of Living Cells Using Atomic Force Microscopy
paper_content:
Mechanical properties of cells and extracellular matrix (ECM) play important roles in many biological processes including stem cell differentiation, tumor formation, and wound healing. Changes in stiffness of cells and ECM are often signs of changes in cell physiology or diseases in tissues. Hence, cell stiffness is an index to evaluate the status of cell cultures. Among the multitude of methods applied to measure the stiffness of cells and tissues, micro-indentation using an Atomic Force Microscope (AFM) provides a way to reliably measure the stiffness of living cells. This method has been widely applied to characterize the micro-scale stiffness for a variety of materials ranging from metal surfaces to soft biological tissues and cells. The basic principle of this method is to indent a cell with an AFM tip of selected geometry and measure the applied force from the bending of the AFM cantilever. Fitting the force-indentation curve to the Hertz model for the corresponding tip geometry can give quantitative measurements of material stiffness. This paper demonstrates the procedure to characterize the stiffness of living cells using AFM. Key steps including the process of AFM calibration, force-curve acquisition, and data analysis using a MATLAB routine are demonstrated. Limitations of this method are also discussed.
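To make the fitting step concrete, the sketch below is a minimal Python analogue of the kind of routine described (illustrative only, not the authors' MATLAB code); it fits the spherical-tip Hertz relation \(F = \tfrac{4}{3}\,[E/(1-\nu^2)]\,\sqrt{R}\,\delta^{3/2}\) to the contact portion of a force-indentation curve, with the tip radius R and Poisson's ratio assumed as inputs.

```python
# Illustrative sketch (not the authors' MATLAB routine): Hertzian fit for a
# spherical tip.  delta (m) and force (N) are assumed to be the contact
# portion of an approach curve; R (m) and nu are assumed inputs.
import numpy as np
from scipy.optimize import curve_fit

def hertz_sphere(delta, E, R, nu):
    """Hertz force for a rigid sphere indenting an elastic half-space."""
    return (4.0 / 3.0) * (E / (1.0 - nu ** 2)) * np.sqrt(R) * delta ** 1.5

def fit_apparent_modulus(delta, force, R=2.5e-6, nu=0.5):
    model = lambda d, E: hertz_sphere(d, E, R, nu)
    (E_fit,), _ = curve_fit(model, delta, force, p0=[1.0e3])  # initial guess 1 kPa
    return E_fit                                              # apparent Young's modulus in Pa
```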
---
paper_title: Connections between single-cell biomechanics and human disease states: gastrointestinal cancer and malaria.
paper_content:
We investigate connections between single-cell mechanical properties and subcellular structural reorganization from biochemical factors in the context of two distinctly different human diseases: gastrointestinal tumor and malaria. Although the cell lineages and the biochemical links to pathogenesis are vastly different in these two cases, we compare and contrast chemomechanical pathways whereby intracellular structural rearrangements lead to global changes in mechanical deformability of the cell. This single-cell biomechanical response, in turn, seems to mediate cell mobility and thereby facilitates disease progression in situations where the elastic modulus increases or decreases due to membrane or cytoskeleton reorganization. We first present new experiments on elastic response and energy dissipation under repeated tensile loading of epithelial pancreatic cancer cells in force- or displacement-control. Energy dissipation from repeated stretching significantly increases and the cell's elastic modulus decreases after treatment of Panc-1 pancreatic cancer cells with sphingosylphosphorylcholine (SPC), a bioactive lipid that influences cancer metastasis. When the cell is treated instead with lysophosphatidic acid, which facilitates actin stress fiber formation, neither energy dissipation nor modulus is noticeably affected. Integrating recent studies with our new observations, we ascribe these trends to possible SPC-induced reorganization primarily of keratin network to perinuclear region of cell; the intermediate filament fraction of the cytoskeleton thus appears to dominate deformability of the epithelial cell. Possible consequences of these results to cell mobility and cancer metastasis are postulated. We then turn attention to progressive changes in mechanical properties of the human red blood cell (RBC) infected with the malaria parasite Plasmodium falciparum. We present, for the first time, continuous force-displacement curves obtained from in-vitro deformation of RBC with optical tweezers for different intracellular developmental stages of parasite. The shear modulus of RBC is found to increase up to 10-fold during parasite development, which is a noticeably greater effect than that from prior estimates. By integrating our new experimental results with published literature on deformability of Plasmodium-harbouring RBC, we examine the biochemical conditions mediating increases or decreases in modulus, and their implications for disease progression. Some general perspectives on connections among structure, single-cell mechanical properties and biological responses associated with pathogenic processes are also provided in the context of the two diseases considered in this work.
---
paper_title: Quantitative analysis of the viscoelastic properties of thin regions of fibroblasts using atomic force microscopy.
paper_content:
Viscoelasticity of the leading edge, i.e., the lamellipodium, of a cell is the key property for a deeper understanding of the active extension of a cell's leading edge. The fact that the lamellipodium of a cell is very thin (<1000 nm) imparts special challenges for accurate measurements of its viscoelastic behavior. It requires addressing strong substrate effects and comparatively high stresses (>1 kPa) on thin samples. We present the method for an atomic force microscopy-based microrheology that allows us to fully quantify the viscoelastic constants (elastic storage modulus, viscous loss modulus, and the Poisson ratio) of thin areas of a cell (<1000 nm) as well as those of thick areas. We account for substrate effects by applying two different models-a model for well-adhered regions (Chen model) and a model for nonadhered regions (Tu model). This method also provides detailed information about the adhered regions of a cell. The very thin regions relatively near the edge of NIH 3T3 fibroblasts can be identified by the Chen model as strongly adherent with an elastic strength of approximately 1.6 +/- 0.2 kPa and with an experimentally determined Poisson ratio of approximately 0.4 to 0.5. Further from the edge of these cells, the adherence decreases, and the Tu model is effective in evaluating its elastic strength ( approximately 0.6 +/- 0.1 kPa). Thus, our AFM-based microrheology allows us to correlate two key parameters of cell motility by relating elastic strength and the Poisson ratio to the adhesive state of a cell. This frequency-dependent measurement allows for the decomposition of the elastic modulus into loss and storage modulus. Applying this decomposition and Tu's and Chen's finite depth models allow us to obtain viscoelastic signatures in a frequency range from 50 to 300 Hz, showing a rubber plateau-like behavior.
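As a generic complement (a lock-in style reduction, not the authors' analysis pipeline), the storage- and loss-like components at a drive frequency f can be obtained from the complex ratio of the force and indentation oscillations; converting that complex stiffness into G' and G'' requires the tip-geometry/contact factor of the chosen model, which the sketch below leaves symbolic.

```python
# Illustrative sketch (not the authors' analysis): extract in-phase and
# out-of-phase components of an oscillatory indentation measurement.
# t (s), force (N) and indent (m) are assumed time series driven at f (Hz),
# sampled over an integer number of periods; geom_factor is the assumed
# contact-model factor converting complex stiffness into a modulus.
import numpy as np

def demodulate(t, x, f):
    """Complex amplitude of x at frequency f (simple Fourier/lock-in pick)."""
    return 2.0 * np.mean(x * np.exp(-2j * np.pi * f * t))

def storage_and_loss(t, force, indent, f, geom_factor):
    k_star = demodulate(t, force, f) / demodulate(t, indent, f)  # complex stiffness
    return geom_factor * k_star.real, geom_factor * k_star.imag  # storage-like, loss-like
```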
---
paper_title: Nanomechanical characterization of tissue engineered bone grown on titanium alloy in vitro
paper_content:
Intensive work has been performed on the characterization of the mechanical properties of mineralised tissues formed in vivo. However, the mechanical properties of bone-like tissue formed in vitro have rarely been characterised. Most research has either focused on compact cortical bone or cancellous bone, whilst leaving woven bone unaddressed. In this study, bone-like mineralised matrix was produced by osteoblasts cultured in vitro on the surface of titanium alloys. The volume of this tissue-engineered bone is so small that the conventional tensile tests or bending tests are implausible. Therefore, nanoindentation techniques which allow the characterization of the test material from the nanoscale to the microscale were adopted. These reveal the apparent elastic modulus and hardness of the calcospherulite crystals (a representative element for woven bone) are 2.35 ± 0.73 and 0.41 ± 0.15 GPa, respectively. The nanoscale viscoelasticity of such woven bone was further assessed by dynamic indentation analysis.
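For context, such nanoindentation data are commonly reduced with the Oliver-Pharr relations (standard background, not formulas quoted from the abstract): with unloading stiffness S = dP/dh and projected contact area \(A_c\),

\[
E_r = \frac{\sqrt{\pi}}{2\beta}\,\frac{S}{\sqrt{A_c}}, \qquad H = \frac{P_{\max}}{A_c}, \qquad \frac{1}{E_r} = \frac{1-\nu^2}{E} + \frac{1-\nu_i^2}{E_i},
\]

where \(\beta\) is a tip-shape correction close to unity and the subscript i refers to the indenter.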
---
paper_title: A compact torsional reference device for easy, accurate and traceable AFM piconewton calibration.
paper_content:
The invention of the atomic force microscope led directly to the possibility of carrying out nanomechanical tests with forces below the nanonewton and the ability to test nanomaterials and single molecules. As a result there is a pressing need for accurate and traceable force calibration of AFM measurements that is not satisfactorily met by existing calibration methods. Here we present a force reference device that makes it possible to calibrate the normal stiffness of typical AFM microcantilevers down to 90 pN nm^-1 with very high accuracy and repeatability and describe how it can be calibrated traceably to the International System of Units via the ampere and the metre, avoiding in that way the difficulties associated with traceability to the SI kilogram. We estimate the total uncertainty associated with cantilever calibration including traceability to be better than 3.5%, thus still offering room for future improvement.
---
paper_title: Introduction to Polymers
paper_content:
CONCEPTS, NOMENCLATURE AND SYNTHESIS OF POLYMERS Concepts and Nomenclature The Origins of Polymer Science and the Polymer Industry Basic Definitions and Nomenclature Molar Mass and Degree of Polymerization Principles of Polymerization Introduction Classification of Polymerization Reactions Monomer Functionality and Polymer Skeletal Structure Functional Group Reactivity and Molecular Size: The Principle of Equal Reactivity Step Polymerization Introduction Linear Step Polymerization Non-Linear Step Polymerization Radical Polymerization Introduction to Radical Polymerization The Chemistry of Conventional Free-Radical Polymerization Kinetics of Conventional Free-Radical Polymerization Free-Radical Polymerization Processes Reversible-Deactivation (`Living') Radical Polymerizations Non-Linear Radical Polymerizations Ionic Polymerization Introduction to Ionic Polymerization Cationic Polymerization Anionic Polymerization Group-Transfer Polymerization Stereochemistry and Coordination Polymerization Introduction to Stereochemistry of Polymerization Tacticity of Polymers Geometric Isomerism in Polymers Prepared from Conjugated Dienes Ziegler-Natta Coordination Polymerization Metallocene Coordination Polymerization Ring-Opening Polymerization Introduction to Ring-Opening Polymerization Cationic Ring-Opening Polymerization Anionic Ring-Opening Polymerization Free-Radical Ring-Opening Polymerization Ring-Opening Metathesis Polymerization Specialized Methods of Polymer Synthesis Introduction Solid-State Topochemical Polymerization Polymerization by Oxidative Coupling Precursor Routes to Intractable Polymers Supramolecular Polymerization (Polyassociation) Copolymerization Introduction Step Copolymerization Chain Copolymerization Block Copolymer Synthesis Graft Copolymer Synthesis CHARACTERIZATION OF POLYMERS Theoretical Description of Polymers in Solution Introduction Thermodynamics of Polymer Solutions Chain Dimensions Frictional Properties of Polymer Molecules in Dilute Solution Number-Average Molar Mass Introduction to Measurements of Number-Average Molar Mass Membrane Osmometry Vapour Pressure Osmometry Ebulliometry and Cryoscopy End-Group Analysis Effects of Low Molar Mass Impurities upon Mn Scattering Methods Introduction Static Light Scattering Dynamic Light Scattering Small-Angle X-Ray and Neutron Scattering Frictional Properties of Polymers in Solution Introduction Dilute Solution Viscometry Ultracentrifugation Molar Mass Distribution Introduction Fractionation Gel Permeation Chromatography Field-Flow Fractionation Mass Spectroscopy Chemical Composition and Molecular Microstructure Introduction Principles of Spectroscopy Ultraviolet and Visible Light Absorption Spectroscopy Infrared Spectroscopy Raman Spectroscopy Nuclear Magnetic Resonance Spectroscopy Mass Spectroscopy PHASE STRUCTURE AND MORPHOLOGY OF BULK POLYMERS The Amorphous State Introduction The Glass Transition Factors Controlling the Tg Macromolecular Dynamics The Crystalline State Introduction Determination of Crystal Structure Polymer Single Crystals Semi-Crystalline Polymers Liquid Crystalline Polymers Defects in Crystalline Polymers Crystallization Melting Multicomponent Polymer Systems Introduction Polymer Blends Block Copolymers PROPERTIES OF BULK POLYMERS Elastic Deformation Introduction Elastic Deformation Elastic Deformation of Polymers Viscoelasticity Introduction Viscoelastic Mechanical Models Boltzmann Superposition Principle Dynamic Mechanical Testing Frequency Dependence of Viscoelastic Behaviour Transitions and Polymer 
Structure Time-Temperature Superposition Effect of Entanglements Non-Linear Viscoelasticity Elastomers Introduction Thermodynamics of Elastomer Deformation Statistical Theory of Elastomer Deformation Stress-Strain Behaviour of Elastomers Factors Affecting Mechanical Behaviour Yield and Crazing Introduction Phenomenology of Yield Yield Criteria Deformation Mechanisms Crazing Fracture and Toughening Introduction Fundamentals of Fracture Mechanics of Fracture Fracture Phenomena Toughened Polymers Polymer Composites Introduction to Composite Materials Matrix Materials Types of Reinforcement Composite Composition Particulate Reinforcement Fibre Reinforcement Nanocomposites Electrical Properties Introduction to Electrical Properties Dielectric Properties Conduction in Polymers Polymer Electronics Answers to Problems Index Problems and Further Reading appear at the end of each chapter.
---
paper_title: Cytoindentation for obtaining cell biomechanical properties.
paper_content:
A novel biomechanical testing methodology was developed to obtain the intrinsic material properties of an individual cell attached to a rigid substrate. With use of a newly designed cell-indentation apparatus (cytoindenter), displacement-controlled indentation tests were conducted on the surface of individual MG63 cells and the corresponding surface reaction force of each cell was measured. The cells were modeled with a linear elasticity solution of half-space indentation and the linear biphasic theory on the assumption that the viscoelastic behavior of each cell was due to the interaction between the solid cytoskeletal matrix and the cytoplasmic fluid. To obtain the intrinsic material properties (aggregate modulus, Poisson's ratio, and permeability), the data for experimental surface reaction force and deformation were curve-fitted with use of solutions predicted with a linear biphasic finite element code in conjunction with optimization routines. The MG63 osteoblast-like cells had a compressive aggregate modulus of 2.05 +/- 0.89 kPa, which is two to three orders of magnitude smaller than that of articular cartilage, six to seven orders smaller than that of compact bone, and quite similar to that of leukocytes. The permeability was 1.18 +/- 0.65 x 10^-10 m^4/(N s), which is four to six orders of magnitude larger than that of cartilage. The Poisson's ratio was 0.37 +/- 0.03. The intrinsic material properties of the individual cell in this study can be useful in precisely quantifying mechanical stimuli acting on cells. This information is also needed for theories attempting to establish mechanotransductional relationships.
---
paper_title: Plastic indentation in metals with cones
paper_content:
Quasi-static indentations with diamond cones have been made in specimens of copper and mild steel work-hardened by varying amounts. The relationship between the hardness so found, the cone angle, and the yield stress is given. The results agree well with the indentation theory of Lockett (1968a) based on classical slip line analysis. This shows that for a work-hardened material the hardness increases with increasing cone angle for cone angles above 105°. Lockett's theory is degenerate below 105° and the experiments here also show a change in deformation mode in this region. Although this agreement with theory appears satisfactory, the deformation process appears to be substantially different from that implied in Lockett's analysis. For large angle cones the deformation resembles a ‘radial’ compression similar to that described by Samuels and Mulhearn (1957): for small cone angles the deformation resembles more clearly the classical slip-line form given by Lockett, though this is precisely the region where the theory becomes degenerate. The effective strain produced by each cone during the indentation process itself is determined for each cone and it is shown that it is possible to construct the stress-strain curve of a given material from a series of cone indentations.
---
paper_title: On the factors affecting the critical indenter penetration for measurement of coating hardness
paper_content:
The nanoindentation test is the only viable approach to assess the properties of very thin coatings (
---
paper_title: On the relationship between plastic zone radius and maximum depth during nanoindentation
paper_content:
The relationship between plastic zone size and plastic depth during indentation has been studied by a number of workers and an expression relating the plastic zone radius to the residual indentation depth (assumed to be the plastic depth) was developed by Lawn et al. [B.R. Lawn, A.G. Evans and D.B. Marshall, J. Am. Ceram. Soc. 63 (1980)198.] based on microindentation testing. In this study, the relationship between the plastic zone radius and residual indentation depth was examined using finite element analysis for conical indentation in elastic-perfectly-plastic bulk materials. The simulations show that the Lawn method overestimates the plastic zone size for different materials with a wide range of Young's modulus over hardness ratio and for indenters with different geometries and it does not consider tip rounding effects. Therefore, an analytical expression is outlined which agrees well with finite element data. For practical application, the relationship between the radius of the plastic deformation zone and the maximum penetration depth is developed here which has been used to modify the energy-based model developed at Newcastle University to predict the hardness and Young's modulus of coated systems. It is found that the new relationship is easy to apply and predictions of the hardness and Young's modulus of coated glass show good agreement with experimental results.
---
paper_title: Cell deformation behavior in mechanically loaded rabbit articular cartilage 4 weeks after anterior cruciate ligament transection
paper_content:
OBJECTIVE ::: Chondrocyte stresses and strains in articular cartilage are known to modulate tissue mechanobiology. Cell deformation behavior in cartilage under mechanical loading is not known at the earliest stages of osteoarthritis. Thus, the aim of this study was to investigate the effect of mechanical loading on volume and morphology of chondrocytes in the superficial tissue of osteoarthritic cartilage obtained from anterior cruciate ligament transected (ACLT) rabbit knee joints, 4 weeks after intervention. ::: ::: ::: METHODS ::: A unique custom-made microscopy indentation system with dual-photon microscope was used to apply controlled 2 MPa force-relaxation loading on patellar cartilage surfaces. Volume and morphology of chondrocytes were analyzed before and after loading. Also global and local tissue strains were calculated. Collagen content, collagen orientation and proteoglycan content were quantified with Fourier transform infrared microspectroscopy, polarized light microscopy and digital densitometry, respectively. ::: ::: ::: RESULTS ::: Following the mechanical loading, the volume of chondrocytes in the superficial tissue increased significantly in ACLT cartilage by 24% (95% confidence interval (CI) 17.2-31.5, P < 0.001), while it reduced significantly in contralateral group tissue by -5.3% (95% CI -8.1 to -2.5, P = 0.003). Collagen content in ACLT and contralateral cartilage were similar. PG content was reduced and collagen orientation angle was increased in the superficial tissue of ACLT cartilage compared to the contralateral cartilage. ::: ::: ::: CONCLUSIONS ::: We found the novel result that chondrocyte deformation behavior in the superficial tissue of rabbit articular cartilage is altered already at 4 weeks after ACLT, likely because of changes in collagen fibril orientation and a reduction in PG content.
---
paper_title: Confocal microscopy indentation system for studying in situ chondrocyte mechanics
paper_content:
Chondrocytes synthesize extracellular matrix molecules, thus they are essential for the development, adaptation and maintenance of articular cartilage. Furthermore, it is well accepted that the biosynthetic activity of chondrocytes is influenced by the mechanical environment. Therefore, their response to mechanical stimuli has been studied extensively. Much of the knowledge in this area of research has been derived from testing of isolated cells, cartilage explants, and fixed cartilage specimens: systems that differ in important aspects from chondrocytes embedded in articular cartilage and observed during loading conditions. In this study, current model systems have been improved by working with the intact cartilage in real time. An indentation system was designed on a confocal microscope that allows for simultaneous loading and observation of chondrocytes in their native environment. Cell mechanics were then measured under precisely controlled loading conditions. The indentation system is based on a light transmissible cylindrical glass indentor of 0.17 mm thickness and 1.64 mm diameter that is aligned along the focal axis of the microscope and allows for real time observation of live cells in their native environment. The system can be used to study cell deformation and biological responses, such as calcium sparks, while applying prescribed loads on the cartilage surface. It can also provide novel information on the relationship between cell loading and cartilage adaptive/degenerative processes in the intact tissue.
---
paper_title: Biomechanics of single zonal chondrocytes
paper_content:
Articular cartilage has a distinct zonal architecture, and previous work has shown that chondrocytes from different zones exhibit variations in gene expression and biosynthesis. In this study, the material properties of single chondrocytes from the superficial and middle/deep zones of bovine distal metatarsal articular cartilage were determined using unconfined compression and digital videocapture. To determine the viscoelastic properties of zonal chondrocytes, unconfined creep compression experiments were performed and the resulting creep curves of individual cells were fit using a standard linear viscoelastic solid model. In the model, a fixed value of the Poisson's ratio was used, determined optically from direct compression of middle/deep chondrocytes. The two approaches used in this study yielded the following average material properties of single chondrocytes: Poisson's ratio of 0.26±0.08, instantaneous modulus of 1.06±0.82 kPa, relaxed modulus of 0.78±0.58 kPa, and apparent viscosity of 4.08±7.20 kPa s. Superficial zone chondrocytes were found to be significantly stiffer than middle/deep zone chondrocytes. Attachment time did not affect the stiffness of the cells. The zonal variation in viscoelastic properties may result from the distinct mechanical environments experienced by the cells in vivo. Identifying intrinsic differences in the biomechanics of superficial and middle/deep zone chondrocytes is an important component in understanding how biomechanics influence articular cartilage health and disease.
---
paper_title: Mechanical characterization of differentiated human embryonic stem cells
paper_content:
Human embryonic stem cells (hESCs) possess an immense potential in a variety of regenerative applications. A firm understanding of hESC mechanics, on the single cell level, may provide great insight into the role of biophysical forces in the maintenance of cellular phenotype and elucidate mechanical cues promoting differentiation along various mesenchymal lineages. Moreover, cellular biomechanics can provide an additional tool for characterizing stem cells as they follow certain differentiation lineages, and thus may aid in identifying differentiated hESCs, which are most suitable for tissue engineering. This study examined the viscoelastic properties of single undifferentiated hESCs, chondrogenically differentiated hESC subpopulations, mesenchymal stem cells (MSCs), and articular chondrocytes (ACs). hESC chondrogenesis was induced using either transforming growth factor-β1 (TGF-β1) or knock out serum replacer as differentiation agents, and the resulting cell populations were separated based on density. All cell groups were mechanically tested using unconfined creep cytocompression. Analyses of subpopulations from all differentiation regimens resulted in a spectrum of mechanical and morphological properties spanning the range of hESCs to MSCs to ACs. Density separation was further successful in isolating cellular subpopulations with distinct mechanical properties. The instantaneous and relaxed moduli of subpopulations from TGF-β1 differentiation regimen were statistically greater than those of undifferentiated hESCs. In addition, two subpopulations from the TGF-β1 group were identified, which were not statistically different from native articular chondrocytes in their instantaneous and relaxed moduli, as well as their apparent viscosity. Identification of a differentiated hESC subpopulation with similar mechanical properties as native chondrocytes may provide an excellent cell source for tissue engineering applications. These cells will need to withstand any mechanical stimulation regimen employed to augment the mechanical and biochemical characteristics of the neotissue. Density separation was effective at purifying distinct populations of cells. A differentiated hESC subpopulation was identified with both similar mechanical and morphological characteristics as ACs. Future research may utilize this cell source in cartilage regeneration efforts.
---
paper_title: TENSEGRITY: THE ARCHITECTURAL BASIS OF CELLULAR MECHANOTRANSDUCTION
paper_content:
Physical forces of gravity, hemodynamic stresses, and movement play a critical role in tissue development. Yet, little is known about how cells convert these mechanical signals into a chemical response. This review attempts to place the potential molecular mediators of mechanotransduction (e.g. stretch-sensitive ion channels, signaling molecules, cytoskeleton, integrins) within the context of the structural complexity of living cells. The model presented relies on recent experimental findings, which suggests that cells use tensegrity architecture for their organization. Tensegrity predicts that cells are hard-wired to respond immediately to mechanical stresses transmitted over cell surface receptors that physically couple the cytoskeleton to extracellular matrix (e.g. integrins) or to other cells (cadherins, selectins, CAMs). Many signal transducing molecules that are activated by cell binding to growth factors and extracellular matrix associate with cytoskeletal scaffolds within focal adhesion complexes. Mechanical signals, therefore, may be integrated with other environmental signals and transduced into a biochemical response through force-dependent changes in scaffold geometry or molecular mechanics. Tensegrity also provides a mechanism to focus mechanical energy on molecular transducers and to orchestrate and tune the cellular response.
---
paper_title: Cell shape, cytoskeletal mechanics, and cell cycle control in angiogenesis
paper_content:
Capillary endothelial cells can be switched between growth and differentiation by altering cell-extracellular matrix interactions and thereby modulating cell shape. Studies were carried out to determine when cell shape exerts its growth-regulatory influence during cell cycle progression and to explore the role of cytoskeletal structure and mechanics in this control mechanism. When G0-synchronized cells were cultured in basic fibroblast growth factor (FGF)-containing defined medium on dishes coated with increasing densities of fibronectin or a synthetic integrin ligand (RGD-containing peptide), cell spreading, nuclear extension, and DNA synthesis all increased in parallel. To determine the minimum time cells must be adherent and spread on extracellular matrix (ECM) to gain entry into S phase, cells were removed with trypsin or induced to retract using cytochalasin D at different times after plating. Both approaches revealed that cells must remain extended for approximately 12–15 h and hence, most of G1, in order to enter S phase. After this restriction point was passed, normally ‘anchorage-dependent’ endothelial cells turned on DNA synthesis even when round and in suspension. The importance of actin-containing microfilaments in shape-dependent growth control was confirmed by culturing cells in the presence of cytochalasin D (25–1000 ng ml−1): dose-dependent inhibition of cell spreading, nuclear extension, and DNA synthesis resulted. In contrast, induction of microtubule disassembly using nocodazole had little effect on cell or nuclear spreading and only partially inhibited DNA synthesis. Interestingly, combination of nocodazole with a suboptimal dose of cytochalasin D (100 ng ml−1) resulted in potent inhibition of both spreading and growth, suggesting that microtubules are redundant structural elements which can provide critical load-bearing functions when microfilaments are partially compromised. Similar synergism between nocodazole and cytochalasin D was observed when cytoskeletal stiffness was measured directly in living cells using magnetic twisting cytometry. These results emphasize the importance of matrix-dependent changes in cell and nuclear shape as well as higher order structural interactions between different cytoskeletal filament systems for control of capillary cell growth during angiogenesis.
---
paper_title: The mechanical environment of the chondrocyte: a biphasic finite element model of cell-matrix interactions in articular cartilage.
paper_content:
Mechanical compression of the cartilage extracellular matrix has a significant effect on the metabolic activity of the chondrocytes. However, the relationship between the stress–strain and fluid-flow fields at the macroscopic “tissue” level and those at the microscopic “cellular” level is not fully understood. Based on the existing experimental data on the deformation behavior and biomechanical properties of articular cartilage and chondrocytes, a multi-scale biphasic finite element model was developed of the chondrocyte as a spheroidal inclusion embedded within the extracellular matrix of a cartilage explant. The mechanical environment at the cellular level was found to be time-varying and inhomogeneous, and the large difference (∼3 orders of magnitude) in the elastic properties of the chondrocyte and those of the extracellular matrix results in stress concentrations at the cell–matrix border and a nearly two-fold increase in strain and dilatation (volume change) at the cellular level, as compared to the macroscopic level. The presence of a narrow “pericellular matrix” with different properties than that of the chondrocyte or extracellular matrix significantly altered the principal stress and strain magnitudes within the chondrocyte, suggesting a functional biomechanical role for the pericellular matrix. These findings suggest that even under simple compressive loading conditions, chondrocytes are subjected to a complex local mechanical environment consisting of tension, compression, shear, and fluid pressure. Knowledge of the local stress and strain fields in the extracellular matrix is an important step in the interpretation of studies of mechanical signal transduction in cartilage explant culture models.
---
paper_title: Determination of the Poisson's ratio of the cell: recovery properties of chondrocytes after release from complete micropipette aspiration.
paper_content:
Chondrocytes in articular cartilage are regularly subjected to compression and recovery due to dynamic loading of the joint. Previous studies have investigated the elastic and viscoelastic properties of chondrocytes using micropipette aspiration techniques, but in order to calculate cell properties, these studies have generally assumed that cells are incompressible with a Poisson's ratio of 0.5. The goal of this study was to measure the Poisson's ratio and recovery properties of the chondrocyte by combining theoretical modeling with experimental measures of complete cellular aspiration and release from a micropipette. Chondrocytes isolated from non-osteoarthritic and osteoarthritic cartilage were fully aspirated into a micropipette and allowed to reach mechanical equilibrium. Cells were then extruded from the micropipette and cell volume and morphology were measured throughout the experiment. This experimental procedure was simulated with finite element analysis, modeling the chondrocyte as either a compressible two-mode viscoelastic solid, or as a biphasic viscoelastic material. By fitting the experimental data to the theoretically predicted cell response, the Poisson's ratio and the viscoelastic recovery properties of the cell were determined. The Poisson's ratio of chondrocytes was found to be 0.38 for non-osteoarthritic cartilage and 0.36 for osteoarthritic chondrocytes (no significant difference). Osteoarthritic chondrocytes showed an increased recovery time following full aspiration. In contrast to previous assumptions, these findings suggest that chondrocytes are compressible, consistent with previous studies showing cell volume changes with compression of the extracellular matrix.
---
paper_title: Finite element modelling of nanoindentation based methods for mechanical properties of cells.
paper_content:
The viscoelastic properties of living cells are important for quantifying the biomechanical effects of drug treatment, diseases and aging. Nanoindentation techniques have proven effective to characterize the viscoelastic properties of living cells. However, most studies utilized the Hertz contact model and assumed the Heaviside step loading, which does not represent real tests. Therefore, new mathematical models have been developed to determine the viscoelastic properties of the cells for nanoindentation tests. The finite element method was used to determine the empirical correction parameter in the mathematical model to account for large deformation, in which case the combined effect of finite lateral and vertical dimensions of the cell is essential. The viscoelastic integral operator was used to account for the realistic deformation rate. The predictive model captures the mechanical responses of the cells observed in a previous experimental study. This work has demonstrated that the new model consistently predicts viscoelastic properties for both ramping and stress relaxation periods, which cannot be achieved by the commonly used models. Utilization of this new model can enrich the experimental cell mechanics in interpretation of nanoindentation of cells.
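One common way to relax the Heaviside-step assumption mentioned here (a generic Lee-Radok-type statement, not necessarily the exact operator used by the authors) is to replace the elastic constant in the spherical-tip Hertz relation by a hereditary integral over the relaxation modulus E(t):

\[
F(t) = \frac{4\sqrt{R}}{3\,(1-\nu^{2})} \int_{0}^{t} E(t-s)\,\frac{\partial\, \delta^{3/2}(s)}{\partial s}\, \mathrm{d}s,
\]

so that a finite-rate ramp followed by a hold can be fitted with a single set of viscoelastic parameters over both the loading and the relaxation period.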
---
paper_title: Plastic indentation in metals with cones
paper_content:
Quasi-static indentations with diamond cones have been made in specimens of copper and mild steel work-hardened by varying amounts. The relationship between the hardness so found, the cone angle, and the yield stress is given. The results agree well with the indentation theory of Lockett (1968a) based on classical slip line analysis. This shows that for a work-hardened material the hardness increases with increasing cone angle for cone angles above 105°. Lockett's theory is degenerate below 105° and the experiments here also show a change in deformation mode in this region. Although this agreement with theory appears satisfactory, the deformation process appears to be substantially different from that implied in Lockett's analysis. For large angle cones the deformation resembles a ‘radial’ compression similar to that described by Samuels and Mulhearn (1957): for small cone angles the deformation resembles more clearly the classical slip-line form given by Lockett, though this is precisely the region where the theory becomes degenerate. The effective strain produced by each cone during the indentation process itself is determined for each cone and it is shown that it is possible to construct the stress-strain curve of a given material from a series of cone indentations.
---
paper_title: How can cells sense the elasticity of a substrate? An analysis using a cell tensegrity model
paper_content:
A eukaryotic cell attaches and spreads on substrates, whether it is the extracellular matrix naturally produced by the cell itself, or artificial materials, such as tissue-engineered scaffolds. Attachment and spreading require the cell to apply forces in the nN range to the substrate via adhesion sites, and these forces are balanced by the elastic response of the substrate. This mechanical interaction is one determinant of cell morphology and, ultimately, cell phenotype. In this paper we use a finite element model of a cell, with a tensegrity structure to model the cytoskeleton of actin filaments and microtubules, to explore the way cells sense the stiffness of the substrate and thereby adapt to it. To support the computational results, an analytical 1D model is developed for comparison. We find that (i) the tensegrity hypothesis of the cytoskeleton is sufficient to explain the matrix-elasticity sensing, (ii) cell sensitivity is not constant but has a bell-shaped distribution over the physiological matrix-elasticity range, and (iii) the position of the sensitivity peak over the matrix-elasticity range depends on the cytoskeletal structure and in particular on the F-actin organisation. Our model suggests that F-actin reorganisation observed in mesenchymal stem cells (MSCs) in response to change of matrix elasticity is a structural-remodelling process that shifts the sensitivity peak towards the new value of matrix elasticity. This finding discloses a potential regulatory role of scaffold stiffness for cell differentiation.
---
paper_title: A three-dimensional finite element model of an adherent eukaryotic cell.
paper_content:
Mechanical stimulation is known to cause alterations in the behaviour of cells adhering to a substrate. The mechanisms by which forces are transduced into biological responses within the cell remain largely unknown. Since cellular deformation is likely involved, further understanding of the biomechanical origins of alterations in cellular response can be aided by the use of computational models in describing cellular structural behaviour and in determining cellular deformation due to imposed loads of various magnitudes. In this paper, a finite element modelling approach that can describe the biomechanical behaviour of adherent eukaryotic cells is presented. It fuses two previous modelling approaches by incorporating, in an idealised geometry, all cellular components considered structurally significant, i.e. prestressed cytoskeleton, cytoplasm, nucleus and membrane components. The aim is to determine if we can use this model to describe the non-linear structural behaviour of an adherent cell and to determine the contribution of the various cellular components to cellular stability. Results obtained by applying forces (in the picoNewton range) to the model membrane nodes suggest a key role for the cytoskeleton in determining cellular stiffness. The model captures non-linear structural behaviours such as strain hardening and prestress effects (in the region of receptor sites), and variable compliance along the cell surface. The role of the cytoskeleton in stiffening a cell during the process of cell spreading is investigated by applying forces to five increasingly spread cell geometries. Parameter studies reveal that material properties of the cytoplasm (elasticity and compressibility) also have a large influence on cellular stiffness. The computational model of a single cell developed here is proposed as one that is sufficiently complex to capture the non-linear behaviours of the cell response to forces whilst not being so complex that the parameters cannot be specified. The model could be very useful in computing cellular structural behaviour in response to various in vitro mechanical stimuli (e.g. fluid flow, substrate strain), or for use in algorithms that attempt to simulate mechanobiological processes.
---
paper_title: Cellular tensegrity: exploring how mechanical changes in the cytoskeleton regulate cell growth, migration, and tissue pattern during morphogenesis.
paper_content:
Publisher Summary This chapter focuses on the role of the intracellular cytoskeleton (CSK) in cell shape determination and tissue morphogenesis. The role of mechanical changes in the CSK during embryological development is reviewed. The chapter focuses on the mechanism by which mechanical forces are transmitted across the cell surface and through the CSK, as well as how they regulate cell shape. An analysis of the biomechanical basis of cell shape control addresses two central questions: (1) how do changes in mechanical forces alter CSK organization, and (2) how do changes in CSK structure regulate cell growth and function. The results from recent studies showing that the CSK can respond directly to mechanical stress are also reviewed. The particular type of mechanical response that living cells exhibit is consistent with a theory of CSK architecture that is based on tensional integrity and is known as “tensegrity”. Inherent to the tensegrity model is an efficient mechanism for integrating changes in structure and function at the tissue, cell, nuclear, and molecular levels. The chapter explores the possibility that CSK tensegrity may also provide a mechanical basis for cell locomotion as well as a structural mechanism for coupling mechanical and chemical signaling pathways inside the cell.
---
paper_title: Cell mechanics, structure, and function are regulated by the stiffness of the three-dimensional microenvironment.
paper_content:
This study adopts a combined computational and experimental approach to determine the mechanical, structural, and metabolic properties of isolated chondrocytes cultured within three-dimensional hydrogels. A series of linear elastic and hyperelastic finite-element models demonstrated that chondrocytes cultured for 24 h in gels for which the relaxation modulus is <5 kPa exhibit a cellular Young's modulus of ∼5 kPa. This is notably greater than that reported for isolated chondrocytes in suspension. The increase in cell modulus occurs over a 24-h period and is associated with an increase in the organization of the cortical actin cytoskeleton, which is known to regulate cell mechanics. However, there was a reduction in chromatin condensation, suggesting that changes in the nucleus mechanics may not be involved. Comparison of cells in 1% and 3% agarose showed that cells in the stiffer gels rapidly develop a higher Young's modulus of ∼20 kPa, sixfold greater than that observed in the softer gels. This was associated with higher levels of actin organization and chromatin condensation, but only after 24 h in culture. Further studies revealed that cells in stiffer gels synthesize less extracellular matrix over a 28-day culture period. Hence, this study demonstrates that the properties of the three-dimensional microenvironment regulate the mechanical, structural, and metabolic properties of living cells.
---
paper_title: AFM indentation study of breast cancer cells
paper_content:
Mechanical properties of individual living cells are known to be closely related to the health and function of the human body. Here, atomic force microscopy (AFM) indentation using a micro-sized spherical probe was carried out to characterize the elasticity of benign (MCF-10A) and cancerous (MCF-7) human breast epithelial cells. AFM imaging and confocal fluorescence imaging were also used to investigate their corresponding sub-membrane cytoskeletal structures. Malignant (MCF-7) breast cells were found to have an apparent Young's modulus significantly lower (1.4-1.8 times) than that of their non-malignant (MCF-10A) counterparts at physiological temperature (37 degrees C), and their apparent Young's modulus increases with loading rate. Both confocal and AFM images showed a significant difference in the organization of their sub-membrane actin structures, which directly contributes to their difference in cell elasticity. This change may have facilitated easy migration and invasion of malignant cells during metastasis.
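As an illustration of how an apparent Young's modulus is usually extracted from spherical-probe AFM indentation data of this kind, the sketch below fits the Hertz contact relation F = (4/3)·E/(1 − ν²)·√R·δ^(3/2) to a force–indentation curve. The probe radius, the assumed Poisson's ratio and the synthetic data are illustrative placeholders, not parameters or measurements from the cited study.

    # Minimal sketch: apparent Young's modulus from spherical-probe AFM indentation
    # via the Hertz contact model (all values illustrative).
    import numpy as np
    from scipy.optimize import curve_fit

    R = 2.5e-6    # probe radius [m] (assumed)
    nu = 0.5      # Poisson's ratio; cells are often taken as nearly incompressible (assumed)

    def hertz_sphere(delta, E):
        """Hertz force [N] at indentation depth delta [m] for Young's modulus E [Pa]."""
        return (4.0 / 3.0) * (E / (1.0 - nu**2)) * np.sqrt(R) * delta**1.5

    # Synthetic approach curve standing in for a measured one.
    delta = np.linspace(0.0, 1.0e-6, 50)                                      # indentation [m]
    force = hertz_sphere(delta, 1.5e3) + 2e-11 * np.random.randn(delta.size)  # ~1.5 kPa + noise

    (E_fit,), _ = curve_fit(hertz_sphere, delta, force, p0=[1.0e3])
    print(f"apparent Young's modulus ~ {E_fit:.0f} Pa")

Fitting only the approach portion of the curve and restricting the fit to shallow indentations (small δ/R) are the usual precautions when this model is applied to cells.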
---
paper_title: Biomechanical properties of single chondrocytes and chondrons determined by micromanipulation and finite-element modelling
paper_content:
A chondrocyte and its surrounding pericellular matrix (PCM) are defined as a chondron. Single chondrocytes and chondrons isolated from bovine articular cartilage were compressed by micromanipulation between two parallel surfaces in order to investigate their biomechanical properties and to discover the mechanical significance of the PCM. The force imposed on the cells was measured directly during compression to various deformations and then holding. When the nominal strain at the end of compression was 50 per cent, force relaxation showed that the cells were viscoelastic, but this viscoelasticity was generally insignificant when the nominal strain was 30 per cent or lower. The viscoelastic behaviour might be due to the mechanical response of the cell cytoskeleton and/or nucleus at higher deformations. A finite-element analysis was applied to simulate the experimental force-displacement/time data and to obtain mechanical property parameters of the chondrocytes and chondrons. Because of the large strains in the cells, a nonlinear elastic model was used for simulations of compression to 30 per cent nominal strain and a nonlinear viscoelastic model for 50 per cent. The elastic model yielded a Young’s modulus of 14 ± 1 kPa (mean ± s.e.) for chondrocytes and 19 ± 2 kPa for chondrons, respectively. The viscoelastic model generated an instantaneous elastic modulus of 21 ± 3 and 27 ± 4 kPa, a long-term modulus of 9.3 ± 0.8 and 12 ± 1 kPa and an apparent viscosity of 2.8 ± 0.5 and 3.4 ± 0.6 kPa·s for chondrocytes and chondrons, respectively. It was concluded that chondrons were generally stiffer and showed less viscoelastic behaviour than chondrocytes, and that the PCM significantly influenced the mechanical properties of the cells.
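For reference, the instantaneous modulus, long-term modulus and apparent viscosity quoted above are the three parameters of a standard linear solid; the relaxation form below is one common parameterisation of that model and is given only as a schematic of how the three quantities are related, not as the exact constitutive law used in the cited finite-element simulations.

    % Standard linear solid (Zener) relaxation modulus -- one common parameterisation
    E(t) = E_{\infty} + \left(E_{0} - E_{\infty}\right) e^{-t/\tau},
    \qquad \tau = \frac{\eta}{E_{0} - E_{\infty}}

Here E_0 is the instantaneous modulus, E_inf the long-term (relaxed) modulus, and η the viscosity of the dashpot in the Maxwell arm; τ is the corresponding stress-relaxation time.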
---
paper_title: Large elastic deformations of isotropic materials VII. Experiments on the deformation of rubber
paper_content:
It is shown in this part how the theory of large elastic deformations of incompressible isotropic materials, developed in previous parts, can be used to interpret the load-deformation curves obtained for certain simple types of deformation of vulcanized rubber test-pieces in terms of a single stored-energy function. The types of experiment described are: (i) the pure homogeneous deformation of a thin sheet of rubber in which the deformation is varied in such a manner that one of the invariants of the strain, I1 or I2, is maintained constant; (ii) pure shear of a thin sheet of rubber (i.e. pure homogeneous deformation in which one of the extension ratios in the plane of the sheet is maintained at unity, while the other is varied); (iii) simultaneous simple extension and pure shear of a thin sheet (i.e. pure homogeneous deformation in which one of the extension ratios in the plane of the sheet is maintained constant at a value less than unity, while the other is varied); (iv) simple extension of a strip of rubber; (v) simple compression (i.e. simple extension in which the extension ratio is less than unity); (vi) simple torsion of a right-circular cylinder; (vii) superposed axial extension and torsion of a right-circular cylindrical rod. It is shown that the load-deformation curves in all these cases can be interpreted on the basis of the theory in terms of a stored-energy function W which is such that ∂W/∂I1 is independent of I1 and I2 and the ratio (∂W/∂I2)/(∂W/∂I1) is independent of I1 and falls, as I2 increases, from about 0.25 at I2 = 3.
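The finding that ∂W/∂I1 is essentially constant while ∂W/∂I2 varies only slowly is the empirical basis for the two-constant Mooney–Rivlin approximation quoted below; treating ∂W/∂I2 as strictly constant is, however, only a first approximation to the I2-dependence reported above.

    % Mooney-Rivlin stored-energy function for an incompressible material (I_3 = 1)
    W = C_{1}\,(I_{1} - 3) + C_{2}\,(I_{2} - 3),
    \qquad \frac{\partial W}{\partial I_{1}} = C_{1},
    \quad \frac{\partial W}{\partial I_{2}} = C_{2}

I1 and I2 are the first and second invariants of the Cauchy–Green deformation tensor, and C1, C2 are material constants fitted to data such as those described here.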
---
paper_title: A microstructural approach to cytoskeletal mechanics based on tensegrity.
paper_content:
Mechanical properties of living cells are commonly described in terms of the laws of continuum mechanics. The purpose of this report is to consider the implications of an alternative approach that emphasizes the discrete nature of stress bearing elements in the cell and is based on the known structural properties of the cytoskeleton. We have noted previously that tensegrity architecture seems to capture essential qualitative features of cytoskeletal shape distortion in adherent cells (Ingber, 1993a; Wang et al., 1993). Here we extend those qualitative notions into a formal microstructural analysis. On the basis of that analysis we attempt to identify unifying principles that might underlie the shape stability of the cytoskeleton. For simplicity, we focus on a tensegrity structure containing six rigid struts interconnected by 24 linearly elastic cables. Cables carry initial tension ("prestress") counterbalanced by compression of struts. Two cases of interconnectedness between cables and struts are considered: one where they are connected by pin-joints, and the other where the cables run through frictionless loops at the junctions. At the molecular level, the pinned structure may represent the case in which different cytoskeletal filaments are cross-linked whereas the looped structure represents the case where they are free to slip past one another. The system is then subjected to uniaxial stretching. Using the principle of virtual work, stretching force vs. extension and structural stiffness vs. stretching force relationships are calculated for different prestresses. The stiffness is found to increase with increasing prestress and, at a given prestress, to increase approximately linearly with increasing stretching force. This behavior is consistent with observations in living endothelial cells exposed to shear stresses (Wang & Ingber, 1994). At a given prestress, the pinned structure is found to be stiffer than the looped one, a result consistent with data on mechanical behavior of isolated, cross-linked and uncross-linked actin networks (Wachsstock et al., 1993). On the basis of our analysis we concluded that architecture and the prestress of the cytoskeleton might be key features that underlie a cell's ability to regulate its shape.
---
paper_title: In situ mechanical properties of the chondrocyte cytoplasm and nucleus
paper_content:
The way in which the nucleus experiences mechanical forces has important implications for understanding mechanotransduction. Knowledge of nuclear material properties and, specifically, their relationship to the properties of the bulk cell can help determine if the nucleus directly experiences mechanical loads, or if it is a signal transduction mechanism secondary to cell membrane deformation that leads to altered gene expression. Prior work measuring nuclear material properties using micropipette aspiration suggests that the nucleus is substantially stiffer than the bulk cell [Guilak, F., Tedrow, J.R., Burgkart, R., 2000. Viscoelastic properties of the cell nucleus. Biochem. Biophys. Res. Commun. 269, 781–786], whereas recent work with unconfined compression of single chondrocytes showed a nearly one-to-one correlation between cellular and nuclear strains [Leipzig, N.D., Athanasiou, K.A., 2008. Static compression of single chondrocytes catabolically modifies single-cell gene expression. Biophys. J. 94, 2412–2422]. In this study, a linearly elastic finite element model of the cell with a nuclear inclusion was used to simulate the unconfined compression data. Cytoplasmic and nuclear stiffnesses were varied from 1 to 7 kPa for several combinations of cytoplasmic and nuclear Poisson's ratios. It was found that the experimental data were best fit when the ratio of cytoplasmic to nuclear stiffness was 1.4, and both cytoplasm and nucleus were modeled as incompressible. The cytoplasmic to nuclear stiffness ratio is significantly lower than prior reports for isolated nuclei. These results suggest that the nucleus may behave mechanically differently in situ than when isolated.
---
paper_title: Determining the Instantaneous Modulus of Viscoelastic Solids Using Instrumented Indentation Measurements
paper_content:
Instrumented indentation is often used in the study of the small-scale mechanical behavior of "soft" matter that exhibits viscoelastic behavior. A number of techniques have recently been proposed to obtain the viscoelastic properties from indentation load-displacement data.
---
paper_title: Separating poroviscoelastic deformation mechanisms in hydrogels
paper_content:
Hydrogels have applications in drug delivery, mechanical actuation, and regenerative medicine. When hydrogels are deformed, load-relaxation arising from fluid flow—poroelasticity—and from rearrangement of the polymer network—viscoelasticity—is observed. The physical mechanisms are different in that poroelastic relaxation varies with experimental length-scale while viscoelastic does not. Here, we show that poroviscoelastic load-relaxation is the product of the two individual responses. The difference in length-scale dependence of the two mechanisms can be exploited to uniquely determine poroviscoelastic properties from simultaneous analysis of multi-scale indentation experiments, providing insight into hydrogel physical behavior.
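The separation argument can be stated schematically as follows: the measured load-relaxation is treated as the product of an intrinsic viscoelastic relaxation, which is independent of the contact size, and a poroelastic relaxation that depends on time only through the diffusion group Dt/a². The notation below is a generic summary of that idea rather than the specific expressions derived in the cited paper.

    % Schematic multiplicative decomposition of indentation load-relaxation
    \frac{F(t)}{F(0)} \approx g_{\mathrm{VE}}(t)\; g_{\mathrm{PE}}\!\left(\frac{D\,t}{a^{2}}\right)

Because g_PE shifts with the square of the contact radius a while g_VE does not, repeating the test at several contact sizes allows the two contributions to be separated and the diffusivity D and viscoelastic response to be identified independently.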
---
paper_title: Nanoindentation of biological and biomimetic materials
paper_content:
Nanoindentation techniques have recently been adapted for the study of biological materials. This feature will consider the experimental adaptations required for such studies. Following a brief review of the structure and constitutive behavior of biological materials, we examine the experimental aspects in detail, including working with hydrated samples, time-dependent mechanical behavior and extremely compliant materials. The analysis of experimental data, consistent with the constitutive response of the material, will then be treated. Examples of nanoindentation data collected using commercially-available instruments are shown, including nanoindentation creep curves of biological materials and relaxation responses of biomimetic hydrogels. Finally, we conclude by examining the current state and future needs of the biological nanoindentation community.
---
paper_title: Unconfined creep compression of chondrocytes
paper_content:
The study of single cell mechanics offers a valuable tool for understanding cellular milieus. Specific knowledge of chondrocyte biomechanics could lead to elucidation of disease etiologies and the biomechanical factors most critical to stimulating regenerative processes in articular cartilage. Recent studies in our laboratory have suggested that it may be acceptable to approximate the shape of a single chondrocyte as a disc. This geometry is easily utilized for generating models of unconfined compression. In this study, three continuum mechanics models of increasing complexity were formulated and used to fit unconfined compression creep data. Creep curves were obtained from middle/deep zone chondrocytes (n = 15) and separately fit using the three continuum models. The linear elastic solid model yielded a Young's modulus of 2.55±0.85 kPa. The viscoelastic model (adapted from the Kelvin model) generated an instantaneous modulus of 2.47±0.85 kPa, a relaxed modulus of 1.48±0.35 kPa, and an apparent viscosity of 1.92±1.80 kPa-s. Finally, a linear biphasic model produced an aggregate modulus of 2.58±0.87 kPa, a permeability of 2.57×10⁻¹² ± 3.09 m⁴/N-s, and a Poisson's ratio of 0.069±0.021. The results of this study demonstrate that similar values for the cell modulus can be obtained from three models of increasing complexity. The elastic model provides an easy method for determining the cell modulus, however, the viscoelastic and biphasic models generate additional material properties that are important for characterizing the transient response of compressed chondrocytes.
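A minimal sketch of how a Kelvin-type (standard linear solid) creep compliance can be fitted to single-cell creep data of this kind is given below; the compliance form, the synthetic data and the initial guesses are illustrative assumptions rather than the exact formulation or measurements of the cited study.

    # Minimal sketch: fit a standard-linear-solid creep response to creep strain data.
    import numpy as np
    from scipy.optimize import curve_fit

    def sls_creep_strain(t, E0, Einf, tau, sigma0=100.0):
        """Creep strain under constant stress sigma0 [Pa] for a standard linear solid.
        E0: instantaneous modulus [Pa]; Einf: relaxed modulus [Pa]; tau: retardation time [s]."""
        return sigma0 * (1.0 / Einf - (1.0 / Einf - 1.0 / E0) * np.exp(-t / tau))

    # Synthetic creep curve standing in for a measured one (illustrative values).
    t = np.linspace(0.0, 60.0, 200)   # time [s]
    strain = sls_creep_strain(t, 2.5e3, 1.5e3, 8.0) + 1e-3 * np.random.randn(t.size)

    (E0, Einf, tau), _ = curve_fit(sls_creep_strain, t, strain, p0=[2.0e3, 1.0e3, 5.0])
    eta = (E0 - Einf) * Einf / E0 * tau   # dashpot viscosity implied by the retardation time
    print(f"E0 ~ {E0:.0f} Pa, Einf ~ {Einf:.0f} Pa, tau ~ {tau:.1f} s, eta ~ {eta:.0f} Pa*s")

The same data could equally be fitted with an elastic or a biphasic model, as in the study above; the point of the sketch is only the mechanics of the curve fit, not a preference among the three constitutive descriptions.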
---
paper_title: Using indentation to characterize the poroelasticity of gels
paper_content:
When an indenter is pressed into a gel to a fixed depth, the solvent in the gel migrates, and the force on the indenter relaxes. Within the theory of poroelasticity, the force relaxation curves for indenters of several types are obtained in a simple form, enabling indentation to be used with ease as a method for determining the elastic constants and permeability of the gel. The method is demonstrated with a conical indenter on an alginate hydrogel.
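The practical content of the approach can be summarised schematically: after a step indentation, the normalised load relaxation depends on time only through the dimensionless group Dt/a², so matching a measured curve to the relaxation function for the relevant indenter shape yields the poroelastic diffusivity D, while the instantaneous and equilibrium loads give the elastic constants. The generic form below is a schematic of that idea, not the specific closed-form expressions derived in the paper.

    % Normalised poroelastic load relaxation after a step indentation to fixed depth
    \frac{F(t) - F(\infty)}{F(0) - F(\infty)} = g\!\left(\frac{D\,t}{a^{2}}\right)

Here a is the contact radius and g is a master curve that depends only on the indenter geometry, which is why the relaxation time scales with the square of the contact size.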
---
paper_title: General Theory of Three‐Dimensional Consolidation
paper_content:
The settlement of soils under load is caused by a phenomenon called consolidation, whose mechanism is known to be in many cases identical with the process of squeezing water out of an elastic porous medium. The mathematical physical consequences of this viewpoint are established in the present paper. The number of physical constants necessary to determine the properties of the soil is derived along with the general equations for the prediction of settlements and stresses in three-dimensional problems. Simple applications are treated as examples. The operational calculus is shown to be a powerful method of solution of consolidation problems.
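In modern notation, the linear theory introduced here is usually written as the coupled field equations below: quasi-static equilibrium, an effective-stress constitutive law, and fluid mass conservation with Darcy flow. The symbols follow current convention rather than the original ones in the paper.

    % Linear (Biot) poroelasticity in modern notation
    \nabla \cdot \boldsymbol{\sigma} = \mathbf{0},
    \qquad
    \boldsymbol{\sigma} = 2G\,\boldsymbol{\varepsilon} + \lambda\,\mathrm{tr}(\boldsymbol{\varepsilon})\,\mathbf{I} - \alpha\,p\,\mathbf{I},
    \qquad
    \frac{\partial}{\partial t}\!\left(\alpha\,\mathrm{tr}(\boldsymbol{\varepsilon}) + \frac{p}{M}\right)
      = \nabla \cdot \left(\frac{k}{\mu}\,\nabla p\right)

G and λ are the drained Lamé constants, α the Biot coefficient, M the Biot modulus, k the intrinsic permeability, μ the pore-fluid viscosity and p the pore pressure.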
---
paper_title: Incompressible porous media models by use of the theory of mixtures
paper_content:
This work illustrates the use of the thermodynamics of mixtures to formulate incompressible porous media models. An incompressible porous material is a mixture where the solid and the fluid constituents are each incompressible. Among the results are formulas which show how the chemical potentials for the constituents determine the stress tensor for the mixture and the pore pressure for each pore fluid. The general results are specialized to the forms necessary to produce models used in the applications. For example, the classical models used to study the flow of immiscible fluids in deformable and rigid solids are given.
---
paper_title: A Mixture Theory for Charged-Hydrated Soft Tissues Containing Multi-electrolytes: Passive Transport and Swelling Behaviors
paper_content:
A new mixture theory was developed to model the mechano-electrochemical behaviors of charged-hydrated soft tissues containing multi-electrolytes. The mixture is composed of n + 2 constituents (1 charged solid phase, 1 noncharged solvent phase, and n ion species). Results from this theory show that three types of force are involved in the transport of ions and solvent through such materials: (1) a mechanochemical force (including hydraulic and osmotic pressures); (2) an electrochemical force; and (3) an electrical force. Our results also show that three types of material coefficients are required to characterize the transport rates of these ions and solvent: (1) a hydraulic permeability; (2) mechano-electrochemical coupling coefficients; and (3) an ionic conductance matrix. Specifically, we derived the fundamental governing relationships between these forces and material coefficients to describe such mechano-electrochemical transduction effects as streaming potential, streaming current, diffusion (membrane) potential, electro-osmosis, and anomalous (negative) osmosis. As an example, we showed that the well-known formula for the resting cell membrane potential (Hodgkin and Huxley, 1952a, b) could be derived using our new n + 2 mixture model (a generalized triphasic theory). In general, the n + 2 mixture theory is consistent with and subsumes all previous theories pertaining to specific aspects of charged-hydrated tissues. In addition, our results provided the stress, strain, and fluid velocity fields within a tissue of finite thickness during a one-dimensional steady diffusion process. Numerical results were provided for the exchange of Na+ and Ca++ through the tissue. These numerical results support our hypothesis that tissue fixed charge density (cF) plays a significant role in modulating kinetics of ions and solvent transport through charged-hydrated soft tissues.
---
paper_title: The biphasic poroviscoelastic behavior of articular cartilage: role of the surface zone in governing the compressive behavior.
paper_content:
Surface fibrillation of articular cartilage is an early sign of degenerative changes in the development of osteoarthritis. To assess the influence of the surface zone on the viscoelastic properties of cartilage under compressive loading, we prepared osteochondral plugs from skeletally mature steers, with and without the surface zone of articular cartilage, for study in the confined compression creep experiment. The relative contributions of two viscoelastic mechanisms, i.e. a flow-independent mechanism [Hayes and Bodine, J. Biomechanics 11, 407-419 (1978)], and a flow-dependent mechanism [Mow et al. J. biomech. Engng 102, 73-84 (1980)], to the compressive creep response of these two types of specimens were determined using the biphasic poroviscoelastic theory proposed by Mak [J. Biomechanics 20, 703-714 (1986)]. From the experimental results and the biphasic poroviscoelastic theory, we found that frictional drag associated with interstitial fluid flow and fluid pressurization are the dominant mechanisms of load support in the intact specimens, i.e. the flow-dependent mechanisms alone were sufficient to describe normal articular cartilage compressive creep behavior. For specimens with the surface removed, we found an increased creep rate which was derived from an increased tissue permeability, as well as significant changes in the flow-independent parameters of the viscoelastic solid matrix. From these tissue properties and the biphasic poroviscoelastic theory, we determined that the flow-dependent mechanisms of load support, i.e. frictional drag and fluid pressurization, were greatly diminished in cartilage without the articular surface. Calculations based upon these material parameters show that for specimens with the surface zone removed, the cartilage solid matrix became more highly loaded during the early stages of creep. This suggests that an important function of the articular surface is to provide for a low fluid permeability, and thereby serve to restrict fluid exudation and increase interstitial fluid pressurization. Thus, it is likely that with increasing severity of damage to the articular surface, load support in cartilage under compression shifts from the flow-dependent modes of fluid drag and pressurization to increased solid matrix stress. This suggests that it is important to maintain the integrity of the articular surface in preserving normal compressive behavior of the tissue and normal load carriage in the joint.
---
paper_title: Scaling the Microrheology of Living Cells
paper_content:
We report a scaling law that governs both the elastic and frictional properties of a wide variety of living cell types, over a wide range of time scales and under a variety of biological interventions. This scaling identifies these cells as soft glassy materials existing close to a glass transition, and implies that cytoskeletal proteins may regulate cell mechanical properties mainly by modulating the effective noise temperature of the matrix. The practical implications are that the effective noise temperature is an easily quantified measure of the ability of the cytoskeleton to deform, flow, and reorganize.
---
paper_title: The relation between load and penetration in the axisymmetric boussinesq problem for a punch of arbitrary profile
paper_content:
A solution of the axisymmetric Boussinesq problem is derived from which are deduced simple formulae for the depth of penetration of the tip of a punch of arbitrary profile and for the total load which must be applied to the punch to achieve this penetration. Simple expressions are also derived for the distribution of pressure under the punch and for the shape of the deformed surface. The results are illustrated by the evaluation of the expressions for several simple punch shapes.
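The special cases of this general solution most often used to interpret cell and gel indentation data are worth recording; with the reduced modulus E* = E/(1 − ν²) for a rigid indenter on an elastic half-space, the standard load–depth relations are:

    % Flat-ended cylindrical punch of radius a
    F = 2\,E^{*} a\,\delta
    % Rigid cone of semi-angle \theta measured from the indentation axis
    F = \frac{2}{\pi}\,E^{*} \tan\theta\,\delta^{2}
    % Rigid sphere of radius R (Hertzian limit, shallow indentation)
    F = \frac{4}{3}\,E^{*} \sqrt{R}\,\delta^{3/2}

These are the relations most commonly inverted to obtain an apparent modulus from force–indentation curves.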
---
paper_title: A method for interpreting the data from depth-sensing indentation instruments
paper_content:
Depth-sensing indentation instruments provide a means for studying the elastic and plastic properties of thin films. A method for obtaining hardness and Young's modulus from the data obtained from these types of instruments is described. Elastic displacements are determined from the data obtained during unloading of the indentation. Young's modulus can be calculated from these measurements. In addition, the elastic contribution to the total displacement can be removed in order to calculate hardness. Determination of the exact shape of the indenter at the tip is critical to the measurement of both hardness and elastic modulus for indentation depths less than a micron. Hardness is shown to depend on strain rate, especially when the hardness values are calculated from the data along the loading curves.
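The core of this class of analysis is the relation between the unloading contact stiffness, the projected contact area and a reduced modulus combining specimen and indenter elasticity. A commonly used statement of it is given below as a schematic summary, not as the paper's full derivation.

    % Contact stiffness - reduced modulus relation used in depth-sensing indentation
    S = \left.\frac{dP}{dh}\right|_{\mathrm{unload}} = \frac{2}{\sqrt{\pi}}\,E_{r}\,\sqrt{A},
    \qquad
    \frac{1}{E_{r}} = \frac{1-\nu^{2}}{E} + \frac{1-\nu_{i}^{2}}{E_{i}}

A is the projected contact area at the onset of unloading, and (E, ν) and (E_i, ν_i) are the elastic constants of the specimen and the indenter, respectively; hardness then follows from the peak load divided by A.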
---
paper_title: Non-equilibration of hydrostatic pressure in blebbing cells
paper_content:
Current models for protrusive motility in animal cells focus on cytoskeleton-based mechanisms, where localized protrusion is driven by local regulation of actin biochemistry. In plants and fungi, protrusion is driven primarily by hydrostatic pressure. For hydrostatic pressure to drive localized protrusion in animal cells, it would have to be locally regulated, but current models treating cytoplasm as an incompressible viscoelastic continuum or viscous liquid require that hydrostatic pressure equilibrates essentially instantaneously over the whole cell. Here, we use cell blebs as reporters of local pressure in the cytoplasm. When we locally perfuse blebbing cells with cortex-relaxing drugs to dissipate pressure on one side, blebbing continues on the untreated side, implying non-equilibration of pressure on scales of approximately 10 μm and 10 s. We can account for localization of pressure by considering the cytoplasm as a contractile, elastic network infiltrated by cytosol. Motion of the fluid relative to the network generates spatially heterogeneous transients in the pressure field, and can be described in the framework of poroelasticity.
---
paper_title: Estimating the sensitivity of mechanosensitive ion channels to membrane strain and tension
paper_content:
Bone adapts to its environment by a process in which osteoblasts and osteocytes sense applied mechanical strain. One possible pathway for the detection of strain involves mechanosensitive channels and we sought to determine their sensitivity to membrane strain and tension. We used a combination of experimental and computational modeling techniques to gain new insights into cell mechanics and the regulation of mechanosensitive channels. Using patch-clamp electrophysiology combined with video microscopy, we recorded simultaneously the evolution of membrane extensions into the micropipette, applied pressure, and membrane currents. Nonselective mechanosensitive cation channels with a conductance of 15 pS were observed. Bleb aspiration into the micropipette was simulated using finite element models incorporating the cytoplasm, the actin cortex, the plasma membrane, cellular stiffening in response to strain, and adhesion between the membrane and the micropipette. Using this model, we examine the relative importance of the different cellular components in resisting suction into the pipette and estimate the membrane strains and tensions needed to open mechanosensitive channels. Radial membrane strains of 800% and tensions of 5×10⁻⁴ N·m⁻¹ were needed to open 50% of mechanosensitive channels. We discuss the relevance of these results in the understanding of cellular reactions to mechanical strain and bone physiology.
---
paper_title: Flat‐punch indentation of viscoelastic material
paper_content:
The indentation of standard viscoelastic solids, that is, the three-element viscoelastic material, by an axisymmetric, flat-ended indenter has been investigated theoretically. Under the boundary conditions of flat-punch indentation of a viscoelastic half-space, the solutions of the equations of viscoelastic deformation are derived for the standard viscoelastic material. Their generality resides in their inclusion of compressible as well as incompressible solids. They cover the two transient situations: flat-punch creep test and load-relaxation test. In experimental tests of their applicability, nanoindentation and microindentation probes under creep and relaxation conditions yielded a modulus from 0.1 to 1.1 GPa and viscosity from 1 to 37 GPa·s for crosslinked glassy polyurethane coatings. For bulk polystyrene, the values vary from 1 to 2 GPa and from 20 to 40 GPa·s, respectively. The analysis here provides a fundamental basis for probing elastic and viscous properties of coatings with nanoindentation or microindentation tests.
---
paper_title: Quantitative analysis of the viscoelastic properties of thin regions of fibroblasts using atomic force microscopy.
paper_content:
Viscoelasticity of the leading edge, i.e., the lamellipodium, of a cell is the key property for a deeper understanding of the active extension of a cell's leading edge. The fact that the lamellipodium of a cell is very thin (<1000 nm) imparts special challenges for accurate measurements of its viscoelastic behavior. It requires addressing strong substrate effects and comparatively high stresses (>1 kPa) on thin samples. We present the method for an atomic force microscopy-based microrheology that allows us to fully quantify the viscoelastic constants (elastic storage modulus, viscous loss modulus, and the Poisson ratio) of thin areas of a cell (<1000 nm) as well as those of thick areas. We account for substrate effects by applying two different models-a model for well-adhered regions (Chen model) and a model for nonadhered regions (Tu model). This method also provides detailed information about the adhered regions of a cell. The very thin regions relatively near the edge of NIH 3T3 fibroblasts can be identified by the Chen model as strongly adherent with an elastic strength of approximately 1.6 +/- 0.2 kPa and with an experimentally determined Poisson ratio of approximately 0.4 to 0.5. Further from the edge of these cells, the adherence decreases, and the Tu model is effective in evaluating its elastic strength ( approximately 0.6 +/- 0.1 kPa). Thus, our AFM-based microrheology allows us to correlate two key parameters of cell motility by relating elastic strength and the Poisson ratio to the adhesive state of a cell. This frequency-dependent measurement allows for the decomposition of the elastic modulus into loss and storage modulus. Applying this decomposition and Tu's and Chen's finite depth models allow us to obtain viscoelastic signatures in a frequency range from 50 to 300 Hz, showing a rubber plateau-like behavior.
---
paper_title: Slow Stress Propagation in Adherent Cells
paper_content:
Mechanical cues influence a wide range of cellular behaviors including motility, differentiation, and tumorigenesis. Although previous studies elucidated the role of specific players such as ion channels and focal adhesions as local mechanosensors, the investigation of how mechanical perturbations propagate across the cell is necessary to understand the spatial coordination of cellular processes. Here we quantify the magnitude and timing of intracellular stress propagation, using atomic force microscopy and particle tracking by defocused fluorescence microscopy. The apical cell surface is locally perturbed by atomic force microscopy cantilever indentation, and distal displacements are measured in three dimensions by tracking integrin-bound fluorescent particles. We observe an immediate response and slower equilibration, occurring over times that increase with distance from perturbation. This distance-dependent equilibration occurs over several seconds and can be eliminated by disruption of the actin cytoskeleton. Our experimental results are not explained by traditional viscoelastic models of cell mechanics, but they are consistent with predictions from poroelastic models that include both cytoskeletal deformation and flow of the cytoplasm. Our combined atomic force microscopy-particle tracking measurements provide direct evidence of slow, distance-dependent dissipative stress propagation in response to external mechanical cues and offer new insights into mechanical models and physiological behaviors of adherent cells.
---
paper_title: The cytoplasm of living cells behaves as a poroelastic material
paper_content:
The cytoplasm is the largest part of the cell by volume and hence its rheology sets the rate at which cellular shape changes can occur. Recent experimental evidence suggests that cytoplasmic rheology can be described by a poroelastic model, in which the cytoplasm is treated as a biphasic material consisting of a porous elastic solid meshwork (cytoskeleton, organelles, macromolecules) bathed in an interstitial fluid (cytosol). In this picture, the rate of cellular deformation is limited by the rate at which intracellular water can redistribute within the cytoplasm. However, direct supporting evidence for the model is lacking. Here we directly validate the poroelastic model to explain cellular rheology at short timescales using microindentation tests in conjunction with mechanical, chemical and genetic treatments. Our results show that water redistribution through the solid phase of the cytoplasm (cytoskeleton and macromolecular crowders) plays a fundamental role in setting cellular rheology at short timescales.
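The rate-limiting role of water redistribution is usually summarised by a poroelastic diffusion constant; the scaling below is the one commonly quoted in this context and is intended only as an order-of-magnitude guide, with the symbols defined generically rather than exactly as in the cited work.

    % Poroelastic diffusion constant and relaxation time over a length scale L
    D_{p} \sim \frac{E\,\xi^{2}}{\mu},
    \qquad
    \tau \sim \frac{L^{2}}{D_{p}}

E is a drained elastic modulus of the cytoskeletal network, ξ an effective pore (mesh) size of the crowded cytoplasm, and μ the cytosol viscosity; deformations faster than τ are therefore limited by intracellular water movement.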
---
paper_title: Poro-Viscoelastic Behavior of Gelatin Hydrogels Under Compression-Implications for Bioelasticity Imaging
paper_content:
Ultrasonic elasticity imaging enables visualization of soft tissue deformation for medical diagnosis. Our aim is to understand the role of flow-dependent and flow-independent viscoelastic mechanisms in the response of biphasic polymeric media, including biological tissues and hydrogels, to low-frequency forces. Combining the results of confined and unconfined compression experiments on gelatin hydrogels with finite element analysis (FEA) simulations of the experiments, we explore the role of polymer structure, loading, and boundary conditions in generating contrast for viscoelastic features. Feature estimation is based on comparisons between the biphasic poro-elastic and biphasic poro-viscoelastic (BPVE) material models, where the latter adds the viscoelastic response of the solid polymer matrix. The approach is to develop a consistent FEA material model (BPVE) from confined compression-stress relaxation measurements to extract the strain dependent hydraulic permeability variation and cone-plate rheometer measurements to obtain the flow-independent viscoelastic constants for the solid-matrix phase. The model is then applied to simulate the unconfined compression experiment to explore the mechanics of hydropolymers under conditions of quasi-static elasticity imaging. The spatiotemporal distributions of fluid and solid-matrix behavior within the hydrogel are studied to propose explanations for strain patterns that arise during the elasticity imaging of heterogeneous media.
---
paper_title: Poro-viscoelastic properties of anisotropic cylindrical composite materials
paper_content:
A new poro-viscoelastic mechanical model, which is able to predict strain creep, stress relaxation and instant elasticity for anisotropic cylindrical composite materials based on the characteristics of dentinal biomaterials of cylindrical microstructures, is described. The model enables evaluation of the poro-viscoelastic properties of dentinal biomaterials with no cracks and with fatigue-cracks. The predicted poro-viscoelastic properties obtained from the present anisotropic constitutive model agree well with the available experimental data for both wet and dry situations. The present model can be applied to anisotropic cylindrical composite materials other than biomaterials.
---
paper_title: Viscoelastic properties of transformed cells: Role in tumor cell progression and metastasis formation
paper_content:
The micropipette aspiration technique was used to investigate the deformation properties of a panel of nontransformed and transformed rat fibroblasts derived from the same normal cell line. In this method, a step negative pressure is applied to the cell via a micropipette and the aspiration distance into the pipette as a function of time is determined using video techniques. A standard solid viscoelastic model was then used to analyze the viscoelastic properties of the cell. From these results, it is concluded that a direct correlation exists between an increase in deformability and progression of the transformed phenotype from a nontumorigenic cell line into a tumorigenic, metastatic cell line.
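For context, the elastic limit of the standard-solid analysis used with this technique is usually expressed through the half-space (punch) model of Theret and co-workers, in which the Young's modulus follows from the aspirated projection length at a given suction pressure. The relation below is that widely used elastic form, quoted as background; it is not the specific viscoelastic fit applied in the cited study.

    % Half-space (punch) model for micropipette aspiration
    E = \frac{3\,\Phi_{p}\,R_{p}\,\Delta P}{2\pi\,L_{p}}

R_p is the pipette inner radius, ΔP the applied suction pressure, L_p the aspirated projection length, and Φ_p ≈ 2.1 a geometry factor accounting for the pipette wall.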
---
paper_title: Large Deformation Finite Element Analysis of Micropipette Aspiration to Determine the Mechanical Properties of the Chondrocyte
paper_content:
Chondrocytes, the cells in articular cartilage, exhibit solid-like viscoelastic behavior in response to mechanical stress. In modeling the creep response of these cells during micropipette aspiration, previous studies have attributed the viscoelastic behavior of chondrocytes to either intrinsic viscoelasticity of the cytoplasm or to biphasic effects arising from fluid-solid interactions within the cell. However, the mechanisms responsible for the viscoelastic behavior of chondrocytes are not fully understood and may involve one or both of these phenomena. In this study, the micropipette aspiration experiment was modeled using a large strain finite element simulation that incorporated contact boundary conditions. The cell was modeled using finite strain incompressible and compressible elastic models, a two-mode compressible viscoelastic model, or a biphasic elastic or viscoelastic model. Comparison of the model to the experimentally measured response of chondrocytes to a step increase in aspiration pressure showed that a two-mode compressible viscoelastic formulation accurately captured the creep response of chondrocytes during micropipette aspiration. Similarly, a biphasic two-mode viscoelastic analysis could predict all aspects of the cell's creep response to a step aspiration. In contrast, a biphasic elastic formulation was not capable of predicting the complete creep response, suggesting that the creep response of the chondrocytes under micropipette aspiration is predominantly due to intrinsic viscoelastic phenomena and is not due to the biphasic behavior.
---
paper_title: Poro-viscoelastic constitutive modeling of unconfined creep of hydrogels using finite element analysis with integrated optimization method
paper_content:
Hydrogels are cross-linked polymer networks swollen with water and are being considered as potential replacements for diseased load-bearing tissues such as cartilage. Hydrogels show nonlinear time dependent behavior, and are a challenge to model. A three-element poro-viscoelastic constitutive model was developed based on the structure and nature of the hydrogel. To identify the material parameters, an inverse finite element (FE) technique was used that combines experimental results with FE modeling and an optimization method. Unconfined compression creep tests were conducted on poly(vinyl alcohol) (PVA) and poly(ethylene-co-vinyl alcohol)–poly(vinyl pyrrolidone) (EVAL–PVP) hydrogels manufactured by injection molding. Results from the creep experiments showed that for PVA hydrogels, an increase in polymer concentration correlates with a decrease in the equilibrium water content (EWC) and the creep strain. In EVAL–PVP hydrogels, an increase in the hydrophobic segments (EVAL) correlates with a decrease in the EWC as well as the creep strain. An inverse FE method was used to identify the viscoelastic material parameters of the hydrogels in combination with creep testing using the poro-viscoelastic three-element constitutive model. The elastic modulus estimated from the inverse FE technique showed good agreement with the modulus estimated directly from the test data.
---
paper_title: Determination of the Poisson's ratio of the cell: recovery properties of chondrocytes after release from complete micropipette aspiration.
paper_content:
Chondrocytes in articular cartilage are regularly subjected to compression and recovery due to dynamic loading of the joint. Previous studies have investigated the elastic and viscoelastic properties of chondrocytes using micropipette aspiration techniques, but in order to calculate cell properties, these studies have generally assumed that cells are incompressible with a Poisson's ratio of 0.5. The goal of this study was to measure the Poisson's ratio and recovery properties of the chondrocyte by combining theoretical modeling with experimental measures of complete cellular aspiration and release from a micropipette. Chondrocytes isolated from non-osteoarthritic and osteoarthritic cartilage were fully aspirated into a micropipette and allowed to reach mechanical equilibrium. Cells were then extruded from the micropipette and cell volume and morphology were measured throughout the experiment. This experimental procedure was simulated with finite element analysis, modeling the chondrocyte as either a compressible two-mode viscoelastic solid, or as a biphasic viscoelastic material. By fitting the experimental data to the theoretically predicted cell response, the Poisson's ratio and the viscoelastic recovery properties of the cell were determined. The Poisson's ratio of chondrocytes was found to be 0.38 for non-osteoarthritic cartilage and 0.36 for osteoarthritic chondrocytes (no significant difference). Osteoarthritic chondrocytes showed an increased recovery time following full aspiration. In contrast to previous assumptions, these findings suggest that chondrocytes are compressible, consistent with previous studies showing cell volume changes with compression of the extracellular matrix.
---
paper_title: Microrheology of Human Lung Epithelial Cells Measured by Atomic Force Microscopy
paper_content:
Lung epithelial cells are subjected to large cyclic forces from breathing. However, their response to dynamic stresses is poorly defined. We measured the complex shear modulus (G*(ω)) of human alveolar (A549) and bronchial (BEAS-2B) epithelial cells over three frequency decades (0.1–100 Hz) and at different loading forces (0.1–0.9 nN) with atomic force microscopy. G*(ω) was computed by correcting force-indentation oscillatory data for the tip-cell contact geometry and for the hydrodynamic viscous drag. Both cell types displayed similar viscoelastic properties. The storage modulus G′(ω) increased with frequency following a power law with exponent ∼0.2. The loss modulus G″(ω) was ∼2/3 lower and increased similarly to G′(ω) up to ∼10 Hz, but exhibited a steeper rise at higher frequencies. The cells showed a weak force dependence of G′(ω) and G″(ω). G*(ω) conformed to the power-law model with a structural damping coefficient of ∼0.3, indicating a coupling of elastic and dissipative processes within the cell. Power-law behavior implies a continuum distribution of stress relaxation time constants. This complex dynamics is consistent with the rheology of soft glassy materials close to a glass transition, thereby suggesting that structural disorder and metastability may be fundamental features of cell architecture.
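A minimal sketch of how such G′(ω), G″(ω) spectra are commonly reduced to a power-law (structural damping) description is given below: a single exponent is fitted to the storage modulus on log–log axes and the damping coefficient is estimated from the low-frequency ratio G″/G′. The synthetic spectra and parameter values are illustrative assumptions, not the cited measurements.

    # Minimal sketch: reduce complex-modulus spectra to a power-law (structural damping) description.
    import numpy as np

    # Synthetic spectra standing in for AFM microrheology data (illustrative values).
    freq = np.logspace(-1, 2, 30)                # frequency [Hz]
    alpha_true, G0 = 0.2, 500.0                  # power-law exponent and prefactor [Pa]
    Gp = G0 * freq**alpha_true                   # storage modulus G' [Pa]
    Gpp = Gp * np.tan(np.pi * alpha_true / 2) + 2.0 * freq   # loss modulus G'' with a Newtonian term

    # Power-law exponent and prefactor from the storage modulus on log-log axes.
    alpha_fit, logG1_fit = np.polyfit(np.log10(freq), np.log10(Gp), 1)

    # Structural damping coefficient from the low-frequency part of the spectrum.
    eta_struct = np.median((Gpp / Gp)[freq < 10.0])

    print(f"exponent ~ {alpha_fit:.2f}, G'(1 Hz) ~ {10**logG1_fit:.0f} Pa, damping ~ {eta_struct:.2f}")

In the structural damping picture the low-frequency ratio G″/G′ is expected to equal tan(πα/2), so comparing the fitted exponent with the measured damping coefficient is a quick internal consistency check.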
---
paper_title: Creep indentation of single cells.
paper_content:
An apparatus for creep indentation of individual adherent cells was designed, developed, and experimentally validated. The creep cytoindentation apparatus (CCA) can perform stress-controlled experiments and measure the corresponding deformation of single anchorage-dependent cells. The apparatus can resolve forces on the order of 1 nN and cellular deformations on the order of 0.1 micron. Experiments were conducted on bovine articular chondrocytes using loads on the order of 10 nN. The experimentally observed viscoelastic behavior of these cells was modeled using the punch problem and standard linear solid. The punch problem yielded a Young's modulus of 1.11 +/- 0.48 kPa. The standard linear solid model yielded an instantaneous elastic modulus of 8.00 +/- 4.41 kPa, a relaxed modulus of 1.09 +/- 0.54 kPa, an apparent viscosity of 1.50 +/- 0.92 kPa-s, and a time constant of 1.32 +/- 0.65 s. To our knowledge, this is the first time that stress-controlled indentation testing has been applied at the single cell level. This methodology represents a new tool in understanding the mechanical nature of anchorage-dependent cells and mechanotransductional pathways.
---
paper_title: Dimensional and mechanical dynamics of active and stable edges in motile fibroblasts investigated by using atomic force microscopy
paper_content:
The atomic force microscope (AFM) was employed to investigate the extension and retraction dynamics of protruding and stable edges of motile 3T3 fibroblasts in culture. Such dynamics closely paralleled the results of earlier studies employing video microscopy that indicated that the AFM force-mapping technique does not appreciably perturb these dynamics. Force scans permitted height determinations of active and stable edges. Whereas the profiles of active edges are flat with average heights of 0.4–0.8 μm, stable edges smoothly ascend to 2–3 μm within about 6 μm of the edge. In the region of the leading edge, the height fluctuates up to 50% (SD) of the mean value, much more than the stable edge; this fluctuation presumably reflects differences in underlying cytoskeletal activity. In addition, force mapping yields an estimate of the local Young’s modulus or modulus of elasticity (E, the cortical stiffness). This stiffness will be related to “cortical tension,” can be accurately calculated for the stable edges, and is ≈12 kPa in this case. The thinness of the leading edge precludes accurate estimation of the E values, but within 4 μm of the margin it is considerably smaller than that for stable edges, which have an upper limit of 3–5 kPa. Although blebbing cannot absolutely be ruled out as a mechanism of extension, the data are consistent with an actin polymerization and/or myosin motor mechanism in which the average material properties of the extending margin would be nearly constant to the edge. Because the leading edge is softer than the stable edge, these data also are consistent with the notion that extension preferentially occurs in regions of lower cortical tension.
---
paper_title: Impact and contact stress analysis in multilayer media
paper_content:
The contact problem in a multilayer medium is analyzed based upon classical elasticity theory. The mixed boundary value problem is reformulated into a general approximation technique suitable for calculation on a digital computer. The Boussinesq problem demonstrates that this approximate method is much more accurate than an "exact" analysis recently published. A second example, which is most commonly applied in engineering practice, involves the parabolic punch. Numerical results are presented for both examples, to illustrate the physically significant effects of soft and hard layers and different layer thicknesses. The analysis is then extended to quasi-static impact. Results of experiments dropping a steel ball on nylon and rubber layers over a granite base are given and good agreement is found between the analytical and experimental results.
---
paper_title: A thin-layer model for viscoelastic, stress-relaxation testing of cells using atomic force microscopy: do cell properties reflect metastatic potential?
paper_content:
Atomic force microscopy has rapidly become a valuable tool for quantifying the biophysical properties of single cells. The interpretation of atomic force microscopy-based indentation tests, however, is highly dependent on the use of an appropriate theoretical model of the testing configuration. In this study, a novel, thin-layer viscoelastic model for stress relaxation was developed to quantify the mechanical properties of chondrosarcoma cells in different configurations to examine the hypothesis that viscoelastic properties reflect the metastatic potential and invasiveness of the cell using three well-characterized human chondrosarcoma cell lines (JJ012, FS090, 105KC) that show increasing chondrocytic differentiation and decreasing malignancy, respectively. Single-cell stress relaxation tests were conducted at 2 h and 2 days after plating to determine cell mechanical properties in either spherical or spread morphologies and analyzed using the new theoretical model. At both time points, JJ012 cells had the lowest moduli of the cell lines examined, whereas FS090 typically had the highest. At 2 days, all cells showed an increase in stiffness and a decrease in apparent viscosity compared to the 2-h time point. Fluorescent labeling showed that the F-actin structure in spread cells was significantly different between FS090 cells and JJ012/105KC cells. Taken together with results of previous studies, these findings indicate that cell transformation and tumorigenicity are associated with a decrease in cell modulus and apparent viscosity, suggesting that cell mechanical properties may provide insight into the metastatic potential and invasiveness of a cell.
---
paper_title: Spherical indentation testing of poroelastic relaxations in thin hydrogel layers
paper_content:
In this work, we present the Poroelastic Relaxation Indentation (PRI) testing approach for quantifying the mechanical and transport properties of thin layers of poly(ethylene glycol) hydrogels with thicknesses on the order of 200 μm. Specifically, PRI characterizes poroelastic relaxation in hydrogels by indenting the material at fixed depth and measuring the contact area-dependent load relaxation process as a function of time. With the aid of a linear poroelastic theory developed for thin or geometrically confined swollen polymer networks, we demonstrate that PRI can quantify the water diffusion coefficient, shear modulus and average pore size of the hydrogel layer. This approach provides a simple methodology to quantify the material properties of thin swollen polymer networks relevant to transport phenomena.
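A useful back-of-the-envelope reading of such load-relaxation data is the poroelastic time scale: relaxation is controlled by solvent diffusion over the contact radius a, so curves measured at different contact sizes collapse when time is normalized by it. This is a generic scaling argument, not the paper's specific confined-layer solution.
```latex
\tau_{\mathrm{poro}} \;\sim\; \frac{a^{2}}{D}
```
Here D is the effective diffusion coefficient of water through the polymer network.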
---
paper_title: Microrheology of Human Lung Epithelial Cells Measured by Atomic Force Microscopy
paper_content:
Lung epithelial cells are subjected to large cyclic forces from breathing. However, their response to dynamic stresses is poorly defined. We measured the complex shear modulus (G*(ω)) of human alveolar (A549) and bronchial (BEAS-2B) epithelial cells over three frequency decades (0.1–100 Hz) and at different loading forces (0.1–0.9 nN) with atomic force microscopy. G*(ω) was computed by correcting force-indentation oscillatory data for the tip-cell contact geometry and for the hydrodynamic viscous drag. Both cell types displayed similar viscoelastic properties. The storage modulus G′(ω) increased with frequency following a power law with exponent ∼0.2. The loss modulus G″(ω) was ∼2/3 lower and increased similarly to G′(ω) up to ∼10 Hz, but exhibited a steeper rise at higher frequencies. The cells showed a weak force dependence of G′(ω) and G″(ω). G*(ω) conformed to the power-law model with a structural damping coefficient of ∼0.3, indicating a coupling of elastic and dissipative processes within the cell. Power-law behavior implies a continuum distribution of stress relaxation time constants. This complex dynamics is consistent with the rheology of soft glassy materials close to a glass transition, thereby suggesting that structural disorder and metastability may be fundamental features of cell architecture.
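The coupled elastic and dissipative behaviour summarized by a single structural damping coefficient is usually written in the power-law (soft glassy) form below, up to a Gamma-function prefactor that some authors include; alpha is the power-law exponent, eta the structural damping coefficient, omega_0 a scale frequency, and the i*omega*mu term an additional Newtonian viscosity that dominates at high frequency.
```latex
G^{*}(\omega) \;=\; G_{0}\,\bigl(1 + i\,\eta\bigr)\left(\frac{\omega}{\omega_{0}}\right)^{\alpha} \;+\; i\,\omega\,\mu,
\qquad \eta \;=\; \tan\!\left(\frac{\pi\alpha}{2}\right)
```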
---
paper_title: Viscoelastic properties of human mesenchymally-derived stem cells and primary osteoblasts, chondrocytes, and adipocytes
paper_content:
The mechanical properties of single cells play important roles in regulating cell-matrix interactions, potentially influencing the process of mechanotransduction. Recent studies also suggest that cellular mechanical properties may provide novel biological markers, or “biomarkers,” of cell phenotype, reflecting specific changes that occur with disease, differentiation, or cellular transformation. Of particular interest in recent years has been the identification of such biomarkers that can be used to determine specific phenotypic characteristics of stem cells that separate them from primary, differentiated cells. The goal of this study was to determine the elastic and viscoelastic properties of three primary cell types of mesenchymal lineage (chondrocytes, osteoblasts, and adipocytes) and to test the hypothesis that primary differentiated cells exhibit distinct mechanical properties compared to adult stem cells (adipose-derived or bone marrow-derived mesenchymal stem cells). In an adherent, spread configuration, chondrocytes, osteoblasts, and adipocytes all exhibited significantly different mechanical properties, with osteoblasts being stiffer than chondrocytes and both being stiffer than adipocytes. Adipose-derived and mesenchymal stem cells exhibited similar properties to each other, but were mechanically distinct from primary cells, particularly when comparing a ratio of elastic to relaxed moduli. These findings will help more accurately model the cellular mechanical environment in mesenchymal tissues, which could assist in describing injury thresholds and disease progression or even determining the influence of mechanical loading for tissue engineering efforts. Furthermore, the identification of mechanical properties distinct to stem cells could result in more successful sorting procedures to enrich multipotent progenitor cell populations.
---
paper_title: Finite element modelling of nanoindentation based methods for mechanical properties of cells.
paper_content:
The viscoelastic properties of living cells are important for quantifying the biomechanical effects of drug treatment, disease and aging. Nanoindentation techniques have proven effective to characterize the viscoelastic properties of living cells. However, most studies utilized the Hertz contact model and assumed Heaviside step loading, which does not represent real tests. Therefore, new mathematical models have been developed to determine the viscoelastic properties of cells in nanoindentation tests. The finite element method was used to determine the empirical correction parameter in the mathematical model to account for large deformation, in which case the combined effect of the finite lateral and vertical dimensions of the cell is essential. A viscoelastic integral operator was used to account for the realistic deformation rate. The predictive model captures the mechanical responses of the cells observed in a previous experimental study. This work has demonstrated that the new model consistently predicts viscoelastic properties for both the ramping and stress-relaxation periods, which cannot be achieved by the commonly used models. Utilization of this new model can enrich experimental cell mechanics in the interpretation of nanoindentation of cells.
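For reference, the Hertzian spherical-indentation relation that such analyses start from, and its standard hereditary-integral generalization to an arbitrary indentation history delta(t) (assuming a time-independent Poisson ratio), are commonly written as below; the paper's empirical correction parameters for finite cell dimensions are not reproduced here.
```latex
F \;=\; \frac{4}{3}\,\frac{E}{1-\nu^{2}}\,\sqrt{R}\;\delta^{3/2},
\qquad
F(t) \;=\; \frac{4\sqrt{R}}{3\,(1-\nu^{2})}\int_{0}^{t} E_{\mathrm{rel}}(t-s)\,
\frac{\partial\,\delta^{3/2}(s)}{\partial s}\,\mathrm{d}s
```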
---
paper_title: Nanomechanical properties of individual chondrocytes and their developing growth factor-stimulated pericellular matrix
paper_content:
Abstract The nanomechanical properties of individual cartilage cells (chondrocytes) and their aggrecan and collagen-rich pericellular matrix (PCM) were measured via atomic force microscope nanoindentation using probe tips of two length scales (nanosized and micron-sized). The properties of cells freshly isolated from cartilage tissue (devoid of PCM) were compared to cells that were cultured for selected times (up to 28 days) in 3-D alginate gels which enabled PCM assembly and accumulation. Cells were immobilized and kept viable in pyramidal wells microfabricated into an array on silicon chips. Hertzian contact mechanics and finite element analyses were employed to estimate apparent moduli from the force versus depth curves. The effects of culture conditions on the resulting PCM properties were studied by comparing 10% fetal bovine serum to medium containing a combination of insulin growth factor-1 (IGF-1)+osteogenic protein-1 (OP-1). While both systems showed increases in stiffness with time in culture between days 7 and 28, the IGF-1+OP-1 combination resulted in a higher stiffness for the cell-PCM composite by day 28 and a higher apparent modulus of the PCM which is compared to the FBS cultured cells. These studies give insight into the temporal evolution of the nanomechanical properties of the pericellar matrix relevant to the biomechanics and mechanobiology of tissue-engineered constructs for cartilage repair.
---
paper_title: Immobilizing live bacteria for AFM imaging of cellular processes.
paper_content:
Coccoid cells of the bacterial species Staphylococcus aureus have been mechanically trapped in lithographically patterned substrates and imaged under growth media using atomic force microscopy (AFM) in order to follow cellular processes. The cells are not perturbed as there is no chemical linkage to the surface. Confinement effects are minimized compared to trapping the cells in porous membranes or soft gels. S. aureus cells have been imaged undergoing cell division whilst trapped in the patterned substrates. Entrapment in lithographically patterned substrates provides a novel way for anchoring bacterial cells so that the AFM tip will not push the cells off during imaging, whilst allowing the bacteria to continue with cellular processes.
---
paper_title: Understanding the nanoindentation mechanisms of a microsphere for biomedical applications
paper_content:
Nanoindentation techniques have proven effective to characterize nanomaterials and soft biomaterials. Using microfabricated wells to hold microspheres will enable automated indentation of microspheres. However, the existing contact mechanics based models such as the Hertz model and other modified models (e.g. thin layer models) only deal with indenting the specimen placed on a flat surface (i.e. the bottom surface is constrained vertically) without lateral constraint. Therefore, new mathematical models have been developed in this study to investigate the nanoindentation responses for a microsphere sitting in a well. Finite element simulation was employed to determine the empirical correction parameter in the mathematical model to account for the constraint imposed by the well. Utilization of this new model can also enrich the experimental contact mechanics.
---
paper_title: Cell mechanics, structure, and function are regulated by the stiffness of the three-dimensional microenvironment.
paper_content:
This study adopts a combined computational and experimental approach to determine the mechanical, structural, and metabolic properties of isolated chondrocytes cultured within three-dimensional hydrogels. A series of linear elastic and hyperelastic finite-element models demonstrated that chondrocytes cultured for 24 h in gels for which the relaxation modulus is <5 kPa exhibit a cellular Young's modulus of ∼5 kPa. This is notably greater than that reported for isolated chondrocytes in suspension. The increase in cell modulus occurs over a 24-h period and is associated with an increase in the organization of the cortical actin cytoskeleton, which is known to regulate cell mechanics. However, there was a reduction in chromatin condensation, suggesting that changes in the nucleus mechanics may not be involved. Comparison of cells in 1% and 3% agarose showed that cells in the stiffer gels rapidly develop a higher Young's modulus of ∼20 kPa, sixfold greater than that observed in the softer gels. This was associated with higher levels of actin organization and chromatin condensation, but only after 24 h in culture. Further studies revealed that cells in stiffer gels synthesize less extracellular matrix over a 28-day culture period. Hence, this study demonstrates that the properties of the three-dimensional microenvironment regulate the mechanical, structural, and metabolic properties of living cells.
---
paper_title: Indentation and adhesive probing of a cell membrane with AFM: theoretical model and experiments
paper_content:
In probing adhesion and cell mechanics by atomic force microscopy (AFM), the mechanical properties of the membrane have an important if neglected role. Here we theoretically model the contact of an AFM tip with a cell membrane, where direct motivation and data are derived from a prototypical ligand-receptor adhesion experiment. An AFM tip is functionalized with a prototypical ligand, SIRPα, and then used to probe its native receptor on red cells, CD47. The interactions prove specific and typical in force, and also show in detachment, a sawtooth-shaped disruption process that can extend over hundreds of nm. The theoretical model here that accounts for both membrane indentation as well as membrane extension in tip retraction incorporates membrane tension and elasticity as well as AFM tip geometry and stochastic disruption. Importantly, indentation depth proves initially proportional to membrane tension and does not follow the standard Hertz model. Computations of detachment confirm nonperiodic disruption with membrane extensions of hundreds of nm set by membrane tension. Membrane mechanical properties thus clearly influence AFM probing of cells, including single molecule adhesion experiments.
---
paper_title: On the contact and adhesion of rough surfaces
paper_content:
The elastic contact of rough surfaces has been studied by Greenwood and Williamson [Proc. R. Soc. London, Ser. A 295, 300 (1966)] for Hertzian contacts and by Fuller and Tabor [Proc. R. Soc. London, Ser. A 345, 327 (1975)] for Johnson-Kendall-Roberts contacts. The theory for Deryagin-Muller-Toporov contacts is proposed here and compared with previous results. An extra load due to adhesion forces acting around the contacts appears, which can account for the friction under negative load and the increase in the apparent friction coefficient on clean surfaces.
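For a single smooth sphere of radius R and work of adhesion w, the two adhesion models being compared predict different pull-off forces; these single-asperity results are what the extra adhesive load term in the rough-surface theory is built from.
```latex
F_{\mathrm{pull\text{-}off}}^{\mathrm{JKR}} \;=\; \frac{3}{2}\,\pi w R,
\qquad
F_{\mathrm{pull\text{-}off}}^{\mathrm{DMT}} \;=\; 2\,\pi w R
```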
---
paper_title: Contribution of the nucleus to the mechanical properties of endothelial cells.
paper_content:
The cell nucleus plays a central role in the response of the endothelium to mechanical forces, possibly by deforming during cellular adaptation. The goal of this work was to precisely quantify the mechanical properties of the nucleus. Individual endothelial cells were subjected to compression between glass microplates. This technique allows measurement of the uniaxial force applied to the cell and the resulting deformation. Measurements were made on round and spread cells to rule out the influence of cell morphology on the nucleus mechanical properties. Tests were also carried out with nuclei isolated from cell cultures by a chemical treatment. The non-linear force-deformation curves indicate that round cells deform at lower forces than spread cells and nuclei. Finite-element models were also built with geometries adapted to actual morphometric measurements of round cells, spread cells and isolated nuclei. The nucleus and the cytoplasm were modeled as separate homogeneous hyperelastic materials. The models simulate the compression and yield the force-deformation curve for a given set of elastic moduli. These parameters are varied to obtain a best fit between the theoretical and experimental data. The elastic modulus of the cytoplasm is found to be on the order of 500 N/m² for spread and round cells. The elastic modulus of the endothelial nucleus is on the order of 5000 N/m² for nuclei in the cell and on the order of 8000 N/m² for isolated nuclei. These results represent an unambiguous measurement of the nucleus mechanical properties and will be important in understanding how cells perceive mechanical forces and respond to them.
---
paper_title: Influence of cell spreading and contractility on stiffness measurements using AFM
paper_content:
Atomic Force Microscopy (AFM) is widely used for measuring mechanical properties of cells, and to understand how cells respond to their mechanical environments. A standard method for obtaining cell stiffness from experimental force–indentation curves is based on the simplified Hertz theory developed for studying the indentation of a semi-infinite elastic body by a spherical punch, assumptions that do not hold for biological cells. The modified Hertz theory developed by Dimitriadis et al., which takes the finite sample height into account, is widely used by experimentalists for greater accuracy. However, neither of these two models account for the finite lateral spread of the cells and cellular contractility. In this paper, we address the influence of cell geometry, cell pre-stress, and nuclear properties on cell stiffness measurements by modeling indentation of a cell of prescribed geometry with a spherical AFM probe using the finite element method. Using parametric studies, we develop scaling relationships between the effective stiffness probed by AFM and the bulk cell stiffness, taking cell and tip geometry into account. Taken together, our results demonstrate the need to take cell geometry into account while estimating the cell stiffness and provide simple expressions for doing so.
---
paper_title: In situ mechanical properties of the chondrocyte cytoplasm and nucleus
paper_content:
Abstract The way in which the nucleus experiences mechanical forces has important implications for understanding mechanotransduction. Knowledge of nuclear material properties and, specifically, their relationship to the properties of the bulk cell can help determine if the nucleus directly experiences mechanical loads, or if it is a signal transduction mechanism secondary to cell membrane deformation that leads to altered gene expression. Prior work measuring nuclear material properties using micropipette aspiration suggests that the nucleus is substantially stiffer than the bulk cell [Guilak, F., Tedrow, J.R., Burgkart, R., 2000. Viscoelastic properties of the cell nucleus. Biochem. Biophys. Res. Commun. 269, 781–786], whereas recent work with unconfined compression of single chondrocytes showed a nearly one-to-one correlation between cellular and nuclear strains [Leipzig, N.D., Athanasiou, K.A., 2008. Static compression of single chondrocytes catabolically modifies single-cell gene expression. Biophys. J. 94, 2412–2422]. In this study, a linearly elastic finite element model of the cell with a nuclear inclusion was used to simulate the unconfined compression data. Cytoplasmic and nuclear stiffnesses were varied from 1 to 7 kPa for several combinations of cytoplasmic and nuclear Poisson's ratios. It was found that the experimental data were best fit when the ratio of cytoplasmic to nuclear stiffness was 1.4, and both cytoplasm and nucleus were modeled as incompressible. The cytoplasmic to nuclear stiffness ratio is significantly lower than prior reports for isolated nuclei. These results suggest that the nucleus may behave mechanically different in situ than when isolated.
---
paper_title: Mechanical regulation of nuclear structure and function.
paper_content:
Mechanical loading induces both nuclear distortion and alterations in gene expression in a variety of cell types. Mechanotransduction is the process by which extracellular mechanical forces can activate a number of well-studied cytoplasmic signaling cascades. Inevitably, such signals are transduced to the nucleus and induce transcription factor-mediated changes in gene expression. However, gene expression also can be regulated through alterations in nuclear architecture, providing direct control of genome function. One putative transduction mechanism for this phenomenon involves alterations in nuclear architecture that result from the mechanical perturbation of the cell. This perturbation is associated with direct mechanical strain or osmotic stress, which is transferred to the nucleus. This review describes the current state of knowledge relating the nuclear architecture and the transfer of mechanical forces to the nucleus mediated by the cytoskeleton, the nucleoskeleton, and the LINC (linker of the nucleoskeleton and cytoskeleton) complex. Moreover, remodeling of the nucleus induces alterations in nuclear stiffness, which may be associated with cell differentiation. These phenomena are discussed in relation to the potential influence of nuclear architecture-mediated mechanoregulation of transcription and cell fate.
---
paper_title: A Triphasic Theory for the Swelling and Deformation Behaviors of Articular Cartilage
paper_content:
Swelling of articular cartilage depends on its fixed charge density and distribution, the stiffness of its collagen-proteoglycan matrix, and the ion concentrations in the interstitium. A theory for a tertiary mixture has been developed, including the two fluid-solid phases (biphasic), and an ion phase, representing cation and anion of a single salt, to describe the deformation and stress fields for cartilage under chemical and/or mechanical loads. This triphasic theory combines the physico-chemical theory for ionic and polyionic (proteoglycan) solutions with the biphasic theory for cartilage. The present model assumes the fixed charge groups to remain unchanged, and that the counter-ions are the cations of a single salt of the bathing solution. The momentum equations for the neutral salt and for the interstitial water are expressed in terms of their chemical potentials, whose gradients are the driving forces for their movements. These chemical potentials depend on fluid pressure p, salt concentration c, solid matrix dilatation e and fixed charge density c^F. For a uni-uni valent salt such as NaCl, they are given by μ^i = μ^i_0 + (RT/M_i) ln[γ_±² c(c + c^F)] and μ^w = μ^w_0 + [p − RTφ(2c + c^F) + B_w e]/ρ^w_T, where R, T, M_i, γ_±, φ, ρ^w_T and B_w are the universal gas constant, absolute temperature, molecular weight, mean activity coefficient of the salt, osmotic coefficient, true density of water, and a coupling material coefficient, respectively. For infinitesimal strains and material isotropy, the stress-strain relationship for the total mixture stress is σ = −p I − T_c I + λ_s (tr E) I + 2 μ_s E, where E is the strain tensor and (λ_s, μ_s) are the Lamé constants of the elastic solid matrix. The chemical-expansion stress (−T_c) derives from the charge-to-charge repulsive forces within the solid matrix. This theory can be applied to both equilibrium and non-equilibrium problems. For equilibrium free swelling problems, the theory yields the well-known Donnan equilibrium ion distribution and osmotic pressure equations, along with an analytical expression for the "pre-stress" in the solid matrix. For the confined-compression swelling problem, it predicts that the applied compressive stress is shared by three load support mechanisms: 1) the Donnan osmotic pressure; 2) the chemical-expansion stress; and 3) the solid matrix elastic stress. Numerical calculations have been made, based on a set of equilibrium free-swelling and confined-compression data, to assess the relative contribution of each mechanism to load support. Our results show that all three mechanisms are important in determining the overall compressive stiffness of cartilage.
---
paper_title: Biomechanical properties of single chondrocytes and chondrons determined by micromanipulation and finite-element modelling
paper_content:
A chondrocyte and its surrounding pericellular matrix (PCM) are defined as a chondron. Single chondrocytes and chondrons isolated from bovine articular cartilage were compressed by micromanipulation between two parallel surfaces in order to investigate their biomechanical properties and to discover the mechanical significance of the PCM. The force imposed on the cells was measured directly during compression to various deformations and then holding. When the nominal strain at the end of compression was 50 per cent, force relaxation showed that the cells were viscoelastic, but this viscoelasticity was generally insignificant when the nominal strain was 30 per cent or lower. The viscoelastic behaviour might be due to the mechanical response of the cell cytoskeleton and/or nucleus at higher deformations. A finite-element analysis was applied to simulate the experimental force-displacement/time data and to obtain mechanical property parameters of the chondrocytes and chondrons. Because of the large strains in the cells, a nonlinear elastic model was used for simulations of compression to 30 per cent nominal strain and a nonlinear viscoelastic model for 50 per cent. The elastic model yielded a Young's modulus of 14 ± 1 kPa (mean ± s.e.) for chondrocytes and 19 ± 2 kPa for chondrons, respectively. The viscoelastic model generated an instantaneous elastic modulus of 21 ± 3 and 27 ± 4 kPa, a long-term modulus of 9.3 ± 0.8 and 12 ± 1 kPa and an apparent viscosity of 2.8 ± 0.5 and 3.4 ± 0.6 kPa s for chondrocytes and chondrons, respectively. It was concluded that chondrons were generally stiffer and showed less viscoelastic behaviour than chondrocytes, and that the PCM significantly influenced the mechanical properties of the cells.
---
paper_title: On the factors affecting the critical indenter penetration for measurement of coating hardness
paper_content:
Abstract The nanoindentation test is the only viable approach to assess the properties of very thin coatings (
---
paper_title: A Mixture Theory for Charged-Hydrated Soft Tissues Containing Multi-electrolytes: Passive Transport and Swelling Behaviors
paper_content:
A new mixture theory was developed to model the mechano-electrochemical behaviors of charged-hydrated soft tissues containing multi-electrolytes. The mixture is composed of n + 2 constituents (1 charged solid phase, 1 noncharged solvent phase, and n ion species). Results from this theory show that three types of force are involved in the transport of ions and solvent through such materials: (1) a mechanochemical force (including hydraulic and osmotic pressures); (2) an electrochemical force; and (3) an electrical force. Our results also show that three types of material coefficients are required to characterize the transport rates of these ions and solvent: (1) a hydraulic permeability; (2) mechano-electrochemical coupling coefficients; and (3) an ionic conductance matrix. Specifically, we derived the fundamental governing relationships between these forces and material coefficients to describe such mechano-electrochemical transduction effects as streaming potential, streaming current, diffusion (membrane) potential, electro-osmosis, and anomalous (negative) osmosis. As an example, we showed that the well-known formula for the resting cell membrane potential (Hodgkin and Huxley, 1952a, b) could be derived using our new n + 2 mixture model (a generalized triphasic theory). In general, the n + 2 mixture theory is consistent with and subsumes all previous theories pertaining to specific aspects of charged-hydrated tissues. In addition, our results provided the stress, strain, and fluid velocity fields within a tissue of finite thickness during a one-dimensional steady diffusion process. Numerical results were provided for the exchange of Na+ and Ca++ through the tissue. These numerical results support our hypothesis that tissue fixed charge density (c^F) plays a significant role in modulating kinetics of ions and solvent transport through charged-hydrated soft tissues.
---
paper_title: Silicone rubber substrata: a new wrinkle in the study of cell locomotion
paper_content:
When tissue cells are cultured on very thin sheets of cross-linked silicone fluid, the traction forces the cells exert are made visible as elastic distortion and wrinkling of this substratum. Around explants this pattern of wrinkling closely resembles the "center effects" long observed in plasma clots and traditionally attributed to dehydration shrinkage.
---
|
Title: Nanobiomechanics of living cells: a review
Section 1: Introduction
Description 1: Provide an overview of the significance of mechanical properties of living cells, various testing methods, and the importance of understanding nanoindentation in cell interactions.
Section 2: Experimental aspects
Description 2: Discuss the details of experimental setups used in nanoindentation, including nanoindenter apparatus and atomic force microscope (AFM).
Section 3: Choice of appropriate atomic force microscope tips
Description 3: Detail the different AFM tip geometries and their respective advantages and disadvantages in nanoindentation.
Section 4: Flat punch
Description 4: Explain the specific use and characteristics of flat punch indenters for probing cell mechanics.
Section 5: Spherical tip
Description 5: Discuss the properties and applications of spherical tips in nanoindentation of cells.
Section 6: Pyramid tip
Description 6: Describe the use of pyramid tips for probing the fine features of cytoskeleton and associated drawbacks.
Section 7: Conical tip
Description 7: Provide details on conical tips, their use, and how they compare to pyramid tips.
Section 8: Extended atomic force microscope testing rigs
Description 8: Introduce the concept of cytocompression and its differentiation from cytoindentation.
Section 9: Mechanical modelling
Description 9: Review different mechanical models used to estimate cell mechanics, including structure-based and continuum models.
Section 10: Percolation models
Description 10: Explain how percolation models describe cell structure and mechanics.
Section 11: Summary of structure-based models
Description 11: Summarize the tensegrity and percolation models, emphasizing their complementary nature.
Section 12: Continuum models
Description 12: Discuss the quantifiable mechanical properties of cells using continuum models and their relevance.
Section 13: Elastic model
Description 13: Examine the use of elastic models for determining the mechanical properties of cells under equilibrium conditions.
Section 14: Poroelastic model
Description 14: Describe the poroelastic model and its application in explaining the mechanical behavior of cells.
Section 15: Spring-dashpot viscoelastic models
Description 15: Discuss viscoelasticity models using spring-dashpot elements and their application in cell mechanics.
Section 16: Power-law rheology
Description 16: Introduce power-law rheology for describing the mechanics of certain cell types.
Section 17: Nanoindentation models
Description 17: Review nanoindentation models suitable for different AFM tips, including elastic, poroelastic, and viscoelastic models.
Section 18: Cell mechanics determined by different models
Description 18: Compare the mechanical properties of cells as determined by various models.
Section 19: Cell mechanics determined by different indenters
Description 19: Discuss the impact of indenter tip geometry on the measured mechanical properties of cells.
Section 20: Cell morphology on cell mechanics
Description 20: Evaluate how cell morphology affects cell mechanics and the importance of substrate influence.
Section 21: Cell-tip adhesion
Description 21: Address the issue of tip-cell adhesion during nanoindentation and the models considering adhesion forces.
Section 22: Inverse finite-element analysis
Description 22: Discuss the use of three-dimensional finite-element analysis to determine cell mechanical properties.
Section 23: Summary of strategy of selecting tip geometry and mechanical models
Description 23: Provide guidelines for selecting appropriate tip geometries and mechanical models based on cell morphology and experimental conditions.
Section 24: Conclusion and perspectives
Description 24: Summarize the importance of nanobiomechanics in understanding cell processes, and provide an outlook on future directions in mechanical modelling and experimental protocols.
|
On-Line and Off-Line Handwriting Recognition: A Comprehensive Survey
| 20 |
---
|
Title: On-Line and Off-Line Handwriting Recognition: A Comprehensive Survey
Section 1: INTRODUCTION
Description 1: Introduce the nature of handwriting and discuss its historical significance and persistence in the digital age.
Section 2: Survival of Handwriting
Description 2: Explain the reasons why handwriting persists despite technological advancements and its evolving role.
Section 3: Recognition, Interpretation, and Identification
Description 3: Describe the different types of handwriting analysis, including recognition, interpretation, and identification of handwriting samples.
Section 4: Handwriting Input
Description 4: Discuss the methods of converting handwritten data into a digital format and the distinctions between off-line and on-line handwriting.
Section 5: The State of the Art
Description 5: Review the current advanced systems of handwriting recognition and their applications in various domains, including the latest research efforts.
Section 6: HANDWRITING GENERATION AND PERCEPTION
Description 6: Examine the psychological and physiological aspects of handwriting generation and perception, and the complexity of variability in handwriting.
Section 7: ON-LINE HANDWRITING RECOGNITION
Description 7: Focus on techniques and applications of on-line handwriting recognition, including pen-based computers, signature verification, and developmental tools.
Section 8: Pen-Based Computers
Description 8: Discuss the development of pen-based computers and the challenges and advancements in their handwriting recognition capabilities.
Section 9: Signature Verifiers
Description 9: Explain the process and challenges of signature verification systems, including the balance of Type I and Type II errors.
Section 10: Developmental Tools
Description 10: Highlight tools and applications for teaching handwriting and aiding in motor control rehabilitation, with a focus on children and disabled individuals.
Section 11: OFF-LINE HANDWRITING RECOGNITION
Description 11: Describe the tasks and techniques in off-line handwriting recognition, including character and word recognition, and document analysis.
Section 12: Preprocessing
Description 12: Outline the preprocessing steps necessary for analyzing scanned documents, such as thresholding, noise removal, and segmentation into lines and characters.
Section 13: Character Recognition
Description 13: Discuss methods used for recognizing individual characters in off-line handwriting, including various pattern recognition algorithms.
Section 14: Word Recognition
Description 14: Explain the approaches to word recognition, including both analytical and holistic methods, and their application in specific domains like postal addresses and bank checks.
Section 15: Application of Off-Line Handwriting
Description 15: Review significant applications of off-line handwriting recognition, particularly in reading postal addresses and bank checks.
Section 16: Handwritten Address Interpretation
Description 16: Focus on the interpretation of handwritten addresses and the systems developed for mail delivery.
Section 17: Bank Check Recognition
Description 17: Describe systems for recognizing the amounts on bank checks and their development over recent years.
Section 18: Signature Verification
Description 18: Explain the off-line signature verification process and the challenges faced without time information.
Section 19: Writer Identification
Description 19: Discuss the features and techniques used for identifying the author of a handwritten sample.
Section 20: Language Models
Description 20: Discuss the importance of language models in handwriting recognition and their role in improving accuracy and reducing errors.
|
A Survey on Complexity Results for Non-monotonic Logics
| 12 |
---
paper_title: Circumscription - A Form of Non-Monotonic Reasoning
paper_content:
Humans and intelligent computer programs must often jump to the conclusion that the objects they can determine to have certain properties or relations are the only objects that do. Circumscription formalizes such conjectural reasoning.
---
paper_title: Propositional circumscription and extended closed-world reasoning are ΠP2-complete
paper_content:
Abstract Circumscription and the closed-world assumption with its variants are well-known nonmonotonic techniques for reasoning with incomplete knowledge. Their complexity in the propositional case has been studied in detail for fragments of propositional logic. One open problem is whether the deduction problem for arbitrary propositional theories under the extended closed-world assumption or under circumscription is Π^P_2-complete, i.e., complete for a class of the second level of the polynomial hierarchy. We answer this question by proving these problems Π^P_2-complete, and we show how this result applies to other variants of closed-world reasoning.
---
paper_title: Computing Circumscription
paper_content:
Circumscription is a transformation of predicate formulas proposed by John McCarthy for the purpose of formalizing non-monotonic aspects of commonsense reasoning. Circumscription is difficult to implement because its definition involves a second-order quantifier. This paper presents metamathematical results that allow us in some cases to replace circumscription by an equivalent first-order formula.
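The second-order quantifier referred to in the abstract appears already in the basic definition of circumscribing a predicate P in a sentence A(P) (with no varied predicates), where p < P abbreviates ∀x(p(x) → P(x)) ∧ ¬∀x(P(x) → p(x)):
```latex
\mathrm{Circ}[A;P] \;\equiv\; A(P) \;\wedge\; \neg\exists p\,\bigl(A(p) \wedge p < P\bigr)
```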
---
paper_title: APPLICATIONS OF CIRCUMSCRIPTION TO FORMALIZING COMMON SENSE KNOWLEDGE
paper_content:
We present a new and more symmetric version of the circumscription method of nonmonotonic reasoning first described in (McCarthy 1980) and some applications to formalizing common sense knowledge. The applications in this paper are mostly based on minimizing the abnormality of different aspects of various entities. Included are nonmonotonic treatments of is-a hierarchies, the unique names hypothesis, and the frame problem. The new circumscription may be called formula circumscription to distinguish it from the previously defined domain circumscription and predicate circumscription. A still more general formalism called prioritized circumscription is briefly explored.
---
paper_title: Computational Complexity of Hypothesis Assembly
paper_content:
The problem of finding a best explanation of a set of data has been a topic of much interest in Artificial Intelligence. In this paper we present an approach to this problem by hypothesis assembly. We present this approach formally so that we can examine the time complexity and correctness of the algorithms. We then examine a system implemented using this approach, which performs red blood antibody identification. We use this domain to examine the ramifications of the assumptions of the formal model in a real world situation. We also briefly compare this approach to other assembly approaches in terms of time complexity and reliance on assumptions. I. Introduction. The problem of abductive reasoning (as proposed by the philosopher C.S. Peirce) has been a topic of much recent interest in Artificial Intelligence (Miller 1982, Reggia 1983, Charniak 1985). The general task faced by an abductive reasoning system is to find the best explanation of a set of data or observations, i.e., the best way to account for a set of data. Most of the work in Artificial Intelligence in this area has focused on a specific kind of abduction, which we call hypothesis assembly. The hypothesis assembly task assumes as given a set of hypotheses with some knowledge about what sorts of data each can account for, and finds the subset of these hypotheses that best accounts for the problem data. At the Ohio State Laboratory for Artificial Intelligence, Josephson et al. (1985) have been developing an approach for abduction based upon hypothesis assembly. In this paper we will begin by presenting a mathematical idealization of this approach. From this we will analyze the complexity and correctness of the algorithm. Then we will examine how well this idealization matches with real world concepts of abduction. In particular, we will examine a system called RED, based upon this approach, which performs antibody identification in the domain of red blood cell typing, as described in (Josephson 1984), (Josephson 1985) and (Smith 1985), and show how the general mathematical results respond to questions that have been raised (Mostow 1985) about its complexity. (This work has been supported by the National Library of Medicine under grant LM-04298, the National Science Foundation through a Graduate Fellowship, and the Defense Advanced Research Projects Agency, RADC contract F30602-85-C-0010. Computer facilities were enhanced through gifts from Xerox Corporation.) II. Mathematical Idealization of Abduction: Definitions. In order to motivate the following definitions, we will examine briefly the domain of the current implementation of RED, the domain of blood bank antibody analysis. The primary data consists of results of several lab tests on blood samples. The blood bank technologist knows how antibodies can account for various reactions.
The lab tests have the property that if antibody A accounts for some reaction r, and antibody B accounts for reaction q, then the presence of both antibodies A and B accounts for both reactions r and q. This property of a domain will be referred to as independence of hypotheses. More formally, we define a domain for hypothesis assembly as the triple (H, M, e), where H is a finite set of hypotheses, M is a finite set of manifestations, and e is a map from subsets of H to subsets of M. e(S) is interpreted as the explanatory power of a set of hypotheses, and is the set of manifestations for which those hypotheses can account. An assembly problem is specified by a subset M0 of M; M0 is interpreted as the set of observed manifestations. In these terms, we have the Independence Assumption: if S and T are subsets of H, then e(S ∪ T) = e(S) ∪ e(T). Although many domains satisfy the independence assumption, we wish to strengthen our result by replacing ... (In what follows, we will use the notation e(S) where we should, strictly speaking, write the restriction of e(S) to the observed manifestations.)
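Under the independence assumption the explanatory power of a set of hypotheses is just the union of what its members cover, which makes checking whether a candidate set explains the observations straightforward. The sketch below is an illustrative encoding (e is given on individual hypotheses and lifted to sets), not code or notation from the paper.
```python
def explanatory_power(e, S):
    """Explanatory power of a hypothesis set S under the independence
    assumption: the union of what each individual hypothesis covers.
    Here `e` maps single hypotheses to sets of manifestations -- an
    illustrative encoding, not the paper's notation for e on subsets."""
    return set().union(*(e[h] for h in S)) if S else set()

def is_explanation(e, S, M0):
    """S explains the observations M0 iff it accounts for all of them."""
    return M0 <= explanatory_power(e, S)

# Toy blood-bank style example: two antibodies jointly cover both reactions.
e = {"anti-A": {"r"}, "anti-B": {"q"}}
assert is_explanation(e, {"anti-A", "anti-B"}, {"r", "q"})
assert not is_explanation(e, {"anti-A"}, {"r", "q"})
```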
---
paper_title: Abductive and default reasoning: a computational core
paper_content:
Of all the possible ways of computing abductive explanations, the ATMS procedure is one of the most popular. While this procedure is known to run in exponential time in the worst case, the proof actually depends on the existence of queries with an exponential number of answers. But how much of the difficulty stems from having to return these large sets of explanations? Here we explore abduction tasks similar to that of the ATMS, but which return relatively small answers. The main result is that although it is possible to generate some non-trivial explanations quickly, deciding if there is an explanation containing a given hypothesis is NP-hard, as is the task of generating even one explanation expressed in terms of a given set of assumption letters. Thus, the method of simply listing all explanations, as employed by the ATMS, probably cannot be improved upon. An interesting result of our analysis is the discovery of a subtask that is at the core of generating explanations, and is also at the core of generating extensions in Reiter's default logic. Moreover, it is this subtask that accounts for the computational difficulty of both forms of reasoning. This establishes for the first time a strong connection between computing abductive explanations and computing extensions in default logic.
---
paper_title: The Computational Complexity of Abduction
paper_content:
Abstract The problem of abduction can be characterized as finding the best explanation of a set of data. In this paper we focus on one type of abduction in which the best explanation is the most plausible combination of hypotheses that explains all the data. We then present several computational complexity results demonstrating that this type of abduction is intractable (NP-hard) in general. In particular, choosing between incompatible hypotheses, reasoning about cancellation effects among hypotheses, and satisfying the maximum plausibility requirement are major factors leading to intractability. We also identify a tractable, but restricted, class of abduction problems.
---
paper_title: The complexity of closed world reasoning and circumscription
paper_content:
Closed world reasoning is a common nonmonotonic technique that allows for dealing with negative information in knowledge and data bases. We present a detailed analysis of the computational complexity of the different forms of closed world reasoning for various fragments of propositional logic. The analysis allows us to draw a complete picture of the tractability/ intractability frontier for such a form of nonmonotonic reasoning. We also discuss how to use our results in order to characterize the computational complexity of other problems related to nonmonotonic inheritance, diagnosis, and default reasoning.
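The baseline (Reiter-style) closed-world assumption that these complexity results refine completes a theory T with the negations of all underivable atoms; its variants (GCWA, ECWA and so on) differ in which negative facts may be added.
```latex
\mathrm{CWA}(T) \;=\; T \;\cup\; \bigl\{\neg p \;\bigm|\; p \text{ is an atom and } T \not\models p\bigr\}
```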
---
paper_title: Relating Default Logic and Circumscription
paper_content:
Default logic and the various forms of circumscription were developed to deal with similar problems. In this paper, we consider what is known about the relationships between the two approaches and present several new results extending this knowledge. We show that there are interesting cases in which the two formalisms do not correspond, as well as cases where default logic subsumes circumscription. We also consider positive and negative results on translating between defaults and circumscription, and develop a context in which they can be evaluated.
---
paper_title: What does a conditional knowledge base entail?
paper_content:
This paper presents a logical approach to nonmonotonic reasoning based on the notion of a nonmonotonic consequence relation. A conditional knowledge base, consisting of a set of conditional assertions of the type "if ... then ...", represents the explicit defeasible knowledge an agent has about the way the world generally behaves. We look for a plausible definition of the set of all conditional assertions entailed by a conditional knowledge base. In a previous paper, S. Kraus and the authors defined and studied "preferential" consequence relations. They noticed that not all preferential relations could be considered as reasonable inference procedures. This paper studies a more restricted class of consequence relations, "rational" relations. It is argued that any reasonable nonmonotonic inference procedure should define a rational relation. It is shown that the rational relations are exactly those that may be represented by a "ranked" preferential model, or by a (non-standard) probabilistic model. The rational closure of a conditional knowledge base is defined and shown to provide an attractive answer to the question of the title. Global properties of this closure operation are proved: it is a cumulative operation. It is also computationally tractable. This paper assumes the underlying language is propositional.
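The postulate separating rational from merely preferential consequence relations is rational monotonicity, stated here in the usual |~ notation: if α normally implies γ and α does not normally imply ¬β, then α ∧ β still normally implies γ.
```latex
\text{(Rational Monotonicity)}\qquad
\alpha \mathrel{|\!\sim} \gamma,\quad \neg\bigl(\alpha \mathrel{|\!\sim} \neg\beta\bigr)
\;\;\Longrightarrow\;\;
\alpha \wedge \beta \mathrel{|\!\sim} \gamma
```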
---
|
Title: A Survey on Complexity Results for Non-monotonic Logics
Section 1: Introduction
Description 1: This section provides an overview of non-monotonic logics and the motivation behind using default assumptions for more compact knowledge representation. It also outlines the paper's goals and mentions related work in the literature.
Section 2: Complexity Classes
Description 2: This section offers a brief overview of the complexity concepts and classes used throughout the paper, such as P, NP, co-NP, p_2, and the polynomial hierarchy.
Section 3: Default Logic
Description 3: This section discusses default logic, the main computational problems associated with it, and significant complexity results. It also covers various restrictions that lead to polynomially tractable cases.
Section 4: Modal Non-Monotonic Logics
Description 4: This section reviews complexity results concerning non-monotonic versions of modal logics, particularly focusing on autoepistemic logic and its variants.
Section 5: Negation in Logic Programming
Description 5: This section surveys the complexity results for various semantics for negation in logic programming, including stable model, well-founded, supported model, and other semantics.
Section 6: Inference
Description 6: This section addresses the complexity of inference for circumscription and closed-world reasoning and discusses classes of formulae whose circumscription is first-order.
Section 7: Satisfiability
Description 7: This section examines the complexity of determining the satisfiability of circumscription and extended closed world scenarios.
Section 8: Model Checking
Description 8: This section focuses on the problem of model checking in circumscription and proves the complexity results associated with different types of formulae.
Section 9: Model Finding
Description 9: This section discusses the complexity of finding satisfying assignments (models) for circumscriptive theories, with a particular focus on propositional logic.
Section 10: Abduction
Description 10: This section delves into the complexity of logic-based abduction, outlining various decision tasks and preference criteria that affect the complexity.
Section 11: Polynomial Reductions between NMR Problems
Description 11: This section explores polynomial transformations between different non-monotonic reasoning problems and explains their significance for complexity analysis.
Section 12: Conclusions
Description 12: This section summarizes the surveyed complexity results and discusses the implications for designing algorithms for non-monotonic reasoning, mentioning areas not covered in the paper.
|
A Unifying Survey of Reinforced, Sensitive and Stigmergic Agent-Based Approaches for E-GTSP
| 14 |
---
paper_title: A Random-Key Genetic Algorithm for the Generalized Traveling Salesman Problem
paper_content:
The generalized traveling salesman problem is a variation of the well-known traveling salesman problem in which the set of nodes is divided into clusters; the objective is to find a minimum-cost tour passing through one node from each cluster. We present an effective heuristic for this problem. The method combines a genetic algorithm (GA) with a local tour improvement heuristic. Solutions are encoded using random keys, which circumvent the feasibility problems encountered when using traditional GA encodings. On a set of 41 standard test problems with symmetric distances and up to 442 nodes, the heuristic found solutions that were optimal in most cases and were within 1% of optimality in all but the largest problems, with computation times generally within 10 seconds. The heuristic is competitive with other heuristics published to date in both solution quality and computation time.
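As a hedged illustration of the random-key encoding described above (the function and data layout below are assumptions for this sketch, not code from the cited paper): one key per cluster, whose integer part selects the node inside the cluster and whose fractional part orders the clusters, so every chromosome decodes to a feasible GTSP tour.

    import random

    def decode_random_keys(keys, clusters):
        # keys[i] lies in [0, len(clusters[i])): the integer part picks a node
        # inside cluster i, and the clusters are visited in ascending order of
        # the fractional parts, so every chromosome decodes to a feasible tour.
        chosen = [clusters[i][min(int(k), len(clusters[i]) - 1)]
                  for i, k in enumerate(keys)]
        order = sorted(range(len(clusters)), key=lambda i: keys[i] % 1.0)
        return [chosen[i] for i in order]

    clusters = [[0, 1], [2, 3, 4], [5]]                   # toy clusters of node ids
    keys = [random.uniform(0, len(c)) for c in clusters]  # one random key per cluster
    print(decode_random_keys(keys, clusters))

Crossover and mutation then act on the keys alone, which is why this encoding avoids the feasibility repair steps needed by traditional GA encodings.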
---
paper_title: Sensitive Ants in Solving the Generalized Vehicle Routing Problem
paper_content:
The idea of sensitivity in ant colony systems has been exploited in hybrid ant-based models with promising results for many combinatorial optimization problems. Heterogeneity is induced in the ant population by endowing individual ants with a certain level of sensitivity to the pheromone trail. The variable pheromone sensitivity within the same population of ants can potentially intensify the search while at the same time inducing diversity for the exploration of the environment. The performance of sensitive ant models is investigated for solving the generalized vehicle routing problem. Numerical results and comparisons are discussed and analysed with a focus on emphasizing any particular aspects and potential benefits related to hybrid ant-based models.
---
paper_title: A Memetic Algorithm for the Generalized Traveling Salesman Problem
paper_content:
The generalized traveling salesman problem (GTSP) is an extension of the well-known traveling salesman problem. In GTSP, we are given a partition of cities into groups and we are required to find a minimum length tour that includes exactly one city from each group. The recent studies on this subject consider different variations of a memetic algorithm approach to the GTSP. The aim of this paper is to present a new memetic algorithm for GTSP with a powerful local search procedure. The experiments show that the proposed algorithm clearly outperforms all of the known heuristics with respect to both solution quality and running time. While the other memetic algorithms were designed only for the symmetric GTSP, our algorithm can solve both symmetric and asymmetric instances.
---
paper_title: Lin-Kernighan Heuristic Adaptations for the Generalized Traveling Salesman Problem
paper_content:
The Lin-Kernighan heuristic is known to be one of the most successful heuristics for the Traveling Salesman Problem (TSP). It has also proven its efficiency in application to some other problems. In this paper, we discuss possible adaptations of TSP heuristics for the generalized traveling salesman problem (GTSP) and focus on the case of the Lin-Kernighan algorithm. At first, we provide an easy-to-understand description of the original Lin-Kernighan heuristic. Then we propose several adaptations, both trivial and complicated. Finally, we conduct a fair competition between all the variations of the Lin-Kernighan adaptation and some other GTSP heuristics. It appears that our adaptation of the Lin-Kernighan algorithm for the GTSP reproduces the success of the original heuristic. Different variations of our adaptation outperform all other heuristics in a wide range of trade-offs between solution quality and running time, making Lin-Kernighan the state-of-the-art GTSP local search.
---
paper_title: An Efficient Hybrid Ant Colony System for the Generalized Traveling Salesman Problem
paper_content:
The Generalized Traveling Salesman Problem (GTSP) is an extension of the well-known Traveling Salesman Problem (TSP), where the node set is partitioned into clusters, and the objective is to find the shortest cycle visiting each cluster exactly once. In this paper, we present a new hybrid Ant Colony System (ACS) algorithm for the symmetric GTSP. The proposed algorithm is a modification of a simple ACS for the TSP improved by an efficient GTSP-specific local search procedure. Our extensive computational experiments show that the use of the local search procedure dramatically improves the performance of the ACS algorithm, making it one of the most successful GTSP metaheuristics to date.
---
paper_title: Solving the Generalized Vehicle Routing Problem with an ACS‐based Algorithm
paper_content:
Ant colony system is a metaheuristic algorithm inspired by the behavior of real ants and was proposed by Dorigo et al. as a method for solving hard combinatorial optimization problems. In this paper we show its successful application to solving a network design problem: Generalized Vehicle Routing Problem. The Generalized Vehicle Routing Problem (GVRP) is the problem of designing optimal delivery or collection routes, subject to capacity restrictions, from a given depot to a number of predefined, mutually exclusive and exhaustive clusters. Computational results for several benchmark problems are reported.
---
paper_title: The Generalized Traveling Salesman and Orienteering Problems
paper_content:
Routing and Scheduling problems often require the determination of optimal sequences subject to a given set of constraints. The best known problem of this type is the classical Traveling Salesman Problem (TSP), calling for a minimum cost Hamiltonian cycle on a given graph.
---
paper_title: An efficient composite heuristic for the symmetric generalized traveling salesman problem
paper_content:
The main purpose of this paper is to introduce a new composite heuristic for solving the generalized traveling salesman problem. The proposed heuristic is composed of three phases: the construction of an initial partial solution, the insertion of a node from each non-visited node-subset, and a solution improvement phase. We show that the heuristic performs very well on 36 TSPLIB problems which have been solved to optimality by other researchers. We also propose some simple heuristics that can be used as basic blocks to construct more efficient composite heuristics.
---
paper_title: Extending the Horizons: Advances in Computing, Optimization, and Decision Technologies
paper_content:
This book represents the results of cross-fertilization between OR/MS and CS/AI. It is this interface of OR/CS that makes possible advances that could not have been achieved in isolation. Taken collectively, these articles are indicative of the state-of-the-art in the interface between OR/MS and CS/AI and of the high caliber of research being conducted by members of the INFORMS Computing Society.
---
paper_title: Efficient Local Search Algorithms for Known and New Neighborhoods for the Generalized Traveling Salesman Problem
paper_content:
The Generalized Traveling Salesman Problem (GTSP) is a well-known combinatorial optimization problem with a host of applications. It is an extension of the Traveling Salesman Problem (TSP) where the set of cities is partitioned into so-called clusters, and the salesman has to visit every cluster exactly once. While the GTSP is a very important combinatorial optimization problem and is well studied in many aspects, the local search algorithms used in the literature are mostly basic adaptations of simple TSP heuristics. Hence, a thorough and deep research of the neighborhoods and local search algorithms specific to the GTSP is required. We formalize the procedure of adaptation of a TSP neighborhood for the GTSP and classify all other existing and some new GTSP neighborhoods. For every neighborhood, we provide efficient exploration algorithms that are often significantly faster than the ones known from the literature. Finally, we compare different local search implementations empirically.
---
paper_title: A Multistart Heuristic for the Equality Generalized Traveling Salesman Problem
paper_content:
We study the equality generalized traveling salesman problem (E-GTSP), which is a variant of the well-known traveling salesman problem. We are given an undirected graph G = (V, E), with set of vertices V and set of edges E, each with an associated cost. The set of vertices is partitioned into clusters. E-GTSP is to find an elementary cycle visiting exactly one vertex for each cluster and minimizing the sum of the costs of the traveled edges. We propose a multistart heuristic, which iteratively starts with a randomly chosen set of vertices and applies a decomposition approach combined with improvement procedures. The decomposition approach considers a first phase to determine the visiting order of the clusters and a second phase to find the corresponding minimum cost cycle. We show the effectiveness of the proposed approach on benchmark instances from the literature. On small instances, the heuristic always identifies the optimal solution rapidly and outperforms all known heuristics; on larger instances, the heuristic always improves, in comparable computing times, the best known solution values obtained by the genetic algorithm recently proposed by Silberholz and Golden.
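The second phase of the decomposition (finding the cheapest cycle once the cluster visiting order is fixed) is commonly computed as a shortest path in a layered graph; the sketch below is a hedged illustration of that standard step under assumed data structures (cost[u][v] defined for every node pair, clusters given as lists of node ids), not code from the cited paper.

    def best_tour_for_cluster_order(cost, clusters, order):
        # Given a fixed visiting order of the clusters, choose one node per
        # cluster so that the closed tour is cheapest: a shortest-path style
        # dynamic program over the layers defined by the cluster order.
        best_tour, best_val = None, float("inf")
        for start in clusters[order[0]]:      # try each node of the first cluster
            paths = {start: ([start], 0.0)}   # node -> (partial path, its cost)
            for k in order[1:]:
                nxt = {}
                for v in clusters[k]:
                    u = min(paths, key=lambda u: paths[u][1] + cost[u][v])
                    nxt[v] = (paths[u][0] + [v], paths[u][1] + cost[u][v])
                paths = nxt
            for v, (path, val) in paths.items():   # close the cycle
                if val + cost[v][start] < best_val:
                    best_tour, best_val = path, val + cost[v][start]
        return best_tour, best_val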
---
paper_title: Generalized Travelling Salesman Problem Through n Sets Of Nodes: An Integer Programming Approach
paper_content:
This paper deals with a generalized version of the Travelling Salesman Problem, which consists of finding the shortest Hamiltonian cycle through n sets of nodes. The problem is formulated as an integer linear program including degree constraints, subtour elimination constraints, and integrality constraints. A branch and bound algorithm is used for the solution of the problem; the main feature of the algorithm lies in the relaxation of the subtour elimination constraints. Computational results for Euclidean and non-Euclidean problems are reported.
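For concreteness, a hedged LaTeX sketch of a symmetric integer program of the kind the abstract refers to, with binary node variables y_i (node i is visited), binary edge variables x_{ij}, clusters V_1, ..., V_m, and \delta(S) denoting the edges with exactly one endpoint in S; the exact constraint families, in particular the subtour elimination constraints that the branch and bound relaxes, vary between formulations in the literature:

    \min \sum_{(i,j) \in E} c_{ij} x_{ij}
    \quad \text{s.t.} \quad
    \sum_{i \in V_k} y_i = 1 \;\; (k = 1, \dots, m), \qquad
    \sum_{j : (i,j) \in E} x_{ij} = 2 y_i \;\; (i \in V),

    \sum_{(i,j) \in \delta(S)} x_{ij} \ge 2 (y_u + y_v - 1)
    \quad (S \subset V, \; u \in S, \; v \in V \setminus S), \qquad
    x_{ij}, y_i \in \{0, 1\}.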
---
paper_title: MAX-MIN Ant System and local search for the traveling salesman problem
paper_content:
Ant System is a general purpose algorithm inspired by the study of the behavior of ant colonies. It is based on a cooperative search paradigm that is applicable to the solution of combinatorial optimization problems. We introduce MAX-MIN Ant System, an improved version of basic Ant System, and report our results for its application to symmetric and asymmetric instances of the well known traveling salesman problem. We show how MAX-MIN Ant System can be significantly improved, extending it with local search heuristics. Our results clearly show that MAX-MIN Ant System has the property of effectively guiding the local search heuristics towards promising regions of the search space by generating good initial tours.
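A minimal sketch of the trail-limit mechanism that distinguishes MAX-MIN Ant System: only the best tour deposits pheromone after each iteration, and all trails are kept inside [tau_min, tau_max]. Parameter names and values below are illustrative assumptions, not taken from the paper.

    def mmas_pheromone_update(tau, best_tour, best_len,
                              rho=0.02, tau_min=0.01, tau_max=5.0):
        # Evaporate all trails, reinforce only the edges of the best tour,
        # then clamp every trail into [tau_min, tau_max] to avoid stagnation.
        n = len(tau)
        for i in range(n):
            for j in range(n):
                tau[i][j] *= (1.0 - rho)
        for a, b in zip(best_tour, best_tour[1:] + best_tour[:1]):
            tau[a][b] += 1.0 / best_len
            tau[b][a] += 1.0 / best_len
        for i in range(n):
            for j in range(n):
                tau[i][j] = min(max(tau[i][j], tau_min), tau_max)
        return tau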
---
paper_title: The complexity of agent design problems: Determinism and history dependence
paper_content:
The agent design problem is as follows: given a specification of an environment, together with a specification of a task, is it possible to construct an agent that can be guaranteed to successfully accomplish the task in the environment? In this article, we study the computational complexity of the agent design problem for tasks that are of the form "achieve this state of affairs" or "maintain this state of affairs." We consider three general formulations of these problems (in both non-deterministic and deterministic environments) that differ in the nature of what is viewed as an "acceptable" solution: in the least restrictive formulation, no limit is placed on the number of actions an agent is allowed to perform in attempting to meet the requirements of its specified task. We show that the resulting decision problems are intractable, in the sense that these are non-recursive (but recursively enumerable) for achievement tasks, and non-recursively enumerable for maintenance tasks. In the second formulation, the decision problem addresses the existence of agents that have satisfied their specified task within some given number of actions. Even in this more restrictive setting the resulting decision problems are either pspace-complete or np-complete. Our final formulation requires the environment to be history independent and bounded. In these cases polynomial time algorithms exist: for deterministic environments the decision problems are nl-complete; in non-deterministic environments, p-complete.
---
paper_title: A Sensitive Metaheuristic for Solving a Large Optimization Problem
paper_content:
A metaheuristic for solving complex problems is proposed. The introduced Sensitive Robot Metaheuristic (SRM) is based on the Ant Colony System optimization technique. The new model relies on the reaction of virtual sensitive robots to different stigmergic variables. Each robot is endowed with a particular stigmergic sensitivity level ensuring a good balance between search diversification and intensification. Comparative tests are performed on large-scale NP-hard robotic travel problems. These tests illustrate the effectiveness and robustness of the proposed metaheuristic.
---
paper_title: New Integer Programming Formulations of the Generalized Travelling Salesman Problem
paper_content:
The Generalized Travelling Salesman Problem, denoted by GTSP, is a variant of the classical travelling salesman problem (TSP), in which the nodes of an undirected graph are partitioned into node sets (clusters) and the salesman has to visit exactly one node from every cluster. In this paper we describe six distinct formulations of the GTSP as an integer program. Apart from the standard formulations, all the new formulations that we describe are 'compact' in the sense that the number of constraints and variables is a polynomial function of the number of nodes in the problem. In order to provide compact formulations for the GTSP we used two approaches: the first uses auxiliary flow variables beyond the natural binary edge and node variables, and the second distinguishes between global and local variables. Comparisons of the polytopes corresponding to their linear relaxations are established.
---
paper_title: Efficient Local Search Algorithms for Known and New Neighborhoods for the Generalized Traveling Salesman Problem
paper_content:
The Generalized Traveling Salesman Problem (GTSP) is a well-known combinatorial optimization problem with a host of applications. It is an extension of the Traveling Salesman Problem (TSP) where the set of cities is partitioned into so-called clusters, and the salesman has to visit every cluster exactly once. While the GTSP is a very important combinatorial optimization problem and is well studied in many aspects, the local search algorithms used in the literature are mostly basic adaptations of simple TSP heuristics. Hence, a thorough and deep research of the neighborhoods and local search algorithms specific to the GTSP is required. We formalize the procedure of adaptation of a TSP neighborhood for the GTSP and classify all other existing and some new GTSP neighborhoods. For every neighborhood, we provide efficient exploration algorithms that are often significantly faster than the ones known from the literature. Finally, we compare different local search implementations empirically.
---
paper_title: Improving ant systems using a local updating rule
paper_content:
An algorithm based on the ant colony system for solving the traveling salesman problem is proposed. The new algorithm introduces into the ant colony system an inner loop that updates the pheromone trails. The update increases the pheromone on the trail followed by the ants and therefore generates improved tours.
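For comparison, a hedged sketch of both update styles (parameter names are illustrative): the standard ACS local rule pulls a traversed edge back towards an initial trail level tau0, whereas an inner-loop reinforcement of the kind described above adds pheromone along the tour an ant has just completed.

    def acs_local_update(tau, i, j, tau0=0.1, xi=0.1):
        # Standard ACS local rule: decay the traversed edge (i, j) towards tau0.
        tau[i][j] = (1.0 - xi) * tau[i][j] + xi * tau0
        return tau

    def inner_loop_reinforce(tau, tour, tour_len, w=1.0):
        # Inner-loop style reinforcement of a completed tour (an illustrative
        # reading of the rule described in the abstract, not the exact formula).
        for a, b in zip(tour, tour[1:] + tour[:1]):
            tau[a][b] += w / tour_len
        return tau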
---
paper_title: MAX-MIN Ant System and local search for the traveling salesman problem
paper_content:
Ant System is a general purpose algorithm inspired by the study of the behavior of ant colonies. It is based on a cooperative search paradigm that is applicable to the solution of combinatorial optimization problems. We introduce MAX-MIN Ant System, an improved version of basic Ant System, and report our results for its application to symmetric and asymmetric instances of the well known traveling salesman problem. We show how MAX-MIN Ant System can be significantly improved, extending it with local search heuristics. Our results clearly show that MAX-MIN Ant System has the property of effectively guiding the local search heuristics towards promising regions of the search space by generating good initial tours.
---
paper_title: Heterogeneous sensitive ant model for combinatorial optimization
paper_content:
A new metaheuristic called Sensitive Ant Model (SAM) for solving combinatorial optimization problems is proposed. SAM improves and extends the Ant Colony System approach by enhancing each agent of the model with properties that induce heterogeneity. SAM agents are endowed with different pheromone sensitivity levels. Highly-sensitive agents are essentially influenced in the decision making process by stigmergic information and thus likely to select strong pheromone-marked moves. Search intensification can be therefore sustained. Agents with low sensitivity are biased towards random search inducing diversity for exploration of the environment. A heterogeneous agent model has the potential to cope with complex and/or dynamic search spaces. Sensitive agents (or ants) allow many types of reactions to a changing environment facilitating an efficient balance between exploration and exploitation.
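A minimal sketch of the sensitivity mechanism, assuming each agent carries a pheromone sensitivity level in [0, 1]: with that probability it exploits the usual pheromone-biased choice, otherwise it explores by moving uniformly at random. All names and the exact decision rule are illustrative assumptions, not the model's formulas.

    import random

    def sensitive_next_node(current, candidates, tau, eta, sensitivity,
                            alpha=1.0, beta=2.0):
        # Highly sensitive agents (sensitivity close to 1) mostly follow the
        # stigmergic information tau weighted by the heuristic eta; agents with
        # low sensitivity mostly pick a candidate uniformly at random.
        if random.random() < sensitivity:
            weights = [(tau[current][c] ** alpha) * (eta[current][c] ** beta)
                       for c in candidates]
            return random.choices(candidates, weights=weights, k=1)[0]
        return random.choice(candidates)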
---
paper_title: A Sensitive Metaheuristic for Solving a Large Optimization Problem
paper_content:
A metaheuristic for solving complex problems is proposed. The introduced Sensitive Robot Metaheuristic (SRM) is based on the Ant Colony System optimization technique. The new model relies on the reaction of virtual sensitive robots to different stigmergic variables. Each robot is endowed with a particular stigmergic sensitivity level ensuring a good balance between search diversification and intensification. Comparative tests are performed on large-scale NP-hard robotic travel problems. These tests illustrate the effectiveness and robustness of the proposed metaheuristic.
---
paper_title: A Brief History of Stigmergy
paper_content:
Stigmergy is a class of mechanisms that mediate animal-animal interactions. Its introduction in 1959 by Pierre-Paul Grasse made it possible to explain what had been until then considered paradoxical observations: In an insect society individuals work as if they were alone while their collective activities appear to be coordinated. In this article we describe the history of stigmergy in the context of social insects and discuss the general properties of two distinct stigmergic mechanisms: quantitative stigmergy and qualitative stigmergy.
---
paper_title: Self-Organization in Biological Systems
paper_content:
From the Publisher: "Broad in scope, thorough yet accessible, this book is a self-contained introduction to self-organization and complexity in biology - a field of study at the forefront of life sciences research."
---
paper_title: Intelligent Complex Evolutionary Agent‐Based Systems
paper_content:
In this paper, we investigate the possibility of developing intelligent agent-based complex systems that use evolutionary learning techniques in order to adapt, by reorganizing their structure, to solve problems efficiently. For this investigation a complex multiagent system called EAMS (Evolutionary Adaptive Multiagent System) is proposed, which can learn different patterns of reorganization using an evolutionary learning technique. The study shows that evolutionary techniques can successfully be used to create complex multiagent systems capable of intelligently reorganizing their structure during their life cycle. In practice, the intelligence of a computational system in general, and of an agent-based system in particular, lies in how efficiently and flexibly the system can solve difficult problems.
---
paper_title: The complexity of agent design problems: Determinism and history dependence
paper_content:
The agent design problem is as follows: given a specification of an environment, together with a specification of a task, is it possible to construct an agent that can be guaranteed to successfully accomplish the task in the environment? In this article, we study the computational complexity of the agent design problem for tasks that are of the form "achieve this state of affairs" or "maintain this state of affairs." We consider three general formulations of these problems (in both non-deterministic and deterministic environments) that differ in the nature of what is viewed as an "acceptable" solution: in the least restrictive formulation, no limit is placed on the number of actions an agent is allowed to perform in attempting to meet the requirements of its specified task. We show that the resulting decision problems are intractable, in the sense that these are non-recursive (but recursively enumerable) for achievement tasks, and non-recursively enumerable for maintenance tasks. In the second formulation, the decision problem addresses the existence of agents that have satisfied their specified task within some given number of actions. Even in this more restrictive setting the resulting decision problems are either pspace-complete or np-complete. Our final formulation requires the environment to be history independent and bounded. In these cases polynomial time algorithms exist: for deterministic environments the decision problems are nl-complete; in non-deterministic environments, p-complete.
---
paper_title: A Memetic Algorithm for the Generalized Traveling Salesman Problem
paper_content:
The generalized traveling salesman problem (GTSP) is an extension of the well-known traveling salesman problem. In GTSP, we are given a partition of cities into groups and we are required to find a minimum length tour that includes exactly one city from each group. The recent studies on this subject consider different variations of a memetic algorithm approach to the GTSP. The aim of this paper is to present a new memetic algorithm for GTSP with a powerful local search procedure. The experiments show that the proposed algorithm clearly outperforms all of the known heuristics with respect to both solution quality and running time. While the other memetic algorithms were designed only for the symmetric GTSP, our algorithm can solve both symmetric and asymmetric instances.
---
paper_title: An Efficient Hybrid Ant Colony System for the Generalized Traveling Salesman Problem
paper_content:
The Generalized Traveling Salesman Problem (GTSP) is an extension of the well-known Traveling Salesman Problem (TSP), where the node set is partitioned into clusters, and the objective is to find the shortest cycle visiting each cluster exactly once. In this paper, we present a new hybrid Ant Colony System (ACS) algorithm for the symmetric GTSP. The proposed algorithm is a modification of a simple ACS for the TSP improved by an efficient GTSP-specific local search procedure. Our extensive computational experiments show that the use of the local search procedure dramatically improves the performance of the ACS algorithm, making it one of the most successful GTSP metaheuristics to date.
---
paper_title: Data Mining with an Ant Colony Optimization Algorithm
paper_content:
The paper proposes an algorithm for data mining called Ant-Miner (ant-colony-based data miner). The goal of Ant-Miner is to extract classification rules from data. The algorithm is inspired by both research on the behavior of real ant colonies and some data mining concepts as well as principles. We compare the performance of Ant-Miner with CN2, a well-known data mining algorithm for classification, in six public domain data sets. The results provide evidence that: 1) Ant-Miner is competitive with CN2 with respect to predictive accuracy, and 2) the rule lists discovered by Ant-Miner are considerably simpler (smaller) than those discovered by CN2.
---
paper_title: Lin-Kernighan Heuristic Adaptations for the Generalized Traveling Salesman Problem
paper_content:
The Lin-Kernighan heuristic is known to be one of the most successful heuristics for the Traveling Salesman Problem (TSP). It has also proven its efficiency in application to some other problems. In this paper, we discuss possible adaptations of TSP heuristics for the generalized traveling salesman problem (GTSP) and focus on the case of the Lin-Kernighan algorithm. At first, we provide an easy-to-understand description of the original Lin-Kernighan heuristic. Then we propose several adaptations, both trivial and complicated. Finally, we conduct a fair competition between all the variations of the Lin-Kernighan adaptation and some other GTSP heuristics. It appears that our adaptation of the Lin-Kernighan algorithm for the GTSP reproduces the success of the original heuristic. Different variations of our adaptation outperform all other heuristics in a wide range of trade-offs between solution quality and running time, making Lin-Kernighan the state-of-the-art GTSP local search.
---
paper_title: An Efficient Hybrid Ant Colony System for the Generalized Traveling Salesman Problem
paper_content:
The Generalized Traveling Salesman Problem (GTSP) is an extension of the well-known Traveling Salesman Problem (TSP), where the node set is partitioned into clusters, and the objective is to find the shortest cycle visiting each cluster exactly once. In this paper, we present a new hybrid Ant Colony System (ACS) algorithm for the symmetric GTSP. The proposed algorithm is a modification of a simple ACS for the TSP improved by an efficient GTSP-specific local search procedure. Our extensive computational experiments show that the use of the local search procedure dramatically improves the performance of the ACS algorithm, making it one of the most successful GTSP metaheuristics to date.
---
paper_title: A decision-theoretic framework for comparing heuristics
paper_content:
In this paper we describe a decision-theoretic framework for comparing a number of heuristics in terms of accuracy for a given combinatorial optimization problem. The procedure takes both expected accuracy and downside risk into account and is quite easy to implement.
---
paper_title: Support vector machine learning with an evolutionary engine
paper_content:
The paper presents a novel evolutionary technique constructed as an alternative to the standard support vector machine architecture. The approach adopts the learning strategy of the latter but aims to simplify and generalize its training, by offering a transparent substitute to the initial black-box. Contrary to the canonical technique, the evolutionary approach can at all times explicitly acquire the coefficients of the decision function, without any further constraints. Moreover, in order to converge, the evolutionary method does not require the kernels within nonlinear learning to be positive (semi-)definite. Several potential structures, enhancements and additions are proposed, tested and confirmed using available benchmarking test problems. Computational results show the validity of the new approach in terms of runtime, prediction accuracy and flexibility.
---
|
Title: A Unifying Survey of Reinforced, Sensitive and Stigmergic Agent-Based Approaches for E-GTSP
Section 1: Introduction
Description 1: Introduce the significance and complexity of the Generalized Traveling Salesman Problem (GTSP) and its variant E-GTSP.
Section 2: The GTSP description
Description 2: Provide a comprehensive description of the Generalized Traveling Salesman Problem including definitions and its applications.
Section 3: A mathematical model of GTSP
Description 3: Present the mathematical model for GTSP and discuss its complex nature and time complexity.
Section 4: Agent-based approaches for solving GTSP
Description 4: Describe the various agent-based techniques proposed for solving GTSP, including Ant Colony System, Reinforcing Ant Colony System, Sensitive Ant Colony System, Sensitive Robot Metaheuristic, and Sensitive Stigmergic Agent System.
Section 5: Ant Colony System for GTSP
Description 5: Explain the Ant Colony System specifically adapted for GTSP, including algorithms and key operations.
Section 6: Reinforcing Ant Colony System for GTSP
Description 6: Detail the Reinforcing Ant Colony System and describe its modifications from the standard Ant Colony System.
Section 7: Sensitive Ant Colony System for GTSP
Description 7: Illustrate the functioning of the Sensitive Ant Colony System and how it uses pheromone sensitivity levels for improved problem-solving.
Section 8: SRM for solving GTSP
Description 8: Explain the Sensitive Robot Metaheuristic approach and its use of virtual autonomous robots with stigmergic sensitivity levels.
Section 9: Sensitive Stigmergic Agent System for GTSP
Description 9: Describe the Sensitive Stigmergic Agent System integrating concepts from Sensitive Ant Colony System and Stigmergic Agent System.
Section 10: Evaluations of Agent-Based Algorithms for E-GTSP
Description 10: Present the numerical experiments comparing the performance of different agent-based approaches for solving E-GTSP.
Section 11: Computational Analysis
Description 11: Provide a detailed computational analysis of the results obtained from various agent-based algorithms and compare their efficiency.
Section 12: Statistical analysis: advantages and disadvantages
Description 12: Summarize the statistical analysis highlighting the advantages and disadvantages of the techniques used.
Section 13: Conclusion
Description 13: Conclude with a summary of the findings, advantages of the methods described, and potential directions for future research.
Section 14: Acknowledgement
Description 14: Acknowledge contributions and assistance received in the course of the research.
|
Impact of precedence constraints on complexity of scheduling problems: a survey
| 7 |
---
paper_title: Sequencing and scheduling : algorithms and complexity
paper_content:
Sequencing and scheduling as a research area is motivated by questions that arise in production planning, in computer control, and generally in all situations in which scarce resources have to be allocated to activities over time. In this survey, we concentrate on the area of deterministic machine scheduling. We review complexity results and optimization and approximation algorithms for problems involving a single machine, parallel machines, open shops, flow shops and job shops. We also pay attention to two extensions of this area: resource-constrained project scheduling and stochastic machine scheduling.
---
paper_title: Computationally Tractable Classes of Ordered Sets
paper_content:
Ordered sets have recently gained much importance in many applied and theoretical problems in computer science and operations research ranging from project planning via processor scheduling to sorting and retrieval problems. These problems involve partial orders as their basic structure, e.g. as precedence constraints in scheduling problems, or as a comparability relation among the objects to be sorted or retrieved.
---
paper_title: Scheduling: Theory, Algorithms, and Systems
paper_content:
This book on scheduling covers theoretical models as well as scheduling problems in the real world. Author Michael Pinedo also includes a CD that contains slide-shows from industry and movies dealing with implementations of scheduling systems. The book consists of three parts. The first part focuses on deterministic scheduling with the associated combinatorial problems. The second part covers probabilistic scheduling models. In this part it is assumed that processing times and other problem data are not known in advance. The third part deals with scheduling in practice. It covers heuristics that are popular with practitioners and discusses system design and development issues. Each chapter contains a series of computational and theoretical exercises. This book is of interest to theoreticians and practitioners alike. Graduate students in operations management, operations research, industrial engineering and computer science will find the book to be an accessible and invaluable resource. Scheduling will serve as an essential reference for professionals working on scheduling problems in manufacturing and computing environments. Michael Pinedo is the Julius Schlesinger Professor of Operations Management at New York University.
---
paper_title: Optimization and Approximation in Deterministic Sequencing and Scheduling: a Survey
paper_content:
The theory of deterministic sequencing and scheduling has expanded rapidly during the past years. In this paper we survey the state of the art with respect to optimization and approximation algorithms and interpret these in terms of computational complexity theory. Special cases considered are single machine scheduling, identical, uniform and unrelated parallel machine scheduling, and open shop, flow shop and job shop scheduling. We indicate some problems for future research and include a selective bibliography.
---
paper_title: Complexity of Scheduling under Precedence Constraints
paper_content:
Precedence constraints between jobs that have to be respected in every feasible schedule generally increase the computational complexity of a scheduling problem. Occasionally, their introduction may turn a problem that is solvable within polynomial time into an NP-complete one, for which a good algorithm is highly unlikely to exist. We illustrate the use of these concepts by extending some typical NP-completeness results and simplifying their correctness proofs for scheduling problems involving precedence constraints.
---
paper_title: Scheduling Algorithms
paper_content:
Besides scheduling problems for single and parallel machines and shop scheduling problems, this book covers advanced models involving due-dates, sequence dependent changeover times and batching. Discussion also extends to multiprocessor task scheduling and problems with multi-purpose machines. Among the methods used to solve these problems are linear programming, dynamic programming, branch-and-bound algorithms, and local search heuristics. The text goes on to summarize complexity results for different classes of deterministic scheduling problems.
---
paper_title: The Coffman-Graham Algorithm Optimally Solves UET Task Systems with Overinterval Orders
paper_content:
Scheduling of unit execution time (UET) task systems on parallel machines with minimal schedule length is known to be NP-complete. The problem is polynomially solvable for some special cases. For a fixed number of parallel machines m > 2, the complexity of the problem is still open, but the problem becomes NP-hard if m is arbitrary. In this paper we characterize a new order class that properly contains quasi-interval orders and we prove that the Coffman-Graham algorithm yields optimal schedules for this new class on any number of machines. Finally, some extensions are discussed for a larger order class and for scheduling in the presence of unit communication delays.
---
paper_title: Scheduling Interval-Ordered Tasks
paper_content:
We show that unit execution time jobs subject to a precedence constraint whose complement is chordal can be scheduled in linear time on m processors. Generalizations to arbitrary execution times are NP-complete.
---
paper_title: Sequencing Jobs to Minimize Total Weighted Completion Time Subject to Precedence Constraints
paper_content:
Suppose n jobs are to be sequenced for processing by a single machine, with the object of minimizing total weighted completion time. It is shown that the problem is NP-complete if there are arbitrary precedence constraints. However, if precedence constraints are “series parallel”, the problem can be solved in O( n log n ) time. This result generalizes previous results for the more special case of rooted trees. It is also shown how a decomposition procedure suggested by Sidney can be implemented in polynomial-bounded time. Equivalence of the sequencing problem with the optimal linear ordering problem for directed graphs is discussed.
---
paper_title: A Relation between Multiprocessor Scheduling and Linear Programming
paper_content:
The general non-preemptive multiprocessor scheduling problem (NPMS) is NP-complete, while in many specific cases the same problem is solvable in polynomial time. A first connection between PMS and linear programming was established by Yannanakis, Sauer and Stone, who associated with any PMS instance a specific linear program. The main result of this paper is a characterization of the partially ordered structures for which the optimal value of any associated PMS instance equals the optimal value of the corresponding linear program.
---
paper_title: Optimal scheduling on parallel machines for a new order class
paper_content:
This paper addresses the problem of scheduling n unit length tasks on m identical machines under certain precedence constraints. The aim is to compute minimal length nonpreemptive schedules. We introduce a new order class which contains properly two rich families of precedence graphs: interval orders and a subclass of the class of series parallel orders. We present a linear time algorithm to find an optimal schedule for this new order class on any number of machines.
---
paper_title: Fractional dimension of partial orders
paper_content:
Given a partially ordered set P = (X, ≤), a collection of linear extensions {L1, L2, ..., Lr} is a realizer if, for every incomparable pair of elements x and y, we have x < y in some Li (and y < x in some Lj). For a positive integer k, we call a multiset {L1, L2, ..., Lt} a k-fold realizer if for every incomparable pair x and y we have x < y in at least k of the Li's. Let t(k) be the size of a smallest k-fold realizer of P; we define the fractional dimension of P, denoted fdim(P), to be the limit of t(k)/k as k → ∞. We prove various results about the fractional dimension of a poset.
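Restated compactly in LaTeX (the inequality noted after the display is an elementary consequence of the definitions, not a result quoted from the paper):

    \operatorname{fdim}(P) = \lim_{k \to \infty} \frac{t(k)}{k},
    \qquad
    t(k) = \min\{\, t : \{L_1, \dots, L_t\} \text{ is a } k\text{-fold realizer of } P \,\}.

Since repeating an ordinary realizer of size dim(P) = t(1) exactly k times yields a k-fold realizer, t(k) ≤ k · dim(P), and hence fdim(P) ≤ dim(P).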
---
paper_title: Optimization and Approximation in Deterministic Sequencing and Scheduling: a Survey
paper_content:
The theory of deterministic sequencing and scheduling has expanded rapidly during the past years. In this paper we survey the state of the art with respect to optimization and approximation algorithms and interpret these in terms of computational complexity theory. Special cases considered are single machine scheduling, identical, uniform and unrelated parallel machine scheduling, and open shop, flow shop and job shop scheduling. We indicate some problems for future research and include a selective bibliography.
---
paper_title: Scheduling Algorithms
paper_content:
Besides scheduling problems for single and parallel machines and shop scheduling problems, this book covers advanced models involving due-dates, sequence dependent changeover times and batching. Discussion also extends to multiprocessor task scheduling and problems with multi-purpose machines. Among the methods used to solve these problems are linear programming, dynamic programming, branch-and-bound algorithms, and local search heuristics. The text goes on to summarize complexity results for different classes of deterministic scheduling problems.
---
paper_title: Complexity results for scheduling chains on a single machine
paper_content:
We investigate the computational complexity of deterministic sequencing problems in which unit-time jobs have to be scheduled on a single machine subject to chain-like precedence constraints. NP-hardness is established for the cases in which the number of late jobs or the total weighted tardiness is to be minimized, and for several related problems involving the total weighted completion time criterion.
---
paper_title: Minimizing Total Tardiness on a Single Machine with Precedence Constraints
paper_content:
The problem of minimizing the total tardiness for a set of unit-processing-time jobs on a single machine is considered. J. K. Lenstra and A. H. G. Rinnooy Kan have shown that the problem is NP-hard if the jobs have arbitrary precedence constraints. They asked whether the problem remains NP-hard for tree-structured precedence constraints. In this paper we show that the problem is NP-hard even for a set of chains. Our result gives a sharp boundary for the complexity of this problem, since there is a simple, polynomial-time algorithm for a set of independent jobs.
---
paper_title: Single machine precedence constrained scheduling is a vertex cover problem
paper_content:
In this paper we study the single machine precedence constrained scheduling problem of minimizing the sum of weighted completion time. Specifically, we settle an open problem first raised by Chudak & Hochbaum and whose answer was subsequently conjectured by Correa & Schulz. The most significant implication of our result is that the addressed scheduling problem is a special case of the vertex cover problem. This will hopefully be an important step towards proving that the two problems behave identically in terms of approximability. As a consequence of our result, previous results for the scheduling problem can be explained, and in some cases improved, by means of vertex cover theory. For example, our result implies the existence of a polynomial time algorithm for the special case of two-dimensional partial orders. This considerably extends Lawler's result from 1978 for series-parallel orders.
---
paper_title: Decomposition Algorithms for Single-Machine Sequencing with Precedence Relations and Deferral Costs
paper_content:
A one-machine deterministic job-shop sequencing problem is considered. Associated with each job is its processing time and linear deferral cost. In addition, the jobs are related by a general precedence relation. The objective is to order the jobs so as to minimize the sum of the deferral costs, subject to the constraint that the ordering must be consistent with the precedence relation. A decomposition algorithm is presented, and it is proved that a permutation is optimal if and only if it can be generated by this algorithm. Four special network structures are then considered, and specializations of the general algorithm are presented.
---
paper_title: Single-machine scheduling with deteriorating jobs under a series-parallel graph constraint
paper_content:
This paper considers single-machine scheduling problems with deteriorating jobs, i.e., jobs whose processing times are an increasing function of their starting times. In addition, the jobs are related by a series-parallel graph. It is shown that polynomial algorithms exist for the general linear problem of minimizing the makespan, and also for the proportional linear problem of minimizing the total weighted completion time.
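As a small worked illustration of the deterioration concept, assume the common proportional-linear model in which a job started at time t has processing time b_j t and all jobs become available at some time t0 > 0 (an assumption for this sketch, not the exact model of the paper): the makespan then equals t0 times the product of the factors (1 + b_j), so it is the same for every sequence.

    def makespan_proportional_deterioration(b, t0=1.0):
        # Process the jobs in the given order; a job started at time t takes
        # b[j] * t, so it finishes at t * (1 + b[j]).  The final value equals
        # t0 * prod(1 + b[j]) and is therefore independent of the order.
        t = t0
        for bj in b:
            t *= (1.0 + bj)
        return t

    print(round(makespan_proportional_deterioration([0.5, 0.2, 0.1]), 6))  # 1.98
    print(round(makespan_proportional_deterioration([0.1, 0.5, 0.2]), 6))  # 1.98 as well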
---
paper_title: Single-machine scheduling with precedence constraints and position-dependent processing times
paper_content:
In this paper we consider single-machine scheduling problems with position-dependent processing times, i.e., jobs whose processing times are an increasing or decreasing function of their positions in a processing sequence. In addition, the jobs are related by parallel-chain and series-parallel graph precedence constraints, respectively. It is shown that polynomial algorithms exist for the corresponding problems of minimizing the makespan.
---
paper_title: Sequencing Jobs to Minimize Total Weighted Completion Time Subject to Precedence Constraints
paper_content:
Suppose n jobs are to be sequenced for processing by a single machine, with the object of minimizing total weighted completion time. It is shown that the problem is NP-complete if there are arbitrary precedence constraints. However, if precedence constraints are “series parallel”, the problem can be solved in O( n log n ) time. This result generalizes previous results for the more special case of rooted trees. It is also shown how a decomposition procedure suggested by Sidney can be implemented in polynomial-bounded time. Equivalence of the sequencing problem with the optimal linear ordering problem for directed graphs is discussed.
---
paper_title: Sequencing with Series-Parallel Precedence Constraints
paper_content:
One of the most important ideas in the theory of sequencing and scheduling is the method of adjacent pairwise job interchange. This method compares the costs of two sequences which differ only by interchanging a pair of adjacent jobs. In 1956, W. E. Smith defined a class of problems for which a total preference ordering of the jobs exists with the property that in any sequence, whenever two adjacent jobs are not in preference order, they may be interchanged with no resultant cost increase. In such a case the unconstrained sequencing problem is easily solved by sequencing the jobs in preference order. In this paper, a natural subclass of these problems is considered for which such a total preference ordering exists for all subsequences of jobs. The main result is an efficient general algorithm for these sequencing problems with series-parallel precedence constraints. These problems include the least cost fault detection problem, the one-machine total weighted completion time problem, the two-machine maximum completion time flow-shop problem and the maximum cumulative cost problem.
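The interchange argument is easiest to see in the unconstrained one-machine total weighted completion time problem, where the preference order is Smith's ratio rule (sort by processing time divided by weight); swapping any adjacent pair that violates this order never increases the objective. The sketch below only illustrates that unconstrained preference order, not the series-parallel algorithm of the paper.

    def smith_order(jobs):
        # jobs: list of (processing_time, weight) pairs.  Sorting by p / w gives
        # the preference order under which any adjacent out-of-order pair can be
        # swapped without increasing the total weighted completion time.
        return sorted(jobs, key=lambda pw: pw[0] / pw[1])

    def total_weighted_completion_time(sequence):
        t, total = 0.0, 0.0
        for p, w in sequence:
            t += p
            total += w * t
        return total

    jobs = [(3, 1), (1, 2), (2, 2)]
    print(total_weighted_completion_time(smith_order(jobs)))  # 14.0 for this toy instance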
---
paper_title: Single machine scheduling models with deterioration and learning: handling precedence constraints via priority generation
paper_content:
We consider various single machine scheduling problems in which the processing time of a job depends either on its position in a processing sequence or on its start time. We focus on problems of minimizing the makespan or the sum of (weighted) completion times of the jobs. In many situations we show that the objective function is priority-generating, and therefore the corresponding scheduling problem under series-parallel precedence constraints is polynomially solvable. In other situations we provide counter-examples that show that the objective function is not priority-generating.
---
paper_title: On the Approximability of Single-Machine Scheduling with Precedence Constraints
paper_content:
We consider the single-machine scheduling problem to minimize the weighted sum of completion times under precedence constraints. In a series of recent papers, it was established that this scheduling problem is a special case of minimum weighted vertex cover. In this paper, we show that the vertex cover graph associated with the scheduling problem is exactly the graph of incomparable pairs defined in the dimension theory of partial orders. Exploiting this relationship allows us to present a framework for obtaining (2-2/f)-approximation algorithms, provided that the set of precedence constraints has fractional dimension of at most f. Our approach yields the best-known approximation ratios for all previously considered special classes of precedence constraints, and it provides the first results for bounded degree and orders of interval dimension 2. On the negative side, we show that the addressed problem remains NP-hard even when restricted to the special case of interval orders. Furthermore, we prove that the general problem, if a fixed cost present in all feasible schedules is ignored, becomes as hard to approximate as vertex cover. We conclude by giving the first inapproximability result for this problem, showing under a widely believed assumption that it does not admit a polynomial-time approximation scheme.
---
paper_title: Scheduling Opposing Forests
paper_content:
A basic problem of deterministic scheduling theory is that of scheduling n unit-length tasks on m identical processors subject to precedence constraints so as to meet a given overall deadline. T. C. Hu’s classic “level algorithm” can be used to solve this problem in linear time if the precedence constraints have the form of an in-forest or an out-forest. We show that a polynomial time algorithm for a wider class of precedence constraints is unlikely, by proving the problem to be NP-complete for precedence constraints that are the disjoint union of an in-forest and an out-forest (the “opposing forests” of our title). However, for any fixed value of m we show that this problem can be solved in polynomial time for such precedence constraints. For the special case of $m = 3$ we provide a linear time algorithm.
---
paper_title: Complexity of Scheduling under Precedence Constraints
paper_content:
Precedence constraints between jobs that have to be respected in every feasible schedule generally increase the computational complexity of a scheduling problem. Occasionally, their introduction may turn a problem that is solvable within polynomial time into an NP-complete one, for which a good algorithm is highly unlikely to exist. We illustrate the use of these concepts by extending some typical NP-completeness results and simplifying their correctness proofs for scheduling problems involving precedence constraints.
---
paper_title: Sequencing Jobs to Minimize Total Weighted Completion Time Subject to Precedence Constraints
paper_content:
Suppose n jobs are to be sequenced for processing by a single machine, with the object of minimizing total weighted completion time. It is shown that the problem is NP-complete if there are arbitrary precedence constraints. However, if precedence constraints are “series parallel”, the problem can be solved in O( n log n ) time. This result generalizes previous results for the more special case of rooted trees. It is also shown how a decomposition procedure suggested by Sidney can be implemented in polynomial-bounded time. Equivalence of the sequencing problem with the optimal linear ordering problem for directed graphs is discussed.
---
paper_title: Complexity of Scheduling under Precedence Constraints
paper_content:
Precedence constraints between jobs that have to be respected in every feasible schedule generally increase the computational complexity of a scheduling problem. Occasionally, their introduction may turn a problem that is solvable within polynomial time into an NP-complete one, for which a good algorithm is highly unlikely to exist. We illustrate the use of these concepts by extending some typical NP-completeness results and simplifying their correctness proofs for scheduling problems involving precedence constraints.
---
paper_title: Complexity of machine scheduling problems
paper_content:
We survey and extend the results on the complexity of machine scheduling problems. After a brief review of the central concept of NP-completeness we give a classification of scheduling problems on single, different and identical machines and study the influence of various parameters on their complexity. The problems for which a polynomial-bounded algorithm is available are listed and NP-completeness is established for a large number of other machine scheduling problems. We finally discuss some questions that remain unanswered.
---
paper_title: Scheduling chain-structured tasks to minimize makespan and mean flow time
paper_content:
We consider the problem of scheduling a set of chains on m > 1 identical processors with the objectives of minimizing the makespan and the mean flow time. We show that finding a nonpreemptive schedule with the minimum makespan is strongly NP-hard for each fixed m > 1, answering the open question of whether this problem is strongly NP-hard for trees. We also show that finding a nonpreemptive schedule with the minimum mean flow time is strongly NP-hard for each fixed m > 1, improving the known strong NP-hardness results for in-trees and out-trees. Finally, we generalize the result of McNaughton, showing that preemption cannot reduce the mean weighted flow time for a set of chains. The last two results together imply that finding a preemptive schedule with the minimum mean flow time is also strongly NP-hard for each fixed m > 1, answering another open question on the complexity of this problem for trees.
---
paper_title: Optimal scheduling for two-processor systems
paper_content:
Despite the recognized potential of multiprocessing, little is known concerning the general problem of finding efficient algorithms which compute minimal-length schedules for given computations and m ≥ 2 processors. In this paper we formulate a general model of computation structures and exhibit an efficient algorithm for finding optimal nonpreemptive schedules for these structures on two-processor systems. We prove that the algorithm gives optimal solutions and discuss its application to preemptive scheduling disciplines.
---
paper_title: Optimality of HLF for scheduling divide-and-conquer UET task graphs on identical parallel processors
paper_content:
The problem of scheduling a set of n unit execution time (UET) tasks subject to precedence constraints on m identical parallel processors is known to be NP-hard in the strong sense. However, polynomial time algorithms exist for some classes of precedence graphs. In this paper, we consider a class of divide-and-conquer graphs that naturally models the execution of the recursive control abstraction of divide-and-conquer algorithms. We prove that the Highest Level First (HLF) strategy minimizes the schedule length for this class, thus settling a conjecture of Rayward-Smith and Clark.
---
paper_title: Linear-Time Algorithms for Scheduling on Parallel Processors
paper_content:
Linear-time algorithms are presented for several problems of scheduling n equal-length tasks on m identical parallel processors subject to precedence constraints. This improves upon previous time bounds for the maximum lateness problem with treelike precedence constraints, the number-of-late-tasks problem without precedence constraints, and the one machine maximum lateness problem with general precedence constraints.
---
paper_title: Scheduling Opposing Forests
paper_content:
A basic problem of deterministic scheduling theory is that of scheduling n unit-length tasks on m identical processors subject to precedence constraints so as to meet a given overall deadline. T. C. Hu’s classic “level algorithm” can be used to solve this problem in linear time if the precedence constraints have the form of an in-forest or an out-forest. We show that a polynomial time algorithm for a wider class of precedence constraints is unlikely, by proving the problem to be NP-complete for precedence constraints that are the disjoint union of an in-forest and an out-forest (the “opposing forests” of our title). However, for any fixed value of m we show that this problem can be solved in polynomial time for such precedence constraints. For the special case of $m = 3$ we provide a linear time algorithm.
---
paper_title: Parallel Sequencing and Assembly Line Problems
paper_content:
This paper deals with a new sequencing problem in which n jobs with ordering restrictions have to be done by men of equal ability. Assume every man can do any of the n jobs. The two questions considered in this paper are (1) how to arrange a schedule that requires the minimum number of men so that all jobs are completed within a prescribed time T, and (2) if m men are available, how to arrange a schedule that completes all jobs at the earliest time.
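A minimal sketch of the level ("highest level first") rule behind this result, assuming unit-time tasks, an in-tree given as a task-to-successor dictionary (the root maps to None), and m identical machines; the function names and input format are illustrative, not the paper's notation:

    def hu_levels(parent):
        # parent: dict task -> its unique successor in the in-tree (None for the root)
        level = {}
        def lvl(t):
            if t not in level:
                level[t] = 1 if parent[t] is None else 1 + lvl(parent[t])
            return level[t]
        for t in parent:
            lvl(t)
        return level

    def hu_schedule(parent, m):
        level = hu_levels(parent)
        pending = {t: 0 for t in parent}          # number of unfinished predecessors
        for t, p in parent.items():
            if p is not None:
                pending[p] += 1
        done, schedule, slot = set(), [], 0
        while len(done) < len(parent):
            ready = [t for t in parent if t not in done and pending[t] == 0]
            ready.sort(key=lambda t: level[t], reverse=True)
            step = ready[:m]                      # highest levels first
            for t in step:
                if parent[t] is not None:
                    pending[parent[t]] -= 1       # successor loses one pending predecessor
            schedule.append((slot, step))
            done.update(step)
            slot += 1
        return schedule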
---
paper_title: The Coffman-Graham Algorithm Optimally Solves UET Task Systems with Overinterval Orders
paper_content:
Scheduling of unit execution time (UET) task systems on parallel machines with minimal schedule length is known to be NP-complete. The problem is polynomially solvable for some special cases. For a fixed number of parallel machines m > 2, the complexity of the problem is still open, but the problem becomes NP-hard if m is arbitrary. In this paper we characterize a new order class that properly contains quasi-interval orders and we prove that the Coffman--Graham algorithm yields optimal schedules for this new class on any number of machines. Finally, some extensions are discussed for a larger order class and for scheduling in the presence of unit communication delays.
---
paper_title: Scheduling Interval-Ordered Tasks
paper_content:
We show that unit execution time jobs subject to a precedence constraint whose complement is chordal can be scheduled in linear time on m processors. Generalizations to arbitrary execution times are NP-complete.
---
paper_title: Optimal scheduling on parallel machines for a new order class
paper_content:
This paper addresses the problem of scheduling n unit length tasks on m identical machines under certain precedence constraints. The aim is to compute minimal length nonpreemptive schedules. We introduce a new order class which contains properly two rich families of precedence graphs: interval orders and a subclass of the class of series parallel orders. We present a linear time algorithm to find an optimal schedule for this new order class on any number of machines.
---
paper_title: Optimal scheduling for two-processor systems
paper_content:
Despite the recognized potential of multiprocessing little is known concerning the general problem of finding efficient algorithms which compute minimal-length schedules for given computations and m ≥ 2 processors. In this paper we formulate a general model of computation structures and exhibit an efficient algorithm for finding optimal nonpreemptive schedules for these structures on two-processor systems. We prove that the algorithm gives optimal solutions and discuss its application to preemptive scheduling disciplines.
---
paper_title: Scheduling precedence graphs of bounded height
paper_content:
The existence of a schedule for a partially ordered set of unit length tasks on m identical processors is known to be NP-complete (J. D. Ullman, NP-complete scheduling problems, J. Comput. System Sci., 10 (1975), 384–393). The problem remains NP-complete even if we restrict the precedence graph to be of height bounded by a constant (J. K. Lenstra and A. H. G. Rinnooy Kan, Complexity of scheduling under precedence constraints, Operations Res., 26 (1978), 22–35; D. Dolev and M. K. Warmuth, “Scheduling Flat Graphs,” IBM Research Report RJ 3398, 1982). In these NP-completeness proofs the upper bound on the number of available processors varies with the problem instance. We present a polynomial algorithm for the case where the upper bound on the number of available processors and the height of the precedence graph are both constants.
---
paper_title: Profile Scheduling of Opposing Forests and Level Orders
paper_content:
The question of existence of a schedule of a given length for n unit length tasks on m identical processors subject to precedence constraints is known to be NP-complete [Ullman, J. Comput. System Sci., 10 (1976), pp. 384–393]. For a fixed value of m we present polynomial algorithms to find an optimal schedule for two families of precedence graphs: level orders and opposing forests. In the case of opposing forests our algorithm is a considerable improvement over the algorithm presented in [Garey et al., SIAM J. Alg. Disc. Meth., 4 (1983), pp. 72–93].
---
paper_title: On a parallel machine scheduling problem with precedence constraints
paper_content:
We characterize a nontrivial special case with a polynomial-time algorithm for a well-known parallel machine scheduling problem with precedence constraints, with a fixed number of machines, and with tasks of unit length. The special case is related to instances with given maximum path length and maximum degree of the task precedence graph. The method is based on the observation that the number of tasks is either small and bounded by a constant depending on the maximum path length and maximum degree, or alternatively, the number of tasks is large, giving a "dense" schedule.
---
paper_title: Scheduling Opposing Forests
paper_content:
A basic problem of deterministic scheduling theory is that of scheduling n unit-length tasks on m identical processors subject to precedence constraints so as to meet a given overall deadline. T. C. Hu’s classic “level algorithm” can be used to solve this problem in linear time if the precedence constraints have the form of an in-forest or an out-forest. We show that a polynomial time algorithm for a wider class of precedence constraints is unlikely, by proving the problem to be NP-complete for precedence constraints that are the disjoint union of an in-forest and an out-forest (the “opposing forests” of our title). However, for any fixed value of m we show that this problem can be solved in polynomial time for such precedence constraints. For the special case of $m = 3$ we provide a linear time algorithm.
---
paper_title: Optimal scheduling for two-processor systems
paper_content:
Despite the recognized potential of multiprocessing little is known concerning the general problem of finding efficient algorithms which compute minimal-length schedules for given computations and m ≥ 2 processors. In this paper we formulate a general model of computation structures and exhibit an efficient algorithm for finding optimal nonpreemptive schedules for these structures on two-processor systems. We prove that the algorithm gives optimal solutions and discuss its application to preemptive scheduling disciplines.
---
paper_title: Computationally Tractable Classes of Ordered Sets
paper_content:
Ordered sets have recently gained much importance in many applied and theoretical problems in computer science and operations research ranging from project planning via processor scheduling to sorting and retrieval problems. These problems involve partial orders as their basic structure, e.g. as precedence constraints in scheduling problems, or as comparability relation among the objects to be sorted or retrieved.
---
paper_title: Optimal scheduling of unit-time tasks on two uniform processors under tree-like precedence constraints
paper_content:
An O(n^3/(b+1))-time algorithm to obtain a minimum finish time schedule subject to tree-like precedence constraints for unit-time tasks on two uniform processors is obtained. It is assumed that the slower processor takes b time units for each one taken by the speedier one, for some integer b. It is also noted that a slight modification of this schedule yields a minimum mean flow time schedule.
---
paper_title: Scheduling Interval-Ordered Tasks
paper_content:
We show that unit execution time jobs subject to a precedence constraint whose complement is chordal can be scheduled in linear time on m processors. Generalizations to arbitrary execution times are NP-complete.
---
paper_title: Minimizing total completion time for UET tasks with release time and outtree precedence constraints
paper_content:
Brucker et al. (Math Methods Oper Res 56: 407–412, 2003) have given an O(n^2)-time algorithm for the problems $$P \mid p_{j}=1, r_{j}, \text{outtree} \mid \sum C_{j}$$ and $$P \mid pmtn, p_{j}=1, r_{j}, \text{outtree} \mid \sum C_{j}$$. In this note, we show that their algorithm admits an O(n log n)-time implementation.
---
paper_title: Scheduling identical jobs with chain precedence constraints on two uniform machines
paper_content:
The problem of scheduling identical jobs with chain precedence constraints on two uniform machines is considered. It is shown that the corresponding makespan problem can be solved in linear time.
---
paper_title: Minimizing mean flow time for UET tasks
paper_content:
We consider the problem of scheduling a set of n unit-execution-time (UET) tasks, with precedence constraints, on m ≥ 1 parallel and identical processors so as to minimize the mean flow time. For two processors, the Coffman--Graham algorithm gives a schedule that simultaneously minimizes the mean flow time and the makespan. The problem becomes strongly NP-hard for an arbitrary number of processors, although the complexity is not known for each fixed m ≥ 3. For arbitrary precedence constraints, we show that the Coffman--Graham algorithm gives a schedule with a worst-case bound no more than 2, and we give examples showing that the bound is tight. For intrees, the problem can be solved in polynomial time for each fixed m ≥ 1, although the complexity is not known for an arbitrary number of processors. We show that Hu's algorithm (which is optimal for the makespan objective) yields a schedule with a worst-case bound no more than 1.5, and we give examples showing that the ratio can approach 1.308999.
---
paper_title: Preemptive scheduling of precedence-constrained jobs on parallel machines : (preprint)
paper_content:
Polynomial time-bounded algorithms are presented for solving three problems involving the preemptive scheduling of precedence-constrained jobs on parallel machines: the “intree problem”, the “two-machine problem with equal release dates”, and the “general two-machine problem”. These problems are preemptive counterparts of problems involving the nonpreemptive scheduling of unit-time jobs previously solved by Brucker, Garey and Johnson and by Garey and Johnson. The algorithms and proofs (and the running times of the algorithms) closely parallel those presented in their papers. These results improve on previous results in preemptive scheduling and also suggest a close relationship between preemptive scheduling problems and problems in nonpreemptive scheduling of unit-time jobs.
---
paper_title: Parallel Sequencing and Assembly Line Problems
paper_content:
This paper deals with a new sequencing problem in which n jobs with ordering restrictions have to be done by men of equal ability. Assume every man can do any of the n jobs. The two questions considered in this paper are (1) how to arrange a schedule that requires the minimum number of men so that all jobs are completed within a prescribed time T, and (2) if m men are available, how to arrange a schedule that completes all jobs at the earliest time.
---
paper_title: On preemption redundancy in scheduling unit processing time jobs on two parallel machines
paper_content:
McNaughton’s theorem (1959) states that preemptions in scheduling arbitrary processing time jobs on identical parallel machines to minimize the total weighted completion time are redundant. Du, Leung and Young (1991) proved that this remains true even though the jobs have precedence constraints in the form of chains. There are known simple counterexamples showing that other extensions of McNaughton’s theorem to other criteria or more general precedence constraints such as intrees or outtrees, or different release dates of jobs, or different speeds of machines, are not true even for equal weights of jobs. In this paper we show that in the case of two machines and unit processing times, preemptions are still advantageous for intrees or machines with different speeds even for equal weights, or outtrees for different weights, but become redundant for outtrees and equal weights even for different release dates. We also conjecture that the latter statement is actually true for any number of machines.
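For reference, the wrap-around rule from McNaughton's 1959 paper, which yields an optimal preemptive makespan of max(max_j p_j, sum_j p_j / m) for independent jobs on m identical machines, can be sketched as follows; the epsilon handling and variable names are illustrative:

    def mcnaughton_wrap_around(p, m):
        # p: processing times of independent jobs, m: identical machines
        cmax = max(max(p), sum(p) / m)      # optimal preemptive makespan
        schedule = [[] for _ in range(m)]   # per machine: (job, start, end)
        machine, t = 0, 0.0
        for j, pj in enumerate(p):
            remaining = float(pj)
            while remaining > 1e-9:
                piece = min(remaining, cmax - t)
                schedule[machine].append((j, t, t + piece))
                t += piece
                remaining -= piece
                if cmax - t <= 1e-9:        # machine full: wrap to the next one
                    machine, t = machine + 1, 0.0
        return cmax, schedule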
---
paper_title: Preemptive Scheduling of Interval Orders is Polynomial
paper_content:
In 1979, Papadimitriou and Yannakakis gave a polynomial time algorithm for the scheduling of jobs requiring unit completion times when the precedence constraints form an interval order. The authors solve here the corresponding problem, for preemptive scheduling (a job can be interrupted to work on more important tasks, and completed at a later time, subject to the usual scheduling constraints.) The m-machine preemptive scheduling problem is shown to have a polynomial algorithm, for both unit time and variable execution times as well, when the precedence constraints are given by an interval order.
---
paper_title: Optimal preemptive scheduling on a fixed number of identical parallel machines
paper_content:
In this paper, we consider the preemptive scheduling problem on a fixed number of identical parallel machines. We present a polynomial-time algorithm for finding a minimal length schedule for an order class which contains properly interval orders.
---
paper_title: Minimizing total completion time for UET tasks with release time and outtree precedence constraints
paper_content:
Brucker et al. (Math Methods Oper Res 56: 407–412, 2003) have given an O(n^2)-time algorithm for the problems $$P \mid p_{j}=1, r_{j}, \text{outtree} \mid \sum C_{j}$$ and $$P \mid pmtn, p_{j}=1, r_{j}, \text{outtree} \mid \sum C_{j}$$. In this note, we show that their algorithm admits an O(n log n)-time implementation.
---
paper_title: Scheduling chain-structured tasks to minimize makespan and mean flow time
paper_content:
We consider the problem of scheduling a set of chains on m > 1 identical processors with the objectives of minimizing the makespan and the mean flow time. We show that finding a nonpreemptive schedule with the minimum makespan is strongly NP-hard for each fixed m > 1, answering the open question of whether this problem is strongly NP-hard for trees. We also show that finding a nonpreemptive schedule with the minimum mean flow time is strongly NP-hard for each fixed m > 1, improving the known strong NP-hardness results for in-trees and out-trees. Finally, we generalize the result of McNaughton, showing that preemption cannot reduce the mean weighted flow time for a set of chains. The last two results together imply that finding a preemptive schedule with the minimum mean flow time is also strongly NP-hard for each fixed m > 1, answering another open question on the complexity of this problem for trees.
---
paper_title: Scheduling preemptive jobs with precedence constraints on parallel machines
paper_content:
In parallel machine scheduling problems with general precedence constraints, it is advantageous to interrupt jobs and complete them at a later time in order to minimize Makespan. For general precedence constraints, we know that this problem is in the class of NP-hard combinatorial problems for which the development of efficient algorithms is unlikely. In this paper we propose two new heuristics SUPIO and INFIO which, respectively, give an upper bound and a lower bound to Makespan. We show numerically that the ratio between these two bounds is very close to one. SUPIO heuristic provides an optimal solution in the following cases: P2 | pmtn | C_max, P | forest, pmtn | C_max and P | interval order, pmtn | C_max. Its absolute performance ratio is upper bounded by (2 − 2/m). We illustrate the efficiency of our heuristics by computational results associated with randomly generated problems. We show in the conclusion that this new approach could solve some non-preemptive parallel machine scheduling problems.
---
paper_title: Two machine preemptive scheduling problem with release dates, equal processing times and precedence constraints
paper_content:
We consider a scheduling problem with two identical parallel machines and n jobs. For each job we are given its release date when the job becomes available for processing. All jobs have equal processing times. Preemptions are allowed. There are precedence constraints between jobs which are given by a (di)graph consisting of a set of outtrees and a number of isolated vertices. The objective is to find a schedule minimizing mean flow time. We suggest an O(n^2) algorithm to solve this problem. The suggested algorithm can also be used to solve the related two-machine open shop problem with integer release dates, unit processing times and analogous precedence constraints.
---
paper_title: Digraph width measures in parameterized algorithmics
paper_content:
In contrast to undirected width measures such as tree-width, which have provided many important algorithmic applications, analogous measures for digraphs such as directed tree-width or DAG-width do not seem so successful. Several recent papers have given some evidence on the negative side. We confirm and consolidate this overall picture by thoroughly and exhaustively studying the complexity of a range of directed problems with respect to various parameters, and by showing that they often remain NP-hard even on graph classes that are restricted very beyond having small DAG-width. On the positive side, it turns out that clique-width (of digraphs) performs much better on virtually all considered problems, from the parameterized complexity point of view.
---
paper_title: Scheduling and Fixed-Parameter Tractability
paper_content:
Fixed-parameter tractability analysis and scheduling are two core domains of combinatorial optimization which led to deep understanding of many important algorithmic questions. However, even though fixed-parameter algorithms are appealing for many reasons, no such algorithms are known for many fundamental scheduling problems.
---
paper_title: Graph minors. II. Algorithmic aspects of tree-width
paper_content:
We introduce an invariant of graphs called the tree-width, and use it to obtain a polynomially bounded algorithm to test if a graph has a subgraph contractible to H, where H is any fixed planar graph. We also nonconstructively prove the existence of a polynomial algorithm to test if a graph has tree-width ≤ w, for fixed w. Neither of these is a practical algorithm, as the exponents of the polynomials are large. Both algorithms are derived from a polynomial algorithm for the DISJOINT CONNECTING PATHS problem (with the number of paths fixed), for graphs of bounded tree-width.
---
paper_title: On the Approximability of Single-Machine Scheduling with Precedence Constraints
paper_content:
We consider the single-machine scheduling problem to minimize the weighted sum of completion times under precedence constraints. In a series of recent papers, it was established that this scheduling problem is a special case of minimum weighted vertex cover. In this paper, we show that the vertex cover graph associated with the scheduling problem is exactly the graph of incomparable pairs defined in the dimension theory of partial orders. Exploiting this relationship allows us to present a framework for obtaining (2-2/f)-approximation algorithms, provided that the set of precedence constraints has fractional dimension of at most f. Our approach yields the best-known approximation ratios for all previously considered special classes of precedence constraints, and it provides the first results for bounded degree and orders of interval dimension 2. On the negative side, we show that the addressed problem remains NP-hard even when restricted to the special case of interval orders. Furthermore, we prove that the general problem, if a fixed cost present in all feasible schedules is ignored, becomes as hard to approximate as vertex cover. We conclude by giving the first inapproximability result for this problem, showing under a widely believed assumption that it does not admit a polynomial-time approximation scheme.
---
paper_title: Optimal scheduling for two-processor systems
paper_content:
Despite the recognized potential of multiprocessing little is known concerning the general problem of finding efficient algorithms which compute minimal-length schedules for given computations and m ≥ 2 processors. In this paper we formulate a general model of computation structures and exhibit an efficient algorithm for finding optimal nonpreemptive schedules for these structures on two-processor systems. We prove that the algorithm gives optimal solutions and discuss its application to preemptive scheduling disciplines.
---
paper_title: Complexity results for scheduling chains on a single machine
paper_content:
We investigate the computational complexity of deterministic sequencing problems in which unit-time jobs have to be scheduled on a single machine subject to chain-like precedence constraints. NP-hardness is established for the cases in which the number of late jobs or the total weighted tardiness is to be minimized, and for several related problems involving the total weighted completion time criterion.
---
paper_title: Single-machine scheduling with deteriorating jobs under a series-parallel graph constraint
paper_content:
This paper considers single-machine scheduling problems with deteriorating jobs, i.e., jobs whose processing times are an increasing function of their starting times. In addition, the jobs are related by a series-parallel graph. It is shown that for the general linear problem to minimize the makespan, polynomial algorithms exist. It is also shown that for the proportional linear problem of minimization of the total weighted completion time, polynomial algorithms exist, too.
---
paper_title: Complexity of machine scheduling problems
paper_content:
We survey and extend the results on the complexity of machine scheduling problems. After a brief review of the central concept of NP-completeness we give a classification of scheduling problems on single, different and identical machines and study the influence of various parameters on their complexity. The problems for which a polynomial-bounded algorithm is available are listed and NP-completeness is established for a large number of other machine scheduling problems. We finally discuss some questions that remain unanswered.
---
paper_title: Scheduling Opposing Forests
paper_content:
A basic problem of deterministic scheduling theory is that of scheduling n unit-length tasks on m identical processors subject to precedence constraints so as to meet a given overall deadline. T. C. Hu’s classic “level algorithm” can be used to solve this problem in linear time if the precedence constraints have the form of an in-forest or an out-forest. We show that a polynomial time algorithm for a wider class of precedence constraints is unlikely, by proving the problem to be NP-complete for precedence constraints that are the disjoint union of an in-forest and an out-forest (the “opposing forests” of our title). However, for any fixed value of m we show that this problem can be solved in polynomial time for such precedence constraints. For the special case of $m = 3$ we provide a linear time algorithm.
---
paper_title: Complexity of Scheduling under Precedence Constraints
paper_content:
Precedence constraints between jobs that have to be respected in every feasible schedule generally increase the computational complexity of a scheduling problem. Occasionally, their introduction may turn a problem that is solvable within polynomial time into an NP-complete one, for which a good algorithm is highly unlikely to exist. We illustrate the use of these concepts by extending some typical NP-completeness results and simplifying their correctness proofs for scheduling problems involving precedence constraints.
---
paper_title: Single-machine scheduling with precedence constraints and position-dependent processing times
paper_content:
In this paper we consider single-machine scheduling problems with position-dependent processing times, i.e., jobs whose processing times are an increasing or decreasing function of their positions in a processing sequence. In addition, the jobs are related by parallel chains and a series–parallel graph precedence constraints, respectively. It is shown that for the problems of minimization of the makespan polynomial algorithms exist.
---
paper_title: Sequencing Jobs to Minimize Total Weighted Completion Time Subject to Precedence Constraints
paper_content:
Suppose n jobs are to be sequenced for processing by a single machine, with the object of minimizing total weighted completion time. It is shown that the problem is NP-complete if there are arbitrary precedence constraints. However, if precedence constraints are “series parallel”, the problem can be solved in O(n log n) time. This result generalizes previous results for the more special case of rooted trees. It is also shown how a decomposition procedure suggested by Sidney can be implemented in polynomial-bounded time. Equivalence of the sequencing problem with the optimal linear ordering problem for directed graphs is discussed.
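Without precedence constraints, the underlying 1 || sum w_j C_j problem is solved by Smith's weighted-shortest-processing-time ratio rule, sketched below for illustration; the series-parallel case treated in the paper additionally needs the decomposition machinery, which is not shown here:

    def smith_wspt(jobs):
        # jobs: list of (processing_time, weight) pairs, no precedence constraints
        order = sorted(range(len(jobs)), key=lambda j: jobs[j][0] / jobs[j][1])
        t, total = 0, 0
        for j in order:
            p, w = jobs[j]
            t += p                    # completion time of the job just sequenced
            total += w * t
        return order, total

    # Example: smith_wspt([(3, 1), (1, 2), (2, 2)]) sequences the jobs in
    # increasing p/w order and returns the total weighted completion time.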
---
paper_title: Single machine scheduling models with deterioration and learning: handling precedence constraints via priority generation
paper_content:
We consider various single machine scheduling problems in which the processing time of a job depends either on its position in a processing sequence or on its start time. We focus on problems of minimizing the makespan or the sum of (weighted) completion times of the jobs. In many situations we show that the objective function is priority-generating, and therefore the corresponding scheduling problem under series-parallel precedence constraints is polynomially solvable. In other situations we provide counter-examples that show that the objective function is not priority-generating.
---
|
Title: Impact of precedence constraints on complexity of scheduling problems: a survey
Section 1: Introduction
Description 1: Introduce the role of partial order sets in scheduling theory and the focus of the survey on complexity results according to the structure of precedence constraints.
Section 2: Some already studied orders
Description 2: Introduce the definition of various classes of partial orders that have already been studied in scheduling theory, including graph theory definitions and specific structures like intrees and series-parallel graphs.
Section 3: Scheduling notations
Description 3: Explain the standard notations used in the paper for describing scheduling problems, including the α|β|γ format.
Section 4: One-machine problems
Description 4: Focus on the impact of precedence constraints on the complexity of one-machine scheduling problems, discussing polynomial cases, NP-complete cases, and open problems.
Section 5: Parallel machines without preemption
Description 5: Discuss the complexity of scheduling problems on parallel machines without preemption, focusing on the makespan criterion for both arbitrary and fixed numbers of machines, and other criteria like total flow time.
Section 6: Parallel machines with preemption
Description 6: Examine the impact of preemption on the complexity of scheduling problems on parallel machines, discussing both makespan and total flow time criteria.
Section 7: Conclusion
Description 7: Summarize the survey, highlighting key findings, remaining open problems, and possible future research directions in the field of scheduling problems with precedence constraints.
|
A Survey on Security and Privacy Protocols for Cognitive Wireless Sensor Networks
| 8 |
---
paper_title: Security in Cognitive Radio Networks: Threats and Mitigation
paper_content:
This paper describes a new class of attacks specific to cognitive radio networks. Wireless devices that can learn from their environment can also be taught things by malicious elements of their environment. By putting artificial intelligence in charge of wireless network devices, we are allowing unanticipated, emergent behavior, fitting a perhaps distorted or manipulated level of optimality. The state space for a cognitive radio is made up of a variety of learned beliefs and current sensor inputs. By manipulating radio sensor inputs, an adversary can affect the beliefs of a radio, and consequently its behavior. In this paper we focus primarily on PHY-layer issues, describing several classes of attacks and giving specific examples for dynamic spectrum access and adaptive radio scenarios. These attacks demonstrate the capabilities of an attacker who can manipulate the spectral environment when a radio is learning. The most powerful of these is a self-propagating AI virus that could interactively teach radios to become malicious. We then describe some approaches for mitigating the effectiveness of these attacks by instilling some level of "common sense" into radio systems, and requiring learned beliefs to expire and be relearned. Lastly we provide a road-map for extending these ideas to higher layers in the network stack.
---
paper_title: Cognitive Wireless Sensor Networks: Emerging topics and recent challenges
paper_content:
Adding cognition to the existing Wireless Sensor Networks (WSNs), or using numerous tiny sensors, similar to the idea presented in WSNs, in a Cognitive Radio Network (CRN), brings about many benefits. In this paper, we present an overview of Cognitive Wireless Sensor Networks (CWSNs), and discuss the emerging topics and recent challenges in the area. We discuss the main advantages, and suggest possible remedies to overcome the challenges. CWSNs enable current WSNs to overcome the scarcity problem of spectrum which is shared with many other successful systems such as Wi-Fi and Bluetooth. It has been shown that the coexistence of such networks can significantly degrade a WSN's performance. In addition, cognitive technology could provide access not only to new spectrum, but also to spectrum with better propagation characteristics. Moreover, by the adaptive change of system parameters such as modulation type and constellation size, different data rates can be achieved which in turn can directly influence the power consumption and the network lifetime. Furthermore, sensor measurements obtained within the network can provide the needed diversity to cope with spectrum fading at the physical layer.
---
paper_title: Securing wireless sensor networks: a survey
paper_content:
The significant advances of hardware manufacturing technology and the development of efficient software algorithms make technically and economically feasible a network composed of numerous, small, low-cost sensors using wireless communications, that is, a wireless sensor network. WSNs have attracted intensive interest from both academia and industry due to their wide application in civil and military scenarios. In hostile scenarios, it is very important to protect WSNs from malicious attacks. Due to various resource limitations and the salient features of a wireless sensor network, the security design for such networks is significantly challenging. In this article, we present a comprehensive survey of WSN security issues that were investigated by researchers in recent years and that shed light on future directions for WSN security.
---
paper_title: Security in Cognitive Radio Networks: The Required Evolution in Approaches to Wireless Network Security
paper_content:
This paper discusses the topic of wireless security in cognitive radio networks, delineating the key challenges in this area. With the ever-increasing scarcity of spectrum, cognitive radios are expected to become an increasingly important part of the overall wireless networking landscape. However, there is an important technical area that has received little attention to date in the cognitive radio paradigm: wireless security. The cognitive radio paradigm introduces entirely new classes of security threats and challenges, and providing strong security may prove to be the most difficult aspect of making cognitive radio a long-term commercially-viable concept. This paper delineates the key challenges in providing security in cognitive networks, discusses the current security posture of the emerging IEEE 802.22 cognitive radio standard, and identifies potential vulnerabilities along with potential mitigation approaches.
---
paper_title: Cognitive Radio Based Wireless Sensor Networks
paper_content:
In recent years, we have seen tremendous growth in the applications of wireless sensor networks (WSNs) operating in unlicensed spectrum bands. However, there is evidence that existing unlicensed spectrum is becoming overcrowded. On the other hand, with recent advances in cognitive radio (CR) technology, it is possible to apply the dynamic spectrum access (DSA) model in WSNs to get access to less congested spectrum, possibly with better propagation characteristics. In this paper we present a conceptual design of CR-based WSNs, identify the main advantages and challenges of using CR technology, and suggest possible remedies to overcome the challenges. As an illustration, we study the performance of CR-based WSN used for the automation and control applications in residential and commercial premises. Our simulation results compare the performance of a CR-based WSN with a standard ZigBee/802.15.4 WSN.
---
paper_title: 14 Security in Wireless Sensor Networks
paper_content:
A wide-band RF signal power detecting element includes, on an insulating substrate (21), at least one thin-film resistor (22a) for absorbing the power of a signal to be measured and generating heat, first and second ground electrodes (27, 28) formed by thin-film conductors, a first thin-film connecting portion (24) for electrically connecting the first ground electrode (27) to the thin-film resistor (22a), a second thin-film connecting portion (25) for electrically connecting the second ground electrode (28) to the thin-film resistor (22a) and narrowing the gap between the first and second thin-film connecting portions (24, 25) toward the thin-film resistor (22a), and an input electrode (26) formed between the first and second ground electrodes (27, 28) and electrically connected to the thin-film resistor (22a).
---
paper_title: Achieving Energy Efficiency and QoS for Low-Rate Applications with 802.11e
paper_content:
This paper analyses the energy efficiency and QoS performance of 802.11e as a connectivity solution for low-rate applications, such as wireless automation and monitoring. The authors consider non-interference and co-existence scenarios and show through modeling and simulations that the power save operation mode and the EDCA QoS mechanisms in the 802.11e standard can be exploited to achieve the power consumption requirements of low-rate applications. The authors also provide a comparison of the energy efficiency between 802.11e and 802.15.4 under varying interference and traffic conditions. Our results suggest that in some specific scenarios, 802.11e can achieve higher energy efficiency and QoS than 802.15.4.
---
paper_title: Wireless Sensor Network Attacks and Security Mechanisms: A Short Survey
paper_content:
Wireless sensor networks are specific ad hoc networks. They are characterized by their limited computing power and energy constraints. This paper proposes a study of security in this kind of network. We show what are the specificities and vulnerabilities of wireless sensor networks. We present a list of attacks, which can be found in these particular networks, and how they use their vulnerabilities. Finally we discuss different solutions proposed by the scientific community to secure wireless sensor networks.
---
paper_title: Denial of Service in Sensor Networks
paper_content:
Sensor networks hold the promise of facilitating large-scale, real-time data processing in complex environments, helping to protect and monitor military, environmental, safety-critical, or domestic infrastructures and resources. Denial-of-service attacks against such networks, however, may permit real-world damage to public health and safety. Without proper security mechanisms, networks will be confined to limited, controlled environments, negating much of the promise they hold. The limited ability of individual sensor nodes to thwart failure or attack makes ensuring network availability more difficult. To identify denial-of-service vulnerabilities, the authors analyzed two effective sensor network protocols that did not initially consider security. These examples demonstrate that consideration of security at design time is the best way to ensure successful network deployment.
---
paper_title: Secure routing in wireless sensor networks: attacks and countermeasures
paper_content:
We consider routing security in wireless sensor networks. Many sensor network routing protocols have been proposed, but none of them have been designed with security as a goal. We propose security goals for routing in sensor networks, show how attacks against ad-hoc and peer-to-peer networks can be adapted into powerful attacks against sensor networks, introduce two classes of novel attacks against sensor networks, sinkholes and HELLO floods, and analyze the security of all the major sensor network routing protocols. We describe crippling attacks against all of them and suggest countermeasures and design considerations. This is the first such analysis of secure routing in sensor networks.
---
paper_title: Denial of Service in Sensor Networks
paper_content:
Sensor networks hold the promise of facilitating large-scale, real-time data processing in complex environments, helping to protect and monitor military, environmental, safety-critical, or domestic infrastructures and resources. Denial-of-service attacks against such networks, however, may permit real-world damage to public health and safety. Without proper security mechanisms, networks will be confined to limited, controlled environments, negating much of the promise they hold. The limited ability of individual sensor nodes to thwart failure or attack makes ensuring network availability more difficult. To identify denial-of-service vulnerabilities, the authors analyzed two effective sensor network protocols that did not initially consider security. These examples demonstrate that consideration of security at design time is the best way to ensure successful network deployment.
---
paper_title: An on-demand secure routing protocol resilient to byzantine failures
paper_content:
An ad hoc wireless network is an autonomous self-organizing system of mobile nodes connected by wireless links where nodes not in direct range can communicate via intermediate nodes. A common technique used in routing protocols for ad hoc wireless networks is to establish the routing paths on-demand, as opposed to continually maintaining a complete routing table. A significant concern in routing is the ability to function in the presence of byzantine failures which include nodes that drop, modify, or mis-route packets in an attempt to disrupt the routing service. We propose an on-demand routing protocol for ad hoc wireless networks that provides resilience to byzantine failures caused by individual or colluding nodes. Our adaptive probing technique detects a malicious link after log n faults have occurred, where n is the length of the path. These links are then avoided by multiplicatively increasing their weights and by using an on-demand route discovery protocol that finds a least weight path to the destination.
---
paper_title: Routing Security Issues in Wireless Sensor Networks: Attacks and Defenses
paper_content:
Wireless Sensor Networks (WSNs) are rapidly emerging as an important new area in wireless and mobile computing research. Applications of WSNs are numerous and growing, and range from indoor deployment scenarios in the home and office to outdoor deployment scenarios in an adversary's territory in a tactical battleground (Akyildiz et al., 2002). For military environments, dispersal of WSNs into an adversary's territory enables the detection and tracking of enemy soldiers and vehicles. For home/office environments, indoor sensor networks offer the ability to monitor the health of the elderly and to detect intruders via a wireless home security system. In each of these scenarios, lives and livelihoods may depend on the timeliness and correctness of the sensor data obtained from dispersed sensor nodes. As a result, such WSNs must be secured to prevent an intruder from obstructing the delivery of correct sensor data and from forging sensor data. To address the latter problem, end-to-end data integrity checksums and post-processing of sensor data can be used to identify forged sensor data (Estrin et al., 1999; Hu et al., 2003a; Ye et al., 2004). The focus of this chapter is on routing security in WSNs. Most of the currently existing routing protocols for WSNs make an optimization on the limited capabilities of the nodes and the application-specific nature of the network, but do not consider the security aspects of the protocols. Although these protocols have not been designed with security as a goal, it is extremely important to analyze their security properties. When the defender has the liabilities of insecure wireless communication, limited node capabilities, and possible insider threats, and the adversaries can use powerful laptops with high energy and long range communication to attack the network, designing a secure routing protocol for WSNs is obviously a non-trivial task.
---
paper_title: Denial of Service in Sensor Networks
paper_content:
Sensor networks hold the promise of facilitating large-scale, real-time data processing in complex environments, helping to protect and monitor military, environmental, safety-critical, or domestic infrastructures and resources. Denial-of-service attacks against such networks, however, may permit real-world damage to public health and safety. Without proper security mechanisms, networks will be confined to limited, controlled environments, negating much of the promise they hold. The limited ability of individual sensor nodes to thwart failure or attack makes ensuring network availability more difficult. To identify denial-of-service vulnerabilities, the authors analyzed two effective sensor network protocols that did not initially consider security. These examples demonstrate that consideration of security at design time is the best way to ensure successful network deployment.
---
paper_title: Distributed detection of node replication attacks in sensor networks
paper_content:
The low-cost, off-the-shelf hardware components in unshielded sensor-network nodes leave them vulnerable to compromise. With little effort, an adversary may capture nodes, analyze and replicate them, and surreptitiously insert these replicas at strategic locations within the network. Such attacks may have severe consequences; they may allow the adversary to corrupt network data or even disconnect significant parts of the network. Previous node replication detection schemes depend primarily on centralized mechanisms with single points of failure, or on neighborhood voting protocols that fail to detect distributed replications. To address these fundamental limitations, we propose two new algorithms based on emergent properties (Gligor (2004)), i.e., properties that arise only through the collective action of multiple nodes. Randomized multicast distributes node location information to randomly-selected witnesses, exploiting the birthday paradox to detect replicated nodes, while line-selected multicast uses the topology of the network to detect replication. Both algorithms provide globally-aware, distributed node-replica detection, and line-selected multicast displays particularly strong performance characteristics. We show that emergent algorithms represent a promising new approach to sensor network security; moreover, our results naturally extend to other classes of networks in which nodes can be captured, replicated and re-inserted by an adversary.
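A rough back-of-envelope estimate, not taken from the paper, of why randomized multicast detects replicas: if two copies of the same node ID each send a location claim to g randomly chosen witnesses out of n nodes, the chance that the two witness sets overlap behaves like a birthday-paradox probability, roughly 1 - exp(-g^2 / n):

    import math

    def detection_probability(g, n):
        # Approximate probability that two independent random sets of g
        # witnesses, drawn from n nodes, share at least one common node.
        return 1.0 - math.exp(-(g * g) / n)

    # e.g. with n = 10000 nodes and g = 100 witnesses per claim, the
    # estimate is about 1 - e^{-1}, i.e. roughly 0.63 per reporting round.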
---
paper_title: Source-location privacy in energy-constrained sensor network routing
paper_content:
As sensor-driven applications become increasingly integrated into our lives, issues related to sensor privacy will become increasingly important. Although many privacy-related issues can be addressed by security mechanisms, one sensor network privacy issue that cannot be adequately addressed by network security is confidentiality of the source sensor's location. In this paper, we focus on protecting the source's location by introducing suitable modifications to sensor routing protocols to make it difficult for an adversary to backtrack to the origin of the sensor communication. In particular, we focus on the class of flooding protocols. While developing and evaluating our privacy-aware routing protocols, we jointly consider issues of location-privacy as well as the amount of energy consumed by the sensor network. Motivated by the observations, we propose a flexible routing strategy, known as phantom routing, which protects the source's location. Phantom routing is a two-stage routing scheme that first consists of a directed walk along a random direction, followed by routing from the phantom source to the sink. Our investigations have shown that phantom routing is a powerful technique for protecting the location of the source during sensor transmissions.
---
paper_title: Energy Analysis of Public-Key Cryptography for Wireless Sensor Networks
paper_content:
In this paper, we quantify the energy cost of authentication and key exchange based on public-key cryptography on an 8-bit microcontroller platform. We present a comparison of two public-key algorithms, RSA and elliptic curve cryptography (ECC), and consider mutual authentication and key exchange between two untrusted parties such as two nodes in a wireless sensor network. Our measurements on an Atmel ATmega128L low-power microcontroller indicate that public-key cryptography is very viable on 8-bit energy-constrained platforms even if implemented in software. We found ECC to have a significant advantage over RSA as it reduces computation time and also the amount of data transmitted and stored.
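The kind of operation whose energy cost is being measured can be exercised in software with an ECDH key agreement; the snippet below uses the Python cryptography package (recent versions) on a 192-bit NIST curve purely as an illustration of the exchange, and says nothing about the paper's ATmega128L measurements:

    from cryptography.hazmat.primitives.asymmetric import ec

    # Each party generates an ephemeral key pair on a 192-bit NIST curve ...
    priv_a = ec.generate_private_key(ec.SECP192R1())
    priv_b = ec.generate_private_key(ec.SECP192R1())

    # ... and derives the same shared secret from the peer's public key.
    shared_a = priv_a.exchange(ec.ECDH(), priv_b.public_key())
    shared_b = priv_b.exchange(ec.ECDH(), priv_a.public_key())
    assert shared_a == shared_b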
---
paper_title: DIGITALIZED SIGNATURES AND PUBLIC-KEY FUNCTIONS AS INTRACTABLE AS FACTORIZATION
paper_content:
We introduce a new class of public-key functions involving a number n = pq having two large prime factors. As usual, the key n is public, while p and q are the private key used by the issuer for production of signatures and function inversion. These functions can be used for all the applications involving public-key functions proposed by Diffie and Hellman, including digitalized signatures. We prove that for any given n, if we can invert the function y = E(x_1) for even a small percentage of the values y then we can factor n. Thus, as long as factorization of large numbers remains practically intractable, for appropriately chosen keys not even a small percentage of signatures are forgeable. Breaking the RSA function is at most as hard as factorization, but is not known to be equivalent to factorization even in the weak sense that ability to invert all function values entails ability to factor the key. Computation time for these functions, i.e. signature verification, is several hundred times faster than for the RSA scheme. Inversion time, using the private key, is comparable. The almost-everywhere intractability of signature forgery for our functions (on the assumption that factoring is intractable) is of great practical significance and seems to be the first proved result of this kind.
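The cheap verification side of such a factoring-based scheme can be sketched in a few lines: a signature s on a message is accepted when s^2 mod n equals a digest of the message reduced mod n. The sketch below hashes with SHA-256 and omits the redundancy/padding a real Rabin-style scheme requires, so it only illustrates why verification is inexpensive compared with signing, which needs the secret factors p and q:

    import hashlib

    def rabin_verify(message, signature, n):
        # Accept iff signature^2 mod n matches the message digest reduced mod n.
        # Toy sketch: real schemes need redundancy so the digest is a square mod n.
        digest = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
        return pow(signature, 2, n) == digest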
---
paper_title: Public key cryptography in sensor networks - revisited
paper_content:
The common perception of public key cryptography is that it is complex, slow and power hungry, and as such not at all suitable for use in ultra-low power environments like wireless sensor networks. It is therefore common practice to emulate the asymmetry of traditional public key based cryptographic services through a set of protocols [1] using symmetric key based message authentication codes (MACs). Although the low computational complexity of MACs is advantageous, the protocol layer requires time synchronization between devices on the network and a significant amount of overhead for communication and temporary storage. The requirement for a general purpose CPU to implement these protocols as well as their complexity makes them prone to vulnerabilities and practically eliminates all the advantages of using symmetric key techniques in the first place. In this paper we challenge the basic assumptions about public key cryptography in sensor networks which are based on a traditional software based approach. We propose a custom hardware assisted approach for which we claim that it makes public key cryptography feasible in such environments, provided we use the right selection of algorithms and associated parameters, careful optimization, and low-power design techniques. In order to validate our claim we present proof of concept implementations of two different algorithms—Rabin’s Scheme and NtruEncrypt—and analyze their architecture and performance according to various established metrics like power consumption, area, delay, throughput, level of security and energy per bit. Our implementation of NtruEncrypt in ASIC standard cell logic uses no more than 3,000 gates with an average power consumption of less than 20 μW. We envision that our public key core would be embedded into a light-weight sensor node architecture.
---
paper_title: A public-key infrastructure for key distribution in TinyOS based on elliptic curve cryptography
paper_content:
We present the first known implementation of elliptic curve cryptography over F_{2^p} for sensor networks based on the 8-bit, 7.3828-MHz MICA2 mote. Through instrumentation of UC Berkeley's TinySec module, we argue that, although secret-key cryptography has been tractable in this domain for some time, there has remained a need for an efficient, secure mechanism for distribution of secret keys among nodes. Although public-key infrastructure has been thought impractical, we argue, through analysis of our own implementation for TinyOS of multiplication of points on elliptic curves, that public-key infrastructure is, in fact, viable for TinySec keys' distribution, even on the MICA2. We demonstrate that public keys can be generated within 34 seconds, and that shared secrets can be distributed among nodes in a sensor network within the same, using just over 1 kilobyte of SRAM and 34 kilobytes of ROM.
---
paper_title: Pgp in constrained wireless devices
paper_content:
The market for Personal Digital Assistants (PDAs) is growing at a rapid pace. An increasing number of products, such as the PalmPilot, are adding wireless communications capabilities. PDA users are now able to send and receive email just as they would from their networked desktop machines. Because of the inherent insecurity of wireless environments, a system is needed for secure email communications. The requirements for the security system will likely be influenced by the constraints of the PDA, including limited memory, limited processing power, limited bandwidth, and a limited user interface. ::: ::: This paper describes our experience with porting PGP to the Research in Motion (RIM) two-way pager, and incorporating elliptic curve cryptography into PGP's suite of public-key ciphers. Our main conclusion is that PGP is a viable solution for providing secure and interoperable email communications between constrained wireless devices and desktop machines.
---
paper_title: Comparing Elliptic Curve Cryptography and RSA on 8-bit CPUs
paper_content:
Strong public-key cryptography is often considered to be too computationally expensive for small devices if not accelerated by cryptographic hardware. We revisited this statement and implemented elliptic curve point multiplication for 160-bit, 192-bit, and 224-bit NIST/SECG curves over GF(p) and RSA-1024 and RSA-2048 on two 8-bit microcontrollers. To accelerate multiple-precision multiplication, we propose a new algorithm to reduce the number of memory accesses.
---
paper_title: The RC5 Encryption Algorithm
paper_content:
This document describes the RC5 encryption algorithm, a fast symmetric block cipher suitable for hardware or software implementations. A novel feature of RC5 is the heavy use of data-dependent rotations. RC5 has a variable word size, a variable number of rounds, and a variable-length secret key. The encryption and decryption algorithms are exceptionally simple.
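The data-dependent rotations mentioned above appear directly in the RC5 encryption round. A minimal sketch for a 32-bit word size follows, assuming the expanded key table S has already been produced by the key schedule (omitted here); parameter names follow the usual RC5 description:

    W = 32                      # word size in bits
    MASK = (1 << W) - 1

    def rotl(x, s):
        # Left-rotate a W-bit word; only the low log2(W) bits of s matter.
        s %= W
        return ((x << s) | (x >> (W - s))) & MASK

    def rc5_encrypt_block(A, B, S, rounds=12):
        # A, B: the two plaintext words; S: expanded key table of 2*rounds + 2 words.
        A = (A + S[0]) & MASK
        B = (B + S[1]) & MASK
        for i in range(1, rounds + 1):
            A = (rotl(A ^ B, B) + S[2 * i]) & MASK
            B = (rotl(B ^ A, A) + S[2 * i + 1]) & MASK
        return A, B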
---
paper_title: Handbook of Applied Cryptography
paper_content:
From the Publisher: A valuable reference for the novice as well as for the expert who needs a wider scope of coverage within the area of cryptography, this book provides easy and rapid access of information and includes more than 200 algorithms and protocols; more than 200 tables and figures; more than 1,000 numbered definitions, facts, examples, notes, and remarks; and over 1,250 significant references, including brief comments on each paper.
---
paper_title: The MD5 Message-Digest Algorithm
paper_content:
This document describes the MD5 message-digest algorithm. The algorithm takes as input a message of arbitrary length and produces as output a 128-bit "fingerprint" or "message digest" of the input. This memo provides information for the Internet community. It does not specify an Internet standard.
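For completeness, computing an MD5 digest in Python is a one-liner via the standard library; note that MD5 is no longer collision-resistant and should not be chosen for new security designs:

    import hashlib
    digest = hashlib.md5(b"sensor reading 42").hexdigest()   # 128-bit digest, 32 hex characters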
---
paper_title: Denial of Service in Sensor Networks
paper_content:
Sensor networks hold the promise of facilitating large-scale, real-time data processing in complex environments, helping to protect and monitor military, environmental, safety-critical, or domestic infrastructures and resources. Denial-of-service attacks against such networks, however, may permit real-world damage to public health and safety. Without proper security mechanisms, networks will be confined to limited, controlled environments, negating much of the promise they hold. The limited ability of individual sensor nodes to thwart failure or attack makes ensuring network availability more difficult. To identify denial-of-service vulnerabilities, the authors analyzed two effective sensor network protocols that did not initially consider security. These examples demonstrate that consideration of security at design time is the best way to ensure successful network deployment.
---
paper_title: Secure Routing for Mobile Ad Hoc Networks
paper_content:
Buttyan found out a security flaw in Ariadne (Y. C. Hu, A. Perrig, and D. B. Johnson, "Ariadne: a secure on-demand routing protocol for ad hoc networks," in Proc. of the Eighth ACM Intl. Conf. on Mobile Computing and Networking (MobiCom 2002), pp. 23-28, Atlanta, GA, 2002) and proposed a secure routing protocol, EndairA (L. Buttyan and I. Vajda, "Towards provable security for ad hoc routing protocols," in Proc. of the 2nd ACM Workshop on Security of Ad Hoc and Sensor Networks, 2005, and G. Acs, L. Buttyan, and I. Vajda, "Provably secure on-demand source routing in mobile ad hoc networks," IEEE Transactions on Mobile Computing, Vol. 5, No. 11, November 2006), with the ability to resist active-1-1 attacks. But unfortunately we discover an as yet unknown active-0-1 attack, which we call the man-in-the-middle attack, that EndairA couldn't resist. Accordingly we propose a new secure routing protocol, EndairALoc. Analysis shows that EndairALoc can resist not only active-1-1 attacks but also the wormhole attack. Furthermore EndairALoc uses pairwise secret keys instead of the public keys used in EndairA. Compared with EndairA, EndairALoc can save more energy in the routing establishment.
---
paper_title: Efficient Distribution of Key Chain Commitments for Broadcast Authentication in Distributed Sensor Networks
paper_content:
Broadcast authentication is a fundamental security service in distributed sensor networks. A scheme named $\mu$TESLA has been proposed for efficient broadcast authentication in such networks. However, $\mu$TESLA requires initial distribution of certain information based on unicast between the base station and each sensor node before the actual authentication of broadcast messages. Due to the limited bandwidth in wireless sensor networks, this initial unicast-based distribution severely limits the application of $\mu$TESLA in large sensor networks. This paper presents a novel technique to replace the unicast-based initialization with a broadcast-based one. As a result, $\mu$TESLA can be used in a sensor network with a large amount of sensors, as long as the message from the base station can reach these sensor nodes. This paper further explores several techniques that improve the performance, the robustness, as well as the security of the proposed method. The resulting protocol satisfies several nice properties, including low overhead, tolerance of message loss, scalability to large networks, and resistance to replay attacks as well as some known Denial of Service (DOS) attacks.
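The one-way key chain underlying $\mu$TESLA can be sketched as follows: the sender repeatedly hashes a random seed, publishes the last element as a commitment, and later discloses keys in the reverse order of generation; a receiver authenticates a disclosed key by hashing it forward to the commitment. This sketch (SHA-256, illustrative function names) shows only the chain, not the paper's broadcast-based commitment distribution:

    import hashlib, os

    def generate_key_chain(length, seed=None):
        # chain[i] is the key disclosed for interval i; chain[0] is the public commitment.
        seed = seed or os.urandom(32)
        chain = [seed]
        for _ in range(length):
            chain.append(hashlib.sha256(chain[-1]).digest())
        chain.reverse()
        return chain

    def verify_disclosed_key(commitment, key, interval):
        # Hash the disclosed key 'interval' times; it must map back to the commitment.
        for _ in range(interval):
            key = hashlib.sha256(key).digest()
        return key == commitment

    chain = generate_key_chain(100)
    assert verify_disclosed_key(chain[0], chain[7], 7)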
---
paper_title: INSENS: Intrusion-Tolerant Routing in Wireless Sensor Networks
paper_content:
This paper describes an INtrusion-tolerant routing protocol for wireless SEnsor NetworkS (INSENS). INSENS securely and efficiently constructs tree-structured routing for wireless sensor networks (WSNs). The key objective of an INSENS network is to tolerate damage caused by an intruder who has compromised deployed sensor nodes and is intent on injecting, modifying, or blocking packets. To limit or localize the damage caused by such an intruder, INSENS incorporates distributed lightweight security mechanisms, including efficient one-way hash chains and nested keyed message authentication codes that defend against wormhole attacks, as well as multipath routing. Adapting to WSN characteristics, the design of INSENS also pushes complexity away from resource-poor sensor nodes towards resource-rich base stations. An enhanced single-phase version of INSENS scales to large networks, integrates bidirectional verification to defend against rushing attacks, accommodates multipath routing to multiple base stations, enables secure joining/leaving, and incorporates a novel pairwise key setup scheme based on transitory global keys that is more resilient than LEAP. Simulation results are presented to demonstrate and assess the tolerance of INSENS to various attacks launched by an adversary. A prototype implementation of INSENS over a network of MICA2 motes is presented to evaluate the cost incurred.
---
paper_title: Energy Analysis of Public-Key Cryptography for Wireless Sensor Networks
paper_content:
In this paper, we quantify the energy cost of authentication and key exchange based on public-key cryptography on an 8-bit microcontroller platform. We present a comparison of two public-key algorithms, RSA and elliptic curve cryptography (ECC), and consider mutual authentication and key exchange between two untrusted parties such as two nodes in a wireless sensor network. Our measurements on an Atmel ATmega128L low-power microcontroller indicate that public-key cryptography is very viable on 8-bit energy-constrained platforms even if implemented in software. We found ECC to have a significant advantage over RSA as it reduces computation time and also the amount of data transmitted and stored.
---
paper_title: Secure routing in wireless sensor networks: attacks and countermeasures
paper_content:
We consider routing security in wireless sensor networks. Many sensor network routing protocols have been proposed, but none of them have been designed with security as a goal. We propose security goals for routing in sensor networks, show how attacks against ad-hoc and peer-to-peer networks can be adapted into powerful attacks against sensor networks, introduce two classes of novel attacks against sensor networks, sinkholes and HELLO floods, and analyze the security of all the major sensor network routing protocols. We describe crippling attacks against all of them and suggest countermeasures and design considerations. This is the first such analysis of secure routing in sensor networks.
---
paper_title: Public key cryptography in sensor networks - revisited
paper_content:
The common perception of public key cryptography is that it is complex, slow and power hungry, and as such not at all suitable for use in ultra-low power environments like wireless sensor networks. It is therefore common practice to emulate the asymmetry of traditional public key based cryptographic services through a set of protocols [1] using symmetric key based message authentication codes (MACs). Although the low computational complexity of MACs is advantageous, the protocol layer requires time synchronization between devices on the network and a significant amount of overhead for communication and temporary storage. The requirement for a general purpose CPU to implement these protocols as well as their complexity makes them prone to vulnerabilities and practically eliminates all the advantages of using symmetric key techniques in the first place. In this paper we challenge the basic assumptions about public key cryptography in sensor networks which are based on a traditional software based approach. We propose a custom hardware assisted approach for which we claim that it makes public key cryptography feasible in such environments, provided we use the right selection of algorithms and associated parameters, careful optimization, and low-power design techniques. In order to validate our claim we present proof of concept implementations of two different algorithms—Rabin’s Scheme and NtruEncrypt—and analyze their architecture and performance according to various established metrics like power consumption, area, delay, throughput, level of security and energy per bit. Our implementation of NtruEncrypt in ASIC standard cell logic uses no more than 3,000 gates with an average power consumption of less than 20 μW. We envision that our public key core would be embedded into a light-weight sensor node architecture.
---
paper_title: Comparing Elliptic Curve Cryptography and RSA on 8-bit CPUs
paper_content:
Strong public-key cryptography is often considered to be too computationally expensive for small devices if not accelerated by cryptographic hardware. We revisited this statement and implemented elliptic curve point multiplication for 160-bit, 192-bit, and 224-bit NIST/SECG curves over GF(p) and RSA-1024 and RSA-2048 on two 8-bit microcontrollers. To accelerate multiple-precision multiplication, we propose a new algorithm to reduce the number of memory accesses.
---
paper_title: A distributed protocol for detection of packet dropping attack in mobile ad hoc networks
paper_content:
In multi-hop mobile ad hoc networks (MANETs), mobile nodes cooperate with each other without using any infrastructure such as access points or base stations. Security remains a major challenge for these networks due to their features of open medium, dynamically changing topologies, reliance on cooperative algorithms, absence of centralized monitoring points, and lack of clear lines of defense. Among the various attacks to which MANETs are vulnerable, malicious packet dropping attack is very common where a malicious node can partially degrade or completely disrupt communication in the network by consistently dropping packets. In this paper, a mechanism for detection of packet dropping attack is presented based on cooperative participation of the nodes in a MANET. The redundancy of routing information in an ad hoc network is utilized to make the scheme robust so that it works effectively even in presence of transient network partitioning and Byzantine failure of nodes. The proposed scheme is fully cooperative and thus more secure as the vulnerabilities of any election algorithm used for choosing a subset of nodes for cooperation are absent. Simulation results show the effectiveness of the protocol.
---
paper_title: Multilevel μTESLA: Broadcast authentication for distributed sensor networks
paper_content:
Broadcast authentication is a fundamental security service in distributed sensor networks. This paper presents the development of a scalable broadcast authentication scheme named multilevel μTESLA based on μTESLA, a broadcast authentication protocol whose scalability is limited by its unicast-based initial parameter distribution. Multilevel μTESLA satisfies several nice properties, including low overhead, tolerance of message loss, scalability to large networks, and resistance to replay attacks as well as denial-of-service attacks. This paper also presents the experimental results obtained through simulation, which demonstrate the performance of the proposed scheme under severe denial-of-service attacks and poor channel quality.
---
paper_title: An efficient scheme for authenticating public keys in sensor networks
paper_content:
With the advance of technology, Public Key Cryptography (PKC) will sooner or later be widely used in wireless sensor networks. Recently, it has been shown that the performance of some public-key algorithms, such as Elliptic Curve Cryptography (ECC), is already close to being practical on sensor nodes. However, the energy consumption of PKC is still expensive, especially compared to symmetric-key algorithms. To maximize the lifetime of batteries, we should minimize the use of PKC whenever possible in sensor networks. This paper investigates how to replace one of the important PKC operations--the public key authentication--with symmetric key operations that are much more efficient. Public key authentication is to verify the authenticity of another party's public key to make sure that the public key is really owned by the person it is claimed to belong to. In PKC, this operation involves an expensive signature verification on a certificate. We propose an efficient alternative that uses a one-way hash function only. Our scheme uses all sensors' public keys to construct a forest of Merkle trees of different heights. By optimally selecting the height of each tree, we can minimize the computation and communication costs. The performance of our scheme is evaluated in the paper.
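The Merkle-tree idea above can be sketched in a few lines of Python. The sketch assumes a single balanced SHA-256 tree over hashes of (node id, public key) pairs rather than the paper's forest of trees with optimally chosen heights; the node names and placeholder keys are purely illustrative.

import hashlib

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def leaf_hash(node_id: str, public_key: bytes) -> bytes:
    return H(node_id.encode() + public_key)

def build_tree(leaves):
    """Return a list of levels; levels[0] are leaf hashes, levels[-1] is the root."""
    levels = [leaves[:]]
    while len(levels[-1]) > 1:
        cur = levels[-1]
        if len(cur) % 2 == 1:            # duplicate last hash on odd-sized levels
            cur = cur + [cur[-1]]
        levels.append([H(cur[i] + cur[i + 1]) for i in range(0, len(cur), 2)])
    return levels

def auth_path(levels, index):
    """Sibling hashes needed to recompute the root from leaf `index`."""
    path = []
    for level in levels[:-1]:
        if len(level) % 2 == 1:
            level = level + [level[-1]]
        sibling = index ^ 1              # sibling differs only in the lowest bit
        path.append((level[sibling], sibling < index))
        index //= 2
    return path

def verify(root, node_id, public_key, path):
    h = leaf_hash(node_id, public_key)
    for sibling, sibling_is_left in path:
        h = H(sibling + h) if sibling_is_left else H(h + sibling)
    return h == root

# Toy example with four sensors and placeholder "public keys".
keys = {f"node{i}": bytes([i]) * 32 for i in range(4)}
leaves = [leaf_hash(nid, pk) for nid, pk in sorted(keys.items())]
levels = build_tree(leaves)
root = levels[-1][0]                     # preloaded into every sensor
path = auth_path(levels, 2)              # proof for "node2"
assert verify(root, "node2", keys["node2"], path)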
---
paper_title: Guide to Elliptic Curve Cryptography
paper_content:
Elliptic curves also figured prominently in the recent proof of Fermat's Last Theorem by Andrew Wiles. Originally pursued for purely aesthetic reasons, elliptic curves have recently been utilized in devising algorithms for factoring integers, primality proving, and in public-key cryptography. In this article, we aim to give the reader an introduction to elliptic curve cryptosystems, and to demonstrate why these systems provide relatively small block sizes, high-speed software and hardware implementations, and offer the highest strength-per-key-bit of any known public-key scheme.
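To make the strength-per-key-bit discussion concrete, the toy Python sketch below implements affine point addition and double-and-add scalar multiplication on a short Weierstrass curve y^2 = x^3 + ax + b over GF(p). The tiny parameters are illustrative only; real systems use standardized curves with 160-bit or larger primes.

# Toy short-Weierstrass curve y^2 = x^3 + a*x + b over GF(p).
p, a, b = 97, 2, 3
INF = None  # point at infinity

def inv(x):
    return pow(x % p, p - 2, p)          # modular inverse (p is prime)

def add(P, Q):
    if P is INF:
        return Q
    if Q is INF:
        return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return INF                       # P + (-P) = O
    if P == Q:
        lam = (3 * x1 * x1 + a) * inv(2 * y1) % p
    else:
        lam = (y2 - y1) * inv(x2 - x1) % p
    x3 = (lam * lam - x1 - x2) % p
    y3 = (lam * (x1 - x3) - y1) % p
    return (x3, y3)

def scalar_mul(k, P):
    """Double-and-add: the workhorse of ECDH/ECDSA."""
    R = INF
    while k:
        if k & 1:
            R = add(R, P)
        P = add(P, P)
        k >>= 1
    return R

# Pick a point on the curve, check it, and compute a multiple.
G = (3, 6)                               # 6^2 = 36 = 3^3 + 2*3 + 3 (mod 97)
assert (G[1] ** 2 - (G[0] ** 3 + a * G[0] + b)) % p == 0
print(scalar_mul(5, G))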
---
paper_title: LKHW: a directed diffusion-based secure multicast scheme for wireless sensor networks
paper_content:
In this paper, we present a mechanism for securing group communications in Wireless Sensor Networks (WSN). First, we derive an extension of logical key hierarchy (LKH). Then we merge the extension with directed diffusion. The resulting protocol, LKHW, combines the advantages of both LKH and directed diffusion: robustness in routing, and security from the tried and tested concepts of secure multicast. In particular, LKHW enforces both backward and forward secrecy, while incurring an energy cost that scales roughly logarithmically with the group size. This is the first security protocol that leverages directed diffusion, and we show how directed diffusion can be extended to incorporate security in an efficient manner.
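A rough, illustrative cost model for the logarithmic-scaling claim is sketched below in Python; the 2*depth - 1 message count is the textbook figure for a single member leaving a balanced binary LKH tree, not a number taken from the LKHW evaluation.

import math

def lkh_rekey_cost(group_size: int):
    """Back-of-envelope rekeying cost for a balanced binary LKH tree
    when one member leaves: every key on the leaving member's
    leaf-to-root path is replaced, and each replacement is encrypted
    once per remaining child subtree."""
    depth = math.ceil(math.log2(group_size))
    keys_replaced = depth
    rekey_messages = 2 * depth - 1       # classic bound for a binary key tree
    return keys_replaced, rekey_messages

for n in (8, 64, 1024):
    print(n, lkh_rekey_cost(n))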
---
paper_title: SPINS: security protocols for sensor networks
paper_content:
Wireless sensor networks will be widely deployed in the near future. While much research has focused on making these networks feasible and useful, security has received little attention. We present a suite of security protocols optimized for sensor networks: SPINS. SPINS has two secure building blocks: SNEP and μTESLA. SNEP includes: data confidentiality, two-party data authentication, and evidence of data freshness. μTESLA provides authenticated broadcast for severely resource-constrained environments. We implemented the above protocols, and show that they are practical even on minimal hardware: the performance of the protocol suite easily matches the data rate of our network. Additionally, we demonstrate that the suite can be used for building higher level protocols.
---
paper_title: LEAP+: Efficient security mechanisms for large-scale distributed sensor networks
paper_content:
We describe LEAP+ (Localized Encryption and Authentication Protocol), a key management protocol for sensor networks that is designed to support in-network processing, while at the same time restricting the security impact of a node compromise to the immediate network neighborhood of the compromised node. The design of the protocol is motivated by the observation that different types of messages exchanged between sensor nodes have different security requirements, and that a single keying mechanism is not suitable for meeting these different security requirements. LEAP+ supports the establishment of four types of keys for each sensor node: an individual key shared with the base station, a pairwise key shared with another sensor node, a cluster key shared with multiple neighboring nodes, and a global key shared by all the nodes in the network. LEAP+ also supports (weak) local source authentication without precluding in-network processing. Our performance analysis shows that LEAP+ is very efficient in terms of computational, communication, and storage costs. We analyze the security of LEAP+ under various attack models and show that LEAP+ is very effective in defending against many sophisticated attacks, such as HELLO flood attacks, node cloning attacks, and wormhole attacks. A prototype implementation of LEAP+ on a sensor network testbed is also described.
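All four key types are derived locally with a keyed pseudorandom function. The fragment below illustrates that general pattern with HMAC-SHA256 standing in for the PRF; the key labels, derivation inputs, and cluster-key handling are illustrative assumptions rather than the exact LEAP+ construction.

import hmac
import hashlib

def prf(key: bytes, data: bytes) -> bytes:
    return hmac.new(key, data, hashlib.sha256).digest()

# Transitory global key preloaded before deployment and erased after
# neighbor discovery (as in LEAP-style schemes); value is a placeholder.
K_init = b"preloaded-initial-key-material!!"

def master_key(node_id: str) -> bytes:
    return prf(K_init, node_id.encode())

def pairwise_key(id_a: str, id_b: str) -> bytes:
    # K_ab = PRF(master_key(B), A): A computes master_key(B) from the
    # initial key before erasing it, B computes it directly.
    return prf(master_key(id_b), id_a.encode())

def cluster_key(owner_id: str, epoch: int) -> bytes:
    return prf(master_key(owner_id), b"cluster|%d" % epoch)

k_ab = pairwise_key("node-17", "node-42")
print(k_ab.hex())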
---
paper_title: Directed diffusion: a scalable and robust communication paradigm for sensor networks
paper_content:
Advances in processor, memory and radio technology will enable small and cheap nodes capable of sensing, communication and computation. Networks of such nodes can coordinate to perform distributed sensing of environmental phenomena. In this paper, we explore the directed diffusion paradigm for such coordination. Directed diffusion is datacentric in that all communication is for named data. All nodes in a directed diffusion-based network are application-aware. This enables diffusion to achieve energy savings by selecting empirically good paths and by caching and processing data in-network. We explore and evaluate the use of directed diffusion for a simple remote-surveillance sensor network.
---
paper_title: Routing Security Issues in Wireless Sensor Networks: Attacks and Defenses
paper_content:
Wireless Sensor Networks (WSNs) are rapidly emerging as an important new area in wireless and mobile computing research. Applications of WSNs are numerous and growing, and range from indoor deployment scenarios in the home and office to outdoor deployment scenarios in an adversary's territory in a tactical battleground (Akyildiz et al., 2002). For military environments, dispersal of WSNs into an adversary's territory enables the detection and tracking of enemy soldiers and vehicles. For home/office environments, indoor sensor networks offer the ability to monitor the health of the elderly and to detect intruders via a wireless home security system. In each of these scenarios, lives and livelihoods may depend on the timeliness and correctness of the sensor data obtained from dispersed sensor nodes. As a result, such WSNs must be secured to prevent an intruder from obstructing the delivery of correct sensor data and from forging sensor data. To address the latter problem, end-to-end data integrity checksums and post-processing of sensor data can be used to identify forged sensor data (Estrin et al., 1999; Hu et al., 2003a; Ye et al., 2004). The focus of this chapter is on routing security in WSNs. Most of the currently existing routing protocols for WSNs optimize for the limited capabilities of the nodes and the application-specific nature of the network, but do not address the security aspects of the protocols. Although these protocols have not been designed with security as a goal, it is extremely important to analyze their security properties. When the defender has the liabilities of insecure wireless communication, limited node capabilities, and possible insider threats, and the adversaries can use powerful laptops with high energy and long-range communication to attack the network, designing a secure routing protocol for WSNs is obviously a non-trivial task.
---
paper_title: DOS-Resistant Authentication with Client Puzzles
paper_content:
Denial of service by server resource exhaustion has become a major security threat in open communications networks. Public-key authentication does not completely protect against the attacks because the authentication protocols often leave ways for an unauthenticated client to consume a server's memory space and computational resources by initiating a large number of protocol runs and inducing the server to perform expensive cryptographic computations. We show how stateless authentication protocols and the client puzzles of Juels and Brainard can be used to prevent such attacks.
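Juels and Brainard's puzzles are built from partial hash inversions; the sketch below uses a simplified hashcash-style variant (find a value whose hash has k leading zero bits) purely to illustrate the asymmetry the paper exploits: issuing a puzzle costs one random nonce and verifying costs one hash, while solving costs about 2^k hashes.

import hashlib
import os
import itertools

def leading_zero_bits(digest: bytes) -> int:
    bits = bin(int.from_bytes(digest, "big"))[2:].zfill(len(digest) * 8)
    return len(bits) - len(bits.lstrip("0"))

def issue_puzzle(difficulty: int):
    """Server side: a fresh nonce plus a difficulty level; issuing is cheap."""
    return os.urandom(16), difficulty

def solve_puzzle(nonce: bytes, difficulty: int) -> int:
    """Client side: brute-force a solution whose hash has `difficulty`
    leading zero bits. Expected work is about 2**difficulty hashes."""
    for x in itertools.count():
        digest = hashlib.sha256(nonce + x.to_bytes(8, "big")).digest()
        if leading_zero_bits(digest) >= difficulty:
            return x

def check_solution(nonce: bytes, difficulty: int, x: int) -> bool:
    """Server side: one hash to verify, so verification stays cheap under load."""
    digest = hashlib.sha256(nonce + x.to_bytes(8, "big")).digest()
    return leading_zero_bits(digest) >= difficulty

nonce, k = issue_puzzle(difficulty=12)
sol = solve_puzzle(nonce, k)
assert check_solution(nonce, k, sol)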
---
paper_title: A pairwise key predistribution scheme for wireless sensor networks
paper_content:
To achieve security in wireless sensor networks, it is important to be able to encrypt and authenticate messages sent between sensor nodes. Before doing so, keys for performing encryption and authentication must be agreed upon by the communicating parties. Due to resource constraints, however, achieving key agreement in wireless sensor networks is nontrivial. Many key agreement schemes used in general networks, such as Diffie-Hellman and other public-key based schemes, are not suitable for wireless sensor networks due to the limited computational abilities of the sensor nodes. Predistribution of secret keys for all pairs of nodes is not viable due to the large amount of memory this requires when the network size is large.In this paper, we provide a framework in which to study the security of key predistribution schemes, propose a new key predistribution scheme which substantially improves the resilience of the network compared to previous schemes, and give an in-depth analysis of our scheme in terms of network resilience and associated overhead. Our scheme exhibits a nice threshold property: when the number of compromised nodes is less than the threshold, the probability that communications between any additional nodes are compromised is close to zero. This desirable property lowers the initial payoff of smaller-scale network breaches to an adversary, and makes it necessary for the adversary to attack a large fraction of the network before it can achieve any significant gain.
---
paper_title: A key-management scheme for distributed sensor networks
paper_content:
Distributed Sensor Networks (DSNs) are ad-hoc mobile networks that include sensor nodes with limited computation and communication capabilities. DSNs are dynamic in the sense that they allow addition and deletion of sensor nodes after deployment to grow the network or replace failing and unreliable nodes. DSNs may be deployed in hostile areas where communication is monitored and nodes are subject to capture and surreptitious use by an adversary. Hence DSNs require cryptographic protection of communications, sensor-capture detection, key revocation and sensor disabling. In this paper, we present a key-management scheme designed to satisfy both operational and security requirements of DSNs. The scheme includes selective distribution and revocation of keys to sensor nodes as well as node re-keying without substantial computation and communication capabilities. It relies on probabilistic key sharing among the nodes of a random graph and uses simple protocols for shared-key discovery and path-key establishment, and for key revocation, re-keying, and incremental addition of nodes. The security and network connectivity characteristics supported by the key-management scheme are discussed and simulation experiments presented.
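The probabilistic key-sharing argument can be reproduced directly: with a pool of P keys and key rings of k keys per node, the probability that two nodes share at least one key is 1 - C(P-k, k)/C(P, k). The pool and ring sizes below are illustrative choices, not figures from the paper.

from math import comb

def p_share(pool_size: int, ring_size: int) -> float:
    """Probability that two key rings drawn uniformly at random
    (without replacement) from the same pool share at least one key."""
    P, k = pool_size, ring_size
    return 1.0 - comb(P - k, k) / comb(P, k)

# With a 10,000-key pool, rings of 75 keys already give roughly a 0.43
# chance that two neighbors can establish a direct shared key.
for k in (50, 75, 100, 150):
    print(k, round(p_share(10_000, k), 3))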
---
paper_title: Distributed detection of node replication attacks in sensor networks
paper_content:
The low-cost, off-the-shelf hardware components in unshielded sensor-network nodes leave them vulnerable to compromise. With little effort, an adversary may capture nodes, analyze and replicate them, and surreptitiously insert these replicas at strategic locations within the network. Such attacks may have severe consequences; they may allow the adversary to corrupt network data or even disconnect significant parts of the network. Previous node replication detection schemes depend primarily on centralized mechanisms with single points of failure, or on neighborhood voting protocols that fail to detect distributed replications. To address these fundamental limitations, we propose two new algorithms based on emergent properties (Gligor (2004)), i.e., properties that arise only through the collective action of multiple nodes. Randomized multicast distributes node location information to randomly-selected witnesses, exploiting the birthday paradox to detect replicated nodes, while line-selected multicast uses the topology of the network to detect replication. Both algorithms provide globally-aware, distributed node-replica detection, and line-selected multicast displays particularly strong performance characteristics. We show that emergent algorithms represent a promising new approach to sensor network security; moreover, our results naturally extend to other classes of networks in which nodes can be captured, replicated and re-inserted by an adversary.
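The birthday-paradox argument behind randomized multicast can be checked numerically: if the location claims for two conflicting positions each reach g randomly chosen witnesses out of n nodes, the probability that some witness sees both is 1 - C(n-g, g)/C(n, g), roughly 1 - e^(-g^2/n). The network and witness-set sizes below are illustrative.

from math import comb, exp, sqrt

def p_collision(n: int, g: int) -> float:
    """Probability that two independent sets of g randomly chosen
    witnesses (out of n nodes) have at least one node in common."""
    return 1.0 - comb(n - g, g) / comb(n, g)

n = 10_000
g = int(sqrt(n) * 2)                     # ~2*sqrt(n) witnesses per claim
# Compare the exact value with the e^(-g^2/n) approximation.
print(g, round(p_collision(n, g), 3), round(1 - exp(-g * g / n), 3))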
---
paper_title: Protecting Access to People Location Information
paper_content:
Ubiquitous computing provides new types of information for which access needs to be controlled. For instance, a person’s current location is a sensitive piece of information, and only authorized entities should be able to learn it. We present several challenges that arise for the specification and implementation of policies controlling access to location information. For example, there can be multiple sources of location information, policies need to be flexible, conflicts between policies might occur, and privacy issues need to be taken into account. Different environments handle these challenges in a different way. We discuss the challenges in the context of a hospital and a university environment. We show how our design of an access control mechanism for a system providing people location information addresses the challenges. Our mechanism can be deployed in different environments. We demonstrate feasibility of our design with an example implementation based on digital certificates.
---
paper_title: Framework for security and privacy in automotive telematics
paper_content:
Automotive telematics may be defined as the information-intensive applications that are being enabled for vehicles by a combination of telecommunications and computing technology. Telematics by its nature requires the capture of sensor data, storage and exchange of data to obtain remote services. In order for automotive telematics to grow to its full potential, telematics data must be protected. Data protection must include privacy and security for end-users, service providers and application providers. In this paper, we propose a new framework for data protection that is built on the foundation of privacy and security technologies. The privacy technology enables users and service providers to define flexible data model and policy models. The security technology provides traditional capabilities such as encryption, authentication, non-repudiation. In addition, it provides secure environments for protected execution, which is essential to limiting data access to specific purposes.
---
paper_title: Preserving Source-Location Privacy in Wireless Sensor Networks
paper_content:
Wireless sensor networks (WSN) have the potential to be widely used in many areas for unattended event monitoring. Mainly due to the lack of a protected physical boundary, wireless communications are vulnerable to unauthorized interception and detection. Privacy is becoming one of the major issues that jeopardize the successful deployment of wireless sensor networks. While confidentiality of the message can be ensured through content encryption, it is much more difficult to adequately address the source-location privacy. For WSN, the source-location privacy service is further complicated by the fact that the sensor nodes consist of low-cost and low-power radio devices; computationally intensive cryptographic algorithms (such as public-key cryptosystems) and large-scale broadcasting-based protocols are therefore not suitable for WSN. In this paper, we propose a scheme to provide both content confidentiality and source-location privacy through routing to a randomly selected intermediate node (RRIN) and a network mixing ring (NMR), where the RRIN provides local source-location privacy and the NMR yields network-level (global) source-location privacy. While being able to provide source-location privacy for WSN, our simulation results also demonstrate that the proposed scheme is very efficient and can be used for practical applications.
---
paper_title: Source-location privacy in energy-constrained sensor network routing
paper_content:
As sensor-driven applications become increasingly integrated into our lives, issues related to sensor privacy will become increasingly important. Although many privacy-related issues can be addressed by security mechanisms, one sensor network privacy issue that cannot be adequately addressed by network security is confidentiality of the source sensor's location. In this paper, we focus on protecting the source's location by introducing suitable modifications to sensor routing protocols to make it difficult for an adversary to backtrack to the origin of the sensor communication. In particular, we focus on the class of flooding protocols. While developing and evaluating our privacy-aware routing protocols, we jointly consider issues of location-privacy as well as the amount of energy consumed by the sensor network. Motivated by the observations, we propose a flexible routing strategy, known as phantom routing, which protects the source's location. Phantom routing is a two-stage routing scheme that first consists of a directed walk along a random direction, followed by routing from the phantom source to the sink. Our investigations have shown that phantom routing is a powerful technique for protecting the location of the source during sensor transmissions.
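A highly simplified rendering of the two-stage idea is sketched below on a grid topology; the paper's second phase uses flooding or single-path routing, so the greedy forwarding, grid layout, and hop counts here are simplifying assumptions rather than the evaluated protocol.

import random

def phantom_route(source, sink, walk_hops, grid=50):
    """Two-phase phantom routing sketch: (1) a directed random walk
    away from the source creates a 'phantom source'; (2) the packet is
    then forwarded greedily toward the sink."""
    x, y = source
    dx, dy = random.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])  # walk direction
    path = [(x, y)]
    for _ in range(walk_hops):                                   # phase 1
        x = min(max(x + dx, 0), grid - 1)
        y = min(max(y + dy, 0), grid - 1)
        path.append((x, y))
    while (x, y) != sink:                                        # phase 2
        if x != sink[0]:
            x += 1 if sink[0] > x else -1
        elif y != sink[1]:
            y += 1 if sink[1] > y else -1
        path.append((x, y))
    return path

route = phantom_route(source=(10, 10), sink=(40, 40), walk_hops=15)
print("phantom source:", route[15], "total hops:", len(route) - 1)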
---
paper_title: How to leak a secret
paper_content:
In this paper we formalize the notion of a ring signature, which makes it possible to specify a set of possible signers without revealing which member actually produced the signature. Unlike group signatures, ring signatures have no group managers, no setup procedures, no revocation procedures, and no coordination: any user can choose any set of possible signers that includes himself, and sign any message by using his secret key and the others' public keys, without getting their approval or assistance. Ring signatures provide an elegant way to leak authoritative secrets in an anonymous way, to sign casual email in a way which can only be verified by its intended recipient, and to solve other problems in multiparty computations. The main contribution of this paper is a new construction of such signatures which is unconditionally signer-ambiguous, provably secure in the random oracle model, and exceptionally efficient: adding each ring member increases the cost of signing or verifying by a single modular multiplication and a single symmetric encryption.
---
paper_title: Privacy and security in library RFID: issues, practices, and architectures
paper_content:
We expose privacy issues related to Radio Frequency Identification (RFID) in libraries, describe current deployments, and suggest novel architectures for library RFID. Libraries are a fast growing application of RFID; the technology promises to relieve repetitive strain injury, speed patron self-checkout, and make possible comprehensive inventory. Unlike supply-chain RFID, library RFID requires item-level tagging, thereby raising immediate patron privacy issues. Current conventional wisdom suggests that privacy risks are negligible unless an adversary has access to library databases. We show this is not the case. In addition, we identify private authentication as a key technical issue: how can a reader and tag that share a secret efficiently authenticate each other without revealing their identities to an adversary? Previous solutions to this problem require reader work linear in the number of tags. We give a general scheme for building private authentication with work logarithmic in the number of tags, given a scheme with linear work as a sub protocol. This scheme may be of independent interest beyond RFID applications. We also give a simple scheme that provides security against a passive eavesdropper using XOR alone, without pseudo-random functions or other heavy crypto operations.
---
paper_title: Preserving source location privacy in monitoring-based wireless sensor networks
paper_content:
While a wireless sensor network is deployed to monitor certain events and pinpoint their locations, the location information is intended only for legitimate users. However, an eavesdropper can monitor the traffic and deduce the approximate location of monitored objects in certain situations. We first describe a successful attack against the flooding-based phantom routing, proposed in the seminal work by Celal Ozturk, Yanyong Zhang, and Wade Trappe. Then, we propose GROW (Greedy Random Walk), a two-way random walk, i.e., from both source and sink, to reduce the chance an eavesdropper can collect the location information. We improve the delivery rate by using local broadcasting and greedy forwarding. Privacy protection is verified under a backtracking attack model. The message delivery time is a little longer than that of the broadcasting-based approach, but it is still acceptable if we consider the enhanced privacy-preserving capability of this new approach. At the same time, the energy consumption is less than half the energy consumption of flooding-based phantom routing, which is preferred in a low duty cycle, environmental monitoring sensor network.
---
paper_title: Resilient aggregation in sensor networks
paper_content:
This paper studies security for data aggregation in sensor networks. Current aggregation schemes were designed without security in mind and there are easy attacks against them. We examine several approaches for making these aggregation schemes more resilient against certain attacks, and we propose a mathematical framework for formally evaluating their security.
---
paper_title: A Survey on the Encryption of Convergecast Traffic with In-Network Processing
paper_content:
We present an overview of end-to-end encryption solutions for convergecast traffic in wireless sensor networks that support in-network processing at forwarding intermediate nodes. Unlike hop-by-hop based encryption approaches, aggregator nodes can perform in-network processing on encrypted data. Since it is not required to decrypt the incoming ciphers before aggregating, the substantial advantages are 1) neither keys nor plaintext is available at aggregating nodes, 2) the overall energy consumption of the backbone can be reduced, 3) the system is more flexible with respect to changing routes, and finally 4) the overall system security increases. We provide a qualitative comparison of available approaches, point out their respective strengths and weaknesses, and investigate opportunities for further research.
---
paper_title: CDA: concealed data aggregation for reverse multicast traffic in wireless sensor networks
paper_content:
End-to-end encryption for wireless sensor networks is a challenging problem. To save the overall energy resources of the network, it is agreed that sensed data need to be consolidated and aggregated on their way to the final destination. We present an approach that (1) conceals sensed data end-to-end, by (2) still providing efficient in-network data aggregation. The aggregating intermediate nodes are not required to operate on the sensed plaintext data. We apply a particular class of encryption transformation and exemplarily discuss the approach on the basis of two aggregation functions. We use actual implementation to show that the approach is feasible and flexible and frequently even more energy efficient than hop-by-hop encryption.
---
paper_title: Secure comparison of encrypted data in wireless sensor networks
paper_content:
End-to-end encryption schemes that support operations over ciphertext are of utmost importance for commercial private-party wireless sensor network implementations to become meaningful and profitable. For wireless sensor networks, we demonstrated in our previous work that privacy homomorphisms, when used for this purpose, offer two striking advantages apart from end-to-end concealment of data and the ability to operate on ciphertexts: flexibility by keyless aggregation and conservation and balancing of aggregator backbone energy. We offered proof of concept by applying a certain privacy homomorphism for sensor network applications that rely on the addition operation. But a large class of aggregator functions like median computation or finding the maximum/minimum rely exclusively on comparison operations. Unfortunately, as shown by Rivest et al., any privacy homomorphism is insecure even against ciphertext-only attacks if it supports comparison operations. In this paper we show that a particular order-preserving encryption scheme achieves the above mentioned energy benefits and flexibility when used to support comparison operations over encrypted texts for wireless sensor networks, while also managing to hide the plaintext distribution and being secure against ciphertext-only attacks. The scheme is shown to have reasonable memory and computation overhead when applied for wireless sensor networks.
---
paper_title: SIA: Secure information aggregation in sensor networks
paper_content:
In sensor networks, data aggregation is a vital primitive enabling efficient data queries. An on-site aggregator device collects data from sensor nodes and produces a condensed summary which is forwarded to the off-site querier, thus reducing the communication cost of the query. Since the aggregator is on-site, it is vulnerable to physical compromise attacks. A compromised aggregator may report false aggregation results. Hence, it is essential that techniques are available to allow the querier to verify the integrity of the result returned by the aggregator node. We propose a novel framework for secure information aggregation in sensor networks. By constructing efficient random sampling mechanisms and interactive proofs, we enable the querier to verify that the answer given by the aggregator is a good approximation of the true value, even when the aggregator and a fraction of the sensor nodes are corrupted. In particular, we present efficient protocols for secure computation of the median and average of the measurements, for the estimation of the network size, for finding the minimum and maximum sensor reading, and for random sampling and leader election. Our protocols require only sublinear communication between the aggregator and the user.
---
paper_title: Energy-Efficient Secure Pattern Based Data Aggregation for Wireless Sensor Networks
paper_content:
Data aggregation in wireless sensor networks eliminates redundancy to improve bandwidth utilization and energy-efficiency of sensor nodes. This paper presents a secure energy-efficient data aggregation protocol called ESPDA (Energy-Efficient Secure Pattern based Data Aggregation). Unlike conventional data aggregation techniques, ESPDA prevents the redundant data transmission from sensor nodes to cluster-heads. If sensor nodes sense the same data, ESPDA first puts all but one of them into sleep mode and generate pattern codes to represent the characteristics of data sensed by sensor nodes. Cluster-heads implement data aggregation based on pattern codes and only distinct data in encrypted form is transmitted from sensor nodes to the base station via cluster-heads. Due to the use of pattern codes, cluster-heads do not need to know the sensor data to perform data aggregation, which allows sensor nodes to establish secure end-to-end communication links with base station. Therefore, there is no need for encryption/decryption key distribution between the cluster-heads and sensor nodes. Moreover, the use of NOVSF Block-Hopping technique improves the security by randomly changing the mapping of data blocks to NOVSF time slots. Performance evaluation shows that ESPDA outperforms conventional data aggregation methods up to 50% in bandwidth efficiency.
---
paper_title: Efficient and provably secure aggregation of encrypted data in wireless sensor networks
paper_content:
Wireless sensor networks (WSNs) are composed of tiny devices with limited computation and battery capacities. For such resource-constrained devices, data transmission is a very energy-consuming operation. To maximize WSN lifetime, it is essential to minimize the number of bits sent and received by each device. One natural approach is to aggregate sensor data along the path from sensors to the sink. Aggregation is especially challenging if end-to-end privacy between sensors and the sink (or aggregate integrity) is required. In this article, we propose a simple and provably secure encryption scheme that allows efficient additive aggregation of encrypted data. Only one modular addition is necessary for ciphertext aggregation. The security of the scheme is based on the indistinguishability property of a pseudorandom function (PRF), a standard cryptographic primitive. We show that aggregation based on this scheme can be used to efficiently compute statistical values, such as mean, variance, and standard deviation of sensed data, while achieving significant bandwidth savings. To protect the integrity of the aggregated data, we construct an end-to-end aggregate authentication scheme that is secure against outsider-only attacks, also based on the indistinguishability property of PRFs.
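The key property is that aggregation of ciphertexts costs a single modular addition. The sketch below reproduces that additive structure with HMAC-SHA256 standing in for the PRF-derived keystream; the modulus, key material, and epoch encoding are illustrative assumptions, not the paper's parameters.

import hmac
import hashlib

M = 2 ** 32                              # modulus large enough for the aggregate

def keystream(node_key: bytes, epoch: int) -> int:
    """Per-epoch pad derived from a PRF (HMAC-SHA256 here)."""
    tag = hmac.new(node_key, epoch.to_bytes(8, "big"), hashlib.sha256).digest()
    return int.from_bytes(tag[:4], "big")

def encrypt(node_key: bytes, epoch: int, reading: int) -> int:
    return (reading + keystream(node_key, epoch)) % M

def aggregate(ciphertexts) -> int:
    # Aggregators only add ciphertexts; they never see plaintext readings.
    return sum(ciphertexts) % M

def decrypt_sum(node_keys, epoch: int, agg: int) -> int:
    pads = sum(keystream(k, epoch) for k in node_keys) % M
    return (agg - pads) % M

node_keys = [bytes([i]) * 16 for i in range(1, 6)]
readings = [21, 23, 22, 20, 24]          # e.g. temperature samples
epoch = 7
cts = [encrypt(k, epoch, m) for k, m in zip(node_keys, readings)]
assert decrypt_sum(node_keys, epoch, aggregate(cts)) == sum(readings)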
---
paper_title: Concealed Data Aggregation for Reverse Multicast Traffic in Sensor Networks: Encryption, Key Distribution, and Routing Adaptation
paper_content:
Routing in wireless sensor networks is different from that in commonsense mobile ad-hoc networks. It mainly needs to support reverse multicast traffic to one particular destination in a multihop manner. For such a communication pattern, end-to-end encryption is a challenging problem. To save the overall energy resources of the network, sensed data needs to be consolidated and aggregated on its way to the final destination. We present an approach that 1) conceals sensed data end-to-end by 2) still providing efficient and flexible in-network data aggregation. The aggregating intermediate nodes are not required to operate on the sensed plaintext data. We apply a particular class of encryption transformations and discuss techniques for computing the aggregation functions "average" and "movement detection." We show that the approach is feasible for the class of "going down" routing protocols. We consider the risk of corrupted sensor nodes by proposing a key predistribution algorithm that limits an attacker's gain and show how key predistribution and a key-ID sensitive "going down" routing protocol help increase the robustness and reliability of the connected backbone
---
paper_title: A witness-based approach for data fusion assurance in wireless sensor networks
paper_content:
In wireless sensor networks, sensor nodes are spread randomly over the coverage area to collect information of interest. Data fusion is used to process these collected information before they are sent to the base station, the observer of the sensor network. We study the security of the data fusion process in this work. In particular, we propose a witness-based solution to assure the validation of the data sent from data fusion nodes to the base station. We also present the theoretical analysis for the overhead associated with the mechanism, which indicates that even in an extremely harsh environment the overhead is low for the proposed mechanism.
---
paper_title: PDA: Privacy-Preserving Data Aggregation in Wireless Sensor Networks
paper_content:
Providing efficient data aggregation while preserving data privacy is a challenging problem in wireless sensor networks research. In this paper, we present two privacy-preserving data aggregation schemes for additive aggregation functions. The first scheme, cluster-based private data aggregation (CPDA), leverages a clustering protocol and algebraic properties of polynomials. It has the advantage of incurring less communication overhead. The second scheme, Slice-Mix-AggRegaTe (SMART), builds on slicing techniques and the associative property of addition. It has the advantage of incurring less computation overhead. The goal of our work is to bridge the gap between collaborative data collection by wireless sensor networks and data privacy. We assess the two schemes by privacy-preservation efficacy, communication overhead, and data aggregation accuracy. We present simulation results of our schemes and compare their performance to a typical data aggregation scheme, TAG, where no data privacy protection is provided. Results show the efficacy and efficiency of our schemes. To the best of our knowledge, this paper is among the first on privacy-preserving data aggregation in wireless sensor networks.
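The slicing idea behind SMART can be illustrated in a few lines: each node splits its private reading into additive slices, keeps one, hands the rest to other nodes, and every node reports only the sum of the slices it holds, so the aggregate is preserved while no single report reveals a raw reading. The slice count, value ranges, and random peer selection below are simplifications; the paper exchanges slices among nearby nodes over encrypted links.

import random

def make_slices(value: int, num_slices: int):
    """Split a private reading into additive slices that sum back to it."""
    slices = [random.randint(-1000, 1000) for _ in range(num_slices - 1)]
    slices.append(value - sum(slices))
    return slices

def smart_round(private_values, num_slices=3):
    n = len(private_values)
    inbox = [[] for _ in range(n)]
    for i, v in enumerate(private_values):
        slices = make_slices(v, num_slices)
        inbox[i].append(slices[0])                       # keep one slice
        for s in slices[1:]:                             # send the rest to random peers
            inbox[random.choice([j for j in range(n) if j != i])].append(s)
    reports = [sum(box) for box in inbox]                # each node reports a partial sum
    return sum(reports)                                  # aggregate equals the true sum

values = [12, 7, 30, 5, 9]
assert smart_round(values) == sum(values)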
---
paper_title: Secure aggregation for wireless networks
paper_content:
An emerging class of important applications uses ad hoc wireless networks of low-power sensor devices to monitor and send information about a possibly hostile environment to a powerful base station connected to a wired network. To conserve power, intermediate network nodes should aggregate results from individual sensors. However, this opens the risk that a single compromised sensor device can render the network useless, or worse, mislead the operator into trusting a false reading. We present a protocol that provides a secure aggregation mechanism for wireless networks that is resilient to both intruder devices and single device key compromises. Our protocol is designed to work within the computation, memory and power consumption limits of inexpensive sensor devices, but takes advantage of the properties of wireless networking, as well as the power asymmetry between the devices and the base station.
---
paper_title: Secure verification of location claims
paper_content:
With the growing prevalence of sensor and wireless networks comes a new demand for location-based access control mechanisms. We introduce the concept of secure location verification, and we show how it can be used for location-based access control. Then, we present the Echo protocol, a simple method for secure location verification. The Echo protocol is extremely lightweight: it does not require time synchronization, cryptography, or very precise clocks. Hence, we believe that it is well suited for use in small, cheap, mobile devices.
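The Echo protocol's acceptance test reduces to a timing inequality: a prover at distance d can answer no sooner than d/c (RF challenge out) plus d/s (sound echo back), so any reply within that budget is consistent with the claimed location. The speeds and the zero processing-delay allowance below are illustrative assumptions.

SPEED_OF_LIGHT = 3.0e8      # m/s, RF challenge
SPEED_OF_SOUND = 343.0      # m/s, ultrasound echo

def max_round_trip(distance_m: float, processing_s: float = 0.0) -> float:
    """Upper bound on the legitimate round-trip time for a prover at
    `distance_m` meters: RF out, sound back, plus any allowed delay."""
    return distance_m / SPEED_OF_LIGHT + distance_m / SPEED_OF_SOUND + processing_s

def accept_claim(claimed_distance_m: float, measured_rtt_s: float) -> bool:
    # Sound cannot be sped up, so a prover farther away than claimed
    # cannot reply in time; replying too early only hurts the prover.
    return measured_rtt_s <= max_round_trip(claimed_distance_m)

print(accept_claim(claimed_distance_m=10.0, measured_rtt_s=0.025))   # ~29 ms allowed -> True
print(accept_claim(claimed_distance_m=10.0, measured_rtt_s=0.040))   # too slow -> False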
---
paper_title: Tamper Resistance -- a Cautionary Note
paper_content:
An increasing number of systems, from pay-TV to electronic purses, rely on the tamper resistance of smartcards and other security processors. We describe a number of attacks on such systems -- some old, some new, and some that are simply little known outside the chip testing community. We conclude that trusting tamper resistance is problematic; smartcards are broken routinely, and even a device that was described by a government signals agency as 'the most secure processor generally available' turns out to be vulnerable. Designers of secure systems should consider the consequences with care.
---
paper_title: SWATT: softWare-based attestation for embedded devices
paper_content:
We expect a future where we are surrounded by embedded devices, ranging from Java-enabled cell phones to sensor networks and smart appliances. An adversary can compromise our privacy and safety by maliciously modifying the memory contents of these embedded devices. In this paper, we propose a softWare-based attestation technique (SWATT) to verify the memory contents of embedded devices and establish the absence of malicious changes to the memory contents. SWATT does not need physical access to the device's memory, yet provides memory content attestation similar to TCG or NGSCB without requiring secure hardware. SWATT can detect any change in memory contents with high probability, thus detecting viruses, unexpected configuration settings, and Trojan Horses. To circumvent SWATT, we expect that an attacker needs to change the hardware to hide memory content changes. We present an implementation of SWATT in off-the-shelf sensor network devices, which enables us to verify the contents of the program memory even while the sensor node is running.
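A toy Python rendering of the pseudorandom-traversal idea follows; the memory size, iteration count, and hash-based checksum are assumptions for illustration (the actual SWATT checksum is a carefully timed 16-bit procedure tailored to the microcontroller, and real verification also measures response time, which a sketch cannot capture).

import hashlib
import os
import random

MEMORY_SIZE = 4096

def attest(memory: bytes, seed: int, iterations: int = 8 * MEMORY_SIZE) -> bytes:
    """Walk memory in a pseudorandom order derived from the verifier's
    seed and fold every visited byte into a running checksum. An
    attacker who patched or relocated code would need per-access
    address checks, which show up as extra latency in the real scheme."""
    rng = random.Random(seed)
    checksum = hashlib.sha256(seed.to_bytes(8, "big"))
    for _ in range(iterations):
        addr = rng.randrange(MEMORY_SIZE)
        checksum.update(addr.to_bytes(2, "big") + memory[addr:addr + 1])
    return checksum.digest()

# The verifier keeps an exact copy of the expected firmware image.
firmware = os.urandom(MEMORY_SIZE)
challenge = 0xC0FFEE
expected = attest(firmware, challenge)

tampered = bytearray(firmware)
tampered[100] ^= 0xFF                    # one-byte malicious patch
# Detected except with probability ~(1 - 1/MEMORY_SIZE)**iterations.
print("tamper detected:", attest(bytes(tampered), challenge) != expected)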
---
paper_title: Low Cost Attacks on Tamper Resistant Devices
paper_content:
There has been considerable recent interest in the level of tamper resistance that can be provided by low cost devices such as smart-cards. It is known that such devices can be reverse engineered using chip testing equipment, but a state of the art semiconductor laboratory costs millions of dollars. In this paper, we describe a number of attacks that can be mounted by opponents with much shallower pockets. Three of them involve special (but low cost) equipment: differential fault analysis, chip rewriting, and memory remanence. There are also attacks based on good old fashioned protocol failure which may not require any special equipment at all. We describe and give examples of each of these. Some of our attacks are significant improvements on the state of the art; others are useful cautionary tales. Together, they show that building tamper resistant devices, and using them effectively, is much harder than it looks.
---
paper_title: Challenges in intrusion detection for wireless ad-hoc networks
paper_content:
This paper presents a brief survey of current research in intrusion detection for wireless ad-hoc networks. In addition to examining the challenges of providing intrusion detection in this environment, this paper reviews current efforts to detect attacks against the ad-hoc routing infrastructure, as well as detecting attacks directed against the mobile nodes. This paper also examines the intrusion detection architectures that may be deployed for different wireless ad-hoc network infrastructures, as well as proposed methods of intrusion response.
---
paper_title: On supporting distributed collaboration in sensor networks
paper_content:
In sensor networks, nodes may malfunction due to the hostile environment. Therefore, dealing with node failure is a very important research issue. In this paper, we study distributed cooperative failure detection techniques. In the proposed techniques, the nodes around a suspected node collaborate with each other to reach an agreement on whether the suspect is faulty or malicious. We first formalize the problem as how to construct a dominating tree to cover all the neighbors of the suspect and give the lower bound of the message complexity. Two tree-based propagation collection protocols are proposed to construct dominating trees and collect information via the tree structure. Instead of using the traditional flooding technique, we propose a coverage-based heuristic to improve the system performance. Theoretical analysis and simulation results show that the heuristic can help achieve a higher tree coverage with lower message complexity, lower delay and lower energy consumption.
---
paper_title: An interleaved hop-by-hop authentication scheme for filtering of injected false data in sensor networks
paper_content:
Sensor networks are often deployed in unattended environments, thus leaving these networks vulnerable to false data injection attacks in which an adversary injects false data into the network with the goal of deceiving the base station or depleting the resources of the relaying nodes. Standard authentication mechanisms cannot prevent this attack if the adversary has compromised one or a small number of sensor nodes. In this paper, we present an interleaved hop-by-hop authentication scheme that guarantees that the base station will detect any injected false data packets when no more than a certain number t nodes are compromised. Further, our scheme provides an upper bound B for the number of hops that a false data packet could be forwarded before it is detected and dropped, given that there are up to t colluding compromised nodes. We show that in the worst case B is O(t/sup 2/). Through performance analysis, we show that our scheme is efficient with respect to the security it provides, and it also allows a tradeoff between security and performance.
---
paper_title: Security in Ad Hoc Networks : a General Intrusion Detection Architecture Enhancing Trust Based Approaches
paper_content:
In the last few years, the performance of wireless technologies has increased tremendously, thus opening new fields of application in the domain of networking. One such field concerns mobile ad hoc networks (MANETs), in which mobile nodes organise themselves in a network without the help of any predefined infrastructure. Securing MANETs is just as important, if not more so, as securing traditional wired networks. Existing solutions can be used to obtain a certain level of security. Nevertheless, these solutions may not always be suitable to wireless networks. Furthermore, ad hoc networks have their own vulnerabilities that cannot be tackled by these solutions. To obtain an acceptable level of security in such a context, traditional security solutions should be coupled with an intrusion detection mechanism. In this paper we show how ad hoc networks can be, to a certain extent, secured using traditional techniques. We then examine the different intrusion detection techniques and point out the reasons why they usually cannot be used in an ad hoc context. Finally, we go through the requirements of an intrusion detection system for ad hoc networks, and define an adapted architecture for an intrusion detection system for MANETs.
---
paper_title: An Intrusion Detection Architecture for Clustered Wireless Ad Hoc Networks
paper_content:
To address the shortcomings of the existing single-function management systems for electric power marketing management, an energy acquisition and operation management system based on a customized integrated-services distribution automation terminal is proposed, using wireless ad-hoc networks and wireless mesh networking technology to solve the 'last mile' problem in the distribution automation system. As an instance of a wireless sensor network (WSN), it provides a highly reliable, real-time on-site intelligent network with strong adaptability to the environment. This paper describes the structure of the energy acquisition and operation management system, and its uplink and downlink communication channels and interfaces. The main function module design and system characteristics are detailed. The system seamlessly integrates the power on-site management system, distribution transformer management system, and automatic meter reading system, providing a new intensive, diversified, practical system to replace the previous single-function systems.
---
paper_title: Highly reliable trust establishment scheme in ad hoc networks
paper_content:
Securing ad hoc networks in a fully self-organized way is effective and light-weight, but fails to accomplish trust initialization in many trust deficient scenarios. To overcome this problem, this paper aims at building well established trust relationships in ad hoc networks without relying on any pre-defined assumption. We propose a probabilistic solution based on distributed trust model. A secret dealer is introduced only in the system bootstrapping phase to complement the assumption in trust initialization. With it, much shorter and more robust trust chains are able to be constructed with high probability. A fully self-organized trust establishment approach is then adopted to conform to the dynamic membership changes. The simulation results on both static and dynamic performances show that our scheme is highly resilient to dynamic membership changing and scales well. The lack of initial trust establishment mechanisms in most higher level security solutions (e.g. key management schemes, secure routing protocols) for ad hoc networks makes them benefit from our scheme.
---
paper_title: Establishing Trust In Pure Ad-hoc Networks
paper_content:
An ad-hoc network of wireless nodes is a temporarily formed network, created, operated and managed by the nodes themselves. It is also often termed an infrastructure-less, self-organized, or spontaneous network. Nodes assist each other by passing data and control packets from one node to another, often beyond the wireless range of the original sender. The execution and survival of an ad-hoc network is solely dependent upon the cooperative and trusting nature of its nodes. However, this naive dependency on intermediate nodes makes the ad-hoc network vulnerable to passive and active attacks by malicious nodes. A number of protocols have been developed to secure ad-hoc networks using cryptographic schemes, but all rely on the presence of an omnipresent, and often omniscient, trust authority. As this paper describes, dependence on a central trust authority is an impractical requirement for ad-hoc networks. We present a model for trust-based communication in ad-hoc networks that also demonstrates that a central trust authority is a superfluous requirement. The model introduces the notion of belief and provides a dynamic measure of reliability and trustworthiness in an ad hoc network.
---
paper_title: Enforcing cooperative resource sharing in untrusted peer-to-peer environment
paper_content:
Peer-to-Peer (P2P) computing is widely recognized as a promising paradigm for building next-generation distributed applications. However, the autonomous, heterogeneous, and decentralized nature of participating peers introduces the following challenge for resource sharing: how to make peers profitable in the untrusted P2P environment? To address the problem, we present a self-policing and distributed approach that combines two models: PET, a personalized trust model, and M-CUBE, a multiple-currency based economic model, to lay a foundation for resource sharing in untrusted P2P computing environments. PET is a flexible trust model that can adapt to different requirements and provides solid support for the currency management in M-CUBE. M-CUBE provides a novel self-policing and quality-aware framework for the sharing of multiple resources, including both homogeneous and heterogeneous resources. We evaluate the efficacy and performance of this approach in the context of a real application, peer-to-peer Web server sharing. Our results show that our approach is flexible enough to adapt to different situations and effective in making the system profitable, especially for large-scale systems.
---
paper_title: Peer-to-Peer: Harnessing the Power of Disruptive Technologies
paper_content:
From the Publisher: Upstart software projects Napster, Gnutella, and Freenet have dominated newspaper headlines, challenging traditional approaches to content distribution with their revolutionary use of peer-to-peer file-sharing technologies. Reporters try to sort out the ramifications of seemingly ungoverned peer-to-peer networks. Lawyers, business leaders, and social commentators debate the virtues and evils of these bold new distributed systems. But what's really behind such disruptive technologies -- the breakthrough innovations that have rocked the music and media worlds? And what lies ahead? In this book, key peer-to-peer pioneers take us beyond the headlines and hype and show how the technology is changing the way we communicate and exchange information. Those working to advance peer-to-peer as a technology, a business opportunity, and an investment offer their insights into how the technology has evolved and where it's going. They explore the problems they've faced, the solutions they've discovered, the lessons they've learned, and their goals for the future of computer networking. Until now, Internet communities have been limited by the flat interactive qualities of email and network newsgroups, where people can exchange recommendations and ideas but have great difficulty commenting on one another's postings, structuring information, performing searches, and creating summaries. Peer-to-peer challenges the traditional authority of the client/server model, allowing shared information to reside instead with producers and users. Peer-to-peer networks empower users to collaborate on producing and consuming information, adding to it, commenting on it, and building communities around it. This compilation represents the collected wisdom of today's peer-to-peer luminaries. It includes contributions from Gnutella's Gene Kan, Freenet's Brandon Wiley, Jabber's Jeremie Miller, and many others -- plus serious discussions of topics ranging from accountability and trust to security and performance. Fraught with questions and promise, peer-to-peer is sure to remain on the computer industry's center stage for years to come.
---
paper_title: Trust evaluation based security solution in ad hoc networks
paper_content:
Ad hoc networks are a new paradigm of networks offering unrestricted mobility without any underlying infrastructure. Ad hoc networks have salient characteristics that are totally different from those of conventional networks, and these characteristics pose extra security challenges. In an ad hoc network, a node should not trust any peer by default. However, traditional cryptographic solutions are useless against threats from internal compromised nodes. Thus, new mechanisms are needed to provide an effective security solution for ad hoc networks. In this paper, a trust evaluation based security solution is proposed to support effective security decisions on data protection, secure routing and other network activities. Logical and computational trust analysis and evaluation are deployed among network nodes. Each node's evaluation of trust in other nodes is based on study and inference over such trust factors as experience statistics, data value, intrusion detection results, and references from other nodes, as well as the node owner's preference and policy. To demonstrate the applicability of the proposed solution, the authors further present a routing protocol and analyze its security against several active attacks.
---
paper_title: Reputation-based framework for high integrity sensor networks
paper_content:
The traditional approach of providing network security has been to borrow tools from cryptography and authentication. However, we argue that the conventional view of security based on cryptography alone is not sufficient for the unique characteristics and novel misbehaviors encountered in sensor networks. Fundamental to this is the observation that cryptography cannot prevent malicious or non-malicious insertion of data from internal adversaries or faulty nodes. We believe that, in general, tools from different domains such as economics, statistics and data analysis will have to be combined with cryptography for the development of trustworthy sensor networks. Following this approach, we propose a reputation-based framework for sensor networks where nodes maintain reputation for other nodes and use it to evaluate their trustworthiness. We will show that this framework provides a scalable, diverse and generalized approach for countering all types of misbehavior resulting from malicious and faulty nodes. We are currently developing a system within this framework where we employ a Bayesian formulation, specifically a beta reputation system, for reputation representation, updates and integration. We will explain the reasoning behind our design choices, analyzing their pros and cons. We conclude the paper by verifying the efficacy of this system through some preliminary simulation results.
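To make the Bayesian formulation above concrete, the following minimal Python sketch shows how a beta reputation value can be maintained and updated per neighbor. The class name, the uniform Beta(1, 1) prior and the ageing factor are illustrative assumptions, not details taken from the paper.

```python
class BetaReputation:
    """Minimal beta-reputation tracker (illustrative sketch, not the paper's exact system)."""

    def __init__(self, decay=0.95):
        # alpha counts positive (cooperative) observations, beta negative ones.
        self.alpha = 1.0  # Beta(1, 1) prior, i.e. no prior knowledge
        self.beta = 1.0
        self.decay = decay  # ageing factor so old behaviour is gradually forgotten

    def update(self, cooperative: bool) -> None:
        # Age past evidence, then add the new observation.
        self.alpha *= self.decay
        self.beta *= self.decay
        if cooperative:
            self.alpha += 1.0
        else:
            self.beta += 1.0

    def reputation(self) -> float:
        # Expected value of Beta(alpha, beta): probability the node behaves well.
        return self.alpha / (self.alpha + self.beta)


if __name__ == "__main__":
    rep = BetaReputation()
    for outcome in [True, True, False, True, True, True]:
        rep.update(outcome)
    print(f"estimated trustworthiness: {rep.reputation():.2f}")
```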
---
paper_title: Reputation- and Trust-Based Systems for Wireless Self-organizing Networks
paper_content:
The traditional approach to providing network security has been to borrow tools and mechanisms from cryptography. However, the conventional view of security based on cryptography alone is not sufficient for defending against the unique and novel types of misbehavior exhibited by nodes in wireless self-organizing networks such as mobile ad hoc networks and wireless sensor networks. Reputation-based frameworks, where nodes maintain the reputation of other nodes and use it to evaluate their trustworthiness, are deployed to provide a scalable, diverse and generalized approach for countering different types of misbehavior resulting from malicious and selfish nodes in these networks. In this chapter, we present a comprehensive discussion of reputation- and trust-based systems for wireless self-organizing networks. Different classes of reputation systems are described along with their unique characteristics and working principles. A number of currently used reputation systems are critically reviewed and compared with respect to their effectiveness and efficiency. Some open problems in the area of reputation- and trust-based systems within the domain of wireless self-organizing networks are also discussed.
---
paper_title: Computing of trust in wireless networks
paper_content:
The concept of small world in the context of wireless networks, first studied by A. Helmy (see IEEE Commun. Lett., vol. 7, no. 10, 2003), enables a path-finder to find paths from a source node to a designated target node efficiently in wireless networks. Based on this observation, we provide a practical approach to compute trust in wireless networks by viewing an individual mobile device as a node of a delegation graph G and mapping a delegation path from the source node S to the target node T into an edge in the corresponding transitive closure of the graph G, from which a trust value is computed.
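One simple way to read the delegation-graph idea is to attach a trust value in [0, 1] to each directed edge and to score a source-to-target path by the product of its edge values, keeping the best path. The sketch below implements that assumed aggregation rule; the paper's exact computation over the transitive closure may differ.

```python
def path_trust(graph, source, target):
    """Return the highest path trust from source to target, where the trust of a
    path is taken to be the product of the edge trust values along it."""
    best = 0.0
    stack = [(source, 1.0, {source})]
    while stack:
        node, trust, visited = stack.pop()
        if node == target:
            best = max(best, trust)
            continue
        for neighbour, edge_trust in graph.get(node, {}).items():
            if neighbour not in visited:
                stack.append((neighbour, trust * edge_trust, visited | {neighbour}))
    return best


if __name__ == "__main__":
    # Hypothetical delegation graph: graph[a][b] = trust that a places in b.
    graph = {
        "S": {"A": 0.9, "B": 0.7},
        "A": {"T": 0.8},
        "B": {"A": 0.9, "T": 0.95},
    }
    print(f"trust(S -> T) = {path_trust(graph, 'S', 'T'):.3f}")
```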
---
paper_title: Mobile Jamming Attack and its Countermeasure in Wireless Sensor Networks
paper_content:
Denial-of-service (DoS) attacks are serious threats due to the resource-constrained nature of wireless sensor networks. Jamming attacks are representative energy-consumption DoS attacks that can be launched easily. Hence, many countermeasures have been proposed to mitigate the damage caused by jamming attacks. In this paper, we first present a novel and powerful jamming attack called the mobile jamming attack. Besides, we propose a multi-dataflow topologies scheme that can effectively defend against the mobile jamming attack. The simulation results demonstrate that the mobile jamming attack is more devastating than traditional jamming attacks and that the proposed defense scheme can effectively alleviate the damage.
---
paper_title: Scalable, Cluster-based Anti-replay Protection for Wireless Sensor Networks
paper_content:
Large-scale wireless sensor network (WSN) deployments show great promise for military, homeland security, and many other applications. This promise, however, is offset by important security concerns. The resource constraints that typify wireless sensor devices make traditional security solutions impractical. One threat to secure sensor networks is the replay attack, in which packets are captured and replayed into the network. This type of attack can be perpetrated to confuse observers or to mount a denial-of-service or denial-of-sleep attack. Traditional techniques for anti-replay protection are too resource intensive for large-scale WSN deployments. While techniques for reducing data transmission overhead of WSN-specific anti-replay mechanisms have been explored, the important problem of minimizing per-node replay table storage requirements has not been addressed. This paper introduces Clustered Anti-Replay Protection or CARP, which leverages sensor network clustering to place a limit on the amount of memory required to store anti-replay information. We show that clustering keeps the memory required for anti-replay tables manageable, reducing the size from 30% of a Mica2's memory to 4.4% for a 200-node network. While the advantages of this technique are clear, the difficulty lies in securely updating network-wide anti-replay tables when the network reclusters, an event that must happen routinely to distribute energy consumption across the nodes in the network. Our mechanism distributes necessary anti-replay information in a secure, low-overhead, and completely distributed manner. We further show the energy-consumption overhead of adding anti-replay counters to network traffic across several WSN medium access control (MAC) protocols and two representative WSN platforms. On the Mica2 platform, overheads range from a 0% to 1.32% decrease in network lifetime, depending on the MAC protocol. On the Tmote Sky, overheads range from 0% to 4.64%. Providing anti-replay support in a secure, scalable, and distributed way is necessary for the overall security of future WSN deployments if they are to meet current expectations.
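The core anti-replay bookkeeping that CARP economizes on can be illustrated with a few lines of Python: a table of the highest counter accepted per sender, with packets rejected when their counter does not advance. This is a generic sketch; CARP's cluster-level aggregation and secure re-clustering are not modeled here.

```python
class AntiReplayTable:
    """Minimal per-sender anti-replay table (illustrative; CARP additionally bounds
    memory by keeping state per cluster rather than per node and handles secure
    re-clustering, which is omitted here)."""

    def __init__(self):
        self.highest_seen = {}  # sender id -> highest accepted counter

    def accept(self, sender_id, counter):
        """Accept a packet only if its counter strictly exceeds the highest
        counter already seen from that sender."""
        if counter <= self.highest_seen.get(sender_id, -1):
            return False  # replayed or stale packet: drop
        self.highest_seen[sender_id] = counter
        return True


if __name__ == "__main__":
    table = AntiReplayTable()
    for pkt in [("node7", 1), ("node7", 2), ("node7", 2), ("node9", 1)]:
        print(pkt, "->", "accepted" if table.accept(*pkt) else "dropped")
```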
---
paper_title: Security in Cognitive Radio Networks: Threats and Mitigation
paper_content:
This paper describes a new class of attacks specific to cognitive radio networks. Wireless devices that can learn from their environment can also be taught things by malicious elements of their environment. By putting artificial intelligence in charge of wireless network devices, we are allowing unanticipated, emergent behavior, fitting a perhaps distorted or manipulated level of optimality. The state space for a cognitive radio is made up of a variety of learned beliefs and current sensor inputs. By manipulating radio sensor inputs, an adversary can affect the beliefs of a radio, and consequently its behavior. In this paper we focus primarily on PHY-layer issues, describing several classes of attacks and giving specific examples for dynamic spectrum access and adaptive radio scenarios. These attacks demonstrate the capabilities of an attacker who can manipulate the spectral environment when a radio is learning. The most powerful of these is a self-propagating AI virus that could interactively teach radios to become malicious. We then describe some approaches for mitigating the effectiveness of these attacks by instilling some level of "common sense" into radio systems, and requiring learned beliefs to expire and be relearned. Lastly we provide a road-map for extending these ideas to higher layers in the network stack.
---
paper_title: Defense against Primary User Emulation Attacks in Cognitive Radio Networks
paper_content:
Cognitive radio (CR) is a promising technology that can alleviate the spectrum shortage problem by enabling unlicensed users equipped with CRs to coexist with incumbent users in licensed spectrum bands while causing no interference to incumbent communications. Spectrum sensing is one of the essential mechanisms of CRs and its operational aspects are being investigated actively. However, the security aspects of spectrum sensing have garnered little attention. In this paper, we identify a threat to spectrum sensing, which we call the primary user emulation (PUE) attack. In this attack, an adversary's CR transmits signals whose characteristics emulate those of incumbent signals. The highly flexible, software-based air interface of CRs makes such an attack possible. Our investigation shows that a PUE attack can severely interfere with the spectrum sensing process and significantly reduce the channel resources available to legitimate unlicensed users. To counter this threat, we propose a transmitter verification scheme, called LocDef (localization-based defense), which verifies whether a given signal is that of an incumbent transmitter by estimating its location and observing its signal characteristics. To estimate the location of the signal transmitter, LocDef employs a non-interactive localization scheme. Our security analysis and simulation results suggest that LocDef is effective in identifying PUE attacks under certain conditions.
---
paper_title: Ensuring Trustworthy Spectrum Sensing in Cognitive Radio Networks
paper_content:
Cognitive Radio (CR) is a promising technology that can alleviate the spectrum shortage problem by enabling unlicensed users equipped with CRs to coexist with incumbent users in licensed spectrum bands without inducing interference to incumbent communications. Spectrum sensing is one of the essential mechanisms of CRs that has attracted great attention from researchers recently. Although the operational aspects of spectrum sensing are being investigated actively, its security aspects have garnered little attention. In this paper, we describe an attack that poses a great threat to spectrum sensing. In this attack, which is called the primary user emulation (PUE) attack, an adversary's CR transmits signals whose characteristics emulate those of incumbent signals. The highly flexible, software-based air interface of CRs makes such an attack possible. Our investigation shows that a PUE attack can severely interfere with the spectrum sensing process and significantly reduce the channel resources available to legitimate unlicensed users. As a way of countering this threat, we propose a transmitter verification procedure that can be integrated into the spectrum sensing mechanism. The transmitter verification procedure employs a location verification scheme to distinguish incumbent signals from unlicensed signals masquerading as incumbent signals. Two alternative techniques are proposed to realize location verification: Distance Ratio Test and Distance Difference Test. We provide simulation results of the two techniques as well as analyses of their security in the paper.
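The Distance Ratio Test can be sketched as follows: under an assumed log-distance path-loss model, the difference between the RSS values seen by two location verifiers implies a ratio of their distances to the transmitter, which can be compared with the ratio expected for the known primary transmitter location. The model, parameter values and tolerance below are illustrative assumptions, not the paper's calibrated procedure.

```python
import math


def implied_distance_ratio(rss1_dbm, rss2_dbm, path_loss_exp=3.0):
    """Distance ratio d1/d2 implied by two RSS readings under a log-distance
    path-loss model: rss1 - rss2 = 10 * n * log10(d2 / d1)."""
    return 10 ** ((rss2_dbm - rss1_dbm) / (10.0 * path_loss_exp))


def distance_ratio_test(rss1, rss2, v1, v2, primary_xy, tol=0.2):
    """Flag a signal as consistent with the primary user when the RSS-implied
    distance ratio is close to the geometric ratio for the known primary site."""
    d1 = math.dist(v1, primary_xy)
    d2 = math.dist(v2, primary_xy)
    expected = d1 / d2
    measured = implied_distance_ratio(rss1, rss2)
    return abs(measured - expected) / expected <= tol  # True -> looks like the incumbent


if __name__ == "__main__":
    verifier1, verifier2, primary = (0.0, 0.0), (1000.0, 0.0), (500.0, 2000.0)
    # Hypothetical readings from a transmitter roughly equidistant from both verifiers.
    print(distance_ratio_test(rss1=-82.0, rss2=-83.0,
                              v1=verifier1, v2=verifier2, primary_xy=primary))
```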
---
paper_title: Security and privacy of collaborative spectrum sensing in cognitive radio networks
paper_content:
Collaborative spectrum sensing is regarded as a promising approach to significantly improve the performance of spectrum sensing in cognitive radio networks. However, due to the open nature of wireless communications and the increasingly available software defined radio platforms, collaborative spectrum sensing also poses many new research challenges, especially in the aspect of security and privacy. In this article, we first identify the potential security threats toward collaborative spectrum sensing in CRNs. Then we review the existing proposals related to secure collaborative spectrum sensing. Furthermore, we identify several new location privacy related attacks in collaborative sensing, which are expected to compromise secondary users' location privacy by correlating their sensing reports and their physical location. To thwart these attacks, we propose a novel privacy preserving framework in collaborative spectrum sensing to prevent location privacy leaking. We design and implement a real-world testbed to evaluate the system performance. The attack experiment results show that if there is no security guarantee, the attackers could successfully compromise a secondary user's location privacy at a success rate of more than 90 percent. We also show that the proposed privacy preserving framework could significantly improve the location privacy of secondary users with a minimal effect on the performance of collaborative sensing.
---
paper_title: Cognitive Spectrum and Its Security Issues
paper_content:
The current trend of opportunistic use of the licensed or license-exempt wireless spectrum with limited rules, or even without rules, introduces significant scientific and technical challenges for the networks of the future. Until now, for the realization of the cognitive radio paradigm, several spectrum sharing schemes have been proposed, such as centralized and distributed schemes, and cooperative or noncooperative spectrum sharing mechanisms. Unfortunately, some of the existing proposals for spectrum sharing and management introduce significant security leakages, giving rise to unfairness, unavailability and selfishness, or even malicious behaviors. Additionally, the identification, recording and reporting of selfish, free-riding, malicious and anomalous actions by peers remains an open issue in the majority of the existing spectrum management schemes. This paper discusses and classifies the weak points and the vulnerabilities of spectrum sharing mechanisms.
---
paper_title: Security in Cognitive Radio Networks: The Required Evolution in Approaches to Wireless Network Security
paper_content:
This paper discusses the topic of wireless security in cognitive radio networks, delineating the key challenges in this area. With the ever-increasing scarcity of spectrum, cognitive radios are expected to become an increasingly important part of the overall wireless networking landscape. However, there is an important technical area that has received little attention to date in the cognitive radio paradigm: wireless security. The cognitive radio paradigm introduces entirely new classes of security threats and challenges, and providing strong security may prove to be the most difficult aspect of making cognitive radio a long-term commercially-viable concept. This paper delineates the key challenges in providing security in cognitive networks, discusses the current security posture of the emerging IEEE 802.22 cognitive radio standard, and identifies potential vulnerabilities along with potential mitigation approaches.
---
paper_title: Jamming and sensing of encrypted wireless ad hoc networks
paper_content:
This paper considers the problem of an attacker disrupting an encrypted victim wireless ad hoc network through jamming. Jamming is broken down into layers, and this paper focuses on jamming at the Transport/Network layer. Jamming at this layer exploits the AODV and TCP protocols and is shown to be very effective in simulated and real networks when the attacker can sense victim packet types; the encryption is assumed to mask the entire header and contents of the packet, so that only packet size, timing, and sequence are available to the attacker for sensing. A sensor is developed that consists of four components. The first is a probabilistic model of the sizes and inter-packet timing of different packet types. The second is a historical method for detecting known protocol sequences that is used to develop the probabilistic models. The third is an active jamming mechanism that forces the victim network to produce known sequences for the historical analyzer, and the fourth is the online classifier that makes packet-type classification decisions. The method is tested on live data, and it is found that for many packet types the classification is highly reliable. The relative roles of size, timing, and sequence are discussed along with the implications for making networks more secure.
---
paper_title: The feasibility of launching and detecting jamming attacks in wireless networks
paper_content:
Wireless networks are built upon a shared medium that makes it easy for adversaries to launch jamming-style attacks. These attacks can be easily accomplished by an adversary emitting radio frequency signals that do not follow an underlying MAC protocol. Jamming attacks can severely interfere with the normal operation of wireless networks and, consequently, mechanisms are needed that can cope with jamming attacks. In this paper, we examine radio interference attacks from both sides of the issue: first, we study the problem of conducting radio interference attacks on wireless networks, and second we examine the critical issue of diagnosing the presence of jamming attacks. Specifically, we propose four different jamming attack models that can be used by an adversary to disable the operation of a wireless network, and evaluate their effectiveness in terms of how each method affects the ability of a wireless node to send and receive packets. We then discuss different measurements that serve as the basis for detecting a jamming attack, and explore scenarios where each measurement by itself is not enough to reliably classify the presence of a jamming attack. In particular, we observe that signal strength and carrier sensing time are unable to conclusively detect the presence of a jammer. Further, we observe that although by using packet delivery ratio we may differentiate between congested and jammed scenarios, we are nonetheless unable to conclude whether poor link utility is due to jamming or the mobility of nodes. The fact that no single measurement is sufficient for reliably classifying the presence of a jammer is an important observation, and necessitates the development of enhanced detection schemes that can remove ambiguity when detecting a jammer. To address this need, we propose two enhanced detection protocols that employ consistency checking. The first scheme employs signal strength measurements as a reactive consistency check for poor packet delivery ratios, while the second scheme employs location information to serve as the consistency check. Throughout our discussions, we examine the feasibility and effectiveness of jamming attacks and detection schemes using the MICA2 Mote platform.
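The consistency-checking idea described above can be condensed into a toy classifier: a poor packet delivery ratio is attributed to jamming only when the measured ambient signal strength is also high, otherwise it is attributed to a weak or mobile link. The thresholds below are illustrative placeholders, not values measured on the MICA2 platform.

```python
def classify_link(pdr, avg_rss_dbm, pdr_threshold=0.65, rss_threshold_dbm=-75.0):
    """Toy consistency check in the spirit of the enhanced detection protocols:
    a low packet delivery ratio (PDR) alone is ambiguous, so it is only labelled
    'jammed' when the channel also shows strong ambient energy."""
    if pdr >= pdr_threshold:
        return "healthy"
    if avg_rss_dbm >= rss_threshold_dbm:
        return "jammed"              # strong energy, yet packets do not get through
    return "weak link / mobility"    # poor PDR with little energy: likely not a jammer


if __name__ == "__main__":
    for sample in [(0.9, -80.0), (0.2, -60.0), (0.25, -92.0)]:
        print(sample, "->", classify_link(*sample))
```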
---
paper_title: Implementation issues in spectrum sensing for cognitive radios
paper_content:
There are new system implementation challenges involved in the design of cognitive radios, which have both the ability to sense the spectral environment and the flexibility to adapt transmission parameters to maximize system capacity while coexisting with legacy wireless networks. The critical design problem is the need to process multigigahertz wide bandwidth and reliably detect presence of primary users. This places severe requirements on sensitivity, linearity and dynamic range of the circuitry in the RF front-end. To improve radio sensitivity of the sensing function through processing gain we investigated three digital signal processing techniques: matched filtering, energy detection and cyclostationary feature detection. Our analysis shows that cyclostationary feature detection has advantages due to its ability to differentiate modulated signals, interference and noise in low signal to noise ratios. In addition, to further improve the sensing reliability, the advantage of a MAC protocol that exploits cooperation among many cognitive users is investigated.
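Of the three techniques compared in the paper, energy detection is the simplest to sketch. The snippet below implements a textbook energy detector with a Gaussian (large-sample) threshold for a target false-alarm probability, assuming complex Gaussian noise of known power; it is a generic illustration rather than the paper's implementation.

```python
import numpy as np
from statistics import NormalDist


def energy_detector(samples, noise_power, pfa=0.01):
    """Average |y|^2 compared against a threshold chosen for a target false-alarm
    probability, using the Gaussian approximation of the noise-only statistic."""
    n = len(samples)
    stat = float(np.mean(np.abs(samples) ** 2))
    threshold = noise_power * (1.0 + NormalDist().inv_cdf(1.0 - pfa) / np.sqrt(n))
    return stat > threshold


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Unit-power complex Gaussian noise and a weak complex tone as a stand-in "primary".
    noise = (rng.normal(size=4096) + 1j * rng.normal(size=4096)) / np.sqrt(2)
    tone = 0.35 * np.exp(2j * np.pi * 0.1 * np.arange(4096))
    print("noise only  :", energy_detector(noise, noise_power=1.0))        # usually False
    print("tone + noise:", energy_detector(noise + tone, noise_power=1.0))  # True w.h.p.
```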
---
paper_title: Security threats to signal classifiers using self-organizing maps
paper_content:
Spectrum sensing is required for many cognitive radio applications, including spectral awareness, interoperability, and dynamic spectrum access. Previous work has demonstrated the ill effects of primary user emulation attacks, and pointed out specific vulnerabilities in spectrum sensing that uses feature-based classifiers. This paper looks specifically at the use of unsupervised learning in signal classifiers, and attacks against self-organizing maps. By temporarily manipulating their signals, attackers can cause other secondary users to permanently misclassify them as primary users, giving them complete access to the spectrum. In the paper we develop the theory behind manipulating the decision regions in a neural network using self-organizing maps. We then demonstrate through simulation the ability of an attacker to formulate the necessary input signals to execute the attack. Lastly we provide recommendations to mitigate the efficacy of this type of attack.
---
paper_title: The Sum of Log-Normal Probability Distributions in Scatter Transmission Systems
paper_content:
The long-term fluctuation of transmission loss in scatter propagation systems has been found to have a logarithmic-normal (log-normal) distribution. In other words, the scatter loss in decibels has a Gaussian statistical distribution. Therefore, in many important communication systems (e.g., FM), the noise power of a radio jump, or hop, has a log-normal statistical distribution. In a multihop system, the noise power of each hop contributes to the total noise. The resulting noise of the system is therefore the statistical sum of the individual noise distributions. In multihop scatter systems and others, such as multichannel speech-transmission systems, the sum of several log-normal distributions is needed. No exact solution to this problem is known. The following discussion presents an approximate solution which is satisfactory in most practical cases. For tactical multihop scatter systems, a further approximation is proposed, which reduces significantly the necessary computation. An example of the computation is given.
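The approximation referred to here is commonly implemented as Fenton-Wilkinson moment matching: the sum of independent log-normal variables is replaced by a single log-normal whose first two moments agree with those of the sum. A brief sketch (independence assumed), with a Monte Carlo sanity check:

```python
import numpy as np


def fenton_wilkinson(mus, sigmas):
    """Approximate the sum of independent lognormal(mu_i, sigma_i^2) variables by a
    single lognormal(mu_z, sigma_z^2) via first- and second-moment matching."""
    mus, sigmas = np.asarray(mus, float), np.asarray(sigmas, float)
    m1 = np.exp(mus + sigmas ** 2 / 2.0)       # E[X_i]
    m2 = np.exp(2 * mus + 2 * sigmas ** 2)     # E[X_i^2]
    u1 = m1.sum()                              # E[S]
    u2 = m2.sum() + (m1.sum() ** 2 - (m1 ** 2).sum())  # E[S^2] for independent X_i
    sigma_z2 = np.log(u2 / u1 ** 2)
    mu_z = np.log(u1) - sigma_z2 / 2.0
    return mu_z, float(np.sqrt(sigma_z2))


if __name__ == "__main__":
    mus, sigmas = [0.0, 0.5, 1.0], [0.6, 0.8, 0.7]
    mu_z, sigma_z = fenton_wilkinson(mus, sigmas)
    rng = np.random.default_rng(1)
    s = sum(rng.lognormal(m, sd, 200_000) for m, sd in zip(mus, sigmas))
    print(f"approx mean {np.exp(mu_z + sigma_z**2 / 2):.3f}  vs  Monte Carlo {s.mean():.3f}")
```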
---
paper_title: Attack-proof collaborative spectrum sensing in cognitive radio networks
paper_content:
Collaborative sensing in cognitive radio networks can significantly improve the probability of detecting the transmission of primary users. In current collaborative sensing schemes, all collaborative secondary users are assumed to be honest. As a consequence, the system is vulnerable to attacks in which malicious secondary users report false detection results. In this paper, we investigate how to improve the security of collaborative sensing. Particularly, we develop a malicious user detection algorithm that calculates the suspicious level of secondary users based on their past reports. Then, we calculate trust values as well as consistency values that are used to eliminate the malicious users' influence on the primary user detection results. Through simulations, we show that even a single malicious user can significantly degrade the performance of collaborative sensing. The proposed trust value indicator can effectively differentiate honest and malicious secondary users. The receiver operating characteristic (ROC) curves for the primary user detection demonstrate the improvement in the security of collaborative sensing.
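A stripped-down version of the idea is a fusion rule that weights each user's report by a trust value and then nudges trust up or down according to agreement with the fused decision. The weighting and update rules below are simplified stand-ins; the paper's suspicious-level and consistency-value formulas are more involved.

```python
import numpy as np


def trusted_fusion(reports, trust, threshold=0.5):
    """Trust-weighted soft decision over 0/1 local reports (illustrative rule)."""
    reports, trust = np.asarray(reports, float), np.asarray(trust, float)
    return float(np.dot(reports, trust) / trust.sum()) >= threshold


def update_trust(trust, reports, decision, lr=0.1):
    """Raise trust for users that agreed with the fused decision, lower it otherwise."""
    agree = (np.asarray(reports) == int(decision)).astype(float)
    return np.clip(trust + lr * (agree - 0.5) * 2, 0.0, 1.0)


if __name__ == "__main__":
    trust = np.full(5, 0.5)
    # User 4 always reports "primary present" regardless of the true state.
    rounds = [[1, 1, 1, 0, 1], [0, 0, 1, 0, 1], [0, 0, 0, 0, 1]]
    for reports in rounds:
        decision = trusted_fusion(reports, trust)
        trust = update_trust(trust, reports, decision)
        print(reports, "->", decision, "trust:", np.round(trust, 2))
```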
---
paper_title: An Analytical Model for Primary User Emulation Attacks in Cognitive Radio Networks
paper_content:
In this paper, we study the denial-of-service (DoS) attack on secondary users in a cognitive radio network by primary user emulation (PUE). Most approaches in the literature on primary user emulation attacks (PUEA) discuss mechanisms to deal with the attacks but not analytical models. Simulation studies and results from test beds have been presented but no analytical model relating the various parameters that could cause a PUE attack has been proposed and studied. We propose an analytical approach based on Fenton's approximation and Markov inequality and obtain a lower bound on the probability of a successful PUEA on a secondary user by a set of co-operating malicious users. We consider a fading wireless environment and discuss the various parameters that can affect the feasibility of a PUEA. We show that the probability of a successful PUEA increases with the distance between the primary transmitter and secondary users. This is the first analytical treatment to study the feasibility of a PUEA.
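For reference, the probabilistic tool invoked here is Markov's inequality, which bounds the tail of any non-negative random variable; in the paper it is used together with Fenton's approximation of the aggregate received power to bound the probability of a successful PUEA. Only the generic inequality is reproduced below; the specific choice of random variable and threshold follows the paper.

```latex
% Markov's inequality for a non-negative random variable X and any a > 0:
\Pr\{X \ge a\} \;\le\; \frac{\mathbb{E}[X]}{a}.
```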
---
paper_title: CatchIt: Detect Malicious Nodes in Collaborative Spectrum Sensing
paper_content:
Collaborative spectrum sensing in cognitive radio networks has been proposed as an efficient way to improve the performance of primary user detection. In collaborative spectrum sensing schemes, secondary users are often assumed to be trustworthy. In practice, however, cognitive radio nodes can be compromised. Compromised secondary users can report false detection results and significantly degrade the performance of spectrum sensing. In this paper, we investigate the case in which there are multiple malicious users in cognitive radio networks and the exact number of malicious users is unknown. An onion-peeling approach is proposed to defend against multiple untrustworthy secondary nodes. We calculate the suspicious level of all nodes according to their reports. When the suspicious level of a node is beyond a certain threshold, it is considered malicious and its report is excluded from decision-making. We continue to calculate the suspicious level of the remaining nodes until no malicious node can be found. Simulation results show that malicious nodes greatly degrade the performance of collaborative sensing, and the proposed scheme can efficiently detect malicious nodes. Compared with existing defense methods, the proposed scheme significantly improves the performance of primary user detection, measured by ROC curves, and captures the dynamic change in the behaviors of malicious users.
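The onion-peeling loop can be sketched in a few lines: repeatedly score each reporter's suspicious level, peel off the worst offender if it exceeds a threshold, and recompute on the remaining reports. In the sketch below the suspicious level is simply the disagreement rate with the per-round majority, which is a much cruder score than the paper's Bayesian formulation.

```python
import numpy as np


def onion_peeling(reports, threshold=0.3):
    """Iteratively remove the most suspicious reporters.

    reports: (n_users, n_rounds) matrix of 0/1 local decisions.
    Returns the set of user indices flagged as malicious.
    """
    reports = np.asarray(reports)
    active = list(range(reports.shape[0]))
    flagged = set()
    while True:
        majority = (reports[active].mean(axis=0) >= 0.5).astype(int)   # per-round majority
        disagreement = (reports[active] != majority).mean(axis=1)       # per-user suspicious level
        worst = int(np.argmax(disagreement))
        if disagreement[worst] <= threshold or len(active) <= 2:
            return flagged
        flagged.add(active.pop(worst))


if __name__ == "__main__":
    rng = np.random.default_rng(2)
    truth = rng.integers(0, 2, size=50)
    honest = np.array([np.where(rng.random(50) < 0.9, truth, 1 - truth) for _ in range(6)])
    attacker = 1 - truth  # always reports the opposite of the true state
    print("flagged users:", onion_peeling(np.vstack([honest, attacker[None, :]])))
```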
---
paper_title: Security in cognitive radio networks: An example using the commercial IEEE 802.22 standard
paper_content:
Security in wireless networks is challenging. Security in cognitive radio networks (CRN) is even more challenging. This is because a CRN consists of cognitive radios (CR) which have many more functions and processes to account for, such as sensing, geolocation, spectrum management, access to the policy database, etc. Each of these functions and processes needs to be assessed for potential vulnerabilities, and security mechanisms need to be provided for the protection of not just the secondary users of the spectrum but also the primary users or incumbents. This paper discusses the potential security vulnerabilities and their remediations in a CRN, with an example using the commercial IEEE 802.22 standard. Due to the unique characteristics of the CRs in a CRN, enhanced security mechanisms are required. The security mechanisms in a CRN are divided into several security sub-layers which protect non-cognitive as well as cognitive functions of the system and the interactions between the two. This paper describes these security features as incorporated into the IEEE 802.22 standard. It is possible to apply similar security mechanisms to a military CRN.
---
paper_title: Secure cooperative spectrum sensing for Cognitive Radio networks
paper_content:
A key enabling functionality in implementing Cognitive Radio is to reliably detect the licensed users. In recent literature, cooperation among spectrum sensing terminals has been suggested to offer reliable sensing performance. We consider the problem that malfunctioning or malicious sensing terminals can severely degrade the performance of cooperative spectrum sensing. In this paper, we extend the Weighted Sequential Probability Ratio Test (WSPRT) by replacing the binary local report with an N-bit local report to achieve better detection performance. Additionally, three types of reputation rating evaluation schemes are introduced: neutral, punitive and heavy punitive. Simulation results show that the extended WSPRT technique improves detection performance. Moreover, the extended WSPRT with the heavy punitive scheme is shown to be the most robust against malfunctioning or malicious sensing terminals.
---
paper_title: Robust Distributed Spectrum Sensing in Cognitive Radio Networks
paper_content:
Distributed spectrum sensing (DSS) enables a Cognitive Radio (CR) network to reliably detect licensed users and avoid causing interference to licensed communications. The data fusion technique is a key component of DSS. We discuss the Byzantine failure problem in the context of data fusion, which may be caused by either malfunctioning sensing terminals or Spectrum Sensing Data Falsification (SSDF) attacks. In either case, incorrect spectrum sensing data will be reported to a data collector which can lead to the distortion of data fusion outputs. We investigate various data fusion techniques, focusing on their robustness against Byzantine failures. In contrast to existing data fusion techniques that use a fixed number of samples, we propose a new technique that uses a variable number of samples. The proposed technique, which we call Weighted Sequential Probability Ratio Test (WSPRT), introduces a reputation-based mechanism to the Sequential Probability Ratio Test (SPRT). We evaluate WSPRT by comparing it with a variety of data fusion techniques under various network operating conditions. Our simulation results indicate that WSPRT is the most robust against the Byzantine failure problem among the data fusion techniques that were considered.
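A compact sketch of the weighted sequential test: local 0/1 reports arrive one at a time, each log-likelihood-ratio increment is scaled by the reporter's normalized reputation, and the test stops once either SPRT threshold is crossed. The per-sensor detection and false-alarm probabilities and the weighting rule are assumed values for illustration, not the paper's exact formulation.

```python
import math


def wsprt(reports, reputations, pd=0.9, pf=0.1, alpha=0.05, beta=0.05):
    """Reputation-weighted sequential probability ratio test (illustrative sketch).

    reports: 0/1 local decisions processed one at a time.
    reputations: matching reputation scores; low-reputation users barely move the statistic.
    Returns 'H1' (primary present), 'H0' (absent), or 'undecided'.
    """
    upper = math.log((1 - beta) / alpha)   # accept H1 above this
    lower = math.log(beta / (1 - alpha))   # accept H0 below this
    max_rep = max(reputations) or 1.0
    llr = 0.0
    for u, rep in zip(reports, reputations):
        weight = max(rep / max_rep, 0.0)
        llr += weight * (math.log(pd / pf) if u == 1 else math.log((1 - pd) / (1 - pf)))
        if llr >= upper:
            return "H1"
        if llr <= lower:
            return "H0"
    return "undecided"


if __name__ == "__main__":
    # Three trusted users say "present"; two low-reputation users say "absent".
    print(wsprt(reports=[1, 0, 1, 1, 0], reputations=[1.0, 0.1, 0.9, 1.0, 0.1]))
```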
---
paper_title: Catching Attacker(s) for Collaborative Spectrum Sensing in Cognitive Radio Systems: An Abnormality Detection Approach
paper_content:
Collaborative spectrum sensing, in which a fusion center collects local observations or decisions from multiple secondary users to make a decision, is an effective approach to alleviate the unreliability of single-user spectrum sensing. However, it is subject to attack by malicious secondary user(s), which may send false reports. Therefore, it is necessary to detect potential attacker(s) and make attack-proof decisions for spectrum sensing. Most existing attacker detection schemes are based on knowledge of the attacker's strategy and thus apply Bayesian detection of attackers. However, in practical cognitive radio systems, the data fusion center typically does not know the attacker's strategy. To alleviate the problem of the unknown strategy of attacker(s), an abnormality detection approach, based on abnormality detection in data mining, is proposed. The performance of attacker detection in the single-attacker scenario is analyzed explicitly. For the case that the attacker does not know the reports of honest secondary users (called independent attack), it is numerically shown that the attacker can always be detected as the number of spectrum sensing rounds tends to infinity. For the case that the attacker knows all the reports of other secondary users, based on which the attacker sends its report (called dependent attack), an approach for the attacker to perfectly avoid being detected is found, provided that the attacker has perfect information about the miss detection and false alarm probabilities. This motivates cognitive radio systems to protect the reports of secondary users. The performance of attacker detection in the general case of multiple attackers is demonstrated using numerical simulations.
---
paper_title: Common Control Channel Security Framework for Cognitive Radio Networks
paper_content:
Cognitive radio networks are becoming an increasingly important part of the wireless networking landscape due to the ever-increasing scarcity of spectrum resources. Such networks perform co-operative spectrum sensing to find white spaces and apply policies to determine when and in which bands they may communicate. In a typical MAC protocol designed for cooperatively communicating ad hoc cognitive radio networks, nodes make use of a common control channel to perform channel negotiations before any actual data transmission. The provision of common control channel security is vital to ensure any subsequent security among the communicating cognitive radio nodes. To date, wireless security has received little attention in cognitive radio networks research. The cognitive radio paradigm introduces entirely new classes of security threats and challenges, such as selfish misbehaviours, licensed user emulation and eavesdropping. This paper presents a novel framework for providing common control channel security for co-operatively communicating cognitive radio nodes. To the best of the authors' knowledge, this is the first paper which proposes such a concept. The paper investigates how two cognitive radio nodes can authenticate each other prior to any confidential channel negotiations to ensure subsequent security against attacks. The paper also describes the importance of common control channel security and concludes with future work describing the realization of the proposed framework.
---
paper_title: Secure Cognitive Networks
paper_content:
Cognitive networks are intelligent networks that can automatically sense the environment and adapt the communication parameters accordingly. These types of networks have applications in dynamic spectrum access (DSA), co-existence of different wireless networks, interference management, etc. They are thought to drive the next generation of devices, protocols and applications. Clearly, the cognitive network paradigm poses many new technical challenges in protocol design, power efficiency, spectrum management, spectrum detection, environment awareness, new distributed algorithm design, distributed spectrum measurements, QoS guarantees, and security. Overcoming these issues becomes even more challenging due to non-uniform spectrum and other radio resource allocation policies, economic considerations, the inherent transmission impairments of wireless links, and user mobility. This paper discusses the research challenges for security in cognitive networks (CNs) [1, 2]. It presents the security and privacy requirements and a threat analysis, and finally proposes a framework for security using a fast authentication and authorization architecture.
---
paper_title: Two Types of Attacks against Cognitive Radio Network MAC Protocols
paper_content:
Some typical MAC protocols have been proposed recently for multi-hop CR networks. In a multi-hop MAC protocol, a node uses the common control channel to perform channel negotiation before data transmission. Recent research findings indicate that insecure transmission on control channels opens vulnerable holes for denial-of-service attacks. This paper presents a security analysis of CR network MAC protocols. There are two types of attacks against CR network MAC protocols. First, we study how a denial-of-service (DoS) attack is launched against multi-hop CR network MAC protocols. Then, we explore MAC-layer greedy behaviors in CR networks. Our analysis and simulations indicate that such attacks can greatly affect the performance of CR networks, and the key factors for attack efficiency are presented in this paper.
---
paper_title: A Proposed Propagation-Based Methodology with Which to Address the Hidden Node Problem and Security/Reliability Issues in Cognitive Radio
paper_content:
A highly accurate, fast and robust integral equation-based propagation method is presented here. It is explained how this method, in conjunction with a radio environment mapping server, can be used to address the 'hidden node problem' and salient security/reliability issues in cognitive radio (CR) by accurately quantifying the effects of CR transmissions in real time, thus allaying the legitimate concerns of primary users regarding the deployment of CR technology. A roadmap for the development of the propagation method is given such that sufficient accuracy and execution times can be achieved.
---
paper_title: Cooperative Shared Spectrum Sensing for Dynamic Cognitive Radio Networks
paper_content:
Cooperative spectrum sensing for cognitive radio networks has recently been studied as a way to simultaneously minimize uncertainty in primary user detection and solve the hidden terminal problem. Sensing a wideband spectrum is another challenging task for a single cognitive radio because of the large sensing time required. In this paper, we introduce a technique that tackles both the wideband and the cooperative spectrum sensing tasks. We divide the wideband spectrum into several subbands. Then a group of cognitive radios is assigned to sense a particular narrow subband. A cognitive base station collects the results and makes the final decision over the full spectrum. Our proposed algorithm minimizes the time and energy each cognitive radio spends on wideband spectrum scanning, and it effectively detects the primary users in the wideband spectrum thanks to cooperative shared spectrum sensing.
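The division of labor described above can be sketched as a round-robin assignment of radios to subbands followed by a per-subband OR-rule fusion at the cognitive base station. Both the grouping policy and the fusion rule are assumptions chosen for simplicity; the paper's scheme may use different choices.

```python
from collections import defaultdict


def assign_subbands(radio_ids, num_subbands):
    """Round-robin assignment of cognitive radios to narrow subbands (illustrative)."""
    groups = defaultdict(list)
    for i, rid in enumerate(radio_ids):
        groups[i % num_subbands].append(rid)
    return dict(groups)


def fuse_per_subband(local_decisions):
    """OR-rule fusion at the base station: a subband is marked busy if any radio
    assigned to it reports the primary user as present."""
    return {band: any(votes.values()) for band, votes in local_decisions.items()}


if __name__ == "__main__":
    groups = assign_subbands([f"cr{i}" for i in range(8)], num_subbands=4)
    print("groups:", groups)
    reports = {0: {"cr0": False, "cr4": False},
               1: {"cr1": True,  "cr5": False},
               2: {"cr2": False, "cr6": False},
               3: {"cr3": True,  "cr7": True}}
    print("busy subbands:", fuse_per_subband(reports))
```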
---
paper_title: Achieving cooperative spectrum sensing in wireless cognitive radio networks
paper_content:
Dynamic spectrum access has been studied to exploit instantaneous spectrum availability by opening licensed spectrum to secondary users. To achieve high spectrum efficiency, secondary unlicensed users need to continuously sense the spectrum to detect the presence of primary licensed users. Cooperative spectrum sensing, which requires nearby wireless nodes to share sensing results with each other, has been recognized as a powerful solution to improve spectrum sensing performance. However, information sharing is achieved through broadcasting in wireless networks, which provides a free-riding opportunity for selfish nodes: selfish nodes can benefit from receiving the sensing results of their neighbors for free, without sharing their own. Therefore, appropriate strategies are essential to enforce and sustain cooperation among neighboring nodes. In this paper we model cooperative spectrum sensing as an N-player infinite-horizon game and study various strategies for it. In wireless networks, frequently occurring collisions make the cooperation enforcement problem quite challenging, as it is hard to tell whether lost information is due to a node's selfishness or a wireless collision. We prove that the Grim Trigger strategy, a classical strategy for stimulating cooperation in an infinite game, can result in poor performance due to random errors. We then propose a strategy based on the Carrot-and-Stick strategy, which can recover cooperation among multiple players after a deviation. We prove that if nodes are sufficiently far-sighted, or equivalently if the entire system runs sufficiently long, the Nash Equilibrium of the proposed strategy for the spectrum sensing game is still mutual cooperation, even in the presence of collisions. We also prove that the proposed strategy is robust to collisions and colluding cheating.
---
paper_title: Evolutionary cooperative spectrum sensing game: how to collaborate?
paper_content:
Cooperative spectrum sensing has been shown to be able to greatly improve the sensing performance in cognitive radio networks. However, if cognitive users belong to different service providers, they tend to contribute less in sensing in order to increase their own throughput. In this paper, we propose an evolutionary game framework to answer the question of "how to collaborate" in multiuser de-centralized cooperative spectrum sensing, because evolutionary game theory provides an excellent means to address the strategic uncertainty that a user/player may face by exploring different actions, adaptively learning during the strategic interactions, and approaching the best response strategy under changing conditions and environments using replicator dynamics. We derive the behavior dynamics and the evolutionarily stable strategy (ESS) of the secondary users. We then prove that the dynamics converge to the ESS, which renders the possibility of a de-centralized implementation of the proposed sensing game. According to the dynamics, we further develop a distributed learning algorithm so that the secondary users approach the ESS solely based on their own payoff observations. Simulation results show that the average throughput achieved in the proposed cooperative sensing game is higher than the case where secondary users sense the primary user individually without cooperation. The proposed game is demonstrated to converge to the ESS, and achieve a higher system throughput than the fully cooperative scenario, where all users contribute to sensing in every time slot.
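Replicator dynamics for this kind of sensing game can be illustrated with a toy payoff model: contributors always pay a sensing-time cost, while free-riders only benefit when at least one other user senses. The payoffs, learning rate and parameters below are invented for illustration; the paper's throughput model and ESS derivation are more detailed.

```python
import numpy as np


def replicator_dynamics(x0=0.5, n_users=10, tau=0.2, reward=1.0, eta=0.3, steps=200):
    """Toy replicator dynamics for the fraction x of secondary users that
    contribute to cooperative sensing (assumed payoff model, not the paper's)."""
    x = x0
    trajectory = [x]
    for _ in range(steps):
        f_coop = (1.0 - tau) * reward                              # contributor: pays sensing time tau
        f_free = reward * (1.0 - (1.0 - x) ** (n_users - 1))       # free-rider: needs someone else to sense
        x = float(np.clip(x + eta * x * (1.0 - x) * (f_coop - f_free), 0.0, 1.0))
        trajectory.append(x)
    return trajectory


if __name__ == "__main__":
    traj = replicator_dynamics()
    print(f"start {traj[0]:.2f} -> stable cooperation level {traj[-1]:.3f}")
```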
---
paper_title: Authenticating Primary Users' Signals in Cognitive Radio Networks via Integrated Cryptographic and Wireless Link Signatures
paper_content:
To address the increasing demand for wireless bandwidth, cognitive radio networks (CRNs) have been proposed to increase the efficiency of channel utilization; they enable the sharing of channels among secondary (unlicensed) and primary (licensed) users on a non-interference basis. A secondary user in a CRN should constantly monitor for the presence of a primary user's signal to avoid interfering with the primary user. However, to gain an unfair share of radio channels, an attacker (e.g., a selfish secondary user) may mimic a primary user's signal to evict other secondary users. Therefore, a secure primary user detection method that can distinguish a primary user's signal from an attacker's signal is needed. A unique challenge in addressing this problem is that the Federal Communications Commission (FCC) prohibits any modification to primary users. Consequently, existing cryptographic techniques cannot be used directly. In this paper, we develop a novel approach for authenticating primary users' signals in CRNs, which conforms to the FCC's requirement. Our approach integrates cryptographic signatures and wireless link signatures (derived from physical radio channel characteristics) to enable primary user detection in the presence of attackers. Essential to our approach is a helper node placed physically close to a primary user. The helper node serves as a "bridge" to enable a secondary user to verify cryptographic signatures carried by the helper node's signals and then obtain the helper node's authentic link signatures to verify the primary user's signals. A key contribution in our paper is a novel physical layer authentication technique that enables the helper node to authenticate signals from its associated primary user. Unlike previous techniques for link signatures, our approach explores the geographical proximity of the helper node to the primary user, and thus does not require any training process.
---
paper_title: Using Classification to Protect the Integrity of Spectrum Measurements in White Space Networks
paper_content:
The emerging paradigm for using the wireless spectrum more efficiently is based on enabling secondary users to exploit white-space frequencies that are not occupied by primary users. A key enabling technology for forming networks over white spaces is distributed spectrum measurements to identify and assess the quality of unused channels. This spectrum availability data is often aggregated at a central base station or database to govern the usage of spectrum. This process is vulnerable to integrity violations if the devices are malicious and misreport spectrum sensing results. In this paper we propose CUSP, a new technique based on machine learning that uses a trusted initial set of signal propagation data in a region as input to build a classifier using Support Vector Machines. The classifier is subsequently used to detect integrity violations. Using classification eliminates the need for arbitrary assumptions about signal propagation models and parameters or thresholds in favor of direct training data. Extensive evaluations using TV transmitter data from the FCC, terrain data from NASA, and house density data from the US Census Bureau for areas in Illinois and Pennsylvania show that our technique is effective against attackers of varying sophistication, while accommodating for regional terrain and shadowing diversity.
---
paper_title: Double Thresholds Based Cooperative Spectrum Sensing Against Untrusted Secondary Users in Cognitive Radio Networks
paper_content:
Spectrum sensing is an essential mechanism in a cognitive radio network. However, the detection performance can be greatly degraded when even a few untrusted secondary users exist, which can be characterized as 'Always Yes' users and 'Always No' users. In this paper, we propose an energy detector with double thresholds, combined with revised data fusion rules, to find these untrusted users and counteract their malicious effects. Probabilities of detection and false alarm for three kinds of revised data fusion rules are derived. Compared with conventional cooperative methods, our method achieves better detection performance.
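A minimal sketch of the double-threshold idea is shown below: each secondary user reports a hard decision only when its measured energy falls outside the ambiguous region between the two thresholds, and the fusion centre combines the confident reports. The thresholds, SNR and OR-rule fusion are illustrative assumptions and do not reproduce the paper's revised fusion rules or its identification of 'Always Yes'/'Always No' users.

```python
import numpy as np

rng = np.random.default_rng(0)
M = 200                       # samples per sensing period (assumed)
noise_var = 1.0
lam_low, lam_high = 0.9, 1.3  # illustrative thresholds on normalized energy

def local_report(primary_present: bool, snr: float = 0.5):
    """Double-threshold energy detector: decide 1/0 only when confident, else abstain."""
    signal = np.sqrt(snr * noise_var) * rng.standard_normal(M) if primary_present else 0.0
    samples = signal + np.sqrt(noise_var) * rng.standard_normal(M)
    energy = np.mean(samples ** 2) / noise_var
    if energy >= lam_high:
        return 1              # confident: primary present
    if energy <= lam_low:
        return 0              # confident: primary absent
    return None               # ambiguous: withhold the hard decision

def fuse(reports):
    """Simple OR-rule fusion over the confident reports (abstentions are ignored)."""
    votes = [r for r in reports if r is not None]
    return int(any(votes)) if votes else 0

reports = [local_report(primary_present=True) for _ in range(10)]
print("local reports:", reports, "-> fused decision:", fuse(reports))
```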
---
paper_title: Secure Cooperative Sensing Techniques for Cognitive Radio Systems
paper_content:
The most important task for a cognitive radio (CR) system is to identify the primary licensed users over a wide range of spectrum. Cooperation among spectrum sensing devices has been shown to offer various benefits including decrease in sensitivity requirements of the individual sensing devices. However, it has been shown in the literature that the performance of cooperative sensing schemes can be severely degraded due to presence of malicious users sending false sensing data. In this paper, we present techniques to identify such malicious users and mitigate their harmful effect on the performance of the cooperative sensing system.
---
paper_title: An Efficient, Secure and User Privacy-Preserving Search Protocol for Peer-to-Peer Networks
paper_content:
A peer-to-peer (P2P) network is a distributed system in which the autonomous peers can leave and join the network at their will and share their resources to perform some functions in a distributed manner. In an unstructured P2P network, there is no centralized administrative entity that controls the operations of the peers, and the resources (i.e., the files) that the peers share are not related to their topological positions in the network. With the advent of the Internet of Things (IoT), the P2P networks have found increased interest in the research community since the search protocols for these networks can be gainfully utilized in the resource discovery process for the IoT applications. However, there are several challenges in designing an efficient search protocol for the unstructured P2P networks since these networks suffer from problems such as fake content distribution, free riding, whitewashing, poor search scalability, lack of a robust trust model and the absence of a user privacy protection mechanism. Moreover, the peers can join and leave the network frequently, which makes trust management and searching in these networks quite a challenging task. In this chapter, a secure and efficient searching protocol for unstructured P2P networks is proposed that utilizes topology adaptation by constructing an overlay of trusted peers and increases the search efficiency by intelligently exploiting the formation of semantic community structures among the trustworthy peers. It also guarantees that the privacy of the users and data in the network is protected. Extensive simulation results are presented and the performance of the protocol is also compared with those of some of the existing protocols to demonstrate its advantages.
---
paper_title: The Research of Cross-Layer Architecture Design and Security for Cognitive Radio Network
paper_content:
In this paper, we first discuss the framework of cognitive radio networks (CRNs) and analyze the motivations for cross-layer design and security in CRNs. Secondly, we propose a novel architecture in which dynamic channel access is achieved by a cross-layer design between the PHY and MAC layers. Moreover, a solution to the cross-layer security problem is proposed and analyzed mathematically. Finally, we discuss the security issues of spectrum sensing for centralized CRNs and propose a novel centralized dynamic channel access mechanism; simulations show that it can improve network performance.
---
paper_title: CatchIt: Detect Malicious Nodes in Collaborative Spectrum Sensing
paper_content:
Collaborative spectrum sensing in cognitive radio networks has been proposed as an efficient way to improve the performance of primary user detection. In collaborative spectrum sensing schemes, secondary users are often assumed to be trustworthy. In practice, however, cognitive radio nodes can be compromised. Compromised secondary users can report false detection results and significantly degrade the performance of spectrum sensing. In this paper, we investigate the case where there are multiple malicious users in cognitive radio networks and the exact number of malicious users is unknown. An onion-peeling approach is proposed to defend against multiple untrustworthy secondary nodes. We calculate the suspicious level of all nodes according to their reports. When the suspicious level of a node exceeds a certain threshold, it is considered malicious and its report is excluded from decision-making. We continue to calculate the suspicious levels of the remaining nodes until no malicious node can be found. Simulation results show that malicious nodes greatly degrade the performance of collaborative sensing, and the proposed scheme can efficiently detect malicious nodes. Compared with existing defense methods, the proposed scheme significantly improves the performance of primary user detection, measured by ROC curves, and captures dynamic changes in the behavior of malicious users.
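The onion-peeling idea can be sketched as follows: compute a suspicion score for every node, peel off the most suspicious node when its score exceeds a threshold, re-fuse, and repeat. The sketch below uses the disagreement frequency with the fused majority decision as a simplified stand-in for the paper's probabilistic suspicious level; the node counts, error rates and threshold are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
N_NODES, ROUNDS = 8, 300
P_ERR = 0.1                       # honest sensing error probability (assumed)
MALICIOUS = {0, 1}                # nodes that always report the opposite
SUSPICION_THRESHOLD = 0.35        # illustrative exclusion threshold

truth = rng.integers(0, 2, ROUNDS)
reports = np.where(rng.random((N_NODES, ROUNDS)) < P_ERR, 1 - truth, truth)
for m in MALICIOUS:
    reports[m] = 1 - truth        # falsified reports

active = set(range(N_NODES))
while True:
    # fuse with a majority vote over the currently trusted nodes
    fused = (reports[sorted(active)].mean(axis=0) > 0.5).astype(int)
    # suspicion: fraction of rounds a node disagrees with the fused decision
    suspicion = {i: float(np.mean(reports[i] != fused)) for i in active}
    worst = max(suspicion, key=suspicion.get)
    if suspicion[worst] < SUSPICION_THRESHOLD:
        break                     # no node looks malicious any more
    active.remove(worst)          # peel off the most suspicious node and repeat

print("excluded nodes :", sorted(set(range(N_NODES)) - active))
print("fusion accuracy:", float(np.mean(fused == truth)))
```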
---
paper_title: Reputation-based cooperative spectrum sensing with trusted nodes assistance
paper_content:
Existing cooperative spectrum sensing (CSS) schemes are typically vulnerable to attacks where misbehaving cognitive radios (CRs) falsify sensing data. To ensure the robustness of spectrum sensing, this letter presents a secure CSS scheme by introducing a reputation-based mechanism to identify misbehavior and mitigate its harmful effect on sensing performance. Motivated by the fact that such secure CSS is sensitive to the correctness of reputations, we further present a trusted-node assistance scheme. This scheme starts with reliable CRs. Sensing information from other CRs is incorporated into cooperative sensing only when their reputation is verified, which increases the robustness of cooperative sensing. Simulations verify the effectiveness of the proposed schemes.
---
paper_title: Defense against spectrum sensing data falsification attacks in mobile ad hoc networks with cognitive radios
paper_content:
Cognitive radios (CRs) have been considered for use in mobile ad hoc networks (MANETs). The area of security in Cognitive Radio MANETs (CR-MANETs) has yet to receive much attention. However, some distinct characteristics of CRs introduce new, non-trivial security risks to CR-MANETs. In this paper, we study spectrum sensing data falsification (SSDF) attacks to CR-MANETs, in which intruders send false local spectrum sensing results in cooperative spectrum sensing, and SSDF may result in incorrect spectrum sensing decisions by CRs. We present a consensus-based cooperative spectrum sensing scheme to counter SSDF attacks in CR-MANETs. Our scheme is based on recent advances in consensus algorithms that have taken inspiration from self-organizing behavior of animal groups such as fish. Unlike the existing schemes, there is no need for a common receiver to do the data fusion for reaching the final decision to counter SSDF attacks. Simulation results are presented to show the effectiveness of the proposed scheme.
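The core of such a consensus-based scheme is an iterative local update in which every node nudges its estimate toward its neighbors' values until all nodes agree, with no fusion centre involved. The sketch below shows only this basic average-consensus iteration on a ring topology with assumed parameters; the paper's additional defenses against falsified inputs are omitted.

```python
import numpy as np

rng = np.random.default_rng(3)
N = 6
# assumed symmetric neighbor graph (ring) -- no fusion centre is needed
neighbors = {i: [(i - 1) % N, (i + 1) % N] for i in range(N)}
EPS = 0.2                          # consensus step size (must be < 1/max_degree)
THRESHOLD = 1.2                    # energy threshold for the final local decision

x = 1.5 + 0.4 * rng.standard_normal(N)   # noisy local energy measurements under H1

for _ in range(100):               # iterative consensus on the measurement average
    x_new = x.copy()
    for i in range(N):
        x_new[i] += EPS * sum(x[j] - x[i] for j in neighbors[i])
    x = x_new

decisions = (x > THRESHOLD).astype(int)
print("converged estimates:", np.round(x, 3))
print("local decisions    :", decisions)   # all nodes reach the same decision
```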
---
paper_title: Catching Attacker(s) for Collaborative Spectrum Sensing in Cognitive Radio Systems: An Abnormality Detection Approach
paper_content:
Collaborative spectrum sensing, which collects local observations or decisions from multiple secondary users to make a decision at a fusion center, is an effective approach to alleviate the unreliability of single-user spectrum sensing. However, it is subject to attacks by malicious secondary user(s), which may send false reports. Therefore, it is necessary to detect potential attacker(s) and make attack-proof decisions for spectrum sensing. Most existing attacker detection schemes are based on knowledge of the attacker's strategy and thus apply Bayesian detection of attackers. However, in practical cognitive radio systems, the data fusion center typically does not know the attacker's strategy. To alleviate the problem of the unknown strategy of attacker(s), an abnormality detection approach, based on abnormality detection in data mining, is proposed. The performance of attacker detection in the single-attacker scenario is analyzed explicitly. For the case where the attacker does not know the reports of honest secondary users (called independent attack), it is numerically shown that the attacker can always be detected as the number of spectrum sensing rounds tends to infinity. For the case where the attacker knows all the reports of other secondary users, based on which the attacker sends its report (called dependent attack), an approach for the attacker to perfectly avoid being detected is found, provided that the attacker has perfect information about the miss detection and false alarm probabilities. This motivates cognitive radio systems to protect the reports of secondary users. The performance of attacker detection in the general case of multiple attackers is demonstrated using numerical simulations.
---
paper_title: Cooperative Spectrum Sensing with Double Threshold Detection Based on Reputation in Cognitive Radio
paper_content:
Cooperative spectrum sensing can mitigate the effects of shadowing and fading. However, when the number of cognitive users is very large, the bandwidth for reporting their sensing results will be insufficient. In order to eliminate the failed-sensing problem for a cognitive radio system with a double-threshold detector, a new cooperative spectrum sensing algorithm based on reputation is presented in this paper. In particular, closed-form expressions for the normalized average number of sensing bits and the probabilities of detection and false alarm are derived. Simulation results show that the average number of sensing bits decreases greatly without failed sensing, and the sensing performance is improved compared with conventional double-threshold detection and conventional single-threshold detection.
---
paper_title: Reputation- and Trust-Based Systems for Wireless Self-organizing Networks
paper_content:
The traditional approach to providing network security has been to borrow tools and mechanisms from cryptography. However, the conventional view of security based on cryptography alone is not sufficient for defending against the unique and novel types of misbehavior exhibited by nodes in wireless self-organizing networks such as mobile ad hoc networks and wireless sensor networks. Reputation-based frameworks, where nodes maintain the reputation of other nodes and use it to evaluate their trustworthiness, are deployed to provide a scalable, diverse and generalized approach for countering different types of misbehavior resulting from malicious and selfish nodes in these networks. In this chapter, we present a comprehensive discussion on reputation- and trust-based systems for wireless self-organizing networks. Different classes of reputation system are described along with their unique characteristics and working principles. A number of currently used reputation systems are critically reviewed and compared with respect to their effectiveness and efficiency of performance. Some open problems in the area of reputation- and trust-based systems within the domain of wireless self-organizing networks are also discussed.
---
paper_title: Robust Distributed Spectrum Sensing in Cognitive Radio Networks
paper_content:
Distributed spectrum sensing (DSS) enables a Cognitive Radio (CR) network to reliably detect licensed users and avoid causing interference to licensed communications. The data fusion technique is a key component of DSS. We discuss the Byzantine failure problem in the context of data fusion, which may be caused by either malfunctioning sensing terminals or Spectrum Sensing Data Falsification (SSDF) attacks. In either case, incorrect spectrum sensing data will be reported to a data collector which can lead to the distortion of data fusion outputs. We investigate various data fusion techniques, focusing on their robustness against Byzantine failures. In contrast to existing data fusion techniques that use a fixed number of samples, we propose a new technique that uses a variable number of samples. The proposed technique, which we call Weighted Sequential Probability Ratio Test (WSPRT), introduces a reputation-based mechanism to the Sequential Probability Ratio Test (SPRT). We evaluate WSPRT by comparing it with a variety of data fusion techniques under various network operating conditions. Our simulation results indicate that WSPRT is the most robust against the Byzantine failure problem among the data fusion techniques that were considered.
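A simplified sketch of a weighted sequential test in the spirit of WSPRT is given below: the fusion centre accumulates a log-likelihood ratio over incoming reports, scales each report's contribution by the reporting node's (normalized) reputation, and stops as soon as either decision threshold is crossed. The per-report detection/false-alarm probabilities, thresholds and reputation values are illustrative assumptions.

```python
import numpy as np

# illustrative per-report sensing quality, assumed known to the fusion centre
PD, PF = 0.9, 0.1                        # detection / false-alarm probability of one report
A, B = np.log(99.0), np.log(1 / 99.0)    # assumed SPRT decision thresholds

def weighted_sequential_test(reports, reputations):
    """Weighted sequential probability ratio test over a stream of (node, report) pairs.

    Each report's log-likelihood ratio is scaled by the node's normalized reputation,
    so low-reputation (possibly Byzantine) nodes contribute less evidence.
    """
    llr = 0.0
    for node, u in reports:
        w = reputations[node] / max(reputations.values())   # weight in [0, 1]
        if u == 1:
            llr += w * np.log(PD / PF)
        else:
            llr += w * np.log((1 - PD) / (1 - PF))
        if llr >= A:
            return 1, llr      # declare "primary present"
        if llr <= B:
            return 0, llr      # declare "primary absent"
    return None, llr           # not enough evidence yet: keep collecting reports

reputations = {"n1": 10, "n2": 9, "n3": 1}    # n3 has misbehaved in the past
stream = [("n1", 1), ("n3", 0), ("n2", 1), ("n3", 0), ("n1", 1)]
print(weighted_sequential_test(stream, reputations))
```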
---
paper_title: Attack-proof collaborative spectrum sensing in cognitive radio networks
paper_content:
Collaborative sensing in cognitive radio networks can significantly improve the probability of detecting the transmission of primary users. In current collaborative sensing schemes, all collaborative secondary users are assumed to be honest. As a consequence, the system is vulnerable to attacks in which malicious secondary users report false detection results. In this paper, we investigate how to improve the security of collaborative sensing. Particularly, we develop a malicious user detection algorithm that calculates the suspicious level of secondary users based on their past reports. Then, we calculate trust values as well as consistency values that are used to eliminate the malicious users' influence on the primary user detection results. Through simulations, we show that even a single malicious user can significantly degrade the performance of collaborative sensing. The proposed trust value indicator can effectively differentiate honest and malicious secondary users. The receiver operating characteristic (ROC) curves for the primary user detection demonstrate the improvement in the security of collaborative sensing.
---
paper_title: Collaborative Spectrum Sensing in the Presence of Byzantine Attacks in Cognitive Radio Networks
paper_content:
Cognitive radio (CR) has emerged as a solution to the problem of spectrum scarcity as it exploits the transmission opportunities in the under-utilized spectrum bands of primary users. Collaborative (or distributed) spectrum sensing has been shown to have various advantages in terms of spectrum utilization and robustness. The data fusion scheme is a key component of collaborative spectrum sensing. In this paper, we analyze the performance limits of collaborative spectrum sensing under Byzantine Attacks where malicious users send false sensing data to the fusion center leading to increased probability of incorrect sensing results. We show that above a certain fraction of Byzantine attackers in the CR network, data fusion scheme becomes completely incapable and no reputation based fusion scheme can achieve any performance gain. We present optimal attacking strategies for given attacking resources and also analyze the possible counter measures at the fusion center (FC).
---
paper_title: A survey of spectrum sensing algorithms for cognitive radio applications
paper_content:
The spectrum sensing problem has gained new aspects with cognitive radio and opportunistic spectrum access concepts. It is one of the most challenging issues in cognitive radio systems. In this paper, a survey of spectrum sensing methodologies for cognitive radio is presented. Various aspects of spectrum sensing problem are studied from a cognitive radio perspective and multi-dimensional spectrum sensing concept is introduced. Challenges associated with spectrum sensing are given and enabling spectrum sensing methods are reviewed. The paper explains the cooperative sensing concept and its various forms. External sensing algorithms and other alternative sensing methods are discussed. Furthermore, statistical modeling of network traffic and utilization of these models for prediction of primary user behavior is studied. Finally, sensing features of some current wireless standards are given.
---
paper_title: Security threats to signal classifiers using self-organizing maps
paper_content:
Spectrum sensing is required for many cognitive radio applications, including spectral awareness, interoperability, and dynamic spectrum access. Previous work has demonstrated the ill effects of primary user emulation attacks, and pointed out specific vulnerabilities in spectrum sensing that uses feature-based classifiers. This paper looks specifically at the use of unsupervised learning in signal classifiers, and attacks against self-organizing maps. By temporarily manipulating their signals, attackers can cause other secondary users to permanently misclassify them as primary users, giving them complete access to the spectrum. In this paper we develop the theory behind manipulating the decision regions in a neural network using self-organizing maps. We then demonstrate through simulation the ability of an attacker to formulate the necessary input signals to execute the attack. Lastly, we provide recommendations to mitigate the efficacy of this type of attack.
---
paper_title: Ensuring Trustworthy Spectrum Sensing in Cognitive Radio Networks
paper_content:
Cognitive Radio (CR) is a promising technology that can alleviate the spectrum shortage problem by enabling unlicensed users equipped with CRs to coexist with incumbent users in licensed spectrum bands without inducing interference to incumbent communications. Spectrum sensing is one of the essential mechanisms of CRs and has attracted great attention from researchers recently. Although the operational aspects of spectrum sensing are being investigated actively, its security aspects have garnered little attention. In this paper, we describe an attack that poses a great threat to spectrum sensing. In this attack, which is called the primary user emulation (PUE) attack, an adversary's CR transmits signals whose characteristics emulate those of incumbent signals. The highly flexible, software-based air interface of CRs makes such an attack possible. Our investigation shows that a PUE attack can severely interfere with the spectrum sensing process and significantly reduce the channel resources available to legitimate unlicensed users. As a way of countering this threat, we propose a transmitter verification procedure that can be integrated into the spectrum sensing mechanism. The transmitter verification procedure employs a location verification scheme to distinguish incumbent signals from unlicensed signals masquerading as incumbent signals. Two alternative techniques are proposed to realize location verification: the Distance Ratio Test and the Distance Difference Test. We provide simulation results for the two techniques as well as analyses of their security in the paper.
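The flavour of the Distance Ratio Test can be conveyed with a toy sketch: two verifiers compare the distance ratio implied by their received signal strengths (under an assumed log-distance path-loss model) with the ratio expected for the claimed incumbent location, and flag a mismatch as a possible PUE attack. The path-loss exponent, tolerance and geometry below are assumptions, and the sketch ignores shadowing and measurement noise.

```python
import numpy as np

PATH_LOSS_EXP = 3.0                       # assumed path-loss exponent

def rss_to_distance_ratio(rss1_dbm, rss2_dbm, n=PATH_LOSS_EXP):
    """Under a log-distance model, P1 - P2 = 10 * n * log10(d2 / d1)."""
    return 10 ** ((rss1_dbm - rss2_dbm) / (10 * n))   # returns d2 / d1

def distance_ratio_test(verifier1, verifier2, tx_claimed, rss1_dbm, rss2_dbm, tol=0.25):
    """Flag a primary-user-emulation suspect if the measured distance ratio
    deviates from the ratio implied by the claimed transmitter location."""
    d1 = np.linalg.norm(np.array(verifier1) - np.array(tx_claimed))
    d2 = np.linalg.norm(np.array(verifier2) - np.array(tx_claimed))
    expected = d2 / d1
    measured = rss_to_distance_ratio(rss1_dbm, rss2_dbm)
    return abs(measured - expected) / expected > tol   # True -> suspected PUE attack

# toy scenario: incumbent claimed at (0, 0); attacker actually transmits from (3, 1)
v1, v2, tower = (10.0, 0.0), (0.0, 10.0), (0.0, 0.0)
attacker = np.array([3.0, 1.0])
d1 = np.linalg.norm(np.array(v1) - attacker)
d2 = np.linalg.norm(np.array(v2) - attacker)
rss1 = -40 - 10 * PATH_LOSS_EXP * np.log10(d1)
rss2 = -40 - 10 * PATH_LOSS_EXP * np.log10(d2)
print("PUE suspected:", distance_ratio_test(v1, v2, tower, rss1, rss2))
```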
---
paper_title: A New Cooperative Detection Technique with Malicious User Suppression
paper_content:
Spectrum detection of vacant bands is one of the key techniques in cognitive radio (CR) systems. Cooperative detection outperforms single-user detection in many aspects. The existence of malicious users can severely degrade the performance of cooperative CR systems. In this paper, a new cooperative detection scheme with malicious user suppression is proposed, which has lower complexity and better performance compared with the existing one. Simulation results show that when 25% of the users in the system are malicious, our proposed method can further improve the missed-detection probability by nearly 20%.
---
paper_title: A PHY-layer Authentication Approach for Transmitter Identification in Cognitive Radio Networks
paper_content:
Cognitive radio (CR) has been proposed as a key technology to achieve secondary usage of the spectrum. The security problems of CR networks, such as primary user emulation (PUE) attacks, have not been intensively studied. In this paper, we study the non-interactive security issues of wireless networks and propose a physical layer authentication approach to prevent PUE attacks in CR networks. We extract transmitter location fingerprints from the wireless medium in a multipath propagation environment. The wavelet transform is used to extract the characteristics of these fingerprints. Simulation and experimental results show that our approach can effectively identify PUE attackers and legitimate primary users.
---
paper_title: On Secure Spectrum Sensing in Cognitive Radio Networks Using Emitters Electromagnetic Signature
paper_content:
As cognitive radio networks (CRNs) emerge as an extremely promising next-generation wireless technology that can ease the apparent spectrum scarcity and support novel wireless applications, they will become bigger targets for hackers. Moreover, they will also be exposed to diverse security threats, especially at the physical-layer (PHY) spectrum sensing module. Hence, security considerations are central to their development. Starting with an overview of ongoing research efforts in CR-based network security, this paper describes a PHY attacker model that exploits the adaptability and flexibility of CRNs. To thwart this attack, we propose a waveform pattern recognition scheme to identify emitters and detect camouflaging attackers by using the Electromagnetic Signature (EMS) of the transceiver. Regarding the performance of the technique, our simulation results show that our approach is effective for spectrum monitoring, mitigating denial-of-service threats and facilitating spectral efficiency.
---
paper_title: Towards Secure Spectrum Decision
paper_content:
The key idea of dynamic spectrum access (DSA) networks is to allow secondary, unlicensed users to detect and use unused portions of the spectrum (white spaces) opportunistically. The two main constraints in the design of DSA networks are to make sure that this opportunistic access is done without any disruption of service to the primary users and without any modifications to the primaries themselves. Most architectures and protocols for DSA networks in the literature assume that all parties are honest and that there are no attackers. Recently (IEEE ICC, CogNet 2008) we demonstrated the failure of this approach by showing that an attacker can manipulate messages to convince the parties involved in the protocol to make incorrect spectrum decisions. In this paper, we consider spectrum decision protocols in clustered infrastructure-based dynamic spectrum access networks where the spectrum decision in each cluster is coordinated by some central authority. We propose an efficient and provably secure protocol that can be used to protect the spectrum decision process against a malicious adversary.
---
paper_title: Self-Organized Public-Key Management for Mobile Ad Hoc Networks
paper_content:
In contrast with conventional networks, mobile ad hoc networks usually do not provide online access to trusted authorities or to centralized servers, and they exhibit frequent partitioning due to link and node failures and to node mobility. For these reasons, traditional security solutions that require online trusted authorities or certificate repositories are not well-suited for securing ad hoc networks. We propose a fully self-organized public-key management system that allows users to generate their public-private key pairs, to issue certificates, and to perform authentication regardless of the network partitions and without any centralized services. Furthermore, our approach does not require any trusted authority, not even in the system initialization phase.
---
paper_title: A Radio-independent Authentication Protocol (EAP-CRP) for Networks of Cognitive Radios
paper_content:
Securing future wireless networks will be a critical challenge, as the popularity of mobile communications implies that wireless networks will be the target of abuse. The next generation of wireless networks, as envisioned by recent advances in cognitive radio (CR) technologies, will be autonomic and able to adjust their configuration to changes in the communication environment. Unfortunately, the authentication frameworks for various radio technologies, such as IEEE 802.11 and 802.16, are quite different from one another and, in order to support radio reconfiguration, it is necessary to devise an appropriate authentication framework for CR systems. In this paper, we propose a radio-independent authentication protocol for CRs that is independent of the underlying radio protocols and able to support EAP transport. The re-keying protocol uses user-specific information, such as location information, as a key seed. The keys for authentication and encryption are derived from the historical location registry of a mobile terminal. The keys are frequently updated as the mobile user's position varies. After discussing authentication issues for CR networks, radio-independent authentication via location information, and application to EAP transport, we evaluate the confidentiality of the key management method and its integration with EAP, thereby supporting the effectiveness of our key management method for CR networks.
---
paper_title: Common Control Channel Security Framework for Cognitive Radio Networks
paper_content:
Cognitive radio networks are becoming an increasingly important part of the wireless networking landscape due to the ever-increasing scarcity of spectrum resources. Such networks perform co-operative spectrum sensing to find white spaces and apply policies to determine when and in which bands they may communicate. In a typical MAC protocol designed for cooperatively communicating ad hoc cognitive radio networks, nodes make use of a common control channel to perform channel negotiations before any actual data transmission. The provision of common control channel security is vital to ensure any subsequent security among the communicating cognitive radio nodes. To date, wireless security has received little attention in cognitive radio networks research. The cognitive radio paradigm introduces entirely new classes of security threats and challenges, such as selfish misbehaviours, licensed user emulation and eavesdropping. This paper presents a novel framework for providing common control channel security for co-operatively communicating cognitive radio nodes. To the best of the authors' knowledge, this is the first paper which proposes such a concept. The paper investigates how two cognitive radio nodes can authenticate each other prior to any confidential channel negotiations to ensure subsequent security against attacks. The paper also describes the importance of common control channel security and concludes with future work describing the realization of the proposed framework.
---
paper_title: Spectrum Enforcement and Liability Assignment in Cognitive Radio Systems
paper_content:
The advent of frequency-agile radios holds the potential for improving the utilization of spectrum by allowing wireless systems to dynamically adapt their spectral footprint based on the local conditions. Whether this is done using market mechanisms or opportunistic approaches, the gains result from shifting some responsibility for avoiding harmful interference from the static "regulatory layer" to layers that can adapt at runtime. However, this leaves open the major problem of how to enforce/incentivize compliance and what the structure of "light-handed" regulation should be. This paper examines this issue and focuses on two specific technical problems: (a) determining whether harmful interference is occurring and (b) assigning liability by detecting the culprits. "Light-handed regulation" is interpreted as making unambiguous (and easily certified) requirements on the behavior of individual devices themselves while still preserving significant freedom to innovate at both the device and the system level. The basic idea explored here is to require the PHY/MAC layers of a cognitive radio to guarantee silence during certain time-slots, where the exact sequence of required silences is given by a device/system-specific code. Thus, if a system is a source of harmful interference, the interference pattern itself contains the signature of the culprit. Nevertheless, identifying the unique interference pattern becomes challenging as both the number of cognitive radios and the number of harmful interferers increases. The key tradeoffs are explored in terms of the "regulatory overhead" (amount of enforced silence) needed to make guarantees. The quality of regulatory guarantees is expressed by the time required to convict the guilty, the number of potential cognitive systems that can be supported, and the number of simultaneously guilty parties that can be resolved. We show that the time to conviction need only scale logarithmically in the potential number of cognitive users. The base of the logarithm is determined by the amount of overhead that we will tolerate and how many guilty parties we want to be able to resolve.
---
paper_title: Policy-based spectrum access control for dynamic spectrum access network radios
paper_content:
We describe the design of a policy-based spectrum access control system for the Defense Advanced Research Projects Agency (DARPA) NeXt Generation (XG) communications program to overcome harmful interference caused by a malfunctioning device or a malicious user. In tandem with signal-detection-based interference-avoidance algorithms employed by cognitive software-defined radios (SDR), we design a set of policy-based components, tightly integrated with the accredited kernel on the radio device. The policy conformance and enforcement components ensure that a radio does not violate machine understandable policies, which are encoded in a declarative language and which define stakeholders' goals and requirements. We report on our framework experimentation, illustrating the capability offered to radios for enforcing policies and the capability for managing radios and securing access control to interfaces changing the radios' policies.
---
paper_title: TRIESTE: A Trusted Radio Infrastructure for Enforcing SpecTrum Etiquettes
paper_content:
There has been considerable effort directed at developing "cognitive radio" (CR) platforms, which will expose the lower-layers of the protocol stack to researchers, developers and the "public". In spite of the great potential of such a radio platform, such "public" development threatens the success of these platforms: the proliferation of such wireless platforms, plus the open-source nature of their supporting software, is powerful but also dangerous. It is easily conceivable that inexpensive and widely available cognitive radios could become an ideal platform for abuse since the lowest layers of the wireless protocol stack are accessible to programmers. In order to regulate the future radio environment, this paper presents a framework, known as TRIESTE (Trusted Radio Infrastructure for Enforcing SpecTrum Etiquettes), which can ensure that radio devices are only able to access/use the spectrum in a manner that conforms to their privileges. In TRIESTE, two levels of etiquette enforcement mechanisms are employed. The first is an on-board mechanism that ensures trustworthy radio operation by restricting any potential violation operation from accessing the radio through a secure component located in each CR. External to individual cognitive radios, an infrastructure consisting of spectrum sensors monitors the radio environment, and reports measurements to spectrum police agents that punish CRs if violations are detected.
---
paper_title: Secure Physical Layer using Dynamic Permutations in Cognitive OFDMA Systems
paper_content:
This paper proposes a novel lightweight mechanism for a secure physical (PHY) layer in cognitive radio networks (CRNs) using orthogonal frequency division multiplexing (OFDM). Users' data symbols are mapped over the physical subcarriers with a permutation formula. The PHY layer is secured with a random and dynamic subcarrier permutation which is based on a single piece of pre-shared information and depends on dynamic spectrum access (DSA). The dynamic subcarrier permutation varies over time, geographical location and environment status, resulting in a very robust protection that ensures confidentiality. The method is shown to be effective also for existing non-cognitive systems. The proposed mechanism is effective against eavesdropping even if the eavesdropper adopts long-time pattern analysis, thus protecting the cryptographic techniques of higher layers. The correlation properties of the permutations are analyzed for several DSA patterns. Simulations are performed according to the parameters of the IEEE 802.16e system model. The proposed securing mechanism provides intrinsic PHY layer security and can be easily implemented in the current IEEE 802.16 standard with almost negligible modifications.
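A rough sketch of the mechanism is a permutation that both ends derive deterministically from the pre-shared secret and the current DSA state, so it changes whenever the spectrum occupancy or time slot changes. The hash-based seeding, the placeholder DSA-state string and the subcarrier count below are assumptions, not the paper's permutation formula.

```python
import hashlib
import numpy as np

def subcarrier_permutation(shared_secret: bytes, dsa_state: bytes, n_subcarriers: int):
    """Derive a pseudo-random subcarrier permutation from a pre-shared secret and
    the current dynamic-spectrum-access state (e.g. slot index and free-channel map).
    Both ends compute the same permutation; an eavesdropper without the secret cannot."""
    seed = int.from_bytes(hashlib.sha256(shared_secret + dsa_state).digest()[:8], "big")
    return np.random.default_rng(seed).permutation(n_subcarriers)

N_SC = 16
secret = b"pre-shared-key"                  # hypothetical pre-shared information
state = b"slot=42|free=0b1011001110001111"  # hypothetical DSA state, changes over time

perm = subcarrier_permutation(secret, state, N_SC)
symbols = np.arange(N_SC)                   # stand-in for modulated data symbols
tx = np.empty(N_SC, dtype=symbols.dtype)
tx[perm] = symbols                          # transmitter scatters symbols over subcarriers

rx = tx[perm]                               # receiver recomputes perm and inverts the mapping
assert np.array_equal(rx, symbols)
print("permutation:", perm)
```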
---
paper_title: Anti-jamming coding techniques with application to cognitive radio
paper_content:
In this paper, we consider the design of efficient anti-jamming coding techniques for recovering lost packets transmitted through parallel channels. We present two coding schemes with small overhead and low complexity, namely rateless coding and piecewise coding. For piecewise coding, we propose optimal as well as several suboptimal design methods to build short block codes with a small number of parity checks. One application of the anti-jamming coding techniques is in a cognitive radio system to protect the secondary users from interference by the primary users. For such an application, we consider two types of subchannel selection, i.e., single uniform and general non-uniform subchannel selection. The throughput and goodput performance of secondary users employing either anti-jamming coding technique is analyzed under both subchannel selection strategies. The results show that both coding techniques provide reliable transmissions with high throughput and small redundancy. Piecewise coding using the designed short codes provides better performance with smaller overhead under low to medium jamming rates. For non-uniform subchannel selection, the designed short code improves the throughput and goodput performance of secondary transmission with anti-jamming piecewise coding, while rateless coding provides similar or worse performance than in the uniform case.
---
paper_title: Considerations for Successful Cognitive Radio Systems in US TV White Space
paper_content:
On February 17, 2009, the United States will complete the transition to digital television. The FCC is in the process of establishing rules to allow unlicensed secondary use of the TV "white space" that this transition creates. This television white space (TVWS) opens vital new portions of limited radio spectrum and enables delivery of new communication services, particularly wireless broadband, to millions of underserved Americans. In some rural markets, up to 250 MHz of spectrum could be utilized. Expected initial system deployments in TVWS include wireless Internet service providers (WISPs) and broadband coverage systems for business enterprises. It is critical that these early services are successful, both technically in protecting incumbent licensed users as well as financially in being commercially viable. Cognitive radio technologies are available that enable radio systems operating in the TVWS spectrum to reliably protect the incumbent licensed users. These technologies include geolocation, augmented by sensing, and beaconing. Existing protocols, such as WiMax, WiFi, and proprietary protocols, rebanded to the UHF frequency ranges, will provide broadband data throughput at adequate ranges to allow practical services. It is important to develop and deploy early viable solutions into this band that protect incumbent licensed users so that cycles of learning can start. Lessons from these early systems will allow the industry to accept more aggressive cognitive radio technologies and services, will offer greater comfort to critics of the technology, and will provide regulatory agencies worldwide with data for future rulemaking.
---
paper_title: Radio-telepathy: extracting a secret key from an unauthenticated wireless channel
paper_content:
Securing communications requires the establishment of cryptographic keys, which is challenging in mobile scenarios where a key management infrastructure is not always present. In this paper, we present a protocol that allows two users to establish a common cryptographic key by exploiting special properties of the wireless channel: the underlying channel response between any two parties is unique and decorrelates rapidly in space. The established key can then be used to support security services (such as encryption) between two users. Our algorithm uses level-crossings and quantization to extract bits from correlated stochastic processes. The resulting protocol resists cryptanalysis by an eavesdropping adversary and a spoofing attack by an active adversary without requiring an authenticated channel, as is typically assumed in prior information-theoretic key establishment schemes. We evaluate our algorithm through theoretical and numerical studies, and provide validation through two complementary experimental studies. First, we use an 802.11 development platform with customized logic that extracts raw channel impulse response data from the preamble of a format-compliant 802.11a packet. We show that it is possible to practically achieve key establishment rates of ~ 1 bit/sec in a real, indoor wireless environment. To illustrate the generality of our method, we show that our approach is equally applicable to per-packet coarse signal strength measurements using off-the-shelf 802.11 hardware.
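The quantization step can be sketched as follows: both ends observe highly correlated (reciprocal) channel measurements, keep only samples that clear a guard band around the mean, exchange the kept indices over the public channel, and map the common samples to bits. The trace model, guard-band factor and noise level below are assumptions, and the paper's minimum-excursion-length rule and reconciliation steps are omitted.

```python
import numpy as np

rng = np.random.default_rng(7)
N, ALPHA = 2000, 0.4                        # samples and guard-band factor (assumed)

# reciprocal channel: both ends observe the same fading process plus independent noise
channel = np.convolve(rng.standard_normal(N + 20), np.ones(20) / 20, mode="valid")[:N]
alice = channel + 0.05 * rng.standard_normal(N)
bob = channel + 0.05 * rng.standard_normal(N)

def quantize(trace, alpha=ALPHA):
    """Keep only samples that clear a guard band around the mean; map them to bits."""
    hi = trace.mean() + alpha * trace.std()
    lo = trace.mean() - alpha * trace.std()
    idx = np.where((trace > hi) | (trace < lo))[0]
    return idx, (trace[idx] > hi).astype(int)

idx_a, bits_a = quantize(alice)
idx_b, bits_b = quantize(bob)
common = np.intersect1d(idx_a, idx_b)        # indices both sides kept (exchanged publicly)
key_a = bits_a[np.isin(idx_a, common)]
key_b = bits_b[np.isin(idx_b, common)]
print(f"{len(common)} candidate bits, agreement = {np.mean(key_a == key_b):.3f}")
```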
---
|
Title: A Survey on Security and Privacy Protocols for Cognitive Wireless Sensor Networks
Section 1: Introduction
Description 1: Discuss the significance of Wireless Sensor Networks (WSNs) and Cognitive Wireless Sensor Networks (CWSNs), their applications, benefits, and the necessity for security protocols.
Section 2: Security and Privacy Issues in WSNs
Description 2: Cover the traditional security and privacy challenges in WSNs, including specific types of attacks such as DoS, secrecy, authentication threats, and stealthy attacks on service integrity.
Section 3: Security Mechanisms in Traditional WSNs
Description 3: Present various security mechanisms employed in traditional WSNs, such as cryptographic applications, key management protocols, defense mechanisms against DoS attacks, and secure data aggregation.
Section 4: Security Vulnerabilities in CWSNs
Description 4: Outline the additional security vulnerabilities introduced in CWSNs, focusing on threats unique to cognitive radio functionalities, including masquerading, jamming, unauthorized spectrum access, and attacks on the cognitive engine.
Section 5: Work on Identification of CWSN Threats
Description 5: Provide an overview of research identifying specific attacks on CR networks, including jamming, PUE attacks, masquerading attacks, false spectrum reports by secondary users, and cognitive control channel security.
Section 6: Security Mechanisms for CWSNs
Description 6: Detail various defense mechanisms specific to CWSNs, enhancing sensor inputs, using reputation-based systems, robust authentication schemes, prevention of unauthorized spectrum access, and protection frameworks for CCCs.
Section 7: Emerging Research Directions
Description 7: Identify and discuss potential future research challenges and directions in the field of cognitive radio networks, particularly focusing on security and privacy issues that need addressing for real-world deployment.
Section 8: Conclusion
Description 8: Summarize the presented security threats and defense mechanisms for CWSNs and underline the importance of addressing these security challenges for the successful deployment of cognitive wireless sensor networks.
|
Survey of Parallel Computing with MATLAB
| 16 |
---
paper_title: Highly Parallel Computing
paper_content:
Part 1 Foundations: overview - overview and scope of this book, definition and driving forces, questions raised, emerging answers, previous attempts why success now?, conclusions and future directions sample applications - scientific and engineering applications, database systems, artificial intelligence systems, summary technological constraints and opportunities - processor and network technology, memory technology, storage technology computational models and selected algorithms - computational models an operational view, computational models - an analytical view, selected parallel algorithms. Part 2 Parallel software: languages and programming environments - review of the major serial languages, parallel imperative languages and extensions, declarative languages, the programmer's view compilers, other translators - serial compiler essentials, parallelizing compiler essentials, summary and perspective operating systems - operating systems for serial machines, controlling concurrency, classifying operating systems for parallel computers, history of parallel operating systems. Part 3 Parallel architectures: interconnection networks - static connection topologies, dynamic connection topologies SIMD parallel architectures - evolution from von Neumann machines, vector processors, pipelined SIMD vector processors, parallel SIMD designs MIMD parallel architectures - stepping up to MIMD, private memory (message-passing), MIMD designs, shared memory MIMD designs hybrid parallel architectures - VLIW architectures, MSIMD tree machines, MSIMD reconfigurable designs.
---
paper_title: Highly Parallel Computing
paper_content:
Part 1 Foundations: overview - overview and scope of this book, definition and driving forces, questions raised, emerging answers, previous attempts why success now?, conclusions and future directions sample applications - scientific and engineering applications, database systems, artificial intelligence systems, summary technological constraints and opportunities - processor and network technology, memory technology, storage technology computational models and selected algorithms - computational models an operational view, computational models - an analytical view, selected parallel algorithms. Part 2 Parallel software: languages and programming environments - review of the major serial languages, parallel imperative languages and extensions, declarative languages, the programmer's view compilers, other translators - serial compiler essentials, parallelizing compiler essentials, summary and perspective operating systems - operating systems for serial machines, controlling concurrency, classifying operating systems for parallel computers, history of parallel operating systems. Part 3 Parallel architectures: interconnection networks - static connection topologies, dynamic connection topologies SIMD parallel architectures - evolution from von Neumann machines, vector processors, pipelined SIMD vector processors, parallel SIMD designs MIMD parallel architectures - stepping up to MIMD, private memory (message-passing), MIMD designs, shared memory MIMD designs hybrid parallel architectures - VLIW architectures, MSIMD tree machines, MSIMD reconfigurable designs.
---
paper_title: MATLAB®: A Language for Parallel Computing
paper_content:
Parallel computing with the MATLAB® language and environment has received interest from various quarters. The Parallel Computing Toolbox™ and MATLAB® Distributed Computing Server™ from The MathWorks are among several available tools that offer this capability. We explore some of the key features of the parallel MATLAB language that these tools offer. We describe the underlying mechanics as well as the salient design decisions and rationale for certain features in the toolset. The paper concludes by identifying some issues that we must address as the language features evolve.
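Among the language features these tools expose is a data-parallel loop over independent iterations (parfor). Since this record contains no code, the sketch below uses Python's standard worker-pool API merely as an analogue of that usage pattern; it is not MATLAB/PCT code, and the loop body is a made-up stand-in for a compute-heavy iteration.

```python
from concurrent.futures import ProcessPoolExecutor
import math

def simulate(trial: int) -> float:
    """A stand-in for an independent, compute-heavy loop body."""
    return sum(math.sin(i * trial) for i in range(100_000))

if __name__ == "__main__":
    trials = range(32)
    # serial version:   results = [simulate(t) for t in trials]
    # parallel version: independent iterations are farmed out to a pool of workers,
    # which is the same usage pattern as a parfor loop over independent iterations
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(simulate, trials))
    print(len(results), "results, first =", round(results[0], 3))
```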
---
paper_title: Survey of Parallel MATLAB Techniques and Applications to Signal and Image Processing
paper_content:
We present a survey of modern parallel MATLAB techniques. We concentrate on the most promising and well-supported techniques, with an emphasis on SIP applications. Some of these methods require writing explicit code to perform inter-processor communication, while others hide the complexities of communication and computation by using higher-level programming interfaces. We cover each approach with special emphasis given to performance and productivity issues.
---
paper_title: MATLAB®: A Language for Parallel Computing
paper_content:
Parallel computing with the MATLAB® language and environment has received interest from various quarters. The Parallel Computing Toolbox™ and MATLAB® Distributed Computing Server™ from The MathWorks are among several available tools that offer this capability. We explore some of the key features of the parallel MATLAB language that these tools offer. We describe the underlying mechanics as well as the salient design decisions and rationale for certain features in the toolset. The paper concludes by identifying some issues that we must address as the language features evolve.
---
paper_title: Survey of Parallel MATLAB Techniques and Applications to Signal and Image Processing
paper_content:
We present a survey of modern parallel MATLAB techniques. We concentrate on the most promising and well-supported techniques, with an emphasis on SIP applications. Some of these methods require writing explicit code to perform inter-processor communication, while others hide the complexities of communication and computation by using higher-level programming interfaces. We cover each approach with special emphasis given to performance and productivity issues.
---
paper_title: Enhancements to MatlabMPI: Easier Compilation, Collective Communication, and Profiling
paper_content:
This paper provides a brief overview of several enhancements made to the MatlabMPI suite. MatlabMPI is a pure MATLAB code implementation of the core parts of the MPI specifications. The enhancements provide a more attractive option for HPCMP users to design parallel MATLAB code. Intelligent compiler configuration tools have also been delivered to further isolate MatlabMPI users from the complexities of the UNIX environments on the various HPCMP systems. Users are now able to install and use MatlabMPI with less difficulty, greater flexibility, and increased portability. Collective communication functions were added to MatlabMPI to expand functionality beyond the core implementation. Profiling capabilities, producing TAU (Tuning and Analysis Utility) trace files, are now offered to support parallel code optimization. All of these enhancements have been tested and documented on a variety of HPCMP systems. All material, including commented example code to demonstrate the usefulness of MatlabMPI, is available by contacting the authors.
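MatlabMPI implements the core MPI operations (point-to-point send/receive) and, with these enhancements, collective communication as well. As an analogue only, the sketch below expresses the same pattern with the mpi4py Python bindings rather than MatlabMPI itself; the tag value, task payload and reduction are arbitrary illustrative choices.

```python
# Python analogue (using mpi4py) of the core operations MatlabMPI exposes:
# point-to-point send/receive plus the collective operations added in the enhancements.
# Run with e.g.:  mpiexec -n 4 python demo.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# point-to-point: rank 0 sends a message to every other rank
if rank == 0:
    for dest in range(1, size):
        comm.send({"task": dest}, dest=dest, tag=7)
else:
    task = comm.recv(source=0, tag=7)

# collective communication: broadcast a parameter set, then reduce partial results
params = comm.bcast({"n": 1000} if rank == 0 else None, root=0)
partial = sum(i for i in range(rank, params["n"], size))
total = comm.reduce(partial, op=MPI.SUM, root=0)

if rank == 0:
    print("sum over all ranks:", total)   # equals sum(range(1000))
```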
---
paper_title: MATLAB®: A Language for Parallel Computing
paper_content:
Parallel computing with the MATLAB® language and environment has received interest from various quarters. The Parallel Computing Toolbox™ and MATLAB® Distributed Computing Server™ from The MathWorks are among several available tools that offer this capability. We explore some of the key features of the parallel MATLAB language that these tools offer. We describe the underlying mechanics as well as the salient design decisions and rationale for certain features in the toolset. The paper concludes by identifying some issues that we must address as the language features evolve.
---
paper_title: Survey of Parallel MATLAB Techniques and Applications to Signal and Image Processing
paper_content:
We present a survey of modern parallel MATLAB techniques. We concentrate on the most promising and well-supported techniques, with an emphasis on SIP applications. Some of these methods require writing explicit code to perform inter-processor communication, while others hide the complexities of communication and computation by using higher-level programming interfaces. We cover each approach with special emphasis given to performance and productivity issues.
---
paper_title: 'pMATLAB Parallel MATLAB Library'
paper_content:
MATLAB® has emerged as one of the languages most commonly used by scientists and engineers for technical computing, with approximately one million users worldwide. The primary benefits of MATLAB are reduced code development time via high levels of abstractions (e.g. first class multi-dimensional arrays and thousands of built in functions), interpretive, interactive programming, and powerful mathematical graphics. The compute intensive nature of technical computing means that many MATLAB users have codes that can significantly benefit from the increased performance offered by parallel computing. pMatlab provides this capability by implementing parallel global array semantics using standard operator overloading techniques. The core data structure in pMatlab is a distributed numerical array whose distribution onto multiple processors is specified with a "map" construct. Communication operations between distributed arrays are abstracted away from the user and pMatlab transparently supports redistribution between any block-cyclic-overlapped distributions up to four dimensions. pMatlab is built on top of the MatlabMPI communication library and runs on any combination of heterogeneous systems that support MATLAB, which includes Windows, Linux, MacOS X, and SunOS. This paper describes the overall design and architecture of the pMatlab implementation. Performance is validated by implementing the HPC Challenge benchmark suite and comparing pMatlab performance with the equivalent C+MPI codes. These results indicate that pMatlab can often achieve comparable performance to C+MPI, usually at one tenth the code size. Finally, we present implementation data collected from a sample of real pMatlab applications drawn from the approximately one hundred users at MIT Lincoln Laboratory. These data indicate that users are typically able to go from a serial code to an efficient pMatlab code in about 3 hours while changing less than 1% of their code.
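The central pMatlab concept is a global array whose block distribution across processes is described by a "map", with each process computing only on its local part. The sketch below imitates that idea in plain NumPy within a single process (the "processes" are simulated), so it only illustrates the bookkeeping of a block map and a final gather; it is not pMatlab code, and the array sizes are arbitrary.

```python
import numpy as np

# Conceptual NumPy analogue of a block-distributed array described by a "map":
# each (simulated) process owns one contiguous block of the global array and operates
# only on its local part; a gather step reassembles the global result.
GLOBAL_N, N_PROCS = 16, 4

def block_map(n, p):
    """Return the (start, stop) index range owned by each of p processes."""
    edges = np.linspace(0, n, p + 1).astype(int)
    return [(edges[i], edges[i + 1]) for i in range(p)]

global_x = np.arange(GLOBAL_N, dtype=float)
dist_map = block_map(GLOBAL_N, N_PROCS)

# each "process" applies a purely local operation to the block it owns
local_results = [np.sqrt(global_x[lo:hi]) for lo, hi in dist_map]

# re-assembly (roughly analogous to aggregating a distributed array onto one process)
gathered = np.concatenate(local_results)
assert np.allclose(gathered, np.sqrt(global_x))
print("owned ranges:", dist_map)
```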
---
paper_title: Survey of Parallel MATLAB Techniques and Applications to Signal and Image Processing
paper_content:
We present a survey of modern parallel MATLAB techniques. We concentrate on the most promising and well-supported techniques, with an emphasis on SIP applications. Some of these methods require writing explicit code to perform inter-processor communication, while others hide the complexities of communication and computation by using higher-level programming interfaces. We cover each approach with special emphasis given to performance and productivity issues.
---
paper_title: MATLAB®: A Language for Parallel Computing
paper_content:
Parallel computing with the MATLAB® language and environment has received interest from various quarters. The Parallel Computing Toolbox™ and MATLAB® Distributed Computing Server™ from The MathWorks are among several available tools that offer this capability. We explore some of the key features of the parallel MATLAB language that these tools offer. We describe the underlying mechanics as well as the salient design decisions and rationale for certain features in the toolset. The paper concludes by identifying some issues that we must address as the language features evolve.
---
|
Title: Survey of Parallel Computing with MATLAB
Section 1: INTRODUCTION
Description 1: This section introduces serial computation, parallel computing fundamentals, and the advantages of parallel computing.
Section 2: Computer Memory Architectures
Description 2: This section discusses the types of memory architectures in parallel computer hardware: shared memory, distributed memory, and distributed shared memory.
Section 3: Matlab (matrix laboratory)
Description 3: This section provides an overview of MATLAB, its capabilities, advantages, and widespread applications in various fields.
Section 4: HISTORY OF PARALLEL COMPUTING WITH MATLAB
Description 4: This section reviews the historical development of parallel computing with MATLAB from 1995 to 2011, exploring various approaches and challenges faced.
Section 5: MatlabMPI
Description 5: This section explains MatlabMPI, its design, advantages, and limitations, including improvements made by Ohio Supercomputer Center.
Section 6: bcMPI
Description 6: This section details bcMPI, an alternative to MatlabMPI developed by Ohio Supercomputer Center, discussing its compatibility and advantages.
Section 7: pMatlab
Description 7: This section introduces pMatlab, its implicit programming approach, and support for global arrays for optimized performance.
Section 8: pMatlab benchmark
Description 8: This section presents benchmark results of pMatlab and comparisons with serial MATLAB and C+MPI.
Section 9: Star-P
Description 9: This section describes Star-P, a set of extensions to MATLAB for simplifying parallel computations.
Section 10: Development of PCT
Description 10: This section traces the development stages of the MATLAB Parallel Computing Toolbox (PCT) from 2006 to mid-2012.
Section 11: CURRENT RESEARCH IN PARALLEL MATLAB
Description 11: This section focuses on new features in MATLAB R2012b, particularly associated with the Parallel Computing Toolbox and GPU support.
Section 12: Graphics Processing Unit (GPU)
Description 12: This section explores the evolution, advantages, and applications of GPUs in parallel computing.
Section 13: Parallel Computing Toolbox in R2012b
Description 13: This section highlights the new features and capabilities of the Parallel Computing Toolbox in MATLAB R2012b and discusses execution of benchmarking problems.
Section 14: Benchmarking A\b on the GPU in R2012b
Description 14: This section demonstrates the performance comparison of matrix left division (\) on CPU and GPU in R2012b.
Section 15: PARALLEL MATLAB FOR NEAR FUTURE
Description 15: This section discusses anticipated future developments in MATLAB parallel computing tools, including an increase in the number of workers and enhanced CUDA compatibility.
Section 16: CONCLUSIONS AND FUTURE WORK
Description 16: This section summarizes the paper, emphasizing the benefits of parallel MATLAB and outlining plans for future work, including the development of a new tool combining previous versions' advantages.
|
How Case Based Reasoning Explained Neural Networks: An XAI Survey of Post-Hoc Explanation-by-Example in ANN-CBR Twins
| 12 |
---
paper_title: European Union regulations on algorithmic decision-making and a"right to explanation"
paper_content:
We summarize the potential impact that the European Union’s new General Data Protection Regulation will have on the routine use of machine learning algorithms. Slated to take effect as law across the EU in 2018, it will restrict automated individual decision-making (that is, algorithms that make decisions based on user-level predictors) which “significantly affect” users. The law will also effectively create a “right to explanation,” whereby a user can ask for an explanation of an algorithmic decision that was made about them. We argue that while this law will pose large challenges for industry, it highlights opportunities for computer scientists to take the lead in designing algorithms and evaluation frameworks which avoid discrimination and enable explanation.
---
paper_title: Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
paper_content:
At the dawn of the fourth industrial revolution, we are witnessing a fast and widespread adoption of artificial intelligence (AI) in our daily life, which contributes to accelerating the shift towards a more algorithmic society. However, even with such unprecedented advancements, a key impediment to the use of AI-based systems is that they often lack transparency. Indeed, the black-box nature of these systems allows powerful predictions, but those predictions cannot be directly explained. This issue has triggered a new debate on explainable AI (XAI), a research field that holds substantial promise for improving the trust and transparency of AI-based systems and that is recognized as the sine qua non for AI to continue making steady progress without disruption. This survey provides an entry point for interested researchers and practitioners to learn key aspects of the young and rapidly growing body of research related to XAI. Through the lens of the literature, we review the existing approaches regarding the topic, discuss trends surrounding its sphere, and present major research trajectories.
---
paper_title: Explanation and understanding.
paper_content:
The study of explanation, while related to intuitive theories, concepts, and mental models, offers important new perspectives on high-level thought. Explanations sort themselves into several distinct types corresponding to patterns of causation, content domains, and explanatory stances, all of which have cognitive consequences. Although explanations are necessarily incomplete—often dramatically so in laypeople—those gaps are difficult to discern. Despite such gaps and the failure to recognize them fully, people do have skeletal explanatory senses, often implicit, of the causal structure of the world. They further leverage those skeletal understandings by knowing how to access additional explanatory knowledge in other minds and by being particularly adept at using situational support to build explanations on the fly in real time. Across development and cultures, there are differences in preferred explanatory schemes, but rarely are any kinds of schemes completely unavailable to a group.
---
paper_title: Explanation in Artificial Intelligence: Insights from the Social Sciences
paper_content:
Abstract There has been a recent resurgence in the area of explainable artificial intelligence as researchers and practitioners seek to provide more transparency to their algorithms. Much of this research is focused on explicitly explaining decisions or actions to a human observer, and it should not be controversial to say that looking at how humans explain to each other can serve as a useful starting point for explanation in artificial intelligence. However, it is fair to say that most work in explainable artificial intelligence uses only the researchers' intuition of what constitutes a ‘good’ explanation. There exist vast and valuable bodies of research in philosophy, psychology, and cognitive science of how people define, generate, select, evaluate, and present explanations, which argues that people employ certain cognitive biases and social expectations to the explanation process. This paper argues that the field of explainable artificial intelligence can build on this existing research, and reviews relevant papers from philosophy, cognitive psychology/science, and social psychology, which study these topics. It draws out some important findings, and discusses ways that these can be infused with work on explainable artificial intelligence.
---
paper_title: A Survey of Explanations in Recommender Systems
paper_content:
This paper provides a comprehensive review of explanations in recommender systems. We highlight seven possible advantages of an explanation facility, and describe how existing measures can be used to evaluate the quality of explanations. Since explanations are not independent of the recommendation process, we consider how the ways recommendations are presented may affect explanations. Next, we look at different ways of interacting with explanations. The paper is illustrated with examples of explanations throughout, where possible from existing applications.
---
paper_title: Towards A Rigorous Science of Interpretable Machine Learning
paper_content:
As machine learning systems become ubiquitous, there has been a surge of interest in interpretable machine learning: systems that provide explanation for their outputs. These explanations are often used to qualitatively assess other criteria such as safety or non-discrimination. However, despite the interest in interpretability, there is very little consensus on what interpretable machine learning is and how it should be measured. In this position paper, we first define interpretability and describe when interpretability is needed (and when it is not). Next, we suggest a taxonomy for rigorous evaluation and expose open questions towards a more rigorous science of interpretable machine learning.
---
paper_title: Explanation in Case-Based Reasoning–Perspectives and Goals
paper_content:
We present an overview of different theories of explanation from the philosophy and cognitive science communities. Based on these theories, as well as models of explanation from the knowledge-based systems area, we present a framework for explanation in case-based reasoning (CBR) based on explanation goals. We propose ways that the goals of the user and system designer should be taken into account when deciding what is a good explanation for a given CBR system. Some general types of goals relevant to many CBR systems are identified, and used to survey existing methods of explanation in CBR. Finally, we identify some future challenges.
---
paper_title: A Survey of Methods for Explaining Black Box Models
paper_content:
In recent years, many accurate decision support systems have been constructed as black boxes, that is as systems that hide their internal logic to the user. This lack of explanation constitutes both a practical and an ethical issue. The literature reports many approaches aimed at overcoming this crucial weakness, sometimes at the cost of sacrificing accuracy for interpretability. The applications in which black box decision systems can be used are various, and each approach is typically developed to provide a solution for a specific problem and, as a consequence, it explicitly or implicitly delineates its own definition of interpretability and explanation. The aim of this article is to provide a classification of the main problems addressed in the literature with respect to the notion of explanation and the type of black box system. Given a problem definition, a black box type, and a desired explanation, this survey should help the researcher to find the proposals more useful for his own work. The proposed classification of approaches to open black box models should also be useful for putting the many research open questions in perspective.
---
paper_title: Trends and Trajectories for Explainable, Accountable and Intelligible Systems: An HCI Research Agenda
paper_content:
Advances in artificial intelligence, sensors and big data management have far-reaching societal impacts. As these systems augment our everyday lives, it becomes increasingly important for people to understand them and remain in control. We investigate how HCI researchers can help to develop accountable systems by performing a literature analysis of 289 core papers on explanations and explainable systems, as well as 12,412 citing papers. Using topic modeling, co-occurrence and network analysis, we mapped the research space from diverse domains, such as algorithmic accountability, interpretable machine learning, context-awareness, cognitive psychology, and software learnability. We reveal fading and burgeoning trends in explainable systems, and identify domains that are closely connected or mostly isolated. The time is ripe for the HCI community to ensure that the powerful new autonomous systems have intelligible interfaces built-in. From our results, we propose several implications and directions for future research towards this goal.
---
paper_title: Case-based explanation of non-case-based learning methods.
paper_content:
We show how to generate case-based explanations for non-case-based learning methods such as artificial neural nets or decision trees. The method uses the trained model (e.g., the neural net or the decision tree) as a distance metric to determine which cases in the training set are most similar to the case that needs to be explained. This approach is well suited to medical domains, where it is important to understand predictions made by complex machine learning models, and where training and clinical practice make users adept at case interpretation.
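A minimal sketch of this idea, assuming the hidden-layer activations of a trained MLP serve as the model-defined representation in which the "most similar" training cases are retrieved (one possible reading of using the trained model as a distance metric; the dataset, layer size and helper names below are illustrative, not the authors' code):

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.neighbors import NearestNeighbors
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X = StandardScaler().fit_transform(X)

# The "non-case-based" model whose prediction needs a case-based explanation.
net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(X, y)

def hidden(model, data):
    # Manual forward pass to the single hidden layer: relu(data @ W0 + b0).
    return np.maximum(0.0, data @ model.coefs_[0] + model.intercepts_[0])

# Neighbours are found in the network's learned space, not in raw feature space.
index = NearestNeighbors(n_neighbors=3).fit(hidden(net, X))

query = X[:1]          # here a training case; in practice a new, unseen case
_, idx = index.kneighbors(hidden(net, query))
print("prediction:", net.predict(query)[0])
print("explanatory training cases:", idx[0], "with labels", y[idx[0]])
```

Because the query is drawn from the training set in this toy run, it comes back as its own nearest case; with a held-out case all retrieved neighbours are genuine precedents.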
---
paper_title: Meaningful Explanations of Black Box AI Decision Systems
paper_content:
Black box AI systems for automated decision making, often based on machine learning over (big) data, map a user’s features into a class or a score without exposing the reasons why. This is problematic not only for lack of transparency, but also for possible biases inherited by the algorithms from human prejudices and collection artifacts hidden in the training data, which may lead to unfair or wrong decisions. We focus on the urgent open challenge of how to construct meaningful explanations of opaque AI/ML systems, introducing the local-to-global framework for black box explanation, articulated along three lines: (i) the language for expressing explanations in terms of logic rules, with statistical and causal interpretation; (ii) the inference of local explanations for revealing the decision rationale for a specific case, by auditing the black box in the vicinity of the target instance; (iii) the bottom-up generalization of many local explanations into simple global ones, with algorithms that optimize for quality and comprehensibility. We argue that the local-first approach opens the door to a wide variety of alternative solutions along different dimensions: a variety of data sources (relational, text, images, etc.), a variety of learning problems (multi-label classification, regression, scoring, ranking), a variety of languages for expressing meaningful explanations, a variety of means to audit a black box.
---
paper_title: The Bayesian Case Model: A Generative Approach for Case-Based Reasoning and Prototype Classification
paper_content:
We present the Bayesian Case Model (BCM), a general framework for Bayesian case-based reasoning (CBR) and prototype classification and clustering. BCM brings the intuitive power of CBR to a Bayesian generative framework. The BCM learns prototypes, the "quintessential" observations that best represent clusters in a dataset, by performing joint inference on cluster labels, prototypes and important features. Simultaneously, BCM pursues sparsity by learning subspaces, the sets of features that play important roles in the characterization of the prototypes. The prototype and subspace representation provides quantitative benefits in interpretability while preserving classification accuracy. Human subject experiments verify statistically significant improvements to participants' understanding when using explanations produced by BCM, compared to those given by prior art.
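The sketch below is not BCM's Bayesian inference; it only illustrates the underlying intuition of prototype-plus-subspace explanation with a much simpler stand-in (k-means prototypes and a crude feature-salience score; the dataset, cluster count and scoring are illustrative choices of ours):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import load_iris

X, _ = load_iris(return_X_y=True)
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

for c, centre in enumerate(km.cluster_centers_):
    members = X[km.labels_ == c]
    # Prototype = a real observation, the one closest to the cluster centre.
    prototype = members[np.argmin(np.linalg.norm(members - centre, axis=1))]
    # Crude "subspace": features on which this cluster deviates most from the data.
    salience = np.abs(centre - X.mean(axis=0)) / (X.std(axis=0) + 1e-9)
    top_features = np.argsort(salience)[::-1][:2]
    print(f"cluster {c}: prototype {prototype}, most characteristic features {top_features}")
```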
---
paper_title: Hybrid Neural Network and Expert Systems
paper_content:
Preface. Part I: Fundamentals of Hybrid Systems. 1. Overview of Neural and Symbolic Systems. 2. Research in Hybrid Neural and Symbolic Systems. 3. Models for Integrating Systems. Part II: Case Studies of Hybrid Neural Network and Expert Systems. 4. LAM Hybrid System for Window Glazing Design. 5. Hybrid Systems Approach to Nuclear Plant Monitoring. 6. Chemical Tank Control System. 7. Image Interpretation via Fusion of Heterogeneous Sources using a Hybrid Expert-Neural Network System. 8. Hybrid Systems for Multiple Target Recognition. Part III: Analysis and Guidelines. 9. Guidelines for Developing Hybrid Systems. 10. Tools and Development Systems. 11. Summary and the Future of Hybrid Neural Network and Expert Systems. References. Index.
---
paper_title: Data Mining: Practical Machine Learning Tools and Techniques
paper_content:
Data Mining: Practical Machine Learning Tools and Techniques offers a thorough grounding in machine learning concepts as well as practical advice on applying machine learning tools and techniques in real-world data mining situations. This highly anticipated third edition of the most acclaimed work on data mining and machine learning will teach you everything you need to know about preparing inputs, interpreting outputs, evaluating results, and the algorithmic methods at the heart of successful data mining. Thorough updates reflect the technical changes and modernizations that have taken place in the field since the last edition, including new material on Data Transformations, Ensemble Learning, Massive Data Sets, Multi-instance Learning, plus a new version of the popular Weka machine learning software developed by the authors. Witten, Frank, and Hall include both tried-and-true techniques of today as well as methods at the leading edge of contemporary research. The book also: provides a thorough grounding in machine learning concepts as well as practical advice on applying the tools and techniques to your data mining projects; offers concrete tips and techniques for performance improvement that work by transforming the input or output in machine learning methods; and includes the downloadable Weka software toolkit, a collection of machine learning algorithms for data mining tasks, in an updated, interactive interface. Algorithms in the toolkit cover data pre-processing, classification, regression, clustering, association rules, and visualization.
---
paper_title: Analogical Asides on Case-Based Reasoning
paper_content:
This paper explores some of the similarities and differences between cognitive models of analogy and case-based reasoning systems. I first point out a paradox in the treatment of adaptation in analogy and in case-based reasoning; a paradox that can only be resolved by expanding the role of adaptation in cognitive models of analogy. Some psychological research on the process of adaptation in human subjects is reported, and the implications of this research are then propagated into analogy and on into CBR. The argument is that some of the existing stages in CBR should be integrated into a more streamlined architecture that would be more efficient than current schemes.
---
paper_title: Deep Residual Learning for Image Recognition
paper_content:
Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers—8× deeper than VGG nets [40] but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions1, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.
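A minimal PyTorch sketch of the residual reformulation y = F(x) + x: a basic two-convolution block with an identity shortcut (same-channel, stride-1 case; sizes are illustrative and this is not the full 152-layer architecture).

```python
import torch
import torch.nn as nn

class BasicBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        identity = x                        # the shortcut carries x unchanged
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))     # F(x): the learned residual
        return self.relu(out + identity)    # y = F(x) + x

x = torch.randn(1, 64, 56, 56)
print(BasicBlock(64)(x).shape)              # torch.Size([1, 64, 56, 56])
```

The block only has to learn the residual F(x); if the identity mapping is already near-optimal, its weights can be driven towards zero, which is what eases the optimization of very deep stacks.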
---
paper_title: Deep learning
paper_content:
Deep learning is a form of machine learning that enables computers to learn from experience and understand the world in terms of a hierarchy of concepts. Because the computer gathers knowledge from experience, there is no need for a human computer operator to formally specify all the knowledge that the computer needs. The hierarchy of concepts allows the computer to learn complicated concepts by building them out of simpler ones; a graph of these hierarchies would be many layers deep. This book introduces a broad range of topics in deep learning. The text offers mathematical and conceptual background, covering relevant concepts in linear algebra, probability theory and information theory, numerical computation, and machine learning. It describes deep learning techniques used by practitioners in industry, including deep feedforward networks, regularization, optimization algorithms, convolutional networks, sequence modeling, and practical methodology; and it surveys such applications as natural language processing, speech recognition, computer vision, online recommendation systems, bioinformatics, and videogames. Finally, the book offers research perspectives, covering such theoretical topics as linear factor models, autoencoders, representation learning, structured probabilistic models, Monte Carlo methods, the partition function, approximate inference, and deep generative models. Deep Learning can be used by undergraduate or graduate students planning careers in either industry or research, and by software engineers who want to begin using deep learning in their products or platforms. A website offers supplementary material for both readers and instructors.
---
paper_title: A systematic review and taxonomy of explanations in decision support and recommender systems
paper_content:
With the recent advances in the field of artificial intelligence, an increasing number of decision-making tasks are delegated to software systems. A key requirement for the success and adoption of such systems is that users must trust system choices or even fully automated decisions. To achieve this, explanation facilities have been widely investigated as a means of establishing trust in these systems since the early years of expert systems. With today’s increasingly sophisticated machine learning algorithms, new challenges in the context of explanations, accountability, and trust towards such systems constantly arise. In this work, we systematically review the literature on explanations in advice-giving systems. This is a family of systems that includes recommender systems, which is one of the most successful classes of advice-giving software in practice. We investigate the purposes of explanations as well as how they are generated, presented to users, and evaluated. As a result, we derive a novel comprehensive taxonomy of aspects to be considered when designing explanation facilities for current and future decision support systems. The taxonomy includes a variety of different facets, such as explanation objective, responsiveness, content and presentation. Moreover, we identified several challenges that remain unaddressed so far, for example related to fine-grained issues associated with the presentation of explanations and how explanation facilities are evaluated.
---
paper_title: Visualizing and Understanding Convolutional Networks
paper_content:
Large Convolutional Network models have recently demonstrated impressive classification performance on the ImageNet benchmark Krizhevsky et al. [18]. However there is no clear understanding of why they perform so well, or how they might be improved. In this paper we explore both issues. We introduce a novel visualization technique that gives insight into the function of intermediate feature layers and the operation of the classifier. Used in a diagnostic role, these visualizations allow us to find model architectures that outperform Krizhevsky et al on the ImageNet classification benchmark. We also perform an ablation study to discover the performance contribution from different model layers. We show our ImageNet model generalizes well to other datasets: when the softmax classifier is retrained, it convincingly beats the current state-of-the-art results on Caltech-101 and Caltech-256 datasets.
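The paper's own technique is a deconvnet; the sketch below is a simpler, commonly used way to inspect intermediate layers (our illustration on a hypothetical toy CNN, not the paper's method): register a forward hook, run an image through the model, and keep the captured feature maps for per-channel visualisation.

```python
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),   # the layer we look inside
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 10),
)

captured = {}
def keep_output(module, inputs, output):
    captured["fmap"] = output.detach()            # shape (N, 32, H, W)

model[3].register_forward_hook(keep_output)       # hook the second conv layer

image = torch.randn(1, 3, 64, 64)                 # stand-in for a real image
_ = model(image)
fmap = captured["fmap"][0]
print(fmap.shape)                                  # torch.Size([32, 32, 32])
# Each fmap[k] can now be plotted (e.g. with plt.imshow) to see what the
# k-th unit of this layer responds to for the given input.
```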
---
paper_title: Intriguing properties of neural networks
paper_content:
Deep neural networks are highly expressive models that have recently achieved state-of-the-art performance on speech and visual recognition tasks. While their expressiveness is the reason they succeed, it also causes them to learn uninterpretable solutions that can have counter-intuitive properties. In this paper we report two such properties. First, we find that there is no distinction between individual high-level units and random linear combinations of high-level units, according to various methods of unit analysis. This suggests that it is the space, rather than the individual units, that contains the semantic information in the high layers of neural networks. Second, we find that deep neural networks learn input-output mappings that are fairly discontinuous to a significant extent. We can cause the network to misclassify an image by applying a certain imperceptible perturbation, which is found by maximizing the network's prediction error. In addition, the specific nature of these perturbations is not a random artifact of learning: the same perturbation can cause a different network, trained on a different subset of the dataset, to misclassify the same input.
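The paper itself finds these perturbations with box-constrained L-BFGS; the sketch below instead uses the simpler one-step gradient-sign idea from later work, purely to show the mechanics of nudging an input in the direction that increases the loss (toy model, toy data and an illustrative epsilon):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 64), nn.ReLU(), nn.Linear(64, 10))
loss_fn = nn.CrossEntropyLoss()

x = torch.rand(1, 1, 28, 28, requires_grad=True)   # stand-in for a real image
y = torch.tensor([3])                               # its (assumed) true label

loss = loss_fn(model(x), y)
loss.backward()                                      # gradient of the loss w.r.t. x

eps = 0.05
x_adv = (x + eps * x.grad.sign()).clamp(0.0, 1.0).detach()  # small, bounded nudge

print("clean prediction:", model(x).argmax(dim=1).item())
print("perturbed prediction:", model(x_adv).argmax(dim=1).item())
```

On a trained network and real images, a step this small is often already enough to flip the prediction while remaining imperceptible to a human.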
---
paper_title: Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
paper_content:
We propose a technique for producing ‘visual explanations’ for decisions from a large class of Convolutional Neural Network (CNN)-based models, making them more transparent and explainable. Our approach—Gradient-weighted Class Activation Mapping (Grad-CAM), uses the gradients of any target concept (say ‘dog’ in a classification network or a sequence of words in captioning network) flowing into the final convolutional layer to produce a coarse localization map highlighting the important regions in the image for predicting the concept. Unlike previous approaches, Grad-CAM is applicable to a wide variety of CNN model-families: (1) CNNs with fully-connected layers (e.g.VGG), (2) CNNs used for structured outputs (e.g.captioning), (3) CNNs used in tasks with multi-modal inputs (e.g.visual question answering) or reinforcement learning, all without architectural changes or re-training. We combine Grad-CAM with existing fine-grained visualizations to create a high-resolution class-discriminative visualization, Guided Grad-CAM, and apply it to image classification, image captioning, and visual question answering (VQA) models, including ResNet-based architectures. In the context of image classification models, our visualizations (a) lend insights into failure modes of these models (showing that seemingly unreasonable predictions have reasonable explanations), (b) outperform previous methods on the ILSVRC-15 weakly-supervised localization task, (c) are robust to adversarial perturbations, (d) are more faithful to the underlying model, and (e) help achieve model generalization by identifying dataset bias. For image captioning and VQA, our visualizations show that even non-attention based models learn to localize discriminative regions of input image. We devise a way to identify important neurons through Grad-CAM and combine it with neuron names (Bau et al. in Computer vision and pattern recognition, 2017) to provide textual explanations for model decisions. Finally, we design and conduct human studies to measure if Grad-CAM explanations help users establish appropriate trust in predictions from deep networks and show that Grad-CAM helps untrained users successfully discern a ‘stronger’ deep network from a ‘weaker’ one even when both make identical predictions. Our code is available at https://github.com/ramprs/grad-cam/, along with a demo on CloudCV (Agrawal et al., in: Mobile cloud visual media computing, pp 265–290. Springer, 2015) (http://gradcam.cloudcv.org) and a video at http://youtu.be/COjUB9Izk6E.
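A hedged PyTorch sketch of the Grad-CAM computation on a small hand-built CNN (not the authors' released code; the toy model, layer choice and input size are assumptions): capture the last convolutional layer's activations and gradients with hooks, weight each feature map by its spatially averaged gradient, and keep only the positive evidence before upsampling to the input resolution.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(),        # "last conv layer"
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 5),
)

acts, grads = {}, {}
target_layer = model[2]
target_layer.register_forward_hook(lambda m, i, o: acts.update(a=o))
target_layer.register_full_backward_hook(lambda m, gi, go: grads.update(g=go[0]))

x = torch.randn(1, 3, 32, 32)                          # stand-in for an image
scores = model(x)
cls = scores.argmax(dim=1).item()
scores[0, cls].backward()                              # gradient of the target class score

alpha = grads["g"].mean(dim=(2, 3), keepdim=True)      # per-map importance weights
cam = F.relu((alpha * acts["a"]).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=x.shape[-2:], mode="bilinear", align_corners=False)
print(cam.shape)                                       # (1, 1, 32, 32) class heatmap
```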
---
paper_title: Deep Learning for Case-based Reasoning through Prototypes: A Neural Network that Explains its Predictions
paper_content:
Deep neural networks are widely used for classification. These deep models often suffer from a lack of interpretability -- they are particularly difficult to understand because of their non-linear nature. As a result, neural networks are often treated as "black box" models, and in the past, have been trained purely to optimize the accuracy of predictions. In this work, we create a novel network architecture for deep learning that naturally explains its own reasoning for each prediction. This architecture contains an autoencoder and a special prototype layer, where each unit of that layer stores a weight vector that resembles an encoded training input. The encoder of the autoencoder allows us to do comparisons within the latent space, while the decoder allows us to visualize the learned prototypes. The training objective has four terms: an accuracy term, a term that encourages every prototype to be similar to at least one encoded input, a term that encourages every encoded input to be close to at least one prototype, and a term that encourages faithful reconstruction by the autoencoder. The distances computed in the prototype layer are used as part of the classification process. Since the prototypes are learned during training, the learned network naturally comes with explanations for each prediction, and the explanations are loyal to what the network actually computes.
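A hedged PyTorch sketch of the architecture and its four-term objective (classification accuracy, faithful reconstruction, and the two prototype/encoding proximity terms); the encoder and decoder sizes, latent dimension and loss weights below are assumptions, not the paper's settings.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ProtoNet(nn.Module):
    def __init__(self, in_dim=784, latent=32, n_protos=10, n_classes=10):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(), nn.Linear(128, latent))
        self.dec = nn.Sequential(nn.Linear(latent, 128), nn.ReLU(), nn.Linear(128, in_dim))
        self.prototypes = nn.Parameter(torch.randn(n_protos, latent))
        self.classifier = nn.Linear(n_protos, n_classes)   # acts on distances

    def forward(self, x):
        z = self.enc(x)
        d = torch.cdist(z, self.prototypes) ** 2            # (batch, n_protos)
        return self.classifier(d), self.dec(z), d

def proto_loss(x, y, logits, x_hat, d, lam=(0.05, 0.05, 0.05)):
    ce = F.cross_entropy(logits, y)                 # accuracy term
    recon = F.mse_loss(x_hat, x)                    # faithful autoencoder
    r1 = d.min(dim=0).values.mean()                 # every prototype near some encoded input
    r2 = d.min(dim=1).values.mean()                 # every encoded input near some prototype
    return ce + lam[0] * recon + lam[1] * r1 + lam[2] * r2

net = ProtoNet()
x, y = torch.rand(16, 784), torch.randint(0, 10, (16,))
logits, x_hat, d = net(x)
print(proto_loss(x, y, logits, x_hat, d).item())
# After training, each prototype vector can be pushed through the decoder to
# visualise the case it stands for, which is what makes the model self-explaining.
```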
---
paper_title: A local weighting method to the integration of neural network and case based reasoning
paper_content:
Our aim is to build an integrated learning framework of neural network and case based reasoning. The main idea is that feature weights for case based reasoning can be evaluated using neural networks. In our previous method, we derived the feature weight set from the trained neural network and the training data so that the feature weight is constant for all queries. In this paper, we propose a local feature weighting method using a neural network. The neural network guides the case based reasoning by providing case-specific weights to the learning process. We developed a learning process to get the local weights using the neural network and showed the performance of our learning system using the sinusoidal dataset.
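A hedged sketch of the general idea, not the authors' exact algorithm: derive query-specific feature weights from the sensitivity of a trained network's output to each input feature at the query, then plug those weights into a weighted nearest-neighbour retrieval over the case base (the data, architecture and training loop are toy stand-ins; feature 0 is constructed to matter most).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
X = torch.rand(200, 5)                                 # case base, 5 features
y = (X[:, 0] + 0.1 * X[:, 4] > 0.6).long()             # feature 0 dominates

net = nn.Sequential(nn.Linear(5, 16), nn.ReLU(), nn.Linear(16, 2))
opt = torch.optim.Adam(net.parameters(), lr=0.05)
for _ in range(300):                                   # quick training loop
    opt.zero_grad()
    F.cross_entropy(net(X), y).backward()
    opt.step()

query = X[:1].clone().requires_grad_(True)             # the problem case
out = net(query)
out[0, out.argmax()].backward()                        # sensitivity at the query
w = query.grad.abs().squeeze()
w = w / w.sum()                                        # query-specific weights

d = (w * (X - query.detach()) ** 2).sum(dim=1)         # weighted distances
print("local feature weights:", [round(v, 2) for v in w.tolist()])
# Case index 0 is the query itself (distance 0); the rest are its nearest cases.
print("retrieved cases:", d.topk(4, largest=False).indices.tolist())
```

A global-weighting variant of the same twin would compute one weight vector for the whole case base (for example by averaging such sensitivities over the training data) instead of recomputing it per query.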
---
paper_title: A connectionist fuzzy case-based reasoning model
paper_content:
This paper presents a new version of an existing hybrid model for the development of knowledge-based systems, where case-based reasoning is used as a problem solver. Numeric predictive attributes are modeled in terms of fuzzy sets to define neurons in an associative Artificial Neural Network (ANN). After the Fuzzy-ANN is trained, its weights and the membership degrees in the training examples are used to automatically generate a local distance function and an attribute weighting scheme. Using this distance function and following the Nearest Neighbor rule, a new hybrid Connectionist Fuzzy Case-Based Reasoning model is defined. Experimental results show that the proposed model allows knowledge-based systems to be developed with higher accuracy than the original model. The model takes advantage of the approaches used, providing a more natural framework for including expert knowledge through linguistic terms.
---
paper_title: A machine learning approach to yield management in semiconductor manufacturing
paper_content:
Yield improvement is one of the most important topics in semiconductor manufacturing. Traditional statistical methods are no longer feasible or efficient, if they are possible at all, in analysing the vast amount...
---
paper_title: Towards integration of memory based learning and neural networks
paper_content:
We propose a hybrid prediction system of neural network (NN) and memory based learning (MBR). NN and MBR are frequently applied to data mining with various objectives. NN and MBR can be directly applied to classification and regression without additional transformation mechanisms. They also have strength in learning the dynamic behavior of the system over a period of time. In our hybrid system of NN and MBR, the feature weight set which is calculated from the trained NN plays the core role in connecting both learning strategies and the explanation on prediction can be given by obtaining and presenting the most similar examples from the case base. Experimental results show that the hybrid system has a high potential in solving data mining problems.
---
paper_title: Memory and neural network based expert system
paper_content:
We suggest a hybrid expert system of memory and neural network based learning. Neural networks (NN) and memory-based reasoning (MBR) have common advantages over other learning strategies. NN and MBR can be directly applied to classification and regression problems without additional transform mechanisms. They also have strength in learning the dynamic behavior of the system over a period of time. Unfortunately, they each have an Achilles heel. The knowledge representation of an NN is unreadable to human beings, and this ‘black box’ property restricts the application of NNs to areas that need proper explanations as well as precise predictions. On the other hand, MBR suffers from the feature-weighting problem. When MBR measures the distance between cases, some features should be treated as more important than others. Although previous researchers have provided several feature-weighting mechanisms to overcome this difficulty, those methods were mainly applicable only to the classification problem. In our hybrid system of NN and MBR, the feature weight set calculated from the trained neural network plays the core role in connecting both learning strategies. Moreover, an explanation of a prediction can be given by presenting the most similar cases from the case base. In this paper, we present the basic idea of the hybrid system. We also present an application example with a wafer yield prediction system for semiconductor manufacturing. Experimental results show that the hybrid system predicts the yield with relatively high accuracy and is capable of learning adaptively to the changing behavior of the manufacturing system.
---
paper_title: Combining a neural network with case-based reasoning in a diagnostic system.
paper_content:
Abstract This paper presents a new approach for integrating case-based reasoning (CBR) with a neural network (NN) in diagnostic systems. When solving a new problem, the neural network is used to make hypotheses and to guide the CBR module in the search for a similar previous case that supports one of the hypotheses. The knowledge acquired by the network is interpreted and mapped into symbolic diagnosis descriptors , which are kept and used by the system to determine whether a final answer is credible, and to build explanations for the reasoning carried out. The NN-CBR model has been used in the development of a system for the diagnosis of congenital heart diseases (CHD). The system has been evaluated using two cardiological databases with a total of 214 CHD cases. Three other well-known databases have been used to evaluate the NN-CBR approach further. The hybrid system manages to solve problems that cannot be solved by the neural network with a good level of accuracy. Additionally, the hybrid system suggests some solutions for common CBR problems, such as indexing and retrieval, as well as for neural network problems, such as the interpretation of the knowledge stored in a neural network and the explanation of reasoning.
---
paper_title: Case-based reasoning and neural network based expert system for personalization
paper_content:
Abstract We suggest a hybrid expert system of case-based reasoning (CBR) and neural network (NN) for symbolic domain. In previous research, we proposed a hybrid system of memory and neural network based learning. In the system, the feature weights are extracted from the trained neural network, and used to improve retrieval accuracy of case-based reasoning. However, this system has worked best in domains in which all features had numeric values. When the feature values are symbolic, nearest neighbor methods typically resort to much simpler metrics, such as counting the features that match. A more sophisticated treatment of the feature space is required in symbolic domains. We propose feature-weighted CBR with neural network, which uses value difference metric (VDM) as distance function for symbolic features. In our system, the feature weight set calculated from the trained neural network plays the core role in connecting both the learning strategies. Moreover, the explanation on prediction can be given by presenting the most similar cases from the case base. To validate our system, illustrative experimental results are presented. We use datasets from the UCI machine learning archive for experiments. Finally, we present an application with a personalized counseling system for cosmetic industry whose questionnaires have symbolic features. Feature-weighted CBR with neural network predicts the five elements, which show customers’ character and physical constitution, with relatively high accuracy and expert system for personalization recommends personalized make-up style, color, life style and products.
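For reference, a small NumPy sketch of the Value Difference Metric for a single symbolic feature (toy data of our own; in the hybrid system such per-feature VDM distances would additionally be scaled by the NN-derived feature weights): two symbolic values are close when they induce similar class distributions.

```python
import numpy as np

feature = np.array(["red", "red", "blue", "blue", "green", "green", "red"])
labels = np.array([1, 1, 0, 1, 0, 0, 1])

def vdm(a, b, feature, labels, q=1):
    # Distance between symbolic values a and b: how differently they
    # distribute over the classes in the case base.
    classes = np.unique(labels)
    pa = np.array([np.mean(labels[feature == a] == c) for c in classes])
    pb = np.array([np.mean(labels[feature == b] == c) for c in classes])
    return np.sum(np.abs(pa - pb) ** q)

print(vdm("red", "blue", feature, labels))    # 1.0: moderately different class mix
print(vdm("red", "green", feature, labels))   # 2.0: completely different class mix
```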
---
paper_title: Answering with Cases: A CBR Approach to Deep Learning
paper_content:
Every year tens of thousands of customer support engineers around the world deal with, and proactively solve, complex help-desk tickets. Daily, almost every customer support expert will turn his or her attention to a prioritization strategy in order to achieve the best possible result. To assist with this, in this paper we describe a novel case-based reasoning application that addresses two goals: high solution accuracy and shorter prediction and resolution time. We describe how appropriate cases can be generated to assist engineers and how our solution can scale over time to produce domain-specific reusable cases for similar problems. Our work is evaluated using data from 5000 cases from the automotive industry.
---
paper_title: A hybrid approach of neural network and memory-based learning to data mining
paper_content:
We propose a hybrid prediction system of neural network and memory-based learning. Neural network (NN) and memory-based reasoning (MBR) are frequently applied to data mining with various objectives. They have common advantages over other learning strategies. NN and MBR can be directly applied to classification and regression without additional transformation mechanisms. They also have strength in learning the dynamic behavior of the system over a period of time. Unfortunately, they have shortcomings when applied to data mining tasks. Though the neural network is considered as one of the most powerful and universal predictors, the knowledge representation of NN is unreadable to humans, and this "black box" property restricts the application of NN to data mining problems, which require proper explanations for the prediction. On the other hand, MBR suffers from the feature-weighting problem. When MBR measures the distance between cases, some input features should be treated as more important than other features. Feature weighting should be executed prior to prediction in order to provide the information on the feature importance. In our hybrid system of NN and MBR, the feature weight set, which is calculated from the trained neural network, plays the core role in connecting both learning strategies, and the explanation for prediction can be given by obtaining and presenting the most similar examples from the case base. Moreover, the proposed system has advantages in the typical data mining problems such as scalability to large datasets, high dimensions, and adaptability to dynamic situations. Experimental results show that the hybrid system has a high potential in solving data mining problems.
---
paper_title: A Hybrid Case-based Model for Forecasting
paper_content:
An investigation is described into the application of artificial intelligence to forecasting in the domain of oceanography. A hybrid approach to forecasting the thermal structure of the water ahead of a moving vessel is presented which combines the ability of a case-based reasoning system for identifying previously encountered similar situations and the generalizing ability of an artificial neural network to guide the adaptation stage of the case-based reasoning mechanism. The system has been successfully tested in real time in the Atlantic Ocean; the results obtained are presented and compared with those derived from other forecasting methods.
---
paper_title: An Automated Hybrid CBR System for Forecasting
paper_content:
A hybrid neuro-symbolic problem solving model is presented in which the aim is to forecast parameters of a complex and dynamic environment in an unsupervised way. In situations in which the rules that determine a system are unknown, the prediction of the parameter values that determine the characteristic behaviour of the system can be a problematic task. The proposed system employs a case-based reasoning model that incorporates a growing cell structures network, a radial basis function network and a set of Sugeno fuzzy models to provide an accurate prediction. Each of these techniques is used in a different stage of the reasoning cycle of the case-based reasoning system to retrieve, to adapt and to review the proposed solution to the problem. This system has been used to predict the red tides that appear in the coastal waters of the north west of the Iberian Peninsula. The results obtained from those experiments are presented.
---
paper_title: Data mining using example-based methods in oceanographic forecast models
paper_content:
This paper presents a hybrid system that has proved capable of predicting the physical interactions occurring in a rapidly changing oceanographic environment. The aim of the system is to identify and forecast the thermal structure of the water ahead of a moving vessel. The work focuses on the development of a system for forecasting the behaviour of complex environments in which the underlying knowledge of the domain is not completely available, the rules governing the system are fuzzy, and the sets of data samples are limited and incomplete. The paper presents a hybrid approach that combines the ability of a case-based reasoning (CBR) system to select previous similar situations with the generalising ability of artificial neural networks (ANNs) to guide the adaptation stage of the case-based reasoning system. The system was successfully tested in the Atlantic Ocean in September 1997.
---
paper_title: Modular Integration of Connectionist and Symbolic Processing in Knowledge-Based Systems
paper_content:
MIX is an ESPRIT project aimed at developing strategies and tools for integrating symbolic and neural methods in hybrid systems. The project arose from the observation that current hybrid systems are generally small-scale experimental systems which couple one symbolic and one connectionist model, often in an ad hoc fashion. Hence the objective is to build a versatile testbed for the design, prototyping and assessment of a variety of hybrid models or architectures, in particular those which combine diverse neural network models with rule/model-based, case-based, and fuzzy reasoning. A multiagent approach has been chosen to facilitate modular implementation of these hybrid models, which will be tested in the context of real-world applications in the steel and automobile industries.
---
paper_title: MBNR: Case-Based Reasoning with Local Feature Weighting by Neural Network
paper_content:
Our aim is to build an integrated learning framework of neural network and case-based reasoning. The main idea is that feature weights for case-based reasoning can be evaluated by neural networks. In this paper, we propose MBNR (Memory-Based Neural Reasoning), case-based reasoning with local feature weighting by neural network. In our method, the neural network guides the case-based reasoning by providing case-specific weights to the learning process. We developed a learning algorithm to train the neural network to learn the case-specific local weighting patterns for case-based reasoning. We showed the performance of our learning system using four datasets.
---
paper_title: A hybrid CBR classification model by integrating ANN into CBR
paper_content:
Case-based reasoning (CBR) is an artificial intelligence approach to problem solving and learning, which understands and extracts knowledge from past cases. However, CBR faces the challenge of assigning weights to the features to measure similarity between cases effectively and correctly. Integrating the inherent learning capability of artificial neural networks (ANNs) to help CBR attribute correct and appropriate weights to the features is likely to improve the performance of the standard CBR approach. This paper integrates a back-propagation neural network (BPNN) into CBR in an innovative way to develop an efficient model for classification tasks. The integration of NN and CBR for classification tasks is implemented by building training and testing datasets and optimising the NN architecture in terms of the number of neurons in the hidden layer. This paper investigates the integration of a multi-layer BP neural network and CBR. The experimental results obtained with the proposed hybrid model are compared with those of standard CBR, CBR with the value difference metric (VDM) and one existing CBR-with-BPNN approach. The superiority of the proposed hybrid CBR model over the others is established. The performance of the proposed model is validated with four datasets.
---
paper_title: A machine learning approach to yield management in semiconductor manufacturing
paper_content:
Yield improvement is one of the most important topics in semiconductor manufacturing. Traditional statistical methods are no longer feasible or efficient, if possible at all, for analysing the vast amount...
---
paper_title: Towards integration of memory based learning and neural networks
paper_content:
We propose a hybrid prediction system of neural network (NN) and memory based learning (MBR). NN and MBR are frequently applied to data mining with various objectives. NN and MBR can be directly applied to classification and regression without additional transformation mechanisms. They also have strength in learning the dynamic behavior of the system over a period of time. In our hybrid system of NN and MBR, the feature weight set which is calculated from the trained NN plays the core role in connecting both learning strategies and the explanation on prediction can be given by obtaining and presenting the most similar examples from the case base. Experimental results show that the hybrid system has a high potential in solving data mining problems.
---
paper_title: Memory and neural network based expert system
paper_content:
Abstract We suggest a hybrid expert system of memory and neural network based learning. Neural network (NN) and memory based reasoning (MBR) have common advantages over other learning strategies. NN and MBR can be directly applied to the classification and regression problem without additional transform mechanisms. They also have strength in learning the dynamic behavior of the system over a period of time. Unfortunately, each has an Achilles' heel. The knowledge representation of NN is unreadable to human beings and this ‘black box’ property restricts the application of NN to areas which need proper explanations as well as precise predictions. On the other hand, MBR suffers from the feature-weighting problem. When MBR measures the distance between cases, some features should be treated as more important than others. Although previous researchers have provided several feature-weighting mechanisms to overcome the difficulty, those methods were mainly applicable only to the classification problem. In our hybrid system of NN and MBR, the feature weight set calculated from the trained neural network plays the core role in connecting both the learning strategies. Moreover, an explanation of a prediction can be given by presenting the most similar cases from the case base. In this paper, we present the basic idea of the hybrid system. We also present an application example with a wafer yield prediction system for semiconductor manufacturing. Experimental results show that the hybrid system predicts the yield with relatively high accuracy and is capable of learning adaptively to changing behavior of the manufacturing system.
---
paper_title: CBR for Modeling Complex Systems
paper_content:
This paper describes how CBR can be used to compare, reuse, and adapt inductive models that represent complex systems. Complex systems are not well understood and therefore require models for their manipulation and understanding. We propose an approach to address the challenges for using CBR in this context, which relate to finding similar inductive models (solutions) to represent similar complex systems (problems). The purpose is to improve the modeling task by considering the quality of different models to represent a system based on the similarity to a system that was successfully modeled. The revised and confirmed suitability of a model can become additional evidence of similarity between two complex systems, resulting in an increased understanding of a domain. This use of CBR supports tasks (e.g., diagnosis, prediction) that inductive or mathematical models alone cannot perform. We validate our approach by modeling software systems, and illustrate its potential significance for biological systems.
---
paper_title: Case-based reasoning and neural network based expert system for personalization
paper_content:
Abstract We suggest a hybrid expert system of case-based reasoning (CBR) and neural network (NN) for symbolic domains. In previous research, we proposed a hybrid system of memory and neural network based learning. In that system, the feature weights are extracted from the trained neural network and used to improve the retrieval accuracy of case-based reasoning. However, this system has worked best in domains in which all features had numeric values. When the feature values are symbolic, nearest neighbor methods typically resort to much simpler metrics, such as counting the features that match. A more sophisticated treatment of the feature space is required in symbolic domains. We propose feature-weighted CBR with a neural network, which uses the value difference metric (VDM) as the distance function for symbolic features. In our system, the feature weight set calculated from the trained neural network plays the core role in connecting both the learning strategies. Moreover, an explanation of a prediction can be given by presenting the most similar cases from the case base. To validate our system, illustrative experimental results are presented. We use datasets from the UCI machine learning archive for experiments. Finally, we present an application with a personalized counseling system for the cosmetics industry whose questionnaires have symbolic features. Feature-weighted CBR with a neural network predicts the five elements, which reflect customers’ character and physical constitution, with relatively high accuracy, and the expert system for personalization recommends personalized make-up style, color, lifestyle and products.
---
paper_title: Towards A Rigorous Science of Interpretable Machine Learning
paper_content:
As machine learning systems become ubiquitous, there has been a surge of interest in interpretable machine learning: systems that provide explanation for their outputs. These explanations are often used to qualitatively assess other criteria such as safety or non-discrimination. However, despite the interest in interpretability, there is very little consensus on what interpretable machine learning is and how it should be measured. In this position paper, we first define interpretability and describe when interpretability is needed (and when it is not). Next, we suggest a taxonomy for rigorous evaluation and expose open questions towards a more rigorous science of interpretable machine learning.
---
paper_title: A hybrid approach of neural network and memory-based learning to data mining
paper_content:
We propose a hybrid prediction system of neural network and memory-based learning. Neural network (NN) and memory-based reasoning (MBR) are frequently applied to data mining with various objectives. They have common advantages over other learning strategies. NN and MBR can be directly applied to classification and regression without additional transformation mechanisms. They also have strength in learning the dynamic behavior of the system over a period of time. Unfortunately, they have shortcomings when applied to data mining tasks. Though the neural network is considered as one of the most powerful and universal predictors, the knowledge representation of NN is unreadable to humans, and this "black box" property restricts the application of NN to data mining problems, which require proper explanations for the prediction. On the other hand, MBR suffers from the feature-weighting problem. When MBR measures the distance between cases, some input features should be treated as more important than other features. Feature weighting should be executed prior to prediction in order to provide the information on the feature importance. In our hybrid system of NN and MBR, the feature weight set, which is calculated from the trained neural network, plays the core role in connecting both learning strategies, and the explanation for prediction can be given by obtaining and presenting the most similar examples from the case base. Moreover, the proposed system has advantages in the typical data mining problems such as scalability to large datasets, high dimensions, and adaptability to dynamic situations. Experimental results show that the hybrid system has a high potential in solving data mining problems.
---
paper_title: A Survey of Methods for Explaining Black Box Models
paper_content:
In recent years, many accurate decision support systems have been constructed as black boxes, that is as systems that hide their internal logic to the user. This lack of explanation constitutes both a practical and an ethical issue. The literature reports many approaches aimed at overcoming this crucial weakness, sometimes at the cost of sacrificing accuracy for interpretability. The applications in which black box decision systems can be used are various, and each approach is typically developed to provide a solution for a specific problem and, as a consequence, it explicitly or implicitly delineates its own definition of interpretability and explanation. The aim of this article is to provide a classification of the main problems addressed in the literature with respect to the notion of explanation and the type of black box system. Given a problem definition, a black box type, and a desired explanation, this survey should help the researcher to find the proposals more useful for his own work. The proposed classification of approaches to open black box models should also be useful for putting the many research open questions in perspective.
---
paper_title: Hybrid case-based reasoning system by cost-sensitive neural network for classification
paper_content:
Case-based reasoning (CBR) is an artificial intelligence approach to learning and problem-solving, which solves a target problem by relating it to past similar solved problems. But it faces the challenge of assigning weights to features to measure similarity between cases. There are many methods to overcome this feature weighting problem of CBR. Neural network pruning is one of the most powerful and useful of these methods: it extracts feature weights from a trained neural network, without losing the generality of the training set, by four popular mechanisms: sensitivity, activity, saliency and relevance. It is habitually assumed that the training sets used for learning are balanced. However, this hypothesis is not always true in real-world applications, and hence the tendency is to yield classification models that are biased toward the overrepresented class. Therefore, a hybrid CBR system is proposed in this paper to overcome this problem; it adopts a cost-sensitive back-propagation neural network (BPNN) in network pruning to find feature weights. These weights are used in CBR. A single cost parameter is used by the cost-sensitive BPNN to distinguish the importance of class errors. A balanced decision boundary is generated by the cost parameter using prior information. Thus, the class imbalance problem of network pruning is overcome to improve the accuracy of the hybrid CBR. From the empirical results, it is observed that the performance of the proposed hybrid CBR system is better than that of the hybrid CBR with a standard neural network. The performance of the proposed hybrid system is validated with seven datasets.
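The paper's cost-sensitive BPNN is not fully specified in the abstract; the sketch below shows one common way a single cost parameter can enter training, as a class-weighted binary cross-entropy in which errors on the under-represented class are penalised more heavily. Treat it as an assumption-laden illustration rather than the authors' exact loss.
```python
import numpy as np

def cost_sensitive_bce(y_true, y_prob, cost_positive):
    """Class-weighted binary cross-entropy: errors on the positive
    (typically under-represented) class are scaled by a single cost
    parameter, nudging the learned decision boundary toward balance."""
    eps = 1e-12
    y_prob = np.clip(y_prob, eps, 1 - eps)
    loss = -(cost_positive * y_true * np.log(y_prob)
             + (1 - y_true) * np.log(1 - y_prob))
    return loss.mean()

if __name__ == "__main__":
    y = np.array([1, 0, 0, 0, 0, 0])          # imbalanced toy labels
    p = np.array([0.3, 0.1, 0.2, 0.1, 0.1, 0.2])
    print(cost_sensitive_bce(y, p, cost_positive=1.0))   # plain BCE
    print(cost_sensitive_bce(y, p, cost_positive=5.0))   # positive errors cost more
```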
---
paper_title: A Case-based Reasoning with Feature Weights Derived by BP Network
paper_content:
Case-based reasoning (CBR) is a methodology for problem solving and decision-making in complex and changing environments. This study investigates the performance of a hybrid case-based reasoning method that integrates a multi-layer BP neural network with case-based reasoning (CBR) algorithms to derive feature weights. This approach is applied to a fault detection and diagnosis (FDD) system that involves the examination of several criteria. The correct identification of the underlying mechanism of a fault is an important step in the entire fault analysis process. The trained BP neural network provides the basis for obtaining attribute weights, whereas CBR serves as a classifier to identify the fault mechanism. Different parameters of the hybrid method were varied to study their effect. The results indicate that better performance could be achieved by the proposed hybrid method than by conventional CBR alone.
---
paper_title: MBNR: Case-Based Reasoning with Local Feature Weighting by Neural Network
paper_content:
Our aim is to build an integrated learning framework of neural network and case-based reasoning. The main idea is that feature weights for case-based reasoning can be evaluated by neural networks. In this paper, we propose MBNR (Memory-Based Neural Reasoning), case-based reasoning with local feature weighting by neural network. In our method, the neural network guides the case-based reasoning by providing case-specific weights to the learning process. We developed a learning algorithm to train the neural network to learn the case-specific local weighting patterns for case-based reasoning. We showed the performance of our learning system using four datasets.
---
paper_title: Hybrid expert system using case based reasoning and neural network for classification
paper_content:
Abstract Case Based Reasoning (CBR) is an analogical reasoning method, which solves problems by relating some previously solved problems to a current unsolved problem to draw analogical inferences for problem solving. But CBR faces the challenge of assigning weights to the features to measure similarity between a current unsolved case and cases stored in the case base effectively and correctly. The concept of neural network pruning has already been used to sort out the feature weighting problem in CBR, but it loses generality and the actual elicited knowledge in the ANN’s links. This work proposes a method to extract symbolic weights from a trained neural network by viewing the whole trained network as an AND/OR graph and then finding a solution for each node, which becomes the weight of the corresponding node. The proposed feature weighting mechanism is used in CBR to build a hybrid expert system for classification tasks, and the performance of the proposed hybrid system is compared with that of other feature weighting mechanisms. The performance is validated on a swine flu dataset and on the ionosphere, sonar and heart datasets collected from the UCI repository. From the empirical results it is observed that in all the experiments the proposed feature weighting mechanism outperforms most of the earlier weighting mechanisms extracted from trained neural networks.
---
paper_title: Case-based explanation of non-case-based learning methods.
paper_content:
Abstract ::: We show how to generate case-based explanations for non-case-based learning methods such as artificial neural nets or decision trees. The method uses the trained model (e.g., the neural net or the decision tree) as a distance metric to determine which cases in the training set are most similar to the case that needs to be explained. This approach is well suited to medical domains, where it is important to understand predictions made by complex machine learning models, and where training and clinical practice makes users adept at case interpretation.
---
paper_title: An evaluation of machine-learning methods for predicting pneumonia mortality
paper_content:
Abstract This paper describes the application of eight statistical and machine-learning methods to derive computer models for predicting mortality of hospital patients with pneumonia from their findings at initial presentation. The eight models were each constructed based on 9847 patient cases and they were each evaluated on 4352 additional cases. The primary evaluation metric was the error in predicted survival as a function of the fraction of patients predicted to survive. This metric is useful in assessing a model's potential to assist a clinician in deciding whether to treat a given patient in the hospital or at home. We examined the error rates of the models when predicting that a given fraction of patients will survive. We examined survival fractions between 0.1 and 0.6. Over this range, each model's predictive error rate was within 1% of the error rate of every other model. When predicting that approximately 30% of the patients will survive, all the models have an error rate of less than 1.5%. The models are distinguished more by the number of variables and parameters that they contain than by their error rates; these differences suggest which models may be the most amenable to future implementation as paper-based guidelines.
---
paper_title: Explaining Explanations in AI
paper_content:
Recent work on interpretability in machine learning and AI has focused on the building of simplified models that approximate the true criteria used to make decisions. These models are a useful pedagogical device for teaching trained professionals how to predict what decisions will be made by the complex system, and most importantly how the system might break. However, when considering any such model it's important to remember Box's maxim that "All models are wrong but some are useful." We focus on the distinction between these models and explanations in philosophy and sociology. These models can be understood as a "do it yourself kit" for explanations, allowing a practitioner to directly answer "what if questions" or generate contrastive explanations without external assistance. Although a valuable ability, giving these models as explanations appears more difficult than necessary, and other forms of explanation may not have the same trade-offs. We contrast the different schools of thought on what makes an explanation, and suggest that machine learning might benefit from viewing the problem more broadly.
---
paper_title: Intelligible Models for HealthCare: Predicting Pneumonia Risk and Hospital 30-day Readmission
paper_content:
In machine learning often a tradeoff must be made between accuracy and intelligibility. More accurate models such as boosted trees, random forests, and neural nets usually are not intelligible, but more intelligible models such as logistic regression, naive-Bayes, and single decision trees often have significantly worse accuracy. This tradeoff sometimes limits the accuracy of models that can be applied in mission-critical applications such as healthcare where being able to understand, validate, edit, and trust a learned model is important. We present two case studies where high-performance generalized additive models with pairwise interactions (GA2Ms) are applied to real healthcare problems yielding intelligible models with state-of-the-art accuracy. In the pneumonia risk prediction case study, the intelligible model uncovers surprising patterns in the data that previously had prevented complex learned models from being fielded in this domain, but because it is intelligible and modular allows these patterns to be recognized and removed. In the 30-day hospital readmission case study, we show that the same methods scale to large datasets containing hundreds of thousands of patients and thousands of attributes while remaining intelligible and providing accuracy comparable to the best (unintelligible) machine learning methods.
---
paper_title: Deep k-Nearest Neighbors: Towards Confident, Interpretable and Robust Deep Learning
paper_content:
Deep neural networks (DNNs) enable innovative applications of machine learning like image recognition, machine translation, or malware detection. However, deep learning is often criticized for its lack of robustness in adversarial settings (e.g., vulnerability to adversarial inputs) and general inability to rationalize its predictions. In this work, we exploit the structure of deep learning to enable new learning-based inference and decision strategies that achieve desirable properties such as robustness and interpretability. We take a first step in this direction and introduce the Deep k-Nearest Neighbors (DkNN). This hybrid classifier combines the k-nearest neighbors algorithm with representations of the data learned by each layer of the DNN: a test input is compared to its neighboring training points according to the distance that separates them in the representations. We show that the labels of these neighboring points afford confidence estimates for inputs outside the model's training manifold, including on malicious inputs like adversarial examples, and thereby provide protection against inputs that are outside the model's understanding. This is because the nearest neighbors can be used to estimate the nonconformity of, i.e., the lack of support for, a prediction in the training data. The neighbors also constitute human-interpretable explanations of predictions. We evaluate the DkNN algorithm on several datasets, and show that the confidence estimates accurately identify inputs outside the model, and that the explanations provided by nearest neighbors are intuitive and useful in understanding model failures.
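A heavily simplified sketch of the DkNN nonconformity idea follows: for a candidate label, count how many of the nearest training points in each layer's representation disagree with it; the neighbours double as example-based explanations. The conformal calibration step and credibility machinery of the full algorithm are omitted, and the layer embeddings here are random stand-ins for a trained network.
```python
import numpy as np

def dknn_nonconformity(x, layer_embeds, train_X, train_y, candidate, k=5):
    """Count, across all layer representations, how many of the k nearest
    training points carry a label different from the candidate label.
    Lower counts mean the candidate label is better supported by the
    training data."""
    score = 0
    for embed in layer_embeds:                    # one embedding per DNN layer
        Z = embed(train_X)
        z = embed(x[None, :])
        idx = np.argsort(np.linalg.norm(Z - z, axis=1))[:k]
        score += np.sum(train_y[idx] != candidate)
    return score

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    train_X = rng.normal(size=(60, 4))
    train_y = (train_X[:, 0] > 0).astype(int)
    # Stand-ins for two hidden layers of a trained network.
    W1, W2 = rng.normal(size=(4, 6)), rng.normal(size=(6, 3))
    layers = [lambda X: np.tanh(X @ W1),
              lambda X: np.tanh(np.tanh(X @ W1) @ W2)]
    x = rng.normal(size=4)
    print([dknn_nonconformity(x, layers, train_X, train_y, c) for c in (0, 1)])
```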
---
paper_title: Case-based explanation of non-case-based learning methods.
paper_content:
Abstract ::: We show how to generate case-based explanations for non-case-based learning methods such as artificial neural nets or decision trees. The method uses the trained model (e.g., the neural net or the decision tree) as a distance metric to determine which cases in the training set are most similar to the case that needs to be explained. This approach is well suited to medical domains, where it is important to understand predictions made by complex machine learning models, and where training and clinical practice makes users adept at case interpretation.
---
paper_title: Explanation oriented retrieval
paper_content:
This paper is based on the observation that the nearest neighbour in a case-based prediction system may not be the best case to explain a prediction. This observation is based on the notion of a decision surface (i.e. class boundary) and the idea that cases located between the target case and the decision surface are more convincing as support for explanation. This motivates the idea of explanation utility, a metric that may be different from the similarity metric used for nearest neighbour retrieval. In this paper we present an explanation utility framework and present detailed examples of how it is used in two medical decision-support tasks. These examples show how this notion of explanation utility sometimes selects cases other than the nearest neighbour for use in explanation and how these cases are more convincing as explanations.
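The abstract does not state the exact utility metric, so the sketch below is only one plausible formalisation of "cases between the target case and the decision surface": candidate explanation cases must share the predicted class, be close to the query, and have lower classifier confidence than the query (i.e. lie nearer the boundary). The scoring rule and the `predict_proba` stand-in are assumptions of this illustration.
```python
import numpy as np

def explanation_utility_ranking(query, cases_X, cases_y, predict_proba, pred_class):
    """Rank cases for explanation: a good case (i) shares the predicted
    class, (ii) is close to the query, and (iii) sits nearer the decision
    surface than the query, so it 'brackets' the query against the class
    boundary."""
    sim = -np.linalg.norm(cases_X - query, axis=1)          # closeness term
    conf_q = predict_proba(query[None, :])[0, pred_class]
    conf_c = predict_proba(cases_X)[:, pred_class]
    toward_boundary = (conf_c <= conf_q) & (conf_c >= 0.5)  # between query and boundary
    eligible = (cases_y == pred_class) & toward_boundary
    utility = np.where(eligible, sim, -np.inf)
    return np.argsort(-utility)                              # best explanation cases first

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    X = rng.normal(size=(40, 2))
    y = (X[:, 0] > 0).astype(int)
    # Stand-in classifier: logistic function of the first feature.
    proba1 = lambda Z: 1.0 / (1.0 + np.exp(-3.0 * Z[:, 0]))
    predict_proba = lambda Z: np.column_stack([1 - proba1(Z), proba1(Z)])
    q = np.array([1.5, 0.0])
    print(explanation_utility_ranking(q, X, y, predict_proba, pred_class=1)[:3])
```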
---
paper_title: Explaining Explanations in AI
paper_content:
Recent work on interpretability in machine learning and AI has focused on the building of simplified models that approximate the true criteria used to make decisions. These models are a useful pedagogical device for teaching trained professionals how to predict what decisions will be made by the complex system, and most importantly how the system might break. However, when considering any such model it's important to remember Box's maxim that "All models are wrong but some are useful." We focus on the distinction between these models and explanations in philosophy and sociology. These models can be understood as a "do it yourself kit" for explanations, allowing a practitioner to directly answer "what if questions" or generate contrastive explanations without external assistance. Although a valuable ability, giving these models as explanations appears more difficult than necessary, and other forms of explanation may not have the same trade-offs. We contrast the different schools of thought on what makes an explanation, and suggest that machine learning might benefit from viewing the problem more broadly.
---
paper_title: YASENN: Explaining Neural Networks via Partitioning Activation Sequences
paper_content:
We introduce a novel approach to feed-forward neural network interpretation based on partitioning the space of sequences of neuron activations. In line with this approach, we propose a model-specific interpretation method, called YASENN. Our method inherits many advantages of model-agnostic distillation, such as an ability to focus on the particular input region and to express an explanation in terms of features different from those observed by a neural network. Moreover, examination of distillation error makes the method applicable to the problems with low tolerance to interpretation mistakes. Technically, YASENN distills the network with an ensemble of layer-wise gradient boosting decision trees and encodes the sequences of neuron activations with leaf indices. The finite number of unique codes induces a partitioning of the input space. Each partition may be described in a variety of ways, including examination of an interpretable model (e.g. a logistic regression or a decision tree) trained to discriminate between objects of those partitions. Our experiments provide an intuition behind the method and demonstrate revealed artifacts in neural network decision making.
---
paper_title: Towards A Rigorous Science of Interpretable Machine Learning
paper_content:
As machine learning systems become ubiquitous, there has been a surge of interest in interpretable machine learning: systems that provide explanation for their outputs. These explanations are often used to qualitatively assess other criteria such as safety or non-discrimination. However, despite the interest in interpretability, there is very little consensus on what interpretable machine learning is and how it should be measured. In this position paper, we first define interpretability and describe when interpretability is needed (and when it is not). Next, we suggest a taxonomy for rigorous evaluation and expose open questions towards a more rigorous science of interpretable machine learning.
---
paper_title: A Case-Based Explanation System for Black-Box Systems
paper_content:
Most users of machine-learning products are reluctant to use them without any sense of the underlying logic that has led to the system's predictions. Unfortunately many of these systems lack any transparency in the way they operate and are deemed to be black boxes. In this paper we present a Case-Based Reasoning (CBR) solution to providing supporting explanations of black-box systems. This CBR solution has two key facets; it uses local information to assess the importance of each feature and using this, it selects the cases from the data used to build the black-box system for use in explanation. The retrieval mechanism takes advantage of the derived feature importance information to help select cases that are a better reflection of the black-box solution and thus more convincing explanations.
---
paper_title: Gaining insight through case-based explanation
paper_content:
Traditional explanation strategies in machine learning have been dominated by rule and decision tree based approaches. Case-based explanations represent an alternative approach which has inherent advantages in terms of transparency and user acceptability. Case-based explanations are based on a strategy of presenting similar past examples in support of and as justification for recommendations made. The traditional approach to such explanations, of simply supplying the nearest neighbour as an explanation, has been found to have shortcomings. Cases should be selected based on their utility in forming useful explanations. However, the relevance of the explanation case may not be clear to the end user as it is retrieved using domain knowledge which they themselves may not have. In this paper the focus is on a knowledge-light approach to case-based explanations that works by selecting cases based on explanation utility and offering insights into the effects of feature-value differences. We examine two such knowledge-light frameworks for case-based explanation: explanation oriented retrieval (EOR), a strategy which explicitly models explanation utility, and the knowledge-light explanation framework (KLEF), which uses local logistic regression to support case-based explanation.
---
paper_title: Case-Based Reasoning for Explaining Probabilistic Machine Learning
paper_content:
This paper describes a generic framework for explaining the prediction of probabilistic machine learning algorithms using cases. The framework consists of two components: a similarity metric between cases that is defined relative to a probability model, and a novel case-based approach to justifying the probabilistic prediction by estimating the prediction error using case-based reasoning. As a basis for deriving similarity metrics, we define similarity in terms of the principle of interchangeability: two cases are considered similar or identical if the two probability distributions, derived by excluding either one case or the other from the case base, are identical. Lastly, we show the applicability of the proposed approach by deriving a metric for linear regression, and apply the proposed approach to explaining predictions of the energy performance of households.
---
paper_title: The Bayesian Case Model: A Generative Approach for Case-Based Reasoning and Prototype Classification
paper_content:
We present the Bayesian Case Model (BCM), a general framework for Bayesian case-based reasoning (CBR) and prototype classification and clustering. BCM brings the intuitive power of CBR to a Bayesian generative framework. The BCM learns prototypes, the "quintessential" observations that best represent clusters in a dataset, by performing joint inference on cluster labels, prototypes and important features. Simultaneously, BCM pursues sparsity by learning subspaces, the sets of features that play important roles in the characterization of the prototypes. The prototype and subspace representation provides quantitative benefits in interpretability while preserving classification accuracy. Human subject experiments verify statistically significant improvements to participants' understanding when using explanations produced by BCM, compared to those given by prior art.
---
paper_title: An Evaluation of the Usefulness of Case-Based Explanation
paper_content:
One of the perceived benefits of Case-Based Reasoning (CBR) is the potential to use retrieved cases to explain predictions. Surprisingly, this aspect of CBR has not been much researched. There has been some early work on knowledge-intensive approaches to CBR where the cases contain explanation patterns (e.g. SWALE). However, a more knowledge-light approach where the case similarity is the basis for explanation has received little attention. To explore this, we have developed a CBR system for predicting blood-alcohol level. We compare explanations of predictions produced by this system with alternative rule-based explanations. The case-based explanations fare very well in this evaluation and score significantly better than the rule-based alternative.
---
paper_title: DeepRED – Rule Extraction from Deep Neural Networks
paper_content:
Neural network classifiers are known to be able to learn very accurate models. In the recent past, researchers have even been able to train neural networks with multiple hidden layers (deep neural networks) more effectively and efficiently. However, the major downside of neural networks is that it is not trivial to understand how they derive their classification decisions. To solve this problem, there has been research on extracting more understandable rules from neural networks. However, most authors focus on nets with only a single hidden layer. The present paper introduces a new decompositional algorithm, DeepRED, that is able to extract rules from deep neural networks.
---
paper_title: Massively Parallel Case-Based Reasoning with Probabilistic Similarity Metrics
paper_content:
We propose a probabilistic case-space metric for the case matching and case adaptation tasks. Central to our approach is a probability propagation algorithm adopted from Bayesian reasoning systems, which allows our case-based reasoning system to perform theoretically sound probabilistic reasoning. The same probability propagation mechanism actually offers a uniform solution to both the case matching and case adaptation problems. We also show how the algorithm can be implemented as a connectionist network, where efficient massively parallel case retrieval is an inherent property of the system. We argue that using this kind of an approach, the difficult problem of case indexing can be completely avoided.
---
paper_title: On Pixel-Wise Explanations for Non-Linear Classifier Decisions by Layer-Wise Relevance Propagation
paper_content:
Understanding and interpreting the classification decisions of automated image classification systems is of high value in many applications, as it allows one to verify the reasoning of the system and provides additional information to the human expert. Although machine learning methods solve a plethora of tasks very successfully, they in most cases have the disadvantage of acting as a black box, not providing any information about what made them arrive at a particular decision. This work proposes a general solution to the problem of understanding classification decisions by pixel-wise decomposition of nonlinear classifiers. We introduce a methodology that allows one to visualize the contributions of single pixels to predictions for kernel-based classifiers over Bag of Words features and for multilayered neural networks. These pixel contributions can be visualized as heatmaps and are provided to a human expert who can intuitively not only verify the validity of the classification decision, but also focus further analysis on regions of potential interest. We evaluate our method for classifiers trained on PASCAL VOC 2009 images, synthetic image data containing geometric shapes, the MNIST handwritten digits data set and for the pre-trained ImageNet model available as part of the Caffe open source package.
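As a minimal illustration of how relevance is redistributed backwards in this family of methods, the snippet below implements the commonly used epsilon rule for a single fully connected layer; the paper's full pixel-wise decomposition covers further layer types and rules, so this is only a sketch, not the complete procedure.
```python
import numpy as np

def lrp_epsilon_dense(a, W, b, R_out, eps=1e-6):
    """Epsilon redistribution rule for one fully connected layer.

    a     : activations entering the layer, shape (n_in,)
    W, b  : layer weights (n_in, n_out) and biases (n_out,)
    R_out : relevance of the layer's outputs, shape (n_out,)
    Returns the relevance redistributed onto the layer's inputs."""
    z = a @ W + b                                   # forward pre-activations
    s = R_out / (z + eps * np.sign(z))              # stabilised relevance share per output
    return a * (W @ s)                              # relevance assigned to each input

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    a = rng.random(4)
    W = rng.normal(size=(4, 3))
    b = np.zeros(3)
    R_out = np.array([0.2, 0.5, 0.3])
    R_in = lrp_epsilon_dense(a, W, b, R_out)
    print(R_in, R_in.sum())                          # total relevance is roughly conserved
```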
---
paper_title: In Defense of Fully Connected Layers in Visual Representation Transfer
paper_content:
Pre-trained convolutional neural network (CNN) models have been widely applied in many computer vision tasks, especially in transfer learning tasks. In transfer learning, the target domain may be in a different feature space or follow a different data distribution, compared to the source domain. In CNN transfer tasks, we often transfer visual representations from a source domain (e.g., ImageNet) to target domains with fewer training images or have different image properties. It is natural to explore which CNN model performs better in visual representation transfer. Through visualization analyses and extensive experiments, we show that when either image properties or task objective in the target domain is far away from those in the source domain, having the fully connected layers in the source domain pre-trained model is essential in achieving high accuracy after transferring to the target domain.
---
paper_title: Deep Weighted Averaging Classifiers
paper_content:
Recent advances in deep learning have achieved impressive gains in classification accuracy on a variety of types of data, including images and text. Despite these gains, however, concerns have been raised about the calibration, robustness, and interpretability of these models. In this paper we propose a simple way to modify any conventional deep architecture to automatically provide more transparent explanations for classification decisions, as well as an intuitive notion of the credibility of each prediction. Specifically, we draw on ideas from nonparametric kernel regression, and propose to predict labels based on a weighted sum of training instances, where the weights are determined by distance in a learned instance-embedding space. Working within the framework of conformal methods, we propose a new measure of nonconformity suggested by our model, and experimentally validate the accompanying theoretical expectations, demonstrating improved transparency, controlled error rates, and robustness to out-of-domain data, without compromising on accuracy or calibration.
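A rough sketch of the weighted-averaging prediction the abstract describes follows: class probabilities are a kernel-weighted average of training labels, with weights computed from distances in a learned embedding, and the weights themselves indicate which training instances the prediction leans on. The Gaussian kernel, the temperature parameter and the identity `embed` stand-in are assumptions of this sketch; the paper's conformal machinery is omitted.
```python
import numpy as np

def dwac_predict(x, train_X, train_y_onehot, embed, temperature=1.0):
    """Predict class probabilities as a weighted average of training labels,
    with weights from a Gaussian-style kernel over distances in an
    embedding space. The weights double as an instance-based explanation."""
    z = embed(x[None, :])
    Z = embed(train_X)
    d2 = np.sum((Z - z) ** 2, axis=1)               # squared embedding distances
    w = np.exp(-d2 / temperature)
    w = w / w.sum()                                 # normalised instance weights
    return w @ train_y_onehot, w                    # (class probabilities, weights)

if __name__ == "__main__":
    rng = np.random.default_rng(6)
    X = rng.normal(size=(30, 3))
    y = (X[:, 0] > 0).astype(int)
    Y = np.eye(2)[y]                                # one-hot labels
    embed = lambda Z: Z                             # stand-in for the learned embedding
    probs, w = dwac_predict(np.array([1.0, 0.0, 0.0]), X, Y, embed)
    print(probs, np.argsort(-w)[:3])                # prediction + most influential cases
```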
---
paper_title: The Bayesian Case Model: A Generative Approach for Case-Based Reasoning and Prototype Classification
paper_content:
We present the Bayesian Case Model (BCM), a general framework for Bayesian case-based reasoning (CBR) and prototype classification and clustering. BCM brings the intuitive power of CBR to a Bayesian generative framework. The BCM learns prototypes, the "quintessential" observations that best represent clusters in a dataset, by performing joint inference on cluster labels, prototypes and important features. Simultaneously, BCM pursues sparsity by learning subspaces, the sets of features that play important roles in the characterization of the prototypes. The prototype and subspace representation provides quantitative benefits in interpretability while preserving classification accuracy. Human subject experiments verify statistically significant improvements to participants' understanding when using explanations produced by BCM, compared to those given by prior art.
---
paper_title: Axiomatic Attribution for Deep Networks
paper_content:
We study the problem of attributing the prediction of a deep network to its input features, a problem previously studied by several other works. We identify two fundamental axioms, Sensitivity and Implementation Invariance, that attribution methods ought to satisfy. We show that they are not satisfied by most known attribution methods, which we consider to be a fundamental weakness of those methods. We use the axioms to guide the design of a new attribution method called Integrated Gradients. Our method requires no modification to the original network and is extremely simple to implement; it just needs a few calls to the standard gradient operator. We apply this method to a couple of image models, a couple of text models and a chemistry model, demonstrating its ability to debug networks, to extract rules from a network, and to enable users to engage with models better.
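Since the abstract notes that the method only needs a few calls to the standard gradient operator, a minimal Riemann-sum sketch of the published attribution formula is given below; the toy model and the `grad_fn` callable are assumptions of this illustration, standing in for a real network and its autodiff gradient.
```python
import numpy as np

def integrated_gradients(x, baseline, grad_fn, steps=50):
    """Riemann-sum approximation of Integrated Gradients:
    IG_i = (x_i - x'_i) * (1/m) * sum_k dF/dx_i evaluated at
    x' + (k/m)(x - x'), for k = 1..m, where x' is the baseline."""
    total = np.zeros_like(x, dtype=float)
    for k in range(1, steps + 1):
        total += grad_fn(baseline + (k / steps) * (x - baseline))
    return (x - baseline) * total / steps

if __name__ == "__main__":
    # Toy model F(x) = tanh(w . x); its input gradient is (1 - tanh(w.x)^2) * w.
    w = np.array([2.0, -1.0, 0.0])
    grad_fn = lambda z: (1 - np.tanh(w @ z) ** 2) * w
    x, baseline = np.array([1.0, 1.0, 1.0]), np.zeros(3)
    ig = integrated_gradients(x, baseline, grad_fn)
    # Completeness check: attributions sum approximately to F(x) - F(baseline).
    print(ig, ig.sum(), np.tanh(w @ x) - np.tanh(w @ baseline))
```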
---
paper_title: Learning Important Features Through Propagating Activation Differences
paper_content:
The purported "black box" nature of neural networks is a barrier to adoption in applications where interpretability is essential. Here we present DeepLIFT (Deep Learning Important FeaTures), a method for decomposing the output prediction of a neural network on a specific input by backpropagating the contributions of all neurons in the network to every feature of the input. DeepLIFT compares the activation of each neuron to its 'reference activation' and assigns contribution scores according to the difference. By optionally giving separate consideration to positive and negative contributions, DeepLIFT can also reveal dependencies which are missed by other approaches. Scores can be computed efficiently in a single backward pass. We apply DeepLIFT to models trained on MNIST and simulated genomic data, and show significant advantages over gradient-based methods. Video tutorial: http://goo.gl/qKb7pL, code: http://goo.gl/RM8jvH.
---
paper_title: Deep k-Nearest Neighbors: Towards Confident, Interpretable and Robust Deep Learning
paper_content:
Deep neural networks (DNNs) enable innovative applications of machine learning like image recognition, machine translation, or malware detection. However, deep learning is often criticized for its lack of robustness in adversarial settings (e.g., vulnerability to adversarial inputs) and general inability to rationalize its predictions. In this work, we exploit the structure of deep learning to enable new learning-based inference and decision strategies that achieve desirable properties such as robustness and interpretability. We take a first step in this direction and introduce the Deep k-Nearest Neighbors (DkNN). This hybrid classifier combines the k-nearest neighbors algorithm with representations of the data learned by each layer of the DNN: a test input is compared to its neighboring training points according to the distance that separates them in the representations. We show that the labels of these neighboring points afford confidence estimates for inputs outside the model's training manifold, including on malicious inputs like adversarial examples, and thereby provide protection against inputs that are outside the model's understanding. This is because the nearest neighbors can be used to estimate the nonconformity of, i.e., the lack of support for, a prediction in the training data. The neighbors also constitute human-interpretable explanations of predictions. We evaluate the DkNN algorithm on several datasets, and show that the confidence estimates accurately identify inputs outside the model, and that the explanations provided by nearest neighbors are intuitive and useful in understanding model failures.
---
paper_title: This Looks Like That: Deep Learning for Interpretable Image Recognition
paper_content:
When we are faced with challenging image classification tasks, we often explain our reasoning by dissecting the image, and pointing out prototypical aspects of one class or another. The mounting evidence for each of the classes helps us make our final decision. In this work, we introduce a deep network architecture -- prototypical part network (ProtoPNet), that reasons in a similar way: the network dissects the image by finding prototypical parts, and combines evidence from the prototypes to make a final classification. The model thus reasons in a way that is qualitatively similar to the way ornithologists, physicians, and others would explain to people on how to solve challenging image classification tasks. The network uses only image-level labels for training without any annotations for parts of images. We demonstrate our method on the CUB-200-2011 dataset and the Stanford Cars dataset. Our experiments show that ProtoPNet can achieve comparable accuracy with its analogous non-interpretable counterpart, and when several ProtoPNets are combined into a larger network, it can achieve an accuracy that is on par with some of the best-performing deep models. Moreover, ProtoPNet provides a level of interpretability that is absent in other interpretable deep models.
---
paper_title: Deep Learning for Case-based Reasoning through Prototypes: A Neural Network that Explains its Predictions
paper_content:
Deep neural networks are widely used for classification. These deep models often suffer from a lack of interpretability -- they are particularly difficult to understand because of their non-linear nature. As a result, neural networks are often treated as "black box" models, and in the past, have been trained purely to optimize the accuracy of predictions. In this work, we create a novel network architecture for deep learning that naturally explains its own reasoning for each prediction. This architecture contains an autoencoder and a special prototype layer, where each unit of that layer stores a weight vector that resembles an encoded training input. The encoder of the autoencoder allows us to do comparisons within the latent space, while the decoder allows us to visualize the learned prototypes. The training objective has four terms: an accuracy term, a term that encourages every prototype to be similar to at least one encoded input, a term that encourages every encoded input to be close to at least one prototype, and a term that encourages faithful reconstruction by the autoencoder. The distances computed in the prototype layer are used as part of the classification process. Since the prototypes are learned during training, the learned network naturally comes with explanations for each prediction, and the explanations are loyal to what the network actually computes.
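Because the abstract spells out the four training terms, the NumPy sketch below shows one plausible way to compute them on a mini-batch: a cross-entropy accuracy term, the two prototype/input proximity terms, and the autoencoder reconstruction term. The array shapes, the softmax form and the helper name are assumptions of this illustration, not the authors' implementation.
```python
import numpy as np

def prototype_loss_terms(Z, X, X_recon, P, logits, y_onehot):
    """Four loss terms of a prototype network, computed on a mini-batch.

    Z         : encoded inputs, shape (n, d)
    X, X_recon: raw inputs and autoencoder reconstructions, shape (n, D)
    P         : prototype vectors in the latent space, shape (m, d)
    logits    : classifier outputs based on distances to prototypes, (n, c)
    y_onehot  : true labels, (n, c)"""
    d2 = ((Z[:, None, :] - P[None, :, :]) ** 2).sum(-1)   # (n, m) squared distances
    # 1) accuracy term: softmax cross-entropy of the classifier.
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p = p / p.sum(axis=1, keepdims=True)
    ce = -np.mean(np.sum(y_onehot * np.log(p + 1e-12), axis=1))
    # 2) every prototype close to at least one encoded input.
    proto_to_data = np.mean(d2.min(axis=0))
    # 3) every encoded input close to at least one prototype.
    data_to_proto = np.mean(d2.min(axis=1))
    # 4) faithful reconstruction by the autoencoder.
    recon = np.mean((X - X_recon) ** 2)
    return ce, proto_to_data, data_to_proto, recon

if __name__ == "__main__":
    rng = np.random.default_rng(7)
    n, D, d, m, c = 8, 10, 4, 3, 2
    X, Z = rng.normal(size=(n, D)), rng.normal(size=(n, d))
    X_recon = X + 0.1 * rng.normal(size=(n, D))
    P, logits = rng.normal(size=(m, d)), rng.normal(size=(n, c))
    y_onehot = np.eye(c)[rng.integers(0, c, size=n)]
    print(prototype_loss_terms(Z, X, X_recon, P, logits, y_onehot))
```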
---
paper_title: A Survey of Methods for Explaining Black Box Models
paper_content:
In recent years, many accurate decision support systems have been constructed as black boxes, that is as systems that hide their internal logic to the user. This lack of explanation constitutes both a practical and an ethical issue. The literature reports many approaches aimed at overcoming this crucial weakness, sometimes at the cost of sacrificing accuracy for interpretability. The applications in which black box decision systems can be used are various, and each approach is typically developed to provide a solution for a specific problem and, as a consequence, it explicitly or implicitly delineates its own definition of interpretability and explanation. The aim of this article is to provide a classification of the main problems addressed in the literature with respect to the notion of explanation and the type of black box system. Given a problem definition, a black box type, and a desired explanation, this survey should help the researcher to find the proposals more useful for his own work. The proposed classification of approaches to open black box models should also be useful for putting the many research open questions in perspective.
---
|
```
Title: How Case Based Reasoning Explained Neural Networks: An XAI Survey of Post-Hoc Explanation-by-Example in ANN-CBR Twins
Section 1: Introduction
Description 1: Provide an overview of the importance of explainable AI (XAI), explain the twin-systems approach, and introduce the concept of ANN-CBR twins.
Section 2: "Explanation" Needs Explanation
Description 2: Discuss the various interpretations and definitions of "explanation" within the context of XAI and how CBR systems can provide post-hoc explanation-by-example.
Section 3: Motivation for a Systematic Review
Description 3: Explain the reasons behind conducting a systematic review of ANN-CBR twins, highlighting gaps and fragmentation in the current literature.
Section 4: Defining ANN-CBR Twins
Description 4: Define ANN-CBR twin-systems, detailing the specific criteria and components that distinguish them from other hybrid systems.
Section 5: A Systematic Review: Methodology
Description 5: Describe the search procedures, including the systematic top-down searches and bottom-up citation-based searches, used to gather relevant literature on ANN-CBR twins.
Section 6: Results Summary
Description 6: Present an overview of the findings from the systematic search, including the number of papers reviewed and the final selection of relevant studies.
Section 7: A History of ANN-CBR Twins
Description 7: Outline the historical development of ANN-CBR twins, identifying key periods and contributions from different research groups.
Section 8: Korean Developments (1999-2007): Feature-Weighting Tests of Twins
Description 8: Discuss the significant contributions and findings from the South Korean group at KAIST, focusing on their comparative tests of feature-weighting techniques.
Section 9: A Parallel Discovery in L.A.: Caruana et al. (1999)
Description 9: Provide an overview of Caruana et al.'s work on using MLPs to provide case-based explanations, highlighting differences from the Korean approach.
Section 10: An Irish Departure (mid-2000s): Local Feature-Weighting Tests of Twins
Description 10: Discuss the contributions from the Irish group at University College Dublin, focusing on their local feature-weighting methods and user tests.
Section 11: Recent DNN-CBR Twinning
Description 11: Explore recent developments and approaches for combining DNNs with CBR for explainability, identifying notable techniques and their applications.
Section 12: Future Directions: Road Mapping
Description 12: Summarize the significance of the survey and propose future research directions for ANN-CBR twins in addressing XAI challenges.
```
|
A Survey on OFDM-Based Elastic Core Optical Networking
| 15 |
---
paper_title: Dynamic optical mesh networks: Drivers, challenges and solutions for the future
paper_content:
We discuss the scalability challenges facing optical networks. Using an architecture based on the spectrum-sliced elastic optical path network (SLICE), we demonstrate how networking functionality can be effectively shifted to the optical domain.
---
paper_title: OFDM for Optical Communications
paper_content:
The first book on optical OFDM by the leading pioneers in the field. The only book to cover error correction codes for optical OFDM. It gives applications of OFDM to free-space communications, optical access networks, and metro and long-haul transport to show how optical OFDM can be implemented. It contains introductions to signal processing for optical engineers and optical communication fundamentals for wireless engineers. This book gives a coherent and comprehensive introduction to the fundamentals of OFDM signal processing, with a distinctive focus on its broad range of applications. It evaluates the architecture, design and performance of a number of OFDM variations, discusses coded OFDM, and gives a detailed study of error correction codes for access networks, 100 Gb/s Ethernet and future optical networks. The emerging applications of optical OFDM, including single-mode fiber transmission, multimode fiber transmission, free space optical systems, and optical access networks are examined, with particular attention paid to passive optical networks, radio-over-fiber, WiMAX and UWB communications. Written by two of the leading contributors to the field, this book will be a unique reference for optical communications engineers and scientists. Students, technical managers and telecom executives seeking to understand this new technology for future-generation optical networks will find the book invaluable. William Shieh is an associate professor and reader in the electrical and electronic engineering department, The University of Melbourne, Australia. He received his M.S. degree in electrical engineering and Ph.D. degree in physics both from the University of Southern California. Ivan Djordjevic is an Assistant Professor of Electrical and Computer Engineering at the University of Arizona, Tucson, where he directs the Optical Communications Systems Laboratory (OCSL). His current research interests include optical networks, error control coding, constrained coding, coded modulation, turbo equalization, OFDM applications, and quantum error correction. 'This wonderful book is the first one to address the rapidly emerging optical OFDM field. Written by two leading researchers in the field, the book is structured to comprehensively cover any optical OFDM aspect one could possibly think of, from the most fundamental to the most specialized. The book adopts a coherent line of presentation, while striking a thoughtful balance between the various topics, gradually developing the optical-physics and communication-theoretic concepts required for deep comprehension of the topic, eventually treating the multiple optical OFDM methods, variations and applications. In my view, this book will remain relevant for many years to come, and will be increasingly accessed by graduate students, accomplished researchers as well as telecommunication engineers and managers keen to attain a perspective on the emerging role of OFDM in the evolution of photonic networks' - Prof. Moshe Nazarathy, EE Dept., Technion, Israel Institute of Technology.
---
paper_title: 24-Gb/s Transmission over 730 m of Multimode Fiber by Direct Modulation of an 850-nm VCSEL using Discrete Multi-tone Modulation
paper_content:
Using discrete multi-tone modulation with up to 64-QAM mapping, 24-Gb/s transmission is experimentally demonstrated over 730 m of MMF by direct modulation of an 850-nm VCSEL and direct detection with a MMF receiver.
---
paper_title: Spectrum-efficient and scalable elastic optical path network: architecture, benefits, and enabling technologies
paper_content:
The sustained growth of data traffic volume calls for an introduction of an efficient and scalable transport platform for links of 100 Gb/s and beyond in the future optical network. In this article, after briefly reviewing the existing major technology options, we propose a novel, spectrum- efficient, and scalable optical transport network architecture called SLICE. The SLICE architecture enables sub-wavelength, superwavelength, and multiple-rate data traffic accommodation in a highly spectrum-efficient manner, thereby providing a fractional bandwidth service. Dynamic bandwidth variation of elastic optical paths provides network operators with new business opportunities offering cost-effective and highly available connectivity services through time-dependent bandwidth sharing, energy-efficient network operation, and highly survivable restoration with bandwidth squeezing. We also discuss an optical orthogonal frequency-division multiplexing-based flexible-rate transponder and a bandwidth-variable wavelength cross-connect as the enabling technologies of SLICE concept. Finally, we present the performance evaluation and technical challenges that arise in this new network architecture.
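To make the notion of elastic, just-enough spectrum allocation slightly more concrete, the toy sketch below implements first-fit assignment of contiguous frequency slots on a single link, one of the simplest policies discussed in the elastic optical networking literature. It is an illustrative assumption of this survey context, not an algorithm proposed in the paper itself.
```python
def first_fit_spectrum(link_slots, demand_slots):
    """Find the first run of `demand_slots` contiguous free frequency slots
    on a link (False = free, True = occupied), mark them occupied and
    return the starting slot index, or None if the demand is blocked."""
    run_start, run_len = None, 0
    for i, used in enumerate(link_slots):
        if not used:
            if run_len == 0:
                run_start = i
            run_len += 1
            if run_len == demand_slots:
                for j in range(run_start, run_start + demand_slots):
                    link_slots[j] = True        # allocate the contiguous block
                return run_start
        else:
            run_len = 0
    return None

if __name__ == "__main__":
    slots = [False] * 16
    print(first_fit_spectrum(slots, 3))   # 0: a 3-slot path starting at slot 0
    print(first_fit_spectrum(slots, 5))   # 3: the next contiguous block
    print(first_fit_spectrum(slots, 10))  # None: blocked, only 8 free slots remain
```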
---
paper_title: Technology and architecture to enable the explosive growth of the internet
paper_content:
At current growth rates, Internet traffic will increase by a factor of one thousand in roughly 20 years. It will be challenging for transmission and routing/switching systems to keep pace with this level of growth without requiring prohibitively large increases in network cost and power consumption. We present a high-level vision for addressing these challenges based on both technological and architectural advancements.
---
paper_title: Optical packet switching: A reality check
paper_content:
This paper presents an analysis of the energy consumption in a number of optical switch fabric architectures for optical packet-switched applications and compares them to electronic switch fabrics. Optical packet switching does not appear to offer any substantial power consumption advantages over electronic packet switching. Therefore, there is no compelling case for optical packet switching.
---
paper_title: Simple all-optical FFT scheme enabling Tbit/s real-time signal processing
paper_content:
A practical scheme to perform the fast Fourier transform in the optical domain is introduced. Optical real-time FFT signal processing is performed at speeds far beyond the limits of electronic digital processing, and with negligible energy consumption. To illustrate the power of the method we demonstrate an optical 400 Gbit/s OFDM receiver. It performs an optical real-time FFT on the consolidated OFDM data stream, thereby demultiplexing the signal into lower bit rate subcarrier tributaries, which can then be processed electronically.
---
paper_title: Optical Networking Technologies That Will Create Future Bandwidth-Abundant Networks [Invited]
paper_content:
The transport network paradigm is moving toward next-generation networks that aim at IP convergence, while architectures and technologies are diversifying. Video technologies including ultrahigh-definition TV (more than 33 M pixels) continue to advance, and future communication networks will become video-centric. The inefficiencies of current IP technologies, in particular, the energy consumption and throughput limitations of IP routers, will become pressing problems. Harnessing the full power of light will resolve these problems and spur the creation of future video-centric networks. Extension of optical layer technologies and coordination with new transport protocols will be critical; hierarchical optical path technologies and optical circuit/path switching will play key roles. Recent technical advances in these fields are presented.
---
paper_title: OFDM Systems for Wireless Communications
paper_content:
Orthogonal Frequency Division Multiplexing (OFDM) systems are widely used in the standards for digital audio/video broadcasting, WiFi and WiMax. Being a frequency-domain approach to communications, OFDM has important advantages in dealing with the frequency-selective nature of high data rate wireless communication channels. As the needs for operating with higher data rates become more pressing, OFDM systems have emerged as an effective physical-layer solution. This short monograph is intended as a tutorial which highlights the deleterious aspects of the wireless channel and presents why OFDM is a good choice as a modulation that can transmit at high data rates. The system-level approach we shall pursue will also point out the disadvantages of OFDM systems, especially in the context of peak-to-average ratio and carrier frequency synchronization. Finally, simulation of OFDM systems will be given due prominence. Simple MATLAB programs are provided for bit error rate simulation using a discrete-time OFDM representation. Software is also provided to simulate the effects of inter-block-interference, inter-carrier-interference and signal clipping on the error rate performance. Different components of the OFDM system are described, and detailed implementation notes are provided for the programs. Table of Contents: Introduction / Modeling Wireless Channels / Baseband OFDM System / Carrier Frequency Offset / Peak to Average Power Ratio / Simulation of the Performance of OFDM Systems / Conclusions
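For readers without MATLAB, the following is a minimal NumPy sketch of the kind of discrete-time BER simulation the monograph describes (QPSK on 64 subcarriers with a cyclic prefix over an AWGN channel with ideal synchronization); the parameters and scaling are illustrative assumptions, not the book's code.

import numpy as np

rng = np.random.default_rng(1)
N, CP, NSYM = 64, 16, 2000          # subcarriers, cyclic-prefix length, OFDM symbols
EbN0_dB = 8.0
bits = rng.integers(0, 2, size=(NSYM, N, 2))                 # 2 bits per QPSK symbol
qpsk = ((1 - 2*bits[..., 0]) + 1j*(1 - 2*bits[..., 1])) / np.sqrt(2)

tx = np.fft.ifft(qpsk, axis=1) * np.sqrt(N)                  # unitary IFFT scaling
tx = np.hstack([tx[:, -CP:], tx])                            # prepend cyclic prefix

EsN0 = 10**(EbN0_dB/10) * 2 * N / (N + CP)                   # account for CP overhead
noise_var = 1.0 / EsN0                                       # unit-energy QPSK symbols
noise = np.sqrt(noise_var/2) * (rng.standard_normal(tx.shape) + 1j*rng.standard_normal(tx.shape))
rx = tx + noise                                              # AWGN channel, no dispersion

rx = rx[:, CP:]                                              # strip cyclic prefix
eq = np.fft.fft(rx, axis=1) / np.sqrt(N)
bits_hat = np.stack([(eq.real < 0), (eq.imag < 0)], axis=-1).astype(int)
print(f"simulated BER = {np.mean(bits_hat != bits):.2e}")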
---
paper_title: OFDM for Optical Communications
paper_content:
The first book on optical OFDM by the leading pioneers in the field. The only book to cover error correction codes for optical OFDM. It gives applications of OFDM to free-space communications, optical access networks, and metro and long-haul transport to show how optical OFDM can be implemented. It contains introductions to signal processing for optical engineers and optical communication fundamentals for wireless engineers. This book gives a coherent and comprehensive introduction to the fundamentals of OFDM signal processing, with a distinctive focus on its broad range of applications. It evaluates the architecture, design and performance of a number of OFDM variations, discusses coded OFDM, and gives a detailed study of error correction codes for access networks, 100 Gb/s Ethernet and future optical networks. The emerging applications of optical OFDM, including single-mode fiber transmission, multimode fiber transmission, free space optical systems, and optical access networks are examined, with particular attention paid to passive optical networks, radio-over-fiber, WiMAX and UWB communications. Written by two of the leading contributors to the field, this book will be a unique reference for optical communications engineers and scientists. Students, technical managers and telecom executives seeking to understand this new technology for future-generation optical networks will find the book invaluable. William Shieh is an associate professor and reader in the electrical and electronic engineering department, The University of Melbourne, Australia. He received his M.S. degree in electrical engineering and Ph.D. degree in physics, both from the University of Southern California. Ivan Djordjevic is an Assistant Professor of Electrical and Computer Engineering at the University of Arizona, Tucson, where he directs the Optical Communications Systems Laboratory (OCSL). His current research interests include optical networks, error control coding, constrained coding, coded modulation, turbo equalization, OFDM applications, and quantum error correction. 'This wonderful book is the first one to address the rapidly emerging optical OFDM field. Written by two leading researchers in the field, the book is structured to comprehensively cover any optical OFDM aspect one could possibly think of, from the most fundamental to the most specialized. The book adopts a coherent line of presentation, while striking a thoughtful balance between the various topics, gradually developing the optical-physics and communication-theoretic concepts required for deep comprehension of the topic, eventually treating the multiple optical OFDM methods, variations and applications. In my view, this book will remain relevant for many years to come, and will be increasingly accessed by graduate students, accomplished researchers as well as telecommunication engineers and managers keen to attain a perspective on the emerging role of OFDM in the evolution of photonic networks' - Prof. Moshe Nazarathy, EE Dept., Technion, Israel Institute of Technology.
---
paper_title: Data Transmission by Frequency-Division Multiplexing Using the Discrete Fourier Transform
paper_content:
The Fourier transform data communication system is a realization of frequency-division multiplexing (FDM) in which discrete Fourier transforms are computed as part of the modulation and demodulation processes. In addition to eliminating the banks of subcarrier oscillators and coherent demodulators usually required in FDM systems, a completely digital implementation can be built around a special-purpose computer performing the fast Fourier transform. In this paper, the system is described and the effects of linear channel distortion are investigated. Signal design criteria and equalization algorithms are derived and explained. A differential phase modulation scheme is presented that obviates any equalization.
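A small sketch of the paper's central idea: a single inverse DFT replaces the bank of subcarrier oscillators, and the forward DFT recovers every subcarrier symbol exactly. The complex-baseband, 8-subcarrier QPSK setup below is an assumption for illustration only.

import numpy as np

rng = np.random.default_rng(0)
N = 8                                           # number of subcarriers
X = rng.choice(np.array([1+1j, 1-1j, -1+1j, -1-1j]), N)   # one QPSK symbol per subcarrier

# Explicit FDM synthesis: sum of subcarrier "oscillators" exp(j*2*pi*k*n/N)
n = np.arange(N)
explicit = np.array([np.sum(X * np.exp(2j*np.pi*np.arange(N)*t/N)) for t in n]) / N

# The same waveform via the inverse DFT: one IFFT call replaces the oscillator bank
via_idft = np.fft.ifft(X)
print(np.allclose(explicit, via_idft))          # True

# Demodulation: the forward DFT recovers every subcarrier symbol exactly
print(np.allclose(np.fft.fft(via_idft), X))     # True (orthogonality of the subcarriers)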
---
paper_title: OFDM for Optical Communications
paper_content:
The first book on optical OFDM by the leading pioneers in the field. The only book to cover error correction codes for optical OFDM. It gives applications of OFDM to free-space communications, optical access networks, and metro and long-haul transport to show how optical OFDM can be implemented. It contains introductions to signal processing for optical engineers and optical communication fundamentals for wireless engineers. This book gives a coherent and comprehensive introduction to the fundamentals of OFDM signal processing, with a distinctive focus on its broad range of applications. It evaluates the architecture, design and performance of a number of OFDM variations, discusses coded OFDM, and gives a detailed study of error correction codes for access networks, 100 Gb/s Ethernet and future optical networks. The emerging applications of optical OFDM, including single-mode fiber transmission, multimode fiber transmission, free space optical systems, and optical access networks are examined, with particular attention paid to passive optical networks, radio-over-fiber, WiMAX and UWB communications. Written by two of the leading contributors to the field, this book will be a unique reference for optical communications engineers and scientists. Students, technical managers and telecom executives seeking to understand this new technology for future-generation optical networks will find the book invaluable. William Shieh is an associate professor and reader in the electrical and electronic engineering department, The University of Melbourne, Australia. He received his M.S. degree in electrical engineering and Ph.D. degree in physics, both from the University of Southern California. Ivan Djordjevic is an Assistant Professor of Electrical and Computer Engineering at the University of Arizona, Tucson, where he directs the Optical Communications Systems Laboratory (OCSL). His current research interests include optical networks, error control coding, constrained coding, coded modulation, turbo equalization, OFDM applications, and quantum error correction. 'This wonderful book is the first one to address the rapidly emerging optical OFDM field. Written by two leading researchers in the field, the book is structured to comprehensively cover any optical OFDM aspect one could possibly think of, from the most fundamental to the most specialized. The book adopts a coherent line of presentation, while striking a thoughtful balance between the various topics, gradually developing the optical-physics and communication-theoretic concepts required for deep comprehension of the topic, eventually treating the multiple optical OFDM methods, variations and applications. In my view, this book will remain relevant for many years to come, and will be increasingly accessed by graduate students, accomplished researchers as well as telecommunication engineers and managers keen to attain a perspective on the emerging role of OFDM in the evolution of photonic networks' - Prof. Moshe Nazarathy, EE Dept., Technion, Israel Institute of Technology.
---
paper_title: Coherent optical OFDM: theory and design.
paper_content:
Coherent optical OFDM (CO-OFDM) has recently been proposed and the proof-of-concept transmission experiments have shown its extreme robustness against chromatic dispersion and polarization mode dispersion. In this paper, we first review the theoretical fundamentals for CO-OFDM and its channel model in a 2x2 MIMO-OFDM representation. We then present various design choices for CO-OFDM systems and perform the nonlinearity analysis for RF-to-optical up-converter. We also show the receiver-based digital signal processing to mitigate self-phase-modulation (SPM) and Gordon-Mollenauer phase noise, which is equivalent to the midspan phase conjugation.
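The 2x2 MIMO-OFDM view of the polarization-multiplexed channel can be illustrated with a toy per-subcarrier model. The random Jones matrices and the noiseless zero-forcing inversion below are illustrative assumptions, not the paper's receiver design (which estimates the channel from training symbols in the presence of noise).

import numpy as np

rng = np.random.default_rng(2)
N = 64                                     # subcarriers
bits = rng.integers(0, 2, (2, N, 2))       # QPSK on the two polarizations
X = ((1 - 2*bits[..., 0]) + 1j*(1 - 2*bits[..., 1])) / np.sqrt(2)    # shape (2, N)

def jones(theta, phi):
    """Per-subcarrier 2x2 Jones matrix: polarization rotation plus a DGD-like phase."""
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    D = np.diag([np.exp(1j*phi/2), np.exp(-1j*phi/2)])
    return R @ D

H = np.stack([jones(rng.uniform(0, np.pi), rng.uniform(0, 2*np.pi)) for _ in range(N)])  # (N,2,2)

Y = np.einsum('kij,jk->ik', H, X)                       # received field per subcarrier
X_hat = np.einsum('kij,jk->ik', np.linalg.inv(H), Y)    # zero-forcing: invert each 2x2 matrix
print(np.allclose(X_hat, X))                            # True in this noiseless example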
---
paper_title: No-Guard-Interval Coherent Optical OFDM for 100-Gb/s Long-Haul WDM Transmission
paper_content:
This paper describes coherent optical orthogonal frequency division multiplexing (CO-OFDM) techniques for the long-haul transmission of 100-Gb/s-class channels. First, we discuss the configurations of the transmitter and receiver that implement the optical multiplexing/demultiplexing techniques for high-speed CO-OFDM transmission. Next, we review the no-guard-interval (No-GI) CO-OFDM transmission scheme which utilizes optical multiplexing for OFDM signal generation and the intradyne receiver configuration with digital signal processing (DSP). We examine the transmission characteristics of the proposed scheme, and show that No-GI CO-OFDM offers compact signal spectra and superior performance with regard to tolerance against optical amplifier noise and polarization-mode dispersion (PMD). We then introduce long-haul high-capacity transmission experiments employing No-GI CO-OFDM; 13.4 Tb/s (134 × 111 Gb/s) transmission is successfully demonstrated over 3600 km of ITU-T G.652 single-mode fiber without using optical dispersion compensation.
---
paper_title: Channel estimation for wireless OFDM systems
paper_content:
A channel estimation technique is developed for an OFDM system with multiple transmit and multiple receive antennas. The approach exploits the channel structure in the time domain and the frequency domain. The method is appropriate for OFDM systems operating in a time varying channel. A multi-input-multi-output OFDM (MOFDM) system model is defined, followed by a proposed excitation strategy for sufficient identification of the channel. The estimation strategy is shown to be optimal within the class of linear MMSE estimators. Experimental results of the algorithm are provided.
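A much-simplified single-antenna illustration of pilot-aided channel estimation (least-squares at pilot tones followed by interpolation). The paper's estimator is a multi-antenna linear MMSE scheme, so the sketch below, with its assumed channel length, pilot spacing and noise level, only conveys the general idea.

import numpy as np

rng = np.random.default_rng(3)
N, L, pilot_step = 64, 8, 4                       # subcarriers, channel taps, pilot spacing
h = (rng.standard_normal(L) + 1j*rng.standard_normal(L)) / np.sqrt(2*L)
H = np.fft.fft(h, N)                              # true frequency response

X = np.exp(1j*np.pi/2*rng.integers(0, 4, N))      # QPSK symbols, |X| = 1
noise = 0.05*(rng.standard_normal(N) + 1j*rng.standard_normal(N))
Y = H*X + noise                                   # one received OFDM symbol (post-FFT)

pilots = np.arange(0, N, pilot_step)              # subcarriers whose symbols are known
H_ls = Y[pilots] / X[pilots]                      # least-squares estimate at the pilots
# Linear interpolation of real/imag parts across the remaining subcarriers
H_hat = np.interp(np.arange(N), pilots, H_ls.real) + 1j*np.interp(np.arange(N), pilots, H_ls.imag)

mse = np.mean(np.abs(H_hat - H)**2) / np.mean(np.abs(H)**2)
print(f"normalized channel-estimation MSE = {mse:.3f}")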
---
paper_title: Bit and Power Loading for Coherent Optical OFDM
paper_content:
We report the first experiment on bit and power loading for coherent optical orthogonal frequency-division multiplexing (CO-OFDM) systems. The data rate of CO-OFDM systems can be dynamically adjusted according to the channel condition. The system performance can be further improved through optimal power loading into each modulation band.
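A hedged sketch of threshold-based bit loading: each subcarrier is assigned the densest constellation whose SNR threshold it clears. The thresholds and the sloped SNR profile below are hypothetical, chosen for illustration rather than taken from the experiment.

import numpy as np

N = 64
snr_db = 20 - 12*np.abs(np.linspace(-1, 1, N))        # illustrative sloped channel SNR

# Hypothetical switching thresholds (dB) for some target BER: (min SNR, bits/symbol, format)
thresholds = [(22.0, 6, '64-QAM'), (16.0, 4, '16-QAM'), (9.0, 2, 'QPSK'), (6.0, 1, 'BPSK')]

def load_bits(snr):
    for th, b, name in thresholds:
        if snr >= th:
            return b, name
    return 0, 'off'                                   # subcarrier too noisy: leave it unmodulated

loading = [load_bits(s) for s in snr_db]
print(f"aggregate: {sum(b for b, _ in loading)} bits per OFDM symbol")
print("edge subcarrier:", loading[0][1], " centre subcarrier:", loading[N//2][1])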
---
paper_title: OFDM for Optical Communications
paper_content:
The first book on optical OFDM by the leading pioneers in the field. The only book to cover error correction codes for optical OFDM. It gives applications of OFDM to free-space communications, optical access networks, and metro and long-haul transport to show how optical OFDM can be implemented. It contains introductions to signal processing for optical engineers and optical communication fundamentals for wireless engineers. This book gives a coherent and comprehensive introduction to the fundamentals of OFDM signal processing, with a distinctive focus on its broad range of applications. It evaluates the architecture, design and performance of a number of OFDM variations, discusses coded OFDM, and gives a detailed study of error correction codes for access networks, 100 Gb/s Ethernet and future optical networks. The emerging applications of optical OFDM, including single-mode fiber transmission, multimode fiber transmission, free space optical systems, and optical access networks are examined, with particular attention paid to passive optical networks, radio-over-fiber, WiMAX and UWB communications. Written by two of the leading contributors to the field, this book will be a unique reference for optical communications engineers and scientists. Students, technical managers and telecom executives seeking to understand this new technology for future-generation optical networks will find the book invaluable. William Shieh is an associate professor and reader in the electrical and electronic engineering department, The University of Melbourne, Australia. He received his M.S. degree in electrical engineering and Ph.D. degree in physics, both from the University of Southern California. Ivan Djordjevic is an Assistant Professor of Electrical and Computer Engineering at the University of Arizona, Tucson, where he directs the Optical Communications Systems Laboratory (OCSL). His current research interests include optical networks, error control coding, constrained coding, coded modulation, turbo equalization, OFDM applications, and quantum error correction. 'This wonderful book is the first one to address the rapidly emerging optical OFDM field. Written by two leading researchers in the field, the book is structured to comprehensively cover any optical OFDM aspect one could possibly think of, from the most fundamental to the most specialized. The book adopts a coherent line of presentation, while striking a thoughtful balance between the various topics, gradually developing the optical-physics and communication-theoretic concepts required for deep comprehension of the topic, eventually treating the multiple optical OFDM methods, variations and applications. In my view, this book will remain relevant for many years to come, and will be increasingly accessed by graduate students, accomplished researchers as well as telecommunication engineers and managers keen to attain a perspective on the emerging role of OFDM in the evolution of photonic networks' - Prof. Moshe Nazarathy, EE Dept., Technion, Israel Institute of Technology.
---
paper_title: OFDM for Flexible High-Speed Optical Networks
paper_content:
Fast-advancing silicon technology underpinned by Moore's law is creating a major transformation in optical fiber communications. The recent upsurge of interest in optical orthogonal frequency-division multiplexing (OFDM) as an efficient modulation and multiplexing scheme is merely a manifestation of this unmistakable trend. Since the formulation of the fundamental concept of OFDM by Chang in 1966 and many landmark works by others thereafter, OFDM has been triumphant in almost all the major RF communication standards. Nevertheless, its application to optical communications is rather nascent and its potential success in the optical domain remains an open question. This tutorial provides a review of optical OFDM slanted towards emerging optical fiber networks. The objective of the tutorial is two-fold: (i) to review OFDM fundamentals from its basic mathematical formulation to its salient disadvantages and advantages, and (ii) to reveal the unique characteristics of the fiber optical channel and identify the challenges and opportunities in the application of optical OFDM.
---
paper_title: Low-Cost and Robust 1-Gbit/s Plastic Optical Fiber Link Based on Light-Emitting Diode Technology
paper_content:
1-Gbit/s transmission is demonstrated over 50 m of step-index PMMA plastic optical fiber (1-mm core-diameter) using a commercial light-emitting diode. This is enabled by use of discrete multitone modulation with up to 64-QAM constellation mapping.
---
paper_title: 24-Gb/s Transmission over 730 m of Multimode Fiber by Direct Modulation of an 850-nm VCSEL using Discrete Multi-tone Modulation
paper_content:
Using discrete multi-tone modulation with up to 64-QAM mapping, 24-Gb/s transmission is experimentally demonstrated over 730 m of MMF by direct modulation of an 850-nm VCSEL and direct detection with a MMF receiver.
---
paper_title: Cost-effective 33-Gbps intensity modulation direct detection multi-band OFDM LR-PON system employing a 10-GHz-based transceiver.
paper_content:
We develop a dynamic multi-band OFDM subcarrier allocation scheme to fully utilize the available bandwidth under the restriction of dispersion- and chirp-related power fading. The experimental results successfully demonstrate intensity-modulation direct-detection 34.78-Gbps OFDM signal transmission over 100-km long-reach (LR) passive-optical networks (PONs) based on a cost-effective 10-GHz EAM and a 10-GHz PIN. Considering the 0-100-km transmission bandwidth of a 10-GHz EAM, the narrowest bandwidth is theoretically evaluated to occur at ~40 km, instead of 100 km. Consequently, the performance of 20-100-km PONs is experimentally investigated, and at least 33-Gbps capacity is achieved to support LR-PONs of all possible 20-100-km radii.
---
paper_title: Transmission performance of adaptively modulated optical OFDM signals in multimode fiber links
paper_content:
A novel optical signal modulation concept of adaptively modulated optical orthogonal frequency-division multiplexing (AMOOFDM) is proposed, and numerical simulations of the transmission performance of AMOOFDM signals are undertaken in unamplified multimode fiber (MMF)-based links using directly modulated distributed feedback lasers (DMLs). It is shown that 28-Gb/s intensity-modulation and direct-detection AMOOFDM signal transmission over 300-m MMFs is feasible in unamplified DML-based links having 3-dB bandwidths of 150 MHz·km. In addition, AMOOFDM is less susceptible to modal dispersion and variation in launching conditions when compared with existing schemes.
---
paper_title: No-Guard-Interval Coherent Optical OFDM for 100-Gb/s Long-Haul WDM Transmission
paper_content:
This paper describes coherent optical orthogonal frequency division multiplexing (CO-OFDM) techniques for the long-haul transmission of 100-Gb/s-class channels. First, we discuss the configurations of the transmitter and receiver that implement the optical multiplexing/demultiplexing techniques for high-speed CO-OFDM transmission. Next, we review the no-guard-interval (No-GI) CO-OFDM transmission scheme which utilizes optical multiplexing for OFDM signal generation and the intradyne receiver configuration with digital signal processing (DSP). We examine the transmission characteristics of the proposed scheme, and show that No-GI CO-OFDM offers compact signal spectra and superior performance with regard to tolerance against optical amplifier noise and polarization-mode dispersion (PMD). We then introduce long-haul high-capacity transmission experiments employing No-GI CO-OFDM; 13.4 Tb/s (134 × 111 Gb/s) transmission is successfully demonstrated over 3600 km of ITU-T G.652 single-mode fiber without using optical dispersion compensation.
---
paper_title: Adaptive Optical Wireless OFDM System with Controlled Asymmetric Clipping
paper_content:
Optical wireless (OW) technology is attractive for short-range high-speed transmission, especially in RF-sensitive environments or where secure applications are desired. We consider transmission over a broadband OW channel, as present in a non-directed line-of-sight (LOS) link. For communication, we assume a system concept based on modulation-adaptive OFDM (DMT). Such a system offers efficient channel capacity exploitation, while avoiding inconvenient pointing and tracking mechanisms. Moreover, it allows deployment of simple optical components and efficient electrical signal processing. We first show that the dynamically adaptive system can provide great transmission rate enhancements compared to the statically designed one, even assuming a very conservative constraint on the electrical signal waveform (i.e., no clipping). Then, we show that significant further improvements can be achieved by tolerating some clipping, at the cost of accepting a minor increase of the symbol error rate.
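A minimal sketch of real-valued DMT generation with controlled clipping for an intensity-modulated link. It uses symmetric clipping plus a DC bias as a simplified stand-in for the paper's controlled asymmetric clipping, and the IFFT size and clipping ratio are assumptions.

import numpy as np

rng = np.random.default_rng(5)
N = 64                                     # IFFT size; N//2 - 1 data subcarriers
data = (rng.choice([-1, 1], N//2 - 1) + 1j*rng.choice([-1, 1], N//2 - 1)) / np.sqrt(2)

# Hermitian symmetry forces a real-valued time-domain (DMT) waveform
spec = np.zeros(N, dtype=complex)
spec[1:N//2] = data
spec[N//2+1:] = np.conj(data[::-1])
x = np.fft.ifft(spec).real * np.sqrt(N)

# Clip symmetrically at a chosen clipping ratio (clip level over RMS), then add a DC bias
clip_ratio_db = 8.0
A = 10**(clip_ratio_db/20) * np.sqrt(np.mean(x**2))
drive = np.clip(x, -A, A) + A              # non-negative signal to drive the LED/laser

print(f"samples clipped: {100*np.mean(np.abs(x) > A):.2f}%")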
---
paper_title: Adaptive OFDM system for communications over the indoor wireless optical channel
paper_content:
The authors propose an adaptive orthogonal frequency division multiplexing (OFDM) system for communications over the indoor wireless diffuse optical channel. This channel can be characterised as short-term stationary, with severe attenuation and multipath-induced penalty, and high dependence on the spatial distribution of emitters and receivers. We have chosen OFDM systems because of their capability of supporting high data rates without channel equalisation. They also mitigate the quality of service fluctuations induced when the spatial distribution of emitters and receivers varies. The performance of the new proposed scheme is compared with that of an adaptive system described in a previous work. The obtained results show that a significant increase in system throughput is attained over noisy wireless optical channels.
---
paper_title: OFDM for Optical Communications
paper_content:
The first book on optical OFDM by the leading pioneers in the field. The only book to cover error correction codes for optical OFDM. It gives applications of OFDM to free-space communications, optical access networks, and metro and long-haul transport to show how optical OFDM can be implemented. It contains introductions to signal processing for optical engineers and optical communication fundamentals for wireless engineers. This book gives a coherent and comprehensive introduction to the fundamentals of OFDM signal processing, with a distinctive focus on its broad range of applications. It evaluates the architecture, design and performance of a number of OFDM variations, discusses coded OFDM, and gives a detailed study of error correction codes for access networks, 100 Gb/s Ethernet and future optical networks. The emerging applications of optical OFDM, including single-mode fiber transmission, multimode fiber transmission, free space optical systems, and optical access networks are examined, with particular attention paid to passive optical networks, radio-over-fiber, WiMAX and UWB communications. Written by two of the leading contributors to the field, this book will be a unique reference for optical communications engineers and scientists. Students, technical managers and telecom executives seeking to understand this new technology for future-generation optical networks will find the book invaluable. William Shieh is an associate professor and reader in the electrical and electronic engineering department, The University of Melbourne, Australia. He received his M.S. degree in electrical engineering and Ph.D. degree in physics, both from the University of Southern California. Ivan Djordjevic is an Assistant Professor of Electrical and Computer Engineering at the University of Arizona, Tucson, where he directs the Optical Communications Systems Laboratory (OCSL). His current research interests include optical networks, error control coding, constrained coding, coded modulation, turbo equalization, OFDM applications, and quantum error correction. 'This wonderful book is the first one to address the rapidly emerging optical OFDM field. Written by two leading researchers in the field, the book is structured to comprehensively cover any optical OFDM aspect one could possibly think of, from the most fundamental to the most specialized. The book adopts a coherent line of presentation, while striking a thoughtful balance between the various topics, gradually developing the optical-physics and communication-theoretic concepts required for deep comprehension of the topic, eventually treating the multiple optical OFDM methods, variations and applications. In my view, this book will remain relevant for many years to come, and will be increasingly accessed by graduate students, accomplished researchers as well as telecommunication engineers and managers keen to attain a perspective on the emerging role of OFDM in the evolution of photonic networks' - Prof. Moshe Nazarathy, EE Dept., Technion, Israel Institute of Technology.
---
paper_title: 448-Gb/s Reduced-Guard-Interval CO-OFDM Transmission Over 2000 km of Ultra-Large-Area Fiber and Five 80-GHz-Grid ROADMs
paper_content:
We propose a novel coherent optical orthogonal frequency-division multiplexing (CO-OFDM) scheme with reduced guard interval (RGI) for high-speed high-spectral-efficiency long-haul optical transmission. In this scheme, fiber chromatic dispersion is compensated for within the receiver rather than being accommodated by the guard interval (GI) as in conventional CO-OFDM, thereby reducing the needed GI, especially when fiber dispersion is large. We demonstrate the generation of a 448-Gb/s RGI-CO-OFDM signal with 16-QAM subcarrier modulation through orthogonal band multiplexing. This signal occupies an optical bandwidth of 60 GHz, and is transmitted over 2000 km of ultra-large-area fiber (ULAF) with five passes through an 80-GHz-grid wavelength-selective switch. Banded digital coherent detection with two detection bands is used to receive this 448-Gb/s signal. Wavelength-division multiplexed transmission of three 80-GHz spaced 448-Gb/s RGI-CO-OFDM channels is also demonstrated, achieving a net system spectral efficiency of 5.2 b/s/Hz and a transmission distance of 1600 km of ULAF.
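A sketch of the receiver-side frequency-domain bulk CD compensation that allows the guard interval to be reduced. The fiber parameters, sampling rate and sign convention below are illustrative assumptions, not values from the experiment.

import numpy as np

c = 299792458.0
D, lam = 17e-6, 1550e-9          # dispersion 17 ps/nm/km (expressed in s/m^2), wavelength in m
L, Fs = 2000e3, 56e9             # 2000 km of fiber, 56-GSa/s receiver sampling rate
beta2 = -D * lam**2 / (2 * np.pi * c)        # GVD parameter in s^2/m (about -21.7 ps^2/km)

def cd_filter(x, sign):
    """Apply exp(sign * j*(beta2/2)*w^2*L) in the frequency domain (sign = -1 compensates)."""
    w = 2 * np.pi * np.fft.fftfreq(len(x), d=1.0 / Fs)
    return np.fft.ifft(np.fft.fft(x) * np.exp(sign * 1j * (beta2 / 2) * w**2 * L))

rng = np.random.default_rng(6)
x = rng.standard_normal(4096) + 1j * rng.standard_normal(4096)   # stand-in complex waveform
dispersed = cd_filter(x, +1)                  # emulate the fiber's all-pass CD response
recovered = cd_filter(dispersed, -1)          # receiver-side digital compensation
print(np.allclose(recovered, x))              # True: the CD is fully undone digitally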
---
paper_title: Coherent optical OFDM: theory and design.
paper_content:
Coherent optical OFDM (CO-OFDM) has recently been proposed and the proof-of-concept transmission experiments have shown its extreme robustness against chromatic dispersion and polarization mode dispersion. In this paper, we first review the theoretical fundamentals for CO-OFDM and its channel model in a 2x2 MIMO-OFDM representation. We then present various design choices for CO-OFDM systems and perform the nonlinearity analysis for RF-to-optical up-converter. We also show the receiver-based digital signal processing to mitigate self-phase-modulation (SPM) and Gordon-Mollenauer phase noise, which is equivalent to the midspan phase conjugation.
---
paper_title: 10x121.9-Gb/s PDM-OFDM Transmission with 2-b/s/Hz Spectral Efficiency over 1,000 km of SSMF
paper_content:
PDM-OFDM transmission of 10x121.9-Gb/s (112.6-Gb/s without OFDM overhead) at 50-GHz channel spacing is demonstrated over 1,000-km SSMF without any inline dispersion compensation. 8-QAM subcarrier modulation allows transmission of 121.9 Gb/s within a 22.8-GHz optical bandwidth.
---
paper_title: Zero-guard-interval coherent optical OFDM with overlapped frequency-domain CD and PMD equalization
paper_content:
This paper presents a new channel estimation/equalization algorithm for coherent OFDM (CO-OFDM) digital receivers, which enables the elimination of the cyclic prefix (CP) for OFDM transmission. We term this new system zero-guard-interval (ZGI) CO-OFDM. ZGI-CO-OFDM employs an overlapped frequency-domain equalizer (OFDE) to compensate for both chromatic dispersion (CD) and polarization-mode dispersion (PMD) before OFDM demodulation. Despite the zero CP overhead, ZGI-CO-OFDM demonstrates superior PMD tolerance compared to the previous reduced-GI (RGI) CO-OFDM, which is verified under several different PMD conditions. Additionally, ZGI-CO-OFDM can improve the channel estimation accuracy under high PMD conditions by using a larger intra-symbol frequency-averaging (ISFA) length as compared to RGI-CO-OFDM. ZGI-CO-OFDM also enables the use of ever smaller fast Fourier transform (FFT) sizes (i.e. <128), while maintaining the zero CP overhead. Finally, we provide an analytical comparison of the computation complexity between conventional, RGI-, and ZGI-CO-OFDM. We show that ZGI-CO-OFDM requires reasonably small additional computation effort (~13.6%) compared to RGI-CO-OFDM for 112-Gb/s transmission over a 1600-km dispersion-uncompensated optical link.
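The overlapped frequency-domain equalizer can be illustrated with a generic single-polarization overlap-save block filter; the paper's OFDE additionally handles CD and PMD jointly across two polarizations, and the block size and tap count below are assumptions.

import numpy as np

def overlap_save_filter(x, h, nfft=256):
    """Filter a long sequence with FIR taps h via overlap-save blocks (requires nfft > len(h))."""
    M = len(h)
    step = nfft - (M - 1)                        # new output samples produced per block
    H = np.fft.fft(h, nfft)
    x_pad = np.concatenate([np.zeros(M - 1, complex), x, np.zeros(step, complex)])
    out = []
    for start in range(0, len(x), step):
        block = np.fft.fft(x_pad[start:start + nfft], nfft)
        y = np.fft.ifft(block * H)
        out.append(y[M - 1:])                    # drop the first M-1 circularly aliased samples
    return np.concatenate(out)[:len(x)]

rng = np.random.default_rng(7)
x = rng.standard_normal(5000) + 1j * rng.standard_normal(5000)   # received samples (stand-in)
h = rng.standard_normal(31) + 1j * rng.standard_normal(31)       # equalizer taps (stand-in)
print(np.allclose(overlap_save_filter(x, h), np.convolve(x, h)[:len(x)]))   # True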
---
paper_title: Orthogonal frequency division multiplexing for high-speed optical transmission
paper_content:
Optical orthogonal frequency division multiplexing (OOFDM) is shown to outperform RZ-OOK transmission in high-speed optical communication systems in terms of transmission distance and spectral efficiency. OOFDM in combination with subcarrier multiplexing offers a significant improvement in spectral efficiency of at least 2.9 bit/s/Hz.
---
paper_title: Optical OFDM - A Candidate for Future Long-Haul Optical Transmission Systems
paper_content:
We review coherent-optical orthogonal frequency division multiplexing (OFDM) for long-haul optical transmission systems. Two important aspects of such systems are reviewed: RF-aided phase noise compensation and polarization division multiplexing enabled by MIMO processing.
---
paper_title: 121.9-Gb/s PDM-OFDM Transmission With 2-b/s/Hz Spectral Efficiency Over 1000 km of SSMF
paper_content:
We discuss optical multi-band orthogonal frequency division multiplexing (OFDM) and show that by using multiple parallel OFDM bands, the required bandwidth of the digital-to-analogue/analogue-to-digital converters and the required cyclic prefix can significantly be reduced. With the help of four OFDM bands and polarization division multiplexing (PDM) we report continuously detectable transmission of 10 × 121.9-Gb/s (112.6-Gb/s without OFDM overhead) at 50-GHz channel spacing over 1,000-km standard single-mode fiber (SSMF) without any inline dispersion compensation. In this experiment 8-QAM subcarrier modulation is used which confines the spectrum of the 121.9 Gb/s PDM-OFDM signal within a 22.8 GHz optical bandwidth. Moreover, we propose a digital signal processing method to reduce the matching requirements for the wideband transmitter IQ mixer structures required for PDM-OFDM.
---
paper_title: Transmission of 1.2 Tb/s continuous waveband PDM-OFDM-FDM signal with spectral efficiency of 3.3 bit/s/Hz over 400 km of SSMF
paper_content:
We demonstrate generation, transmission, and reception of a 1.21-Tb/s continuous-waveband PDM-OFDM-FDM signal with a spectral efficiency of 3.33 bit/s/Hz. After DCF-free transmission over 400 km of SSMF, a considerable Q-factor margin of 2 dB versus the EFEC limit was achieved.
---
paper_title: Coherent optical orthogonal frequency division multiplexing
paper_content:
Coherent optical orthogonal frequency division multiplexing is proposed to combat dispersion in optical media. It is shown that the optical signal-to-noise ratio penalty at 10 Gbit/s is maintained below 2 dB for 3000-km transmission over standard single-mode fibre without dispersion compensation.
---
paper_title: 107 Gb/s coherent optical OFDM transmission over 1000-km SSMF fiber using orthogonal band multiplexing.
paper_content:
Coherent optical OFDM (CO-OFDM) has emerged as an attractive modulation format for the forthcoming 100 Gb/s Ethernet. However, even the spectrally efficient implementation of CO-OFDM requires digital-to-analog converters (DAC) and analog-to-digital converters (ADC) to operate at bandwidths that may not be available today or may not be cost-effective. In order to resolve the electronic bandwidth bottleneck associated with DAC/ADC devices, we propose and elucidate the principle of orthogonal-band-multiplexed OFDM (OBM-OFDM) to subdivide the entire OFDM spectrum into multiple orthogonal bands. With this scheme, the DAC/ADCs do not need to operate at extremely high sampling rates. The corresponding mapping to the mixed-signal integrated circuit (IC) design is also revealed. Additionally, we show the proof-of-concept transmission experiment through optical realization of OBM-OFDM. To the best of our knowledge, we present the first experimental demonstration of 107 Gb/s QPSK-encoded CO-OFDM signal transmission over 1000 km of standard single-mode fiber (SSMF) without optical dispersion compensation and without Raman amplification. The demonstrated system employs 2x2 MIMO-OFDM signal processing and achieves high electrical spectral efficiency with direct-conversion at both transmitter and receiver.
---
paper_title: Optical orthogonal frequency division multiplexing using frequency/time domain filtering for high spectral efficiency up to 1 bit/s/Hz
paper_content:
We have proposed a novel optical orthogonal frequency division multiplexing technique that can overcome the spectral efficiency limitation of the conventional WDM system. This scheme permits substantial overlapping of the spectrum and can achieve a spectral efficiency of up to 1 bit/s/Hz in principle. For demultiplexing, we used a newly developed optical discrete Fourier transformer (DFT) instead of electrical digital processing, which is impossible to apply in the optical frequency range. The optical DFT was realized by using a set of delay lines, a phase shifter and a coupler in the frequency domain, and bit synchronization and an optical gate in the time domain. In an experimental demonstration of this scheme, error-free operation was obtained with a spectral efficiency of 0.8 bit/s/Hz.
---
paper_title: Single source optical OFDM transmitter and optical FFT receiver demonstrated at line rates of 5.4 and 10.8 Tbit/s
paper_content:
OFDM data with line rates of 5.4 Tbit/s or 10.8 Tbit/s are generated and decoded with a new real-time all-optical FFT receiver. Each of the 75 carriers of a comb source is encoded with 18-GBd QPSK or 16-QAM.
---
paper_title: No-Guard-Interval Coherent Optical OFDM for 100-Gb/s Long-Haul WDM Transmission
paper_content:
This paper describes coherent optical orthogonal frequency division multiplexing (CO-OFDM) techniques for the long-haul transmission of 100-Gb/s-class channels. First, we discuss the configurations of the transmitter and receiver that implement the optical multiplexing/demultiplexing techniques for high-speed CO-OFDM transmission. Next, we review the no-guard-interval (No-GI) CO-OFDM transmission scheme which utilizes optical multiplexing for OFDM signal generation and the intradyne receiver configuration with digital signal processing (DSP). We examine the transmission characteristics of the proposed scheme, and show that No-GI CO-OFDM offers compact signal spectra and superior performance with regard to tolerance against optical amplifier noise and polarization-mode dispersion (PMD). We then introduce long-haul high-capacity transmission experiments employing No-GI CO-OFDM; 13.4 Tb/s (134 × 111 Gb/s) transmission is successfully demonstrated over 3600 km of ITU-T G.652 single-mode fiber without using optical dispersion compensation.
---
paper_title: Orthogonal Frequency Division Multiplexing for Adaptive Dispersion Compensation in Long Haul WDM Systems
paper_content:
Simulations show that orthogonal frequency division multiplexing (OFDM) with optical single-sideband modulation can adaptively compensate for dispersion in 4000-km 32×10-Gb/s WDM SMF links with 40% spectral efficiency. OFDM requires no reverse feedback path and so can compensate for rapid plant variations.
---
paper_title: Spectrally Efficient Compatible Single-Sideband Modulation for OFDM Transmission With Direct Detection
paper_content:
A combination of orthogonal frequency-division multiplexing (OFDM) and compatible single-sideband modulation (CompSSB) using a standard direct-detection scheme is suggested to overcome chromatic dispersion without explicit compensation. Since the proposed type of SSB modulation does not require a spectral gap between optical carrier and subcarriers, it is highly spectrally efficient and the complexity in the analogue part is reduced compared to known direct-detection schemes for OFDM.
---
paper_title: Maximizing the Transmission Performance of Adaptively Modulated Optical OFDM Signals in Multimode-Fiber Links by Optimizing Analog-to-Digital Converters
paper_content:
Based on a comprehensive theoretical model of a recently proposed novel technique known as adaptively modulated optical orthogonal frequency-division multiplexing (AMOOFDM), investigations are undertaken into the impact of an analog-to-digital converter involved in the AMOOFDM modem on the transmission performance of AMOOFDM signals in unamplified intensity-modulation and direct-detection (IMDD) multimode-fiber (MMF)-based links. It is found that signal quantization and clipping effects are significant in determining the maximum achievable transmission performance of the AMOOFDM modem. A minimum quantization bit value of ten and an optimum clipping ratio of 13 dB are identified, based on which the transmission performance is maximized. It is shown that 40-Gb/s-over-220-m and 32-Gb/s-over-300-m IMDD-AMOOFDM signal transmission at 1550 nm with loss margins of about 15 dB is feasible in the installed worst-case 62.5-μm MMF links having 3-dB effective bandwidths as small as 150 MHz·km. Meanwhile, excellent performance and robustness to fiber type, variations in launch conditions, and signal bit rate are observed. In addition, discussions are presented of the potential of 100-Gb/s AMOOFDM signal transmission over installed MMF links.
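A small sketch of the DAC/ADC model behind such a study: symmetric clipping followed by uniform quantization of near-Gaussian OFDM samples, with the resulting signal-to-noise-and-distortion ratio. The bit counts and the 13-dB clipping ratio follow the abstract; everything else is an illustrative assumption.

import numpy as np

def clip_and_quantize(x, n_bits, clip_db):
    """Uniform mid-rise quantizer with symmetric clipping at clip_db above the RMS."""
    A = 10**(clip_db/20) * np.sqrt(np.mean(x**2))       # clipping level
    q = 2*A / (2**n_bits)                               # quantization step
    return np.clip(np.floor(np.clip(x, -A, A)/q) * q + q/2, -A, A)

rng = np.random.default_rng(8)
x = rng.standard_normal(200_000)          # OFDM time samples are close to Gaussian

for n_bits in (6, 8, 10, 12):
    y = clip_and_quantize(x, n_bits, clip_db=13.0)      # 13-dB clipping ratio
    sndr = 10*np.log10(np.mean(x**2) / np.mean((y - x)**2))
    print(f"{n_bits:2d} bits -> SNDR = {sndr:5.1f} dB")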
---
paper_title: No-Guard-Interval Coherent Optical OFDM for 100-Gb/s Long-Haul WDM Transmission
paper_content:
This paper describes coherent optical orthogonal frequency division multiplexing (CO-OFDM) techniques for the long-haul transmission of 100-Gb/s-class channels. First, we discuss the configurations of the transmitter and receiver that implement the optical multiplexing/demultiplexing techniques for high-speed CO-OFDM transmission. Next, we review the no-guard-interval (No-GI) CO-OFDM transmission scheme which utilizes optical multiplexing for OFDM signal generation and the intradyne receiver configuration with digital signal processing (DSP). We examine the transmission characteristics of the proposed scheme, and show that No-GI CO-OFDM offers compact signal spectra and superior performance with regard to tolerance against optical amplifier noise and polarization-mode dispersion (PMD). We then introduce long-haul high-capacity transmission experiments employing No-GI CO-OFDM; 13.4 Tb/s (134 × 111 Gb/s) transmission is successfully demonstrated over 3600 km of ITU-T G.652 single-mode fiber without using optical dispersion compensation.
---
paper_title: Coherent optical orthogonal frequency division multiplexing
paper_content:
Coherent optical orthogonal frequency division multiplexing is proposed to combat dispersion in optical media. It is shown that the optical signal-to-noise ratio penalty at 10 Gbit/s is maintained below 2 dB for 3000-km transmission over standard single-mode fibre without dispersion compensation.
---
paper_title: MIMO systems with antenna selection
paper_content:
Multiple-input-multiple-output (MIMO) wireless systems are those that have multiple antenna elements at both the transmitter and receiver. They were first investigated by computer simulations in the 1980s. Since that time, interest in MIMO systems has exploded. They are now being used for third-generation cellular systems (W-CDMA) and are discussed for future high-performance modes of the highly successful IEEE 802.11 standard for wireless local area networks. MIMO-related topics also occupy a considerable part of today's academic communications research. The multiple antennas in MIMO systems can be exploited in two different ways. One is the creation of a highly effective antenna diversity system; the other is the use of the multiple antennas for the transmission of several parallel data streams to increase the capacity of the system. This article presented an overview of MIMO systems with antenna selection. The transmitter, the receiver, or both use only the signals from a subset of the available antennas. This allows considerable reductions in the hardware expense.
---
paper_title: PMD-Supported Coherent Optical OFDM Systems
paper_content:
Although polarization-mode dispersion (PMD) greatly impairs conventional high-speed single-carrier systems, it is shown that for multicarrier systems such as coherent optical orthogonal frequency-division-multiplexed systems (CO-OFDM), not only does PMD not cause any impairment, but it also provides a benefit of polarization diversity against polarization-dependent-loss-induced fading and consequently improves the system margin. The PMD benefit to fiber nonlinearity reduction in CO-OFDM systems is also predicted
---
paper_title: OFDM for Optical Communications
paper_content:
The first book on optical OFDM by the leading pioneers in the field. The only book to cover error correction codes for optical OFDM. It gives applications of OFDM to free-space communications, optical access networks, and metro and long-haul transport to show how optical OFDM can be implemented. It contains introductions to signal processing for optical engineers and optical communication fundamentals for wireless engineers. This book gives a coherent and comprehensive introduction to the fundamentals of OFDM signal processing, with a distinctive focus on its broad range of applications. It evaluates the architecture, design and performance of a number of OFDM variations, discusses coded OFDM, and gives a detailed study of error correction codes for access networks, 100 Gb/s Ethernet and future optical networks. The emerging applications of optical OFDM, including single-mode fiber transmission, multimode fiber transmission, free space optical systems, and optical access networks are examined, with particular attention paid to passive optical networks, radio-over-fiber, WiMAX and UWB communications. Written by two of the leading contributors to the field, this book will be a unique reference for optical communications engineers and scientists. Students, technical managers and telecom executives seeking to understand this new technology for future-generation optical networks will find the book invaluable. William Shieh is an associate professor and reader in the electrical and electronic engineering department, The University of Melbourne, Australia. He received his M.S. degree in electrical engineering and Ph.D. degree in physics, both from the University of Southern California. Ivan Djordjevic is an Assistant Professor of Electrical and Computer Engineering at the University of Arizona, Tucson, where he directs the Optical Communications Systems Laboratory (OCSL). His current research interests include optical networks, error control coding, constrained coding, coded modulation, turbo equalization, OFDM applications, and quantum error correction. 'This wonderful book is the first one to address the rapidly emerging optical OFDM field. Written by two leading researchers in the field, the book is structured to comprehensively cover any optical OFDM aspect one could possibly think of, from the most fundamental to the most specialized. The book adopts a coherent line of presentation, while striking a thoughtful balance between the various topics, gradually developing the optical-physics and communication-theoretic concepts required for deep comprehension of the topic, eventually treating the multiple optical OFDM methods, variations and applications. In my view, this book will remain relevant for many years to come, and will be increasingly accessed by graduate students, accomplished researchers as well as telecommunication engineers and managers keen to attain a perspective on the emerging role of OFDM in the evolution of photonic networks' - Prof. Moshe Nazarathy, EE Dept., Technion, Israel Institute of Technology.
---
paper_title: OFDM for Flexible High-Speed Optical Networks
paper_content:
Fast-advancing silicon technology underpinned by Moore's law is creating a major transformation in optical fiber communications. The recent upsurge of interest in optical orthogonal frequency-division multiplexing (OFDM) as an efficient modulation and multiplexing scheme is merely a manifestation of this unmistakable trend. Since the formulation of the fundamental concept of OFDM by Chang in 1966 and many landmark works by others thereafter, OFDM has been triumphant in almost all the major RF communication standards. Nevertheless, its application to optical communications is rather nascent and its potential success in the optical domain remains an open question. This tutorial provides a review of optical OFDM slanted towards emerging optical fiber networks. The objective of the tutorial is two-fold: (i) to review OFDM fundamentals from its basic mathematical formulation to its salient disadvantages and advantages, and (ii) to reveal the unique characteristics of the fiber optical channel and identify the challenges and opportunities in the application of optical OFDM.
---
paper_title: Bit and Power Loading for Coherent Optical OFDM
paper_content:
We report the first experiment on bit and power loading for coherent optical orthogonal frequency-division multiplexing (CO-OFDM) systems. The data rate of CO-OFDM systems can be dynamically adjusted according to the channel condition. The system performance can be further improved through optimal power loading into each modulation band.
---
paper_title: Transparent WDM network with bitrate tunable optical OFDM transponders
paper_content:
Reach estimations for several variable-bitrate OFDM schemes are presented and discussed in the framework of a transparent EU core network scenario. A 44% reduction in OE interfaces is found compared to a fixed-bitrate 40-Gb/s network.
---
paper_title: Optical Network Design With Mixed Line Rates and Multiple Modulation Formats
paper_content:
With the growth of traffic volume and the emergence of various new applications, future telecom networks are expected to be increasingly heterogeneous with respect to applications supported and underlying technologies employed. To address this heterogeneity, it may be most cost effective to set up different lightpaths at different bit rates in such a backbone telecom mesh network employing optical wavelength-division multiplexing. This approach can be cost effective because low-bit-rate services will need less grooming (i.e., less multiplexing with other low-bit-rate services onto high-capacity wavelengths), while a high-bit-rate service can be accommodated directly on a wavelength itself. Optical networks with mixed line rates (MLRs), e.g., 10/40/100 Gb/s over different wavelength channels, are a new networking paradigm. The unregenerated reach of a lightpath depends on its line rate. So, the assignment of a line rate to a lightpath is a tradeoff between its capacity and transparent reach. Thus, based on their signal-quality constraints (threshold bit error rate), intelligent assignment of line rates to lightpaths can minimize the need for signal regeneration. This constraint on the transparent reach based on threshold signal quality can be relaxed by employing more advanced modulation formats, but with more investment. We propose a design method for MLR optical networks with transceivers employing different modulation formats. Our results demonstrate the tradeoff between a transceiver's cost and its optical reach in overall network design.
---
paper_title: Defragmentation of transparent Flexible optical WDM (FWDM) networks
paper_content:
We introduce the network defragmentation problem for FWDM networks, formulate it, and propose heuristics. The network defragmentation process consolidates the available spectrum significantly while minimizing the number of interrupted connections.
---
paper_title: Spectrum-efficient and scalable elastic optical path network: architecture, benefits, and enabling technologies
paper_content:
The sustained growth of data traffic volume calls for an introduction of an efficient and scalable transport platform for links of 100 Gb/s and beyond in the future optical network. In this article, after briefly reviewing the existing major technology options, we propose a novel, spectrum-efficient, and scalable optical transport network architecture called SLICE. The SLICE architecture enables sub-wavelength, superwavelength, and multiple-rate data traffic accommodation in a highly spectrum-efficient manner, thereby providing a fractional bandwidth service. Dynamic bandwidth variation of elastic optical paths provides network operators with new business opportunities offering cost-effective and highly available connectivity services through time-dependent bandwidth sharing, energy-efficient network operation, and highly survivable restoration with bandwidth squeezing. We also discuss an optical orthogonal frequency-division multiplexing-based flexible-rate transponder and a bandwidth-variable wavelength cross-connect as the enabling technologies of the SLICE concept. Finally, we present the performance evaluation and technical challenges that arise in this new network architecture.
---
paper_title: On the spectrum-efficiency of bandwidth-variable optical OFDM transport networks
paper_content:
We investigated the high spectrum efficiency property of the bandwidth-variable optical OFDM (BV-OOFDM) transport networks. In contrast to mixed-line-rate (MLR) networks, BV-OOFDM enables flexible allocation and efficient utilization of the spectral resource.
---
paper_title: Elastic optical networks with 25–100G format-versatile WDM transmission systems
paper_content:
We propose a network model based on format-versatile transceiver for data-rate tunability. We estimate each format's resistance to degradations and dimension a backbone network accordingly, yielding 20% savings on resources compared to single-rate networks.
---
paper_title: Spectrum-efficient and scalable elastic optical path network: architecture, benefits, and enabling technologies
paper_content:
The sustained growth of data traffic volume calls for an introduction of an efficient and scalable transport platform for links of 100 Gb/s and beyond in the future optical network. In this article, after briefly reviewing the existing major technology options, we propose a novel, spectrum-efficient, and scalable optical transport network architecture called SLICE. The SLICE architecture enables sub-wavelength, superwavelength, and multiple-rate data traffic accommodation in a highly spectrum-efficient manner, thereby providing a fractional bandwidth service. Dynamic bandwidth variation of elastic optical paths provides network operators with new business opportunities offering cost-effective and highly available connectivity services through time-dependent bandwidth sharing, energy-efficient network operation, and highly survivable restoration with bandwidth squeezing. We also discuss an optical orthogonal frequency-division multiplexing-based flexible-rate transponder and a bandwidth-variable wavelength cross-connect as the enabling technologies of the SLICE concept. Finally, we present the performance evaluation and technical challenges that arise in this new network architecture.
---
paper_title: Routing, Wavelength Assignment, and Spectrum Allocation in Transparent Flexible Optical WDM (FWDM) Networks
paper_content:
We propose the flexible optical WDM network architecture, and introduce the routing, wavelength assignment, and spectrum allocation problem in transparent FWDM networks. Spectrum and cost efficiency are improved compared to fixed grid networks.
---
paper_title: Elastic optical networks with 25–100G format-versatile WDM transmission systems
paper_content:
We propose a network model based on format-versatile transceiver for data-rate tunability. We estimate each format's resistance to degradations and dimension a backbone network accordingly, yielding 20% savings on resources compared to single-rate networks.
---
paper_title: Optical Network Design With Mixed Line Rates and Multiple Modulation Formats
paper_content:
With the growth of traffic volume and the emergence of various new applications, future telecom networks are expected to be increasingly heterogeneous with respect to applications supported and underlying technologies employed. To address this heterogeneity, it may be most cost effective to set up different lightpaths at different bit rates in such a backbone telecom mesh network employing optical wavelength-division multiplexing. This approach can be cost effective because low-bit-rate services will need less grooming (i.e., less multiplexing with other low-bit-rate services onto high-capacity wavelengths), while a high-bit-rate service can be accommodated directly on a wavelength itself. Optical networks with mixed line rates (MLRs), e.g., 10/40/100 Gb/s over different wavelength channels, are a new networking paradigm. The unregenerated reach of a lightpath depends on its line rate. So, the assignment of a line rate to a lightpath is a tradeoff between its capacity and transparent reach. Thus, based on their signal-quality constraints (threshold bit error rate), intelligent assignment of line rates to lightpaths can minimize the need for signal regeneration. This constraint on the transparent reach based on threshold signal quality can be relaxed by employing more advanced modulation formats, but with more investment. We propose a design method for MLR optical networks with transceivers employing different modulation formats. Our results demonstrate the tradeoff between a transceiver's cost and its optical reach in overall network design.
---
paper_title: Spectrum-efficient and scalable elastic optical path network: architecture, benefits, and enabling technologies
paper_content:
The sustained growth of data traffic volume calls for an introduction of an efficient and scalable transport platform for links of 100 Gb/s and beyond in the future optical network. In this article, after briefly reviewing the existing major technology options, we propose a novel, spectrum-efficient, and scalable optical transport network architecture called SLICE. The SLICE architecture enables sub-wavelength, superwavelength, and multiple-rate data traffic accommodation in a highly spectrum-efficient manner, thereby providing a fractional bandwidth service. Dynamic bandwidth variation of elastic optical paths provides network operators with new business opportunities offering cost-effective and highly available connectivity services through time-dependent bandwidth sharing, energy-efficient network operation, and highly survivable restoration with bandwidth squeezing. We also discuss an optical orthogonal frequency-division multiplexing-based flexible-rate transponder and a bandwidth-variable wavelength cross-connect as the enabling technologies of the SLICE concept. Finally, we present the performance evaluation and technical challenges that arise in this new network architecture.
---
paper_title: Spectrally/bitrate flexible optical network planning
paper_content:
We consider the Routing and Spectrum Allocation (RSA) problem in an OFDM-based optical network with elastic bandwidth allocation. We assess the spectrum utilization gains of this flexible architecture compared to a traditional fixed-grid rigid-bandwidth WDM network.
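A hedged sketch of the elementary building block of RSA heuristics: first-fit allocation of a contiguous block of spectrum slots that is free on every link of the route, honouring the spectrum-continuity and contiguity constraints. The topology, slot count and demands below are made-up examples.

import numpy as np

NUM_SLOTS = 320                                   # e.g. a 4-THz band in 12.5-GHz slots
links = {('A', 'B'): np.zeros(NUM_SLOTS, bool),   # True = slot occupied
         ('B', 'C'): np.zeros(NUM_SLOTS, bool),
         ('C', 'D'): np.zeros(NUM_SLOTS, bool)}

def first_fit(path_links, demand_slots):
    """Return the first block of contiguous slots that is free on every link of the path."""
    occupied = np.logical_or.reduce([links[l] for l in path_links])
    for start in range(NUM_SLOTS - demand_slots + 1):
        if not occupied[start:start + demand_slots].any():
            return start
    return None                                    # request is blocked

def allocate(path_links, demand_slots):
    start = first_fit(path_links, demand_slots)
    if start is not None:
        for l in path_links:
            links[l][start:start + demand_slots] = True
    return start

print(allocate([('A', 'B'), ('B', 'C')], 4))       # -> 0
print(allocate([('B', 'C'), ('C', 'D')], 3))       # -> 4 (slots 0-3 are busy on B-C)
print(allocate([('A', 'B')], 2))                   # -> 4 (slots 0-3 are busy on A-B)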
---
paper_title: PONIARD: A Programmable Optical Networking Infrastructure for Advanced Research and Development of Future Internet
paper_content:
Motivated by the design goals of Global Environment for Network Innovation (GENI), we consider how to support the slicing of link bandwidth resources as well as the virtualization of optical access networks and optical backbone mesh networks. Specifically, in this paper, we study a novel programmable mechanism called optical orthogonal frequency division multiplexing (OFDM)/orthogonal frequency division multiple access (OFDMA) for link virtualization. Unlike conventional time division multiplexing (TDM)/time division multiple access (TDMA) and wavelength division multiplexing (WDM)/wavelength division multiple access (WDMA) methods, optical OFDM/OFDMA utilizes advanced digital signal processing (DSP), parallel signal detection (PSD), and flexible resource management schemes for subwavelength level multiplexing and grooming. Simulations as well as experiments are conducted to demonstrate performance improvements and system benefits including cost-reduction and service transparency.
---
paper_title: Demonstration of novel spectrum-efficient elastic optical path network with per-channel variable capacity of 40 Gb/s to over 400 Gb/s
paper_content:
We demonstrated, for the first time, a novel spectrum-efficient elastic optical path network for 100 Gb/s services and beyond, based on flexible rate transceivers and variable-bandwidth wavelength crossconnects.
---
paper_title: Bit-rate-flexible all-optical OFDM transceiver using variable multi-carrier source and DQPSK/DPSK mixed multiplexing
paper_content:
We propose and demonstrate a bit-rate-flexible all-optical OFDM transceiver. Signals were successfully generated at 107, 42.8, 32.1, and 10.7Gbit/s and received through the transceiver. The 107-Gbit/s signal was successfully transmitted over 40-km-SMF without dispersion compensation.
---
paper_title: Transparent WDM network with bitrate tunable optical OFDM transponders
paper_content:
Reach estimations for several variable-bitrate OFDM schemes are presented and discussed in the framework of a transparent EU core network scenario. 44% reduction on OE interfaces is found compared to a fixed-bitrate 40Gb/s network.
---
paper_title: Elastic optical networks with 25–100G format-versatile WDM transmission systems
paper_content:
We propose a network model based on format-versatile transceiver for data-rate tunability. We estimate each format's resistance to degradations and dimension a backbone network accordingly, yielding 20% savings on resources compared to single-rate networks.
---
paper_title: Bit and Power Loading for Coherent Optical OFDM
paper_content:
We show the first experiment of bit and power loading for coherent optical orthogonal frequency-division-multiplexing (CO-OFDM) systems. The data rate of CO-OFDM systems can be dynamically adjusted according to the channel condition. The system performance can be further improved through optimal power loading into each modulation band.
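The rate adaptation described here rests on per-subcarrier bit loading. The sketch below illustrates the idea with a simple threshold rule; the modulation set, SNR thresholds, and toy channel are assumptions made for illustration and do not reproduce the experiment's loading algorithm.

```python
# Minimal bit-loading sketch: assign a modulation order per OFDM subcarrier from
# an estimated SNR, so the aggregate data rate adapts to channel conditions.
# The SNR thresholds below are illustrative placeholders, not measured values.
import numpy as np

# (bits per symbol, minimum SNR in dB) -- hypothetical switching thresholds
MODULATIONS = [(1, 6.8), (2, 9.8), (4, 16.5), (6, 22.5)]  # BPSK, QPSK, 16QAM, 64QAM

def bit_loading(snr_db):
    """Return bits/symbol per subcarrier using the densest format its SNR supports."""
    bits = np.zeros(len(snr_db), dtype=int)
    for i, snr in enumerate(snr_db):
        for b, threshold in MODULATIONS:          # thresholds sorted ascending
            if snr >= threshold:
                bits[i] = b                        # last satisfied (densest) wins
    return bits

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    snr = 20 + 6 * np.sin(np.linspace(0, np.pi, 64)) + rng.normal(0, 1, 64)  # toy channel
    bits = bit_loading(snr)
    symbol_rate_ghz = 10 / 64  # toy assumption: 10 GHz split over 64 subcarriers
    print("total bits per OFDM symbol:", bits.sum())
    print("approx. data rate: %.1f Gb/s" % (bits.sum() * symbol_rate_ghz))
```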
---
paper_title: Flexible-bandwidth and format-agile networking based on optical arbitrary waveform generation and wavelength selective switches
paper_content:
This paper experimentally demonstrates flexible-bandwidth networking by splitting a 500-GHz waveform generated by optical arbitrary waveform generation into its two tributary spectral slices using a liquid-crystal spatial light phase modulator as a wavelength selective switch.
---
paper_title: Distance-adaptive super-wavelength routing in elastic optical path network (SLICE) with optical OFDM
paper_content:
We propose a spectrally-efficient super-wavelength-path routing in SLICE by selecting a set of subcarrier numbers and modulation levels according to path distances. We demonstrate 420 Gb/s path routing using short-reach 14-subcarrier 8-APSK and long-reach 21-subcarrier QPSK.
---
paper_title: Bandwidth scalable, coherent transmitter based on parallel synthesis of multiple spectral slices
paper_content:
This paper presents a bandwidth-scalable, coherent optical transmitter based on the parallel synthesis of multiple spectral slices. As a proof-of-principle, two spectral slice, 6-ns DPSK and QPSK waveforms at 12 Gsymbols/s are generated and measured.
---
paper_title: Experimental demonstration of 400 Gb/s multi-flow, multi-rate, multi-reach optical transmitter for efficient elastic spectral routing
paper_content:
We demonstrate a multi-flow/multi-rate/multi-reach optical transmitter and spectral routing of total 400 Gb/s optical flows. The number of optical flows, bit rate, and optical reach can be adjusted to enable efficient elastic spectral routing.
---
paper_title: Optical Path Aggregation for 1-Tb/s Transmission in Spectrum-Sliced Elastic Optical Path Network
paper_content:
We propose and experimentally demonstrate optical path aggregation in a spectrum-sliced elastic optical path network (SLICE). Multiple optical orthogonal frequency-division-multiplexed (OFDM) 100-Gb/s optical paths are aggregated in the optical domain to form a spectrally continuous 1-Tb/s super-wavelength optical path and transmitted over a network of bandwidth-variable wavelength cross-connects. We evaluate the potential implementation issues and conclude that the OFDM paths can be optically aggregated with optical signal-to-noise ratio penalty of less than 1 dB.
---
paper_title: Bit and Power Loading for Coherent Optical OFDM
paper_content:
We show the first experiment of bit and power loading for coherent optical orthogonal frequency-division-multiplexing (CO-OFDM) systems. The data rate of CO-OFDM systems can be dynamically adjusted according to the channel condition. The system performance can be further improved through optimal power loading into each modulation band.
---
paper_title: Highly programmable wavelength selective switch based on liquid crystal on silicon switching elements
paper_content:
We present a novel wavelength selective switch (WSS) based on a liquid crystal on silicon (LCOS) switching element. The unit operates simultaneously at both 50 and 100 GHz channel spacing and is compatible with 40 G transmission requirements.
---
paper_title: Flexible and grid-less wavelength selective switch using LCOS technology
paper_content:
The increasing spectral efficiency of Optical Transmission systems is constrained by the limitations of wavelength switching and is driving a requirement for significantly more flexible approaches to routing of the optical traffic. We present the intrinsic Grid-free capabilities of LCOS and show how it can be used practically in a flexible Grid architecture to maximize total fiber capacity.
---
paper_title: Experimental demonstration of a gridless multi-granular optical network supporting flexible spectrum switching
paper_content:
A gridless dynamic multi-granular optical network supporting flexible spectrum allocation is proposed and experimentally demonstrated to efficiently accommodate high-speed traffic and increase channel density for lower speed traffic leading to improved network efficiency and scalability.
---
paper_title: Wavelength-Selective Switches for ROADM Applications
paper_content:
The trends, architecture, and performance of wavelength-selective switches (WSS) are analyzed in the context of their application to reconfigurable optical add/drop multiplexer (ROADM)-based optical networks. The resulting analyses define the requirements for the latest generation of ROADM systems and provide insight into the critical specifications of this technology. In addition, the current trends for WSS technology are reviewed in the context of synergies with the strengths of different switching technologies.
---
paper_title: Data rate and channel spacing flexible wavelength blocking filter
paper_content:
We present a high-resolution blocking filter, which seamlessly supports data rates from 2.5 Gbit/s to 160 Gbit/s with a granularity of 13.2 GHz. The filter consists of a linear array of 64 MEMS micromirrors and a high-dispersion echelle grating.
---
paper_title: Simple all-optical FFT scheme enabling Tbit/s real-time signal processing
paper_content:
A practical scheme to perform the fast Fourier transform in the optical domain is introduced. Optical real-time FFT signal processing is performed at speeds far beyond the limits of electronic digital processing, and with negligible energy consumption. To illustrate the power of the method we demonstrate an optical 400 Gbit/s OFDM receiver. It performs an optical real-time FFT on the consolidated OFDM data stream, thereby demultiplexing the signal into lower bit rate subcarrier tributaries, which can then be processed electronically.
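The key operation is an N-point FFT applied per OFDM symbol; the paper performs it optically in real time, but the same transform can be illustrated digitally. The sketch below (NumPy, toy QPSK data) only shows why the IFFT/FFT pair multiplexes and then cleanly separates the subcarrier tributaries.

```python
# Sketch of the transform that underlies OFDM (de)multiplexing, in the digital
# domain with NumPy. The paper performs this operation optically; this toy
# example only shows how an N-point (I)FFT separates N subcarrier tributaries.
import numpy as np

rng = np.random.default_rng(1)
N = 8                                    # subcarriers
symbols = 64                             # OFDM symbols
qpsk = (rng.integers(0, 2, (symbols, N)) * 2 - 1 +
        1j * (rng.integers(0, 2, (symbols, N)) * 2 - 1)) / np.sqrt(2)

tx = np.fft.ifft(qpsk, axis=1)           # multiplex: IFFT per OFDM symbol
rx = np.fft.fft(tx, axis=1)              # demultiplex: FFT per OFDM symbol

print("max recovery error:", np.max(np.abs(rx - qpsk)))  # ~1e-16, i.e. exact
```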
---
paper_title: Wavelength blocking filter with flexible data rates and channel spacing
paper_content:
This work presents a high-resolution (13.2 GHz) channel-blocking optical filter, suitable for use as a reconfigurable optical add/drop multiplexer (ROADM), which seamlessly supports data rates from 2.5 to 160 Gb/s. The filter consists of a linear array of 64 MEMS micromirrors and a high-dispersion echelle grating. The demonstrated device had an insertion loss of 9 dB, a loss ripple of 1.2 dB, and a group delay ripple of 15 ps. Data transmission through the device with various mixed data rate scenarios ranging from 2.5 to 160 Gb/s showed negligible penalty, except at 40 Gb/s where a maximum penalty of 1.5 dB was observed due to a phase coherence with the blocker filter ripple.
---
paper_title: Demonstration of novel spectrum-efficient elastic optical path network with per-channel variable capacity of 40 Gb/s to over 400 Gb/s
paper_content:
We demonstrated, for the first time, a novel spectrum-efficient elastic optical path network for 100 Gb/s services and beyond, based on flexible rate transceivers and variable-bandwidth wavelength crossconnects.
---
paper_title: Demonstration of bit rate variable ROADM functionality on an optical OFDM superchannel
paper_content:
We demonstrate a bit rate variable add- and drop function performed on an optical OFDM superchannel signal by optical filtering and superposition of OFDM subbands and the application of different modulation formats for dynamic networks.
---
paper_title: Data rate and channel spacing flexible wavelength blocking filter
paper_content:
We present a high-resolution blocking filter, which seamlessly supports data rates from 2.5 Gbit/s to 160 Gbit/s with a granularity of 13.2 GHz. The filter consists of a linear array of 64 MEMS micromirrors and a high-dispersion echelle grating.
---
paper_title: Highly programmable wavelength selective switch based on liquid crystal on silicon switching elements
paper_content:
We present a novel wavelength selective switch (WSS) based on a liquid crystal on silicon (LCOS) switching element. The unit operates simultaneously at both 50 and 100 GHz channel spacing and is compatible with 40 G transmission requirements.
---
paper_title: Flexible and grid-less wavelength selective switch using LCOS technology
paper_content:
The increasing spectral efficiency of Optical Transmission systems is constrained by the limitations of wavelength switching and is driving a requirement for significantly more flexible approaches to routing of the optical traffic. We present the intrinsic Grid-free capabilities of LCOS and show how it can be used practically in a flexible Grid architecture to maximize total fiber capacity.
---
paper_title: Filtering characteristics of highly-spectrum efficient spectrum-sliced elastic optical path (SLICE) network
paper_content:
We investigate the performance of OFDM-modulated signals in a spectrum-sliced elastic optical path network. We analyze the filtering characteristics and the guard band for multi-node transmission. The architecture increases spectral efficiency over the current WDM systems.
---
paper_title: Experimental demonstration of a gridless multi-granular optical network supporting flexible spectrum switching
paper_content:
A gridless dynamic multi-granular optical network supporting flexible spectrum allocation is proposed and experimentally demonstrated to efficiently accommodate high-speed traffic and increase channel density for lower speed traffic leading to improved network efficiency and scalability.
---
paper_title: Demonstration of novel spectrum-efficient elastic optical path network with per-channel variable capacity of 40 Gb/s to over 400 Gb/s
paper_content:
We demonstrated, for the first time, a novel spectrum-efficient elastic optical path network for 100 Gb/s services and beyond, based on flexible rate transceivers and variable-bandwidth wavelength crossconnects.
---
paper_title: Spectrum-efficient and scalable elastic optical path network: architecture, benefits, and enabling technologies
paper_content:
The sustained growth of data traffic volume calls for an introduction of an efficient and scalable transport platform for links of 100 Gb/s and beyond in the future optical network. In this article, after briefly reviewing the existing major technology options, we propose a novel, spectrum-efficient, and scalable optical transport network architecture called SLICE. The SLICE architecture enables sub-wavelength, superwavelength, and multiple-rate data traffic accommodation in a highly spectrum-efficient manner, thereby providing a fractional bandwidth service. Dynamic bandwidth variation of elastic optical paths provides network operators with new business opportunities offering cost-effective and highly available connectivity services through time-dependent bandwidth sharing, energy-efficient network operation, and highly survivable restoration with bandwidth squeezing. We also discuss an optical orthogonal frequency-division multiplexing-based flexible-rate transponder and a bandwidth-variable wavelength cross-connect as the enabling technologies of the SLICE concept. Finally, we present the performance evaluation and technical challenges that arise in this new network architecture.
---
paper_title: Experimental demonstration of a gridless multi-granular optical network supporting flexible spectrum switching
paper_content:
A gridless dynamic multi-granular optical network supporting flexible spectrum allocation is proposed and experimentally demonstrated to efficiently accommodate high-speed traffic and increase channel density for lower speed traffic leading to improved network efficiency and scalability.
---
paper_title: Wavelength-Selective Switches for ROADM Applications
paper_content:
The trends, architecture, and performance of wavelength-selective switches (WSS) are analyzed in the context of their application to reconfigurable optical add/drop multiplexer (ROADM)-based optical networks. The resulting analyses define the requirements for the latest generation of ROADM systems and provide insight into the critical specifications of this technology. In addition, the current trends for WSS technology are reviewed in the context of synergies with the strengths of different switching technologies.
---
paper_title: Dynamic routing and frequency slot assignment for elastic optical path networks that adopt distance adaptive modulation
paper_content:
We propose a dynamic routing and frequency slot assignment algorithm for SLICE networks that employ distance adaptive modulation. We verify that the spectrum utilization penalty that stems from non-uniform bandwidth allocation is marginal.
---
paper_title: Routing and Spectrum Allocation in OFDM-Based Optical Networks with Elastic Bandwidth Allocation
paper_content:
Orthogonal Frequency Division Multiplexing (OFDM) has been recently proposed as a modulation technique for optical networks, due to its good spectral efficiency and impairment tolerance. Optical OFDM is much more flexible compared to traditional WDM systems, enabling elastic bandwidth transmissions. We consider the planning problem of an OFDM-based optical network where we are given a traffic matrix that includes the requested transmission rates of the connections to be served. Connections are provisioned for their requested rate by elastically allocating spectrum using a variable number of OFDM subcarriers. We introduce the Routing and Spectrum Allocation (RSA) problem, as opposed to the typical Routing and Wavelength Assignment (RWA) problem of traditional WDM networks, and present various algorithms to solve the RSA. We start by presenting an optimal ILP RSA algorithm that minimizes the spectrum used to serve the traffic matrix, and also present a decomposition method that breaks RSA into two constituent subproblems, namely, (i) routing and (ii) spectrum allocation (R+SA) and solves them sequentially. We also propose a heuristic algorithm that serves connections one-by-one and use it to solve the planning problem by sequentially serving all traffic matrix connections. To feed the sequential algorithm, two ordering policies are proposed; a simulated annealing meta-heuristic is also used to obtain even better orderings. Our results indicate that the proposed sequential heuristic with appropriate ordering yields close to optimal solutions in low running times.
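The distinguishing constraint of RSA is that a connection must receive a contiguous block of spectrum slots that is free on every link of its route. A minimal first-fit sketch of that allocation step is shown below; the route is assumed precomputed, and the data structures are illustrative, not the paper's ILP or ordering heuristics.

```python
# First-fit spectrum allocation sketch: a connection needs a block of contiguous
# slots that is free on every link of its (precomputed) route -- the spectrum
# contiguity and continuity constraints that distinguish RSA from RWA.

def first_fit(route_links, link_slots, n_slots_needed):
    """Return the starting slot index of the first free contiguous block, or None."""
    total = len(next(iter(link_slots.values())))
    for start in range(total - n_slots_needed + 1):
        block = range(start, start + n_slots_needed)
        if all(not link_slots[l][s] for l in route_links for s in block):
            for l in route_links:                  # commit the allocation
                for s in block:
                    link_slots[l][s] = True
            return start
    return None                                    # connection is blocked

if __name__ == "__main__":
    links = {"A-B": [False] * 16, "B-C": [False] * 16}
    print(first_fit(["A-B"], links, 3))            # -> 0
    print(first_fit(["A-B", "B-C"], links, 4))     # -> 3 (slots 0-2 busy on A-B)
    print(first_fit(["B-C"], links, 2))            # -> 0 (B-C slots 0-1 still free)
```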
---
paper_title: Spectrum-efficient and scalable elastic optical path network: architecture, benefits, and enabling technologies
paper_content:
The sustained growth of data traffic volume calls for an introduction of an efficient and scalable transport platform for links of 100 Gb/s and beyond in the future optical network. In this article, after briefly reviewing the existing major technology options, we propose a novel, spectrum-efficient, and scalable optical transport network architecture called SLICE. The SLICE architecture enables sub-wavelength, superwavelength, and multiple-rate data traffic accommodation in a highly spectrum-efficient manner, thereby providing a fractional bandwidth service. Dynamic bandwidth variation of elastic optical paths provides network operators with new business opportunities offering cost-effective and highly available connectivity services through time-dependent bandwidth sharing, energy-efficient network operation, and highly survivable restoration with bandwidth squeezing. We also discuss an optical orthogonal frequency-division multiplexing-based flexible-rate transponder and a bandwidth-variable wavelength cross-connect as the enabling technologies of the SLICE concept. Finally, we present the performance evaluation and technical challenges that arise in this new network architecture.
---
paper_title: Defragmentation of transparent Flexible optical WDM (FWDM) networks
paper_content:
We introduce the network defragmentation problem for FWDM networks, formulate it, and propose heuristics. The network defragmentation process consolidates the available spectrum significantly while minimizing the number of interrupted connections.
---
paper_title: Routing and Spectrum Allocation in OFDM-Based Optical Networks with Elastic Bandwidth Allocation
paper_content:
Orthogonal Frequency Division Multiplexing (OFDM) has been recently proposed as a modulation technique for optical networks, due to its good spectral efficiency and impairment tolerance. Optical OFDM is much more flexible compared to traditional WDM systems, enabling elastic bandwidth transmissions. We consider the planning problem of an OFDM-based optical network where we are given a traffic matrix that includes the requested transmission rates of the connections to be served. Connections are provisioned for their requested rate by elastically allocating spectrum using a variable number of OFDM subcarriers. We introduce the Routing and Spectrum Allocation (RSA) problem, as opposed to the typical Routing and Wavelength Assignment (RWA) problem of traditional WDM networks, and present various algorithms to solve the RSA. We start by presenting an optimal ILP RSA algorithm that minimizes the spectrum used to serve the traffic matrix, and also present a decomposition method that breaks RSA into two constituent subproblems, namely, (i) routing and (ii) spectrum allocation (R+SA) and solves them sequentially. We also propose a heuristic algorithm that serves connections one-by-one and use it to solve the planning problem by sequentially serving all traffic matrix connections. To feed the sequential algorithm, two ordering policies are proposed; a simulated annealing meta-heuristic is also used to obtain even better orderings. Our results indicate that the proposed sequential heuristic with appropriate ordering yields close to optimal solutions in low running times.
---
paper_title: Dynamic routing and frequency slot assignment for elastic optical path networks that adopt distance adaptive modulation
paper_content:
We propose a dynamic routing and frequency slot assignment algorithm for SLICE networks that employ distance adaptive modulation. We verify that the spectrum utilization penalty that stems from non-uniform bandwidth allocation is marginal.
---
paper_title: Routing and Spectrum Allocation in OFDM-Based Optical Networks with Elastic Bandwidth Allocation
paper_content:
Orthogonal Frequency Division Multiplexing (OFDM) has been recently proposed as a modulation technique for optical networks, due to its good spectral efficiency and impairment tolerance. Optical OFDM is much more flexible compared to traditional WDM systems, enabling elastic bandwidth transmissions. We consider the planning problem of an OFDM-based optical network where we are given a traffic matrix that includes the requested transmission rates of the connections to be served. Connections are provisioned for their requested rate by elastically allocating spectrum using a variable number of OFDM subcarriers. We introduce the Routing and Spectrum Allocation (RSA) problem, as opposed to the typical Routing and Wavelength Assignment (RWA) problem of traditional WDM networks, and present various algorithms to solve the RSA. We start by presenting an optimal ILP RSA algorithm that minimizes the spectrum used to serve the traffic matrix, and also present a decomposition method that breaks RSA into two constituent subproblems, namely, (i) routing and (ii) spectrum allocation (R+SA) and solves them sequentially. We also propose a heuristic algorithm that serves connections one-by-one and use it to solve the planning problem by sequentially serving all traffic matrix connections. To feed the sequential algorithm, two ordering policies are proposed; a simulated annealing meta-heuristic is also used to obtain even better orderings. Our results indicate that the proposed sequential heuristic with appropriate ordering yields close to optimal solutions in low running times.
---
paper_title: Dynamic routing and spectrum assignment in flexible optical path networks
paper_content:
We propose dynamic routing and spectrum assignment algorithms for bitrate-flexible lightpaths in OFDM-based optical networks. The novel algorithms enable dynamic spectrum assignment with more efficient resource utilization and less traffic blockings.
---
paper_title: Impact of transparent network constraints on capacity gain of elastic channel spacing
paper_content:
We compare fixed-grid network architectures with variable-spacing OFDM based solutions. We show that capacity gains can reach up to 50% but are strongly affected by physical and topological constraints of transparent networks and traffic statistics.
---
paper_title: Distance-adaptive super-wavelength routing in elastic optical path network (SLICE) with optical OFDM
paper_content:
We propose a spectrally-efficient super-wavelength-path routing in SLICE by selecting a set of subcarrier numbers and modulation levels according to path distances. We demonstrate 420 Gb/s path routing using short-reach 14-subcarrier 8-APSK and long-reach 21-subcarrier QPSK.
---
paper_title: Spectrally/bitrate flexible optical network planning
paper_content:
We consider the Routing and Spectrum Allocation (RSA) problem in an OFDM-based optical network with elastic bandwidth allocation. We assess the spectrum utilization gains of this flexible architecture compared to a traditional fixed-grid rigid-bandwidth WDM network.
---
paper_title: Elastic Bandwidth Allocation in Flexible OFDM-Based Optical Networks
paper_content:
Orthogonal Frequency Division Multiplexing (OFDM) has recently been proposed as a modulation technique for optical networks, because of its good spectral efficiency, flexibility, and tolerance to impairments. We consider the planning problem of an OFDM optical network, where we are given a traffic matrix that includes the requested transmission rates of the connections to be served. Connections are provisioned for their requested rate by elastically allocating spectrum using a variable number of OFDM subcarriers and choosing an appropriate modulation level, taking into account the transmission distance. We introduce the Routing, Modulation Level and Spectrum Allocation (RMLSA) problem, as opposed to the typical Routing and Wavelength Assignment (RWA) problem of traditional WDM networks, prove that it is also NP-complete and present various algorithms to solve it. We start by presenting an optimal ILP RMLSA algorithm that minimizes the spectrum used to serve the traffic matrix, and also present a decomposition method that breaks RMLSA into its two constituent subproblems, namely 1) routing and modulation level and 2) spectrum allocation (RML+SA), and solves them sequentially. We also propose a heuristic algorithm that serves connections one-by-one and use it to solve the planning problem by sequentially serving all the connections in the traffic matrix. In the sequential algorithm, we investigate two policies for defining the order in which connections are considered. We also use a simulated annealing meta-heuristic to obtain even better orderings. We examine the performance of the proposed algorithms through simulation experiments and evaluate the spectrum utilization benefits that can be obtained by utilizing OFDM elastic bandwidth allocation, when compared to a traditional WDM network.
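The distance-adaptive part of RMLSA can be sketched as a small rule: choose the densest modulation format whose transparent reach covers the path, then derive the number of spectrum slots from the requested rate. The reach values, slot width, and per-slot symbol rate below are illustrative assumptions, not figures from the paper.

```python
# Distance-adaptive sketch: pick the densest modulation whose (hypothetical)
# transparent reach covers the path, then derive the number of frequency slots.
# Reach values and the 12.5 GHz slot granularity are illustrative assumptions.
import math

# modulation -> (bits per symbol, assumed transparent reach in km)
FORMATS = {"BPSK": (1, 4000), "QPSK": (2, 2000), "8QAM": (3, 1000), "16QAM": (4, 500)}
SLOT_GHZ = 12.5        # slot width
BAUD_PER_SLOT = 12.5   # Gbaud carried per slot (toy assumption)

def rmlsa_slots(rate_gbps, path_km):
    """Return (format, slots, occupied GHz) for a demand, or None if out of reach."""
    usable = [(bits, name) for name, (bits, reach) in FORMATS.items() if reach >= path_km]
    if not usable:
        return None                      # would need regeneration
    bits, name = max(usable)             # densest usable format
    slots = math.ceil(rate_gbps / (BAUD_PER_SLOT * bits))
    return name, slots, slots * SLOT_GHZ

if __name__ == "__main__":
    for rate, dist in [(100, 400), (100, 1800), (400, 3000)]:
        print(rate, "Gb/s,", dist, "km ->", rmlsa_slots(rate, dist))
```

The output makes the spectrum saving visible: the same 100 Gb/s demand occupies 25 GHz on a short path but 50 GHz on a long one.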
---
paper_title: Distance-adaptive spectrum allocation in elastic optical path network (SLICE) with bit per symbol adjustment
paper_content:
We present a concept of spectrally-efficient optical networking with distance-adaptive spectral allocation by adjusting the number of modulation levels. We demonstrate it by routing 40 Gb/s optical paths using short-reach, narrow-spectrum 16APSK and long-reach QPSK.
---
paper_title: A quick method for finding shortest pairs of disjoint paths
paper_content:
Let G be a directed graph containing n vertices, one of which is a distinguished source s, and m edges, each with a non-negative cost. We consider the problem of finding, for each possible sink vertex v, a pair of edge-disjoint paths from s to v of minimum total edge cost. Suurballe has given an O(n2 logn)-time algorithm for this problem. We give an implementation of Suurballe's algorithm that runs in O(m log(1+ m/n)n) time and O(m) space. Our algorithm builds an implicit representation of the n pairs of paths; given this representation, the time necessary to explicitly construct the pair of paths for any given sink is O(1) per edge on the paths.
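Finding a minimum-total-cost pair of edge-disjoint paths can also be cast as a minimum-cost flow of two units with unit edge capacities, which is the problem Suurballe's algorithm solves more efficiently with two shortest-path passes. The sketch below uses networkx's generic min-cost-flow solver rather than the paper's specialized implementation; the toy graph is invented for illustration.

```python
# The shortest pair of edge-disjoint s-t paths as a min-cost flow of value 2
# with unit edge capacities. This is a sketch using a generic solver, not the
# efficient Suurballe/Tarjan implementation described in the paper.
import networkx as nx

def shortest_disjoint_pair(edges, s, t):
    """edges: iterable of (u, v, cost) for a directed graph with non-negative costs."""
    G = nx.DiGraph()
    for u, v, w in edges:
        G.add_edge(u, v, capacity=1, weight=w)
    G.add_node(s, demand=-2)   # push two units of flow out of s ...
    G.add_node(t, demand=2)    # ... and into t
    flow = nx.min_cost_flow(G)
    used = [(u, v) for u in flow for v, f in flow[u].items() if f > 0]
    return used, sum(G[u][v]["weight"] for u, v in used)

if __name__ == "__main__":
    edges = [("s", "a", 1), ("a", "t", 1), ("s", "b", 2),
             ("b", "t", 2), ("a", "b", 1), ("b", "a", 1)]
    print(shortest_disjoint_pair(edges, "s", "t"))
    # -> the edges of the two disjoint paths s-a-t and s-b-t, total cost 6
```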
---
paper_title: Algorithms for maximizing spectrum efficiency in elastic optical path networks that adopt distance adaptive modulation
paper_content:
We propose optical path routing and frequency slot assignment algorithms that suit elastic optical paths and the distance-adaptive modulation scheme. The algorithms are proven to yield high spectrum efficiency for the distance-adaptive frequency allocation scheme.
---
paper_title: Survivable transparent Flexible optical WDM (FWDM) networks
paper_content:
We propose an efficient survivable FWDM network design algorithm for the first time. Survivable FWDM networks are efficient in terms of spectral utilization, power consumption, and cost compared to the conventional survivable fixed grid networks.
---
paper_title: Mobipack: optimal hitless SONET defragmentation in near-optimal cost
paper_content:
We study the problem of bandwidth fragmentation in links that comprise rings and meshes in SONET networks. Fragmentation is a serious challenge for network operators since it creates "holes" in the transport pipe causing new demands to be rejected, in spite of sufficient bandwidth being available. Unlike the well-studied, general fragmentation problem, link defragmentation is "hard" and novel due to some unique constraints imposed by the SONET standard. Since a defragmentation operation typically occurs on a network carrying live traffic, in addition to the "quality" of the output, any link defragmentation algorithm has to avoid traffic hit and also optimize the "cost" of reorganizing circuits. We propose an algorithm called mobipack that is optimal in its defragmentation quality. Moreover, we demonstrate via extensive simulations, that it also achieves its goal in a hitless manner with only marginal additional cost over the optimal, making it extremely attractive to use in practice
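The "holes in the transport pipe" picture can be illustrated with a toy repacking step that shifts circuits toward one end of a link so the free capacity becomes contiguous. This is only a sketch: Mobipack's actual algorithm additionally guarantees hitless moves, respects SONET concatenation rules, and bounds reorganization cost, none of which is modelled here.

```python
# Toy defragmentation sketch: re-pack circuits toward the low end of a link so
# that the free capacity becomes one contiguous block. Hitless operation and
# SONET-specific constraints are deliberately ignored.

def defragment(circuits, capacity):
    """circuits: {name: (start_slot, width)}. Returns new placements, #moves, free slots."""
    new_layout, moves, cursor = {}, 0, 0
    # keep the current left-to-right order to limit disruption
    for name, (start, width) in sorted(circuits.items(), key=lambda kv: kv[1][0]):
        new_layout[name] = (cursor, width)
        if cursor != start:
            moves += 1
        cursor += width
    free = capacity - cursor
    return new_layout, moves, free

if __name__ == "__main__":
    link = {"c1": (0, 3), "c2": (5, 2), "c3": (10, 4)}   # holes at slots 3-4 and 7-9
    print(defragment(link, 16))
    # -> ({'c1': (0, 3), 'c2': (3, 2), 'c3': (5, 4)}, 2, 7): one 7-slot free block
```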
---
paper_title: Routing and Spectrum Allocation in OFDM-Based Optical Networks with Elastic Bandwidth Allocation
paper_content:
Orthogonal Frequency Division Multiplexing (OFDM) has been recently proposed as a modulation technique for optical networks, due to its good spectral efficiency and impairment tolerance. Optical OFDM is much more flexible compared to traditional WDM systems, enabling elastic bandwidth transmissions. We consider the planning problem of an OFDM-based optical network where we are given a traffic matrix that includes the requested transmission rates of the connections to be served. Connections are provisioned for their requested rate by elastically allocating spectrum using a variable number of OFDM subcarriers. We introduce the Routing and Spectrum Allocation (RSA) problem, as opposed to the typical Routing and Wavelength Assignment (RWA) problem of traditional WDM networks, and present various algorithms to solve the RSA. We start by presenting an optimal ILP RSA algorithm that minimizes the spectrum used to serve the traffic matrix, and also present a decomposition method that breaks RSA into two constituent subproblems, namely, (i) routing and (ii) spectrum allocation (R+SA) and solves them sequentially. We also propose a heuristic algorithm that serves connections one-by-one and use it to solve the planning problem by sequentially serving all traffic matrix connections. To feed the sequential algorithm, two ordering policies are proposed; a simulated annealing meta-heuristic is also used to obtain even better orderings. Our results indicate that the proposed sequential heuristic with appropriate ordering yields close to optimal solutions in low running times.
---
paper_title: Dynamic bandwidth allocation in flexible OFDM-based networks
paper_content:
We propose a general policy to allocate subcarriers to time-varying traffic in a flexible OFDM optical network. We compare the OFDM network performance to that of a fixed-grid WDM network using simulations.
---
paper_title: Defragmentation of transparent Flexible optical WDM (FWDM) networks
paper_content:
We introduce the network defragmentation problem for FWDM networks, formulate it, and propose heuristics. The network defragmentation process consolidates the available spectrum significantly while minimizing the number of interrupted connections.
---
paper_title: Traffic grooming in Spectrum-Elastic Optical Path Networks
paper_content:
We propose a novel approach to traffic grooming in Spectrum-Elastic Optical Path Networks. Higher spectrum efficiency is achieved by our approach compared with the non-traffic-grooming scenario.
---
paper_title: Highly survivable restoration scheme employing optical bandwidth squeezing in spectrum-sliced elastic optical path (SLICE) network
paper_content:
This paper proposes a novel restoration scheme in the spectrum-sliced elastic optical network. The proposed scheme achieves a high level of survivability for the traffic that is subject to the committed service profile.
---
paper_title: PONIARD: A Programmable Optical Networking Infrastructure for Advanced Research and Development of Future Internet
paper_content:
Motivated by the design goals of Global Environment for Network Innovation (GENI), we consider how to support the slicing of link bandwidth resources as well as the virtualization of optical access networks and optical backbone mesh networks. Specifically, in this paper, we study a novel programmable mechanism called optical orthogonal frequency division multiplexing (OFDM)/orthogonal frequency division multiple access (OFDMA) for link virtualization. Unlike conventional time division multiplexing (TDM)/time division multiple access (TDMA) and wavelength division multiplexing (WDM)/wavelength division multiple access (WDMA) methods, optical OFDM/OFDMA utilizes advanced digital signal processing (DSP), parallel signal detection (PSD), and flexible resource management schemes for subwavelength level multiplexing and grooming. Simulations as well as experiments are conducted to demonstrate performance improvements and system benefits including cost-reduction and service transparency.
---
paper_title: Adaptive IP/optical OFDM networking design
paper_content:
A new networking approach based on IP/optical OFDM technologies is proposed, providing an adaptive mechanism of bandwidth provisioning and pipe resizing for dynamic traffic flows. A comparison study is presented to demonstrate its advantages.
---
paper_title: Virtualized optical network (VON) for agile cloud computing environment
paper_content:
A virtualized optical network is proposed as a key to implementing increased agility and flexibility into a cloud computing environment by providing any-to-any connectivity with the appropriate optical bandwidth at the appropriate time.
---
paper_title: A programmable router interface supporting link virtualization with adaptive optical OFDMA transmission
paper_content:
A new programmable router interface with adaptive packet over optical OFDMA transmission is proposed, providing a perfect mechanism of bandwidth partition for link virtualization. A comparison study is presented to demonstrate its advantages.
---
paper_title: Energy-efficient global networks and their implications
paper_content:
Nowadays, energy related problems are becoming a key environmental, social and political issue because the demand for energy and power resources grows continuously. A decoupling of economic growth and energy consumption becomes a very important goal that can be achieved only by increasing the energy efficiency of new technologies and processes, i.e., the ability to provide a higher performance by consuming less energy. Communication networks play a significant role as the crucial part of the information and communication technologies. An adequate and future-ready communication network infrastructure has become one of the most important strategic goals of any government, region, and municipality, because a high-capacity and efficient network leads to an accelerated development of both business and society. This paper aims to give an overview on possible potentials of new technologies for implementing energy-efficient network elements. It identifies main energy-related issues in high-performance network elements and tries to draw attention to some promising approaches for implementing low-consuming components and networks. Various aspects related to applications and services that can make use of the energy-efficient and high-performance global networks are discussed. Some thoughts about possible implications on energy productivity, economy, and society can also be found.
---
paper_title: Spectrum-efficient and scalable elastic optical path network: architecture, benefits, and enabling technologies
paper_content:
The sustained growth of data traffic volume calls for an introduction of an efficient and scalable transport platform for links of 100 Gb/s and beyond in the future optical network. In this article, after briefly reviewing the existing major technology options, we propose a novel, spectrum- efficient, and scalable optical transport network architecture called SLICE. The SLICE architecture enables sub-wavelength, superwavelength, and multiple-rate data traffic accommodation in a highly spectrum-efficient manner, thereby providing a fractional bandwidth service. Dynamic bandwidth variation of elastic optical paths provides network operators with new business opportunities offering cost-effective and highly available connectivity services through time-dependent bandwidth sharing, energy-efficient network operation, and highly survivable restoration with bandwidth squeezing. We also discuss an optical orthogonal frequency-division multiplexing-based flexible-rate transponder and a bandwidth-variable wavelength cross-connect as the enabling technologies of SLICE concept. Finally, we present the performance evaluation and technical challenges that arise in this new network architecture.
---
paper_title: Highly survivable restoration scheme employing optical bandwidth squeezing in spectrum-sliced elastic optical path (SLICE) network
paper_content:
This paper proposes a novel restoration scheme in the spectrum-sliced elastic optical network. The proposed scheme achieves a high level of survivability for the traffic that is subject to the committed service profile.
---
|
Title: A Survey on OFDM-Based Elastic Core Optical Networking
Section 1: Introduction
Description 1: Introduce the rapid growth of Internet traffic and the need for large-capacity optical fiber transmission systems, and overview the challenges and requirements for future optical networks.
Section 2: OFDM Principle
Description 2: Explain the basic principles of OFDM technology, including its operation, advantages, and the core components of an OFDM system.
Section 3: Building Blocks of OFDM Systems
Description 3: Describe the key building blocks of an OFDM system, detailing components and functions such as guard intervals, cyclic prefixes, channel estimation, and link adaption.
Section 4: Advantages and Disadvantages of OFDM
Description 4: Discuss the various advantages of OFDM technology as well as its disadvantages and the challenges it presents.
Section 5: Optical OFDM Transmission Technology
Description 5: Provide an overview of Optical OFDM transmission technology, including the classifications and types of signal synthesis and detection mechanisms.
Section 6: O-OFDM Signal Synthesis Types
Description 6: Explain the different signal synthesis types for optical OFDM, such as FFT-based approaches and optical approaches, and discuss their implementations.
Section 7: O-OFDM Signal Detection Types
Description 7: Describe the types of signal detection for optical OFDM, including direct detection and coherent detection, along with their respective advantages and limitations.
Section 8: MIMO O-OFDM
Description 8: Introduce MIMO in the context of optical OFDM and how it is used to increase the capacity or reduce the transmission impairments.
Section 9: Modulation Formats and Adaptive Modulation
Description 9: Detail the advanced modulation formats and adaptive modulation techniques used in O-OFDM systems to support high-speed transmission.
Section 10: Elastic Optical Network Concept
Description 10: Introduce the concept of elastic optical networks and explain various proposed architectures such as SLICE, FWDM, and Data-Rate Elastic Optical Network.
Section 11: OFDM-Based Elastic Optical Network Architecture
Description 11: Discuss the architecture and components of OFDM-based elastic optical networks, and explain the benefits and requirements for such networks.
Section 12: Data-Rate/Bandwidth-Variable Transponder
Description 12: Describe the data-rate/bandwidth-variable transponder and mechanisms to support multiple data rates using OFDM technology.
Section 13: Bandwidth-Variable Optical Switching
Description 13: Explain the design and functionality of bandwidth-variable optical switching nodes in elastic optical networks.
Section 14: Network-Level Technologies
Description 14: Cover the network-level technologies required for elastic optical networks, including spectrum slot specifications, RSA algorithms, traffic grooming, survivability strategies, and network control and management schemes.
Section 15: Conclusion
Description 15: Summarize the key points covered in the survey and outline the potential future research directions for OFDM-based elastic optical networks.
|
A small trip in the untranquil world of genomes A survey on the detection and analysis of genome rearrangement breakpoints
| 19 |
---
paper_title: Are molecular cytogenetics and bioinformatics suggesting diverging models of ancestral mammalian genomes?
paper_content:
Excavating ancestral genomes: The recent release of the chicken genome sequence (Hillier et al. 2004) provided exciting news for the comparative genomics community as it allows insights into the early evolution of the human genome. A bird species can now be used as an outgroup to model early mammalian genome organization and reshuffling. The genome sequence data have already been incorporated in a computational analysis of chicken, mouse, rat, and human genome sequences for the reconstruction of the ancestral genome organization of both a mammalian ancestor as well as a murid rodent ancestor (Hillier et al. 2004; Bourque et al. 2005). This bioinformatic effort joins a molecular cytogenetic model (Richard et al. 2003; Yang et al. 2003; Robinson et al. 2004; Svartman et al. 2004; Wienberg 2004; Froenicke 2005) as the second global approach to explore the architecture of the ancestral eutherian karyotype—a fundamental question in comparative genomics. Since both models use the human genome as reference, they are readily comparable. Surprisingly, however, they share few similarities. Only two small autosomes and the sex chromosomes of the hypothesized ancestral karyotypes are common to both. Unfortunately, given its significance, neither the extent of these differences nor their impact on comparative genomics has been discussed by Bourque and colleagues (2005). In an attempt to redress this, we compare the two methods of ancestral genome reconstruction, verify the resulting models, and discuss reasons for their apparent divergence.
---
paper_title: Human and mouse genomic sequences reveal extensive breakpoint reuse in mammalian evolution
paper_content:
The human and mouse genomic sequences provide evidence for a larger number of rearrangements than previously thought and reveal extensive reuse of breakpoints from the same short fragile regions. Breakpoint clustering in regions implicated in cancer and infertility have been reported in previous studies; we report here on breakpoint clustering in chromosome evolution. This clustering reveals limitations of the widely accepted random breakage theory that has remained unchallenged since the mid-1980s. The genome rearrangement analysis of the human and mouse genomes implies the existence of a large number of very short “hidden” synteny blocks that were invisible in the comparative mapping data and ignored in the random breakage model. These blocks are defined by closely located breakpoints and are often hard to detect. Our results suggest a model of chromosome evolution that postulates that mammalian genomes are mosaics of fragile regions with high propensity for rearrangements and solid regions with low propensity for rearrangements.
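The notion of breakpoints between two genomes can be illustrated on toy data: represent each genome as a signed gene order and count adjacencies present in one genome but not the other. The sketch below covers only this downstream counting step on invented permutations; the paper's analysis derives its synteny blocks from whole-genome sequence alignments, not from toy gene orders.

```python
# Minimal sketch: count breakpoints between two genomes given as signed gene
# orders (one chromosome each). Two genes form a conserved adjacency if they
# appear consecutively, with the same relative orientation, in both genomes.

def adjacencies(order):
    """Set of oriented adjacencies, canonicalized so that (+a,+b) == (-b,-a)."""
    adj = set()
    for a, b in zip(order, order[1:]):
        adj.add(min((a, b), (-b, -a)))
    return adj

def breakpoints(genome_a, genome_b):
    """Adjacencies of genome_a that are broken in genome_b."""
    return len(adjacencies(genome_a) - adjacencies(genome_b))

if __name__ == "__main__":
    human_like = [1, 2, 3, 4, 5, 6]
    mouse_like = [1, -3, -2, 4, 6, 5]            # one inversion plus a reshuffled tail
    print(breakpoints(human_like, mouse_like))   # -> 4
```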
---
paper_title: The Fragile Breakage versus Random Breakage Models of Chromosome Evolution
paper_content:
For many years, studies of chromosome evolution were dominated by the random breakage theory, which implies that there are no rearrangement hot spots in the human genome. In 2003, Pevzner and Tesler argued against the random breakage model and proposed an alternative "fragile breakage" model of chromosome evolution. In 2004, Sankoff and Trinh argued against the fragile breakage model and raised doubts that Pevzner and Tesler provided any evidence of rearrangement hot spots. We investigate whether Sankoff and Trinh indeed revealed a flaw in the arguments of Pevzner and Tesler. We show that Sankoff and Trinh's synteny block identification algorithm makes erroneous identifications even in small toy examples and that their parameters do not reflect the realities of the comparative genomic architecture of human and mouse. We further argue that if Sankoff and Trinh had fixed these problems, their arguments in support of the random breakage model would disappear. Finally, we study the link between rearrangements and regulatory regions and argue that long regulatory regions and inhomogeneity of gene distribution in mammalian genomes may be responsible for the breakpoint reuse phenomenon.
---
paper_title: Evolution's cauldron: duplication, deletion, and rearrangement in the mouse and human genomes.
paper_content:
This study examines genomic duplications, deletions, and rearrangements that have happened at scales ranging from a single base to complete chromosomes by comparing the mouse and human genomes. From whole-genome sequence alignments, 344 large (>100-kb) blocks of conserved synteny are evident, but these are further fragmented by smaller-scale evolutionary events. Excluding transposon insertions, on average in each megabase of genomic alignment we observe two inversions, 17 duplications (five tandem or nearly tandem), seven transpositions, and 200 deletions of 100 bases or more. This includes 160 inversions and 75 duplications or transpositions of length >100 kb. The frequencies of these smaller events are not substantially higher in finished portions in the assembly. Many of the smaller transpositions are processed pseudogenes; we define a "syntenic" subset of the alignments that excludes these and other small-scale transpositions. These alignments provide evidence that approximately 2% of the genes in the human/mouse common ancestor have been deleted or partially deleted in the mouse. There also appears to be slightly less nontransposon-induced genome duplication in the mouse than in the human lineage. Although some of the events we detect are possibly due to misassemblies or missing data in the current genome sequence or to the limitations of our methods, most are likely to represent genuine evolutionary events. To make these observations, we developed new alignment techniques that can handle large gaps in a robust fashion and discriminate between orthologous and paralogous alignments.
---
paper_title: The convergence of cytogenetics and rearrangement-based models for ancestral genome reconstruction.
paper_content:
Froenicke et al. (2006) asked the question “Are molecular cytogenetics and bioinformatics suggesting diverging models of ancestral mammalian genomes?” Their commentary seems to imply that cytogenetics is superior to “bioinformatics” when it comes to studies of ancestral mammalian genomes. But, comparing cytogenetics with other approaches to deriving ancestral genomic architectures (like MGR developed by Bourque and Pevzner 2002) on two very different data sets (80+ cytogenetic maps vs. four distantly related sequenced genomes) does not say much about the merits and demerits of the approaches; instead, it indirectly evaluates the quality of the input data sets. In this context, the main conclusion of Froenicke et al. (2006) amounts to the statement that 80+ cytogenetic maps lead to a more definitive reconstruction than four divergent genomes when it comes to low-resolution ancestral architectures. We never argued against this point in our publications and even advocated for the use of radiation hybrid mapping (as a trade-off in resolution between cytogenetic and sequencing data) to extend the number of analyzed genomes (Murphy et al. 2003, 2005). Indeed, with more taxon sampling, a rearrangementbased reconstruction with just seven genomes (Murphy et al. 2005) is already highly consistent with the cytogenetics reconstruction. Before we substantiate this convergence, we clarify the notion of weak associations, which was used in Bourque et al. (2005) to alert the reader that some of the adjacencies were left as unresolved in the ancestor. The discrepancies identified by Froenicke et al. (2006) all involve weak associations and point toward a misunderstanding more than a contradiction. Finally, we underline some of the important strengths exclusive to rearrangement-based approaches, such as the ability to detect smaller genomic segments, to handle fast evolving lineages, and to orient conserved segments in the ancestors.
---
paper_title: Hotspots of mammalian chromosomal evolution
paper_content:
Background: Chromosomal evolution is thought to occur through a random process of breakage and rearrangement that leads to karyotype differences and disruption of gene order. With the availability of both the human and mouse genomic sequences, detailed analysis of the sequence properties underlying these breakpoints is now possible.
---
paper_title: Chromosomal breakpoint reuse in genome sequence rearrangement
paper_content:
In order to apply gene-order rearrangement algorithms to the comparison of genome sequences, Pevzner and Tesler bypass gene finding and ortholog identification and use the order of homologous blocks of unannotated sequence as input. The method excludes blocks shorter than a threshold length. Here we investigate possible biases introduced by eliminating short blocks, focusing on the notion of breakpoint reuse introduced by these authors. Analytic and simulation methods show that reuse is very sensitive to the proportion of blocks excluded. As is pertinent to the comparison of mammalian genomes, this exclusion risks randomizing the comparison partially or entirely.
---
paper_title: Mechanism and regulation of human non-homologous DNA end-joining
paper_content:
In multicellular eukaryotes, non-homologous DNA end-joining (NHEJ) is the primary pathway for repairing double-stranded DNA breaks (DSBs). The other important pathway for the repair of such breaks is homologous recombination, which is restricted to late S and G2 phases in dividing cells. Pathological DSBs result when there is replication across a nick, when ionizing radiation passes near the DNA, and when reactive-oxygen species contact DNA. Such breaks are repaired by NHEJ when they occur in G0, G1 or early S phases of the cell cycle, and often even during late S and G2 phases. Physiological DSBs result during V(D)J recombination and class-switch recombination. The rejoining phase of these two processes uses NHEJ. NHEJ is typically imprecise in multicellular eukaryotes, making it the only main DNA-repair pathway that is error prone. Because of its error-prone nature, NHEJ might contribute to cancer and ageing. Defects in NHEJ result in sensitivity to ionizing radiation and in a lack of lymphocytes. The lack of lymphocytes results from the loss of the ability to complete V(D)J recombination. In humans, mutations in Artemis are responsible for about 15% of cases of severe combined immune deficiency syndrome. Artemis, in complex with the DNA-dependent protein kinase catalytic subunit (DNA-PKcs), is responsible for trimming the DNA ends in NHEJ.
---
paper_title: Structural variation in the human genome
paper_content:
The first wave of information from the analysis of the human genome revealed SNPs to be the main source of genetic and phenotypic human variation. However, the advent of genome-scanning technologies has now uncovered an unexpectedly large extent of what we term 'structural variation' in the human genome. This comprises microscopic and, more commonly, submicroscopic variants, which include deletions, duplications and large-scale copy-number variants - collectively termed copy-number variants or copy-number polymorphisms - as well as insertions, inversions and translocations. Rapidly accumulating evidence indicates that structural variants can comprise millions of nucleotides of heterogeneity within every genome, and are likely to make an important contribution to human diversity and disease susceptibility.
---
paper_title: Nonhomologous end joining in yeast.
paper_content:
Nonhomologous end joining (NHEJ), the direct rejoining of DNA double-strand breaks, is closely associated with illegitimate recombination and chromosomal rearrangement. This has led to the concept that NHEJ is error prone. Studies with the yeast Saccharomyces cerevisiae have revealed that this model eukaryote has a classical NHEJ pathway dependent on Ku and DNA ligase IV, as well as alternative mechanisms for break rejoining. The evolutionary conservation of the Ku-dependent process includes several genes dedicated to this pathway, indicating that classical NHEJ at least is a strong contributor to fitness in the wild. Here we review how double-strand break structure, the yeast NHEJ proteins, and alternative rejoining mechanisms influence the accuracy of break repair. We also consider how the balance between NHEJ and homologous repair is regulated by cell state to promote genome preservation. The principles discussed are instructive to NHEJ in all organisms.
---
paper_title: Structure of chromosomal duplicons and their role in mediating human genomic disorders.
paper_content:
Chromosome-specific low-copy repeats, or duplicons, occur in multiple regions of the human genome. Homologous recombination between different duplicon copies leads to chromosomal rearrangements, such as deletions, duplications, inversions, and inverted duplications, depending on the orientation of the recombining duplicons. When such rearrangements cause dosage imbalance of a developmentally important gene(s), genetic diseases now termed genomic disorders result, at a frequency of 0.7-1/1000 births. Duplicons can have simple or very complex structures, with variation in copy number from 2 to >10 repeats, and each varying in size from a few kilobases in length to hundreds of kilobases. Analysis of the different duplicons involved in human genomic disorders identifies features that may predispose to recombination, including large size and high sequence identity between the recombining copies, putative recombination promoting features, and the presence of multiple genes/pseudogenes that may include genes expressed in germ cells. Most of the chromosome rearrangements involve duplicons near pericentromeric regions, which may relate to the propensity of such regions to accumulate duplicons. Detailed analyses of the structure, polymorphic variation, and mechanisms of recombination in genomic disorders, as well as the evolutionary origin of various duplicons will further our understanding of the structure, function, and fluidity of the human genome.
---
paper_title: Pathological consequences of sequence duplications in the human genome.
paper_content:
As large-scale sequencing accumulates momentum, an increasing number of instances are being revealed in which genes or other relatively rare sequences are duplicated, either in tandem or at nearby locations. Such duplications are a source of considerable polymorphism in populations, and also increase the evolutionary possibilities for the coregulation of juxtaposed sequences. As a further consequence, they promote inversions and deletions that are responsible for significant inherited pathology. Here we review known examples of genomic duplications present on the human X chromosome and autosomes.
---
paper_title: Genome architecture, rearrangements and genomic disorders.
paper_content:
An increasing number of human diseases are recognized to result from recurrent DNA rearrangements involving unstable genomic regions. These are termed genomic disorders, in which the clinical phenotype is a consequence of abnormal dosage of gene(s) located within the rearranged genomic fragments. Both inter- and intrachromosomal rearrangements are facilitated by the presence of region-specific low-copy repeats (LCRs) and result from nonallelic homologous recombination (NAHR) between paralogous genomic segments. LCRs usually span approximately 10-400 kb of genomic DNA, share ≥97% sequence identity, and provide the substrates for homologous recombination, thus predisposing the region to rearrangements. Moreover, it has been suggested that higher order genomic architecture involving LCRs plays a significant role in karyotypic evolution accompanying primate speciation.
---
paper_title: Molecular-evolutionary mechanisms for genomic disorders
paper_content:
Molecular studies of unstable regions in the human genome have identified region-specific low-copy repeats (LCRs). Unlike highly repetitive sequences (e.g. Alus and LINEs), LCRs are usually of 10–400 kb in size and exhibit ≥ 95–97% similarity. According to computer analyses of available sequencing data, LCRs may constitute >5% of the human genome. Through the process of non-allelic homologous recombination using paralogous genomic segments as substrates, LCRs have been shown to facilitate meiotic DNA rearrangements associated with disease traits, referred to as genomic disorders. In addition, this LCR-based complex genome architecture appears to play a major role in both primate karyotype evolution and human tumorigenesis.
---
paper_title: Origins of primate chromosomes – as delineated by Zoo-FISH and alignments of human and mouse draft genome sequences
paper_content:
This review examines recent advances in comparative eutherian cytogenetics, including Zoo-FISH data from 30 non-primate species. These data provide insights into the nature of karyotype evolution and
---
paper_title: Are molecular cytogenetics and bioinformatics suggesting diverging models of ancestral mammalian genomes?
paper_content:
Excavating ancestral genomes: The recent release of the chicken genome sequence (Hillier et al. 2004) provided exciting news for the comparative genomics community as it allows insights into the early evolution of the human genome. A bird species can now be used as an outgroup to model early mammalian genome organization and reshuffling. The genome sequence data have already been incorporated in a computational analysis of chicken, mouse, rat, and human genome sequences for the reconstruction of the ancestral genome organization of both a mammalian ancestor as well as a murid rodent ancestor (Hillier et al. 2004; Bourque et al. 2005). This bioinformatic effort joins a molecular cytogenetic model (Richard et al. 2003; Yang et al. 2003; Robinson et al. 2004; Svartman et al. 2004; Wienberg 2004; Froenicke 2005) as the second global approach to explore the architecture of the ancestral eutherian karyotype—a fundamental question in comparative genomics. Since both models use the human genome as reference, they are readily comparable. Surprisingly, however, they share few similarities. Only two small autosomes and the sex chromosomes of the hypothesized ancestral karyotypes are common to both. Unfortunately, given its significance, neither the extent of these differences nor their impact on comparative genomics have been discussed by Bourque and colleagues (2005). In an attempt to redress this, we compare the two methods of ancestral genome reconstruction, verify the resulting models, and discuss reasons for their apparent divergence.
---
paper_title: Molecular cytotaxonomy of primates by chromosomal in situ suppression hybridization
paper_content:
A new strategy for analyzing chromosomal evolution in primates is presented using chromosomal in situ suppression (CISS) hybridization. Biotin-labeled DNA libraries from flow-sorted human chromosomes are hybridized to chromosome preparations of catarrhines, platyrrhines, and prosimians. By this approach rearrangements of chromosomes that occurred during hominoid evolution are visualized directly at the level of DNA sequences, even in primate species with pronounced chromosomal shuffles.
---
paper_title: Reconstruction of genomic rearrangements in great apes and gibbons by chromosome painting.
paper_content:
The homology between hylobatid chromosomes and other primates has long remained elusive. We used chromosomal in situ suppression hybridization of all human chromosome-specific DNA libraries to "paint" the chromosomes of primates and establish homologies between the human, great ape (chimpanzee, gorilla, and orangutan), and gibbon karyotypes (Hylobates lar species group, 2n = 44). The hybridization patterns unequivocally demonstrate the high degree of chromosomal homology and synteny of great ape and human chromosomes. Relative to human, no translocations were detected in great apes, except for the well-known fusion-origin of human chromosome 2 and a 5;17 translocation in the gorilla. In contrast, numerous translocations were detected that have led to the massive reorganization of the gibbon karyotype: the 22 autosomal human chromosomes have been divided into 51 elements to compose the 21 gibbon autosomes. Molecular cytogenetics promises to finally allow hylobatids to be integrated into the overall picture of chromosomal evolution in the primates.
---
paper_title: The convergence of cytogenetics and rearrangement-based models for ancestral genome reconstruction.
paper_content:
Froenicke et al. (2006) asked the question “Are molecular cytogenetics and bioinformatics suggesting diverging models of ancestral mammalian genomes?” Their commentary seems to imply that cytogenetics is superior to “bioinformatics” when it comes to studies of ancestral mammalian genomes. But, comparing cytogenetics with other approaches to deriving ancestral genomic architectures (like MGR developed by Bourque and Pevzner 2002) on two very different data sets (80+ cytogenetic maps vs. four distantly related sequenced genomes) does not say much about the merits and demerits of the approaches; instead, it indirectly evaluates the quality of the input data sets. In this context, the main conclusion of Froenicke et al. (2006) amounts to the statement that 80+ cytogenetic maps lead to a more definitive reconstruction than four divergent genomes when it comes to low-resolution ancestral architectures. We never argued against this point in our publications and even advocated for the use of radiation hybrid mapping (as a trade-off in resolution between cytogenetic and sequencing data) to extend the number of analyzed genomes (Murphy et al. 2003, 2005). Indeed, with more taxon sampling, a rearrangementbased reconstruction with just seven genomes (Murphy et al. 2005) is already highly consistent with the cytogenetics reconstruction. Before we substantiate this convergence, we clarify the notion of weak associations, which was used in Bourque et al. (2005) to alert the reader that some of the adjacencies were left as unresolved in the ancestor. The discrepancies identified by Froenicke et al. (2006) all involve weak associations and point toward a misunderstanding more than a contradiction. Finally, we underline some of the important strengths exclusive to rearrangement-based approaches, such as the ability to detect smaller genomic segments, to handle fast evolving lineages, and to orient conserved segments in the ancestors.
---
paper_title: Reconstruction of the ancestral karyotype of eutherian mammals
paper_content:
Applying the parsimony principle, i.e. that chromosomes identical in species belonging to different taxa were likely to be present in their common ancestor, the ancestral karyotype of eutherian mammals (about 100 million years old) was tentatively reconstructed. Comparing chromosome banding with all ZOO-FISH data from literature or studied by us, this reconstruction can be proposed with only limited uncertainties. This karyotype comprised 50 chromosomes of which 40-42 were acrocentrics. Ten ancestral pairs of chromosomes were homologous to a single human chromosome: 5, 6, 9, 11, 13, 17, 18, 20, X and Y (human nomenclature). Nine others were homologous to a part of a human chromosome: 1p + q (proximal), 1q, 2p + q (proximal), 2q, part of 7, 8q, 10p, 10q and 19p (human nomenclature). Finally, seven pairs of chromosomes, homologs to human chromosomes 3 + 21, 4 + 8p, part of 7 + 16p, part of 12 + part of 22 (twice), 14 + 15, 16q + 19q, formed syntenies disrupted in man.
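To make the parsimony reasoning concrete, the following minimal sketch applies Fitch-style parsimony to the presence or absence of a single syntenic association across species on a hypothetical tree; the species scores and topology below are illustrative assumptions, not the data used in this reconstruction.
```python
# Minimal Fitch-parsimony sketch for inferring the ancestral state of one syntenic
# association (1 = present, 0 = absent). Tree topology and leaf scores are hypothetical.

def fitch(node, leaf_states):
    """Bottom-up pass: return (candidate state set, minimum number of changes) for a subtree."""
    if isinstance(node, str):                      # leaf: candidate set is its observed state
        return {leaf_states[node]}, 0
    left, right = node
    lset, lcost = fitch(left, leaf_states)
    rset, rcost = fitch(right, leaf_states)
    common = lset & rset
    if common:                                     # children agree on at least one state
        return common, lcost + rcost
    return lset | rset, lcost + rcost + 1          # disagreement costs one change

# Hypothetical presence/absence of a syntenic association in five species.
observed = {"human": 0, "mouse": 1, "cat": 1, "cow": 1, "chicken": 1}
tree = ((("human", "mouse"), ("cat", "cow")), "chicken")

states, changes = fitch(tree, observed)
print("ancestral candidate states:", states, "| minimum changes:", changes)
```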
---
paper_title: Mauve: Multiple alignment of conserved genomic sequence with rearrangements
paper_content:
As genomes evolve, they undergo large-scale evolutionary processes that present a challenge to sequence comparison not posed by short sequences. Recombination causes frequent genome rearrangements, horizontal transfer introduces new sequences into bacterial chromosomes, and deletions remove segments of the genome. Consequently, each genome is a mosaic of unique lineage-specific segments, regions shared with a subset of other genomes and segments conserved among all the genomes under consideration. Furthermore, the linear order of these segments may be shuffled among genomes. We present methods for identification and alignment of conserved genomic DNA in the presence of rearrangements and horizontal transfer. Our methods have been implemented in a software package called Mauve. Mauve has been applied to align nine enterobacterial genomes and to determine global rearrangement structure in three mammalian genomes. We have evaluated the quality of Mauve alignments and drawn comparison to other methods through extensive simulations of genome evolution.
---
paper_title: A General Method Applicable to the Search for Similarities in the Amino Acid Sequence of Two Proteins
paper_content:
A computer adaptable method for finding similarities in the amino acid sequences of two proteins has been developed. From these findings it is possible to determine whether significant homology exists between the proteins. This information is used to trace their possible evolutionary development. The maximum match is a number dependent upon the similarity of the sequences. One of its definitions is the largest number of amino acids of one protein that can be matched with those of a second protein allowing for all possible interruptions in either of the sequences. While the interruptions give rise to a very large number of comparisons, the method efficiently excludes from consideration those comparisons that cannot contribute to the maximum match. Comparisons are made from the smallest unit of significance, a pair of amino acids, one from each protein. All possible pairs are represented by a two-dimensional array, and all possible comparisons are represented by pathways through the array. For this maximum match only certain of the possible pathways must be evaluated. A numerical value, one in this case, is assigned to every cell in the array representing like amino acids. The maximum match is the largest number that would result from summing the cell values of every
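The dynamic-programming idea described above underlies what is now called Needleman-Wunsch global alignment. Below is a minimal modern sketch of the score recurrence; the unit match score and simple gap penalty are arbitrary illustrative choices rather than the original paper's weighting.
```python
# Minimal global-alignment score sketch in the spirit of the maximum-match method
# (scoring values are illustrative, not those used in the original paper).

def global_align_score(a, b, match=1, mismatch=-1, gap=-1):
    """Fill the DP array of best scores for aligning prefixes of a and b."""
    n, m = len(a), len(b)
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        dp[i][0] = dp[i - 1][0] + gap              # prefix of a aligned to leading gaps
    for j in range(1, m + 1):
        dp[0][j] = dp[0][j - 1] + gap              # prefix of b aligned to leading gaps
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            pair = match if a[i - 1] == b[j - 1] else mismatch
            dp[i][j] = max(dp[i - 1][j - 1] + pair,   # align a[i-1] with b[j-1]
                           dp[i - 1][j] + gap,        # gap in b
                           dp[i][j - 1] + gap)        # gap in a
    return dp[n][m]

print(global_align_score("GATTACA", "GCATGCU"))
```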
---
paper_title: Evolution's cauldron: duplication, deletion, and rearrangement in the mouse and human genomes.
paper_content:
This study examines genomic duplications, deletions, and rearrangements that have happened at scales ranging from a single base to complete chromosomes by comparing the mouse and human genomes. From whole-genome sequence alignments, 344 large (>100-kb) blocks of conserved synteny are evident, but these are further fragmented by smaller-scale evolutionary events. Excluding transposon insertions, on average in each megabase of genomic alignment we observe two inversions, 17 duplications (five tandem or nearly tandem), seven transpositions, and 200 deletions of 100 bases or more. This includes 160 inversions and 75 duplications or transpositions of length >100 kb. The frequencies of these smaller events are not substantially higher in finished portions in the assembly. Many of the smaller transpositions are processed pseudogenes; we define a "syntenic" subset of the alignments that excludes these and other small-scale transpositions. These alignments provide evidence that approximately 2% of the genes in the human/mouse common ancestor have been deleted or partially deleted in the mouse. There also appears to be slightly less nontransposon-induced genome duplication in the mouse than in the human lineage. Although some of the events we detect are possibly due to misassemblies or missing data in the current genome sequence or to the limitations of our methods, most are likely to represent genuine evolutionary events. To make these observations, we developed new alignment techniques that can handle large gaps in a robust fashion and discriminate between orthologous and paralogous alignments.
---
paper_title: Strategies and Tools for Whole-Genome Alignments
paper_content:
The availability of the assembled mouse genome makes possible, for the first time, an alignment and comparison of two large vertebrate genomes. We investigated different strategies of alignment for the subsequent analysis of conservation of genomes that are effective for assemblies of different quality. These strategies were applied to the comparison of the working draft of the human genome with the Mouse Genome Sequencing Consortium assembly, as well as other intermediate mouse assemblies. Our methods are fast and the resulting alignments exhibit a high degree of sensitivity, covering more than 90% of known coding exons in the human genome. We obtained such coverage while preserving specificity. With a view towards the end user, we developed a suite of tools and Web sites for automatically aligning and subsequently browsing and working with whole-genome comparisons. We describe the use of these tools to identify conserved non-coding regions between the human and mouse genomes, some of which have not been identified by other methods.
---
paper_title: Choosing the best heuristic for seeded alignment of DNA sequences
paper_content:
Background: Seeded alignment is an important component of algorithms for fast, large-scale DNA similarity search. A good seed matching heuristic can reduce the execution time of genomic-scale sequence comparison without degrading sensitivity. Recently, many types of seed have been proposed to improve on the performance of traditional contiguous seeds as used in, e.g., NCBI BLASTN. Choosing among these seed types, particularly those that use information besides the presence or absence of matching residue pairs, requires practical guidance based on a rigorous comparison, including assessment of sensitivity, specificity, and computational efficiency. This work performs such a comparison, focusing on alignments in DNA outside widely studied coding regions. Results: We compare seeds of several types, including those allowing transition mutations rather than matches at fixed positions, those allowing transitions at arbitrary positions ("BLASTZ" seeds), and those using a more general scoring matrix. For each seed type, we use an extended version of our Mandala seed design software to choose seeds with optimized sensitivity for various levels of specificity. Our results show that, on a test set biased toward alignments of noncoding DNA, transition information significantly improves seed performance, while finer distinctions between different types of mismatches do not. BLASTZ seeds perform especially well. These results depend on properties of our test set that are not shared by EST-based test sets with a strong bias toward coding DNA. Conclusion: Practical seed design requires careful attention to the properties of the alignments being sought. For noncoding DNA sequences, seeds that use transition information, especially BLASTZ-style seeds, are particularly useful. The Mandala seed design software can be found at http://www.cse.wustl.edu/~yanni/mandala/.
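As a concrete illustration of seeded matching, the sketch below enumerates hits of a spaced seed in which '1' positions must match exactly and '0' positions are don't-care; the seed pattern and sequences are hypothetical, and production tools index seed keys in a hash table and add transition-tolerant positions rather than scanning all position pairs as done here.
```python
# Spaced-seed hit enumeration sketch (seed pattern and sequences are hypothetical).
# '1' = position must match exactly; '0' = don't-care. Real tools hash seed keys
# instead of using the brute-force double loop shown here.

def seed_hits(query, target, seed="1110100110"):
    ones = [k for k, c in enumerate(seed) if c == "1"]
    span = len(seed)
    hits = []
    for i in range(len(query) - span + 1):
        for j in range(len(target) - span + 1):
            if all(query[i + k] == target[j + k] for k in ones):
                hits.append((i, j))                # candidate anchor to extend into an alignment
    return hits

print(seed_hits("ACGTACGTTGACCTGAAC", "TTACGTACGATGACCTGA"))
```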
---
paper_title: Comparative architectures of mammalian and chicken genomes reveal highly variable rates of genomic rearrangements across different lineages
paper_content:
Molecular evolution studies are usually based on the analysis of individual genes and thus reflect only small-range variations in genomic sequences. A complementary approach is to study the evolutionary history of rearrangements in entire genomes based on the analysis of gene orders. The progress in whole genome sequencing provides an unprecedented level of detailed sequence data to infer genome rearrangements through comparative approaches. The comparative analysis of recently sequenced rodent genomes with the human genome revealed evidence for a larger number of rearrangements than previously thought and led to the reconstruction of the putative genomic architecture of the murid rodent ancestor, while the architecture of the ancestral mammalian genome and the rate of rearrangements in the human lineage remained unknown. Sequencing the chicken genome provides an opportunity to reconstruct the architecture of the ancestral mammalian genome by using chicken as an outgroup. Our analysis reveals a very low rate of rearrangements and, in particular, interchromosomal rearrangements in chicken, in the early mammalian ancestor, or in both. The suggested number of interchromosomal rearrangements between the mammalian ancestor and chicken, during an estimated 500 million years of evolution, only slightly exceeds the number of interchromosomal rearrangements that happened in the mouse lineage, over the course of about 87 million years.
---
paper_title: Chaining Multiple-Alignment Blocks
paper_content:
We derive a time-efficient method for building a multiple alignment consisting of a highest-scoring chain of “blocks,” i.e., short gap-free alignments. Besides executing faster than a general-purpose multiple-alignment program, the method may be particularly appropriate when discovery of blocks meeting a certain criterion is the main reason for aligning the sequences. Utility of the method is illustrated by locating a chain of “phylogenetic footprints” (specifically, exact matches of length 6 or more) in the 5'-flanking regions of six mammalian ε-globin genes.
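The chaining step can be illustrated with a simple quadratic-time dynamic program over block coordinates; the published method is more time-efficient, and the block coordinates and scores below are made-up examples.
```python
# Quadratic-time chaining sketch: pick a highest-scoring subset of gap-free blocks
# whose coordinates increase in both sequences (no gap costs; data are hypothetical).
# The published method achieves better asymptotics with sparse dynamic programming.

def chain_blocks(blocks):
    """blocks: list of (x_start, x_end, y_start, y_end, score); returns best chain score."""
    blocks = sorted(blocks, key=lambda b: (b[1], b[3]))   # order by end coordinates
    best = [b[4] for b in blocks]                         # best chain score ending at each block
    for i, (xs, xe, ys, ye, s) in enumerate(blocks):
        for j in range(i):
            pxs, pxe, pys, pye, ps = blocks[j]
            if pxe < xs and pye < ys:                     # block j precedes block i in both sequences
                best[i] = max(best[i], best[j] + s)
    return max(best) if best else 0

demo = [(0, 10, 5, 15, 8), (12, 20, 18, 26, 6), (11, 30, 2, 21, 9), (22, 35, 28, 40, 7)]
print(chain_blocks(demo))   # prints 21 for this made-up input
```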
---
paper_title: Transforming men into mice: the Nadeau-Taylor chromosomal breakage model revisited
paper_content:
Although analysis of genome rearrangements was pioneered by Dobzhansky and Sturtevant 65 years ago, we still know very little about the rearrangement events that produced the existing varieties of genomic architectures. The genomic sequences of human and mouse provide evidence for a larger number of rearrangements than previously thought and shed some light on previously unknown features of mammalian evolution. In particular, they reveal extensive re-use of breakpoints from the same relatively short regions. Our analysis implies the existence of a large number of very short "hidden" synteny blocks that were invisible in comparative mapping data and were not taken into account in previous studies of chromosome evolution. These blocks are defined by closely located breakpoints and are often hard to detect. Our result is in conflict with the widely accepted random breakage model of chromosomal evolution. We suggest a new "fragile breakage" model of chromosome evolution that postulates that breakpoints are chosen from relatively short fragile regions that have much higher propensity for rearrangements than the rest of the genome.
---
paper_title: AVID: A Global Alignment Program
paper_content:
In this paper we describe a new global alignment method called AVID. The method is designed to be fast, memory efficient, and practical for sequence alignments of large genomic regions up to megabases long. We present numerous applications of the method, ranging from the comparison of assemblies to alignment of large syntenic genomic regions and whole genome human/mouse alignments. We have also performed a quantitative comparison of AVID with other popular alignment tools. To this end, we have established a format for the representation of alignments and methods for their comparison. These formats and methods should be useful for future studies. The tools we have developed for the alignment comparisons, as well as the AVID program, are publicly available. See Web Site References section for AVID Web address and Web addresses for other programs discussed in this paper.
---
paper_title: Computing the Assignment of Orthologous Genes via Genome Rearrangement
paper_content:
The assignment of orthologous genes between a pair of genomes is a fundamental and challenging problem in comparative genomics. Existing methods that assign orthologs based on the similarity between DNA or protein sequences may make erroneous assignments when sequence similarity does not clearly delineate the evolutionary relationship among genes of the same families. In this paper, we present a new approach to ortholog assignment that takes into account both sequence similarity and evolutionary events at genome level, where orthologous genes are assumed to correspond to each other in the most parsimonious evolving scenario under genome rearrangement. It is then formulated as a problem of computing the signed reversal distance with duplicates between two genomes of interest, for which an efficient heuristic algorithm was given by introducing two new optimization problems, minimum common partition and maximum cycle decomposition. Following this approach, we have implemented a high-throughput system for assigning orthologs on a genome scale, called SOAR, and tested it on both simulated data and real genome sequence data. Compared to a recent ortholog assignment method based entirely on homology search (called INPARANOID), SOAR shows a marginally better performance in terms of sensitivity on the real data set because it was able to identify several correct orthologous pairs that were missed by INPARANOID. The simulation results demonstrate that SOAR in general performs better than the iterated exemplar algorithm in terms of computing the reversal distance and assigning correct orthologs.
---
paper_title: Automated Whole-Genome Multiple Alignment of Rat, Mouse, and Human
paper_content:
We have built a whole-genome multiple alignment of the three currently available mammalian genomes using a fully automated pipeline that combines the local/global approach of the Berkeley Genome Pipeline and the LAGAN program. The strategy is based on progressive alignment and consists of two main steps: (1) alignment of the mouse and rat genomes, and (2) alignment of human to either the mouse-rat alignments from step 1, or the remaining unaligned mouse and rat sequences. The resulting alignments demonstrate high sensitivity, with 87% of all human gene-coding areas aligned in both mouse and rat. The specificity is also high: <7% of the rat contigs are aligned to multiple places in human, and 97% of all alignments with human sequence >100 kb agree with a three-way synteny map built independently, using predicted exons in the three genomes. At the nucleotide level <1% of the rat nucleotides are mapped to multiple places in the human sequence in the alignment, and 96.5% of human nucleotides within all alignments agree with the synteny map. The alignments are publicly available online, with visualization through the novel Multi-VISTA browser that we also present.
---
paper_title: Genome sequence of the Brown Norway rat yields insights into mammalian evolution
paper_content:
The laboratory rat (Rattus norvegicus) is an indispensable tool in experimental medicine and drug development, having made inestimable contributions to human health. We report here the genome sequence of the Brown Norway (BN) rat strain. The sequence represents a high-quality 'draft' covering over 90% of the genome. The BN rat sequence is the third complete mammalian genome to be deciphered, and three-way comparisons with the human and mouse genomes resolve details of mammalian evolution. This first comprehensive analysis includes genes and proteins and their relation to human disease, repeated sequences, comparative genome-wide studies of mammalian orthologous chromosomal regions and rearrangement breakpoints, reconstruction of ancestral karyotypes and the events leading to existing species, rates of variation, and lineage-specific and lineage-independent evolutionary events such as expansion of gene families, orthology relations and protein evolution.
---
paper_title: Reconstructing an ancestral genome using minimum segments duplications and reversals
paper_content:
We consider a particular model of genomic rearrangements that takes paralogous and orthologous genes into account. Given a particular model of evolution and an optimization criterion, the problem is to recover an ancestor of a modern genome modeled as an ordered sequence of signed genes. One direct application is to infer gene orders at the ancestral nodes of a phylogenetic tree. Implicit in the rearrangement literature is the assumption that each gene has exactly one copy in each genome. This hypothesis is clearly false for species containing several copies of highly paralogous genes, e.g. multigene families. One of the most important regional events by which gene duplication can occur is referred to as duplication transposition. Our model of evolution takes such duplications into account. For a genome G with gene families of different sizes, the implicit hypothesis is that G has an ancestor containing exactly one copy of each gene, and that G has evolved from this ancestor through a series of duplication transpositions and substring reversals. The question is: how can we reconstruct an ancestral genome giving rise to the minimal number of duplication transpositions and reversals? The key idea is to reduce the problem to a series of subproblems involving genomes containing at most two copies of each gene. For this simpler version, we provide tight bounds, and we describe an algorithm, based on the Hannenhalli and Pevzner graph and result, that is exact when certain conditions are verified. We then show how to use this algorithm to recover gene orders at the ancestral nodes of a phylogenetic tree.
---
paper_title: Glocal alignment : finding rearrangements during alignment
paper_content:
Motivation: To compare entire genomes from different species, biologists increasingly need alignment methods that are efficient enough to handle long sequences, and accurate enough to correctly align the conserved biological features between distant species. The two main classes of pairwise alignments are global alignment, where one string is transformed into the other, and local alignment, where all locations of similarity between the two strings are returned. Global alignments are less prone to demonstrating false homology as each letter of one sequence is constrained to being aligned to only one letter of the other. Local alignments, on the other hand, can cope with rearrangements between non-syntenic, orthologous sequences by identifying similar regions in sequences; this, however, comes at the expense of a higher false positive rate due to the inability of local aligners to take into account overall conservation maps. Results: In this paper we introduce the notion of glocal alignment, a combination of global and local methods, where one creates a map that transforms one sequence into the other while allowing for rearrangement events. We present Shuffle-LAGAN, a glocal alignment algorithm that is based on the CHAOS local alignment algorithm and the LAGAN global aligner, and is able to align long genomic sequences. To test Shuffle-LAGAN we split the mouse genome into BAC-sized pieces, and aligned these pieces to the human genome. We demonstrate that Shuffle-LAGAN compares favorably in terms of sensitivity and specificity with standard local and global aligners. From the alignments we conclude that about 9% of human/mouse homology may be attributed to small rearrangements, 63% of which are duplications. Availability: Our systems, supplemental information, and the alignment of the human and mouse genomes are available online.
---
paper_title: Lengths of chromosomal segments conserved since divergence of man and mouse.
paper_content:
Linkage relationships of homologous loci in man and mouse were used to estimate the mean length of autosomal segments conserved during evolution. Comparison of the locations of greater than 83 homologous loci revealed 13 conserved segments. Map distances between the outermost markers of these 13 segments are known for the mouse and range from 1 to 24 centimorgans. Methods were developed for using this sample of conserved segments to estimate the mean length of all conserved autosomal segments in the genome. This mean length was estimated to be 8.1 +/- 1.6 centimorgans. Evidence is presented suggesting that chromosomal rearrangements that determine the lengths of these segments are randomly distributed within the genome. The estimated mean length of conserved segments was used to predict the probability that certain loci, such as peptidase-3 and renin, are linked in man given that homologous loci are x centimorgans apart in the mouse. The mean length of conserved segments was also used to estimate the number of chromosomal rearrangements that have disrupted linkage since divergence of man and mouse. This estimate was shown to be 178 +/- 39 rearrangements.
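The random-breakage reasoning behind this estimate can be illustrated with a small simulation: scatter breakpoints uniformly along a genome and examine the resulting segment lengths, whose mean is roughly the genome length divided by one more than the number of breaks. The genome length and break count below are illustrative values chosen only to echo the scale of the estimate, not the paper's actual inputs.
```python
# Random-breakage simulation sketch (parameter values are illustrative, not from the paper).
import random

def simulate_segments(genome_cm=1400.0, n_breaks=178, seed=1):
    """Drop n_breaks breakpoints uniformly on a genome of genome_cm centimorgans
    and return the resulting conserved-segment lengths."""
    random.seed(seed)
    cuts = sorted(random.uniform(0.0, genome_cm) for _ in range(n_breaks))
    bounds = [0.0] + cuts + [genome_cm]
    return [b - a for a, b in zip(bounds, bounds[1:])]

segments = simulate_segments()
mean_len = sum(segments) / len(segments)
print(f"{len(segments)} segments, mean length {mean_len:.1f} cM")
# Under random breakage the mean segment length is roughly genome length / (breaks + 1),
# so an observed mean length can be inverted to estimate the number of disruptions.
```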
---
paper_title: Chromosomal distributions of breakpoints in cancer, infertility, and evolution.
paper_content:
We extract 11 genome-wide sets of breakpoint positions from databases on reciprocal translocations, inversions and deletions in neoplasms, reciprocal translocations and inversions in families carrying rearrangements and the human-mouse comparative map, and for each set of positions construct breakpoint distributions for the 44 autosomal arms. We identify and interpret four main types of distribution: (i) a uniform distribution associated both with families carrying translocations or inversions, and with the comparative map, (ii) telomerically skewed distributions of translocations or inversions detected consequent to births with malformations, (iii) medially clustered distributions of translocation and deletion breakpoints in tumor karyotypes, and (iv) bimodal translocation breakpoint distributions for chromosome arms containing telomeric proto-oncogenes.
---
paper_title: Human and mouse genomic sequences reveal extensive breakpoint reuse in mammalian evolution
paper_content:
The human and mouse genomic sequences provide evidence for a larger number of rearrangements than previously thought and reveal extensive reuse of breakpoints from the same short fragile regions. Breakpoint clustering in regions implicated in cancer and infertility have been reported in previous studies; we report here on breakpoint clustering in chromosome evolution. This clustering reveals limitations of the widely accepted random breakage theory that has remained unchallenged since the mid-1980s. The genome rearrangement analysis of the human and mouse genomes implies the existence of a large number of very short “hidden” synteny blocks that were invisible in the comparative mapping data and ignored in the random breakage model. These blocks are defined by closely located breakpoints and are often hard to detect. Our results suggest a model of chromosome evolution that postulates that mammalian genomes are mosaics of fragile regions with high propensity for rearrangements and solid regions with low propensity for rearrangements.
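The breakpoint bookkeeping behind the reuse argument can be sketched as follows: write one genome as a signed permutation of the other's synteny blocks and count breakpoints; since a reversal can create or remove at most two breakpoints, a reuse statistic of roughly 2d/b (d = rearrangement distance, b = breakpoints) near 1 suggests little reuse and near 2 suggests extensive reuse. The sketch handles a single chromosome only and does not implement the full Hannenhalli-Pevzner distance computation used in the analysis.
```python
# Breakpoint-counting sketch for one chromosome (not the full multichromosomal
# Hannenhalli-Pevzner machinery used in the paper).

def breakpoints(signed_perm):
    """Count breakpoints of a signed permutation of 1..n relative to the identity order."""
    n = len(signed_perm)
    extended = [0] + list(signed_perm) + [n + 1]   # frame the permutation with 0 and n+1
    return sum(1 for a, b in zip(extended, extended[1:]) if b - a != 1)

# Hypothetical block order of one genome written in the other genome's block labels:
blocks = [1, -4, -3, 2, 5, 7, 6]
b = breakpoints(blocks)
print("breakpoints:", b)
# Given a rearrangement scenario of length d, a reuse measure of about 2*d / b
# close to 1 indicates little reuse, while values approaching 2 indicate extensive reuse.
```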
---
paper_title: Hotspots of mammalian chromosomal evolution
paper_content:
Background: Chromosomal evolution is thought to occur through a random process of breakage and rearrangement that leads to karyotype differences and disruption of gene order. With the availability of both the human and mouse genomic sequences, detailed analysis of the sequence properties underlying these breakpoints is now possible.
---
paper_title: The Signal in the Genomes
paper_content:
Nostra culpa. Not only did we foist a hastily conceived and incorrectly executed simulation on an overworked RECOMB conference program committee, but worse—nostra maxima culpa—we obliged a team of high-powered researchers to clean up after us! It was never our intention to introduce an alternative way of constructing synteny blocks; the so-called ST-synteny was only a (bungled) attempt to mimic Pevzner and Tesler's method, based on our reading or misreading of their paper [1]. Moreover, shortly after the conference, before preparing the full journal version of our article, we recognized through a back-of-an-envelope calculation that realistic values of the parameters in our simulations would not produce much increase in reuse rate. Consequently, our published article [2] develops only the main part of our communication, modeling and simulating the artifactual increase in reuse rates due to deleting synteny blocks but not that due to the construction of synteny blocks. ::: ::: Unfortunately, our makeshift work distracted from the main point of our communication. The theme in our full article [2], in the RECOMB extended abstract, and elsewhere is not substantially confronted in the recently published PLoS Computational Biology paper by Glenn Tesler and colleagues [3]. Wherever high rates of breakpoint reuse are inferred, whether they are due to bona fide reuse or rather to violations in the assumptions justifying the use of particular algorithms (relating to the construction of synteny blocks or their size thresholds, or to the unrealistically limited repertoire of rearrangement processes recognized by the algorithm), there is a correspondingly high rate of loss in the historical signal. ::: ::: While two genomes diverge without breakpoint reuse, the historical signal is conserved in the breakpoint graph, which consists entirely of four-vertex cycles, specifying exactly which pairs of breakpoints must be healed by reversals or translocations. As breakpoints are reused—as they eventually must be for finite gene orders, or for genomic sequence, where there are criteria for deciding when two breakpoints are too close together to be considered distinct—the four-vertex cycles are merged into larger structures, and the breakpoint graph becomes ambiguous concerning the rearrangements that produced it. The two divergent genomes eventually become randomized with respect to each other. But this randomization also occurs, even if divergence involves only distinct breakpoints, when the assumptions underlying the use of genome rearrangement algorithms are violated, which can happen in many possible ways [4,5]. And we cannot infer whether mutually randomized synteny block orderings derived from two divergent genomes were created through bona fide breakpoint reuse or rather through noise introduced in block construction or through processes other than reversal and translocation. ::: ::: I illustrate this point with data on the human/mouse comparison from Pevzner and Tesler's more detailed paper [6]. We simulated 100 pairs of genomes constructed of 22 and 19 human and mouse autosomes, with 270 blocks distributed exactly as in the human and mouse genomes, except that the blocks were randomly permuted and sign—or strandedness—was assigned randomly to each block. Permutations are within, not between, chromosomes, assuring a realistic reversals/translocations ratio. Output from the standard rearrangement algorithm [7] is summarized in Table 1. 
Table 1 (Human/Mouse Comparison Resembles Randomized Genome Comparison): The human/mouse comparison parallels the randomized genomes, and both deviate drastically from the hypothetical case of 270 blocks evolving without breakpoint reuse. There is an excess of 22 four-cycles and three other small cycles in the real data, largely due to reversals within concatenated blocks from a single chromosome in both human and mouse, largely dispersed in the randomized chromosomes. These 25 are what remains of the detailed evolutionary signal; they account for the small differences in distance, in breakpoint reuse, and in the total number of cycles. The giant cycles celebrated in Pevzner and Tesler's paper [6] and Tesler and colleagues' paper [3] have almost identical structure in the human/mouse and randomized comparisons. Note that in contrast to the autosomes, the rearrangement analysis of the human and mouse X chromosomes involves only short cycles, a breakpoint reuse rate close to 1.0 and a clear evolutionary signal. In conclusion, I take issue neither with Pevzner and Tesler's ingenious method for constructing synteny blocks nor with the notion that genomes are spatially heterogeneous in their susceptibility to rearrangement; many types of genomic regions, as reviewed in a previously published paper [5], have documented elevated rates of rearrangement. Nevertheless, a high reuse rate in the output of rearrangement algorithms, which simply indicates loss of signal, is not good evidence for fragile regions. The output of comparisons of randomized genomes has the same characteristics—namely, similar rearrangement distance, similar cycle/path sizes, similar number of chromosomes touched by each large cycle, similar reuse rates, and similar estimates [8] of the number of translocations and reversals.
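The randomization control described above (permute blocks within each chromosome and assign random signs before running the rearrangement algorithm) can be sketched as below; the chromosome and block structure shown is hypothetical rather than the 270 real human/mouse blocks.
```python
# Randomized-genome sketch in the spirit of the described control experiment
# (block labels and chromosome structure are hypothetical, not the 270 real blocks).
import random

def randomize_genome(chromosomes, seed=0):
    """Shuffle block order within each chromosome and flip block signs at random."""
    rng = random.Random(seed)
    randomized = []
    for chrom in chromosomes:
        blocks = list(chrom)
        rng.shuffle(blocks)                          # permute within, not between, chromosomes
        randomized.append([b if rng.random() < 0.5 else -b for b in blocks])
    return randomized

mouse_like = [[1, 2, 3, 4], [5, 6, 7], [8, 9, 10, 11, 12]]
print(randomize_genome(mouse_like))
# Feeding such randomized orders to a rearrangement algorithm and comparing its cycle
# structure and reuse rate with the real data is the essence of the control described above.
```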
---
paper_title: Chromosomal Speciation and Molecular Divergence--Accelerated Evolution in Rearranged Chromosomes
paper_content:
Humans and their closest evolutionary relatives, the chimpanzees, differ in approximately 1.24% of their genomic DNA sequences. The fraction of these changes accumulated during the speciation processes that have separated the two lineages may be of special relevance in understanding the basis of their differences. We analyzed human and chimpanzee sequence data to search for the patterns of divergence and polymorphism predicted by a theoretical model of speciation. According to the model, positively selected changes should accumulate in chromosomes that present fixed structural differences, such as inversions, between the two species. Protein evolution was more than 2.2 times faster in chromosomes that had undergone structural rearrangements compared with colinear chromosomes. Also, nucleotide variability is slightly lower in rearranged chromosomes. These patterns of divergence and polymorphism may be, at least in part, the molecular footprint of speciation events in the human and chimpanzee lineages.
---
paper_title: The Fragile Breakage versus Random Breakage Models of Chromosome Evolution
paper_content:
For many years, studies of chromosome evolution were dominated by the random breakage theory, which implies that there are no rearrangement hot spots in the human genome. In 2003, Pevzner and Tesler argued against the random breakage model and proposed an alternative "fragile breakage" model of chromosome evolution. In 2004, Sankoff and Trinh argued against the fragile breakage model and raised doubts that Pevzner and Tesler provided any evidence of rearrangement hot spots. We investigate whether Sankoff and Trinh indeed revealed a flaw in the arguments of Pevzner and Tesler. We show that Sankoff and Trinh's synteny block identification algorithm makes erroneous identifications even in small toy examples and that their parameters do not reflect the realities of the comparative genomic architecture of human and mouse. We further argue that if Sankoff and Trinh had fixed these problems, their arguments in support of the random breakage model would disappear. Finally, we study the link between rearrangements and regulatory regions and argue that long regulatory regions and inhomogeneity of gene distribution in mammalian genomes may be responsible for the breakpoint reuse phenomenon.
---
paper_title: Chromosomal breakpoint reuse in genome sequence rearrangement
paper_content:
In order to apply gene-order rearrangement algorithms to the comparison of genome sequences, Pevzner and Tesler bypass gene finding and ortholog identification and use the order of homologous blocks of unannotated sequence as input. The method excludes blocks shorter than a threshold length. Here we investigate possible biases introduced by eliminating short blocks, focusing on the notion of breakpoint reuse introduced by these authors. Analytic and simulation methods show that reuse is very sensitive to the proportion of blocks excluded. As is pertinent to the comparison of mammalian genomes, this exclusion risks randomizing the comparison partially or entirely.
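A toy filter shows why the reuse statistic is sensitive to block exclusion: dropping blocks below a length threshold and relabeling the survivors changes the apparent breakpoint count that any reuse measure is computed from. The block order and lengths below are hypothetical.
```python
# Toy illustration of how excluding short blocks changes apparent breakpoint counts
# (block order and lengths are hypothetical).

def breakpoints(perm):
    ext = [0] + perm + [len(perm) + 1]               # frame a signed permutation of 1..n
    return sum(1 for a, b in zip(ext, ext[1:]) if b - a != 1)

def drop_short_blocks(order, lengths, threshold):
    """Remove blocks below the length threshold and relabel the survivors 1..k
    so the filtered order can be compared against the filtered identity."""
    kept = [b for b in order if lengths[abs(b)] >= threshold]
    rank = {blk: i + 1 for i, blk in enumerate(sorted(abs(b) for b in kept))}
    return [rank[abs(b)] * (1 if b > 0 else -1) for b in kept]

order = [1, -3, 2, 5, -4, 7, 6, 8]                   # one genome in the other's block labels
lengths = {1: 900, 2: 40, 3: 700, 4: 35, 5: 650, 6: 20, 7: 820, 8: 500}  # kb, hypothetical
print("all blocks:      ", breakpoints(order))
print("blocks >= 100 kb:", breakpoints(drop_short_blocks(order, lengths, 100)))
```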
---
paper_title: Genomic features in the breakpoint regions between syntenic blocks
paper_content:
MOTIVATION: We study the largely unaligned regions between the syntenic blocks conserved in humans and mice, based on data extracted from the UCSC genome browser. These regions contain evolutionary breakpoints caused by inversion, translocation and other processes. RESULTS: We suggest explanations for the limited amount of genomic alignment in the neighbourhoods of breakpoints. We discount inferences of extensive breakpoint reuse as artefacts introduced during the reconstruction of syntenic blocks. We find that the number, size and distribution of small aligned fragments in the breakpoint regions depend on the origin of the neighbouring blocks and the other blocks on the same chromosome. We account for this and for the generalized loss of alignment in the regions partially by artefacts due to alignment protocols and partially by mutational processes operative only after the rearrangement event. These results are consistent with breakpoints occurring randomly over virtually the entire genome.
---
paper_title: Molecular mechanisms for constitutional chromosomal rearrangements in humans.
paper_content:
Cytogenetic imbalance in the newborn is a frequent cause of mental retardation and birth defects. Although aneuploidy accounts for the majority of imbalance, structural aberrations contribute to a significant fraction of recognized chromosomal anomalies. This review describes the major classes of constitutional, structural cytogenetic abnormalities and recent studies that explore the molecular mechanisms that bring about their de novo occurrence. Genomic features flanking the sites of recombination may result in susceptibility to chromosomal rearrangement. One such substrate for recombination is low-copy region-specific repeats. The identification of genome architectural features conferring susceptibility to rearrangements has been accomplished using methods that enable investigation of regions of the genome that are too small to be visualized by traditional cytogenetics and too large to be resolved by conventional gel electrophoresis. These investigations resulted in the identification of previously unrecognized structural cytogenetic anomalies, which are associated with genetic syndromes and allowed for the molecular basis of some chromosomal rearrangements to be delineated.
---
paper_title: Segmental duplications and the evolution of the primate genome
paper_content:
Initial human genome sequence analysis has revealed large segments of nearly identical sequence in particular chromosomal regions. The recent origin of these segments and their abundance (∼5%) has challenged investigators to elucidate their underlying mechanism and role in primate genome evolution. Although the precise fraction is unknown, some of these duplicated segments have recently been shown to be associated with rapid gene innovation and chromosomal rearrangement in the genomes of man and the great apes.
---
paper_title: Murine segmental duplications are hot spots for chromosome and gene evolution
paper_content:
Mouse and rat genomic sequences permit us to obtain a global view of evolutionary rearrangements that have occurred between the two species and to define hallmarks that might underlie these events. We present a comparative study of the sequence assemblies of mouse and rat genomes and report an enrichment of rodent-specific segmental duplications in regions where synteny is not preserved. We show that segmental duplications present higher rates of molecular evolution and that genes in rearranged regions have evolved faster than those located elsewhere. Previous studies have shown that synteny breakpoints between the mouse and the human genomes are enriched in human segmental duplications, suggesting a causative connection between such structures and evolutionary rearrangements. Our work provides further evidence to support the role of segmental duplications in chromosomal rearrangements in the evolution of the architecture of mammalian chromosomes and in the speciation processes that separate the mouse and the rat.
---
paper_title: Enrichment of segmental duplications in regions of breaks of synteny between the human and mouse genomes suggest their involvement in evolutionary rearrangements.
paper_content:
The sequence of the mouse genome allows one to compare the conservation of synteny between the human and mouse genome and exploration of regions that might have been involved in major rearrangements during the evolution of these two species (evolutionary genome rearrangements). Recent segmental duplications (or duplicons) are paralogous DNA sequences with high sequence identity that account for about 3.5-5% of the human genome and have emerged during the past approximately 35 million years of evolution. These regions are susceptible to illegitimate recombination leading to rearrangements that result in genomic disorders or genomic mutations. A catalogue of several hundred segmental duplications potentially leading to genomic rearrangements has been reported. The authors and others have observed that some chromosome regions involved in genomic disorders are shuffled in orientation and order in the mouse genome and that regions flanked by segmental duplications are often polymorphic. We have compared the human and mouse genome sequences and demonstrate here that recent segmental duplications correlate with breaks of synteny between these two species. We also observed that nine primary regions involved in human genomic disorders show changes in the order or the orientation of mouse/human synteny segments, were often flanked by segmental duplications in the human sequence. We found that 53% of all evolutionary rearrangement breakpoints associate with segmental duplications, as compared with 18% expected in a random location of breaks along the chromosome (P<0.0001). Our data suggest that segmental duplications have participated in the recent evolution of the human genome, as driving forces for evolutionary rearrangements, chromosome structure polymorphisms and genomic disorders.
---
paper_title: Dynamics of Mammalian Chromosome Evolution Inferred from Multispecies Comparative Maps
paper_content:
The genome organizations of eight phylogenetically distinct species from five mammalian orders were compared in order to address fundamental questions relating to mammalian chromosomal evolution. Rates of chromosome evolution within mammalian orders were found to increase since the Cretaceous-Tertiary boundary. Nearly 20% of chromosome breakpoint regions were reused during mammalian evolution; these reuse sites are also enriched for centromeres. Analysis of gene content in and around evolutionary breakpoint regions revealed increased gene density relative to the genome-wide average. We found that segmental duplications populate the majority of primate-specific breakpoints and often flank inverted chromosome segments, implicating their role in chromosomal rearrangement.
---
paper_title: Recent duplication, domain accretion and the dynamic mutation of the human genome
paper_content:
An estimated 5% of the human genome consists of interspersed duplications that have arisen over the past 35 million years of evolution. Two categories of such recently duplicated segments can be distinguished: segmental duplications between nonhomologous chromosomes (transchromosomal duplications) and duplications mainly restricted to a particular chromosome (chromosome-specific duplications). Many of these duplications exhibit an extraordinarily high degree of sequence identity at the nucleotide level (>95%) and span large genomic distances (1–100 kb). Preliminary analyses indicate that these same regions are targets for rapid evolutionary turnover among the genomes of closely related primates. The dynamic nature of these regions because of recurrent chromosomal rearrangement, and their ability to create fusion genes from juxtaposed cassettes suggest that duplicative transposition was an important force in the evolution of our genome.
---
paper_title: Hotspots of mammalian chromosomal evolution
paper_content:
Background: Chromosomal evolution is thought to occur through a random process of breakage and rearrangement that leads to karyotype differences and disruption of gene order. With the availability of both the human and mouse genomic sequences, detailed analysis of the sequence properties underlying these breakpoints is now possible.
---
paper_title: Pathological consequences of sequence duplications in the human genome.
paper_content:
As large-scale sequencing accumulates momentum, an increasing number of instances are being revealed in which genes or other relatively rare sequences are duplicated, either in tandem or at nearby locations. Such duplications are a source of considerable polymorphism in populations, and also increase the evolutionary possibilities for the coregulation of juxtaposed sequences. As a further consequence, they promote inversions and deletions that are responsible for significant inherited pathology. Here we review known examples of genomic duplications present on the human X chromosome and autosomes.
---
paper_title: Molecular-evolutionary mechanisms for genomic disorders
paper_content:
Molecular studies of unstable regions in the human genome have identified region-specific low-copy repeats (LCRs). Unlike highly repetitive sequences (e.g. Alus and LINEs), LCRs are usually of 10–400 kb in size and exhibit ≥ 95–97% similarity. According to computer analyses of available sequencing data, LCRs may constitute >5% of the human genome. Through the process of non-allelic homologous recombination using paralogous genomic segments as substrates, LCRs have been shown to facilitate meiotic DNA rearrangements associated with disease traits, referred to as genomic disorders. In addition, this LCR-based complex genome architecture appears to play a major role in both primate karyotype evolution and human tumorigenesis.
---
paper_title: Fourfold Faster Rate of Genome Rearrangement in Nematodes Than in Drosophila
paper_content:
The genes of Caenorhabditis elegans appear to have an unusually rapid rate of evolution. The substitution rates of many C. elegans genes are twice those of their orthologs in non-nematode metazoans (Aguinaldo et al. 1997; see Fig. 3 in Mushegian et al. 1998). Even among nematodes, the C. elegans small subunit ribosomal RNA gene evolves faster than its orthologs in most of the major clades (see Fig. 1 in Blaxter et al. 1998). It has been estimated that two-thirds of C. elegans protein-coding genes evolve more rapidly than their Drosophila orthologs (Mushegian et al. 1998). In vertebrates at least, the rate of nucleotide substitution is correlated with that of chromosomal rearrangement (Burt et al. 1999).
Ranz et al. (2001) reported that Drosophila chromosomes rearrange at least 175 times faster than those of other metazoans, and at a rate at least five times greater than the rate of the fastest plant genomes. However, no Caenorhabditis rate data existed to compare with the Drosophila data. Given their fast rate of nucleotide substitution, we guessed that Caenorhabditis genomes might have a fast rate of rearrangement. Here, we have estimated the rate of rearrangement since the divergence of C. elegans from its sister species Caenorhabditis briggsae, using the complete C. elegans genome sequence (The C. elegans Sequencing Consortium 1998) and 13 Mb of sequence from C. briggsae released by the Washington University Genome Sequencing Center (http://genome.wustl.edu/gsc/). Previous studies have shown that C. elegans and C. briggsae have conservation of gene order over stretches of chromosome that can be up to six genes long (Kuwabara and Shah 1994; Thacker et al. 1999).
To calculate the rate, we estimated the number of chromosomal rearrangements since the speciation of C. elegans and C. briggsae. Because both species have six chromosomes (Nigon and Dougherty 1949), we assumed that there have not been any fusions or fissions of whole chromosomes since they diverged. Kececioglu and Ravi (1995) and Hannenhalli (1996) have developed computer algorithms that deduce the historical order and sizes of the reciprocal translocations (whereby two nonhomologous chromosomes exchange chunks of DNA by recombination) and/or inversions that have occurred since the divergence of two multichromosomal genomes. However, the C. elegans genome evolves not only by reciprocal translocations and inversions, but also by transpositions (whereby a chunk of DNA excises from one chromosome and inserts into a nonhomologous chromosome) and duplications (Robertson 2001). We designed a simple algorithm to calculate the number and sizes of such mutations, although not the order in which they occurred. Our method starts by finding all perfectly conserved segments between two species, in which gene content, order, and orientation are conserved. Next, these segments are fused into larger segments that have been splintered by duplications, inversions, or transpositions. When no more segments can be merged, the final fused segments are assumed to have resulted from fissure of chromosomes by reciprocal translocations.
To convert the observed number of rearrangements into a rate, it is necessary to have an accurate estimate of the briggsae–elegans divergence date. Emmons et al. (1979) were the first to estimate this date, using restriction fragment data, venturing that it must be “tens of millions of years” ago. Butler et al. (1981) speculated that the date was 10–100 million years ago (Mya), judging from 5S rRNA sequences, anatomical differences, and protein electrophoretic mobilities. Subsequent estimates based on sequence data were 30–60 Mya (Prasad and Baillie 1989; one gene), 23–32 Mya (Heschl and Baillie 1990; one gene), 54–58 Mya (Lee et al. 1992; two genes), and 40 Mya (Kennedy et al. 1993; seven genes). Nematode fossils are extremely scarce (Poinar 1983). Therefore, to calibrate the molecular clock, these studies either assumed that all organisms have the same silent substitution rate (Prasad and Baillie 1989; Heschl and Baillie 1990) or nonsilent substitution rate (Lee et al. 1992), or that C. elegans has the same silent rate as Drosophila (Kennedy et al. 1993). These are dubious assumptions; for example, Mushegian et al. (1998) showed that about two-thirds of C. elegans genes have a higher rate of nonsilent substitution than their orthologs in Drosophila. To gain a more reliable interval estimate of the briggsae–elegans speciation date, we used phylogenetic analysis of all genes for which orthologous sequences were available from C. elegans, C. briggsae, Drosophila, and human. Only those genes that did not have a significantly different amino acid substitution rate in the four taxa were used to produce date estimates.
The briggsae–elegans sequence data set is the largest available for any pair of congeneric eukaryotes. Such a big sample has a high power for detecting genome-wide trends. For example, the breakpoints of reciprocal translocations and inversions are frequently near repetitive DNA. This has been observed in bacteria (Romero et al. 1999), yeast (Fischer et al. 2000), insects (Caceres et al. 1999), mammals (Dehal et al. 2001), and plants (Zhang and Peterson 1999), but not yet in nematodes. Rearrangements near transposable elements may happen when the element is transposing (Zhang and Peterson 1999), but most rearrangements are hypothesized to occur by homologous recombination between nontransposing transposable elements, dispersed repeats, or gene family members. We find that translocation and transposition breakpoints are strongly associated with repeats in the C. elegans genome.
---
paper_title: Genomic rearrangements by LINE-1 insertion-mediated deletion in the human and chimpanzee lineages
paper_content:
Long INterspersed Elements (LINE-1s or L1s) are abundant non-LTR retrotransposons in mammalian genomes that are capable of insertional mutagenesis. They have been associated with target site deletions upon insertion in cell culture studies of retrotransposition. Here, we report 50 deletion events in the human and chimpanzee genomes directly linked to the insertion of L1 elements, resulting in the loss of ~18 kb of sequence from the human genome and ~15 kb from the chimpanzee genome. Our data suggest that during the primate radiation, L1 insertions may have deleted up to 7.5 Mb of target genomic sequences. While the results of our in vivo analysis differ from those of previous cell culture assays of L1 insertion-mediated deletions in terms of the size and rate of sequence deletion, evolutionary factors can reconcile the differences. We report a pattern of genomic deletion sizes similar to those created during the retrotransposition of Alu elements. Our study provides support for the existence of different mechanisms for small and large L1-mediated deletions, and we present a model for the correlation of L1 element size and the corresponding deletion size. In addition, we show that internal rearrangements can modify L1 structure during retrotransposition events associated with large deletions.
---
paper_title: Human Chromosome 19 and Related Regions in Mouse: Conservative and Lineage-Specific Evolution
paper_content:
To illuminate the function and evolutionary history of both genomes, we sequenced mouse DNA related to human chromosome 19. Comparative sequence alignments yielded confirmatory evidence for hypothetical genes and identified exons, regulatory elements, and candidate genes that were missed by other predictive methods. Chromosome-wide comparisons revealed a difference between single-copy HSA19 genes, which are overwhelmingly conserved in mouse, and genes residing in tandem familial clusters, which differ extensively in number, coding capacity, and organization between the two species. Finally, we sequenced breakpoints of all 15 evolutionary rearrangements, providing a view of the forces that drive chromosome evolution in mammals.
---
paper_title: Response to Comment on "Chromosomal Speciation and Molecular Divergence-Accelerated Evolution in Rearranged Chromosomes"
paper_content:
By clever use of outgroups, Lu et al. (1) tackle some of the questions raised by Navarro and Barton (2). Beyond confirming the previous result that rapidly evolving genes tend to be associated with chromosomes that have been rearranged between humans and chimpanzees (2), Lu et
---
paper_title: Murine segmental duplications are hot spots for chromosome and gene evolution
paper_content:
Mouse and rat genomic sequences permit us to obtain a global view of evolutionary rearrangements that have occurred between the two species and to define hallmarks that might underlie these events. We present a comparative study of the sequence assemblies of mouse and rat genomes and report an enrichment of rodent-specific segmental duplications in regions where synteny is not preserved. We show that segmental duplications present higher rates of molecular evolution and that genes in rearranged regions have evolved faster than those located elsewhere. Previous studies have shown that synteny breakpoints between the mouse and the human genomes are enriched in human segmental duplications, suggesting a causative connection between such structures and evolutionary rearrangements. Our work provides further evidence to support the role of segmental duplications in chromosomal rearrangements in the evolution of the architecture of mammalian chromosomes and in the speciation processes that separate the mouse and the rat.
---
paper_title: Chromosomal Speciation and Molecular Divergence--Accelerated Evolution in Rearranged Chromosomes
paper_content:
Humans and their closest evolutionary relatives, the chimpanzees, differ in approximately 1.24% of their genomic DNA sequences. The fraction of these changes accumulated during the speciation processes that have separated the two lineages may be of special relevance in understanding the basis of their differences. We analyzed human and chimpanzee sequence data to search for the patterns of divergence and polymorphism predicted by a theoretical model of speciation. According to the model, positively selected changes should accumulate in chromosomes that present fixed structural differences, such as inversions, between the two species. Protein evolution was more than 2.2 times faster in chromosomes that had undergone structural rearrangements compared with colinear chromosomes. Also, nucleotide variability is slightly lower in rearranged chromosomes. These patterns of divergence and polymorphism may be, at least in part, the molecular footprint of speciation events in the human and chimpanzee lineages.
---
paper_title: Effects of chromosomal rearrangements on human-chimpanzee molecular evolution
paper_content:
Many chromosomes are rearranged between humans and chimpanzees while others remain colinear. It was recently observed, based on over 100 genes, that the rates of protein evolution are substantially higher on rearranged than on colinear chromosomes during human-chimpanzee evolution. This finding led to the conclusion, since debated in the literature, that chromosomal rearrangements had played a key role in human-chimpanzee speciation. Here we re-examine this important conclusion by employing a larger data set (over 7000 genes), as well as alternative analyses. We show that the higher rates of protein evolution on rearranged chromosomes observed in the earlier study are not reproduced by our survey of the larger data set. We further show that the conclusion of the earlier study is likely confounded by two factors introduced by the relatively limited sample size: (1) nonuniform distribution of genes in the genome, and (2) stochastic noise in substitution rates inherent to short lineages such as the human-chimpanzee lineage. Our results offer a general cautionary note on the importance of controlling for hidden factors in studies involving bioinformatic surveys.
---
paper_title: Chromosomal rearrangements are associated with higher rates of molecular evolution in mammals.
paper_content:
Evolutionary rates are not uniformly distributed across the genome. Knowledge about the biological causes of this observation is still incomplete, but its exploration has provided valuable insight into the genomical, historical and demographical variables that influence rates of genetic divergence. Recent studies suggest a possible association between chromosomal rearrangements and regions of greater divergence, but evidence is limited and contradictory. Here, we test the hypothesis of a relationship between chromosomal rearrangements and higher rates of molecular evolution by studying the genomic distribution of divergence between 12 000 human–mouse orthologous genes. Our results clearly show that genes located in genomic regions that have been highly rearranged between the two species present higher rates of synonymous (0.7686 vs. 0.7076) and non-synonymous substitution (0.1014 vs. 0.0871), and that synonymous substitution rates are higher in genes close to the breakpoints of individual rearrangements. The many potential causes of such a striking association are discussed, particularly in the light of speciation models suggesting that chromosomal rearrangements may have contributed to some of the speciation processes along the human and mouse lineages. Still, there are other possible causes and further research is needed to properly explore them.
---
paper_title: The molecular basis of common and rare fragile sites.
paper_content:
Fragile sites are specific loci that form gaps and constrictions on chromosomes exposed to partial replication stress. Fragile sites are classified as rare or common, depending on their induction and frequency within the population. These loci are known to be involved in chromosomal rearrangements in tumors and are associated with human diseases. Therefore, the understanding of the molecular basis of fragile sites is of high significance. Here we discuss the works performed in recent years that investigated the characteristics of fragile sites which underlie their inherent instability.
---
paper_title: Evolutionary conserved chromosomal segments in the human karyotype are bounded by unstable chromosome bands
paper_content:
In this paper an ancestral karyotype for primates, defining for the first time the ancestral chromosome morphology and the banding patterns, is proposed, and the ancestral syntenic chromosomal segments are identified in the human karyotype. The chromosomal bands that are boundaries of ancestral segments are identified. We have analyzed from data published in the literature 35 different primate species from 19 genera, using the order Scandentia, as well as other published mammalian species as out-groups, and propose an ancestral chromosome number of 2n = 54 for primates, which includes the following chromosomal forms: 1(a+c(1)), 1(b+c(2)), 2a, 2b, 3/21, 4, 5, 6, 7a, 7b, 8, 9, 10a, 10b, 11, 12a/22a, 12b/22b, 13, 14/15, 16a, 16b, 17, 18, 19a, 19b, 20 and X and Y. From this analysis, we have been able to point out the human chromosome bands more "prone" to breakage during the evolutionary pathways and/or pathology processes. We have observed that 89.09% of the human chromosome bands, which are boundaries for ancestral chromosome segments, contain common fragile sites and/or intrachromosomal telomeric-like sequences. A more in depth analysis of twelve different human chromosomes has allowed us to determine that 62.16% of the chromosomal bands implicated in inversions and 100% involved in fusions/fissions correspond to fragile sites, intrachromosomal telomeric-like sequences and/or bands significantly affected by X irradiation. In addition, 73% of the bands affected in pathological processes are co-localized in bands where fragile sites, intrachromosomal telomeric-like sequences, bands significantly affected by X irradiation and/or evolutionary chromosomal bands have been described. Our data also support the hypothesis that chromosomal breakages detected in pathological processes are not randomly distributed along the chromosomes, but rather concentrate in those important evolutionary chromosome bands which correspond to fragile sites and/or intrachromosomal telomeric-like sequences.
---
paper_title: Fragile Sites in Human and Macaca Fascicularis Chromosomes are Breakpoints in Chromosome Evolution
paper_content:
We have analysed the expression of aphidicolin-induced common fragile sites at two different aphidicolin concentrations (0.1 µmol/L and 0.2 µmol/L) in three female and one male crab-eating macaques (Macaca fascicularis, Cercopithecidae, Catarrhini). A total of 3948 metaphases were analysed: 1754 in cultures exposed to 0.1 µmol/L aphidicolin, 1261 in cultures exposed to 0.2 µmol/L aphidicolin and 933 in controls. The number of breaks and gaps detected ranged from 439 in cultures exposed to 0.1 µmol/L aphidicolin to 2061 in cultures exposed to 0.2 µmol/L aphidicolin. The use of a multinomial FSM statistical model allowed us to identify 95 fragile sites in the chromosomes of M. fascicularis, of which only 16 are expressed in all four specimens. A comparative study between the chromosomes of M. fascicularis and man has demonstrated that 38 human common fragile sites (50%) are found in the equivalent location in M. fascicularis. The analysis of the rearrangements that have taken place during chromosome evolution has revealed that the breakpoints involved in these rearrangements correspond significantly (p < 0.025) to the location of M. fascicularis fragile sites.
---
paper_title: Evolutionary breakpoints are co-localized with fragile sites and intrachromosomal telomeric sequences in primates
paper_content:
The concentration of evolutionary breakpoints in primate karyotypes in some particular regions or chromosome bands suggests that these chromosome regions are more prone to breakage. This is the first extensive comparative study which investigates a possible relationship of two genetic markers (intrachromosomal telomeric sequences [TTAGGG]n, [ITSs] and fragile sites [FSs]), which are implicated in the evolutionary process as well as in chromosome rearrangements. For this purpose, we have analyzed: (a) the cytogenetic expression of aphidicolin-induced FSs in Cebus apella and Cebus nigrivittatus (F. Cebidae, Platyrrhini) and Mandrillus sphinx (F. Cercopithecidae, Catarrhini), and (b) the intrachromosomal position of telomeric-like sequences by FISH with a synthetic (TTAGGG)n probe in C. apella chromosomes. The multinomial FSM statistical model allowed us to determine 53 FSs in C. apella, 16 FSs in C. nigrivittatus and 50 FSs in M. sphinx. As expected, all telomeres hybridized with the probe, and 55 intrachromosomal loci were also detected in the Cebus apella karyotype. The χ2 test indicates that the coincidence of the location of Cebus and Mandrillus FSs with the location of human FSs is significant (P < 0.005). Based on a comparative cytogenetic study among different primate species we have identified (or described) the chromosome bands in the karyotypes of Papionini and Cebus species implicated in evolutionary reorganizations. More than 80% of these evolutionary breakpoints are located in chromosome bands that express FSs and/or contain ITSs.
---
paper_title: Translocation and gross deletion breakpoints in human inherited disease and cancer II: Potential involvement of repetitive sequence elements in secondary structure formation between DNA ends
paper_content:
Translocations and gross deletions are responsible for a significant proportion of both cancer and inherited disease. Although such gene rearrangements are nonuniformly distributed in the human genome, the underlying mutational mechanisms remain unclear. We have studied the potential involvement of various types of repetitive sequence elements in the formation of secondary structure intermediates between the single-stranded DNA ends that recombine during rearrangements. Complexity analysis was used to assess the potential of these ends to form secondary structures, the maximum decrease in complexity consequent to a gross rearrangement being used as an indicator of the type of repeat and the specific DNA ends involved. A total of 175 pairs of deletion/translocation breakpoint junction sequences available from the Gross Rearrangement Breakpoint Database [GRaBD; www.uwcm.ac.uk/uwcm/mg/grabd/grabd.html] were analyzed. Potential secondary structure was noted between the 5' flanking sequence of the first breakpoint and the 3' flanking sequence of the second breakpoint in 49% of rearrangements and between the 5' flanking sequence of the second breakpoint and the 3' flanking sequence of the first breakpoint in 36% of rearrangements. Inverted repeats, inversions of inverted repeats, and symmetric elements were found in association with gross rearrangements at approximately the same frequency. However, inverted repeats and inversions of inverted repeats accounted for the vast majority (83%) of deletions plus small insertions, symmetric elements for one-half of all antigen receptor-mediated translocations, while direct repeats appear only to be involved in mediating simple deletions. These findings extend our understanding of illegitimate recombination by highlighting the importance of secondary structure formation between single-stranded DNA ends at breakpoint junctions.
---
paper_title: Non-B DNA conformations, genomic rearrangements, and human disease
paper_content:
The history of investigations on non-B DNA conformations as related to genetic diseases dates back to the mid-1960s. Studies with high molecular weight DNA polymers of defined repeating nucleotide sequences demonstrated the role of sequence in their properties and conformations (1). Investigations with repeating homo-, di-, tri-, and tetranucleotide repeating motifs revealed the powerful role of sequence in molecular behaviors. At that time, this concept was heretical because numerous prior investigations with naturally occurring DNA sequences masked the effect of sequence (1). It may be noted that these studies in the 1960s predated DNA sequencing by at least a decade. Early studies were followed by a number of innovative discoveries on DNA conformational features in synthetic oligomers, restriction fragments, and recombinant DNAs. The DNA polymorphisms were a function of sequence, topology (supercoil density), ionic conditions, protein binding, methylation, carcinogen binding, and other factors (2). A number of non-B DNA structures have been discovered (approximately one new conformation every 3 years for the past 35 years) and include the following: triplexes, left-handed DNA, bent DNA, cruciforms, nodule DNA, flexible and writhed DNA, G4 tetrad (tetraplexes), slipped structures, and sticky DNA (Fig. 1). From the outset, it was realized (1, 2) that these sequence effects probably have profound biological implications, and indeed their role in transcription (3) and in the maintenance of telomere ends (4) has recently been reviewed. However, in the past few years dramatic advances from genomics, human genetics, medicine, and DNA structural biology have revealed the role of non-B conformations in the etiology of at least 46 human genetic diseases (Table I) that involve genomic rearrangements as well as other types of mutation events.
---
paper_title: Breakpoints of gross deletions coincide with non-B DNA conformations.
paper_content:
Genomic rearrangements are a frequent source of instability, but the mechanisms involved are poorly understood. A 2.5-kbp poly(purine.pyrimidine) sequence from the human PKD1 gene, known to form non-B DNA structures, induced long deletions and other instabilities in plasmids that were mediated by mismatch repair and, in some cases, transcription. The breakpoints occurred at predicted non-B DNA structures. Distance measurements also indicated a significant proximity of alternating purine-pyrimidine and oligo(purine.pyrimidine) tracts to breakpoint junctions in 222 gross deletions and translocations, respectively, involved in human diseases. In 11 deletions analyzed, breakpoints were explicable by non-B DNA structure formation. We conclude that alternative DNA conformations trigger genomic rearrangements through recombination-repair activities.
---
paper_title: Translocation and gross deletion breakpoints in human inherited disease and cancer I: Nucleotide composition and recombination-associated motifs
paper_content:
Translocations and gross deletions are important causes of both cancer and inherited disease. Such gene rearrangements are nonrandomly distributed in the human genome as a consequence of selection for growth advantage and/or the inherent potential of some DNA sequences to be frequently involved in breakage and recombination. Using the Gross Rearrangement Breakpoint Database [GRaBD; www.uwcm.ac.uk/uwcm/mg/grabd/grabd.html] (containing 397 germ-line and somatic DNA breakpoint junction sequences derived from 219 different rearrangements underlying human inherited disease and cancer), we have analyzed the sequence context of translocation and deletion breakpoints in a search for general characteristics that might have rendered these sequences prone to rearrangement. The oligonucleotide composition of breakpoint junctions and a set of reference sequences, matched for length and genomic location, were compared with respect to their nucleotide composition. Deletion breakpoints were found to be AT-rich whereas by comparison, translocation breakpoints were GC-rich. Alternating purine-pyrimidine sequences were found to be significantly over-represented in the vicinity of deletion breakpoints while polypyrimidine tracts were over-represented at translocation breakpoints. A number of recombination-associated motifs were found to be over-represented at translocation breakpoints (including DNA polymerase pause sites/frameshift hotspots, immunoglobulin heavy chain class switch sites, heptamer/nonamer V(D)J recombination signal sequences, translin binding sites, and the χ element) but, with the exception of the translin-binding site and immunoglobulin heavy chain class switch sites, none of these motifs were over-represented at deletion breakpoints. Alu sequences were found to span both breakpoints in seven cases of gross deletion that may thus be inferred to have arisen by homologous recombination. Our results are therefore consistent with a role for homologous unequal recombination in deletion mutagenesis and a role for nonhomologous recombination in the generation of translocations.
---
paper_title: The Evolutionary Chromosome Translocation 4;19 in Gorilla gorilla is Associated with Microduplication of the Chromosome Fragment Syntenic to Sequences Surrounding the Human Proximal CMT1A-REP
paper_content:
Many genomic disorders occur as a result of chromosome rearrangements involving low-copy repeats (LCRs). To better understand the molecular basis of chromosome rearrangements, including translocations, we have investigated the mechanism of evolutionary rearrangements. In contrast to several intrachromosomal rearrangements, only two evolutionary translocations have been identified by cytogenetic analyses of humans and greater apes. Human chromosome 2 arose as a result of a telomeric fusion between acrocentric chromosomes, whereas chromosomes 4 and 19 in Gorilla gorilla are the products of a reciprocal translocation between ancestral chromosomes, syntenic to human chromosomes 5 and 17, respectively. Fluorescence in situ hybridization (FISH) was used to characterize the breakpoints of the latter translocation at the molecular level. We identified three BAC clones that span translocation breakpoints. One breakpoint occurred in the region syntenic to human chromosome 5q13.3, between the HMG-CoA reductase gene (HMGCR) and RAS p21 protein activator 1 gene (RASA1). The second breakpoint was in a region syntenic to human chromosome 17p12 containing the 24 kb region-specific low-copy repeat-proximal CMT1A-REP. Moreover, we found that the t(4;19) is associated with a submicroscopic chromosome duplication involving a 19p chromosome fragment homologous to the human chromosome region surrounding the proximal CMT1A-REP. These observations further indicate that higher order genomic architecture involving low-copy repeats resulting from genomic duplication plays a significant role in karyotypic evolution.
---
paper_title: Molecular Characterization of the Pericentric Inversion That Causes Differences Between Chimpanzee Chromosome 19 and Human Chromosome 17
paper_content:
A comparison of the human genome with that of the chimpanzee is an attractive approach to attempts to understand the specificity of a certain phenotype's development. The two karyotypes differ by one chromosome fusion, nine pericentric inversions, and various additions of heterochromatin to chromosomal telomeres. Only the fusion, which gave rise to human chromosome 2, has been characterized at the sequence level. During the present study, we investigated the pericentric inversion by which chimpanzee chromosome 19 differs from human chromosome 17. Fluorescence in situ hybridization was used to identify breakpoint-spanning bacterial artificial chromosomes (BACs) and plasmid artificial chromosomes (PACs). By sequencing the junction fragments, we localized breakpoints in intergenic regions rich in repetitive elements. Our findings suggest that repeat-mediated nonhomologous recombination has facilitated inversion formation. No addition or deletion of any sequence element was detected at the breakpoints or in the surrounding sequences. Next to the break, at a distance of 10.2-39.1 kb, the following genes were found: NGFR and NXPH3 (on human chromosome 17q21.3) and GUC2D and ALOX15B (on human chromosome 17p13). The inversion affects neither the genomic structure nor the gene-activity state with regard to replication timing of these genes.
---
paper_title: Molecular characterisation of the pericentric inversion that distinguishes human chromosome 5 from the homologous chimpanzee chromosome
paper_content:
Human and chimpanzee karyotypes differ by virtue of nine pericentric inversions that serve to distinguish human chromosomes 1, 4, 5, 9, 12, 15, 16, 17, and 18 from their chimpanzee orthologues. In this study, we have analysed the breakpoints of the pericentric inversion characteristic of chimpanzee chromosome 4, the homologue of human chromosome 5. Breakpoint-spanning BAC clones were identified from both the human and chimpanzee genomes by fluorescence in situ hybridisation, and the precise locations of the breakpoints were determined by sequence comparisons. In stark contrast to some other characterised evolutionary rearrangements in primates, this chimpanzee-specific inversion appears not to have been mediated by either gross segmental duplications or low-copy repeats, although micro-duplications were found adjacent to the breakpoints. However, alternating purine-pyrimidine (RY) tracts were detected at the breakpoints, and such sequences are known to adopt non-B DNA conformations that are capable of triggering DNA breakage and genomic rearrangements. Comparison of the breakpoint region of human chromosome 5q15 with the orthologous regions of the chicken, mouse, and rat genomes, revealed similar but non-identical syntenic disruptions in all three species. The clustering of evolutionary breakpoints within this chromosomal region, together with the presence of multiple pathological breakpoints in the vicinity of both 5p15 and 5q15, is consistent with the non-random model of chromosomal evolution and suggests that these regions may well possess intrinsic features that have served to mediate a variety of genomic rearrangements, including the pericentric inversion in chimpanzee chromosome 4.
---
paper_title: Refinement of a chimpanzee pericentric inversion breakpoint to a segmental duplication cluster
paper_content:
Background: Pericentric inversions are the most common euchromatic chromosomal differences among humans and the great apes. The human and chimpanzee karyotype differs by nine such events, in addition to several constitutive heterochromatic increases and one chromosomal fusion event. Reproductive isolation and subsequent speciation are thought to be the potential result of pericentric inversions, as reproductive boundaries form as a result of hybrid sterility. Results: Here we employed a comparative fluorescence in situ hybridization approach, using probes selected from a combination of physical mapping, genomic sequence, and segmental duplication analyses to narrow the breakpoint interval of a pericentric inversion in chimpanzee involving the orthologous human 15q11-q13 region. We have refined the inversion breakpoint of this chimpanzee-specific rearrangement to a 600 kilobase (kb) interval of the human genome consisting of entirely duplicated material. Detailed analysis of the underlying sequence indicated that this region comprises multiple segmental duplications, including a previously characterized duplication of the alpha7 neuronal nicotinic acetylcholine receptor subunit gene (CHRNA7) in 15q13.3 and several Golgin-linked-to-PML, or LCR15, duplications. Conclusions: We conclude that, on the basis of experimental data excluding the CHRNA7 duplicon as the site of inversion, and sequence analysis of regional duplications, the most likely rearrangement site is within a GLP/LCR15 duplicon. This study further exemplifies the genomic plasticity due to the presence of segmental duplications and highlights their importance for a complete understanding of genome evolution.
---
paper_title: Breakpoint analysis of the pericentric inversion distinguishing human chromosome 4 from the homologous chromosome in the chimpanzee (Pan troglodytes).
paper_content:
The study of breakpoints that occurred during primate evolution promises to yield valuable insights into the mechanisms underlying chromosome rearrangements in both evolution and pathology. Karyotypic differences between humans and chimpanzees include nine pericentric inversions, which may have potentiated the parapatric speciation of hominids and chimpanzees 5-6 million years ago. Detailed analysis of the respective chromosomal breakpoints is a prerequisite for any assessment of the genetic consequences of these inversions. The breakpoints of the inversion that distinguishes human chromosome 4 (HSA4) from its chimpanzee counterpart were identified by fluorescence in situ hybridization (FISH) and comparative sequence analysis. These breakpoints, at HSA4p14 and 4q21.3, do not disrupt the protein coding region of a gene, although they occur in regions with an abundance of LINE and LTR-elements. At 30 kb proximal to the breakpoint in 4q21.3, we identified an as yet unannotated gene, C4orf12, that lacks an homologous counterpart in rodents and is expressed at a 33-fold higher level in human fibroblasts as compared to chimpanzee. Seven out of 11 genes that mapped to the breakpoint regions have been previously analyzed using oligonucleotide-microarrays. One of these genes, WDFY3, exhibits a three-fold difference in expression between human and chimpanzee. To investigate whether the genomic architecture might have facilitated the inversion, comparative sequence analysis was used to identify an approximately 5-kb inverted repeat in the breakpoint regions. This inverted repeat is inexact and comprises six subrepeats with 78 to 98% complementarity. (TA)-rich repeats were also noted at the breakpoints. These findings imply that genomic architecture, and specifically high-copy repetitive elements, may have made a significant contribution to hominoid karyotype evolution, predisposing specific genomic regions to rearrangements.
---
paper_title: Chromosome evolution: the junction of mammalian chromosomes in the formation of mouse chromosome 10.
paper_content:
During evolution, chromosomes are rearranged and become fixed into new patterns in new species. The relatively conservative nature of this process supports predictions of the arrangement of ancestral mammalian chromosomes, but the basis for these rearrangements is unknown. Physical mapping of mouse chromosome 10 (MMU 10) previously identified a 380-kb region containing the junction of material represented in human on chromosomes 21 (HSA 21) and 22 (HSA 22) that occurred in the evolutionary lineage of the mouse. Here, acquisition of 275 kb of mouse genomic sequence from this region and comparative sequence analysis with HSA 21 and HSA 22 narrowed the junction from 380 kb to 18 kb. The minimal junction region on MMU 10 contains a variety of repeats, including an L32-like ribosomal element and low-copy sequences found on several mouse chromosomes and represented in the mouse EST database. Sequence level analysis of an interchromosomal rearrangement during evolution has not been reported previously.
---
paper_title: Inversion, duplication, and changes in gene context are associated with human chromosome 18 evolution
paper_content:
Human chromosome 18 differs from its homologues in the great apes by a pericentric inversion. We have identified a chimpanzee bacterial artificial chromosome that spans a region where a break is likely to have occurred in a human progenitor and have characterized the corresponding regions in both chimpanzees and humans. Interspecies sequence comparisons indicate that the ancestral break occurred between the genes ROCK1 and USP14. In humans, the inversion places ROCK1 near centromeric heterochromatin and USP14 adjacent to highly repetitive subtelomeric repeats. In addition, we provide evidence for a human segmental duplication that may have provided a mechanism for the inversion.
---
paper_title: Segmental duplication associated with the human-specific inversion of chromosome 18: a further example of the impact of segmental duplications on karyotype and genome evolution in primates
paper_content:
The human-specific pericentric inversion of chromosome 18 was analysed using breakpoint-spanning BACs from the chimpanzee and human genome. Sequence and FISH analyses disclosed that the breakpoints map to an inverted segmental duplication of 19-kb, which most likely mediated the inversion by intrachromosomal homologous recombination. The 19-kb duplication encompasses the 3' end of the ROCK1 gene and occurred in the human lineage. Only one copy of this segment is found in the chimpanzee. Due to the inversion, the genomic context of the ROCK1 and USP14 genes is altered. ROCK1 flanks USP14 in the long arm of the chimpanzee chromosome 17, which is homologous to human chromosome 18. This order is interrupted by the inversion in humans. ROCK1 is localized close to the pericentromeric region in 18q11 and USP14 is inverted to distal 18p11.3 in direct neighbourhood to LSAU-satellites, beta-satellites and telomere-associated repeats. Our findings essentially confirm the analysis of Dennehey et al. (2004). Intriguingly, USP14 is differentially expressed in human and chimpanzee cortex as well as fibroblast cell lines determined previously by the analysis of oligonucleotide arrays. Either position effects mediated by the proximity to the telomeric region or nucleotide divergence in regulatory regions might account for the differential expression of USP14. The assignment of the breakpoint region to a segmental duplication underlines the significance of the genomic architecture in the context of genome and karyotype evolution in hominoids.
---
paper_title: Independent intrachromosomal recombination events underlie the pericentric inversions of chimpanzee and gorilla chromosomes homologous to human chromosome 16
paper_content:
Analyses of chromosomal rearrangements that have occurred during the evolution of the hominoids can reveal much about the mutational mechanisms underlying primate chromosome evolution. We characterized the breakpoints of the pericentric inversion of chimpanzee chromosome 18 (PTR XVI), which is homologous to human chromosome 16 (HSA 16). A conserved 23-kb inverted repeat composed of satellites, LINE and Alu elements was identified near the breakpoints and could have mediated the inversion by bringing the chromosomal arms into close proximity with each other, thereby facilitating intrachromosomal recombination. The exact positions of the breakpoints may then have been determined by local DNA sequence homologies between the inversion breakpoints, including a 22-base pair direct repeat. The similarly located pericentric inversion of gorilla (GGO) chromosome XVI, was studied by FISH and PCR analysis. The p- and q-arm breakpoints of the inversions in PTR XVI and GGO XVI were found to occur at slightly different locations, consistent with their independent origin. Further, FISH studies of the homologous chromosomal regions in macaque and orangutan revealed that the region represented by HSA BAC RP11-696P19, which spans the inversion breakpoint on HSA 16q11-12, was derived from the ancestral primate chromosome homologous to HSA 1. After the divergence of orangutan from the other great apes approximately 12 million years ago (Mya), a duplication of the corresponding region occurred followed by its interchromosomal transposition to the ancestral chromosome 16q. Thus, the most parsimonious interpretation is that the gorilla and chimpanzee homologs exhibit similar but nonidentical derived pericentric inversions, whereas HSA 16 represents the ancestral form among hominoids.
---
paper_title: Breakpoint analysis of the pericentric inversion between chimpanzee chromosome 10 and the homologous chromosome 12 in humans
paper_content:
During this study, we analysed the pericentric inversion that distinguishes human chromosome 12 (HSA12) from the homologous chimpanzee chromosome (PTR10). Two large chimpanzee-specific duplications of 86 and 23 kb were observed in the breakpoint regions, which most probably occurred associated with the inversion. The inversion break in PTR10p caused the disruption of the SLCO1B3 gene in exon 11. However, the 86-kb duplication includes the functional SLCO1B3 locus, which is thus retained in the chimpanzee, although inverted to PTR10q. The second duplication spans 23 kb and does not contain expressed sequences. Eleven genes map to a region of about 1 Mb around the breakpoints. Six of these eleven genes are not among the differentially expressed genes as determined previously by comparing the human and chimpanzee transcriptome of fibroblast cell lines, blood leukocytes, liver and brain samples. These findings imply that the inversion did not cause major expression differences of these genes. Comparative FISH analysis with BACs spanning the inversion breakpoints in PTR on metaphase chromosomes of gorilla (GGO) confirmed that the pericentric inversion of the chromosome 12 homologs in GGO and PTR have distinct breakpoints and that humans retain the ancestral arrangement. These findings coincide with the trend observed in hominoid karyotype evolution that humans have a karyotype close to an ancestral one, while African great apes present with more derived chromosome arrangements.
---
paper_title: Spatial genome organization.
paper_content:
The linear sequence of genomes exists within the three-dimensional space of the cell nucleus. The spatial arrangement of genes and chromosomes within the interphase nucleus is nonrandom and gives rise to specific patterns. While recent work has begun to describe some of the positioning patterns of chromosomes and gene loci, the structural constraints that are responsible for nonrandom positioning and the relevance of spatial genome organization for genome expression are unclear. Here we discuss potential functional consequences of spatial genome organization and we speculate on the possible molecular mechanisms of how genomes are organized within the space of the mammalian cell nucleus.
---
paper_title: Fine-scale structural variation of the human genome
paper_content:
Inversions, deletions and insertions are important mediators of disease and disease susceptibility [1]. We systematically compared the human genome reference sequence with a second genome (represented by fosmid paired-end sequences) to detect intermediate-sized structural variants >8 kb in length. We identified 297 sites of structural variation: 139 insertions, 102 deletions and 56 inversion breakpoints. Using combined literature, sequence and experimental analyses, we validated 112 of the structural variants, including several that are of biomedical relevance. These data provide a fine-scale structural variation map of the human genome and the requisite sequence precision for subsequent genetic studies of human disease.
---
paper_title: Structural variation in the human genome
paper_content:
The first wave of information from the analysis of the human genome revealed SNPs to be the main source of genetic and phenotypic human variation. However, the advent of genome-scanning technologies has now uncovered an unexpectedly large extent of what we term 'structural variation' in the human genome. This comprises microscopic and, more commonly, submicroscopic variants, which include deletions, duplications and large-scale copy-number variants - collectively termed copy-number variants or copy-number polymorphisms - as well as insertions, inversions and translocations. Rapidly accumulating evidence indicates that structural variants can comprise millions of nucleotides of heterogeneity within every genome, and are likely to make an important contribution to human diversity and disease susceptibility.
---
paper_title: Nuclear architecture and the induction of chromosomal aberrations.
paper_content:
Progress in fluorescence in situ hybridization, three dimensional microscopy and image analysis has provided the means to study the three-dimensional structure and distribution of chromosome territories within the cell nucleus. In this contribution, we summarize the present state of knowledge of the territorial organization of interphase chromosomes and their topological relationships with other macromolecular domains in the human cell nucleus, and present data from computer simulations of chromosome territory distributions. On this basis, we discuss models of chromosome territory and nuclear architecture and topological consequences for the formation of chromosome exchanges.
---
paper_title: The evolutionary history of human chromosome 7.
paper_content:
We report on a comparative molecular cytogenetic and in silico study on evolutionary changes in human chromosome 7 homologs in all major primate lineages. The ancestral mammalian homologs comprise two chromosomes (7a and 7b/16p) and are conserved in carnivores. The subchromosomal organization of the ancestral primate segment 7a shared by a lemur and higher Old World monkeys is the result of a paracentric inversion. The ancestral higher primate chromosome form was then derived by a fission of 7b/16p, followed by a centric fusion of 7a/7b as observed in the orangutan. In hominoids two further inversions with four distinct breakpoints were described in detail: the pericentric inversion in the human/African ape ancestor and the paracentric inversion in the common ancestor of human and chimpanzee. FISH analysis employing BAC probes confined the 7p22.1 breakpoint of the pericentric inversion to 6.8 Mb on the human reference sequence map and the 7q22.1 breakpoint to 97.1 Mb. For the paracentric inversion the breakpoints were found in 7q11.23 between 76.1 and 76.3 Mb and in 7q22.1 at 101.9 Mb. All four breakpoints were flanked by large segmental duplications. Hybridization patterns of breakpoint-flanking BACs and the distribution of duplicons suggest their presence before the origin of both inversions. We propose a scenario by which segmental duplications may have been the cause rather than the result of these chromosome rearrangements.
---
|
Title: A small trip in the untranquil world of genomes: A survey on the detection and analysis of genome rearrangement breakpoints
Section 1: Introduction
Description 1: Provide an overview of the topic and explain the scope of the survey, stating the main focus areas such as detecting breakpoints and analyzing breakpoint regions.
Section 2: Biological background
Description 2: Explain genome dynamics, the different types of genome rearrangements, and the biological mechanisms underlying these rearrangements.
Section 3: Detecting breakpoints
Description 3: Discuss the available data for studying rearrangements and outline the methods used to detect conserved segments as a precursor to identifying breakpoints.
Section 4: Experimental methods
Description 4: Describe various experimental techniques used to analyze karyotypes and identify conserved segments, including karyotype comparison, chromosome banding, FISH, CGH, and gene mapping.
Section 5: Genomic alignment
Description 5: Elaborate on the types of genomic alignment algorithms and the challenges posed by whole genome alignment, including the details of specific methods used in practice.
Section 6: Anchoring
Description 6: Detail the anchoring step in alignment algorithms, comparing different models and algorithms used for identifying local similarities between genomes.
Section 7: Clustering or chaining of anchors
Description 7: Explain the methods of chaining and clustering anchors to filter out false positives and retain true homologous segments.
Section 8: Extension or recursivity
Description 8: Describe how selected anchors are used to produce final alignments, focusing on the step differences among alignment methods and their objectives.
Section 9: General comments
Description 9: Analyze the advantages and drawbacks of the discussed methods in the context of identifying breakpoint regions, covering issues like micro-rearrangements and duplications.
Section 10: Analysis of breakpoint regions
Description 10: Review what is known about rearrangement mechanisms, differentiating between systematic and punctual studies, and their findings related to evolutionary and other types of breakpoints.
Section 11: Systematic studies
Description 11: Focus on systematic analyses of breakpoint regions, exploring themes such as the randomness of breakpoints, segmental duplications, duplicated elements, evolutionary rates, and fragile sites.
Section 12: Random or not random?
Description 12: Assess the arguments and evidence for and against the random distribution model of genome rearrangements.
Section 13: Segmental duplications
Description 13: Investigate the role of segmental duplications in genome rearrangements, particularly in the context of evolutionary breakpoints.
Section 14: Various duplicated elements
Description 14: Examine associations between different types of duplicated elements and breakpoint regions, and their potential involvement in rearrangements.
Section 15: Evolutionary rates
Description 15: Discuss findings on the relationship between breakpoints and evolutionary rates, including different theories and speciation models.
Section 16: Fragile sites
Description 16: Explore the correlation between evolutionary breakpoint regions and fragile sites, and their potential role in the rearrangement process.
Section 17: Correlations with other types of breakpoints (polymorphism, inherited disease, cancer)
Description 17: Investigate common features and correlations between evolutionary breakpoints and those involved in polymorphism, inherited diseases, and cancer.
Section 18: Punctual studies
Description 18: Summarize detailed analyses of individual breakpoint regions, highlighting common trends and notable findings from such studies.
Section 19: Conclusion and open problems
Description 19: Conclude the survey by summarizing key points, presenting challenges that remain unsolved, and suggesting potential directions for future research.
|
Giant Magnetoresistance Sensors: A Review on Structures and Non-Destructive Eddy Current Testing Applications
| 29 |
---
paper_title: The Electrical Conductivity of Transition Metals
paper_content:
In a recent paper certain properties of the transition metals Ni, Pd, and Pt and of their alloys with Cu, Ag, and Au have been discussed from the point of view of the electron theory of metals based on quantum mechanics. In particular, a qualitative explanation was given of the relatively high electrical resistance of the transition metals. It was shown from an examination of the experimental evidence that the conduction electrons in these metals have wave functions derived mainly from s states just as in Cu, Ag, and Au, and that the effective number of conduction electrons is not much less than in the noble metals. On the other hand, the mean free path is much smaller, because under the influence of the lattice vibrations the conduction electrons may make transitions to the unoccupied d states, and the probability of these transitions is several times greater than the probability of ordinary scattering. Since the unoccupied d states are responsible for the ferromagnetism or high paramagnetism of the transition elements, there is a direct connexion between the magnetic properties and the electrical conductivity. The purpose of this paper is as follows: in §§ 2, 3, and 4 we develop a formal theory of conductivity for metals, such as the transition metals, where two Brillouin zones are of importance for the conductivity; in § 5 we apply the theory to show why, at high temperatures, the temperature coefficient of the paramagnetic metals Pd and Pt falls below the normal value; and in § 6 we discuss the resistance of ferromagnetic metals, and show in § 7 qualitatively why constantan (Cu-Ni) has zero temperature coefficient at room temperature.
---
paper_title: Extending the GMR Current Measurement Range with a Counteracting Magnetic Field
paper_content:
Traditionally, current transformers are often used for current measurement in low voltage (LV) electrical networks. They have a large physical size and are not designed for use with power electronic circuits. Semiconductor-based current sensing devices such as the Hall sensor and Giant Magnetoresistive (GMR) sensor are advantageous in terms of small size, high sensitivity, wide frequency range, low power consumption, and relatively low cost. Nevertheless, the operational characteristics of these devices limit their current measurement range. In this paper, a design based on using counteracting magnetic field is introduced for extending the GMR current measurement range from 9 A (unipolar) to ±45 A. A prototype has been implemented to verify the design and the linear operation of the circuit is demonstrated by experimental results. A microcontroller unit (MCU) is used to provide an automatic scaling function to optimize the performance of the proposed current sensor.
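As an illustration of the range-extension idea described above, the following sketch (in Python; the coupling factor, bias levels and function names are assumptions, not the paper's actual design) shows how a counteracting field selected by a microcontroller can keep a unipolar sensor reading inside its 0-9 A-equivalent window while the primary current spans roughly ±45 A:

    # Illustrative only: the unipolar GMR reads B_net = K_I*I + b_counter, and the
    # MCU selects b_counter so that B_net stays inside the sensor's valid window.
    K_I = 1.0                      # assumed field-per-ampere coupling (A-equivalent units)
    B_MIN, B_MAX = 0.0, 9.0        # assumed unipolar sensing window

    def choose_counter_field(i_estimate, steps=tuple(range(-45, 46, 9))):
        for b_counter in steps:
            if B_MIN <= K_I * i_estimate + b_counter <= B_MAX:
                return b_counter
        raise ValueError("current outside extended measurement range")

    def reconstruct_current(b_net_measured, b_counter):
        # Recover the primary current from the sensor reading and the known bias.
        return (b_net_measured - b_counter) / K_I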
---
paper_title: Giant Magnetoresistance of (001)Fe/(001)Cr Magnetic Superlattices
paper_content:
We have studied the magnetoresistance of (001)Fe/(001)Cr superlattices prepared by molecular-beam epitaxy. A huge magnetoresistance is found in superlattices with thin Cr layers: for example, with ${t}_{\mathrm{Cr}}=9$ \AA{}, at $T=4.2$ K, the resistivity is lowered by almost a factor of 2 in a magnetic field of 2 T. We ascribe this giant magnetoresistance to spin-dependent transmission of the conduction electrons between Fe layers through Cr layers.
---
paper_title: Spin-valve effect in soft ferromagnetic sandwiches
paper_content:
Abstract We demonstrated in a variety of systems that the in-plane resistivity of sandwiches of soft ferromagnetic layers separated by nonmagnetic metallic layers depends on the relative angle between their magnetizations. We observe this phenomenon, which we term the spin-valve effect, in sandwiches where we are able to control the relative angle between the magnetizations of two ferromagnetic layers either by constraining one layer through exchange anisotropy or by fabricating layers with different coercivities. In the first case, for example Si/50 Å Ta/60 Å NiFe/25 Å Cu/40 Å NiFe/50 Å FeMn/50 Å Ta, we have seen relative changes in resistance of more than 4% at room temperature in a range of in-plane field of 0 to 15 Oe. In a system where the layers have different coercivities, Si/8 × (30 Å Fe/60 Å Ag/30 Å Co/60 Å Ag), we observed a relative change of 1.6% at room temperature for fields between 0 and 50 Oe. Since the ferromagnetic layers are essentially decoupled and have high squareness, one can rule out any mechanism requiring scattering by domain walls. The usual anisotropic magnetoresistance in these structures is much smaller than the spin-valve effect. In contrast to noble metals, when using Ta, Al, Cr or Pd spacers of similar thickness (20 to 150 Å) between layers of permalloy, only the anisotropic magnetoresistance is observed. We believe the spin-valve effect to be related to spin-dependent scattering at the interface and within the ferromagnetic layers, in balance with spin-dependent relaxation within the layers. We also report the observation of a weak exchange-like coupling between the ferromagnetic layers.
---
paper_title: Spin valve sensors
paper_content:
Abstract This paper demonstrates spin valve sensor applications as read elements in storage systems, or, when in a Wheatstone bridge configuration, as rotational speed control devices (for ABS systems), high current monitoring devices for power lines, and positioning control devices in robotic systems. For recording heads, shielded spin valve sensors are adequate for high linear density recording. A tape head was fabricated with output 400 μV per micron of trackwidth, with a D50 value of 100 kfci, and signal loss of −0.34 dB/kfci. For rotational speed measurement, spin valve bridge sensors with flux guides were used, yielding a 400-mVpp amplitude, and square wave output with rise/fall times below 70 μs, when excited by a magnetized wheel. Amplitude is independent of speed (0–3000 rpm), and of sensor to wheel separation (0.5–2.0 mm). For power line applications, currents up to 2100 A (at 50 Hz) could be measured with a sensitivity of 35 μVrms/A and deviations from linearity of ±1.5%. In robot position control, a maximum error of ±9 μm over a ±0.5-mm span was obtained. In the two last cases, bridges without flux guides are used to maximize linear response.
---
paper_title: Direct observation of the alignment of ferromagnetic spins by antiferromagnetic spins
paper_content:
The arrangement of spins at interfaces in a layered magnetic material often has an important effect on the properties of the material. One example of this is the directional coupling between the spins in an antiferromagnet and those in an adjacent ferromagnet, an effect first discovered in 1956 and referred to as exchange bias. Because of its technological importance for the development of advanced devices such as magnetic read heads and magnetic memory cells, this phenomenon has received much attention. Despite extensive studies, however, exchange bias is still poorly understood, largely due to the lack of techniques capable of providing detailed information about the arrangement of magnetic moments near interfaces. Here we present polarization-dependent X-ray magnetic dichroism spectro-microscopy that reveals the micromagnetic structure on both sides of a ferromagnetic-antiferromagnetic interface. Images of thin ferromagnetic Co films grown on antiferromagnetic LaFeO3 show a direct link between the arrangement of spins in each material. Remanent hysteresis loops, recorded for individual ferromagnetic domains, show a local exchange bias. Our results imply that the alignment of the ferromagnetic spins is determined, domain by domain, by the spin directions in the underlying antiferromagnetic layer.
---
paper_title: Exchange-biased spin-valves for magnetic storage
paper_content:
An overview is given of the material properties of exchange-biased spin-valves. More specifically, we discuss the microstructure and magnetic properties of these materials as relevant for (future) industrial application in magnetoresistive read heads for rigid disk and tape recording and Magnetic Random Access Memory (MRAM) devices.
---
paper_title: Magnetic field sensors using GMR multilayer
paper_content:
Wheatstone bridge magnetic field sensors using giant magnetoresistive ratio (GMR) multilayers were designed, fabricated, and evaluated. The GMR ranged from 10% to 20% with saturation fields of 60 Oe to 300 Oe. The multilayer resistances decreased linearly with magnetic field and showed little hysteresis. In one sensor configuration, a permanent magnet bias was placed between two pairs of magnetoresistors, each pair representing opposite legs of the bridge. This sensor gave a bipolar bridge output whose output range was approximately GMR times the bridge source voltage. The second sensor configuration used shielding on one resistor pair, and it gave a bridge output dependent on the magnetic field magnitude, but not polarity, and the output range was approximately one half GMR times the bridge source voltage. Field amplifications of 3 to 6 were accomplished by creating a gap in a low reluctance magnetic path, thus providing the full range of outputs with 1/3 to 1/6 of the intrinsic saturation fields of the GMR multilayers.
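The output-range relations quoted in this abstract can be summarized in a small sketch (the idealized linear model and function names below are illustrative assumptions, not the authors' design equations):

    def bridge_output_range(gmr_ratio, v_source, shielded=False):
        # ~GMR*Vs for the magnet-biased bridge, ~GMR*Vs/2 for the shielded-pair bridge
        return gmr_ratio * v_source * (0.5 if shielded else 1.0)

    def required_external_field(saturation_field, flux_amplification):
        # A gapped low-reluctance path amplifies the field seen by the sensor,
        # so a proportionally smaller external field reaches saturation.
        return saturation_field / flux_amplification

    print(bridge_output_range(0.15, 5.0))        # ~0.75 V span for GMR = 15%, 5 V supply
    print(required_external_field(300.0, 6.0))   # ~50 Oe external field instead of 300 Oe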
---
paper_title: A new GMR sensor based on NiFe/Ag multilayers
paper_content:
This work describes the fabrication of a giant magnetoresistive field sensor based on NiFe/Ag multilayers. The stacking of the 21 bilayers of NiFe and Ag deposited at liquid nitrogen temperature on a (100) Si substrate has a surface roughness as low as 0.3 nm, which contributes to the good magnetoresistive properties of the material. Owing to a magnetoresistance ratio ΔR/R of 12% under a saturation field of 160 Oe (12.7 kA/m) and a good thermal stability after an annealing at 180°C, this material has been integrated into sensors through microelectronics processing. The particular design of the Wheatstone bridge which constitutes the sensor allows to bias the four active magnetoresistors with only two small magnets stuck along the bridge arms. Such a sensor presents a linearity better than ±2% over a field range of ±35 Oe (±2.8 kA/m) for temperatures ranging from room temperature to 100°C. Hysteresis was evaluated to be smaller than 1 Oe. As a consequence of these good performances without any signal conditioning, this sensor may be considered for applications requiring low fabrication costs such as automotive ones.
---
paper_title: Robust giant magnetoresistance sensors
paper_content:
Abstract The giant magnetoresistance (GMR) effect offers interesting new possibilities for sensor applications. A short overview is given of the GMR effect in relation to its application in (automotive and industrial) field sensors. In the past the thermal and magnetic stability could not fulfil the requirements for use in automotive and industrial environments. Recently, a new, robust GMR material system has been developed that can withstand high temperatures (>200°C) and large magnetic fields (>200 kA/m). Using this material, GMR sensor elements have been fabricated and measured. Moreover, preliminary measurements on the first robust GMR sensor with a full Wheatstone-bridge configuration will be presented.
---
paper_title: GIANT MAGNETORESISTANCE IN MAGNETIC NANOSTRUCTURES
paper_content:
This chapter contains a brief review of the giant magnetoresistance (GMR) effect exhibited by magnetic multilayers, granular alloys, and related materials. Subjects covered include a description of the phenomenon, and the related oscillatory interlayer exchange coupling in magnetic multilayers; a simple model of giant magnetoresistance; the inverse GMR effect in spin-engineered magnetic multilayers; structures that display large changes in resistance in small magnetic fields, possibly for use in magnetic field sensors; and the dependence of GMR on various aspects of the magnetic structures.
---
paper_title: Optimized Eddy Current Detection of Small Cracks in Steam Generator Tubing
paper_content:
A complete, computer based design methodology is described, aiming to develop an eddy current sensor with increased sensitivity to flaws, and reduced sensitivity to probe lift-off. The first part of the paper contains an analysis performed in order to establish detailed criteria for an effective design. Numerical investigations have been carried out and their results are discussed, regarding various problems of detectability and lift-off noise level. Based on these results, in the second part two probe arrangements are proposed, and it is shown how their performance parameters could be further improved.
---
paper_title: Recent trends in electromagnetic non-destructive sensing
paper_content:
The paper deals with material electromagnetic non-destructive testing (eNDT) with emphasis on eddy current testing (ECT). Various modifications of ECT sensing are compared and discussed from the point of view of the desired detected-signal characteristics. In addition to the optimization of the usual probe coil arrangements for concrete applications, new magnetic sensors such as giant magneto-resistance (GMR) and spin-dependent tunneling (SDT) sensors are presented. The advanced ECT sensors are characterized by their sensitivity, frequency range and sensor dimensions.
---
paper_title: Bobbin-Type Solid-State Hall Sensor Array With High Spatial Resolution for Cracks Inspection in Small-Bore Piping Systems
paper_content:
Bobbin coil and bobbin-type Hall sensor arrays are proposed as an alternative for crack inspection inside a small-bore piping system. The cracks can be imaged at high speed without using a scanner since the electromagnetic (EM) field is distorted by the cracks. An array of 32 × 32 Hall sensors with 0.78 mm spatial resolution was set in a cylinder with diameter of 15 mm and length of 24.96 mm. A bobbin coil operating at 5 kHz of alternating current was positioned inside of a piping system and the sensor array outside the cylinder. Distorted EM fields around outside diameter stress corrosion cracking was imaged at 1 frame/s.
---
paper_title: A measurement system based on magnetic sensors for nondestructive testing
paper_content:
The paper deals with a measurement system based on a low-cost eddy current probe for nondestructive testing of conducting materials, aimed at reconstructing the shape and position of thin cracks. The magnetic probe is characterized, highlighting good repeatability, linearity, and overall accuracy. A number of different measurement approaches are investigated in order to choose the most appropriate for NDT applications. A numerical method is then illustrated; it proves able to reconstruct cracks even when starting from noisy measurement data.
---
paper_title: Eddy current probes of inclined coils for increased detectability of circumferential cracks in tubing
paper_content:
Abstract Conventional bobbin probes, multi-pancake and/or rotating pancake probes, and transmit-receive probes for eddy current tests (ECTs) are currently used to test metal tubing. Each method has its respective strengths and weaknesses considering their characteristics such as test speed, flaw detection sensitivity, and probe structure complexity. This paper proposes a novel eddy current probe with new features. The structure is designed to be sensitive to circumferential cracks, which are not easily detected using conventional bobbin coil probes, as well as longitudinal cracks. The directions of the eddy current around the coils were designed to be not circumferential. The ECT signals of these probes were acquired and analyzed from the artificial defects manufactured for this study. The experimental results show that the proposed probes are more sensitive to circumferential defects than the comparable conventional bobbin probes. In addition, the proposed probes are also sensitive to axial defects. By employing both the new probes and the conventional bobbin probes, ECTs for the metal tubing can be performed more reliably.
---
paper_title: Rotating field eddy current (RoFEC)-probe for steam generator inspection
paper_content:
A novel design of eddy current probe based on rotating magnetic fields is presented for the inspection of steam generator tubes in nuclear power plants. A major advantage of the rotating field probe is that it offers the same functionality as that of a rotating probe coil without the need for mechanical rotation, which in turn translates into higher operating speed. The probe design is also sensitive to cracks of all orientations in the tube wall.
---
paper_title: Techniques for processing remote field eddy current signals from bend regions of steam generator tubes of prototype fast breeder reactor
paper_content:
Abstract Steam generator (SG) is one of the most critical components of sodium cooled fast breeder reactor. Remote field eddy current (RFEC) technique has been chosen for in-service inspection (ISI) of these ferromagnetic SG tubes made of modified 9Cr–1Mo steel (Grade 91). Expansion bends are provided in the SGs to accommodate differential thermal expansion. During ISI using RFEC technique, in expansion bend regions, exciter–receiver coil misalignment, bending stresses, probe wobble and magnetic permeability variations produce disturbing noise hindering detection of defects. Fourier filtering, cross-correlation and wavelet transform techniques have been studied for noise reduction as well as enhancement of RFEC signals of defects in bend regions, having machined grooves and localized defects. Performance of these three techniques has been compared using signal-to-noise ratio (SNR). Fourier filtering technique has shown better performance for noise reduction while cross-correlation technique has resulted in significant enhancement of signals. Wavelet transform technique has shown the combined capability of noise reduction and signal enhancement and resulted in unambiguous detection of 10% of wall loss grooves and localized defects in the bend regions with SNR better than 7 dB.
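Two of the signal-processing routes compared in this paper can be sketched as follows (a minimal illustration on an assumed one-dimensional RFEC trace; the cutoff fraction, reference signature and helper names are assumptions):

    import numpy as np
    from scipy.signal import correlate

    def fourier_lowpass(x, keep_fraction=0.1):
        # Fourier filtering: zero the high-frequency coefficients to suppress noise.
        X = np.fft.rfft(x)
        X[int(len(X) * keep_fraction):] = 0.0
        return np.fft.irfft(X, n=len(x))

    def correlate_with_reference(x, reference):
        # Cross-correlation: enhance indications matching a known defect signature.
        ref = (reference - reference.mean()) / (np.linalg.norm(reference) + 1e-12)
        return correlate(x - x.mean(), ref, mode="same")

    def snr_db(signal_peak, noise_segment):
        return 20.0 * np.log10(abs(signal_peak) / (np.std(noise_segment) + 1e-12))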
---
paper_title: Steam generator tube integrity program
paper_content:
Abstract The degradation of steam generator tubes in pressurized water nuclear reactors continues to be a serious problem, and the US Nuclear Regulatory Commission (NRC) is developing a performance-based rule and regulatory guide for steam generator tube integrity. To support the evaluation of industry-proposed implementation of these performance-based criteria, the NRC is sponsoring a new research program at Argonne National Laboratory on steam generator tubing degradation. The objective of the new program is to provide the necessary experimental data and predictive correlations and models that will permit the NRC to independently evaluate the integrity of steam generator tubes. The technical work in the program is divided into four tasks: (1) assessment of inspection reliability, (2) research on in-service inspection technology, (3) research on degradation modes and integrity, (4) development of methodology and technical assessments for current and emerging regulatory issues. The objectives of and planned research activities under each of these four tasks are described here.
---
paper_title: Fatigue Crack Length Sizing Using a Novel Flexible Eddy Current Sensor Array
paper_content:
The eddy current probe, which is flexible, array typed, highly sensitive and capable of quantitative inspection is one practical requirement in nondestructive testing and also a research hotspot. A novel flexible planar eddy current sensor array for the inspection of microcrack presentation in critical parts of airplanes is developed in this paper. Both exciting and sensing coils are etched on polyimide films using a flexible printed circuit board technique, thus conforming the sensor to complex geometric structures. In order to serve the needs of condition-based maintenance (CBM), the proposed sensor array is comprised of 64 elements. Its spatial resolution is only 0.8 mm, and it is not only sensitive to shallow microcracks, but also capable of sizing the length of fatigue cracks. The details and advantages of our sensor design are introduced. The working principal and the crack responses are analyzed by finite element simulation, with which a crack length sizing algorithm is proposed. Experiments based on standard specimens are implemented to verify the validity of our simulation and the efficiency of the crack length sizing algorithm. Experimental results show that the sensor array is sensitive to microcracks, and is capable of crack length sizing with an accuracy within ±0.2 mm.
---
paper_title: Inverse Problem in Nondestructive Testing Using Arrayed Eddy Current Sensors
paper_content:
A fast crack profile reconstitution model in nondestructive testing is developed using an arrayed eddy current sensor. The inverse problem is based on an iterative solving of the direct problem using genetic algorithms. In the direct problem, assuming a current excitation, the incident field produced by all the coils of the arrayed sensor is obtained by the translation and superposition of the 2D axisymmetric finite element results obtained for one coil; the impedance variation of each coil, due to the crack, is obtained by the reciprocity principle involving the dyadic Green’s function. For the inverse problem, the surface of the crack is subdivided into rectangular cells, and the objective function is expressed only in terms of the depth of each cell. The evaluation of the dyadic Green’s function matrix is made independently of the iterative procedure, making the inversion very fast.
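The inversion strategy described here can be illustrated with a schematic genetic-algorithm loop (a sketch only: the forward model below is a placeholder for the FEM/dyadic-Green's-function impedance computation used by the authors, and all settings are assumptions):

    import numpy as np

    rng = np.random.default_rng(0)
    N_CELLS, POP, GENERATIONS, MAX_DEPTH = 8, 40, 100, 2.0   # illustrative settings

    def forward_model(depths):
        # Placeholder for the real eddy-current model mapping cell depths to
        # predicted coil impedance variations of the arrayed sensor.
        return np.cumsum(depths)

    def fitness(depths, measured):
        return -np.sum((forward_model(depths) - measured) ** 2)

    def ga_invert(measured):
        pop = rng.uniform(0.0, MAX_DEPTH, size=(POP, N_CELLS))
        for _ in range(GENERATIONS):
            scores = np.array([fitness(ind, measured) for ind in pop])
            parents = pop[np.argsort(scores)[-POP // 2:]]              # keep fittest half
            idx_a = rng.integers(0, len(parents), POP - len(parents))
            idx_b = rng.integers(0, len(parents), POP - len(parents))
            children = 0.5 * (parents[idx_a] + parents[idx_b])         # arithmetic crossover
            children += rng.normal(0.0, 0.05 * MAX_DEPTH, children.shape)  # mutation
            pop = np.vstack([parents, np.clip(children, 0.0, MAX_DEPTH)])
        scores = np.array([fitness(ind, measured) for ind in pop])
        return pop[int(np.argmax(scores))]                             # best depth profile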
---
paper_title: Nondestructive evaluation of a crack on austenitic stainless steel using a sheet type induced current and a Hall sensor array
paper_content:
Austenitic stainless steels (hereafter A-STS) such as STS304 and STS316 are paramagnetic metals. However, a small amount of partial magnetization is generated in A-STS because of the imperfect final heat treatment and mechanical processing. Surface cracks on paramagnetic metal with a partially magnetized region (hereafter PMR) are difficult to inspect. In this paper, we propose a method for high speed inspection and evaluation of a crack on A-STS. Cracks can be inspected with high speed by using 64 arrayed Hall sensors (HSA) with 3.5 mm spatial resolution and a sheet type induced current (STIC). Then, a crack can be evaluated quantitatively by using the detailed distribution of the magnetic field obtained by using single Hall sensor scanning (SSS) around the inspected crack area. Several cracks on A-STS with partially magnetized areas were examined and the experimental formulas were derived.
---
paper_title: Impedance analyzing for planar eddy current probe array
paper_content:
It is important to measure the thickness of slag dynamically in the process of steelmaking in order to produce high quality steel, and eddy current testing has many advantages for this measurement; the system design is based on analyzing the impedance of the sensors. In this paper, we take the example of measuring the thickness distribution of the liquid slag layer in a mold in the metallurgical industry. The impedance properties are worked out by computing the electromagnetic field distribution of a three-dimensional physical model using the finite element method, and an unconventional probe array is adopted in the model.
---
paper_title: Artificially intelligent 3D industrial inspection system for metal inspection
paper_content:
Industrial inspection systems have been in use for some time now. However, to date these systems have been built specifically for the application in which they will function. This has led to such systems becoming obsolete if the manufacturing process changes. Such systems also relied on the programmer's competence in selecting appropriate algorithms to carry out the tasks of image processing and segmentation. This paper presents a system that is adaptable for many inspection tasks and generic in nature. It selects algorithms automatically depending on the task at hand and the domain knowledge given.
---
paper_title: Non-Destructive Testing with Magnetic Sensor Using Rotational Magnetic Flux
paper_content:
This paper presents a new nondestructive testing (NDT) method which utilizes rotational magnetic flux. In conventional eddy current NDT, the magnetic sensor is driven by an ac or dc current. Consequently, the magnetic field generated in the magnetic material is aligned in a single direction. However, in order to estimate the shape or position of an unknown defect, a two-dimensional alternating magnetic flux density vector is necessary. An NDT system utilizing rotational magnetic flux is proposed. In this system, the magnitude and phase value are measured and used to obtain information about the defect.
---
paper_title: Rotational Magnetic Sensor With Neural Network For Non-destructive Testing
paper_content:
A new non-destructive testing (NDT) method which utilizes rotational magnetic flux is presented. In this system, the magnitude and phase value are measured and used to obtain information about defects. These values include the information about the shape or position of an unknown defect. The neural network technique is employed for estimation of a defect shape. The experimental results show the validity of the authors' method.
---
paper_title: The role of leak-before-break in assessments of flaws detected in CANDU pressure tubes
paper_content:
Abstract This paper reviews the role of the Leak-Before-Break (LBB) concept in the Fitness for Service Guidelines being developed for cold worked (cw) Zr-2·5Nb pressure tubes in a CANDU reactor. The guidelines complement the rules of Section XI of the ASME Code and the requirements of Canadian Standards Association (CSA) CAN 3-N285.4-M83. The evaluation procedures in the guidelines consist of a flaw growth analysis to determine the maximum size of the flaw at the end of the evaluation period. It must then be demonstrated that the flaw is stable with adequate margins of safety for the various loading conditions. For the delayed hydride cracking failure mode LBB is used as defense in depth against unstable rupture. First the flaw must be shown to be non-susceptible to propagation by delayed hydride cracking during normal operating conditions. In addition, it must then be demonstrated that, if the flaw were to penetrate the tube wall, the leaking coolant would be detected and the reactor shutdown before the postulated crack became unstable. The Guidelines contain criteria for performing both deterministic and probabilistic LBB analyses.
---
paper_title: Remote field eddy current technique applied to non-magnetic steam generator tubes
paper_content:
Unlike the impedance plane analysis form of common eddy current testing (ECT), the remote field eddy current (RFEC) technique is a through-transmission effect that reduces problems such as lift-off normally associated with ECT. In the inspection of steam generator (SG) tubes, the real issue is to detect the minute cracks growing up from the outside. However, using ECT, it is considered infeasible to accurately find them from the inside because of the limitations of penetration of eddy currents. This paper describes a finite-element approach to the solution of time-harmonic electromagnetic fields for the RFEC technique based on a magnetic vector potential and an electric scalar potential. A comparison is made of experimental and finite-element predictions of electromagnetic phenomena under the inspection of non-magnetic tubes. For the cracks outside demanding high sensitive and precise measurements in the SG tube inspection, numerical results are given for parameters to design a RFEC probe.
---
paper_title: Giant magnetoresistance in electrodeposited superlattices
paper_content:
We have observed "giant magnetoresistance" in short-period Cu/Co-Ni-Cu alloy superlattices electrodeposited from a single electrolyte under potentiostatic control. The superlattices were grown on polycrystalline Cu substrates which were removed before transport measurements were made. Room-temperature magnetoresistances of over 15% in applied magnetic fields of up to 8 kOe were observed in superlattices having Cu layer thicknesses of less than 10 Å.
---
paper_title: Effect of annealing on the structural and magnetic properties of giant magnetostrictive multilayers
paper_content:
Abstract Exchange-coupled multilayered thin films which combine giant magnetostriction and soft magnetic properties are of growing interest for applications. TbFe2/Fe and TbFe2/FeCo nanometric multilayers were grown by RF-sputtering onto glass substrates. A bias field of 250 Oe was applied along the longitudinal axis, inducing an in-plane uniaxial magnetic anisotropy. For as-deposited multilayers, the TbFe layers are amorphous while the soft layers are nanocrystallized. The effect of annealing was investigated by 57Fe Mössbauer spectrometry and magnetic and magnetostriction measurements in the annealing temperature (T_ann) range from 150 to 350 °C. After annealing, the Mössbauer analysis shows evidence of an increase of the crystallized iron component, probably due to a thinning of the interfaces. A maximum of the magnetoelastic coefficient is found for T_ann around 250 °C, with the highest value obtained in the case of TbFe/FeCo. Furthermore, stress relaxation within the film reduces the anisotropy field, leading to a large magnetoelastic susceptibility of about 250 MPa/kOe.
---
paper_title: Annealing effects on GMR multilayer films
paper_content:
Abstract The annealing effects on the GMR of electron-beam evaporated [NiFe/Cu]BL multilayer films (10, 12 and 14 bilayers) were studied. The as-deposited multilayer films give a GMR smaller than 0.6%, which can be enhanced by further annealing. The multilayer films were annealed in a vacuum at 300°C for 2.5 h at a pressure of the order of 10⁻⁷ mbar, and a GMR of the order of ∼1% was achieved. However, some samples were annealed in flowing argon at the same temperature and for the same period of time, giving an improved GMR change of up to ∼4–4.5%, similar to that presented by Smith et al. [N. Smith, A.M. Zeltser, M.R. Parker, GMR Multilayers and Head Design for Ultrahigh Density Magnetic Recording, IEEE Trans. Magn. 32 (1996) 135]. The oxidisation of the surface of the films is thought to cause this critical difference. The period of annealing time also affects the GMR, which increases dramatically as the film is annealed for up to 1 h and tends to be constant at longer times up to 2.5 h. Application of a forming field during annealing is found to induce uniaxial anisotropy.
---
paper_title: Design and fabrication of GMR multilayers with enhanced thermal stability
paper_content:
Abstract Enhanced thermal stability of NiFeCo/Cu multilayers has been observed for multilayer stacks prepared with ‘overdesigned’ magnetic layers. Multilayers such as these show almost no giant magnetoresistance (GMR) in the as-deposited state, but exhibit huge improvements upon annealing. This methodology represents a considerable improvement over conventional methods.
---
paper_title: The Lift-Off Effect in Eddy Currents on Thickness Modeling and Measurement
paper_content:
This paper uses a linear transformer model to investigate the effect of the lift-off on the results of the thickness measurement of nonferromagnetic metallic plates. The transformer model predicts that the time derivatives of the magnetization curves obtained for different gaps between the excitation coil and the plate should intersect at a single point when low magnetic coupling factors are considered. To assess the validity of the model, results are compared with experimental data obtained with a giant magnetoresistive sensor probe. For comparison, the time derivative of the sensor output voltage must be computed as well. The similarity of the theoretical model results and those obtained experimentally with pulsed excitation confirms the correctness of the transformer approach.
---
paper_title: Output signal prediction of an open-ended rectangular waveguide probe when scanning cracks at a non-zero lift-off
paper_content:
Abstract The paper proposes a modeling technique for output signal prediction of a rectangular waveguide probe with finite flange when scanning a surface long crack in a metal at a finite lift-off. The modeling technique approximates the crack–probe interaction with a two-dimensional problem. In this problem, a parallel-plate waveguide with finite flange scans a long crack in a perfect conductor. The method of moments is employed to solve the governing electric field integral equation. The solution provides the reflection coefficient in the parallel-plate waveguide from which the probe output signal is obtained. The main feature of the model is solving the three-dimensional problem in a two-dimensional framework, thus reducing the degree of complexity and computation time. To validate the accuracy of the model, several simulation results are presented at an operating frequency in the X-band and are compared with their experimental counterparts. To demonstrate the efficiency of the model, we compare our results with those obtained using the well-known HP-HFSS finite element code. It is shown that the proposed model requires less than half of the time taken to solve the same problem running on the same computer.
---
paper_title: Nondestructive Inspection Using Rotating Magnetic Field Eddy-Current Probe
paper_content:
Rotating magnetic field eddy-current (RoFEC) probe for nondestructive evaluation of steam generator tubes in a nuclear power plant offers an alternate method that has compact configuration and higher speed compared to traditional bobbin coil, rotating probe coils, and array probes. This paper investigates the feasibility of the proposed RoFEC eddy-current probe which is composed of three windings excited by three-phase ac current and does not require mechanical rotation of probe. Results of finite-element modeling using reduced magnetic vector potential (RMVP) formulation are presented for modeling the inspection of ferromagnetic and nonferromagnetic tubes. Design parameters of the excitation coils and GMR pick-up sensor are optimized by means of a parametric study.
---
paper_title: The Pulsed Eddy Current Differential Probe to Detect a Thickness Variation in an Insulated Stainless Steel
paper_content:
Non-destructive testing (NDT) plays an important role in the safety and integrity of large industrial structures such as pipelines in nuclear power plants (NPPs). Pulsed eddy current (PEC) is an electromagnetic NDT approach developed principally for the detection of surface and subsurface flaws. In this study, a differential probe for the PEC system has been fabricated to detect wall thinning in insulated steel pipelines. The differential probe contains an excitation coil with two Hall sensors. A stainless steel test sample was prepared with a thickness that varied from 1 mm to 5 mm and was laminated with plastic insulation of uniform thickness to represent the insulated pipelines in NPPs. The excitation coil in the probe is driven by a rectangular current pulse, and the resultant PEC response, which is the difference of the two Hall-sensor outputs, is detected. The discriminating features of the detected pulse, the peak value and the time to zero, were used to describe the wall thinning in the tested sample. A signal processing technique, power spectral density (PSD), is employed to interpret the PEC response. The results show that the differential PEC probe has the potential to detect wall thinning in insulated pipelines of nuclear power plants (NPPs).
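The two time-domain features named in this abstract, together with a power spectral density estimate, can be extracted as in the following sketch (sampling rate, array names and the Welch settings are assumptions):

    import numpy as np
    from scipy.signal import welch

    def pec_features(diff_signal, fs):
        # diff_signal: difference of the two Hall-sensor outputs, sampled at fs [Hz].
        diff_signal = np.asarray(diff_signal, dtype=float)
        k_peak = int(np.argmax(np.abs(diff_signal)))
        peak_value = diff_signal[k_peak]
        after = diff_signal[k_peak:]
        crossings = np.where(np.signbit(after[:-1]) != np.signbit(after[1:]))[0]
        time_to_zero = (k_peak + crossings[0]) / fs if crossings.size else float("nan")
        freqs, psd = welch(diff_signal, fs=fs, nperseg=min(256, len(diff_signal)))
        return peak_value, time_to_zero, freqs, psd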
---
paper_title: Measurement and Instrumentation: Theory and Application
paper_content:
"Measurement and Instrumentation" introduces undergraduate engineering students to the measurement principles and the range of sensors and instruments that are used for measuring physical variables. Based on Morris' "Measurement and Instrumentation Principles", this brand new text has been fully updated with coverage of the latest developments in such measurement technologies as smart sensors, intelligent instruments, microsensors, digital recorders and displays and interfaces. Clearly and comprehensively written, this textbook provides students with the knowledge and tools, including examples in LABVIEW, to design and build measurement systems for virtually any engineering application. The text features chapters on data acquisition and signal processing with LabVIEW from Dr. Reza Langari, Professor of Mechanical Engineering at Texas A&M University. Early coverage of measurement system design provides students with a better framework for understanding the importance of studying measurement and instrumentation. It includes significant material on data acquisition, coverage of sampling theory and linkage to acquisition/processing software, providing students with a more modern approach to the subject matter, in line with actual data acquisition and instrumentation techniques now used in industry. Extensive coverage of uncertainty (inaccuracy) aids students' ability to determine the precision of instruments. Integrated use of LabVIEW examples and problems enhances students' ability to understand and retain content.
---
paper_title: Excitation current waveform for eddy current testing on the thickness of ferromagnetic plates
paper_content:
Abstract Advantages of pulsed excitation current over conventional harmonic excitation for measuring the thickness of a ferromagnetic plate are studied. Compared with the sinusoidal voltage induced by harmonic excitation current, the time-domain voltage induced by pulsed current is highly more sensitive to the thickness. Quantitative proof of this conclusion is provided by solving the normalized derivatives with respect to the thickness. Furthermore, the effects of the time constant of pulsed current on the measuring sensitivity are examined, and an optimal range of the time constant is proposed. Finally, the theoretical model is verified by experimental results.
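One common way to express the normalized sensitivity referred to here is the relative derivative of the induced voltage $V$ with respect to the plate thickness $d$ (a sketch; the paper's exact definition may differ):

    \[
      S(d,t) \;=\; \frac{1}{V(d,t)}\,\frac{\partial V(d,t)}{\partial d},
    \]

so that pulsed excitation is advantageous wherever the magnitude of $S$ for the time-domain pulsed response exceeds that of the sinusoidal (harmonic) response.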
---
paper_title: Detection of the Subsurface Cracks in a Stainless Steel Plate Using Pulsed Eddy Current
paper_content:
Nondestructive methods to detect subsurface defects are limited because conventional eddy currents are concentrated near the surface adjacent to the excitation coil. The PEC technique enables detection of cracks buried deeper under the surface with relatively small current density. In the present study, an attempt has been made to investigate the detection of subsurface cracks using a specially designed double-D differential probe. The tested sample is SS304 with a thickness of 5 mm; small EDM notches were machined in the test sample at different depths from the surface to simulate subsurface cracks in a pipe. The designed PEC probe has two excitation coils and two detecting Hall sensors. The difference between the two sensors is the resultant PEC signal. The cracks under the surface were detected using the peak amplitude of the detected pulse; in addition, for a clearer understanding of the crack depth, the Fourier transform is applied. In the time domain, the peak amplitude of the detected pulse decreases, and in the frequency domain, the magnitude of the lower frequency component increases, with an increase in the crack depth. The experimental results indicate that the proposed differential probe has the potential to detect subsurface cracks in a stainless steel structure.
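The two depth indicators reported here can be computed as in the following sketch (sampling rate and the choice of low-frequency bin are illustrative assumptions):

    import numpy as np

    def depth_indicators(pulse, fs, low_freq_hz=100.0):
        pulse = np.asarray(pulse, dtype=float)
        peak_amplitude = np.max(np.abs(pulse))                # decreases with crack depth
        spectrum = np.abs(np.fft.rfft(pulse))
        freqs = np.fft.rfftfreq(len(pulse), d=1.0 / fs)
        low_freq_mag = spectrum[int(np.argmin(np.abs(freqs - low_freq_hz)))]  # grows with depth
        return peak_amplitude, low_freq_mag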
---
paper_title: A Novel High Sensitivity Sensor for Remote Field Eddy Current Non-Destructive Testing Based on Orthogonal Magnetic Field
paper_content:
Remote field eddy current is an effective non-destructive testing method for ferromagnetic tubular structures. In view of conventional sensors' disadvantages such as low signal-to-noise ratio and poor sensitivity to axial cracks, a novel high sensitivity sensor based on orthogonal magnetic field excitation is proposed. Firstly, through a three-dimensional finite element simulation, the remote field effect under orthogonal magnetic field excitation is determined, and an appropriate configuration which can generate an orthogonal magnetic field for a tubular structure is developed. Secondly, optimized selection of key parameters such as frequency, exciting currents and shielding modes is analyzed in detail, and different types of pick-up coils, including a new self-differential mode pick-up coil, are designed and analyzed. Lastly, the proposed sensor is verified experimentally by various types of defects manufactured on a section of a ferromagnetic tube. Experimental results show that the proposed novel sensor can largely improve the sensitivity of defect detection, especially for axial crack whose depth is less than 40% wall thickness, which are very difficult to detect and identify by conventional sensors. Another noteworthy advantage of the proposed sensor is that it has almost equal sensitivity to various types of defects, when a self-differential mode pick-up coil is adopted.
---
paper_title: Noncontact Characterization of Carbon-Fiber-Reinforced Plastics Using Multifrequency Eddy Current Sensors
paper_content:
The characterization of carbon-fiber-reinforced plastics (CFRPs) using multifrequency eddy current sensors is presented in this paper. Three sensors are designed for bulk conductivity measurements, directionality characterization, and fault detection and imaging of unidirection, cross-ply, and impact-damaged CFRP samples. Analytical and finite-element (FE) models describing the interaction of the sensors with the CFRP plate samples are developed to provide an explanation of, and physical insights into, the measured results and observed phenomena. A signal processing method is developed to compensate for the variation in lift-off during the measurements.
---
paper_title: Studies to optimize the probe response for velocity induced eddy current testing in aluminium
paper_content:
Abstract Detection and localization of surface and near-surface defects in metallic objects using faster and simpler methods is always a matter of great interest in non-destructive testing (NDT). Early defect detection is of utmost importance to maintain the safety of the structure and to reduce maintenance costs. This work proposes an NDT method based on velocity-induced eddy currents to detect surface defects in electrically conductive metals. The approach is original in that the resultant magnetic field generated by the eddy currents induced in the test material by the motion of a permanent magnet is measured in order to detect defects. For this purpose a new kind of moving magnetic probe was designed and fabricated. Each probe consists of permanent magnets which, due to the movement, induce eddy currents in the sample, and a Hall effect sensor able to measure the resultant magnetic field. The total magnetic field carries the information about the perturbation of the induced currents produced by the defect. Commercial simulation software was used for the optimization and design of the probe. In order to test the performance and feasibility of the proposed method, several experiments were performed on an aluminium plate specimen having linear defects machined with different orientations and depths. The results were obtained by scanning the probe over the test specimen at a constant speed. Experimental results confirm that the proposed method with the proposed sensing solution can be an NDT tool to detect defects in electrically conductive materials where motion is involved, for example in the inspection of railroads.
---
paper_title: GMR array uniform eddy current probe for defect detection in conductive specimens
paper_content:
Abstract The use of eddy current probes (ECP) with a single magnetic field sensor is a common solution for defect detection in conductive specimens, but it is a time-consuming procedure that requires a huge number of scanning steps when large-surface specimens are to be inspected. In order to speed up the nondestructive testing procedure, eddy current probes including a single excitation coil and an array of sensing coils are a good solution. The solution investigated in this paper replaces the sensing coils with giant magneto-resistors (GMRs), due to their high sensitivity and broadband frequency response. Thus, the ECP excitation coil can be driven at lower frequencies than the traditional ones, allowing defects to be detected in thicker structures. In this work an optimized uniform eddy current probe architecture including two planar excitation coils, a rectangular magnetic field biasing coil and a GMR magnetometer sensor array is presented. An AC current is applied to the planar spiral rectangular coil of the probe, while a set of GMR magnetometer sensors detects the induced magnetic field in the specimens under test. The rectangular coil provides a uniform DC magnetic field, ensuring appropriate biasing of the GMR magnetometers of the probe by setting the operating point in the linear region and on the same branch of the GMR static characteristics. The differences between the images obtained for the same specimen by each GMR are reduced if all sensors are biased at the same working point. Elements of the automated measurement system used to inspect the plate under test with the proposed eddy current probe, including a validation procedure based on a 2D template matching algorithm, and the corresponding experimental results are included in the paper.
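A 2D template matching step of the kind used for validation here can be sketched with zero-mean normalized cross-correlation over the GMR C-scan image (not the authors' code; array names are illustrative):

    import numpy as np

    def zncc_map(image, template):
        # Slide the template over the image; values near 1 mark likely defect locations.
        th, tw = template.shape
        t = template - template.mean()
        t_norm = np.linalg.norm(t) + 1e-12
        out = np.zeros((image.shape[0] - th + 1, image.shape[1] - tw + 1))
        for i in range(out.shape[0]):
            for j in range(out.shape[1]):
                patch = image[i:i + th, j:j + tw]
                p = patch - patch.mean()
                out[i, j] = float(np.sum(p * t)) / (np.linalg.norm(p) * t_norm + 1e-12)
        return out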
---
paper_title: Giant magnetoresistance-based eddy-current sensor
paper_content:
The purpose of this paper is to introduce a new eddy-current testing technique for surface or near-surface defect detection in nonmagnetic metals using giant magnetoresistive (GMR) sensors. It is shown that GMR-based eddy-current probes are able to accurately detect short surface-breaking cracks in conductive materials. The self-rectifying property of the GMR sensor used in this study leads to a simplified signal conditioning circuit, which can be fully integrated on a silicon chip with the GMR sensor. The ability to manufacture probes having small dimensions and high sensitivity (220 mV/mT) to low magnetic fields over a broad frequency range (from dc up to 1 MHz) enhances the spatial resolution of such an eddy-current testing probe. Experimental results obtained by scanning two different probes over a slotted aluminum specimen are presented. General performance characteristics are demonstrated by measurements of surface and subsurface defects of different sizes and geometries. Dependence of the sensor output on orientation, liftoff distance, and excitation intensity is also investigated.
---
paper_title: Eddy Current Technique Based on ${\rm HT}_{\rm c}$-SQUID and GMR Sensors for Non-Destructive Evaluation of Fiber/Metal Laminates
paper_content:
In this work we present non-destructive evaluation measurements on fiber/metal laminate specimen by using eddy current techniques employing HTc SQUID (superconductive quantum interference device) and giant magneto-resistive (GMR) sensors. Our aim is to compare the performance and the capability of HTc SQUID and GMR sensors to detect the presence of damage inside FML composite materials. Experimental results concerning the detection of artificial defects in aeronautical structures with high magnetic sensitivity by using HTc SQUID, and with high spatial resolution using GMR, will be presented and discussed.
---
paper_title: Time-domain analytical solutions to pulsed eddy current field excited by a probe coil outside a conducting ferromagnetic pipe
paper_content:
Abstract Using the second-order vector potential formalism and block matrix, the non-axisymmetric eddy current field induced by a probe coil positioned perpendicularly outside a conducting ferromagnetic pipe is solved analytically. Then, the time-domain expressions of induced voltage and eddy current density in the pipe are obtained through the Laplace inverse transformation, which is carried out by calculating the residues of poles. Furthermore, the diffusion process of pulsed eddy current in the pipe is examined. Finally, the analytical solutions are verified through the experiment results of two steel pipes with different wall thickness.
---
paper_title: Analysis of the Liftoff Effect of Phase Spectra for Eddy Current Sensors
paper_content:
This paper presents an analytical model that describes the inductance change when a double air-cored coil sensor is placed next to a conducting plate. Analysis of the analytical model reveals that the phase signature of such a sensor is virtually liftoff independent. This finding is verified by numerical evaluations. This paper also finds that the phase signature of a ferrite U-cored sensor can be approximated by that of a double air-cored sensor of similar size and, therefore, possesses a similar liftoff-independent property. Measurements made with a sample U-cored sensor next to plates of nonmagnetic and magnetic materials verified the theoretical results.
---
paper_title: Gas Pipeline Corrosion Mapping Using Pulsed Eddy Current Technique
paper_content:
Oil and gas transmission pipelines are critical items of infrastructure in providing energy sources to regions and countries. Steel pipes are commonly used, and these can be subject to both internal and external corrosion. This paper presents an advanced nondestructive inspection technique for the detection of oil and gas pipeline corrosion defects. The Pulsed Eddy Current (PEC) method has been successfully applied to corrosion detection in unburied gas pipelines without removing the insulation. First, the principles of the pulsed eddy current method are outlined; then, the pulsed eddy current test on a pipe is simulated with Maxwell software to obtain optimum test parameters. To test the new technique, artificial defects are fabricated on the inner surface of a gas pipe to simulate different corrosion phenomena encountered in practice. Three insulation layers are applied to the pipe in order to show the efficiency of PEC in the detection of wall-thinning areas without removing the insulation.
---
paper_title: Analytical modeling for transient probe response in pulsed eddy current testing
paper_content:
An improved analytical model by the Fourier method for transient eddy current response is presented. In this work, an alternative approach is considered to solve the harmonic eddy current problem by the reflection and transmission theory of electromagnetic waves, thus a more concise closed-form expression is expected to be obtained. To reduce the inherent Gibbs phenomenon, a harmonic order-dependent decreasing factor is employed to weight the Fourier series (FS) representation. It is shown that the developed model is promising to be used as a fast and accurate analytical solver for the transient probe response and is helpful to gain a deep insight into pulsed eddy current (PEC) testing.
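The kind of harmonic-order-dependent decreasing weighting mentioned here can be sketched with Lanczos sigma factors applied to the truncated Fourier series (one standard choice; the paper's specific factor may differ):

    \[
      f_N(t) \;=\; \sum_{n=-N}^{N} \sigma_n\, c_n\, e^{\,j n \omega_0 t},
      \qquad
      \sigma_0 = 1,\quad
      \sigma_n \;=\; \frac{\sin(\pi n/N)}{\pi n/N}\ \ (n \neq 0),
    \]

which attenuates the high-order harmonics responsible for the Gibbs oscillations near the edges of the excitation pulse.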
---
paper_title: Simulation of Edge Cracks Using Pulsed Eddy Current Stimulated Thermography
paper_content:
Thermography has proven to be one of the most effective approaches to detect cracks in conductive specimens over a relatively large area. Pulsed eddy current stimulated thermography is an emerging integrative nondestructive approach for the detection and characterization of surface and subsurface cracks. In this paper, heating behaviors of edge cracks, excited by pulsed eddy currents, are examined using numerical simulations. The simulations are performed using COMSOL multiphysics finite element method simulation software using the AC/DC module. The simulation results show that in the early heating stage, the temperature increases more quickly at the crack tip compared with other points on the sample. The results indicate that to maximize sensitivity, the response should be analyzed in the early stages of the heating period, no more than 100 ms for samples in which we are interested. The eddy current density distribution is changed with a variation in inductor orientation, but the crack tips remain the "hottest" points during the excitation period, which can be used for robust quantitative defect evaluation. Signal feature selection, transient temperature profile of the sample, and influence of the inductor orientation on the detection sensitivity for edge cracks are investigated. The work shows that positioning of the inductor, perpendicular to the crack line, results in the highest sensitivity for defect detection and characterization. The crack orientation can be estimated through the rotation of the linear inductor near the sample edge and the crack tips.
---
paper_title: Interaction of an Eddy-Current Coil With a Right-Angled Conductive Wedge
paper_content:
A fundamental problem in eddy-current nondestructive evaluation is one of finding the quasi-static electromagnetic field of a cylindrical coil in the vicinity of the edge of a metal block. Although the field can be calculated numerically, an effective analytical approach can potentially provide a better understanding of the edge fields and form the basis of a procedure for solving a whole class of related edge problems including edge structures that contain corner cracks. One can represent the metal block as a conductive quarter space in an unbounded region. However, it has been found that the analysis is more straightforward if the problem domain is truncated in two dimensions. With the domain boundaries far from both the coil and the corner, the truncation has a negligible effect on the solution near the edge but the field calculation becomes much easier. A double Fourier series representation of the field is used, as in the case of a rectangular waveguide problem. The field in the conductor is then matched at the interfaces with that in air to determine the expansion coefficients that are used to represent the field in different parts of the domain. In this way we have derived expressions for the magnetic field, the induced eddy-current density and the coil impedance at arbitrary position and orientation of the coil.
---
paper_title: Study of Lift-Off Invariance for Pulsed Eddy-Current Signals
paper_content:
Lift-off invariance (LOI) is getting much attention from researchers in the field of electromagnetic nondestructive evaluation (ENDE) because, at the LOI point, eddy-current signals for different lift-offs intersect and the signal amplitude is independent of the lift-off variation. We discuss our ongoing research into LOI, starting from an overview of the state of the art of pulsed eddy-current testing (PEC) systems and their use in the elimination of the lift-off effect. We have investigated LOI characteristics with respect to variation in the configuration of the PEC probe in a theoretical study, implemented by extended truncated region eigenfunction expansion (ETREE) modeling. We found that: 1) the LOI occurs when the first-order time derivatives of the magnetic field signals acquired from Hall sensors are used; 2) an LOI range, instead of a single LOI point, appears when multiple lift-offs are introduced, as shown through both experimental and theoretical studies. The LOI range varies with the Hall sensor position in the probe assembly and the conductivity of the samples under inspection, which are important parameters for the design and development of PEC systems. Based on this understanding, we investigated new approaches using theoretical computation and multiple lift-offs, or magnetic sensor arrays, for conductivity and lift-off estimation. Our study can be extended to the design and development of multipurpose eddy-current sensor systems for surface form measurement and defect detection.
---
paper_title: Eddy-current interaction of a long coil with a slot in a conductive plate
paper_content:
We describe a truncated-domain method for calculating eddy currents in a plate with a long flaw. The plate is modeled as a conductive half-space and the flaw is a long slot with a rectangular cross section. A long two-dimensional (2-D) coil carrying an alternating current is aligned parallel to the slot. The coil impedance variation with frequency is determined for an arbitrary coil location. The electromagnetic field due to a long coil above a conductive half-space can be expressed as integrals of trigonometric functions. For a half-space with a long slot, however, additional boundary conditions must be satisfied at the slot walls. The truncated-domain method makes this possible by recasting the problem in a finite domain; as a result, the Fourier integral is replaced by a series. The domain can be made arbitrarily large, thereby yielding results that are numerically as close to the infinite domain solution as desired. We have used the truncated domain approach to study both eddy-current flaw interactions and edge effects in the limiting case of a very wide and deep slot. We confirmed the theoretical predictions by comparing them with results of a 2-D finite element calculation and of experiments.
---
paper_title: Assessment of wall thinning in insulated ferromagnetic pipes using the time-to-peak of differential pulsed eddy-current testing signals
paper_content:
Pulsed eddy current testing (PECT) is a powerful candidate for the detection of wall thinning of insulated ferromagnetic pipes in petrochemical and power generation plants. The main purpose of this study is to find an efficient and easy-to-use signal feature for the assessment of wall thinning. Analytical modeling of a PECT probe over the insulated piping system is performed and its result is verified by experimental tests. Two time-related features, the peak value and the time-to-peak, are found in the differential signal obtained by subtracting the test signal from the reference signal. The time-to-peak is superior to the peak value due to its linear variation with wall thickness. The influences of various practical testing conditions on the PECT signal are investigated. Results show that the time-to-peak is independent of the insulation thickness and the probe lift-off. The robustness of the time-to-peak to the probe configuration is also validated by employing three probes of different dimensions and structures. To determine the range over which the time-to-peak varies linearly with the amount of wall thinning, differential signals based on different reference thicknesses are examined. Results show that the time-to-peak remains linear only for relative wall thinning of less than 60%, but it can still be effectively used for calibration purposes in the periodic in-service inspection of insulated pipelines.
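As a rough illustration of the two features named above, the sketch below (not the paper's code; the transients are synthetic placeholders) forms the differential signal by subtracting a reference transient from a test transient and reads off the peak value and the time-to-peak.

import numpy as np

def differential_features(test_signal, reference_signal, dt):
    """Return (peak value, time-to-peak) of the differential PECT signal.

    The differential signal is the test signal minus the reference signal;
    dt is the sampling interval in seconds.
    """
    diff = test_signal - reference_signal
    idx = np.argmax(np.abs(diff))          # location of the largest excursion
    return diff[idx], idx * dt

# Hypothetical transients: a thinner wall is emulated here by a faster decay.
dt = 1e-5
t = np.arange(0.0, 0.02, dt)
reference = np.exp(-t / 4e-3)              # nominal wall thickness
test = np.exp(-t / 3e-3)                   # thinned wall
peak, t_peak = differential_features(test, reference, dt)
print("peak = %.4f, time-to-peak = %.4f s" % (peak, t_peak))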
---
paper_title: Development of characteristic test system for GMR sensor
paper_content:
In order to test the characteristics of a giant magneto-resistance (GMR) sensor comprehensively and accurately, a test system based on a 3D Helmholtz coil is designed. The coil can generate a sinusoidal magnetic field in the range of −10 mT to 10 mT and 0 Hz to 500 Hz along three orthogonal axes. The static and low-frequency characteristics of a GMR sensor are tested and analyzed. The dynamic test results reveal that the GMR sensor should be operated with a unipolar magnetic field within its linear range, in which case the maximal error is still as large as 20%, mainly caused by hysteresis. To further increase the accuracy of measuring dynamic instantaneous magnetic induction, appropriate hysteresis compensation methods should be analyzed and applied in future research.
---
paper_title: Reduction of lift-off effects for Pulsed Eddy Current NDT
paper_content:
The lift-off effect is commonly known to be one of the main obstacles for effective eddy current NDT testing as it can easily mask defect signals. Pulsed eddy current techniques, which are believed to be potentially rich of information, are also sensitive to the effect. An approach using normalisation and two reference signals to reduce the lift-off problem with pulsed eddy current techniques is proposed. Experimental testing on the proposed technique and results are presented in this report. Results show that significant reduction in the effect has been achieved mainly in metal loss and sub-surface slot inspection. The technique can also be applied for measurement of metal thickness beneath non-conductive coatings, microstructure, strain/stress measurement, where the output is sensitive to the lift-off effect.
---
paper_title: Routes for GMR-Sensor Design in Non-Destructive Testing
paper_content:
GMR sensors are widely used in many industrial segments such as information technology, automotive, automation and production, and safety applications. Each area requires an adaptation of the sensor arrangement in terms of size and alignment with respect to the field source involved. This paper presents an analysis of geometric sensor parameters and of the arrangement of GMR sensors, providing a design roadmap for non-destructive testing (NDT) applications. For this purpose we use an analytical model simulating the magnetic flux leakage (MFL) distribution of surface-breaking defects and investigate the flux leakage signal as a function of various sensor parameters. Our calculations show the influence of sensor length and height, and that when detecting the magnetic flux leakage of µm-sized defects a gradiometer baseline of 250 µm leads to a signal strength loss of less than 10% in comparison with a magnetometer response. To validate the simulation results we finally performed measurements with a GMR magnetometer sensor on a test plate with artificial µm-range cracks. The differences between simulation and measurement are below 6%. We report on routes for a GMR gradiometer design as a basis for the fabrication of NDT-adapted sensor arrays. The results are also helpful for the use of GMR in other applications when it comes to measuring positions, lengths, angles or electrical currents.
---
paper_title: Removing Eddy-Current probe wobble noise from steam generator tubes testing using Wavelet Transform
paper_content:
One of the most important nondestructive evaluation (NDE) techniques applied to steam generator tube inspection is electromagnetic eddy-current testing (ECT). The signals generated by this NDE technique generally contain noise that makes the interpretation and analysis of ECT signals difficult. One of the noise sources present in the signals is probe wobble noise, which is caused by the slack between the probe and the tube walls. In this work, the Wavelet Transform (WT) is used for probe wobble de-noising. The WT is a relatively recent mathematical tool that allows local analysis of non-stationary signals such as ECT signals, which is a great advantage of the WT when compared with other analysis tools such as the Fourier Transform. However, using the WT involves selecting wavelets and coefficients as well as choosing the number of decomposition levels needed. This work presents a probe wobble de-noising method used in conjunction with traditional ECT evaluation. Comparative results using several WTs applied to eddy-current signals are presented in a reliable way, that is, without loss of the defect information inherent in the signal. A stainless steel tube, with two artificial defects generated by electro-erosion, was inspected with a ZETEC MIZ-17ET ECT instrument. The signals were de-noised with several different WTs and the results are presented. The method offers good results and is promising because it allows the removal of the probe wobble effect from eddy-current signals without loss of essential signal information.
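A minimal wavelet-thresholding de-noising step of the kind discussed above can be sketched with the PyWavelets package as follows; the wavelet choice, threshold rule and synthetic test signal are illustrative assumptions, not the settings used in the paper.

import numpy as np
import pywt

def wavelet_denoise(signal, wavelet="db4", level=4):
    """Soft-threshold wavelet de-noising of a 1-D eddy-current trace."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    # Universal threshold estimated from the finest-scale detail coefficients.
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thr = sigma * np.sqrt(2.0 * np.log(len(signal)))
    coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(signal)]

# Hypothetical ECT trace: two defect-like pulses, with the probe-wobble
# disturbance emulated crudely as additive random noise.
t = np.linspace(0.0, 1.0, 1024)
clean = np.exp(-((t - 0.3) / 0.01) ** 2) + 0.6 * np.exp(-((t - 0.7) / 0.01) ** 2)
noisy = clean + 0.1 * np.random.randn(t.size)
denoised = wavelet_denoise(noisy)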
---
paper_title: A Novel Triple-Coil Electromagnetic Sensor for Thickness Measurement Immune to Lift-Off Variations
paper_content:
Lift-off variation causes errors in eddy-current measurement of metallic plate thickness. In this paper, we designed a triple-coil sensor operating as two coil pairs and in a multifrequency mode. It is found that the difference in their peak frequencies (the frequency when the imaginary part of the inductance reaches peak) is linearly proportional to the plate thickness but virtually immune to lift-off variations. Mathematical derivation, simulation, and experimental results verified the validity of the methodology.
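The thickness feature described above reduces to locating two peak frequencies and taking their difference. The sketch below is a hypothetical illustration with made-up inductance spectra, not the authors' model.

import numpy as np

def peak_frequency(freqs, imag_inductance):
    """Frequency at which the imaginary part of the inductance peaks."""
    return freqs[np.argmax(imag_inductance)]

# Hypothetical spectra for the two coil pairs of a triple-coil sensor.
freqs = np.logspace(2, 5, 400)                        # 100 Hz ... 100 kHz
imagL_pair1 = freqs / (1.0 + (freqs / 3.0e3) ** 2)    # peaks near 3 kHz
imagL_pair2 = freqs / (1.0 + (freqs / 8.0e3) ** 2)    # peaks near 8 kHz
delta_f = peak_frequency(freqs, imagL_pair2) - peak_frequency(freqs, imagL_pair1)
print("peak-frequency difference: %.0f Hz" % delta_f)  # maps roughly linearly to thickness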
---
paper_title: Liftoff insensitive thickness measurement of aluminum plates using harmonic eddy current excitation and a GMR sensor
paper_content:
This paper describes the implementation of a device that measures the thickness of metallic plates. A pancake coil is used for sinusoidal magnetic field excitation, and detection is performed with a bridge giant magneto-resistor sensor. The paper uses linear transformer theory to explain the lift-off effect, with special attention to the point-of-interception phenomenon. The transformer model shows that, to attain interception points at which the instantaneous measured voltages are independent of the lift-off gap, the excitation coil must be driven with an imposed current. This effect was explored to show that simultaneous measurement of thickness and conductivity is feasible.
---
paper_title: Electrical conductivity measurement of ferromagnetic metallic materials using pulsed eddy current method
paper_content:
A pulsed eddy current testing (PECT) method for the electrical conductivity measurement of ferromagnetic metallic materials is proposed. Based on time-domain analytical solutions to the PECT model of ferromagnetic plates, the conductivity and permeability are determined via an inverse problem established with the calculated and measured values of the induced voltage. The PECT method for conductivity measurement is verified against the four-point probe method on three carbon steel plates. In addition, the effects of the amplitude of the pulsed excitation current and the lift-off of the probe coils on the measurement results are studied. PECT is an innovative, non-contacting method with good repeatability for electrical conductivity measurement.
---
paper_title: New Method for Suppressing Lift-Off Effects Based on Hough Transform
paper_content:
A major problem for crack or electromagnetic property measurement in the eddy current detection technique is the lift-off effect. The nonzero lift-off distance tends to smear out the discontinuity in the detected signal, causing detection errors. The paper shows test results of the lift-off effect in the normalized impedance plane. The curves are transformed to the Hough plane by the Hough transform, and consequently the lift-off effect is suppressed effectively. The method makes it convenient to distinguish different electromagnetic characteristics of materials, which can be used to detect cracks or other flaws in a material.
---
paper_title: Probe lift-off compensation method for pulsed eddy current thickness measurement
paper_content:
The pulsed eddy current nondestructive testing signal for ferromagnetic plate thickness testing is sensitive to both the plate's thickness and the probe lift-off effect. This leads to difficulty in eigen-value extraction for thickness quantification. To remove the probe lift-off effect, a method based on calculating the time-varying relative magnetic flux changing rate is presented. Experimental results show that the relative magnetic flux changing rate is approximately lift-off independent and determined only by the plate thickness.
---
paper_title: Suppressing sensor lift-off effects on cracks signals in surface magnetic field measurement technique
paper_content:
As in the eddy current technique, a major problem for crack measurement in the surface magnetic field measurement (SMFM) technique is the lift-off distance of the sensor. The nonzero value of lift-off distance tends to smear out the discontinuity in the SMFM crack signal, causing errors in crack characterization. The paper presents a method for sensor lift-off evaluation and crack signal restoration in the SMFM technique. The method employs a deconvolution technique and uses a suitable cost function based on the relations available for electromagnetic field distributions at the metal surface. Simulation results are presented to demonstrate the accuracy of the method.
---
paper_title: An approach to reduce lift-off noise in pulsed eddy current nondestructive technology
paper_content:
The pulsed eddy current (PEC) technique, an emerging variant of the eddy current technique, has been used in engineering applications such as aircraft, oil/gas pipelines, nuclear steam pipes and high-speed rails, due to the rich information it provides in the time and frequency domains. However, the lift-off noise introduced by varying coating thicknesses, irregular sample surfaces or movement of transducers has a serious influence on the accuracy of defect detection in these key structures. It greatly limits the application of PEC in quantitative nondestructive testing. In order to reduce the effect of the lift-off, the lift-off effect is analyzed theoretically and experimentally; based on an investigation of the relationship between the peak value of the difference signal and the lift-off, an approach to reduce the lift-off noise when detecting the defect depth or width is proposed. In this approach, the defect depth and width are determined by the slope of the linear curve of the peak value of the difference signal versus the lift-off. The proposed approach is verified by experiment and the results indicate that it can greatly reduce the lift-off noise in the PEC technique. Therefore, it can be applied in the characterization of surface defects in samples of non-ferrous material.
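The slope-based idea above can be illustrated with a simple linear fit of the peak value of the difference signal against the lift-off; the numbers below are invented for illustration only.

import numpy as np

# Hypothetical measurements: peak value of the PEC difference signal recorded
# at several known lift-offs for one defect.
liftoffs = np.array([0.0, 0.5, 1.0, 1.5, 2.0])            # mm
peak_values = np.array([0.82, 0.74, 0.66, 0.59, 0.51])    # arbitrary units

# The slope of the fitted line is the feature used to characterize the
# defect depth/width; it changes little when the absolute lift-off drifts.
slope, intercept = np.polyfit(liftoffs, peak_values, 1)
print("slope = %.3f a.u./mm" % slope)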
---
paper_title: A simplified model for non-destructive thickness measurement immune to the lift-off effect
paper_content:
A simplified model for thickness measurement using an electromagnetic sensor is introduced in this paper. It uses the phase signature of the inductance change to reduce the lift-off effect when an air-core coil is placed next to a thin nonmagnetic metallic plate. The phase signature can be extracted from data at several frequencies, and the thickness of the plate can be evaluated non-intrusively and directly. Numerical simulations were carried out for different plate thicknesses. The results validated the effectiveness of the proposed model when used in eddy-current testing.
---
paper_title: Reduction of Lift-Off Effects in Pulsed Eddy Current for Defect Classification
paper_content:
Pulsed eddy-current (PEC) testing is an electromagnetic nondestructive testing and evaluation (NDT&E) technique, and defect classification is one of the most important steps in PEC defect characterization. With pulse excitation, the PEC response signals contain more features in the time domain and rich information in the frequency domain. This paper investigates feature extraction techniques for PEC defect classification, including rising time, differential time to peak, differential time to zero, spectrum amplitude, and differential spectrum amplitude. An experimental study has been undertaken on Al-Mn 3003 alloy samples with artificial surface defects, sub-surface defects, and defects in two-layer structures under different lift-offs. Experimental results show that the methods are effective for classifying defects in both single-layer and two-layer structures. Comparing the results of the different methods, it is found that the differential processing can eliminate the lift-off effect in defect classification in both the time domain and the frequency domain. The study can be extended to defect classification in complex structures, where lift-off effects are significant.
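A few of the time- and frequency-domain features listed above can be extracted with a short script like the following; the feature definitions are simplified and the PEC response is a synthetic double-exponential pulse, so this is only an editorial sketch of the general idea.

import numpy as np

def pec_features(signal, dt):
    """Return a few simple PEC features: rising time (10%-90%),
    time to peak, and a low-frequency spectrum amplitude."""
    peak_idx = np.argmax(signal)
    peak = signal[peak_idx]
    t10 = np.argmax(signal >= 0.1 * peak) * dt
    t90 = np.argmax(signal >= 0.9 * peak) * dt
    spectrum = np.abs(np.fft.rfft(signal))
    return {
        "rising_time": t90 - t10,
        "time_to_peak": peak_idx * dt,
        "spectrum_amplitude": spectrum[1],   # first non-DC frequency bin
    }

# Hypothetical PEC response (double-exponential pulse).
dt = 1e-6
t = np.arange(0.0, 5e-3, dt)
resp = np.exp(-t / 1e-3) - np.exp(-t / 2e-4)
print(pec_features(resp, dt))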
---
paper_title: Electromagnetic Inspection Technique of Thickness of Nickel-Layer on Steel Plate Without Influence of Lift-Off Between Steel and Inspection Probe
paper_content:
There is a need to inspect the thickness of a nickel layer of nickel-coated steel plate in the production process to guarantee the quality. The conductivity of a nickel layer in the nickel-coated steel is larger than that of mother steel, and its permeability is smaller than that of mother steel. Therefore, the estimation of thickness of the nickel-layer is possible by using differences of these electromagnetic properties. However, the signal is also influenced by the change of distance (lift-off) between the nickel-coated steel and an electromagnetic sensor. In this paper, the inspection method for measuring the thickness of nickel layer taking account of the lift-off is proposed using the 3-D edge-based hexahedral nonlinear FEM.
---
paper_title: Sensor-tilt invariance analysis for eddy current signals
paper_content:
In electromagnetic nondestructive evaluation (NDE), the signals received by sensors are determined by the flaw together with the operational parameters during inspection. Invariant pattern recognition techniques have been studied to render NDE signals insensitive to operational variations while preserving or recovering crack information. Invariance schemes and algorithms have been developed for magnetostatic flux leakage and eddy current NDE to remove the corruption of signal measurements by operational parameters and lift-off changes. A novel invariance analysis of eddy current (EC) signals for the inspection of deeply embedded cracks under layered fastener heads is presented in this paper. A detection system based on uniform EC excitation and giant-magnetoresistive (GMR) pick-up sensors has been developed and has shown improved detectability of 2nd and 3rd layer defects around fastener sites in multilayer structures. However, sensor tilt due to variation of probe lift-off can generate a spurious response that acts as noise and obscures flaw inspection. The variation of GMR sensor tilt during crack inspection is investigated using an efficient numerical model that simulates the EC-GMR system used. An invariance transformation scheme is proposed to exclude the sensor-tilt noise while keeping the defect indication unchanged. Statistical features insensitive to tilt effects are extracted after the invariance processing, which improves the probability of flaw detection.
---
paper_title: Giant magneto resistance based eddy-current testing system
paper_content:
The purpose of this paper is to introduce a new eddy-current testing system for subsurface crack detection based on giant magnetoresistive (GMR) sensors. The optimized probes, having small dimensions and high sensitivity to low magnetic fields, enhance the spatial resolution of an ECT system. Experimental results at different frequencies and different lift-off distances are presented.
---
paper_title: GMR based eddy current system for defect detection
paper_content:
The Giant Magneto Resistance (GMR) sensor is successfully used in eddy current testing because of its good low-frequency detection characteristics and high sensitivity. In order to obtain more information about defects, this paper presents a GMR-based eddy current system built around a Field Programmable Gate Array (FPGA) for defect detection. The system can obtain both the amplitude of the GMR sensor's output signal and its phase relative to the exciting current at the same time. Through a series of detection experiments with defects of widths 6 mm, 8 mm and 10 mm, it is found that both the amplitude and the phase can clearly reflect the magnetic field changes caused by the various defects. By comparing the data curves, it is shown that the phase information matches the real size of the defects better than the amplitude information. This conclusion is beneficial for further quantitative defect detection.
---
paper_title: Defect evaluation using the phase information of an EC-GMR sensor
paper_content:
Phase information is employed to enhance defect evaluation in metals by eddy current testing with conventional probes, but for an EC-GMR sensor the output phase is not used. In this paper, GMR-based eddy current defect detection is studied, in which the phase of the GMR sensor output is taken into account. Based on the Biot-Savart law, the amplitude and phase of the EC-GMR sensor's output are studied. This paper shows that defects can be better located through phase analysis of the GMR sensor's output.
---
paper_title: Crack detection in steel using a GMR-based MFL probe with radial magnetization
paper_content:
This paper presents the development of a portable probe for detecting cracks in steel plates. The probe consists of a magnet and a giant magneto-resistance (GMR) sensor. The magnet provides a radial magnetization at the surface of the steel plate. The GMR sensor detects the tangential component of the magnetic flux leakage due to a crack in the steel plate. Two steel plates were inspected with six cracks of depths: 0.5, 1.0, 1.5, 2.0, 2.5, and 3.0 mm, and widths of 0.25 and 0.5 mm, respectively. The cracks were scanned with the GMR sensitivity axis at 90°, 80°, 70°, 60° and 50°. It is demonstrated that the output voltage of the GMR sensor is sensitive to the orientation of the crack.
---
paper_title: Increasing the measurement accuracy of GMR current sensors through hysteresis modeling
paper_content:
A method is presented for increasing the measurement accuracy of GMR (giant magnetoresistive) sensors by numerically eliminating the hysteresis in the output signal. A simplified mathematical model of the hysteresis has been derived from the T(x) hysteresis model for anticipating the measurement values. The model has been implemented in a software simulation environment and compared with real GMR sensor measurements, with very good results. Further, the model has been quantised and implemented on a fixed-point digital signal controller (DSC), connected to the output of the sensors. The linear output characteristic delivered in this case by the DSC confirmed the accuracy of the hysteresis model in the hardware implementation too. An algorithm based on the model has also been developed in order to eliminate error propagation during the measurements.
---
paper_title: GMR array uniform eddy current probe for defect detection in conductive specimens
paper_content:
The use of eddy current probes (ECP) with a single magnetic field sensor is a common solution for defect detection in conductive specimens, but it is a time-consuming procedure that requires a huge number of scanning steps when specimens with large surfaces are to be inspected. In order to speed up the nondestructive testing procedure, eddy current probes including a single excitation coil and an array of sensing coils are a good solution. The solution investigated in this paper replaces the sensing coils with giant magneto-resistors (GMRs), due to their high sensitivity and broadband frequency response. Thus, the ECP excitation coil can be driven at lower frequencies than traditional probes, allowing defects to be detected in thicker structures. In this work an optimized uniform eddy current probe architecture including two planar excitation coils, a rectangular magnetic field biasing coil and a GMR magnetometer sensor array is presented. An AC current is applied to the planar spiral rectangular coil of the probe, while a set of GMR magnetometer sensors detects the induced magnetic field in the specimens under test. The rectangular coil provides a uniform DC magnetic field, assuring appropriate biasing of the GMR magnetometers of the probe and setting the operating point in the linear region and on the same branch of the GMR static characteristics. The differences in the images obtained for the same specimen with each GMR are reduced if all sensors are biased at the same working point. Elements of the automated measurement system used to inspect the plate under test with the proposed eddy current probe, including a validation procedure based on a 2D template matching algorithm and the corresponding experimental results, are included in the paper.
---
paper_title: GMR versus differential coils in velocity induced eddy current testing
paper_content:
This paper presents a development of a new nondestructive testing method using eddy currents induced by velocity. The new method uses a constant magnetic field which, attached to a moving medium, induces eddy currents in the conductive material to be tested. By measuring the opposing magnetic field generated by the eddy currents it is possible to obtain information regarding the presence of defects. Two different magnetic field sensors, a GMR and differential pick-up coils, were used and compared in the detection of the perpendicular components of the magnetic field created by eddy currents disrupted by linear defects machined in an aluminum plate.
---
paper_title: An SVM approach with electromagnetic methods to assess metal plate thickness
paper_content:
Eddy current testing (ECT) is a non-destructive technique that can be used in the measurement of conductive material thickness. In this work ECT and a machine learning algorithm (support vector machine – SVM) are used to determine accurately the thickness of metallic plates. The study has been made with ECT measurements on real specimens. At a first stage, a small number of plates is considered and SVM is used for a multi-class classification of the conductive plate thicknesses within a finite number of categories. Several figures of merit were tested to investigate the features that lead to “good” separating hyperplanes. Then, based on an SVM regressor, a reliable estimation of the thickness of a large quantity of plates is tested. Eddy currents are induced by imposing a voltage step in an excitation coil (transient eddy currents – TEC), while a giant magnetoresistance (GMR) sensor measures the transient magnetic field intensity in the vicinity of the sample. An experimental validation procedure, including machine training with linear and exponential kernels and classification errors, is presented with sets of samples with thicknesses up to 7.5 mm.
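The classification stage described above can be sketched with scikit-learn as follows; the feature vectors, class labels and SVM hyper-parameters are placeholders invented for illustration and do not reproduce the paper's data set or results.

import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

# Hypothetical data set: each row is a feature vector extracted from a
# transient ECT measurement (e.g. peak amplitude and a decay-time estimate),
# and the label is a plate-thickness class index.
rng = np.random.default_rng(0)
thickness_classes = np.repeat(np.arange(4), 50)          # 4 thickness classes
features = np.column_stack([
    1.0 / (1.0 + thickness_classes) + 0.02 * rng.standard_normal(200),
    0.5 * thickness_classes + 0.05 * rng.standard_normal(200),
])

X_tr, X_te, y_tr, y_te = train_test_split(features, thickness_classes,
                                          test_size=0.3, random_state=0)
clf = SVC(kernel="rbf", C=10.0, gamma="scale").fit(X_tr, y_tr)
print("held-out accuracy: %.2f" % clf.score(X_te, y_te))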
---
paper_title: Rotating Field EC-GMR Sensor for Crack Detection at Fastener Site in Layered Structures
paper_content:
Eddy current-based techniques have been investigated for the inspection of embedded cracks under fastener heads in riveted structures. However, these techniques are limited in their ability to detect cracks that are not perpendicular to induced current flows. Further, the presence of a steel fastener of high permeability produces a strong signal that masks relatively smaller indication from a crack. In this paper, a rotating electromagnetic field is designed to rotate the applied magnetic fields and related eddy currents electrically so that the sensor shows uniform sensitivity in detecting cracks in all radial directions around fastener sites. Giant magnetoresistive sensors are employed to image the normal component of this rotating field, to detect different crack orientations at aluminum and ferromagnetic fastener sites. Numerical model-based studies and experimental validation are presented.
---
paper_title: Open crack depth evaluation using eddy current methods and GMR detection
paper_content:
In this paper the eddy current nondestructive method is used to determine the depth of linear cracks machined in aluminum plates. To improve this method, a constant field probe was used to produce linear eddy currents launched across the crack. The magnetic field components tangential to the plate, perpendicular and parallel to the applied field, were measured by a giant magneto-resistor sensor at different frequencies to devise a method for crack depth measurement.
---
paper_title: Evaluation of portable ECT instruments with positioning capability
paper_content:
In this paper two different low-cost eddy current testing (ECT) systems used for detecting and measuring defects in conductive surfaces are evaluated and their performance compared with commercial equipment. Both developed systems include a probe with a giant magnetoresistance (GMR) device as the magnetic field sensor and a computer mouse pointer as the positioning system. This configuration, despite its low cost, allows better performance than commercial equipment because deeper defects can be detected due to the higher sensitivity of the GMR sensor, which is constant over a very large frequency range (10 Hz–1 MHz), and also because a precisely located graphical representation of the defect is delivered to the user thanks to the incorporated positioning system. Although having the same goal, the two developed ECT system implementations differ in their architectures and signal processing algorithms. One system is based on a digital signal processor (DSP) where raw data are digitally processed, and the other system uses analog circuits to process the acquired signals. This paper includes a detailed description of each implementation, the obtained results and a performance comparison with commercial equipment.
---
paper_title: Design and analysis of a GMR eddy current probe for NDT
paper_content:
Defect detection in metallic plates represents an important issue in the metal industry because of its potential use in quality control processes. Eddy current testing is one of the most extensively used nondestructive techniques for inspecting electrically conductive materials. The purpose of this paper is to present an eddy current testing system for surface defect detection in conducting materials using a giant magnetoresistive (GMR) sensor. An alternating magnetic field is produced by a solenoid and eddy currents are generated in the material under test. The GMR sensor was mounted inside the coil and the arrangement was fitted to the axis of a vertical machining center. In order to validate the measurement device, defects were created as cracks machined in aluminum workpieces. The parts were then scanned with the sensor prototype, and a method to estimate the width and depth of the induced defects was proposed after analyzing the output voltage signal.
---
paper_title: Detection capabilities evaluation of the advanced sensor types in Eddy Current Testing
paper_content:
The purpose of this paper is to compare the performance of various sensing elements in eddy current non-destructive inspection. A new eddy current testing probe is designed to compare the detection and resolution capabilities of different sensors. Four magnetic sensors, specifically GMR (1D and 3D sensors), AMR, fluxgate, and a standard induction coil, are used as the sensing element. For the comparison of these sensors, numerical simulations and experimental measurements are performed under the same conditions. The results are presented and discussed in the paper. Summary: The article presents the results of a comparison of sensors used in eddy current flaw detection. The authors perform the comparison using a probe of their own design, examining GMR and AMR sensors, a magnetic field sensor, and a standard induction coil; in addition, numerical simulations reproducing the performed measurements are carried out. (Evaluation of the detection capabilities of sensors used in eddy current testing)
---
paper_title: Current around a crack in an aluminum plate under nondestructive evaluation inspection
paper_content:
This paper presents an inverse problem algorithm applied to eddy current inspection of a metallic plate sample with machined crack defects, using a sinusoidal magnetic field of fixed amplitude. A uniform magnetic field is obtained over a wide area around the probe due to the construction technique of the excitation coil. The magnetic sensor used is a giant magnetoresistance (GMR) sensor with high sensitivity. It measures the amplitude and phase of the magnetic field originating from the eddy currents. A perturbation occurs in the measured magnetic field in the proximity of a crack defect. The current density around the crack defect is then determined through an inverse problem algorithm. The applied method consists of determining the transformation kernel and applying a Tikhonov regularization algorithm. With these data it is possible to obtain information about the geometrical characteristics of the defect.
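The regularized inversion step mentioned above amounts to a Tikhonov-damped least-squares solve once the transformation kernel is known. The sketch below uses a random, ill-conditioned kernel as a stand-in for the real one, so it only illustrates the numerical recipe, not the paper's actual kernel or data.

import numpy as np

def tikhonov_solve(K, b, lam):
    """Solve min ||K x - b||^2 + lam ||x||^2 for x (zeroth-order Tikhonov)."""
    n = K.shape[1]
    return np.linalg.solve(K.T @ K + lam * np.eye(n), K.T @ b)

# Hypothetical ill-conditioned kernel relating the current density around the
# crack (x) to the measured field perturbation (b).
rng = np.random.default_rng(1)
K = rng.standard_normal((60, 40)) @ np.diag(1.0 / (1.0 + np.arange(40)))
x_true = np.sin(np.linspace(0.0, np.pi, 40))
b = K @ x_true + 1e-3 * rng.standard_normal(60)
x_est = tikhonov_solve(K, b, lam=1e-3)
print("relative error: %.3f" % (np.linalg.norm(x_est - x_true) / np.linalg.norm(x_true)))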
---
paper_title: A GMR–ECT based embedded solution for applications on PCB inspections
paper_content:
Real-time non-destructive testing and evaluation (NDT/E) of conducting materials using eddy current techniques (ECTs) has gained significance in the last few years. This paper proposes a real-time ECT–NDT system exploiting giant magneto-resistive (GMR) sensors for the inspection of printed circuit boards (PCBs). The probe design is aimed at crack inspection over flat surfaces and is especially suitable for micro-defect detection on high-density bare PCBs. We propose a system based on a GMR sensor able to detect the magnetic field resulting from the interaction between a planar coil exciter and the PCB. The EC signals detected by the GMR sensor are acquired by a high-speed analog-to-digital (A/D) converter for subsequent digital signal processing. The achieved results highlight the efficient design of the system. The advantages of the proposed models and some possible improvements of the system are also discussed.
---
paper_title: Magnetic sensors assessment in velocity induced eddy current testing
paper_content:
This paper presents an enhancement of the probes to be used in a new nondestructive testing method based on eddy currents induced by velocity. In this method, a permanent magnet attached to a moving carriage creates eddy currents in the conductive material to be inspected. By measuring the opposing magnetic field generated by the eddy currents, it is possible to obtain information regarding the presence of defects. Different magnetic field sensors, such as differential pick-up coils, giant magneto-resistors (GMR) and Hall sensors, have been used and compared. A permanent magnet moving above a plate was studied using a numerical model to allow further improvements to be made to the probe. Depending on each sensor's geometry, sensing axis and range, its position and orientation must be strategically chosen in order to increase defect sensitivity. The best probe position is the one that guarantees the highest sensitivity to the presence of defects.
---
paper_title: The Lift-Off Effect in Eddy Currents on Thickness Modeling and Measurement
paper_content:
This paper uses a linear transformer model to investigate the effect of the lift-off on the results of thickness measurement of nonferromagnetic metallic plates. The transformer model predicts that the time derivatives of the magnetization curves obtained for different gaps between the excitation coil and the plate should intersect at a single point when low magnetic coupling factors are considered. To assess the validity of the model, the results are compared with experimental data obtained with a giant magnetoresistive sensor probe. For this comparison, the time derivative of the sensor output voltage must be computed as well. The similarity of the theoretical model results and those obtained experimentally with pulsed excitation confirms the correctness of the transformer approach.
---
paper_title: Flexible GMR Sensor Array for Magnetic Flux Leakage Testing of Steel Track Ropes
paper_content:
This paper presents design and development of a flexible GMR sensor array for nondestructive detection of service-induced defects on the outer surface of 64 mm diameter steel track rope. The number of GMR elements and their locations within saddle-type magnetizing coils are optimized using a three dimensional finite element model. The performance of the sensor array has been evaluated by measuring the axial component of leakage flux from localized flaw (LF) and loss of metallic cross-sectional area (LMA) type defects introduced on the track rope. Studies reveal that the GMR sensor array can reliably detect both LF and LMA type defects in the track rope. The sensor array has a fast detection speed along the length of the track rope and does not require circumferential scanning. It is also possible to image defects using the array sensor for obtaining their spatial information.
---
paper_title: Pulsed Eddy-Current Based Giant Magnetoresistive System for the Inspection of Aircraft Structures
paper_content:
Research in nondestructive evaluation is constantly increasing the sensitivity of detection of small cracks embedded deep in layered aircraft structures. Pulsed eddy-current (PEC) techniques using coil probes have shown considerable promise in detection and characterization of buried cracks in multilayered structures. In this paper, we describe the design and development of a nondestructive inspection system that uses pulse excitation of a planar multiline coil to generate a transient field that is detected via a giant magnetoresistive (GMR) field sensor. An analysis algorithm using features in time and frequency domain processes the experimentally measured signals for automatic detection of small cracks under fasteners in multilayered structures at a depth of up to 10 mm.
---
paper_title: EC-GMR Data Analysis for Inspection of Multilayer Airframe Structures
paper_content:
Eddy-current testing (ECT) is widely used in inspection of multilayer aircraft skin structures for the detection of cracks under fasteners (CUF). Detection of deep hidden CUF poses a major challenge in traditional ECT techniques largely because the weak eddy-current signal due to a subsurface crack is dominated by the strong signal from the aluminum or steel fastener. Giant magnetoresistive (GMR) sensors are finding increasing applications in directly measuring weak magnetic fields associated with induced eddy currents. The measured flux image at a fastener site is in general symmetric and an asymmetry is introduced by the presence of a subsurface crack, which is used for defect detection. This paper presents novel methods that employ the resident phase information, for improving detection probability of GMR signal analysis. Using computational model, the effectiveness of the proposed methods for enhancing detection of CUF is investigated. Results demonstrating the potential of these techniques for detection of second layer CUF are presented.
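The symmetry argument above can be illustrated with a very crude asymmetry index computed on a synthetic flux image; this simple left-right comparison is an editorial stand-in and not the phase-based methods proposed in the paper.

import numpy as np

def asymmetry_index(flux_image):
    """Normalized difference between the image and its left-right mirror.
    A value near zero suggests an unflawed fastener site."""
    mirrored = flux_image[:, ::-1]
    return np.abs(flux_image - mirrored).sum() / np.abs(flux_image).sum()

# Hypothetical flux images on a 65x65 grid centred on a fastener.
y, x = np.mgrid[-32:33, -32:33]
fastener = np.exp(-(x ** 2 + y ** 2) / 200.0)             # symmetric response
crack = 0.15 * np.exp(-((x - 10) ** 2 + y ** 2) / 40.0)    # one-sided perturbation
print("no crack  :", round(asymmetry_index(fastener), 3))
print("with crack:", round(asymmetry_index(fastener + crack), 3))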
---
|
Title: Giant Magnetoresistance Sensors: A Review on Structures and Non-Destructive Eddy Current Testing Applications
Section 1: Introduction
Description 1: Introduce the importance of non-destructive testing (NDT) in industries, particularly in the petroleum and gas industry, and highlight the need for efficient defect inspection methods like eddy current testing (ECT).
Section 2: Overview of Giant Magnetoresistance Sensors
Description 2: Discuss the discovery, research, and development of giant magnetoresistance (GMR) sensors, including their advantages and applications in non-destructive testing.
Section 3: Method
Description 3: Explain the foundational principles and studies related to GMR sensors, including structural layers and temperature effects, and introduce different types of GMR sensors.
Section 4: Giant Magnetoresistance Spin Valve Sensor
Description 4: Describe the structure and principles of spin valve GMR sensors, including research findings and different types of spin valves used in various applications.
Section 5: Giant Magnetoresistance Multilayer Sensor
Description 5: Examine the structure and function of multilayer GMR sensors and discuss studies on their behavior, advantages, and specific designs like the Wheatstone bridge GMR sensor.
Section 6: Types of Non-Destructive Eddy Current Testing Probe
Description 6: Detail the various types of ECT probes, including impedance variation probes and excitation-detection probes, and their specific functionalities.
Section 7: Bobbin Probe
Description 7: Discuss the features, applications, and differences between absolute and differential bobbin probes used in tube inspection.
Section 8: Full Saturation Probe
Description 8: Explain the use, advantages, and challenges of full saturation probes in inspecting ferromagnetic or magnetic stainless steel tubes.
Section 9: Rotating Bobbin Probe
Description 9: Describe the functionality and advantages of rotating bobbin probes in detecting circumferential cracks and defects in tube inspection.
Section 10: Array Probe
Description 10: Discuss the structure, design, and application of array probes, including different models like C-Probe, X-Probe, and Smart Array Probe.
Section 11: C-Probe
Description 11: Provide specific details about the design and evolution of the C-Probe and its applications.
Section 12: X-Probe
Description 12: Explain the development, structure, and combined functionalities of the X-Probe in non-destructive testing.
Section 13: Smart Array Probe
Description 13: Detail the improved characteristics of the Smart Array Probe compared to the X-Probe and its simplified control circuits.
Section 14: Intelligent Probe
Description 14: Discuss the design and capabilities of the Intelligent Probe, including field trials and unique coil designs.
Section 15: Rotational Magnetic Flux Sensor
Description 15: Explain the principles, design, and applications of the rotational magnetic flux sensor for flat plate and tube inspections.
Section 16: Rotating Magnetic Field Probe
Description 16: Describe the development and testing of rotating magnetic field probes and their application in small diameter, non-magnetic tubing inspections.
Section 17: The Influence of Various Parameters on the GMR Measurement
Description 17: Identify and discuss key factors that affect GMR measurement, including surface quality, layer thickness, and temperature.
Section 18: Structural Quality of Giant Magnetoresistance Sensor
Description 18: Examine how the surface quality of GMR structures influences magnetic resistance properties and the impact of different fabrication processes.
Section 19: Thickness Structure Layers of Giant Magnetoresistance Sensor
Description 19: Discuss the effects of spacer thickness on the magnetic resistance of GMR structures and optimization techniques for high magnetic resistance properties.
Section 20: Temperature
Description 20: Detail the relationship between temperature and magnetic resistance in GMR structures and methods to enhance thermal stability.
Section 21: Factors Affecting the Eddy Current Testing Inspection
Description 21: Identify and examine factors that influence eddy current inspection results, such as coil frequency, magnetic permeability, and lift-off.
Section 22: Exciting Coil Frequency and Skin Depth Effect
Description 22: Explain the importance of coil frequency in determining defect depth and the effect of skin depth on eddy current density in materials.
Section 23: Material Magnetic Permeability
Description 23: Discuss how material permeability impacts eddy current penetration and defect detection capabilities.
Section 24: Lift-off
Description 24: Examine how lift-off affects probe sensitivity and techniques to mitigate its impact on ECT signal readings.
Section 25: Conductivity of Material
Description 25: Describe how changes in material conductivity influence the magnetic field and sensor output in eddy current probes.
Section 26: Limitations of Coil Sensor in Eddy Current Probe
Description 26: Address the limitations of traditional coil sensors in ECT, including poor sensitivity at low frequencies and the development of hybrid ECT probes.
Section 27: Compensation Techniques in Eddy Current Testing Probes
Description 27: Detail various techniques to compensate for factors like lift-off, temperature, and edge effects in ECT probes to improve inspection accuracy.
Section 28: Application of GMR Sensors in Hybrid Eddy Current Testing Probes
Description 28: Discuss the implementation and benefits of using GMR sensors in hybrid ECT probes for improved defect detection and measurement accuracy.
Section 29: Conclusions
Description 29: Summarize the importance and advancements of GMR sensors in ECT, highlighting their impact on inspection accuracy and defect detection capabilities.
|
Survey-based Comparison of Chord Overlay Networks
| 13 |
---
paper_title: One ring to rule them all: service discovery and binding in structured peer-to-peer overlay networks
paper_content:
Self-organizing, structured peer-to-peer (p2p) overlay networks like CAN, Chord, Pastry and Tapestry offer a novel platform for a variety of scalable and decentralized distributed applications. These systems provide efficient and fault-tolerant routing, object location, and load balancing within a self-organizing overlay network.One major problem with these systems is how to bootstrap them. How do you decide which overlay to join? How do you find a contact node in the overlay to join? How do you obtain the code that you should run? Current systems require that each node that participates in a given overlay supports the same set of applications, and that these applications are pre-installed on each node.In this position paper, we sketch the design of an infrastructure that uses a universal overlay to provide a scalable infrastructure to bootstrap multiple service overlays providing different functionality. It provides mechanisms to advertise services and to discover services, contact nodes, and service code.
---
paper_title: Chord: a scalable peer-to-peer lookup protocol for internet applications
paper_content:
A fundamental problem that confronts peer-to-peer applications is the efficient location of the node that stores a desired data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis and simulations show that Chord is scalable: Communication cost and the state maintained by each node scale logarithmically with the number of Chord nodes.
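The core mapping described above, hashing both node identifiers and keys onto the same circular identifier space and assigning each key to its successor node, can be sketched in a few lines of Python. This illustration omits finger tables, joins and failure handling, and the node addresses and key name are hypothetical.

import hashlib
from bisect import bisect_left

M = 16                                   # identifier bits, ring size 2**M

def chord_id(name):
    """Map a node address or a key onto the circular identifier space."""
    digest = hashlib.sha1(name.encode()).digest()
    return int.from_bytes(digest, "big") % (2 ** M)

def successor(node_ids, key_id):
    """First node clockwise from key_id on the ring (with wrap-around)."""
    ring = sorted(node_ids)
    i = bisect_left(ring, key_id)
    return ring[i % len(ring)]

nodes = ["10.0.0.1:4000", "10.0.0.2:4000", "10.0.0.3:4000"]
node_ids = [chord_id(n) for n in nodes]
key = "movie-42"
owner = successor(node_ids, chord_id(key))
print("key %r (id %d) is stored on node id %d" % (key, chord_id(key), owner))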
---
paper_title: PChord: Improvement on Chord to Achieve Better Routing Efficiency by Exploiting Proximity
paper_content:
Routing efficiency is a critical issue when constructing a peer-to-peer overlay. However, Chord has often been criticized for neglecting routing locality. A routing efficiency enhancement protocol built on top of Chord, called PChord, is presented in this paper. PChord aims to achieve better routing efficiency than Chord by exploiting the proximity of the underlying network topology. Simulation shows that PChord achieves a lower RDP per message routed.
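A rough sketch of the proximity idea is given below: among the routing-table candidates that still move the query clockwise toward the key, the node with the lowest measured RTT is preferred. This is an editorial simplification of PChord (which maintains an explicit proximity list), and all identifiers and RTT values are invented.

def in_interval(x, start, end, ring_size):
    """True if x lies in the half-open arc (start, end] on the ring."""
    return (x - start) % ring_size <= (end - start) % ring_size and x != start

def next_hop(current_id, key_id, candidates, ring_size):
    """Pick the next hop among (node_id, rtt_ms) candidates: only nodes that
    move the query clockwise toward the key are eligible, and among those the
    lowest-RTT node is preferred (a crude form of proximity routing)."""
    eligible = [(nid, rtt) for nid, rtt in candidates
                if in_interval(nid, current_id, key_id, ring_size)]
    if not eligible:
        return None                      # current node already owns the key
    return min(eligible, key=lambda c: c[1])[0]

# Hypothetical routing state: finger/proximity entries as (node_id, RTT in ms).
candidates = [(120, 80.0), (200, 12.0), (450, 35.0)]
print(next_hop(current_id=100, key_id=300, candidates=candidates, ring_size=1024))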
---
paper_title: MR-Chord: A scheme for enhancing Chord lookup accuracy and performance in mobile P2P network
paper_content:
In recent years, Peer-to-Peer (P2P) sharing networks have become very popular on the Internet. However, most P2P protocols are designed for traditional wired networks, and many challenges are encountered when they are deployed in wireless network environments. For instance, nodes in an unstable wireless network tend to leave or rejoin the P2P network frequently. In this case, the routing information held by every node becomes outdated, which may lead to lookup failures when nodes rely on this outdated routing information. In this paper, we propose a modified Chord protocol called MobileRobust-Chord (MR-Chord). MR-Chord is designed with the aim of keeping the finger table fresh. To achieve this goal, we have modified the Distributed Hash Table (DHT)-based Chord protocol in such a way that the finger table is kept updated to provide the necessary lookup services in the P2P network. Simulation studies show that our proposed MR-Chord protocol outperforms the original Chord protocol in the following aspects: (1) increased lookup success rate and overlay consistency, and (2) reduced lookup delay.
---
|
Title: Survey-based Comparison of Chord Overlay Networks
Section 1: INTRODUCTION
Description 1: Provide an overview of Chord structured P2P overlay networks and their importance, including the role of distributed hash tables (DHTs) and key features like load balance, decentralization, scalability, and availability.
Section 2: The Base Chord Protocol
Description 2: Detail the foundational mechanisms of the Chord protocol, including key lookup, node join operations, and recovery from node failures or departures.
Section 3: Consistent Hashing
Description 3: Explain the consistent hashing technique used by Chord to map keys to nodes, including the use of SHA-1 as a base hash function and the concept of the identifier circle.
Section 4: Scalable Key Location
Description 4: Describe how Chord efficiently locates keys in the network, focusing on the finger table and its role in reducing lookup path lengths.
Section 5: Node Joins
Description 5: Discuss the process and importance of updating successor and predecessor pointers when a new node joins the Chord system, highlighting the stabilization protocol.
Section 6: Node Failure
Description 6: Explain Chord’s approach to handling node failures, including maintaining a successor list and replicating keys to ensure robustness.
Section 7: Related Work
Description 7: Review various studies that have proposed enhancements to the original Chord protocol to improve different aspects such as routing efficiency, security, fault tolerance, lookup accuracy, and performance in mobile P2P networks.
Section 8: Improvement on Chord to Achieve Better Routing Efficiency
Description 8: Outline enhancements for improving Chord’s routing efficiency, such as the PChord protocol, which incorporates proximity routing to reduce Relative Delay Penalty (RDP).
Section 9: Improvement on Chord to Achieve Better Routing Security
Description 9: Describe security enhancements to Chord, particularly the Sechord extension, designed to manage and mitigate routing threats and misrouting attacks.
Section 10: Improvement on Chord to Achieve Better Fault Tolerance
Description 10: Provide details on mechanisms added to Chord to enhance its fault tolerance, such as redundancy and aggressive repair algorithms for handling node failures and broken links.
Section 11: Improvement on Chord to Achieve Better Lookup Accuracy and Performance in Mobile P2P Network
Description 11: Analyze improvements to the Chord protocol aimed at increasing lookup accuracy and performance in mobile P2P networks, emphasizing real-time updates and self-organizing behavior.
Section 12: Building Power Grid Applications using Chord
Description 12: Discuss the application of Chord in power grid management, detailing how its decentralized nature and self-organizing capabilities make it suitable for distributed control and optimization.
Section 13: Improvement on Chord to Achieve Better Load Balancing
Description 13: Examine strategies for enhancing load balancing in Chord, including the WSDBC model and algorithms for node join-in and self-balancing to distribute load evenly across nodes.
|
A Review of Non-destructive Detection for Fruit Quality
| 9 |
---
paper_title: State and New Technology on Storage of Fruits and Vegetables in China
paper_content:
This paper investigated the status of fruit and vegetable production, circulation, and storage in China in recent years. The authors point out that although China's output of fruits and vegetables is the largest in the world, storage technology must be developed further to reduce post-harvest losses. The way forward is not to pursue expensive, wasteful high-end technology but to spread the application of low-cost technologies with proven effectiveness. The paper presents the authors' research results and application prospects for storage technologies based on natural cold resources, high-voltage electric fields, and electrolyzed functional water.
---
paper_title: Application of Near Infrared Spectra Technique for Non-destructive Measurement on Fruit Internal Qualities
paper_content:
The basic principles and methods of near-infrared (NIR) spectroscopy were introduced. Research on applying the technique to measuring fruit components, detecting fruit diseases and defects, and building equipment for measuring the internal quality of fruit was summarized. Remaining problems in measuring internal fruit quality with the technique were identified, and directions for future research were discussed.
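The abstract stays at the survey level; in practice, NIR instruments for fruit quality are usually calibrated by regressing spectra against laboratory reference values. The sketch below shows such a calibration workflow with partial least squares in scikit-learn; the array shapes, the random stand-in data, and the choice of 8 latent variables are assumptions for illustration, not values from the paper.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import r2_score, mean_squared_error

# X: NIR absorbance spectra, one row per fruit sample (random stand-in data here);
# y: reference measurement, e.g. soluble solids content from a refractometer.
rng = np.random.default_rng(0)
X = rng.normal(size=(120, 256))      # 120 samples x 256 wavelengths (assumed)
y = rng.normal(loc=12.0, scale=1.5, size=120)

pls = PLSRegression(n_components=8)  # number of latent variables is a tuning choice
y_cv = cross_val_predict(pls, X, y, cv=10).ravel()

print("R^2 (CV):", round(r2_score(y, y_cv), 3))
print("RMSECV  :", round(mean_squared_error(y, y_cv) ** 0.5, 3))
```

With real spectra in X and refractometer readings in y, the cross-validated R^2 and RMSECV printed here are the figures of merit typically reported for internal-quality models of this kind.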
---
paper_title: Study of Mechanisms of Mechanical Damage and Transport Packaging in Fruits Transportation
paper_content:
Fruits are susceptible to static pressure, compression, vibration, and impact during transport. These loads cause immediate damage, mainly through plastic deformation, and delayed (hysteresis) damage caused by viscoelastic deformation. Mechanical damage is the major cause of post-harvest losses and reduced storability of fruit. Studies on the mechanisms of mechanical damage and on transport packaging techniques were reviewed, covering the rheological properties of fruits, the mechanisms and variation of mechanical damage, simulation testing techniques, and transport packaging.
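The abstract attributes hysteresis damage to viscoelastic deformation; a standard way to picture this (an illustrative model choice, not one taken from the paper) is a Kelvin-Voigt element, a spring and a dashpot in parallel. The short simulation below integrates its creep response under a constant stacking load, with placeholder parameter values.

```python
import numpy as np

# Kelvin-Voigt element: sigma = E*eps + eta*d(eps)/dt
# Parameter values below are placeholders, not measured fruit properties.
E, eta = 2.0e5, 1.0e6        # elastic modulus [Pa], viscosity [Pa*s]
sigma0 = 1.0e4               # constant applied stress [Pa], e.g. stacking pressure
dt, t_end = 0.1, 60.0        # time step and duration [s]

t = np.arange(0.0, t_end, dt)
eps = np.zeros_like(t)
for i in range(1, len(t)):
    deps = (sigma0 - E * eps[i - 1]) / eta   # strain rate from the constitutive law
    eps[i] = eps[i - 1] + dt * deps          # explicit Euler step

print("creep strain after 60 s :", round(eps[-1], 4))
print("analytic limit sigma0/E :", round(sigma0 / E, 4))
```

The strain creeps gradually toward sigma0/E instead of appearing instantly, and it recovers just as gradually after unloading; this delayed response is the hysteresis behaviour the abstract refers to.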
---
|
Title: A Review of Non-destructive Detection for Fruit Quality
Section 1: INTRODUCTION
Description 1: This section introduces the importance of non-destructive detection methods in fruit quality assessment, and sets the context for their necessity and application.
Section 2: DETECTION OF FRUIT QUALITY USING OPTICAL PROPERTIES
Description 2: This section discusses the use of optical properties for non-destructive detection of fruit quality, including specific methodologies and their effectiveness.
Section 3: DETECTION OF FRUIT QUALITY USING SONIC VIBRATION
Description 3: This section covers the principles and applications of using sonic and ultrasonic vibrations to assess fruit quality, particularly for detecting internal damages.
Section 4: DETECTION OF FRUIT QUALITY USING MACHINE VISION TECHNIQUE
Description 4: This section explores the application of machine vision techniques in the non-destructive evaluation and grading of fruit quality.
Section 5: DETECTION OF FRUIT QUALITY USING NUCLEAR MAGNETIC RESONANCE (NMR)
Description 5: This section explains how NMR is used to detect various quality parameters of fruits, based on the concentration and mobility of hydrogen nuclei.
Section 6: DETECTION OF FRUIT QUALITY USING ELECTRICAL PROPERTIES
Description 6: This section describes the use of electrical properties, such as impedance and dielectric constants, to determine fruit quality.
Section 7: DETECTION OF FRUIT QUALITY USING COMPUTED TOMOGRAPHY
Description 7: This section details the use of computed tomography (CT) to evaluate internal and external features of fruits non-destructively.
Section 8: DETECTION OF FRUIT QUALITY USING ELECTRONIC NOSES
Description 8: This section discusses the development and use of electronic noses to detect and classify odors for assessing fruit quality.
Section 9: CONCLUSION
Description 9: This section summarizes the advantages and limitations of various non-destructive detection methods for assessing fruit quality and emphasizes their potential for future application and research.
|