A Survey of Decision-Theoretic Models for Cognitive Internet of Things (CIoT) <s> 4) MULTI-USER MULTI-ARMED BANDIT PROBLEM <s> We consider dynamic spectrum access where distributed secondary users search for spectrum opportunities without knowing the primary traffic statistics. In each slot, a secondary transmitter chooses one channel to sense and subsequently transmit if the channel is sensed as idle. Sensing is imperfect, i.e., an idle channel may be sensed as busy and vice versa. Without centralized control, each secondary user needs to independently identify the channels that offer the most opportunities while avoiding collisions with both primary and other secondary users. We address the problem within a cooperative game framework, where the objective is to maximize the throughput of the secondary network under a constraint on the collision with the primary system. The performance of a decentralized channel access policy is measured by the system regret, defined as the expected total performance loss with respect to the optimal performance in the ideal scenario where the traffic load of the primary system on each channel is known to all secondary users and collisions among secondary users are eliminated through centralized scheduling. By exploring the rich communication structure of the problem, we show that the optimal system regret has the same logarithmic order as in the centralized counterpart with perfect sensing. A decentralized policy is constructed to achieve the logarithmic order of the system regret. In a broader context, this work addresses imperfect reward observation in decentralized multi-armed bandit problems. <s> BIB001 </s> A Survey of Decision-Theoretic Models for Cognitive Internet of Things (CIoT) <s> 4) MULTI-USER MULTI-ARMED BANDIT PROBLEM <s> We investigate efficient channel learning and opportunity utilization problem in cognitive radio networks (CRN). 
We find that the sensing order of multiple channels and channel accessing policy play a critical role in designing effective and efficient scheme to maximize the throughput. Leveraging this important finding, we propose a near optimal online channel access policy. We prove that, our policy can converge to an optimal point in a guaranteed probability. Further, we design a computational efficient channel access policy, integrating optimal stopping theory and multi-armed bandit policy effectively. The computational complexity is reduced from O(KN^K) to O(K), where N is the number of channels, and K is the maximum number of sensing/probing times in each procedure. Our simulation results validate our policy, showing at least 40% performance improvement over statistically optimal but fixed policy. <s> BIB002
This technique suits single-user CRN systems but not the CIoT system, which involves multiple users (objects) and therefore a multi-user multi-armed bandit problem. It can be deployed in our CIoT framework by combining it with evolutionary game theory models. The multi-user bandit is a very useful technique because it enables cognition, the heart of the system, through learning without statistical information about other objects or the environment. Existing CIoT solutions like BIB001 , BIB002 consider the multi-user scenario, but these solutions are application-specific and hence cannot be applied effectively to the more generic CIoT scenarios discussed in Section IIId. We therefore suggest combining a game theory model with the multi-armed bandit problem to achieve a more general and efficient solution for learning. The literature provides an example in BIB001 , but this research represents initial work that must be improved for practical implementation in CIoT. Its authors present a method to efficiently achieve orthogonality among the users; moreover, they show that the optimal system regret has a logarithmic order in time and converges to the maximum throughput of the ideal case with a known channel model and centralized users. Similar work in BIB002 achieves orthogonality among multiple users over time. Both papers provide excellent methods and policies for learning about the players in a stationary environment. These approaches can be applied to the CIoT environment by combining them with game theory, which enables the multi-armed bandit formulation to find good learning policies in a nonstationary environment. More research on this hybrid technique is needed for the practical implementation of CIoT in the near future.
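To make the bandit mechanics concrete, the following is a minimal single-user sketch (not the decentralized policy of BIB001) in which a secondary user learns the most frequently idle channel using a UCB1 index; the idle probabilities, horizon, and reward model are hypothetical.

```python
import math
import random

def ucb1_channel_selection(idle_probs, horizon, seed=0):
    """UCB1 sketch: learn which channel is most often idle
    without knowing the primary traffic statistics in advance."""
    rng = random.Random(seed)
    n = len(idle_probs)
    counts = [0] * n          # times each channel was sensed
    rewards = [0.0] * n       # total observed idle indicators
    total_reward = 0.0
    for t in range(1, horizon + 1):
        if t <= n:            # sense every channel once first
            arm = t - 1
        else:                 # pick the highest upper-confidence index
            arm = max(range(n), key=lambda i: rewards[i] / counts[i]
                      + math.sqrt(2 * math.log(t) / counts[i]))
        idle = 1.0 if rng.random() < idle_probs[arm] else 0.0
        counts[arm] += 1
        rewards[arm] += idle
        total_reward += idle
    return counts, total_reward

counts, reward = ucb1_channel_selection([0.2, 0.5, 0.9], horizon=5000)
# The channel that is idle 90% of the time should dominate the sensing counts.
print(counts.index(max(counts)))  # → 2
```

The exploration bonus shrinks as a channel is sensed more often, which is what yields the logarithmic regret order discussed above.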
A Survey of Decision-Theoretic Models for Cognitive Internet of Things (CIoT) <s> 5) OPTIMAL STOPPING PROBLEM IN MARKOVIAN ENVIRONMENT <s> We investigate the problem of achieving global optimization for distributed channel selections in cognitive radio networks (CRNs), using game theoretic solutions. To cope with the lack of centralized control and local influences, we propose two special cases of local interaction game to study this problem. The first is local altruistic game, in which each user considers the payoffs of itself as well as its neighbors rather than considering itself only. The second is local congestion game, in which each user minimizes the number of competing neighbors. It is shown that with the proposed games, global optimization is achieved with local information. Specifically, the local altruistic game maximizes the network throughput and the local congestion game minimizes the network collision level. Also, the concurrent spatial adaptive play (C-SAP), which is an extension of the existing spatial adaptive play (SAP), is proposed to achieve the global optimum both autonomously as well as rapidly. <s> BIB001 </s> A Survey of Decision-Theoretic Models for Cognitive Internet of Things (CIoT) <s> 5) OPTIMAL STOPPING PROBLEM IN MARKOVIAN ENVIRONMENT <s> Opportunistic spectrum access (OSA) has been regarded as the most promising approach to solve the paradox between spectrum scarcity and waste. Intelligent decision making is key to OSA and differentiates it from previous wireless technologies. In this article, a survey of decision-theoretic solutions for channel selection and access strategies for OSA system is presented. We analyze the challenges facing OSA systems globally, which mainly include interactions among multiple users, dynamic spectrum opportunity, tradeoff between sequential sensing cost and expected reward, and tradeoff between exploitation and exploration in the absence of prior statistical information. 
We provide comprehensive review and comparison of each kind of existing decision-theoretic solution, i.e., game models, Markovian decision process, optimal stopping problem and multi-armed bandit problem. We analyze their strengths and limitations and outline further research for both technical contents and methodologies. In particular, these solutions are critically analyzed in terms of information, cost and convergence speed, which are key concerns for practical implementation. Moreover, it is noted that each kind of existing decision-theoretic solution mainly addresses one aspect of the challenges, which implies that two or more kinds of decision-theoretic solutions should be incorporated to address more challenges simultaneously. <s> BIB002
The previous section presented the optimal stopping problem in detail, where the random variables are independently distributed, i.e., they are spread randomly and have no dependence on each other. The same holds for the IoT environment, in which the objects are scattered independently and multiple decision makers are distributed across the network. In the multi-user scenario, this technique incurs considerable cost and complexity overheads when maximizing the reward BIB002 , BIB001 , . In BIB001 , Xu et al. proposed an intelligent learning algorithm, known as stochastic learning automata, that converges to the Nash equilibrium of the game. Moreover, to achieve global optimization via local information exchange, they proposed a bio-inspired scheme based on a localized selfless game in which each player maximizes its own utility together with the utilities of its neighbors. In , the authors used the recall OSP (R-OSP) model, in which previously observed variables may be reused so that the decision is supported by previously observed states. They employed the 1-SLA rule, which continuously senses the stages and information of the players; this rule provides an optimal solution for monotone R-OSP models, as discussed in Section IIIc. This can be applied to CIoT by formulating it as a Markovian decision process to minimize the cost and maximize the reward for the decision maker. Combining this strategy with a priori environmental information maximizes the expected reward, which depends on a piecewise-deterministic process that gives the posterior likelihoods of the unobserved Markovian environment. This combination also creates new fundamental challenges, such as the sensing priorities and sensing sequence, which are important requirements in the CIoT domain. More specifically, any CIoT implementation should make the sensing priorities in each epoch adaptive and optimized based on observations of the Markovian environment. Ideally, a practical hybrid of the optimal stopping problem with a Markovian game model can be solved by providing an adequate solution for cost minimization across the objects.
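The 1-SLA rule mentioned above can be sketched for a simple i.i.d. sensing problem: stop as soon as the expected one-step improvement from one more (costly) observation no longer exceeds the sensing cost. The reward distribution and cost below are hypothetical, and the R-OSP models in the cited work are richer than this sketch.

```python
# One-stage look-ahead (1-SLA) rule for a simple sequential sensing
# problem: observe i.i.d. rewards, pay cost c per extra observation,
# stop when the expected one-step gain no longer covers the cost.
def one_step_improvement(current, outcomes, probs):
    """E[(X - current)^+]: expected gain from sensing one more time."""
    return sum(p * max(x - current, 0.0) for x, p in zip(outcomes, probs))

def sla_should_stop(current, outcomes, probs, cost):
    # 1-SLA: stop iff stopping now beats "observe once more, then stop".
    return one_step_improvement(current, outcomes, probs) <= cost

# Hypothetical channel-reward distribution: throughputs 0, 5, 10 (Mb/s)
outcomes, probs, cost = [0.0, 5.0, 10.0], [0.2, 0.5, 0.3], 1.0
print(sla_should_stop(4.0, outcomes, probs, cost))  # → False (keep sensing)
print(sla_should_stop(9.0, outcomes, probs, cost))  # → True  (stop)
```

For monotone problems such as this one (the improvement term is non-increasing in the current reward), the one-step rule is optimal, which is exactly the property exploited by monotone R-OSP models.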
SURVEY ON SOFTWARE REMODULARIZATION TECHNIQUES <s> INTRODUCTION <s> Large software systems tend to have a rich and complex structure. Designers typically depict the structure of software systems as one or more directed graphs. For example, a directed graph can be used to describe the modules (or classes) of a system and their static interrelationships using nodes and directed edges, respectively. We call such graphs "module dependency graphs" (MDGs). MDGs can be large and complex graphs. One way of making them more accessible is to partition them, separating their nodes (i.e. modules) into clusters (i.e. subsystems). In this paper, we describe a technique for finding "good" MDG partitions. Good partitions feature relatively independent subsystems that contain modules which are highly interdependent. Our technique treats finding a good partition as an optimization problem, and uses a genetic algorithm (GA) to search the extraordinarily large solution space of all possible MDG partitions. The effectiveness of our technique is demonstrated by applying it to a medium-sized software system. <s> BIB001 </s> SURVEY ON SOFTWARE REMODULARIZATION TECHNIQUES <s> INTRODUCTION <s> Abstract Clustering techniques have shown promising results for the architecture recovery and re-modularization of legacy software systems. Clusters that are obtained as a result of the clustering process may not be easy to interpret until they are assigned appropriate labels. Automatic labeling of clusters reduces the time required to understand them and can also be used to evaluate the effectiveness of the clustering process, if the assigned labels are meaningful and convey the purpose of each cluster effectively. In this paper, we present a labeling scheme based on identifiers of an entity. As the clustering process proceeds, keywords within identifiers are ranked using two ranking schemes: frequency and inverse frequency. 
We present experimental results to demonstrate the effectiveness of our labeling approach. A comparison between the ranking schemes reveals the inverse frequency scheme to form more meaningful labels, especially for small clusters. A comparison of clustering results of the complete and weighted combined algorithms based on labels of the clusters produced by them during clustering shows that the latter produces a more understandable cluster hierarchy with easily identifiable software sub-systems. <s> BIB002 </s> SURVEY ON SOFTWARE REMODULARIZATION TECHNIQUES <s> INTRODUCTION <s> Gaining an architectural level understanding of a software system is important for many reasons. When the description of a system's architecture does not exist, attempts must be made to recover it. In recent years, researchers have explored the use of clustering for recovering a software system's architecture, given only its source code. The main contributions of this paper are given as follows. First, we review hierarchical clustering research in the context of software architecture recovery and modularization. Second, to employ clustering meaningfully, it is necessary to understand the peculiarities of the software domain, as well as the behavior of clustering measures and algorithms in this domain. To this end, we provide a detailed analysis of the behavior of various similarity and distance measures that may be employed for software clustering. Third, we analyze the clustering process of various well-known clustering algorithms by using multiple criteria, and we show how arbitrary decisions taken by these algorithms during clustering affect the quality of their results. Finally, we present an analysis of two recently proposed clustering algorithms, revealing close similarities in their apparently different clustering approaches. 
Experiments on four legacy software systems provide insight into the behavior of well-known clustering algorithms and their characteristics in the software domain. <s> BIB003 </s> SURVEY ON SOFTWARE REMODULARIZATION TECHNIQUES <s> INTRODUCTION <s> The order in which requirements are implemented in a system affects the value delivered to the final users in the successive releases of the system. Requirements prioritization aims at ranking the requirements so as to trade off user priorities and implementation constraints, such as technical dependencies among requirements and necessarily limited resources allocated to the project. Requirement analysts possess relevant knowledge about the relative importance of requirements. We use an Interactive Genetic Algorithm to produce a requirement ordering which complies with the existing priorities, satisfies the technical constraints and takes into account the relative preferences elicited from the user. On a real case study, we show that this approach improves non interactive optimization, ignoring the elicited preferences, and that it can handle a number of requirements which is otherwise problematic for state of the art techniques. <s> BIB004 </s> SURVEY ON SOFTWARE REMODULARIZATION TECHNIQUES <s> INTRODUCTION <s> The structure of a software system has a major impact on its maintainability. To improve maintainability, software systems are usually organized into subsystems using the constructs of packages or modules. However, during software evolution the structure of the system undergoes continuous modifications, drifting away from its original design, often reducing its quality. In this paper we propose an approach for helping maintainers to improve the quality of software modularization. The proposed approach analyzes the (structural and semantic) relationships between classes in a package identifying chains of strongly related classes. 
The identified chains are used to define new packages with higher cohesion than the original package. The proposed approach has been empirical evaluated through a case study. The context of the study is represented by an open source system, JHotDraw, and two software systems developed by teams of students at the University of Salerno. The analysis of the results reveals that the proposed approach generates meaningful re-modularization of the studied systems, which can lead to higher quality. <s> BIB005
Software systems in general consist of modules and methods that interact with each other in order to accomplish the purpose for which those systems were developed. Inevitably, these systems are exposed to modifications, whether to detect and correct errors or to improve efficiency by introducing additional features based on future requirements. Restructuring the system to accommodate such changes is termed the re-modularization process. As noted in , the modifications made to the developed software may, however, reduce the cohesiveness of the modules and increase the coupling between them, making the resultant software system harder to maintain and possibly more fault-prone. To overcome this situation, i.e., to improve cohesion within each module and to reduce coupling between modules, various re-modularization techniques are used. Hierarchical clustering based techniques BIB003 , automated clustering approaches BIB002 and clustering using a Genetic Algorithm (GA) BIB001 BIB004 are some of the ways in which re-modularization of software systems can be performed. The resulting packages or modules of the software system will possess high-cohesion and low-coupling characteristics. Apart from these techniques, we also discuss some other techniques like re-modularization based on structural and semantic metrics BIB005 , clustering based on frequent common changes and supervised software re-modularization .
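As a rough illustration of how such techniques judge a candidate re-modularization, the sketch below scores a partition of a module dependency graph with a simplified TurboMQ-style measure, rewarding intra-cluster edges (cohesion) and penalizing inter-cluster edges (coupling); the graph and partitions are hypothetical.

```python
# Sketch: score a candidate re-modularization of a module dependency
# graph (MDG). Higher scores mean more cohesive, less coupled clusters.
# Simplified TurboMQ-style measure; modules and edges are hypothetical.
def turbo_mq(edges, partition):
    cluster_of = {m: c for c, mods in enumerate(partition) for m in mods}
    intra = [0] * len(partition)   # edges inside each cluster
    inter = [0] * len(partition)   # edges crossing cluster borders
    for a, b in edges:
        ca, cb = cluster_of[a], cluster_of[b]
        if ca == cb:
            intra[ca] += 1
        else:
            inter[ca] += 1
            inter[cb] += 1
    score = 0.0
    for i in range(len(partition)):
        if intra[i] or inter[i]:   # cluster factor in [0, 1]
            score += 2 * intra[i] / (2 * intra[i] + inter[i])
    return score

edges = [("a", "b"), ("b", "c"), ("a", "c"),   # tightly coupled triangle
         ("d", "e"),                           # small related pair
         ("c", "d")]                           # single cross dependency
good = turbo_mq(edges, [["a", "b", "c"], ["d", "e"]])
bad = turbo_mq(edges, [["a", "d"], ["b", "c", "e"]])
print(good > bad)  # → True
```

A search-based technique such as the GA of BIB001 explores the space of partitions while using a fitness function of this general shape.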
A Review Paper on Microprocessor Based Controller Programming <s> CONTROLLER PROGRAMMING <s> Asynchronous design methods are known to have higher performance in power consumption and execution speed than synchronous ones because they just needs to activate the required module without feeding clock and power to the entire system. In this paper, we propose an asynchronous processor, A8051, compatible with the Intel 8051, which is a challenge for a pipelined asynchronous design for a CISC type microcontroller. The A8051 has special features such as an optimal instruction execution scheme that eliminates the bubble state, variable instruction length handling and multi-looping pipeline architectures for a CISC machine. The A8051 is composed of 5 pipeline stages based on the CISC architecture. It is implemented with RTL level languages and a verified behavioral model is synthesized with a 0.35 /spl mu/m CMOS standard cell library. The results show that the A8051 exhibits about 18 times higher speed than that of the Intel 80C51 and about 5 times higher than another asynchronous 8051 design in (H. van Gageldonk et al. Proc. Int. Symp. on Advanced Research in Asynchronous Circuits and Systems, p.96-107, 1998). <s> BIB001 </s> A Review Paper on Microprocessor Based Controller Programming <s> CONTROLLER PROGRAMMING <s> Microcontrollers are widely used on simple systems; thus, how to keep them operating with high robustness and low power consumption are the two most important issues. It is widely known that asynchronous circuit is the best solution to address these two issues at the same time. However, it’s not very easy to realize asynchronous circuit and certainly very hard to model processors with asynchronous pipeline. That's why most processors are implemented with synchronous circuit. There are several ways to model asynchronous pipeline. 
The most famous of all is the micropipeline; in addition, most micropipeline based asynchronous systems are implemented with single-rail bundled-delay model. However, we implemented our 8-bit microprocessor core for asynchronous microcontrollers with an alternative – the Muller pipeline. We implemented our microprocessor core with dual-rail quasi-delay-insensitive model with Verilog gate-level design. The instruction set for the target microprocessor core is compatible with PIC18. The correctness was verified with ModelSim software, and the gate-level design was synthesized into Altera Cyclone FPGA. In fact, the model we used in this paper can be applied to implement other simple microprocessor core without much difficulty. <s> BIB002
Controller programming makes the controller usable for a specific control action. Programming of microcomputer-based controllers can be subdivided into four discrete categories: configuration programming, system initialization programming, data file programming, and custom control programming. Some controllers require all four levels of program entry, while other controllers, used for standardized applications, require fewer levels. Configuration programming matches the hardware and software to the control action required; it involves selecting both a hardware and a software package to suit the application requirement. System initialization programming consists of entering appropriate startup values using a keypad or a keyboard. Startup data parameters include set point, throttling range, gain, reset time, time of day, occupancy time, and night setback temperature BIB002 . These data are equivalent to the settings on a mechanical control system, but there are usually more items because of the added functionality of the digital control system. The need for data file programming depends upon whether the system variables are fixed or variable. For example, in zone-level programming, where the input sensors are fixed and the programmer knows which relay will receive the output, data file programming is irrelevant. But in system-level programming, where the controller handles a wide variety of sensors and drives various relays, data file programming is a must. For the controller to properly process input data, for example, it must know whether a point type is analog or digital. If the point is analog, the controller must know the sensor type, the range, whether or not the input value is linear, whether or not alarm limits are assigned, what the high and low alarm limit values are if limits are assigned, and whether there is a lockout point. See Table 2 . If the point is digital, the controller must know its normal state (open or closed) BIB001 , whether the given state is an alarm state or merely a status condition, and whether or not the condition triggers an event-initiated program. Custom control programming is the most involved programming category; it requires a step-by-step procedure that closely resembles standard computer programming. A macro view of the basic tasks is shown in Figure 4 .
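The analog and digital point attributes described above might be captured in a data file along the following lines; the field names and values are hypothetical illustrations, not the format of any particular controller.

```python
# Sketch of the point attributes a data file must carry so the
# controller can process inputs, per the description above.
# All field names and values are hypothetical illustrations.
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class AnalogPoint:
    name: str
    sensor_type: str
    range: Tuple[float, float]        # sensing range, e.g. deg C
    linear: bool                      # is the input value linear?
    alarm_limits: Optional[Tuple[float, float]] = None  # (low, high)
    lockout: bool = False

@dataclass
class DigitalPoint:
    name: str
    normal_state: str                 # "open" or "closed"
    alarm_on_change: bool             # alarm state vs. mere status
    triggers_event_program: bool      # event-initiated program?

def check_analog(point, value):
    """Return 'low-alarm', 'high-alarm', or 'ok' for a reading."""
    if point.alarm_limits:
        low, high = point.alarm_limits
        if value < low:
            return "low-alarm"
        if value > high:
            return "high-alarm"
    return "ok"

zone_temp = AnalogPoint("zone_temp", "thermistor", (0.0, 50.0),
                        linear=True, alarm_limits=(15.0, 30.0))
print(check_analog(zone_temp, 35.0))  # → high-alarm
```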
Survey of Visual Question Answering: Datasets and Techniques <s> Introduction <s> We propose the task of free-form and open-ended Visual Question Answering (VQA). Given an image and a natural language question about the image, the task is to provide an accurate natural language answer. Mirroring real-world scenarios, such as helping the visually impaired, both the questions and answers are open-ended. Visual questions selectively target different areas of an image, including background details and underlying context. As a result, a system that succeeds at VQA typically needs a more detailed understanding of the image and complex reasoning than a system producing generic image captions. Moreover, VQA is amenable to automatic evaluation, since many open-ended answers contain only a few words or a closed set of answers that can be provided in a multiple-choice format. We provide a dataset containing $$\sim $$~0.25 M images, $$\sim $$~0.76 M questions, and $$\sim $$~10 M answers (www.visualqa.org), and discuss the information it provides. Numerous baselines and methods for VQA are provided and compared with human performance. Our VQA demo is available on CloudCV (http://cloudcv.org/vqa). <s> BIB001 </s> Survey of Visual Question Answering: Datasets and Techniques <s> Introduction <s> Visual Question Answering (VQA) is a challenging task that has received increasing attention from both the computer vision and the natural language processing communities. Given an image and a question in natural language, it requires reasoning over visual elements of the image and general knowledge to infer the correct answer. In the first part of this survey, we examine the state of the art by comparing modern approaches to the problem. We classify methods by their mechanism to connect the visual and textual modalities. In particular, we examine the common approach of combining convolutional and recurrent neural networks to map images and questions to a common feature space. 
We also discuss memory-augmented and modular architectures that interface with structured knowledge bases. In the second part of this survey, we review the datasets available for training and evaluating VQA systems. The various datasets contain questions at different levels of complexity, which require different capabilities and types of reasoning. We examine in depth the question/answer pairs from the Visual Genome project, and evaluate the relevance of the structured annotations of images with scene graphs for VQA. Finally, we discuss promising future directions for the field, in particular the connection to structured knowledge bases and the use of natural language processing models. <s> BIB002
Visual Question Answering is a task that has emerged in the last few years and has been getting a lot of attention from the machine learning community BIB001 BIB002 . The task typically involves showing an image to a computer and asking a question about that image which the computer must answer. The answer could take any of the following forms: a word, a phrase, a yes/no answer, a choice among several possible answers, or a fill-in-the-blank answer. Visual question answering is an important and appealing task because it combines the fields of computer vision and natural language processing: computer vision techniques must be used to understand the image, NLP techniques must be used to understand the question, and both must be combined to answer the question effectively in the context of the image. This is challenging because the two fields have historically used distinct methods and models to solve their respective tasks. This survey describes some prominent datasets and models that have been used to tackle the visual question answering task and compares how well these models perform on the various datasets. Section 2 covers VQA datasets, Section 3 describes models and Section 4 discusses the results and provides some possible future directions.
Survey of Visual Question Answering: Datasets and Techniques <s> DAQUAR <s> We present a new dataset with the goal of advancing the state-of-the-art in object recognition by placing the question of object recognition in the context of the broader question of scene understanding. This is achieved by gathering images of complex everyday scenes containing common objects in their natural context. Objects are labeled using per-instance segmentations to aid in precise object localization. Our dataset contains photos of 91 objects types that would be easily recognizable by a 4 year old. With a total of 2.5 million labeled instances in 328k images, the creation of our dataset drew upon extensive crowd worker involvement via novel user interfaces for category detection, instance spotting and instance segmentation. We present a detailed statistical analysis of the dataset in comparison to PASCAL, ImageNet, and SUN. Finally, we provide baseline performance analysis for bounding box and segmentation detection results using a Deformable Parts Model. <s> BIB001 </s> Survey of Visual Question Answering: Datasets and Techniques <s> DAQUAR <s> This work aims to address the problem of image-based question-answering (QA) with new models and datasets. In our work, we propose to use neural networks and visual semantic embeddings, without intermediate stages such as object detection and image segmentation, to predict answers to simple questions about images. Our model performs 1.8 times better than the only published results on an existing image QA dataset. We also present a question generation algorithm that converts image descriptions, which are widely available, into QA form. We used this algorithm to produce an order-of-magnitude larger dataset, with more evenly distributed answers. A suite of baseline results on this new dataset are also presented. 
<s> BIB002 </s> Survey of Visual Question Answering: Datasets and Techniques <s> DAQUAR <s> In this paper, we introduce a new dataset consisting of 360,001 focused natural language descriptions for 10,738 images. This dataset, the Visual Madlibs dataset, is collected using automatically produced fill-in-the-blank templates designed to gather targeted descriptions about: people and objects, their appearances, activities, and interactions, as well as inferences about the general scene or its broader context. We provide several analyses of the Visual Madlibs dataset and demonstrate its applicability to two new description generation tasks: focused description generation, and multiple-choice question-answering for images. Experiments using joint-embedding and deep learning methods show promising results on these tasks. <s> BIB003 </s> Survey of Visual Question Answering: Datasets and Techniques <s> DAQUAR <s> We have seen great progress in basic perceptual tasks such as object recognition and detection. However, AI models still fail to match humans in high-level vision tasks due to the lack of capacities for deeper reasoning. Recently the new task of visual question answering (QA) has been proposed to evaluate a model’s capacity for deep image understanding. Previous works have established a loose, global association between QA sentences and images. However, many questions and answers, in practice, relate to local regions in the images. We establish a semantic link between textual descriptions and image regions by object-level grounding. It enables a new type of QA with visual answers, in addition to textual answers used in previous work. We study the visual QA tasks in a grounded setting with a large collection of 7W multiple-choice QA pairs. Furthermore, we evaluate human performance and several baseline models on the QA tasks. Finally, we propose a novel LSTM model with spatial attention to tackle the 7W QA tasks. <s> BIB004
Figure 1: Taken from BIB002

2.2 Visual7W BIB004 Visual7W is a dataset generated using images from the MS-COCO dataset BIB001 for image captioning, recognition and segmentation. The Visual7W dataset gets its name from generating multiple-choice questions of seven forms (Who, What, Where, When, Why, How and Which). Workers on Amazon Mechanical Turk (AMT) were used to generate the questions. A separate set of three workers rated the questions, and those with fewer than two positive votes were discarded. Multiple-choice answers were generated both automatically and by AMT workers. AMT workers were also asked to draw bounding boxes in the image around objects mentioned in the question, firstly to resolve textual ambiguity (e.g., if an image has two red cars, 'red car' in the question could refer to either), and secondly to enable answers of a visual nature ('pointing' at an object). The dataset contains 47,300 images and 327,939 questions.

2.3 Visual Madlibs BIB003 The Visual Madlibs dataset is a fill-in-the-blanks as well as multiple-choice dataset. Images are collected from MS-COCO. Descriptive fill-in-the-blank questions are generated automatically using templates and object information. Each question generated in this way is answered by a group of 3 AMT workers. The answer can be a word or a phrase. Multiple choices for the blanks are also provided as an additional evaluation benchmark. The dataset contains 10,738 images and 360,001 questions. The multiple-choice questions are evaluated on the accuracy metric.
Survey of Visual Question Answering: Datasets and Techniques <s> Visual Madlibs <s> This work aims to address the problem of image-based question-answering (QA) with new models and datasets. In our work, we propose to use neural networks and visual semantic embeddings, without intermediate stages such as object detection and image segmentation, to predict answers to simple questions about images. Our model performs 1.8 times better than the only published results on an existing image QA dataset. We also present a question generation algorithm that converts image descriptions, which are widely available, into QA form. We used this algorithm to produce an order-of-magnitude larger dataset, with more evenly distributed answers. A suite of baseline results on this new dataset are also presented. <s> BIB001 </s> Survey of Visual Question Answering: Datasets and Techniques <s> Visual Madlibs <s> In this paper, we present the mQA model, which is able to answer questions about the content of an image. The answer can be a sentence, a phrase or a single word. Our model contains four components: a Long Short-Term Memory (LSTM) to extract the question representation, a Convolutional Neural Network (CNN) to extract the visual representation, an LSTM for storing the linguistic context in an answer, and a fusing component to combine the information from the first three components and generate the answer. We construct a Freestyle Multilingual Image Question Answering (FM-IQA) dataset to train and evaluate our mQA model. It contains over 150,000 images and 310,000 freestyle Chinese question-answer pairs and their English translations. The quality of the generated answers of our mQA model on this dataset is evaluated by human judges through a Turing Test. Specifically, we mix the answers provided by humans and our model. The human judges need to distinguish our model from the human. They will also provide a score (i.e. 
0, 1, 2, the larger the better) indicating the quality of the answer. We propose strategies to monitor the quality of this evaluation process. The experiments show that in 64.7% of cases, the human judges cannot distinguish our model from humans. The average score is 1.454 (1.918 for human). The details of this work, including the FM-IQA dataset, can be found on the project page: \url{http://idl.baidu.com/FM-IQA.html}. <s> BIB002 </s> Survey of Visual Question Answering: Datasets and Techniques <s> Visual Madlibs <s> We propose the task of free-form and open-ended Visual Question Answering (VQA). Given an image and a natural language question about the image, the task is to provide an accurate natural language answer. Mirroring real-world scenarios, such as helping the visually impaired, both the questions and answers are open-ended. Visual questions selectively target different areas of an image, including background details and underlying context. As a result, a system that succeeds at VQA typically needs a more detailed understanding of the image and complex reasoning than a system producing generic image captions. Moreover, VQA is amenable to automatic evaluation, since many open-ended answers contain only a few words or a closed set of answers that can be provided in a multiple-choice format. We provide a dataset containing $$\sim $$~0.25 M images, $$\sim $$~0.76 M questions, and $$\sim $$~10 M answers (www.visualqa.org), and discuss the information it provides. Numerous baselines and methods for VQA are provided and compared with human performance. Our VQA demo is available on CloudCV (http://cloudcv.org/vqa). <s> BIB003
2.4 COCO-QA BIB001 The COCO-QA dataset is another dataset based on MS-COCO. Both questions and answers are generated automatically from the MS-COCO image captions and broadly belong to four categories: Object, Number, Color and Location. There is one question per image and answers are single-word. The dataset contains a total of 123,287 images. Evaluation is done using either accuracy or the WUPS score. BIB001

2.5 FM-IQA BIB002 The Freestyle Multilingual Image Question Answering (FM-IQA) dataset takes images from MS-COCO and uses the Baidu crowdsourcing server to have workers generate questions and answers. Answers can be words, phrases or full sentences. Question/answer pairs are available in Chinese along with their English translations. The dataset contains 158,392 images and 316,193 questions. The authors propose human evaluation through a visual Turing Test, which may be one reason this dataset has not gained much popularity.

2.6 VQA BIB003 The Visual Question Answering (VQA) dataset is the most widely used dataset for the VQA task and was released as part of the Visual Question Answering challenge. It is divided into two parts: one contains real-world images from MS-COCO, while the other contains abstract clipart scenes, built from models of humans and animals, which remove the need to process noisy images and let systems focus on high-level reasoning. Questions and answers are generated by crowd-sourced workers, with 10 answers obtained for each question from unique workers. Answers are typically a word or a short phrase, and approximately 40% of the questions have a yes/no answer. For evaluation, both an open-ended answer-generation format and a multiple-choice format (with 18 candidate responses per question) are available. To evaluate open-ended answers, a machine-generated answer is first normalized by the VQA evaluation system and then scored as Score = min(#humans who provided that exact answer / 3, 1). An answer is thus considered completely correct if it matches the responses of at least three human annotators, and it receives a score of 0 if it matches none of the 10 human responses. The original VQA dataset has 204,721 MS-COCO images with 614,163 questions and 50,000 abstract images with 150,000 questions. The 2017 iteration of the VQA challenge has a bigger dataset with a total of 265,016 MS-COCO and abstract images and an average of 5.4 questions per image; the exact number of questions is not given on the challenge website. BIB003
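The open-ended VQA scoring rule above can be written as a small function. This is a sketch: the function name is ours, and simple lower-casing/stripping stands in for the fuller answer normalization performed by the official VQA evaluation code.

```python
def vqa_accuracy(model_answer, human_answers):
    """Open-ended VQA score: Score = min(#matching human answers / 3, 1).

    `human_answers` is the list of 10 crowd-sourced answers collected for
    the question. Lower-casing and whitespace stripping approximate the
    normalization done by the official evaluation system."""
    norm = model_answer.strip().lower()
    matches = sum(a.strip().lower() == norm for a in human_answers)
    return min(matches / 3.0, 1.0)
```

For example, an answer matching four of the ten annotators scores 1.0, while one matching a single annotator scores 1/3.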
Survey of Visual Question Answering: Datasets and Techniques <s> Non-attention Deep Learning Models <s> Learning to store information over extended time intervals by recurrent backpropagation takes a very long time, mostly because of insufficient, decaying error backflow. We briefly review Hochreiter's (1991) analysis of this problem, then address it by introducing a novel, efficient, gradient based method called long short-term memory (LSTM). Truncating the gradient where this does not do harm, LSTM can learn to bridge minimal time lags in excess of 1000 discrete-time steps by enforcing constant error flow through constant error carousels within special units. Multiplicative gate units learn to open and close access to the constant error flow. LSTM is local in space and time; its computational complexity per time step and weight is O(1). Our experiments with artificial data involve local, distributed, real-valued, and noisy pattern representations. In comparisons with real-time recurrent learning, back propagation through time, recurrent cascade correlation, Elman nets, and neural sequence chunking, LSTM leads to many more successful runs, and learns much faster. LSTM also solves complex, artificial long-time-lag tasks that have never been solved by previous recurrent network algorithms. <s> BIB001 </s> Survey of Visual Question Answering: Datasets and Techniques <s> Non-attention Deep Learning Models <s> We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 37.5% and 17.0% which is considerably better than the previous state-of-the-art. The neural network, which has 60 million parameters and 650,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax.
To make training faster, we used non-saturating neurons and a very efficient GPU implementation of the convolution operation. To reduce overfitting in the fully-connected layers we employed a recently-developed regularization method called "dropout" that proved to be very effective. We also entered a variant of this model in the ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3%, compared to 26.2% achieved by the second-best entry. <s> BIB002 </s> Survey of Visual Question Answering: Datasets and Techniques <s> Non-attention Deep Learning Models <s> The recently introduced continuous Skip-gram model is an efficient method for learning high-quality distributed vector representations that capture a large number of precise syntactic and semantic word relationships. In this paper we present several extensions that improve both the quality of the vectors and the training speed. By subsampling of the frequent words we obtain significant speedup and also learn more regular word representations. We also describe a simple alternative to the hierarchical softmax called negative sampling. An inherent limitation of word representations is their indifference to word order and their inability to represent idiomatic phrases. For example, the meanings of "Canada" and "Air" cannot be easily combined to obtain "Air Canada". Motivated by this example, we present a simple method for finding phrases in text, and show that learning good vector representations for millions of phrases is possible. <s> BIB003 </s> Survey of Visual Question Answering: Datasets and Techniques <s> Non-attention Deep Learning Models <s> In this paper, we propose to employ the convolutional neural network (CNN) for the image question answering (QA). Our proposed CNN provides an end-to-end framework with convolutional architectures for learning not only the image and question representations, but also their inter-modal interactions to produce the answer.
More specifically, our model consists of three CNNs: one image CNN to encode the image content, one sentence CNN to compose the words of the question, and one multimodal convolution layer to learn their joint representation for the classification in the space of candidate answer words. We demonstrate the efficacy of our proposed model on the DAQUAR and COCO-QA datasets, which are two benchmark datasets for the image QA, with the performances significantly outperforming the state-of-the-art. <s> BIB004
Deep learning models for VQA typically use Convolutional Neural Networks (CNNs) to embed the image, and word embeddings such as Word2Vec BIB003 along with Recurrent Neural Networks (RNNs) to embed the question. These embeddings are combined and processed in various ways to obtain the answer. The following model descriptions assume that the reader is familiar with CNNs BIB002 as well as RNN variants such as Long Short-Term Memory units (LSTMs) BIB001 and Gated Recurrent Units (GRUs). Some approaches do not involve the use of RNNs; these are discussed first. One baseline model, called iBOWIMG, uses the output of a late layer of the pre-trained GoogLeNet image-classification model as the image features. The word embeddings of all the words in the question are taken as the text features, so the text features are a simple bag-of-words. The image and text features are concatenated and softmax regression is performed over the answer classes. This model was shown to achieve performance comparable to several RNN-based approaches on the VQA dataset. BIB004 propose a CNN-only model that we refer to here as Full-CNN. They use three different CNNs: an image CNN to encode the image, a question CNN to encode the question, and a joint CNN to combine the image and question encodings and produce a joint representation.
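The iBOWIMG baseline described above reduces to a single linear + softmax layer over concatenated features. A minimal sketch, assuming the image features come from a pre-trained CNN; all dimensions and names here are illustrative:

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over a score vector."""
    e = np.exp(z - z.max())
    return e / e.sum()

def ibowimg_predict(image_feat, question, word_emb, W, b):
    """iBOWIMG-style baseline: the question's word embeddings are summed
    into a bag-of-words text feature, concatenated with the CNN image
    feature, and a single softmax layer scores the answer classes."""
    text_feat = sum(word_emb[w] for w in question.lower().split())
    joint = np.concatenate([image_feat, text_feat])
    return softmax(W @ joint + b)
```

The appeal of the model is exactly this simplicity: everything trainable lives in `W` and `b`.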
Survey of Visual Question Answering: Datasets and Techniques <s> Full-CNN <s> In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision. <s> BIB001 </s> Survey of Visual Question Answering: Datasets and Techniques <s> Full-CNN <s> This work aims to address the problem of image-based question-answering (QA) with new models and datasets. In our work, we propose to use neural networks and visual semantic embeddings, without intermediate stages such as object detection and image segmentation, to predict answers to simple questions about images. Our model performs 1.8 times better than the only published results on an existing image QA dataset. We also present a question generation algorithm that converts image descriptions, which are widely available, into QA form. We used this algorithm to produce an order-of-magnitude larger dataset, with more evenly distributed answers. A suite of baseline results on this new dataset are also presented. <s> BIB002 </s> Survey of Visual Question Answering: Datasets and Techniques <s> Full-CNN <s> We address a question answering task on real-world images that is set up as a Visual Turing Test. 
By combining latest advances in image representation and natural language processing, we propose Ask Your Neurons, a scalable, jointly trained, end-to-end formulation to this problem. In contrast to previous efforts, we are facing a multi-modal problem where the language output (answer) is conditioned on visual and natural language inputs (image and question). We provide additional insights into the problem by analyzing how much information is contained only in the language part for which we provide a new human baseline. To study human consensus, which is related to the ambiguities inherent in this challenging task, we propose two novel metrics and collect additional answers which extend the original DAQUAR dataset to DAQUAR-Consensus. Moreover, we also extend our analysis to VQA, a large-scale question answering about images dataset, where we investigate some particular design choices and show the importance of stronger visual models. At the same time, we achieve strong performance of our model that still uses a global image representation. Finally, based on such analysis, we refine our Ask Your Neurons on DAQUAR, which also leads to a better performance on this challenging task. <s> BIB003
The image CNN uses the same architecture as VGGnet BIB001 and obtains a 4096-length vector from the second-last layer of the network. This vector is passed through another fully connected layer to get an image representation of size 400. The question CNN involves three layers of convolution + max pooling, with the convolutional receptive field set to 3; in other words, the kernel looks at a word along with its immediate neighbors. The joint CNN, which the authors call the multi-modal CNN, performs convolution across the question representation with receptive field size 2, and each convolution operation is provided the full image representation. The final representation from the multi-modal CNN is given to a softmax layer to predict the answer. The model is evaluated on the DAQUAR and COCO-QA datasets. The following models use both CNNs and RNNs. The Ask Your Neurons (AYN) model BIB003 uses a CNN to encode the image x into a continuous vector representation. The question q is encoded using an LSTM or GRU network whose input at time step t is the word embedding of the t-th question word q_t together with the encoded image vector; the hidden vector at the final time step is the question encoding. As a simple bag-of-words baseline, the authors also encode the question as the sum of all its word embeddings. The answer can be decoded in two different ways: as a classification over candidate answers, or by generating the answer. Classification is performed by a fully connected layer followed by a softmax over possible answers. Generation, on the other hand, is performed by a decoder LSTM that at each time step takes as input the previously generated word together with the question and image encodings; the next word is predicted using a softmax over the vocabulary. An important point to note is that this model shares some weights between the encoder and decoder LSTMs. The model is evaluated on the DAQUAR dataset.
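The question CNN of Full-CNN, described above, can be sketched as one convolution + max-pooling layer over word embeddings with receptive field 3. This is only a sketch: the paper stacks three such layers, and the tanh nonlinearity and all dimensions here are illustrative assumptions.

```python
import numpy as np

def question_conv_layer(word_vecs, W, b):
    """One convolution + max-pooling layer over a question: each filter
    sees a window of 3 consecutive word vectors (a word and its two
    immediate neighbours), and filter responses are max-pooled over
    all window positions to give a fixed-size question feature."""
    T, d = word_vecs.shape                           # T words, dim d
    windows = np.stack([word_vecs[t:t + 3].ravel()   # shape (T-2, 3d)
                        for t in range(T - 2)])
    feats = np.tanh(windows @ W + b)                 # (T-2, n_filters)
    return feats.max(axis=0)                         # (n_filters,)
```

Max pooling over positions is what lets questions of different lengths map to a fixed-size vector.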
The Vis+LSTM model BIB002 is very similar to AYN. It uses the final layer of VGGnet to obtain the image encoding and an LSTM to encode the question. In contrast to the previous model, the encoded image is provided to the LSTM as the first 'word', before the question. The output of this LSTM goes through a fully connected layer followed by a softmax layer.
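The image-as-first-word encoding can be sketched with a single recurrence. This is a simplification: a plain tanh RNN cell stands in for the LSTM, and the image vector is assumed to already be projected to the word-embedding dimension (as Vis+LSTM does) so it can be consumed as a token; names and dimensions are illustrative.

```python
import numpy as np

def encode_image_then_question(image_vec, word_vecs, Wx, Wh, b):
    """Vis+LSTM-style encoding sketch: the projected image vector is fed
    as the first 'word', followed by the question's word embeddings;
    the final hidden state is the joint image/question encoding."""
    h = np.zeros(Wh.shape[0])
    for x in [image_vec, *word_vecs]:
        h = np.tanh(Wx @ x + Wh @ h + b)
    return h
```

The returned hidden state would then go through the fully connected + softmax layer described above.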
Survey of Visual Question Answering: Datasets and Techniques <s> Other Models <s> Many machine learning algorithms require the input to be represented as a fixed-length feature vector. When it comes to texts, one of the most common fixed-length features is bag-of-words. Despite their popularity, bag-of-words features have two major weaknesses: they lose the ordering of the words and they also ignore semantics of the words. For example, "powerful," "strong" and "Paris" are equally distant. In this paper, we propose Paragraph Vector, an unsupervised algorithm that learns fixed-length feature representations from variable-length pieces of texts, such as sentences, paragraphs, and documents. Our algorithm represents each document by a dense vector which is trained to predict words in the document. Its construction gives our algorithm the potential to overcome the weaknesses of bag-of-words models. Empirical results show that Paragraph Vectors outperforms bag-of-words models as well as other techniques for text representations. Finally, we achieve new state-of-the-art results on several text classification and sentiment analysis tasks. <s> BIB001 </s> Survey of Visual Question Answering: Datasets and Techniques <s> Other Models <s> Visual question answering is fundamentally compositional in nature---a question like "where is the dog?" shares substructure with questions like "what color is the dog?" and "where is the cat?" This paper seeks to simultaneously exploit the representational capacity of deep networks and the compositional linguistic structure of questions. We describe a procedure for constructing and learning *neural module networks*, which compose collections of jointly-trained neural "modules" into deep networks for question answering. Our approach decomposes questions into their linguistic substructures, and uses these structures to dynamically instantiate modular networks (with reusable components for recognizing dogs, classifying colors, etc.). 
The resulting compound networks are jointly trained. We evaluate our approach on two challenging datasets for visual question answering, achieving state-of-the-art results on both the VQA natural image dataset and a new dataset of complex questions about abstract shapes. <s> BIB002 </s> Survey of Visual Question Answering: Datasets and Techniques <s> Other Models <s> We propose a method for visual question answering which combines an internal representation of the content of an image with information extracted from a general knowledge base to answer a broad range of image-based questions. This allows more complex questions to be answered using the predominant neural network-based approach than has previously been possible. It particularly allows questions to be asked about the contents of an image, even when the image itself does not contain the whole answer. The method constructs a textual representation of the semantic content of an image, and merges it with textual information sourced from a knowledge base, to develop a deeper understanding of the scene viewed. Priming a recurrent neural network with this combined information, and the submitted question, leads to a very flexible visual question answering approach. We are specifically able to answer questions posed in natural language, that refer to information not contained in the image. We demonstrate the effectiveness of our model on two publicly available datasets, Toronto COCO-QA and MS COCO-VQA and show that it produces the best reported results in both cases. <s> BIB003
The following models use more ideas than simply changing how to attend to the image or question, and as such do not fit in the previous sections. 3.4.1 Neural Module Networks (NMNs) BIB002 This model generates a neural network on the fly for each individual image and question, by choosing from various sub-modules based on the question and composing them into a network. Modules are of five kinds: Attention[c] computes an attention map for a given image and concept c (c can be 'dog', for instance, in which case Attention[dog] tries to find a dog); Classification[c] outputs a distribution over labels belonging to c for a given image and attention map (c can be 'color'); Re-attention[c] takes an attention map and recomputes it based on c (c can be 'above', meaning shift attention upward); Measurement[c] outputs a distribution over labels from an attention map alone; and Combination[c] merges two attention maps as specified by c (c could be 'and' or 'or'). To decide which modules to compose, the question is first parsed with a dependency parser and the dependencies are used to create a symbolic expression based on the head word; an example from the paper is 'What is standing on the field?' becoming what(stand). These symbolic forms then determine which modules to use, and the whole system is trained end-to-end through backpropagation. The authors test their model on the VQA dataset and also on a more challenging synthetic dataset, since they found that the VQA dataset does not require much high-level reasoning or composition. BIB003 presents the Ask Me Anything (AMA) model, which tries to leverage information from an external knowledge base to help guide visual question answering. It first obtains a set of attributes of the image, such as object names and properties, based on the image's caption.
The image captioning model is trained using standard techniques on the MS-COCO dataset. There are 256 possible attributes, and the attribute generator is trained on MS-COCO using a variation of VGGnet. The top five attributes are used to generate queries against the DBpedia database. Each query returns a text, which is summarized using Doc2Vec BIB001. This summary is passed as an additional input to the decoder LSTM, which generates the answer. The authors show results on the VQA and COCO-QA datasets.
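The NMN module vocabulary described above can be illustrated with a toy composition. Everything below is invented for the sketch (the detection maps, the 0.5 threshold, the 'exists' measurement) and is not the paper's learned implementation; it only shows how attention maps flow between composed modules.

```python
import numpy as np

def attend(image, c):
    """Attention[c]: an attention map that is high where concept c is
    detected (here simply a stored toy detection map per concept)."""
    return image["detections"][c]

def combine(mode, m1, m2):
    """Combination[c]: merge two attention maps; 'and' = elementwise
    minimum, 'or' = elementwise maximum."""
    return np.minimum(m1, m2) if mode == "and" else np.maximum(m1, m2)

def measure(mode, m):
    """Measurement[c]: answer from an attention map alone, e.g.
    existence as 'some region scores above a threshold'."""
    return "yes" if mode == "exists" and m.max() > 0.5 else "no"

# A layout for "Is there a red shape and a circle?" might compose as:
# measure("exists", combine("and", attend(img, "red"), attend(img, "circle")))
```

In the real model each module is a small trainable network, and the layout is chosen per question from the parse.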
Survey of Visual Question Answering: Datasets and Techniques <s> Discussion and Future Work <s> Freebase is a practical, scalable tuple database used to structure general human knowledge. The data in Freebase is collaboratively created, structured, and maintained. Freebase currently contains more than 125,000,000 tuples, more than 4000 types, and more than 7000 properties. Public read/write access to Freebase is allowed through an HTTP-based graph-query API using the Metaweb Query Language (MQL) as a data query and manipulation language. MQL provides an easy-to-use object-oriented interface to the tuple data in Freebase and is designed to facilitate the creation of collaborative, Web-based data-oriented applications. <s> BIB001 </s> Survey of Visual Question Answering: Datasets and Techniques <s> Discussion and Future Work <s> Several deep learning models have been proposed for question answering. However, due to their single-pass nature, they have no way to recover from local maxima corresponding to incorrect answers. To address this problem, we introduce the Dynamic Coattention Network (DCN) for question answering. The DCN first fuses co-dependent representations of the question and the document in order to focus on relevant parts of both. Then a dynamic pointing decoder iterates over potential answer spans. This iterative procedure enables the model to recover from initial local maxima corresponding to incorrect answers. On the Stanford question answering dataset, a single DCN model improves the previous state of the art from 71.0% F1 to 75.9%, while a DCN ensemble obtains 80.4% F1. <s> BIB002
As has been the trend in recent years, deep learning models outperform earlier graphical-model-based approaches across all VQA datasets. It is interesting to note, however, that the Answer Type Prediction (ATP) model performs better than the non-attention deep models, which shows that simply introducing convolutional and/or recurrent neural networks is not enough: identifying the relevant parts of the image in a principled manner is important. ATP is even competitive with or better than some attention models such as Where to Look (WTL) and Stacked Attention Networks (SAN). A significant improvement is shown by Hierarchical Co-Attention Networks (CoAtt), the first model to attend over the question in addition to the image. Question attention may help especially for longer questions, which are hard to encode into a single vector with LSTMs/GRUs; first encoding each word and then using the image to attend to the important words lets the model perform better. Neural Module Networks (NMN) use the novel and interesting idea of automatically composing sub-modules for each image/question pair. NMN performs similarly to CoAtt on the VQA dataset but outperforms all models on a synthetic dataset requiring more high-level reasoning, indicating that this could be a valuable approach in the real world, although more investigation is required to judge its performance. The best-performing model on COCO-QA is Ask Me Anything (AMA), which incorporates information from an external knowledge base (DBpedia). A possible reason for its improved performance is that the knowledge base helps answer questions involving world or common-sense knowledge that is not present in the dataset. The model does not perform as well on the VQA dataset, perhaps because few questions in that dataset require world knowledge. This model naturally gives rise to two avenues for future work.
The first is recognizing when external knowledge is needed: a hybrid of CoAtt and AMA with a decision maker that chooses whether to access the knowledge base might provide the best of both worlds, and the decision could even be a soft one to enable end-to-end training. The second is exploring the use of other knowledge bases such as Freebase BIB001, NELL (Carlson et al., 2010) or OpenIE extractions. As we have seen, novel ways of computing attention continue to improve performance on this task; the same holds for textual question answering BIB002, so more recent models from that space can be used to guide VQA models. A study estimating an upper bound on performance for the various VQA datasets would also be valuable for gauging the scope of possible improvement, especially for the automatically generated COCO-QA. Finally, most VQA datasets treat answering as a classification task; only the VQA dataset allows answer generation, and in a limited manner. It would be interesting to explore answering as a generation task more deeply, but dataset collection and effective evaluation methodologies for generation remain open questions.
Scheduling with inserted idle time: problem taxonomy and literature review <s> Literature on Problems with a Regular Objective Function <s> Algorithms are developed for solving problems to minimize the length of production schedules. The algorithms generate any one, or all, schedules of a particular subset of all possible schedules, called the active schedules. This subset contains, in turn, a subset of the optimal schedules. It is further shown that every optimal schedule is equivalent to an active optimal schedule. Computational experience with the algorithms shows that it is practical, in problems of small size, to generate the complete set of all active schedules and to pick the optimal schedules directly from this set and, when this is not practical, to random sample from the set of all active schedules and, thus, to produce schedules that are optimal with a probability as close to unity as is desired. The basic algorithm can also generate the particular schedules produced by well-known machine loading rules. <s> BIB001 </s> Scheduling with inserted idle time: problem taxonomy and literature review <s> Literature on Problems with a Regular Objective Function <s> An algorithm is developed for sequencing jobs on a single processor in order to minimize maximum lateness, subject to ready times and due dates. The method that we develop could be classified as branch-and-bound. However, it has the unusual feature that a complete solution is associated with each node of the enumeration tree. <s> BIB002 </s> Scheduling with inserted idle time: problem taxonomy and literature review <s> Literature on Problems with a Regular Objective Function <s> We survey and extend the results on the complexity of machine scheduling problems. After a brief review of the central concept of NP-completeness we give a classification of scheduling problems on single, different and identical machines and study the influence of various parameters on their complexity.
The problems for which a polynomial-bounded algorithm is available are listed and NP-completeness is established for a large number of other machine scheduling problems. We finally discuss some questions that remain unanswered. <s> BIB003 </s> Scheduling with inserted idle time: problem taxonomy and literature review <s> Literature on Problems with a Regular Objective Function <s> Let M_1 and M_3 be non-bottleneck machines and M_2 a bottleneck machine processing only one job at a time. Suppose that n jobs have to be processed on M_1, M_2 and M_3 (in that order) and job i has to spend a time a_i on M_1, d_i on M_2 and q_i on M_3: we want to minimize the makespan. This problem is important since its resolution provides a bound on the makespan of complicated systems such as job shops. It is NP-hard in the strong sense. However, efficient branch and bound methods exist and we describe one of them. Our bound for the tree-search is very close to the bound used by Florian et al., but the principle of branching is quite different. At every node, we construct by an O(n log n) algorithm a Schrage schedule; then we define a critical job c, a critical set J and consider two subsets of schedules: the schedules where c precedes every job of J and the schedules where c follows every job of J. We give the results of this method and prove that the difference between the optimum and the Schrage schedule is less than d_c. <s> BIB004 </s> Scheduling with inserted idle time: problem taxonomy and literature review <s> Literature on Problems with a Regular Objective Function <s> Scheduling a set of jobs on a single machine is studied where each job has a specified ready time and due date (limit times). A dominance relationship is established within the set of possible sequences and results in an important restriction of the number of sequences to be considered in solving the feasibility problem.
Establishing the dominance relationship leading to this restriction requires only the ordering of ready times and due dates and is thus independent of job processing times and of any change in limit times which does not affect the given order. <s> BIB005 </s> Scheduling with inserted idle time: problem taxonomy and literature review <s> Literature on Problems with a Regular Objective Function <s> Abstract This paper presents an optimal procedure for sequencing n jobs on one machine to minimize the maximum lateness when they may have different ready times, processing times, and due dates. This procedure is significantly more efficient for difficult problems than the McMahon/Florian algorithm which has achieved the best results in this area to date. The primary measures of computational efficiency for these procedures are the mean computer processing time and the percentage of problems solved optimally within a given time limit. An experiment that employs response surface methodology to locate problems within the region of highest difficulty shows the comparative efficiency of this procedure. <s> BIB006
In the earliest known work on IIT schedules, Giffler and Thompson BIB001 limited the scope of consideration to the set of active schedules. They defined an active schedule as "… a feasible schedule having the property that no operation can be made to start sooner by permissible left shifting". Said simply, an active schedule is one in which no task's completion time can be reduced without increasing some other task's completion time. They showed that active schedules are important because they comprise a dominant set for scheduling situations in which the performance measure is regular, and they provided an algorithm for generating active schedules. Their focus was on the static job shop scheduling problem (J||reg), but their results can be extended easily to the more general case of J|r_j|reg. Minimizing Maximum Lateness. Several papers dealing with 1|r_j|reg have appeared in the literature. Early work focused on designing efficient enumeration schemes for the problem of minimizing maximum lateness on a single machine with job ready times (1|r_j|L_max). BIB003 showed this problem to be NP-hard and equivalent to the so-called delivery time model (1|r_j; delivery times|C_max). The works of BIB002, BIB003, BIB004, BIB005, and BIB006 are particularly relevant here because their algorithms permit inserted idle time in the schedule. BIB002 presented a novel forward scheduling procedure. Their search procedure defines a complete schedule at each node and derives a lower and an upper bound. Using a jump-tracking strategy, the search expands the node with the lowest lower bound value. They labeled the job that realizes maximum lateness in a schedule as a critical job. Their procedure holds the machine idle for the critical job j by delaying the start of jobs whose due dates are greater than that of j and that precede j in the block containing j.
The tree search stops when the lateness of the critical job at the current node is less than or equal to the least lower bound of all open nodes. Realizing that the McMahon and Florian algorithm is efficient only when r_max − r_min > d_max − d_min, BIB003 proposed an inversion scheme to reclaim the efficiency when r_max − r_min < d_max − d_min. The scheme exchanges each job's ready time with its due date to form an inverted problem (in which the ready time and due date ranges are reversed). The optimum solution of the inverted problem is reversed to obtain the optimum solution to the original problem. BIB004 presented an improvement to Schrage's algorithm (see Blazewicz et al. 1993, p. 60) that permits inserted idle time in the schedule, without any added computational burden, for the 1|r_j, delivery times|C_max problem. He also developed two dominance properties and a lower bounding scheme. He used these ideas in a branch-and-bound method, which solved problems of up to 1000 jobs. Note that, as already mentioned, this problem is equivalent to the 1|r_j|L_max problem. BIB005 presented an attractive dominance property that is independent of job processing times. By ordering the jobs on the basis of their ready times and due dates, it enables a smaller set of schedules to be considered. BIB006 presented improvements to the McMahon and Florian algorithm in terms of how the sequences are constructed, how to test for optimality, and how to generate new nodes. A sophisticated approach has also been presented for solving the problem 1|r_j, p_j = p|L_max, where p is an arbitrary integer.
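The inversion scheme can be sketched as a data transformation: reflect each job's ready time and due date about a common constant K, which swaps the roles (and the ranges) of ready times and due dates. Per the text above, an optimal sequence for the inverted instance, read in reverse, is optimal for the original. The function name and the choice of K are ours.

```python
def invert(jobs, K=None):
    """Inversion scheme for 1|r_j|L_max: reflect ready times and due
    dates about a constant K (any K >= max d_j keeps the data
    nonnegative).  `jobs` maps an id to (r, d, p); processing times are
    unchanged.  Reversing an optimal sequence for the inverted instance
    gives an optimal sequence for the original instance."""
    if K is None:
        K = max(d for (_, d, _) in jobs.values())
    return {j: (K - d, K - r, p) for j, (r, d, p) in jobs.items()}

jobs = {"A": (0, 10, 3), "B": (2, 6, 2)}
invert(jobs)               # → {'A': (0, 10, 3), 'B': (4, 8, 2)}
invert(invert(jobs), 10)   # reflecting twice about the same K restores the data
```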
Minimizing Flowtime Related Measures.
In what is probably the first of few papers recognizing the need to consider IIT for flowtime-related measures, the so-called deadline problem, 1|r_j, d̄_j|C_max, was studied. The authors devised a branch-and-bound algorithm that constructed schedules by choosing at each node a job to attach to the end of the current partial schedule. They defined a block as a group of jobs with the first job starting at its earliest start time and all other jobs following without delay until the end of the schedule. When a schedule has a block with the property that all jobs after the first in the block have earliest start times greater than or equal to the earliest start of the first job, then the schedule, if feasible, is optimum for 1|r_j, d̄_j|C_max. They used this property to test complete feasible solutions for optimality and thus, when successful, enable their algorithm to end the search.

Bianco and Ricciardelli (1982) studied the 1|r_j|Σ w_jC_j problem. They provided six different dominance properties, two tests for optimality, and a lower bounding procedure. They reported computational experience with a branch-and-bound algorithm that incorporated these ideas. Their results for problems of up to 10 jobs are encouraging. From their work it is clear that inserted idle time is important also for other weighted measures such as total weighted flowtime and maximum weighted flowtime.

An interesting problem variation occurs when jobs are not permitted to leave the scheduling system early. Then the flowtime for a job becomes max{C_j, d_j} − r_j, a regular performance measure. When arrival times are not identical, it becomes necessary to consider inserted idle time to find a minimum total flowtime schedule. To illustrate, consider the simple problem of two jobs (A, B) with respective ready times (0, 2), processing times (4, 6), and due dates (12, 9). The two possible schedules are obviously AB and BA.
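The arithmetic of this two-job example can be checked with a short sketch (our own illustration): each job's flowtime is max{C_j, d_j} − r_j, and jobs start at the later of the previous completion and their own ready time.

```python
def total_flowtime(sequence, r, p, d):
    """Total flowtime when jobs may not leave the system early:
    flowtime of job j is max(C_j, d_j) - r_j.  Starting a job before
    an earlier-ready job forces inserted idle time at the front."""
    t, total = 0, 0
    for j in sequence:
        t = max(t, r[j]) + p[j]          # completion time C_j
        total += max(t, d[j]) - r[j]     # flowtime of j
    return total

r = {"A": 0, "B": 2}; p = {"A": 4, "B": 6}; d = {"A": 12, "B": 9}
total_flowtime(["A", "B"], r, p, d)  # → 20 (nondelay schedule)
total_flowtime(["B", "A"], r, p, d)  # → 19 (machine idle from 0 to 2)
```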
AB is a nondelay schedule with a total flowtime of 20; BA is an IIT schedule with a total flowtime of 19. Clearly, deliberate idle time can be beneficial for such problems. BIB001 have shown that this problem is equivalent to the single machine tardiness problem.

Minimizing Tardiness. Numerous researchers have studied the tardiness problem. Only a few have considered IIT in their analysis. One of the earliest published methodologies for inserting deliberate idle time was provided by Carroll with his so-called hold-off and sneak-in heuristics, which he tested as an augmentation to his COVERT dispatching method for job shop scheduling (J|r_j|Σ T_j). These heuristics work as follows. At time t the decision to hold off a machine (insert idle time) is made by considering the estimated cost of delay, c_j, for jobs in queue as well as for yet-to-arrive jobs that are already tardy. Let h_j be the hiatus time for job j (h_j = max{0, r_j − t}). Select the job with largest c_j/(p_j + h_j). If the selected job is a yet-to-arrive job, then schedule the machine to start its processing upon arrival. Search the list of considered jobs (in descending order of c_j) for the possibility of starting and completing a job before the arrival of the selected job (i.e., look for possible sneak-ins). Schedule all possible sneak-ins to start as soon as possible. Carroll's heuristics turn out to be a rather circuitous way of guaranteeing an active schedule. We describe below a more straightforward approach. Carroll's simulation results comparing COVERT with and without hold-off and sneak-in show that the heuristics marginally but significantly (in the statistical sense) improve schedule performance. Moreover, his results give some indication that the added benefit of heuristics for inserting idle time declines with the allowance level and increases with the utilization rate. That is to say, the percent improvement in mean tardiness will be most marked in cases of loose due dates and high utilization.
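The hold-off and sneak-in steps described above can be sketched as follows. This is a loose illustration of the selection rule c_j/(p_j + h_j), not Carroll's implementation; the data layout and names are ours.

```python
def carroll_hold_off(t, jobs):
    """Sketch of a hold-off/sneak-in decision at time t.  `jobs` maps a
    job id to (r, p, c): ready time, processing time, and estimated
    cost of delay.  Returns the job the machine is committed to, plus
    any sneak-in jobs that fit before that job arrives."""
    def priority(j):
        r, p, c = jobs[j]
        h = max(0, r - t)            # hiatus time h_j = max(0, r_j - t)
        return c / (p + h)           # Carroll's ratio c_j / (p_j + h_j)

    chosen = max(jobs, key=priority)
    r_sel = jobs[chosen][0]
    sneak_ins = []
    if r_sel > t:                    # machine held idle for a future job:
        clock = t                    # look for jobs completing before it arrives
        for j, (r, p, c) in sorted(jobs.items(), key=lambda kv: -kv[1][2]):
            if j != chosen and max(clock, r) + p <= r_sel:
                clock = max(clock, r) + p
                sneak_ins.append(j)
    return chosen, sneak_ins

# the queued job A cannot finish before B arrives, so no sneak-in:
carroll_hold_off(0, {"A": (0, 5, 10), "B": (3, 2, 12)})   # → ('B', [])
# with a short queued job, A sneaks in before B's arrival at t = 4:
carroll_hold_off(0, {"A": (0, 2, 10), "B": (4, 2, 40)})   # → ('B', ['A'])
```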
In the same spirit as Carroll, heuristic methods for inserting idle time developed by Morton and Ramnath (1992) have been reported in Morton and Pentico (1993, p. 164-168). Their procedure is somewhat more elegant than Carroll's in that it is explicitly connected to utilization. They defined a soon-to-arrive job as one whose arrival time r is less than (t + p_min), where p_min is the smallest required processing time among the waiting jobs at time t. A priority is calculated for each job in the queue and for each soon-to-arrive job; the priority of a soon-to-arrive job j is reduced by applying a discounting formula in which B is a constant directly proportional to the utilization level. Then the job with highest priority is chosen to next seize the idle machine. Their preliminary results show this procedure provides notable improvement in weighted tardiness. Their results seem to show that the marginal improvement in using hold-off heuristics is more marked in cases of lower utilization and tighter due dates, in apparent direct contrast to Carroll's observation. One explanation may be that Carroll reported raw tardiness figures, whereas Morton and Ramnath reported normalized relative tardiness values. The recent results of Sridharan and Zhou (1996a) suggest that the value of inserting idle time is indeed a function of utilization, with marked improvement when the machine is not heavily loaded. Under high utilization, there were fewer attractive opportunities to insert idle time. However, the few instances in which idle time was inserted produced substantial improvement in tardiness. This is consistent with Carroll's earlier results. Due date tightness appears unimportant. Due date range (arbitrariness) appears to have a significant and substantial effect on the improvement, with higher improvement when the due date range is increased. A more detailed explanation of these interactions awaits further investigation.
We can improve the procedures used by Carroll and by Morton and Ramnath to determine soon-to-arrive jobs by directly applying the Giffler and Thompson specification for an active schedule.

PROPOSITION. Assume a machine is idle with at least one waiting job at time t. For any regular performance measure, it is unnecessary to consider inserted idle time for any job with arrival time greater than min{r′_j + p_j}, where r′_j = max{t, r_j}.

PROOF. Assume to the contrary, namely that some schedule S was constructed with a delay longer than min{r′_j + p_j} − t. Then one could schedule the job attaining min{r′_j + p_j} in the idle period without delaying the completion of any other job, yielding a schedule no worse than S.

The implication of the proposition is that we could redefine a soon-to-arrive job as one arriving before min{r′_j + p_j}, and obtain a smaller set than that provided by Carroll's or Morton and Ramnath's definition. Their definition unnecessarily permits considering the scheduling of jobs with arrival times in the interval (min{r′_j + p_j}, t + p_min) whenever min{r′_j + p_j} < t + p_min. The procedure suggested here would be faster, would never permit a worse solution, and would guarantee an active schedule. Using this definition, Sridharan and Zhou developed a decision theory based heuristic for the 1|r_j|Σ T_j problem. Via a set of simulation experiments, they demonstrated the importance of permitting inserted idle time when the due dates are arbitrary.

Chu and Portmann (1992) presented a priority rule called PRTT (Priority Rule for Total Tardiness) for the 1|r_j|Σ T_j problem: PRTT(j, t) = max{r_j, t} + max{max{r_j, t} + p_j, d_j}. They then defined a T-active schedule as an active schedule in which for any pair of adjacent jobs i and j (i followed by j) either max{r_i, τ} < max{r_j, τ} or PRTT(i, τ) < PRTT(j, τ), where τ = C_k if some job k immediately precedes i, and τ = −∞ if i is the first job in the sequence.
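The proposition's cutoff can be sketched directly (our own illustration): when the machine falls idle at time t, only jobs arriving before min_j{max(t, r_j) + p_j} need be considered, a set never larger than the one given by the t + p_min definition.

```python
def soon_to_arrive(t, queued, future):
    """Candidate set per the proposition: all jobs j (queued or future)
    with max(t, r_j) < min_k { max(t, r_k) + p_k }.  `queued` and
    `future` map job ids to (r, p)."""
    jobs = {**queued, **future}
    cutoff = min(max(t, r) + p for (r, p) in jobs.values())
    return {j for j, (r, p) in jobs.items() if max(t, r) < cutoff}

# waiting job A (p = 5); B arrives at 1 with p = 2, so the cutoff is
# 1 + 2 = 3.  C (arrival 4) is excluded, although the t + p_min rule
# (cutoff 5) would have admitted it:
soon_to_arrive(0, {"A": (0, 5)}, {"B": (1, 2), "C": (4, 9)})  # → {'A', 'B'}
```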
They then proved that the set of T-active schedules is dominant for the criterion of unweighted tardiness. Their priority function PRTT represents an important extension of the Modified Due Date rule studied by BIB002 and Baker and Kanet (1983), inasmuch as it uses a job's arrival time in computing its priority. We can envision the utility of Chu and Portmann's result for constructing search procedures for tardiness problems. For example, consider a branch-and-bound algorithm that constructs schedules in a forward direction. At any stage k we have a T-active partial schedule PS_k. Branch from PS_k only with jobs for which the T-active property holds.
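The PRTT priority itself is a one-line computation; the sketch below (our own illustration) evaluates it from the definition given above.

```python
def prtt(j, t, r, p, d):
    """Chu and Portmann's priority rule for total tardiness:
    PRTT(j, t) = max(r_j, t) + max(max(r_j, t) + p_j, d_j)."""
    est = max(r[j], t)                 # earliest start of j at time t
    return est + max(est + p[j], d[j])

# job with r = 2, p = 4, d = 9, evaluated at t = 0:
prtt(1, 0, {1: 2}, {1: 4}, {1: 9})    # → 2 + max(6, 9) = 11
```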
Literature on Problems with a Nonregular Objective Function

There are important scheduling problems in which the performance measure is not regular. The most obvious case is when there are penalties incurred for earliness. Then we see that IIT may be beneficial. A special case here is 1||Σ E_j, already shown to be NP-hard and equivalent to 1||Σ T_j BIB005. A variant of this is the problem of minimizing total earliness when each job must be completed by a deadline d̄_j (problem 1|d̄_j|Σ E_j). A simple solution procedure for the special case 1|pmtn, d̄_j|Σ E_j can be developed as follows. Call an instance of this problem P1. Because preemption is allowed, jobs may be interrupted and split into processing segments. Now consider P2, an instance of 1|pmtn, r_j|Σ F_j subject to the constraint C_max ≤ M. P1 is equivalent to P2 after making the following substitutions: r_j in P2 := d̄_max − d̄_j in P1; M in P2 := d̄_max in P1. Solve P2 and make the substitution C_j in P1 := M − s_j in P2, where s_j is the start time of the first segment of job j in P2. Figure 2 illustrates these substitutions. For P2, it is known that an optimum schedule is one in which the machine is kept busy with the available job (segment) with minimum remaining processing time. Because this results in a nondelay schedule, if C_max ≤ M we have an optimum schedule; otherwise no feasible solution exists. Notice that the times in P2 when the machine awaits the arrival of a job correspond to the IIT periods in P1. For the weighted version of this problem (1|pmtn, d̄_j|Σ w_jE_j) the results are not as encouraging. Using the same reduction algorithm as above, 1|pmtn, d̄_j|Σ w_jE_j reduces to 1|pmtn, r_j|Σ w_jF_j. But 1|pmtn, r_j|Σ w_jF_j reduces to 1|pmtn, r_j|Σ w_jC_j, which is strongly NP-hard BIB002. So 1|pmtn, d̄_j|Σ w_jE_j is NP-hard. Similarly, 1|d̄_j|Σ w_jE_j reduces to 1|r_j|Σ w_jF_j. But 1|r_j|Σ w_jF_j reduces to 1|r_j|Σ w_jC_j, which is known to be NP-hard BIB001. So 1|d̄_j|Σ w_jE_j is NP-hard.
Figure 2. Reduction of 1|pmtn, d̄_j|Σ E_j to 1|pmtn, r_j|Σ F_j.

Earliness/Tardiness Problems. There is a growing body of literature on the earliness/tardiness (E/T) problem. Raghavachari, and Baker and Scudder (1990), have already provided reviews of that literature. However, most of the E/T work that has been reported avoids the issue of inserted idle time either by restricting the solution to be a nondelay schedule or by assuming a common due date for all jobs. For the 1|d_j = d|Σ g_j(E_j) + h_j(T_j) problem (the so-called common due date problem), Cheng and Kahlbacher proved that it is unnecessary to consider schedules with inserted idle time except prior to the first job in the schedule. Their result holds for any cost function of the form Σ_{j=1}^{n} f(C_j − d), where f(·) is nonincreasing on (−∞, 0), nondecreasing on (0, ∞), and f(0) = 0. The reader can assume that any study mentioned by Raghavachari or by Baker and Scudder and not included here either makes the restricting nondelay assumption or deals with a problem for which the Cheng-Kahlbacher result holds. In the first case, such papers are not directly relevant to the issue of inserted idle time. We agree with Baker and Scudder's observation that the essence of the E/T problem lies in its nonregular performance measure, and to impose the arbitrary restriction that there be no idle time diminishes the importance of this objective. Because Baker and Scudder have provided an extensive review of the literature on the common due date problem, and in light of Cheng and Kahlbacher's result, we refrain from repeating such a review here. After taking all this into consideration, the IIT-E/T literature is scanty. We can characterize the available literature into four broad groups of papers dealing with (1) optimizing procedures, (2) special purpose E/T heuristics, (3) heuristic search procedures, and (4) timetabling algorithms.

Optimizing Procedures. Mixed integer programming formulations have been presented by Fry et al.
(1987), Coleman, and Balakrishnan et al. (1997). Branch-and-bound schemes have been developed by Fry et al. (1986, 1987). Fry et al. considered the single machine problem of minimizing a weighted mixture of flowtime, earliness, and tardiness. Coleman formulated a single machine problem with sequence dependent setup times and earliness/tardiness penalties. Balakrishnan et al.'s formulation extends the models of Fry et al. and Coleman to include multiple parallel uniform machines, sequence dependent setups, and job ready times (Q|r_j, setups|Σ α_jE_j + β_jT_j). They assumed that processing times on a machine m are scaled by a machine-specific factor ≤ 1. With their formulation, they were able to solve eight-job, two-machine problems with an average of 8175 pivots in about 30 seconds, while 10-job, two-machine problems required an average of about 50,000 pivots and about four minutes on a 333-MHz Pentium processor. Recognizing the discouraging nature of these results, the authors described a Benders decomposition approach for separating the problem into an integer master problem that focuses on finding the machine assignments and the sequence in which jobs are processed, and a continuous valued linear subproblem that focuses on finding the exact completion time of each job.

E/T Heuristics. Special purpose E/T heuristics have been proposed by BIB007, Nandkeolyar et al. (1993), and BIB012. Mannur and Addagatla developed two heuristics for E/T problems with machine "vacations," one of which permits schedules with inserted idle time. Their limited results show the nondelay heuristic to be superior, but their problem instances were all such that the utilization was so high as to always cause a nondelay schedule to be optimum. Nandkeolyar et al. studied the single machine Σ w_j(E_j + T_j) problem with dynamically arriving jobs and proposed a two-step modular approach.
In the first step, a marginal cost analysis is performed in order to decide whether or not to keep the machine idle in anticipation of an important soon-to-arrive job. In the second phase, they deployed and tested the performance of various dispatching rules to select a job to next occupy the machine. They also optionally used a so-called "balancing routine" to timetable the final schedule. It is difficult to assess the quality of their approach because no comparison to optimum solutions was made available.

Sridharan and Zhou presented a nearly online (Sanlaville 1995) scheduling heuristic of complexity O(n^2) for the 1|r_j|Σ α_jE_j + β_jT_j problem. Their heuristic identified soon-to-arrive jobs and kept the machine deliberately idle for them. At each decision epoch t, their heuristic looked ahead to max{t + p_j, d_j} to identify arriving jobs. Thus, the candidate job set at t included all jobs in the queue and soon-to-arrive jobs. The heuristic, based on a decision theoretic approach, proceeds to select the best job to schedule next as follows. First, C_j, the best completion time of job j, is determined as if it were processed next. Assuming all remaining jobs follow j in a nondelay mode, the average completion time of the remaining jobs is estimated using their average processing time. If the average completion time of unscheduled jobs is greater than their average due date, then C_j is adjusted accordingly, provided it is feasible and economical. Upon determining the best completion time of job j, the completion times of remaining jobs are estimated using their individual processing times and the average processing time of all unscheduled jobs. Using these estimates, the total cost of scheduling j next is obtained. Repeating this process for each job in the candidate job set at time t, they obtain an estimate of the cost consequence of scheduling each job next and select the job that produces the lowest cost to process next.
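The flavor of this cost-consequence selection can be conveyed with a much-simplified sketch. The estimation scheme below is a coarse stand-in for the authors' procedure (no completion-time adjustment step, EDD ordering of the remaining jobs, a single average processing time); names and data layout are ours.

```python
def select_next(t, jobs, alpha=1.0, beta=1.0):
    """Pick the candidate whose scheduling next yields the lowest
    estimated E/T cost.  `jobs` maps ids to (r, p, d); alpha and beta
    are the earliness and tardiness weights."""
    def est_cost(j):
        r_j, p_j, d_j = jobs[j]
        C = max(t, r_j) + p_j                       # completion if j goes next
        cost = alpha * max(0, d_j - C) + beta * max(0, C - d_j)
        rest = [k for k in jobs if k != j]
        if rest:
            p_avg = sum(jobs[k][1] for k in rest) / len(rest)
            clock = C
            for k in sorted(rest, key=lambda k: jobs[k][2]):  # EDD order
                clock = max(clock, jobs[k][0]) + p_avg        # rough estimate
                d_k = jobs[k][2]
                cost += alpha * max(0, d_k - clock) + beta * max(0, clock - d_k)
        return cost
    return min(jobs, key=est_cost)

# at t = 0, starting B first (estimated cost 1) beats starting A (cost 9):
select_next(0, {"A": (0, 4, 12), "B": (2, 6, 9)})   # → 'B'
```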
They tested their heuristic on the 116 published static problems in BIB008 and BIB006. Their heuristic was found to be faster than the heuristics of both Yano and Kim and Davis and Kanet, and it produced superior results. In additional tests involving dynamic problems with up to 5000 jobs, under a variety of conditions, their heuristic was found to consistently outperform versions of EXP-E/T BIB004 and EDD adapted (by incorporating the above described look-ahead feature) to handle dynamic E/T problems.

Heuristic Search. In this category of papers the focus has been on either neighborhood search method development or application of genetic algorithms. An adjacent pairwise exchange heuristic has been proposed for solving 1||Σ E_j + T_j. Using a set of nine precedence relationship rules to reduce the number of candidates for interchanging jobs and a straightforward linear programming formulation to timetable the resulting sequences, the authors were able to solve problems of up to 16 jobs, finding an optimum solution in 122 of 192 test problems. BIB006 and BIB010 considered two cases of the 1||Σ α_jE_j + β_jT_j problem: when α_j = β_j for all j, and when α_j and β_j are proportional to the job processing times with the restriction that 0 ≤ α_j ≤ β_j. They provided a branch-and-bound method and a pairwise interchange heuristic and demonstrated their use in solving problems of up to 30 jobs. They were able to obtain the optimum solution for 99 of the 100 problems considered. Keyser and Sarper (1991) also developed a pairwise interchange heuristic. They presented a target start time heuristic to minimize the sum of earliness, tardiness, and waiting time costs. The heuristic permits machine idle time between jobs in the schedule produced. The heuristic solution is improved using an adjacent pairwise interchange algorithm. They formulated and solved the problem as a mixed integer program and compared their heuristic for problems of up to six jobs.
Their results are encouraging, albeit limited. Genetic algorithms for E/T problems have been developed by Kanet and Sridharan (1991), among others. Kanet and Sridharan investigated the problem of n jobs with nonidentical ready times and sequence-dependent setup times to be scheduled on m uniform machines, with the convex objective function Σ_{j=1}^{n} (α_jE_j + β_jT_j + γ_jSU_j), where SU_j represents the setup time for job j. Their algorithm creates successive generations of schedules, with each generation inheriting the characteristics of a subset of the prior generation. To avoid convergence to a local optimum, the algorithm has a mutation feature, regulated by the algorithm's rate of convergence. That is, as the successive improvement in schedule populations begins to diminish, the probability of the appearance of mutant schedules is increased. It is difficult to assess the quality of their procedure because they made no comparisons to optimum solutions. Another genetic search procedure for 1||Σ α_jE_j + β_jT_j problems generates near-optimum sequences using crossover and mutation operators and linear scaling of the fitness function. It uses an embedded timetabling procedure to determine the optimum starting times of jobs in a sequence by inserting idle time when necessary, and has solved problems of up to 80 jobs with both proportional and general penalty weights. Compared to Yano and Kim's heuristic, this algorithm produced 12% to 33% lower total cost for a set of random problems, especially when the problem size is increased and penalty weights are general, albeit at increased computational times.

Timetabling Algorithms. The issue of finding the best way to timetable a given job sequence has attracted the attention of a number of researchers. Starting with Sidney in 1977, timetabling procedures have been proposed by BIB003, BIB008, BIB011, and others. Linear programming formulations have also been developed to timetable jobs.
BIB009 formulated the timetabling problem as a maximum network flow model and described a real-life implementation. Sidney's (1977) work is possibly the first appearance of a study involving E/T problems. He studied the 1||max{g(max Ej), h(max Tj)} problem, where both g and h are monotonically nondecreasing continuous functions such that g(0) = h(0) = 0. For each job j there is a target start time aj and a target completion time (due date) bj ≥ aj. These parameters have the property that if ai < ak, then bi ≤ bk. This condition assures that there is at least one optimal schedule in which the jobs are simultaneously ordered by nondecreasing aj and nondecreasing bj, making it trivial to obtain an optimum permutation. Given the permutation, he then computed upper bounds for Ej and Tj and used these bounds to timetable the jobs with a simple two-step procedure. Sidney's algorithm was later refined, improving the complexity from O(n^2) to O(n log n).

The work of BIB003 is probably the most comprehensive treatment of timetabling algorithms for E/T problems. They addressed two problems, 1||ΣEj + ΣTj and 1||max{Ej, Tj}, and several of their variants. In addition to showing that the 1||ΣEj + ΣTj problem is NP-hard, they provided an O(n log n) timetabling algorithm for the case when a sequence is given. They showed that the variant 1|pj = p|ΣEj + ΣTj can be solved by first sorting the jobs in nondecreasing order of due date and then applying the timetabling procedure (still O(n log n)). They also showed that the timetabling algorithm can be altered, without added complexity, to handle 1||Σwj(Ej + Tj) and the cases when window constraints or consecutive task constraints are present. Window constraints occur when each job j is given a window of time [uj, vj] in which the job must start, with the restriction that vj + pj ≤ v_{j+1}.
Consecutive task constraints occur when, for a set of jobs {Jj, Jj+1, ..., Jk}, job Ji is constrained to start immediately after job Ji-1 for i = j+1, ..., k. Note that the 1||max{Ej, Tj} problem is a reduction of the objective defined by Sidney in the problem 1||max{g(max Ej), h(max Tj)}. BIB003 were able to find a pseudopolynomial time algorithm for the 1||max{Ej, Tj} problem without Sidney's restriction on target start and target completion times (if ai < ak, then bi ≤ bk). The algorithm of Garey et al. is of complexity O(n(log n + log pmax)). It remains open whether or not a polynomial time (in n) algorithm can be found for the unrestricted version of the 1||max{g(max Ej), h(max Tj)} problem. BIB008 tackled the case where the penalties are general convex functions of earliness and tardiness (1||Σ(gj(Ej) + hj(Tj))) and proposed a pseudopolynomial algorithm of complexity O(nH), where H is the number of units of time in the planning horizon.

BIB011 provided an efficient timetabling algorithm for the 1||Σ(αjEj + βjTj) problem. They showed that the solution will be composed of m ≤ n clusters of uninterrupted jobs, possibly separated by idle periods. They observed and proved that the cluster partitions can be determined in advance (i.e., before actually deciding the size of the idle periods). Clusters can be identified by observing that, within the sequence of n jobs, any two adjacent jobs a and b belong to the same cluster only if db − da ≤ pb. They showed that within any cluster the tardy jobs are always preceded by the early jobs, that the earliness of consecutive jobs in a cluster is nonincreasing, and that the tardiness of consecutive jobs in a cluster is nondecreasing. They then provided an efficient two-stage procedure for first identifying clusters and then timetabling them. An essentially equivalent algorithm has been independently developed by Lee and Choi.
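The cluster-partition observation lends itself to a few lines of code. The sketch below applies the adjacency condition db − da ≤ pb quoted above to partition a fixed sequence before any timing decisions are made; the second stage of the cited two-stage procedure (timing each cluster) is not shown.

```python
def identify_clusters(seq, p, d):
    """Partition a fixed job sequence into clusters of jobs that will be
    processed with no inserted idle time between them.  Adjacent jobs a, b
    stay in the same cluster when d[b] - d[a] <= p[b]; otherwise the gap in
    due dates is large enough that an idle period may separate them."""
    clusters = [[seq[0]]]
    for a, b in zip(seq, seq[1:]):
        if d[b] - d[a] <= p[b]:
            clusters[-1].append(b)   # b stays in the current cluster
        else:
            clusters.append([b])     # start a new cluster after a gap
    return clusters
```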
Scheduling with inserted idle time: problem taxonomy and literature review
Considering the 1||Σ(αjEj + βjTj) problem, one study described a straightforward linear program formulation that produces an optimum timetable for a given sequence in one pivot. BIB001 developed and tested a two-phase sequencing-timetabling procedure for a multimachine job shop problem (J|rj|Σ(αjEj + βjTj)). In the first phase, the jobs are forward loaded according to precedence relationships to create the dispatching sequence at the work centers. Given the Phase 1 sequence, Phase 2 formulates the problem as a maximum network flow model and iteratively reschedules (timetables) the tasks to minimize total cost. In a subsequent paper they reported application of their approach to a real factory with over 26,000 tasks and 52 work centers. Theirs is the first reported case acknowledging the importance of inserted idle time in the design of a real-life production scheduling system.
A TAXONOMY FOR IIT SCHEDULING PROBLEMS
The literature leads us to three major situations (problem parameters) in which it may be sensible to deliberately introduce idle time into a schedule:

SITUATION 1: When there is more than one processor.
SITUATION 2: When there are jobs with nonidentical ready times.
SITUATION 3: When the scheduling performance measure is nonregular.

Notice that the union of these situations forms the complement of the intersection of the three special conditions describing when IIT is not required. Figure 3 presents a Venn diagram describing the relationship of these three classes of scheduling problems. At the core of the diagram in Figure 3 is the base problem specification: a single machine, jobs with identical ready times, and a regular performance measure. The remaining sets, numbered 1 through 7, identify cases where inserted idle time may be required. The relevant problem sets are:

Group 1 (1|rj|reg): Single machine, nonidentical ready times, regular performance measure.
Group 2 (m||reg): Multimachine, identical ready times, regular performance measure.
Group 3 (1||nonreg): Single machine, identical ready times, nonregular performance measure.
Group 4 (m|rj|reg): Multimachine, nonidentical ready times, regular performance measure.
Group 5 (m||nonreg): Multimachine, identical ready times, nonregular performance measure.
Group 6 (1|rj|nonreg): Single machine, nonidentical ready times, nonregular performance measure.
Group 7 (m|rj|nonreg): Multimachine, nonidentical ready times, nonregular performance measure.

Figure 3. Venn diagram showing groups of scheduling problems where inserted idle time may be required.

Table 1 maps the extant IIT literature according to the group structure defined in Figure 3. Figure 4 illustrates the relationship of timetabled, active, and nondelay schedules and shows what is known about the search space for various problem groups. The innermost set (nondelay schedules) dominates for 1||reg, 1|pmtn, rj|reg, and m|pmtn, rj|reg problems.
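The three situations combine into the eight cases of the Venn diagram, which can be captured in a small lookup. This is our own illustrative encoding; the group numbers and shorthand labels follow the listing above.

```python
def iit_group(multi_machine, nonidentical_ready, nonregular):
    """Map a problem's parameters to the group numbering of Figure 3.
    Returns (group number, shorthand label); group 0 is the core case
    (single machine, identical ready times, regular measure), where
    inserted idle time is never required."""
    groups = {
        (False, False, False): (0, "1||reg"),
        (False, True,  False): (1, "1|rj|reg"),
        (True,  False, False): (2, "m||reg"),
        (False, False, True):  (3, "1||nonreg"),
        (True,  True,  False): (4, "m|rj|reg"),
        (True,  False, True):  (5, "m||nonreg"),
        (False, True,  True):  (6, "1|rj|nonreg"),
        (True,  True,  True):  (7, "m|rj|nonreg"),
    }
    return groups[(multi_machine, nonidentical_ready, nonregular)]
```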
The set of active schedules, which includes the nondelay schedules, dominates for m|rj|reg and may contain IIT schedules. The outermost set, timetabled schedules, dominates for m|rj|nonreg problems. We define a timetabled schedule as a schedule in which no local shift (left or right) can reduce the objective function value. Note that the set of timetabled schedules is in fact a generalization of the set of "semi-active" schedules described by Giffler and Thompson BIB001. A semi-active schedule is achieved by removing all superfluous idle time appearing to the left of every job in the schedule (i.e., a special case of timetabling). The descriptive work of Kanet (1981) has shown that the set of active schedules, although dominant over the set of nondelay schedules, is significantly larger. This is where contributions like those of BIB002, BIB003, and BIB004 play a role: in each case the results serve to reduce the required search space within the active schedules for a specific scheduling objective.
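The schedule classes of Figure 4 can be made concrete for the single-machine case. The sketch below is our own illustration; the "active" class is omitted because checking it requires reasoning over left shifts that re-sequence jobs, so the classifier distinguishes only nondelay, merely semi-active, and schedules containing inserted idle time.

```python
def classify(schedule, jobs):
    """Classify a single-machine schedule, given as a list of (job, start)
    pairs in processing order, with jobs[j] = (ready time rj, processing pj).
    'nondelay'   : the machine is never idle while an unstarted job is ready;
    'semi-active': no local left shift is possible, but the machine waits
                   while some job is already available;
    otherwise    : the schedule contains inserted idle time."""
    order = [j for j, _ in schedule]
    prev_end, semi_active, nondelay = 0, True, True
    for i, (j, s) in enumerate(schedule):
        r, p = jobs[j]
        if s > max(prev_end, r):          # a local left shift is possible
            semi_active = False
        for k in order[i:]:               # could a waiting job have started sooner?
            if max(prev_end, jobs[k][0]) < s:
                nondelay = False
        prev_end = s + p
    if semi_active and nondelay:
        return "nondelay"
    if semi_active:
        return "semi-active"
    return "inserted idle time"
```

For example, delaying a job past the point its predecessor and its ready time allow is what moves a schedule out of the semi-active set and into the broader timetabled set.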
Research Opportunities
We see several areas where further research in inserted idle time scheduling might prove beneficial. These areas can be organized according to the following interrelated categories:

1. Further development of algorithms and dominance properties.
2. Integration of timetabling into search procedures.
3. Development of heuristic methods for constructing inserted idle time schedules.

[Table 1 fragment: Group 6 (1|rj|nonreg) entries include Fry, Darby-Dowman, and Armstrong (1986); Fry, Leong, and Rakes (1987); BIB003; BIB006; BIB007. Group 7 (m|rj|nonreg) entries include J|rj|Σ(αjEj + βjTj) BIB004 and Q|rj, setups|Σ(αjEj + βjTj), Kanet and Sridharan (1991); Balakrishnan, Kanet, and Sridharan (1997).]

Further Development of Algorithms and Dominance Properties. The separation of scheduling into sequencing and timetabling has obvious implications for strategies for the construction of schedules. (This point is developed further in the paragraphs to follow.) Aside from this, however, the availability of pure timetabling procedures may have practical advantages. For example, consider the scheduling of preventive maintenance in a factory. We want to schedule such maintenance when it causes the least disruption to the production schedule, i.e., when machines are idle. Given any current schedule of production, a timetabling algorithm could be deployed to reassign the idle time of machines to greatest advantage, allowing the maintenance to occur when least disruptive. When the objective function is well behaved (i.e., piecewise linear), the algorithm of BIB006 could be adapted to efficiently re-timetable jobs. For more general cost functions, however, the available algorithm is that of BIB005, with complexity O(nH). Here may be an opportunity for further development along two avenues: either by exploiting the properties of a specific type of function (e.g., quadratic), or by deploying general line search methods such as interval bisection or golden section (e.g., see Wagner 1977, p. 539).
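The O(nH) approach for general cost functions can be sketched as a dynamic program over an integer time grid. This illustrates the pseudopolynomial flavour only, assuming integer processing times and a horizon of H time units; it is not the cited algorithm itself, and the per-job cost functions need not be convex.

```python
def timetable_dp(seq, p, cost, H):
    """Minimum total cost of completion times C_1, C_2, ... for the fixed
    sequence seq, subject to C_next >= C_prev + p_next and C_j <= H.
    cost[j](t) may be any function of job j's completion time t.
    Runs in O(n*H): one pass plus prefix minima per job."""
    INF = float("inf")
    best = [0] * (H + 1)          # zero jobs scheduled: zero cost by any time t
    for j in seq:
        cur = [INF] * (H + 1)
        for t in range(p[j], H + 1):
            # job j completes exactly at t; its predecessor completes by t - p[j]
            cur[t] = cost[j](t) + best[t - p[j]]
        for t in range(1, H + 1):  # prefix minima: "j completes by time t"
            cur[t] = min(cur[t], cur[t - 1])
        best = cur
    return best[H]
```

With two unit-weight E/T jobs whose due dates are four time units apart, the DP finds the zero-cost timetable by inserting idle time between them; when the due dates force a conflict, it reports the unavoidable cost of one.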
So there seem to be a number of opportunities for further development of timetabling procedures. In the area of complexity analysis, the unrestricted problem 1||max{g(max Ej), h(max Tj)} and its variants 1||max{αjEj, βjTj}, 1||Σwj max{Ej, Tj}, and 1||max{Ej, Tj} remain open for analysis. It is yet to be shown whether these problems have polynomial (in n) time algorithms or belong to the class of NP-hard problems.

There also seem to be a number of opportunities for developing dominance properties. For example, it may be possible to exploit the results of BIB001 when addressing the 1|d̄j|ΣwjEj problem. Such an extension would be analogous to the procedure we outlined earlier for mapping the constrained weighted earliness problem (1|pmtn, d̄j|ΣwjEj) to the weighted completion time problem (1|pmtn, rj|ΣwjCj). Bianco and Ricciardelli's theorems establishing dominance properties between adjacent jobs for 1|rj|ΣwjCj may have a counterpart for 1|d̄j|ΣwjEj. In a similar vein, the work of Chu and Portmann on establishing dominance properties for 1|rj|ΣTj might be extendible to other related problems. An obvious first step would be the 1|rj|ΣwjTj problem, for which it should be possible to build on the dominance property already developed for 1||ΣwjTj. For E/T problems, similar extensions may be possible. We know, for example, from BIB002 that 1||ΣTj and 1||ΣEj are equivalent, so analogs of Chu and Portmann's results for 1|rj|ΣEj or even 1|rj|Σ(αjEj + βjTj) may be possible. This would serve to reduce the search space for certain E/T problems to a smaller set than the set of timetabled schedules (refer to Figure 4).

Figure 4. Venn diagram illustrating the relation of timetabled schedules to active and nondelay schedules.
Integration of Timetabling into Search Procedures.
There is a growing body of knowledge in the area of advanced computer search methods for scheduling. The approaches we refer to here include methods such as heuristic branch and bound, simulated annealing, beam search, and tabu search; reviews of these methods are available in the literature. Other approaches that appear promising include the application of genetic algorithms (see Sridharan 1991), the application of basic decision theory (see Kanet and Zhou 1993, Sridharan and Zhou 1996a, BIB002), and the application of neural networks (see BIB001). All these methods distinguish themselves from simple forward simulations in that they may include a (limited) capability for backtracking and/or the feature of dynamically changing the search path.

Our earlier discussion regarding the separation of the scheduling task into sequencing and timetabling leads to the tempting conclusion that applying such search approaches to problems involving inserted idle time might be quite simple. For example, a tempting heuristic might be to first ignore any inserted idle time and deploy a search procedure for identifying a good permutation, and then, having identified the final sequence, to timetable it with a timetabling algorithm that inserts idle times. This strategy of wanton separation of sequencing and timetabling can be dangerous, as illustrated by the example depicted in Figure 5 of an eight-job instance of 1||ΣEj + ΣTj. The figure shows three schedules for the sample problem. Schedule 1 is an optimum schedule; its total cost is 341. Schedule 2 is an optimum schedule under the constraint that no inserted idle time is permitted, i.e., an optimum nondelay schedule; it costs 1392. Schedule 3, obtained by timetabling Schedule 2, is far from optimum with a cost of 1154. Note the drastic difference in sequence between Schedule 1 and Schedule 2.

Figure 5. Illustration of the effect of separating sequencing and timetabling.
This example illustrates that a simple strategy of first ignoring timetabling to find a sequence can, when the performance measure is nonregular, lead to significantly suboptimum performance. Yet this is the procedure deployed by Faaland and Schmitt and by Yano and Kim. We know that Lee and Choi embedded a timetabling algorithm in their genetic-based search procedure and obtained significantly better results in a direct comparison with the procedure of Yano and Kim. The explanation may well lie in their integration of timetabling into the search procedure, and an important direction for research would be to investigate this phenomenon more thoroughly. In the application of search methods to E/T problems there appears to be a need to develop procedures for embedding timetabling into the fabric of the search procedure. One area of development, for example, rests in the observation that all the timetabling procedures discussed here index through the complete set of jobs, starting with the last job in the sequence. Efficient timetabling procedures that operate on partial schedules comprising a subsequence of either the first or the last jobs in a schedule would seem worthy of development. Such procedures could prove valuable in branch-and-bound approaches, where it is necessary to calculate a lower bound for a given partial schedule.

Development of Heuristics for Construction of IIT. The development of heuristics for constructing inserted idle time schedules is important for two reasons: (1) IIT scheduling problems are extremely complex, so exact solution methods may never be practical; and (2) in practice, the problem definition is under constant revision because of the dynamic nature of real-life scheduling environments, so quick solutions are an absolute necessity. The works of Morton and Ramnath (1992) and Sridharan and Zhou (1996a) indicate the potential fruitfulness of this research theme for single machine tardiness problems.
An interesting follow-up would be to examine the effects of redefining soon-to-arrive jobs as suggested here and to report the computational experience. Another extension would be to investigate the behavior of these types of procedures on problems with sequence-dependent setup times. Yet another interesting extension would be to see how such idle time insertion procedures might be designed or adapted for situations where the penalty function is nonregular (e.g., when earliness costs also come into play). We know that for such problems active schedules do not dominate, so a rethinking of the concept of soon-to-arrive might well be warranted. (For an initial effort along this line of research see BIB002.) Likewise, as discussed earlier, the effect of environmental variables such as utilization, due date tightness, and due date range on the improvement in performance obtained with such hold-off heuristics seems to warrant further clarification.

Finally, from the practitioner's point of view, nearly on-line algorithms seem to hold the maximum potential for use, whereas virtually all published research has focused on off-line algorithms. In this context, the works of Nandkeolyar et al. (1993) and BIB002 are worth noting because both presented heuristic procedures that assume minimum forward visibility and thus may be considered nearly on-line. Additional research extending and improving their heuristics by incorporating queuing theory based busy period analysis to determine the look-ahead window for nearly on-line algorithms may prove extremely fruitful and valuable. In addition to further study combining hold-off and sneak-in heuristics with priority dispatching methods à la Morton and Ramnath (1992), a decision theory approach as described by Kanet and Zhou (1993) might be successfully adapted to include inserted idle time as one of the alternatives. In the decision theory approach a schedule is constructed in a forward direction (as with a dispatching approach).
A decision point corresponds to the event that a resource has become available, and a decision alternative corresponds to the selection of a waiting job from the queue. At each decision point, the total cost of an extended schedule corresponding to each decision alternative is estimated; the most favorable (least estimated cost) alternative is then chosen, and the construction program advances to the next decision point. An interesting question from here is to what extent system performance might be improved by including the additional alternative of leaving the machine idle, i.e., by including the possibility of inserting idle time. (For an initial effort along this line of research see Sridharan and Zhou 1996a, 1996b.) A related issue concerns estimating job completion times in order to estimate the total cost of an extended schedule.
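A minimal sketch of this forward construction, with "leave the machine idle" added as an explicit decision alternative, might look as follows. The cost estimate here is our own myopic stand-in (each alternative is scored only by the chosen job's own E/T cost), not the richer extended-schedule estimates of the cited procedures; jobs[j] = (rj, pj, dj).

```python
def dispatch_with_idle(jobs, idle_step=1):
    """Build a single-machine schedule forward in time.  At each decision
    point (machine free at time t) the alternatives are: start one of the
    ready jobs now, or deliberately stay idle for idle_step time units.
    The idle alternative is chosen when delaying the otherwise-best job
    would lower its earliness/tardiness cost (i.e. it would finish early)."""
    def et(j, start):
        r, p, d = jobs[j]
        return abs(start + p - d)        # unit-weight E/T cost of job j

    t, done, schedule, total = 0, set(), [], 0
    while len(done) < len(jobs):
        pending = [j for j in jobs if j not in done]
        ready = [j for j in pending if jobs[j][0] <= t]
        if not ready:                    # forced idle: wait for the next release
            t = min(jobs[j][0] for j in pending)
            continue
        j_best = min(ready, key=lambda j: et(j, t))
        if et(j_best, t + idle_step) < et(j_best, t):
            t += idle_step               # inserted idle time chosen on purpose
            continue
        schedule.append((j_best, t))
        total += et(j_best, t)
        done.add(j_best)
        t += jobs[j_best][1]
    return schedule, total
```

On a toy instance with one job due immediately and one due later, the procedure starts the first job at time 0 and then deliberately holds the machine idle so the second job completes exactly on its due date.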
<s> BIB004 </s> Glucose Biosensors: An Overview of Use in Clinical Practice <s> Introduction <s> BACKGROUND ::: Our systematic review and meta-analysis of the benefit of self-monitoring of blood glucose (SMBG) in improving glycemic control in type 2 diabetes was published in 2008. With the few studies that have emerged afterward, we undertook subsequent meta-analysis of the available evidence to update the results. ::: ::: ::: METHODS ::: Clinical trials of SMBG were identified through electronic searches (MEDLINE, EMBASE, and The Cochrane Library) up to and including June 2009. Studies were included if they met the following inclusion criteria: (1) randomized controlled trial comparing SMBG versus non-SMBG in type 2 diabetes patients not using insulin and (2) hemoglobin A1c (HbA(1c)) reported as an outcome measure. The efficacy was estimated with the mean difference in the changes of HbA(1c) from baseline to final assessment between the SMBG and the non-SMBG groups. ::: ::: ::: RESULTS ::: SMBG was effective in reducing HbA(1c) in non-insulin-treated type 2 diabetes (pooled mean difference, -0.24%; 95% confidence interval, -0.34% to -0.14%; P < 0.00001). Glycemic control significantly improved among the subgroup of patients whose baseline HbA(1c) was >or=8%. In contrast, no significant effect of SMBG was detected in patients who had HbA(1c) <8%. ::: ::: ::: CONCLUSIONS ::: The available evidence suggests the usefulness of SMBG in improving glycemic control in non-insulin-treated type 2 diabetes as demonstrated by the reduction of HbA(1c) levels. In particular, SMBG proved to be useful in the subgroup of patients whose baseline HbA(1c) was >or=8%. <s> BIB005 </s> Glucose Biosensors: An Overview of Use in Clinical Practice <s> Introduction <s> OBJECTIVE ::: Little attention has been given and few studies have been published focusing on how to optimize self-monitoring of blood glucose (SMBG) use to monitor daily therapy for persons with type 1 diabetes mellitus. 
This study was designed to evaluate the effect on glycated hemoglobin (A1C) of a structured intervention focused on SMBG in type 1 diabetes patients with insufficient metabolic control (A1C ≥8%) using a randomized clinical trial design. ::: ::: ::: METHOD ::: One hundred fifty-nine outpatients with type 1 diabetes on multiple injection therapy with insulin and A1C ≥8% were recruited and randomized to one group receiving a focused, structured 9-month SMBG intervention (n=59) and another group receiving regular care based on guidelines (n=64). ::: ::: ::: RESULTS ::: Glycated hemoglobin values (mean % ± standard deviation) at study start was similar: 8.65 ± 0.10 in the intervention group and 8.61 ± 0.09 in the control group. The two groups were comparable (age, gender, body mass index, complication rate, and treatment modality) at study start and had mean diabetes duration and SMBG experience of 19 and 20 years, respectively. At study end, there was decrease in A1C in the intervention group (p<.05), and the A1C was 0.6% lower compared with the control group (p<.05). No increase in the number of minor or major hypoglycemia episodes was observed in the intervention group during the study period. ::: ::: ::: CONCLUSIONS ::: A simple, structured, focused SMBG intervention improved metabolic control in patients with longstanding diabetes type 1 and A1C ≥8%. The intervention was based on general recommendations, realistic in format, and can be applied in a regular outpatient setting. <s> BIB006 </s> Glucose Biosensors: An Overview of Use in Clinical Practice <s> Introduction <s> Results from landmark diabetes studies have established A1C as the gold standard for assessing long-term glycemic control. However, A1C does not provide “real-time” information about individual hyperglycemic or hypoglycemic excursions. 
Real-time information provided by self-monitoring of blood glucose (SMBG) represents an important adjunct to A1C, because it can differentiate fasting, preprandial, and postprandial hyperglycemia; detect glycemic excursions; identify hypoglycemia; and provide immediate feedback about the effect of food choices, physical activity, and medication on glycemic control. The importance of SMBG is widely appreciated and recommended as a core component of management in patients with type 1 or insulin-treated type 2 diabetes, as well as in diabetic pregnancy, for both women with pregestational type 1 and gestational diabetes. Nevertheless, SMBG in management of non–insulin-treated type 2 diabetic patients continues to be debated. Results from clinical trials are inconclusive, and reviews fail to reach an agreement, mainly because of methodological problems. Carefully designed large-scale studies on diverse patient populations with type 2 diabetes with the follow-up period to investigate long-term effects of SMBG in patients with type 2 diabetes should be carried out to clarify how to make the best use of SMBG, in which patients, and under what conditions. <s> BIB007 </s> Glucose Biosensors: An Overview of Use in Clinical Practice <s> Introduction <s> Self-monitoring of blood glucose (SMBG) is advocated as a valuable aid in the management of diabetes. The volume and cost of monitoring continues to increase. SMBG has a number of theoretical advantages/disadvantages which might impact on treatment, outcome and wellbeing. Investigating and quantifying the effect of self-monitoring in a condition where self-management plays a central role poses major methodological difficulties because of the need to minimize confounding factors. Despite the absence of definitive evidence, some situations where monitoring is generally accepted to be beneficial include patients on insulin, during pregnancy, in patients with hypoglycaemia unawareness and while driving. 
An area of controversy is the role of monitoring in non-insulin-requiring type-2 diabetes where observational and controlled studies give conflicting results. The available evidence does not support the general use of monitoring by all patients with type-2 diabetes, although further research is needed to identify specific subgroups of patients or specific situations where monitoring might be useful. The best use of SMBG in patients with type-2 diabetes might be for those receiving insulin and those on sulphonylurea drugs. The impact of monitoring on patient wellbeing must also be considered, with some studies suggesting adverse psychological effects. Given the large increase in the prevalence of type-2 diabetes, it will be important to define the role of SMBG so that resources can be used appropriately. Presently, the widespread use of SMBG (particularly in type-2 diabetes patients) is a good example of self-monitoring that was adopted in advance of robust evidence of its clinical efficacy. <s> BIB008 </s> Glucose Biosensors: An Overview of Use in Clinical Practice <s> Introduction <s> AIM ::: We estimated the number of people worldwide with diabetes for the years 2010 and 2030. ::: ::: ::: METHODS ::: Studies from 91 countries were used to calculate age- and sex-specific diabetes prevalences, which were applied to national population estimates, to determine national diabetes prevalences for all 216 countries for 2010 and 2030. Studies were identified using Medline, and contact with all national and regional International Diabetes Federation offices. Studies were included if diabetes prevalence was assessed using a population-based methodology, and was based on World Health Organization or American Diabetes Association diagnostic criteria for at least three separate age-groups within the 20-79 year range. Self-report or registry data were used if blood glucose assessment was not available. 
::: ::: ::: RESULTS ::: The world prevalence of diabetes among adults (aged 20-79 years) will be 6.4%, affecting 285 million adults, in 2010, and will increase to 7.7%, and 439 million adults by 2030. Between 2010 and 2030, there will be a 69% increase in numbers of adults with diabetes in developing countries and a 20% increase in developed countries. ::: ::: ::: CONCLUSION ::: These predictions, based on a larger number of studies than previous estimates, indicate a growing burden of diabetes, particularly in developing countries. <s> BIB009 </s> Glucose Biosensors: An Overview of Use in Clinical Practice <s> Introduction <s> AbstractObjective:Stakeholders in the US and elsewhere are interested in country-specific and cohort-specific information with which to assess the long-term value of self-monitoring of blood glucose (SMBG) for patients with type 2 diabetes mellitus (T2DM) on oral anti-diabetes drugs (OADs). This study modeled the cost-effectiveness of SMBG at frequencies of once, twice, or three times per day for this population, and included those who had used SMBG in the prior year.Research design and methods:Based on clinical findings of a longitudinal Kaiser Permanente study, a validated model was used to project 40-year clinical and economic outcomes for SMBG at (averages of) once, twice, or three times per day versus no SMBG. Baseline HbA1c (7.6%), age and gender represented the Kaiser study ‘prevalent’ SMBG users cohort. Unit costs came primarily from a 2003 published article; inflated to US$2006. Outcomes were discounted at 3% per annum, with sensitivity analyses on discount rates and time horizons. Analyses were ... <s> BIB010
|
Diabetes mellitus is the most common endocrine disorder of carbohydrate metabolism. Worldwide, it is a leading cause of morbidity and mortality and a major health problem for most developed societies. The prevalence of diabetes continues to increase. The crude estimated prevalence of diabetes in adults in the United States (US) has been reported to be 9.6% (20.4 million) in 2003-2006 . Moreover, it is predicted that 48.3 million people in the US will have diabetes by 2050 . The World Health Organization (WHO) has put the number of persons with diabetes worldwide at approximately 171 million in 2000, and this is expected to increase to 366 million by 2030 BIB002 . A recent study estimated that the world prevalence of diabetes among adults (20-79 years of age) would be 6.4%, affecting 285 million adults in 2010, and would increase to 7.7%, affecting 439 million adults, by 2030 BIB009 . A sedentary lifestyle, changes in eating habits, and the increasing frequency of obesity are thought to be the major causes of such increased rates. Multiple laboratory tests are used for the diagnosis and management of patients with diabetes. The blood glucose concentration, together with the HbA1c level, is the major diagnostic criterion for diabetes and a useful tool for patient monitoring. Self-monitoring of blood glucose (SMBG) has been established as a valuable tool for the management of diabetes BIB005 BIB006 BIB010 BIB007 BIB004 BIB008 . The goal of SMBG is to help the patient achieve and maintain normal blood glucose concentrations in order to delay or even prevent the progression of microvascular (retinopathy, nephropathy and neuropathy) and macrovascular complications (stroke and coronary artery disease).
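As a quick arithmetic check, the prevalence figures just cited imply the size of the adult population they refer to; the back-calculation below is ours, not from the cited studies:

```python
def implied_adult_population(cases_millions, prevalence_pct):
    """Adult population (in millions) implied by a case count and a prevalence."""
    return cases_millions / (prevalence_pct / 100.0)

# BIB009: 285 million adults at 6.4% in 2010 implies ~4,453 million (~4.45 billion) adults
pop_2010 = implied_adult_population(285, 6.4)
# BIB009: 439 million adults at 7.7% in 2030 implies ~5,701 million (~5.70 billion) adults
pop_2030 = implied_adult_population(439, 7.7)
```

The two implied denominators are mutually consistent with a growing world adult population over 2010-2030.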
The findings of the Diabetes Control and Complications Trial (DCCT) and the United Kingdom Prospective Diabetes Study (UKPDS) clearly showed that intensive control of elevated levels of blood glucose in patients with diabetes decreases the frequency of complications such as nephropathy, neuropathy, and retinopathy, and may reduce the occurrence and severity of large blood vessel disease BIB001 . In addition, SMBG can be useful for detecting hypoglycemia and providing real-time information for adjusting medications, dietary regimens, and physical activity in order to achieve glycemic goals BIB007 . Regular and frequent measurement of blood glucose may provide data for optimizing and/or changing patient treatment strategies. According to the recommendations of the ADA, SMBG should be used in patients on intensive insulin therapy (at least three times daily), and it may be useful in patients using less frequent insulin injections, noninsulin therapies, or medical nutrition therapy alone . Due to such recommendations for maintaining normal blood glucose levels, a series of suitable glucose-measuring devices have been developed. Biosensor technology has developed rapidly and can play a key role by providing a powerful analytical tool with major applications, particularly in medicine. Today's biosensor market is dominated by glucose biosensors. In 2004, glucose biosensors accounted for approximately 85% of the world market for biosensors, which had been estimated to be around $5 billion USD BIB003 . The glucose biosensor market growth is accelerating and manufacturers are engaged in fierce competition. According to a recent report by Global Industry Analysts, Inc., the global market for glucose biosensors and strips will reach $11.5 billion USD by 2012. This article reviews the brief history of biosensors, basic principles of operation, analytical performance requirements, and the present status of glucose biosensors.
In addition, how to assess the reliability of testing in clinical practice will be discussed.
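The UKPDS analysis cited above (BIB001) reports relative risk reductions per 1% drop in updated mean HbA1c (about 21% for any diabetes-related endpoint and 37% for microvascular complications). A small sketch extrapolating those per-1% figures to larger drops under an assumed multiplicative (log-linear) model; the extrapolation is an illustration of the arithmetic, not a claim from the study:

```python
def cumulative_risk_reduction(rr_per_point, hba1c_drop):
    """Relative risk reduction for a given HbA1c drop, assuming the
    per-1%-HbA1c relative risk applies multiplicatively (an assumption;
    UKPDS reports only the per-1% figures)."""
    return 1.0 - (1.0 - rr_per_point) ** hba1c_drop

# UKPDS per-1% figures: 21% for any endpoint, 37% for microvascular disease
any_endpoint_2pt = cumulative_risk_reduction(0.21, 2.0)   # ~38% for a 2-point drop
microvascular_2pt = cumulative_risk_reduction(0.37, 2.0)  # ~60% for a 2-point drop
```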
|
Glucose Biosensors: An Overview of Use in Clinical Practice <s> Basic Principles of Glucose Biosensors <s> The enzyme electrode is a miniature chemical transducer which functions by combining an electrochemical procedure with immobilized enzyme activity. This particular model uses glucose oxidase immobilized on a gel to measure the concentration of glucose in biological solutions and in the tissues in vitro. <s> BIB001 </s> Glucose Biosensors: An Overview of Use in Clinical Practice <s> Basic Principles of Glucose Biosensors <s> Abstract The pH dependence of the steady state parameters of the glucose oxidase (EC 1.1.3.4, from Aspergillus niger) reaction was determined by O2-monitored experiments over the entire pH range from 3 to 10 at 25°, with d-glucose as substrate. The data were fitted to a three-parameter steady state rate equation and the significance of the steady state parameters was examined by stopped flow half-reaction and turnover measurements at the extremes of the pH range used. The major conclusions from these studies can be summarized as follows. 1. At low pH, in the presence of halide, the maximum turn-over number (kcat) is determined entirely by the rate of flavin reduction (k2) in the reductive half-reaction. Furthermore, substrate combines only with an unprotonated form of the oxidized enzyme and the reductive half-reaction can be represented as follows. H+ E0 (K1)/⇄/(H+) E0 + S (k1)/⇄/(k-1) E0 - S (k2)/→ Er + δ-lactone Since kcat and k2 are both specifically decreased by halides at low pH values, it is probable that the turnover rate in the low pH range is also limited by k2 in the absence of halide. The steady state absorption spectrum of E0 - S is indistinguishable from the spectrum of E0. 
This finding, together with the fact that removal of the 1-hydrogen from d-glucose is a rate-limiting process in flavin reduction, is consistent with both a hydride transfer mechanism and with a flavin-glucose adduct mechanism in which this adduct is relatively unstable and never accumulates significantly as a kinetic intermediate. 2. The importance of k2 as a limiting first order process in turnover diminishes as the pH is raised. Thus, at pH 10 the major first order process in turnover is the breakdown of a species of oxidized enzyme, E'0, in the oxidative half-reaction. The rate of this process at pH 10.0 is 214 sec-1, whereas k2 has a value of 800 sec-1. 3. The reduced enzyme exists in two kinetically significant states of ionization, Er and Er-. The rapid reoxidation of Er with O2, to regenerate E0, is predominant at pH values less than 7. At pH values greater than 7, a much less rapid reaction of Er- with O2, leading to the formation of E'0-, becomes increasingly important. The species E'0- is unreactive with glucose and it is the conversion of a protonated form of E'0- to E0 which principally governs kcat at pH values greater than 7. We present a complete kinetic scheme describing the effects of pH and discuss the possible chemical significance of the species E'0. <s> BIB002 </s> Glucose Biosensors: An Overview of Use in Clinical Practice <s> Basic Principles of Glucose Biosensors <s> Glucose dehydrogenase (GDH), one of the recently discovered NAD(P)+-independent 'quinoprotein' class of oxidoreductase enzymes, was purified from Acinetobacter calcoaceticus LMD 79.41 and immobilised on a 1,1'-dimethylferrocene-modified graphite foil electrode. The second-order rate constant (ks) for the transfer of electrons between GDH and ferrocenemonocarboxylic acid (FMCA) in a homogeneous system, determined using direct current (DC) cyclic voltammetry, was found to be 9.4 × 10^6 litres mol^-1 s^-1.
This value of ks for GDH was more than 40 times greater than that for the flavoprotein glucose oxidase (GOD) under identical conditions. Such high catalytic activities were also observed when GDH was immobilised in the presence of an insoluble ferrocene derivative; a biosensor based on GDH was found to produce more than twice the current density of similar GOD-based electrodes. The steady-state current produced by the GDH-based electrode was limited by the enzymic reaction since methods which increased the enzyme loadings elevated the upper limit of glucose detection from 5 mM to 15 mM. The temperature, pH, stability and response characteristics of the GDH-based glucose sensor illustrate its potential usefulness for a variety of practical applications. In particular, the high catalytic activity and oxygen insensitivity of this biosensor make it suitable for in vivo blood glucose monitoring in the management of diabetes. <s> BIB003 </s> Glucose Biosensors: An Overview of Use in Clinical Practice <s> Basic Principles of Glucose Biosensors <s> Diabetes is one of the leading causes of death and disability in the world. There is a large population in the world suffering from this disease, and the healthcare costs increase every year. It is a chronic disorder resulting from insulin deficiency and hyperglycemia and has a high risk of development of complications for the eyes, kidneys, peripheral nerves, heart, and blood vessels. Quick diagnosis and early prevention are critical for the control of the disease status. Traditional biosensors such as glucose meters and glycohemoglobin test kits are widely used in vitro for this purpose because they are the two major indicators directly involved in diabetes diagnosis and long-term management. The market size and huge demand for these tests make it a model disease to develop new approaches to biosensors. 
In this review, we briefly summarize the principles of biosensors, the current commercial devices available for glucose and glycohemoglobin measurements, and the recent work in the area of artificial receptors and the potential for the development of new devices for diabetes specifically connected with in vitro monitoring of glucose and glycohemoglobin HbA(1c). <s> BIB004 </s> Glucose Biosensors: An Overview of Use in Clinical Practice <s> Basic Principles of Glucose Biosensors <s> The present review summarizes the state of the art in molecular recognition of biowarfare agents and other pathogens and emphasizes the advantages of using particular types of reagents for a given target (e.g. detection of bacteria using antibodies versus nucleic acid probes). It is difficult to draw firm conclusions as to type of biorecognition molecule to use for a given analyte. However, the detection method and reagents are generally target-driven and the user must decide on what level (genetic versus phenotypic) the detection should be performed. In general, nucleic acid-based detection is more specific and sensitive than immunological-based detection, while the latter is faster and more robust. This review also points out the challenges faced by military and civilian defense components in the rapid and accurate detection and identification of harmful agents in the field. Although new and improved sensors will continue to be developed, the more crucial need in any biosensor may be the molecular recognition component (e.g. antibody, aptamer, enzyme, nucleic acid, receptor, etc.). Improvements in the affinity, specificity and mass production of the molecular recognition components may ultimately dictate the success or failure of detection technologies in both a technical and commercial sense. 
Achieving the ultimate goal of giving the individual soldier on the battlefield or civilian responders to an urban biological attack or epidemic, a miniature, sensitive and accurate biosensor may depend as much on molecular biology and molecular engineering as on hardware engineering. Fortunately, as this review illustrates, a great deal of scientific attention has and is currently being given to the area of molecular recognition components. Highly sensitive and specific detection of pathogenic bacteria and viruses has increased with the proliferation of nucleic acid and immuno-based detection technologies. If recent scientific progress is a fair indicator, the future promises remarkable new developments in molecular recognition elements for use in biosensors with a vast array of applications. <s> BIB005 </s> Glucose Biosensors: An Overview of Use in Clinical Practice <s> Basic Principles of Glucose Biosensors <s> The function of amperometric biosensors is related to electron-transfer processes between the active site of an (immobilized) enzyme and an electrode surface which is poised to an appropriate working potential. Problems and specific features of architectures for amperometric biosensors using different electron-transfer pathways such as mediated electron transfer, electron-hopping in redox polymers, electron transfer using mediator-modified enzymes and carbon-paste electrodes, direct electron transfer by means of self-assembled monolayers or via conducting-polymer chains are discussed. <s> BIB006 </s> Glucose Biosensors: An Overview of Use in Clinical Practice <s> Basic Principles of Glucose Biosensors <s> The aims of simplifying biochemical measurement and of extending assay reliability outside the confines of a central laboratory are present in many applied biology sectors, not just clinical chemistry. An increasing range of desktop analysers are commercially available that are economical both of sample and of operator time.
A common theme running through many approaches is the exploitation of biological reagents, with the ultimate in simplification being integration of biological and measurement elements into a simple, monolithic device. This is the basic concept of the biosensor, a biological sample-interactive phase in close contact with a physical or chemical transducer. A typical biosensor construct has three features - a recognition element, a signal transducing structure and an amplification/processing element (see Fig. 1). Various transduction mechanisms have been used: electrochemical, electrical, optical, thermal and piezoelectric, as summarized in Table 1. Most commonly, in a biosensor, a biorecognition phase (e.g. enzyme, antibody, receptor, single-stranded DNA) interacts with the analyte of interest to produce some charge-based or optical change at the local sensor-transducer interface. Through signal processing this interaction is converted into digital values that relate to the build-up of concentration or activity of the analyte in the vicinity of the device, which in turn relates to the ambient levels in the bulk sample under investigation. A biosensor is not necessarily a stand-alone entity, but should be considered as part of a general development in instrumentation, designed to address generic medical and non-medical measurement science problems. Biosensors, when deployed in a clinical setting, offer the advantage of extra-laboratory analysis of a variety of relevant substances, including hormones, drugs of abuse and metabolites (both in vivo and in vitro). Continuous real-time monitoring of analytes is also a possibility; for example, monitoring of metabolites in blood (where the sample matrix is inevitably optically opaque) in the critical care situation. Generally, biosensors permit the use of low cost, 'clean' technology with reduced requirements for sample pre-treatment and large sample volume; ultimately, the user can be someone without prior laboratory skills.
The 'niche' application is therefore extra-laboratory testing, as realized with conventional dry reagent dipsticks. This review provides basic descriptions of the main subtypes of biosensor with an indication of their operational capability from a clinical chemistry perspective. Owing to a greater appreciation of the capabilities and limitations of biosensors and new input from the microfabrication and materials science fields, the direction of biosensor research is undergoing rapid change. The descriptions that follow are intended to reflect this change and illustrate the shift in emphasis from a preoccupation with bioreagent immobilization and chemistry to a renewed effort towards total system integration. A functionally efficient juxtaposition of sample and sensor remains essential for proper function, and the descriptions given provide some relevant examples. <s> BIB007 </s> Glucose Biosensors: An Overview of Use in Clinical Practice <s> Basic Principles of Glucose Biosensors <s> *A special report on the International Union of Pure and Applied Chemistry, Physical Chemistry Division, Commission I.7 (Biophysical Chemistry), Analytical Chemistry Division, Commission V.5 (Electroanalytical Chemistry). <s> BIB008 </s> Glucose Biosensors: An Overview of Use in Clinical Practice <s> Basic Principles of Glucose Biosensors <s> A novel design is described for an amperometric biosensor based on NAD(P)-dependent glucose dehydrogenase (GDH) combined with a plasma-polymerized thin film (PPF). The GDH is sandwiched between several nanometer thick acetonitrile PPFs on a sputtered gold electrode (PPF/GDH/PPF/Au). The lower PPF layer plays the role as an interface between enzyme and electrode because it is extremely thin, adheres well to the substrate (electrode), has a flat surface and a highly-crosslinked network structure, and is hydrophilic in nature. The upper PPF layer (overcoating) was directly deposited on immobilized GDH.
The optimized amperometric biosensor characteristics covered 2.5 - 26 mM glucose concentration at +0.6 V of applied potential; the least-squares slope was 320 nA mM-1 cm-2 and the correlation coefficient was 0.990. Unlike conventional wet-chemical processes that are incompatible with mass production techniques, this dry-chemistry procedure has great potential for enabling high-throughput production of bioelectronic devices. <s> BIB009 </s> Glucose Biosensors: An Overview of Use in Clinical Practice <s> Basic Principles of Glucose Biosensors <s> Abstract : Molecular recognition is central to biosensing. Since the first biosensor was developed by Updike and Hicks (1967) many biosensors have been studied and developed. As shown in Fig. 1, a biosensor can be defined as a "compact analytical device or unit incorporating a biological or biologically derived sensitive "recognition" element integrated or associated with a physio-chemical transducer" (Turner, 2000). Initially, biosensor recognition elements were isolated from living systems. However many biosensor recognition elements now available are not naturally occurring but have been synthesized in the laboratory. <s> BIB010 </s> Glucose Biosensors: An Overview of Use in Clinical Practice <s> Basic Principles of Glucose Biosensors <s> Over 7,000 peer reviewed articles have been published on electrochemical glucose assays and sensors over recent years. Their number makes a full review of the literature, or even of the most recent advances, impossible. Nevertheless, this chapter should acquaint the reader with the fundamentals of the electrochemistry of glucose and provide a perspective of the evolution of the electrochemical glucose assays and monitors helping diabetic people, who constitute about 5 % of the world’s population. Because of the large number of diabetic people, no assay is performed more frequently than that of glucose. Most of these assays are electrochemical. 
The reader interested also in nonelectrochemical assays used in, or proposed for, the management of diabetes is referred to an excellent 2007 review by Kondepati and Heise [1]. <s> BIB011 </s> Glucose Biosensors: An Overview of Use in Clinical Practice <s> Basic Principles of Glucose Biosensors <s> Abstract Glucose oxidase (β-D-glucose:oxygen 1-oxidoreductase; EC 1.1.3.4) catalyzes the oxidation of β-D-glucose to gluconic acid, by utilizing molecular oxygen as an electron acceptor with simultaneous production of hydrogen peroxide. Microbial glucose oxidase is currently receiving much attention due to its wide applications in chemical, pharmaceutical, food, beverage, clinical chemistry, biotechnology and other industries. Novel applications of glucose oxidase in biosensors have increased the demand in recent years. The present review discusses the production, recovery, characterization, immobilization and applications of glucose oxidase. Production of glucose oxidase by fermentation is detailed, along with recombinant methods. Various purification techniques for higher recovery of glucose oxidase are described here. Issues of enzyme kinetics, stability studies and characterization are addressed. Immobilized preparations of glucose oxidase are also discussed. Applications of glucose oxidase in various industries and as analytical enzymes are having an increasing impact on bioprocessing. <s> BIB012
|
A biosensor can be defined as a "compact analytical device or unit incorporating a biological or biologically derived sensitive recognition element integrated or associated with a physio-chemical transducer" . There are three main parts of a biosensor: (i) the biological recognition elements that differentiate the target molecules in the presence of various chemicals, (ii) a transducer that converts the biorecognition event into a measurable signal, and (iii) a signal processing system that converts the signal into a readable form BIB001 BIB009 . The molecular recognition elements include receptors, enzymes, antibodies, nucleic acids, microorganisms and lectins BIB010 BIB005 . The five principal transducer classes are electrochemical, optical, thermometric, piezoelectric, and magnetic . The majority of current glucose biosensors are of the electrochemical type because of their better sensitivity, reproducibility, and easy maintenance, as well as their low cost. Electrochemical sensors may be subdivided into potentiometric, amperometric, or conductometric types BIB006 BIB007 BIB008 . Enzymatic amperometric glucose biosensors are the most common devices commercially available and have been widely studied over the last few decades. Amperometric sensors monitor currents generated when electrons are exchanged either directly or indirectly between a biological system and an electrode BIB004 . Generally, glucose measurements are based on interactions with one of three enzymes: hexokinase, glucose oxidase (GOx) or glucose-1-dehydrogenase (GDH) BIB003 . The hexokinase assay is the reference method for measuring glucose using spectrophotometry in many clinical laboratories . Glucose biosensors for SMBG are usually based on the two enzyme families, GOx and GDH. These enzymes differ in redox potentials, cofactors, turnover rates and selectivity for glucose BIB011 . GOx is the standard enzyme for biosensors; it has a relatively high selectivity for glucose.
GOx is easy to obtain, cheap, and can withstand greater extremes of pH, ionic strength, and temperature than many other enzymes, thus allowing less stringent conditions during the manufacturing process and relatively relaxed storage norms for use by lay biosensor users BIB011 BIB012 . The glucose biosensor is based on the fact that immobilized GOx catalyzes the oxidation of β-D-glucose by molecular oxygen, producing gluconic acid and hydrogen peroxide BIB002 . To work as a catalyst, GOx requires a redox cofactor, flavin adenine dinucleotide (FAD). FAD works as the initial electron acceptor and is reduced to FADH2.
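This enzymatic step can be written as:

```latex
\mathrm{glucose} + \mathrm{GOx(FAD)} \longrightarrow \mathrm{gluconolactone} + \mathrm{GOx(FADH_2)}
```

The gluconolactone subsequently hydrolyzes to gluconic acid, while the reduced cofactor must be reoxidized for the catalytic cycle to continue.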
|
Glucose Biosensors: An Overview of Use in Clinical Practice <s> Glucose GOx FAD Glucolactone GOx FADH <s> Abstract An amperometric method utilizing a glucose electrode has been developed for the determination of blood glucose. The time of measurement is less than 12 s if a kinetic method is used and 1 min if a steady-state method is used. The long-term stability of the electrode is ca. 0.1% change from maximum response per day when stored at room temperature for over 10 months. The enzyme electrode determination of blood sugar compares favorably with commonly used methods with respect to accuracy, precision, and stability. The only reagent required for blood sugar determinations is a buffer solution. The electrode consists of a metallic sensing layer covered by a thin film of immobilized glucose oxidase held in place by means of cellophane. When poised at the correct potential, the current produced is proportional to the glucose concentration. <s> BIB001 </s> Glucose Biosensors: An Overview of Use in Clinical Practice <s> Glucose GOx FAD Glucolactone GOx FADH <s> There are now a number of distinct strategies which can be employed to make amperometric enzyme electrodes. These include the use of homogeneous mediators, modified electrodes and organic conducting salts. In this paper we review these strategies and discuss their application to NAD(P)H dependent dehydrogenase and flavoprotein based biosensors. In addition we discuss recent work on the immobilisation of glucose oxidase in polypyrrole, poly-N-methylpyrrole, polyaniline and polyphenol films electrochemically grown at the electrode surface and on the covalent attachment of redox mediators to glucose oxidase in order to achieve direct electron transfer to the electrode. 
<s> BIB002 </s> Glucose Biosensors: An Overview of Use in Clinical Practice <s> Glucose GOx FAD Glucolactone GOx FADH <s> The role of pyrroloquinoline quinone (PQQ) as a redox shuttle between an electrode and the active site of soluble quinoprotein glucose dehydrogenase (sGDH) from Acinetobacter calcoaceticus has been investigated using both electrochemical and spectrophotometric methods. Reversible redox behavior of PQQ was observed at cystamine-modified gold electrodes. sGDH is able to reduce free PQQ, i.e. PQQ that is not bound to the enzyme and therefore could act as a mediator between the enzyme and the cystamine-modified electrode. The second order rate constants for the reduction of PQQ by sGDH are 6 × 10³ M⁻¹ s⁻¹ and 64 M⁻¹ s⁻¹ in the absence and in the presence of calcium ions, respectively. Similarly, the interaction with a second redox protein is realized via the PQQ shuttle. Using DC voltammetry, the reduction rate of cytochrome c (cyt c) by PQQH2 was determined to be on the order of 10⁴ M⁻¹ s⁻¹ <s> BIB003 </s> Glucose Biosensors: An Overview of Use in Clinical Practice <s> Glucose GOx FAD Glucolactone GOx FADH <s> A novel method to generate an integrated electrically contacted glucose dehydrogenase electrode by the surface reconstitution of the apo-enzyme on a pyrroloquinoline quinone (PQQ)-modified polyaniline is described. In situ electrochemical surface plasmon resonance (SPR) is used to characterize the bioelectrocatalytic functions of the system. <s> BIB004 </s> Glucose Biosensors: An Overview of Use in Clinical Practice <s> Glucose GOx FAD Glucolactone GOx FADH <s> The direct electrochemical oxidation of beta-nicotinamide adenine dinucleotide (NADH) at clean electrodes proceeds through a radical cation intermediate at high overpotentials and is subject to rapid fouling.
Consequently, there has been a considerable body of work over the last 20 years looking at ways in which to catalyse the reaction using a wide variety of different types of modified electrode. These studies have resulted in a good knowledge of the essential features required for efficient catalysis. In designing modified electrodes for NADH oxidation, it is not only important to identify suitable redox groups, which can catalyse NADH oxidation and can be attached to the electrode surface; it is also important to ensure facile charge transport between the immobilised redox sites in order to ensure that, in multilayer systems, the whole of the redox film contributes to the catalytic oxidation. One way to achieve this is by the use of electronically conducting polymers such as poly(aniline). <s> BIB005 </s> Glucose Biosensors: An Overview of Use in Clinical Practice <s> Glucose GOx FAD Glucolactone GOx FADH <s> A review is presented dealing with electrocatalytic NADH oxidation at mediator-modified electrodes, summarising the history of the topic, as well as the present state of the art. <s> BIB006 </s> Glucose Biosensors: An Overview of Use in Clinical Practice <s> Glucose GOx FAD Glucolactone GOx FADH <s> An electrically contacted glucose dehydrogenase (GDH) enzyme electrode is fabricated by the reconstitution of the apo-GDH on pyrroloquinoline quinone (PQQ)-functionalized Au nanoparticles (Au-NPs), 1.4 nm, associated with a Au electrode. The Au-NPs functionalized with a single amine group were attached to the Au surface by 1,4-benzenedithiol bridges, and PQQ was covalently linked to the Au-NPs. The apo-GDH was then reconstituted on the PQQ cofactor sites. The surface coverage of GDH corresponded to 1.4 × 10⁻¹² mol cm⁻². The reconstituted enzyme revealed direct electrical contact with the electrode surface, and the bioelectrocatalytic oxidation of glucose occurred with a turnover number of 11,800 s⁻¹.
In contrast, a system that included the covalent attachment of GDH to the PQQ-Au-NPs monolayer in a random, nonaligned, configuration revealed lack of electrical communication between the enzyme and the electrode, albeit the enzyme existed in a bioactive structure. The bioelectrocatalytic function of the late... <s> BIB007
|
The cofactor is regenerated by its reaction with oxygen, leading to the formation of hydrogen peroxide. Hydrogen peroxide is oxidized at a catalytic anode, classically platinum (Pt). The electrode readily measures the resulting electron flow, which is proportional to the number of glucose molecules present in the blood BIB001 . Three general strategies are used for the electrochemical sensing of glucose: measuring oxygen consumption, measuring the amount of hydrogen peroxide produced by the enzyme reaction, or using a diffusible or immobilized mediator to transfer the electrons from GOx to the electrode. The number and types of GDH-based amperometric biosensors have been increasing recently. The GDH family includes GDH-pyrroloquinoline quinone (PQQ) BIB003 BIB007 BIB004 and GDH-nicotinamide-adenine dinucleotide (NAD) BIB002 BIB005 BIB006 . The enzymatic reaction of GDH is independent of dissolved oxygen. The quinoprotein GDH recognition element uses PQQ as a cofactor.
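As noted above, the measured current tracks the glucose concentration; at higher concentrations the enzyme saturates and the steady-state response follows Michaelis-Menten-type kinetics. The following is a minimal numerical sketch of this calibration idea; the i_max and apparent K_m values are illustrative assumptions, not taken from any specific commercial sensor.

```python
def electrode_current(c_glucose_mM, i_max_uA=100.0, km_mM=25.0):
    """Steady-state amperometric response of an enzyme electrode
    (Michaelis-Menten form): i = i_max * C / (Km + C).
    i_max and Km here are illustrative, not from a real device."""
    return i_max_uA * c_glucose_mM / (km_mM + c_glucose_mM)

def glucose_from_current(i_uA, i_max_uA=100.0, km_mM=25.0):
    """Invert the calibration curve to recover the concentration."""
    return km_mM * i_uA / (i_max_uA - i_uA)

# Round-trip over a physiologically relevant range (mM):
for c in (2.5, 5.0, 10.0, 26.0):
    i = electrode_current(c)
    assert abs(glucose_from_current(i) - c) < 1e-9
```

For concentrations well below K_m the response is nearly linear, with slope i_max/K_m.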
|
Glucose Biosensors: An Overview of Use in Clinical Practice <s> First-generation of Glucose Biosensors <s> The enzyme electrode is a miniature chemical transducer which functions by combining an electrochemical procedure with immobilized enzyme activity. This particular model uses glucose oxidase immobilized on a gel to measure the concentration of glucose in biological solutions and in the tissues in vitro. <s> BIB001 </s> Glucose Biosensors: An Overview of Use in Clinical Practice <s> First-generation of Glucose Biosensors <s> By coupling an immobilized enzyme system with an electrochemical sensor, the reagent requirement for this glucose method is eliminated. Miniaturization and a further simplification of the instrumentation for the continuous analysis of glucose is achieved. <s> BIB002 </s> Glucose Biosensors: An Overview of Use in Clinical Practice <s> First-generation of Glucose Biosensors <s> First-generation glucose biosensors relied on the use of the natural oxygen cosubstrate and the production and detection of hydrogen peroxide and were much simpler, especially when miniaturized sensors are concerned. More sophisticated bioelectronic systems for enhancing the electrical response, based on patterned monolayer or multilayer assemblies and organized enzyme networks on solid electrodes, have been developed for contacting GOx with the electrode support. Electrochemical biosensors are well suited for satisfying the needs of personal (home) glucose testing, and the majority of personal blood glucose meters are based on disposable (screen-printed) enzyme electrode test strips, which are mass produced by the thick film (screen-printing) microfabrication technology. In the counter and an additional “baseline” working electrode, various membranes (mesh) are incorporated into the test strips along with surfactants, to provide a uniform sample coverage. 
Such devices offer considerable promise for obtaining the desired clinical information in a simpler, user-friendly, faster, and cheaper manner compared to traditional assays. Continuous ex-vivo monitoring of blood glucose was proposed in 1974 and the majority of glucose sensors used for in-vivo applications are based on the GOx-catalyzed oxidation of glucose by oxygen. The major factors that play a role in the development of clinically accurate in-vivo glucose sensors include issues related to biocompatibility, miniaturization, long-term stability of the enzyme and transducer, oxygen deficit, short stabilization times, in-vivo calibration, baseline drift, safety, and convenience. <s> BIB003
|
The concept of the biosensor for measuring glucose levels was first proposed in 1962 by Clark and Lyons from the Children's Hospital of Cincinnati . This glucose biosensor was composed of an oxygen electrode, an inner oxygen-semipermeable membrane, a thin layer of GOx, and an outer dialysis membrane. Enzymes could be immobilized at an electrochemical detector to form an enzyme electrode. The decrease in the measured oxygen concentration was proportional to the glucose concentration. Updike and Hicks significantly simplified the electrochemical glucose assay by immobilizing, and thereby stabilizing, GOx BIB001 BIB002 . They immobilized GOx in a polyacrylamide gel on an oxygen electrode for the first time and measured glucose concentration in biological fluids BIB001 . The first commercially successful glucose biosensor using Clark's technology was the Yellow Springs Instrument Company analyzer (Model 23A YSI analyzer) for the direct measurement of glucose, launched in 1975 and based on the amperometric detection of hydrogen peroxide. This analyzer was almost exclusively used in clinical laboratories because of its high cost, due to the expensive platinum electrode. The first-generation glucose biosensors were based on the use of the natural oxygen cosubstrate and on the detection of the hydrogen peroxide produced. Measurements of peroxide formation have the advantage of being simpler, especially when miniature devices are being considered BIB003 . However, the main problem with the first-generation glucose biosensors was that the amperometric measurement of hydrogen peroxide required a high operating potential, at which endogenous electroactive species are also oxidized, degrading selectivity. Considerable efforts during the late 1980s were devoted to minimizing the interference of endogenous electroactive species, such as ascorbic acid, uric acid, and certain drugs. Another drawback was the restricted solubility of oxygen in biological fluids, which produced fluctuations in the oxygen tension, known as the "oxygen deficit" .
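The two first-generation detection routes mentioned above (oxygen consumption and hydrogen peroxide production) correspond to the following electrode reactions; the stated anode potential of around +0.6 V vs. Ag/AgCl is a typical value, not specific to any one device:

```latex
\begin{align*}
\mathrm{GOx(FADH_2)} + \mathrm{O_2} &\longrightarrow \mathrm{GOx(FAD)} + \mathrm{H_2O_2} \\
\text{cathodic (Clark) route:}\quad \mathrm{O_2} + 4\,\mathrm{H^+} + 4e^- &\longrightarrow 2\,\mathrm{H_2O} \\
\text{anodic route (ca. $+0.6$ V):}\quad \mathrm{H_2O_2} &\longrightarrow \mathrm{O_2} + 2\,\mathrm{H^+} + 2e^-
\end{align*}
```

In the cathodic route the signal is the decrease in reduction current as glucose consumes oxygen; in the anodic route it is the oxidation current of the peroxide produced.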
|
Glucose Biosensors: An Overview of Use in Clinical Practice <s> Second-generation of Glucose Biosensors <s> Abstract Miniaturisation of the bedside artificial endocrine pancreas is necessary to provide a means of restoring physiological glycaemic excursions in diabetic patients in the long term. One of the remaining problems in producing such a sophisticated device is the difficulty in developing a sufficiently small glucose-monitoring system. A needle-type glucose sensor has been developed which is suitable for use in a closed-loop glycaemic control system. The wearable artificial endocrine pancreas, incorporating the needle-type glucose sensor, a computer calculating infusion rates of insulin, glucagon, or both, and infusion pumps, was tested in pancreatectomised dogs: the device produced perfect control of blood glucose for up to 7 days. <s> BIB001 </s> Glucose Biosensors: An Overview of Use in Clinical Practice <s> Second-generation of Glucose Biosensors <s> Electrochemical methods traditionally have found important applications in sample analysis and organic and inorganic synthesis. The electrode surface itself can be a powerful tool. This article is an update of chemically modified electrodes (CMEs) and rational molecular design of electrode surfaces. <s> BIB002 </s> Glucose Biosensors: An Overview of Use in Clinical Practice <s> Second-generation of Glucose Biosensors <s> Oxidoreductases, such as glucose oxidase, can be electrically wired to electrodes by electrostatic complexing or by covalent binding of redox polymers so that the electrons flow from the enzyme, through the polymer, to the electrode. We describe two materials for amperometric biosensors based on a cross-linkable poly(vinylpyridine) complex of [Os(bpy)₂Cl]⁺/²⁺ that communicates electrically with flavin adenine dinucleotide redox centers of enzymes such as glucose oxidase.
The uncomplexed pyridines of the poly(vinylpyridine) are quaternized with two types of groups, one promoting hydrophilicity (2-bromoethanol or 3-bromopropionic acid), the other containing an active ester (N-hydroxysuccinimide) that forms amide bonds with both lysines on the enzyme surface and with an added polyamine cross-linking agent (triethylenetetraamine, trien). In the presence of glucose oxidase and trien this polymer forms rugged, cross-linked, electroactive films on the surface of electrodes, thereby eliminating the requirement for a membrane for containing the enzyme and redox couple. The glucose response time of the resulting electrodes is less than 10 s. The glucose response under N₂ shows an apparent Michaelis constant, K′m = 7.3 mM, and limiting current densities, j_max, between 100 and 800 μA/cm². Currents are decreased by 30-50% in air-saturated solutions because of competition between O₂ and the Os(III) complex for electrons from the reduced enzyme. Rotating ring disk experiments in air-saturated solutions containing 10 mM glucose show that about 20% of the active enzyme is electrooxidized via the Os(III) complex, while the rest is oxidized by O₂. These results suggest that only part of the active enzyme is in electrical contact with the electrode. <s> BIB003 </s> Glucose Biosensors: An Overview of Use in Clinical Practice <s> Second-generation of Glucose Biosensors <s> The market for decentralized clinical testing is undergoing expansion. Electrochemical biosensors represent one approach to the different demands of this market. A range of sensing systems are described which use electrochemical techniques for the measurement of various analytes and which have been demonstrated to be applicable to the manufacturing methods required for single-use disposable tests.
<s> BIB004 </s> Glucose Biosensors: An Overview of Use in Clinical Practice <s> Second-generation of Glucose Biosensors <s> Diabetes is one of the leading causes of death and disability in the world. There is a large population in the world suffering from this disease, and the healthcare costs increase every year. It is a chronic disorder resulting from insulin deficiency and hyperglycemia and carries a high risk of complications for the eyes, kidneys, peripheral nerves, heart, and blood vessels. Quick diagnosis and early prevention are critical for the control of the disease status. Traditional biosensors such as glucose meters and glycohemoglobin test kits are widely used in vitro for this purpose because glucose and glycohemoglobin are the two major indicators directly involved in diabetes diagnosis and long-term management. The market size and huge demand for these tests make it a model disease to develop new approaches to biosensors. In this review, we briefly summarize the principles of biosensors, the current commercial devices available for glucose and glycohemoglobin measurements, and the recent work in the area of artificial receptors and the potential for the development of new devices for diabetes specifically connected with in vitro monitoring of glucose and glycohemoglobin HbA(1c). <s> BIB005 </s> Glucose Biosensors: An Overview of Use in Clinical Practice <s> Second-generation of Glucose Biosensors <s> Direct electron transfer between the enzyme and the electrode in biosensors requires high efficiency; therefore, the search for synthetic replacements for oxygen led to the development of enzyme mediators and modified electrodes in biosensor fabrication. In this context, a number of electron acceptors and complexes have been used. The present paper gives an overview of the various methodologies involved in mediated systems, their merits and wide applications.
<s> BIB006 </s> Glucose Biosensors: An Overview of Use in Clinical Practice <s> Second-generation of Glucose Biosensors <s> Recent progress in third-generation electrochemical biosensors based on the direct electron transfer of proteins is reviewed. The development of three generations of electrochemical biosensors is also simply addressed. Special attention is paid to protein-film voltammetry, which is a powerful way to obtain the direct electron transfer of proteins. Research activities on various kinds of biosensors are discussed according to the proteins (enzymes) used in the specific work. <s> BIB007 </s> Glucose Biosensors: An Overview of Use in Clinical Practice <s> Second-generation of Glucose Biosensors <s> Carbon nanotube (CNT) is a very attractive material for the development of biosensors because of its capability to provide strong electrocatalytic activity and minimize surface fouling of the sensors. This article reviews our recent developments of oxidase- and dehydrogenase-amperometric biosensors based on the immobilization of CNTs, the co-immobilization of enzymes on the CNTs/Nafion or the CNT/Teflon composite materials, or the attachment of enzymes on the controlled-density aligned CNT-nanoelectrode arrays. The excellent electrocatalytic activities of the CNTs on the redox reactions of hydrogen peroxide, nicotinamide adenine dinucleotide (NADH), and homocysteine have been demonstrated. Successful applications of the CNT-based biosensors reviewed herein include the low-potential detections of glucose, organophosphorus compounds, and alcohol. <s> BIB008 </s> Glucose Biosensors: An Overview of Use in Clinical Practice <s> Second-generation of Glucose Biosensors <s> An amperometric-mediated glucose sensor has been developed by employing a silica sono-gel carbon composite electrode (SCC). The chosen mediators, ferrocene (Fc) and 1,2-diferrocenylethane (1), have been immobilized in the sono-gel composite matrix. 
The complex 1 has been employed for the first time as an electron transfer mediator for signal transduction from the active centre of the enzyme to the electrode conductive surface. After the optimisation of the construction procedure, the best operative conditions for the analytical performance of the biosensor have been investigated in terms of pH, temperature and applied potential. Cyclic voltammetric and amperometric measurements have been used to study the response of both glucose sensors, which exhibit a fast response and good reproducibility. The sensitivity to glucose is quite similar (6.7 ± 0.1 μA/mM versus 5.3 ± 0.1 μA/mM) when either Fc or 1 is used as the mediator, as are the detection limit, ca. 1.0 mM (S/N = 3), and the range of linear response (up to 13.0 mM). However, the dynamic range for glucose determination is wider when using 1 (up to 25.0 mM). The apparent Michaelis-Menten constants, calculated from the reciprocal plot under steady state conditions, are 27.7 and 31.6 mM for SCC-Fc/GOx and SCC-1/GOx electrodes, respectively, in agreement with a slightly higher electrocatalytic efficiency for the mediator 1. <s> BIB009 </s> Glucose Biosensors: An Overview of Use in Clinical Practice <s> Second-generation of Glucose Biosensors <s> First-generation glucose biosensors relied on the use of the natural oxygen cosubstrate and the production and detection of hydrogen peroxide and were much simpler, especially when miniaturized sensors are concerned. More sophisticated bioelectronic systems for enhancing the electrical response, based on patterned monolayer or multilayer assemblies and organized enzyme networks on solid electrodes, have been developed for contacting GOx with the electrode support.
Electrochemical biosensors are well suited for satisfying the needs of personal (home) glucose testing, and the majority of personal blood glucose meters are based on disposable (screen-printed) enzyme electrode test strips, which are mass produced by the thick film (screen-printing) microfabrication technology. In the counter and an additional “baseline” working electrode, various membranes (mesh) are incorporated into the test strips along with surfactants, to provide a uniform sample coverage. Such devices offer considerable promise for obtaining the desired clinical information in a simpler, user-friendly, faster, and cheaper manner compared to traditional assays. Continuous ex-vivo monitoring of blood glucose was proposed in 1974 and the majority of glucose sensors used for in-vivo applications are based on the GOx-catalyzed oxidation of glucose by oxygen. The major factors that play a role in the development of clinically accurate in-vivo glucose sensors include issues related to biocompatibility, miniaturization, long-term stability of the enzyme and transducer, oxygen deficit, short stabilization times, in-vivo calibration, baseline drift, safety, and convenience. <s> BIB010
|
The abovementioned limitations of the first-generation glucose biosensors were overcome by using mediated glucose biosensors, i.e., second-generation glucose sensors. The improvements were achieved by replacing oxygen with non-physiological electron acceptors, called redox mediators, that are able to carry electrons from the enzyme to the surface of the working electrode . A reduced mediator is formed instead of hydrogen peroxide and is then reoxidized at the electrode, providing an amperometric signal and regenerating the oxidized form of the mediator BIB005 . A variety of electron mediators, such as ferrocene, ferricyanide, quinones, tetrathiafulvalene (TTF), tetracyanoquinodimethane (TCNQ), thionine, methylene blue, and methyl viologen, were used to improve sensor performance BIB001 BIB006 . Ferrocenes fit all the criteria for a good mediator: they do not react with oxygen, are stable in both the oxidized and reduced forms, are pH-independent, show reversible electron-transfer kinetics, and react rapidly with the enzyme BIB006 . They were extensively studied as electron-shuttling mediators between both GOx and GDH-PQQ and the electrodes BIB009 . The first research on the amperometric determination of blood glucose using a redox couple-mediated, GOx-catalyzed reaction was demonstrated in 1970 . However, this study did not lead to the rapid application of amperometry to SMBG in the home setting . During the 1980s, mediator-based second-generation glucose biosensors, commercial screen-printed strips for SMBG, and modified electrodes and tailored membranes for enhancing sensor performance were developed and implemented BIB004 BIB002 BIB007 . The first electrochemical blood glucose monitor for self-monitoring by diabetic patients was pen-sized and was launched in 1987 as ExacTech by Medisense Inc. It used GDH-PQQ and a ferrocene derivative . Its success led to a revolution in the health care of diabetic patients.
The current operation of most commercial glucose biosensors does not differ significantly from that of the ExacTech meter. Various self-monitoring glucose biosensors are based on the use of ferrocene or ferricyanide mediators. Various strategies to facilitate electron transfer between the GOx redox center and the electrode surface have been employed, such as the "wiring" of GOx by electron-conducting redox hydrogels, the chemical modification of GOx with electron-relay groups, and the application of nanomaterials as electrical connectors BIB010 BIB003 BIB008 .
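The mediated, second-generation cycle described in this section can be summarized as follows, where M_ox and M_red denote the oxidized and reduced forms of the mediator:

```latex
\begin{align*}
\mathrm{glucose} + \mathrm{GOx(FAD)} &\longrightarrow \mathrm{gluconolactone} + \mathrm{GOx(FADH_2)} \\
\mathrm{GOx(FADH_2)} + 2\,\mathrm{M_{ox}} &\longrightarrow \mathrm{GOx(FAD)} + 2\,\mathrm{M_{red}} + 2\,\mathrm{H^+} \\
2\,\mathrm{M_{red}} &\longrightarrow 2\,\mathrm{M_{ox}} + 2e^- \quad \text{(at the electrode, at low potential)}
\end{align*}
```

Because the mediator, rather than oxygen, reoxidizes the enzyme, the anodic signal can be measured at a lower potential, reducing interference from endogenous electroactive species.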
|
Glucose Biosensors: An Overview of Use in Clinical Practice <s> Third-generation of Glucose Biosensors <s> A novel approach to prepare a stable charge transfer complex (CTC) electrode for the direct oxidation of flavoproteins and the fabrication of a third generation amperometric biosensor (Koopal, C.G.J.; Feiters, M.C.; Nolte, R.J.M. Bioelectrochem. Bioenerg. 1992, 29, 159-175) system is described. Tetrathiafulvalene-tetracyanoquinodimethane (TTF-TCNQ), an organic CTC, is grown at the surface of a shapable electroconductive (SEC) film (a polyanion-doped polypyrrole film) in such a way that it makes a tree-shaped crystal structure standing vertically on the surface. Glucose oxidase (GOx) is adsorbed and cross-linked with glutaraldehyde to fix at the surface of the CTC structure. The space between crystals is filled with cross-linked gelatin to ensure the stability of the treelike crystal structure as well as the stability of the enzyme. Because of the close proximity and the favorable orientation of the enzyme at the CTC surface, the enzyme is directly oxidized at the crystal surface, which leads to a glucose sensor with remarkably improved performance. It works at a potential from 0.0 to 0.25 V (vs Ag/AgCl). The maximum current density at 0.25 V reaches 1.8 mA/cm2, with an extended linear range. The oxygen in the normal buffer solution has little effect on the sensor output. The current caused by interference contained in the physiological fluids is negligible. The working life as well as the shelf life of the sensor is substantially prolonged. The sensor was continuously used in a flow injection system with a continuous polarization at 0.1 V, and the samples (usually 10 mM glucose) were injected at 30 min intervals. After 100 days of continuous use, the current output dropped to 40% of the initial level. No change in the output of the sensor was observed over a year when the sensor was stored dry in a freezer. 
The electrochemical rate constants and the effective Michaelis constant of the system are reported. <s> BIB001 </s> Glucose Biosensors: An Overview of Use in Clinical Practice <s> Third-generation of Glucose Biosensors <s> Abstract The in situ potentiostatic electropolymerization of pyrrole (Py) on a Pt electrode in a thin-layer amperometric cell and the entrapment of the enzyme glucose oxidase (GOx) for the determination of glucose are reported. Polypyrrole (PPy) is directly formed by continuous passage of a buffered solution of the monomer (0.4 M) and enzyme (250 U mL −1 at pH 7 at a flow rate of 0.05−0.1 mL min −1 under a constant applied potential of + 0.85 V vs Ag/AgCl↓. The electrosynthesis of PPy by injection of 500 μL of a Py + GOx solution in a carrier electrolyte consisting of 0.05 M phosphate buffer and 0.1 M KCl at pH 7.0 was also assayed. The influence of the electropolymerization conditions on the analytical response of the sensor to glucose was investigated. The analytical performance of the PPy/GOx sensor was also studied in terms of durability and storage life, as well as selectivity against electroactive species such as ascorbic acid and uric acid as a function of the thickness of the polymer film formed. <s> BIB002 </s> Glucose Biosensors: An Overview of Use in Clinical Practice <s> Third-generation of Glucose Biosensors <s> A disposable glucose biosensor based on glucose oxidase immobilized on tetrathiafulvalene−tetracyanoquinodimethane (TTF−TCNQ) conducting organic salt synthesized in situ onto an overoxidized poly(pyrrole) (PPyox) film is described. The TTF−TCNQ crystals grow through the nonconducting polypyrrole film (ensuring electrical connection to the underlying Pt electrode) and emerge from the film forming a treelike structure. The PPyox film prevents the interfering substances from reaching the electrode surface. The sensor behavior can be modeled by assuming a direct reoxidation of the enzyme at the surface of the TTF−TCNQ crystals. 
A heterogeneous rate constant around 10-6−10-7 cm s-1 has been estimated. The biosensor is nearly oxygen- and interference-free and when integrated in a flow injection system displays a remarkable sensitivity (70 nA/mM) and stability. <s> BIB003 </s> Glucose Biosensors: An Overview of Use in Clinical Practice <s> Third-generation of Glucose Biosensors <s> Peroxidases have conquered a prominent position in biotechnology and associated research areas (enzymology, biochemistry, medicine, genetics, physiology, histo- and cytochemistry). They are one of the most extensively studied groups of enzymes and the literature is rich in research papers dating back from the 19th century. Nevertheless, peroxidases continue to be widely studied, with more than 2000 articles already published in 2002 (according to the Institute for Scientific Information). The importance of peroxidases is emphasised by their wide distribution among living organisms and by their multiple physiological roles. They have been divided into three superfamilies according to their source and mode of action: plant peroxidases, animal peroxidases and catalases. Among all peroxidases, horseradish peroxidase (HRP) has received a special attention and will be the focus of this review. A brief description of the three super-families is included in the first section of this review. In the second section, a comprehensive description of the present state of knowledge of the structure and catalytic action of HRP is presented. The physiological role of peroxidases in higher plants is described in the third section. And finally, the fourth section addresses the applications of peroxidases, especially HRP, in the environmental and health care sectors, and in the pharmaceutical, chemical and biotechnological industries. 
<s> BIB004 </s> Glucose Biosensors: An Overview of Use in Clinical Practice <s> Third-generation of Glucose Biosensors <s> Recent progress in third-generation electrochemical biosensors based on the direct electron transfer of proteins is reviewed. The development of three generations of electrochemical biosensors is also simply addressed. Special attention is paid to protein-film voltammetry, which is a powerful way to obtain the direct electron transfer of proteins. Research activities on various kinds of biosensors are discussed according to the proteins (enzymes) used in the specific work. <s> BIB005 </s> Glucose Biosensors: An Overview of Use in Clinical Practice <s> Third-generation of Glucose Biosensors <s> A new material consisting of a water-dispersed complex of polypyrrole-polystyrensulfonate (PPy) embedded in polyacrylamide (PA) has been prepared and tested as enzyme immobilizing system for its use in amperometric biosensors. Glucose oxidase (GOx) and the water-dispersed polypyrrole complex were entrapped within polyacrylamide microgels by polymerization of acrylamide in the dispersed phase of concentrated emulsions containing GOx and PPy. Polymerization of the dispersed phase provides microparticles whose size lies between 3.5 and 7 microm. The aim of incorporating polypyrrole into the polyacrylamide microparticles was to facilitate the direct transfer of the electrons released in the enzymatic reaction from the catalytic site to the platinum electrode surface. The conductivity of the microparticles was measured by a four-point probe method and confirmed by the successful anaerobic detection of glucose by the biosensor. Thus, the polyacrylamide-polypyrrole (PAPPy) microparticles combine the conductivity of polypyrrole and the pore size control of polyacrylamide. 
The effects of the polyacrylamide-polypyrrole ratio and cross-linking on the biosensor response have been investigated, as well as the influence of analytical parameters such as pH and enzymatic loading. The PAPPy biosensor is free of interferences arising from ascorbic and uric acids, which allows its use for quantitative analysis in human blood serum. <s> BIB006 </s> Glucose Biosensors: An Overview of Use in Clinical Practice <s> Third-generation of Glucose Biosensors <s> A mediator-free glucose biosensor, termed a “third-generation biosensor,” was fabricated by immobilizing glucose oxidase (GOD) directly onto an oxidized boron-doped diamond (BDD) electrode. The surface of the oxidized BDD electrode possesses carboxyl groups (as shown by Raman spectra) which covalently cross-link with GOD through glutaraldehyde. Glucose was determined in the absence of a mediator used to transfer electrons between the electrode and enzyme. O2 has no effect on the electron transfer. The effects of experimental variables (applied potential, pH and cross-link time) were investigated in order to optimize the analytical performance of the amperometric detection method. The resulting biosensor exhibited fast amperometric response (less than 5 s) to glucose. The biosensor provided a linear response to glucose over the range 6.67×10−5 to 2×10−3 mol/L, with a detection limit of 2.31×10−5 mol/L. The lifetime, reproducibility and measurement repeatability were evaluated and satisfactory results were obtained. <s> BIB007
|
Third-generation glucose biosensors are reagentless, relying on direct electron transfer between the enzyme and the electrode without mediators. In place of potentially toxic mediators, the electrode achieves direct electron transfer through organic conducting materials based on charge-transfer complexes BIB001 BIB003. Third-generation designs have therefore led to implantable, needle-type devices for continuous in vivo monitoring of blood glucose. Conducting organic salts, such as tetrathiafulvalene-tetracyanoquinodimethane (TTF-TCNQ), are known to mediate the electrochemistry of pyrroloquinoline quinone enzymes (GDH-PQQ) as well as of flavoproteins (GOx). The absence of mediators also provides the biosensors with superior selectivity. However, only a few enzymes, including peroxidases, have been shown to exhibit direct electron transfer at conventional electrode surfaces BIB005 BIB004. Several other direct electron transfer approaches for third-generation glucose biosensors have been reported, including TTF-TCNQ with a tree-like crystal structure BIB001 BIB003, the GOx/polypyrrole system BIB001 BIB006 BIB002, and oxidized boron-doped diamond electrodes BIB007.
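The current output of an amperometric enzyme electrode such as those above typically follows an apparent Michaelis-Menten dependence on glucose concentration, characterized by an effective Michaelis constant (reported, for example, for the TTF-TCNQ sensors). A minimal sketch of that response curve follows; the `i_max_nA` and `km_app_mM` values are illustrative assumptions, not figures from any cited device:

```python
def sensor_current(glucose_mM, i_max_nA=1800.0, km_app_mM=15.0):
    """Apparent Michaelis-Menten response of an amperometric enzyme
    electrode: current rises nearly linearly at low glucose and
    saturates once glucose exceeds the effective Michaelis constant.
    All parameter values here are illustrative assumptions."""
    return i_max_nA * glucose_mM / (km_app_mM + glucose_mM)

# Response is near-linear well below K_M(app), then flattens:
for g in (2.0, 5.0, 10.0, 20.0):
    print(f"{g:5.1f} mM -> {sensor_current(g):7.1f} nA")
```

Because the response is only linear well below the apparent K_M, sensors with an extended linear range (a higher effective Michaelis constant) are preferred for covering the physiological and hyperglycemic glucose range.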
|
Glucose Biosensors: An Overview of Use in Clinical Practice <s> Continuous Glucose Monitoring Systems (CGMS) <s> An artificial pancreas capable of maintaining blood sugar homeostasis within the physiological range is described in this paper. The blood sugar is continuously monitored and then interpreted by a minicomputer which in turn controls and implements the delivery of insulin (or glucose). The entire system is automatic and by giving insulin according to a projected blood sugar level the pattern of insulin administration is similar to the biphasic response of the normal pancreas. Five parameters for control can be selected and altered at will so that any level of normoglycemia can be maintained. Hypoglycemia is not encountered, and none of the patients experienced any side effects during or after the trials. The clinical trials involved a two-day study. On the first day the blood sugar profiles were monitored throughout the day. The patients were given their usual doses of subcutaneous insulin and ate measured meals and snacks. On the second day, they received no subcutaneous insulin; insulin was administered intravenously in accordance with the moment-to-moment requirements of the patients who were given meals the same as those of the previous day. Graphs plotted on a common time scale compare the blood sugar patterns on the two successive days and show the significant improvement in blood sugar homeostasis achieved by this artificial pancreas. <s> BIB001 </s> Glucose Biosensors: An Overview of Use in Clinical Practice <s> Continuous Glucose Monitoring Systems (CGMS) <s> Abstract Miniaturisation of the bedside artificial endocrine pancreas is necessary to provide a means of restoring physiological glycaemic excursions in diabetic patients in the long term. One of the remaining problems in producing such a sophisticated device is the difficulty in developing a sufficiently small glucose-monitoring system. 
A needle-type glucose sensor has been developed which is suitable for use in a closed-loop glycaemic control system. The wearable artificial endocrine pancreas, incorporating the needle-type glucose sensor, a computer calculating infusion rates of insulin, glucagon, or both, and infusion pumps, was tested in pancreatectomised dogs: the device produced perfect control of blood glucose for up to 7 days. <s> BIB002 </s> Glucose Biosensors: An Overview of Use in Clinical Practice <s> Continuous Glucose Monitoring Systems (CGMS) <s> A new miniaturized glucose oxidase based needle-type glucose microsensor has been developed for subcutaneous glucose monitoring. The sensor is equivalent in shape and size to a 26-gauge needle (0.45-mm o.d.) and can be implanted with ease without any incision. The novel configuration greatly facilitates the deposition of enzyme and polymer films so that sensors with characteristics suitable for in vivo use (upper limit of linear range > 15 mM, response time 60%). The sensor response is largely independent of oxygen tension in the normal physiological range. It also exhibits good selectivity against common interferences except for the exogenous drug acetaminophen. <s> BIB003 </s> Glucose Biosensors: An Overview of Use in Clinical Practice <s> Continuous Glucose Monitoring Systems (CGMS) <s> OBJECTIVE ::: To develop a reliable and practical glucose monitoring system by combining a needle-type glucose sensor with a microdialysis sampling technique for long-term subcutaneous tissue glucose measurements. ::: ::: ::: RESEARCH DESIGN AND METHODS ::: A microdialysis Cuprophan hollow-fiber probe (inner diameter, 0.20 mm; length, 15 mm) was perfused with isotonic saline solution (120 microliters/h) and glucose concentrations in the dialysate were measured by a needle-type glucose sensor extracorporeally. This system was tested both in vitro and in vivo.
Subcutaneous tissue glucose concentrations were then monitored continuously in 5 healthy and 8 diabetic volunteers for 7 to 8 days. A hollow-fiber probe was inserted into the abdominal subcutaneous tissue. ::: ::: ::: RESULTS ::: This monitoring system achieved excellent results in vitro. Subcutaneous tissue glucose concentrations were measured in a wide range from 1.7 to > 27.8 mM glucose, with a time delay of 6.9 +/- 1.2 min associated with a rise in glucose and 8.8 +/- 1.6 min with a fall in the glucose level (means +/- SE). The overall correlation between subcutaneous tissue (Y) and blood (X) glucose concentration was Y = 1.08X + 0.19 (r = 0.99). The subcutaneous tissue glucose concentration could be monitored precisely for 4 days without any in vivo calibrations and for 7 days by introducing in vivo calibrations. ::: ::: ::: CONCLUSIONS ::: Glycemic excursions could be monitored precisely in the subcutaneous tissue by this microdialysis sampling method with a needle-type glucose sensor in ambulatory diabetic patients. <s> BIB004 </s> Glucose Biosensors: An Overview of Use in Clinical Practice <s> Continuous Glucose Monitoring Systems (CGMS) <s> An implantable 0.29 mm o.d. flexible wire electrode was designed for subcutaneous monitoring of glucose. The electrode was formed by sequentially depositing in a 0.09 mm deep shielded recess at the tip of a polyimide-insulated 0.25 mm gold wire a "wired" glucose oxidase (GOX) sensing layer, a mass transport limiting layer, and a nonfouling biocompatible layer. The glucose sensing layer was formed by cross-linking (poly[(1-vinylimidazolyl)osmium(4,4'-dimethylbipyridine)2Cl] )+/2+(PVI13-dme - Os) and GOX with poly(ethylene glycol) diglycidyl ether (PEG). The glucose mass transport restricting layer consisted of a poly(ester sulfonic acid) film (Eastman AQ 29D) and a copolymer of polyaziridine and poly(vinyl pyridine) partially quaternized with methylene carboxylate. 
The outer biocompatible layer was formed by photo-cross-linking tetraacrylated poly(ethylene oxide). The three layers contained no leachable components and had a total mass less than 2.2 micrograms (approximately 50 ng of Os). When poised at +200 mV vs SCE and operated at 37 degrees C, the 5 x 10(-4) cm2 electrode had in vitro a sensitivity of 1-2.5 nA mM-1. The current increased with the glucose concentration up to 60 mM, and the 10-90% response time was approximately 1 min when the glucose concentration was abruptly raised from 5 to 10 mM. The sensitivity decreased by less than 4% over a test period of 1 week, during which the electrode was operated continuously in a 10 mM glucose physiological buffer solution at 37 degrees C.(ABSTRACT TRUNCATED AT 250 WORDS) <s> BIB005 </s> Glucose Biosensors: An Overview of Use in Clinical Practice <s> Continuous Glucose Monitoring Systems (CGMS) <s> Abstract ::: The kinetics of the fall in subcutaneous fluid glucose concentration in anesthetized rats (n = 7) after intravenous injection of insulin (0.5 units/kg) was studied by using 5 × 10−4 cm2 active area, <150-sec 10–90% response time, amperometric glucose sensors. The onset of the decline in the subcutaneous glucose concentration was delayed and statistically different (P < 0.001) from that in blood (8.9 ± 2.1 min vs. 3.3 ± 0.5 min). Similarly, the rate of drop in glucose concentration between 6 and 20 min after the insulin injection was different for subcutaneous tissue (3.9 ± 1.3 mg⋅dl−1⋅ min−1) and blood (6.8 ± 2.0 mg⋅dl−1⋅min−1) (P = 0.003). The hypoglycemic nadir in subcutaneous fluid occurred 24.5 ± 6.8 min after that in the blood (P < 0.001). A “forward” mass-transfer model, predicting the subcutaneous glucose concentration from the blood glucose concentrations and an “inverse” model, predicting the blood glucose concentration from the subcutaneous glucose concentration were derived. 
By using an algorithm based on the latter, the average discrepancy between the measured blood glucose concentration and that estimated from the subcutaneous measurement through the entire 4-hr experiment was reduced from 22.9% to 11.1% (P = 0.025). The maximum discrepancy during the 40-min period after the injection of insulin was reduced from 84.1% to 29.3% (P = 0.006). <s> BIB006 </s> Glucose Biosensors: An Overview of Use in Clinical Practice <s> Continuous Glucose Monitoring Systems (CGMS) <s> Current treatment regiments for individuals depending on exogenous insulin are based on measurements of blood glucose obtained through painful finger sticks. The shift to minimal or noninvasive continuous glucose monitoring primarily involves a shift from blood glucose measurements to devices measuring subcutaneous interstitial fluid (ISF) glucose. As the development of these devices progresses, details of the dynamic relationship between blood glucose and interstitial glucose dynamics need to be firmly established. This is a challenging task insofar as direct measures of ISF glucose are not readily available. The current article investigated the dynamic relationship between plasma and ISF glucose using a model-based approach. A two-compartment model system previously validated on data obtained with the MiniMed Continuous Glucose Monitoring System (CGMS) is reviewed and predictions from the original two-compartment model were confirmed using new data analysis of glucose dynamics in plasma and hindlimb lym... <s> BIB007 </s> Glucose Biosensors: An Overview of Use in Clinical Practice <s> Continuous Glucose Monitoring Systems (CGMS) <s> Background: The recent availability of a continuous glucose monitor offers the opportunity to match the demands of intensive diabetes management with a period of equally intensive blood glucose monitoring. 
The present study evaluates the performance of the MiniMed® continuous glucose monitoring system (CGMS) in patients with diabetes during home use. Methods: Performance data and demographic information were obtained from 135 patients who were (mean ± SD) 40.5 ± 14.5 years old, had an average duration of diabetes of 18.0 ± 9.8 years, 50% were female, 90% were Caucasian, and 87% of whom had been diagnosed with type 1 diabetes. Patients were selected by their physician, trained on the use of the CGMS and wore the device at home for 3 days or more. The performance of the CGMS was evaluated against blood glucose measurements obtained using each patient’s home blood glucose meter. Evaluation statistics included correlation, linear regression, mean difference and percent absolute difference scores, and Clarke e... <s> BIB008 </s> Glucose Biosensors: An Overview of Use in Clinical Practice <s> Continuous Glucose Monitoring Systems (CGMS) <s> The performances and the stability of a novel subcutaneous glucose monitoring system have been evaluated. GlucoDay (A. Menarini I.F.R. S.r.l, Florence Italy) is a portable instrument provided with a micro-pump and a biosensor coupled to a microdialysis system capable of recording the subcutaneous glucose level every 3 min. Long and short term stability of the biosensor are discussed and the results of some critical in vitro and in vivo (on rabbits) experiments are reported. A linear response up to 30 mM has been found for in vivo glucose concentration. The sensitivity referred to blood glucose is better than 0.1 mM and the zero current is typically below the equivalent of 0.1 mM. In the accuracy study a mean bias of 2.7 mg/dl and a correlation coefficient equal to 0.9697 have been found. At room temperature, an excellent membrane stability assures good performances up to 6 months from the first use. 
<s> BIB009 </s> Glucose Biosensors: An Overview of Use in Clinical Practice <s> Continuous Glucose Monitoring Systems (CGMS) <s> OBJECTIVE ::: We examined the reliability of two continuous glucose sensors in type 1 diabetic patients at night and during rapid glucose excursions and verified the hypothesized nocturnal hypoglycemic drift of the needle-type sensor (CGMSgold) and delay of the microdialysis sensor (GlucoDay). ::: ::: ::: RESEARCH DESIGN AND METHODS ::: Blood was sampled overnight twice per hour in 13 patients. Rapid-acting insulin was given subcutaneously 30 min after breakfast. Sampling once per minute started 45 min after breakfast and 75 min after insulin injection for 30 min, with the aim of determining peak and nadir glucose values. Mean absolute differences (MADs) between sensor and blood glucose values were calculated. Sensor curves were modeled for all patients using linear regression. Horizontal and vertical shifts of sensor curves from the blood glucose curves were assessed. A vertical shift indicates sensor drift and a horizontal shift sensor delay. ::: ::: ::: RESULTS ::: Drift was minimal in the needle-type and microdialysis sensors (-0.02 and -0.04 mmol/l). Mean +/- SD delay was 7.1 +/- 5.5 min for the microdialysis sensor (P < 0.001). MAD was 15.0% for the needle-type sensor and 13.6% for the microdialysis sensor (P = 0.013). After correction for the 7-min delay, the microdialysis MAD improved to 11.7% (P < 0.0001). ::: ::: ::: CONCLUSIONS ::: The microdialysis sensor was more accurate than the needle-type sensor, with or without correction for a 7-min delay. In contrast to the previous version, the current needle-type sensor did not exhibit nocturnal hypoglycemic drift. Continuous subcutaneous glucose sensors are valuable adjunctive tools for glucose trend analyses. However, considering the large MADs, individual sensor values should be interpreted with caution. 
<s> BIB010 </s> Glucose Biosensors: An Overview of Use in Clinical Practice <s> Continuous Glucose Monitoring Systems (CGMS) <s> BACKGROUND ::: The value of continuous glucose monitoring in the management of type 1 diabetes mellitus has not been determined. ::: ::: ::: METHODS ::: In a multicenter clinical trial, we randomly assigned 322 adults and children who were already receiving intensive therapy for type 1 diabetes to a group with continuous glucose monitoring or to a control group performing home monitoring with a blood glucose meter. All the patients were stratified into three groups according to age and had a glycated hemoglobin level of 7.0 to 10.0%. The primary outcome was the change in the glycated hemoglobin level at 26 weeks. ::: ::: ::: RESULTS ::: The changes in glycated hemoglobin levels in the two study groups varied markedly according to age group (P=0.003), with a significant difference among patients 25 years of age or older that favored the continuous-monitoring group (mean difference in change, -0.53%; 95% confidence interval [CI], -0.71 to -0.35; P<0.001). The between-group difference was not significant among those who were 15 to 24 years of age (mean difference, 0.08; 95% CI, -0.17 to 0.33; P=0.52) or among those who were 8 to 14 years of age (mean difference, -0.13; 95% CI, -0.38 to 0.11; P=0.29). Secondary glycated hemoglobin outcomes were better in the continuous-monitoring group than in the control group among the oldest and youngest patients but not among those who were 15 to 24 years of age. The use of continuous glucose monitoring averaged 6.0 or more days per week for 83% of patients 25 years of age or older, 30% of those 15 to 24 years of age, and 50% of those 8 to 14 years of age. The rate of severe hypoglycemia was low and did not differ between the two study groups; however, the trial was not powered to detect such a difference. 
::: ::: ::: CONCLUSIONS ::: Continuous glucose monitoring can be associated with improved glycemic control in adults with type 1 diabetes. Further work is needed to identify barriers to effectiveness of continuous monitoring in children and adolescents. (ClinicalTrials.gov number, NCT00406133.) <s> BIB011 </s> Glucose Biosensors: An Overview of Use in Clinical Practice <s> Continuous Glucose Monitoring Systems (CGMS) <s> Aims The aim of this study was to assess the performance of the Continuous Research Tool (CRT) in a multicentre clinical-experimental study. ::: ::: ::: ::: Methods Three patient groups totalling 28 subjects with diabetes [group A 10 Type 1 (Ulm), group B 10 Type 1 (Neuss), group C eight Type 2 (Aarhus)] participated in this trial. Two CRT microdialysis probes were inserted in parallel in the abdominal subcutaneous tissue for 120 h in each subject. In subjects in group A, glucose excursions were induced on one study day and those in group B underwent a glucose clamp (eu-, hypo- or hyperglycaemic) on one study day. CRT data were calibrated once with a retrospective calibration model based on a run-in time of 24 h and three blood glucose measurements per day. ::: ::: ::: ::: Results All analysable experiments, covering a broad range of blood glucose values, yielded highly accurate data for the complete experimental time with a mean relative absolute difference of 12.8 ± 6.0% and a predictive residual error sum of squares of 15.6 ± 6.3 (mean ± SD). Of all measurement results, 98.2% were in zones A and B of the error grid analysis. The average absolute differences were 1.14 mmol/l for Type 1 and 0.88 mmol/l for Type 2 diabetic patients. Relative absolute differences were 16.0% for Type 1 and 12.6% for Type 2 diabetic patients. ::: ::: ::: ::: Conclusions These results demonstrate that this microdialysis system allows reliable continuous glucose monitoring in patients with diabetes of either type. 
<s> BIB012 </s> Glucose Biosensors: An Overview of Use in Clinical Practice <s> Continuous Glucose Monitoring Systems (CGMS) <s> OBJECTIVE ::: To evaluate long-term effects of continuous glucose monitoring (CGM) in intensively treated adults with type 1 diabetes. ::: ::: ::: RESEARCH DESIGN AND METHODS ::: We studied 83 of 86 individuals >or=25 years of age with type 1 diabetes who used CGM as part of a 6-month randomized clinical trial in a subsequent 6-month extension study. RESULTS After 12 months, median CGM use was 6.8 days per week. Mean change in A1C level from baseline to 12 months was -0.4 +/- 0.6% (P < 0.001) in subjects with baseline A1C >or=7.0%. A1C remained stable at 6.4% in those with baseline A1C <7.0%. The incidence rate of severe hypoglycemia was 21.8 and 7.1 events per 100 person-years in the first and last 6 months, respectively. Time per day with glucose levels in the range of 71-180 mg/dl increased significantly (P = 0.02) from baseline to 12 months. ::: ::: ::: CONCLUSIONS ::: In intensively treated adults with type 1 diabetes, CGM use and benefit can be sustained for 12 months. <s> BIB013
|
Continuous ex vivo monitoring of blood glucose was proposed in 1974 BIB001, and in vivo glucose monitoring was demonstrated in 1982 BIB002. CGMS offers improved control of diabetes by providing real-time data and is a key component of a closed-loop insulin delivery system. Two types of continuous glucose monitoring systems are currently in use: continuous subcutaneous glucose monitors and continuous blood glucose monitors. However, owing to surface contamination of the electrode by proteins and coagulation factors, and to the risk of thromboembolism, most CGMSs do not measure blood glucose directly. Instead, subcutaneously implantable needle-type electrodes have been developed that measure glucose concentrations in interstitial fluid, which reflect the blood glucose level BIB003 BIB005 BIB006 BIB007. Shichiri et al. described the first needle-type enzyme electrode for subcutaneous implantation in 1982 BIB002. The first commercial needle-type glucose biosensor was marketed by MiniMed (Sylmar, CA, USA); however, it did not provide real-time data, and the results of 72 h of monitoring had to be downloaded in a physician's office BIB008. The FDA-approved needle-type CGMS devices, including the MiniMed Guardian REAL-Time system by Medtronic (Minneapolis, MN, USA), SEVEN by Dexcom (San Diego, CA, USA), and FreeStyle Navigator by Abbott (Abbott Park, IL, USA), are the most widely used CGMSs on the market. These devices display updated real-time glucose concentrations every one to five minutes, and the disposable sensor can be used for three to seven days. Continuous subcutaneous glucose monitoring can also be achieved without direct contact between the interstitial fluid and the transducer by using the microdialysis technique BIB004 BIB009. GlucoDay (Menarini, Florence, Italy) and SCGM (Roche, Mannheim, Germany) are based on this technique. The microdialysis approach provides better precision and accuracy, as well as lower signal drift, than needle-type sensors BIB010 BIB012.
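Interstitial glucose lags blood glucose; a delay of roughly 7 minutes was reported for the microdialysis sensor BIB010, and forward mass-transfer models of the blood-to-interstitial relationship have been derived BIB006 BIB007. A first-order lag is a common minimal sketch of this forward model; the time constant and sampling interval below are assumptions for illustration, not parameters from the cited studies:

```python
def interstitial_from_blood(blood_glucose, dt_min=1.0, tau_min=7.0):
    """Forward first-order lag model: interstitial-fluid (ISF) glucose
    tracks blood glucose with time constant tau_min (minutes).
    blood_glucose: samples taken every dt_min minutes.
    Parameter values are illustrative assumptions."""
    isf = [blood_glucose[0]]             # assume equilibrium at the start
    alpha = dt_min / (tau_min + dt_min)  # discrete smoothing factor
    for g in blood_glucose[1:]:
        isf.append(isf[-1] + alpha * (g - isf[-1]))
    return isf

# A step rise in blood glucose (100 -> 150 mg/dl) appears in the
# modeled ISF signal only gradually, illustrating the sensor delay.
blood = [100.0] * 5 + [150.0] * 30
isf = interstitial_from_blood(blood)
```

The inverse problem, estimating blood glucose from the lagging ISF signal, is what CGMS calibration algorithms must solve in practice.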
However, in vivo CGMS must satisfy numerous requirements, including biocompatibility, calibration, long-term stability, specificity, linearity, and miniaturization. The accuracy of these innovative devices remains lower than that of traditional glucose biosensors. Although CGM can be associated with improved glycemic control in adults and children with type 1 diabetes BIB011 BIB013, the clinical usefulness of CGMS has not yet been established.
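Accuracy in the studies cited above is commonly reported as the mean absolute (relative) difference between sensor readings and reference blood glucose values, with figures around 12-15% reported for needle-type and microdialysis sensors BIB010 BIB012. A sketch of this metric follows; the paired readings are hypothetical, not data from any cited study:

```python
def mean_absolute_relative_difference(sensor, reference):
    """MAD (%): mean of |sensor - reference| / reference over paired
    readings; the accuracy figure commonly quoted for CGMS devices."""
    if len(sensor) != len(reference):
        raise ValueError("sensor and reference readings must be paired")
    return 100.0 * sum(abs(s - r) / r
                       for s, r in zip(sensor, reference)) / len(sensor)

# Hypothetical paired readings in mg/dl (illustration only).
sensor_mgdl = [110.0, 95.0, 160.0, 72.0]
reference_mgdl = [100.0, 100.0, 150.0, 80.0]
print(f"MAD = {mean_absolute_relative_difference(sensor_mgdl, reference_mgdl):.1f}%")
# -> MAD = 7.9%
```

Note that a low average MAD still permits large individual errors, which is why studies also report error-grid analyses and why individual sensor values should be interpreted with caution.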
|
Glucose Biosensors: An Overview of Use in Clinical Practice <s> Non-invasive Glucose Monitoring System <s> We have described the concept of using the aqueous humor glucose as a measure of the blood glucose concentration, with a view to developing a noninvasive glucose monitor for diabetic individuals. We have conceived of a scleral lens that houses a light source, polarizers, other electro-optic units, and a light detector, and which measures the optical rotation of the aqueous humor continuously. We have built an optical bench mock-up of the glucose sensor and assessed the limits of its capabilities. We have described a physical method, employing the Faraday effect, that modulates the incident light and uses a compensator to introduce a feedback mechanism giving a null-point technique capable of measuring extremely small rotations with an accuracy of 0.4 s of arc. We have used this and have measured the optical rotations of glucose solutions from 0.02 to 0.1%, and have demonstrated linearity in both cases. Miniaturization of the technique is discussed. <s> BIB001 </s> Glucose Biosensors: An Overview of Use in Clinical Practice <s> Non-invasive Glucose Monitoring System <s> OBJECTIVE ::: To analyze a transcutaneous near-infrared spectroscopy system as a technique for in vivo noninvasive blood glucose monitoring during euglycemia and hypoglycemia. ::: ::: ::: RESEARCH DESIGN AND METHODS ::: Ten nondiabetic subjects and two patients with type 1 diabetes were examined in a total of 27 studies. In each study, the subject's plasma glucose was lowered to a hypoglycemia level (approximately 55 mg/dl) followed by recovery to a glycemic level of approximately 115 mg/dl using an intravenous infusion of insulin and 20% dextrose. Plasma glucose levels were determined at 5-min intervals by standard glucose oxidase method and simultaneously by a near-infrared spectroscopic system. 
The plasma glucose measured by the standard method was used to create a calibration model that could predict glucose levels from the near-infrared spectral data. The two data sets were correlated during the decline and recovery in plasma glucose, within 10 mg/dl plasma glucose ranges, and were examined using the Clarke Error Grid Analysis. ::: ::: ::: RESULTS ::: Two sets of 1,704 plasma glucose determinations were examined. The near-infrared predictions during the fall and recovery in plasma glucose were highly correlated (r = 0.96 and 0.95, respectively). When analyzed during 10 mg/dl plasma glucose segments, the mean absolute difference between the near-infrared spectroscopy method and the chemometric reference ranged from 3.3 to 4.4 mg/dl in the nondiabetic subjects and from 2.6 to 3.8 mg/dl in the patients with type 1 diabetes. Using the Error Grid Analysis, 97.7% of all the near-infrared predictions were assigned to the A-zone. ::: ::: ::: CONCLUSIONS ::: Our findings suggest that the near-infrared spectroscopy method can accurately predict plasma glucose levels during euglycemia and hypoglycemia in humans. <s> BIB002 </s> Glucose Biosensors: An Overview of Use in Clinical Practice <s> Non-invasive Glucose Monitoring System <s> We report here on in vitro and in vivo experiments that are intended to explore the feasibility of photoacoustic spectroscopy as a tool for the noninvasive measurement of blood glucose. The in vivo results from oral glucose tests on eight subjects showed good correlation with clinical measurements but indicated that physiological factors and person-to-person variability are important. In vitro measurements showed that the sensitivity of the glucose measurement is unaffected by the presence of common blood analytes but that there can be substantial shifts in baseline values. The results indicate the need for spectroscopic data to develop algorithms for the detection of glucose in the presence of other analytes. 
<s> BIB003 </s> Glucose Biosensors: An Overview of Use in Clinical Practice <s> Non-invasive Glucose Monitoring System <s> The concentration of glucose in the blood may soon be measured noninvasively, without puncturing the finger to obtain a drop of blood. Current prototype devices for this purpose require greater accuracy and miniaturization to be commercially viable. No such device has been approved for marketing by the U.S. Food and Drug Administration. The technology used for noninvasive blood glucose monitoring involves either radiation or fluid extraction. With radiation technology, an energy beam is 1) applied to the body, 2) modified proportionate to the concentration of glucose in the blood, and 3) measured. The blood glucose concentration is then calculated. With fluid extraction technology, a body fluid containing glucose in a concentration proportionate to the blood glucose concentration is extracted and measured. The blood glucose concentration is then calculated. The most promising technologies are 1) near-infrared light spectroscopy, 2) far-infrared radiation spectroscopy, 3) radio wave impedance, 4) optical rotation of polarized light, 5) fluid extraction from skin, and 6) interstitial fluid harvesting. Each method has features predictive of commercial viability, as well as technical problems to overcome. <s> BIB004 </s> Glucose Biosensors: An Overview of Use in Clinical Practice <s> Non-invasive Glucose Monitoring System <s> OBJECTIVE —To study the feasibility of noninvasive blood glucose monitoring using optical coherence tomography (OCT) technique in healthy volunteers. RESEARCH DESIGN AND METHODS —An OCT system with the wavelength of 1,300 nm was used in 15 healthy subjects in 18 clinical experiments. Standard oral glucose tolerance tests were performed to induce changes in blood glucose concentration. Blood samples were taken from the right arm vein every 5 or 15 min. 
OCT images were taken every 10–20 s from the left forearm over a total period of 3 h. The slope of the signals was calculated at the depth of 200–600 μm from the skin surface. RESULTS —A total of 426 blood samples and 8,437 OCT images and signals were collected and analyzed in these experiments. There was a good correlation between changes in the slope of noninvasively measured OCT signals and blood glucose concentrations throughout the duration of the experiments. The slope of OCT signals changed significantly (up to 2.8% per 10 mg/dl) with variation of plasma glucose values. The good correlation obtained between the OCT signal slope and blood glucose concentration is due to the coherent detection of backscattered photons, which allows measurements of OCT signal from a specific tissue layer without unwanted signal from other tissue layers. CONCLUSIONS —This pilot study demonstrated the capability of the OCT technique to monitor blood glucose concentration noninvasively in human subjects. Further studies with a larger number of subjects including diabetic subjects are planned to validate these preliminary results. <s> BIB005 </s> Glucose Biosensors: An Overview of Use in Clinical Practice <s> Non-invasive Glucose Monitoring System <s> Glucose monitoring technology has been used in the management of diabetes for three decades. Traditional devices use enzymatic methods to measure glucose concentration and provide point sample information. More recently continuous glucose monitoring devices have become available providing more detailed data on glucose excursions. In future applications the continuous glucose sensor may become a critical component of the closed loop insulin delivery system and, as such, must be selective, rapid, predictable and acceptable for continuous patient use. Many potential sensing modalities are being pursued including optical and transdermal techniques. 
This review aims to summarize existing technology, the methods for assessing glucose sensing devices and provide an overview of emergent sensing modalities. <s> BIB006
|
Non-invasive glucose analysis is another goal of glucose sensor technology, and significant efforts have been made to achieve it. Optical and transdermal approaches are the most common non-invasive glucose sensing methods BIB004 BIB006 . Optical glucose sensors exploit the physical properties of light in the interstitial fluid or the anterior chamber of the eye. These approaches include polarimetry BIB001 , Raman spectroscopy, infrared absorption spectroscopy BIB002 , photoacoustics BIB003 , and optical coherence tomography BIB005 . The GlucoWatch Biographer, manufactured by Cygnus, Inc. (Redwood City, CA, USA), was the first transdermal glucose sensor approved by the US FDA. This watch-like device was based on transdermal extraction of interstitial fluid by reverse iontophoresis. It was never widely accepted in the market due to its long warm-up time, false alarms, inaccuracy, skin irritation, and sweating, and it was withdrawn in 2008. Considerable efforts have been made in the development of non-invasive glucose devices; however, a reliable non-invasive glucose measurement method is still not available.
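The cited OCT pilot study reported the signal slope changing by up to 2.8% per 10 mg/dl of glucose, which implies a simple linear relation between slope change and glucose change. The sketch below assumes strict linearity and an arbitrary baseline slope; the calibration constant is taken from that reported figure purely for illustration, not from any real device.

```python
# Illustrative linear model relating a relative OCT signal-slope change to a
# plasma glucose change, using the ~2.8% slope change per 10 mg/dl reported
# in the cited pilot study. Assumed-linear toy calibration, not a real device.

SENSITIVITY = 0.028 / 10.0  # fractional slope change per mg/dl (assumption)

def glucose_change_from_oct(slope_baseline, slope_now):
    """Estimate plasma glucose change (mg/dl) from the relative slope change."""
    relative_change = (slope_now - slope_baseline) / slope_baseline
    return relative_change / SENSITIVITY

# Example: a 5.6% increase in the OCT signal slope maps to about +20 mg/dl.
print(round(glucose_change_from_oct(1.00, 1.056), 1))  # → 20.0
```

In practice such a calibration would be fitted per subject and per skin site, which is one reason the text calls these results preliminary.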
|
Glucose Biosensors: An Overview of Use in Clinical Practice <s> Glucose Biosensors for Point-of-Care Testing (POCT) <s> Diabetes is one of the leading causes of death and disability in the world. There is a large population in the world suffering from this disease, and the healthcare costs increase every year. It is a chronic disorder resulting from insulin deficiency and hyperglycemia and has a high risk of development of complications for the eyes, kidneys, peripheral nerves, heart, and blood vessels. Quick diagnosis and early prevention are critical for the control of the disease status. Traditional biosensors such as glucose meters and glycohemoglobin test kits are widely used in vitro for this purpose because they are the two major indicators directly involved in diabetes diagnosis and long-term management. The market size and huge demand for these tests make it a model disease to develop new approaches to biosensors. In this review, we briefly summarize the principles of biosensors, the current commercial devices available for glucose and glycohemoglobin measurements, and the recent work in the area of artificial receptors and the potential for the development of new devices for diabetes specifically connected with in vitro monitoring of glucose and glycohemoglobin HbA(1c). <s> BIB001 </s> Glucose Biosensors: An Overview of Use in Clinical Practice <s> Glucose Biosensors for Point-of-Care Testing (POCT) <s> In current clinical practice, plasma and blood glucose are used interchangeably with a consequent risk of clinical misinterpretation. In human blood, glucose is distributed, like water, between ery ... <s> BIB002 </s> Glucose Biosensors: An Overview of Use in Clinical Practice <s> Glucose Biosensors for Point-of-Care Testing (POCT) <s> Over 7,000 peer reviewed articles have been published on electrochemical glucose assays and sensors over recent years. Their number makes a full review of the literature, or even of the most recent advances, impossible.
Nevertheless, this chapter should acquaint the reader with the fundamentals of the electrochemistry of glucose and provide a perspective of the evolution of the electrochemical glucose assays and monitors helping diabetic people, who constitute about 5 % of the world’s population. Because of the large number of diabetic people, no assay is performed more frequently than that of glucose. Most of these assays are electrochemical. The reader interested also in nonelectrochemical assays used in, or proposed for, the management of diabetes is referred to a 2007 excellent review of Kondepati and Heise [1]. <s> BIB003 </s> Glucose Biosensors: An Overview of Use in Clinical Practice <s> Glucose Biosensors for Point-of-Care Testing (POCT) <s> Abstract The self-monitoring of blood glucose (SMBG), traditionally performed by “point-of-care” (POC) devices called portable glucose monitors (PGM) is now considered an integral part of managed care of diabetic patients, especially type 1 diabetics and those on insulin therapy. In patients with type 2 diabetes, SMBG can help to achieve a better glycaemic control, although there is not sufficient evidence to attest that strict monitoring in these patients is associated with an improved outcome. The outcome of several clinical studies, especially in diabetics on insulin therapy, has shown that SMBG plays a key role in preventing complications in the short, medium and long term. According to the current recommendations, SMBG is aimed to achieve and maintain glycaemic control, prevent and identify hypoglycaemia, prevent severe hyperglycaemia, adjust lifestyle changes and establish the need to begin treatment with insulin in gestational diabetes mellitus. However, as clearly highlighted by the American Diabetes Association (ADA) and the National Academy of Clinical Biochemistry (NACB), patients and healthcare personnel should be trained on the appropriate use of the device, as well as on the correct interpretation of data.
Moreover, definite analytical targets and appropriate acceptance criteria for performance should be fulfilled before a new device is introduced in the hospital environment, or recommended to the patients. Performance limitations such as hematocrit extremes and analytical interferences should be clearly acknowledged by the operators, before taking test results for granted. The current article aims to review the current indications for SMGB and highlight the most important criteria for the appropriate use of PGMs. <s> BIB004
|
Although laboratory analysis is the most accurate method for evaluating glucose levels, because of cost and time delays, POCT is widely used to determine glucose levels in the inpatient (ER/ICU/ward) and outpatient (office/home) settings. The majority of POC glucose biosensors rely on disposable, screen-printed enzyme electrode test strips BIB001 . These plastic or paper strips have electrochemical cells and contain GDH-PQQ, GDH-NAD, GDH-FAD, or GOx along with a redox mediator BIB003 . A test strip is first inserted into the meter; a small drop of capillary blood is then obtained from the fingertip with a lancing device and applied to the strip. Finally, a conversion factor is applied, and the measurement results are typically displayed as plasma glucose equivalents according to the IFCC recommendation BIB002 . Since the launch of ExacTech in 1987, portable glucose biosensors have achieved significant commercial success, and many different devices have subsequently been introduced on the global market. The 2010 issue of the Diabetes Forecast Resource Guide, which has a clear focus on the US market, lists 56 different POC glucose sensors from 18 different companies. However, over 90% of the market consists of products manufactured by four major companies: Abbott, Bayer, LifeScan, and Roche. A brief summary of the key features of commercially available glucose biosensors is provided in Table 2 . Most of the meters are plasma-calibrated. The measurement requires a 0.3- to 1.5-µL drop of blood and usually takes less than 10 seconds to produce a result. When choosing a glucose biosensor, practical (ease of use, size of the test strip, amount of blood needed), technical (analytical reliability, testing speed, ability to store test results in memory), and economic (cost of the meter and/or the test strips) factors should be considered BIB004 .
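The final conversion step described above can be shown in a few lines. The sketch below applies the constant factor of 1.11 commonly associated with the IFCC recommendation for reporting whole-blood glucose as a plasma-equivalent value; the function name and example numbers are illustrative, not from any specific meter.

```python
# Sketch of the meter-side conversion from a measured whole-blood glucose
# value (mg/dl) to a plasma-equivalent reading. The constant 1.11 reflects
# the ~11% higher glucose concentration in plasma versus whole blood.

IFCC_PLASMA_FACTOR = 1.11  # plasma glucose ≈ 1.11 × whole-blood glucose

def to_plasma_equivalent(whole_blood_mg_dl):
    """Return the plasma-equivalent glucose for a whole-blood measurement."""
    return whole_blood_mg_dl * IFCC_PLASMA_FACTOR

# A whole-blood reading of 90 mg/dl is displayed as ~99.9 mg/dl plasma glucose.
print(round(to_plasma_equivalent(90.0), 1))  # → 99.9
```

Real meters fold this factor into their strip calibration rather than exposing it, which is why meters calibrated to different references can disagree on the same sample.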
Currently, many POC devices can be directly connected to laboratory information systems via proprietary data management systems. This significantly expands the data management and networking capabilities of bedside glucose biosensors and allows for centralized quality control management.
|
Glucose Biosensors: An Overview of Use in Clinical Practice <s> Interferences <s> Nineteen patient blood samples each with modified hematocrit concentrations of ∼20, 30, 40, 50, and 60%, were assayed for their glucose concentration by the Glucometer II. Blood removal from the test strip was by the one- and two-blot techniques. The reference method was the Yellow Springs Instruments (YSI) blood glucose analyzer. Glucometer II results were falsely high for the single blot (13–59%, mean 33%) and double blot (12–41%, mean 19%) at 20% hematocrit and falsely low at 60% hematocrit for the single blot (22–44%, mean 37%) and the double blot (26–49%, mean 43%). At 40- 50% hematocrit, Glucometer II and YSI results agreed only for the one-blot technique. <s> BIB001 </s> Glucose Biosensors: An Overview of Use in Clinical Practice <s> Interferences <s> The glucose polymer icodextrin has become widely used in continuous ambulatory peritoneal dialysis (CAPD) (1). Icodextrin is hydrolyzed in the systemic circulation to oligosaccharides such as maltose, maltotriose, and maltotetraose. The use of icodextrin leads to substantial concentrations of these icodextrin metabolites in the blood, where they are not normally found (2) . The presence of these metabolites could have an effect on enzymatic glucose measurement. A common complication in diabetes mellitus patients is renal insufficiency, which may lead to dialysis. Patients with diabetes mellitus treated by icodextrin-CAPD are at risk for having erroneous blood glucose measurements. ::: ::: To evaluate the possibility of interference of the icodextrin metabolites in various … <s> BIB002 </s> Glucose Biosensors: An Overview of Use in Clinical Practice <s> Interferences <s> Bedside capillary glucose monitoring has become widespread in most hospitals. Glucose meters have been shown to provide a reasonably acceptable degree of accuracy compared with laboratory instruments when proper quality control is in place (1). 
However, a recent clinical case shows that such systems have limitations in hospital settings. A 55-year-old woman was admitted to the emergency room with suspected acetaminophen overdose. She had been found lying on the floor of her apartment in an altered level of consciousness, and a bottle of acetaminophen was discovered beside her. The patient had recently been hospitalized for a period of three months for depression. She had no history of diabetes. Capillary blood glucose as measured with the Glucometer Elite (Bayer) at the emergency room showed values of 8.4, 12.8, and 9.4 mmol/L (samples taken within 2.5 h of arrival). Serum analysis in the laboratory … <s> BIB003 </s> Glucose Biosensors: An Overview of Use in Clinical Practice <s> Interferences <s> Objectives.—To determine the effects of low, normal, and high hematocrit levels on glucose meter measurements and to assess the clinical risks of hematocrit errors. Design.—Changes in glucose measurements between low and high hematocrit levels were calculated to determine hematocrit effects. The differences between glucose measured with meters and with a plasma glucose method (YSI 2300) also were compared. Setting.—Six handheld glucose meters were assessed in vitro at low (19.1%), normal (38.5%), and high (58.3%) hematocrit levels, and at 6 glucose concentrations ranging from 2.06 mmol/L (37.1 mg/dL) to 30.24 mmol/L (544.7 mg/dL). Results.—Most systems, regardless of the reference to which they were calibrated, demonstrated positive bias at lower hematocrit levels and negative bias at higher hematocrit levels. Low, normal, and high hematocrit levels progressively lowered Precision G and Precision QID glucose measurements. Hematocrit effects on the other systems were more dependent on the glucose concentration. Overall, Accu-Chek Comfort Curve showed the least sensitivity to hematocrit changes, except at the lowest glucose concentration.
Conclusions.—We strongly recommend that clinical professionals choose glucose systems carefully and interpret glucose measurements with extreme caution when the patient’s hematocrit value changes, particularly if there is a simultaneous change in glucose level. (Arch Pathol Lab Med. 2000;124:1135‐1140) <s> BIB004 </s> Glucose Biosensors: An Overview of Use in Clinical Practice <s> Interferences <s> Thirty drugs used primarily in critical care and hospital settings were tested in vitro to observe interference on glucose measurements with 6 handheld glucose meters and a portable glucose analyzer. Paired differences of glucose measurements between drugspiked samples and unspiked control samples were calculated to determine bias. A criterion of ± 6 mg/dL was used as the cutoff for interference. Ascorbic acid interfered with the measurements on all glucose devices evaluated. Acetaminophen, dopamine, and mannitol interfered with glucose measurements on some devices. Dose-response relationships help assessment of drug interference in clinical use. High dosages of these drugs may be given to critically ill patients or selfadministered by patients without medical supervision. Package inserts for the glucose devices may not provide adequate warning information. Hence, we recommend that clinicians choose glucose devices carefully and interpret results cautiously when glucose measurements are performed during or after drug interventions. Handheld glucose meters are used widely for point-ofcare testing and for self-monitoring of blood glucose at home. The use of glucose meters in the care of critically ill patients is controversial. 1-3 Surveys 4,5 show that some hospitals do not allow handheld glucose meters in critical care units. Kost et al, 6 evaluated a new oxygen-insensitive, glucose dehydrogenase–based electrochemical biosensor and studied the clinical performance of the new handheld glucose meter system in critical care, hospitalized, and ambulatory patients. 
Little research is available describing drug interference errors with the newest generations of point-of-care glucose devices. The objectives of this study were as follows: (1) to study how drugs commonly used to treat critically ill patients affect glucose measurements obtained with new glucose devices, (2) to introduce a quantitative error criterion for drug interference, and (3) to determine the clinical relevance of drug interferences for point-of-care glucose testing. <s> BIB005 </s> Glucose Biosensors: An Overview of Use in Clinical Practice <s> Interferences <s> Aims: Diabetic patients on continuous ambulatory peritoneal dialysis (CAPD) for renal failure depend on glucose analysers for regular monitoring of glycaemic control. We aim to inform health professionals of the potentially dangerous overestimation of blood glucose values by some analysers in patients using Icodextrin for dialysis. Methods: Twenty-five patients on continuous ambulatory peritoneal dialysis (10 patients on an 8–12-h nocturnal exchange of Icodextrin) had random glucose analysis performed on venous blood using standardized reference laboratory (lab) technique (glucose oxidase GOD-PAP), and simultaneously on capillary blood using the Precision Q·I·D System (glucose oxidase method) and the Advantage meter (glucose dehydrogenase method). Results: The Precision Q·I·D System agreed with the lab results in both the Icodextrin group and the non-Icodextrin group (80–90% of values fell within 20% of the corresponding lab result). In contrast, the Advantage meter agreed with the lab results only in the non-Icodextrin group (95% of values within 20% of the corresponding lab value), and not in the Icodextrin group, where only 5% of the analyser values fell within 20% of the corresponding lab value. Conclusions: The Precision Q·I·D System, which utilizes glucose oxidase reaction, is safe for use in diabetic patients treated with Icodextrin.
All analysers must be cross-checked with the laboratory reference method before use in these patients. Diabet. Med. 19, 693–696 (2002) <s> BIB006 </s> Glucose Biosensors: An Overview of Use in Clinical Practice <s> Interferences <s> BACKGROUND: Blood glucose meters are widely used in point of care testing, however, many studies have shown inaccuracies in the glucose measurement due to a number of factors. The present study evaluated the accuracy of a new glucometer capable of simultaneous measurement of patient's hematocrit with algorithmic adjustment of glucose result. This meter was compared with a reference method and 2 other existing meters widely used in the market. METHODS: Venous whole blood samples from healthy volunteers were pooled and reconstituted to produce 5 different hematocrit (30-60%) concentrations. Each hematocrit specimen was spiked to produce 4 different glucose (50-500 mg/dl) concentrations. RESULTS: Hematocrit measured by the new meter correlated well with the reference method. Mean percentage error differences, compared to the reference method, showed obvious differences between existing meters across the wide hematocrit range at various glucose concentrations. The new meter showed steady and consistent glucose concentrations compared to the reference method. CONCLUSION: The new glucometer, which simultaneously measures hematocrit and performs automated correction for the hematocrit effect, provides a glucose result with improved accuracy. Its measurement of hematocrit from the same blood sample will eliminate the need for additional collection of blood or measurement using another method.
Icodextrin, which is converted to maltose, is present in a peritoneal dialysis solution. Galactose and xylose are found in some foods, herbs, and dietary supplements; they are also used in diagnostic tests. When some blood glucose monitoring systems are used--specifically, those that use test strips containing the enzymes glucose dehydrogenase-pyrroloquinolinequinone or glucose dye oxidoreductase--in patients receiving maltose, icodextrin, galactose, or xylose, interference of blood glucose levels can occur. Maltose, icodextrin, galactose, and xylose are misinterpreted as glucose, which can result in erroneously elevated serum glucose levels. This interference can result in the administration of insulin, which may lead to hypoglycemia. In severe cases of hypoglycemia, deaths have occurred. If patients are receiving maltose, icodextrin, galactose, or xylose, clinicians must review the package inserts of all test strips to determine the type of glucose monitoring system being used and to use only those systems whose tests strips contain glucose oxidase, glucose dehydrogenase-nicotinamide adenine dinucleotide, or glucose dehydrogenase-flavin adenine dinucleotide. <s> BIB008 </s> Glucose Biosensors: An Overview of Use in Clinical Practice <s> Interferences <s> Over 7,000 peer reviewed articles have been published on electrochemical glucose assays and sensors over recent years. Their number makes a full review of the literature, or even of the most recent advances, impossible. Nevertheless, this chapter should acquaint the reader with the fundamentals of the electrochemistry of glucose and provide a perspective of the evolution of the electrochemical glucose assays and monitors helping diabetic people, who constitute about 5 % of the world’s population. Because of the large number of diabetic people, no assay is performed more frequently than that of glucose. Most of these assays are electrochemical. 
The reader interested also in nonelectrochemical assays used in, or proposed for, the management of diabetes is referred to a 2007 excellent review of Kondepati and Heise [1]. <s> BIB009 </s> Glucose Biosensors: An Overview of Use in Clinical Practice <s> Interferences <s> Current point-of-care testing (POCT) glucometers are based on various test principles. Two major method groups dominate the market: glucose oxidase-based systems and glucose dehydrogenase-based systems using pyrroloquinoline quinone (GDH-PQQ) as a cofactor. The GDH-PQQ-based glucometers are replacing the older glucose oxidase-based systems because of their lower sensitivity for oxygen. On the other hand, the GDH-PQQ test method results in falsely elevated blood glucose levels in peritoneal dialysis patients receiving solutions containing icodextrin (e.g., Extraneal; Baxter, Brussels, Belgium). Icodextrin is metabolized in the systemic circulation into different glucose polymers, but mainly maltose, which interferes with the GDH-PQQ-based method. Clinicians should be aware of this analytical interference. The POCT glucometers based on the GDH-PQQ method should preferably not be used in this high-risk population and POCT glucose results inconsistent with clinical suspicion of hypoglycemic coma should be retested with another testing system. <s> BIB010
|
A number of variables can influence the reliability of the test results, including hematocrit, hypoxemia, hypotension, altitude, temperature, and humidity. Electrochemical interferents in the blood cause a false high glucose reading by donating non-glucose-derived electrons. An interfering molecule is a species that is electroactive at the operating potential of the amperometric sensor. Suggested standard interferents developed by the FDA include: acetaminophen, salicylic acid, tetracycline, dopamine, ephedrine, ibuprofen, L-DOPA, methyl-DOPA, tolazamide, ascorbic acid, bilirubin, cholesterol, creatinine, triglycerides, and uric acid BIB009 . Hematocrit values have a marked effect on the strip-based glucose assay BIB001 BIB007 . Oxygen from red blood cells can compete with the redox mediator for glucose-derived electrons in strips when the enzyme used is GOx. Further, the viscosity of blood increases with increasing hematocrit values, and this increase slows the diffusion of all components and reduces the current in amperometric sensors BIB004 . Low hematocrit values may be the result of anemia and are associated with overestimated results. Hematocrit causes the most significant error in POC glucose biosensors, especially in the intensive care unit. Ascorbic acid is one of the most common interfering substances affecting the accuracy of glucose biosensors. In glucose biosensors based on electrochemical analysis, ascorbic acid is oxidized at the electrode surface, producing additional electrons and generating a greater current. Increased levels of ascorbic acid therefore lead to falsely increased glucose readings; the degree of interference varies among glucose biosensors, which may be due to differences in the enzymes used, technical methodology, or construction of the test strips.
GDH-PQQ catalyzes not only the oxidation of glucose, but also of other sugars, such as maltose, maltotriose, maltotetraose, and icodextrin (Extraneal; Baxter, Brussels, Belgium) BIB008 BIB010 BIB002 . Most GDH-based POC devices show significant overestimations of glucose in patients undergoing peritoneal dialysis using icodextrin as an osmotic agent BIB002 BIB006 . Icodextrin is metabolized in the systemic circulation into different glucose polymers, mainly maltose, which interferes with the GDH-PQQ-based method. The maltose effect on glucose measurement has been demonstrated by Janssen et al. BIB002 . These authors found that icodextrin metabolites can cause positive interference that may lead to a missed diagnosis of hypoglycemia. Clinicians should be aware of this analytical interference. Glucose biosensors based on the GDH-PQQ method should preferably not be used in this high-risk population, and POC glucose results inconsistent with clinical suspicion of hypoglycemic coma should be retested with another testing system. Various drugs have been shown to interfere with glucose measurements BIB005 . Acetaminophen is one of the most common drugs associated with both accidental and intentional poisoning. A high dose of acetaminophen can generate analytical interference on electrochemical biosensors BIB003 . This drug is directly oxidized after diffusing across a porous membrane to the electrode surface, producing an interfering current that increases the glucose reading.
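The strip-chemistry advice in this section reduces to a simple rule: flag GDH-PQQ strips whenever the patient may be exposed to non-glucose sugars that the enzyme also oxidizes. The sketch below is purely illustrative (not a clinical decision tool); the enzyme labels and interferent list follow the text.

```python
# Illustrative safety check mirroring the section's advice: GDH-PQQ strips
# misread maltose/icodextrin metabolites (and galactose, xylose) as glucose,
# so that combination should be flagged. Not a clinical decision tool.

GDH_PQQ_INTERFERENTS = {"maltose", "icodextrin", "galactose", "xylose"}

def strip_is_safe(enzyme, patient_exposures):
    """Return False when a GDH-PQQ strip would also react to non-glucose sugars."""
    if enzyme == "GDH-PQQ":
        return not (patient_exposures & GDH_PQQ_INTERFERENTS)
    # GOx, GDH-NAD, and GDH-FAD strips are the recommended alternatives here.
    return True

print(strip_is_safe("GDH-PQQ", {"icodextrin"}))  # → False
print(strip_is_safe("GOx", {"icodextrin"}))      # → True
```

A hospital formulary check of this kind is exactly what the FDA warnings cited above ask clinicians to perform before trusting a POC reading.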
|
Glucose Biosensors: An Overview of Use in Clinical Practice <s> Conclusions <s> Background: Glucometry is an essential part of diabetes treatment, but so far, no standard quality control procedure verifying blood glucose meter results is available. In this study, we evaluated the analytical performance of eight glucose meters: GX and Esprit™ (Bayer Diagn.), MediSense® Card Sensor, ExacTech (MediSense®) with strips Selfcare™ (Cambridge Diagn), One Touch® Basic, One Touch® II, One Touch® Profile (Lifescan) and Glucotrend® (Boehringer Mannheim/Roche). Methods: The evaluation included within-run imprecision, linearity, comparison with the laboratory method and calculation of differences between individual glucometers. Results: Within-run imprecision ranged from 1.5% to 4.5%, linearity assessed as the correlation between measured and calculated glucose concentrations yielded r2 values from 0.97 to 0.981. Analytical bias of glucose concentration values obtained by the glucometry amounted from 0.14% to 16.9% of values measured by the laboratory method. Bias higher than 5% was found for One Touch® Basic, II and Profile meters (however, glucose concentrations in plasma obtained by the laboratory method One Touch® meters showed analytical bias from 3.0% to 8.8%). The regression analysis yielded slope values from 0.77 to 1.09 and r2 values from 0.86 to 0.98. The best correlations with the laboratory method were found for One Touch® Basic, II Profile, Glucotrend® and Esprit™ meters. The calculated differences between the individual glucose meters can constitute 0.02–1.49 mmol/l (0.96–26.9%) at glucose concentration 5.55 mmol/l, and 0.16–4.16 mmol/l (0.96–24.96%) at glucose concentration 16.67 mmol/l. Error grid analyses have shown that Glucometers One Touch® Basic and One Touch® Profile yielded all results in zone A (acceptable). The remaining glucometers yielded 1–7% of results in zones B (insignificant errors), C or D (lack of detection and treatment). 
Conclusions: All studied glucometers had both small deviation from laboratory reference values (<10%) and high concurrence with results obtained by the laboratory method. <s> BIB001 </s> Glucose Biosensors: An Overview of Use in Clinical Practice <s> Conclusions <s> Self-monitoring of blood glucose (SMBG) is an important component in diabetes management, helping patients to achieve and maintain normal blood glucose levels. The benefit of SMBG depends on the quality of the measurement performed. Therefore, it is important to know the factors affecting the measurements and to assure that the quality of SMBG measurements is at the highest achievable level possible. To accomplish this, all aspects of the measurement procedure need to be taken into consideration. Sources of variability can be related to the monitor itself, its calibration and use, including blood collection. Improving the variability caused by each source requires specifically designed and targeted efforts. Variability related to the monitor can be assessed in studies that minimize other sources of variability. Variability related to monitor calibration can be assessed and minimized through harmonization or standardization programs, while variability related to the use of the monitors can be addressed through patient-oriented assessment and training. The latter may follow procedures similar to external quality assessment (EQA) programs used in clinical laboratory medicine. However, to obtain an optimal impact on patient care, such programs need to have a wide reach and the social and cultural competency to work efficiently with all patients. The EQA approach or approaches that would provide the most benefit to the patient remain to be determined. <s> BIB002
|
The measurement of blood glucose levels is carried out using various glucose biosensors for the screening, diagnosis, and long-term management of patients with diabetes. Since the prevalence of diabetes is increasing, novel glucose biosensor technologies, including POC devices, CGMS, and noninvasive glucose monitoring systems, have been developed during the last few decades. Recently, the value of glucose biosensors for POCT by medical professionals and for SMBG by patients has been widely accepted. Rapid and effective corrections of blood glucose levels are based on regular glucose measurements using glucose biosensors. Glucose biosensors have evolved to be more reliable, rapid, and accurate, as well as more compact and easy to use. Research on advanced technologies, including electrodes, membranes, immobilization strategies, and nanomaterials, continues. Despite the impressive advances in glucose biosensor technology, there are still several challenges related to the achievement of reliable glucose monitoring. The ADA recommends that the accuracy of a blood glucose POC assay be within 5% of the measured value; however, many POC devices do not meet this criterion. Biosensor technology is less precise and less accurate than the methods used in central laboratories BIB001 . A more systematic evaluation of the analytical performance of glucose biosensors is recommended to ensure reliable and accurate testing. Analytical requirements for suitable hospital or home POC devices include good linearity, precision, and correlation when compared to a clinical laboratory reference method, as well as resistance to common interferences. The calibration of the devices and quality control should be performed on a regular basis according to the manufacturer's instructions. User-dependent factors can also affect data quality and, by extension, treatment outcomes.
The most commonly cited problems are incorrect use of the test strip, lack of quality control procedures, unclean fingers, and dirty devices. Various studies have shown that education and continuous training can reduce errors caused by these factors and improve measurement performance BIB002 . Therefore, in addition to further technical improvements of the biosensors, standardization of the analytical goals for improved performance and continuous assessment and training of lay users should be established.
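Two of the accuracy screens mentioned in this section, the ADA 5% target and Clarke Error Grid A-zone assignment, can be sketched as simple predicates. The A-zone test below uses the common simplified definition (within 20% of the reference, or both values below 70 mg/dl); zones B through E of the full grid are omitted for brevity.

```python
# Simplified accuracy screens for a meter reading against a laboratory
# reference (both in mg/dl): the ADA <5% deviation target, and a reduced
# Clarke Error Grid "zone A" test. Zones B-E are not modeled here.

def within_ada_target(meter, reference):
    """True when the meter deviates <5% from the reference value."""
    return abs(meter - reference) / reference < 0.05

def clarke_zone_a(meter, reference):
    """Simplified zone A: within 20% of reference, or both below 70 mg/dl."""
    if meter < 70 and reference < 70:
        return True
    return abs(meter - reference) / reference <= 0.20

print(within_ada_target(106, 100), clarke_zone_a(115, 100))  # → False True
```

The example mirrors the text's point: a reading can pass the clinically oriented error-grid test while still failing the stricter ADA analytical target.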
|
Survey On Scheduling And Radio Resources Allocation In Lte <s> INTRODUCTION <s> The problem of allocating resources to multiple users on the downlink of a Long Term Evolution (LTE) cellular communication system is discussed. An optimal (maximum throughput) multiuser scheduler is proposed and its performance is evaluated. Numerical results show that the system performance improves with increasing correlation among OFDMA subcarriers. It is found that a limited amount of feedback information can provide a relatively good performance. A sub-optimal scheduler with a lower computational complexity is also proposed, and shown to provide good performance. The sub-optimal scheme is especially attractive when the number of users is large, as the complexity of the optimal scheme may then be unacceptably high in many practical situations. The performance of a scheduler which addresses fairness among users is also presented. <s> BIB001 </s> Survey On Scheduling And Radio Resources Allocation In Lte <s> INTRODUCTION <s> The choice of SC-FDMA for uplink access in Long Term Evolution (LTE) facilitates great flexibility in allocating medium resources to users while adapting to medium condition. A multicarrier multiple access technique, SC-FDMA gains an advantage over OFDMA in that it reduces the energy requirements in user equipment. 3GPP Releases 8 and 9, however, do not detail a specific scheduler and, accordingly, proposals have been made in the literature in designing an efficient and capable uplink scheduler for LTE. This paper presents a preliminary performance evaluation for representative proposals, and offers medium of comparison in order to highlight the individual characteristics of each proposals. <s> BIB002
|
Long Term Evolution (LTE), also referred to as a 3.9G system, is an important technology originally designed to achieve high data rates (50 Mbit/s in the uplink and 100 Mbit/s in the downlink over a 20 MHz system bandwidth) while minimizing latency and allowing flexible bandwidth deployment. LTE offers several benefits to subscribers as well as to service providers. It satisfies user requirements by targeting broadband mobile applications with enhanced mobility, and it is designated as the successor to 3G networks. It allows the efficient execution of the Internet services that have emerged in recent years. Like 3G networks, it uses packet switching; the difference is that it uses time-division (TD) and frequency-division (FD) multiplexing at the same time, which is not the case in High Speed Packet Access (HSPA) networks, which perform only time-division multiplexing. This yields a throughput gain (in spectral efficiency) of about 40%. BIB001 Orthogonal Frequency Division Multiple Access (OFDMA) is the multiple access method used in the downlink direction. It combines Time Division Multiple Access (TDMA) and Frequency Division Multiple Access (FDMA). It is derived from OFDM multiplexing, but it allows the shared radio resources to be accessed by multiple users. OFDMA divides the available bandwidth into many narrow-band subcarriers and allocates a group of subcarriers to a user based on its requirements, the current system load, and the system configuration. This helps combat the Inter-Symbol Interference (ISI) problem and channel frequency selectivity; it also achieves, for the same bandwidth, a higher spectral efficiency (number of bits transmitted per Hertz) and maintains high throughput even in unfavorable environments with echoes and multipath radio waves.
For the uplink direction, Single Carrier Frequency Division Multiple Access (SC-FDMA) is used. It is a variant of OFDMA with comparable performance (throughput, efficiency, etc.), but SC-FDMA transmits sub-bands sequentially to minimize the Peak-to-Average Power Ratio (PAPR), which is high in OFDMA. SC-FDMA was chosen for the uplink precisely to cope with the limited power budget of battery-powered UEs by minimizing the PAPR. An important element of the LTE architecture is the eNodeB, whose tasks include Radio Resource Management (RRM), which consists mainly of two sub-tasks: Admission Control (AC) and Packet Scheduling (PS). The AC sub-task is responsible for accepting or rejecting new requests; the decision depends on the network's capacity to deliver the QoS required by the request (application) while preserving the QoS of the users already admitted to the system. The PS, meanwhile, performs radio resource allocation for the users already accepted by the AC, i.e., it performs the UE-RB mapping by selecting the UEs that will use the channel and assigning them the resource blocks (RBs) that maximize system performance. Several parameters can be used to evaluate system performance, such as spectral efficiency, delay, fairness, and system throughput. This variety of parameters has led to the creation of multiple scheduling algorithms and strategies. All these parameters can be summarized in one term: the consideration of each flow's QoS. Satisfying all of them simultaneously is impossible, simply because scheduling and resource allocation is an NP-hard problem; for this reason, different scheduling strategies have been developed. An important parameter in the design of schedulers is support for QoS. This forces the LTE network to differentiate between data streams, and the following classes can be distinguished. Conversational class: the most delay-sensitive class; it includes video conferencing and telephony.
It does not tolerate delays because it assumes that a human is at each end of the connection. Streaming class: similar to the previous class, but it assumes that only one person is at the end of the connection; it is therefore less demanding in terms of delay (e.g., video streaming). Interactive class: examples are web browsing and database access. Unlike the previously mentioned types, data only needs to be delivered within a time interval, but this type of traffic emphasizes the data loss rate (Packet Error Rate). Background class: also known as the Best Effort class; no QoS is applied, and it tolerates delays and packet loss. Examples: FTP, e-mail, etc. BIB002 Two other parameters influence the design of scheduling algorithms for the LTE uplink. They are imposed by the SC-FDMA access method: the minimization of the transmit power (to maximize the lifetime of UE batteries), and the requirement that the RBs allocated to a single UE be contiguous. This makes radio resource allocation for the LTE uplink more difficult than for the downlink. The rest of the paper is organized as follows. Section 2 presents the mathematical modeling of the radio resource allocation problem. Section 3 gives a state of the art of radio resource allocation strategies and a detailed study of several scheduling algorithms proposed for LTE (uplink and downlink). Section 4 presents the scheduling algorithms existing in the literature and evaluates their performance, with some criticism. A conclusion and perspectives are presented in Section 5.
|
Survey On Scheduling And Radio Resources Allocation In Lte <s> 2.2.The mathematical formulation of the problem <s> The problem of allocating resources to multiple users on the downlink of a Long Term Evolution (LTE) cellular communication system is discussed. An optimal (maximum throughput) multiuser scheduler is proposed and its performance is evaluated. Numerical results show that the system performance improves with increasing correlation among OFDMA subcarriers. It is found that a limited amount of feedback information can provide a relatively good performance. A sub-optimal scheduler with a lower computational complexity is also proposed, and shown to provide good performance. The sub-optimal scheme is especially attractive when the number of users is large, as the complexity of the optimal scheme may then be unacceptably high in many practical situations. The performance of a scheduler which addresses fairness among users is also presented. <s> BIB001
|
Due to the limited signaling resources, sub-carriers are allocated in groups: sub-carriers are grouped into Resource Blocks (RBs) of 12 adjacent sub-carriers with an inter-subcarrier spacing of 15 kHz. One RB corresponds to 0.5 ms (one time slot) in the time domain and comprises 6 or 7 OFDM symbols BIB001 . The smallest resource unit that can be allocated to a user is a Scheduling Block (SB), which consists of two consecutive RBs; it is the minimal quantity of radio resource that can be allocated to a UE and spans a sub-frame of 1 ms (figure 2). We consider an LTE system with $N_{SB}$ SBs and $K$ UEs, where the minimum data rate required by the $k$-th user is $R_k$ Mbit/s BIB001 . We define one SB as a set of $N_t$ OFDM symbols in the time domain and $N_{sc}$ sub-carriers in the frequency domain. Due to control signals and other pilots, only $N_d(s)$ of the $N_{sc}$ sub-carriers carry data in the $s$-th OFDM symbol, with $s \in \{1, 2, \ldots, N_t\}$ and $N_d(s) \le N_{sc}$. Let $j \in \{1, 2, \ldots, J\}$, where $J$ is the total number of supported MCSs (Modulation and Coding Schemes), $R^{(j)}$ the code rate associated with MCS $j$, $M^{(j)}$ the constellation size of the $j$-th MCS, and $T_s$ the OFDM symbol duration. The data rate $r^{(j)}$ achieved by a single SB is then
$$r^{(j)} = \frac{R^{(j)} \log_2 M^{(j)}}{N_t\, T_s} \sum_{s=1}^{N_t} N_d(s).$$
We denote by $g_{k,n}$ the CQI (Channel Quality Indicator, defined according to the modulation scheme and channel coding) of user $k$ on the $n$-th SB; the CQI of user $k$ on all $N_{SB}$ SBs is $\mathbf{g}_k = [g_{k,1}, g_{k,2}, \ldots, g_{k,N_{SB}}]$, and for all users on all SBs $\mathbf{G} = [\mathbf{g}_1, \mathbf{g}_2, \ldots, \mathbf{g}_K]$. Each user $k$ sends its $g_{k,n}$ to the eNodeB, which determines the MCS to be selected for the $n$-th SB. Furthermore, let $q_{k,n}(g_{k,n}^*) \in \{1, 2, \ldots, J\}$ be the index of the highest-rate MCS that user $k$ can support on the $n$-th SB at CQI value $g_{k,n}^*$. The throughput achievable by user $k$ in one sub-frame is
$$r_k = \sum_{n=1}^{N_{SB}} \rho_{k,n} \sum_{j=1}^{J} b_{k,j}\, r^{(j)},$$
where $\rho_{k,n} = 1$ if the $n$-th SB is allocated to the $k$-th user, and $\rho_{k',n} = 0$ for all $k' \ne k$ (one SB is assigned to one and only one user).
$b_{k,j}$ indicates the MCS selected by user $k$ on all SBs allocated to it: $b_{k,j} = 1$ means that the $j$-th MCS is chosen by user $k$. The radio resource allocation problem can therefore be stated as the maximization of the total throughput over all users:
$$\max_{\rho,\, b} \; \sum_{k=1}^{K} r_k \qquad (4)$$
subject to:
$$r_k \ge R_k \quad \forall k, \qquad (5)$$
$$\sum_{k=1}^{K} \rho_{k,n} = 1 \quad \forall n, \qquad (6)$$
$$\sum_{j=1}^{J} b_{k,j} = 1 \quad \forall k. \qquad (7)$$
(4) is the objective function, designed to maximize the total data rate. (5) is the constraint that guarantees the minimum data rate of each user. (6) ensures that each SB is assigned to one and only one user. (7) imposes that all SBs allocated to a user employ the same MCS (an LTE network constraint). It has been proven in the literature that problem (4) is NP-hard, and several authors have since proposed algorithms aimed at solving it.
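A common heuristic for such NP-hard allocation problems is a greedy pass that assigns each SB to the user with the highest achievable rate on it, then checks the minimum-rate constraint (5). The sketch below is an illustration, not an algorithm from the surveyed papers; it ignores the single-MCS constraint (7), and the rate numbers are hypothetical:

```python
def greedy_sb_allocation(rates, min_rates):
    """Assign each scheduling block (SB) to the user with the highest
    achievable rate on it. rates[k][n] = rate of user k on SB n.
    Returns (assignment, per-user throughput, feasibility of constraint (5))."""
    K, N = len(rates), len(rates[0])
    # Each SB goes to the user with the best rate on it (constraint (6)).
    assignment = [max(range(K), key=lambda k: rates[k][n]) for n in range(N)]
    throughput = [0.0] * K
    for n, k in enumerate(assignment):
        throughput[k] += rates[k][n]
    # Check the per-user minimum-rate constraint (5).
    feasible = all(throughput[k] >= min_rates[k] for k in range(K))
    return assignment, throughput, feasible

rates = [[3.0, 1.0, 2.0],   # user 0 on SBs 0..2 (hypothetical)
         [2.0, 2.5, 1.0]]   # user 1 on SBs 0..2 (hypothetical)
print(greedy_sb_allocation(rates, [2.0, 2.0]))
# → ([0, 1, 0], [5.0, 2.5], True)
```

Greedy assignment maximizes per-SB rate but can violate (5) for users in poor channel conditions, which is why the QoS-aware schedulers surveyed below modify the metric.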
|
Survey On Scheduling And Radio Resources Allocation In Lte <s> Uplink scheduling algorithms <s> This paper considers the multi-user scheduling algorithms for mixed streaming and best-effort service scenario over wideband frequency-selective and time varying channel in cellular network such as E-UTRAN [1] and compares their performance. First, the frequency-selective proportional fair (PF) scheduling is proposed under user's maximum transmission power capability and continuous resource constraints, which is used to reduce the peak to average power ratio (PAPR) for uplink. In addition, the proportional fair with guaranteed bit rate (PFGBR) is proposed to enhance the quality of service (QoS) of streaming services and improve the user experience in terms of bit rate and delay. The PFGBR is to optimize the concave utility function of ? i log R i while provides guaranteed bit rate specifically for streaming applications. At last, various scheduling algorithms are studied and evaluated to support the mixture of the streaming and best-effort services simultaneously with streaming users receiving their desired QoS and best-effort users receiving the maximum possible throughput without compromising the QoS requirement of streaming users. <s> BIB001 </s> Survey On Scheduling And Radio Resources Allocation In Lte <s> Uplink scheduling algorithms <s> Single-carrier frequency division multiple access (SC-FDMA) has been selected as the uplink access scheme in the UTRA Long Term Evolution (LTE) due to its low peak-to-average power ratio properties compared to orthogonal frequency division multiple access. Nevertheless, in order to achieve such a benefit, it requires a localized allocation of the resource blocks, which naturally imposes a severe constraint on the scheduler design. In this paper, three new channel-aware scheduling algorithms for SC-FDMA are proposed and evaluated in both local and wide area scenarios. 
Whereas the first maximum expansion (FME) and the recursive maximum expansion (RME) are relative simple solutions to the above-mentioned problem, the minimum area-difference to the envelope (MADE) is a more computational expensive approach, which, on the other hand, performs closer to the optimal combinatorial solution. Simulation results show that adopting a proportional fair metric all the proposed algorithms quickly reach a high level of data-rate fairness. At the same time, they definitely outperform the round-robin scheduling in terms of cell spectral efficiency with gains up to 68.8% in wide area environments. <s> BIB002 </s> Survey On Scheduling And Radio Resources Allocation In Lte <s> Uplink scheduling algorithms <s> We propose two scheduling and resource allocation schemes that deal with Quality of Service (QoS) requirements in Uplink Long Term Evolution (LTE) systems. QoS for a multiclass system has been seldom taken into account in previous resource allocation algorithms for LTE uplink. In one of the new algorithms, we investigate the possibility of assigning more than one resource block and its consequences on satisfying stringent QoS requirements in the context of heavy traffic, either in terms of end-to-end delays or of minimum rates. System capacity and the number of effectively served requests are used as performance metrics. Numerical results show that it is possible to manage a multiclass scheme while satisfying the QoS constraints of all requests. Allowing the assignment of more than one resource block per request did not appear to be a meaningful advantage. Indeed, it is only useful when there is a heavy traffic, and some of the requests have stringent QoS requirements. But then, satisfying those requests can only be done at the expense of reducing the overall system capacity and of limiting the number of users who can be served. 
<s> BIB003 </s> Survey On Scheduling And Radio Resources Allocation In Lte <s> Uplink scheduling algorithms <s> We have focused on SC-FDMA based resource allocation in uplink cellular systems. Subchannel and power allocation constraints specific to SC-FDMA are considered. We considered a binary integer programming-based solution recently proposed for weighted sum rate maximization and extended it to different problems. We considered problems such as rate constraint satisfaction with minimum number subchannels and sum-power minimization subject to rate constraints. Besides stating the binary integer programming formulations for these problems, we propose simpler greedy algorithms for the three problems. Numerical evaluations show that the greedy algorithms perform very close to the optimal solution, with much less computation time. <s> BIB004
|
In this sub-section, we give a state of the art of the well-known scheduling algorithm families for the LTE uplink. • Legacy schedulers. This family contains the famous classical Round Robin (RR) algorithm; it is also called the base schedulers' family, and the RR algorithm has been used in many older systems. The Round Robin principle is to divide the available RBs into equal-sized groups and then distribute the formed groups among the available UEs. • Best effort schedulers. The main objective of this category is to maximize the utilization of the radio resources and the fairness of the system. This does not mean that this category treats only best-effort flows; "best effort schedulers" means greedy algorithms that try to do the best they can. As already said, each algorithm has an objective function to optimize; this type of algorithm uses the PF metric. Several algorithms have been proposed in this family, and greedy algorithms are well suited to this kind of traffic. The principle of the greedy algorithm is that the RBs are grouped into Resource Chunks (RCs), each RC containing a set of contiguous RBs. Each RC is then allocated to the UE having the highest metric in the matrix, after which the RC and the UE are removed from the available RC list and the schedulable UE list. The algorithm aims to maximize the fairness of resource allocation among UEs. It uses the PF paradigm and tries to maximize the objective function $\sum_u \log R(u)$, where $R(u)$ represents the data rate of user $u$ at instant $t$; the logarithmic function is used to obtain proportional fairness. After that, the authors in proposed three algorithms: First Maximum Expansion (FME), Recursive Maximum Expansion (RME), and Minimum Area Difference (MAD). They belong to the same category and use the same objective function, but they differ in the manner in which resources are allocated.
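The greedy PF allocation just described (repeatedly pick the (UE, RC) pair with the highest PF metric, then remove both) can be sketched as follows; the rate numbers are hypothetical:

```python
def pf_greedy(est_rate, avg_rate):
    """Greedy PF matching: repeatedly pick the (UE, RC) pair with the
    highest PF metric est_rate[u][c] / avg_rate[u], assign the RC to
    that UE, then remove both from the candidate lists.
    est_rate[u][c]: estimated rate of UE u on resource chunk c."""
    U, C = len(est_rate), len(est_rate[0])
    free_ues, free_rcs = set(range(U)), set(range(C))
    alloc = {}  # RC index -> UE index
    while free_ues and free_rcs:
        u, c = max(((u, c) for u in free_ues for c in free_rcs),
                   key=lambda p: est_rate[p[0]][p[1]] / avg_rate[p[0]])
        alloc[c] = u
        free_ues.remove(u)
        free_rcs.remove(c)
    return alloc

est = [[4.0, 1.0],   # UE 0 on RCs 0..1 (hypothetical)
       [3.0, 2.0]]   # UE 1 on RCs 0..1 (hypothetical)
avg = [2.0, 1.0]     # average past throughput per UE (hypothetical)
print(pf_greedy(est, avg))  # → {0: 1, 1: 0}
```

Note how UE 1, with the lower average throughput, wins RC 0 even though UE 0 has the higher instantaneous rate there; this is exactly the fairness effect of dividing by the average rate.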
For FME, the algorithm starts by searching for the UE having the highest metric value; once found, it expands the allocation to the left or to the right (it compares the metric of the RB on the left with that on the right for the same UE and chooses the higher), until no more RBs remain whose highest metric belongs to the user selected above. The RME scheduler starts similarly to FME (it searches for the (UE, RB) pair having the highest metric value), then it expands the allocation both to the left and to the right until no more RBs remain whose maximum metric belongs to the same user. The MAD algorithm is search-tree based; its drawback is a higher computational complexity. It has been proven that RME achieves higher spectral efficiency than FME. RME was then further explored in BIB002 [18]: the authors proposed two variants, the Improved RME (IRME) and the Improved Tree-Based RME (ITRME), whose results show an improvement in spectral efficiency of 15%. • QoS-based algorithms. Two important elements must be taken into account by this scheduler family: the maximum delay and the throughput. The algorithm must also offer the required QoS parameters for each user with respect to the already-served users. The Proportional Fair with Guaranteed Bit Rate (PFGBR) algorithm is a QoS-based algorithm. Its name identifies two components, PF and GBR: the PF metric is used to schedule UEs with non-GBR flows, while for those with GBR flows the algorithm changes the metric in order to differentiate the UEs (giving priority to UEs handling GBR streams). This algorithm has two objectives: maximizing the fairness of non-GBR flows and preserving the QoS of GBR flows. The PF part of the metric is BIB001
$$M(u, c) = \frac{R^*(u, c)}{\bar{R}(u)},$$
where $\bar{R}(u)$ is the average throughput of user $u$ at TTI $t$ and $R^*(u, c)$ is the estimated throughput of user $u$ on resource chunk $c$ at TTI $t$; a Resource Chunk (RC) is a set of contiguous RBs.
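The PFGBR idea of switching the metric for GBR flows can be sketched as follows. This is an illustration of the concept, not the exact formula from BIB001; the boost factor and all numbers are hypothetical:

```python
def pfgbr_metric(est_rate, avg_rate, is_gbr, current_rate, gbr):
    """PFGBR-style metric sketch: non-GBR flows get the plain PF metric
    est_rate / avg_rate, while GBR flows whose current rate is below
    their guaranteed bit rate get a priority boost (hypothetical form)."""
    pf = est_rate / avg_rate
    if is_gbr and current_rate < gbr:
        # Boost under-served GBR flows proportionally to their rate deficit.
        return pf * (gbr / max(current_rate, 1e-9))
    return pf

print(pfgbr_metric(2.0, 1.0, False, 0.0, 0.0))  # plain PF metric: 2.0
print(pfgbr_metric(2.0, 1.0, True, 0.5, 1.0))   # boosted GBR metric: 4.0
```

A GBR flow running at half its guaranteed rate thus outranks a non-GBR flow with the same channel conditions, which is the prioritization behavior described above.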
This algorithm performs very well for UEs having QoS requirements and treats the starvation problem of UEs handling best-effort traffic. The authors in BIB003 proposed two algorithms that use a combined utility-based metric with guaranteed bit rate and delay provisioning. In their formulation, $\alpha_{u,r} = 1$ if RB $r$ is allocated to UE $u$; the exact utility function is defined in BIB003 . The first algorithm, named Single Channel Scheduling Algorithm (SC-SA), assigns one RB to each UE in a given TTI. If the number of active users is smaller than the number of RBs, the algorithm distributes the RBs proportionally among the users; otherwise, i.e., if the number of schedulable users is higher than the total number of available RBs, it assigns RBs to the users experiencing the worst conditions (e.g., users whose maximum delay is almost reached). The main objective of this algorithm is to allocate resources to UEs with severe QoS constraints. The second is called the Multiple Channel Scheduling Algorithm (MC-SA). It is similar to the first; the main difference is the possibility of allocating more than one RB to users that are not meeting their throughput target. The two algorithms behave identically when the number of UEs is smaller than the number of available RBs. When the number of UEs exceeds the number of available RBs, MC-SA allocates RBs starting with the users in the worst conditions: it first looks for the RB that maximizes the data rate and then looks to the left and right of this RB for the allocation of the remaining RBs. • Power-optimizing schedulers. The main purpose of this class of algorithms is to minimize the transmitted signal power so as to extend the activity duration of the UE. In fact, this coincides with the objective of using the SC-FDMA method.
Schedulers of this family usually also include some QoS treatment: they make decisions that reduce the transmitted power while maintaining the minimal QoS requirements. This approach has not been widely addressed by researchers, and therefore few such algorithms appear in the literature, such as BIB004 .
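The power-minimization idea in BIB004 (satisfy a rate constraint while minimizing sum power) can be sketched with a simple greedy rule: add resource blocks in decreasing rate-per-watt order until the rate target is met. This is a loose illustration that ignores the SC-FDMA contiguity constraint; all numbers are hypothetical:

```python
def min_power_rbs(rate, power, target):
    """Greedy sketch of sum-power minimization under a rate constraint:
    add RBs in decreasing rate-per-watt order until the rate target is met.
    rate[n], power[n]: achievable rate and transmit-power cost of RB n
    for a single UE. Returns (chosen RBs, achieved rate, total power)."""
    order = sorted(range(len(rate)),
                   key=lambda n: rate[n] / power[n], reverse=True)
    chosen, got, cost = [], 0.0, 0.0
    for n in order:
        if got >= target:
            break
        chosen.append(n)
        got += rate[n]
        cost += power[n]
    return sorted(chosen), got, cost

rate  = [2.0, 3.0, 1.0, 4.0]   # Mbit/s per RB (hypothetical)
power = [1.0, 1.0, 1.0, 2.0]   # transmit power per RB (hypothetical)
print(min_power_rbs(rate, power, 5.0))  # → ([0, 1], 5.0, 2.0)
```

The greedy rule prefers the two most power-efficient RBs (0 and 1) over RB 3, which offers the highest rate but at twice the power cost.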
|
A review of dynamic vehicle routing problems <s> Introduction <s> The paper considers the single vehicle routing problem with stochastic demands. While most of the literature has studied the a priori solution approach, this work focuses on computing a reoptimization-type routing policy. This is obtained by sequentially improving a given a priori solution by means of a rollout algorithm. The resulting rollout policy appears to be the first computationally tractable algorithm for approximately solving the problem under the reoptimization approach. After describing the solution strategy and providing properties of the rollout policy, the policy behavior is analyzed by conducting a computational investigation. Depending on the quality of the initial solution, the rollout policy obtains 1% to 4% average improvements on the a priori approach with a reasonable computational effort. <s> BIB001 </s> A review of dynamic vehicle routing problems <s> Introduction <s> This paper examines approximate dynamic programming algorithms for the single-vehicle routing problem with stochastic demands from a dynamic or reoptimization perspective. The methods extend the rollout algorithm by implementing different base sequences (i.e. a priori solutions), look-ahead policies, and pruning schemes. The paper also considers computing the cost-to-go with Monte Carlo simulation in addition to direct approaches. The best new method found is a two-step lookahead rollout started with a stochastic base sequence. The routing cost is about 4.8% less than the one-step rollout algorithm started with a deterministic sequence. Results also show that Monte Carlo cost-to-go estimation reduces computation time 65% in large instances with little or no loss in solution quality. Moreover, the paper compares results to the perfect information case from solving exact a posteriori solutions for sampled vehicle routing problems. The confidence interval for the overall mean difference is (3.56%, 4.11%). 
<s> BIB002 </s> A review of dynamic vehicle routing problems <s> Introduction <s> We consider the vehicle-routing problem with stochastic demands (VRPSD) under reoptimization. We develop and analyze a finite-horizon Markov decision process (MDP) formulation for the single-vehicle case and establish a partial characterization of the optimal policy. We also propose a heuristic solution methodology for our MDP, named partial reoptimization, based on the idea of restricting attention to a subset of all the possible states and computing an optimal policy on this restricted set of states. We discuss two families of computationally efficient partial reoptimization heuristics and illustrate their performance on a set of instances with up to and including 100 customers. Comparisons with an existing heuristic from the literature and a lower bound computed with complete knowledge of customer demands show that our best partial reoptimization heuristics outperform this heuristic and are on average no more than 10%--13% away from this lower bound, depending on the type of instances. <s> BIB003
|
becomes idle BIB002 BIB001 BIB003 . Based on these dimensions, Table 1 identifies four categories of routing problems.
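The rollout policies of the works cited above (BIB001, BIB002) improve a base heuristic by one-step lookahead: at each decision point, every candidate next customer is scored by its immediate cost plus the cost of completing the route with the base policy. The sketch below is a deterministic toy version with a nearest-neighbour base policy (the real SVRP versions evaluate expected cost over random demands); coordinates are hypothetical:

```python
import math

def dist(a, b):
    """Euclidean distance between two points."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def nn_cost(pos, pending, pts):
    """Cost of serving the pending customers from pos with the
    nearest-neighbour base policy."""
    cost, here, left = 0.0, pos, set(pending)
    while left:
        nxt = min(left, key=lambda c: dist(here, pts[c]))
        cost += dist(here, pts[nxt])
        here = pts[nxt]
        left.remove(nxt)
    return cost

def rollout_route(depot, pts):
    """One-step rollout: pick the next customer whose immediate cost plus
    the base-policy completion cost is smallest."""
    route, here, left = [], depot, set(range(len(pts)))
    while left:
        nxt = min(left, key=lambda c: dist(here, pts[c])
                                      + nn_cost(pts[c], left - {c}, pts))
        route.append(nxt)
        here = pts[nxt]
        left.remove(nxt)
    return route

pts = [(0, 2), (1, 0), (3, 0)]   # hypothetical customer locations
print(rollout_route((0, 0), pts))  # → [0, 1, 2]
```

The rollout guarantee is that the resulting policy is never worse than the base policy it looks ahead with, which is why the cited works report consistent improvements over nearest-neighbour and a priori routes.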
|
A review of dynamic vehicle routing problems <s> Dynamic and deterministic <s> The standard vehicle-scheduling problem is deterministic, assuming all factors are known with certainty in advance of scheduling. In practice there are several areas which might contain uncertainty. This paper suggests ways of tackling these, but concentrates on problems where some customers do not need deliveries during a scheduling period. If the number of such customers is small, semi-fixed routes may be acceptable. As the number of customers omitted rises, there comes a point when rescheduling becomes preferable. The potential savings made by semi-fixed or variable routes over fixed routes are estimated for standard problems. The implications of these savings are then evaluated for a wholesale distributor. <s> BIB001 </s> A review of dynamic vehicle routing problems <s> Dynamic and deterministic <s> This paper considers the vehicle routing problem with stochastic demands. The objective is to provide an overview of this problem, and to examine a variety of solution methodologies. The concepts and the main issues are reviewed along with some properties of optimal solutions. The existing stochastic mathematical programming formulations are presented and compared and a new formulation is proposed. A new solution framework for the problem using Markovian decision processes is then presented. <s> BIB002 </s> A review of dynamic vehicle routing problems <s> Dynamic and deterministic <s> This paper considers vehicle routing problems (VRPs) with stochastic service and travel times, in which vehicles incur a penalty proportional to the duration of their route in excess of a preset constant. Three mathematical programming models are presented: a chance constrained model, a three-index simple recourse model and a two-index recourse model. A general branch and cut algorithm for the three models is described. Computational results indicate that moderate size problems can be solved to optimality. 
<s> BIB003 </s> A review of dynamic vehicle routing problems <s> Dynamic and deterministic <s> In recent years new insights and algorithms have been obtained for the classical, deterministic vehicle routing problem as well as for natural stochastic and dynamic variations of it. These new developments are based on theoretical analysis, combine probabilistic and combinatorial modeling, and lead to new algorithms that produce near-optimal solutions, and a deeper understanding of uncertainty issues in vehicle routing. In this paper, we survey these new developments with an emphasis on the insights gained and on the algorithms proposed. <s> BIB004 </s> A review of dynamic vehicle routing problems <s> Dynamic and deterministic <s> Abstract The purpose of this review article is to provide a summary of the scientific literature on stochastic vehicle routing problems. The main problems are described within a broad classification scheme and the most important contributions are summarized in table form. <s> BIB005 </s> A review of dynamic vehicle routing problems <s> Dynamic and deterministic <s> Abstract The paper considers a version of the vehicle routing problem where customers’ demands are uncertain. The focus is on dynamically routing a single vehicle to serve the demands of a known set of geographically dispersed customers during real-time operations. The goal consists of minimizing the expected distance traveled in order to serve all customers’ demands. Since actual demand is revealed upon arrival of the vehicle at the location of each customer, fully exploiting this feature requires a dynamic approach. This work studies the suitability of the emerging field of neuro-dynamic programming (NDP) in providing approximate solutions to this difficult stochastic combinatorial optimization problem. The paper compares the performance of two NDP algorithms: optimistic approximate policy iteration and a rollout policy. 
While the former improves the performance of a nearest-neighbor policy by 2.3%, the computational results indicate that the rollout policy generates higher quality solutions. The implication for the practitioner is that the rollout policy is a promising candidate for vehicle routing applications where a dynamic approach is required. Scope and purpose Recent years have seen a growing interest in the development of vehicle routing algorithms to cope with the uncertain and dynamic situations found in real-world applications (see the recent survey paper by Powell et al. [1] ). As noted by Psaraftis [2] , dramatic advances in information and communication technologies provide new possibilities and opportunities for vehicle routing research and applications. The enhanced capability of capturing the information that becomes available during real-time operations opens up new research directions. This informational availability provides the possibility of developing dynamic routing algorithms that take advantage of the information that is dynamically revealed during operations. Exploiting such information presents a significant challenge to the operations research/management science community. The single vehicle routing problem with stochastic demands [3] provides an example of a simple, yet very difficult to solve exactly, dynamic vehicle routing problem [2, p. 157] . The problem can be formulated as a stochastic shortest path problem [4] characterized by an enormous number of states. Neuro-dynamic programming [5] , [6] is a recent methodology that can be used to approximately solve very large and complex stochastic decision and control problems. In this spirit, this paper is meant to study the applicability of neuro-dynamic programming algorithms to the single-vehicle routing problem with stochastic demands. 
<s> BIB006 </s> A review of dynamic vehicle routing problems <s> Dynamic and deterministic <s> The Vehicle Routing Problem covers both exact and heuristic methods developed for the VRP and some of its main variants, emphasizing the practical issues common to VRP. The book is composed of three parts containing contributions from well-known experts. The first part covers basic VRP, known more commonly as capacitated VRP. The second part covers three main variants of VRP with time windows, backhauls, and pickup and delivery. The third part covers issues arising in real-world VRP applications and includes both case studies and references to software packages. The book will be of interest to both researchers and graduate-level students in the communities of operations research and matematical sciences. It focuses on a specific family of problems while offering a complete overview of the effective use of the most important techniques proposed for the solution of hard combinatorial problems. Practitioners will find this book particularly usef <s> BIB007 </s> A review of dynamic vehicle routing problems <s> Dynamic and deterministic <s> Abstract This paper considers the redeployment problem for a fleet of ambulances. This problem is encountered in the real-time management of emergency medical services. A dynamic model is proposed and a dynamic ambulance management system is described. This system includes a parallel tabu search heuristic to precompute redeployment scenarios. Simulations based on real-data confirm the efficiency of the proposed approach. <s> BIB008 </s> A review of dynamic vehicle routing problems <s> Dynamic and deterministic <s> The classical Vehicle Routing Problem consists ofdetermining optimal routes form identical vehicles, starting and leaving at the depot, such that every customer is visited exactly once. In the capacitated version (CVRP) the total demand collected along a route cannot exceed the vehicle capacity. 
This article considers the situation where some of the demands are stochastic. This implies that the level of demand at each customer is not known before arriving at the customer. In some cases, the vehicle may thus be unable to load the customer's demand, even if the expected demand along the route does not exceed the vehicle capacity. Such a situation is referred to as a failure. The capacitated vehicle routing problem with stochastic demands (SVRP) then consists of minimizing the total cost of the planned routes and of expected failures. Here, penalties for failures correspond to return trips to the depot. The vehicle first returns to the depot to unload, then resumes its trip as originally planned. This article studies an implementation of the Integer L-shaped method for the exact solution of the SVRP. It develops new lower bounds on the expected penalty for failures. In addition, it provides variants of the optimality cuts for the SVRP that also hold at fractional solutions. Numerical experiments indicate that some instances involving up to 100 customers and few vehicles can be solved to optimality within a relatively short computing time. <s> BIB009 </s> A review of dynamic vehicle routing problems <s> Dynamic and deterministic <s> In a companion paper (Godfrey and Powell 2002) we introduced an adaptive dynamic programming algorithm for stochastic dynamic resource allocation problems, which arise in the context of logistics and distribution, fleet management, and other allocation problems. The method depends on estimating separable nonlinear approximations of value functions, using a dynamic programming framework. That paper considered only the case in which the time to complete an action was always a single time period. Experiments with this technique quickly showed that when the basic algorithm was applied to problems with multiperiod travel times, the results were very poor.
In this paper, we illustrate why this behavior arose, and propose a modified algorithm that addresses the issue. Experimental work demonstrates that the modified algorithm works on problems with multiperiod travel times, with results that are almost as good as the original algorithm applied to single period travel times. <s> BIB010 </s> A review of dynamic vehicle routing problems <s> Dynamic and deterministic <s> We consider an aggregated version of a large-scale driver scheduling problem, derived from an application in less-than-truckload trucking, as a dynamic resource allocation problem. Drivers are aggregated into groups characterized by an attribute vector which captures the important attributes required to incorporate the work rules. The problem is very large: over 5,000 drivers and 30,000 loads in a four-day planning horizon. We formulate a problem that we call the heterogeneous resource allocation problem, which is more general than a classical multicommodity flow problem. Since the tasks have one-sided time windows, the problem is too large to even solve an LP relaxation. We formulate the problem as a multistage dynamic program and solve it using adaptive dynamic programming techniques. Since our problem is too large to solve using commercial solvers, we propose three independent benchmarks and demonstrate that our technique appears to be providing high-quality solutions in a reasonable amount of time. <s> BIB011 </s> A review of dynamic vehicle routing problems <s> Dynamic and deterministic <s> We consider stochastic vehicle routing problems on a network with random travel and service times. A fleet of one or more vehicles is available to be routed through the network to service each node. Two versions of the model are developed based on alternative objective functions. We provide bounds on optimal objective function values and conditions under which reductions to simpler models can be made.
Our solution method embeds a branch-and-cut scheme within a Monte Carlo sampling-based procedure. <s> BIB012 </s> A review of dynamic vehicle routing problems <s> Dynamic and deterministic <s> The sample average approximation (SAA) method is an approach for solving stochastic optimization problems by using Monte Carlo simulation. In this technique the expected objective function of the stochastic problem is approximated by a sample average estimate derived from a random sample. The resulting sample average approximating problem is then solved by deterministic optimization techniques. The process is repeated with different samples to obtain candidate solutions along with statistical estimates of their optimality gaps. ::: ::: We present a detailed computational study of the application of the SAA method to solve three classes of stochastic routing problems. These stochastic problems involve an extremely large number of scenarios and first-stage integer variables. For each of the three problem classes, we use decomposition and branch-and-cut to solve the approximating problem within the SAA scheme. Our computational results indicate that the proposed method is successful in solving problems with up to 21694 scenarios to within an estimated 1.0% of optimality. Furthermore, a surprising observation is that the number of optimality cuts required to solve the approximating problem to optimality does not significantly increase with the size of the sample. Therefore, the observed computation times needed to find optimal solutions to the approximating problems grow only linearly with the sample size. As a result, we are able to find provably near-optimal solutions to these difficult stochastic programs using only a moderate amount of computation time. <s> BIB013 </s> A review of dynamic vehicle routing problems <s> Dynamic and deterministic <s> Abstract This article traces the evolution of ambulance location and relocation models proposed over the past 30 years. 
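The SAA scheme just described can be illustrated on a toy problem with recourse: the expectation in the objective is replaced by an average over a Monte Carlo sample, the deterministic approximation is optimized, and the process is repeated with fresh samples to obtain candidate solutions. The instance below (a simple capacity-versus-shortfall trade-off) and all names are illustrative assumptions, not taken from the cited study:

```python
import random

def saa_solve(candidates, cost, sample):
    """Minimize the sample-average approximation of the expected cost."""
    return min(candidates,
               key=lambda x: sum(cost(x, xi) for xi in sample) / len(sample))

# Toy recourse cost (illustrative): plan capacity x against random demand xi;
# unused capacity costs 1 per unit, a shortfall triggers recourse at 4 per unit.
def cost(x, xi):
    return (x - xi) if x >= xi else 4 * (xi - x)

random.seed(0)
candidates = range(0, 21)
solutions = []
for _ in range(5):                      # SAA replications with fresh samples
    sample = [random.randint(0, 10) for _ in range(200)]
    solutions.append(saa_solve(candidates, cost, sample))
print(solutions)    # replications typically agree on a capacity near 8
```

In the cited work each approximating problem is an integer program solved by decomposition and branch-and-cut rather than the enumeration used here, but the sampling-and-reoptimization loop is the same idea.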
The models are classified in two main categories. Deterministic models are used at the planning stage and ignore stochastic considerations regarding the availability of ambulances. Probabilistic models reflect the fact that ambulances operate as servers in a queueing system and cannot always answer a call. In addition, dynamic models have been developed to repeatedly relocate ambulances throughout the day. <s> BIB014 </s> A review of dynamic vehicle routing problems <s> Dynamic and deterministic <s> There has been considerable recent interest in the dynamic vehicle routing problem, but the complexities of this problem class have generally restricted research to myopic models. In this paper, we address the simpler dynamic assignment problem, where a resource (container, vehicle, or driver) can serve only one task at a time. We propose a very general class of dynamic assignment models, and propose an adaptive, nonmyopic algorithm that involves iteratively solving sequences of assignment problems no larger than what would be required of a myopic model. We consider problems where the attribute space of future resources and tasks is small enough to be enumerated, and propose a hierarchical aggregation strategy for problems where the attribute spaces are too large to be enumerated. Finally, we use the formulation to also test the value of advance information, which offers a more realistic estimate over studies that use purely myopic models. <s> BIB015 </s> A review of dynamic vehicle routing problems <s> Dynamic and deterministic <s> The capacitated vehicle routing problem (CVRP) is the problem in which a set of identical vehicles located at a central depot is to be optimally routed to supply customers with known demands subject to vehicle capacity constraints. This paper provides a review of the most recent developments that had a major impact in the current state-of-the-art of exact algorithms for the CVRP. 
The most important mathematical formulations for the problem together with various CVRP relaxations are reviewed. The paper also describes the recent exact methods for the CVRP and reports a comparison of their computational performances. <s> BIB016 </s> A review of dynamic vehicle routing problems <s> Dynamic and deterministic <s> In the Vehicle Routing Problem (VRP), the aim is to design a set of m minimum cost vehicle routes through n customer locations, so that each route starts and ends at a common location and some side constraints are satisfied. Common applications arise in newspaper and food delivery, and in milk collection. This article summarizes the main known results for the classical VRP in which only vehicle capacity constraints are present. The article is structured around three main headings: exact algorithms, classical heuristics, and metaheuristics. © 2007 Wiley Periodicals, Inc. Naval Research Logistics, 2007 <s> BIB017 </s> A review of dynamic vehicle routing problems <s> Dynamic and deterministic <s> We consider online routing optimization problems where the objective is to minimize the time needed to visit a set of locations under various constraints; the problems are online because the set of locations is revealed incrementally over time. We consider two main problems: (1) the online traveling salesman problem (TSP) with precedence and capacity constraints, and (2) the online TSP with m salesmen. For both problems we propose online algorithms, each with a competitive ratio of 2; for the m-salesmen problem, we show that our result is best-possible. We also consider polynomial-time online algorithms. We then consider resource augmentation, where we give the online servers additional resources to offset the powerful offline adversary advantage. In this way, we address a main criticism of competitive analysis.
We consider the cases where the online algorithm has access to faster servers, servers with larger capacities, additional servers, and/or advanced information. We derive improved competitive ratios. We also give lower bounds on the competitive ratios under resource augmentation, which in many cases are tight and lead to best-possible results. Finally, we study online algorithms from an asymptotic point of view. We show that, under general stochastic structures for the problem data, unknown and unused by the online player, the online algorithms are almost surely asymptotically optimal. Furthermore, we provide computational results that show that the convergence can be very fast. <s> BIB018 </s> A review of dynamic vehicle routing problems <s> Dynamic and deterministic <s> Dynamic response to emergencies requires real time information from transportation agencies, public safety agencies and hospitals as well as the many essential operational components. In emergency response operations, good vehicle dispatching strategies can result in more efficient service by reducing vehicles’ travel times and system preparation time, and the coordination between these components directly influences the effectiveness of activities involved in emergency response. In this chapter, an integrated emergency response fleet deployment system is proposed which embeds an optimization approach to assist the dispatch center operators in assigning emergency vehicles to emergency calls, while having the capability to look ahead for future demands. The mathematical model deals with the real time vehicle dispatching problem while accounting for the service requirements and coverage concerns for future demand by relocating and diverting the on-route vehicles and remaining vehicles among stations. A rolling-horizon approach is adopted in the model to reduce the relocation sites in order to save computation time.
A simulation program is developed to validate the model and to compare various dispatching strategies. <s> BIB019 </s> A review of dynamic vehicle routing problems <s> Dynamic and deterministic <s> The Vehicle Routing Problem (VRP) was introduced 50 years ago by Dantzig and Ramser under the title “The Truck Dispatching Problem.” The study of the VRP has given rise to major developments in the fields of exact algorithms and heuristics. In particular, highly sophisticated exact mathematical programming decomposition algorithms and powerful metaheuristics for the VRP have been put forward in recent years. The purpose of this article is to provide a brief account of this development. <s> BIB020 </s> A review of dynamic vehicle routing problems <s> Dynamic and deterministic <s> We consider the vehicle-routing problem with stochastic demands (VRPSD) under reoptimization. We develop and analyze a finite-horizon Markov decision process (MDP) formulation for the single-vehicle case and establish a partial characterization of the optimal policy. We also propose a heuristic solution methodology for our MDP, named partial reoptimization, based on the idea of restricting attention to a subset of all the possible states and computing an optimal policy on this restricted set of states. We discuss two families of computationally efficient partial reoptimization heuristics and illustrate their performance on a set of instances with up to and including 100 customers. Comparisons with an existing heuristic from the literature and a lower bound computed with complete knowledge of customer demands show that our best partial reoptimization heuristics outperform this heuristic and are on average no more than 10%--13% away from this lower bound, depending on the type of instances.
<s> BIB021 </s> A review of dynamic vehicle routing problems <s> Dynamic and deterministic <s> The multi-compartment vehicle routing problem (MC-VRP) consists of designing transportation routes to satisfy the demands of a set of customers for several products that, because of incompatibility constraints, must be loaded in independent vehicle compartments. Despite its wide practical applicability the MC-VRP has not received much attention in the literature, and the few existing methods assume perfect knowledge of the customer demands, regardless of their stochastic nature. This paper extends the MC-VRP by introducing uncertainty on what it is known as the MC-VRP with stochastic demands (MC-VRPSD). The MC-VRPSD is modeled as a stochastic program with recourse and solved by means of a memetic algorithm. The proposed memetic algorithm couples genetic operators and local search procedures proven to be effective on deterministic routing problems with a novel individual evaluation and reparation strategy that accounts for the stochastic nature of the problem. The algorithm was tested on instances of up to 484 customers, and its results were compared to those obtained by a savings-based heuristic and a memetic algorithm (MA/SCS) for the MC-VRP that uses a spare capacity strategy to handle demand fluctuations. In addition to effectively solve the MC-VRPSD, the proposed MA/SCS also improved 14 best known solutions in a 40-problem testbed for the MC-VRP. <s> BIB022 </s> A review of dynamic vehicle routing problems <s> Dynamic and deterministic <s> The vehicle routing problem with stochastic demands (VRPSD) consists of designing transportation routes of minimal expected cost to satisfy a set of customers with random demands of known probability distribution. 
This paper tackles a generalization of the VRPSD known as the multicompartment VRPSD (MC-VRPSD), a problem in which each customer demands several products that, because of incompatibility constraints, must be loaded in independent vehicle compartments. To solve the problem, we propose three simple and effective constructive heuristics based on a stochastic programming with recourse formulation. One of the heuristics is an extension to the multicompartment scenario of a savings-based algorithm for the VRPSD; the other two are different versions of a novel look-ahead heuristic that follows a route-first, cluster-second approach. In addition, to enhance the performance of the heuristics these are coupled with a post-optimization procedure based on the classical 2-Opt heuristic. The three algorithms were tested on instances of up to 200 customers from the MC-VRPSD and VRPSD literature. The proposed heuristics unveiled 26 and 12 new best known solutions for a set of 180 MC-VRPSD problems and a 40-instance testbed for the VRPSD, respectively. <s> BIB023
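The classical 2-Opt post-optimization mentioned above repeatedly reverses a segment of a route whenever doing so shortens the tour, until no improving move remains. A minimal sketch on a deterministic tour (the instance and names are ours):

```python
import math

def two_opt(route, dist):
    """Classical 2-Opt: reverse a segment whenever that shortens the tour,
    until no improving move remains."""
    def length(r):
        return sum(dist[r[i]][r[i + 1]] for i in range(len(r) - 1))
    improved = True
    while improved:
        improved = False
        for i in range(1, len(route) - 2):
            for j in range(i + 1, len(route) - 1):
                cand = route[:i] + route[i:j + 1][::-1] + route[j + 1:]
                if length(cand) < length(route) - 1e-12:
                    route, improved = cand, True
    return route

pts = [(0, 0), (0, 1), (1, 1), (1, 0)]                  # unit square
dist = [[math.dist(p, q) for q in pts] for p in pts]
print(two_opt([0, 2, 1, 3, 0], dist))   # crossing tour untangled: [0, 1, 2, 3, 0]
```

In the stochastic setting of the cited papers, the tour length above would be replaced by the expected cost of the route, including the expected recourse cost of failures.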
Table 1: Taxonomy of vehicle routing problems by information evolution and quality.

                              Deterministic input          Stochastic input
  Input known beforehand      Static and deterministic     Static and stochastic
  Input changes over time     Dynamic and deterministic    Dynamic and stochastic

In static and deterministic problems, all input is known beforehand and vehicle routes do not change once they are in execution. This classical problem has been extensively studied in the literature, and we refer the interested reader to the recent reviews of exact and approximate methods by Baldacci et al. BIB016 , Cordeau et al. , Laporte BIB017 BIB020 , and Toth and Vigo BIB007 . Static and stochastic problems are characterized by input partially known as random variables, whose realizations are only revealed during the execution of the routes. Additionally, it is assumed that routes are designed a priori and that only minor changes are allowed afterwards. For instance, allowable changes include planning a trip back to the depot or skipping a customer. Applications in this category do not require any technological support. Uncertainty may affect any of the input data, yet the three most studied cases are: stochastic customers, where a customer needs to be serviced with a given probability BIB001 ; stochastic times, in which either service or travel times are modeled by random variables BIB012 BIB003 BIB013 ; and lastly, stochastic demands BIB002 BIB009 BIB022 BIB023 BIB006 BIB021 . Further details on static and stochastic vehicle routing can be found in the reviews by Bertsimas and Simchi-Levi BIB004 , Cordeau et al. , and Gendreau et al. BIB005 . In dynamic and deterministic problems, part or all of the input is unknown and revealed dynamically during the design or execution of the routes. For these problems, vehicle routes are redefined in an ongoing fashion, which requires technological support for real-time communication between the vehicles and the decision maker (e.g., mobile phones and global positioning systems). This class of problems is also referred to as online or real-time by some authors BIB018 .
Similarly, dynamic and stochastic problems have part or all of their input unknown and revealed dynamically during the execution of the routes but, in contrast with the previous category, exploitable stochastic knowledge is available on the dynamically revealed information. As before, the vehicle routes can be redefined in an ongoing fashion with the help of technological support. Besides dynamic routing problems, where customer visits must be explicitly sequenced along the routes, there are other related vehicle dispatching problems, such as managing a fleet of emergency vehicles BIB014 BIB008 BIB019 , or the so-called dynamic allocation problems in the area of long-haul truckload trucking BIB010 BIB011 BIB015 . In this paper, we focus solely on dynamic problems with an explicit routing dimension. The remainder of this document is organized as follows. Section 2 presents a general description of dynamic routing problems and introduces the notion of degree of dynamism. Section 3 reviews different applications in which dynamic routing problems arise, while Section 4 provides a comprehensive survey of solution approaches. Finally, Section 5 concludes this paper and gives directions for further research.
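The degree of dynamism is commonly measured, following Lund et al. and Larsen, as the fraction of requests that arrive after the routes start executing, with a time-weighted variant that also accounts for how late in the planning horizon each request is disclosed. A small sketch of both measures (the exact definitions used later in the survey may differ slightly):

```python
def degree_of_dynamism(disclosure_times):
    """Fraction of requests not known at time 0 (Lund et al.-style)."""
    return sum(1 for t in disclosure_times if t > 0) / len(disclosure_times)

def effective_dod(disclosure_times, horizon):
    """Time-weighted variant: requests disclosed later count more
    (Larsen-style effective degree of dynamism)."""
    return sum(disclosure_times) / (horizon * len(disclosure_times))

times = [0, 0, 2, 5, 8]    # request disclosure times over a planning horizon of 10
print(degree_of_dynamism(times))    # 0.6
print(effective_dod(times, 10))     # 0.3
```

Both measures lie in [0, 1]: a purely static instance scores 0, while an instance in which every request arrives at the end of the horizon scores 1.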
A review of dynamic vehicle routing problems <s> Dynamic vehicle routing problems 2.1 A general definition <s> An investigation of the single-vehicle, many-to-many, immediate-request dial-a-ride problem is developed in two parts I and II. Part I focuses on the “static” case of the problem. In this case, intermediate requests that may appear during the execution of the route are not considered. A generalized objective function is examined, the minimization of a weighted combination of the time to service all customers and of the total degree of “dissatisfaction” experienced by them while waiting for service. This dissatisfaction is assumed to be a linear function of the waiting and riding times of each customer. Vehicle capacity constraints and special priority rules are part of the problem. A Dynamic Programming approach is developed. The algorithm exhibits a computational effort which, although an exponential function of the size of the problem, is asymptotically lower than the corresponding effort of the classical Dynamic Programming algorithm applied to a Traveling Salesman Problem of the same size. Part II extends this approach to solving the equivalent “dynamic” case. In this case, new customer requests are automatically eligible for consideration at the time they occur. The procedure is an open-ended sequence of updates, each following every new customer request. The algorithm optimizes only over known inputs and does not anticipate future customer requests. Indefinite deferment of a customer's request is prevented by the priority rules introduced in Part I. Examples in both “static” and “dynamic” cases are presented. <s> BIB001 </s> A review of dynamic vehicle routing problems <s> Dynamic vehicle routing problems 2.1 A general definition <s> We propose and analyze a generic mathematical model for dynamic, stochastic vehicle routing problems, the dynamic traveling repairman problem (DTRP). 
The model is motivated by applications in which the objective is to minimize the wait for service in a stochastic and dynamically changing environment. This is a departure from classical vehicle routing problems where one seeks to minimize total travel time in a static, deterministic environment. Potential areas of application include repair, inventory, emergency service and scheduling problems. The DTRP is defined as follows: Demands for service arrive in time according to a Poisson process, are independent and uniformly distributed in a Euclidean service region, and require an independent and identically distributed amount of on-site service by a vehicle. The problem is to find a policy for routing the service vehicle that minimizes the average time demands spent in the system. We propose and analyze several policies for the DTRP. We find a provably optimal... <s> BIB002 </s> A review of dynamic vehicle routing problems <s> Dynamic vehicle routing problems 2.1 A general definition <s> An abundant literature about vehicle routing and scheduling problems is available in the scientific community. However, a large fraction of this work deals with static problems where all data are known before the routes are constructed. Recent technological advances now create environments where decisions are taken quickly, using new or updated information about the current routing situation. This paper describes such a dynamic problem, motivated from courier service applications, where customer requests with soft time windows must be dispatched in real time to a fleet of vehicles in movement. A tabu search heuristic, initially designed for the static version of the problem, has been adapted to the dynamic case and implemented on a parallel platform to increase the computational effort. Numerical results are reported using different request arrival rates, and comparisons are established with other heuristic methods.
<s> BIB003 </s> A review of dynamic vehicle routing problems <s> Dynamic vehicle routing problems 2.1 A general definition <s> Abstract The paper considers a version of the vehicle routing problem where customers’ demands are uncertain. The focus is on dynamically routing a single vehicle to serve the demands of a known set of geographically dispersed customers during real-time operations. The goal consists of minimizing the expected distance traveled in order to serve all customers’ demands. Since actual demand is revealed upon arrival of the vehicle at the location of each customer, fully exploiting this feature requires a dynamic approach. This work studies the suitability of the emerging field of neuro-dynamic programming (NDP) in providing approximate solutions to this difficult stochastic combinatorial optimization problem. The paper compares the performance of two NDP algorithms: optimistic approximate policy iteration and a rollout policy. While the former improves the performance of a nearest-neighbor policy by 2.3%, the computational results indicate that the rollout policy generates higher quality solutions. The implication for the practitioner is that the rollout policy is a promising candidate for vehicle routing applications where a dynamic approach is required. Scope and purpose Recent years have seen a growing interest in the development of vehicle routing algorithms to cope with the uncertain and dynamic situations found in real-world applications (see the recent survey paper by Powell et al. [1] ). As noted by Psaraftis [2] , dramatic advances in information and communication technologies provide new possibilities and opportunities for vehicle routing research and applications. The enhanced capability of capturing the information that becomes available during real-time operations opens up new research directions. 
This informational availability provides the possibility of developing dynamic routing algorithms that take advantage of the information that is dynamically revealed during operations. Exploiting such information presents a significant challenge to the operations research/management science community. The single vehicle routing problem with stochastic demands [3] provides an example of a simple, yet very difficult to solve exactly, dynamic vehicle routing problem [2, p. 157] . The problem can be formulated as a stochastic shortest path problem [4] characterized by an enormous number of states. Neuro-dynamic programming [5] , [6] is a recent methodology that can be used to approximately solve very large and complex stochastic decision and control problems. In this spirit, this paper is meant to study the applicability of neuro-dynamic programming algorithms to the single-vehicle routing problem with stochastic demands. <s> BIB004 </s> A review of dynamic vehicle routing problems <s> Dynamic vehicle routing problems 2.1 A general definition <s> In the Dial-a-Ride problem (DARP) users specify transportation requests between origins and destinations to be served by vehicles. In the dynamic DARP, requests are received throughout the day and the primary objective is to accept as many requests as possible while satisfying operational constraints. This article describes and compares a number of parallel implementations of a Tabu search heuristic previously developed for the static DARP, i.e., the variant of the problem where all requests are known in advance. Computational results show that the proposed algorithms are able to satisfy a high percentage of user requests. <s> BIB005 </s> A review of dynamic vehicle routing problems <s> Dynamic vehicle routing problems 2.1 A general definition <s> The dynamic pickup and delivery problem with time windows arises in courier companies making same-day pickup and delivery of letters and small parcels. 
In this problem solution quality is affected by the way waiting time is distributed along vehicle routes. This article defines and compares four waiting strategies. An extensive empirical study is carried out on instances generated using real-life data. <s> BIB006 </s> A review of dynamic vehicle routing problems <s> Dynamic vehicle routing problems 2.1 A general definition <s> We introduce the concept of fruitful regions in a dynamic routing context: regions that have a high potential of generating loads to be transported. The objective is to maximise the number of loads transported, while keeping to capacity and time constraints. Loads arrive while the problem is being solved, which makes it a real-time routing problem. The solver is a self-adaptive evolutionary algorithm that ensures feasible solutions at all times. We investigate under what conditions the exploration of fruitful regions improves the effectiveness of the evolutionary algorithm. <s> BIB007 </s> A review of dynamic vehicle routing problems <s> Dynamic vehicle routing problems 2.1 A general definition <s> This paper considers online stochastic optimization problems where uncertainties are characterized by a distribution that can be sampled and where time constraints severely limit the number of offline optimizations which can be performed at decision time and/or in between decisions. It reviews our recent progress in this area, proposes some new algorithms, and reports some new experimental results on two problems of fundamentally different nature: packet scheduling and multiple vehicle routing (MVR). In particular, the paper generalizes our earlier generic online algorithm with precomputation, least-commitment, service guarantees, and multiple decisions, all which are present in the MVR applications. Robustness results are also presented for multiple vehicle routing. 
<s> BIB008 </s> A review of dynamic vehicle routing problems <s> Dynamic vehicle routing problems 2.1 A general definition <s> With the increasing availability of real-time information and communication systems in logistics, the need for appropriate planning algorithms, which make use of this technology, arises. Customers in transport markets increasingly expect quicker and more flexible fulfillment of their orders, especially in the electronic marketplace. This paper considers a dynamic routing system that dispatches a fleet of vehicles according to customer orders arriving at random during the planning period. Each customer order requires a transport from a pickup location to a delivery location in a given time window. The system disposes of online communication with all drivers and customers and, in addition, disposes of online information on travel times from a traffic management center. This paper presents a planning framework for this situation which, to our knowledge, has not yet been addressed in the literature. It then describes three routing procedures for event-based dispatching, which differ in the length of the planning horizon per event. We focus on the use of dynamic travel time information, which requires dynamic shortest path calculations. The procedures are tested and compared using real-life data of an urban traffic management center and a logistics service provider. <s> BIB009 </s> A review of dynamic vehicle routing problems <s> Dynamic vehicle routing problems 2.1 A general definition <s> The field of dynamic vehicle routing and scheduling is growing at a fast pace nowadays, due to many potential applications in courier services, emergency services, truckload and less-than-truckload trucking, and many others. In this paper, a dynamic vehicle routing and scheduling problem with time windows is described where both real-time customer requests and dynamic travel times are considered. 
Different reactive dispatching strategies are defined and compared through the setting of a single "tolerance" parameter. The results show that some tolerance to deviations with the current planned solution usually leads to better solutions. <s> BIB010 </s> A review of dynamic vehicle routing problems <s> Dynamic vehicle routing problems 2.1 A general definition <s> This paper presents a dynamic vehicle routing and scheduling model that incorporates real time information using variable travel times. Dynamic traffic simulation was used to update travel times. The model was applied to a test road network. Results indicated that the total cost decreased by implementing the dynamic vehicle routing and scheduling model with the real time information based on variable travel times compared with that of the forecast model. As well, in many cases total running times of vehicles were also decreased. Therefore, the dynamic vehicle routing and scheduling model will be beneficial for both carriers in reducing total costs and society at large by alleviating traffic congestion. <s> BIB011 </s> A review of dynamic vehicle routing problems <s> Dynamic vehicle routing problems 2.1 A general definition <s> In this paper we present a formulation for the dynamic vehicle routing problem with time-dependent travel times. We also present a genetic algorithm to solve the problem. The problem is a pick-up or delivery vehicle routing problem with soft time windows in which we consider multiple vehicles with different capacities, real-time service requests, and real-time variations in travel times between demand nodes.The performance of the genetic algorithm is evaluated by comparing its results with exact solutions and lower bounds for randomly generated test problems. For small size problems with up to 10 demands, the genetic algorithm provides almost the same results as the exact solutions, while its computation time is less than 10% of the time required to produce the exact solutions. 
For the problems with 30 demand nodes, the genetic algorithm results have less than 8% gap with lower bounds. This research also shows that as the uncertainty in the travel time information increases, a dynamic routing strategy that takes the real-time traffic information into account becomes increasingly superior to a static one. This is clear when we compare the static and dynamic routing strategies in problem scenarios that have different levels of uncertainty in travel time information. In additional tests on a simulated network, the proposed algorithm works well in dealing with situations in which accidents cause significant congestion in some part of the transportation network. <s> BIB012 </s> A review of dynamic vehicle routing problems <s> Dynamic vehicle routing problems 2.1 A general definition <s> In this paper, we study a rich vehicle routing problem incorporating various complexities found in real-life applications. The General Vehicle Routing Problem (GVRP) is a combined load acceptance and generalised vehicle routing problem. Among the real-life requirements are time window restrictions, a heterogeneous vehicle fleet with different travel times, travel costs and capacity, multi-dimensional capacity constraints, order/vehicle compatibility constraints, orders with multiple pickup, delivery and service locations, different start and end locations for vehicles, and route restrictions for vehicles. The GVRP is highly constrained and the search space is likely to contain many solutions such that it is impossible to go from one solution to another using a single neighbourhood structure. Therefore, we propose iterative improvement approaches based on the idea of changing the neighbourhood structure during the search. <s> BIB013 </s> A review of dynamic vehicle routing problems <s> Dynamic vehicle routing problems 2.1 A general definition <s> The statement of the standard vehicle routing problem cannot always capture all aspects of real-world applications.
As a result, extensions or modifications to the model are warranted. Here we consider the case when customers can call in orders during the daily operations; i.e., both customer locations and demands may be unknown in advance. This is modeled as a combined dynamic and stochastic programming problem, and a heuristic solution method is developed where sample scenarios are generated, solved heuristically, and combined iteratively to form a solution to the overall problem. <s> BIB014 </s> A review of dynamic vehicle routing problems <s> Dynamic vehicle routing problems 2.1 A general definition <s> An important, but seldom investigated, issue in the field of dynamic vehicle routing and dispatching is how to exploit information about future events to improve decision making. In this paper, we address this issue in a real-time setting with a strategy based on probabilistic knowledge about future request arrivals to better manage the fleet of vehicles. More precisely, the new strategy introduces dummy customers (representing forecasted requests) in vehicle routes to provide a good coverage of the territory. This strategy is assessed through computational experiments performed in a simulated environment. <s> BIB015 </s> A review of dynamic vehicle routing problems <s> Dynamic vehicle routing problems 2.1 A general definition <s> In this article, the real-time time-dependent vehicle routing problem with time windows is formulated as a series of mixed integer programming models that account for real-time and time-dependent travel times, as well as for real-time demands in a unified framework. In addition to vehicles routes, departure times are treated as decision variables, with delayed departure permitted at each node serviced. 
A heuristic comprising route construction and route improvement is proposed within which critical nodes are defined to delineate the scope of the remaining problem along the time rolling horizon and an efficient technique for choosing optimal departure times is developed. Fifty-six numerical problems and a real application are provided for demonstration. <s> BIB016 </s> A review of dynamic vehicle routing problems <s> Dynamic vehicle routing problems 2.1 A general definition <s> This paper considers a dynamic and stochastic routing problem in which information about customer locations and probabilistic information about future service requests are used to maximize the expected number of customers served by a single uncapacitated vehicle. The problem is modeled as a Markov decision process, and analytical results on the structure of the optimal policy are derived. For the case of a single dynamic customer, we completely characterize the optimal policy. Using the analytical results, we propose a real-time heuristic and demonstrate its effectiveness compared with a series of other intuitively appealing heuristics. We also use computational tests to determine the heuristic value of knowing both customer locations and probabilistic information about future service requests. <s> BIB017 </s> A review of dynamic vehicle routing problems <s> Dynamic vehicle routing problems 2.1 A general definition <s> In this chapter we describe an innovative real-time fleet management system designed and implemented for eCourier Ltd (London, UK) for which patents are pending in the United States and elsewhere. This paper describes both the business challenges and benefits of the implementation of a real-time fleet management system (with reference to empirical metrics such as courier efficiency, service times, and financial data), as well as the theoretical and implementation challenges of constructing such a system. 
In short, the system dramatically reduces the requirements of human supervisors for fleet management, improves service and increases courier efficiency. We first illustrate the overall architecture, then depict the main algorithms, including the service territory zoning methodology, the travel time forecasting procedure and the job allocation heuristic <s> BIB018 </s> A review of dynamic vehicle routing problems <s> Dynamic vehicle routing problems 2.1 A general definition <s> The distribution of goods based on road services in urban areas, usually known as City Logistics, contributes to traffic congestion and is affected by traffic congestion, generates environmental impacts and incurs in high logistics costs. Therefore a holistic approach to the design and evaluation of City Logistics applications requires an integrated framework in which all components could work together that is must be modelled not only in terms of the core models for vehicle routing and fleet management, but also in terms of models able of including the dynamic aspects of traffic on the underlying road network, namely if Information and Communication Technologies (ICT) applications are taken into account. This paper reports on the modelling framework developed in the national projects SADERYL-I and II, sponsored by the Spanish “Direccion General de Ciencia y Tecnologia” (DGCYT) and tested in the European Project MEROPE of the INTERREG IIIB Programme. 
The modelling framework consists of a Decision Support System whose core architecture is composed by a Data Base, to store all the data required by the implied models: location of logistic centres and customers, capacities of warehouses and depots, transportation costs, operational costs, fleet data, etc.; a Database Management System, for the updating of the information stored in the data base; a Model Base, containing the family of models and algorithms to solve the related problems, discrete location, network location, street vehicle routing and scheduling; a Model Base Management System, to update, modify, add or delete models from the Model Base; a GIS based Graphic User Interface supporting the dialogues to define and update data, select the model suitable to the intended problem, generate automatically from the digital map of the road network the input graph for the Network Location and Vehicle Routing models, apply the corresponding algorithm, visualize the problem and the results, etc. To account for the dynamics of urban traffic flows the system includes an underlying dynamic traffic simulation model (AIMSUN in this case) which is able to track individually the fleet vehicles, emulating in this way the monitoring of fleet vehicles in a real time fleet management system, gathering dynamic data (i.e. current position, previous position, current speed, previous speed, etc.) while following the vehicle, in a similar way as the data that in real life an equipped vehicle could provide. 
This is the information required by a “Dynamic Router and Scheduler” to determine which vehicle will be assigned to the new service and which will be the new route for the selected vehicle <s> BIB019 </s> A review of dynamic vehicle routing problems <s> Dynamic vehicle routing problems 2.1 A general definition <s> Distribution schedules designed a priori may not cope adequately with unexpected events that occur during the plan execution, such as adverse traffic conditions or vehicle failures. This limitation may lead to delays, higher costs, and inferior customer service. This chapter presents the design and implementation of a real-time fleet management system that handles such unexpected events during urban freight distribution. The system monitors delivery vehicles, detects deviations from the distribution plan using dynamic travel time prediction, and adjusts the schedule accordingly by suggesting effective rerouting interventions. The system has been tested in a Greek 3PL operator and the results show significant improvements in customer service <s> BIB020 </s> A review of dynamic vehicle routing problems <s> Dynamic vehicle routing problems 2.1 A general definition <s> This paper examines approximate dynamic programming algorithms for the single-vehicle routing problem with stochastic demands from a dynamic or reoptimization perspective. The methods extend the rollout algorithm by implementing different base sequences (i.e. a priori solutions), look-ahead policies, and pruning schemes. The paper also considers computing the cost-to-go with Monte Carlo simulation in addition to direct approaches. The best new method found is a two-step lookahead rollout started with a stochastic base sequence. The routing cost is about 4.8% less than the one-step rollout algorithm started with a deterministic sequence. Results also show that Monte Carlo cost-to-go estimation reduces computation time 65% in large instances with little or no loss in solution quality. 
Moreover, the paper compares results to the perfect information case from solving exact a posteriori solutions for sampled vehicle routing problems. The confidence interval for the overall mean difference is (3.56%, 4.11%). <s> BIB021 </s> A review of dynamic vehicle routing problems <s> Dynamic vehicle routing problems 2.1 A general definition <s> We consider the vehicle-routing problem with stochastic demands (VRPSD) under reoptimization. We develop and analyze a finite-horizon Markov decision process (MDP) formulation for the single-vehicle case and establish a partial characterization of the optimal policy. We also propose a heuristic solution methodology for our MDP, named partial reoptimization, based on the idea of restricting attention to a subset of all the possible states and computing an optimal policy on this restricted set of states. We discuss two families of computationally efficient partial reoptimization heuristics and illustrate their performance on a set of instances with up to and including 100 customers. Comparisons with an existing heuristic from the literature and a lower bound computed with complete knowledge of customer demands show that our best partial reoptimization heuristics outperform this heuristic and are on average no more than 10%--13% away from this lower bound, depending on the type of instances. <s> BIB022 </s> A review of dynamic vehicle routing problems <s> Dynamic vehicle routing problems 2.1 A general definition <s> When a public transit vehicle breaks down on a scheduled trip, one or more vehicles need to be rescheduled to serve that trip and other service trips originally scheduled for the disabled vehicle. In this paper, the vehicle rescheduling problem (VRSP) is investigated to consider operating costs, schedule disruption costs, and trip cancellation costs. The VRSP is proven to be NP-hard, and a Lagrangian relaxation based insertion heuristic is developed.
Extensive computational experiments on randomly generated problems are reported. The results show that the Lagrangian heuristic performs very well for solving the VRSP. <s> BIB023 </s> A review of dynamic vehicle routing problems <s> Dynamic vehicle routing problems 2.1 A general definition <s> This paper introduces and studies real-time vehicle rerouting problems with time windows, applicable to delivery and/or pickup services that undergo service disruptions due to vehicle breakdowns. In such problems, one or more vehicles need to be rerouted, in real-time, to perform uninitiated services, with the objective to minimize a weighted sum of operating, service cancellation and route disruption costs. A Lagrangian relaxation based-heuristic is developed, which includes an insertion based-algorithm to obtain a feasible solution for the primal problem. A dynamic programming based algorithm solves heuristically the shortest path problems with resource constraints that result from the Lagrangian relaxation. Computational experiments show that the developed Lagrangian heuristic performs very well. <s> BIB024 </s> A review of dynamic vehicle routing problems <s> Dynamic vehicle routing problems 2.1 A general definition <s> This study analyzes and solves a patient transportation problem arising in large hospitals. The aim is to provide an efficient and timely transport service to patients between several locations in a hospital campus. Transportation requests arrive in a dynamic fashion and the solution methodology must therefore be capable of quickly inserting new requests in the current vehicle routes. Contrary to standard dial-a-ride problems, the problem under study includes several complicating constraints which are specific to a hospital context. The study provides a detailed description of the problem and proposes a two-phase heuristic procedure capable of handling its many features. 
In the first phase a simple insertion scheme is used to generate a feasible solution, which is improved in the second phase with a tabu search algorithm. The heuristic procedure was extensively tested on real data provided by a German hospital. Results show that the algorithm is capable of handling the dynamic aspect of the problem and of providing high-quality solutions. In particular, it succeeded in reducing waiting times for patients while using fewer vehicles. <s> BIB025 </s> A review of dynamic vehicle routing problems <s> Dynamic vehicle routing problems 2.1 A general definition <s> The developments in mobile communication technologies are a strong motivation for the study of dynamic vehicle routing and scheduling problems. In particular, the planned routes can be quickly modified to account for the occurrence of new customer requests, which might imply diverting a vehicle away from its current destination. In this paper, a previously developed problem-solving approach for a vehicle routing problem with dynamic requests and dynamic travel times is extended to account for more sophisticated communication means between the drivers and the central dispatch office. Computational results are reported to empirically demonstrate the benefits of this extension. <s> BIB026 </s> A review of dynamic vehicle routing problems <s> Dynamic vehicle routing problems 2.1 A general definition <s> This paper describes a dynamic capacitated arc routing problem motivated from winter gritting applications. In this problem, the service cost on each arc is a piecewise linear function of the time of beginning of service. This function also exhibits an optimal time interval where the service cost is minimal. Since the timing of an intervention is crucial, the dynamic aspect considered in this work stems from changes to these optimal service time intervals due to weather report updates. 
A variable neighborhood descent heuristic, initially developed for the static version of the problem, where all service cost functions are known in advance and do not change thereafter, is adapted to this dynamic variant. <s> BIB027 </s> A review of dynamic vehicle routing problems <s> Dynamic vehicle routing problems 2.1 A general definition <s> This paper introduces a new class of problem, the disrupted vehicle routing problem (VRP), which deals with the disruptions that occur at the execution stage of a VRP plan. The paper then focuses on one type of such problem, in which a vehicle breaks down during the delivery and a new routing solution needs to be quickly generated to minimise the costs. Two Tabu Search algorithms are developed to solve the problem and are assessed in relation to an exact algorithm. A set of test problems has been generated and computational results from experiments using the heuristic algorithms are presented. <s> BIB028 </s> A review of dynamic vehicle routing problems <s> Dynamic vehicle routing problems 2.1 A general definition <s> In just-in-time (JIT) manufacturing environments, on-time delivery is a key performance measure for dispatching and routing of freight vehicles. Growing travel time delays and variability, attributable to increasing congestion in transportation networks, are greatly impacting the efficiency of JIT logistics operations. Recurrent and non-recurrent congestion are the two primary reasons for delivery delay and variability. Over 50% of all travel time delays are attributable to non-recurrent congestion sources such as incidents. Despite its importance, state-of-the-art dynamic routing algorithms assume away the effect of these incidents on travel time. In this study, we propose a stochastic dynamic programming formulation for dynamic routing of vehicles in non-stationary stochastic networks subject to both recurrent and non-recurrent congestion. 
We also propose alternative models to estimate incident induced delays that can be integrated with dynamic routing algorithms. Proposed dynamic routing models exploit real-time traffic information regarding speeds and incidents from Intelligent Transportation System (ITS) sources to improve delivery performance. Results are very promising when the algorithms are tested in a simulated network of South-East Michigan freeways using historical data from the MITS Center and Traffic.com. <s> BIB029
|
The first reference to a dynamic vehicle routing problem is due to Wilson and Colvin. They studied a single-vehicle DARP, in which customer requests are trips from an origin to a destination that appear dynamically. Their approach uses insertion heuristics able to perform well with low computational effort. Later, Psaraftis BIB001 introduced the concept of immediate request: a customer requesting service always wants to be serviced as early as possible, requiring immediate replanning of the current vehicle route. A number of technological advances have led to the multiplication of real-time routing applications. With the introduction of the Global Positioning System (GPS) in 1996, the development and widespread use of mobile and smart phones, combined with accurate Geographic Information Systems (GIS), companies are now able to track and manage their fleets in real time and cost effectively. While traditionally a two-step process (i.e., plan-execute), vehicle routing can now be done dynamically, introducing greater opportunities to reduce operational costs, improve customer service, and reduce environmental impact. The most common source of dynamism in vehicle routing is the online arrival of customer requests during the operation. More specifically, requests can be a demand for goods BIB005 BIB013 BIB014 BIB015 BIB006 BIB007 or services BIB025 BIB008 BIB002 BIB003 BIB017. Travel time, a dynamic component of most real-world applications, has recently been taken into account BIB018 BIB019 BIB016 BIB009 BIB029 BIB012 BIB026 BIB010 BIB027 BIB011 BIB020, while service time has not been explicitly studied (but can be added to travel time). Finally, some recent work considers dynamically revealed demands for a set of known customers BIB021 BIB004 BIB022 and vehicle availability BIB023 BIB024 BIB028, in which case the source of dynamism is the possible breakdown of vehicles.
In the following we use the prefix "D-" to label problems in which new requests appear dynamically. To better understand what we mean by dynamic, Figure 1 illustrates the route execution of a single-vehicle D-VRP. Before the vehicle leaves the depot (time t0), an initial route plans to visit the currently known requests (A, B, C, D, E). While the vehicle executes its route, two new requests (X and Y) appear at time t1 and the initial route is adjusted to fulfill them. Finally, at time tf the executed route is (A, B, C, D, Y, E, X). This example reveals how dynamic routing inherently adjusts routes in an ongoing fashion, which requires real-time communication between vehicles and the dispatching center. Figure 2 illustrates this real-time communication scheme, where the environment refers to the real world while the dispatcher is the agent that gives instructions to the vehicle. Once the vehicle is ready (first dotted arrow), the dispatcher makes a decision and instructs the vehicle to fulfill request A (first double-headed arrow). When the vehicle starts (second dotted arrow) and ends (third dotted arrow) service at request A, it notifies the dispatcher, which in turn updates the available information and communicates the next request to the vehicle (second double-headed arrow).
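The route adjustment illustrated above can be approximated with a simple cheapest-insertion rule: when a new request arrives, the dispatcher inserts it at the position of the not-yet-executed part of the route that causes the smallest detour. The sketch below is illustrative only, assuming Euclidean travel costs, made-up coordinates, and no time windows or capacities; none of the names are taken from the surveyed papers.

```python
import math

def dist(a, b):
    # Euclidean distance between two (x, y) points
    return math.hypot(a[0] - b[0], a[1] - b[1])

def insert_request(route, executed, new_stop):
    """Insert `new_stop` into the not-yet-visited part of `route`
    (positions >= `executed`, with executed >= 1 so the vehicle's
    current position route[executed - 1] is defined) at the position
    of minimal added detour, and return the updated route."""
    best_cost, best_i = float("inf"), None
    for i in range(executed, len(route) + 1):
        prev = route[i - 1]
        detour = dist(prev, new_stop)
        if i < len(route):  # inserting between two stops, not appending
            detour += dist(new_stop, route[i]) - dist(prev, route[i])
        if detour < best_cost:
            best_cost, best_i = detour, i
    return route[:best_i] + [new_stop] + route[best_i:]

# Depot plus planned requests A..E; the vehicle has already served A,
# so only positions from index 2 onward may be modified.
route = [(0.0, 0.0), (1.0, 2.0), (3.0, 2.0), (5.0, 3.0), (6.0, 1.0), (8.0, 2.0)]
route = insert_request(route, executed=2, new_stop=(4.0, 2.5))  # new request Y
```

Because the already-executed prefix is frozen, repeated calls to `insert_request` mimic the ongoing replanning of Figure 1 as requests X and Y arrive.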
|
A review of dynamic vehicle routing problems <s> Differences with static routing <s> Advances in communication, automatic vehicle location, and geographic information system technologies have made available several types of real-time information with benefits for commercial vehicle operations. Continuous updates on vehicle locations and demands create considerable potential for developing automated, real-time dispatching systems. The potential benefits of a diversion strategy in response to real-time information are explored under idealized conditions, and the technologies that are available for use in commercial vehicle operations and selected results derived from simulation are described. The results illustrate potential savings from simple diversion strategies under real-time information and highlight the need for methodological development to support improved truckload carrier operations decisions. <s> BIB001 </s> A review of dynamic vehicle routing problems <s> Differences with static routing <s> Although most real-world vehicle routing problems are dynamic, the traditional methodological arsenal for this class of problems has been based on adaptations of static algorithms. Still, some important new methodological approaches have recently emerged. In addition, computer-based technologies such as electronic data interchange (EDI), geographic information systems (GIS), global positioning systems (GPS), and intelligent vehicle-highway systems (IVHS) have significantly enhanced the possibilities for efficient dynamic routing and have opened interesting directions for new research. This paper examines the main issues in this rapidly growing area, and surveys recent results and other advances. The assessment of possible impact of new technologies and the distinction of dynamic problems vis-a-vis their static counterparts are given emphasis. 
<s> BIB002 </s> A review of dynamic vehicle routing problems <s> Differences with static routing <s> The application of intelligent transportation system technologies to freight mobility requires dynamic decision-making techniques for commercial fleet operations, using real-time information. Recognizing the productivity-enhancing operational changes possible using real-time information about vehicle locations and demands coupled with constant communication between dispatchers and drivers, a general carrier fleet management system is described. The system features dynamic dispatching, load acceptance, and pricing strategies. A simulation framework is developed to evaluate the performance of alternative load acceptance and assignment strategies using real-time information. Real-time decision making for fleet operations involves balancing a complicated set of often conflicting objectives. The simulation framework provides a means for exploring the trade-offs between these objectives. Results suggest that reductions in cost and improvements in service quality should result from the use of dynamic dispatching... <s> BIB003 </s> A review of dynamic vehicle routing problems <s> Differences with static routing <s> The problem of dynamic fleet management for truckload carrier fleet operations is introduced, and the principal elements of a simulation framework for the evaluation of dynamic fleet management systems are described. The application of the simulated framework to the investigation of the performance of a family of real-time fleet operational strategies, which include load acceptance, assignment, and reassignment strategies, also is described. The simulation framework described is an example of a first-generation tool for the evaluation of dynamic fleet management systems. Selected experimental results are highlighted. 
These are intended to illustrate some of the issues encountered in real-time fleet management and the role of the simulation modeling environment in investigating them. <s> BIB004 </s> A review of dynamic vehicle routing problems <s> Differences with static routing <s> An abundant literature about vehicle routing and scheduling problems is available in the scientific community. However, a large fraction of this work deals with static problems where all data are known before the routes are constructed. Recent technological advances now create environments where decisions are taken quickly, using new or updated information about the current routing situation. This paper describes such a dynamic problem, motivated from courier service applications, where customer requests with soft time windows must be dispatched in real time to a fleet of vehicles in movement. A tabu search heuristic, initially designed for the static version of the problem, has been adapted to the dynamic case and implemented on a parallel platform to increase the computational effort. Numerical results are reported using different request arrival rates, and comparisons are established with other heuristic methods. <s> BIB005 </s> A review of dynamic vehicle routing problems <s> Differences with static routing <s> Recent technological advances in communication systems now allow the exploitation of realtime information for dynamic vehicle routing and scheduling. It is possible, in particular, to consider diverting a vehicle away from its current destination in response to a new customer request. In this paper, a strategy for assigning customer requests, which includes diversion, is proposed, and various issues related to it are presented. An empirical evaluation of the proposed approach is performed within a previously reported tabu search heuristic. Simulations compare the tabu search heuristic, with and without the new strategy, on a dynamic problem motivated from a courier service application. 
The results demonstrate the potential savings that can be obtained through the application of the proposed approach. <s> BIB006 </s> A review of dynamic vehicle routing problems <s> Differences with static routing <s> In the Dial-a-Ride problem (DARP) users specify transportation requests between origins and destinations to be served by vehicles. In the dynamic DARP, requests are received throughout the day and the primary objective is to accept as many requests as possible while satisfying operational constraints. This article describes and compares a number of parallel implementations of a Tabu search heuristic previously developed for the static DARP, i.e., the variant of the problem where all requests are known in advance. Computational results show that the proposed algorithms are able to satisfy a high percentage of user requests. <s> BIB007 </s> A review of dynamic vehicle routing problems <s> Differences with static routing <s> An important, but seldom investigated, issue in the field of dynamic vehicle routing and dispatching is how to exploit information about future events to improve decision making. In this paper, we address this issue in a real-time setting with a strategy based on probabilistic knowledge about future request arrivals to better manage the fleet of vehicles. More precisely, the new strategy introduces dummy customers (representing forecasted requests) in vehicle routes to provide a good coverage of the territory. This strategy is assessed through computational experiments performed in a simulated environment. <s> BIB008 </s> A review of dynamic vehicle routing problems <s> Differences with static routing <s> Online decision making under uncertainty and time constraints represents one of the most challenging problems for robust intelligent agents. 
In an increasingly dynamic, interconnected, and real-time world, intelligent systems must adapt dynamically to uncertainties, update existing plans to accommodate new requests and events, and produce high-quality decisions under severe time constraints. Such online decision-making applications are becoming increasingly common: ambulance dispatching and emergency city-evacuation routing, for example, are inherently online decision-making problems; other applications include packet scheduling for Internet communications and reservation systems. This book presents a novel framework, online stochastic optimization, to address this challenge. This framework assumes that the distribution of future requests, or an approximation thereof, is available for sampling, as is the case in many applications that make either historical data or predictive models available. It assumes additionally that the distribution of future requests is independent of current decisions, which is also the case in a variety of applications and holds significant computational advantages. The book presents several online stochastic algorithms implementing the framework, provides performance guarantees, and demonstrates a variety of applications. It discusses how to relax some of the assumptions in using historical sampling and machine learning and analyzes different underlying algorithmic problems. And finally, the book discusses the framework's possible limitations and suggests directions for future research. <s> BIB009 </s> A review of dynamic vehicle routing problems <s> Differences with static routing <s> Recent availability of relatively cheap small jet aircraft creates opportunities for a new air transport business: Air taxi, an on-demand service in which travellers call in one or a few days in advance to book transportation. 
In this paper, we present a methodology and simulation study supporting important strategic decisions, like for instance determining the required number of aircraft, for a company planning to establish an air taxi service in Norway. The methodology is based on a module simulating incoming bookings, built around a heuristic for solving the underlying dial-a-flight problem. The heuristic includes a separate method for solving the important subproblem of determining the best distribution of waiting time along a single aircraft schedule. The methodology has proved to provide reliable decision support to the company. <s> BIB010 </s> A review of dynamic vehicle routing problems <s> Differences with static routing <s> When a public transit vehicle breaks down on a scheduled trip, one or more vehicles need to be rescheduled to serve that trip and other service trips originally scheduled for the disabled vehicle. In this paper, the vehicle rescheduling problem (VRSP) is investigated to consider operating costs, schedule disruption costs, and trip cancellation costs. The VRSP is proven to be NP-hard, and a Lagrangian relaxation based insertion heuristic is developed. Extensive computational experiments on randomly generated problems are reported. The results show that the Lagrangian heuristic performs very well for solving the VRSP. <s> BIB011 </s> A review of dynamic vehicle routing problems <s> Differences with static routing <s> The advance of communication and information technologies based on satellite and wireless networks has allowed transportation companies to benefit from real-time information for dynamic vehicle routing with time windows. During daily operations, we consider the case in which customers can place requests such that their demand and location are stochastic variables. The time windows at customer locations can be violated although lateness costs are incurred.
The objective is to define a set of vehicle routes which are dynamically updated to accommodate new customers in order to maximize the expected profit. This is the difference between the total revenue and the sum of lateness costs and costs associated with the total distance traveled. The solution approach makes use of a new constructive heuristic that scatters vehicles in the service area and an adaptive granular local search procedure. The strategies of letting a vehicle wait, positioning a vehicle in a region where customers are likely to appear, and diverting a vehicle away from its current destination are integrated within a granular local search heuristic. The performance of the proposed approach is assessed in test problems based on real-life Brazilian transportation companies. <s> BIB012
|
In contrast to their static counterparts, dynamic routing problems involve new elements that increase the complexity of the decisions (more degrees of freedom) and introduce new challenges when judging the merit of a given route plan. In some contexts, such as the pick-up of express courier shipments BIB005 , the transport company may deny a customer request: it can reject a request either because servicing it is simply impossible, or because the cost of serving it is too high. This process of acceptance/denial has been used in many approaches BIB007 BIB010 BIB005 BIB006 BIB008 BIB011 and is referred to as service guarantee BIB009 . In dynamic routing, the ability to divert a moving vehicle to a nearby new request allows for additional savings. Nevertheless, it requires real-time knowledge of the vehicle position and the ability to communicate quickly with drivers to assign them new destinations. Thus, this strategy has received limited interest, with the main contributions being the early work by Regan et al. BIB001 BIB004 BIB003 , the study of diversion issues by Ichoua et al. BIB006 , and the work by Branchini et al. BIB012 . Dynamic routing also frequently differs in the objective function BIB002 . In particular, while a common objective in the static context is the minimization of the routing cost, dynamic routing may introduce other notions such as service level, throughput (number of serviced requests), or revenue maximization. Having to answer dynamic customer requests also introduces the notion of response time: a customer might request to be serviced as soon as possible, in which case the main objective may become minimizing the delay between the arrival of a request and its service. Finally, dynamic routing problems require making decisions in an online manner, which often entails a trade-off between reactiveness and decision quality.
In other words, the time invested in searching for better decisions comes at the price of lower reactiveness to input changes. This aspect is of particular importance in contexts where customers call for a service and a good decision must be made as quickly as possible.
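The acceptance/denial decision and the response-time concern discussed above are often supported by a fast feasibility check: the new request is tentatively inserted at its cheapest position in the current routes, and denied if the resulting detour is too expensive. The following sketch is illustrative only (planar coordinates, Euclidean costs, and a hypothetical detour threshold; it is not taken from any of the cited approaches):

```python
import math

def dist(a, b):
    """Euclidean distance between two planar points."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def cheapest_insertion_cost(route, request):
    """Detour incurred by inserting `request` at its best position
    in `route` (a list of points, starting and ending at the depot)."""
    best = float("inf")
    for i in range(len(route) - 1):
        detour = (dist(route[i], request) + dist(request, route[i + 1])
                  - dist(route[i], route[i + 1]))
        best = min(best, detour)
    return best

def accept(routes, request, max_detour):
    """Service guarantee: deny the request if no route can absorb it
    within the allowed extra cost."""
    return min(cheapest_insertion_cost(r, request) for r in routes) <= max_detour

# One vehicle with a single planned stop at (10, 0).
routes = [[(0, 0), (10, 0), (0, 0)]]
print(accept(routes, (5, 1), max_detour=2.0))   # near the route: accepted
print(accept(routes, (5, 9), max_detour=2.0))   # far from the route: denied
```

In practice the same insertion cost can also drive the choice among vehicles, and the threshold can encode a profitability or service-level criterion rather than a fixed distance.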
|
A review of dynamic vehicle routing problems <s> Measuring dynamism <s> Although most real-world vehicle routing problems are dynamic, the traditional methodological arsenal for this class of problems has been based on adaptations of static algorithms. Still, some important new methodological approaches have recently emerged. In addition, computer-based technologies such as electronic data interchange (EDI), geographic information systems (GIS), global positioning systems (GPS), and intelligent vehicle-highway systems (IVHS) have significantly enhanced the possibilities for efficient dynamic routing and have opened interesting directions for new research. This paper examines the main issues in this rapidly growing area, and surveys recent results and other advances. The assessment of possible impact of new technologies and the distinction of dynamic problems vis-a-vis their static counterparts are given emphasis. <s> BIB001 </s> A review of dynamic vehicle routing problems <s> Measuring dynamism <s> Novel 1-aryl-2-(1-imidazolyl)alkyl ethers and thioethers having anti-fungal properties are disclosed. <s> BIB002 </s> A review of dynamic vehicle routing problems <s> Measuring dynamism <s> In this paper we propose a framework for dynamic routing systems based on their degree of dynamism. Next, we consider its impact on solution methodology and quality. Specifically, we introduce the Partially Dynamic Travelling Repairman Problem and describe several dynamic policies to minimize routing costs. The results of our computational study indicate that increasing the dynamic level results in a linear increase in route length for all policies studied. Furthermore, a Nearest Neighbour policy performed, on the average, uniformly better than the other dispatching rules studied. Among these, a Partitioning policy produced only slightly higher average route lengths. 
<s> BIB003 </s> A review of dynamic vehicle routing problems <s> Measuring dynamism <s> This paper reviews and classifies the work done in the field of dynamic vehicle routing. We focus, in particular, on problems where the uncertainty comes from the occurrence of new requests. Problem-solving approaches are investigated in contexts where consolidation of multiple requests onto the same vehicle is allowed and addressed through the design of planned routes. Starting with pure myopic approaches, we then review in later sections the issues of diversion and anticipation of future requests <s> BIB004 </s> A review of dynamic vehicle routing problems <s> Measuring dynamism <s> This chapter discusses important characteristics seen within dynamic vehicle routing problems. We discuss the differences between the traditional static vehicle routing problems and its dynamic counterparts. We give an in-depth introduction to the degree of dynamism measure which can be used to classify dynamic vehicle routing systems. Methods for evaluation of the performance of algorithms that solve on-line routing problems are discussed and we list some of the most important issues to include in the system objective. Finally, we provide a three-echelon classification of dynamic vehicle routing systems based on their degree of dynamism and the system objective <s> BIB005
|
Different problems (or instances of the same problem) can have different levels of dynamism, which can be characterized along two dimensions BIB004 : the frequency of changes and the urgency of requests. The former is the rate at which new information becomes available, while the latter is the time gap between the disclosure of a new request and its expected service time. From this observation, three metrics have been proposed to measure the dynamism of a problem (or instance). Lund et al. BIB002 defined the degree of dynamism δ as the ratio between the number of dynamic requests n_d and the total number of requests n_tot, that is, δ = n_d / n_tot. Based on the fact that the disclosure time of requests is also important BIB001 , Larsen proposed the effective degree of dynamism δ_e. This metric can be interpreted as the normalized average of the disclosure times. Let T be the length of the planning horizon, R the set of requests, and t_i the disclosure time of request i ∈ R. Assuming that requests known beforehand have a disclosure time equal to 0, δ_e can be expressed as δ_e = (1/n_tot) Σ_{i∈R} t_i/T. Larsen also extended the effective degree of dynamism to problems with time windows to reflect the level of urgency of requests. He defines the reaction time as the difference between the end of the corresponding time window l_i and the disclosure time t_i, highlighting that longer reaction times mean more flexibility to insert the request into the current routes. Thus, the effective degree of dynamism is extended as δ_e^TW = (1/n_tot) Σ_{i∈R} (1 − (l_i − t_i)/T). It is worth noting that these three metrics only take values in the interval [0, 1] and all increase with the level of dynamism of a problem. Larsen et al. BIB003 BIB005 use the effective degree of dynamism to define a framework classifying D-VRPs as weakly, moderately, or strongly dynamic problems, with values of δ_e respectively lower than 0.3, between 0.3 and 0.8, and higher than 0.8.
Although the effective degree of dynamism and its variations have proven to capture the time-related aspects of dynamism well, it can be argued that they do not take into account other possible sources of dynamism. In particular, the geographical distribution of requests and the traveling times between requests are also of great importance in applications aiming at minimizing response time. Similarly, although not captured by these metrics, the frequency of updates in problem information has a dramatic impact on the time available for optimization.
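The three dynamism metrics are straightforward to compute from instance data; the sketch below (the data layout and function names are ours) illustrates them for a small example with horizon T and requests given by their disclosure times t_i and time-window ends l_i:

```python
def degree_of_dynamism(requests):
    """Lund et al.: fraction of requests that are dynamic (t > 0)."""
    n_dyn = sum(1 for r in requests if r["t"] > 0)
    return n_dyn / len(requests)

def effective_dod(requests, T):
    """Larsen: normalized average of the disclosure times."""
    return sum(r["t"] / T for r in requests) / len(requests)

def effective_dod_tw(requests, T):
    """Larsen's time-window extension: short reaction times (l - t)
    make an instance more dynamic."""
    return sum(1 - (r["l"] - r["t"]) / T for r in requests) / len(requests)

# Horizon T = 100; the first request is known beforehand (t = 0).
reqs = [{"t": 0, "l": 100}, {"t": 20, "l": 60}, {"t": 80, "l": 90}]
print(degree_of_dynamism(reqs))     # 2 of 3 requests are dynamic
print(effective_dod(reqs, 100))     # (0 + 0.2 + 0.8) / 3
print(effective_dod_tw(reqs, 100))  # (0 + 0.6 + 0.9) / 3
```

All three values lie in [0, 1]; in this toy instance δ ≈ 0.67, δ_e ≈ 0.33 and δ_e^TW = 0.5, so the instance would fall in Larsen's "moderately dynamic" class.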
|
A review of dynamic vehicle routing problems <s> A review of applications <s> Real-time decision problems are playing an increasingly important role in the economy due to advances in communication and information technologies that now allow realtime information to be quickly obtained and processed (Seguin et al., 1997). Among these, dynamic vehicle routing and dispatching problems have emerged as an intense area of research in the operations research community. Numerous examples may be found in Haines and Wolfe (1982), Powell, Jaillet and Odoni (1995) and Psaraftis (1995). In these problems, a set of vehicles is routed over a particular time horizon (typically, a day) while new service requests are occuring in real-time. With each new request, the current solution may be reconfigured to better service the new request, as well as those already assigned to a route. <s> BIB001 </s> A review of dynamic vehicle routing problems <s> A review of applications <s> In a companion paper (Godfrey and Powell 2002) we introduced an adaptive dynamic programming algorithm for stochastic dynamic resource allocation problems, which arise in the context of logistics and distribution, fleet management, and other allocation problems. The method depends on estimating separable nonlinear approximations of value functions, using a dynamic programming framework. That paper considered only the case in which the time to complete an action was always a single time period. Experiments with this technique quickly showed that when the basic algorithm was applied to problems with multiperiod travel times, the results were very poor. In this paper, we illustrate why this behavior arose, and propose a modified algorithm that addresses the issue. Experimental work demonstrates that the modified algorithm works on problems with multiperiod travel times, with results that are almost as good as the original algorithm applied to single period travel times. 
<s> BIB002 </s> A review of dynamic vehicle routing problems <s> A review of applications <s> In an e-commerce environment, order fulfilment is driven by customer demands and expectations. A dynamic vehicle routing and scheduling system may be specified which allows e-commerce customers to select their own delivery Time Windows and have these confirmed on-line as they place their order. The methodology is based upon demand forecasting, which leads to the generation of phantom orders and phantom routes. Subsequently, actual orders substitute for phantom orders in an on-line customer order process. The routing and scheduling method includes using both parallel tour-building and parallel insertion algorithms. Customer service levels are confirmed using GPS tracking and tracing, and a feedback loop uses expert systems or artificial intelligence as an input to the demand forecasting data to restart the whole process. <s> BIB003 </s> A review of dynamic vehicle routing problems <s> A review of applications <s> In this chapter we describe an innovative real-time fleet management system designed and implemented for eCourier Ltd (London, UK) for which patents are pending in the United States and elsewhere. This paper describes both the business challenges and benefits of the implementation of a real-time fleet management system (with reference to empirical metrics such as courier efficiency, service times, and financial data), as well as the theoretical and implementation challenges of constructing such a system. In short, the system dramatically reduces the requirements of human supervisors for fleet management, improves service and increases courier efficiency.
We first illustrate the overall architecture, then depict the main algorithms, including the service territory zoning methodology, the travel time forecasting procedure and the job allocation heuristic <s> BIB004 </s> A review of dynamic vehicle routing problems <s> A review of applications <s> This study investigates the parameter settings of a real-time vehicle-dispatching system for consolidating milk runs. Seven modules are used to implement the real-time system and the parameters are determined by a comprehensive experimental design. The real-time vehicle-dispatching system will be demonstrated in different milk run scenarios. The results of the experiments suggest that the system should have an initial vehicle dispatching module and an inter-route improvement module. It is also recommended that the Best Fit algorithm for initial vehicle dispatch and the 2-Exchange algorithm for inter-route improvement are most suitable for the real-time system. <s> BIB005 </s> A review of dynamic vehicle routing problems <s> A review of applications <s> This paper reviews and classifies the work done in the field of dynamic vehicle routing. We focus, in particular, on problems where the uncertainty comes from the occurrence of new requests. Problem-solving approaches are investigated in contexts where consolidation of multiple requests onto the same vehicle is allowed and addressed through the design of planned routes. Starting with pure myopic approaches, we then review in later sections the issues of diversion and anticipation of future requests <s> BIB006 </s> A review of dynamic vehicle routing problems <s> A review of applications <s> While it is certainly too early to make a definitive assessment of the effectiveness of Intelligent Transportation Systems (ITS), it is not too early to take stock of what has been achieved and to think about what could be achieved in the near future.
In our opinion, ITS developments have been up to now largely hardware-driven and have led to the introduction of many sophisticated technologies in the transportation arena, while the development of the software component of ITS, models and decision-support systems in particular, is lagging behind. To reach the full potential of ITS, one must thus address the challenge of making the most intelligent usage possible of the hardware that is being deployed and the huge wealth of data it provides. We believe that transportation planning and management disciplines, operations research in particular, have a key role to play with respect to this challenge. The paper focuses on Freight ITS: Commercial Vehicle Operations and Advanced Fleet Management Systems, City Logistics, and electronic business. The paper reviews main issues, technological challenges, and achievements, and illustrates how the introduction of better operations research-based decision-support software could very significantly improve the ultimate performance of Freight ITS. <s> BIB007
|
Recent advances in technology have allowed the emergence of a wide range of new applications for vehicle routing. In particular, the last decade has seen the development of Intelligent Transport Systems (ITS), which are based on a combination of geolocation technologies, precise geographic information systems, and increasingly efficient hardware and software for data processing and operations planning. We refer the interested reader to the study by Crainic et al. BIB007 for more details on ITS and the contributions of operations research to this relatively new domain. Among ITS, Advanced Fleet Management Systems (AFMS) are specifically designed for managing a corporate vehicle fleet. The core problem is generally to deliver (pick up) goods or persons to (from) locations distributed in a given area. While customer requests can either be known in advance or appear dynamically during the day, vehicles are dispatched and routed in real time, potentially taking into account changing traffic conditions, uncertain demands, or varying service times. A key technological feature of AFMS is the optimization component. Traditionally, vehicle routing relies on teams of human dispatchers, meaning that a critical operational process is bound to the competence and experience of dispatchers, with management costs directly linked to the size of the fleet BIB004 . Advances in computer science have allowed a technological transfer from operations research to AFMS, as presented in the studies by Attanasio et al. BIB004 , Du et al. BIB005 , Godfrey and Powell BIB002 , Powell and Topaloglu , Roy , Simao et al. , and Slater BIB003 . The remainder of this section presents applications where dynamic routing has been or can be implemented. The interested reader is also referred to the work by Gendreau and Potvin BIB001 and Ichoua et al. BIB006 for complementary reviews.
|
A review of dynamic vehicle routing problems <s> Services <s> This problem is based on the British Telecom workforce scheduling problem, in which technicians (with different skills) are assigned to tasks (which require different skills) which arrive (partially) dynamically during the day. In order to manage their workforce, British Telecom divides the different regions into several areas. At the beginning of each day all the technicians in a region are assigned to one of these areas. During the day, each technician is limited to tasks within the assigned area. ::: ::: This effectively decomposes a large dynamic scheduling problem into smaller problems. On one hand, it makes the problem more manageable. On the other hand, it gives rise to, potentially, a mismatch between technicians and tasks within an area. Furthermore, it prevents technicians from being assigned a job which is just outside their area but happens to be close to where they are currently working. ::: ::: This paper studies the effect of the number of partitions on the expected objective (number of completed tasks) that a rule-based system (responsible for the dynamic assignment and reassignment of tasks to resources following dynamic events) can reach. <s> BIB001 </s> A review of dynamic vehicle routing problems <s> Services <s> This paper describes a dynamic capacitated arc routing problem motivated from winter gritting applications. In this problem, the service cost on each arc is a piecewise linear function of the time of beginning of service. This function also exhibits an optimal time interval where the service cost is minimal. Since the timing of an intervention is crucial, the dynamic aspect considered in this work stems from changes to these optimal service time intervals due to weather report updates. 
A variable neighborhood descent heuristic, initially developed for the static version of the problem, where all service cost functions are known in advance and do not change thereafter, is adapted to this dynamic variant. <s> BIB002
|
In this category of applications, a service request is defined by a customer location and a possible time window, while vehicle routes simply fulfill service requests without side constraints such as capacity. Perhaps the simplest, yet most illustrative, case in this category is the dynamic traveling salesman problem. A common application of dynamic routing can be found in the area of maintenance operations. Maintenance companies are often bound by contracts with their customers that specify periodical or planned visits for preventive maintenance, while customers may also request corrective maintenance on short notice. Therefore, each technician is first given a route with the known requests at the beginning of the day, while new urgent requests are inserted dynamically throughout the day. An interesting feature of this problem is the possible mix of skill, tool, and spare-part requirements, which have to be matched in order to service a request. This problem has been studied by Borenstein et al. BIB001 with an application to British Telecom. Another application of dynamic routing arises in the context of the French non-profit organization SOS Médecins. This organization operates with a crew of physicians, who are called on duty via a call center coordinated with other emergency services. When a patient calls, the severity of the case is evaluated, and a visit by a practitioner is planned accordingly. As in other emergency services, an efficient dispatching system reduces the response time, thus improving the service level for society. On the other hand, it is important to decide in real time whether or not to send a physician, so that a proper service level can be ensured in areas where emergencies are likely to appear. Dynamic aspects can also appear in arc routing problems. This is for instance the case in the study by Tagmouti et al. BIB002 on the operation of a fleet of vehicles for winter gritting applications.
Their work considers a network of streets or road segments that need to be gritted when affected by a moving storm. Depending on the movements of the storm, new segments may have to be gritted, and the routing of the vehicles has to be updated accordingly.
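The skill-matching feature mentioned above for maintenance operations can be illustrated by a toy dispatching rule that assigns an urgent request to the nearest technician whose skill set covers the request's requirements. All names and data structures here are hypothetical, and positions are reduced to one dimension for brevity:

```python
def dispatch(technicians, request):
    """Assign the urgent request to the nearest technician whose
    skills cover its requirements; return None if nobody qualifies."""
    compatible = [t for t in technicians if request["skills"] <= t["skills"]]
    if not compatible:
        return None  # the request must be denied or outsourced
    return min(compatible, key=lambda t: abs(t["pos"] - request["pos"]))["name"]

techs = [
    {"name": "A", "pos": 2.0, "skills": {"electrical"}},
    {"name": "B", "pos": 9.0, "skills": {"electrical", "telecom"}},
]
print(dispatch(techs, {"pos": 3.0, "skills": {"telecom"}}))     # only B qualifies
print(dispatch(techs, {"pos": 3.0, "skills": {"electrical"}}))  # A is closer
```

A real system would of course replace the distance term by the insertion cost into each technician's planned route and account for tools and spare parts as well.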
|
A review of dynamic vehicle routing problems <s> Transport of goods <s> Urban freight systems are experiencing many problems due to higher levels of service and lower costs being demanded by shippers, with carriers having to operate in increasingly congested road conditions. Trucks operating in urban areas produce many negative impacts for society in terms of emissions, crashes, noise, and vibration. City logistics aims to globally optimize urban freight systems by considering the costs and benefits of schemes to the public as well as the private sector. The concepts of city logistics are introduced, and an outline is presented of some models that have recently been developed to predict the consequences of intelligent transportation systems. In particular, a stochastic vehicle routing and scheduling procedure that incorporates the variation of travel times is described. Results indicate that this approach can lead to significant reduction in operating costs by carriers as well as shorter routes with fewer trucks and increased reliability for customers. This procedure also reduces emissions and fuel consumption. <s> BIB001 </s> A review of dynamic vehicle routing problems <s> Transport of goods <s> Many companies with consumer direct service models, especially grocery delivery services, have found that home delivery poses an enormous logistical challenge due to the unpredictability of demand coupled with strict delivery windows and low profit margin products. These systems have proven difficult to manage effectively and could benefit from new technology, particularly to manage the interaction between order capture and order delivery. In this article, we define routing and scheduling problems that incorporate important features of this emerging business model and propose algorithms, based on insertion heuristics, for their solution. 
In the proposed home delivery problem, the company decides which deliveries to accept or reject as well as the time slot for the accepted deliveries so as to maximize expected profits. Computational experiments reveal the importance of an approach that integrates order capture with order delivery and demonstrate the quality and value of the proposed algorithms. <s> BIB002 </s> A review of dynamic vehicle routing problems <s> Transport of goods <s> This paper proposes neighborhood search heuristics to optimize the planned routes of vehicles in a context where new requests, with a pick-up and a delivery location, occur in real-time. Within this framework, new solutions are explored through a neighborhood structure based on ejection chains. Numerical results show the benefits of these procedures in a real-time context. The impact of a master–slave parallelization scheme, using an increasing number of processors, is also investigated. <s> BIB003 </s> A review of dynamic vehicle routing problems <s> Transport of goods <s> The distribution of goods based on road services in urban areas, usually known as City Logistics, contributes to and is affected by traffic congestion, generates environmental impacts, and incurs high logistics costs. Therefore, a holistic approach to the design and evaluation of City Logistics applications requires an integrated framework in which all components can work together; that is, it must be modelled not only in terms of the core models for vehicle routing and fleet management, but also in terms of models able to include the dynamic aspects of traffic on the underlying road network, especially if Information and Communication Technologies (ICT) applications are taken into account. This paper reports on the modelling framework developed in the national projects SADERYL-I and II, sponsored by the Spanish “Direccion General de Ciencia y Tecnologia” (DGCYT) and tested in the European Project MEROPE of the INTERREG IIIB Programme.
The modelling framework consists of a Decision Support System whose core architecture is composed of a Data Base, to store all the data required by the implied models: location of logistic centres and customers, capacities of warehouses and depots, transportation costs, operational costs, fleet data, etc.; a Database Management System, for the updating of the information stored in the data base; a Model Base, containing the family of models and algorithms to solve the related problems (discrete location, network location, street vehicle routing and scheduling); a Model Base Management System, to update, modify, add or delete models from the Model Base; and a GIS-based Graphical User Interface supporting the dialogues to define and update data, select the model suitable to the intended problem, generate automatically from the digital map of the road network the input graph for the Network Location and Vehicle Routing models, apply the corresponding algorithm, visualize the problem and the results, etc. To account for the dynamics of urban traffic flows, the system includes an underlying dynamic traffic simulation model (AIMSUN in this case) which is able to track the fleet vehicles individually, emulating in this way the monitoring of fleet vehicles in a real-time fleet management system, gathering dynamic data (i.e. current position, previous position, current speed, previous speed, etc.) while following the vehicle, similar to the data that an equipped vehicle could provide in real life. This is the information required by a “Dynamic Router and Scheduler” to determine which vehicle will be assigned to the new service and which will be the new route for the selected vehicle <s> BIB004 </s> A review of dynamic vehicle routing problems <s> Transport of goods <s> Distribution schedules designed a priori may not cope adequately with unexpected events that occur during the plan execution, such as adverse traffic conditions or vehicle failures.
This limitation may lead to delays, higher costs, and inferior customer service. This chapter presents the design and implementation of a real-time fleet management system that handles such unexpected events during urban freight distribution. The system monitors delivery vehicles, detects deviations from the distribution plan using dynamic travel time prediction, and adjusts the schedule accordingly by suggesting effective rerouting interventions. The system has been tested in a Greek 3PL operator and the results show significant improvements in customer service <s> BIB005 </s> A review of dynamic vehicle routing problems <s> Transport of goods <s> In this chapter we describe an innovative real-time fleet management system designed and implemented for eCourier Ltd (London, UK) for which patents are pending in the United States and elsewhere. This paper describes both the business challenges and benefits of the implementation of a real-time fleet management system (with reference to empirical metrics such as courier efficiency, service times, and financial data), as well as the theoretical and implementation challenges of constructing such a system. In short, the system dramatically reduces the requirements of human supervisors for fleet management, improves service and increases courier efficiency. We first illustrate the overall architecture, then depict the main algorithms, including the service territory zoning methodology, the travel time forecasting procedure and the job allocation heuristic <s> BIB006 </s> A review of dynamic vehicle routing problems <s> Transport of goods <s> The current decade sees a considerable growth in worldwide container transportation and with it an indispensable need for optimization. Also the interest in and availability of academic literatures as well as case reports are almost exploding. 
With this paper, an earlier survey, which proved to be of utmost importance for the community, is updated and extended to provide the current state of the art in container terminal operations and operations research. <s> BIB007 </s> A review of dynamic vehicle routing problems <s> Transport of goods <s> This paper describes anticipatory algorithms for the dynamic vehicle dispatching problem with pickups and deliveries, a problem faced by local area courier companies. These algorithms evaluate alternative solutions through short-term demand sampling and a fully sequential procedure for indifference zone selection. They also exploit a unified and integrated approach in order to address all the issues involved in real-time fleet management, namely assigning requests to vehicles, routing the vehicles, scheduling the routes and relocating idle vehicles. Computational results show that the anticipatory algorithms provide consistently better solutions than their reactive counterparts. <s> BIB008 </s> A review of dynamic vehicle routing problems <s> Transport of goods <s> On-line routing is concerned with building vehicle routes in an on-going fashion in such a way that customer requests arriving dynamically in time are efficiently and effectively served. An indispensable prerequisite for applying on-line routing methods is mobile communication technology. Additionally, it is of utmost importance that the employed communication system is suitably integrated with the firm’s enterprise application system and business processes. On the basis of a case study, we describe in this paper a system that is cheap and easy to implement due to the use of simple mobile phones. Additionally, we address the question of how on-line routing methods can be integrated into this system. <s> BIB009 </s> A review of dynamic vehicle routing problems <s> Transport of goods <s> The CALAS project consists of a laser measurement system allowing straddle carriers to be precisely localized in a container terminal.
The information given by such a tool makes optimization possible. Indeed, a container terminal is an open system subject to dynamics, in which many events can occur, concerning, among others, container arrivals and departures. Within the terminal, straddle carriers are trucks able to carry one container at a time in order to move it through the terminal. We aim to optimize the straddle carrier handling in order to improve the terminal management. Moreover, missions come into the system in an unpredictable way, and straddle carriers are handled by humans, who can choose to follow the schedule or not. For these reasons, the exact state of the system is unknown. The optimization process that we try to build must be fail-safe and adaptive. In this context, we propose an approach using a meta-heuristic based on Ant Colony Optimization to solve the problem of assigning missions to straddle carriers. We built a simulator able to test and compare different scheduling policies. <s> BIB010 </s> A review of dynamic vehicle routing problems <s> Transport of goods <s> In the last decade, there has been an increasing body of research in dynamic vehicle routing problems. This article surveys the subclass of those problems called dynamic pickup and delivery problems, in which objects or people have to be collected and delivered in real-time. It discusses some general issues as well as solution strategies. <s> BIB011 </s> A review of dynamic vehicle routing problems <s> Transport of goods <s> Objective: The aim of this study was to develop an algorithm for scheduling pick-up and delivery tasks in hospitals. The number of jobs and the dynamic nature of the problem, with jobs arriving over time, make the use of information technology indispensable. Optimized scheduling of all types of transportation tasks occurring in a hospital accelerates medical procedures and reduces the patient's waiting time and costs.
Methods: In the design of the algorithm we use techniques from classical scheduling theory. In addition, due to some special properties and constraints, we model the problem using methods from graph theory. The resulting algorithm combines both approaches in a transparent manner. Conclusions: To optimize the schedules, we define the average weighted flow time as an objective function that corresponds to a measure for the task throughput. An evaluation of the algorithm at the Natters State Hospital in Austria shows that it has superior performance compared to the current scheduling mechanism. <s> BIB012 </s> A review of dynamic vehicle routing problems <s> Transport of goods <s> This paper presents a dynamic routing method for supervisory control of multiple automated guided vehicles (AGVs) that are traveling within a layout of a given warehouse. In dynamic routing a calculated path particularly depends on the number of currently active AGVs' missions and their priorities. In order to solve the shortest path problem dynamically, the proposed routing method uses time windows in a vector form. For each mission requested by the supervisor, predefined candidate paths are checked if they are feasible. The feasibility of a particular path is evaluated by insertion of appropriate time windows and by performing the windows overlapping tests. The use of time windows makes the algorithm apt for other scheduling and routing problems. Presented simulation results demonstrate efficiency of the proposed dynamic routing. The proposed method has been successfully implemented in the industrial environment in a form of a multiple AGV control system. <s> BIB013 </s> A review of dynamic vehicle routing problems <s> Transport of goods <s> This paper considers a vehicle routing problem where each vehicle performs delivery operations over multiple routes during its workday and where new customer requests occur dynamically.
The proposed methodology for addressing the problem is based on an adaptive large neighborhood search heuristic, previously developed for the static version of the problem. In the dynamic case, multiple possible scenarios for the occurrence of future requests are considered to decide about the opportunity to include a new request into the current solution. It is worth noting that the real-time decision is about the acceptance of the new request, not about its service which can only take place in some future routes (a delivery route being closed as soon as a vehicle departs from the depot). In the computational results, a comparison is provided with a myopic approach which does not consider scenarios of future requests. <s> BIB014
|
Because urban areas are often characterized by highly variable travel times, the transport of goods in such areas has led to the definition of a specific category of applications known as city logistics. City logistics can be defined as an integrated vision of transport activities in urban areas, taking into account factors such as traffic and competition or cooperation between transport companies BIB001 . Barcelo et al. BIB004 developed a general framework for city logistics applications. They describe the different modules, ranging from modeling the city road network and acquiring real-time traffic data to the dynamic routing of a fleet of vehicles. Zeimpekis et al. BIB005 proposed a Decision Support System (DSS) for city logistics which takes into account dynamic travel and service times. A typical application in city logistics is the courier service present in most urban areas. Couriers are dispatched to customer locations to collect packages, and either deliver them to their destination (short haul) or to a unique depot (long haul). Depending on the level of service paid for by the customer, couriers may consolidate pick-ups from various customers, or provide an expedited service. Companies offering courier services often have a heterogeneous fleet composed of bicycles, motorbikes, cars, and small vans. The problem is then to dynamically route couriers, taking into account not only the known requests, their type, pick-up and delivery locations, and time windows, but also traffic conditions and varying travel times. A case study by Attanasio et al. BIB006 outlines the benefits of using an optimization-enabled AFMS at eCourier Ltd, a London-based company offering courier services. The authors show that, aside from the improvements in service quality, response time, and courier efficiency, the use of an automated system allows decoupling the fleet size from the need for more dispatchers.
Further results motivated by a similar application can be found in Gendreau et al. BIB003 and Ghiani et al. BIB008 . The delivery of newspapers and magazines is a domain in which customer satisfaction is of prime importance. When a magazine or newspaper is not delivered, a subscriber contacts a call center and is offered the choice between a voucher and a future delivery. In the latter case, the request is forwarded to the delivery company, which assigns it to a driver who will make a priority delivery. Traditionally, this process relies on an exchange of phone calls, faxes, and printed documents that ultimately inform the driver of the pending delivery once he/she returns to the depot. As an alternative, Bieding et al. BIB009 propose a centralized application that makes use of mobile phones to communicate with drivers and intelligently perform the routing in real time, reducing costs and improving customer satisfaction. More recently, Ferrucci et al. developed an approach that makes use of historical data to anticipate future requests. Another application in which customer requests need to be answered within short delays can be found in companies with a direct service model, such as grocery delivery services. In general, the customer selects products on a website, and then chooses a time frame for the delivery at their home. Traditionally, the vendor defines an arbitrary number of customers that can be serviced within a time window, and the time window is made unavailable to customers as soon as this capacity is reached. Campbell and Savelsbergh BIB002 defined the Home Delivery Problem, in which the goal is to maximize the total expected revenue by dynamically deciding whether or not to accept a customer request within a specific time window. In comparison with the traditional approach, this means that the time windows available to a customer are dynamically defined, taking possible future requests into consideration.
The authors propose a Greedy Randomized Adaptive Search Procedure (GRASP) and compare different cost functions to capture the problem uncertainty. Later, Azi et al. BIB014 proposed an Adaptive Large Neighborhood Search (ALNS) that takes uncertainty into account by generating scenarios containing possible demand realizations. Apart from classical routing problems, related operational problems also arise in many organizations. The review by Stahlbock and Voss BIB007 on operations research applications in container terminals describes the dynamic stacker crane problem BIB010 BIB011 , which considers the routing of container carriers loading and unloading ships in a terminal. Other applications include the transport of goods inside warehouses BIB013 , factories, and hospitals, where documents or expensive medical instruments must be transferred efficiently between services BIB012 .
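The acceptance decision at the heart of the Home Delivery Problem can be illustrated with a minimal sketch. This is our own simplification, not the method of the cited papers: locations are one-dimensional, travel time equals distance, and revenue is reduced to feasibility. A new request is accepted only if it can be inserted into the current route without violating any time window, keeping the cheapest feasible insertion.

```python
# Hedged sketch: dynamic request acceptance by cheapest feasible insertion.
# A stop is (location, earliest, latest); locations are 1-D and travel
# time equals distance, which is a deliberate simplification.

def travel(a, b):
    return abs(a - b)

def route_feasible(stops):
    """Can the route respect every time window, waiting when early?"""
    t, prev = 0.0, 0  # depot at location 0, departure at time 0
    for loc, earliest, latest in stops:
        t = max(t + travel(prev, loc), earliest)  # wait if early
        if t > latest:
            return False
        prev = loc
    return True

def route_cost(stops):
    """Total distance of the route, starting from the depot."""
    locs = [0] + [s[0] for s in stops]
    return sum(travel(a, b) for a, b in zip(locs, locs[1:]))

def try_accept(route, request):
    """Return the cheapest feasible route including the new request,
    or None if the request must be rejected."""
    best = None
    for i in range(len(route) + 1):
        cand = route[:i] + [request] + route[i:]
        if route_feasible(cand):
            c = route_cost(cand)
            if best is None or c < best[0]:
                best = (c, cand)
    return best[1] if best else None
```

The approaches surveyed above go further: they weigh a request's revenue against the opportunity cost of displacing possible future requests, typically estimated by sampling demand scenarios.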
|
A review of dynamic vehicle routing problems <s> Transport of persons <s> Abstract This paper introduces a modernized version of many-to-few dial-a-ride called autonomous dial-a-ride transit (ADART), which employs fully-automated order-entry and routing-and-scheduling systems that reside exclusively on board the vehicle. Here, “fully automated” means that under normal operation, the customer is the only human involved in the entire process of requesting a ride, assigning trips, scheduling arrivals and routing the vehicle. There are no telephone operators to receive calls, nor any central dispatchers to assign trips to vehicles, nor any human planning a route. The vehicles' computers assign trip demands and plan routes optimally among themselves, and the drivers' only job is to obey instructions from their vehicle's computer. Consequently, an ADART vehicle fleet covers a large service area without any centralized supervision. In effect, the vehicles behave like a swarm of ants accomplishing their chore without anyone in charge. <s> BIB001 </s> A review of dynamic vehicle routing problems <s> Transport of persons <s> Abstract This paper describes a software system designed to manage the deployment of a fleet of demand-responsive passenger vehicles such as taxis or variably routed buses. Multiple modes of operation are supported both for the fleet and for individual vehicles. Booking requests can be immediate (i.e. with zero notice) or in advance of travel. An initial implementation is chosen for each incoming request, subject to time-window and other constraints, and with an objective of minimising additional travel time or maximising a surrogate for future fleet capacity. This incremental insertion scheme is supplemented by post-insert improvement procedures, a periodically executed steepest-descent improvement procedure applied to the fleet as a whole, and a “rank-homing” heuristic incorporating information about future patterns of demand. 
A simple objective for trip-insertion and other scheduling operations is based on localised minimisation of travel time, while an alternative incorporating occupancy ratios has a more strategic orientation. Apart from its scheduling functions, the system includes automated vehicle dispatching procedures designed to achieve a favourable combination of customer service and efficiency of vehicle deployment. Provision is made for a variety of contingencies, including travel slower or faster than expected, unexpected vehicle locations, vehicle breakdowns and trip cancellations. Simulation tests indicate that the improvement procedures yield substantial efficiencies over more naive scheduling methods and that the system will be effective in real-time applications. <s> BIB002 </s> A review of dynamic vehicle routing problems <s> Transport of persons <s> Abstract This paper describes journey-planning procedures designed for use in a traveller information system covering fixed-schedule and demand-responsive public transport modes. The task is to construct a sequence of journey-legs to meet a traveller’s requirements with the least possible generalised cost, subject to time-window and other constraints. A journey may be carried out in a single leg by walking or by taking a taxi all the way from the origin to the destination, or by a sequence of one or more legs carried by public transport services sandwiched between walked segments connecting an initial pickup and final setdown stop. The public transport services may include fixed-route modes such as bus and train, and demand-responsive services running between fixed points. The main planning procedures are a high-level request-broker and a branch and bound procedure to handle multi-legged journeys; the request-broker also invokes a fleet-scheduling module to obtain bookings on demand-responsive services. 
The paper describes planning conditions, the planning procedures, and reduction techniques that are used to obtain acceptable computational performance. Tests with simulated demand suggest that the procedures are well suited for use in a real-time traveller information system. <s> BIB003 </s> A review of dynamic vehicle routing problems <s> Transport of persons <s> In 2001, Caramia and his coauthors introduced a very fast and efficient heuristic for rooting a fleet of vehicles for dynamic combined pickup and delivery services [Caramia, M., Italiano, G.F., Oriolo, G., Pacifici, A., Perugia, A., 2001. Routing a fleet of vehicles for dynamic combined pickup and delivery services. In: Proceedings of the Symposium on Operation Research 2001, Springer-Verlag, Berlin/Heidelberg, pp. 3-8.]. The authors assume that every client names a stretch-factor that denotes the maximal relative deviation from the shortest path between pickup and delivery point. Waiting times are not allowed. As these assumptions are not very realistic, this paper now presents the results of adapting this algorithm to the dynamic pickup and delivery vehicle routing problem with several time windows. Waiting times of vehicles are admitted. Moreover, the computational results are considerably improved by local search techniques making use of free computational capacity. <s> BIB004 </s> A review of dynamic vehicle routing problems <s> Transport of persons <s> Transportation on demand (TOD) is concerned with the transportation of passengers or goods between specific origins and destinations at the request of users. Management of a TOD system involves making decisions regarding three main aspects: request clustering, vehicle routing, and vehicle scheduling. The operations research literature contains numerous studies addressing both static and dynamic TOD problems, most of which are generalizations of the vehicle routing problem with pickup and delivery (VRPPD). 
The aim of this paper is to present the most important results regarding the VRPPD and to survey four areas of application: the dial-a-ride problem, the urban courier service problem, the dial-a-flight (or air taxi charter) problem, and the emergency ambulance dispatch problem. For each area, the paper describes the particular features of the problem and summarizes the main exact and heuristic solution algorithms that have been proposed in the literature. <s> BIB005 </s> A review of dynamic vehicle routing problems <s> Transport of persons <s> Abstract We propose a double request dial-a-ride model with soft time windows and its application to the CAB Health and Recovery Services, Inc., a non-profit organization in the Boston Metropolitan area for the purpose of addressing the CAB clients’ transportation needs. The objective of the proposed model is to minimize a convex combination of total vehicle transportation costs and total clients’ inconvenience time. The latter consists of excess riding time, early/late delivery time before service and late pickup time after service. The proposed model was used to compare the benefits of coordination and central dispatching over the current system under which individual centers of the organization schedule their own clients’ appointments and route their own vehicles. <s> BIB006 </s> A review of dynamic vehicle routing problems <s> Transport of persons <s> On-demand air transportation is progressively obtaining the popularity with its flexibility, convenience, and guaranteed availability. However, its unique dynamic characteristics, such as short-noticed new demands and disruptive unscheduled maintenance, challenge the efficient operations, since they will significantly affect the priori algorithmic solutions. An integrated optimization model is presented to tackle the dynamic nature of the on-demand air transportation operations. 
A dynamic planning method together with a rolling-horizon approach is used to accommodate new demand. A realistic solution to recover from unscheduled maintenance events is also provided and demonstrated to be effective based on real world scenarios. <s> BIB007 </s> A review of dynamic vehicle routing problems <s> Transport of persons <s> This paper is a result of the application of soft computing technologies to solve the pick up and delivery problem (PDP). In this paper, we consider a practical PDP that is frequently encountered in the real-world logistics operations, such as Helicopter Offshore Crew Transportation of Oil & Gas Company. We consider a typical scenario of relatively large number of participants, about 70 persons and 5 helicopters. Logistics planning turns to be a combinatorial problem, and that makes it very difficult to find reasonable solutions within a short computational time. We present an algorithm based on two optimization techniques, genetic algorithms and heuristic optimization. Our solution is tested on an example with a known optimal solution, and on actual data provided by PEMEX, Mexican Oil Company. Currently, the algorithm is implemented as part of the system for simulation and optimization of offshore logistics called SMART-Logistics and it is at a field-testing phase. <s> BIB008 </s> A review of dynamic vehicle routing problems <s> Transport of persons <s> The availability of relatively cheap small jet aircrafts suggests a new air transportation business: dial-a-flight, an on-demand service in which travelers call a few days in advance to schedule transportation. A successful on-demand air transportation service requires an effective scheduling system to construct minimum-cost pilot and jet itineraries for a set of accepted transportation requests. In Part I, we introduced an integer multicommodity network flow model with side constraints for the dial-a-flight problem and showed that small instances can be solved effectively. 
Here, we demonstrate that high-quality solutions for large-scale real-life instances can be produced efficiently by embedding the core optimization technology in a local search scheme. To achieve the desired level of performance, metrics were devised to select neighborhoods intelligently, a variety of search diversification techniques were included, and an asynchronous parallel implementation was developed. <s> BIB009 </s> A review of dynamic vehicle routing problems <s> Transport of persons <s> This paper introduces a pickup and delivery problem encountered in servicing of offshore oil and gas platforms in the Norwegian Sea. A single vessel must perform pickups and deliveries at several offshore platforms. All delivery demands originate at a supply base and all pickup demands are also destined to the base. The vessel capacity may never be exceeded along its route. In addition, the amount of space available for loading and unloading operations is limited at each platform. The problem, called the Single Vehicle Pickup and Delivery Problem with Capacitated Customers consists of designing a least cost vehicle (vessel) route starting and ending at the depot (base), visiting each customer (platform), and such that there is always sufficient capacity in the vehicle and at the customer location to perform the pickup and delivery operations. This paper describes several construction heuristics as well as a tabu search algorithm. Computational results are presented. <s> BIB010 </s> A review of dynamic vehicle routing problems <s> Transport of persons <s> In the last decade, there has been an increasing body of research in dynamic vehicle routing problems. This article surveys the subclass of those problems called dynamic pickup and delivery problems, in which objects or people have to be collected and delivered in real-time. It discusses some general issues as well as solution strategies. 
<s> BIB011 </s> A review of dynamic vehicle routing problems <s> Transport of persons <s> Recent availability of relatively cheap small jet aircraft creates opportunities for a new air transport business: Air taxi, an on-demand service in which travellers call in one or a few days in advance to book transportation. In this paper, we present a methodology and simulation study supporting important strategic decisions, like for instance determining the required number of aircraft, for a company planning to establish an air taxi service in Norway. The methodology is based on a module simulating incoming bookings, built around a heuristic for solving the underlying dial-a-flight problem. The heuristic includes a separate method for solving the important subproblem of determining the best distribution of waiting time along a single aircraft schedule. The methodology has proved to provide reliable decision support to the company. <s> BIB012 </s> A review of dynamic vehicle routing problems <s> Transport of persons <s> This study analyzes and solves a patient transportation problem arising in large hospitals. The aim is to provide an efficient and timely transport service to patients between several locations in a hospital campus. Transportation requests arrive in a dynamic fashion and the solution methodology must therefore be capable of quickly inserting new requests in the current vehicle routes. Contrary to standard dial-a-ride problems, the problem under study includes several complicating constraints which are specific to a hospital context. The study provides a detailed description of the problem and proposes a two-phase heuristic procedure capable of handling its many features. In the first phase a simple insertion scheme is used to generate a feasible solution, which is improved in the second phase with a tabu search algorithm. The heuristic procedure was extensively tested on real data provided by a German hospital. 
Results show that the algorithm is capable of handling the dynamic aspect of the problem and of providing high-quality solutions. In particular, it succeeded in reducing waiting times for patients while using fewer vehicles. <s> BIB013 </s> A review of dynamic vehicle routing problems <s> Transport of persons <s> The problem studied in this paper stems from a real application to the transportation of patients in the Hospital Complex of Tours (France). The ambulance central station of the Hospital Complex has to plan the transportation demands between care units which require a vehicle. Some demands are known in advance and the others arise dynamically. Each demand requires a specific type of vehicle and a vehicle can transport only one person at a time. The demands can be subcontracted to a private company which implies high cost. Moreover, transportations are subject to particular constraints, among them priority of urgent demands, disinfection of a vehicle after the transportation of a patient with contagious disease and respect of the type of vehicle needed. These characteristics involve a distinction between the vehicles and the crews during the modeling phase. We propose a modeling for solving this difficult problem and a tabu search algorithm inspired by Gendreau et al. (1999). This method supports an adaptive memory and a tabu search procedure. Computational experiments on a real-life instance and on randomly generated instances show that the method can provide high-quality solutions for this dynamic problem with a short computation time. <s> BIB014
|
The transport of persons is in many aspects similar to the transport of goods, yet it is characterized by additional constraints such as regulations on waiting, travel, and service times. Taxis are arguably the most common on-demand individual transport systems. Requests are composed of a pick-up location and time, possibly coupled with a destination. They can be either known in advance, for instance when a customer books a cab for the next day, or they can arrive dynamically, in which case a taxi must be dispatched in the shortest time. When customers cannot share a vehicle, the closest free taxi is generally the one which takes the ride, leaving limited space for optimization. The study by Caramia et al. , generalized by Fabri and Recht BIB004 , focuses on a multi-cab metropolitan transportation system, where a taxi can transport more than one passenger at the same time. In this case, the online algorithms minimize the total traveled distance, while assigning requests to vehicles and computing the taxi routes. This multi-cab transportation system can be generalized as an on-demand or door-to-door transport service. Many applications involve the transport of children, the elderly, disabled people, or patients, from their home to schools, places of work, or medical centers. Xiang et al. studied a dial-a-ride problem (DARP) with changing travel speeds, vehicle breakdowns, and traffic congestion; while Dial BIB001 , followed by Horn BIB002 BIB003 , studied demand-responsive transport systems. An extensive review of this class of problems can be found in the studies by Cordeau et al. BIB005 and Berbeglia et al. BIB011 . A notable application of on-demand transportation systems can be found in major hospitals, with services possibly spread across various buildings on several sites.
Depending on the medical procedure or facility capacity, a patient may need to be transferred on short notice from one service to another, possibly requiring trained staff or specific equipment for his/her care. This application has been studied by Beaudry et al. BIB013 , Kergosien et al. BIB014 , and Melachrinoudis et al. BIB006 . Air taxis emerged as a flexible response to the limitations of traditional airlines. Air taxis offer passengers the opportunity to travel through smaller airports, avoiding waiting lines at check-in and security checks. Air taxi companies offer an on-demand service: customers book a flight a few days in advance, specifying whether they are willing to share the aircraft, stop at an intermediate airport, or have flexible traveling hours. Then, the company accommodates these requests, trying to consolidate flights whenever possible. The underlying optimization problems have not received much attention, except in the studies by Cordeau et al. BIB005 , Espinoza et al. BIB009 , Fagerholt et al. BIB012 , and Yao et al. BIB007 . Similar problems arise in helicopter transportation systems, typically used by oil and gas companies to transport personnel between offshore petroleum platforms BIB010 BIB008 .
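Many of the dispatching decisions in these person-transport systems share a common baseline: when a new pickup-and-delivery request arrives, each vehicle's route is tentatively extended and the request is assigned to the vehicle with the smallest detour. The sketch below is our own illustration on the Euclidean plane, not a method from the cited studies; real dial-a-ride systems add time-window, capacity, and maximum ride-time constraints on top of it.

```python
# Hedged sketch: dispatching a pickup-and-delivery request to the fleet
# by cheapest insertion on the Euclidean plane (constraints omitted).
import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def route_length(route):
    return sum(dist(route[k], route[k + 1]) for k in range(len(route) - 1))

def cheapest_insertion(routes, pickup, delivery):
    """Return (vehicle index, extended route) minimizing the detour;
    routes[v][0] is vehicle v's current position, and the pickup must
    precede the delivery on the chosen route."""
    best = None
    for v, route in enumerate(routes):
        base = route_length(route)
        for i in range(1, len(route) + 1):                # pickup slot
            with_pickup = route[:i] + [pickup] + route[i:]
            for j in range(i + 1, len(with_pickup) + 1):  # delivery slot
                cand = with_pickup[:j] + [delivery] + with_pickup[j:]
                detour = route_length(cand) - base
                if best is None or detour < best[0]:
                    best = (detour, v, cand)
    return best[1], best[2]
```

For an idle fleet, where each route is a single current position, this degenerates to the closest-free-taxi rule mentioned above; with longer routes, the detour criterion naturally favours consolidating shared rides.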
|
A review of dynamic vehicle routing problems <s> Solution Methods <s> An investigation of the single-vehicle, many-to-many, immediate-request dial-a-ride problem is developed in two parts I and II. Part I focuses on the “static” case of the problem. In this case, intermediate requests that may appear during the execution of the route are not considered. A generalized objective function is examined, the minimization of a weighted combination of the time to service all customers and of the total degree of “dissatisfaction” experienced by them while waiting for service. This dissatisfaction is assumed to be a linear function of the waiting and riding times of each customer. Vehicle capacity constraints and special priority rules are part of the problem. A Dynamic Programming approach is developed. The algorithm exhibits a computational effort which, although an exponential function of the size of the problem, is asymptotically lower than the corresponding effort of the classical Dynamic Programming algorithm applied to a Traveling Salesman Problem of the same size. Part II extends this approach to solving the equivalent “dynamic” case. In this case, new customer requests are automatically eligible for consideration at the time they occur. The procedure is an open-ended sequence of updates, each following every new customer request. The algorithm optimizes only over known inputs and does not anticipate future customer requests. Indefinite deferment of a customer's request is prevented by the priority rules introduced in Part I. Examples in both “static” and “dynamic” cases are presented. <s> BIB001 </s> A review of dynamic vehicle routing problems <s> Solution Methods <s> Real-time decision problems are playing an increasingly important role in the economy due to advances in communication and information technologies that now allow realtime information to be quickly obtained and processed (Seguin et al., 1997). 
Among these, dynamic vehicle routing and dispatching problems have emerged as an intense area of research in the operations research community. Numerous examples may be found in Haines and Wolfe (1982), Powell, Jaillet and Odoni (1995) and Psaraftis (1995). In these problems, a set of vehicles is routed over a particular time horizon (typically, a day) while new service requests are occurring in real-time. With each new request, the current solution may be reconfigured to better service the new request, as well as those already assigned to a route. <s> BIB002 </s> A review of dynamic vehicle routing problems <s> Solution Methods <s> Abstract In real-time fleet management, vehicle routes are built in an on-going fashion as vehicle locations, travel times and customer requests are revealed over the planning horizon. To deal with such problems, a new generation of fast on-line algorithms capable of taking into account uncertainty is required. Although several articles on this topic have been published, the literature on real-time vehicle routing is still disorganized. In this paper the research in this field is reviewed and some issues that have not received attention so far are highlighted. A particular emphasis is put on parallel computing strategies. <s> BIB003 </s> A review of dynamic vehicle routing problems <s> Solution Methods <s> An important, but seldom investigated, issue in the field of dynamic vehicle routing and dispatching is how to exploit information about future events to improve decision making. In this paper, we address this issue in a real-time setting with a strategy based on probabilistic knowledge about future request arrivals to better manage the fleet of vehicles. More precisely, the new strategy introduces dummy customers (representing forecasted requests) in vehicle routes to provide a good coverage of the territory. This strategy is assessed through computational experiments performed in a simulated environment.
<s> BIB004 </s> A review of dynamic vehicle routing problems <s> Solution Methods <s> This paper reviews and classifies the work done in the field of dynamic vehicle routing. We focus, in particular, on problems where the uncertainty comes from the occurrence of new requests. Problem-solving approaches are investigated in contexts where consolidation of multiple requests onto the same vehicle is allowed and addressed through the design of planned routes. Starting with pure myopic approaches, we then review in later sections the issues of diversion and anticipation of future requests <s> BIB005 </s> A review of dynamic vehicle routing problems <s> Solution Methods <s> In the past 30 years, commercial transport traffic has more than doubled in both Europe and North America. Asian commercial transport traffic over this period of time has likely increased even more. Traffic jams are routine and they can happen in any segment of the highway system at any time. Moreover, manufacturing companies increasingly apply just-in-time practices in order to cut down inventory costs. As any mismatch between supply and demand can result into significant disturbances of manufacturing processes, just-in-time practices necessitate punctual, reliable, and flexible transportation. Emerging technologies in real-time communications systems provide the means for commercial vehicle operators to meet the increasingly complex customer expectations in a highly dynamic environment with unreliable traffic conditions. FLEET TELEMATICS: Real-Time Management and Planning of Commercial Vehicle Operations combines wireless telematics systems with dynamic vehicle routing algorithms and vehicle-positioning systems to produce a telematics-enabled information system that can be employed by commercial fleet operators for real-time monitoring, control, and planning. 
The book presents a Messaging & Fleet Monitoring System that automatically identifies deviations between the planned and the current state of the transportation system and a Dynamic Planning System (DPS) that provides real-time decision support considering the current state of the transportation system. The DPS uses newly developed dynamic vehicle routing algorithms to find high-quality solutions and adjust schedules and routes immediately. <s> BIB006 </s> A review of dynamic vehicle routing problems <s> Solution Methods <s> We consider online Vehicle Routing Problems (VRPs). The problems are online because the problem instance is revealed incrementally. After providing motivations for the consideration of such online problems, we first give a detailed summary of the most relevant research in the area of online VRPs. We then consider the online Traveling Salesman Problem (TSP) with precedence and capacity constraints and give an online algorithm with a competitive ratio of at most 2. We also consider an online version of the TSP with m salesmen and we give an online algorithm that has a competitive ratio of 2, a result that is best possible. We also study polynomial-time algorithms for these problems. Finally, we introduce the notion of disclosure dates, a form of advanced notice which allows for more realisticcompetitive ratios. <s> BIB007 </s> A review of dynamic vehicle routing problems <s> Solution Methods <s> This chapter examines the evolution of research on dynamic vehicle routing problems (DVRP). We de?ne the DVRP and show how it is di?erent from the traditional static vehicle routing problem. We then illustrate the technological environment required. Next, we discuss important characteristics of the problem, including the degree of dynamism, elements relevant for the system objective, and evaluation methods for the performance of algorithms.The chapter then summarizes research prior to 2000 and focuses on developments from 2000 to present. 
Finally, we offer our conclusions and suggest directions for future research. <s> BIB008 </s> A review of dynamic vehicle routing problems <s> Solution Methods <s> This paper presents a methodology for classifying the literature of the Vehicle Routing Problem (VRP). VRP as a field of study and practice is defined quite broadly. It is considered to encompass all of the managerial, physical, geographical, and informational considerations as well as the theoretic disciplines impacting this ever-emerging field. Over its lifespan the VRP literature has become quite disjointed and disparate. Keeping track of its development has become difficult because its subject matter transcends several academic disciplines and professions that range from algorithm design to traffic management. Consequently, this paper defines VRP's domain in its entirety, accomplishes an all-encompassing taxonomy for the VRP literature, and delineates all of VRP's facets in a parsimonious and discriminating manner. Sample articles chosen for their disparity are classified to illustrate the descriptive power and parsimony of the taxonomy. Moreover, all previously published VRP taxonomies are shown to be relatively myopic; that is, they are subsumed by what is herein presented. Because the VRP literature encompasses esoteric and highly theoretical articles at one extremum and descriptions of actual applications at the other, the article sampling includes the entire range of the VRP literature. <s> BIB009
|
Little research was conducted on dynamic routing between the work of Psaraftis BIB001 in 1980 and the late 1990s. However, the last decade has seen renewed interest in this class of problems BIB009 , with solution techniques ranging from linear programming to metaheuristics. This section presents the major contributions in the field; the reader is referred to the reviews, books, and special issues by Gendreau and Potvin BIB002 , Ghiani et al. BIB003 , Goel BIB006 , Ichoua , Ichoua et al. BIB004 BIB005 , Jaillet and Wagner BIB007 , Larsen et al. BIB008 , and Zeimpekis et al. , which complement our review.
|
A review of dynamic vehicle routing problems <s> Periodic reoptimization <s> An investigation of the single-vehicle, many-to-many, immediate-request dial-a-ride problem is developed in two parts I and II. Part I focuses on the “static” case of the problem. In this case, intermediate requests that may appear during the execution of the route are not considered. A generalized objective function is examined, the minimization of a weighted combination of the time to service all customers and of the total degree of “dissatisfaction” experienced by them while waiting for service. This dissatisfaction is assumed to be a linear function of the waiting and riding times of each customer. Vehicle capacity constraints and special priority rules are part of the problem. A Dynamic Programming approach is developed. The algorithm exhibits a computational effort which, although an exponential function of the size of the problem, is asymptotically lower than the corresponding effort of the classical Dynamic Programming algorithm applied to a Traveling Salesman Problem of the same size. Part II extends this approach to solving the equivalent “dynamic” case. In this case, new customer requests are automatically eligible for consideration at the time they occur. The procedure is an open-ended sequence of updates, each following every new customer request. The algorithm optimizes only over known inputs and does not anticipate future customer requests. Indefinite deferment of a customer's request is prevented by the priority rules introduced in Part I. Examples in both “static” and “dynamic” cases are presented. <s> BIB001 </s> A review of dynamic vehicle routing problems <s> Periodic reoptimization <s> In this paper we formally introduce a generic real-time multivehicle truckload pickup and delivery problem. The problem includes the consideration of various costs associated with trucks' empty travel distances, jobs' delayed completion times, and job rejections. 
Although very simple, the problem captures most features of the operational problem of a real-world trucking fleet that dynamically moves truckloads between different sites according to customer requests that arrive continuously. We propose a mixed-integer programming formulation for the offline version of the problem. We then consider and compare five rolling horizon strategies for the real-time version. Two of the policies are based on a repeated reoptimization of various instances of the offline problem, while the others use simpler local (heuristic) rules. One of the reoptimization strategies is new, while the other strategies have recently been tested for similar real-time fleet management problems. The comparison of the policies is done under a general simulation framework. The analysis is systematic and considers varying traffic intensities, varying degrees of advance information, and varying degrees of flexibility for job-rejection decisions. The new reoptimization policy is shown to systematically outperform the others under all these conditions. <s> BIB002 </s> A review of dynamic vehicle routing problems <s> Periodic reoptimization <s> An abundant literature on vehicle routing problems is available. However, most of the work deals with static problems, where all data are known in advance, i.e. before the optimization has started. <s> BIB003 </s> A review of dynamic vehicle routing problems <s> Periodic reoptimization <s> We consider a dynamic vehicle routing problem with hard time windows, in which a set of customer orders arrives randomly over time to be picked up within their time windows. The dispatcher does not have any deterministic or probabilistic information on the location and size of a customer order until it arrives. The objective is to minimize the sum of the total distance of the routes used to cover all the orders. We propose a column-generation-based dynamic approach for the problem.
The approach generates single-vehicle trips (i.e., columns) over time in a real-time fashion by utilizing existing columns, and solves at each decision epoch a set-partitioning-type formulation of the static problem consisting of the columns generated up to this time point. We evaluate the performance of our approach by comparing it to an insertion-based heuristic and an approach similar to ours, but without computational time limit for handling the static problem at each decision epoch. Computational results on various test problems generalized from a set of static benchmark problems in the literature show that our approach outperforms the insertion-based heuristic on most test problems. <s> BIB004 </s> A review of dynamic vehicle routing problems <s> Periodic reoptimization <s> Ant colony optimization (ACO) is a metaheuristic for combinatorial optimization problems. In this paper we report on its successful application to the vehicle routing problem (VRP). First, we introduce the VRP and some of its variants, such as the VRP with time windows, the time dependent VRP, the VRP with pickup and delivery, and the dynamic VRP. These variants have been formulated in order to bring the VRP closer to the kind of situations encountered in the real-world. <s> BIB005
|
To the best of our knowledge, the first periodic reoptimization approach is due to Psaraftis BIB001 , with the development of a dynamic programming approach. His research focuses on the DARP and consists of finding the optimal route each time a new request is known. The main drawback of dynamic programming is the well-known curse of dimensionality [110, Chap. 1], which prevents its application to large instances. More generally, periodic reoptimization approaches start at the beginning of the day with a first optimization that produces an initial set of routes. Then, an optimization procedure periodically solves a static problem corresponding to the current state, either whenever the available data changes, or at fixed intervals of time, referred to as decision epochs BIB004 or time slices . The advantage of periodic reoptimization is that it can be based on algorithms developed for static routing, for which extensive research has been carried out. The main drawback is that all the optimization needs to be performed before updating the routing plan, thus increasing delays for the dispatcher. Yang et al. BIB002 addressed the real-time truckload PDP, in which a fleet of trucks has to service point-to-point transport requests arriving dynamically. Important assumptions are that all trucks can only handle one request at a time, with no possible preemption, and they travel at the same constant speed. The authors propose MYOPT, a rolling horizon approach based on a linear program (LP) that is solved whenever a new request arrives. Along the same line of linear programming, Chen and Xu BIB004 designed a dynamic column generation algorithm (DYCOL) for the D-VRPTW. The authors propose the concept of decision epochs over the planning horizon, which are the dates when the optimization process runs. The novelty of their approach lies in dynamically generating columns for a set-partitioning model, using columns from the previous decision epoch.
The authors compared DYCOL to a traditional column generation with no time limit (COL). Computational results based on the Solomon benchmark demonstrate that DYCOL yields comparable results in terms of objective function, but with running times limited to 10 seconds, as opposed to the several hours consumed by COL. Montemanni et al. BIB003 developed an Ant Colony System (ACS) to solve the D-VRP. Similar to Kilby et al. , their approach uses time slices, that is, they divide the day into periods of equal duration. A request arriving during a time slice is not handled until the end of that slice, so the problem solved during a time slice only considers the requests known at its beginning. Hence, the optimization is run statically and independently during each time slice. The main advantage of this time partition is that similar computational effort is allowed for each time slice. This discretization is made possible by the nature of the requests, which are never urgent and can be postponed. An interesting feature of their approach is the use of the pheromone trace to transfer characteristics of a good solution to the next time slice. A similar approach was also used by Gambardella et al. and Rizzoli et al. BIB005 .
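The epoch/time-slice mechanics described above can be sketched in a few lines. This is a minimal illustration rather than any of the cited algorithms: `solve_static` is a hypothetical placeholder for whatever static solver is re-run at each decision epoch, and requests arriving inside a slice are buffered until the next epoch, so every optimization run sees a frozen, purely static instance.

```python
def solve_static(requests):
    # Placeholder static solver: any static VRP algorithm could be plugged
    # in here; we simply return the requests in a deterministic order.
    return sorted(requests)

def periodic_reoptimization(horizon, slice_len, initial, arrivals):
    """Re-solve a static snapshot at every decision epoch (fixed time slices).

    arrivals: list of (arrival_time, request_id) pairs revealed over time.
    A request arriving during a slice is ignored until the next epoch.
    """
    plans = []
    for epoch in range(0, horizon + 1, slice_len):
        # Only requests revealed up to this epoch are part of the snapshot.
        known = initial + [r for (t, r) in arrivals if t <= epoch]
        plans.append((epoch, solve_static(known)))
    return plans

# Requests "b" and "c" arrive at t=3 and t=7; epochs fall at t=0, 5, 10.
plans = periodic_reoptimization(10, 5, ["a"], [(3, "b"), (7, "c")])
```

Because each epoch solves an independent static instance, a similar computational budget can be allotted to every slice, which is precisely the advantage attributed to time partitioning above.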
|
A review of dynamic vehicle routing problems <s> Continuous reoptimization <s> Vehicle dispatching consists of allocating real-time service requests to a fleet of moving vehicles. In this paper, each vehicle is associated with a vector of attribute values that describes its current situation with respect to new incoming service requests. Using this attribute description, a utility function aimed at approximating the decision process of a professional dispatcher is constructed through genetic programming. Computational results are reported on requests collected from a courier service company and a comparison is provided with a neural network model and a simple dispatching policy. <s> BIB001 </s> A review of dynamic vehicle routing problems <s> Continuous reoptimization <s> An abundant literature about vehicle routing and scheduling problems is available in the scientific community. However, a large fraction of this work deals with static problems where all data are known before the routes are constructed. Recent technological advances now create environments where decisions are taken quickly, using new or updated information about the current routing situation. This paper describes such a dynamic problem, motivated from courier service applications, where customer requests with soft time windows must be dispatched in real time to a fleet of vehicles in movement. A tabu search heuristic, initially designed for the static version of the problem, has been adapted to the dynamic case and implemented on a parallel platform to increase the computational effort. Numerical results are reported using different request arrival rates, and comparisons are established with other heuristic methods. <s> BIB002 </s> A review of dynamic vehicle routing problems <s> Continuous reoptimization <s> Recent technological advances in communication systems now allow the exploitation of realtime information for dynamic vehicle routing and scheduling. 
It is possible, in particular, to consider diverting a vehicle away from its current destination in response to a new customer request. In this paper, a strategy for assigning customer requests, which includes diversion, is proposed, and various issues related to it are presented. An empirical evaluation of the proposed approach is performed within a previously reported tabu search heuristic. Simulations compare the tabu search heuristic, with and without the new strategy, on a dynamic problem motivated from a courier service application. The results demonstrate the potential savings that can be obtained through the application of the proposed approach. <s> BIB003 </s> A review of dynamic vehicle routing problems <s> Continuous reoptimization <s> The paper analyses recent developments of a number of memory-based metaheuristics such as taboo search, scatter search, genetic algorithms and ant colonies. Its shows that the implementations of these general solving methods are more and more similar. So, a unified presentation is proposed under the name of Adaptive Memory Programming (AMP). A number of methods recently developed for the quadratic assignment, vehicle routing and graph colouring problems are reviewed and presented under the adaptive memory programming point of view. AMP presents a number of interesting aspects such as a high parallelization potential and theability of dealing with real and dynamic applications. <s> BIB004 </s> A review of dynamic vehicle routing problems <s> Continuous reoptimization <s> The real-time vehicle routing problem with time windows and simultaneous delivery/pickup demands (RT-VRPTWDP) is formulated as a mixed integer programming model which is repeatedly solved in the rolling time horizon. The real-time delivery/pickup demands are served by capacitated vehicles with limited initial loads. Moreover, pickup services aren't necessarily done after delivery services in each route. 
A heuristic comprising route construction, route improvement and tabu search is proposed. The route improvement procedure follows the general guidelines of an anytime algorithm. Numerical examples made up by Gélinas were taken with modification for validation. Based on the Taguchi orthogonal arrays approach, the optimal parameter setting for tabu search is set through experimentations on the RT-VRPTWDP. The results show that the proposed algorithm can efficiently decrease the total route cost. <s> BIB005 </s> A review of dynamic vehicle routing problems <s> Continuous reoptimization <s> In the Dial-a-Ride problem (DARP) users specify transportation requests between origins and destinations to be served by vehicles. In the dynamic DARP, requests are received throughout the day and the primary objective is to accept as many requests as possible while satisfying operational constraints. This article describes and compares a number of parallel implementations of a Tabu search heuristic previously developed for the static DARP, i.e., the variant of the problem where all requests are known in advance. Computational results show that the proposed algorithms are able to satisfy a high percentage of user requests. <s> BIB006 </s> A review of dynamic vehicle routing problems <s> Continuous reoptimization <s> We introduce the concept of fruitful regions in a dynamic routing context: regions that have a high potential of generating loads to be transported. The objective is to maximise the number of loads transported, while keeping to capacity and time constraints. Loads arrive while the problem is being solved, which makes it a real-time routing problem. The solver is a self-adaptive evolutionary algorithm that ensures feasible solutions at all times. We investigate under what conditions the exploration of fruitful regions improves the effectiveness of the evolutionary algorithm.
<s> BIB007 </s> A review of dynamic vehicle routing problems <s> Continuous reoptimization <s> In this paper we present a formulation for the dynamic vehicle routing problem with time-dependent travel times. We also present a genetic algorithm to solve the problem. The problem is a pick-up or delivery vehicle routing problem with soft time windows in which we consider multiple vehicles with different capacities, real-time service requests, and real-time variations in travel times between demand nodes. The performance of the genetic algorithm is evaluated by comparing its results with exact solutions and lower bounds for randomly generated test problems. For small size problems with up to 10 demands, the genetic algorithm provides almost the same results as the exact solutions, while its computation time is less than 10% of the time required to produce the exact solutions. For the problems with 30 demand nodes, the genetic algorithm results have less than 8% gap with lower bounds. This research also shows that as the uncertainty in the travel time information increases, a dynamic routing strategy that takes the real-time traffic information into account becomes increasingly superior to a static one. This is clear when we compare the static and dynamic routing strategies in problem scenarios that have different levels of uncertainty in travel time information. In additional tests on a simulated network, the proposed algorithm works well in dealing with situations in which accidents cause significant congestion in some part of the transportation network. <s> BIB008 </s> A review of dynamic vehicle routing problems <s> Continuous reoptimization <s> This paper describes a tabu search heuristic for the vehicle routing problem with soft time windows. In this problem, lateness at customer locations is allowed although a penalty is incurred and added to the objective value.
By adding large penalty values, the vehicle routing problem with hard time windows can be addressed as well. In the tabu search, a neighborhood of the current solution is created through an exchange procedure that swaps sequences of consecutive customers (or segments) between two routes. The tabu search also exploits an adaptive memory that contains the routes of the best previously visited solutions. New starting points for the tabu search are produced through a combination of routes taken from different solutions found in this memory. Many best-known solutions are reported on classical test problems. <s> BIB009 </s> A review of dynamic vehicle routing problems <s> Continuous reoptimization <s> The distribution of goods based on road services in urban areas, usually known as City Logistics, contributes to traffic congestion and is affected by traffic congestion, generates environmental impacts and incurs in high logistics costs. Therefore a holistic approach to the design and evaluation of City Logistics applications requires an integrated framework in which all components could work together that is must be modelled not only in terms of the core models for vehicle routing and fleet management, but also in terms of models able of including the dynamic aspects of traffic on the underlying road network, namely if Information and Communication Technologies (ICT) applications are taken into account. This paper reports on the modelling framework developed in the national projects SADERYL-I and II, sponsored by the Spanish “Direccion General de Ciencia y Tecnologia” (DGCYT) and tested in the European Project MEROPE of the INTERREG IIIB Programme. 
The modelling framework consists of a Decision Support System whose core architecture is composed by a Data Base, to store all the data required by the implied models: location of logistic centres and customers, capacities of warehouses and depots, transportation costs, operational costs, fleet data, etc.; a Database Management System, for the updating of the information stored in the data base; a Model Base, containing the family of models and algorithms to solve the related problems, discrete location, network location, street vehicle routing and scheduling; a Model Base Management System, to update, modify, add or delete models from the Model Base; a GIS based Graphic User Interface supporting the dialogues to define and update data, select the model suitable to the intended problem, generate automatically from the digital map of the road network the input graph for the Network Location and Vehicle Routing models, apply the corresponding algorithm, visualize the problem and the results, etc. To account for the dynamics of urban traffic flows the system includes an underlying dynamic traffic simulation model (AIMSUN in this case) which is able to track individually the fleet vehicles, emulating in this way the monitoring of fleet vehicles in a real time fleet management system, gathering dynamic data (i.e. current position, previous position, current speed, previous speed, etc.) while following the vehicle, in a similar way as the data that in real life an equipped vehicle could provide. This is the information required by a “Dynamic Router and Scheduler” to determine which vehicle will be assigned to the new service and which will be the new route for the selected vehicle <s> BIB010 </s> A review of dynamic vehicle routing problems <s> Continuous reoptimization <s> We develop and analyze a mathematical model for dynamic fleet management that captures the characteristics of modern vehicle operations. 
The model takes into consideration dynamic data such as vehicle locations, travel time, and incoming customer orders. The solution method includes an effective procedure for solving the static problem and an efficient re-optimization procedure for updating the route plan as dynamic information arrives. Computational experiments show that our re-optimization procedure can generate near-optimal solutions. <s> BIB011 </s> A review of dynamic vehicle routing problems <s> Continuous reoptimization <s> This study analyzes and solves a patient transportation problem arising in large hospitals. The aim is to provide an efficient and timely transport service to patients between several locations in a hospital campus. Transportation requests arrive in a dynamic fashion and the solution methodology must therefore be capable of quickly inserting new requests in the current vehicle routes. Contrary to standard dial-a-ride problems, the problem under study includes several complicating constraints which are specific to a hospital context. The study provides a detailed description of the problem and proposes a two-phase heuristic procedure capable of handling its many features. In the first phase a simple insertion scheme is used to generate a feasible solution, which is improved in the second phase with a tabu search algorithm. The heuristic procedure was extensively tested on real data provided by a German hospital. Results show that the algorithm is capable of handling the dynamic aspect of the problem and of providing high-quality solutions. In particular, it succeeded in reducing waiting times for patients while using fewer vehicles. <s> BIB012
|
Continuous reoptimization approaches perform the optimization throughout the day and maintain information on good solutions in an adaptive memory BIB004 . Whenever the available data changes, a decision procedure aggregates the information from the memory to update the current routing. The advantage is that the computational capacity is maximized, possibly at the expense of a more complex implementation. It is worth noting that because the current routing is subject to change at any time, vehicles do not know their next destination until they finish the service of a request. To the best of our knowledge, the first continuous reoptimization approach is due to Gendreau et al. BIB002 , with the adaptation of the parallel Tabu Search (TS) framework introduced by Taillard et al. BIB009 to a D-VRPTW problem arising in the local operation of long-distance express courier services. Their approach maintains a pool of good routes (the adaptive memory), which is used to generate initial solutions for a parallel TS. The parallelized search is done by partitioning the routes of the current solution, and optimizing them in independent threads. Whenever a new customer request arrives, it is checked against all the solutions from the adaptive memory to decide whether it should be accepted or rejected. This framework was also implemented for the D-VRP BIB003 , while other variations of TS have been applied to the D-PDP BIB010 BIB005 and the DARP BIB006 BIB012 . Bent and Van Hentenryck introduced the Multiple Plan Approach (MPA) as a generalization of the TS with adaptive memory BIB002 . The general idea is to populate and maintain a pool of solutions (the routing plans) that is used to generate a distinguished solution. Whenever a new request arrives, a procedure is called to check whether it can be serviced or not; if it can be serviced, then the request is inserted in the solution pool and incompatible solutions are discarded.
Pool updates are performed periodically or whenever a vehicle finishes servicing a customer. This pool-update phase is crucial and ensures that all solutions are coherent with the current state of vehicles and customers. The pool can be seen as an adaptive memory that maintains a set of alternative solutions. In an early work, Benyahia and Potvin BIB001 studied the D-PDP and proposed a Genetic Algorithm (GA) that models the decision process of a human dispatcher. More recently, GAs were also used for the same problem BIB011 BIB008 and for the D-VRP BIB007 . Genetic algorithms in dynamic contexts are very similar to those designed for static problems, although they generally run throughout the planning horizon and solutions are constantly adapting to the changes made to the input.
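The accept/reject step of these pool-based approaches can be sketched as follows. This is a simplified illustration of the idea, not Bent and Van Hentenryck's actual procedure: `can_insert` is a hypothetical feasibility check, and a request is accepted as soon as at least one pooled plan can absorb it, with incompatible plans discarded to keep the pool consistent.

```python
def accept_request(pool, request, can_insert):
    """Check a new request against every plan in the adaptive-memory pool.

    pool: list of routing plans (here, each plan is just a list of requests).
    can_insert(plan, request): returns the plan with the request inserted,
    or None if the plan cannot feasibly serve it.
    """
    updated = [p for p in (can_insert(plan, request) for plan in pool)
               if p is not None]
    if not updated:
        return pool, False   # reject: no plan can serve it; pool unchanged
    return updated, True     # accept: plans that cannot serve it are dropped

# Toy feasibility rule for illustration: a plan holds at most three requests.
def can_insert(plan, request):
    return plan + [request] if len(plan) < 3 else None

pool = [["a"], ["a", "b", "c"]]
pool, accepted = accept_request(pool, "d", can_insert)
```

Discarding the incompatible plans mirrors the pool-update phase described above: after an acceptance, every surviving solution in the memory remains coherent with the current state of vehicles and customers.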
|
A review of dynamic vehicle routing problems <s> Stochastic modeling <s> The Commercial Transport Division of North American Van Lines dispatches thousands of trucks from customer origin to customer destination each week under high levels of demand uncertainty. Working closely with upper management, the project team developed a new type of network model for assigning drivers to loads. The model, LOADMAP, combines real-time information about drivers and loads with an elaborate forecast of future loads and truck activities to maximize profits and service. It provided management with a new understanding of the economics of truckload operations; integrated load evaluation, pricing, marketing, and load solicitation with truck and load assignment; and increased profits by an estimated $2.5 million annually, while providing a higher level of service. <s> BIB001 </s> A review of dynamic vehicle routing problems <s> Stochastic modeling <s> In a companion paper (Godfrey and Powell 2002) we introduced an adaptive dynamic programming algorithm for stochastic dynamic resource allocation problems, which arise in the context of logistics and distribution, fleet management, and other allocation problems. The method depends on estimating separable nonlinear approximations of value functions, using a dynamic programming framework. That paper considered only the case in which the time to complete an action was always a single time period. Experiments with this technique quickly showed that when the basic algorithm was applied to problems with multiperiod travel times, the results were very poor. In this paper, we illustrate why this behavior arose, and propose a modified algorithm that addresses the issue. Experimental work demonstrates that the modified algorithm works on problems with multiperiod travel times, with results that are almost as good as the original algorithm applied to single period travel times. 
<s> BIB002 </s> A review of dynamic vehicle routing problems <s> Stochastic modeling <s> Freight transportation is characterized by highly dynamic information processes: customers call in orders over time to move freight; the movement of freight over long distances is subject to random delays; equipment failures require last minute changes; and decisions are not always executed in the field according to plan. The high-dimensionality of the decisions involved has made transportation a natural application for the techniques of mathematical programming, but the challenge of modeling dynamic information processes has limited their success. In this chapter, we explore the use of concepts from stochastic programming in the context of resource allocation problems that arise in freight transportation. Since transportation problems are often quite large, we focus on the degree to which some techniques exploit the natural structure of these problems. Experimental work in the context of these applications is quite limited, so we highlight the techniques that appear to be the most promising. <s> BIB003 </s> A review of dynamic vehicle routing problems <s> Stochastic modeling <s> Mobile communication technologies enable communication between dispatchers and drivers and hence can enable fleet management based on real-time information. We assume that such communication capability exists for a single pickup and delivery vehicle and that we know the likelihood, as a function of time, that each of the vehicle's potential customers will make a pickup request. We then model and analyze the problem of constructing a minimum expected total cost route from an origin to a destination that anticipates and then responds to service requests, if they occur, while the vehicle is en route. We model this problem as a Markov decision process and present several structured results associated with the optimal expected cost-to-go function and an optimal policy for route construction. 
We illustrate the behavior of an optimal policy with several numerical examples and demonstrate the superiority of an optimal anticipatory policy, relative to a route design approach that reflects the reactive nature of current routing procedures for less-than-truckload pickup and delivery. <s> BIB004 </s> A review of dynamic vehicle routing problems <s> Stochastic modeling <s> In this paper we formally introduce a generic real-time multivehicle truckload pickup and delivery problem. The problem includes the consideration of various costs associated with trucks' empty travel distances, jobs' delayed completion times, and job rejections. Although very simple, the problem captures most features of the operational problem of a real-world trucking fleet that dynamically moves truckloads between different sites according to customer requests that arrive continuously.We propose a mixed-integer programming formulation for the offline version of the problem. We then consider and compare five rolling horizon strategies for the real-time version. Two of the policies are based on a repeated reoptimization of various instances of the offline problem, while the others use simpler local (heuristic) rules. One of the reoptimization strategies is new, while the other strategies have recently been tested for similar real-time fleet management problems.The comparison of the policies is done under a general simulation framework. The analysis is systematic and considers varying traffic intensities, varying degrees of advance information, and varying degrees of flexibility for job-rejection decisions. The new reoptimization policy is shown to systematically outperform the others under all these conditions. <s> BIB005 </s> A review of dynamic vehicle routing problems <s> Stochastic modeling <s> This paper examines the value of real-time traffic information to optimal vehicle routing in a nonstationary stochastic network. 
We present a systematic approach to aid in the implementation of transportation systems integrated with real-time information technology. We develop decision-making procedures for determining the optimal driver attendance time, optimal departure times, and optimal routing policies under time-varying traffic flows based on a Markov decision process formulation. With a numerical study carried out on an urban road network in Southeast Michigan, we demonstrate significant advantages when using this information in terms of total cost savings and vehicle usage reduction while satisfying or improving service levels for just-in-time delivery. <s> BIB006 </s> A review of dynamic vehicle routing problems <s> Stochastic modeling <s> Response time plays a crucial role in reducing the loss of assets and lives caused by emergencies. Good dispatch strategies for emergency response vehicles result in more efficient service, and route guidance can help reduce vehicles' travel times. Because of a limited number and type of emergency response vehicles at each station, service area gaps will be created: they cannot be properly covered by the remaining emergency response vehicles when some vehicles are dispatched. Future emergency calls in these areas may experience longer response times than usual. In this paper, an optimization model is developed that, given real-time traffic information, can assist dispatchers of emergency response vehicle in assigning multiple emergency response vehicles to incidents and in determining the routes that avoid congestion spots in the transportation networks. This model accounts for the service area coverage concerns (when several vehicles are busy) by relocation and redistribution of the remaining vehicles am... 
<s> BIB007 </s> A review of dynamic vehicle routing problems <s> Stochastic modeling <s> This chapter focuses on modeling the organization and flow of information and decisions in the context of freight transportation problems that involve the management of people and equipment to serve the needs of customers. The timing of the flow of capital is becoming an increasingly important dimension of freight transportation, but as of this writing, there has been virtually no formal research on the topic. Modeling the timing of physical activities, by contrast, dates to the 1950s. These models introduce a range of modeling and algorithmic challenges that have been studied since the early years of operations research models. The author’s decision to focus on modeling the evolution of information (or more broadly, the organization and flow of information and decisions) reflects the growing maturity of this class of models, and the importance of questions that require an explicit mode of information processes. <s> BIB008 </s> A review of dynamic vehicle routing problems <s> Stochastic modeling <s> This paper considers a dynamic and stochastic routing problem in which information about customer locations and probabilistic information about future service requests are used to maximize the expected number of customers served by a single uncapacitated vehicle. The problem is modeled as a Markov decision process, and analytical results on the structure of the optimal policy are derived. For the case of a single dynamic customer, we completely characterize the optimal policy. Using the analytical results, we propose a real-time heuristic and demonstrate its effectiveness compared with a series of other intuitively appealing heuristics. We also use computational tests to determine the heuristic value of knowing both customer locations and probabilistic information about future service requests. 
<s> BIB009 </s> A review of dynamic vehicle routing problems <s> Stochastic modeling <s> Dynamic response to emergencies requires real time information from transportation agencies, public safety agencies and hospitals as well as the many essential operational components. In emergency response operations, good vehicle dispatching strategies can result in more efficient service by reducing vehicles’ travel times and system preparation time and the coordination between these components directly influences the effectiveness of activities involved in emergency response. In this chapter, an integrated emergency response fleet deployment system is proposed which embeds an optimization approach to assist the dispatch center operators in assigning emergency vehicles to emergency calls, while having the capability to look ahead for future demands. The mathematical model deals with the real time vehicle dispatching problem while accounting for the service requirements and coverage concerns for future demand by relocating and diverting the on-route vehicles and remaining vehicles among stations. A rolling-horizon approach is adopted in the model to reduce the relocation sites in order to save computation time. A simulation program is developed to validate the model and to compare various dispatching strategies <s> BIB010 </s> A review of dynamic vehicle routing problems <s> Stochastic modeling <s> Approximate dynamic programming (ADP) is a broad umbrella for a modeling and algorithmic strategy for solving problems that are sometimes large and complex, and are usually (but not always) stochastic. It is most often presented as a method for overcoming the classic curse of dimensionality that is well-known to plague the use of Bellman's equation. For many problems, there are actually up to three curses of dimensionality. But the richer message of approximate dynamic programming is learning what to learn, and how to learn it, to make better decisions over time. 
This article provides a brief review of approximate dynamic programming, without intending to be a complete tutorial. Instead, our goal is to provide a broader perspective of ADP and how it should be approached from the perspective of different problem classes. © 2009 Wiley Periodicals, Inc. Naval Research Logistics 56: 239-249, 2009 <s> BIB011 </s> A review of dynamic vehicle routing problems <s> Stochastic modeling <s> This paper examines approximate dynamic programming algorithms for the single-vehicle routing problem with stochastic demands from a dynamic or reoptimization perspective. The methods extend the rollout algorithm by implementing different base sequences (i.e. a priori solutions), look-ahead policies, and pruning schemes. The paper also considers computing the cost-to-go with Monte Carlo simulation in addition to direct approaches. The best new method found is a two-step lookahead rollout started with a stochastic base sequence. The routing cost is about 4.8% less than the one-step rollout algorithm started with a deterministic sequence. Results also show that Monte Carlo cost-to-go estimation reduces computation time 65% in large instances with little or no loss in solution quality. Moreover, the paper compares results to the perfect information case from solving exact a posteriori solutions for sampled vehicle routing problems. The confidence interval for the overall mean difference is (3.56%, 4.11%). <s> BIB012 </s> A review of dynamic vehicle routing problems <s> Stochastic modeling <s> Preface. Acknowledgments. 1. The challenges of dynamic programming. 1.1 A dynamic programming example: a shortest path problem. 1.2 The three curses of dimensionality. 1.3 Some real applications. 1.4 Problem classes. 1.5 The many dialects of dynamic programming. 1.6 What is new in this book? 1.7 Bibliographic notes. 2. Some illustrative models. 2.1 Deterministic problems. 2.2 Stochastic problems. 2.3 Information acquisition problems. 
2.4 A simple modeling framework for dynamic programs. 2.5 Bibliographic notes. Problems. 3. Introduction to Markov decision processes. 3.1 The optimality equations. 3.2 Finite horizon problems. 3.3 Infinite horizon problems. 3.4 Value iteration. 3.5 Policy iteration. 3.6 Hybrid value-policy iteration. 3.7 The linear programming method for dynamic programs. 3.8 Monotone policies. 3.9 Why does it work? 3.10 Bibliographic notes. Problems. 4. Introduction to approximate dynamic programming. 4.1 The three curses of dimensionality (revisited). 4.2 The basic idea. 4.3 Sampling random variables. 4.4 ADP using the post-decision state variable. 4.5 Low-dimensional representations of value functions. 4.6 So just what is approximate dynamic programming? 4.7 Experimental issues. 4.8 Dynamic programming with missing or incomplete models. 4.9 Relationship to reinforcement learning. 4.10 But does it work? 4.11 Bibliographic notes. Problems. 5. Modeling dynamic programs. 5.1 Notational style. 5.2 Modeling time. 5.3 Modeling resources. 5.4 The states of our system. 5.5 Modeling decisions. 5.6 The exogenous information process. 5.7 The transition function. 5.8 The contribution function. 5.9 The objective function. 5.10 A measure-theoretic view of information. 5.11 Bibliographic notes. Problems. 6. Stochastic approximation methods. 6.1 A stochastic gradient algorithm. 6.2 Some stepsize recipes. 6.3 Stochastic stepsizes. 6.4 Computing bias and variance. 6.5 Optimal stepsizes. 6.6 Some experimental comparisons of stepsize formulas. 6.7 Convergence. 6.8 Why does it work? 6.9 Bibliographic notes. Problems. 7. Approximating value functions. 7.1 Approximation using aggregation. 7.2 Approximation methods using regression models. 7.3 Recursive methods for regression models. 7.4 Neural networks. 7.5 Batch processes. 7.6 Why does it work? 7.7 Bibliographic notes. Problems. 8. ADP for finite horizon problems. 8.1 Strategies for finite horizon problems. 8.2 Q-learning. 
8.3 Temporal difference learning. 8.4 Policy iteration. 8.5 Monte Carlo value and policy iteration. 8.6 The actor-critic paradigm. 8.7 Bias in value function estimation. 8.8 State sampling strategies. 8.9 Starting and stopping. 8.10 A taxonomy of approximate dynamic programming strategies. 8.11 Why does it work? 8.12 Bibliographic notes. Problems. 9. Infinite horizon problems. 9.1 From finite to infinite horizon. 9.2 Algorithmic strategies. 9.3 Stepsizes for infinite horizon problems. 9.4 Error measures. 9.5 Direct ADP for online applications. 9.6 Finite horizon models for steady state applications. 9.7 Why does it work? 9.8 Bibliographic notes. Problems. 10. Exploration vs. exploitation. 10.1 A learning exercise: the nomadic trucker. 10.2 Learning strategies. 10.3 A simple information acquisition problem. 10.4 Gittins indices and the information acquisition problem. 10.5 Variations. 10.6 The knowledge gradient algorithm. 10.7 Information acquisition in dynamic programming. 10.8 Bibliographic notes. Problems. 11. Value function approximations for special functions. 11.1 Value functions versus gradients. 11.2 Linear approximations. 11.3 Piecewise linear approximations. 11.4 The SHAPE algorithm. 11.5 Regression methods. 11.6 Cutting planes. 11.7 Why does it work? 11.8 Bibliographic notes. Problems. 12. Dynamic resource allocation. 12.1 An asset acquisition problem. 12.2 The blood management problem. 12.3 A portfolio optimization problem. 12.4 A general resource allocation problem. 12.5 A fleet management problem. 12.6 A driver management problem. 12.7 Bibliographic references. Problems. 13. Implementation challenges. 13.1 Will ADP work for your problem? 13.2 Designing an ADP algorithm for complex problems. 13.3 Debugging an ADP algorithm. 13.4 Convergence issues. 13.5 Modeling your problem. 13.6 Online vs. offline models. 13.7 If it works, patent it! <s> BIB013
|
Powell et al. BIB001 formulated a truckload PDP as a Markov Decision Process (MDP). Later, MDPs were used by Thomas and White BIB004 and Thomas BIB009 to solve a VRP in which known customers may ask for service with a known probability. Kim et al. BIB006 also used MDPs to tackle the VRP with dynamic travel times. Unfortunately, the curse of dimensionality and the simplifying assumptions make this approach unsuitable for most real-world applications. Nonetheless, it has yielded new insights into the field of dynamic programming. To cope with the scalability problems of traditional dynamic programming, Approximate Dynamic Programming (ADP) steps forward in time, approximates the value function, and ultimately avoids the evaluation of all possible states. We refer the interested reader to Powell BIB013 BIB011 for a more detailed description of the ADP framework. ADP has been successfully applied to freight transport BIB008 BIB003 and fleet management problems BIB002. In particular, Novoa and Storer BIB012 proposed an ADP algorithm to dynamically solve the VRPSD. Linear programming has also been adapted to the dynamic and stochastic context. The OPTUN approach, proposed by Yang et al. BIB005 as an extension of MYOPT (see § 4.1.1), considers opportunity costs on each arc to reflect the expected cost of traveling to isolated areas. Consequently, the optimization tends to reject isolated requests and avoids traversing arcs that are far away from potential requests. Later, Yang et al. BIB007 studied emergency vehicle dispatching and routing and proposed a mathematical formulation that was later used by Haghani and Yang BIB010 on a similar problem.
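To make the MDP machinery concrete, the sketch below runs finite-horizon value iteration (the backward Bellman recursion) on a toy dynamic routing model: an idle vehicle may pay a unit cost to wait for a request that materializes with some probability, or end its shift. The model, state names, probabilities, and rewards are all illustrative assumptions invented for this example, not taken from any of the cited papers.

```python
def value_iteration(states, actions, transition, reward, horizon):
    """Finite-horizon value iteration via the backward Bellman recursion.

    actions(s)       -> iterable of feasible actions in state s
    transition(s, a) -> list of (probability, next_state) pairs
    reward(s, a)     -> immediate reward of taking action a in state s
    Returns V[t][s] (value-to-go) and a greedy policy[t][s].
    """
    V = [{s: 0.0 for s in states} for _ in range(horizon + 1)]
    policy = [{} for _ in range(horizon)]
    for t in range(horizon - 1, -1, -1):  # step backward in time
        for s in states:
            best_a, best_v = None, float("-inf")
            for a in actions(s):
                v = reward(s, a) + sum(p * V[t + 1][s2]
                                       for p, s2 in transition(s, a))
                if v > best_v:
                    best_a, best_v = a, v
            V[t][s], policy[t][s] = best_v, best_a
    return V, policy

# Hypothetical toy model: while 'idle' the vehicle may 'wait' (cost 1) for a
# dynamic request appearing with probability 0.3, or 'quit' the shift;
# serving a materialized request yields a reward of 10.
STATES = ["idle", "request", "done"]
ACTIONS = {"idle": ["wait", "quit"], "request": ["serve"], "done": ["stay"]}
TRANS = {
    ("idle", "wait"): [(0.3, "request"), (0.7, "idle")],
    ("idle", "quit"): [(1.0, "done")],
    ("request", "serve"): [(1.0, "done")],
    ("done", "stay"): [(1.0, "done")],
}
REWARD = {("idle", "wait"): -1.0, ("idle", "quit"): 0.0,
          ("request", "serve"): 10.0, ("done", "stay"): 0.0}

V, policy = value_iteration(
    STATES, lambda s: ACTIONS[s],
    lambda s, a: TRANS[(s, a)], lambda s, a: REWARD[(s, a)], horizon=3)
```

With three periods remaining, waiting is worth 3.4 in expectation, so the optimal policy waits early in the horizon and quits near its end — the kind of structural property of optimal policies analyzed by Thomas BIB009. The point of the example is also the limitation: even this model enumerates every state at every period, which is exactly the curse of dimensionality that makes exact dynamic programming impractical for realistic routing states and motivates ADP.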
|
A review of dynamic vehicle routing problems <s> Sampling <s> We consider the problem of scheduling an unknown sequence of tasks for a single server as the tasks arrive, with the goal of maximizing the total weighted value of the tasks served before their deadline is reached. This problem is faced for example by schedulers in packet communication networks when packets have deadlines and rewards associated with them. We make the simplifying assumptions that every task takes the same fixed amount of time to serve, and that every task arrives with the same initial latency to its deadline. We also assume that future task arrivals are stochastically described by a Hidden Markov Model (HMM). The resulting decision problem can be formally modelled as a Partially Observable Markov Decision Process (POMDP). We first present and analyze a new optimal off-line scheduling algorithm called Prescient Minloss scheduling for the problem just described, but with "prescient" foreknowledge of the future task arrivals. We then discuss heuristically adapting this off-line algorithm into an on-line algorithm by sampling possible future task sequences from the HMM. We discuss and empirically compare scheduling methods for this on-line problem, including previously proposed sampling-based POMDP solution methods. Our heuristic approach can be used to adapt any off-line scheduler into an on-line scheduler. <s> BIB001 </s> A review of dynamic vehicle routing problems <s> Sampling <s> This paper considers online stochastic optimization problems where time constraints severely limit the number of offline optimizations which can be performed at decision time and/or in between decisions. It proposes a novel approach which combines the salient features of the earlier approaches: the evaluation of every decision on all samples (expectation) and the ability to avoid distributing the samples among decisions (consensus). 
The key idea underlying the novel algorithm is to approximate the regret of a decision d. The regret algorithm is evaluated on two fundamentally different applications: online packet scheduling in networks and online multiple vehicle routing with time windows. On both applications, it produces significant benefits over prior approaches. <s> BIB002 </s> A review of dynamic vehicle routing problems <s> Sampling <s> Online decision making under uncertainty and time constraints represents one of the most challenging problems for robust intelligent agents. In an increasingly dynamic, interconnected, and real-time world, intelligent systems must adapt dynamically to uncertainties, update existing plans to accommodate new requests and events, and produce high-quality decisions under severe time constraints. Such online decision-making applications are becoming increasingly common: ambulance dispatching and emergency city-evacuation routing, for example, are inherently online decision-making problems; other applications include packet scheduling for Internet communications and reservation systems. This book presents a novel framework, online stochastic optimization, to address this challenge. This framework assumes that the distribution of future requests, or an approximation thereof, is available for sampling, as is the case in many applications that make either historical data or predictive models available. It assumes additionally that the distribution of future requests is independent of current decisions, which is also the case in a variety of applications and holds significant computational advantages. The book presents several online stochastic algorithms implementing the framework, provides performance guarantees, and demonstrates a variety of applications. It discusses how to relax some of the assumptions in using historical sampling and machine learning and analyzes different underlying algorithmic problems. 
And finally, the book discusses the framework's possible limitations and suggests directions for future research. <s> BIB003 </s> A review of dynamic vehicle routing problems <s> Sampling <s> The statement of the standard vehicle routing problem cannot always capture all aspects of real-world applications. As a result, extensions or modifications to the model are warranted. Here we consider the case when customers can call in orders during the daily operations; i.e., both customer locations and demands may be unknown in advance. This is modeled as a combined dynamic and stochastic programming problem, and a heuristic solution method is developed where sample scenarios are generated, solved heuristically, and combined iteratively to form a solution to the overall problem. <s> BIB004 </s> A review of dynamic vehicle routing problems <s> Sampling <s> An important, but seldom investigated, issue in the field of dynamic vehicle routing and dispatching is how to exploit information about future events to improve decision making. In this paper, we address this issue in a real-time setting with a strategy based on probabilistic knowledge about future request arrivals to better manage the fleet of vehicles. More precisely, the new strategy introduces dummy customers (representing forecasted requests) in vehicle routes to provide a good coverage of the territory. This strategy is assessed through computational experiments performed in a simulated environment. <s> BIB005 </s> A review of dynamic vehicle routing problems <s> Sampling <s> The VRP is a key to efficient transportation logistics. It is a computationally very hard problem. Whereas classical OR models are static and deterministic, these assumptions are rarely warranted in an industrial setting. Lately, there has been an increased focus on dynamic and stochastic vehicle routing in the research community. However, very few generic routing tools based on stochastic or dynamic models are available. 
We illustrate the need for dynamics and stochastic models in industrial routing, describe the Dynamic and Stochastic VRP, and how we have extended a generic VRP solver to cope with dynamics and uncertainty. <s> BIB006 </s> A review of dynamic vehicle routing problems <s> Sampling <s> In this chapter we describe an innovative real-time fleet management system designed and implemented for eCourier Ltd (London, UK) for which patents are pending in the United States and elsewhere. This paper describes both the business challenges and benefits of the implementation of a real-time fleet management system (with reference to empirical metrics such as courier efficiency, service times, and financial data), as well as the theoretical and implementation challenges of constructing such a system. In short, the system dramatically reduces the requirements of human supervisors for fleet management, improves service and increases courier efficiency. We first illustrate the overall architecture, then depict the main algorithms, including the service territory zoning methodology, the travel time forecasting procedure and the job allocation heuristic. <s> BIB007 </s> A review of dynamic vehicle routing problems <s> Sampling <s> This paper describes anticipatory algorithms for the dynamic vehicle dispatching problem with pickups and deliveries, a problem faced by local area courier companies. These algorithms evaluate alternative solutions through a short-term demand sampling and a fully sequential procedure for indifference zone selection. They also exploit an unified and integrated approach in order to address all the issues involved in real-time fleet management, namely assigning requests to vehicles, routing the vehicles, scheduling the routes and relocating idle vehicles. Computational results show that the anticipatory algorithms provide consistently better solutions than their reactive counterparts. 
<s> BIB008 </s> A review of dynamic vehicle routing problems <s> Sampling <s> This paper considers a vehicle routing problem where each vehicle performs delivery operations over multiple routes during its workday and where new customer requests occur dynamically. The proposed methodology for addressing the problem is based on an adaptive large neighborhood search heuristic, previously developed for the static version of the problem. In the dynamic case, multiple possible scenarios for the occurrence of future requests are considered to decide about the opportunity to include a new request into the current solution. It is worth noting that the real-time decision is about the acceptance of the new request, not about its service which can only take place in some future routes (a delivery route being closed as soon as a vehicle departs from the depot). In the computational results, a comparison is provided with a myopic approach which does not consider scenarios of future requests. <s> BIB009
|
Sampling approaches rely on the generation of scenarios containing possible realizations of the random variables. Figure 3 illustrates how scenarios are generated for the D-VRP. Based solely on the current customers, the optimal tour would be (A, B, E, D, C) (3a.), which ignores two zones (gray areas) where customers are likely to appear. By sampling the customer spatial distributions, customers X, Y, and Z are generated, and the new optimal tour is (C, X, Y, B, A, Z, E, D) (3b.). Removing the sampled (potential) customers leads to the tour (C, B, A, E, D) (3c.), which is suboptimal under a myopic cost evaluation but leaves room to accommodate new customers at a lower cost. The Multiple Scenario Approach (MSA) is a predictive adaptation of the MPA framework discussed in § 4.1.2. The idea behind MSA is to take advantage of the time between decisions to continuously improve the current scenario pool. During initialization, the algorithm generates a first set of scenarios based on the requests known beforehand. Throughout the day, scenarios are then reoptimized and new ones are generated and added to the pool. When a decision is required, the scenario optimization procedure is suspended, and MSA uses the scenario pool to select the request to service next. MSA then discards the scenarios that are incompatible with the current routing, and resumes the optimization. Computational experiments on instances adapted from the Solomon benchmark showed that MSA outperforms MPA both in terms of serviced customers and traveled distances, especially for instances with high degrees of dynamism. Flatberg et al. BIB006 adapted the SPIDER commercial solver to use multiple scenarios and a consensus algorithm to tackle the D-VRP, while Pillac et al. implemented an event-driven optimization framework based on MSA and showed significant improvements over state-of-the-art algorithms for the D-VRPSD. 
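The scenario-generation step can be sketched in a few lines. The code below is a simplified, hypothetical stand-in for what an MSA-style pool builder does: each potential (forecast) customer, given as a (location, probability) pair, is realized independently in each scenario, and each scenario is then routed — here with a nearest-neighbour heuristic in place of a real reoptimization procedure. All names, coordinates, and probabilities are invented for the example.

```python
import math
import random

def sample_scenarios(known, potential, n_scenarios, rng):
    """Each scenario = the known requests plus an independent realization
    of the potential requests, each given as a (location, probability) pair."""
    scenarios = []
    for _ in range(n_scenarios):
        realized = [loc for loc, prob in potential if rng.random() < prob]
        scenarios.append(known + realized)
    return scenarios

def nearest_neighbour_tour(depot, customers):
    """Cheap stand-in for the per-scenario (re)optimization procedure."""
    tour, pos, remaining = [], depot, list(customers)
    while remaining:
        nxt = min(remaining, key=lambda c: math.dist(pos, c))
        tour.append(nxt)
        remaining.remove(nxt)
        pos = nxt
    return tour

rng = random.Random(42)
known = [(1.0, 0.0), (0.0, 1.0)]                     # confirmed requests
potential = [((5.0, 5.0), 0.5), ((2.0, 2.0), 0.2)]   # forecast requests
pool = [nearest_neighbour_tour((0.0, 0.0), sc)
        for sc in sample_scenarios(known, potential, 20, rng)]
```

Every tour in the pool visits all confirmed requests, but tours differ in where they detour for sampled ones — which is exactly the information a consensus, expectation, or regret rule aggregates when the next decision is due.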
An important component of scenario-based approaches such as MSA is the decision process, which defines how the information from the scenario pool is used to reach a decision regarding the next customer to visit. The most common algorithms used to reach a decision in MSA are: consensus, expectation, and regret. The consensus algorithm selects the customer appearing first with the highest frequency among scenarios. Expectation BIB002 BIB001 consists of evaluating the cost of visiting each customer first by forcing its visit in all scenarios and performing a complete optimization. Finally, regret BIB002 approximates the expectation algorithm and avoids the reoptimization of all scenarios. Even though these algorithms were initially designed for the routing of a single vehicle, they can be extended to the multi-vehicle case BIB003. Hvattum et al. BIB004 developed the Dynamic Sample Scenario Hedge Heuristic (DSHH), an approach similar to the consensus algorithm, for the D-VRP. This method divides the planning horizon into time intervals. At the beginning of each interval, DSHH revises the routing by assigning a subset of promising requests to the vehicles, depending on the frequency of their assignment over all scenarios. DSHH later led to the development of the Branch and Regret Heuristic (BRH), where scenarios are merged to build a single solution. Various local search approaches have been developed for stochastic and dynamic problems. Ghiani et al. BIB008 developed an algorithm for the D-PDP that only samples the near future to reduce the computational effort. The main difference from MSA is that no scenario pool is used and the selection of the distinguished solution is based on the expected penalty of accommodating requests in the near future. Azi et al. BIB009 developed an Adaptive Large Neighborhood Search (ALNS) for a dynamic routing problem with multiple delivery routes, in which the dynamic decision is the acceptance of a new request. 
The approach maintains a pool of scenarios, optimized by an ALNS, that are used to evaluate the opportunity value of an incoming request. Tabu search has also been adapted to dynamic and stochastic problems: Ichoua et al. BIB005 and Attanasio et al. BIB007 applied it to the D-VRPTW and the D-PDP, respectively.
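The consensus rule described above — select the customer that appears first with the highest frequency among scenarios — admits a very compact sketch. The implementation below is a hedged illustration, not the cited authors' code: each scenario tour casts one vote for the first confirmed request it visits (sampled requests are skipped, since they cannot actually be serviced), and the most-voted request becomes the next destination.

```python
from collections import Counter

def consensus(scenario_tours, confirmed):
    """Pick the confirmed request most often visited first across scenarios."""
    votes = Counter()
    for tour in scenario_tours:
        # First confirmed request in this tour, ignoring sampled customers.
        first = next((c for c in tour if c in confirmed), None)
        if first is not None:
            votes[first] += 1
    return votes.most_common(1)[0][0] if votes else None
```

For example, with scenario tours `['A', 'X', 'B']`, `['A', 'B']`, and `['B', 'A', 'Y']` (where only `A` and `B` are confirmed), `A` is visited first in two of the three scenarios and is selected. Expectation and regret would instead re-cost each candidate across the pool, trading this cheap vote for a better estimate of downstream cost.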
|
A review of dynamic vehicle routing problems <s> Other strategies <s> This paper considers the redeployment problem for a fleet of ambulances. This problem is encountered in the real-time management of emergency medical services. A dynamic model is proposed and a dynamic ambulance management system is described. This system includes a parallel tabu search heuristic to precompute redeployment scenarios. Simulations based on real data confirm the efficiency of the proposed approach. <s> BIB001 </s> A review of dynamic vehicle routing problems <s> Other strategies <s> In a companion paper (Godfrey and Powell 2002) we introduced an adaptive dynamic programming algorithm for stochastic dynamic resource allocation problems, which arise in the context of logistics and distribution, fleet management, and other allocation problems. The method depends on estimating separable nonlinear approximations of value functions, using a dynamic programming framework. That paper considered only the case in which the time to complete an action was always a single time period. Experiments with this technique quickly showed that when the basic algorithm was applied to problems with multiperiod travel times, the results were very poor. In this paper, we illustrate why this behavior arose, and propose a modified algorithm that addresses the issue. Experimental work demonstrates that the modified algorithm works on problems with multiperiod travel times, with results that are almost as good as the original algorithm applied to single period travel times. <s> BIB002 </s> A review of dynamic vehicle routing problems <s> Other strategies <s> The dynamic Pickup and Delivery Problem with Time Windows (PDPTW) is faced by courier companies serving same-day pickup and delivery requests for the transport of letters and small parcels. This article focuses on the dynamic PDPTW for which future requests are not stochastically modelled or predicted. 
The standard solution methodology for the dynamic PDPTW is the use of a rolling time horizon as proposed by Psaraftis. When assigning a new request to a vehicle it may be preferable to consider the impact of a decision both on a short-term and on a long-term horizon. In particular, better managing slack time in the distant future may help reduce routing cost. This paper describes double-horizon based heuristics for the dynamic PDPTW. Computational results show the advantage of using a double-horizon in conjunction with insertion and improvement heuristics. <s> BIB003 </s> A review of dynamic vehicle routing problems <s> Other strategies <s> Many real-world vehicle routing problems are dynamic optimization problems, with customer requests arriving over time, requiring a repeated reoptimization. In this paper, we consider a dynamic vehicle routing problem where one additional customer arrives at a beforehand unknown location when the vehicles are already under way. Our objective is to maximize the probability that the additional customer can be integrated into one of the otherwise fixed tours without violating time constraints. This is achieved by letting the vehicles wait at suitable locations during their tours, thus influencing the position of the vehicles at the time when the new customer arrives. For the cases of one and two vehicles, we derive theoretical results about the best waiting strategies. The general problem is shown to be NP-complete. Several deterministic waiting strategies and an evolutionary algorithm to optimize the waiting strategy are proposed and compared empirically. It is demonstrated that a proper waiting strategy can significantly increase the probability of being able to service the additional customer, at the same time reducing the average detour to serve that customer. 
<s> BIB004 </s> A review of dynamic vehicle routing problems <s> Other strategies <s> An important, but seldom investigated, issue in the field of dynamic vehicle routing and dispatching is how to exploit information about future events to improve decision making. In this paper, we address this issue in a real-time setting with a strategy based on probabilistic knowledge about future request arrivals to better manage the fleet of vehicles. More precisely, the new strategy introduces dummy customers (representing forecasted requests) in vehicle routes to provide a good coverage of the territory. This strategy is assessed through computational experiments performed in a simulated environment. <s> BIB005 </s> A review of dynamic vehicle routing problems <s> Other strategies <s> Online decision making under uncertainty and time constraints represents one of the most challenging problems for robust intelligent agents. In an increasingly dynamic, interconnected, and real-time world, intelligent systems must adapt dynamically to uncertainties, update existing plans to accommodate new requests and events, and produce high-quality decisions under severe time constraints. Such online decision-making applications are becoming increasingly common: ambulance dispatching and emergency city-evacuation routing, for example, are inherently online decision-making problems; other applications include packet scheduling for Internet communications and reservation systems. This book presents a novel framework, online stochastic optimization, to address this challenge. This framework assumes that the distribution of future requests, or an approximation thereof, is available for sampling, as is the case in many applications that make either historical data or predictive models available. It assumes additionally that the distribution of future requests is independent of current decisions, which is also the case in a variety of applications and holds significant computational advantages. 
The book presents several online stochastic algorithms implementing the framework, provides performance guarantees, and demonstrates a variety of applications. It discusses how to relax some of the assumptions in using historical sampling and machine learning and analyzes different underlying algorithmic problems. And finally, the book discusses the framework's possible limitations and suggests directions for future research. <s> BIB006 </s> A review of dynamic vehicle routing problems <s> Other strategies <s> This paper considers a dynamic and stochastic routing problem in which information about customer locations and probabilistic information about future service requests are used to maximize the expected number of customers served by a single uncapacitated vehicle. The problem is modeled as a Markov decision process, and analytical results on the structure of the optimal policy are derived. For the case of a single dynamic customer, we completely characterize the optimal policy. Using the analytical results, we propose a real-time heuristic and demonstrate its effectiveness compared with a series of other intuitively appealing heuristics. We also use computational tests to determine the heuristic value of knowing both customer locations and probabilistic information about future service requests. <s> BIB007 </s> A review of dynamic vehicle routing problems <s> Other strategies <s> This paper considers online stochastic multiple vehicle routing with time windows in which requests arrive dynamically and the goal is to maximize the number of serviced customers. Contrary to earlier algorithms which only move vehicles to known customers, this paper investigates waiting and relocation strategies in which vehicles may wait at their current location or relocate to arbitrary sites. Experimental results show that waiting and relocation strategies may dramatically improve customer service, especially for problems that are highly dynamic and contain many late requests. 
The decisions to wait and to relocate do not exploit any problem-specific features but rather are obtained by including choices in the online algorithm that are necessarily sub-optimal in an offline setting. <s> BIB008 </s> A review of dynamic vehicle routing problems <s> Other strategies <s> Dynamic response to emergencies requires real time information from transportation agencies, public safety agencies and hospitals as well as the many essential operational components. In emergency response operations, good vehicle dispatching strategies can result in more efficient service by reducing vehicles’ travel times and system preparation time and the coordination between these components directly influences the effectiveness of activities involved in emergency response. In this chapter, an integrated emergency response fleet deployment system is proposed which embeds an optimization approach to assist the dispatch center operators in assigning emergency vehicles to emergency calls, while having the capability to look ahead for future demands. The mathematical model deals with the real time vehicle dispatching problem while accounting for the service requirements and coverage concerns for future demand by relocating and diverting the on-route vehicles and remaining vehicles among stations. A rolling-horizon approach is adopted in the model to reduce the relocation sites in order to save computation time. A simulation program is developed to validate the model and to compare various dispatching strategies <s> BIB009 </s> A review of dynamic vehicle routing problems <s> Other strategies <s> We investigate the impact of two strategies for dynamic pickup and delivery problems on the quality of solutions produced by insertion heuristics: (a) a waiting strategy that delays the final assignment of vehicles to their next destination, and (b) a request buffering strategy that postpones the assignment of some non-urgent new requests to the next route planning. 
In this study, the strategies are tested in a constructive-deconstructive heuristic for the dynamic pickup and delivery problem with hard time windows and random travel times. Comparisons of the solution quality provided by these strategies to a more conventional approach were performed on randomly generated instances up to 100 requests with static and dynamic (time-dependent) travel times and different degrees of dynamism. The results indicate the advantages of the strategies both in terms of lost requests and number of vehicles. <s> BIB010 </s> A review of dynamic vehicle routing problems <s> Other strategies <s> The advance of communication and information technologies based on satellite and wireless networks have allowed transportation companies to benefit from real-time information for dynamic vehicle routing with time windows. During daily operations, we consider the case in which customers can place requests such that their demand and location are stochastic variables. The time windows at customer locations can be violated although lateness costs are incurred. The objective is to define a set of vehicle routes which are dynamically updated to accommodate new customers in order to maximize the expected profit. This is the difference between the total revenue and the sum of lateness costs and costs associated with the total distance traveled. The solution approach makes use of a new constructive heuristic that scatters vehicles in the service area and an adaptive granular local search procedure. The strategies of letting a vehicle wait, positioning a vehicle in a region where customers are likely to appear, and diverting a vehicle away from its current destination are integrated within a granular local search heuristic. The performance of the proposed approach is assessed in test problems based on real-life Brazilian transportation companies. 
<s> BIB011 </s> A review of dynamic vehicle routing problems <s> Other strategies <s> This paper describes anticipatory algorithms for the dynamic vehicle dispatching problem with pickups and deliveries, a problem faced by local area courier companies. These algorithms evaluate alternative solutions through a short-term demand sampling and a fully sequential procedure for indifference zone selection. They also exploit an unified and integrated approach in order to address all the issues involved in real-time fleet management, namely assigning requests to vehicles, routing the vehicles, scheduling the routes and relocating idle vehicles. Computational results show that the anticipatory algorithms provide consistently better solutions than their reactive counterparts. <s> BIB012
|
In addition to the general frameworks described previously, the use of stochastic knowledge allows for the design and implementation of other strategies that try to adequately respond to upcoming events. The waiting strategy consists in deciding whether a vehicle should wait after servicing a request before heading toward the next customer, or in planning a waiting period at a strategic location. This strategy is particularly important in problems with time windows, where time lags appear between requests. Mitrović-Minić et al. BIB003 proved that in all cases it is better to wait after servicing a customer, but a more refined strategy can lead to further improvements. The problem is in general to evaluate the likelihood of a new request in the neighborhood of a serviced request and to plan a waiting period accordingly. The waiting strategy has been implemented in various frameworks for the D-VRP BIB004 BIB007, D-VRPTW BIB008 BIB011 BIB005 BIB006, D-PDP BIB012 BIB003, and Dynamic and Stochastic TSP. The strategy has shown good results, especially in the case of a limited fleet facing a high request rate BIB006. Aside from waiting before or after servicing a customer, a vehicle can be relocated to a strategic position, where new requests are likely to arrive. This strategy is the keystone of emergency fleet deployment, also known as the Emergency Vehicle Dispatching (or Redeployment) Problem BIB001 BIB009. The relocation strategy has also been applied to other vehicle routing problems, such as the D-VRP, D-VRPTW BIB008 BIB011 BIB005 BIB006, D-TSPTW, D-PDP BIB012 BIB010, and the Resource Allocation Problem (RAP) BIB002. Request buffering, introduced by Pureza and Laporte BIB010, consists in delaying the assignment of some requests to vehicles by keeping them in a priority buffer, so that more urgent requests can be handled first.
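As a minimal illustration of the refined waiting idea, planning waiting periods where new requests are likely to appear can be sketched as a rule that distributes a route's available slack time among its planned stops in proportion to the local request-arrival rates. This is only a sketch of the general principle; the function name and its inputs are illustrative, not taken from any of the cited papers.

```python
def plan_waiting(slack, arrival_rates):
    """Split a vehicle's available slack time among its planned stops,
    waiting longer where new requests are more likely to appear.

    slack         -- total idle time the route can absorb (e.g. minutes)
    arrival_rates -- expected request-arrival rate near each planned stop
    """
    total = sum(arrival_rates)
    if total == 0:
        # No stochastic information: wait equally at every stop.
        return [slack / len(arrival_rates)] * len(arrival_rates)
    # Proportional allocation: busier zones receive longer waits.
    return [slack * rate / total for rate in arrival_rates]

# A route with 60 minutes of slack and two stops, the second in a busier
# zone, waits 15 minutes at the first stop and 45 at the second:
# plan_waiting(60.0, [1.0, 3.0]) -> [15.0, 45.0]
```

The same proportional rule could drive a relocation strategy by sending an idle vehicle toward the stop with the largest allocation.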
|
A review of dynamic vehicle routing problems <s> Performance evaluation <s> In this article we study the amortized efficiency of the “move-to-front” and similar rules for dynamically maintaining a linear list. Under the assumption that accessing the ith element from the front of the list takes t(i) time, we show that move-to-front is within a constant factor of optimum among a wide class of list maintenance rules. Other natural heuristics, such as the transpose and frequency count rules, do not share this property. We generalize our results to show that move-to-front is within a constant factor of optimum as long as the access cost is a convex function. We also study paging, a setting in which the access cost is not convex. The paging rule corresponding to move-to-front is the “least recently used” (LRU) replacement rule. We analyze the amortized complexity of LRU, showing that its efficiency differs from that of the off-line paging rule (Belady's MIN algorithm) by a factor that depends on the size of fast memory. No on-line paging algorithm has better amortized performance. <s> BIB001 </s> A review of dynamic vehicle routing problems <s> Performance evaluation <s> Preface 1. Introduction to competitive analysis: the list accessing problem 2. Introduction to randomized algorithms: the list accessing problem 3. Paging: deterministic algorithms 4. Paging: randomized algorithms 5. Alternative models for paging: beyond pure competitive analysis 6. Game theoretic foundations 7. Request - answer games 8. Competitive analysis and zero-sum games 9. Metrical task systems 10. The k-server problem 11. Randomized k-server algorithms 12. Load-balancing 13. Call admission and circuit-routing 14. Search, trading and portfolio selection 15. Competitive analysis and decision making under uncertainty Appendices Bibliography Index. 
<s> BIB002 </s> A review of dynamic vehicle routing problems <s> Performance evaluation <s> An abundant literature about vehicle routing and scheduling problems is available in the scientific community. However, a large fraction of this work deals with static problems where all data are known before the routes are constructed. Recent technological advances now create environments where decisions are taken quickly, using new or updated information about the current routing situation. This paper describes such a dynamic problem, motivated from courier service applications, where customer requests with soft time windows must be dispatched in real time to a fleet of vehicles in movement. A tabu search heuristic, initially designed for the static version of the problem, has been adapted to the dynamic case and implemented on a parallel platform to increase the computational effort. Numerical results are reported using different request arrival rates, and comparisons are established with other heuristic methods. <s> BIB003 </s> A review of dynamic vehicle routing problems <s> Performance evaluation <s> The dynamic Pickup and Delivery Problem with Time Windows (PDPTW) is faced by courier companies serving same-day pickup and delivery requests for the transport of letters and small parcels. This article focuses on the dynamic PDPTW for which future requests are not stochastically modelled or predicted. The standard solution methodology for the dynamic PDPTW is the use of a rolling time horizon as proposed by Psaraftis. When assigning a new request to a vehicle it may be preferable to consider the impact of a decision both on a short-term and on a long-term horizon. In particular, better managing slack time in the distant future may help reduce routing cost. This paper describes double-horizon based heuristics for the dynamic PDPTW. Computational results show the advantage of using a double-horizon in conjunction with insertion and improvement heuristics. 
<s> BIB004 </s> A review of dynamic vehicle routing problems <s> Performance evaluation <s> We consider online routing optimization problems where the objective is to minimize the time needed to visit a set of locations under various constraints; the problems are online because the set of locations are revealed incrementally over time. We consider two main problems: (1) the online traveling salesman problem (TSP) with precedence and capacity constraints, and (2) the online TSP with m salesmen. For both problems we propose online algorithms, each with a competitive ratio of 2; for the m-salesmen problem, we show that our result is best-possible. We also consider polynomial-time online algorithms. ::: ::: We then consider resource augmentation, where we give the online servers additional resources to offset the powerful offline adversary advantage. In this way, we address a main criticism of competitive analysis. We consider the cases where the online algorithm has access to faster servers, servers with larger capacities, additional servers, and/or advanced information. We derive improved competitive ratios. We also give lower bounds on the competitive ratios under resource augmentation, which in many cases are tight and lead to best-possible results. ::: ::: Finally, we study online algorithms from an asymptotic point of view. We show that, under general stochastic structures for the problem data, unknown and unused by the online player, the online algorithms are almost surely asymptotically optimal. Furthermore, we provide computational results that show that the convergence can be very fast. <s> BIB005 </s> A review of dynamic vehicle routing problems <s> Performance evaluation <s> This chapter discusses important characteristics seen within dynamic vehicle routing problems. We discuss the differences between the traditional static vehicle routing problems and its dynamic counterparts. 
We give an in-depth introduction to the degree of dynamism measure which can be used to classify dynamic vehicle routing systems. Methods for evaluation of the performance of algorithms that solve on-line routing problems are discussed and we list some of the most important issues to include in the system objective. Finally, we provide a three-echelon classification of dynamic vehicle routing systems based on their degree of dynamism and the system objective <s> BIB006 </s> A review of dynamic vehicle routing problems <s> Performance evaluation <s> In a k-server routing problem k>=1 servers move in a metric space in order to visit specified points or carry objects from sources to destinations. In the online version requests arrive online while the servers are traveling. Two classical objective functions are to minimize the makespan, i.e., the time when the last server has completed its tour (k-Traveling Salesman Problem or k-tsp) and to minimize the sum of completion times (k-Traveling Repairman Problem or k-trp). Both problems, the k-tsp and the k-trp have been studied from a competitive analysis point of view, where the cost of an online algorithm is compared to that of an optimal offline algorithm. However, the gap between the obtained competitive ratios and the corresponding lower bounds have mostly been quite large for k>1, in particular for randomized algorithms against an oblivious adversary. We reduce a number of gaps by providing new lower bounds for randomized algorithms. The most dramatic improvement is in the lower bound for the k-Dial-a-Ride-Problem (the k-trp when objects need to be carried) from 4e-52e-3~2.4104 to 3 which is currently also the best lower bound for deterministic algorithms. <s> BIB007 </s> A review of dynamic vehicle routing problems <s> Performance evaluation <s> This paper describes a dynamic capacitated arc routing problem motivated from winter gritting applications. 
In this problem, the service cost on each arc is a piecewise linear function of the time of beginning of service. This function also exhibits an optimal time interval where the service cost is minimal. Since the timing of an intervention is crucial, the dynamic aspect considered in this work stems from changes to these optimal service time intervals due to weather report updates. A variable neighborhood descent heuristic, initially developed for the static version of the problem, where all service cost functions are known in advance and do not change thereafter, is adapted to this dynamic variant. <s> BIB008
|
In contrast to static problems, where measuring the performance of an algorithm is straightforward (i.e., running time and solution quality), dynamic problems require the introduction of new metrics to assess the performance of a particular method. Sleator and Tarjan BIB001 introduced competitive analysis BIB005 BIB006. Let P be a minimization problem and I the set of all instances of P. Let z*(I_off) be the optimal cost for the offline instance I_off corresponding to I ∈ I. For the offline instance I_off, all input data from instance I, either static or dynamic, is available when building the solution. In contrast, the data of the online version I is revealed in real time, thus an algorithm A has to take into account new information as it is revealed and produce a solution relevant to the current state of knowledge. Let z_A(I) = z(x_A(I)) be the cost of the final solution x_A(I) found by the online algorithm A on instance I. Algorithm A is said to be c-competitive, or equivalently to have a competitive ratio of c, if there exists a constant α such that z_A(I) ≤ c · z*(I_off) + α for all instances I ∈ I. In the case where α = 0, the algorithm is said to be strictly c-competitive, meaning that in all cases the objective value of the solution found by A will be at most c times the optimal value. The competitive ratio metric provides a worst-case measure of an algorithm's performance in terms of the objective value. We refer the reader to Borodin and El-Yaniv BIB002 for an in-depth analysis of this measure, and to Jaillet and Wagner BIB005 and Fink et al. BIB007 for results on various routing problems. The main drawback of competitive analysis is that it requires proving the previously stated inequality analytically, which may be complex for real-world applications. The value of information proposed by Mitrović-Minić et al. BIB004 constitutes a more flexible and practical metric. We denote by z_A(I_off) the value of the objective function returned by algorithm A for the offline instance I_off.
The value of information V_A(I) for algorithm A on instance I is then defined as V_A(I) = (z_A(I) − z_A(I_off)) / z_A(I_off). The value of information can be interpreted as the gap between the solution returned by an algorithm A on an instance I and the solution returned by the same algorithm when all information from I is known beforehand. In contrast with the competitive ratio, the value of information gives information on the performance of an algorithm based on empirical results, without requiring optimal solutions for the offline instances. It captures the impact of dynamism on the solution yielded by the algorithm under analysis. For instance, Gendreau et al. BIB003 report a value of information between 2.5% and 4.1% for their tabu search algorithm for the D-VRPTW, while Tagmouti et al. BIB008 report values between 10% and 26.7% for a variable neighborhood search descent applied to a dynamic arc routing problem.
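Both metrics reduce to simple arithmetic once the relevant objective values are available. The following is a minimal sketch for a minimization objective; the function names are illustrative.

```python
def is_c_competitive(z_online, z_offline_opt, c, alpha=0.0):
    """Check the inequality z_A(I) <= c * z*(I_off) + alpha on a single
    instance; proving c-competitiveness requires it for all instances."""
    return z_online <= c * z_offline_opt + alpha

def value_of_information(z_online, z_offline):
    """Relative gap between the value obtained online and the value the
    same algorithm obtains when all information is known beforehand."""
    return (z_online - z_offline) / z_offline

# An online cost of 1041 against an offline cost of 1000 for the same
# algorithm gives a value of information of 4.1%:
# value_of_information(1041.0, 1000.0) -> 0.041
```

Note the asymmetry between the two metrics: the first compares an online algorithm against the offline optimum, while the second compares an algorithm against itself under full information, which is why no optimal offline solution is needed.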
|
A review of dynamic vehicle routing problems <s> Benchmarks <s> An abundant literature about vehicle routing and scheduling problems is available in the scientific community. However, a large fraction of this work deals with static problems where all data are known before the routes are constructed. Recent technological advances now create environments where decisions are taken quickly, using new or updated information about the current routing situation. This paper describes such a dynamic problem, motivated from courier service applications, where customer requests with soft time windows must be dispatched in real time to a fleet of vehicles in movement. A tabu search heuristic, initially designed for the static version of the problem, has been adapted to the dynamic case and implemented on a parallel platform to increase the computational effort. Numerical results are reported using different request arrival rates, and comparisons are established with other heuristic methods. <s> BIB001 </s> A review of dynamic vehicle routing problems <s> Benchmarks <s> This paper considers online stochastic optimization problems where time constraints severely limit the number of offline optimizations which can be performed at decision time and/or in between decisions. It proposes a novel approach which combines the salient features of the earlier approaches: the evaluation of every decision on all samples (expectatio0n) and the ability to avoid distributing the samples among decisions (consensus). The key idea underlying the novel algorithm is to approximate the regret of a decision d. The regret algorithm is evaluated on two fundamentally different applications: online packet scheduling in networks and online multiple vehicle routing with time windows. On both applications, it produces significant benefits over prior approaches. 
<s> BIB002 </s> A review of dynamic vehicle routing problems <s> Benchmarks <s> In this article, the real-time time-dependent vehicle routing problem with time windows is formulated as a series of mixed integer programming models that account for real-time and time-dependent travel times, as well as for real-time demands in a unified framework. In addition to vehicles routes, departure times are treated as decision variables, with delayed departure permitted at each node serviced. A heuristic comprising route construction and route improvement is proposed within which critical nodes are defined to delineate the scope of the remaining problem along the time rolling horizon and an efficient technique for choosing optimal departure times is developed. Fifty-six numerical problems and a real application are provided for demonstration. <s> BIB003 </s> A review of dynamic vehicle routing problems <s> Benchmarks <s> We consider a dynamic vehicle routing problem with hard time windows, in which a set of customer orders arrives randomly over time to be picked up within their time windows. The dispatcher does not have any deterministic or probabilistic information on the location and size of a customer order until it arrives. The objective is to minimize the sum of the total distance of the routes used to cover all the orders. We propose a column-generation-based dynamic approach for the problem. The approach generates single-vehicle trips (i.e., columns) over time in a real-time fashion by utilizing existing columns, and solves at each decision epoch a set-partitioning-type formulation of the static problem consisting of the columns generated up to this time point. We evaluate the performance of our approach by comparing it to an insertion-based heuristic and an approach similar to ours, but without computational time limit for handling the static problem at each decision epoch. 
Computational results on various test problems generalized from a set of static benchmark problems in the literature show that our approach outperforms the insertion-based heuristic on most test problems. <s> BIB004 </s> A review of dynamic vehicle routing problems <s> Benchmarks <s> Online decision making under uncertainty and time constraints represents one of the most challenging problems for robust intelligent agents. In an increasingly dynamic, interconnected, and real-time world, intelligent systems must adapt dynamically to uncertainties, update existing plans to accommodate new requests and events, and produce high-quality decisions under severe time constraints. Such online decision-making applications are becoming increasingly common: ambulance dispatching and emergency city-evacuation routing, for example, are inherently online decision-making problems; other applications include packet scheduling for Internet communications and reservation systems. This book presents a novel framework, online stochastic optimization, to address this challenge. This framework assumes that the distribution of future requests, or an approximation thereof, is available for sampling, as is the case in many applications that make either historical data or predictive models available. It assumes additionally that the distribution of future requests is independent of current decisions, which is also the case in a variety of applications and holds significant computational advantages. The book presents several online stochastic algorithms implementing the framework, provides performance guarantees, and demonstrates a variety of applications. It discusses how to relax some of the assumptions in using historical sampling and machine learning and analyzes different underlying algorithmic problems. And finally, the book discusses the framework's possible limitations and suggests directions for future research. <s> BIB005
|
To date, there is no reference benchmark for dynamic routing problems. It is worth noting, however, that various authors based their computational experiments on adaptations of the Solomon instances for static routing BIB002 BIB003 BIB004 BIB001. Van Hentenryck and Bent BIB005 (Chap. 10) describe how the original benchmark by Solomon can be adapted to dynamic problems. The interested reader is referred to the website of Pankratz and Krypczyk for an updated list of publicly available instance sets for dynamic vehicle routing problems.
|
A review of dynamic vehicle routing problems <s> Conclusions <s> An abundant literature about vehicle routing and scheduling problems is available in the scientific community. However, a large fraction of this work deals with static problems where all data are known before the routes are constructed. Recent technological advances now create environments where decisions are taken quickly, using new or updated information about the current routing situation. This paper describes such a dynamic problem, motivated from courier service applications, where customer requests with soft time windows must be dispatched in real time to a fleet of vehicles in movement. A tabu search heuristic, initially designed for the static version of the problem, has been adapted to the dynamic case and implemented on a parallel platform to increase the computational effort. Numerical results are reported using different request arrival rates, and comparisons are established with other heuristic methods. <s> BIB001 </s> A review of dynamic vehicle routing problems <s> Conclusions <s> This paper presents a methodology for classifying the literature of the Vehicle Routing Problem (VRP). VRP as a field of study and practice is defined quite broadly. It is considered to encompass all of the managerial, physical, geographical, and informational considerations as well as the theoretic disciplines impacting this ever emerging-field. Over its lifespan the VRP literature has become quite disjointed and disparate. Keeping track of its development has become difficult because its subject matter transcends several academic disciplines and professions that range from algorithm design to traffic management. Consequently, this paper defines VRP's domain in its entirety, accomplishes an all-encompassing taxonomy for the VRP literature, and delineates all of VRP's facets in a parsimonious and discriminating manner. 
Sample articles chosen for their disparity are classified to illustrate the descriptive power and parsimony of the taxonomy. Moreover, all previously published VRP taxonomies are shown to be relatively myopic; that is, they are subsumed by what is herein presented. Because the VRP literature encompasses esoteric and highly theoretical articles at one extremum and descriptions of actual applications at the other, the article sampling includes the entire range of the VRP literature. <s> BIB002
|
Recent technological advances provide companies with the right tools to manage their fleet in real time. Nonetheless, these new technologies also introduce more complexity into fleet management tasks, unveiling the need for decision support systems adapted to dynamic contexts. Consequently, during the last decade, the research community has shown a growing interest in the underlying optimization problems, leading to a new family of approaches specifically designed to efficiently address dynamism and uncertainty. By analyzing the current state of the art, some directions can be drawn for future research in this relatively new field. First, further work should aim at creating a taxonomy of dynamic vehicle routing problems, possibly by extending existing research on static routing BIB002. This would allow a more precise classification of approaches, support the evaluation of similarities between problems, and foster the development of generic frameworks. Second, there is currently no reference benchmark for dynamic vehicle routing problems. Therefore, there is a strong need for the development of publicly available benchmarks for the most common dynamic vehicle routing problems. Third, with the advent of multi-core processors on desktop computers and low-cost graphical processing units (GPUs), parallel computing is now readily available for time-consuming methods such as those based on sampling. Although early studies considered distributed optimization BIB001, most approaches reviewed in this document do not take advantage of parallel architectures. The development of parallel algorithms is a challenge that could reduce the time needed for optimization and provide decision makers with highly reactive tools. Fourth, our review of the existing literature revealed that a large fraction of the work done in the area of dynamic routing does not consider stochastic aspects.
We are convinced that developing algorithms that make use of stochastic information will improve fleet performance and reduce operating costs. Thus, this line of research should become a priority in the near future. Finally, researchers have mainly focused on the routing aspect of dynamic fleet management. However, in some applications there is more that can be done to improve performance and service level. For instance, in equipment maintenance services, the call center has a certain degree of freedom in fixing service appointments. In other words, the customer time windows can be defined, or influenced, by the call center operator. As a consequence, a system which, aside from giving a yes/no answer to a customer request, suggests appointment times that are convenient for the company would be highly desirable in such contexts.
|
The Goldilocks Approach: A Review of Employing Design of Experiments in Prokaryotic Recombinant Protein Production <s> Introduction <s> Pure, soluble and functional proteins are of high demand in modern biotechnology. Natural protein sources rarely meet the requirements for quantity, ease of isolation or price and hence recombinant technology is often the method of choice. Recombinant cell factories are constantly employed for the production of protein preparations bound for downstream purification and processing. Escherichia coli is a frequently used host, since it facilitates protein expression by its relative simplicity, its inexpensive and fast high density cultivation, the well known genetics and the large number of compatible molecular tools available. In spite of all these qualities, expression of recombinant proteins with E. coli as the host often results in insoluble and/or nonfunctional proteins. Here we review new approaches to overcome these obstacles by strategies that focus on either controlled expression of target protein in an unmodified form or by applying modifications using expressivity and solubility tags. <s> BIB001 </s> The Goldilocks Approach: A Review of Employing Design of Experiments in Prokaryotic Recombinant Protein Production <s> Introduction <s> Escherichia coli has been the most widely used host for the production of recombinant proteins because it is the best characterized system in every aspect. Furthermore, the high cell density culture of recombinant E. coli has allowed production of various proteins with high yield and high productivities. Various cultivation strategies employing different host strains and expression systems have been successfully employed for the production of recombinant proteins. New strategies for strain improvement towards the goal of enhanced protein production are actively being developed based on high-throughput omics approaches such as transcriptomics and proteomics.
This paper reviews recent advances in the production of recombinant proteins by high cell density culture of E. coli. <s> BIB002 </s> The Goldilocks Approach: A Review of Employing Design of Experiments in Prokaryotic Recombinant Protein Production <s> Introduction <s> Sequential methodology based on the application of three types of experimental designs was used to optimize the fermentation conditions for elastase production from mutant strain ZJUEL31410 of Bacillus licheniformis in shaking flask cultures. The optimal cultivation conditions stimulating the maximal elastase production consist of 220 r/min shaking speed, 25 h fermentation time, 5% (v/v) inoculums volume, 25 ml medium volume in 250 ml Erlenmeyer flask and 18 h seed age. Under the optimized conditions, the predicted maximal elastase activity was 495 U/ml. The application of response surface methodology resulted in a significant enhancement in elastase production. The effects of other factors such as elastin and the growth factor (corn steep flour) on elastase production and cell growth were also investigated in the current study. The elastin had no significant effect on enzyme-improved production. It is still not clear whether the elastin plays a role as a nitrogen source or not. Corn steep flour was verified to be the best and required factor for elastase production and cell growth by Bacillus licheniformis ZJUEL31410. <s> BIB003 </s> The Goldilocks Approach: A Review of Employing Design of Experiments in Prokaryotic Recombinant Protein Production <s> Introduction <s> With the transition from manual to robotic HTS in the last several years, assay optimization has become a significant bottleneck. Recent advances in robotic liquid handling have made it feasible to reduce assay optimization timelines with the application of statistically designed experiments. When implemented, they can efficiently optimize assays by rapidly identifying significant factors, complex interactions, and nonlinear responses. 
This article focuses on the use of statistically designed experiments in assay optimization. <s> BIB004 </s> The Goldilocks Approach: A Review of Employing Design of Experiments in Prokaryotic Recombinant Protein Production <s> Introduction <s> OBJECTIVE: To study the optimal medium composition for xylanase production by Aspergillus niger XY-1 in solid-state fermentation (SSF). METHODS: Statistical methodology including the Plackett-Burman design (PBD) and the central composite design (CCD) was employed to identify the individual crucial components of the medium that significantly affected the enzyme yield. RESULTS: Firstly, NaNO3, yeast extract, urea, Na2CO3, MgSO4, peptone and (NH4)2SO4 were screened by PBD as the significant factors positively affecting xylanase production. Secondly, by evaluating the effect of the nitrogen sources, urea proved to be the most effective and economical nitrogen source for xylanase production and was used for further optimization. Finally, the CCD and response surface methodology (RSM) were applied to determine the optimal concentration of each significant variable, namely urea, Na2CO3 and MgSO4. A second-order polynomial was then determined by multiple regression analysis. The optimum values of the critical components for maximum xylanase production were obtained as follows: x1 (urea) = 0.163 (41.63 g/L), x2 (Na2CO3) = -1.68 (2.64 g/L), x3 (MgSO4) = 1.338 (10.68 g/L), with a predicted xylanase value of 14374.6 U/g dry substrate. Using the optimized conditions, xylanase production by Aspergillus niger XY-1 after 48 h fermentation reached 14637 U/g dry substrate with wheat bran in the shake flask.
CONCLUSION: By using PBD and CCD, we obtained the optimal composition for xylanase production by Aspergillus niger XY-1 in SSF; the elimination of additional expensive medium components and the shortened fermentation time for higher xylanase production show the potential for industrial utilization. <s> BIB005 </s> The Goldilocks Approach: A Review of Employing Design of Experiments in Prokaryotic Recombinant Protein Production <s> Introduction <s> The aim of this work was to optimize the cultural and production parameters through a statistical approach for the synthesis of alpha amylase by Bacillus amyloliquefaciens in submerged fermentation (SmF) using a combination of wheat bran and groundnut oil cake (1:1) as the substrate. The process parameters influencing the enzyme production were identified using a Plackett-Burman design. Among the various variables screened, the substrate concentration, incubation period and CaCl2 concentration were most significant. The optimum levels of these significant parameters were determined employing the response surface Box-Behnken design, which revealed these as follows: substrate concentration (12.5%), incubation period (42 h) and CaCl2 (0.0275 M). <s> BIB006 </s> The Goldilocks Approach: A Review of Employing Design of Experiments in Prokaryotic Recombinant Protein Production <s> Introduction <s> In 1989, Manning, Patel, and Borchardt wrote a review of protein stability (Manning et al., Pharm. Res. 6:903-918, 1989), which has been widely referenced ever since. At the time, recombinant protein therapy was still in its infancy. This review summarizes the advances that have been made since then regarding protein stabilization and formulation.
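Several of the screens above rely on Plackett-Burman designs. As a rough illustration of how such a screening matrix is constructed (a pure-Python sketch; factor assignment and run order are left to the experimenter, and the function name is ours), the standard 12-run design for up to 11 two-level factors is built from cyclic shifts of a published seed row plus one final all-low run:

```python
def plackett_burman_12():
    """Standard 12-run Plackett-Burman screening design.

    Returns 12 runs x 11 coded factor columns (+1 = high, -1 = low).
    Rows 1-11 are cyclic shifts of the published seed row; the last
    run sets every factor to its low level.
    """
    seed = [+1, +1, -1, +1, +1, +1, -1, -1, -1, +1, -1]
    runs = [[seed[(j - i) % 11] for j in range(11)] for i in range(11)]
    runs.append([-1] * 11)
    return runs

design = plackett_burman_12()
# Every factor is tested at each level in exactly half of the runs,
# and any two distinct factor columns are mutually orthogonal.
```

In a medium screen like the xylanase study above, each column would be assigned to one candidate component and a first-order model fitted to identify the significant factors.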
In addition to a discussion of the current understanding of chemical and physical instability, sections are included on stabilization in aqueous solution and the dried state, the use of chemical modification and mutagenesis to improve stability, and the interrelationship between chemical and physical instability. <s> BIB007 </s> The Goldilocks Approach: A Review of Employing Design of Experiments in Prokaryotic Recombinant Protein Production <s> Introduction <s> Large proteins are usually expressed in a eukaryotic system while smaller ones are expressed in prokaryotic systems. For proteins that require glycosylation, mammalian cells, fungi or the baculovirus system is chosen. The least expensive, easiest and quickest expression of proteins can be carried out in Escherichia coli. However, this bacterium cannot express very large proteins. Also, for S-S rich proteins, and proteins that require post-translational modifications, E. coli is not the system of choice. The two most utilized yeasts are Saccharomyces cerevisiae and Pichia pastoris. Yeasts can produce high yields of proteins at low cost, proteins larger than 50 kDa can be produced, signal sequences can be removed, and glycosylation can be carried out. The baculoviral system can carry out more complex post-translational modifications of proteins. The most popular system for producing recombinant mammalian glycosylated proteins is that of mammalian cells. Genetically modified animals secrete recombinant proteins in their milk, blood or urine. Similarly, transgenic plants such as Arabidopsis thaliana and others can generate many recombinant proteins. <s> BIB008 </s> The Goldilocks Approach: A Review of Employing Design of Experiments in Prokaryotic Recombinant Protein Production <s> Introduction <s> A revolution in industrial microbiology was sparked by the discoveries of the double-stranded structure of DNA and the development of recombinant DNA technology.
Traditional industrial microbiology was merged with molecular biology to yield improved recombinant processes for the industrial production of primary and secondary metabolites, protein biopharmaceuticals and industrial enzymes. Novel genetic techniques such as metabolic engineering, combinatorial biosynthesis and molecular breeding techniques and their modifications are contributing greatly to the development of improved industrial processes. In addition, functional genomics, proteomics and metabolomics are being exploited for the discovery of novel valuable small molecules for medicine as well as enzymes for catalysis. The sequencing of industrial microbial genomes is being carried out, which bodes well for future process improvement and discovery of new industrial products. <s> BIB009 </s> The Goldilocks Approach: A Review of Employing Design of Experiments in Prokaryotic Recombinant Protein Production <s> Introduction <s> Bacillus cereus ZH14 was previously found to produce a new type of antiviral ribonuclease, which was secreted into medium and active against tobacco mosaic virus. In order to enhance the ribonuclease production, the culture conditions were optimized in this study using response surface methodology. The fermentation variables including culture temperature, initial pH, inoculum size, sucrose, yeast extract, MgSO4·7H2O, and KNO3 were considered for selection of significant ones by using the Plackett-Burman design, and four significant variables (sucrose, yeast extract, MgSO4·7H2O, and KNO3) were further optimized by a 2^4 factorial central composite design. The optimal combination of the medium constituents for maximum ribonuclease production was determined as 8.50 g/l sucrose, 9.30 g/l yeast extract, 2.00 g/l MgSO4·7H2O, and 0.62 g/l KNO3. The enzyme activity was increased by 60%. This study will be helpful to the future commercial development of the new bacteria-based antiviral ribonuclease fermentation process.
<s> BIB010 </s> The Goldilocks Approach: A Review of Employing Design of Experiments in Prokaryotic Recombinant Protein Production <s> Introduction <s> The production of recombinant proteins on a large scale is important for protein functional and structural studies, particularly by using Escherichia coli over-expression systems; however, approximately 70% of recombinant proteins are over-expressed as insoluble inclusion bodies. Here we presented an efficient method for generating soluble proteins from inclusion bodies by using two steps of denaturation and one step of refolding. We first demonstrated the advantages of this method over a conventional procedure with one denaturation step and one refolding step using three proteins with different folding properties. The refolded proteins were found to be active using in vitro tests and a bioassay. We then tested the general applicability of this method by analyzing 88 proteins from human and other organisms, all of which were expressed as inclusion bodies. We found that about 76% of these proteins were refolded with an average of >75% yield of soluble proteins. This "two-step-denaturing and refolding" (2DR) method is simple, highly efficient and generally applicable; it can be utilized to obtain active recombinant proteins for both basic research and industrial purposes. <s> BIB011 </s> The Goldilocks Approach: A Review of Employing Design of Experiments in Prokaryotic Recombinant Protein Production <s> Introduction <s> Protein expression in Escherichia coli represents the most facile approach for the preparation of non-glycosylated proteins for analytical and preparative purposes. So far, the optimization of recombinant expression has largely remained a matter of trial and error and has relied upon varying parameters, such as expression vector, media composition, growth temperature and chaperone co-expression. Recently, several new approaches for the genome-scale engineering of E.
coli to enhance recombinant protein expression have been developed. These methodologies now enable the generation of optimized E. coli expression strains in a manner analogous to metabolic engineering for the synthesis of low-molecular-weight compounds. In this review, we provide an overview of strain engineering approaches useful for enhancing the expression of hard-to-produce proteins, including heterologous membrane proteins. <s> BIB012 </s> The Goldilocks Approach: A Review of Employing Design of Experiments in Prokaryotic Recombinant Protein Production <s> Introduction <s> Anti-lipopolysaccharide factors (ALFs) are important antimicrobial peptides that are isolated from some aquatic species. In a previous study, we isolated ALF genes from Chinese mitten crab, Eriocheir sinensis. In this study, we optimized the production of a recombinant ALF by expressing E. sinensis ALF genes in Escherichia coli maintained in shake-flasks. In particular, we focused on optimization of both the medium composition and the culture condition. Various medium components were analyzed by the Plackett-Burman design, and two significant screened factors, (NH4)2SO4 and KH2PO4, were further optimized via the central composite design (CCD). Based on the CCD analysis, we investigated the induction start-up time, the isopropylthio-D-galactoside (IPTG) concentration, the post-induction time, and the temperature by response surface methodology. We found that the highest level of ALF fusion protein was achieved in the medium containing 1.89 g/L (NH4)2SO4 and 3.18 g/L KH2PO4, with a cell optical density of 0.8 at 600 nm before induction, an IPTG concentration of 0.5 mmol/L, a post-induction temperature of 32.7°C, and a post-induction time of 4 h. Applying the whole optimization strategy using all optimal factors improved the target protein content from 6.1% (without optimization) to 13.2%. 
We further applied the optimized medium and conditions in high cell density cultivation, and determined that the soluble target protein constituted 10.5% of the total protein. Our identification of the economic medium composition, optimal culture conditions, and details of the fermentation process should facilitate the potential application of ALF for further research. <s> BIB013 </s> The Goldilocks Approach: A Review of Employing Design of Experiments in Prokaryotic Recombinant Protein Production <s> Introduction <s> Tumor necrosis factor-α (TNF-α) is responsible for many autoimmune disorders including rheumatoid arthritis, psoriasis, Crohn's disease, stroke, and atherosclerosis. Thus, inhibition of TNF-α is a major challenge in drug discovery. However, a sufficient amount of purified protein is needed for the in vitro screening of potential TNF-α inhibitors. In this work, induction conditions for the production of human TNF-α fusion protein in a soluble form by recombinant Escherichia coli BL21(DE3) pLysS were optimized using response surface methodology based on the central composite design. The induction conditions included cell density prior to induction (OD(600nm)), post-induction temperature, IPTG concentration and post-induction time. Statistical analysis of the results revealed that all variables and their interactions had a significant impact on the production of soluble TNF-α. An 11% increase of TNF-α production was achieved after determination of the optimum induction conditions: OD(600nm) prior to induction 0.55, a post-induction temperature of 25°C, an IPTG concentration of 1mM and a post-induction time of 4h. We have also studied TNF-α oligomerization, the major property of this protein, and a K(d) value of 0.26nM for protein dimerization was determined. The concentration at which protein trimerization occurred was also detected. However, we failed to determine a reliable K(d) value for protein trimerization, probably due to the complexity of our model.
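The central composite designs used in the ALF and TNF-α studies above augment a two-level factorial with axial and center points. A minimal sketch in pure Python (the rotatable alpha and the number of center points are textbook defaults, not values taken from either study, and the function name is ours):

```python
from itertools import product

def central_composite(k, alpha=None, n_center=4):
    """Coded-unit CCD for k factors: 2^k factorial points,
    2k axial points at +/-alpha, and n_center center points."""
    if alpha is None:
        alpha = (2 ** k) ** 0.25  # rotatable-design default
    factorial = [list(p) for p in product((-1.0, 1.0), repeat=k)]
    axial = []
    for i in range(k):
        for sign in (-alpha, alpha):
            point = [0.0] * k
            point[i] = sign
            axial.append(point)
    center = [[0.0] * k for _ in range(n_center)]
    return factorial + axial + center

# A two-factor CCD, e.g. (NH4)2SO4 and KH2PO4 in coded units,
# needs only 4 factorial + 4 axial + n_center runs.
design = central_composite(2)
```

The axial points probe curvature along each axis, which is what lets the subsequent regression fit the quadratic terms of a second-order model.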
<s> BIB014 </s> The Goldilocks Approach: A Review of Employing Design of Experiments in Prokaryotic Recombinant Protein Production <s> Introduction <s> In this work, SVP2 from Salinivibrio proteolyticus strain AF-2004, a zinc metalloprotease with suitable biotechnological applications, was cloned for expression at high levels in Escherichia coli with the intention of changing culture conditions to generate a stable extracellular enzyme extract. The complete ORF of the SVP2 gene was heterologously expressed in E. coli BL21 (DE3) by using the pQE-80L expression vector system. In the initial step, the effects of seven factors (incubation temperature, peptone and yeast extract concentration, cell density (OD600) before induction, inducer (IPTG) concentration, induction time, and Ca2+ ion concentration) on extracellular recombinant SVP2 expression and stability were investigated. The primary results revealed that the IPTG concentration, Ca2+ ion concentration and induction time are the most important factors affecting protease secretion by recombinant E. coli BL21. A subsequent central composite design experiment showed that the maximum protease activity (522 U/ml) was achieved with 0.0089 mM IPTG for 24 h at 30 °C, an OD600 of 2, 0.5% of peptone and yeast extract, and a Ca2+ ion concentration of 1.3 mM. The results exhibited that the minimum level of IPTG concentration along with high cell density and a medium level of Ca2+ with prolonged induction time provided the best culture condition for maximum extracellular production of the heterologous protease SVP2 in the E. coli expression system. <s> BIB015 </s> The Goldilocks Approach: A Review of Employing Design of Experiments in Prokaryotic Recombinant Protein Production <s> Introduction <s> Escherichia coli has been the pioneering host for recombinant protein production, since the original recombinant DNA procedures were developed using its genetic material and infecting bacteriophages.
As a consequence, and because of the accumulated know-how on E. coli genetics and physiology and the increasing number of tools for genetic engineering adapted to this bacterium, E. coli is the preferred host when attempting the production of a new protein. <s> BIB016 </s> The Goldilocks Approach: A Review of Employing Design of Experiments in Prokaryotic Recombinant Protein Production <s> Introduction <s> Almost all of the 200 or so approved biopharmaceuticals have been produced in one of three host systems: the bacterium Escherichia coli, yeasts (Saccharomyces cerevisiae, Pichia pastoris) and mammalian cells. We describe the most widely used methods for the expression of recombinant proteins in the cytoplasm or periplasm of E. coli, as well as strategies for secreting the product to the growth medium. Recombinant expression in E. coli influences the cell physiology and triggers a stress response, which has to be considered in process development. Increased expression of a functional protein can be achieved by optimizing the gene, plasmid, host cell, and fermentation process. Relevant properties of two yeast expression systems, S. cerevisiae and P. pastoris, are summarized. Optimization of expression in S. cerevisiae has focused mainly on increasing the secretion, which is otherwise limiting. P. pastoris was recently approved as a host for biopharmaceutical production for the first time. It enables high-level protein production and secretion. Additionally, genetic engineering has resulted in its ability to produce recombinant proteins with humanized glycosylation patterns. Several mammalian cell lines of either rodent or human origin are also used in biopharmaceutical production. Optimization of their expression has focused on clonal selection, interference with epigenetic factors and genetic engineering. Systemic optimization approaches are applied to all cell expression systems. 
They feature parallel high-throughput techniques, such as DNA microarray, next-generation sequencing and proteomics, and enable simultaneous monitoring of multiple parameters. Systemic approaches, together with technological advances such as disposable bioreactors and microbioreactors, are expected to lead to increased quality and quantity of biopharmaceuticals, as well as to reduced product development times. <s> BIB017 </s> The Goldilocks Approach: A Review of Employing Design of Experiments in Prokaryotic Recombinant Protein Production <s> Introduction <s> Background: Leptospirosis is a zoonosis that is increasingly endemic in built-up areas, especially where there are communities living in precarious housing with poor or non-existent sanitation infrastructure. Leptospirosis can kill, for its symptoms are easily confused with those of other diseases. As such, a rapid diagnosis is required so it can be treated effectively. A test for leptospirosis diagnosis using Leptospira Immunoglobulin-like (Lig) proteins is currently at final validation at Fiocruz. Results: In this work, the process for expression of LigB (131-645aa) in E. coli BL21 (DE3)Star™/pAE was evaluated. No significant difference was found for the experiments at two different pre-induction temperatures (28°C and 37°C). Then, the strain was cultivated at 37°C until IPTG addition, followed by induction at 28°C, thereby reducing the overall process time. Under this condition, expression was assessed using central composite design for two variables: cell growth at which LigB (131-645aa) was induced (absorbance at 600 nm between 0.75 and 2.0) and inducer concentration (0.1 mM to 1 mM IPTG). Both variables influenced cell growth and protein expression. Induction at the final exponential growth phase in shaking flasks with Absind = 2.0 yielded higher cell concentrations and LigB (131-645aa) productivities.
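Throughout these studies, factors are analyzed in coded units centered on the design region. A small sketch of the coding transform (the absorbance 0.75-2.0 and IPTG 0.1-1 mM ranges from the leptospirosis study above are used only as an example; the function names are ours):

```python
def to_coded(x, low, high):
    """Map a natural factor setting onto the coded -1..+1 scale."""
    center = (low + high) / 2.0
    half_range = (high - low) / 2.0
    return (x - center) / half_range

def to_natural(z, low, high):
    """Map a coded level (possibly beyond +/-1 for axial points)
    back to natural units."""
    center = (low + high) / 2.0
    half_range = (high - low) / 2.0
    return center + z * half_range

# Example ranges: pre-induction absorbance 0.75-2.0, IPTG 0.1-1 mM.
print(to_coded(2.0, 0.75, 2.0))   # high level -> 1.0
print(to_natural(0.0, 0.1, 1.0))  # center point -> 0.55 mM IPTG
```

Working in coded units makes regression coefficients directly comparable across factors with very different natural scales.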
IPTG concentration had a negative effect and could be ten-fold lower than the concentration commonly used in molecular biology (1 mM), while keeping expression at similar levels and inducing less damage to cell growth. The expression of LigB (131-645aa) was associated with cell growth. The induction at the end of the exponential phase using 0.1 mM IPTG at 28°C for 4 h was also performed in microbioreactors, reaching higher cell densities and 970 mg/L protein. LigB (131-645aa) was purified by nickel affinity chromatography with 91% homogeneity. Conclusions: It was possible to assess the effects and interactions of the induction variables on the expression of soluble LigB (131-645aa) using experimental design, with a view to improving process productivity and reducing the production costs of a rapid test for leptospirosis diagnosis. <s> BIB018 </s> The Goldilocks Approach: A Review of Employing Design of Experiments in Prokaryotic Recombinant Protein Production <s> Introduction <s> The formulation and delivery of biopharmaceutical drugs, such as monoclonal antibodies and recombinant proteins, poses substantial challenges owing to their large size and susceptibility to degradation. In this Review we highlight recent advances in formulation and delivery strategies — such as the use of microsphere-based controlled-release technologies, protein modification methods that make use of polyethylene glycol and other polymers, and genetic manipulation of biopharmaceutical drugs — and discuss their advantages and limitations. We also highlight current and emerging delivery routes that provide an alternative to injection, including transdermal, oral and pulmonary delivery routes. In addition, the potential of targeted and intracellular protein delivery is discussed. <s> BIB019 </s> The Goldilocks Approach: A Review of Employing Design of Experiments in Prokaryotic Recombinant Protein Production <s> Introduction <s> Background: Streptococcus pneumoniae (S.
pneumoniae) causes several serious diseases including pneumonia, septicemia and meningitis. The World Health Organization estimates that streptococcal pneumonia is the cause of approximately 1.9 million deaths of children under five years of age each year. The large number of serotypes underlying the disease spectrum, which would be reflected in the high production cost of a commercial vaccine effective to protect against all of them, and the higher level of amino acid sequence conservation as compared to polysaccharide structure, prompted us to attempt to use conserved proteins for the development of a simpler vaccine. One of the most prominent proteins is pneumolysin (Ply), present in almost all the serotypes known at the moment, which shows an effective protection against S. pneumoniae infections. Results: We have cloned the pneumolysin gene from S. pneumoniae serotype 14 and studied the effects of eight variables related to medium composition and induction conditions on the soluble expression of rPly in Escherichia coli (E. coli), and a 2^(8-4) fractional factorial design was applied. Statistical analysis was carried out to compare the conditions used to evaluate the expression of soluble pneumolysin; rPly activity was evaluated by hemolytic activity assay and served as the main response to evaluate proper protein expression and folding. The optimized conditions, validated by the use of triplicates, include growth until an absorbance of 0.8 (measured at 600 nm) with 0.1 mM IPTG during 4 h at 25°C in a 5 g/L yeast extract, 5 g/L tryptone, 10 g/L NaCl, 1 g/L glucose medium, with addition of 30 μg/mL kanamycin. Conclusions: This experimental design methodology allowed the development of an adequate process condition to attain high levels (250 mg/L) of soluble expression of functional rPly in E. coli, which should contribute to reducing operational costs. It was possible to recover the protein in its active form with 75% homogeneity.
<s> BIB020 </s> The Goldilocks Approach: A Review of Employing Design of Experiments in Prokaryotic Recombinant Protein Production <s> Introduction <s> Escherichia coli is the organism of choice for the production of recombinant proteins. Its use as a cell factory is well-established and it has become the most popular expression platform. For this reason, there are many molecular tools and protocols at hand for the high-level production of recombinant proteins, such as a vast catalog of expression plasmids, a great number of engineered strains and many cultivation strategies. We review the different approaches for the synthesis of recombinant proteins in E. coli and discuss recent progress in this ever-growing field. <s> BIB021 </s> The Goldilocks Approach: A Review of Employing Design of Experiments in Prokaryotic Recombinant Protein Production <s> Introduction <s> Second generation biofuel development is increasingly reliant on the recombinant expression of cellulases. Designing or identifying successful expression systems is thus of preeminent importance to industrial progress in the field. Recombinant production of cellulases has been performed using a wide range of expression systems in bacteria, yeasts and plants. In a number of these systems, particularly when using bacteria and plants, significant challenges have been experienced in expressing full-length proteins or proteins at high yield. Further difficulties have been encountered in designing recombinant systems for surface-display of cellulases and for use in consolidated bioprocessing in bacteria and yeast. For establishing cellulase expression in plants, various strategies are utilized to overcome problems, such as the auto-hydrolysis of developing plant cell walls. In this review, we investigate the major challenges, as well as the major advances made to date in the recombinant expression of cellulases across the commonly used bacterial, plant and yeast systems. 
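The pneumolysin screen above examined eight variables in a 2^(8-4) fraction rather than the 256 runs of a full factorial. A minimal sketch of how such a fraction is built (pure Python; the generator words shown are a standard textbook choice for a 16-run, eight-factor design, not necessarily the generators used in the study, and the function name is ours):

```python
from itertools import product

def two_level_fraction(n_base, generators):
    """2^(k-p) fractional factorial in coded units.

    The first n_base factors get a full 2^n_base factorial; each
    generator is a tuple of base-factor indices whose product
    defines one additional (aliased) factor column.
    """
    runs = []
    for base in product((-1, 1), repeat=n_base):
        row = list(base)
        for word in generators:
            level = 1
            for i in word:
                level *= base[i]
            row.append(level)
        runs.append(row)
    return runs

# Eight factors in 16 runs with E=BCD, F=ACD, G=ABC, H=ABD
# (a commonly tabulated generator set for a 2^(8-4) design).
design = two_level_fraction(4, [(1, 2, 3), (0, 2, 3), (0, 1, 2), (0, 1, 3)])
```

The price of the 16-fold reduction in runs is aliasing: each extra column is confounded with the interaction that generates it, which is acceptable in a screening phase where main effects dominate.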
We review some of the critical aspects to be considered for industrial-scale cellulase production. <s> BIB022 </s> The Goldilocks Approach: A Review of Employing Design of Experiments in Prokaryotic Recombinant Protein Production <s> Introduction <s> The supply of many valuable proteins that have potential clinical or industrial use is often limited by their low natural availability. With the modern advances in genomics, proteomics and bioinformatics, the number of proteins being produced using recombinant techniques is exponentially increasing and seems to guarantee an unlimited supply of recombinant proteins. The demand for recombinant proteins has increased as more applications in several fields become a commercial reality. Escherichia coli (E. coli) is the most widely used expression system for the production of recombinant proteins for structural and functional studies. However, producing soluble proteins in E. coli is still a major bottleneck for structural biology projects. One of the most challenging steps in any structural biology project is predicting which protein or protein fragment will express solubly and purify for crystallographic studies. The production of soluble and active proteins is influenced by several factors including expression host, fusion tag, induction temperature and time. Statistically designed experiments are gaining ground in recombinant protein production because they provide information on variable interactions that escape the "one-factor-at-a-time" method. Here, we review the most important factors affecting the production of recombinant proteins in a soluble form. Moreover, we provide information about how statistically designed experiments can increase protein yield and purity as well as find conditions for crystal growth.
<s> BIB023 </s> The Goldilocks Approach: A Review of Employing Design of Experiments in Prokaryotic Recombinant Protein Production <s> Introduction <s> Receptor activator of nuclear factor (NF)-κB ligand (RANKL), a master cytokine that drives osteoclast differentiation, activation and survival, exists in both transmembrane and extracellular forms. To date, studies on physiological role of RANKL have been mainly carried out with extracellular RANKL probably due to difficulties in achieving high level expression of functional transmembrane RANKL (mRANKL). In the present study, we took advantage of codon optimization and response surface methodology to optimize the soluble expression of mRANKL in E. coli. We optimized the codon usage of mRANKL sequence to a preferred set of codons for E. coli changing its codon adaptation index from 0.64 to 0.76, tending to increase its expression level in E. coli. Further, we utilized central composite design to predict the optimum combination of variables (cell density before induction, lactose concentration, post-induction temperature and post-induction time) for the expression of mRANKL. Finally, we investigated the effects of various experimental parameters using response surface methodology. The best combination of response variables was 0.6 OD600, 7.5 mM lactose, 26°C post-induction temperature and 5 h post-induction time that produced 52.4 mg/L of fusion mRANKL. Prior to functional analysis of the protein, we purified mRANKL to homogeneity and confirmed the existence of trimeric form of mRANKL by native gel electrophoresis and gel filtration chromatography. Further, the biological activity of mRANKL to induce osteoclast formation on RAW264.7 cells was confirmed by tartrate resistant acid phosphatase assay and quantitative real-time polymerase chain reaction assays. Importantly, a new finding from this study was that the biological activity of mRANKL is higher than its extracellular counterpart. 
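Response surface methodology, as applied in the mRANKL study above, fits a second-order polynomial to the responses and reads the optimum off its stationary point. A one-factor toy sketch (the responses and function names are invented for illustration, not data from the study):

```python
def quadratic_fit(y_low, y_mid, y_high):
    """Fit y = b0 + b1*x + b2*x^2 exactly through responses
    observed at coded levels x = -1, 0, +1."""
    b0 = y_mid
    b1 = (y_high - y_low) / 2.0
    b2 = (y_high + y_low) / 2.0 - y_mid
    return b0, b1, b2

def stationary_point(b1, b2):
    """Coded level where the fitted curve is flat (a maximum
    when the curvature b2 is negative)."""
    return -b1 / (2.0 * b2)

# Invented yields (mg/L) at low/center/high inducer levels:
b0, b1, b2 = quadratic_fit(40.0, 52.0, 48.0)
best = stationary_point(b1, b2)  # -> 0.25 in coded units
```

With several factors the same idea generalizes to a multiple-regression fit of all linear, quadratic, and interaction terms, with the stationary point obtained from the gradient of the fitted surface.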
To the best of our knowledge, this is the first report of heterologous expression of mRANKL in soluble form and of a comparative study of the functional properties of both forms of RANKL. <s> BIB024 </s> The Goldilocks Approach: A Review of Employing Design of Experiments in Prokaryotic Recombinant Protein Production <s> Introduction <s> BACKGROUND: Escherichia coli phytase is an acidic histidine phytase with great specific activity. Pichia pastoris is a powerful system for the heterologous expression of active and soluble proteins which can express recombinant proteins in high cell density fermenters without loss of product yield and efficiently secrete heterologous proteins into the media. Recombinant protein expression is influenced by expression conditions such as temperature, concentration of inducer, and pH. By optimization, the yield of expressed proteins can be increased. Response surface methodology (RSM) has been widely used for the optimization and study of different parameters in biotechnological processes. OBJECTIVES: In this study, the expression of the synthetic appA gene in P. pastoris was greatly improved by adjusting the expression conditions. MATERIALS AND METHODS: The appA gene with 410 amino acids was synthesized by P. pastoris codon preference and cloned in the expression vector pPinkα-HC, under the control of the AOX1 promoter, and it was transformed into P. pastoris GS115 by electroporation. Recombinant phytase was expressed in buffered methanol-complex medium (BMMY) and the expression was analyzed by sodium dodecyl sulfate polyacrylamide gel electrophoresis (SDS-PAGE) and enzymatic assay. To achieve the highest level of expression, methanol concentration, pH and temperature were optimized via RSM. Finally, the optimum pH and temperature for recombinant phytase activity were determined. RESULTS: Escherichia coli phytase was expressed in P.
pastoris under different cultivation conditions (post-induction temperature, methanol concentration, and post-induction pH). The conditions optimized by RSM using a face-centered central composite design were 1% (v/v) methanol, pH 5.8, and 24.5°C. Under the optimized conditions, appA was successfully expressed in P. pastoris and the maximum phytase activity was 237.2 U/mL after 72 hours of expression. CONCLUSIONS: By optimizing recombinant phytase expression in shake flask culture, we concluded that P. pastoris is a suitable host for high-level expression of phytase with high potential for industrial applications. <s> BIB025 </s> The Goldilocks Approach: A Review of Employing Design of Experiments in Prokaryotic Recombinant Protein Production <s> Introduction <s> A haloalkaliphilic solvent-tolerant lipase was produced from Alkalibacillus salilacus within 48 h of growth in liquid medium. An overall 4.9-fold enhanced production was achieved over unoptimized media after medium optimization by statistical approaches. Plackett-Burman screening suggested that lipase production was most influenced by olive oil, KH2PO4, NaCl, and glucose; and response surface methodology predicted the appropriate levels of each parameter. The produced lipase was highly active and stable over broad ranges of temperature (15–65 °C), pH (4.0–11.0), and NaCl concentration (0–30 %), showing excellent thermostable, pH-stable, and halophilic properties. The enzyme was optimally active at pH 8.0 and 40 °C. The majority of cations, except a few such as Co2+ and Al3+, had a positive effect on lipase activity. In addition, the presence of chemical agents and organic solvents with different log Pow was well tolerated by the enzyme. Finally, the efficacy of lipase-mediated esterification of various alcohols with oleic acid in organic solvents was studied.
<s> BIB026 </s> The Goldilocks Approach: A Review of Employing Design of Experiments in Prokaryotic Recombinant Protein Production <s> Introduction <s> Streptomyces kanasenisi ZX01 was previously found to produce a novel glycoprotein, GP-1, which was secreted into the medium and had significant activity against tobacco mosaic virus. However, the low production of GP-1 by strain ZX01 limited its further study. In order to improve the yield of GP-1, a series of statistical experimental design methods were applied in this work to optimize the medium of strain ZX01. Millet medium was chosen as the optimal starting medium for optimization. Soluble starch and yeast extract were identified as the optimal carbon and nitrogen sources using the one-factor-at-a-time method. Response surface methodology was used to optimize the medium composition (soluble starch, yeast extract and inorganic salts). A higher yield of GP-1 (601.33 µg/L) was obtained after optimization. The optimal composition of the medium was: soluble starch 13.61 g/L, yeast extract 4.19 g/L, NaCl 3.54 g/L, CaCO3 0.28 g/L, millet 10 g/L. The yield of GP-1 in a 5 L fermentor using the optimized medium was 2.54 mg/L, much higher than the result in shake flasks. This work will be helpful for the improvement of GP-1 production on a large scale and lays a foundation for developing GP-1 into a novel anti-plant-virus agent. <s> BIB027 </s> The Goldilocks Approach: A Review of Employing Design of Experiments in Prokaryotic Recombinant Protein Production <s> Introduction <s> BACKGROUND ::: The aim of this study was to determine the best conditions for the production of DT386-BR2 fusion protein, an immunotoxin consisting of the catalytic and translocation domains of diphtheria toxin fused to BR2, a cancer-specific cell-penetrating peptide, for targeted eradication of cancer cells, in terms of the host, cultivation conditions, and culture medium.
::: ::: ::: MATERIALS AND METHODS ::: Recombinant pET28a vector containing the codons optimized for the expression of the DT386-BR2 gene was transformed to different strains of Escherichia coli (E. coli BL21 DE3, E. coli Rosetta DE3 and E. coli Rosetta-gami 2 DE3), followed by the induction of expression using 1 mM IPTG. Then, the strain with the highest ability to produce recombinant protein was selected and used to determine the best expression condition using response surface methodology (RSM). Finally, the best culture medium was selected. ::: ::: ::: RESULTS ::: Densitometry analysis of sodium dodecyl sulfate-polyacrylamide gel electrophoresis of the expressed fusion protein showed that E. coli Rosetta DE3 produced the highest amounts of the recombinant fusion protein when quantified by 1 mg/ml bovine serum albumin (178.07 μg/ml). Results of RSM also showed the best condition for the production of the recombinant fusion protein was induction with 1 mM IPTG for 2 h at 37°C. Finally, it was established that terrific broth could produce higher amounts of the fusion protein when compared to other culture media. ::: ::: ::: CONCLUSION ::: In this study, we expressed the recombinant DT386-BR2 fusion protein in large amounts by optimizing the expression host, cultivation condition, and culture medium. This fusion protein will be subjected to purification and evaluation of its cytotoxic effects in future studies. <s> BIB028
|
Advances in biotechnology, including the development of genetic engineering and cloning, have provided a means for the large-scale expression of heterologous proteins for diverse applications BIB009 . Recombinant proteins are now widely used in the biological and biomedical industries as well as in research, and their market share is increasing rapidly BIB019 BIB007 . The production of high yields of soluble and functional recombinant protein is the ultimate goal in protein biotechnology BIB016 . To achieve this objective, many key aspects, such as the expression system, the expression vector, the host strain, the purification tag, the media composition, the induction conditions and the purification methods, need to be carefully evaluated and optimised before embarking on large-scale production of a recombinant protein of interest BIB014 BIB020 BIB021 . Although both eukaryotic and prokaryotic expression systems are used for the overproduction of soluble recombinant protein, choosing the right system for a given protein depends, amongst other things, on the growth rate and culturing conditions of the host cells, the level of target gene expression and the post-translational processing of the synthesized protein BIB022 . The most commonly used prokaryotic systems are based on expression in bacteria, including E. coli and Bacillus species BIB002 BIB017 . No single expression method is universally successful in ensuring the production of a desired concentration of soluble and functional protein BIB011 BIB001 BIB023 . Varying the many factors that influence protein expression through trial and error to reach the optimum is laborious and unreliable BIB012 . To overcome this problem, statistical approaches have been used to evaluate the variables that have the largest influence on the production of a recombinant protein of interest in terms of yield BIB013 BIB005 , product quality BIB028 , purity BIB025 BIB024 and solubility BIB018 BIB015 .
These statistical processes include the Design of Experiments (DoE) approach BIB003 BIB010 . DoE improves upon the traditional one-factor-at-a-time (OFAT) method, in which one factor is varied while all others are held constant. The single-variable OFAT approach requires a large number of experiments and carries a high risk of failing to identify the true optimum, because factor interactions are never explored BIB027 . By varying several factors simultaneously, the DoE method requires a significantly reduced experimental matrix BIB004 BIB026 BIB006 . The number of published studies applying statistically based optimisation in protein biotechnology is increasing BIB028 , matched by a corresponding increase in the application of DoE methods, such as screening and optimisation designs, to enhance protein production. This review examines the literature on the DoE methodologies commonly employed to evaluate the effects of media composition and culture conditions on recombinant protein expression. It focuses on the application of DoE to increase recombinant protein expression in prokaryotic systems, where high yields can be achieved but poor product quality remains a risk BIB008 . It also provides an overview of the key statistical analysis tools embedded in common DoE software; these tools facilitate the interpretation of experimental data and ultimately allow the identification of optimal factor levels for maximum yield. Finally, the review offers some thoughts on the benefits of the DoE methods typically used in recombinant protein production, in order to direct future research efforts.
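The run-count logic of a factorial DoE, and how it screens for influential factors, can be sketched as follows. This is a minimal illustration only: the factor names, levels and yield responses are hypothetical and are not taken from any study cited in this review.

```python
from itertools import product

# Two-level full factorial screening design for three hypothetical
# expression factors (names and levels are illustrative assumptions).
factors = ["IPTG_mM", "temp_C", "time_h"]

# Coded design matrix: -1 = low level, +1 = high level for each factor.
coded = list(product((-1, +1), repeat=len(factors)))
print(f"runs required: {len(coded)}")  # 2^3 = 8 runs cover every combination

# Fabricated yield responses (mg/L), one per run, purely for illustration.
responses = [10, 12, 30, 33, 11, 13, 31, 35]

def main_effect(i):
    """Mean response at the high level of factor i minus mean at its low level."""
    hi = [r for row, r in zip(coded, responses) if row[i] == +1]
    lo = [r for row, r in zip(coded, responses) if row[i] == -1]
    return sum(hi) / len(hi) - sum(lo) / len(lo)

for i, name in enumerate(factors):
    print(f"{name}: main effect = {main_effect(i):+.2f}")
```

In this fabricated data set the temperature factor dominates the main effects, which is exactly the kind of conclusion a screening design (e.g. a two-level factorial or Plackett-Burman) is used to draw before a response-surface optimisation. An OFAT study of the same three factors would need a comparable number of runs yet could not estimate factor interactions at all.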
|
The Goldilocks Approach: A Review of Employing Design of Experiments in Prokaryotic Recombinant Protein Production <s> Factors that Inform the Choice of Expression System <s> Abstract ::: When the cDNA encoding bovine microsomal 17 alpha-hydroxylase cytochrome P450 (P45017 alpha) containing modifications within the first seven codons which favor expression in Escherichia coli is placed in a highly regulated tac promoter expression plasmid, as much as 16 mg of spectrally detectable P45017 alpha per liter of culture can be synthesized and integrated into E. coli membranes. The known enzymatic activities of bovine P45017 alpha can be reconstituted by addition of purified rat liver NADPH-cytochrome P450 reductase to isolated E. coli membrane fractions containing the recombinant P45017 alpha enzyme. Surprisingly, it is found that E. coli contain an electron-transport system that can substitute for the mammalian microsomal NADPH-cytochrome P450 reductase in supporting both the 17 alpha-hydroxylase and 17,20-lyase activities of P45017 alpha. Thus, not only can E. coli express this eukaryotic membrane protein at relatively high levels, but as evidenced by metabolism of steroids added directly to the cells, the enzyme is catalytically active in vivo. These studies establish E. coli as an efficacious heterologous expression system for structure-function analysis of the cytochrome P450 system. <s> BIB001 </s> The Goldilocks Approach: A Review of Employing Design of Experiments in Prokaryotic Recombinant Protein Production <s> Factors that Inform the Choice of Expression System <s> A scalable transfection procedure using polyethylenimine (PEI) is described for the human embryonic kidney 293 cell line grown in suspension. Green fluorescent protein (GFP) and human placental secreted alkaline phosphatase (SEAP) were used as reporter genes to monitor transfection efficiency and productivity. Up to 75% of GFP-positive cells were obtained using linear or branched 25 kDa PEI. 
The 293 cell line and two genetic variants, either expressing the SV40 large T-antigen (293T) or the Epstein-Barr virus (EBV) EBNA1 protein (293E), were tested for protein expression. The highest expression level was obtained with 293E cells using the EBV oriP-containing plasmid pCEP4. We designed the pTT vector, an oriP-based vector having an improved cytomegalovirus expression cassette. Using this vector, 10- and 3-fold increases in SEAP expression was obtained in 293E cells compared with pcDNA3.1 and pCEP4 vectors, respectively. The presence of serum had a positive effect on gene transfer and expression. Transfection of suspension-growing cells was more efficient with linear PEI and was not affected by the presence of medium conditioned for 24 h. Using the pTT vector, >20 mg/l of purified His-tagged SEAP was recovered from a 3.5 l bioreactor. Intracellular proteins were also produced at levels as high as 50 mg/l, representing up to 20% of total cell proteins. <s> BIB002 </s> The Goldilocks Approach: A Review of Employing Design of Experiments in Prokaryotic Recombinant Protein Production <s> Factors that Inform the Choice of Expression System <s> The neutrophil-activating protein of Helicobacter pylori (HP-NAP) is a major antigen responsible for the generation of immune response in an infected individual. The cloning and expression of the gene corresponding to neutrophil-activating protein (NAP) were followed by process development for enhanced production and purification. The production process was developed in two parts. In the first part, some of the cultivation medium components (viz. carbon to nitrogen ratio, concentrations of sodium polyphosphate and magnesium sulphate) were optimized using the Taguchi robust experimental design. The intracellular NAP production level after 24 h of cultivation was considered as the target function or the dependent variable. There was a 76.8% increase in the NAP production level. 
Using this optimal medium composition obtained in the first part, the temperature of cultivation and the pH of cultivation medium were optimized in the second part. The NAP production level at hour 30 of cultivation was considered as the target function or the dependent variable. The optimal values for these two independent variables were 37.2 °C and 6.3 respectively. At this combination of temperature and pH, the theoretical maximum NAP production level was 1280 mg l−1. This optimal combination was verified experimentally and the NAP production level was found to be 1261 mg l−1. The optimization of the cultivation conditions resulted in a 61.5% increase in NAP production level. About a 2.91-fold overall increase in NAP production level at hour 24 of cultivation was achieved through process optimization. <s> BIB003 </s> The Goldilocks Approach: A Review of Employing Design of Experiments in Prokaryotic Recombinant Protein Production <s> Factors that Inform the Choice of Expression System <s> L-Asparaginase (isozyme II) from Escherichia coli is an important therapeutic enzyme used in the treatment of leukemia. Extracellular expression of recombinant asparaginase was obtained by fusing the gene coding for asparaginase to an efficient pelB leader sequence and an N-terminal 6x histidine tag cloned under the T7lac promoter. Media composition and the induction strategy had a major influence on the specificity and efficiency of secretion of recombinant asparaginase. Induction of the cells with 0.1 mM IPTG at late log phase of growth in TB media resulted in fourfold higher extracellular activity in comparison to growing the cells in LB media followed by induction during the mid log phase. Using an optimized expression strategy a yield of 20,950 UI/L of recombinant asparaginase was obtained from the extracellular medium. 
The recombinant protein was purified from the culture supernatant in a single step using Ni-NTA affinity chromatography, which gave an overall yield of 95 mg/L of purified protein, with a recovery of 86%. This is approximately 8-fold higher than the previously reported data in the literature. The fluorescence spectra, analytical size exclusion chromatography, and the specific activity of the purified protein were observed to be similar to those of the native protein, which demonstrated that the protein had folded properly and was present in its active tetramer form in the culture supernatant. <s> BIB004 </s> The Goldilocks Approach: A Review of Employing Design of Experiments in Prokaryotic Recombinant Protein Production <s> Factors that Inform the Choice of Expression System <s> In recent years, the number of recombinant proteins used for therapeutic applications has increased dramatically. There is remarkable market demand for the production of these proteins. Escherichia coli offers a means for the rapid and economical production of recombinant proteins. These advantages, coupled with a wealth of biochemical and genetic knowledge, have enabled the production of such economically important therapeutic proteins as insulin and bovine growth hormone. These demands have driven the development of a variety of strategies for achieving high-level expression of protein, particularly involving several aspects such as expression vector design, gene dosage, promoter strength (transcriptional regulation), mRNA stability, translation initiation and termination (translational regulation), host design considerations, codon usage, and fermentation factors available for manipulating the expression conditions, which are the major challenges in obtaining a high yield of protein at low cost.
<s> BIB005 </s> The Goldilocks Approach: A Review of Employing Design of Experiments in Prokaryotic Recombinant Protein Production <s> Factors that Inform the Choice of Expression System <s> Escherichia coli has been the most widely used host for the production of recombinant proteins because it is the best characterized system in every aspect. Furthermore, the high cell density culture of recombinant E. coli has allowed production of various proteins with high yield and high productivities. Various cultivation strategies employing different host strains and expression systems have been successfully employed for the production of recombinant proteins. New strategies for strain improvement towards the goal of enhanced protein production are actively being developed based on high-throughput omics approaches such as transcriptomics and proteomics. This paper reviews recent advances in the production of recombinant proteins by high cell density culture of E. coli. <s> BIB006 </s> The Goldilocks Approach: A Review of Employing Design of Experiments in Prokaryotic Recombinant Protein Production <s> Factors that Inform the Choice of Expression System <s> Producing soluble proteins in Escherichia coli is still a major bottleneck for structural proteomics. Therefore, screening for soluble expression on a small scale is an attractive way of identifying constructs that are likely to be amenable to structural analysis. A variety of expression-screening methods have been developed within the Structural Proteomics In Europe (SPINE) consortium and to assist the further refinement of such approaches, eight laboratories participating in the network have benchmarked their protocols. For this study, the solubility profiles of a common set of 96 His(6)-tagged proteins were assessed by expression screening in E. coli. The level of soluble expression for each target was scored according to estimated protein yield. 
By reference to a subset of the proteins, it is demonstrated that the small-scale result can provide a useful indicator of the amount of soluble protein likely to be produced on a large scale (i.e. sufficient for structural studies). In general, there was agreement between the different groups as to which targets were not soluble and which were the most soluble. However, for a large number of the targets there were wide discrepancies in the results reported from the different screening methods, which is correlated with variations in the procedures and the range of parameters explored. Given finite resources, it appears that the question of how to most effectively explore 'expression space' is similar to several other multi-parameter problems faced by crystallographers, such as crystallization. <s> BIB007 </s> The Goldilocks Approach: A Review of Employing Design of Experiments in Prokaryotic Recombinant Protein Production <s> Factors that Inform the Choice of Expression System <s> Automation and miniaturization are key issues of high-throughput research projects in the post-genomic era. The implementation of robotics and parallelization has enabled researchers to process large numbers of protein targets for structural studies in a short time with reasonable cost efficiency. However, the cost of implementing the robotics and parallelization often prohibits their use in the traditional academic laboratory. Fortunately, multiple groups have made significant efforts to minimize the cost of heterologous protein expression for the production of protein samples in quantities suitable for high resolution structural studies. In this review, we describe recent efforts to continue to minimize the cost for the parallel processing of multiple protein targets and focus on those materials and strategies that are highly suitable for the traditional academic laboratory.
<s> BIB008 </s> The Goldilocks Approach: A Review of Employing Design of Experiments in Prokaryotic Recombinant Protein Production <s> Factors that Inform the Choice of Expression System <s> Production of recombinant proteins at low temperatures is one strategy to prevent formation of protein aggregates and the use of an expensive inducer such as IPTG. We report on the construction of two expression vectors both containing the cold-inducible des promoter of Bacillus subtilis, where one allows intra- and the other extracellular synthesis of recombinant proteins. Production of recombinant proteins started within the first 30min after temperature downshock to 25 degrees C and continued for about 5h. <s> BIB009 </s> The Goldilocks Approach: A Review of Employing Design of Experiments in Prokaryotic Recombinant Protein Production <s> Factors that Inform the Choice of Expression System <s> Abstract A recombinant glutaryl-7-aminocephalosporanic acid acylase (GLA) from Pseudomonas N176 has been over-expressed in BL21(DE3)pLysS Escherichia coli cells. By alternating screenings of medium components and simplified factorial experimental designs, an improved microbial process was set up at shake-flask level (and then scaled up to 2L-fermentors) giving a ∼80- and 120-fold increase in specific and volumetric enzyme productivity, respectively. Under the best expression conditions, ∼1380 U/g cell and 16,100 U/L of GLA were produced versus the ∼18 U/g cell and the ∼140 U/L obtained in the initial standard conditions. Osmotic stress caused by the addition of NaCl, low cell growth rate linked to high biomass yield in the properly-designed rich medium, optimization of the time and the amount of inducer’s addition and decrease of temperature during recombinant protein production, represent the factors concurring to achieve the reported expression level. Notably, this expression level is significantly higher than any previously described production of GLAs. 
High volumetric production, cost reduction and the simple one-step chromatographic purification of the His-tagged recombinant enzyme make this GLA an economical tool for use in industrial 7-ACA production. <s> BIB010 </s> The Goldilocks Approach: A Review of Employing Design of Experiments in Prokaryotic Recombinant Protein Production <s> Factors that Inform the Choice of Expression System <s> Large proteins are usually expressed in a eukaryotic system while smaller ones are expressed in prokaryotic systems. For proteins that require glycosylation, mammalian cells, fungi or the baculovirus system is chosen. The least expensive, easiest and quickest expression of proteins can be carried out in Escherichia coli. However, this bacterium cannot express very large proteins. Also, for S-S rich proteins, and proteins that require post-translational modifications, E. coli is not the system of choice. The two most utilized yeasts are Saccharomyces cerevisiae and Pichia pastoris. Yeasts can produce high yields of proteins at low cost, proteins larger than 50 kD can be produced, signal sequences can be removed, and glycosylation can be carried out. The baculoviral system can carry out more complex post-translational modifications of proteins. The most popular system for producing recombinant mammalian glycosylated proteins is that of mammalian cells. Genetically modified animals secrete recombinant proteins in their milk, blood or urine. Similarly, transgenic plants such as Arabidopsis thaliana and others can generate many recombinant proteins.
However, their physiological features may limit their use for obtaining in native form proteins of some specific structural classes, such as polypeptides that undergo extensive post-translational modifications. To some extent, the production of proteins that depend on disulfide bridges for their stability has also been considered difficult in E. coli. Both eukaryotic and prokaryotic organisms keep their cytoplasm reduced and, consequently, disulfide bond formation is impaired in this subcellular compartment. Disulfide bridges can stabilize protein structure and are often present in high abundance in secreted proteins. In eukaryotic cells such bonds are formed in the oxidizing environment of the endoplasmic reticulum during the export process. Bacteria do not possess a similar specialized subcellular compartment, but they have both export systems and enzymatic activities aimed at the formation and quality control of disulfide bonds in the oxidizing periplasm. This article reviews the available strategies for exploiting the physiological mechanisms of bacteria to produce properly folded disulfide-bonded proteins. <s> BIB012 </s> The Goldilocks Approach: A Review of Employing Design of Experiments in Prokaryotic Recombinant Protein Production <s> Factors that Inform the Choice of Expression System <s> A revolution in industrial microbiology was sparked by the discovery of the double-stranded structure of DNA and the development of recombinant DNA technology. Traditional industrial microbiology was merged with molecular biology to yield improved recombinant processes for the industrial production of primary and secondary metabolites, protein biopharmaceuticals and industrial enzymes. Novel genetic techniques such as metabolic engineering, combinatorial biosynthesis and molecular breeding techniques and their modifications are contributing greatly to the development of improved industrial processes.
In addition, functional genomics, proteomics and metabolomics are being exploited for the discovery of novel valuable small molecules for medicine as well as enzymes for catalysis. The sequencing of industrial microbial genomes is being carried out, which bodes well for future process improvement and the discovery of new industrial products. <s> BIB013 </s> The Goldilocks Approach: A Review of Employing Design of Experiments in Prokaryotic Recombinant Protein Production <s> Factors that Inform the Choice of Expression System <s> Background ::: The thermostable β-glucosidase (Tn Bgl1A) from Thermotoga neapolitana is a promising biocatalyst for hydrolysis of glucosylated flavonoids and can be coupled to extraction methods using pressurized hot water. Hydrolysis has however been shown to be dependent on the position of the glucosylation on the flavonoid, and e.g. quercetin-3-glucoside (Q3) was hydrolysed slowly. A set of mutants of Tn Bgl1A were thus created to analyse the influence on the kinetic parameters using the model substrate para-nitrophenyl-β-D-glucopyranoside (p NPGlc), and screened for hydrolysis of Q3.
With structural genomics emphasizing a genome-based approach in understanding protein structure and function, a number of unique structures covering most of the protein folding space have been determined and new technologies with high efficiency have been developed. At the Midwest Center for Structural Genomics (MCSG), we have developed semi-automated protocols for high-throughput parallel protein expression and purification. A protein, expressed as a fusion with a cleavable affinity tag, is purified in two consecutive immobilized metal affinity chromatography (IMAC) steps: (i) the first step is an IMAC coupled with buffer-exchange, or size exclusion chromatography (IMAC-I), followed by the cleavage of the affinity tag using the highly specific Tobacco Etch Virus (TEV) protease; the second step is IMAC and buffer exchange (IMAC-II) to remove the cleaved tag and tagged TEV protease. These protocols have been implemented on multidimensional chromatography workstations and, as we have shown, many proteins can be successfully produced in large-scale. All methods and protocols used for purification, some developed by MCSG, others adopted and integrated into the MCSG purification pipeline and more recently the Center for Structural Genomics of Infectious Diseases (CSGID) purification pipeline, are discussed in this chapter. <s> BIB015 </s> The Goldilocks Approach: A Review of Employing Design of Experiments in Prokaryotic Recombinant Protein Production <s> Factors that Inform the Choice of Expression System <s> Taliglucerase alfa (Protalix Biotherapeutics, Carmiel, Israel) is a novel plant cell-derived recombinant human β-glucocerebrosidase for Gaucher disease. A phase 3, double-blind, randomized, parallel-group, comparison-dose (30 vs 60 U/kg body weight/infusion) multinational clinical trial was undertaken. Institutional review board approvals were received. 
A 9-month, 20-infusion trial used inclusion/exclusion criteria in treatment-naive adult patients with splenomegaly and thrombocytopenia. Safety end points were drug-related adverse events: Ab formation and hypersensitivity reactions. Primary efficacy end point was reduction in splenic volume measured by magnetic resonance imaging. Secondary end points were: changes in hemoglobin, hepatic volume, and platelet counts. Exploratory parameters included biomarkers and bone imaging. Twenty-nine patients (11 centers) completed the protocol. There were no serious adverse events; drug-related adverse events were mild/moderate and transient. Two patients (6%) developed non-neutralizing IgG Abs; 2 other patients (6%) developed hypersensitivity reactions. Statistically significant spleen reduction was achieved at 9 months: 26.9% (95% confidence interval [CI]: -31.9, -21.8) in the 30-unit dose group and 38.0% (95% CI: -43.4, -32.8) in the 60-unit dose group (both P < .0001); and in all secondary efficacy end point measures, except platelet counts at the lower dose. These results support safety and efficacy of taliglucerase alfa for Gaucher disease. <s> BIB016 </s> The Goldilocks Approach: A Review of Employing Design of Experiments in Prokaryotic Recombinant Protein Production <s> Factors that Inform the Choice of Expression System <s> Abstract β-Galactosidases (EC 3.2.1.23) constitute a large family of proteins that are known to catalyze both hydrolytic and transgalactosylation reactions. The hydrolytic activity has been applied in the food industry for decades for reducing the lactose content in milk, while the transgalactosylation activity has been used to synthesize galacto-oligosaccharides and galactose containing chemicals in recent years. The main focus of this review is on the expression and production of Aspergillus niger, Kluyveromyces lactis and bacterial β-galactosidases in different microbial hosts. 
Furthermore, emphasis is given to the reported applications of the recombinant enzymes. Current developments on novel β-galactosidases, derived from newly identified microbial sources or by protein engineering means, together with the use of efficient recombinant microbial production systems, are converting this enzyme into a relevant synthetic tool. Thermostable β-galactosidases (cold-adapted or thermophilic), in addition to the growing market for functional foods, will likely redouble its industrial interest. <s> BIB017 </s> The Goldilocks Approach: A Review of Employing Design of Experiments in Prokaryotic Recombinant Protein Production <s> Factors that Inform the Choice of Expression System <s> We previously found that plasmids bearing a mammalian replication initiation region (IR) and a nuclear matrix attachment region (MAR) efficiently initiate gene amplification and spontaneously increase their copy numbers in animal cells. In this study, this novel method was applied to the establishment of cells with high recombinant antibody production. The level of recombinant antibody expression was tightly correlated with the efficiency of plasmid amplification and the cytogenetic appearance of the amplified genes, and was strongly dependent on cell type. By using a widely used cell line for industrial protein production, CHO DG44, clones expressing very high levels of antibody were easily obtained. High-producer clones stably expressed the antibody over several months without eliciting changes in either the protein expression level or the cytogenetic appearance of the amplified genes. The integrity and reactivity of the protein produced by this method were good. In serum-free suspension culture, the specific protein production rate in high-density cultures was 29.4 pg/cell/day.
In conclusion, the IR/MAR gene amplification method is a novel and efficient platform for recombinant antibody production in mammalian cells, which rapidly and easily enables the establishment of stable high-producer cell clones. <s> BIB018 </s> The Goldilocks Approach: A Review of Employing Design of Experiments in Prokaryotic Recombinant Protein Production <s> Factors that Inform the Choice of Expression System <s> Bacteria have long been the favorite expression system for recombinant protein production. However, the flaw of the system is that insoluble and inactive proteins are co-produced due to codon bias, protein folding, phosphorylation, glycosylation, mRNA stability and promoter strength. These factors are cited, and methods to recover soluble and active proteins are described, for example tight control of the Escherichia coli milieu, refolding from inclusion bodies and fusion technology. <s> BIB019 </s> The Goldilocks Approach: A Review of Employing Design of Experiments in Prokaryotic Recombinant Protein Production <s> Factors that Inform the Choice of Expression System <s> Abstract In this work, SVP2 from Salinivibrio proteolyticus strain AF-2004, a zinc metalloprotease with suitable biotechnological applications, was cloned for expression at high levels in Escherichia coli with the intention of changing culture conditions to generate a stable extracellular enzyme extract. The complete ORF of the SVP2 gene was heterologously expressed in E. coli BL21 (DE3) using the pQE-80L expression vector system. In the initial step, the effects of seven factors (incubation temperature, peptone and yeast extract concentration, cell density (OD600) before induction, inducer (IPTG) concentration, induction time, and Ca2+ ion concentration) on extracellular recombinant SVP2 expression and stability were investigated.
The primary results revealed that the IPTG concentration, Ca2+ ion concentration and induction time are the most important effectors on protease secretion by recombinant E. coli BL21. Central composite design experiment in the following showed that the maximum protease activity (522 U/ml) was achieved in 0.0089 mM IPTG for 24 h at 30 °C, an OD600 of 2, 0.5% of peptone and yeast extract, and a Ca2+ ion concentration of 1.3 mM. The results exhibited that the minimum level of IPTG concentration along with high cell density and medium level of Ca2+ with prolonged induction time provided the best culture condition for maximum extracellular production of heterologous protease SVP2 in E. coli expression system. <s> BIB020 </s> The Goldilocks Approach: A Review of Employing Design of Experiments in Prokaryotic Recombinant Protein Production <s> Factors that Inform the Choice of Expression System <s> Escherichia coli has been the pioneering host for recombinant protein production, since the original recombinant DNA procedures were developed using its genetic material and infecting bacteriophages. As a consequence, and because of the accumulated know-how on E. coli genetics and physiology and the increasing number of tools for genetic engineering adapted to this bacterium, E. coli is the preferred host when attempting the production of a new protein. <s> BIB021 </s> The Goldilocks Approach: A Review of Employing Design of Experiments in Prokaryotic Recombinant Protein Production <s> Factors that Inform the Choice of Expression System <s> In recent years, high yield expression of proteins in E. coli has witnessed rapid progress with developments of new methodologies and technologies. An important advancement has been the development of novel recombinant cloning approaches and protocols to express heterologous proteins for Nuclear Magnetic Resonance (NMR) studies and for isotopic enrichment.
Isotope labeling in NMR is necessary for rapid acquisition of high dimensional spectra for structural studies. In addition, higher yield of proteins using various solubility and affinity tags has made protein over-expression cost-effective. Taken together, these methods have opened new avenues for structural studies of proteins and their interactions. This article deals with the different techniques that are employed for over-expression of proteins in E. coli and different methods used for isotope labeling of proteins vis-a-vis NMR spectroscopy. <s> BIB022 </s> The Goldilocks Approach: A Review of Employing Design of Experiments in Prokaryotic Recombinant Protein Production <s> Factors that Inform the Choice of Expression System <s> Almost all of the 200 or so approved biopharmaceuticals have been produced in one of three host systems: the bacterium Escherichia coli, yeasts (Saccharomyces cerevisiae, Pichia pastoris) and mammalian cells. We describe the most widely used methods for the expression of recombinant proteins in the cytoplasm or periplasm of E. coli, as well as strategies for secreting the product to the growth medium. Recombinant expression in E. coli influences the cell physiology and triggers a stress response, which has to be considered in process development. Increased expression of a functional protein can be achieved by optimizing the gene, plasmid, host cell, and fermentation process. Relevant properties of two yeast expression systems, S. cerevisiae and P. pastoris, are summarized. Optimization of expression in S. cerevisiae has focused mainly on increasing the secretion, which is otherwise limiting. P. pastoris was recently approved as a host for biopharmaceutical production for the first time. It enables high-level protein production and secretion. Additionally, genetic engineering has resulted in its ability to produce recombinant proteins with humanized glycosylation patterns.
Several mammalian cell lines of either rodent or human origin are also used in biopharmaceutical production. Optimization of their expression has focused on clonal selection, interference with epigenetic factors and genetic engineering. Systemic optimization approaches are applied to all cell expression systems. They feature parallel high-throughput techniques, such as DNA microarray, next-generation sequencing and proteomics, and enable simultaneous monitoring of multiple parameters. Systemic approaches, together with technological advances such as disposable bioreactors and microbioreactors, are expected to lead to increased quality and quantity of biopharmaceuticals, as well as to reduced product development times. <s> BIB023 </s> The Goldilocks Approach: A Review of Employing Design of Experiments in Prokaryotic Recombinant Protein Production <s> Factors that Inform the Choice of Expression System <s> NTT (N-terminal tags) on the catalytic (p110) sub-unit of PI 3-K (phosphoinositol 3-kinase) have previously been shown to increase cell signalling and oncogenic transformation. Here we test the impact of an NT (N-terminal) His-tag on in vitro lipid and protein kinase activity of all class-1 PI 3-K isoforms and two representative oncogenic mutant forms (E545K and H1047R), in order to elucidate the mechanisms behind this elevated signalling and transformation observed in vivo. Our results show that an NT His-tag has no impact on lipid kinase activity as measured by enzyme titration, kinetics and inhibitor susceptibility. Conversely, the NT His-tag did result in a differential effect on protein kinase activity, further potentiating the elevated protein kinase activity of both the helical domain and catalytic domain oncogenic mutants with relation to p110 phosphorylation. All other isoforms also showed elevated p110 phosphorylation (although not statistically significant). 
We conclude that the previously reported increase in cell signalling and oncogenic-like transformation in response to p110 NTT is not mediated via an increase in the lipid kinase activity of PI 3-K, but may be mediated by increased p110 autophosphorylation and/or other, as yet unidentified, intracellular protein/protein interactions. We further observe that tagged recombinant protein is suitable for use in in vitro lipid kinase screens to identify PI 3-K inhibitors; however, we recommend that in vivo (including intracellular) experiments and investigations into the protein kinase activity of PI 3-K should be conducted with untagged constructs. <s> BIB024 </s> The Goldilocks Approach: A Review of Employing Design of Experiments in Prokaryotic Recombinant Protein Production <s> Factors that Inform the Choice of Expression System <s> Microbial enzymes are of great importance in the development of industrial bioprocesses. Current applications are focused on many different markets including pulp and paper, leather, detergents and textiles, pharmaceuticals, chemical, food and beverages, biofuels, animal feed and personal care, among others. Today there is a need for new, improved or/and more versatile enzymes in order to develop more novel, sustainable and economically competitive production processes. Microbial diversity and modern molecular techniques, such as metagenomics and genomics, are being used to discover new microbial enzymes whose catalytic properties can be improved/modified by different strategies based on rational, semi-rational and random directed evolution. Most industrial enzymes are recombinant forms produced in bacteria and fungi. 
<s> BIB025 </s> The Goldilocks Approach: A Review of Employing Design of Experiments in Prokaryotic Recombinant Protein Production <s> Factors that Inform the Choice of Expression System <s> The autotransporter family of Gram-negative protein exporters has been exploited for surface expression of recombinant passenger proteins. While the passenger in some cases was successfully translocated, a major problem has been low levels of full-length protein on the surface due to proteolysis following export over the cytoplasmic membrane. The aim of the present study was to increase the surface expression yield of the model protein SefA, a Salmonella enterica fimbrial subunit with potential for use in vaccine applications, by reducing this proteolysis through process design using Design of Experiments methodology. Cultivation temperature and pH, hypothesized to influence periplasmic protease activity, as well as inducer concentration were the parameters selected for optimization. Through modification of these parameters, the total surface expression yield of SefA was increased by 200 %. At the same time, the yield of full-length protein was increased by 300 %, indicating a 33 % reduction in proteolysis. <s> BIB026 </s> The Goldilocks Approach: A Review of Employing Design of Experiments in Prokaryotic Recombinant Protein Production <s> Factors that Inform the Choice of Expression System <s> A metabolic engineering perspective which views recombinant protein expression as a multistep pathway allows us to move beyond vector design and identify the downstream rate limiting steps in expression. In E.coli these are typically at the translational level and the supply of precursors in the form of energy, amino acids and nucleotides. Further recombinant protein production triggers a global cellular stress response which feedback inhibits both growth and product formation. 
Countering this requires a system level analysis followed by a rational host cell engineering to sustain expression for longer time periods. Another strategy to increase protein yields could be to divert the metabolic flux away from biomass formation and towards recombinant protein production. This would require a growth stoppage mechanism which does not affect the metabolic activity of the cell or the transcriptional or translational efficiencies. Finally cells have to be designed for efficient export to prevent buildup of proteins inside the cytoplasm and also simplify downstream processing. The rational and the high throughput strategies that can be used for the construction of such improved host cell platforms for recombinant protein expression is the focus of this review. <s> BIB027 </s> The Goldilocks Approach: A Review of Employing Design of Experiments in Prokaryotic Recombinant Protein Production <s> Factors that Inform the Choice of Expression System <s> Many enzymes from basidiomycota have been identified and more recently characterized on the molecular level. This report summarizes the potential biotechnological applications of these enzymes and evaluates recent advances in their heterologous expression in Escherichia coli. Being one of the most widely used hosts for the production of recombinant proteins, there are, however, recurrent problems of recovering substantial yields of correctly folded and active enzymes. Various strategies for the efficient production of recombinant proteins from basidiomycetous fungi are reviewed including the current knowledge on vectors and expression strains, as well as methods for enhancing the solubility of target expression products and their purification. Research efforts towards the refolding of recombinant oxidoreductases and hydrolases are presented to illustrate successful production strategies. 
<s> BIB028 </s> The Goldilocks Approach: A Review of Employing Design of Experiments in Prokaryotic Recombinant Protein Production <s> Factors that Inform the Choice of Expression System <s> Proteins are now widely produced in diverse microbial cell factories. The Escherichia coli is still the dominant host for recombinant protein production but, as a bacterial cell, it also has its issues: the aggregation of foreign proteins into insoluble inclusion bodies is perhaps the main limiting factor of the E. coli expression system. Conversely, E. coli benefits of cost, ease of use and scale make it essential to design new approaches directed for improved recombinant protein production in this host cell.With the aid of genetic and protein engineering novel tailored-made strategies can be designed to suit user or process requirements. Gene fusion technology has been widely used for the improvement of soluble protein production and/or purification in E. coli, and for increasing peptide’s immunogenicity as well. New fusion partners are constantly emerging and complementing the traditional solutions, as for instance, the Fh8 fusion tag that has been recently studied and ranked among the best solubility enhancer partners. In this review, we provide an overview of current strategies to improve recombinant protein production in E. coli, including the key factors for successful protein production, highlighting soluble protein production, and a comprehensive summary of the latest available and traditionally-used gene fusion technologies. A special emphasis is given to the recently discovered Fh8 fusion system that can be used for soluble protein production, purification and immunogenicity in E. coli. The number of existing fusion tags will probably increase in the next few years, and efforts should be taken to better understand how fusion tags act in E. coli. 
This knowledge will undoubtedly drive the development of new tailored-made tools for protein production in this bacterial system. <s> BIB029 </s> The Goldilocks Approach: A Review of Employing Design of Experiments in Prokaryotic Recombinant Protein Production <s> Factors that Inform the Choice of Expression System <s> Escherichia coli is the organism of choice for the production of recombinant proteins. Its use as a cell factory is well-established and it has become the most popular expression platform. For this reason, there are many molecular tools and protocols at hand for the high-level production of recombinant proteins, such as a vast catalog of expression plasmids, a great number of engineered strains and many cultivation strategies. We review the different approaches for the synthesis of recombinant proteins in E. coli and discuss recent progress in this ever-growing field. <s> BIB030 </s> The Goldilocks Approach: A Review of Employing Design of Experiments in Prokaryotic Recombinant Protein Production <s> Factors that Inform the Choice of Expression System <s> Lysozyme is a protein found in egg white, tears, saliva, and other secretions. As a marketable natural alternative to preservatives, lysozyme can act as a natural antibiotic. In this study, we have isolated Bacillus licheniformis TIB320 from soil, which contains a lysozyme gene with various features. We have cloned and expressed the lysozyme in E. coli. The antimicrobial activity of the lysozyme showed that it had a broad antimicrobial spectrum against several standard strains. The lysozyme could maintain efficient activities in a pH range between 3 and 9 and from 20°C to 60°C, respectively. The lysozyme was resistant to pepsin and trypsin to some extent at 40°C. Production of the lysozyme was optimized by using various expression strategies in B. subtilis WB800. The lysozyme from B. licheniformis TIB320 will be promising as a food or feed additive. 
<s> BIB031 </s> The Goldilocks Approach: A Review of Employing Design of Experiments in Prokaryotic Recombinant Protein Production <s> Factors that Inform the Choice of Expression System <s> Yeasts are widely used for the production of heterologous proteins. Improving the expression of such proteins is a top priority for pharmaceutical and industrial applications. N-Glycosylation, a common form of protein modification in yeasts, facilitates proper protein folding and secretion. Accordingly, our previous study revealed that the attachment of additional N-glycans to recombinant elastase by introducing an N-glycosylation sequon at suitable locations could stimulate its expression. Interestingly, the sequon Asn-Xaa-Thr is N-glycosylated more efficiently than Asn-Xaa-Ser, so improving the N-glycosylation efficiency via the conversion of Ser to Thr in the sequon would enhance the efficiency of N-glycosylation and increase glycoprotein expression. Recently, the expression level of recombinant elastase was enhanced by this means in our lab. Actually, the modification of N-glycosylation sites can generally be achieved through site-directed mutagenesis; thus, the method described in this report represe... <s> BIB032 </s> The Goldilocks Approach: A Review of Employing Design of Experiments in Prokaryotic Recombinant Protein Production <s> Factors that Inform the Choice of Expression System <s> The well-characterized gram-positive bacterium Bacillus subtilis is an outstanding industrial candidate for protein expression owing to its single membrane and high capacity of secretion, simplifying the downstream processing of secretory proteins. During the last few years, there has been continuous progress in the illustration of secretion mechanisms and application of this robust host in various fields of life science, such as enzyme production, feed additives, and food and pharmaceutical industries.
Here, we review the developments of Bacillus subtilis as a highly promising expression system illuminating strong chemical- and temperature-inducible and other types of promoters, strategies for ribosome-binding-site utilization, and the novel approach of signal peptide selection. Furthermore, we outline the main steps of the Sec pathway and the relevant elements as well as their interactions. In addition, we introduce the latest discoveries of Tat-related complex structures and functions and the countless applications of this full-folded protein secretion pathway. This review also lists some of the current understandings of ATP-binding cassette transporters. According to the extensive knowledge on the genetic modification strategies and molecular biology of Bacillus subtilis, we propose some suggestions and strategies for improving the yield of intended productions. We expect this to promote striking future developments in the optimization and application of this bacterium. <s> BIB033 </s> The Goldilocks Approach: A Review of Employing Design of Experiments in Prokaryotic Recombinant Protein Production <s> Factors that Inform the Choice of Expression System <s> We report here a PCR-based cloning methodology that requires no post-PCR modifications such as restriction digestion and phosphorylation of the amplified DNA. The advantage of the present method is that it yields only recombinant clones thus eliminating the need for screening. Two DNA amplification reactions by PCR are performed wherein the first reaction amplifies the gene of interest from a source template, and the second reaction fuses it with the designed expression vector fragments. These vector fragments carry the essential elements that are required for the fusion product selection. The entire process can be completed in less than 8 hours.
Furthermore, ligation of the amplified DNA by a DNA ligase is not required before transformation, although the procedure yields more number of colonies upon transformation if ligation is carried out. As a proof-of-concept, we show the cloning and expression of GFP, adh, and rho genes. Using GFP production as an example, we further demonstrate that the E. coli T7 express strain can directly be used in our methodology for the protein expression immediately after PCR. The expressed protein is without or with 6xHistidine tag at either terminus, depending upon the chosen vector fragments. We believe that our method will find tremendous use in molecular and structural biology. <s> BIB034 </s> The Goldilocks Approach: A Review of Employing Design of Experiments in Prokaryotic Recombinant Protein Production <s> Factors that Inform the Choice of Expression System <s> Recombinant protein expression often presents a bottleneck for the production of proteins for use in many areas of animal-cell biotechnology. Difficult-to-express proteins require the generation of numerous expression constructs, where popular prokaryotic screening systems often fail to identify expression of multi domain or full-length protein constructs. Post-translational modified mammalian proteins require an alternative host system such as insect cells using the Baculovirus Expression Vector System (BEVS). Unfortunately this is time-, labor-, and cost-intensive. It is clearly desirable to find an automated and miniaturized fast multi-sample screening method for protein expression in such systems. With this in mind, in this paper a high-throughput initial expression screening method is described using an automated Microcultivation system in conjunction with fast plasmid based transient transfection in insect cells for the efficient generation of protein constructs. 
The applicability of the system is demonstrated for the difficult to express Nucleotide-binding Oligomerization Domain-containing protein 2 (NOD2). To enable detection of proper protein expression the rather weak plasmid based expression has been improved by a sensitive inline detection system. Here we present the functionality and application of the sensitive SplitGFP (split green fluorescent protein) detection system in insect cells. The successful expression of constructs is monitored by direct measurement of the fluorescence in the BioLector Microcultivation system. Additionally, we show that the results obtained with our plasmid-based SplitGFP protein expression screen correlate directly to the level of soluble protein produced in BEVS. In conclusion our automated SplitGFP screen outlines a sensitive, fast and reliable method reducing the time and costs required for identifying the optimal expression construct prior to large scale protein production in baculovirus infected insect cells. Biotechnol. Bioeng. 2016;113: 1975-1983. © 2016 The Authors. Biotechnology and Bioengineering Published by Wiley Periodicals, Inc. <s> BIB035 </s> The Goldilocks Approach: A Review of Employing Design of Experiments in Prokaryotic Recombinant Protein Production <s> Factors that Inform the Choice of Expression System <s> We engineered efficient 2,3-butanediol (23BD) production from cellobiose using Bacillus subtilis. First, we found that B. subtilis harboring an empty vector could produce 23BD from cellobiose. However, productivity using cellobiose as a carbon source was lower than that when using glucose. This lower productivity was improved by adding purified beta-glucosidase from Thermobifida fusca YX (Tfu_0937) in the fermentation. Encouraged by these findings, we found that hydrolysis of cellobiose to glucose was an important reaction of 23BD biosynthesis in B. subtilis using cellobiose. 
Hence, we created efficient 23BD production from cellobiose using exogenous Tfu_0937-expressing B. subtilis. Using the engineered strain, 21.2 g L(-1) of 23BD was produced after 72 h of cultivation. The productivity and yield were 0.294 g L(-1) h(-1) and 0.35 g 23BD/g cellobiose, respectively. We successfully demonstrated efficient 23BD production from cellobiose by using BGL-expressing B. subtilis. <s> BIB036 </s> The Goldilocks Approach: A Review of Employing Design of Experiments in Prokaryotic Recombinant Protein Production <s> Factors that Inform the Choice of Expression System <s> Enzymes from extremophiles are creating interest among researchers due to their unique properties and the enormous power of catalysis at extreme conditions. Since community demands are getting more intensified, therefore, researchers are applying various approaches viz. metagenomics to increase the database of extremophilic species. Furthermore, the innovations are being made in the naturally occurring enzymes utilizing various tools of recombinant DNA technology and protein engineering, which allows redesigning of the enzymes for its better fitment into the process. In this review, we discuss the biochemical constraints of psychrophiles during survival at the lower temperature. We summarize the current knowledge about the sources of such enzymes and their in vitro modification through mutagenesis to explore their biotechnological potential. Finally, we recap the microbial cell surface display to enhance the efficiency of the process in cost effective way. <s> BIB037 </s> The Goldilocks Approach: A Review of Employing Design of Experiments in Prokaryotic Recombinant Protein Production <s> Factors that Inform the Choice of Expression System <s> Pullulanase plays an important role in industrial applications of starch processing. However, extracellular production of pullulanase from recombinant Bacillus subtilis is yet limited due to the issues on regulatory elements of B. 
subtilis expression system. In this study, the gene encoding B. naganoensis pullulanase (PUL) was expressed in B. subtilis WB800 under the promoter PHpaII in the shuttle vector pMA0911. The extracellular activity of expressed pullulanase was 3.9 U ml(-1) from the recombinant B. subtilis WB800/pMA0911-PHpaII-pul. To further enhance the yield of PUL, the promoter PHpaII in pMA0911 was replaced by a stronger constitutive promoter P43. Then the activity was increased to 8.7 U ml(-1) from the recombinant B. subtilis WB800/pMA0911-P43-pul. Effect of host on pullulanase expression was further investigated by comparison between B. subtilis WB600 and B. subtilis WB800. In addition to the available B. subtilis WB800 recombinants, the constructed plasmids pMA0911-PHpaII-pul and pMA0911-P43-pul were transformed into B. subtilis WB600, respectively. Consequently, the extracellular production of PUL was significantly enhanced by B. subtilis WB600/pMA0911-P43-pul, resulting in the extracellular pullulanase activity of 24.5 U ml(-1). Therefore, promoter and host had an impact on pullulanase expression and their optimization would be useful to improve heterologous protein expression in B. subtilis. <s> BIB038 </s> The Goldilocks Approach: A Review of Employing Design of Experiments in Prokaryotic Recombinant Protein Production <s> Factors that Inform the Choice of Expression System <s> β-glucosidases catalyze the final step of cellulose hydrolysis and are essential in cellulose degradation. A β-glucosidase gene, cen502, was identified and isolated from a metagenomic library from Bursaphelenchus xylophilus via functional screening. Analyses indicated that cen502 encodes a 465 amino acid polypeptide that contains a catalytic domain belonging to the glycoside hydrolase family 1 (GH1). Cen502 was heterologously expressed, purified, and biochemically characterized. Recombinant Cen502 displayed optimum enzymatic activity at pH 8.0 and 38 °C. 
The enzyme had highest specific activity to p-nitrophenyl-β-D-glucopyranoside (pNPG; 180.3 U/mg) and had Km and Vmax values of 2.334 mol/ml and 9.017 μmol/min/mg, respectively. The addition of Fe2+ and Mn2+ significantly increased Cen502 β-glucosidase activity by 60% and 50%, respectively, while 10% and 25% loss of β-glucosidase activity was induced by addition of Pb2+ and K+, respectively. Cen502 exhibited activity against a broad array of substrates, including cellobiose, lactose, salicin, lichenan, laminarin, and sophorose. However, Cen502 displayed a preference for the hydrolysis of β-1,4 glycosidic bonds rather than β-1,3, β-1,6, or β-1,2 bonds. Our results indicate that Cen502 is a novel β-glucosidase derived from bacteria associated with B. xylophilus and may represent a promising target to enhance the efficiency of cellulose bio-degradation in industrial applications. <s> BIB039 </s> The Goldilocks Approach: A Review of Employing Design of Experiments in Prokaryotic Recombinant Protein Production <s> Factors that Inform the Choice of Expression System <s> Pullulanase is crucial to the specific hydrolysis of branch points in amylopectin and is generally employed as an important enzyme in the starch-processing industry. Recombinant Bacillus subtilis that employs an inducible promoter would be a suitable candidate for pullulanase expression because of its safety and controllable production, but its level of pullulanase activity is relatively low. In this study, we investigated the effect of the enhancers DegQ, DegU, and DegS on pullulanase expression in a recombinant B. subtilis inducible system. The genes degQ, degU, and degS were introduced to the recombinant plasmid pMA0911-PsacB-pul harboring the promoter PsacB, signal peptide LipA, and gene encoding pullulanase. The regulatory effects of the enhancers involved in recombinant plasmids on pullulanase expression level were evaluated in B. subtilis WB600 and WB800, respectively.
The positive regulation of DegQ toward pullulanase expression was detected from B. subtilis WB800, leading to a 60% increase in enzyme activity. In addition, enzyme activity was further enhanced by inserting the degQ gene to the position closer to the promoter PsacB. Consequently, pullulanase activity reached 26.5 U ml-1 from the B. subtilis WB800/pMA0911-PsacB- pul-degQ(N) after expression optimization, which was a 5.9-fold increase compared to that of the original strain B. subtilis WB800/pMA0911-PsacB-pul. Hence, the inducible expression of the enzyme was efficiently enhanced by regulating the enhancer DegQ from recombinant B. subtilis WB800. <s> BIB040
|
Protein purification from natural sources can require a large quantity of the source organism and may yield only a small amount of target protein after several rounds of extraction and purification BIB021 BIB015 . Recombinant expression has therefore become an indispensable tool for producing proteins at satisfactory yields BIB022 and for meeting the demands of industry and research BIB013 . With the aid of genetic engineering, a desired gene cloned into a suitable expression vector can be overexpressed as a recombinant protein of interest BIB034 . Recombinant proteins can be expressed in cell cultures of bacteria BIB001 , yeasts BIB032 , mammalian cells BIB018 BIB002 , plants BIB016 and insects BIB035 . However, prokaryotic systems remain the most attractive hosts because of their low cost, high productivity and rapid production rates BIB011 . Prokaryotic heterologous protein expression is mainly carried out in the bacterium E. coli, although Bacillus species are increasingly being employed BIB019 BIB036 BIB003 . Drawbacks of prokaryotic expression systems include poor protein quality, owing to the inability of prokaryotic cells to carry out post-translational modifications such as glycosylation; the presence of toxic cell wall pyrogens; and the formation of inclusion bodies of aggregated, insoluble heterologous protein BIB025 . Some widely used, commercially available bacterial expression systems are listed in Table 1 . Table 1 . Summary of the most widely used recombinant expression strains from E. coli and Bacillus species outlining their advantages and disadvantages. While a variety of expression vectors are commercially available, their choice depends strongly on the combination of replicons, promoters, selection markers, multiple cloning sites and fusion proteins BIB023 . Making an informed decision on the best expression plasmid BIB006 BIB007 BIB012 BIB017 BIB008 can therefore be confusing.
The most commonly used expression plasmids BIB020 BIB037 BIB026 BIB027 BIB039 and their key features such as promoters BIB005 BIB014 BIB004 BIB010 BIB028 , affinity tags BIB029 BIB024 and selection markers BIB030 have been extensively reviewed in the literature, primarily focusing on the E. coli prokaryotic expression system. Widely used Bacillus strains BIB031 BIB040 , vectors and promoters have also been reviewed BIB033 BIB038 BIB009 .
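Because host strain, vector, promoter, tag and selection marker all interact, a systematic screen is often preferable to varying one factor at a time. As a minimal sketch of the Design of Experiments mindset this review surveys, the Python snippet below enumerates a full-factorial screening matrix over three such factors; the specific strains, vectors and temperatures are illustrative placeholders, not recommendations:

```python
from itertools import product

# Hypothetical screening factors: the strains, vectors and temperatures
# below are illustrative placeholders, not recommendations.
factors = {
    "host":        ["BL21(DE3)", "Rosetta(DE3)", "WB800"],
    "vector":      ["pET-28a", "pGEX-6P-1"],
    "temperature": [18, 30, 37],   # post-induction temperature, degrees C
}

# Full-factorial enumeration: one experimental run per combination
# of factor levels.
runs = [dict(zip(factors, levels)) for levels in product(*factors.values())]

print(len(runs))   # 3 x 2 x 3 = 18 runs
print(runs[0])     # first run: BL21(DE3) / pET-28a / 18 degrees C
```

Even such a small grid makes the combinatorial cost explicit (18 runs for three factors), which is the motivation for the fractional designs discussed later in the review.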
|
The Goldilocks Approach: A Review of Employing Design of Experiments in Prokaryotic Recombinant Protein Production <s> Factors that Influence Media Composition and Culture Conditions in an Expression System <s> The major targets for improvement of recombinant expression systems in microbial cells are gene dosage, transcriptional control machinery and, to some extent, translation. Here we show that optimization of fermentation conditions by applying statistically designed, multifactorial experiments offers an additional method for potential enhancement of gene expression systems. A chromosomally encoded fusion between the Bacillus subtilis aprE regulatory region and the E. coli lacZ gene carried by the B. subtilis host cells was used. The 2 × SG sporulation medium was used as a basal medium. Among the 11 fermentation factors we examined, the most significant variables influencing β-galactosidase expression were statistically elucidated for optimization and included peptone, MgSO4 · 7H2O, and KCl. The optimum concentrations of these variables were predicted by using a second-order polynomial model fitted to the results obtained by applying the Box-Behnken design, a response surface method. Calculated optimum concentrations were predicted to confer a maximum yield of 2,423.5 β-galactosidase specific activity units. A verification experiment performed under optimal conditions yielded 96% of the predicted specific activity value with an increase by a factor of almost 5 compared with the results obtained under basal conditions. <s> BIB001 </s> The Goldilocks Approach: A Review of Employing Design of Experiments in Prokaryotic Recombinant Protein Production <s> Factors that Influence Media Composition and Culture Conditions in an Expression System <s> Although Escherichia coli is well studied and various recombinant E. coli protein expression systems have been developed, people usually consider the rapidly growing (log phase) culture of E.
coli as optimum for production of proteins. However, here we demonstrate that at stationary phase three E. coli systems, BL21 (DE3)(pET), DH5alpha (pGEX) induced with lactose, and TG1 (pBV220) induced with heat shock could overexpress diversified genes, including three whose products are deleterious to the host cells, more stably and profitably than following the log phase induction protocol. Physical and patch-clamp assays indicated that characteristics of target proteins prepared from cultures of the two different growth phases coincide. These results not only provide a better strategy for recombinant protein preparation in E. coli, but also reveal that rapid rehabilitation from stresses and stationary phase protein overproduction are fundamental characters of E. coli. <s> BIB002 </s> The Goldilocks Approach: A Review of Employing Design of Experiments in Prokaryotic Recombinant Protein Production <s> Factors that Influence Media Composition and Culture Conditions in an Expression System <s> Producing soluble proteins in Escherichia coli is still a major bottleneck for structural proteomics. Therefore, screening for soluble expression on a small scale is an attractive way of identifying constructs that are likely to be amenable to structural analysis. A variety of expression-screening methods have been developed within the Structural Proteomics In Europe (SPINE) consortium and to assist the further refinement of such approaches, eight laboratories participating in the network have benchmarked their protocols. For this study, the solubility profiles of a common set of 96 His(6)-tagged proteins were assessed by expression screening in E. coli. The level of soluble expression for each target was scored according to estimated protein yield. By reference to a subset of the proteins, it is demonstrated that the small-scale result can provide a useful indicator of the amount of soluble protein likely to be produced on a large scale (i.e. 
sufficient for structural studies). In general, there was agreement between the different groups as to which targets were not soluble and which were the most soluble. However, for a large number of the targets there were wide discrepancies in the results reported from the different screening methods, which is correlated with variations in the procedures and the range of parameters explored. Given finite resources, it appears that the question of how to most effectively explore 'expression space' is similar to several other multi-parameter problems faced by crystallographers, such as crystallization. <s> BIB003 </s> The Goldilocks Approach: A Review of Employing Design of Experiments in Prokaryotic Recombinant Protein Production <s> Factors that Influence Media Composition and Culture Conditions in an Expression System <s> The production of recombinant anti-HIV peptide, T-20, in Escherichia coli was optimized by statistical experimental designs (successive designs with multifactors) such as 2^(4-1) fractional factorial, 2^3 full factorial, and 2^2 rotational central composite design in order. The effects of media compositions (glucose, NPK sources, MgSO4, and trace elements), induction level, induction timing (optical density at induction process), and induction duration (culture time after induction) on T-20 production were studied by using a statistical response surface method. A series of iterative experimental designs was employed to determine optimal fermentation conditions (media and process factors). Optimal ranges characterized by %T-20 (proportion of peptide to the total cell protein) were observed, narrowed down, and further investigated to determine the optimal combination of culture conditions, which was as follows: 9, 6, 10, and 1 mL of glucose, NPK sources, MgSO4, and trace elements, respectively, in a total of 100 mL of medium induced at an OD of 0.55–0.75 with 0.7 mM isopropyl-β-d-thiogalactopyranoside in an induction duration of 4 h.
Under these conditions, up to 14% of T-20 was obtained. This statistical optimization allowed the production of T-20 to be increased more than twofold (from 6 to 14%) within a shorter induction duration (from 6 to 4 h) at the shake-flask scale. <s> BIB004 </s> The Goldilocks Approach: A Review of Employing Design of Experiments in Prokaryotic Recombinant Protein Production <s> Factors that Influence Media Composition and Culture Conditions in an Expression System <s> Abstract Escherichia coli ( E. coli ) is the most widely used expression system for the production of recombinant proteins for structural and functional studies. However, to obtain milligrams of soluble proteins is still challenging since many proteins are expressed in an insoluble form without optimization. Therefore when working with tens of proteins or protein domains it is recommended that high-throughput expression screening at a small scale (1–4 ml of culture) is carried out to identify the optimal conditions for soluble protein production. Once determined, these culture conditions can be applied at a large scale to produce sufficient protein for structural or functional studies. We describe a procedure that has enabled the systematic screening of culture conditions or fusion-tags on hundreds of cultures per week. The analysis of the optimal conditions for the soluble production of these proteins helped us to design a simple and efficient protocol for soluble protein expression screening. This protocol has since been used on hundreds of proteins and is illustrated with the genome wide scale production of proteins containing the DNA binding domains of Ciona intestinalis . <s> BIB005 </s> The Goldilocks Approach: A Review of Employing Design of Experiments in Prokaryotic Recombinant Protein Production <s> Factors that Influence Media Composition and Culture Conditions in an Expression System <s> Bacteria have long been the favorite expression system for recombinant protein production.
However, the flaw of the system is that insoluble and inactive proteins are co-produced due to codon bias, protein folding, phosphorylation, glycosylation, mRNA stability and promoter strength. Factors are cited and the methods to convert to soluble and active proteins are described, for example a tight control of Escherichia coli milieu, refolding from inclusion body and through fusion technology. <s> BIB006 </s> The Goldilocks Approach: A Review of Employing Design of Experiments in Prokaryotic Recombinant Protein Production <s> Factors that Influence Media Composition and Culture Conditions in an Expression System <s> In the recent past years, a large number of proteins have been expressed in Escherichia coli with high productivity due to rapid development of genetic engineering technologies. There are many hosts used for the production of recombinant protein but the preferred choice is E. coli due to its easier culture, short life cycle, well-known genetics, and easy genetic manipulation. We often face a problem in the expression of foreign genes in E. coli. Soluble recombinant protein is a prerequisite for structural, functional and biochemical studies of a protein. Researchers often face problems producing soluble recombinant proteins for over-expression, mainly the expression and solubility of heterologous proteins. There is no universal strategy to solve these problems but there are a few methods that can improve the level of expression, non-expression, or less expression of the gene of interest in E. coli. This review addresses these issues properly. Five levels of strategies can be used to increase the expression and solubility of over-expressed protein; (1) changing the vector, (2) changing the host, (3) changing the culture parameters of the recombinant host strain, (4) co-expression of other genes and (5) changing the gene sequences, which may help increase expression and the proper folding of desired protein. 
Here we present the resources available for the expression of a gene in E. coli to get a substantial amount of good quality recombinant protein. The resources include different strains of E. coli, different E. coli expression vectors, different physical and chemical agents and the co-expression of chaperone interacting proteins. Perhaps it would be the solutions to such problems that will finally lead to the maturity of the application of recombinant proteins. <s> BIB007 </s> The Goldilocks Approach: A Review of Employing Design of Experiments in Prokaryotic Recombinant Protein Production <s> Factors that Influence Media Composition and Culture Conditions in an Expression System <s> Abstract Receptor activator of nuclear factor-κB (RANK) and its cognate ligand (RANKL) is a member of the TNF superfamily of cytokines which is essential in osteobiology and its overexpression has been implicated in the pathogenesis of bone degenerative diseases such as osteoporosis. Therefore, RANKL is considered a major therapeutic target for the suppression of bone resorption in bone metabolic diseases such as rheumatoid arthritis and cancer metastasis. To evaluate the inhibitory effect of potential RANKL inhibitors a sufficient amount of protein is required. In this work RANKL was cloned for expression at high levels in Escherichia coli with the interaction of changing culture conditions in order to produce the protein in a soluble form. In an initial step, the effect of expression host on soluble protein production was investigated and BL21(DE3) pLysS was the most efficient one found for the production of RANKL. Central composite design experiment in the following revealed that cell density before induction, IPTG concentration, post-induction temperature and time as well as their interactions had a significant influence on soluble RANKL production.
An 80% increase of protein production was achieved after the determination of the optimum induction conditions: OD600nm before induction of 0.55, an IPTG concentration of 0.3 mM, a post-induction temperature of 25 °C and a post-induction time of 6.5 h. Following RANKL purification the thermal stability of the protein was studied. The interaction of RANKL with SPD304, a patented small-molecule inhibitor of TNF-α, was also studied in a fluorescence binding assay resulting in a Kd value of 14.1 ± 0.5 μM. <s> BIB008 </s> The Goldilocks Approach: A Review of Employing Design of Experiments in Prokaryotic Recombinant Protein Production <s> Factors that Influence Media Composition and Culture Conditions in an Expression System <s> Escherichia coli is the organism of choice for the production of recombinant proteins. Its use as a cell factory is well-established and it has become the most popular expression platform. For this reason, there are many molecular tools and protocols at hand for the high-level production of recombinant proteins, such as a vast catalog of expression plasmids, a great number of engineered strains and many cultivation strategies. We review the different approaches for the synthesis of recombinant proteins in E. coli and discuss recent progress in this ever-growing field. <s> BIB009 </s> The Goldilocks Approach: A Review of Employing Design of Experiments in Prokaryotic Recombinant Protein Production <s> Factors that Influence Media Composition and Culture Conditions in an Expression System <s> The supply of many valuable proteins that have potential clinical or industrial use is often limited by their low natural availability. With the modern advances in genomics, proteomics and bioinformatics, the number of proteins being produced using recombinant techniques is exponentially increasing and seems to guarantee an unlimited supply of recombinant proteins.
The demand of recombinant proteins has increased as more applications in several fields become a commercial reality. Escherichia coli (E. coli) is the most widely used expression system for the production of recombinant proteins for structural and functional studies. However, producing soluble proteins in E. coli is still a major bottleneck for structural biology projects. One of the most challenging steps in any structural biology project is predicting which protein or protein fragment will express solubly and purify for crystallographic studies. The production of soluble and active proteins is influenced by several factors including expression host, fusion tag, induction temperature and time. Statistical designed experiments are gaining success in the production of recombinant protein because they provide information on variable interactions that escape the "one-factor-at-a-time" method. Here, we review the most important factors affecting the production of recombinant proteins in a soluble form. Moreover, we provide information about how the statistical design experiments can increase protein yield and purity as well as find conditions for crystal growth. <s> BIB010 </s> The Goldilocks Approach: A Review of Employing Design of Experiments in Prokaryotic Recombinant Protein Production <s> Factors that Influence Media Composition and Culture Conditions in an Expression System <s> Expression of recombinant proteins in Escherichia coli (E. coli) remains the most popular and cost-effective method for producing proteins in basic research and for pharmaceutical applications. Despite accumulating experience and methodologies developed over the years, production of recombinant proteins prone to aggregate in E. coli-based systems poses a major challenge in most research applications. The challenge of manufacturing these proteins for pharmaceutical applications is even greater. 
This review will discuss effective methods to reduce and even prevent the formation of aggregates in the course of recombinant protein production. We will focus on important steps along the production path, which include cloning, expression, purification, concentration, and storage. <s> BIB011 </s> The Goldilocks Approach: A Review of Employing Design of Experiments in Prokaryotic Recombinant Protein Production <s> Factors that Influence Media Composition and Culture Conditions in an Expression System <s> The HSPA6, one of the members of large family of HSP70, is significantly up-regulated and has been targeted as a biomarker of cellular stress in several studies. Herein, conditions were optimized to increase the yield of recombinant camel HSPA6 protein in its native state, primarily focusing on the optimization of upstream processing parameters that lead to an increase in the specific as well as volumetric yield of the protein. The results showed that the production of cHSPA6 was increased proportionally with increased incubation temperature up to 37 °C. Induction with 10 μM IPTG was sufficient to induce the expression of cHSPA6 which was 100 times less than normally used IPTG concentration. Furthermore, the results indicate that induction during early to late exponential phase produced relatively high levels of cHSPA6 in soluble form. In addition, 5 h of post-induction incubation was found to be optimal to produce folded cHSPA6 with higher specific and volumetric yield. Subsequently, highly pure and homogenous cHSPA6 preparation was obtained using metal affinity and size exclusion chromatography. Taken together, the results showed successful production of electrophoretically pure recombinant HSPA6 protein from Camelus dromedarius in Escherichia coli in milligram quantities from shake flask liquid culture. 
<s> BIB012 </s> The Goldilocks Approach: A Review of Employing Design of Experiments in Prokaryotic Recombinant Protein Production <s> Factors that Influence Media Composition and Culture Conditions in an Expression System <s> The ease of genetic manipulation, low cost, rapid growth and number of previous studies have made Escherichia coli one of the most widely used microorganism species for producing recombinant protei... <s> BIB013 </s> The Goldilocks Approach: A Review of Employing Design of Experiments in Prokaryotic Recombinant Protein Production <s> Factors that Influence Media Composition and Culture Conditions in an Expression System <s> Protein stability is a topic of major interest for the biotechnology, pharmaceutical and food industries, in addition to being a daily consideration for academic researchers studying proteins. An understanding of protein stability is essential for optimizing the expression, purification, formulation, storage and structural studies of proteins. In this review, discussion will focus on factors affecting protein stability, on a somewhat practical level, particularly from the view of a protein crystallographer. The differences between protein conformational stability and protein compositional stability will be discussed, along with a brief introduction to key methods useful for analyzing protein stability. Finally, tactics for addressing protein-stability issues during protein expression, purification and crystallization will be discussed. <s> BIB014
|
Careful selection of the expression system, expression vector and host does not always guarantee the production of a large amount of target protein in soluble and active form BIB009 . Media composition and induction conditions have a significant influence on recombinant protein expression levels BIB013 and solubility BIB007 . For example, media containing defined concentrations of salts, peptone and yeast extract influenced the yield of a recombinant β-galactosidase BIB001 , although media composition does not always have a major effect on protein solubility BIB003 . Supplementing media with prosthetic groups, where the target protein requires them, can prevent the formation of inclusion bodies BIB006 BIB014 . The most common media used in prokaryotic expression systems, along with their advantages and disadvantages, have been reviewed elsewhere BIB005 . Culture conditions are another set of factors that must be carefully optimised to achieve high yields of heterologous protein BIB010 . Factors such as cell density prior to induction, inducer concentration, induction temperature and induction duration are all known to influence yield BIB004 BIB011 BIB012 BIB002 BIB008 .
|
The Goldilocks Approach: A Review of Employing Design of Experiments in Prokaryotic Recombinant Protein Production <s> Enhancing the Production of Recombinant Proteins in a Prokaryotic Expression System by DoE <s> Anti-lipopolysaccharide factors (ALFs) are important antimicrobial peptides that are isolated from some aquatic species. In a previous study, we isolated ALF genes from Chinese mitten crab, Eriocheir sinensis. In this study, we optimized the production of a recombinant ALF by expressing E. sinensis ALF genes in Escherichia coli maintained in shake-flasks. In particular, we focused on optimization of both the medium composition and the culture condition. Various medium components were analyzed by the Plackett-Burman design, and two significant screened factors, (NH4)2SO4 and KH2PO4, were further optimized via the central composite design (CCD). Based on the CCD analysis, we investigated the induction start-up time, the isopropylthio-D-galactoside (IPTG) concentration, the post-induction time, and the temperature by response surface methodology. We found that the highest level of ALF fusion protein was achieved in the medium containing 1.89 g/L (NH4)2SO4 and 3.18 g/L KH2PO4, with a cell optical density of 0.8 at 600 nm before induction, an IPTG concentration of 0.5 mmol/L, a post-induction temperature of 32.7°C, and a post-induction time of 4 h. Applying the whole optimization strategy using all optimal factors improved the target protein content from 6.1% (without optimization) to 13.2%. We further applied the optimized medium and conditions in high cell density cultivation, and determined that the soluble target protein constituted 10.5% of the total protein. Our identification of the economic medium composition, optimal culture conditions, and details of the fermentation process should facilitate the potential application of ALF for further research. 
<s> BIB001 </s> The Goldilocks Approach: A Review of Employing Design of Experiments in Prokaryotic Recombinant Protein Production <s> Enhancing the Production of Recombinant Proteins in a Prokaryotic Expression System by DoE <s> Proteins are now widely produced in diverse microbial cell factories. The Escherichia coli is still the dominant host for recombinant protein production but, as a bacterial cell, it also has its issues: the aggregation of foreign proteins into insoluble inclusion bodies is perhaps the main limiting factor of the E. coli expression system. Conversely, E. coli benefits of cost, ease of use and scale make it essential to design new approaches directed for improved recombinant protein production in this host cell.With the aid of genetic and protein engineering novel tailored-made strategies can be designed to suit user or process requirements. Gene fusion technology has been widely used for the improvement of soluble protein production and/or purification in E. coli, and for increasing peptide’s immunogenicity as well. New fusion partners are constantly emerging and complementing the traditional solutions, as for instance, the Fh8 fusion tag that has been recently studied and ranked among the best solubility enhancer partners. In this review, we provide an overview of current strategies to improve recombinant protein production in E. coli, including the key factors for successful protein production, highlighting soluble protein production, and a comprehensive summary of the latest available and traditionally-used gene fusion technologies. A special emphasis is given to the recently discovered Fh8 fusion system that can be used for soluble protein production, purification and immunogenicity in E. coli. The number of existing fusion tags will probably increase in the next few years, and efforts should be taken to better understand how fusion tags act in E. coli. 
This knowledge will undoubtedly drive the development of new tailored-made tools for protein production in this bacterial system. <s> BIB002 </s> The Goldilocks Approach: A Review of Employing Design of Experiments in Prokaryotic Recombinant Protein Production <s> Enhancing the Production of Recombinant Proteins in a Prokaryotic Expression System by DoE <s> Medium development for high level expression of human interferon gamma (hIFN-γ) from Pichia pastoris (GS115) was performed with the aid of statistical and nonlinear modeling techniques. In the initial screening, gluconate and glycine were found to be key carbon and nitrogen sources, showing significant effect on production of hIFN-γ. Plackett-Burman screening revealed that medium components., gluconate, glycine, KH2PO4 and histidine, have a considerable impact on hIFN-γ production. Optimization was further proceeded with Box-Behnken design followed by artificial neural network linked genetic algorithm (ANN-GA). The maximum production of hIFN-γ was found to be 28.48mg/L using Box-Behnken optimization (R2=0.98), whereas the ANN-GA based optimization had displayed a better production rate of 30.99mg/L (R2=0.98), with optimal concentration of gluconate=50 g/L, glycine=10.185 g/L, KH2PO4=35.912 g/L and histidine 0.264 g/L. The validation was carried out in batch bioreactor and unstructured kinetic models were adapted. The Luedeking-Piret (L-P) model showed production of hIFN-γ was mixed growth associated with the maximum production rate of 40mg/L of hIFN-γ production. <s> BIB003 </s> The Goldilocks Approach: A Review of Employing Design of Experiments in Prokaryotic Recombinant Protein Production <s> Enhancing the Production of Recombinant Proteins in a Prokaryotic Expression System by DoE <s> ABSTRACTLipase is one of the most important industrial enzymes, widely used in the preparation of food additives, cosmetics and pharmaceuticals. 
In order to obtain a large amount of lipase, in the present study, a gene encoding intracellular lipase was cloned from Acinetobacter haemolyticus. The recombinant lipase KV1 containing a His-tag was expressed in Esherichia coli BL21 (DE3) cells, using pET-30a as the expression vector. Using the central composite design, screening and optimization of induction conditions (cell density before induction, IPTG (isopropyl β-D-1-thiogalactopyranoside) concentration, post-induction temperature and post-induction time) were made. All parameters significantly (P < 0.05) influenced the expression of lipase KV1, rendering a 70% increase in enzyme production at optimum induction conditions (OD600 before induction: 0.6, IPTG concentration: 0.5 mmol/L, post-induction temperature: 40 °C, post-induction time: 16 h). The expressed recombinant lipase KV1 was purified using Ni-aff... <s> BIB004
|
It can be difficult to make informed decisions regarding the optimal combination of expression system, conditions and media components. Oftentimes this results in an unsatisfactory and costly trial-and-error process being employed to enhance the overall production yield BIB002 . To address this problem, more effective, statistically supported approaches have been developed and have gained significant traction. In this approach, a controlled model defining media components, induction and expression conditions is developed for the recombinant protein of interest BIB001 . DoE, employed in this way, provides powerful tools to screen and optimise the factors affecting recombinant protein expression. This is due to DoE's ability to identify the factors affecting recombinant protein production and to optimise the process with the minimum number of experiments. A typical DoE workflow is depicted in diagrammatic form (see Figure 1). The desired output, or response, is a high yield of the protein of interest, and the workflow involves three main stages.

Stage 1. The first stage of the process is to compile a list of factors that can influence protein expression. These usually include induction temperature, induction duration, pH and media components (carbon source, nitrogen source, micronutrients).

Stage 2. At this stage a suitable software package, such as MINITAB, JMP or Design-Expert, is acquired for the statistical analysis. The second stage of DoE aims to reduce the factors to a smaller subset of the most important ones (i.e. those with the greatest impact on expression). This process is known as screening. Having a smaller set of significant factors greatly simplifies the statistical analysis. If the number of factors is already small (between two and four), the screening stage can be omitted.
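The screening stage is commonly carried out with a Plackett-Burman design. The sketch below is illustrative only and not taken from the review: it builds the standard 8-run Plackett-Burman matrix in coded −1/+1 units and estimates main effects from synthetic yield data; the factors x1..x7 and the response values are hypothetical.

```python
import numpy as np

# Standard 8-run Plackett-Burman screening design for up to 7 factors.
# Rows 1-7 are cyclic shifts of the published generator row; the final
# run sets every factor to its coded low level (-1). The columns are
# mutually orthogonal, so each main effect is estimated independently.
generator = np.array([1, 1, 1, -1, 1, -1, -1])
design = np.vstack([np.roll(generator, i) for i in range(7)]
                   + [-np.ones(7, dtype=int)])

# Hypothetical factors (x1..x7, e.g. temperature, IPTG, pH, ...) and a
# synthetic yield in which only x1 and x2 genuinely matter.
rng = np.random.default_rng(0)
y = 10 + 3 * design[:, 0] - 2 * design[:, 1] + rng.normal(0, 0.1, 8)

# Main effect = mean response at the +1 level minus mean at the -1 level.
effects = 2 * design.T @ y / len(y)
for i, effect in enumerate(effects, start=1):
    print(f"x{i}: main effect = {effect:+.2f}")
```

In a real screen the few largest effects (here x1 and x2) would be carried forward to the optimisation stage, with the remaining factors held at convenient fixed levels.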
When looking at a factor that influences protein expression, the concept of levels is important: temperature, for example, may be examined between 20 °C and 40 °C. These two temperatures represent the lowest and highest "level" of this parameter that will influence expression. For the purposes of modelling, these two levels are input into the model for this factor. Similarly, the upper and lower levels are input for all other relevant parameters. It is important to note that the levels are input into the DoE package as +1 (highest value of a parameter) and −1 (lowest value of a parameter). This "coding" is carried out to avoid the use of multiple different measurement units for parameters such as pH and temperature. The software will then suggest a minimal set of experiments to explore the significance of each factor. The design of the experimental matrix can be selected from a range of choices such as Full Factorial Design, Plackett-Burman Design or indeed a custom design. The objective is to assess the "main effect" of a factor (its direct effect on the response) as well as its "interaction effects" (its combined effect with other factors). The suggested experiments are carried out and the results are used to inform the next stage of the process: optimisation.

Stage 3. The final stage of the process is optimisation and is typically carried out with a set of three to four factors. An experimental RSM (Response Surface Methodology) design strategy is selected and experiments are run as for the screening stage. The optimisation process expresses the response surface as a polynomial and uses the input data to estimate its coefficients. The derivatives of this polynomial are used to locate stationary points corresponding to maxima or minima in the model. The model can be evaluated by examining the goodness of fit between the model and the experimental data. Finally, experiments using the optimum conditions predicted by the model are carried out to validate the model.
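The optimisation stage (fit a second-order polynomial, then differentiate to find the optimum) can be sketched in a few lines. Everything below is synthetic and illustrative, not data from any cited study: two coded factors, a small central composite design, and yields drawn from a known quadratic whose maximum sits at (0.5, −0.3) in coded units.

```python
import numpy as np

# Central composite design in coded units for two factors:
# 2^2 factorial corners, axial points at +/- sqrt(2), three centre runs.
a = np.sqrt(2)
X = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1],
              [-a, 0], [a, 0], [0, -a], [0, a],
              [0, 0], [0, 0], [0, 0]])

# Synthetic "measured yields" from a quadratic with optimum at (0.5, -0.3).
rng = np.random.default_rng(1)
y = 20 - 2 * (X[:, 0] - 0.5) ** 2 - 3 * (X[:, 1] + 0.3) ** 2
y = y + rng.normal(0, 0.05, len(X))

# Second-order model: y = b0 + b1*x1 + b2*x2 + b11*x1^2 + b22*x2^2 + b12*x1*x2
A = np.column_stack([np.ones(len(X)), X[:, 0], X[:, 1],
                     X[:, 0] ** 2, X[:, 1] ** 2, X[:, 0] * X[:, 1]])
b0, b1, b2, b11, b22, b12 = np.linalg.lstsq(A, y, rcond=None)[0]

# Stationary point: set both partial derivatives of the polynomial to zero.
H = np.array([[2 * b11, b12], [b12, 2 * b22]])
x_opt = np.linalg.solve(H, -np.array([b1, b2]))
print("stationary point (coded units):", x_opt)
```

Because the fitted quadratic coefficients b11 and b22 come out negative, the stationary point is a maximum; decoding it back to physical units gives the predicted optimal settings, which are then verified experimentally.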
A typical DoE workflow in protein production. Case study A illustrates the optimisation of recombinant lipase KV1 expression in E. coli BIB004 , where a screening process was not required since the number of factors affecting this enzyme is not large (four factors). The four factors (A, B, C, D), therefore, underwent optimisation by Central Composite Design (CCD) under Response Surface Methodology (RSM), which resulted in a 3.1-fold increase in protein expression. Case study B describes the optimisation process for high-yield production of recombinant human interferon-γ BIB003 . In this case, the number of factors involved is large (nine factors) and they were subjected to a screening process before optimisation. Four factors (X1, X2, X3, X7) out of nine were identified by Plackett-Burman Design (PBD) based screening to be the most influential and were subsequently used for further optimisation. A Box-Behnken Design (BBD), also under RSM, was selected to optimise the screened factors and increased the production of human interferon-γ up to 5.1-fold. Further details of these two case studies can be found in the references provided and similar cases are found in Tables 4 and 7.
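The distinction between main effects and interaction effects is worth a concrete, if hypothetical, illustration. In the 2^2 full factorial below, the synthetic response includes an interaction term, so the effect of factor A depends on the level of factor B; varying one factor at a time could never reveal this.

```python
import itertools
import numpy as np

# 2^2 full factorial in coded units (four runs covering all level pairs).
runs = np.array(list(itertools.product([-1, 1], repeat=2)), dtype=float)

# Synthetic response with a genuine A x B interaction term.
y = 10 + 2 * runs[:, 0] + 1 * runs[:, 1] + 1.5 * runs[:, 0] * runs[:, 1]

# Each effect is a difference of means between the +1 and -1 level groups.
main_A = y[runs[:, 0] == 1].mean() - y[runs[:, 0] == -1].mean()
main_B = y[runs[:, 1] == 1].mean() - y[runs[:, 1] == -1].mean()
interaction_AB = (y[runs[:, 0] * runs[:, 1] == 1].mean()
                  - y[runs[:, 0] * runs[:, 1] == -1].mean())
print(main_A, main_B, interaction_AB)  # 4.0 2.0 3.0
```

The non-zero interaction effect means the best level of A changes with the level of B, which is exactly the information a factorial design buys over one-factor-at-a-time experimentation.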
|
The Goldilocks Approach: A Review of Employing Design of Experiments in Prokaryotic Recombinant Protein Production <s> DoE; a Brief Overview <s> The production of dextransucrase from Leuconostoc mesenteroides NRRL B-640 was investigated using statistical approaches. Plackett-Burman design with six variables, viz. sucrose, yeast extract, K(2)HPO(4), peptone, beef extract, and Tween 80, was used to screen the nutrients that significantly affected the dextransucrase production. 2(4)-Central composite design with four selected variables (sucrose, K(2)HPO(4), yeast extract, and beef extract) was used for response surface methodology (RSM) for optimizing the enzyme production. The culture was grown under flask culture with 100 ml optimized medium containing 30 g/l sucrose, 18.5 g/l yeast extract, 15.3 g/l K(2)HPO(4), and 5 g/l beef extract at 25 degrees C and shaking at 200 rpm gave dextransucrase with specific activity of 0.68 U/mg. Whereas the same optimized medium in a 3.0-l bioreactor (1.4 l working volume) gave an experimentally determined value of specific activity of 0.70 U/mg, which was in perfect agreement with the predicted value of 0.65 U/mg by the statistical model. <s> BIB001 </s> The Goldilocks Approach: A Review of Employing Design of Experiments in Prokaryotic Recombinant Protein Production <s> DoE; a Brief Overview <s> Response surface methodology was used to optimize the fermentation medium for enhancing naringinase production by Staphylococcus xylosus. The first step of this process involved the individual adjustment and optimization of various medium components at shake flask level. Sources of carbon (sucrose) and nitrogen (sodium nitrate), as well as an inducer (naringin) and pH levels were all found to be the important factors significantly affecting naringinase production. In the second step, a 22 full factorial central composite design was applied to determine the optimal levels of each of the significant variables. 
A second-order polynomial was derived by multiple regression analysis on the experimental data. Using this methodology, the optimum values for the critical components were obtained as follows: sucrose, 10.0%; sodium nitrate, 10.0%; pH 5.6; biomass concentration, 1.58%; and naringin, 0.50% (w/v), respectively. Under optimal conditions, the experimental naringinase production was 8.45 U/mL. The determination coefficients (R2) were 0.9908 and 0.9950 for naringinase activity and biomass production, respectively, indicating an adequate degree of reliability in the model. <s> BIB002 </s> The Goldilocks Approach: A Review of Employing Design of Experiments in Prokaryotic Recombinant Protein Production <s> DoE; a Brief Overview <s> A revolution in industrial microbiology was sparked by the discoveries of the double-stranded structure of DNA and the development of recombinant DNA technology. Traditional industrial microbiology was merged with molecular biology to yield improved recombinant processes for the industrial production of primary and secondary metabolites, protein biopharmaceuticals and industrial enzymes. Novel genetic techniques such as metabolic engineering, combinatorial biosynthesis and molecular breeding techniques and their modifications are contributing greatly to the development of improved industrial processes. In addition, functional genomics, proteomics and metabolomics are being exploited for the discovery of novel valuable small molecules for medicine as well as enzymes for catalysis. The sequencing of industrial microbial genomes is being carried out which bodes well for future process improvement and discovery of new industrial products.
<s> BIB003 </s> The Goldilocks Approach: A Review of Employing Design of Experiments in Prokaryotic Recombinant Protein Production <s> DoE; a Brief Overview <s> Fermentative production of nattokinase, a potent fibrinolytic enzyme, using Bacillus natto NRRL 3666 was optimized by statistical optimization methods at shake flask level. In addition, the production scheme based on the optimum medium was scaled up in a 5-L lab-scale fermenter containing a 2.5 L working volume. Further, unstructured mathematical models were proposed for the kinetics of batch fermentation of nattokinase at both shake flask and fermenter level. Enhanced production of nattokinase from 188 ± 2.4 to 1,190.68 ± 11 U/mL within 40 hr was achieved at shake flask level. Nattokinase production improved to 1,932 U/mL in the fermenter, which was 1.6-fold higher than in the shake flask. The production process was significantly reduced to 26 hr in the fermenter. The proposed models showed good prediction of the experimental data with respect to biomass formation (R2 > 0.96), enzyme production (R2 > 0.99), and substrate utilization (R2 > 0.96). The present work showed successful optimization of the medium for hyperproduction of nattokinase. <s> BIB004 </s> The Goldilocks Approach: A Review of Employing Design of Experiments in Prokaryotic Recombinant Protein Production <s> DoE; a Brief Overview <s> Experiments can discover many unexpected things and highlight issues for further detailed study. Advanced products and processes are emerging and changing rapidly, customers are more demanding, and product life-cycles and time to market are shrinking. In this environment, engineers and scientists need a strategic approach to meet these demands. Design of experiments (DOE) is the answer to these challenges. It allows a researcher to understand what happens to the output (response) when the settings of the input variables in a system are purposely changed.
Unfortunately, many scientists and engineers still practice one-factor-at-a-time (OFAT) experimentation. DOE offers a number of advantages over the traditional OFAT approach to experimentation. One important advantage of DOE is its ability to discover the presence of interaction between the factors of a process, while OFAT cannot. The objective of this paper is to demonstrate how the DOE approach works. This paper describes a case study on a rubber glove manufacturing process. It illustrates interaction between factors that cannot be found when varying only one factor at a time. Models that describe the relationships between the input and output variables were then developed and used to indicate areas where operations may be improved. <s> BIB005 </s> The Goldilocks Approach: A Review of Employing Design of Experiments in Prokaryotic Recombinant Protein Production <s> DoE; a Brief Overview <s> Protease-producing bacteria were isolated from soil in South Korea. These bacteria were screened in skim milk agar medium using skim milk as the substrate. The bacterial strain producing the largest clear zone (BK-P23) was selected for further optimization studies. The strain was identified as Exiguobacterium profundum BK-P23 based on morphological, biochemical and molecular characterizations. The results of the 16S rRNA analysis showed that this strain was highly similar to E. profundum. The strain was able to grow under alkaline conditions at pH 8.5 and a temperature of 30°C. In the preliminary optimization experiments, five different parameters, i.e., carbon source (lactose), nitrogen source (corn steep solid), pH, temperature and incubation period, were varied with the goal of optimizing enzyme production in a low-cost medium using the Box-Behnken design combined with response surface methodology.
The optimal conditions were determined to be pH 9.0, a temperature of 30°C, lactose (1.0%) as the carbon source and corn steep solid (1.0%) as the cheap additional nitrogen source. In addition, 24 h of incubation was shown to produce the highest protease yield. Overall, the amount of enzyme produced was significantly higher in the optimized medium when compared with the original medium. <s> BIB006 </s> The Goldilocks Approach: A Review of Employing Design of Experiments in Prokaryotic Recombinant Protein Production <s> DoE; a Brief Overview <s> The supply of many valuable proteins that have potential clinical or industrial use is often limited by their low natural availability. With the modern advances in genomics, proteomics and bioinformatics, the number of proteins being produced using recombinant techniques is exponentially increasing and seems to guarantee an unlimited supply of recombinant proteins. The demand for recombinant proteins has increased as more applications in several fields become a commercial reality. Escherichia coli (E. coli) is the most widely used expression system for the production of recombinant proteins for structural and functional studies. However, producing soluble proteins in E. coli is still a major bottleneck for structural biology projects. One of the most challenging steps in any structural biology project is predicting which protein or protein fragment will express solubly and purify for crystallographic studies. The production of soluble and active proteins is influenced by several factors including expression host, fusion tag, induction temperature and time. Statistically designed experiments are gaining success in the production of recombinant proteins because they provide information on variable interactions that escape the "one-factor-at-a-time" method. Here, we review the most important factors affecting the production of recombinant proteins in a soluble form.
Moreover, we provide information about how statistically designed experiments can increase protein yield and purity as well as find conditions for crystal growth. <s> BIB007 </s> The Goldilocks Approach: A Review of Employing Design of Experiments in Prokaryotic Recombinant Protein Production <s> DoE; a Brief Overview <s> In the present study, four cold-adapted bacterial isolates were screened for multiple-enzyme production at low temperature (15 °C). The most potent isolate, Bacillus cereus GA6 (HQ832575), was subjected to mutation by UV radiation to obtain a mutant strain with elevated enzyme production. The mutant strain, designated as CUVGA6, with higher chitinase activity at low temperature, was selected for enzyme production optimization using factorial design and response-surface methodology (RSM). Two statistically significant parameters for the response (colloidal chitin and KH2PO4) were selected (p value = 0.008 and 0.004, respectively) along with pH and temperature and utilized to optimize the process. A central composite design under RSM was used to optimize the levels of key ingredients for the best yield of chitinase. Maximum chitinase production was predicted to be 428.57 U/ml, a 4.4-fold increase, in medium containing 2% colloidal chitin, 6.0 g/L K2HPO4 and pH 9.0 at 25 °C when incubated for 7 days in submerged fermentation. ANOVA of the CCD suggested that the quadratic interaction effects of K2HPO4 with chitin, temperature and pH have a high impact on the production of chitinase (p value = 0.007, 0.002, 0.035, respectively), although its linear effect was not significant. The closeness of the optimized values (R2 = 82.28%) to the experimental values (R2 = 80.13%) proved the validity of the statistical model. Thus, the multi-enzyme-producing cold-adapted mutant B. cereus GA6 (CUVGA6) could be exploited for the production of chitinase, which is of industrial significance.
<s> BIB008 </s> The Goldilocks Approach: A Review of Employing Design of Experiments in Prokaryotic Recombinant Protein Production <s> DoE; a Brief Overview <s> Enzymes from extremophiles are attracting interest among researchers due to their unique properties and their enormous power of catalysis under extreme conditions. As community demands intensify, researchers are applying various approaches, viz. metagenomics, to increase the database of extremophilic species. Furthermore, innovations are being made to naturally occurring enzymes using various tools of recombinant DNA technology and protein engineering, which allow the enzymes to be redesigned for a better fit to the process. In this review, we discuss the biochemical constraints of psychrophiles during survival at low temperatures. We summarize the current knowledge about the sources of such enzymes and their in vitro modification through mutagenesis to explore their biotechnological potential. Finally, we recap microbial cell surface display as a way to enhance the efficiency of the process in a cost-effective way. <s> BIB009 </s> The Goldilocks Approach: A Review of Employing Design of Experiments in Prokaryotic Recombinant Protein Production <s> DoE; a Brief Overview <s> Medium development for high-level expression of human interferon gamma (hIFN-γ) from Pichia pastoris (GS115) was performed with the aid of statistical and nonlinear modeling techniques. In the initial screening, gluconate and glycine were found to be key carbon and nitrogen sources, showing a significant effect on the production of hIFN-γ. Plackett-Burman screening revealed that the medium components gluconate, glycine, KH2PO4 and histidine have a considerable impact on hIFN-γ production. Optimization then proceeded with a Box-Behnken design followed by an artificial neural network linked genetic algorithm (ANN-GA).
The maximum production of hIFN-γ was found to be 28.48 mg/L using Box-Behnken optimization (R2 = 0.98), whereas the ANN-GA based optimization displayed a better production of 30.99 mg/L (R2 = 0.98), with optimal concentrations of gluconate = 50 g/L, glycine = 10.185 g/L, KH2PO4 = 35.912 g/L and histidine = 0.264 g/L. The validation was carried out in a batch bioreactor and unstructured kinetic models were adapted. The Luedeking-Piret (L-P) model showed that production of hIFN-γ was mixed-growth associated, with a maximum production rate of 40 mg/L. <s> BIB010 </s> The Goldilocks Approach: A Review of Employing Design of Experiments in Prokaryotic Recombinant Protein Production <s> DoE; a Brief Overview <s> Stainless steel 310 has excellent high-temperature properties with good weldability and ductility and is designed for high-temperature service. Electrochemical machining is a non-traditional machining process belonging to the electrochemical category. In this study, the effect of the input parameters on the material removal and surface roughness of stainless steel 310 was investigated using a full factorial design (3^2) of experiments. Flow rate was the parameter with the largest influence on the responses. <s> BIB011 </s> The Goldilocks Approach: A Review of Employing Design of Experiments in Prokaryotic Recombinant Protein Production <s> DoE; a Brief Overview <s> Gallic acid glycoside was enzymatically synthesized from gallic acid by using dextransucrase and sucrose. After purification by butanol partitioning and preparative HPLC, gallic acid glucoside was detected at m/z 355 (C13H16O10Na)+ by matrix-assisted laser desorption ionization time-of-flight mass spectrometry. The yield of gallic acid glucoside was found to be 35.7% (114 mM) by response surface methodology using a reaction mixture of 319 mM gallic acid, 355 mM sucrose, and 930 mU/mL dextransucrase.
The gallic acid glucoside obtained showed 31% higher anti-lipid peroxidation and stronger inhibition (Ki = 1.23 mM) against tyrosinase than that shown by gallic acid (Ki = 1.98 mM). In UVB-irradiated human fibroblast cells, gallic acid glucoside lowered matrix metalloproteinase-1 levels and increased the collagen content, which was indicative of a stronger anti-aging effect than that of gallic acid or arbutin. These results indicated that gallic acid glucoside is likely a superior cosmetic ingredient with skin-whitening and anti-aging functions. <s> BIB012 </s> The Goldilocks Approach: A Review of Employing Design of Experiments in Prokaryotic Recombinant Protein Production <s> DoE; a Brief Overview <s> Purpose: To investigate the enhancement of streptokinase extracellular expression in Escherichia coli by adjusting culture media. Methods: Screening of 10 chemical factors (EDTA, peptone, glycine, triton X-100, glycerol, K2HPO4, KH2PO4, Ca2+ (calcium chloride), yeast and NaCl) in order to increase the secretion of extracellular protein was carried out by response surface methodology (RSM). The method was also employed to optimize the concentrations of critical factors that had been determined in the screening step. Results: The results indicate that glycine, triton X-100 and Ca2+ were the most effective chemical factors in terms of increase in extracellular expression of streptokinase, with optimum levels of 0.878, 0.479 and 0.222%, respectively. Expression of streptokinase under optimum concentrations of critical permeabilizing factors led to a 7-fold increase in the quantity of secreted recombinant protein (5824 U/mL) compared to the initial level (802 U/mL). Conclusion: The results show that medium optimization using RSM is effective in improving extracellular streptokinase expression. The optimization medium is considered fundamental and useful for efficient production of streptokinase on a large scale.
Keywords: Streptokinase, Response surface methodology, Membrane permeabilization, Extracellular secretion <s> BIB013 </s> The Goldilocks Approach: A Review of Employing Design of Experiments in Prokaryotic Recombinant Protein Production <s> DoE; a Brief Overview <s> Lipase is one of the most important industrial enzymes, widely used in the preparation of food additives, cosmetics and pharmaceuticals. In order to obtain a large amount of lipase, in the present study, a gene encoding an intracellular lipase was cloned from Acinetobacter haemolyticus. The recombinant lipase KV1 containing a His-tag was expressed in Escherichia coli BL21 (DE3) cells, using pET-30a as the expression vector. Using a central composite design, screening and optimization of the induction conditions (cell density before induction, IPTG (isopropyl β-D-1-thiogalactopyranoside) concentration, post-induction temperature and post-induction time) were carried out. All parameters significantly (P < 0.05) influenced the expression of lipase KV1, rendering a 70% increase in enzyme production at the optimum induction conditions (OD600 before induction: 0.6, IPTG concentration: 0.5 mmol/L, post-induction temperature: 40 °C, post-induction time: 16 h). The expressed recombinant lipase KV1 was purified using Ni-aff...
|
DoE is a statistical technique used to plan experiments and analyse data using a controlled set of tests designed to model and explore the relationship between factors and observed responses BIB007 . This technique allows the researcher to use the minimum number of experiments, in which the experimental parameters can be varied simultaneously, to make evidence-based decisions BIB013 . It uses a mathematical model to analyse the process data, such as protein expression levels BIB002 . The model allows a researcher to understand the influence of the experimental parameters (inputs) on the response (outputs) and to identify a process optimum BIB011 . Furthermore, DoE software uses three-dimensional surface and contour plots to visualise and understand the relationship between factors and responses BIB009 BIB012 . In recombinant protein production, a DoE approach can significantly improve the efficiency of screening for the most influential experimental parameters (e.g., media composition, culture conditions) and determine optimal experimental conditions BIB005 .

A typical DoE workflow in protein production. Case study A illustrates the optimization of recombinant lipase KV1 expression in E. coli BIB014 , where a screening process was not required since the number of factors affecting this enzyme is not large (four factors). The four factors (A, B, C, D), therefore, underwent optimisation by Central Composite Design (CCD) under Response Surface Methodology (RSM), which resulted in a 3.1-fold increase in protein expression. Case study B describes the optimisation process for high-yield production of recombinant human interferon-γ BIB010 . In this case, the number of factors involved is large (nine factors) and they were subjected to a screening process before optimisation. Four factors (X 1 , X 2 , X 3 , X 7 ) out of nine were identified by Plackett-Burman Design (PBD) based screening to be the most influential and subsequently used for further optimisation. A Box-Behnken Design (BBD), also under RSM, was selected to optimize the screened factors and increased the production of human interferon-γ up to 5.1-fold. Further details of these two case studies can be found in the references provided and similar cases are found in Tables 4 and 7 .

The mathematical models employed in DoE define the process under study BIB006 . Screening designs such as the Plackett-Burman Design are based on a first-order model BIB008 , as shown in Equation (1):

Y = β0 + Σ βiXi (1)

where Y is the response, β0 is the model intercept, βi is the linear coefficient and Xi is the level of the independent variable. A statistical significance level of 5% (p-value = 0.05) is commonly used to identify the most influential factors. The significance level (or p-value) of each variable is based on its effect on the response and is calculated using Student's t-test BIB010 , as shown in Equation (2):

t = E(Xi) / S.E. (2)

where E(Xi) is the effect of variable Xi and S.E. is the associated standard error. Factors with p-value < 0.05 are statistically significant, while factors with p-value > 0.05 are not (see Table 5 for more details). Statistically significant factors are subjected to further optimisation by Response Surface Methodology. A second-order polynomial equation is then fitted, in which the independent variables are coded using Equation (3) to input factors into the model (see Section 5.4):

xi = (Xi − Xcp) / ΔXi (3)

where xi is the dimensionless coded value of an independent variable; Xi is the real value of the independent variable; Xcp is the real value of the independent variable at the design centre point; and ΔXi is the step change in the real value of variable i BIB004 . Replicates at the central point are required to check for the absence of bias between sets of experiments. The fit of the model is then evaluated through analysis of variance (ANOVA), which determines the significance of each term in the equation and estimates the goodness of fit in each case BIB001 (see Figure 5 and Table 9 for more details).
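The coding of Equation (3) and the effect estimate that feeds the significance test are simple enough to sketch in code. The Python fragment below is illustrative only: the factor (an IPTG concentration), its levels and the yield values are hypothetical, not taken from any cited study.

```python
# Illustrative sketch (hypothetical numbers): coding real factor settings onto
# the dimensionless -1..+1 scale of Equation (3), and estimating a main effect
# of the kind used in the significance test.

def code_level(real_value, centre, step):
    """Equation (3): x_i = (X_i - X_cp) / dX_i."""
    return (real_value - centre) / step

# Hypothetical factor: IPTG concentration with low 0.1 mM and high 0.9 mM,
# so the centre point is 0.5 mM and the step is 0.4 mM.
assert code_level(0.1, 0.5, 0.4) == -1.0   # low level codes to -1
assert code_level(0.9, 0.5, 0.4) == +1.0   # high level codes to +1
assert code_level(0.5, 0.5, 0.4) == 0.0    # centre point codes to 0

def main_effect(coded_levels, responses):
    """Effect of a factor: mean response at +1 minus mean response at -1."""
    hi = [y for x, y in zip(coded_levels, responses) if x == +1]
    lo = [y for x, y in zip(coded_levels, responses) if x == -1]
    return sum(hi) / len(hi) - sum(lo) / len(lo)

# One coded column of a hypothetical screening design and measured yields:
levels = [-1, +1, -1, +1]
yields = [10.0, 14.0, 11.0, 15.0]          # e.g. protein yield in mg/L
assert main_effect(levels, yields) == 4.0  # effect feeding the t-statistic
```

Dividing such an effect by its standard error (estimated from centre-point replicates) gives the t-statistic from which the p-value is obtained.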
|
The Goldilocks Approach: A Review of Employing Design of Experiments in Prokaryotic Recombinant Protein Production <s> DoE Versus One-Factor-At-a-Time (OFAT) <s> Soil contaminated with vegetable cooking oil was used in the isolation of a lipase-producing microorganism. The effectiveness of two different statistical design techniques in the screening and optimization of media constituents for enhancing the lipolytic activity of the soil microorganism was determined. The media constituents for lipase production by the isolated soil microorganism were screened using a Plackett-Burman design. Oil, magnesium sulfate, and ferrous sulfate were found to influence lipolytic activity at 24 and 72 h of culture with very high confidence levels. Whereas oil and ferrous sulfate showed a positive effect, magnesium sulfate indicated a negative effect on the lipolytic activity. A central composite design (CCD) followed by response surface methodology was used in optimizing these media constituents for enhancing the lipolytic activity. The regression model obtained for 72 h of lipolytic activity was found to be the best fit, with R2 = 0.97, compared with the other model. An optimum combination at 9.3 mL/L of oil, 0.311 g/L of magnesium sulfate, and 0.007 g/L of ferrous sulfate in the media gave a maximum measured lipolytic activity of 7.1 U/mL at 72 h of culture. This increase in lipolytic activity was found to be 10.25% higher than the maximum experimentally observed value in the CCD. <s> BIB001 </s> The Goldilocks Approach: A Review of Employing Design of Experiments in Prokaryotic Recombinant Protein Production <s> DoE Versus One-Factor-At-a-Time (OFAT) <s> The potential of Trichoderma reesei for cellulase production using pineapple waste as substrate has been investigated.
A maximum cellulase activity of 9.23 U/mL is obtained under the optimum experimental conditions: pH (5.5), temperature (37.5°C), initial substrate concentration (3%), inoculum concentration (6.6 × 10^8 CFU/mL), and culture time (6 days). The Box-Behnken design (BBD) statistical tool and a genetic algorithm (GA) are used to optimize the process parameters. The BBD study of the linear and quadratic interactive effects of the experimental variables on the desired response of cellulase activity showed that the second-order polynomial is significant (R2 = 0.9414). The experimental cellulase activity under the optimal conditions identified by the BBD is 9.23 U/mL and that by the GA is 6.98 U/mL. This result indicates that the BBD model gives a better result than the GA in the present case. <s> BIB002 </s> The Goldilocks Approach: A Review of Employing Design of Experiments in Prokaryotic Recombinant Protein Production <s> DoE Versus One-Factor-At-a-Time (OFAT) <s> Recombinant human interferon alpha-2b is an FDA-approved drug for monotherapy or in combination therapy with other drugs for hepatitis and cancers. It belongs to a family of homologous proteins involved in antiviral, antiproliferative, and immunoregulatory processes. Different expression systems have been used for overexpression of this protein. The Escherichia coli expression system is a highly characterized host and various expression settings have been developed based on its properties. However, finding the best conditions for the overexpression of recombinant human interferon alpha-2b remains to be addressed. In this study, the expression of the synthetic human interferon alpha-2b gene in E. coli was greatly improved by adjusting the expression conditions. In this regard, a recombinant gene was designed and codon optimized for the periplasmic expression of this protein. Then, gene subcloning was employed to insert the synthesized gene into the pET22b expression vector.
Thereafter, response surface methodology was employed to design 20 experiments to find the optimum points for isopropyl β-D-1-thiogalactopyranoside concentration, post-induction period, and cell density at induction (OD600). The expression fluctuations were assessed using the real-time polymerase chain reaction method. Our results indicated that the synthetic human interferon alpha-2b gene was successfully codon optimized and subcloned into the expression vector. The real-time polymerase chain reaction results revealed that the optimum levels of the selected parameters are 0.27 mM for isopropyl β-D-1-thiogalactopyranoside concentration, 7.98 h for the post-induction period, and 3.93 for cell density (OD600). These optimized conditions led to a 3.5-fold increase in rhIFNα2b expression, which is highly promising for large-scale rhIFNα2b overexpression. <s> BIB003
|
DoE advances the traditional OFAT approach: OFAT fails to account for variables interacting with, and influencing, each other and also requires significantly more experiments to converge on an optimum, all of which increases cost and time BIB001 . Figure 2 provides a brief comparison between DoE and OFAT. In recombinant protein expression, where various independent variables do not always act in isolation, it is likely that their interaction effects can significantly influence protein production BIB003 . Therefore, it is necessary to use a controlled set of tests that can examine the effects of many interacting factors to achieve optimal expression BIB002 .

In Figure 2, OFAT (a) is performed using more experiments than DoE (each black dot represents an experiment) and does not identify the true optimum (indicated as a red oval). However, with the DoE approach (b) fewer experiments are used and the likelihood of finding the optimum conditions (in red) for the process being studied is high. With DoE the combined or interaction effect of P1 and P2 on the response can be identified and measured. The ovals indicate production yields: blue indicates the lowest yields, whereas red indicates the highest yields, where the optimum is found. The DoE approach also identifies a pathway to the optimum response (indicated by the arrow).
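The interaction argument can be made concrete with a toy calculation. The response function below is invented purely for illustration; it shows how a 2^2 factorial recovers an A×B interaction that an OFAT sweep, holding one factor fixed, misreads entirely.

```python
# Toy demonstration with an invented response function (not data from any
# cited study): a 2^2 factorial detects an A x B interaction, while OFAT,
# which varies one factor at a time, is misled by it.

def response(a, b):
    # Hypothetical process: baseline 10, main effects of A and B, plus a
    # strong A x B interaction term.
    return 10 + 2 * a + 3 * b + 4 * a * b

# Full 2^2 factorial: the four corner runs in coded (-1/+1) units
runs = [(-1, -1), (+1, -1), (-1, +1), (+1, +1)]
ys = [response(a, b) for a, b in runs]        # -> [9, 5, 7, 19]

# Effect estimates from the usual contrasts (differences of averages)
effect_A  = (ys[1] + ys[3] - ys[0] - ys[2]) / 2
effect_B  = (ys[2] + ys[3] - ys[0] - ys[1]) / 2
effect_AB = (ys[0] + ys[3] - ys[1] - ys[2]) / 2
assert (effect_A, effect_B, effect_AB) == (4.0, 6.0, 8.0)

# OFAT: vary A while B stays at its low level. The interaction makes this
# conditional estimate wrong in both size and sign (-4 instead of +4).
ofat_effect_A = response(+1, -1) - response(-1, -1)
assert ofat_effect_A == -4
```

With only three OFAT runs the experimenter would conclude that raising A is harmful, when averaged over the levels of B it is in fact beneficial; the factorial design exposes this through the A×B contrast.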
|
The Goldilocks Approach: A Review of Employing Design of Experiments in Prokaryotic Recombinant Protein Production <s> Defining a DoE Workflow to Optimise Recombinant Protein Production <s> In selecting a method to produce a recombinant protein, a researcher is faced with a bewildering array of choices as to where to start. To facilitate decision-making, we describe a consensus 'what to try first' strategy based on our collective analysis of the expression and purification of over 10,000 different proteins. This review presents methods that could be applied at the outset of any project, a prioritized list of alternate strategies and a list of pitfalls that trip many new investigators. <s> BIB001
|
Employing DoE to optimise the production of a recombinant protein can be divided into two main work packages: initial screening and subsequent optimisation. To evaluate all the factors that influence a production process, it is initially necessary to carry out a wide-ranging experimental screening. This first screening step identifies all factors that significantly influence recombinant protein production BIB001 . The second step in the workflow is to use a DoE optimisation design to achieve optimum production, focusing only on the factors identified through the initial screening design. A variety of DoE software packages, such as MINITAB (Minitab Ltd., State College, PA, USA), JMP (SAS Institute, Cary, NC, USA) and Design-Expert (Science Plus Group, Groningen, the Netherlands), are commercially available and provide a variety of factorial designs depending upon the objective of the experiment. Regardless of the statistical package used, the main steps of a typical DoE workflow include planning the test, screening and optimisation (detailed schematically in Figure 3 ).

(5) The response data are analysed and visualised using plots for ease of data interpretation. At this stage, a reduced number of factors (i.e., the most influential) are retained for the subsequent optimisation phase. (6) Further optimisation can be carried out (via an optimisation DoE design).
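The reason the workflow separates screening from optimisation is combinatorial: the number of runs in a two-level full factorial grows exponentially with the factor count. This short sketch (illustrative only) enumerates the coded runs with the standard library:

```python
# Illustrative sketch: a two-level full factorial needs 2^k runs for k
# factors, which is why a dedicated screening design is used first to
# whittle many candidate factors down to a few influential ones.
from itertools import product

def full_factorial(n_factors):
    """All 2^n combinations of the coded levels -1 and +1."""
    return list(product((-1, +1), repeat=n_factors))

assert len(full_factorial(2)) == 4     # the classic 2^2 design
assert len(full_factorial(4)) == 16    # still practical to run directly
assert len(full_factorial(9)) == 512   # impractical: screen first instead
```

Screening designs such as the Plackett-Burman Design sidestep this growth by estimating main effects only, in far fewer runs, before the surviving factors enter a full optimisation design.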
|
The Goldilocks Approach: A Review of Employing Design of Experiments in Prokaryotic Recombinant Protein Production <s> Planning the Test; Selection of Factors and Associated Levels Influencing Recombinant Protein Production <s> Use of 4 agro-industrial by-products and organic materials as nitrogen sources for production of Aspergillus oryzae S2 α-amylase in liquid culture was investigated. The 2 agro-industrial by-products maltose and saccharose, and also lactose and starch, were individually evaluated for use as carbon sources. A Box-Behnken experimental design was used to determine optimal conditions for production of α-amylase. A maximum amylase activity of 750 U/mL was obtained at a temperature of 24°C, a urea concentration of 1 g/L, and a C/N ratio of 2. Laboratory-scale application of optimal conditions in a 7 L fermentor produced a final α-amylase activity of 770 U/mL after 3 days of batch cultivation. Addition of 10% starch to the culture medium every 12 h immediately after the stationary phase of cell growth led to a production yield of 1,220 U/mL at the end of fed-batch cultivation. <s> BIB001
|
The DoE workflow in protein production, like any other DoE process optimisation, starts with planning the test BIB001 . This involves defining the objective of the study and identifying the factors involved and their associated levels (i.e., high, central and low). Preliminary experiments are recommended when knowledge of the effects of factors on the experiment is not sufficient to set levels. The factors are input parameters that can be modified in the experiment and are referred to as the controllable factors. The levels of factors are fixed based on their working limits . The most popular experimental designs are two-level designs, although more levels can be used depending upon the type of design and the objective of the study. Table 2 depicts a two-level experimental design.

Table 2 . An example of a two-level experimental design having nine factors that are known to influence recombinant protein expression.
In this case the nine factors relate to two experimental components: media composition and induction conditions. When planning the screening phase, the selected factors (yeast extract, tryptone, glycerol, NaCl, inoculum size, IPTG concentration, induction temperature, incubation time and pH, labelled X1 to X9, respectively) and their associated levels (high, defined as +1, and low, defined as −1) are chosen to cover the intended experimental space (i.e., the productive range). The levels are defined by the range between the known working limits.
Factors | Low (−1) | High (+1)

The Goldilocks Approach: A Review of Employing Design of Experiments in Prokaryotic Recombinant Protein Production
Media composition

In general, for recombinant protein expression subjected to DoE, the most commonly selected factors relate to media composition and include components such as yeast extract BIB006 , K2HPO4, MgSO4 BIB002 , starch, glucose, peptone, NaCl, sucrose and glycerine BIB008 . For induction conditions, common factors are incubation time, incubation temperature, pH, agitation, and inoculum age and size BIB003 BIB001 ; induction period, induction temperature and culture inoculation concentration BIB005 BIB007 ; and optical density (OD) and isopropyl β-D-1-thiogalactopyranoside (IPTG) concentration BIB004 .
Screening Designs to Identify Factors that Significantly Affect Recombinant Protein Expression
Screening designs are used to devise a matrix from the factors and levels formulated in the planning stage BIB003 . By employing the statistical tools embedded in DoE software, screening designs establish the relationships between variables and responses; the interaction effects between variables on a given response can also be investigated BIB004 . In protein biotechnology, screening designs are mainly utilised to identify the media composition and culture condition factors that significantly influence protein production BIB005 . Various researchers have explored the effects of both media components BIB002 BIB005 BIB001 BIB006 and culture conditions BIB007 on protein expression. There are many different types of screening design, and the choice depends on the nature of the experiment and the objective of the study. The classical screening designs include Full Factorial Designs, Fractional Factorial Designs and Plackett-Burman Designs. Current DoE software, such as JMP from the SAS Institute, provides additional options such as Definitive Screening Designs and Custom Designs. The most common screening designs are compared in Table 3.
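The relationship a screening design establishes between a factor and the response is usually summarised by the factor's main effect: the mean response at the high level minus the mean response at the low level. A minimal sketch with three hypothetical factors and an entirely synthetic response (the yield numbers are invented for illustration):

```python
import itertools

# 2^3 full factorial screening matrix in coded units, three hypothetical factors
factors = ["yeast_extract", "IPTG", "temperature"]
design = list(itertools.product([-1, 1], repeat=len(factors)))

# Synthetic protein-yield responses, one per run (invented numbers)
response = [52, 61, 50, 63, 70, 82, 71, 84]

def main_effect(col):
    """Mean response at +1 minus mean response at -1 for factor `col`."""
    hi = [y for row, y in zip(design, response) if row[col] == 1]
    lo = [y for row, y in zip(design, response) if row[col] == -1]
    return sum(hi) / len(hi) - sum(lo) / len(lo)

for i, name in enumerate(factors):
    print(f"{name}: {main_effect(i):+.2f}")
```

With these invented data, the first and third factors show large effects while the second is negligible, which is exactly the kind of ranking a screening phase uses to decide which factors to carry into optimisation.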
Full Factorial Design
When little is known about the effects of the factors on a response, a full factorial design is recommended. This design includes all combinations of all factor levels and provides a predictive model that includes the main effects and all possible interactions BIB002 . It consists of two or more levels, with experimental runs encompassing all possible combinations of these levels across all factors. In a two-level full factorial design with k factors, 2^k experimental runs are required. Like other screening designs, a Full Factorial Design can include centre points, randomisation and blocking variables to improve the efficiency of the design BIB003 . This approach has been used successfully to screen for the most influential factors affecting recombinant protein production for a variety of proteins BIB001 BIB004 (see Table 4).
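The 2^k run count is what makes full factorials exhaustive but expensive: the nine-factor example of Table 2 would already need 512 runs at two levels. A quick sketch of generating the design matrix and its run count:

```python
import itertools

def full_factorial(k):
    """All level combinations for k two-level factors, in coded units."""
    return list(itertools.product([-1, 1], repeat=k))

for k in (2, 3, 9):
    print(k, "factors ->", len(full_factorial(k)), "runs")
```

The exponential growth in runs is the practical motivation for the fractional factorial and Plackett-Burman alternatives discussed next.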
Fractional Factorial Design (FFD)
FFD is a recommended screening design when a large number of factors are involved. This design reduces the initially large number of potential factors to a subset of the most influential ones and is represented using the notation 2_R^(k-p), where 2 represents the number of levels, k the number of factors, p the number of generated (aliased) columns, so that only 2^(k-p) runs are required, and R the resolution of the design. The resolution describes the degree to which the estimated main effects are aliased with the estimated interactions BIB002 BIB003 BIB001 .
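To make the 2^(k-p) construction concrete, here is a minimal Python sketch (the function name and generator tuples are illustrative, not from the review): each of the p extra columns is formed as a product of base-factor columns, so a 2^(3-1) design needs only four runs instead of eight.

```python
from itertools import product

def fractional_factorial(k, generators):
    """Two-level fractional factorial 2^(k-p): build a full factorial in the
    first k-p base factors, then derive each extra column from its generator
    (a tuple of base-factor indices whose product defines the new column)."""
    base = k - len(generators)
    runs = []
    for point in product([-1, 1], repeat=base):
        row = list(point)
        for gen in generators:
            col = 1
            for idx in gen:
                col *= row[idx]
            row.append(col)
        runs.append(row)
    return runs

# 2^(3-1) design with generator C = A*B: four runs instead of eight.
design = fractional_factorial(3, generators=[(0, 1)])
print(design)  # [[-1, -1, 1], [-1, 1, -1], [1, -1, -1], [1, 1, 1]]
```

The choice of which interactions generate the extra columns is what fixes the resolution R: here C is aliased with the AB interaction, giving a resolution III design.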
|
The Goldilocks Approach: A Review of Employing Design of Experiments in Prokaryotic Recombinant Protein Production <s> Plackett-Burman Designs (PBD) <s> The optimization of nutrient levels for the production of pristinamycins by Streptomyces pristinaespiralis CGMCC 0957 in submerged fermentation was carried out using the statistical methodologies based on the Plackett–Burman design, the steepest ascent method, and the central composite design (CCD). First, the Plackett–Burman design was applied to evaluate the influence of related nutrients in the medium. Soluble starch and MgSO4·7H2O were then identified as the most significant nutrients with a confidence level of 99%. Subsequently, the concentrations of the two nutrients were further optimized using response surface methodology of CCD, together with the steepest ascent method. Accordingly, a second-order polynomial regression model was finally fitted to the experimental data. By solving the regression equation from the model and analyzing the response surface, the optimal levels for soluble starch and MgSO4·7H2O were determined as 20.95 and 5.67g/L, respectively. Under the optimized medium, the yield of pristinamycins in the shake flask and 5-L bioreactor could reach 1.30 and 1.01g/L, respectively, which is the highest yield reported in literature to date. <s> BIB001 </s> The Goldilocks Approach: A Review of Employing Design of Experiments in Prokaryotic Recombinant Protein Production <s> Plackett-Burman Designs (PBD) <s> The production optimization of alpha-amylase (E.C.3.2.1.1) from Aspergillus oryzae CBS 819.72 fungus, using a by-product of wheat grinding (gruel) as sole carbon source, was performed with statistical methodology based on three experimental designs. The optimisation of temperature, agitation and inoculum size was attempted using a Box-Behnken design under the response surface methodology. 
The screening of nineteen nutrients for their influence on alpha-amylase production was achieved using a Plackett-Burman design. KH(2)PO(4), urea, glycerol, (NH(4))(2)SO(4), CoCl(2), casein hydrolysate, soybean meal hydrolysate, MgSO(4) were selected based on their positive influence on enzyme formation. The optimized nutrients concentration was obtained using a Taguchi experimental design and the analysis of the data predicts a theoretical increase in the alpha-amylase expression of 73.2% (from 40.1 to 151.1 U/ml). These conditions were validated experimentally and revealed an enhanced alpha-amylase yield of 72.7%. <s> BIB002 </s> The Goldilocks Approach: A Review of Employing Design of Experiments in Prokaryotic Recombinant Protein Production <s> Plackett-Burman Designs (PBD) <s> Summary Response surface methodology was employed for the optimization of different nutritional and physical parameters for the production of laccase by the filamentous bacteria Streptomyces psammoticus MTCC 7334 in submerged fermentation. Initial screening of production parameters was performed using a Plackett – Burman design and the variables with statistically significant effects on laccase production were identified. Incubation temperature, incubation period, agitation rate, concentrations of yeast extract, MgSO 4 7H 2 O, and trace elements were found to influence laccase production significantly. These variables were selected for further optimization studies using a Box-Behnken design. The statistical optimization by response surface methodology resulted in a three-fold increase in the production of laccase by S. psammoticus MTCC 7334. <s> BIB003 </s> The Goldilocks Approach: A Review of Employing Design of Experiments in Prokaryotic Recombinant Protein Production <s> Plackett-Burman Designs (PBD) <s> OBJECTIVE ::: To isolate marine bacteria, statistically optimize them for maximum asparaginase production. 
METHODS: In the present study, statistically based experimental designs were applied to maximize the production of L-asparaginase from bacterial strain of Bacillus cereus (B. cereus) MAB5 (HQ675025) isolated and identified by 16S rDNA sequencing from mangroves rhizosphere sediment. RESULTS: Plackett-Burman design was used to identify the interactive effect of the eight variables viz. yeast extract, soyabean meal, glucose, magnesium sulphate, KH(2)PO(4), wood chips, aspargine and sodium chloride. All the variables are denoted as numerical factors and investigated at two widely spaced intervals designated as -1 (low level) and +1 (high level). The effect of individual parameters on L-asparaginase production was calculated. Soyabean meal, aspargine, wood chips and sodium chloride were found to be the significant among eight variables. The maximum amount of L-asparaginase produced (51.54 IU/mL) from the optimized medium containing soyabean meal (6.2828 g/L), aspargine (5.5 g/L), wood chips (1.3838 g/L) and NaCl (4.5354 g/L). CONCLUSIONS: The study revealed that, it is useful to produce the maximum amount of L-asparaginase from B. cereus MAB5 for the treatment of various infections and diseases. <s> BIB004 </s> The Goldilocks Approach: A Review of Employing Design of Experiments in Prokaryotic Recombinant Protein Production <s> Plackett-Burman Designs (PBD) <s> Bacillus amyloliquefaciens H11 has been proven as a potential producer of extracellular protease with capacity of hydrolyzing gelatin. Therefore, the cultivation conditions for the enhanced production of gelatinolytic enzyme from a newly isolated B. amyloliquefaciens H11 was investigated using Plackett-Burman design and response surface methodology. Three significant variables (agitation speed, cultivation time and fish gelatin concentration) were selected for optimization.
Increase in speed of agitation and fish gelatin concentration markedly increased the production of gelatinolytic enzyme. Gelatin concentration and cultivation time showed significant interaction and both variables played the important role in enzyme production. The maximal gelatinolytic enzyme production in the basal medium was 2,801 U/mL under the following optimal condition: agitation speed of 234 rpm, 8.36 g/L of fish gelatin and 31 h of cultivation. The predicted model fitted well with the experimental results (2,734 ± 101 U/mL). A 14-fold increase in yield was achieved, compared with the basal condition (212 U/mL). Thus, cultivation of B. amyloliquefaciens H11 under the optimal condition could enhance the production of gelatinolytic enzyme effectively. <s> BIB005
|
PBD is often used as an alternative to fractional and full factorial designs because it can screen a large number of factors in very few runs (a multiple of four) and thereby strengthen the estimation of the main effects, which might otherwise be compromised by the aliasing gaps of fractional designs or the prohibitive run counts of full factorial designs BIB001 BIB002 BIB003 BIB005 BIB004 .
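As an illustration of how PBD achieves this economy, the sketch below builds the classic 12-run Plackett-Burman design from its published generator row (eleven cyclic shifts plus a final row of all -1); the helper name is our own. The assertion demonstrates the defining property: every pair of factor columns is orthogonal, so eleven main effects can be screened in just twelve runs.

```python
# Published first row of the 12-run Plackett-Burman design (+ = 1, - = -1).
GENERATOR = [1, 1, -1, 1, 1, 1, -1, -1, -1, 1, -1]

def plackett_burman_12():
    """12-run PBD: eleven cyclic shifts of the generator row, then all -1."""
    rows = [GENERATOR[-i:] + GENERATOR[:-i] for i in range(11)]  # i=0 is the generator itself
    rows.append([-1] * 11)
    return rows

design = plackett_burman_12()

# Defining PB property: all 11 factor columns are balanced and pairwise
# orthogonal, so each main effect is estimated independently of the others.
for i in range(11):
    assert sum(row[i] for row in design) == 0
    for j in range(i + 1, 11):
        assert sum(row[i] * row[j] for row in design) == 0
print(len(design), "runs,", len(design[0]), "factors")
```

Interactions are not estimable here: they are deliberately sacrificed (confounded with main effects) to keep the run count minimal, which is exactly the screening trade-off described above.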
|
The Goldilocks Approach: A Review of Employing Design of Experiments in Prokaryotic Recombinant Protein Production <s> Definitive Screening Design (DSD) and Custom Design (CD) <s> Protein hydrolysates were produced from shrimp waste mainly comprising head and shell of Penaeus monodon by enzymatic hydrolysis for 90 min using four microbial proteases (Alcalase, Neutrase, Protamex, Flavourzyme) where PR(%) and DH (%) of respective enzymes were compared to select best of the lot. Alcalase, which showed the best result, was used to optimize hydrolysis conditions for shrimp waste hydrolysis by response surface methodology using a central composite design. A model equation was proposed to determine effects of temperature, pH, enzyme/substrate ratio and time on DH where optimum values found to be 59.37 °C, 8.25, 1.84% and 84.42 min. for maximum degree of hydrolysis 33.13% respectively. The model showed a good fit in experimental data because 92.13% of the variability within the range of values studied could be explained by it. The protein hydrolysate obtained contained high protein content (72.3%) and amino acid (529.93 mg/gm) of which essential amino acid and flavour amino acid were was 54.67-55.93% and 39.27-38.32% respectively. Protein efficiency ratio (PER) (2.99) and chemical score (1.05) of hydrolysate was suitable enough to recommend as a functional food additive. <s> BIB001 </s> The Goldilocks Approach: A Review of Employing Design of Experiments in Prokaryotic Recombinant Protein Production <s> Definitive Screening Design (DSD) and Custom Design (CD) <s> In this paper, we develop column-augmented DSDs that can accommodate any number of two-level qualitative factors using two methods. <s> BIB002
|
DSD and CD are a class of screening designs with potential applications in recombinant protein expression for assessing the impact of a large number of factors on a given response. DSD has recently been reported to be particularly advantageous because it allows estimation not only of the main effects of individual components but also of the interactions between components and of factors with non-linear effects such as quadratic effects (an interaction term where a factor interacts with itself); all executed with the minimum number of experimental runs BIB002 . CD enables a design to be tailored while simultaneously minimising resource usage: it is highly flexible and more cost-effective than other screening designs, makes the best use of the experimental budget and tackles a wide range of challenges, with the capability to include features such as centre points and replicates. In most cases, however, this design allows for the estimation of main effects only. Table 4 summarises the most common screening designs, along with their roles in identifying the most influential independent factors, in recombinant protein production. The screening process identifies the most influential factors on the process under investigation (i.e., X1 and X6 in the example shown in Table 5 ) and thus paves the way for effective optimisation by reducing the number of factors to be optimised in the third work package of the DoE workflow BIB001 .
|
The Goldilocks Approach: A Review of Employing Design of Experiments in Prokaryotic Recombinant Protein Production <s> Optimisation Designs to Maximise Recombinant Protein Production in Prokaryotic Systems <s> Response surface methodology (RSM) is a collection of statistical design and numerical optimization techniques used to optimize processes and product designs. The original work in this area dates from the 1950s and has been widely used, especially in the chemical and process industries. The last 15 years have seen the widespread application of RSM and many new developments. In this review paper we focus on RSM activities since 1989. We discuss current areas of research and mention some areas for future research. <s> BIB001 </s> The Goldilocks Approach: A Review of Employing Design of Experiments in Prokaryotic Recombinant Protein Production <s> Optimisation Designs to Maximise Recombinant Protein Production in Prokaryotic Systems <s> Response surface methodology employing central composite design (CCD) was used to optimize fermentation medium for the production of cellulase-free, alkaline xylanase from Streptomyces violaceoruber under submerged fermentation. The design was employed by selecting wheat bran, peptone, beef extract, incubation time and agitation as model factors. A second-order quadratic model and response surface method showed that the optimum conditions for xylanase production (wheat bran 3.5 % (w/v), peptone 0.8 % (w/v), beef extract 0.8 % (w/v), incubation time 36 h and agitation 250 rpm) results in 3.0-fold improvement in alkaline xylanase production (1500.0 IUml−1) as compared to initial level (500.0 IUml−1) after 36 h of fermentation, whereas its value predicted by the quadratic model was 1347 IUml−1. Analysis of variance (ANOVA) showed a high coefficient of determination (R2) value of 0.9718, ensuring a satisfactory adjustment of the quadratic model with the experimental data. 
<s> BIB002 </s> The Goldilocks Approach: A Review of Employing Design of Experiments in Prokaryotic Recombinant Protein Production <s> Optimisation Designs to Maximise Recombinant Protein Production in Prokaryotic Systems <s> Protein hydrolysates were produced from shrimp waste mainly comprising head and shell of Penaeus monodon by enzymatic hydrolysis for 90 min using four microbial proteases (Alcalase, Neutrase, Protamex, Flavourzyme) where PR(%) and DH (%) of respective enzymes were compared to select best of the lot. Alcalase, which showed the best result, was used to optimize hydrolysis conditions for shrimp waste hydrolysis by response surface methodology using a central composite design. A model equation was proposed to determine effects of temperature, pH, enzyme/substrate ratio and time on DH where optimum values found to be 59.37 °C, 8.25, 1.84% and 84.42 min. for maximum degree of hydrolysis 33.13% respectively. The model showed a good fit in experimental data because 92.13% of the variability within the range of values studied could be explained by it. The protein hydrolysate obtained contained high protein content (72.3%) and amino acid (529.93 mg/gm) of which essential amino acid and flavour amino acid were was 54.67-55.93% and 39.27-38.32% respectively. Protein efficiency ratio (PER) (2.99) and chemical score (1.05) of hydrolysate was suitable enough to recommend as a functional food additive. <s> BIB003 </s> The Goldilocks Approach: A Review of Employing Design of Experiments in Prokaryotic Recombinant Protein Production <s> Optimisation Designs to Maximise Recombinant Protein Production in Prokaryotic Systems <s> The degree of hydrolysis (DH) and angiotensin I-converting enzyme (ACE)-inhibitory activity of vital wheat gluten (VWG) hydrolyzed using Alcalase were investigated using Box-Behnken response surface methodology (RSM). The mean responses were fitted to a second order polynomial to obtain regression equations. 
The enzyme-substrate ratio and the hydrolysis time increased the DH significantly (p<0.05). The substrate concentration was the only significant linear term leading to an increase in ACE-inhibitory activity. The optimized conditions of a substrate concentration of 5.04%, an enzyme-substrate ratio 5.94%, and a hydrolysis time 30.79 min gave a point prediction of a 12.74% DH and 82.28% ACE-inhibitory activity. Analytical results from confirmatory experiment were a 12.22%±0.5 DH and a 78.93%±1.07 ACE-inhibitory activity. The optimized conditions of the study provide useful information to the functional food and beverage industries to enhance the anti-hypertensive activities of peptides from VWG. <s> BIB004 </s> The Goldilocks Approach: A Review of Employing Design of Experiments in Prokaryotic Recombinant Protein Production <s> Optimisation Designs to Maximise Recombinant Protein Production in Prokaryotic Systems <s> Background: The present study focused on utilization of agrowaste byproducts generated from oil mill for L-asparaginase enzyme production using Serratia marcescens under solid state fermentation. Classical and statistical methods were employed to optimize the process variables and the results were compared. Results: The classical one factor at a time (OFAT) and response surface methodology (RSM) are employed to optimize the fermentation process. When used as the sole carbon source in SSF, coconut oil cake (COC) showed maximum enzyme production. The optimal values of substrate amount, initial moisture content, pH and temperature were found to be 6 g, 40%, 6 and 35°C respectively under classical optimization method with maximum enzyme activity of 3.87 (U gds -1 ). Maximum enzyme activity of 5.86 U gds -1 was obtained at the predicted optimal conditions of substrate amount 7.6 g of COC, initial moisture content of substrate 50%, temperature 35.5°C and pH 7.4. 
Validation results proved that a good relation existed between the experimental and the predicted model. Conclusions: RSM optimization approach enhances the enzyme production to 33% when compared to classical method. Utilization of coconut oil cake as a low cost substrate in SSF for L-asparaginase production makes the process economical and also reduces the environmental pollution by converting the oil mill solid waste into a useful bioproduct. <s> BIB005 </s> The Goldilocks Approach: A Review of Employing Design of Experiments in Prokaryotic Recombinant Protein Production <s> Optimisation Designs to Maximise Recombinant Protein Production in Prokaryotic Systems <s> A natural bacterial strain identified as Bacillus amyloliquefaciens MBAA3 using 16S rDNA partial genome sequencing has been studied for optimization of cellulase production. Statistical screening of media components for production of cellulase by B. amyloliquefaciens MBAA3 was carried out by Plackett–Burman design. Plackett–Burman design showed CMC, MgSO4 and pH as significant components influencing the cellulase production from the media components screened by Plackett-Burman fractional factorial design. The optimum concentrations of these significant parameters were determined employing the response surface central composite design, involving three factors and five levels was adopted to acquire the best medium for the production of cellulase enzyme revealed concentration of CMC (1.84 g), MgSO4 (0.275 g), and pH (8.5) in media for highest enzyme production. Response surface counter plots revealed that middle level of MgSO4 and middle level of CMC, higher level of CMC and lower level of pH and higher level of MgSO4 with lower level of pH increase the production of cellulase. After optimization cellulase activity increased by 6.81 fold. Presence of cellulase gene in MBAA3 was conformed by the amplification of genomic DNA of MBAA3. A PCR product of cellulase gene of 1500 bp was successfully amplified. 
The amplified gene was conformed by sequencing the amplified product and sequence was deposited in the gene bank under the accession number KF929416.Graphical AbstractResponse surface graph showing interaction effects between concentration of a CMC and MgSO4. b pH and CMC. c MgSO4 and pH <s> BIB006 </s> The Goldilocks Approach: A Review of Employing Design of Experiments in Prokaryotic Recombinant Protein Production <s> Optimisation Designs to Maximise Recombinant Protein Production in Prokaryotic Systems <s> The aim of this paper is to review methods of designing screening experiments, ranging from designs originally developed for physical experiments to those especially tailored to experiments on numerical models. The strengths and weaknesses of the various designs for screening variables in numerical models are discussed. First, classes of factorial designs for experiments to estimate main effects and interactions through a linear statistical model are described, specifically regular and nonregular fractional factorial designs, supersaturated designs and systematic fractional replicate designs. Generic issues of aliasing, bias and cancellation of factorial effects are discussed. Second, group screening experiments are considered including factorial group screening and sequential bifurcation. Third, random sampling plans are discussed including Latin hypercube sampling and sampling plans to estimate elementary effects. Fourth, a variety of modelling methods commonly employed with screening designs are briefly described. Finally, a novel study demonstrates six screening methods on two frequently-used exemplars, and their performances are compared. 
<s> BIB007 </s> The Goldilocks Approach: A Review of Employing Design of Experiments in Prokaryotic Recombinant Protein Production <s> Optimisation Designs to Maximise Recombinant Protein Production in Prokaryotic Systems <s> Use of 4 agro-industrial by products and organic materials as nitrogen sources for production of Aspergillus oryzae S2 α-amylase in liquid culture was investigated. The 2 agro-industrial byproducts maltose and saccharose, and also lactose and starch were individually evaluated for use as carbon sources. A Box-Behnken experimental design was used to determine optimal conditions for production of α-amylase. A maximum amylase activity of 750 U/mL was obtained at a temperature of 24°C, a urea concentration of 1 g/L, and a C/N ratio of 2. Laboratory scale application of optimal conditions in a 7 L fermentor produced a final α-amylase activity of 770 U/mL after 3 days of batch cultivation. Addition of 10% starch to the culture medium each 12 h immediately after the stationary phase of cell growth led to a production yield of 1,220 U/mL at the end of fed-batch cultivation. <s> BIB008
|
The rationale of screening designs lies in identifying the variables that are statistically significant in influencing protein production among a large number of potentially important variables BIB005 BIB007 . Table 5 illustrates how screening analysis identifies statistically significant factors based on their effect and probability values. The screening process identifies the most influential factors on the process under investigation (i.e., X1 and X6 in the example shown in Table 5 ) and thus paves the way for effective optimisation by reducing the number of factors to be optimised in the third work package of the DoE workflow BIB003 . As a collection of statistical design and numerical optimisation techniques BIB001 , optimisation uses the reduced number of variables identified in the previous screening process and focuses on finding the variable levels that result in an optimal yield BIB004 BIB006 . Figure 4 describes the benefit of carrying out an optimisation process after a screening process has identified a small number of key variables.
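The effect-based screening logic of Table 5 can be sketched as follows, using made-up data: a saturated eight-run two-level design for seven factors (columns A, B, C and their products), with an assumed noiseless response in which X1 and X6 deliberately carry the large effects. Each main effect is the difference between the mean response at the factor's high and low levels.

```python
from itertools import product

# Saturated 8-run two-level design for seven factors X1..X7:
# three base columns (A, B, C) plus their products.
rows = []
for a, b, c in product([-1, 1], repeat=3):
    rows.append([a, b, c, a * b, a * c, b * c, a * b * c])

# Assumed (made-up) response: strong X1 and X6 effects, a tiny X3 effect.
y = [10 + 4 * r[0] + 3 * r[5] + 0.2 * r[2] for r in rows]

# Main effect of each factor: mean(y at high level) - mean(y at low level).
effects = {}
for j in range(7):
    hi = [yi for yi, r in zip(y, rows) if r[j] == 1]
    lo = [yi for yi, r in zip(y, rows) if r[j] == -1]
    effects[f"X{j + 1}"] = sum(hi) / len(hi) - sum(lo) / len(lo)

ranked = sorted(effects, key=lambda k: abs(effects[k]), reverse=True)
print(ranked[:2])  # ['X1', 'X6'] dominate, so only they go forward to optimisation
```

In a real screening study the effects would be accompanied by p-values (as in Table 5) to decide significance; here the noiseless toy response makes the ranking alone sufficient to illustrate the idea.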
Response Surface Methodology (RSM) is the most popular optimisation method. It consists of mathematical and statistical techniques used to build empirical models capable of exploring the process space and studying the relationship between the response and the process variables to find the optimal response BIB008 BIB006 BIB002 . In general, for a given number of factors, RSM requires more runs than screening designs; the number of factors to consider should therefore first be reduced through an appropriate screening process. Central composite designs (CCD) and Box-Behnken designs (BBD) are the two major response surface designs commonly used in recombinant protein optimisation.
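A minimal sketch of the RSM fitting step, using assumed noiseless data from a hypothetical two-factor CCD: a full second-order polynomial is fitted by least squares, and the stationary point of the fitted surface gives the predicted optimum in coded units.

```python
import numpy as np

# Hypothetical two-factor CCD in coded units (illustrative, not from the review).
X = np.array([
    [-1, -1], [1, -1], [-1, 1], [1, 1],              # factorial points
    [-1.414, 0], [1.414, 0], [0, -1.414], [0, 1.414],  # axial points
    [0, 0], [0, 0], [0, 0],                          # centre points
])
# Assumed noiseless response from a made-up true quadratic surface.
y = 50 + 5*X[:, 0] + 3*X[:, 1] - 4*X[:, 0]**2 - 2*X[:, 1]**2 + 1.5*X[:, 0]*X[:, 1]

# Second-order model matrix: 1, x1, x2, x1^2, x2^2, x1*x2.
A = np.column_stack([np.ones(len(X)), X[:, 0], X[:, 1],
                     X[:, 0]**2, X[:, 1]**2, X[:, 0]*X[:, 1]])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
b0, b1, b2, b11, b22, b12 = coef

# Stationary point of the fitted surface: solve gradient = 0.
B = np.array([[2*b11, b12], [b12, 2*b22]])  # negative definite here -> a maximum
xs = np.linalg.solve(B, -np.array([b1, b2]))
print(xs)  # optimum in coded units, ~ [0.824, 1.059]
```

The coded optimum would then be converted back to natural units (temperatures, concentrations, and so on), which is the step reported as the "optimal conditions" in the studies cited above.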
|
The Goldilocks Approach: A Review of Employing Design of Experiments in Prokaryotic Recombinant Protein Production <s> Central Composite Design (CCD) <s> Abstract A human interferon beta expressed from a synthetic gene was produced in high cell density culture by recombinant Escherichia coli using an optimized linear feeding strategy. The optimal induction conditions to be determined consisted of inducer concentration and dry cell weight at the time of induction. For this purpose, the response surface methodology was applied. Under optimal conditions, the maximum interferon beta concentration and overall productivity of 2.2 g/l and 0.151 g/l h were obtained, respectively, as the highest amounts ever reported for this protein. Two optimal ranges of dry cell weight and IPTG concentration consisting of 50 g/l and 2.54 mM, and 70 g/l and 1.29 mM were predicted, respectively, at which maximum productivity was achieved. By using a novel feeding strategy with linear variation of specific growth rate during high cell density fermentation, the maximum biomass productivity of 5.037 g/l h was obtained in a defined medium during 16 h. Then, by applying the optimum induction conditions, we accomplished an increase in overall productivity by more than three-fold over the central point. This is the first report showing the high production of human interferon beta by a synthetic gene in a simple fed-batch high cell density culture of recombinant E. coli in a defined medium. <s> BIB001 </s> The Goldilocks Approach: A Review of Employing Design of Experiments in Prokaryotic Recombinant Protein Production <s> Central Composite Design (CCD) <s> Fermentation conditions were statistically optimized for producing extracellular xylanase by Aspergillus niger SL-05 using apple pomace and cotton seed meal. The primary study shows that culture medium with a 1:1 ratio of apple pomace and cotton seed meal (carbon and nitrogen sources) yielded maximal xylanase activity. 
Three significant factors influencing xylanase production were identified as urea, KH(2)PO(4), and initial moisture content using Plackett-Burman design study. The effects of these three factors were further investigated using a design of rotation-regression-orthogonal combination. The optimized conditions by response surface analysis were 2.5% Urea, 0.09% KH(2)PO(4), and 62% initial moisture content. The analysis of variance indicated that the established model was significant (P < 0.05), while the lack of fit was not significant. Under the optimized conditions, the model predicted 4,998 IU/g dry content, whereas validation experiments produced an enzymatic activity of xylanase at 5,662 IU/g dry content after 60 h fermentation. This study innovatively developed a fermentation medium and process to utilize inexpensive agro-industrial wastes to produce a high yield of xylanase. <s> BIB002 </s> The Goldilocks Approach: A Review of Employing Design of Experiments in Prokaryotic Recombinant Protein Production <s> Central Composite Design (CCD) <s> Designing an experiment to fit a response surface model typically involves selecting among several candidate designs. There are often many competing criteria that could be considered in selecting the design, and practitioners are typically forced to make trade-offs between these objectives when choosing the final design. Traditional alphabetic optimality criteria are often used in evaluating and comparing competing designs. These optimality criteria are single-number summaries for quality properties of the design such as the precision with which the model parameters are estimated or the uncertainty associated with prediction. Other important considerations include the robustness of the design to model misspecification and potential problems arising from spurious or missing data.
Several qualitative and quantitative properties of good response surface designs are discussed, and some of their important trade-offs are considered. Graphical methods for evaluating design performance for several important response surface problems are discussed and we show how these techniques can be used to compare competing designs. These graphical methods are generally superior to the simplistic summaries of alphabetic optimality criteria. Several special cases are considered, including robust parameter designs, split-plot designs, mixture experiment designs, and designs for generalized linear models. <s> BIB003 </s> The Goldilocks Approach: A Review of Employing Design of Experiments in Prokaryotic Recombinant Protein Production <s> Central Composite Design (CCD) <s> The purpose of this article is to use statistical Plackett–Burman and Box–Wilson response surface methodology to optimize the medium components and, thus, improve chitinase production by Streptomyces griseorubens C9. This strain was previously isolated and identified from a semi-arid soil of Laghouat region (Algeria). First, syrup of date, colloidal chitin, yeast extract and K2HPO4, KH2PO4 were proved to have significant effects on chitinase activity using the Plackett–Burman design. Then, an optimal medium was obtained by a Box–Wilson factorial design of response surface methodology in liquid culture. Maximum chitinase production was predicted in medium containing 2% colloidal chitin, 0.47% syrup of date, 0.25 g/l yeast extract and 1.81 g/l K2HPO4, KH2PO4 using response surface plots of the STATISTICA software v.12.0. <s> BIB004 </s> The Goldilocks Approach: A Review of Employing Design of Experiments in Prokaryotic Recombinant Protein Production <s> Central Composite Design (CCD) <s> Abstract Coal is the world’s most abundant energy source because of its abundance and relatively low cost. 
Due to the scarcity in the supply of high-grade coal, it is necessary to use low-grade coal for fulfilling energy demands of modern civilization. However, due to its high ash and moisture content, low-grade coal exerts the substantial impact on their consumption like pyrolysis, liquefaction, gasification and combustion process. The present research aimed to develop the efficient technique for the production of clean coal by optimizing the operating parameters with the help of response surface methodology. The effect of three independent variables such as hydrofluoric acid (HF) concentration (10–20% by volume), temperature (60–100 °C), and time (90–180 min), for ash reduction from the low-grade coal was investigated. A quadratic model was proposed to correlate the independent variables for maximum ash reduction at the optimum process condition by using central composite design (CCD) method. The study reveals that HF concentration was the most effective parameter for ash reduction in comparison with time and temperature. It may be due to the higher F -statistics value for HF concentration, which effects to large extent of ash reduction. The characterization of coal was evaluated by Fourier transform infrared spectroscopy (FTIR) analysis and Field-emission scanning electron microscopy with energy-dispersive X-ray (FESEM-EDX) analysis for confirmation of the ash reduction. <s> BIB005
|
CCDs are favoured in process optimisation because they determine the coefficients of a second-degree polynomial, fitting a full quadratic model during response surface analysis BIB004 . CCD has been widely used in optimising protein production processes, specifically addressing the aim of increasing productivity and solubility BIB001 . There are different types of central composite designs, such as uniform precision and orthogonal/block designs. However, a common standard characteristic is the number of runs per design BIB003 , which depends on the number of factors (see Table 6 ). Central composite uniform precision designs provide protection against bias in the regression coefficients, while central composite orthogonal designs can be used to avoid correlations between the coefficients of variables BIB002 . Table 6 . Common CCD components and the possible total number of runs. Factorial, axial and central points are the main components of a typical CCD, and the total number of runs is dictated by the number of factors being tested. As the number of factors increases, the number of each component's points increases, and so does the total number of runs. In some cases, CCDs do not contain axial points, especially when estimation of the variance of the model prediction is not required BIB005 .
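The component counts described above can be sketched in a few lines. The following is a minimal illustration (not from the review, and assuming a full factorial core and a fixed number of centre-point replicates, both of which vary in practice) of how the factorial, axial and centre points of a full CCD add up to the totals of the kind listed in Table 6:

```python
# Hypothetical sketch: run counts for a full central composite design.
# A full CCD has 2**k factorial points, 2*k axial points and Cp
# centre-point replicates; Cp is a design choice and varies by software.
def ccd_runs(k: int, centre_points: int = 6) -> dict:
    factorial = 2 ** k       # corner (factorial) points
    axial = 2 * k            # star (axial) points
    return {
        "factorial": factorial,
        "axial": axial,
        "centre": centre_points,
        "total": factorial + axial + centre_points,
    }

for k in (2, 3, 4):
    print(k, "factors:", ccd_runs(k))
```

With 4 factors and 7 centre-point replicates, for example, this gives 16 + 8 + 7 = 31 runs, matching the 31-run CCD figure quoted later in this review.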
|
The Goldilocks Approach: A Review of Employing Design of Experiments in Prokaryotic Recombinant Protein Production <s> Box Behnken Design (BBD) <s> Application of central composite design for the optimization of photo-destruction of a textile dye using UV/S2O82- process The photooxidative destruction of C. I. Basic Red 46 (BR46) by UV/S2O82- process is presented. Central Composite Design (CCD) was employed to optimize the effects of operational parameters on the photooxidative destruction efficiency. The variables investigated were the initial dye and S2O82- concentrations, reaction time and distance of the solution from UV lamp. The predicted values of the photodestruction efficiency were found to be in good agreement with the experimental values (R2 = 0.9810, Adjusted R2 = 0.9643). The results of the optimization predicted by the model showed that the maximum decolorization efficiency (>98%) was achieved at the optimum conditions of the reaction time 10 min, initial dye concentration 10 mg/l, initial peroxydisulfate concentration 1.5 mmol/l and distance of UV lamp from the solution 6 cm. The figure-of-merit electrical energy per order (EEo) was employed to estimate the electrical energy consumption and related treatment costs. <s> BIB001 </s> The Goldilocks Approach: A Review of Employing Design of Experiments in Prokaryotic Recombinant Protein Production <s> Box Behnken Design (BBD) <s> Protease producing Streptomyces sp. A6 was isolated from intertidal zone of the coast of Diu (Gujarat, India). Plackett–Burman method was applied to identify important factors (shrimp waste, FeCl3, ZnSO4 and pH) influencing protease production by Streptomyces sp. A6. Further optimization was done by response surface methodology using central composite design. The concentrations of medium components for higher protease production as optimized using the above approach were (g l−1): Shrimp waste, 14; FeCl3, 0.035; ZnSO4, 0.065 and pH, 8.0. 
This statistical optimization approach led to production of 129.02 ± 2.03 U ml−1 of protease which was 4.96 fold higher compared to that obtained using the unoptimized medium. The protease production was scaled to 3 l in a 5-l bench fermenter using optimized medium which further increased the production by 63.4%. Deproteinization and chitin recovery obtained at the end of fermentation was 85.12 ± 4.7 and 70.58 ± 1.33%, respectively. The present study is the first report on statistical optimization of medium components for production of protease by Streptomyces species using cheaper raw material such as shrimp waste. The study also explored the possibility Streptomyces sp. A6 for reclamation of shrimp wastes. <s> BIB002 </s> The Goldilocks Approach: A Review of Employing Design of Experiments in Prokaryotic Recombinant Protein Production <s> Box Behnken Design (BBD) <s> Anti-lipopolysaccharide factors (ALFs) are important antimicrobial peptides that are isolated from some aquatic species. In a previous study, we isolated ALF genes from Chinese mitten crab, Eriocheir sinensis. In this study, we optimized the production of a recombinant ALF by expressing E. sinensis ALF genes in Escherichia coli maintained in shake-flasks. In particular, we focused on optimization of both the medium composition and the culture condition. Various medium components were analyzed by the Plackett-Burman design, and two significant screened factors, (NH4)2SO4 and KH2PO4, were further optimized via the central composite design (CCD). Based on the CCD analysis, we investigated the induction start-up time, the isopropylthio-D-galactoside (IPTG) concentration, the post-induction time, and the temperature by response surface methodology. 
We found that the highest level of ALF fusion protein was achieved in the medium containing 1.89 g/L (NH4)2SO4 and 3.18 g/L KH2PO4, with a cell optical density of 0.8 at 600 nm before induction, an IPTG concentration of 0.5 mmol/L, a post-induction temperature of 32.7°C, and a post-induction time of 4 h. Applying the whole optimization strategy using all optimal factors improved the target protein content from 6.1% (without optimization) to 13.2%. We further applied the optimized medium and conditions in high cell density cultivation, and determined that the soluble target protein constituted 10.5% of the total protein. Our identification of the economic medium composition, optimal culture conditions, and details of the fermentation process should facilitate the potential application of ALF for further research. <s> BIB003 </s> The Goldilocks Approach: A Review of Employing Design of Experiments in Prokaryotic Recombinant Protein Production <s> Box Behnken Design (BBD) <s> Escherichia coli has been the pioneering host for recombinant protein production, since the original recombinant DNA procedures were developed using its genetic material and infecting bacteriophages. As a consequence, and because of the accumulated know-how on E. coli genetics and physiology and the increasing number of tools for genetic engineering adapted to this bacterium, E. coli is the preferred host when attempting the production of a new protein. <s> BIB004 </s> The Goldilocks Approach: A Review of Employing Design of Experiments in Prokaryotic Recombinant Protein Production <s> Box Behnken Design (BBD) <s> The supply of many valuable proteins that have potential clinical or industrial use is often limited by their low natural availability. With the modern advances in genomics, proteomics and bioinformatics, the number of proteins being produced using recombinant techniques is exponentially increasing and seems to guarantee an unlimited supply of recombinant proteins. 
The demand of recombinant proteins has increased as more applications in several fields become a commercial reality. Escherichia coli (E. coli) is the most widely used expression system for the production of recombinant proteins for structural and functional studies. However, producing soluble proteins in E. coli is still a major bottleneck for structural biology projects. One of the most challenging steps in any structural biology project is predicting which protein or protein fragment will express solubly and purify for crystallographic studies. The production of soluble and active proteins is influenced by several factors including expression host, fusion tag, induction temperature and time. Statistical designed experiments are gaining success in the production of recombinant protein because they provide information on variable interactions that escape the "one-factor-at-a-time" method. Here, we review the most important factors affecting the production of recombinant proteins in a soluble form. Moreover, we provide information about how the statistical design experiments can increase protein yield and purity as well as find conditions for crystal growth. <s> BIB005 </s> The Goldilocks Approach: A Review of Employing Design of Experiments in Prokaryotic Recombinant Protein Production <s> Box Behnken Design (BBD) <s> Enzymatic hydrolysis is the most crucial step in bioconversion of lignocellulosic biomass to ethanol as efficient conversion of polymers to fermentable sugars determines final ethanol concentration. Cellulase complex used for commercial biomass hydrolysis is mostly derived from Trichoderma sp. and contains β-glucosidase <1% of total proteins, a ratio much lower than required for optimum saccharification. Therefore, supplementing cellulases with exogenous β-glucosidase having desired properties and bioprospecting organisms producing such enzyme is an important activity. 
β-glucosidase producing yeast Rhodotorula glutinis was isolated from decaying vegetables. The β-glucosidase enzyme was constitutively expressed on the cell surface. Addition of surfactant to the culture medium or sonication could not release cell-associated β-glucosidase enzyme. While cellulose and glucose induced high levels of β-glucosidase activity, unusual stimulation of β-glucosidase production was observed with Cellobiose and Soybean meal additive in minimal medium. The enzyme had temperature optimum of 50 °C and pH 6.0–6.5 and showed high glucose tolerance ability as 38.62% activity was retained even at 1.2 M glucose concentration. Culture medium for β-glucosidase production was optimised using Response Surface Methodology (RSM) with Box–Behnken design. The optimised predicted values for the three responses: extracellular enzyme activity – 0.048 IU/mL; extracellular specific enzyme activity – 0.649 IU/mg of protein; cell associated enzyme activity – 9.389 IU/mL were obtained. Thus, R. glutinis could be a potential gene source of β-glucosidase with desirable properties to be exploited in biomass hydrolysis. <s> BIB006 </s> The Goldilocks Approach: A Review of Employing Design of Experiments in Prokaryotic Recombinant Protein Production <s> Box Behnken Design (BBD) <s> Abstract β-Glucosidases show great potential as catalysts for various biotechnology processes including biomass hydrolysis for bioethanol production. In this study, response surface methodology was used to evaluate the effect of some variables on β-glucosidase production by Penicillium verruculosum using passion fruit peel as substrate, and on hydrolysis of this process residue with P. verruculosum crude extract, by applying a full factorial central composite design. Process optimization resulted in a 5.7 fold increase in β-glucosidase activity. The enzymes were more active at 65 °C, pH 4.5, remaining stable at 55 and 60 °C and over a broad pH range. P. 
verruculosum crude extract hydrolyzed passion fruit peel with glucose yield of 45.54%. This article provides, for the first time, the production of remarkable yields of β-glucosidase and the achievement of expressive levels of glucose through the use of passion fruit peel, an abundant and inexpensive agro-industrial residue. <s> BIB007 </s> The Goldilocks Approach: A Review of Employing Design of Experiments in Prokaryotic Recombinant Protein Production <s> Box Behnken Design (BBD) <s> Abstract Coal is the world’s most abundant energy source because of its abundance and relatively low cost. Due to the scarcity in the supply of high-grade coal, it is necessary to use low-grade coal for fulfilling energy demands of modern civilization. However, due to its high ash and moisture content, low-grade coal exerts the substantial impact on their consumption like pyrolysis, liquefaction, gasification and combustion process. The present research aimed to develop the efficient technique for the production of clean coal by optimizing the operating parameters with the help of response surface methodology. The effect of three independent variables such as hydrofluoric acid (HF) concentration (10–20% by volume), temperature (60–100 °C), and time (90–180 min), for ash reduction from the low-grade coal was investigated. A quadratic model was proposed to correlate the independent variables for maximum ash reduction at the optimum process condition by using central composite design (CCD) method. The study reveals that HF concentration was the most effective parameter for ash reduction in comparison with time and temperature. It may be due to the higher F -statistics value for HF concentration, which effects to large extent of ash reduction. The characterization of coal was evaluated by Fourier transform infrared spectroscopy (FTIR) analysis and Field-emission scanning electron microscopy with energy-dispersive X-ray (FESEM-EDX) analysis for confirmation of the ash reduction. <s> BIB008
|
BBDs are also a class of response surface designs; however, they differ from CCDs in their design structure. For example, a CCD with 4 factors requires 31 runs (experiments), whereas a BBD only needs 27 runs for the same number of factors. For 5 factors, a CCD has 52 runs while a BBD has 46. Reduced run numbers can result in significant time and cost savings in an optimisation process. In optimisation experiments, BBD is widely used as a good design for fitting the quadratic model with fewer experiments BIB006 . Several studies show that BBDs have contributed to production increases for recombinant proteins (see Table 7 ). Table 7 . RSM methods used to optimise the production of recombinant proteins along with their effect on yield and citing reference. Both CCD and BBD optimisation methods are widely used; the choice depends on the number of factors and the objectives of the study (see Figure 1) . The standard characteristic is that all response surface designs feature a second-order polynomial model to describe the process, in which interaction and quadratic terms introduce curvature into the response function and a first-order equation is inadequate to fit the model BIB007 . CCD is the most preferred RSM BIB003 BIB002 because this design contains a full factorial or fractional factorial core, with the potential to add central points to evaluate the experimental error and axial points to check the variance of the model BIB005 BIB008 . The number of runs (N) in a CCD is calculated as N = 2^k + 2k + Cp BIB004 , where k is the number of factors and Cp the number of centre points BIB001 . Table 8 is an example of a two-level CCD with two centre-point replicates, along with responses such as actual, predicted and residual values (see Table 8 ). Table 8 . Central Composite Design of four independent factors (labelled X 1 , X 2 , X 3 , X 4 respectively) studied at two levels (+1 and −1) including two central point replicates (0 and 0).
The table also shows the different types of responses commonly found in an optimisation process: (1) actual data refer to the experimental results; (2) predicted data are generated by the software based on the design and the actual results; (3) residuals are the difference between the actual and predicted data.
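The second-order polynomial model underlying these designs, and the actual/predicted/residual columns of a CCD results table, can be illustrated with a small ordinary-least-squares fit. The sketch below is not from the review; the factor settings are a generic two-factor CCD in coded units and the response values are fabricated purely for illustration:

```python
import numpy as np

# Build the model matrix for a full second-order (quadratic) model:
# y = b0 + sum(bi*xi) + sum(bii*xi^2) + sum(bij*xi*xj)
def quadratic_design_matrix(X):
    X = np.asarray(X, dtype=float)
    n, k = X.shape
    cols = [np.ones(n)]                                 # intercept
    cols += [X[:, i] for i in range(k)]                 # linear terms
    cols += [X[:, i] ** 2 for i in range(k)]            # squared terms
    cols += [X[:, i] * X[:, j]                          # interaction terms
             for i in range(k) for j in range(i + 1, k)]
    return np.column_stack(cols)

# Two-factor CCD in coded units: 4 factorial, 4 axial, 2 centre points.
X = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1],
              [-1.41, 0], [1.41, 0], [0, -1.41], [0, 1.41],
              [0, 0], [0, 0]])
y = np.array([52., 61., 58., 70., 50., 66., 55., 63., 68., 67.])  # invented

M = quadratic_design_matrix(X)
beta, *_ = np.linalg.lstsq(M, y, rcond=None)   # fit by least squares
predicted = M @ beta                           # "predicted" column
residuals = y - predicted                      # "residual" column
r2 = 1 - residuals.var() / y.var()
print(f"R^2 = {r2:.3f}")
```

In real studies this fit, its R², and diagnostics such as lack-of-fit tests are produced by DoE software (e.g. Design-Expert or Minitab) rather than computed by hand; the point here is only that the residuals in a table like Table 8 are the gap between the experiment and this quadratic model.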
|
The Goldilocks Approach: A Review of Employing Design of Experiments in Prokaryotic Recombinant Protein Production <s> Analysis and Interpretation of Optimisation Data <s> The production of recombinant anti-HIV peptide, T-20, in Escherichia coli was optimized by statistical experimental designs (successive designs with multifators) such as 24–1 fractional factorial, 23 full factorial, and 22 rotational central composite design in order. The effects of media compositions (glucose, NPK sources, MgSO4, and trace elements), induction level, induction timing (optical density at induction process), and induction duration (culture time after induction) on T-20 production were studied by using a statistical response surface method. A series of iterative experimental designs was employed to determine optimal fermentation conditions (media and process factors). Optimal ranges characterized by %T-20 (proportion of pepttide to the total cell protein) were observed, narrowed down, and further investigated to determine the optimal combination of culture conditions, which was as follows: 9, 6, 10, and 1 mL of glucose, NPK sources, MgSO4, and trace elements, respectively, in a total of 100 mL of medium inducted at an OD of 0.55–0.75 with 0.7 mM isopropyl-β-d-thiogalactopyranoside in an induction duration of 4 h. Under these conditions, up to 14% of T-20 was obtained. This statistical optimization allowed, the production of T-20 to be increased more than twofold (from 6 to 14%) within, a shorter induction duration (from 6 to 4 h) at the shake-flask scale. <s> BIB001 </s> The Goldilocks Approach: A Review of Employing Design of Experiments in Prokaryotic Recombinant Protein Production <s> Analysis and Interpretation of Optimisation Data <s> The Pichia pastoris clone producing streptokinase (SK) was optimized for its nutritional requirements to improve intracellular expression using statistical experimental designs and response surface methodology. 
The skc gene was ligated downstream of the native glyceraldehyde 3-phosphate dehydrogenase promoter and cloned in P. pastoris. Toxicity to the host was not observed by SK expression using YPD medium. The transformant producing SK at level of 1,120 IU/ml was selected, and the medium composition was investigated with the aim of achieving high expression levels. The effect of various carbon and nitrogen sources on SK production was tested by using Plackett-Burman statistical design and it was found that dextrose and peptone are the effective carbon and nitrogen sources among all the tested. The optimum conditions of selected production medium parameters were predicted using response surface methodology and the maximum predicted SK production of 2,136.23 IU/ml could be achieved with the production medium conditions of dextrose (x1), 2.90%; peptone (x2), 2.49%; pH, 7.2 (x3), and temperature, 30.4 (x4). Validation studies showed a 95% increase in SK production as compared to that before optimization at 2,089 IU/ml. SK produced by constitutive expression was found to be functionally active by plasminogen activation assay and fibrin clot lysis assay. The current recombinant expression system and medium composition may enable maximum production of recombinant streptokinase at bioreactor level. <s> BIB002 </s> The Goldilocks Approach: A Review of Employing Design of Experiments in Prokaryotic Recombinant Protein Production <s> Analysis and Interpretation of Optimisation Data <s> Response surface methodology (RSM) and artificial neural network (ANN) were used to optimize the effect of four independent variables, viz. glucose, sodium chloride (NaCl), temperature and induction time, on lipase production by a recombinant Escherichia coli BL21. The optimization and prediction capabilities of RSM and ANN were then compared. RSM predicted the dependent variable with good coefficient of determination (R² and adjusted R²) values for the model.
Although the R² value showed a good fit, absolute average deviation (AAD) and root mean square error (RMSE) values did not support the accuracy of the model and this was due to the inferiority in predicting the values towards the edges of the design points. On the other hand, ANN-predicted values were closer to the observed values with better R², adjusted R², AAD and RMSE values and this was due to the capability of predicting the values throughout the selected range of the design points. Similar to RSM, ANN could also be used to rank the effect of variables. However, ANN could not predict the interactive effect between the variables as performed by RSM. The optimum levels for glucose, NaCl, temperature and induction time predicted by RSM are 32 g/L, 5 g/L, 32°C and 2.12 h, and those by ANN are 25 g/L, 3 g/L, 30°C and 2 h, respectively. The ANN-predicted optimal levels gave higher lipase activity (55.8 IU/mL) as compared to RSM-predicted levels (50.2 IU/mL) and the predicted lipase activity was also closer to the observed data at these levels, suggesting that ANN is a better optimization method than RSM for lipase production by the recombinant strain. <s> BIB003 </s> The Goldilocks Approach: A Review of Employing Design of Experiments in Prokaryotic Recombinant Protein Production <s> Analysis and Interpretation of Optimisation Data <s> In the present study, four cold-adapted bacterial isolates were screened for multiple-enzyme production at low temperature (15 °C). The most potent isolate, Bacillus cereus GA6 (HQ832575), was subjected to mutation by UV radiation to obtain a mutant strain with elevated enzyme production. The mutant strain, designated as CUVGA6, with higher chitinase activity at low temperature was selected for enzyme production optimization using factorial design and response-surface methodology (RSM).
Two statistically significant parameters (colloidal chitin and KH2PO4) for response were selected (p value = 0.008 and 0.004, respectively) along with pH and temperature and utilized to optimize the process. Central composite design of RSM was used to optimize the levels of key ingredients for the best yield of chitinase. Maximum chitinase production was predicted to be 428.57 U/ml for a 4.4-fold increase in medium containing 2 % colloidal chitin, 6.0 g/L K2HPO4 and pH 9.0 at 25 °C when incubated for 7 days in submerged fermentation. ANOVA of CCD suggested that the quadratic interaction effect of K2HPO4 with chitin, temperature and pH has high impact on the production of chitinase (p value = 0.007, 0.002, 0.035, respectively), although its linear effect was not significant as observed. The closeness of optimized values (R 2 = 82.28 %) to experimental values (R 2 = 80.13 %) proved the validity of statistical model. Thus, multi-enzyme producing cold-adapted mutant B. cereus GA6 (CUVGA6) could be exploited for the production of chitinase which is of industrial significance. <s> BIB004 </s> The Goldilocks Approach: A Review of Employing Design of Experiments in Prokaryotic Recombinant Protein Production <s> Analysis and Interpretation of Optimisation Data <s> Optimization of the fermentation conditions for extracellular production of L-asparaginase by Streptomyces brollosae NEAE-115 under solid state fermentation was investigated. The Plackett–Burman experimental design was used to screen 16 independent variables (incubation time, moisture content, inoculum size, temperature, pH, soybean meal + wheat bran, dextrose, fructose, L-asparagine, yeast extract, KNO3, K2HPO4, MgSO4.7H2O, NaCl, FeSO4. 7H2O, CaCl2) and three dummy variables. The most significant independent variables found to affect enzyme production, namely soybean + wheat bran (X6), L-asparagine (X9) and K2HPO4 (X12), were further optimized by the central composite design. 
We found that L-asparaginase production by S. brollosae NEAE-115 was 47.66, 129.92 and 145.57 units per gram dry substrate (U/gds) after an initial survey using “soybean meal + wheat bran” as a substrate for L-asparaginase production (step 1), statistical optimization by Plackett–Burman design (step 2) and further optimization by the central composite design (step 3), respectively, with a fold of increase of 3.05. <s> BIB005 </s> The Goldilocks Approach: A Review of Employing Design of Experiments in Prokaryotic Recombinant Protein Production <s> Analysis and Interpretation of Optimisation Data <s> Medium development for high level expression of human interferon gamma (hIFN-γ) from Pichia pastoris (GS115) was performed with the aid of statistical and nonlinear modeling techniques. In the initial screening, gluconate and glycine were found to be key carbon and nitrogen sources, showing significant effect on production of hIFN-γ. Plackett-Burman screening revealed that medium components., gluconate, glycine, KH2PO4 and histidine, have a considerable impact on hIFN-γ production. Optimization was further proceeded with Box-Behnken design followed by artificial neural network linked genetic algorithm (ANN-GA). The maximum production of hIFN-γ was found to be 28.48mg/L using Box-Behnken optimization (R2=0.98), whereas the ANN-GA based optimization had displayed a better production rate of 30.99mg/L (R2=0.98), with optimal concentration of gluconate=50 g/L, glycine=10.185 g/L, KH2PO4=35.912 g/L and histidine 0.264 g/L. The validation was carried out in batch bioreactor and unstructured kinetic models were adapted. The Luedeking-Piret (L-P) model showed production of hIFN-γ was mixed growth associated with the maximum production rate of 40mg/L of hIFN-γ production. <s> BIB006
|
Regardless of the DoE design employed, the goal is to provide a methodology for conducting controlled experiments with the aim of identifying the vital process inputs and investigating interactions between them BIB003 . At the screening level, once the experimental data are entered, the DoE software generates a variety of graphs that are used to interpret the results obtained. These may be scatter plots, histograms, bar charts and Pareto charts that allow the researcher to identify the distribution of the data and the statistical significance of the variables tested BIB006 . Different screening analysis methods have been used in the field of protein production BIB001 BIB004 BIB005 BIB002 . Figure 5 illustrates a typical DoE data analysis and interpretation route from data visualisation, through experiment validation, to conclusion. The rationale for data analysis is to evaluate the effects of the variables on the response. The Graphical Representation stage shows how the data are distributed. The Statistical Analysis and Probability stage identifies the variables that are statistically significant, and therefore important to bring forward to the subsequent optimisation step. The Visualization and Interpretation stage focuses on representational analysis that identifies the optimal levels.
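As a hypothetical illustration of this screening-analysis route, the sketch below estimates main effects from a two-level design and ranks them by absolute magnitude, which is what a Pareto chart of effects displays. The factor names echo those recurring in the cited studies (urea, KH2PO4, moisture), but the design and response values are entirely invented, and real DoE software would add significance thresholds (e.g. t-tests on the effects) on top of this ranking:

```python
import numpy as np

# Invented two-level screening design (coded -1/+1) with four factors;
# "pH" is an extra hypothetical factor, and y is a fabricated response.
factors = ["urea", "KH2PO4", "moisture", "pH"]
X = np.array([
    [-1, -1, -1, -1], [1, -1, -1, 1], [-1, 1, -1, 1], [1, 1, -1, -1],
    [-1, -1, 1, 1], [1, -1, 1, -1], [-1, 1, 1, -1], [1, 1, 1, 1],
])
y = np.array([12.0, 30.5, 14.2, 33.1, 18.9, 36.4, 20.1, 39.8])

# Main effect of a factor = mean response at its high level
# minus mean response at its low level.
effects = {f: y[X[:, i] == 1].mean() - y[X[:, i] == -1].mean()
           for i, f in enumerate(factors)}

# Pareto-style ranking: largest absolute effect first.
for name, eff in sorted(effects.items(), key=lambda kv: -abs(kv[1])):
    print(f"{name:10s} {eff:+.2f}")
```

In this fabricated example the ranking would single out urea as the dominant factor to carry forward into the optimisation step, mirroring the screening-to-optimisation workflow of Figure 5.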
|
The Goldilocks Approach: A Review of Employing Design of Experiments in Prokaryotic Recombinant Protein Production <s> Evaluation of Experimental Design and Predictive Model Validation <s> A five-level-four-factor central composite rotary design was employed to find out the interactive effects of four variables, viz. concentrations of acetate, glucose and K2HPO4, and dark incubation period on poly-beta-hydroxybutyrate (PHB) production in a N2-fixing cyanobacterium, Nostoc muscorum. Acetate, glucose and dark incubation period exhibited positive impacts on PHB yield. Using response surface methodology (RSM), a second order polynomial equation was obtained by multiple regression analysis. A yield of 45.6% of dry cell weight (dcw) was achieved at reduced level of nutrients, i.e. 0.17% acetate, 0.16% glucose and 5 mg l(-1) K2HPO4 at a dark incubation period of 95 h as compared to 41.6% PHB yield in 0.4% acetate, 0.4% glucose and 40 mg l(-1) K2HPO4 at a dark incubation period of 168 h under single factor optimization strategy. <s> BIB001 </s> The Goldilocks Approach: A Review of Employing Design of Experiments in Prokaryotic Recombinant Protein Production <s> Evaluation of Experimental Design and Predictive Model Validation <s> A two-step response surface methodology (RSM) study was conducted for the optimization of keratinase production and enzyme activity from poultry feather by Streptomyces sp7. Initially different combinations of salts were screened for maximal production of keratinase at a constant pH of 6.5 and feather meal concentration of 5 g/L. A combination of K2HPO4, KH2PO4, and NaCl gave a maximum yield of keratinase (70.9 U/mL) production. In the first step of the RSM study, the selected five variables (feather meal, K2HPO4, KH2PO4, NaCl, and pH) were optimized by a 25 full-factorial rotatable central composite design (CCD) that resulted in 95 U/mL of keratinase production. 
The results of analysis of variance and regression of a second-order model showed that the linear effects of feather meal concentration (p<0.005) and NaCl (p<0.029) and the interactive effects of all variables were more significant and that values of the quadratic effects of feather meal (p<1.72e-5), K2HPO4 (p<4.731e-6), KH2PO4 (p<1.01e-10), and pH (p 7.63e-7) were more significant than the linear and interactive effects of the process variables. In the second step, a 23 rotatable full-factorial CCD and response surface analysis were used for the selection of optimal process parameters (pH, temperature, and rpm) for keratinase enzyme activity. These optima were pH 11.0, 45 degrees C, and 300 rpm. <s> BIB002 </s> The Goldilocks Approach: A Review of Employing Design of Experiments in Prokaryotic Recombinant Protein Production <s> Evaluation of Experimental Design and Predictive Model Validation <s> Abstract Response surface methodology, which allows for rapid identification of important factors and optimization of them to enhance enzyme production, was employed here to optimize culture conditions for the production of cis -epoxysuccinic acid hydrolase from Bordetella sp. strain 1–3. In the first step, a Plackett–Burman design was used to evaluate the effects of nine variables (yeast extract, cis -epoxysuccinic acid, KH 2 PO 4 , K 2 HPO 4 · 3H 2 O, MgSO 4 · 7H 2 O, trace minerals solution, culture volume, initial pH and incubation time) on the enzyme production. Yeast extract, cis -epoxysuccinic acid and KH 2 PO 4 had significant influences on cis -epoxysuccinic acid hydrolase production and their concentrations were further optimized using central composite design and response surface analysis. A combination of adjusting the concentration of yeast extract to 7.8 g/l, cis -epoxysuccinic acid to 9.8 g/l, and KH 2 PO 4 to 1.12 g/l would favor maximum cis -epoxysuccinic acid hydrolase production. 
An enhancement of cis -epoxysuccinic acid hydrolase production from 5.6 U/ml to 9.27 U/ml was gained after optimization. <s> BIB003 </s> The Goldilocks Approach: A Review of Employing Design of Experiments in Prokaryotic Recombinant Protein Production <s> Evaluation of Experimental Design and Predictive Model Validation <s> Response surface methodology (RSM) was used to evaluate the effects of fermentation parameters for glutamic acid (GA) production by Corynebacterium glutamicum CECT690 in submerged fermentation using palm date waste as substrate. To attain this purpose at the first stage, inoculum size, substrate concentration, penicillin concentration, phosphate concentration, and inoculum age were optimized for GA production. The next stage, the level of air flow rate in a 5-l fermenter (batch mode) which was run in optimized conditions was determined. The first stage gave the following results for the fermentation conditions optimized using RSM in 500-ml shake flasks: inoculum size 2% (v/v), substrate concentration 25% (w/v), penicillin concentration 1 U/ml, phosphate concentration 4 g/l, and inoculum age 10 h. Moreover, the maximum GA amount predicted by the model was 39.32 mg/ml. This was in agreement with the actual experimental value (36.64 mg/ml). In the second stage of the study, the amounts of GA were 118.75, 142.25, and 95.83 mg/ml in optimized conditions with the three levels of air flow rate of 0.6, 1.2, and 1.6 vvm, respectively. The present results demonstrate the potential of date waste juice as a substrate for producing GA by cultivation of C. glutamicum. 
<s> BIB004 </s> The Goldilocks Approach: A Review of Employing Design of Experiments in Prokaryotic Recombinant Protein Production <s> Evaluation of Experimental Design and Predictive Model Validation <s> This paper reports the production of a cellulase-free and alkali-stable xylanase in high titre from a newly isolated Bacillus pumilus SV-85S using cheap and easily available agro-residue wheat bran. Optimization of fermentation conditions enhanced the enzyme production to 2995.20 +/- 200.00 IU/ml, which was 9.91-fold higher than the activity under unoptimized basal medium (302.2 IU/ml). Statistical optimization using response-surface methodology was employed to obtain a cumulative effect of peptone, yeast extract, and potassium nitrate (KNO(3)) on enzyme production. A 2(3) central composite design best optimized the nitrogen source at the 0 level for peptone and yeast extract and at the -alpha level for KNO(3), along with 5.38-fold increase in xylanase activity. Addition of 0.1% tween 80 to the medium increased production by 1.5-fold. Optimum pH for xylanase was 6.0. The enzyme was 100% stable over the pH range from 5 to 11 for 1 h at 37 degrees C and it lost no activity, even after 3 h of incubation at pH 7, 8, and 9. Optimum temperature for the enzyme was 50 degrees C, but the enzyme displayed 78% residual activity even at 65 degrees C. The enzyme retained 50% activity after an incubation of 1 h at 60 degrees C. Characteristics of B. pumilus SV-85S xylanase, including its cellulase-free nature, stability in alkali over a long duration, along with high-level production, are particularly suited to the paper and pulp industry. 
<s> BIB005 </s> The Goldilocks Approach: A Review of Employing Design of Experiments in Prokaryotic Recombinant Protein Production <s> Evaluation of Experimental Design and Predictive Model Validation <s> Abstract Production of α-amylase under solid-state fermentation by Bacillus brevis MTCC 7521 has been investigated using cassava bagasse as the substrate, one of the major solid wastes released during extraction of starch from cassava (Manihot esculenta). Response surface methodology was used to evaluate the effect of the main variables, i.e. incubation period (36 h), moisture holding capacity (60%), pH (7.0) and temperature (60°C) on enzyme production by applying a full factorial central composite design. The maximum hydrolysis of soluble starch (85%) and cassava starch (75%) was obtained with the application of 4 mL (≈ 14,752 units) of B. brevis crude enzyme after 5 h of incubation. <s> BIB006 </s> The Goldilocks Approach: A Review of Employing Design of Experiments in Prokaryotic Recombinant Protein Production <s> Evaluation of Experimental Design and Predictive Model Validation <s> Protein hydrolysates were produced from shrimp waste mainly comprising head and shell of Penaeus monodon by enzymatic hydrolysis for 90 min using four microbial proteases (Alcalase, Neutrase, Protamex, Flavourzyme) where PR(%) and DH (%) of respective enzymes were compared to select best of the lot. Alcalase, which showed the best result, was used to optimize hydrolysis conditions for shrimp waste hydrolysis by response surface methodology using a central composite design. A model equation was proposed to determine effects of temperature, pH, enzyme/substrate ratio and time on DH where optimum values found to be 59.37 °C, 8.25, 1.84% and 84.42 min. for maximum degree of hydrolysis 33.13% respectively. The model showed a good fit in experimental data because 92.13% of the variability within the range of values studied could be explained by it. 
The protein hydrolysate obtained contained high protein content (72.3%) and amino acid (529.93 mg/gm) of which essential amino acid and flavour amino acid were was 54.67-55.93% and 39.27-38.32% respectively. Protein efficiency ratio (PER) (2.99) and chemical score (1.05) of hydrolysate was suitable enough to recommend as a functional food additive. <s> BIB007 </s> The Goldilocks Approach: A Review of Employing Design of Experiments in Prokaryotic Recombinant Protein Production <s> Evaluation of Experimental Design and Predictive Model Validation <s> BackgroundLeptospirosis is a zoonose that is increasingly endemic in built-up areas, especially where there are communities living in precarious housing with poor or non-existent sanitation infrastructure. Leptospirosis can kill, for its symptoms are easily confused with those of other diseases. As such, a rapid diagnosis is required so it can be treated effectively. A test for leptospirosis diagnosis using Leptospira Immunoglobulin-like (Lig) proteins is currently at final validation at Fiocruz.ResultsIn this work, the process for expression of LigB (131-645aa) in E. coli BL21 (DE3)Star™/pAE was evaluated. No significant difference was found for the experiments at two different pre-induction temperatures (28°C and 37°C). Then, the strain was cultivated at 37°C until IPTG addition, followed by induction at 28°C, thereby reducing the overall process time. Under this condition, expression was assessed using central composite design for two variables: cell growth at which LigB (131-645aa) was induced (absorbance at 600 nm between 0.75 and 2.0) and inducer concentration (0.1 mM to 1 mM IPTG). Both variables influenced cell growth and protein expression. Induction at the final exponential growth phase in shaking flasks with Absind = 2.0 yielded higher cell concentrations and LigB (131-645aa) productivities. 
IPTG concentration had a negative effect and could be ten-fold lower than the concentration commonly used in molecular biology (1 mM), while keeping expression at similar levels and inducing less damage to cell growth. The expression of LigB (131-645aa) was associated with cell growth. The induction at the end of the exponential phase using 0.1 mM IPTG at 28°C for 4 h was also performed in microbioreactors, reaching higher cell densities and 970 mg/L protein. LigB (131-645aa) was purified by nickel affinity chromatography with 91% homogeneity.ConclusionsIt was possible to assess the effects and interactions of the induction variables on the expression of soluble LigB (131-645aa) using experimental design, with a view to improving process productivity and reducing the production costs of a rapid test for leptospirosis diagnosis. <s> BIB008 </s> The Goldilocks Approach: A Review of Employing Design of Experiments in Prokaryotic Recombinant Protein Production <s> Evaluation of Experimental Design and Predictive Model Validation <s> The present study deals with the production of cold active polygalacturonase (PGase) by submerged fermentation using Thalassospira frigidphilosprofundus, a novel species isolated from deep waters of Bay of Bengal. Nonlinear models were applied to optimize the medium components for enhanced production of PGase. Taguchi orthogonal array design was adopted to evaluate the factors influencing the yield of PGase, followed by the central composite design (CCD) of response surface methodology (RSM) to identify the optimum concentrations of the key factors responsible for PGase production. Data obtained from the above mentioned statistical experimental design was used for final optimization study by linking the artificial neural network and genetic algorithm (ANN-GA). Using ANN-GA hybrid model, the maximum PGase activity (32.54 U/mL) was achieved at the optimized concentrations of medium components. 
In a comparison between the optimal output of RSM and ANN-GA hybrid, the latter favored the production of PGase. In addition, the study also focused on the determination of factors responsible for pectin hydrolysis by crude pectinase extracted from T. frigidphilosprofundus through the central composite design. Results indicated 80% degradation of pectin in banana fiber at 20°C in 120 min, suggesting the scope of cold active PGase usage in the treatment of raw banana fibers. <s> BIB009 </s> The Goldilocks Approach: A Review of Employing Design of Experiments in Prokaryotic Recombinant Protein Production <s> Evaluation of Experimental Design and Predictive Model Validation <s> The thermotolerant yeast Pichia etchellsii produces multiple cell bound β-glucosidases that can be used for synthesis of important alkyl- and aryl-glucosides. Present work focuses on enhancement of β-glucosidase I (BGLI) production in Pichia pastoris. In the first step, one-factor-at-a-time experimentation was used to investigate the effect of aeration, antifoam addition, casamino acid addition, medium pH, methanol concentration, and mixed feed components on BGLI production. Among these, initial medium pH, methanol concentration, and mixed feed in the induction phase were found to affect BGLI production. A 3.3-fold improvement in β-glucosidase expression was obtained at pH 7.5 as compared to pH 6.0 on induction with 1 % methanol. Addition of sorbitol, a non-repressing substrate, led to further enhancement in β-glucosidase production by 1.4-fold at pH 7.5. These factors were optimized with response surface methodology using Box-Behnken design. Empirical model obtained was used to define the optimum "operating space" for fermentation which was a pH of 7.5, methanol concentration of 1.29 %, and sorbitol concentration of 1.28 %. Interaction of pH and sorbitol had maximum effect leading to the production of 4,400 IU/L. 
The conditions were validated in a 3-L bioreactor with accumulation of 88 g/L biomass and 2,560 IU/L β-glucosidase activity. <s> BIB010 </s> The Goldilocks Approach: A Review of Employing Design of Experiments in Prokaryotic Recombinant Protein Production <s> Evaluation of Experimental Design and Predictive Model Validation <s> Abstract Response surface methodology was employed to optimize the cultivation conditions of Penicillium oxalicum SAEM-51 for the enhancement of chitin deacetylase (CDA) production under solid-state fermentation. The enzyme catalyzes deacetylation of N-acetyl glucosamine subunits of chitin resulted in the production of chitosan, widely utilized biopolymer for drug delivery, waste water treatment and as nutraceutics. Among different agro-horticultural substrates evaluated, mustard oil cake had resulted into maximal CDA production. Entrapment of fungal mycelia onto the solid support was studied using scanning electron microscopy. Optimal physico-chemical conditions for maximal CDA production were found to be 4.906 g, 73.62% and 8.578% for substrate amount, moisture content and inoculum size, respectively. The experimental CDA production (1162.03±7.2 U gds−1) under optimized condition was observed in close agreement with the values predicted by the quadratic model (1137.85 U gds−1). The CDA production by P. oxalicum SAEM-51 was increased significantly by 1.3 fold, as compared to the un-optimized ones (877.56±8.9 U gds−1). <s> BIB011 </s> The Goldilocks Approach: A Review of Employing Design of Experiments in Prokaryotic Recombinant Protein Production <s> Evaluation of Experimental Design and Predictive Model Validation <s> The supply of many valuable proteins that have potential clinical or industrial use is often limited by their low natural availability. 
With the modern advances in genomics, proteomics and bioinformatics, the number of proteins being produced using recombinant techniques is exponentially increasing and seems to guarantee an unlimited supply of recombinant proteins. The demand of recombinant proteins has increased as more applications in several fields become a commercial reality. Escherichia coli (E. coli) is the most widely used expression system for the production of recombinant proteins for structural and functional studies. However, producing soluble proteins in E. coli is still a major bottleneck for structural biology projects. One of the most challenging steps in any structural biology project is predicting which protein or protein fragment will express solubly and purify for crystallographic studies. The production of soluble and active proteins is influenced by several factors including expression host, fusion tag, induction temperature and time. Statistical designed experiments are gaining success in the production of recombinant protein because they provide information on variable interactions that escape the "one-factor-at-a-time" method. Here, we review the most important factors affecting the production of recombinant proteins in a soluble form. Moreover, we provide information about how the statistical design experiments can increase protein yield and purity as well as find conditions for crystal growth. <s> BIB012 </s> The Goldilocks Approach: A Review of Employing Design of Experiments in Prokaryotic Recombinant Protein Production <s> Evaluation of Experimental Design and Predictive Model Validation <s> The phytase (PPHY) of Pichia anomala has the requisite properties of thermostability and acidstability, broad substrate spectrum, and protease insensitivity, which make it a suitable candidate as a feed and food additive. The 1,389-bp PPHY gene was amplified from P. anomala genomic DNA, cloned in pPICZαA, and expressed extracellularly in P. pastoris X33. 
Three copies of PPHY have been detected integrated into the chromosomal DNA of the recombinant P. pastoris. The size exclusion chromatography followed by electrophoresis of the pure rPPHY confirmed that this is a homohexameric glycoprotein of ~420 kDa with a 24.3 % portion as N-linked glycans. The temperature and pH optima of rPPHY are 60 °C and 4.0, similar to the endogenous enzyme. The kinetic characteristics K m, V max, K cat, and K cat/K m of rPPHY are 0.2 ± 0.03 mM, 78.2 ± 1.43 nmol mg−1 s−1, 65,655 ± 10.92 s−1, and 328.3 ± 3.12 μM−1 s−1, respectively. The optimization of medium components led to a 21.8-fold improvement in rPPHY production over the endogenous yeast. The rPPHY titer attained in shake flasks could also be sustained in the laboratory fermenter. The rPPHY accounts for 57.1 % of the total secreted protein into the medium. The enzyme has been found useful in fractionating allergenic protein glycinin from soya protein besides dephytinization. <s> BIB013 </s> The Goldilocks Approach: A Review of Employing Design of Experiments in Prokaryotic Recombinant Protein Production <s> Evaluation of Experimental Design and Predictive Model Validation <s> Medium development for high level expression of human interferon gamma (hIFN-γ) from Pichia pastoris (GS115) was performed with the aid of statistical and nonlinear modeling techniques. In the initial screening, gluconate and glycine were found to be key carbon and nitrogen sources, showing significant effect on production of hIFN-γ. Plackett-Burman screening revealed that medium components., gluconate, glycine, KH2PO4 and histidine, have a considerable impact on hIFN-γ production. Optimization was further proceeded with Box-Behnken design followed by artificial neural network linked genetic algorithm (ANN-GA). 
The maximum production of hIFN-γ was found to be 28.48mg/L using Box-Behnken optimization (R2=0.98), whereas the ANN-GA based optimization had displayed a better production rate of 30.99mg/L (R2=0.98), with optimal concentration of gluconate=50 g/L, glycine=10.185 g/L, KH2PO4=35.912 g/L and histidine 0.264 g/L. The validation was carried out in batch bioreactor and unstructured kinetic models were adapted. The Luedeking-Piret (L-P) model showed production of hIFN-γ was mixed growth associated with the maximum production rate of 40mg/L of hIFN-γ production. <s> BIB014 </s> The Goldilocks Approach: A Review of Employing Design of Experiments in Prokaryotic Recombinant Protein Production <s> Evaluation of Experimental Design and Predictive Model Validation <s> ABSTRACTCold-adapted superoxide dismutase (SOD) with higher catalytic activity at lower temperature has great amount of applications in many aspects as an industrial enzyme. The application of recombinant enzyme in gene engineering and microbial fermentation technology is an effective way to obtain high-yield product. In this study, to obtain the recombinant SOD in E. coli (rPsSOD) with the highest activity, the Box-Behnken design was first applied to optimize the important parameters (lactose, tryptone and Tween-80) affecting the activity of rPsSOD. The results showed that the optimal fermentation conditions were Tween-80 (0.047%), tryptone (6.16 g/L), lactose (11.38 g/L). The activity of rPsSOD was 71.86 U/mg (1.54 times) as compared with non-optimized conditions. Such an improved production will facilitate the application of the cold-adapted rPsSOD. <s> BIB015
|
For RSM analysis, the goals are to (i) develop a predictive model that describes how the process inputs influence the process output and (ii) determine the optimal settings of the inputs BIB001 BIB002 . Following the completion of the optimisation experiments, the results are used to fit a second-order polynomial equation (Equation (5)) BIB014 :

Y_i = β_0 + Σ β_i X_i + Σ β_ii X_i² + Σ β_ij X_i X_j (5)

where Y_i is the predicted response, X_i and X_j are the coded independent variables, and β_0, β_i, β_ii and β_ij are the regression coefficients for the intercept, the first-order (linear) terms, the quadratic terms and the linear interaction terms, respectively BIB011 BIB004 .

The rationale for data analysis is to evaluate the effects of the variables on the response. The Graphical Representation stage shows how the data are distributed; the Statistical Analysis and Probability stage identifies the variables that are statistically significant, and therefore important to bring forward to the subsequent optimisation step; and the Visualization and Interpretation stage focuses on representational analysis that identifies the optimal levels.

The fit of the model is then evaluated through analysis of variance (ANOVA, Table 9 ), which compares the variation due to the change in the combination of variable levels with the variation due to random errors BIB012 . The coefficient of determination R² defines how well the model fits the data: the closer R² is to 1, the better the model describes the experimental data BIB008 . The adjusted R² is used to check the adequacy of the model by measuring the amount of variation about the mean accounted for by the model; again, the closer the value is to 1, the better BIB007 . For example, in Table 9 , R² = 0.9971 indicates the significance of the regression of the fitting equation and, therefore, adequate discrimination, with only 0.29% of the total variation left unexplained by the fitted equation BIB015 . When R² = 99.71%, Adj-R² = 99.63% and Pred-R² = 99.48% are in good agreement with each other (as in Table 9 ), this provides confidence in the accuracy of the model BIB013 . Additionally, the p-value and the signal-to-noise ratio are used to estimate the quality of the model. For a significant model, a p-value < 0.05 is desirable BIB005 . Adequate precision measures the signal-to-noise ratio, where a ratio greater than 4 indicates an adequate model BIB003 ; this criterion is commonly used in protein production optimisation BIB006 BIB009 . Furthermore, the lack-of-fit p-value and the plot of observed versus predicted values are used to judge model quality. For a good model, the lack-of-fit p-value should be > 0.05 BIB004 , as shown in Table 9 . Finally, all data points should fall on the straight line in the observed-versus-predicted plot BIB010 , as shown in Figure 6 .
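Fitting the second-order polynomial and computing the fit statistics discussed above can be sketched with ordinary least squares. In the sketch below, the two-factor data set, the coefficient values and the particular "adequate precision" formulation are all illustrative assumptions, not values or methods taken from the review or the studies it cites:

```python
import numpy as np

# Synthetic two-factor data set in coded units (hypothetical values,
# for illustration only -- not data from any study cited in the review).
rng = np.random.default_rng(0)
x1 = rng.uniform(-1, 1, 30)   # coded level of factor 1
x2 = rng.uniform(-1, 1, 30)   # coded level of factor 2
y = 10 + 2*x1 + 3*x2 - 4*x1**2 - 2*x2**2 + 1.5*x1*x2 + rng.normal(0, 0.2, 30)

# Design matrix for the second-order polynomial
# Y = b0 + b1*x1 + b2*x2 + b11*x1^2 + b22*x2^2 + b12*x1*x2
X = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1*x2])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
y_hat = X @ beta

# Goodness-of-fit statistics reported alongside the ANOVA table
n, p = X.shape
ss_res = np.sum((y - y_hat)**2)
ss_tot = np.sum((y - y.mean())**2)
r2 = 1 - ss_res / ss_tot                       # coefficient of determination
adj_r2 = 1 - (1 - r2) * (n - 1) / (n - p)      # adjusted for model size

# One common formulation of the "adequate precision" signal-to-noise
# ratio (an assumption here); a value above 4 suggests an adequate model.
ap = (y_hat.max() - y_hat.min()) / np.sqrt(p * ss_res / (n - p) / n)
print(f"R2 = {r2:.4f}, adj-R2 = {adj_r2:.4f}, adequate precision = {ap:.1f}")
```

Because the synthetic response is dominated by the quadratic signal, R² comes out close to 1 and the adjusted R² tracks it closely, mirroring the kind of agreement described for Table 9.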
|
The Goldilocks Approach: A Review of Employing Design of Experiments in Prokaryotic Recombinant Protein Production <s> Optimum Determination <s> Statistical evaluation of fermentation conditions and nutritional factors by Plackett–Burman two-level factorial design followed by optimization of significant parameters using response surface methodology for lipase production by Bacillus brevis was performed in submerged batch fermentation. Temperature, glucose, and olive oil were found to be the significant factors affecting lipase production. Maximum lipase activity of 5.1 U ml−1 and cell mass of 1.82 g l−1 at 32 h were obtained at the optimized conditions of temperature, 33.7 °C; initial pH, 8; and speed of agitation, 100 rpm, with the medium components: olive oil, 13.73 ml l−1; glucose, 13.98 g l−1; peptone, 2 g l−1; Tween 80, 5 ml l−1; NaCl, 5 g l−1; CH3COONa, 5 g l−1; KCl, 2 g l−1; CaCl2·2H2O, 1 g l−1; MnSO4·H2O, 0.5 g l−1; FeSO4·7H2O, 0.1 g l−1; and MgSO4·7H2O, 0.01 g l−1. The lipase productivity and specific lipase activity were found to be 0.106 U (ml h)−1 and 2.55 U mg−1, respectively. Unstructured kinetic models and artificial neural network models were used to describe the lipase fermentation. The kinetic analysis of the lipase fermentation by B. brevis shows that lipase is a growth-associated product. <s> BIB001 </s> The Goldilocks Approach: A Review of Employing Design of Experiments in Prokaryotic Recombinant Protein Production <s> Optimum Determination <s> Soymilk was fermented with Lactobacillus casei, and statistical experimental design was used to investigate factors affecting viable cells of L. casei, including temperature, glucose, niacin, riboflavin, pyridoxine, folic acid and pantothenic acid. Initial screening by Plackett-Burman design revealed that among these factors, temperature, glucose and niacin have significant effects on the growth of L. casei. 
Further optimization with Box-Behnken design and response surface analysis showed that a second-order polynomial model fits the experimental data appropriately. The optimum conditions for temperature, glucose and niacin were found to be 15.77 °C, 5.23 and 0.63 g/L, respectively. The concentration of viable L. casei cells under these conditions was 8.23 log10 (CFU/mL). The perfect agreement between the observed values and the values predicted by the equation confirms the statistical significance of the model and the model’s adequate precision in predicting optimum conditions. <s> BIB002 </s> The Goldilocks Approach: A Review of Employing Design of Experiments in Prokaryotic Recombinant Protein Production <s> Optimum Determination <s> Response surface methodology (RSM) and artificial neural network (ANN) were used to optimize the effect of four independent variables, viz. glucose, sodium chloride (NaCl), temperature and induction time, on lipase production by a recombinant Escherichia coli BL21. The optimization and prediction capabilities of RSM and ANN were then compared. RSM predicted the dependent variable with a good coefficient of correlation determination (R² and adjusted R² values for the model. Although the R (2) value showed a good fit, absolute average deviation (AAD) and root mean square error (RMSE) values did not support the accuracy of the model and this was due to the inferiority in predicting the values towards the edges of the design points. On the other hand, ANN-predicted values were closer to the observed values with better R², adjusted R², AAD and RMSE values and this was due to the capability of predicting the values throughout the selected range of the design points. Similar to RSM, ANN could also be used to rank the effect of variables. However, ANN could not predict the interactive effect between the variables as performed by RSM. 
The optimum levels for glucose, NaCl, temperature and induction time predicted by RSM are 32 g/L, 5 g/L, 32°C and 2.12 h, and those by ANN are 25 g/L, 3 g/L, 30°C and 2 h, respectively. The ANN-predicted optimal levels gave higher lipase activity (55.8 IU/mL) as compared to RSM-predicted levels (50.2 IU/mL) and the predicted lipase activity was also closer to the observed data at these levels, suggesting that ANN is a better optimization method than RSM for lipase production by the recombinant strain. <s> BIB003 </s> The Goldilocks Approach: A Review of Employing Design of Experiments in Prokaryotic Recombinant Protein Production <s> Optimum Determination <s> The simultaneous effects of slurry concentration (25–35 % w/w), enzyme concentration (0.5–2.5 % w/w) and liquefaction time (30–90 min) on the yield, total solids and rheological parameters of oat milk developed were studied and optimised by the application of response surface methodology. The effect of independent and dependent variables have been studied using a central composite rotatable design of experiments. Power law model explains the flow behaviour of the developed oat milk samples with correlation coefficient (R 2) varying from 0.89 to 0.96. All the formulations exhibited pseudo plastic behaviour with the flow behaviour index (n) between 0.29 and 0.46 and consistency index varying from 1.033 to 10.22 Pa s n . Statistical analysis showed that yield, total solids and rheology of oat milk were significantly (p < 0.05) correlated to slurry concentration, enzyme concentration and liquefaction time. The optimum conditions for making oat milk were: 27.1 % w/w slurry concentration, 2.1 % w/w enzyme concentration and liquefaction time of 49 min. 
<s> BIB004 </s> The Goldilocks Approach: A Review of Employing Design of Experiments in Prokaryotic Recombinant Protein Production <s> Optimum Determination <s> Receptor activator of nuclear factor (NF)-κB ligand (RANKL), a master cytokine that drives osteoclast differentiation, activation and survival, exists in both transmembrane and extracellular forms. To date, studies on physiological role of RANKL have been mainly carried out with extracellular RANKL probably due to difficulties in achieving high level expression of functional transmembrane RANKL (mRANKL). In the present study, we took advantage of codon optimization and response surface methodology to optimize the soluble expression of mRANKL in E. coli. We optimized the codon usage of mRANKL sequence to a preferred set of codons for E. coli changing its codon adaptation index from 0.64 to 0.76, tending to increase its expression level in E. coli. Further, we utilized central composite design to predict the optimum combination of variables (cell density before induction, lactose concentration, post-induction temperature and post-induction time) for the expression of mRANKL. Finally, we investigated the effects of various experimental parameters using response surface methodology. The best combination of response variables was 0.6 OD600, 7.5 mM lactose, 26°C post-induction temperature and 5 h post-induction time that produced 52.4 mg/L of fusion mRANKL. Prior to functional analysis of the protein, we purified mRANKL to homogeneity and confirmed the existence of trimeric form of mRANKL by native gel electrophoresis and gel filtration chromatography. Further, the biological activity of mRANKL to induce osteoclast formation on RAW264.7 cells was confirmed by tartrate resistant acid phosphatase assay and quantitative real-time polymerase chain reaction assays. Importantly, a new finding from this study was that the biological activity of mRANKL is higher than its extracellular counterpart. 
To the best of our knowledge, this is the first time to report heterologous expression of mRANKL in soluble form and to perform a comparative study of functional properties of both forms of RANKL. <s> BIB005 </s> The Goldilocks Approach: A Review of Employing Design of Experiments in Prokaryotic Recombinant Protein Production <s> Optimum Determination <s> Molybdenum has long been known to be toxic to ruminants, but not to humans. However, more recently it has been increasingly reported that molybdenum shows toxic effects to reproductive organs of fish, mouse and even humans. Hence, its removal from the environment is highly sought after. In this study, response surface methodology (RSM) was successfully applied in the optimization and maximization of Mo6+ reduction to Mo-blue by Serratia sp. MIE2 for future bioremediation application. The optimal conditions predicted by RSM were 20 mM molybdate, 3.95 mM phosphate, pH 6.25 and 25 g l−1 sucrose with absorbance of 19.53 for Mo-blue production measured at 865 nm. The validation experimental run of the predicted optimal conditions showed that the maximum Mo-blue production occurred at absorbance of 20.87, with a 6.75 % deviation from the predicted value obtained from RSM. Molybdate reduction was successfully maximized using RSM with molybdate reduction before and after optimization using RSM showing Mo-blue production starting at the absorbance value of 10.0 at 865 nm going up to an absorbance value above 20.87. The modelling kinetics of Mo6+ reduction showed that Teissier was the best model, with calculated P max , K s and K i values of 1.97 Mo-blue per hour, 5.79 mM and 31.48 mM, respectively. <s> BIB006 </s> The Goldilocks Approach: A Review of Employing Design of Experiments in Prokaryotic Recombinant Protein Production <s> Optimum Determination <s> Sweet sorghum (Sorghum bicolor (L.) 
Moench) bagasse is a lignocellulosic material consisting mainly of hemicellulose and cellulose, a potential source of fermentable sugars. The present study aimed to optimize the hydrolysis of sweet sorghum bagasse to obtain the highest concentrations of xylose and glucose with the minimum amount of inhibitor compounds. Seven varieties of sweet sorghum bagasse were used for the hydrolysis experiment, carried out in three stages with a 23 Box–Behnken factorial design; the critical factors selected for both stages were H2SO4 and H2O2 concentrations, time and liquid–solid ratio (LSR). The alkaline hydrolysis was carried out with a subsequent enzymatic hydrolysis using 0.4 mL cellulose and 0.5 mL beta-glucosidase. The optimum conditions for acid hydrolysis were H2SO4 (1.375 % w/v), time (36 min) and LSR (4.9:1 v/w of bagasse) resulting in values of 11.55 g/L glucose and 41.27 g/L xylose, respectively; for alkaline hydrolysis H2O2 (4.5 % w/v), time (45 h) and LSR (16:1 v/w of bagasse) were the optimum values. Under these conditions, 65.96 g/L glucose concentration was obtained. Validation of the model indicated no difference between predicted and observed values in the optimization of the hydrolysis process. <s> BIB007
|
Once the predictive model has been validated, it can be used to determine the optimised parameters. The statistical tools embedded in DoE software are used to generate 3-D graphs, called surface contour plots, that visually describe the relationship between variables and response BIB004 BIB006 . The 3-D surface and contour graphs are generated as a combination of two test variables with the others maintained at their respective zero levels BIB002 (see Figure 7 ). Surface, contour and residual plots, along with ANOVA, are the main optimisation analysis tools commonly used to determine optimum levels for high yields of recombinant protein BIB005 BIB007 BIB001 . Figure 7 (adapted from BIB003 ) depicts the two-factor interaction (in this case the two factors explored are glucose and culturing temperature) where one factor influences the response of another factor. It also shows the visualisation of optimum levels. The colour scale indicates the level of lipase activity (IU/mL) where red indicates the region of optimal yield, yellow indicates medium yield, and green indicates low yield. In this case, the optimal enzyme activity (33 IU/mL) was achieved at a culture temperature between 30 °C and 34 °C, and a glucose concentration between 40 g/mL and 50 g/mL. Image used with permission.
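The optimum that surface and contour plots locate visually can also be found analytically: the second-order polynomial fitted in RSM has a stationary point where its gradient vanishes. The sketch below, in Python with NumPy, fits such a model to synthetic two-factor data and solves for that point; the factor names, optimum location, and yields are invented for illustration and do not come from any study cited here.

```python
import numpy as np

# Synthetic "experiment": a hypothetical two-factor response surface with a
# known optimum at temperature = 30, glucose = 45 (illustrative values only).
rng = np.random.default_rng(0)
temp = rng.uniform(25, 40, 40)
gluc = rng.uniform(30, 60, 40)
yield_ = 33 - 0.20 * (temp - 30) ** 2 - 0.01 * (gluc - 45) ** 2 + rng.normal(0, 0.1, 40)

# Fit the full second-order model used in RSM:
# y = b0 + b1*x1 + b2*x2 + b11*x1^2 + b22*x2^2 + b12*x1*x2
X = np.column_stack([np.ones_like(temp), temp, gluc, temp**2, gluc**2, temp * gluc])
b = np.linalg.lstsq(X, yield_, rcond=None)[0]

# The stationary point solves the gradient equations:
#   dy/dx1 = b1 + 2*b11*x1 + b12*x2 = 0
#   dy/dx2 = b2 + b12*x1 + 2*b22*x2 = 0
A = np.array([[2 * b[3], b[5]], [b[5], 2 * b[4]]])
opt = np.linalg.solve(A, -b[1:3])
print("optimum (temperature, glucose):", opt.round(2))
```

In a real study the response values would come from the designed experimental runs rather than a simulated surface, and the computed stationary point would be checked against the surface and contour plots before validation runs.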
|
Survey of Technologies for Web Application Development <s> An Architecture for Distributed Hypermedia Applications <s> The World Wide Web has succeeded in large part because its software architecture has been designed to meet the needs of an Internet-scale distributed hypermedia system. The modern Web architecture emphasizes scalability of component interactions, generality of interfaces, independent deployment of components, and intermediary components to reduce interaction latency, enforce security, and encapsulate legacy systems. In this paper, we introduce the Representational State Transfer (REST) architectural style, developed as an abstract model of the Web architecture to guide our redesign and definition of the Hypertext Transfer Protocol and Uniform Resource Identifiers. We describe the software engineering principles guiding REST and the interaction constraints chosen to retain those principles, contrasting them to the constraints of other architectural styles. We then compare the abstract model to the currently deployed Web architecture in order to elicit mismatches between the existing protocols and the applications they are intended to support. <s> BIB001 </s> Survey of Technologies for Web Application Development <s> An Architecture for Distributed Hypermedia Applications <s> A reference software architecture for a domain defines the fundamental components of the domain and the relations between them. Research has shown the benefits of having a reference architecture for product development, software reuse and maintenance. Many mature domains, such as compilers and operating systems, have well-known reference architectures. We present a process to derive a reference architecture for a domain. We used this process to derive a reference architecture for Web servers, which is a relatively new domain. 
The paper presents the mapping of this reference architecture to the architectures of three open source Web servers: Apache (80KLOC), AOL-Server (164KLOC), and Jigsaw (106KLOC). <s> BIB002
|
The architectural foundation of the Web is the request-response cycle realized by the Hypertext Transfer Protocol (HTTP) and Hypertext Markup Language (HTML and XHTML) [Raggett 1999] standards. The Representational State Transfer (REST) architectural style provides a model architecture for the Web that was used to rationalize the definition of the HTTP 1.1 recommendation BIB001 . Modularity, extensibility, and inversion of control are characteristics inherent in the Web that have allowed the incorporation of features supporting dynamic content. Inversion of control is implemented on both clients and servers by various plug-in, content-handling, and filtering interfaces that allow custom components to be invoked in response to events. The following sections review the operations of Web servers and browsers, highlighting the extension points that are leveraged to provide interactive and dynamic content for Web applications.

2.1.1 Web Servers. Web servers implement the server-side duties of HTTP, the application-layer protocol that governs message passing between Web clients and servers. Figure 1 is adapted from a reference architecture for Web servers provided by BIB002 . The most common Web server implementations are the Apache HTTP server [Apache Software Foundation 2004], available for most operating systems, and the Internet Information Service (IIS) [Microsoft Corporation 2005b], available only for Microsoft Windows operating systems. Request and response messages share a common format that includes a start line, message headers, and, optionally, a message body and message trailers. Request messages specify a request method, most commonly GET or POST, and a Universal Resource Identifier (URI) for a requested resource. Resources are a key abstraction for the Web, uniformly identifying documents, services, collections of other resources, and other types of information sources using a single naming scheme.

Response messages include a status line and a representation of a resource. The protocol supports transmission of any content type that can be represented as a sequence of bytes with associated metadata. Responses are interpreted by client browsers.

2.1.2 Web Browsers. Web browsers process user interface commands, format and send request messages to Web servers, wait for and interpret server response messages, and render content within the browser's display window area. Figure 2 is a simplified architecture diagram that illustrates the operations of Web browsers. HTML and XHTML are the most common content types on the Web. Browser extensibility features allow many other content types to be displayed by deferring their rendering to registered plug-in components (helper applications) that handle the content. The pervasiveness of HTML makes it a friendly target for dynamic content generation systems.
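The shared message format described above (start line, header block, optional body) is simple enough to take apart by hand. A minimal Python sketch, parsing an illustrative raw response message with only the standard library:

```python
from email.parser import Parser

# An illustrative raw HTTP response (not taken from any real server).
raw = (
    "HTTP/1.1 200 OK\r\n"
    "Content-Type: text/html; charset=utf-8\r\n"
    "Content-Length: 28\r\n"
    "\r\n"
    "<html><body>Hi</body></html>"
)

# An HTTP message is a start line, a header block, and an optional body,
# with a blank line separating headers from the body.
start_line, _, rest = raw.partition("\r\n")
header_block, _, body = rest.partition("\r\n\r\n")

version, status, reason = start_line.split(" ", 2)
headers = Parser().parsestr(header_block).items()  # headers use RFC 822 syntax

print(version, status, reason)
print(dict(headers)["Content-Type"])
print(body)
```

Request messages have the same overall shape; only the start line differs, carrying a method and URI (for example `GET /index.html HTTP/1.1`) instead of a status code.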
|
Survey of Technologies for Web Application Development <s> Operating System <s> You code. And code. And code. You build only to rebuild. You focus on making your site compatible with almost every browser or wireless device ever put out there. Then along comes a new device or a new browser, and you start all over again.You can get off the merry-go-round.It's time to stop living in the past and get away from the days of spaghetti code, insanely nested table layouts, tags, and other redundancies that double and triple the bandwidth of even the simplest sites. Instead, it's time for forward compatibility.Isn't it high time you started designing with web standards?Standards aren't about leaving users behind or adhering to inflexible rules. Standards are about building sophisticated, beautiful sites that will work as well tomorrow as they do today. You can't afford to design tomorrow's sites with yesterday's piecemeal methods.Jeffrey teaches you to: Slash design, development, and quality assurance costs (or do great work in spite of constrained budgets) Deliver superb design and sophisticated functionality without worrying about browser incompatibilities Set up your site to work as well five years from now as it does today Redesign in hours instead of days or weeks Welcome new visitors and make your content more visible to search engines Stay on the right side of accessibility laws and guidelines Support wireless and PDA users without the hassle and expense of multiple versions Improve user experience with faster load times and fewer compatibility headaches Separate presentation from structure and behavior, facilitating advanced publishing workflows <s> BIB001 </s> Survey of Technologies for Web Application Development <s> Operating System <s> Despite this, Web interaction designers can’t help but feel a little envious of our colleagues who create desktop software. Desktop applications have a richness and responsiveness that has seemed out of reach on the Web. 
The same simplicity that enabled the Web’s rapid proliferation also creates a gap between the experiences we can provide and the experiences users can get from a desktop application. <s> BIB002
|
2.1.3 HTML. HTML documents contain text interspersed with formatting elements. Cascading Style Sheets (CSS) allow formatting elements to be separated into reusable style sheets. Uneven CSS implementations in browsers and ingrained practices resulted in an extended acceptance period for style sheets. Promotion of Web standards and improved browser implementations of the standards have recently resulted in steadily increasing use of CSS to separate presentation attributes from content BIB001 . Client-side scripts can be embedded in HTML documents to add interactivity to Web pages and further process a document before it is rendered by a browser. The Document Object Model (DOM) [W3C 2004a] interface allows embedded scripts to modify the structure, content, and presentation of documents. The combination of HTML, client-side scripting, and the DOM is informally known as Dynamic HTML (DHTML). The most popular client-side scripting language is JavaScript. It is also possible to reference Java applets, ActiveX controls, Macromedia Flash presentations, and other kinds of precompiled programs within HTML documents, but the approach has compatibility, latency, and security issues that limit its effectiveness. In spite of these concerns, ActiveX and Macromedia Flash are still widely used by web designers to provide a more graphically-intensive user experience than would otherwise be practically achievable on the Web. While the promising W3C Scalable Vector Graphics (SVG) [W3C 2005a] and Synchronized Media Integration Language (SMIL) [W3C 2005b] standards provide XML-based alternatives to Macromedia Flash for multimedia presentations, they are not yet pervasively used.

2.1.4 XML. The Extensible Markup Language (XML) is a widely accepted markup language that simplifies the transmission of structured data between applications. XML is a meta-language for creating collections of custom elements, in contrast to HTML, which provides a fixed set of elements. The Extensible Stylesheet Language (XSL) family includes an XML-based element matching language for XSL Transformations (XSLT) that is used to programmatically transform XML documents into other formats. XML has been an extremely successful standard since the original recommendation was released in December 1997. XML provides the base syntax for the XHTML standard, which normalizes and will eventually replace HTML as the dominant presentation technology for the Web. Configurations for Web applications and services are now commonly maintained within XML files. The extensive availability of XML parsers makes it more convenient for programmers to work with XML files rather than develop parsers for proprietary file formats. Web services standards including SOAP [W3C 2004b] and XML-RPC leverage XML for configuration and as a request and response message format for remote procedure calls over HTTP.

Browser extension interfaces allow the installation of plug-in components and give access to internal browser methods and properties related to the presentation of content. Security that prevents downloaded components from performing undesirable actions is a key requirement for browser extensions. ActiveX makes use of "Authenticode", a code-signing scheme that verifies that downloaded binary components are pristine as offered by a certified provider prior to their execution by a browser. Figure 7 illustrates the place of extensions within the architecture of browsers.

Applets. Java applets represent an extension approach that is not browser-specific since it leverages the portable Java byte code format. Applets are Java class files that are downloaded and interpreted by a Java Virtual Machine (JVM) running within the browser address space.
The JVM executes applets only after verifying that their code is safe, meaning that it has not been tampered with and contains no instructions that violate the client's security policy. Applets can also be digitally signed and verified to provide an additional level of security. Java applets initially suffered from poor perceived performance due to extended download times and slow interpretation; the technology has therefore been relegated to a secondary role, even though performance has since been vastly improved by the introduction of just-in-time (JIT) compilation to native code. The pervasive Macromedia Flash animation player plug-in provides an alternative to Java applets that is now commonly used to embed marketing presentations in Web pages.

Client-side Scripting. Interpreters for lightweight scripting languages such as JavaScript and VBScript were available for most browsers by the end of 1996. Client-side scripting language interfaces are more accessible than browser extension APIs, since they remove the need to know an entire API to add pieces of useful functionality to an HTML document. Client-side scripts are slightly less efficient than plug-ins, but the advantages of easy access to browser properties and methods outweigh the performance penalty. Initially, each browser creator implemented a proprietary scripting language and API that was incompatible by design with other browsers. Scripting language standards, including ECMAScript and DOM, have improved the situation to the point that cross-browser scripting is possible.

Rich Internet Applications. The applet concept has recently been revived under the guise of rich Internet applications (RIA). The objective of the RIA concept is to break away from the page-centered constraints imposed by the Web to support a more interactive user experience. RIA solutions that attempt to introduce a new plug-in architecture onto the Web (Konfabulator, Curl, and Sash Weblications, to name a few) have attracted attention, but eventually lose momentum due to the requirement to download and install a plug-in. Laszlo and Macromedia Flex are RIA environments that are attempting to exploit the large installed base of Flash users to provide improved interactivity without requiring plug-in installation. Laszlo applications are described using an XML application description language. The Laszlo Presentation Server is a Java servlet that dynamically composes and serves Flash interfaces in response to requests for Laszlo applications. RIA solutions can improve the responsiveness and presentation quality of Web user interfaces, but have not reached the mainstream of development. More experience with the technologies is needed to assess their compatibility with the existing Web infrastructure before widespread adoption of RIA will occur.

Expanded Use of Dynamic HTML. BIB002 coined the term Ajax for using the combination of XHTML, CSS, DOM, XML, XSLT, JavaScript, and XMLHttpRequest, a JavaScript API for accessing Web services, to deliver RIA completely within the existing Web infrastructure. The production application of Ajax within several popular, high-volume web sites, including Gmail, Google Suggest, Google Maps, and the Amazon.com A9 search engine, provides evidence that the combination can be effective and scalable. The disadvantages of Ajax lie in browser compatibility issues and the non-straightforward JavaScript coding that can be required to implement simple functionality.
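The Ajax pattern described above, where script in the page fetches a small response from a service and updates the document without a full page reload, can be illustrated from the server side with Python's standard library. In the toy sketch below, the client fetch stands in for the browser's XMLHttpRequest call; the endpoint, URL, and payload are invented for illustration.

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class SuggestHandler(BaseHTTPRequestHandler):
    """Toy service: returns a small XML fragment, as an Ajax endpoint might."""

    def do_GET(self):
        payload = b"<suggestions><item>ajax</item><item>apache</item></suggestions>"
        self.send_response(200)
        self.send_header("Content-Type", "text/xml")
        self.send_header("Content-Length", str(len(payload)))
        self.end_headers()
        self.wfile.write(payload)

    def log_message(self, *args):  # silence per-request logging
        pass

server = HTTPServer(("127.0.0.1", 0), SuggestHandler)  # port 0: pick a free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# In a browser this GET would be issued by XMLHttpRequest and the returned
# fragment merged into the page via the DOM; urllib plays the client here.
url = f"http://127.0.0.1:{server.server_port}/suggest?q=a"
with urllib.request.urlopen(url) as resp:
    fragment = resp.read().decode()

server.shutdown()
print(fragment)
```

The key property of the pattern is visible even in this sketch: the response is a small data fragment rather than a whole page, so the page itself never reloads.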
|
Survey of Technologies for Web Application Development <s> Initial Dynamism <s> A fundamental change is occurring in the way people write computer programs, away from system programming languages such as C or C++ to scripting languages such as Perl or Tcl. Although many people are participating in the change, few realize that the change is occurring and even fewer know why it is happening. This article explains why scripting languages will handle many of the programming tasks in the next century better than system programming languages. System programming languages were designed for building data structures and algorithms from scratch, starting from the most primitive computer elements. Scripting languages are designed for gluing. They assume the existence of a set of powerful components and are intended primarily for connecting components. <s> BIB001 </s> Survey of Technologies for Web Application Development <s> Initial Dynamism <s> Over the last few years, the World Wide Web has transformed itself from a static content-distribution medium to an interactive, dynamic medium. The Web is now widely used as the presentation layer for a host of on-line services such as e-mail and address books, e-cards, e-calendar, shopping, banking, and stock trading. As a consequence (HyperText Markup Language)HTML files are now typically generated dynamically after the server receives the request. From the Web-site providers' point of view, dynamic generation of HTML pages implies a lesser understanding of the real capacity and performance of their Web servers. From the Web developers' point of view, dynamic content implies an additional technology decision: the Web programming technology to be employed in creating a Web-based service. Since the Web is inherently interactive, performance is a key requirement, and often demands careful analysis of the systems. In this paper, we compare four dynamic Web programming technologies from the point of view of performance. 
The comparison is based on testing and measurement of two cases: one is a case study of a real application that was deployed in an actual Web-based service; the other is a trivial application. The two cases provide us with an opportunity to compare the performance of these technologies at two ends of the spectrum in terms of complexity. Our focus in this paper is on how complex vs. simple applications perform when implemented using different Web programming technologies. The paper draws comparisons and insights based on this development and performance measurement effort. <s> BIB002
|
The earliest technical elements that allowed for interactive and dynamic content were HTML forms, the HTTP POST request method, and the Common Gateway Interface (CGI). HTML forms are used to collect user input data that is submitted to a forms processor on a Web server in a GET or POST message. By 1993, the availability of CGI completed the forms processing path by providing a means by which Web servers could process and respond to submitted data. CGI is functional but not scalable; as its limitations became clear, other solutions were developed that were more efficient but more complicated. This section reviews the first wave of technologies for the dynamic Web, including CGI, its server-side successors, and client-side extension interfaces.

2.2.1 Forms. The HTML forms capability naturally extends the Web's document metaphor by allowing user input to be entered on Web pages. A form is a section of a document that contains named user interface controls such as text boxes, check boxes, radio buttons, list boxes, and buttons [Raggett et al. 1997] . The definition of a form specifies a request method (GET or POST) and a URI for a server-side forms processor. When a form is submitted, the browser formats a request message containing the form data as a sequence of name-value pairs. For GET messages, the form data set is appended to the action URI as query parameters. When POST is used, the form data is sent in the message body. The forms capability of HTML is relied on by many Web applications. The forms interface is simple and cross-platform, supports light data validation, and allows pages to be event-driven. The event model of a form is implicit in the URI references associated with submit buttons. The loosely coupled interaction between forms and their processors can be a source of reliability problems, since there is no static guarantee that the data types of submitted data elements conform to the expectations of the form processor.

2.2.2 CGI.
CGI was the first widely available means for integrating Web servers with external systems, initially provided as a method to process data submitted from HTML forms [NCSA 1993] . CGI allows server-side programs to be invoked in response to HTTP requests. A Web server creates a new process for each CGI request. Figure 3 shows the architecture of CGI. CGI programs can be written in any programming language that supports environment variables and the standard input and output streams. The earliest CGI programs were written in C, but the deployment ease and portability of interpreted scripting languages such as tcl, Perl, and Python has made them the languages of choice for CGI BIB001 . Perl is the most popular language for CGI scripting. User input and metadata about requests are passed into CGI programs through environment variables and within the standard input stream, respectively. The output written by a CGI program to its standard output stream is sent to the client within an HTTP response message. The example Perl script in Figure 4 reads an environment variable to determine the request method (GET or POST) and displays the data that was submitted from a form.

CGI was the first widely supported technology for dynamic content and is still supported out-of-the-box by most Web servers. In tandem with scripting languages, CGI is a platform-independent solution with a simple, well-known interface. The disadvantages are related to scalability and usability concerns. CGI is not highly scalable because a new process must be created for each request. For busy Web sites serving thousands of concurrent users, the CPU and memory usage required to constantly create and destroy processes severely limits the number of concurrent requests that can be handled. The use of scripting languages further strains a Web server's capacity due to the need to start an interpreter for each request.
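The CGI contract just described (request metadata delivered in environment variables, the response written as a header block plus body on standard output) can be mirrored in Python as easily as in the Perl script of Figure 4. A minimal sketch that simulates the environment a server would set up for a GET request; the variable values and field names are invented for illustration:

```python
import os
from urllib.parse import parse_qs

# Simulate the environment a Web server sets up for a CGI GET request
# (these values are illustrative, not produced by a real server).
os.environ["REQUEST_METHOD"] = "GET"
os.environ["QUERY_STRING"] = "name=Ada&lang=Python"

method = os.environ["REQUEST_METHOD"]
# For GET, the form data arrives URL-encoded in QUERY_STRING;
# for POST it would be read from standard input instead.
form = parse_qs(os.environ["QUERY_STRING"]) if method == "GET" else {}

# A CGI program writes an HTTP header block, a blank line, then the body
# to standard output; the server relays all of it to the client.
print("Content-Type: text/html")
print()
print("<html><body>")
for field, values in sorted(form.items()):
    print(f"<p>{field} = {values[0]}</p>")
print("</body></html>")
```

The process-per-request cost discussed above is exactly the cost of launching this interpreter and script once for every submitted form.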
The usability problems of CGI stem from the limitations of its thin abstraction over the HTTP protocol. Programmers must understand the workings of HTTP, down to the level of the formatting details of resource identifiers and messages, to be able to create CGI scripts. No page computation model is provided; the programmer is responsible for generating the response by printing HTML to the standard output stream. Role separation between designers and programmers is diminished because the presentation attributes of pages are embedded with print statements in programs. Web page authoring tools such as FrontPage or Dreamweaver cannot be used, since the presentation HTML is embedded within a program's logic. Other responsibilities, including state management, security, validation, data access, and event handling, are completely delegated to programmers. A spate of fragile, idiosyncratic Web application implementations was the result of the lack of structure imposed by CGI. The introduction of software engineering discipline in the form of coding guidelines, scripting libraries, and frameworks has improved the situation to some extent.

Despite its limitations, CGI is not obsolete. It natively exists within most Web servers, in contrast to other dynamic content solutions that require additional component installation. The out-of-the-box, universal availability of CGI makes it a possible target for small to medium-sized applications with low-volume expectations. Benchmark comparisons to other options for generating dynamic content found CGI to be inefficient in handling concurrent client requests and therefore suitable only for low-traffic applications. The technology is still in use mainly due to the increasing popularity of scripting languages, which can provide a straightforward, portable alternative to Java.

2.2.3 Scalable CGI Implementations.
FastCGI (Open Market), mod_perl combined with the Apache::Registry module (Apache), and PerlEx (ActiveState) are examples of Web server extensions that improve the scalability of CGI. FastCGI is a CGI implementation that maintains a pool of persistent processes that are reused across multiple requests to reduce process creation overhead. Figure 3 shows the architecture of scalable CGI implementations. mod_perl is an Apache extension that embeds a Perl interpreter within the Web server and allows Perl scripts to access the Apache C language API. Apache::Registry is a Perl library that supports CGI under mod_perl. The combination of mod_perl and Apache::Registry improves performance by avoiding the overhead of starting and stopping a Perl interpreter for each request. An Apache Web server can also be extended to support corresponding capabilities for other scripting languages, including Python (mod_snake, mod_python), tcl (mod_tcl), and Ruby (mod_ruby with eRuby). PerlEx provides similar capabilities for Microsoft IIS by maintaining a pool of interpreters that is managed by a Web server extension module [ActiveState 2003] .

One benchmark study compared the performance of FastCGI, mod_perl, PHP, and Java servlets under Apache on Linux using a minimal commodity hardware configuration (a single Pentium III 733 MHz processor with 384 MB of memory). The results showed that FastCGI was the best-performing and most reliable option on the benchmark hardware. Java servlets also performed steadily, even though the authors conceded that the benchmark conditions were not realistic for the technology, which is more appropriately matched to enterprise-level hardware supporting multiple processors and larger amounts of memory. BIB002 compared the performance of a similar technology group (CGI, FastCGI, Java servlets, and JSP) on a dual-processor Solaris system (2 360 MHz Sun Ultra processors with 512 MB of memory) with similar results that showed FastCGI to be the best performer.
However, the authors also concluded that factors other than performance, including development time, support availability, ease of integration, and deployment convenience, are important concerns for Web development groups.

ISAPI filters and extensions are the native extension interfaces of IIS. Figure 6 shows the placement of filters and extensions within the reference architecture. The corresponding Apache API constructs are modules, handlers, and filters. ISAPI extensions behave like CGI scripts; extensions are invoked directly by clients or through URI mapping, and are responsible for handling requests and creating responses. On IIS servers, a well-known example is the mapping of .asp files to asp.dll, the Active Server Pages interpreter. A corresponding example for Apache is the association of .php files with the mod_php extension module. ISAPI filters perform additional behaviors beyond the default behaviors, and can be used to implement custom logging, authentication, mapping, and retrieval features. The Apache API also supports filters as a modular way to manipulate request or response data streams.

Web server APIs were originally designed as scalable replacements for CGI, but they are rarely used directly to build Web applications. The APIs are complex, non-portable, and require advanced programming knowledge, so extension modules are difficult to build, test, and maintain. Reliability can be compromised due to the tight integration of extensions into Web servers; a single flaw in an extension module can crash a Web server. The cost of developing extensions is easier to justify for widely reusable features than for those supporting only a single application. In spite of their weaknesses, Web server APIs are an important building block for dynamic content generation systems. In fact, for performance reasons most server-side technologies that support dynamic content are based on Web server extension modules.
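The scalability idea shared by FastCGI and the embedded-interpreter modules, keeping a small pool of long-lived workers instead of creating a process per request, can be sketched in a few lines. In this toy Python sketch a thread pool stands in for FastCGI's persistent process pool; the handler and request paths are invented for illustration.

```python
import threading
from concurrent.futures import ThreadPoolExecutor

def handle_request(path):
    """A persistent worker handles one request; its ident shows worker reuse."""
    return threading.get_ident(), f"<html><body>served {path}</body></html>"

# Plain CGI starts a fresh process (and interpreter) per request; the
# FastCGI idea is a small, fixed pool of long-lived workers to which
# every incoming request is routed.
with ThreadPoolExecutor(max_workers=2) as pool:
    results = list(pool.map(handle_request, (f"/page/{i}" for i in range(20))))

workers = {ident for ident, _ in results}
print(f"{len(results)} requests served by {len(workers)} persistent workers")
```

Twenty requests are served, but at most two workers ever exist; the per-request startup cost that limits CGI is paid only once per worker.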
|
Survey of Technologies for Web Application Development <s> WEB PROGRAMMING VS. REGULAR PROGRAMMING <s> Most Web applications are still developed ad hoc. One reason is the gap between established software design concepts and the low-level Web implementation model. We summarize work on WebComposition, a model for Web application development, then introduce the WebComposition Markup Language, an XML-based language that implements the model. WCML embodies object-oriented principles such as modularity, abstraction and encapsulation. <s> BIB001 </s> Survey of Technologies for Web Application Development <s> WEB PROGRAMMING VS. REGULAR PROGRAMMING <s> The World Wide Web is rich in content and services, but access to these resources must be obtained mostly through manual browsers. We would like to be able to write programs that reproduce human browsing behavior, including reactions to slow transmission-rates and failures on many simultaneous links. We thus introduce a concurrent model that directly incorporates the notions of failure and rate of communication, and then describe programming constructs based on this model. <s> BIB002 </s> Survey of Technologies for Web Application Development <s> WEB PROGRAMMING VS. REGULAR PROGRAMMING <s> This paper describes the Intermediary Architecture , a middleware architecture which interposes distributed object services between Web client and server. The architecture extends current Web architectures with a new kind of plug-in, making a new colleciton of Web applications easier to develop. Example services including Web annotations and Web performance monitoring are described. <s> BIB003 </s> Survey of Technologies for Web Application Development <s> WEB PROGRAMMING VS. REGULAR PROGRAMMING <s> The World Wide Web is an increasingly important factor in planning for general distributed computing environments. 
This article surveys Web technologies that look to integrate aspects of object technology with the basic infrastructure of the Web. <s> BIB004
|
In software development terms, the maturity level of the state of common practices for Web development has traditionally lagged relative to the technologies and techniques used for other client-server applications BIB001 . As late as 1995, the CGI was still the most practical option for dynamic Web content creation. In contrast, distributed object environments based on CORBA and COM have been available for client-server development since 1992. By the time that templating and scripting languages were commonly supporting largely ad-hoc Web development practices, client-server development more advanced, supported by graphical development tools, frameworks, and software engineering practices. As the Web began to be used in an increasingly large class of critical business applications, it became apparent that the fundamental requirements were not well supported by existing solutions. Early attempts, roughly between 1995 and 1999, centered on trying to find a unifying API for Web programming, essentially viewing the Web as a distributed object system in the tradition of CORBA BIB002 BIB004 BIB003 . Difficulties in coordinating the efforts of the wide-ranging Web community hindered efforts to define a global API, but the mark of distributed object research can be seen in serviceoriented architecture (SOA) standards, which implement globally distributed Web service technologies by exchanging XML over HTTP [W3C 2004b ]. The continual disparity fueled a marketing pipeline for dynamic Web technology creators. Almost any advance that addressed limitations of Web development could find a waiting base of potential adopters. Even problematic technologies such as ActiveX found avenues of acceptance solely based on incremental benefits. 
The introduction of J2EE in 1999 was a flashpoint for the dynamic Web; almost instantly the maturity gap narrowed, and priorities shifted so that many software engineering advances were for the first time being driven by the requirements of Web applications. The importance of J2EE cannot be overstated, since it set standards that have since influenced subsequent significant advances for Web development, including .NET, which surfaced as a competitive response. This section examines tools, techniques, and technologies that have been carried forward from traditional programming domains into the realm of the Web.
|
Survey of Technologies for Web Application Development <s> Categories of Web Frameworks. <s> A switch actuating device adapted for conjoint rotation with a rotatable assembly of a prime mover. Means for mounting to the rotatable assembly so as to be conjointly rotatable therewith includes a pair of sets of opposite surfaces, and switch operating means conjointly rotatable with the mounting means is arranged for axial movement thereon between a pair of opposite positions. A pair of centrifugal weight members are responsive to the rotational speed of the device to effect the axial movement of the switch operating means between its opposite positions and include a pair of sets of means arranged for guiding engagement on the oppposite surface set pair upon the axial movement of the switch operating means between its opposite positions, respectively. A pair of springs are respectively biased between the centrifugal members. A method of assembling a switch actuating device is also disclosed. <s> BIB001 </s> Survey of Technologies for Web Application Development <s> Categories of Web Frameworks. <s> Preface. Who This Book Is For. Acknowledgements. Colophon. Introduction. Architecture. Enterprise Applications. Kinds of Enterprise Application. Thinking About Performance. Patterns. The Structure of the Patterns. Limitations of These Patterns. I. THE NARRATIVES. 1. Layering. The Evolution of Layers in Enterprise Applications. The Three Principal Layers. Choosing Where to Run Your Layers. 2. Organizing Domain Logic. Making a Choice. Service Layer. 3. Mapping to Relational Databases. Architectural Patterns. The Behavioral Problem. Reading in Data Structural Mapping Patterns. Mapping Relationships. Inheritance. Building the Mapping. Double Mapping. Using Metadata. Database Connections. Some Miscellaneous Points. Further Reading. 4. Web Presentation. View Patterns. Input Controller Patterns. Further Reading. 5. Concurrency (by Martin Fowler and David Rice). Concurrency Problems. 
Execution Contexts. Isolation and Immutability. Optimistic and Pessimistic Concurrency Control. Preventing Inconsistent Reads. Deadlocks. Transactions. ACID. Transactional Resources. Reducing Transaction Isolation for Liveness. Business and System Transactions. Patterns for Offline Concurrency Control. Application Server Concurrency. Further Reading. 6. Session State. The Value of Statelessness. Session State. Ways to Store Session State. 7. Distribution Strategies. The Allure of Distributed Objects. Remote and Local Interfaces. Where You Have to Distribute. Working with the Distribution Boundary. Interfaces for Distribution. 8. Putting it all Together. Starting With the Domain Layer. Down to the Data Source. Data Source for Transaction Script. Data Source Table Module (125). Data Source for Domain Model (116). The Presentation Layer. Some Technology-Specific Advice. Java and J2EE. .NET. Stored Procedures. Web Services. Other Layering Schemes. II. THE PATTERNS. 9. Domain Logic Patterns. Transaction Script. How It Works. When to Use It. The Revenue Recognition Problem. Example: Revenue Recognition (Java). Domain Model. How It Works. When to Use It. Further Reading. Example: Revenue Recognition (Java). Table Module. How It Works. When to Use It. Example: Revenue Recognition with a Table Module (C#). Service Layer(by Randy Stafford). How It Works. When to Use It. Further Reading. Example: Revenue Recognition (Java). 10. Data Source Architectural Patterns. Table Data Gateway. How It Works. When to Use It. Further Reading. Example: Person Gateway (C#). Example: Using ADO.NET Data Sets (C#). Row Data Gateway. How It Works. When to Use It. Example: A Person Record (Java). Example: A Data Holder for a Domain Object (Java). Active Record. How It Works. When to Use It. Example: A Simple Person (Java). Data Mapper. How It Works. When to Use It. Example: A Simple Database Mapper (Java). Example: Separating the Finders (Java). Example: Creating an Empty Object (Java). 11. 
Object-Relational Behavioral Patterns. Unit of Work. How It Works. When to Use It. Example: Unit of Work with Object Registration (Java) (by David Rice). Identity Map. How It Works. When to Use It. Example: Methods for an Identity Map (Java). Lazy Load. How It Works. When to Use It. Example: Lazy Initialization (Java). Example: Virtual Proxy (Java). Example: Using a Value Holder (Java). Example: Using Ghosts (C#). 12. Object-Relational Structural Patterns. Identity Field. How It Works. When to Use It. Further Reading. Example: Integral Key (C#). Example: Using a Key Table (Java). Example: Using a Compound Key (Java). Foreign Key Mapping. How It Works. When to Use It. Example: Single-Valued Reference (Java). Example: Multitable Find (Java). Example: Collection of References (C#). Association Table Mapping. How It Works. When to Use It. Example: Employees and Skills (C#). Example: Using Direct SQL (Java). Example: Using a Single Query for Multiple Employees (Java) (by Matt Foemmel and Martin Fowler). Dependent Mapping. How It Works. When to Use It. Example: Albums and Tracks (Java). Embedded Value. How It Works. When to Use It. Further Reading. Example: Simple Value Object (Java). Serialized LOB. How It Works. When to Use It. Example: Serializing a Department Hierarchy in XML (Java). Single Table Inheritance. How It Works. When to Use It. Example: A Single Table for Players (C#). Loading an Object from the Database. Class Table Inheritance. How It Works. When to Use It. Further Reading. Example: Players and Their Kin (C#). Concrete Table Inheritance. How It Works. When to Use It. Example: Concrete Players (C#). Inheritance Mappers. How It Works. When to Use It. 13. Object-Relational Metadata Mapping Patterns. Metadata Mapping. How It Works. When to Use It. Example: Using Metadata and Reflection (Java). Query Object. How It Works. When to Use It. Further Reading. Example: A Simple Query Object (Java). Repository (by Edward Hieatt and Rob Mee). How It Works. 
When to Use It. Further Reading. Example: Finding a Person's Dependents (Java). Example: Swapping Repository Strategies (Java). 14. Web Presentation Patterns. Model View Controller. How It Works. When to Use It. Page Controller. How It Works. When to Use It. Example: Simple Display with a Servlet Controller and a JSP View (Java). Example: Using a JSP as a Handler (Java). Example: Page Handler with a Code Behind (C#). Front Controller. How It Works. When to Use It. Further Reading. Example: Simple Display (Java). Template View. How It Works. When to Use It. Example: Using a JSP as a View with a Separate Controller (Java). Example: ASP.NET Server Page (C#). Transform View. How It Works. When to Use It. Example: Simple Transform (Java). Two Step View. How It Works. When to Use It. Example: Two Stage XSLT (XSLT). Example: JSP and Custom Tags (Java). Application Controller. How It Works. When to Use It. Further Reading. Example: State Model Application Controller (Java). 15. Distribution Patterns. Remote Facade. How It Works. When to Use It. Example: Using a Java Session Bean as a Remote Facade (Java). Example: Web Service (C#). Data Transfer Object. How It Works. When to Use It. Further Reading. Example: Transferring Information about Albums (Java). Example: Serializing Using XML (Java). 16. Offline Concurrency Patterns. Optimistic Offline Lock (by David Rice). How It Works. When to Use It. Example: Domain Layer with Data Mappers (165) (Java). Pessimistic Offline Lock (by David Rice). How It Works. When to Use It. Example: Simple Lock Manager (Java). Coarse-Grained Lock (by David Rice and Matt Foemmel). How It Works. When to Use It. Example: Shared Optimistic Offline Lock (416) (Java). Example: Shared Pessimistic Offline Lock (426) (Java). Example: Root Optimistic Offline Lock (416) (Java). Implicit Lock (by David Rice). How It Works. When to Use It. Example: Implicit Pessimistic Offline Lock (426) (Java). 17. Session State Patterns. Client Session State. How It Works. 
When to Use It. Server Session State. How It Works. When to Use It. Database Session State. How It Works. When to Use It. 18. Base Patterns. Gateway. How It Works. When to Use It. Example: A Gateway to a Proprietary Messaging Service (Java). Mapper. How It Works. When to Use It. Layer Supertype. How It Works. When to Use It. Example: Domain Object (Java). Separated Interface. How It Works. When to Use It. Registry. How It Works. When to Use It. Example: A Singleton Registry (Java). Example: Thread-Safe Registry (Java) (by Matt Foemmel and Martin Fowler). Value Object. How It Works. When to Use It. Money. How It Works. When to Use It. Example: A Money Class (Java) (by Matt Foemmel and Martin Fowler). Special Case. How It Works. When to Use It. Further Reading. Example: A Simple Null Object (C#). Plugin (by David Rice and Matt Foemmel). How It Works. When to Use It. Example: An Id Generator (Java). Service Stub (by David Rice). How It Works. When to Use It. Example: Sales Tax Service (Java). Record Set. How It Works. When to Use It. References Index. 0321127420T10162002 <s> BIB002 </s> Survey of Technologies for Web Application Development <s> Categories of Web Frameworks. <s> Part I. Why Frameworks/Components?: 1. Components and application frameworks 2. Components: the future of web-application development 3. What do they provide and what are the benefits? Part II. Selecting Frameworks and Components: 4. Choosing component libraries and application frameworks 5. Open source and components/frameworks Part III. Using Components and Frameworks: 6. Frameworks and developement methodologies 7. IDE's 8. Strategies for using frameworks, best practices Part IV. Summary and the Future: 9. Conclusions: the future of frameworks/components. <s> BIB003 </s> Survey of Technologies for Web Application Development <s> Categories of Web Frameworks. 
<s> Written for architects and developers, this guide presents alternatives to EJB and explains how to manage transactions, solve common problems, design applications, access data, use open source products to enhance productivity, and design. <s> BIB004
|
The most useful frameworks supporting Web application development can be categorized as Web application and user interface frameworks, persistence frameworks, and lightweight containers BIB003 ]. Web application and user interface frameworks are the most directly relevant to dynamic content generation systems. Persistence frameworks such as Hibernate aim to allow programmers to efficiently retrieve and update database information through encapsulated objects rather than through direct SQL calls or through EJB entity beans. Frameworks that leverage inversion of control to simplify component-based development, collectively known as lightweight containers, are gaining traction as a more usable alternative for J2EE applications that do not require high-end capabilities but would otherwise needlessly incur the implementation complexity of EJB. The open source PicoContainer and Spring frameworks are highly-regarded lightweight container frameworks BIB004 . 5.1.3 Model-View-Controller. The Model-View-Controller (MVC) design pattern BIB001 , commonly used in user interface programming to separate presentation, business, and state management concerns, is a logical architectural choice that matches the event-driven nature of dynamic Web applications. Early Web technologies did not allow convenient separation of concerns, but the more recent convergence of Java, scripting and template languages, and servlets supports the modularization of Web applications to align with the MVC roles. Figure 20 shows the most common mapping of the MVC roles to J2EE entities. For .NET, a similar mapping to ASP.NET, the IHttpHandler interface, and managed components is possible. Much attention has been focused on creating Web MVC frameworks since 2000, when the potential for reuse and for streamlining the development process became evident. As a result, many competing frameworks are available, many of which are products of open source development projects. 
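The MVC role separation described above can be sketched in a few lines of plain Java. This is an illustrative toy, not code from any framework in the survey; all class names (CounterModel, CounterView, CounterController) are invented for the example.

```java
// Minimal sketch of the MVC roles: the Model holds state and logic, the
// View renders it (here as an HTML fragment), and the Controller maps an
// incoming event to a model operation and then selects the view output.
public class MvcSketch {
    // Model: state and business logic, unaware of presentation.
    static class CounterModel {
        private int count;
        int get() { return count; }
        void increment() { count++; }
    }

    // View: renders the model's state.
    static class CounterView {
        String render(CounterModel model) {
            return "<p>Count: " + model.get() + "</p>";
        }
    }

    // Controller: translates events into model updates and view renders.
    static class CounterController {
        private final CounterModel model = new CounterModel();
        private final CounterView view = new CounterView();
        String handle(String event) {
            if ("increment".equals(event)) model.increment();
            return view.render(model);
        }
    }

    public static String demo() {
        CounterController c = new CounterController();
        c.handle("increment");
        return c.handle("increment");
    }

    public static void main(String[] args) {
        System.out.println(demo()); // <p>Count: 2</p>
    }
}
```

Because each role touches the others only through a narrow interface, the view template or the state representation can be swapped independently, which is precisely the property the Web MVC frameworks below try to provide at scale.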
While the most well-known frameworks currently target the J2EE platform, several have been ported to .NET, and there are also many scripting language frameworks available. Scripting language frameworks are handicapped from the start by either a reliance on CGI or the need to implement a supporting infrastructure analogous to servlets, in addition to supporting MVC. 5.1.4 Application-driven Web MVC Frameworks. Application-driven Web MVC frameworks implement MVC using the Front Controller pattern BIB002 ]. In the Front Controller pattern, events are directed to an application-level Controller that invokes the correct action in response. The event-action table is maintained at the application level, usually in an XML file. Navigation details are abstracted out of individual pages, although encapsulation and reusability are compromised by dependencies on the configuration file that defines the event-action table. An application consists of a Controller class, Model classes, View page templates, and a configuration file. The main objective is event handling rather than hiding implementation details, so programmer familiarity with resource implementation technologies is assumed. The lightweight nature of the abstraction reduces the learning curve for experienced Web developers relative to more opaque frameworks. Novice Web programmers may face an extended orientation period due to the need to comprehend the workings of a framework in addition to more fundamental concepts. Apache Struts. The most well-known application-driven Web MVC framework is the open source Apache Struts framework ]. First introduced in 2000, Struts continues to be the most popular Web MVC framework for Java. The framework is mature, well-documented, and effectively supports the requirements of a large class of interactive Web applications. The implementation is straightforward and based on the servlet, JSP, HTML forms, JavaBeans, and XML standards. 
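The Front Controller mechanics described above reduce to a single dispatch point consulting an event-action table. The following plain-Java sketch illustrates the idea; the Map stands in for the XML configuration file, and the names used here are invented for illustration, not the Struts API.

```java
// Sketch of the Front Controller pattern: one entry point, an
// application-level event-action table, and pluggable Action objects
// that return the logical name of the next view.
import java.util.HashMap;
import java.util.Map;

public class FrontControllerSketch {
    interface Action {
        String execute();   // returns the logical name of the next view
    }

    static class FrontController {
        private final Map<String, Action> actions = new HashMap<>();
        void register(String event, Action action) { actions.put(event, action); }
        String dispatch(String event) {
            Action a = actions.get(event);
            return (a == null) ? "error-view" : a.execute();
        }
    }

    public static String demo(String event) {
        FrontController fc = new FrontController();
        fc.register("login", () -> "welcome-view");
        fc.register("logout", () -> "goodbye-view");
        return fc.dispatch(event);
    }

    public static void main(String[] args) {
        System.out.println(demo("login"));    // welcome-view
        System.out.println(demo("unknown"));  // error-view
    }
}
```

The trade-off the text notes is visible even here: navigation lives in one central table, which eases global changes but couples every Action to that shared configuration.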
While several well-known frameworks with active developer communities, including WebWork, Spring MVC, and Maverick, occupy the same architectural niche, the details of Struts, the de-facto leader, are broadly representative of the category. The event-action table for a Struts application is specified in struts-config.xml, an application-level configuration file. The web.xml configuration file for an application maps URI names to the ActionServlet servlet, a central Controller servlet supplied by the framework that routes requests to Action class instances that encapsulate response logic. Action instances access session data, HTML form data, and integration components to formulate dynamic responses. Form beans, JavaBean components that implement the Struts ActionForm or DynaActionForm interfaces, encapsulate the server-side state of the input fields of an HTML form. The DynaActionForm interface simplifies state management by dynamically creating form beans when they are needed without requiring additional programming. After processing response logic, an Action class transfers control to the Controller, passing a global forward, the logical name of the View template that will generate the next page of the dynamic user interface based on the outcome of the response logic. Although View templates are normally JSP pages, the framework allows other template engines to be used, including Velocity and WebMacro. The Model is the least constrained tier of a Struts application, consisting of components that are accessed by Action class instances and View templates to dynamically access and update persistent information. Field-level input validations, implemented either by hand-coding form bean validate() methods or by declaration in the validator-rules.xml and validation.xml configuration files, are always performed on the server side and optionally through generated JavaScript on the client side. 5.1.5 Page-driven MVC Frameworks. 
Page-driven frameworks implement the Page Controller pattern BIB002 ] to provide an event-driven model for Web programming that recalls traditional desktop GUI programming. In the Page Controller pattern, events generated from pages are directed to page-level Controllers. The event-action table is spread throughout the individual pages of an application. An application consists of related pages, classes, and configuration files. Relative to application-driven frameworks, the higher degree of page independence allows heavier abstraction over resource implementation details, which supports rapid component-based development through drag-and-drop GUI page composition. The high abstraction level may extend the learning curve even for experienced Web programmers due to the need to become familiar with a completely different object model. Echo takes the approach to its logical conclusion by converting compiled Swing applications into HTML pages at the Web server dynamically at runtime. The effectiveness of desktop API-derivative Web application frameworks is limited by their code-intensive nature, which prevents role separation between designers and programmers. GUI development tools such as EchoStudio simplify development, but are not accessible to Web designers, since page designs are not based on HTML and so familiar Web authoring tools cannot be used. WebObjects. Other page-driven frameworks take a more practical, template-based approach to incorporating desktop GUI programming practices into Web development. The proprietary Web development framework of the NeXT (now Apple) WebObjects application server environment pioneered a page-driven, component-based approach to Web programming in 1996. The WebObjects framework was initially built for Objective-C, but was re-implemented for Java in 2000 to support J2EE development. WebObjects applications are collections of pages containing HTML and references to Web Components. Tapestry. 
Tapestry ], available since 2000, is an open source Java framework from Apache that was heavily influenced by WebObjects. Tapestry provides an object model for component-based development of Web applications. A Tapestry application is a collection of pages that are composed from HTML and components that encapsulate dynamic behavior. In Tapestry, components are known as Java Web Components (JWC). Tapestry supports two kinds of components, user interface components and control components. Control components are not rendered on pages but instead provide control flow constructs. Simple applications can be constructed entirely from library components provided as part of the framework distribution. A Tapestry page is defined by an XML specification, one or more Java classes, and an HTML template. The XML specification identifies a Java page controller class, and defines identifiers that indirectly bind components to HTML templates. The page controller class implements listener methods that handle user interface events. Templates contain HTML and component references. A Tapestry component definition includes an XML specification, one or more Java classes, and an HTML template. Both page and component templates consist of plain HTML and placeholder tags that reference components through a special attribute, jwcid. Role separation is well supported since Web designers can use their preferred authoring tools to design page templates, which contain only valid HTML. Although the framework internally routes all requests through a single entry servlet, the ApplicationServlet, the Tapestry object model completely abstracts servlet processing. Programmers do not need to understand servlet processing to effectively use the framework. Tapestry appeals to desktop GUI programmers since they are familiar with event-driven programming. 
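The Page Controller idea behind Tapestry-style frameworks can be contrasted with the Front Controller in a short plain-Java sketch: each page owns its listener logic, so the event-action table is distributed across pages rather than centralized in one configuration file. All names here are invented for illustration and are not Tapestry API.

```java
// Sketch of the Page Controller pattern: every page handles its own
// events and decides the next page, so navigation knowledge is local.
import java.util.HashMap;
import java.util.Map;

public class PageControllerSketch {
    static abstract class Page {
        // Listener method: reacts to a page-level event, returns next page.
        abstract String onEvent(String event);
    }

    static class LoginPage extends Page {
        String onEvent(String event) {
            return "submit".equals(event) ? "HomePage" : "LoginPage";
        }
    }

    static class HomePage extends Page {
        String onEvent(String event) {
            return "logout".equals(event) ? "LoginPage" : "HomePage";
        }
    }

    public static String demo() {
        Map<String, Page> pages = new HashMap<>();
        pages.put("LoginPage", new LoginPage());
        pages.put("HomePage", new HomePage());
        // Navigation emerges from the pages themselves, not a central table.
        String current = pages.get("LoginPage").onEvent("submit");
        return pages.get(current).onEvent("logout");
    }

    public static void main(String[] args) {
        System.out.println(demo()); // LoginPage
    }
}
```

The locality shown here is what gives page-driven frameworks their drag-and-drop composability, at the cost of scattering navigation decisions across the application.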
While the dynamic content generation process is computationally intensive, the framework avoids scalability problems by efficiently caching internal objects. ASP.NET and JavaServer Faces. While Tapestry is technically highly-regarded, the momentum behind the framework has been largely eclipsed by the emergence of ASP.NET ] and, to a lesser extent so far, the nascent JavaServer Faces (JSF) specification ]. ASP.NET is an upgraded version of ASP that supports Web Forms, a namespace within the .NET framework that provides a page-driven object model for Web programming similar to the Tapestry object model. JSF is a specification for a component-based Web application framework built over JSP and tag libraries, much closer in concept to Struts than to ASP.NET. JSF has superficial similarities to ASP.NET, but is very different in detail. Both frameworks support rapid user interface development with GUI form builder tools, primarily Visual Studio.NET for ASP.NET and Java Studio Creator for JSF. A major conceptual difference is that ASP.NET is page-driven, while JSF is application-driven. All requests for JSF application resources are routed to views by a central servlet, the FacesServlet, per specifications in the application-level faces-config.xml file. While the uptake of JSF is in an early stage and widespread adoption is not inevitable, the merging of characteristics of the Front Controller and Page Controller patterns provides a higher degree of deployment flexibility relative to ASP.NET, due to clearer separation of the navigational aspect from page definitions. Portals and Portlets. The component model of portlets is closely related to JSF, which features integration with the Java Portlet API . Portlets are managed Java components that respond to requests and generate dynamic content. The Java Portlet API specifies how to compose component portlets into combined portals that aggregate content from portlets, subject to personalization. 
A similar component model is provided by the ASP.NET WebParts framework.
|
Survey of Technologies for Web Application Development <s> Model-Driven Development of Web Applications <s> Integrated information systems are often realized as data-intensive Web sites, which integrate data from multiple data sources. We present a system, called STRUDEL, for specifying and generating data-intensive Web sites. STRUDEL separates the tasks of accessing and integrating a site's data sources, building its structure, and generating its HTML representation. STRUDEL's declarative query language, called StruQL, supports the first two tasks. Unlike ad-hoc database queries, a StruQL query is a software artifact that must be extensible and reusable To support more modular and reusable site definition queries, we extend StruQL with functions and describe how the new language, FunStruQL, better supports common site-engineering tasks, such as choosing a strategy for generating the site's pages dynamically and/or statically To substantiate STRUDEL's benefits, we describe the re-engineering of a production Web site using FunStruQL and show that the new site is smaller, more reusable, and unlike the original site, can be analyzed and optimized. <s> BIB001 </s> Survey of Technologies for Web Application Development <s> Model-Driven Development of Web Applications <s> The exponential growth and capillar diffusion of the Web are nurturing a novel generation of applications, characterized by a direct business-to-customer relationship. The development of such applications is a hybrid between traditional IS development and Hypermedia authoring, and challenges the existing tools and approaches for software production. This paper investigates the current situation of Web development tools, both in the commercial and research fields, by identifying and characterizing different categories of solutions, evaluating their adequacy to the requirements of Web application development, enlightening open problems, and exposing possible future trends. 
<s> BIB002 </s> Survey of Technologies for Web Application Development <s> Model-Driven Development of Web Applications <s> The paper discusses the issue of views in the Web context. We introduce a set of languages for managing and restructuring data coming from the World Wide Web. We present a specific data model, called the ARANEUS Data Model, inspired to the structures typically present in Web sites. The model allows us to describe the scheme of a Web hypertext, in the spirit of databases. Based on the data model, we develop two languages to support a sophisticate view definition process: the first, called ULIXES, is used to build database views of the Web, which can then be analyzed and integrated using database techniques; the second, called PENELOPE, allows the definition of derived Web hypertexts from relational views. This can be used to generate hypertextual views over the Web. <s> BIB003
|
The defining feature of model-driven development is automatic code generation of deployable applications from high-level feature specifications. Model-driven development technologies for Web applications aim to simplify the development process by generating deployable sites from presentational, behavioral, and navigational requirements specified in models. In the tradition of prior work in automatic programming and CASE, which were not completely successful, model-driven development technologies aim to reduce the dependency on low-level programming by raising the abstraction model to a higher level. Adaptation to Web development required the creation of new kinds of models, methods, and techniques that better match the unique properties of Web applications. Initial Progress. Araneus BIB003 and Strudel BIB001 are representative of initial research progress in adapting model-driven techniques for Web development. These systems utilize data models to manage collections of generated content derived using database metadata and queries. BIB002 surveyed current work in the area as of 1999, including AutoWeb, RMM, OOHDM, and the Oracle Web Development Suite, each of which applied proprietary database, navigation, behavior, and presentation modeling approaches to Web development. WebML and its commercial successor, WebRatio, use proprietary hypertext, data, and presentation models to comprehensively extend the prior work by generating code for an abstract framework that maps to platform-specific MVC implementations at runtime. Other products, including CodeCharge, CodeSmith, DeKlarit, and Fabrique, emphasize GUI-based maintenance of detailed models that facilitate generative programming of the presentation tier for Web applications, at varying degrees of rigor. While these initial approaches were workable, widespread usage of the technologies was limited by the reliance on proprietary modeling languages. Model-driven Architecture. 
The Model Driven Architecture (MDA) standard from OMG chose UML as the primary modeling language for model-driven development. UML was formally extended to be computationally complete in order to support the level of detail needed to generate applications of arbitrary complexity from specifications. The MDA standards are the product of an industry-wide effort to raise the abstraction level of business systems development. The Meta-Object Facility (MOF) is a set of standardized interfaces, including the XML Metadata Interchange format (XMI), that provide the basis for the specification models required for MDA. A Platform Independent Model (PIM) specifies application features generically; rule-driven translators convert a PIM into Platform Specific Models (PSM) that reflect the unique properties of disparate platforms. A PSM can be either directly interpreted or further processed to generate a deployable system. Web applications are supported by MDA tools as another implementation platform to target for code generation. Large vendors are backing the MDA standards with compliant toolsets, including Oracle ADF and IBM/Rational Rapid Developer, that comprehensively support Web application and Web service development. While MDA has the potential to shield developers from the implementation complexities inherent in Web applications and to improve the process, the MDA processes represent a major paradigm shift for organizations, and widespread diffusion of the technologies, if it occurs, will be incremental.
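The PIM-to-code pipeline described above can be illustrated with a deliberately tiny translator: a platform-independent model (an entity name plus typed fields) is fed through a generation rule that emits platform-specific source text. Real MDA tools work from UML/XMI models and rule sets far richer than this; everything here is a simplified, hypothetical stand-in.

```java
// Toy model-to-code translator: the Map plays the role of a PIM fragment,
// and generateJavaBean() plays the role of a rule-driven PIM-to-PSM-to-code
// transformation targeting plain Java.
import java.util.LinkedHashMap;
import java.util.Map;

public class MdaSketch {
    public static String generateJavaBean(String entity, Map<String, String> fields) {
        StringBuilder sb = new StringBuilder("public class " + entity + " {\n");
        for (Map.Entry<String, String> f : fields.entrySet()) {
            sb.append("    private ").append(f.getValue())
              .append(" ").append(f.getKey()).append(";\n");
        }
        sb.append("}\n");
        return sb.toString();
    }

    public static void main(String[] args) {
        Map<String, String> fields = new LinkedHashMap<>();
        fields.put("id", "long");
        fields.put("name", "String");
        System.out.println(generateJavaBean("Customer", fields));
    }
}
```

Swapping the generation method while keeping the model fixed is the essence of the PIM/PSM split: the same model could just as well drive a rule that emits C# or SQL DDL.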
|
A Survey on Models and Query Languages for Temporally Annotated RDF <s> A. Approaches that translate to RDF <s> The Semantic Web consists of many RDF graphs nameable by URIs. This paper extends the syntax and semantics of RDF to cover such named graphs. This enables RDF statements that describe graphs, which is beneficial in many Semantic Web application areas. Named graphs are given an abstract syntax, a formal semantics, an XML syntax, and a syntax based on N3. SPARQL is a query language applicable to named graphs. A specific application area discussed in detail is that of describing provenance information. This paper provides a formally defined framework suited to being a foundation for the Semantic Web trust layer. <s> BIB001 </s> A Survey on Models and Query Languages for Temporally Annotated RDF <s> A. Approaches that translate to RDF <s> One of the fundamental challenges facing the unprecedented data deluge produced by the sensor networks is how to manage time-series streaming data so that they can be reasoning-ready and provenance-aware. Semantic web technology shows great promise but lacks adequate support for the notion of time. We present a system for the representation, indexing and querying of time-series data, especially streaming data, using the semantic web approach. This system incorporates a special RDF vocabulary and a semantic interpretation for time relationships. The resulting framework, which we refer to as Time-Annotated RDF, provides basic functionality for the representation and querying of time-related data. The capabilities of Time-Annotated RDF were implemented as a suite of Java APIs on top of Tupelo, a semantic content management middleware, to provide transparent integration among heterogeneous data, as present in streams and other data sources, and their metadata. We show how this system supports commonly used time-related queries using Time-Annotated SPARQL introduced in this paper as well as an analysis of the TA-RDF data model. 
Such prototype system has already seen successful usage in a virtual sensor project where near-real-time radar data streams need to be fetched, indexed, processed and re-published as new virtual sensor streams. <s> BIB002
|
In , instead of having RDF triples associated with their validity temporal interval, named graphs BIB001 are used both for saving space and for querying the temporal RDF database using standard SPARQL. In particular, each created named graph g is associated with a temporal interval i, and all RDF triples whose validity interval is i become members of g (in this process blank nodes are replaced by URIs). Temporal relationships between named graphs, such as time:intervalOverlaps, are derived from a temporal reasoning system. Additionally, the authors propose an index structure for time intervals, called the keyTree index, assuming that triples within named graphs have indices by themselves. The proposed index improves the performance of time-point queries over an in-memory ordered list that contains the intervals' start and end times. Experimental results are provided. In BIB002 , the time-annotated RDF framework is proposed for the representation and management of time-series streaming data. In particular, a TA-RDF graph is a set of triples <s[t_s], p[t_p], o[t_o]>, where <s,p,o> is an RDF triple and t_s, t_p, and t_o are time points. In other words, a TA-RDF graph relates streams at certain points in time. To translate a TA-RDF graph into a regular RDF graph, a data stream vocabulary is used, where (i) dvs:belongsTo is a property that indicates that a resource is a frame in a stream, (ii) dvs:hasTimestamp is a property indicating the timestamp of a frame, and (iii) dvs:Nil is a resource corresponding to the Nil timestamp. An RDF graph G is the translation of a TA-RDF graph G_TA iff (B is the set of blank nodes): A query language for time-annotated RDF, called TA-SPARQL, is proposed, which has a formal translation into normal SPARQL. For example, a TA-SPARQL query requesting the temperature in Chicago at sunrise of
A Survey on Models and Query Languages for Temporally Annotated RDF

B. Other approaches
In BIB001 , an N-dimensional time domain has the form T = T_1 × … × T_N, where each T_i is a set of intervals. A multi-temporal RDF triple is defined as (s,p,o | T), where <s,p,o> is an RDF triple and T is a subset of the time domain T. Note that, since T is a set, some compression is achieved in the storage of multi-temporal RDF triples. As a query language, the authors propose T-SPARQL, an extension of SPARQL that incorporates many features of TSQL2 (a query language designed for temporal relational databases). As in TSQL2, if T is a multi-dimensional time element, the expressions VALID(T) and TRANSACTION(T) can be used to express conditions on the valid-time and transaction-time components of T.

In , an uncertain temporal knowledge base is a pair KB = <F, C>, where F is a set of weighted temporal RDF triples and C is a set of first-order temporal consistency constraints. In particular, a fact in F has the form p(s,o)[i], i.e., a temporal triple p(s,o) annotated with a validity interval i, while a query Q consists of triples p(s,o) in which s and o can be variables. To answer a query Q, all matches from the KB are collected into a set F_Q. Then, all facts possibly conflicting with them are also added to F_Q. To resolve the conflicts, a consistent subset F_{Q,C} of F_Q is selected such that the sum of the weights of the facts in F_{Q,C} is maximized. Then, the matches to Q within F_{Q,C} are returned as the answer to the query. The query answering problem is shown to be NP-hard. A scheduling algorithm for query answering is provided, as well as an efficient approximation algorithm with polynomial performance. Experimental results show the efficiency of the proposed approach.

In BIB003 , the authors extend RDF with temporal features and evolution operators. In addition, in contrast to the rest of the reviewed works, they associate concepts with their lifespan. In particular, an evolution base Σ is a set of RDF triples together with a mapping τ from the set of considered RDF triples and considered resources to the set of temporal intervals.
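The compression gained by annotating a triple with a *set* of intervals, rather than repeating the triple once per interval, comes down to coalescing overlapping intervals. A minimal sketch (assuming integer time points, so touching intervals such as [2001,2003] and [2003,2006] also merge):

```python
def coalesce(intervals):
    """Merge overlapping or adjacent (start, end) intervals so that a
    multi-temporal triple (s, p, o | T) stores a minimal set T."""
    merged = []
    for start, end in sorted(intervals):
        # +1 treats adjacent integer intervals, e.g. [1,2] and [3,4], as one run
        if merged and start <= merged[-1][1] + 1:
            merged[-1] = (merged[-1][0], max(merged[-1][1], end))
        else:
            merged.append((start, end))
    return merged
```

For example, `coalesce([(2005, 2007), (2001, 2003), (2003, 2006)])` yields the single interval `[(2001, 2007)]`, while disjoint intervals are kept apart.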
In addition, Σ may contain statements of the form (c, term, c'), where term is one of the special evolution properties becomes, join, split, merge, and detach. To support evolution-aware querying, the authors define a navigational query language for traversing temporal and evolution edges in an evolution graph. This language is analogous to nSPARQL BIB002 , a language that extends SPARQL with navigational capabilities based on nested regular expressions. nSPARQL uses four different axes, namely self, next, edge, and node, for navigation on an RDF graph and for node label testing. The authors extend the nested regular expression constructs of nSPARQL with temporal semantics and with a set of five evolution axes, namely join, split, merge, detach, and becomes, which extend the traversing capabilities of nSPARQL to the evolution edges. The extended query language is formally defined. An example query expressible in this language is "who was the head of the German government before and after the unification of 1990". No implementation results of this theory are provided.
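The core of such an evolution axis is reachability over labelled evolution edges. The following toy traversal (the edge data, drawn loosely from the German-unification example, is purely illustrative) sketches the idea:

```python
from collections import deque

def evolution_reachable(edges, start, labels):
    """All concepts reachable from `start` by following only evolution edges
    whose label is in `labels` -- a crude analogue of an evolution-axis path."""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for src, label, dst in edges:
            if src == node and label in labels and dst not in seen:
                seen.add(dst)
                queue.append(dst)
    return seen - {start}

# Illustrative evolution base: FRG and GDR merge into Germany, which evolves.
edges = [("ex:FRG", "merge", "ex:Germany"),
         ("ex:GDR", "merge", "ex:Germany"),
         ("ex:Germany", "becomes", "ex:Germany2")]
```

With these edges, `evolution_reachable(edges, "ex:FRG", {"merge", "becomes"})` returns `{"ex:Germany", "ex:Germany2"}`: everything the concept evolved into along the chosen axes.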
In BIB004 , it is proved that G |= τ G' iff there is a mapping v such that v(G') is a subgraph of scl(G).
The authors extend their theory to also support anonymous timestamps. A query is defined as a pair (H, B ∪ A), where H and B are temporal RDF graphs without blank nodes and with some elements replaced by variables, and A is a set of the usual arithmetic built-in predicates over time point variables and time points. All variables appearing in H should also appear in B. For deriving maximal validity intervals, a special structure is used. For example, a query can ask for the service providers that have had web services for more than 4 consecutive years.

In , the authors extend the work in BIB004 and define a temporal graph as a set of temporal triples of the form (s,p,o):i, where (s,p,o) is an RDF triple and i is a temporal interval or a temporal interval variable. A temporal constraint is an expression of the form i ω i', where i and i' are temporal intervals or temporal interval variables and ω is one of the relationships of Allen's temporal interval algebra BIB001 . A temporal graph with temporal constraints (called a c-temporal graph) is a pair C = (G, Σ), where G is a temporal graph and Σ is a set of temporal constraints over the intervals of G. The authors define entailment between two c-temporal graphs C, C' as follows: C |= τ(const) C' iff for each time ground instance v(C) of C, there is a time ground instance v'(C') of C' such that v(C) |= τ v'(C'). The authors define the c-slice closure of C, denoted cscl(C), extending the definition of the slice closure of BIB004 . It is proved that C |= τ(const) C' iff there is an interval map γ from C' to C and a mapping v such that v(γ(C')) is a subgraph of cscl(C). Entailment between two c-temporal graphs is shown to be NP-complete. No query language or implementation is provided.

In BIB005 , , BIB010 , the authors consider an extension of RDFS with spatial and temporal information. Here, we consider only the extension with temporal information.
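Since the constraints above range over the relationships of Allen's interval algebra, a compact classifier for closed integer intervals may make them concrete. The relation names are the standard thirteen Allen relations, not notation from the surveyed paper:

```python
def allen_relation(i, j):
    """Return the Allen relation holding between closed intervals
    i = (s1, e1) and j = (s2, e2), assuming s <= e for both."""
    (s1, e1), (s2, e2) = i, j
    if e1 < s2:
        return "before"
    if e2 < s1:
        return "after"
    if e1 == s2:
        return "meets"
    if e2 == s1:
        return "met-by"
    if s1 == s2 and e1 == e2:
        return "equal"
    if s1 == s2:
        return "starts" if e1 < e2 else "started-by"
    if e1 == e2:
        return "finishes" if s1 > s2 else "finished-by"
    if s2 < s1 and e1 < e2:
        return "during"
    if s1 < s2 and e2 < e1:
        return "contains"
    # remaining cases: proper overlap on one side or the other
    return "overlaps" if s1 < s2 else "overlapped-by"
```

For instance, `allen_relation((1, 5), (3, 9))` is `"overlaps"` and `allen_relation((2, 4), (1, 9))` is `"during"`; exactly one relation holds for any pair of intervals.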
In BIB011 , , the authors extend the RDFS and ter Horst entailment rules BIB002 (which extend RDFS with terms from the OWL vocabulary BIB007 ) with additional temporal arguments on the triples. The proposed extension has been implemented using the forward chaining engine HFC BIB012 , which supports arbitrary tuples, user-defined tests, and actions. Some experimental results are provided. However, no query language is provided.

In BIB013 , a general framework for representing, reasoning with, and querying annotated RDFS data is presented. The authors show how their unified reasoning framework can be instantiated for the temporal, fuzzy, and provenance domains. Here, we are concerned with the temporal instantiation. Define ⊥ = {{}} and ⊤ = {[-∞,+∞]}, and let L = {t | t is a finite set of disjoint temporal intervals} ∪ {⊥, ⊤}. On L, the authors define a partial order, together with two operations + and ×, where + corresponds to the union of two sets of intervals (with overlapping intervals coalesced) and × to their pairwise intersection. An annotated RDFS graph G is a set of temporal triples (s,p,o):t, where (s,p,o) is an RDF triple and t ∈ L. The models of G are formally defined by extending ρRDF semantics, where ρRDF BIB006 is a subset of RDFS keeping its essential features. The authors present a set of sound and complete inference rules over such annotated triples.

In a further approach, the evaluation of a TGP (temporal graph pattern) query w.r.t. a temporal graph G and an entailment relation X is formally defined using multi-sorted first-order logic. Yet, evaluation of a TGP using this definition can be inefficient. Therefore, the authors describe an optimization. Assume that the entailment relation X is characterized by a set of definite rules of the form A_1,..,A_n → B. Then, temporalized versions of these rules, involving time point variables x_i and y_i, are applied until a fixpoint is reached. Based on the result, derived RDF triples are associated with their maximal validity intervals, and with these maximal intervals the evaluation of a TGP query is efficiently defined.
Though the authors state that they have implemented their framework using the PostgreSQL database system, no implementation results are provided.
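The annotation operations and the rule-to-fixpoint idea described above can be sketched as follows, using finite sets of closed integer intervals as the domain (an illustrative simplification; identifiers are hypothetical). `plus` and `times` play the roles of + and ×, and `closure` saturates the single temporal subClassOf rule:

```python
def plus(t1, t2):
    """'+': union of two interval sets, overlapping/touching intervals coalesced."""
    merged = []
    for s, e in sorted(t1 + t2):
        if merged and s <= merged[-1][1]:
            merged[-1] = (merged[-1][0], max(merged[-1][1], e))
        else:
            merged.append((s, e))
    return merged

def times(t1, t2):
    """'×': pairwise intersection of the intervals of the two sets."""
    out = [(max(s1, s2), min(e1, e2))
           for s1, e1 in t1 for s2, e2 in t2
           if max(s1, s2) <= min(e1, e2)]
    return plus(out, [])  # normalise the result

def closure(sc_edges):
    """Fixpoint for the temporal subClassOf rule:
    (a, sc, b):t1 and (b, sc, c):t2 entail (a, sc, c):t1 × t2."""
    ann = dict(sc_edges)  # (a, c) -> interval set, subClassOf edges only
    changed = True
    while changed:
        changed = False
        for (a, b), t1 in list(ann.items()):
            for (b2, c), t2 in list(ann.items()):
                if b == b2:
                    t = times(t1, t2)
                    new = plus(ann.get((a, c), []), t)
                    if t and new != ann.get((a, c)):
                        ann[(a, c)] = new
                        changed = True
    return ann

inferred = closure({("ex:Student", "ex:Person"): [(2000, 2005)],
                    ("ex:Person", "ex:Agent"): [(2003, 2010)]})
```

With the sample data, the derived edge `("ex:Student", "ex:Agent")` carries the intersection interval `[(2003, 2005)]`, exactly the kind of inference that approaches based on simple entailment miss.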
V. CONCLUSION-DISCUSSION
In this paper, we have reviewed models and query languages for temporally annotated RDF. Below, we compare these models and query languages on various aspects.

First, approaches that have their own model theory or that extend RDF simple entailment miss important inferences made by the works that extend RDFS entailment. For example, an object o may be an instance of a class c during a temporal interval i, while c is a subclass of a class c' during an interval i'. Only works that extend RDFS entailment are able to derive that o is an instance of c' during the intersection of the intervals i and i'. Among the works that extend RDFS entailment, the approach in BIB001 seems less efficient, since it computes the RDFS closure of the RDF triples at each time point. Additionally, BIB004 considers all temporal intervals that satisfy a query and then selects the maximal ones. In contrast, BIB005 and BIB006 perform query answering directly over maximal temporal intervals, achieving higher performance. In our opinion, there are cases in which the approach in BIB004 will return no answer. As a further criticism, BIB006 is not able to return maximal intervals within a temporal interval of interest.

The approaches , BIB004 , and BIB002 save some space, since they either use named graphs associated with temporal intervals or associate each RDF triple with its set of validity temporal intervals. Specialized indices for query answering are used only in and , while the rest of the approaches use conventional indices. As a final remark, can handle some temporal constraints over RDF triples, BIB001 can handle anonymous timestamps, and can handle anonymous temporal intervals satisfying Allen's temporal interval algebra relations. Temporal consistency constraints are considered only in , which, however, does not answer temporal queries but only normal queries.
As a criticism of the work in BIB003 , each RDF triple is associated with a single maximal temporal interval, while an RDF triple is normally associated with multiple maximal temporal intervals. Some of the proposed models and query languages have been implemented, as stated in the main text of the paper, and for some of them experimental results are provided. In the future, extensions of the proposed temporal RDF query languages with features of SPARQL 1.1, such as subqueries and negation, will be of great importance. For example, it will be interesting to ask for events that have not occurred simultaneously before a date and whose maximal temporal intervals always overlap after that date. Additionally, it will be interesting to ask for companies located in Crete that have exactly one manager at each point in time within a particular temporal interval of interest. Future work also concerns a survey on spatial, fuzzy, provenance, and contextual RDF. Of course, aspects of contextual RDF can be time, space, trust, and authority.
|
Human tracking over camera networks: a review <s> Introduction <s> The goal of this article is to review the state-of-the-art tracking methods, classify them into different categories, and identify new trends. Object tracking, in general, is a challenging problem. Difficulties in tracking objects can arise due to abrupt object motion, changing appearance patterns of both the object and the scene, nonrigid object structures, object-to-object and object-to-scene occlusions, and camera motion. Tracking is usually performed in the context of higher-level applications that require the location and/or shape of the object in every frame. Typically, assumptions are made to constrain the tracking problem in the context of a particular application. In this survey, we categorize the tracking methods on the basis of the object and motion representations used, provide detailed descriptions of representative methods in each category, and examine their pros and cons. Moreover, we discuss the important issues related to tracking including the use of appropriate image features, selection of motion models, and detection of objects. <s> BIB001 </s> Human tracking over camera networks: a review <s> Introduction <s> Object tracking is one of the most important components in numerous applications of computer vision. While much progress has been made in recent years with efforts on sharing code and datasets, it is of great importance to develop a library and benchmark to gauge the state of the art. After briefly reviewing recent advances of online object tracking, we carry out large scale experiments with various evaluation criteria to understand how these algorithms perform. The test image sequences are annotated with different attributes for performance evaluation and analysis. By analyzing quantitative results, we identify effective approaches for robust tracking and provide potential future research directions in this field. 
<s> BIB002 </s> Human tracking over camera networks: a review <s> Introduction <s> There is a large variety of trackers, which have been proposed in the literature during the last two decades with some mixed success. Object tracking in realistic scenarios is a difficult problem, therefore, it remains a most active area of research in computer vision. A good tracker should perform well in a large number of videos involving illumination changes, occlusion, clutter, camera motion, low contrast, specularities, and at least six more aspects. However, the performance of proposed trackers have been evaluated typically on less than ten videos, or on the special purpose datasets. In this paper, we aim to evaluate trackers systematically and experimentally on 315 video fragments covering above aspects. We selected a set of nineteen trackers to include a wide variety of algorithms often cited in literature, supplemented with trackers appearing in 2010 and 2011 for which the code was publicly available. We demonstrate that trackers can be evaluated objectively by survival curves, Kaplan Meier statistics, and Grubs testing. We find that in the evaluation practice the F-score is as effective as the object tracking accuracy (OTA) score. The analysis under a large variety of circumstances provides objective insight into the strengths and weaknesses of trackers. <s> BIB003
|
Nowadays, the growing demand for video surveillance systems in applications such as public security, transportation control, defense, military, urban planning, and business information statistics has attracted increasing attention, and a large number of networked video surveillance systems are being installed in public places, for instance, airports, subways, railway stations, highways, parking lots, banks, schools, shopping malls, and military areas. These video surveillance systems not only effectively protect the security of public facilities and citizens, but also help cities transform into smart cities, which has attracted more and more researchers and substantial funding to research on intelligent video surveillance. The main focus of current research on intelligent video surveillance lies on video object detection/tracking and video object activity analysis/recognition. Video object tracking is not only one of the most important techniques in intelligent video surveillance, but also the basis of high-level video processing and applications such as the subsequent video object activity analysis and recognition. However, among video object tracking tasks, human tracking is the most challenging, since humans may vary greatly in appearance on account of changes in illumination and viewpoint, background clutter, occlusion, non-rigid deformations, and intra-class variability in shape and pose. Human tracking includes human tracking within a camera and human tracking across multiple cameras. When a person enters the field of view (FOV) of a camera, human tracking within a camera is needed. However, when he/she leaves the FOV, the human information is no longer available; thus, the limited FOV of a single camera cannot meet the needs of wide-area human tracking. 
In order to widen the FOV, human tracking across multiple cameras has to be used, since video streams from multiple cameras cover a wider range of areas, which helps to analyze global activities in the real world. Tracking humans across multiple cameras involves two different scenarios, i.e., overlapping camera views and non-overlapping camera views. In the overlapping scenario, there is a common FOV area between two cameras' views, and a human located in the common area (as shown in the area between cameras 1 and 2 in Fig. 1 ) will appear simultaneously in both cameras' views. In the non-overlapping scenario, there is no common FOV area between two cameras' views, i.e., every camera's view is completely disjoint, and humans cannot be seen in the so-called blind area (as shown in the area between cameras 2 and 3 in Fig. 1 ). Compared with human tracking across overlapping cameras, human tracking across non-overlapping cameras is more challenging and more practical. As a result, human tracking over camera networks is necessary and quite challenging in intelligent video surveillance. Many issues make human tracking over camera networks very challenging, including real-time human tracking, tracking a variable number of humans, and changing human appearance caused by several complicated attributes such as illumination variation, occlusion, non-rigid shape deformation, background clutter, and pose variation within a camera, as well as dramatically varying human appearance due to greatly changing illuminations, viewpoints, and intra-class variability in shape and pose across non-overlapping cameras. In order to deal with the above challenges in human tracking over camera networks, numerous researchers have proposed a variety of tracking approaches. Different approaches focus on solving different issues in human tracking over camera networks. 
Typically, they attempt to answer the following questions: What should be tracked, e.g., a bounding box, ellipse, articulation block, or contour? Which visual features are robust and suitable for various human tracking tasks, and what are their pros and cons? Which kinds of statistical learning approaches, and which of their properties, are appropriate for human tracking? Although there are some well-known surveys BIB001 BIB003 BIB002 on object tracking, existing surveys mainly focus on object tracking within a camera. In this survey, we focus on human tracking over camera networks. The main contributions of this survey are as follows: 1) We divide human tracking over camera networks into two inter-related modules: human tracking within a camera and human tracking across non-overlapping cameras. 2) We review the literature on human tracking within a camera based on the correlation among the human objects. Specifically, we hierarchically categorize the human tracking approaches within a camera into generative trackers and discriminative trackers. 3) We review the literature on human tracking across non-overlapping cameras from the human objects' matching viewpoint. Specifically, we hierarchically categorize human tracking across non-overlapping cameras into human re-identification (re-id), camera-link model (CLM)-based tracking, and graph model (GM)-based tracking. The rest of the paper is organized as follows: Section 2 gives an overview of the taxonomy of human tracking. Section 3 reviews some core techniques for human tracking within a camera. Section 4 reviews some core techniques for human tracking across non-overlapping cameras, followed by the Conclusions in Section 5. Figure 2 shows the taxonomy of human tracking over camera networks, which is composed of two crucial modules: human tracking within a camera and human tracking across non-overlapping cameras.
|
Human tracking over camera networks: a review <s> Human tracking within a camera <s> Abstract In this paper, we propose a model-based tracking algorithm which can extract trajectory information of a target object by detecting and tracking a moving object from a sequence of images. The algorithm constructs a model from the detected moving object and match the model with successive image frames to track the target object. We use an active model which characterizes regional and structural features of a target object such as shape, texture, color, and edgeness. Our active model can adapt itself dynamically to an image sequence so that it can track a non-rigid moving object. Such an adaptation is made under the framework of energy minimization. We design an energy function so that the function can embody structural attributes of a target as well as its spectral attributes. We applied Kalman filter to predict motion information. The predicted motion information by Kalman filter was used very efficiently to reduce the search space in the matching process. <s> BIB001 </s> Human tracking over camera networks: a review <s> Human tracking within a camera <s> One of the goals in the field of mobile robotics is the development of mobile platforms which operate in populated environments. For many tasks it is therefore highly desirable that a robot can track the positions of the humans in its surrounding. In this paper we introduce sample-based joint probabilistic data association filters as a new algorithm to track multiple moving objects. Our method applies Bayesian filtering to adapt the tracking process to the number of objects in the perceptual range of the robot. The approach has been implemented and tested on a real robot using laser-range data. We present experiments illustrating that our algorithm is able to robustly keep track of multiple people. The experiments furthermore show that the approach outperforms other techniques developed so far. 
<s> BIB002 </s> Human tracking over camera networks: a review <s> Human tracking within a camera <s> We describe a framework that explicitly reasons about data association to improve tracking performance in many difficult visual environments. A hierarchy of tracking strategies results from ascribing ambiguous or missing data to: 1) noise-like visual occurrences, 2) persistent, known scene elements (i.e., other tracked objects), or 3) persistent, unknown scene elements. First, we introduce a randomized tracking algorithm adapted from an existing probabilistic data association filter (PDAF) that is resistant to clutter and follows agile motion. The algorithm is applied to three different tracking modalities-homogeneous regions, textured regions, and snakes-and extensibly defined for straightforward inclusion of other methods. Second, we add the capacity to track multiple objects by adapting to vision a joint PDAF which oversees correspondence choices between same-modality trackers and image features. We then derive a related technique that allows mixed tracker modalities and handles object overlaps robustly. Finally, we represent complex objects as conjunctions of cues that are diverse both geometrically (e.g., parts) and qualitatively (e.g., attributes). Rigid and hinge constraints between part trackers and multiple descriptive attributes for individual parts render the whole object more distinctive, reducing susceptibility to mistracking. Results are given for diverse objects such as people, microscopic cells, and chess pieces. <s> BIB003 </s> Human tracking over camera networks: a review <s> Human tracking within a camera <s> In this paper, a new video moving object tracking method is proposed. In initialization, a moving object selected by the user is segmented and the dominant color is extracted from the segmented target. In tracking step, a motion model is constructed to set the system model of adaptive Kalman filter firstly. 
Then, the dominant color of the moving object in HSI color space will be used as feature to detect the moving object in the consecutive video frames. The detected result is fed back as the measurement of adaptive Kalman filter and the estimate parameters of adaptive Kalman filter are adjusted by occlusion ratio adaptively. The proposed method has the robust ability to track the moving object in the consecutive frames under some kinds of real-world complex situations such as the moving object disappearing totally or partially due to occlusion by other ones, fast moving object, changing lighting, changing the direction and orientation of the moving object, and changing the velocity of moving object suddenly. The proposed method is an efficient video object tracking algorithm. <s> BIB004 </s> Human tracking over camera networks: a review <s> Human tracking within a camera <s> In multiple-object tracking applications, it is essential to address the problem of associating targets and observation data. For visual tracking of multiple targets which involves objects that split and merge, a target may be associated with multiple measurements and many targets may be associated with a single measurement. The space of such data association is exponential in the number of targets and exhaustive enumeration is impractical. We pose the association problem as a bipartite graph edge covering problem given the targets and the object detection information. We propose an efficient method of maintaining multiple association hypotheses with the highest probabilities over all possible histories of associations. Our approach handles objects entering and exiting the field of view, merging and splitting objects, as well as objects that are detected as fragmented parts. Experimental results are given for tracking multiple players in a soccer game and for tracking people with complex interaction in a surveillance setting. 
It is shown through quantitative evaluation that our method tracks through varying degrees of interactions among the targets with high success rate. <s> BIB005 </s> Human tracking over camera networks: a review <s> Human tracking within a camera <s> We propose a network flow based optimization method for data association needed for multiple object tracking. The maximum-a-posteriori (MAP) data association problem is mapped into a cost-flow network with a non-overlap constraint on trajectories. The optimal data association is found by a min-cost flow algorithm in the network. The network is augmented to include an explicit occlusion model(EOM) to track with long-term inter-object occlusions. A solution to the EOM-based network is found by an iterative approach built upon the original algorithm. Initialization and termination of trajectories and potential false observations are modeled by the formulation intrinsically. The method is efficient and does not require hypotheses pruning. Performance is compared with previous results on two public pedestrian datasets to show its improvement. <s> BIB006 </s> Human tracking over camera networks: a review <s> Human tracking within a camera <s> An eigenshape kernel based mean shift tracker is proposed in this paper. In contrast with the symmetric constant kernel used in the traditional mean shift tracker, this tracker employs eigenshape to construct an arbitrarily shaped kernel that is adaptive to object shape. Therefore, background information is adaptively excluded from the target. Furthermore, the eigenshape kernels are integrated with color and gradient features, which enhance tracking robustness. Experiments demonstrate that this tracker outperforms the traditional mean shift tracker significantly especially when target shape deformation, target occlusion and background clutter occur. 
<s> BIB007 </s> Human tracking over camera networks: a review <s> Human tracking within a camera <s> Abstract Representing an object with multiple image fragments or patches for target tracking in a video has proved to be able to maintain the spatial information. The major challenges in visual tracking are effectiveness and robustness. In this paper, we propose an efficient and robust fragments-based multiple kernels tracking algorithm. Fusing the log-likelihood ratio image and morphological operation divides the object into some fragments, which can maintain the spatial information. By assigning each fragment to different weight, more robust target and candidate models are built. Applying adaptive scale selection and updating schema for the target model and the weighting factors of each fragment can improve tracking robustness. Upon these advantages, the novel tracking algorithm can provide more accurate performance and can be directly extended to a multiple object tracking system. <s> BIB008 </s> Human tracking over camera networks: a review <s> Human tracking within a camera <s> We analyze the computational problem of multi-object tracking in video sequences. We formulate the problem using a cost function that requires estimating the number of tracks, as well as their birth and death states. We show that the global solution can be obtained with a greedy algorithm that sequentially instantiates tracks using shortest path computations on a flow network. Greedy algorithms allow one to embed pre-processing steps, such as nonmax suppression, within the tracking algorithm. Furthermore, we give a near-optimal algorithm based on dynamic programming which runs in time linear in the number of objects and linear in the sequence length. Our algorithms are fast, simple, and scalable, allowing us to process dense input data. This results in state-of-the-art performance. 
<s> BIB009 </s> Human tracking over camera networks: a review <s> Human tracking within a camera <s> We present in this paper a new visual tracking framework based on the MCMC-based particle algorithm. Firstly, in order to obtain a more informative likelihood, we propose to combine the color-based observation model with a detection confidence density obtained from the Histograms of Oriented Gradients (HOG) descriptor. The MCMC-based particle algorithm is then employed to estimate the posterior distribution of the target state to solve the tracking problem. The global system has been tested on different real datasets. Experimental results demonstrate the robustness of the proposed system in several difficult scenarios. <s> BIB010 </s> Human tracking over camera networks: a review <s> Human tracking within a camera <s> Multiple object tracking is a fundamental subsystem of many higher level applications such as traffic monitoring, people counting, robotic vision and many more. This paper explains in details the methodology of building a robust hierarchical multiple hypothesis tracker for tracking multiple objects in the videos. The main novelties of our approach are anchor-based track initialization, prediction assistance for unconfirmed track and two virtual measurements for confirmed track. The system is built mainly to deal with the problems of merge, split, fragments and occlusion. The system is divided into two levels where the first level obtains the measurement input from foreground segmentation and clustered optical flow. Only K-best hypothesis and one-to-one association are considered. Two more virtual measurements are constructed to help track retention rate for the second level, which are based on predicted state and division of occluded foreground segments. Track based K-best hypothesis with multiple associations are considered for more comprehensive observation assignment. 
Histogram intersection testing is performed to limit the tracker bounding box expansion. Simulation results show that all our algorithms perform well in the surroundings mentioned above. Two performance metrics are used: multiple-object tracking accuracy (MOTA) and multiple-object tracking precision (MOTP). Our tracker has performed the best compared to the benchmark trackers in both performance evaluation metrics. The main weakness of our algorithms is the heavy processing requirement. <s> BIB011 </s> Human tracking over camera networks: a review <s> Human tracking within a camera <s> Kernel based trackers have been proven to be a promising approach for video object tracking. The use of a single kernel often suffers from occlusion since the available visual information is not sufficient for kernel usage. In order to provide more robust tracking performance, multiple inter-related kernels have thus been utilized for tracking in complicated scenarios. This paper presents an innovative method, which uses projected gradient to facilitate multiple kernels, in finding the best match during tracking under predefined constraints. The adaptive weights are applied to the kernels in order to efficiently compensate the adverse effect introduced by occlusion. An effective scheme is also incorporated to deal with the scale change issue during the object tracking. Moreover, we embed the multiple-kernel tracking into a Kalman filtering-based tracking system to enable fully automatic tracking. Several simulation results have been done to show the robustness of the proposed multiple-kernel tracking and also demonstrate that the overall system can successfully track the video objects under occlusion. 
<s> BIB012 </s> Human tracking over camera networks: a review <s> Human tracking within a camera <s> Detecting human beings accurately in a visual surveillance system is crucial for diverse application areas including abnormal event detection, human gait characterization, congestion analysis, person identification, gender classification and fall detection for elderly people. The first step of the detection process is to detect an object which is in motion. Object detection could be performed using background subtraction, optical flow and spatio-temporal filtering techniques. Once detected, a moving object could be classified as a human being using shape-based, texture-based or motion-based features. A comprehensive review with comparisons on available techniques for detecting human beings in surveillance videos is presented in this paper. The characteristics of few benchmark datasets as well as the future research directions on human detection have also been discussed. <s> BIB013 </s> Human tracking over camera networks: a review <s> Human tracking within a camera <s> This paper proposes an improved data association technique for dealing with occlusions in tracking multiple people in indoor environments. The developed technique can mitigate complex inter-target occlusions by maintaining the identity of targets during their close physical interactions. It can cope with the origin uncertainty of the multiple measurements and performs measurement to target association by automatically detecting the measurement relevance. The measurements are clustered by using the variational Bayesian method. An improved joint probabilistic data association filter (JPDAF) is proposed to associate measurements to targets with the aid of clustering process and extracting image features. A particle filter is used to track the multiple targets by exploiting the data association information. 
Both qualitative and quantitative evaluations are presented on real data sets which demonstrate that the proposed algorithm successfully tracks targets while solving complex occlusions. <s> BIB014 </s> Human tracking over camera networks: a review <s> Human tracking within a camera <s> We propose a method for global multi-target tracking that can incorporate higher-order track smoothness constraints such as constant velocity. Our problem formulation readily lends itself to path estimation in a trellis graph, but unlike previous methods, each node in our network represents a candidate pair of matching observations between consecutive frames. Extra constraints on binary flow variables in the graph result in a problem that can no longer be solved by min-cost network flow. We therefore propose an iterative solution method that relaxes these extra constraints using Lagrangian relaxation, resulting in a series of problems that ARE solvable by min-cost flow, and that progressively improve towards a high-quality solution to our original optimization problem. We present experimental results showing that our method outperforms the standard network-flow formulation as well as other recent algorithms that attempt to incorporate higher-order smoothness constraints. <s> BIB015 </s> Human tracking over camera networks: a review <s> Human tracking within a camera <s> Object tracking under occlusion is a challenging task. Although appearance-based trackers have been greatly improved in the last decade, they are still struggling with this task. Particle filter tracking has been proven to be an efficient way to overcome nonlinear situations. Unfortunately, the conventional particle filter approach encounters tracking failure during severe occlusions. 
In this paper, we propose an interactive particle filter method: by analyzing the occlusion relationship between different targets, the proposed algorithm adaptively selects different appearance models for similarity measurement and then updates the particle weights. Our method successfully resolves the mutual occlusion problem in tracking multiple pedestrians; experimental results show that even when a target is completely occluded and its trajectory is unpredictable, our algorithm is still able to achieve accurate tracking results. <s> BIB016 </s> Human tracking over camera networks: a review <s> Human tracking within a camera <s> In this paper, we propose an innovative human tracking algorithm, which efficiently integrates the deformable part model (DPM) into the multiple-kernel based tracking using a moving camera. By representing each part model of a DPM detected human as a kernel, the proposed algorithm iteratively mean-shifts the kernels (i.e., part models) based on color appearance and histogram of gradient (HOG) features. More specifically, the color appearance features, in terms of kernel histogram, are used for tracking each body part from one frame to the next, and the deformation cost provided by the DPM detector is further used to constrain the movement of each body kernel based on the HOG features. The proposed deformable multiple-kernel (DMK) tracking algorithm takes advantage of not only low computation owing to the kernel-based tracking, but also robustness of the DPM detector. Experimental results have shown the favorable performance of the proposed algorithm, which can successfully track humans using a moving camera more accurately under different scenarios. <s> BIB017 </s> Human tracking over camera networks: a review <s> Human tracking within a camera <s> This paper revisits the classical multiple hypotheses tracking (MHT) algorithm in a tracking-by-detection framework. 
The success of MHT largely depends on the ability to maintain a small list of potential hypotheses, which can be facilitated with the accurate object detectors that are currently available. We demonstrate that a classical MHT implementation from the 90's can come surprisingly close to the performance of state-of-the-art methods on standard benchmark datasets. In order to further utilize the strength of MHT in exploiting higher-order information, we introduce a method for training online appearance models for each track hypothesis. We show that appearance models can be learned efficiently via a regularized least squares framework, requiring only a few extra operations for each hypothesis branch. We obtain state-of-the-art results on popular tracking-by-detection datasets such as PETS and the recent MOT challenge. <s> BIB018 </s> Human tracking over camera networks: a review <s> Human tracking within a camera <s> This paper presents a general formulation for a minimum cost data association problem which associates data features via one-to-one, m-to-one and one-to-n links with minimum total cost of the links. A motivating example is a problem of tracking multiple interacting nanoparticles imaged on video frames, where particles can aggregate into one particle or a particle can be split into multiple particles. Many existing multitarget tracking methods are capable of tracking non-interacting targets or tracking interacting targets of restricted degrees of interactions. The proposed formulation solves a multitarget tracking problem for general degrees of inter-object interactions. The formulation is in the form of a binary integer programming problem. We propose a polynomial time solution approach that can obtain a good relaxation solution of the binary integer programming, so the approach can be applied for multitarget tracking problems of a moderate size (for hundreds of targets over tens of time frames). 
The resulting solution is always integral and obtains a better duality gap than the simple linear relaxation solution of the corresponding problem. The proposed method was validated through applications to simulated multitarget tracking problems and a real multitarget tracking problem. <s> BIB019 </s> Human tracking over camera networks: a review <s> Human tracking within a camera <s> Enhanced particle filter tracker by latent occlusion flag to handle full occlusion. Handled persistent and/or complex occlusions in RGBD sequences. Developed data-driven occlusion mask to evaluate various parts of observation. Fused multiple features from color and depth domains to gain occlusion robustness. Although appearance-based trackers have been greatly improved in the last decade, they still struggle with challenges that are not fully resolved. Of these challenges, occlusions, which can be long lasting and of a wide variety, are often ignored or only partly addressed due to the difficulty in their treatments. To address this problem, in this study, we propose an occlusion-aware particle filter framework that employs a probabilistic model with a latent variable representing an occlusion flag. The proposed framework prevents losing the target by prediction of emerging occlusions, updates the target template by shifting relevant information, expands the search area for an occluded target, and grants quick recovery of the target after occlusion. Furthermore, the algorithm employs multiple features from the color and depth domains to achieve robustness against illumination changes and clutter, so that the probabilistic framework accommodates the fusion of those features. This method was applied to the Princeton RGBD Tracking Dataset, and the performance of our method with different sets of features was compared with those of the state-of-the-art trackers. 
The results revealed that our method outperformed the existing RGB and RGBD trackers by successfully dealing with different types of occlusions. <s> BIB020 </s> Human tracking over camera networks: a review <s> Human tracking within a camera <s> In this paper, we attempt to solve the challenging task of precise and robust human tracking from a moving camera. We propose an innovative human tracking approach, which efficiently integrates the deformable part model (DPM) into multiple-kernel tracking from a moving camera. The proposed approach consists of a two-stage tracking procedure. For each frame, we first iteratively mean-shift several spatially weighted color histograms, called kernels, from the current frame to the next frame. Each kernel corresponds to a part model of a DPM-detected human. In the second step, conditioned on the tracking results of these kernels on the later frame, we then iteratively mean-shift the part models on that frame. The part models are represented by histogram of gradient (HOG) features, and the deformation cost of each part model provided by the trained DPM detector is used to constrain the movement of each detected body part from the first step. The proposed approach takes advantage of not only low computation owing to the kernel-based tracking, but also robustness of the DPM detector without the need of laborious human detection for each frame. Experimental results have shown that the proposed approach makes it possible to successfully track humans robustly with high accuracy under different scenarios from a moving camera. <s> BIB021
|
Human tracking within a camera generates the moving trajectories of human objects over time by locating their positions in each frame of a given video sequence. (Fig. 3 depicts the inter-relation between the functional modules of human tracking over camera networks.) Based on the correlation among the human objects, human tracking within a camera can be categorized into two types: generative trackers and discriminative trackers. For the generative trackers, each target's location and correspondence are estimated by iteratively updating the location obtained from the previous frame. During the iterative search for human objects, in order to avoid an exhaustive search of the new target location and reduce the computational cost, the most widely used tracking methods include Kalman filtering (KF) BIB001 BIB004 , particle filtering (PF) BIB010 BIB020 BIB016 , and kernel-based tracking (KT) BIB007 BIB012 BIB008 BIB017 BIB021 . KF expresses target movement as a dynamic process over the temporal frames, uses the previous target state to predict the next location (and possibly size), and then uses the current observation to update the target location. KF is widely applicable to linear/Gaussian real-time tracking. However, when the target state variables do not follow a linear state transition and measurement relationship with Gaussian noise distributions, the KF gives poor state estimates. Moreover, this tracking method cannot deal with the target occlusion problem. PF realizes recursive Bayesian filtering through sequential Monte Carlo sampling, based on particle representations of probability densities with associated weights. Since the PF generalizes the traditional KF and can be applied to non-linear/non-Gaussian tracking problems, it has a wider range of applications, owing to its superiority under non-linear and non-Gaussian conditions as well as its multi-modal processing ability.
However, PF has relatively high computational complexity, making real-time tracking difficult to achieve. KT adopts the mean shift search procedure (a gradient-based optimization that converges to a local optimum) to find the target candidate with the highest similarity to the target model, which is represented by a spatially weighted color histogram. KT has gained popularity for its fast convergence and low computational requirement, and thus can achieve real-time tracking. However, when a target is occluded, the conventional KT tends to lose the tracked target because of the mismatch between the target model and the target candidate. Multiple-kernel tracking (MKT) can help solve the target occlusion problem. MKT extends the conventional KT by representing the tracked target model with multiple kernels; e.g., two kernels (each expressed as an ellipse) are used to represent the upper and lower halves of the human body separately, as shown in Fig. 4. When the lower half of the human body is occluded (left of Fig. 4), the kernel histogram of the visible upper half can serve as the target model (right of Fig. 4), so that robust human tracking under occlusion can be achieved BIB012 . In order to track objects more effectively, constraints among the kernels need to be considered in MKT. The discriminative trackers, in contrast, first obtain all the human locations in each video frame through a human detection algorithm BIB013 , and then jointly establish these human objects' correspondences across frames through a target association technique. The most widely used target association techniques include joint probabilistic data association filtering (JPDAF) BIB003 BIB002 BIB014 , multiple-hypothesis tracking (MHT) BIB018 BIB005 BIB011 , and the flow network framework (FNF) BIB006 BIB009 BIB015 BIB019 .
The JPDAF computes a Bayesian estimate of the correspondence between two consecutive frames by calculating all possible target-measurement association probabilities jointly. However, JPDAF applies only to data association among a fixed number of tracked targets; otherwise the tracking accuracy degrades significantly. The MHT overcomes this limitation by maintaining all possible association hypotheses over several temporal frames and then determining the most likely target correspondences among the detected observations. More specifically, the MHT performs data association by building a tree of potential track hypotheses for each candidate target, calculating the likelihood of each track, and selecting the most likely combination of tracks as the final measurement association. However, as the number of associated objects increases, its computational cost grows exponentially. The FNF formulates the target association problem as a minimum-cost flow network problem with global optimization over all of the target trajectories. More specifically, the FNF represents the number of targets in the video/image as the amount of flow in the network, where the number of targets is unknown in advance. The goal of the FNF is to globally search for the amount of flow that produces the minimum cost. FNF can effectively achieve multi-target tracking; however, when there are a large number of associated objects, it incurs a very high computational cost. Table 1 lists the human tracking algorithms within a camera.
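To make the FNF idea concrete, the sketch below links detections across two frames so that the total link cost (Euclidean distance) is globally minimal. This is a toy instance with made-up coordinates: real FNF trackers solve a min-cost network flow over many frames with an unknown target count, whereas here the tiny matching is simply brute-forced.

```python
# Toy flow-network-style data association: link detections in frame t
# to detections in frame t+1 with globally minimal total link cost.
# Coordinates are illustrative; a real tracker would run a min-cost
# flow solver instead of enumerating all matchings.
from itertools import permutations
import math

frame_t  = [(10.0, 20.0), (40.0, 42.0), (70.0, 15.0)]   # detections at time t
frame_t1 = [(72.0, 17.0), (12.0, 22.0), (41.0, 40.0)]   # detections at time t+1

best_perm, best_cost = None, float("inf")
for perm in permutations(range(len(frame_t1))):
    total = sum(math.dist(frame_t[i], frame_t1[j]) for i, j in enumerate(perm))
    if total < best_cost:
        best_cost, best_perm = total, perm

# best_perm[i] is the index in frame t+1 matched to detection i in frame t
print(best_perm)  # (1, 2, 0): each detection links to its nearest successor
```

The global objective matters: a greedy nearest-neighbor pass can lock an early detection onto a measurement that a later detection needed, while the flow formulation (or, here, exhaustive enumeration) trades links off jointly.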
|
Human tracking over camera networks: a review <s> KF <s> Abstract In this paper, we propose a model-based tracking algorithm which can extract trajectory information of a target object by detecting and tracking a moving object from a sequence of images. The algorithm constructs a model from the detected moving object and match the model with successive image frames to track the target object. We use an active model which characterizes regional and structural features of a target object such as shape, texture, color, and edgeness. Our active model can adapt itself dynamically to an image sequence so that it can track a non-rigid moving object. Such an adaptation is made under the framework of energy minimization. We design an energy function so that the function can embody structural attributes of a target as well as its spectral attributes. We applied Kalman filter to predict motion information. The predicted motion information by Kalman filter was used very efficiently to reduce the search space in the matching process. <s> BIB001 </s> Human tracking over camera networks: a review <s> KF <s> In this paper, a new video moving object tracking method is proposed. In initialization, a moving object selected by the user is segmented and the dominant color is extracted from the segmented target. In tracking step, a motion model is constructed to set the system model of adaptive Kalman filter firstly. Then, the dominant color of the moving object in HSI color space will be used as feature to detect the moving object in the consecutive video frames. The detected result is fed back as the measurement of adaptive Kalman filter and the estimate parameters of adaptive Kalman filter are adjusted by occlusion ratio adaptively. 
The proposed method has the robust ability to track the moving object in the consecutive frames under some kinds of real-world complex situations such as the moving object disappearing totally or partially due to occlusion by other ones, fast moving object, changing lighting, changing the direction and orientation of the moving object, and changing the velocity of moving object suddenly. The proposed method is an efficient video object tracking algorithm. <s> BIB002
|
KF, which has been widely used for tracking problems, can be utilized to predict target motion information in order to reduce the search area for moving objects. Jang et al. BIB001 propose an active-model-based KF tracking algorithm to handle inter-frame changes of non-rigid human objects such as illumination changes and shape deformation. This method applies the framework of energy minimization to active models that characterize structural and regional features of a human object, such as edge, shape, color, and texture, and hence dynamically adapts to the changes of non-rigid human objects across consecutive video frames. Moreover, the proposed algorithm adopts KF to predict human objects' motion information to reduce the search space during the matching process. However, the proposed approach is not applicable to tracking human objects under occlusion. Jang et al. further propose a structural KF to handle object occlusion during human tracking. The proposed algorithm uses the relational information of an object's sub-regions to compensate for the unreliable measurements of occluded sub-regions. More specifically, the structural KF is composed of two kinds of KFs: the cell KF and the relation KF. The cell KF estimates the motion information of each sub-region of a human body, and the relation KF estimates the relative relationship between two adjacent sub-regions. The final estimate for a sub-region is obtained by combining the estimates of the involved KFs. However, it is difficult for the proposed approach to select a criterion for partitioning human objects' sub-regions, especially when tracking multiple human objects. Moreover, it requires an additional mechanism to judge each human object's degree of occlusion, resulting in a very complex human tracking system. To overcome this drawback, Weng et al.
BIB002 propose a real-time and robust human tracking algorithm for real-world conditions such as occlusion, lighting changes, and fast-moving human objects, based on an adaptive KF that allows the parameter estimates of the KF to adjust automatically. More specifically, the proposed algorithm constructs a motion model to build the system state, which is then applied in the prediction step, and uses color features in the HSI color space to detect the moving human object so as to obtain the system measurement, where the occlusion ratio is used to adaptively adjust the error covariance of the KF. Li et al. propose a multi-target (i.e., moving human/vehicle) tracking algorithm using a KF motion model, based on features including the center of mass and the tracking window of moving targets. More specifically, the proposed algorithm uses the background subtraction method to detect and extract moving objects, and the detection results are then used to determine whether targets merge or split. When targets' regions have merged, the multiple moving targets are temporarily tracked as a single target, while when the merged targets split, feature matching is used to re-establish the correspondence of the multiple targets; an example of tracking three human targets in an outdoor scene is shown in Fig. 5. In short, KF-based tracking algorithms can track objects effectively, but they are only applicable to linear/Gaussian tracking problems.
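The predict/update cycle that all of the KF trackers above share can be sketched for a constant-velocity target in one dimension. The noise covariances below are illustrative assumptions, not values from any cited paper.

```python
# Minimal constant-velocity Kalman filter for a 1-D target position,
# illustrating the predict/update cycle used by KF-based trackers.
# Q and R are assumed, illustrative noise covariances.
import numpy as np

dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition: [position, velocity]
H = np.array([[1.0, 0.0]])              # we observe position only
Q = 0.01 * np.eye(2)                    # process noise covariance (assumed)
R = np.array([[0.5]])                   # measurement noise covariance (assumed)

x = np.array([[0.0], [0.0]])            # initial state estimate
P = np.eye(2)                           # initial estimate covariance

def kf_step(x, P, z):
    # Predict: propagate state and covariance through the motion model.
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update: correct the prediction with the new measurement z.
    S = H @ P_pred @ H.T + R                 # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain
    x_new = x_pred + K @ (np.array([[z]]) - H @ x_pred)
    P_new = (np.eye(2) - K @ H) @ P_pred
    return x_new, P_new

# Track a target moving at ~1 unit/frame from noisy position readings.
for z in [1.1, 1.9, 3.2, 4.0, 5.1]:
    x, P = kf_step(x, P, z)
print(float(x[0, 0]), float(x[1, 0]))   # position near 5, velocity near 1
```

In a tracker, the predicted position `x_pred` is what shrinks the search window before matching, exactly as described for the methods above; the 2-D case simply stacks two such position/velocity pairs in the state.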
|
Human tracking over camera networks: a review <s> PF <s> We present in this paper a new visual tracking framework based on the MCMC-based particle algorithm. Firstly, in order to obtain a more informative likelihood, we propose to combine the color-based observation model with a detection confidence density obtained from the Histograms of Oriented Gradients (HOG) descriptor. The MCMC-based particle algorithm is then employed to estimate the posterior distribution of the target state to solve the tracking problem. The global system has been tested on different real datasets. Experimental results demonstrate the robustness of the proposed system in several difficult scenarios. <s> BIB001 </s> Human tracking over camera networks: a review <s> PF <s> Object tracking under occlusion sense is a challenging task. Although appearance-based trackers have been greatly improved in the last decade, they are still struggling with this task. Particle filter tracking has been proven as an efficient way which could overcome nonlinear situations. Unfortunately, conventional particle filter approach encounters tracking failure during severe occlusions. In this paper, we propose an interactive particle filter method, by analyzing the occlusion relationship between different targets, the proposed algorithm select different appearance model adaptively for similarity measurement and then update the particle weight. Our method successfully resolved mutual occlusion problem in tracking multi pedestrians, experimental results show that even target is completely occluded and its trajectory is unpredictable, our algorithm is still able to achieve accurate tracking results. 
<s> BIB002 </s> Human tracking over camera networks: a review <s> PF <s> Enhanced particle filter tracker by latent occlusion flag to handle full occlusion.Handled persistent and/or complex occlusions in RGBD sequences.Developed data-driven occlusion mask to evaluate various parts of observation.Fused multiple feature from color and depth domains to gain occlusion robustness. Although appearance-based trackers have been greatly improved in the last decade, they still struggle with challenges that are not fully resolved. Of these challenges, occlusions, which can be long lasting and of a wide variety, are often ignored or only partly addressed due to the difficulty in their treatments. To address this problem, in this study, we propose an occlusion-aware particle filter framework that employs a probabilistic model with a latent variable representing an occlusion flag. The proposed framework prevents losing the target by prediction of emerging occlusions, updates the target template by shifting relevant information, expands the search area for an occluded target, and grants quick recovery of the target after occlusion. Furthermore, the algorithm employs multiple features from the color and depth domains to achieve robustness against illumination changes and clutter, so that the probabilistic framework accommodates the fusion of those features. This method was applied to the Princeton RGBD Tracking Dataset, and the performance of our method with different sets of features was compared with those of the state-of-the-art trackers. The results revealed that our method outperformed the existing RGB and RGBD trackers by successfully dealing with different types of occlusions. <s> BIB003
|
PF, which generalizes the traditional KF, can be applied to non-linear/non-Gaussian tracking problems. The Markov chain Monte Carlo (MCMC) method, which samples from a probability distribution by constructing a Markov chain that has the desired distribution as its equilibrium distribution, is well suited to tracking problems, overcoming the limitation of the importance sampling of the original PF in high-dimensional state spaces. Cong et al. BIB001 propose a robust MCMC-based PF tracking framework, which combines a color-based observation model with a detection confidence density derived from the histogram of oriented gradients (HOG) descriptor, and adopts an MCMC-based particle algorithm to estimate the posterior distribution of the state of a human object to achieve robust human tracking. To further handle the sample impoverishment problem suffered by the conventional PF, Zhang et al. propose a swarm-intelligence-based PF tracking algorithm, where particles are first propagated through the state transition model and then cooperatively evolved according to particle swarm optimization (PSO) iterations based on the cognitive and social aspects of particle populations. The proposed algorithm regards particles as intelligent individuals, and these particles evolve by communicating and cooperating with each other. In this way, the newest observations are gradually incorporated to approximate sampling from the optimal proposal distribution, thereby overcoming the sample impoverishment problem suffered by the conventional PF. To deal with the challenging occlusion problem during human tracking, Meshgi et al. BIB003 propose an occlusion-aware particle filter framework to deal with complex and persistent occlusions. More specifically, the proposed method attaches a binary occlusion flag to each particle and treats occlusions in a probabilistic manner.
The "occlusion flag" signals whether the corresponding bounding box is occluded; it triggers a stochastic mechanism that enlarges the object's search area to accommodate possible trajectory changes during occlusion, and meanwhile stops the template updating to prevent the model from being corrupted by irrelevant data. Yang et al. BIB002 propose an interactive PF with occlusion handling by feature matching for multi-person tracking. More specifically, they build an RGB color-space model of each human object obtained by the human detection operation, and then run a PF on each human object. Further, the proposed algorithm adopts a particle-location conflict set to judge the occlusion relationship between different human objects, and adaptively chooses the right appearance model for similarity measurement to update the corresponding particle weights, thus successfully resolving full mutual occlusion when tracking multiple pedestrians; an example of tracking multiple human targets in an outdoor scene is shown in Fig. 6. In short, PF-based tracking algorithms can effectively track moving human objects and are applicable to both linear/Gaussian and non-linear/non-Gaussian tracking problems. However, they require a large number of particles to approximate the posterior probability distribution of the target state, and hence are ill-suited to real-time object tracking.
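The predict/weight/resample loop underlying all of the PF trackers above can be sketched for a 1-D target. Particle count and noise levels are illustrative assumptions; the resampling step shown is exactly where the sample impoverishment addressed by the PSO variant arises.

```python
# Bootstrap particle filter sketch for 1-D tracking: predict particles
# through the motion model, weight them by the observation likelihood,
# and resample. All noise parameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
N = 500                                  # number of particles
particles = rng.normal(0.0, 1.0, N)      # initial particle set
true_pos = 0.0

for _ in range(20):
    true_pos += 1.0                                   # target moves 1 unit/frame
    z = true_pos + rng.normal(0.0, 0.5)               # noisy observation
    particles += 1.0 + rng.normal(0.0, 0.3, N)        # predict via motion model
    w = np.exp(-0.5 * ((z - particles) / 0.5) ** 2)   # weight by Gaussian likelihood
    w /= w.sum()
    idx = rng.choice(N, N, p=w)                       # resample: duplicates high-weight
    particles = particles[idx]                        # particles, drops low-weight ones

estimate = particles.mean()
print(estimate)  # close to the true position (20)
```

The likelihood line is where a real tracker plugs in its appearance model (color histogram, HOG confidence, or a fused color/depth score, as in the works above); the rest of the loop is unchanged.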
|
Human tracking over camera networks: a review <s> KT <s> We study the question of feature sets for robust visual object recognition; adopting linear SVM based human detection as a test case. After reviewing existing edge and gradient based descriptors, we show experimentally that grids of histograms of oriented gradient (HOG) descriptors significantly outperform existing feature sets for human detection. We study the influence of each stage of the computation on performance, concluding that fine-scale gradients, fine orientation binning, relatively coarse spatial binning, and high-quality local contrast normalization in overlapping descriptor blocks are all important for good results. The new approach gives near-perfect separation on the original MIT pedestrian database, so we introduce a more challenging dataset containing over 1800 annotated human images with a large range of pose variations and backgrounds. <s> BIB001 </s> Human tracking over camera networks: a review <s> KT <s> We describe an object detection system based on mixtures of multiscale deformable part models. Our system is able to represent highly variable object classes and achieves state-of-the-art results in the PASCAL object detection challenges. While deformable part models have become quite popular, their value had not been demonstrated on difficult benchmarks such as the PASCAL data sets. Our system relies on new methods for discriminative training with partially labeled data. We combine a margin-sensitive approach for data-mining hard negative examples with a formalism we call latent SVM. A latent SVM is a reformulation of MI--SVM in terms of latent variables. A latent SVM is semiconvex, and the training problem becomes convex once latent information is specified for the positive examples. This leads to an iterative training algorithm that alternates between fixing latent values for positive examples and optimizing the latent SVM objective function. 
<s> BIB002 </s> Human tracking over camera networks: a review <s> KT <s> An eigenshape kernel based mean shift tracker is proposed in this paper. In contrast with the symmetric constant kernel used in the traditional mean shift tracker, this tracker employs eigenshape to construct an arbitrarily shaped kernel that is adaptive to object shape. Therefore, background information is adaptively excluded from the target. Furthermore, the eigenshape kernels are integrated with color and gradient features, which enhance tracking robustness. Experiments demonstrate that this tracker outperforms the traditional mean shift tracker significantly especially when target shape deformation, target occlusion and background clutter occur. <s> BIB003 </s> Human tracking over camera networks: a review <s> KT <s> Abstract Representing an object with multiple image fragments or patches for target tracking in a video has proved to be able to maintain the spatial information. The major challenges in visual tracking are effectiveness and robustness. In this paper, we propose an efficient and robust fragments-based multiple kernels tracking algorithm. Fusing the log-likelihood ratio image and morphological operation divides the object into some fragments, which can maintain the spatial information. By assigning each fragment to different weight, more robust target and candidate models are built. Applying adaptive scale selection and updating schema for the target model and the weighting factors of each fragment can improve tracking robustness. Upon these advantages, the novel tracking algorithm can provide more accurate performance and can be directly extended to a multiple object tracking system. <s> BIB004 </s> Human tracking over camera networks: a review <s> KT <s> Kernel based trackers have been proven to be a promising approach for video object tracking. The use of a single kernel often suffers from occlusion since the available visual information is not sufficient for kernel usage. 
In order to provide more robust tracking performance, multiple inter-related kernels have thus been utilized for tracking in complicated scenarios. This paper presents an innovative method, which uses projected gradient to facilitate multiple kernels, in finding the best match during tracking under predefined constraints. The adaptive weights are applied to the kernels in order to efficiently compensate the adverse effect introduced by occlusion. An effective scheme is also incorporated to deal with the scale change issue during the object tracking. Moreover, we embed the multiple-kernel tracking into a Kalman filtering-based tracking system to enable fully automatic tracking. Several simulation results have been done to show the robustness of the proposed multiple-kernel tracking and also demonstrate that the overall system can successfully track the video objects under occlusion. <s> BIB005 </s> Human tracking over camera networks: a review <s> KT <s> In this paper, we propose an innovative human tracking algorithm, which efficiently integrates the deformable part model (DPM) into the multiple-kernel based tracking using a moving camera. By representing each part model of a DPM detected human as a kernel, the proposed algorithm iteratively mean-shift the kernels (i.e., part models) based on color appearance and histogram of gradient (HOG) features. More specifically, the color appearance features, in terms of kernel histogram, are used for tracking each body part from one frame to the next, the deformation cost provided by DPM detector is further used to constrain the movement of each body kernel based on the HOG features. The proposed deformable multiple-kernel (DMK) tracking algorithm takes advantage of not only low computation owing to the kernelbased tracking, but also robustness of the DPM detector. 
Experimental results have shown the favorable performance of the proposed algorithm, which can successfully track human using a moving camera more accurately under different scenarios. <s> BIB006 </s> Human tracking over camera networks: a review <s> KT <s> Object tracking under occlusion sense is a challenging task. Although appearance-based trackers have been greatly improved in the last decade, they are still struggling with this task. Particle filter tracking has been proven as an efficient way which could overcome nonlinear situations. Unfortunately, conventional particle filter approach encounters tracking failure during severe occlusions. In this paper, we propose an interactive particle filter method, by analyzing the occlusion relationship between different targets, the proposed algorithm select different appearance model adaptively for similarity measurement and then update the particle weight. Our method successfully resolved mutual occlusion problem in tracking multi pedestrians, experimental results show that even target is completely occluded and its trajectory is unpredictable, our algorithm is still able to achieve accurate tracking results. <s> BIB007 </s> Human tracking over camera networks: a review <s> KT <s> In this paper, we attempt to solve the challenging task of precise and robust human tracking from a moving camera. We propose an innovative human tracking approach, which efficiently integrates the deformable part model (DPM) into multiple-kernel tracking from a moving camera. The proposed approach consists of a two-stage tracking procedure. For each frame, we first iteratively mean-shift several spatially weighted color histograms, called kernels, from the current frame to the next frame. Each kernel corresponds to a part model of a DPM-detected human. In the second step, conditioned on the tracking results of these kernels on the later frame, we then iteratively mean-shift the part models on that frame. 
The part models are represented by histogram of gradient (HOG) features, and the deformation cost of each part model provided by the trained DPM detector is used to constrain the movement of each detected body part from the first step. The proposed approach takes advantage of not only low computation owing to the kernel-based tracking, but also robustness of the DPM detector without the need of laborious human detection for each frame. Experimental results have shown that the proposed approach makes it possible to successfully track humans robustly with high accuracy under different scenarios from a moving camera. <s> BIB008
|
KT has been widely used for real-time target tracking problems. During tracking, when a target moves toward or away from the camera, the scale of the target often changes over the temporal frames. To overcome this problem, taking advantage of an asymmetric kernel template, Liu et al. BIB003 propose an eigenshape-kernel-based mean shift tracking algorithm to handle the scale changes of tracked objects. The so-called eigenshape kernel is an adaptively changing kernel shape obtained by projecting each tracking window into an eigenshape space. The proposed algorithm utilizes the eigenshape representation, obtained via principal component analysis, to construct an arbitrarily shaped kernel that adapts to the object's shape. Exploiting the positive correlation between the target size and the corresponding kernel bandwidth, Chu et al. BIB005 adopt the gradient of the density estimator with respect to the kernel bandwidth to update the scale of tracked objects. The proposed scale-updating method is a simple and effective solution to the target scale change issue. In addition, a target often suffers from occlusion during tracking, especially in crowded scenes; it is then very difficult for KT to track the target robustly, since a single kernel is insufficient to represent the target. To overcome this drawback, MKT has been proposed in recent years BIB005 BIB004 BIB006 BIB008 . Fang et al. BIB004 propose a fragments-based MKT to deal with the occlusion issue. The tracked target is divided into several fragments by integrating the log-likelihood ratio image and morphological operations, and each fragment is tracked through a kernel using the mean shift procedure. Further, to make the best use of the inter-relationship among kernels, which can provide useful information for tracking, Chu et al.
BIB005 propose an adaptive MKT based on a projected-gradient optimization algorithm, which combines the total cost function with constraint functions defining the inter-relationship among kernels, and hence enables multiple kernels representing different human body parts to find the best match of the tracked human objects under predefined geometric constraints. However, arbitrary kernel partitioning makes it difficult to define effective geometric constraints among kernels. To better deal with this issue and further improve robustness and effectiveness under occlusion, Hou et al. BIB006 BIB008 propose a deformable multiple-kernel-based human tracking system using a moving camera. This system regards each part model of a deformable part model (DPM) detected human BIB002 as a kernel, where the DPM represents a human object by a so-called star model, composed of a coarse root filter and several higher-resolution part filters, as shown in Fig. 7, and adopts the deformation cost provided by the DPM detector to restrict the displacement of kernels during tracking. (Fig. 6 illustrates tracking multiple human targets in an outdoor scene: a human targets tracked with different colored rectangular bounding boxes; b two human targets tracked successfully under full mutual occlusion with red/green bounding boxes; c two human targets split correctly after full mutual occlusion with red/green bounding boxes BIB007 .) Moreover, the proposed algorithm iteratively shifts the kernels based on the kernel histogram (i.e., a spatially weighted color histogram) and the histogram of oriented gradients (HOG) BIB001 in each video frame, and hence enables a robust and efficient human tracking solution with no training required. In short, KT can achieve effective, robust, and real-time human tracking given a well-chosen kernel function and a sufficient human object representation.
However, when a pedestrian moves too fast or is totally occluded for a long time, the KT tends to lose the tracked human target.
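The mean shift iteration at the heart of KT can be sketched as follows: given a back-projection map (the per-pixel likelihood that a pixel matches the target's color model), the window center repeatedly moves to the weighted centroid of the window until it converges. The synthetic Gaussian blob below is a stand-in for a real histogram back-projection, and the window size is an arbitrary choice.

```python
# Mean shift sketch: the tracking window climbs a back-projection map
# (here a synthetic Gaussian blob) toward a local density maximum by
# repeatedly moving its center to the window's weighted centroid.
import numpy as np

H, W = 100, 100
yy, xx = np.mgrid[0:H, 0:W]
target = (60.0, 70.0)                    # true target center (synthetic)
backproj = np.exp(-((yy - target[0]) ** 2 + (xx - target[1]) ** 2) / 200.0)

def mean_shift(backproj, center, half=15, iters=20, eps=0.5):
    cy, cx = center
    for _ in range(iters):
        # Clip the window to the image, then take its weighted centroid.
        y0, y1 = max(int(cy) - half, 0), min(int(cy) + half + 1, backproj.shape[0])
        x0, x1 = max(int(cx) - half, 0), min(int(cx) + half + 1, backproj.shape[1])
        win = backproj[y0:y1, x0:x1]
        wy, wx = np.mgrid[y0:y1, x0:x1]
        m = win.sum()
        ny, nx = (win * wy).sum() / m, (win * wx).sum() / m
        if (ny - cy) ** 2 + (nx - cx) ** 2 < eps ** 2:   # converged
            return ny, nx
        cy, cx = ny, nx
    return cy, cx

cy, cx = mean_shift(backproj, center=(45.0, 55.0))
print(round(cy), round(cx))  # near the true center (60, 70)
```

MKT runs one such loop per kernel (e.g., per body part) and couples them through geometric or deformation-cost constraints, which is what keeps the visible kernels anchored when one part is occluded.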
|
Human tracking over camera networks: a review <s> JPDAF <s> One of the goals in the field of mobile robotics is the development of mobile platforms which operate in populated environments. For many tasks it is therefore highly desirable that a robot can track the positions of the humans in its surrounding. In this paper we introduce sample-based joint probabilistic data association filters as a new algorithm to track multiple moving objects. Our method applies Bayesian filtering to adapt the tracking process to the number of objects in the perceptual range of the robot. The approach has been implemented and tested on a real robot using laser-range data. We present experiments illustrating that our algorithm is able to robustly keep track of multiple people. The experiments furthermore show that the approach outperforms other techniques developed so far. <s> BIB001 </s> Human tracking over camera networks: a review <s> JPDAF <s> We describe a framework that explicitly reasons about data association to improve tracking performance in many difficult visual environments. A hierarchy of tracking strategies results from ascribing ambiguous or missing data to: 1) noise-like visual occurrences, 2) persistent, known scene elements (i.e., other tracked objects), or 3) persistent, unknown scene elements. First, we introduce a randomized tracking algorithm adapted from an existing probabilistic data association filter (PDAF) that is resistant to clutter and follows agile motion. The algorithm is applied to three different tracking modalities-homogeneous regions, textured regions, and snakes-and extensibly defined for straightforward inclusion of other methods. Second, we add the capacity to track multiple objects by adapting to vision a joint PDAF which oversees correspondence choices between same-modality trackers and image features. We then derive a related technique that allows mixed tracker modalities and handles object overlaps robustly. 
Finally, we represent complex objects as conjunctions of cues that are diverse both geometrically (e.g., parts) and qualitatively (e.g., attributes). Rigid and hinge constraints between part trackers and multiple descriptive attributes for individual parts render the whole object more distinctive, reducing susceptibility to mistracking. Results are given for diverse objects such as people, microscopic cells, and chess pieces. <s> BIB002 </s> Human tracking over camera networks: a review <s> JPDAF <s> This paper proposes an improved data association technique for dealing with occlusions in tracking multiple people in indoor environments. The developed technique can mitigate complex inter-target occlusions by maintaining the identity of targets during their close physical interactions. It can cope with the origin uncertainty of the multiple measurements and performs measurement to target association by automatically detecting the measurement relevance. The measurements are clustered by using the variational Bayesian method. An improved joint probabilistic data association filter (JPDAF) is proposed to associate measurements to targets with the aid of clustering process and extracting image features. A particle filter is used to track the multiple targets by exploiting the data association information. Both qualitative and quantitative evaluations are presented on real data sets which demonstrate that the proposed algorithm successfully tracks targets while solving complex occlusions. <s> BIB003
|
JPDAF is one of the most widely used techniques for data association in multi-target tracking. It jointly achieves multi-target tracking by associating all measurements with each track, where a track is defined as a sequence of measurements assumed to derive from the same object. Occlusion between tracked objects is one of the most difficult problems in multi-target tracking. To address this issue, Rasmussen et al. BIB002 propose to track complex visual objects based on the JPDAF algorithm, where a related technique called the Joint Likelihood Filter (JLF), which relates the exclusion principle at the heart of the JPDAF to the method of masking out image data, is used to deal with occlusions between tracked objects. However, the computational requirements of this method grow rapidly as the number of associated objects increases. To take full advantage of the additional available information and further improve tracking performance, Schulz et al. BIB001 propose a sample-based JPDAF for tracking multiple moving human objects using a mobile robot, where the JPDAF algorithm is directly applied to the sample sets of the individual particle filters to determine the correspondence between individual objects and measurements. Moreover, the proposed approach adopts different features extracted from consecutive sensor measurements to explicitly deal with occlusions. However, it uses fixed sample sizes for the particle filters and randomly introduces samples whenever a new human object is discovered; more intelligent sampling techniques may therefore yield improved results and faster convergence. To better deal with complex inter-target occlusion problems, with the aid of a clustering process and extracted image features, Naqvi et al. BIB003 propose clustering and JPDAF for coping with occlusions in multi-target tracking.
More specifically, the proposed algorithm adopts the variational Bayesian method for grouping measurements into clusters, and then uses a JPDAF technique to associate measurements to targets based on clustered image features; occlusion problems can thus be dealt with more effectively in multi-target tracking. However, this method struggles with numerous targets and measurements, such as tracking multiple human objects in crowded scenes. To overcome this drawback, Rezatofighi et al. revisit the JPDAF technique and propose a novel solution by formulating the problem as an integer linear program, which is embedded in a simple tracking framework. More specifically, the proposed method reformulates the calculation of individual JPDA assignment scores as a series of integer linear programs and approximates the joint score by the m-best solutions, which are efficiently calculated using a binary tree partition method; this addresses the high computational complexity associated with the JPDAF without forfeiting tracking performance. An example of tracking multiple human targets in a crowded scene is shown in Fig. 8. In short, the JPDAF is a good technique for data association in multi-target tracking, but it has great difficulty tracking a variable number of objects, such as a new object entering the field of view (FOV) or a tracked object exiting the FOV. Also, the JPDAF establishes the targets' correspondence using only two frames of information, which can occasionally lead to incorrect correspondences.
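The joint-event enumeration at the core of the JPDAF can be sketched as follows. This is a minimal toy illustration, not any of the cited implementations: `jpda_marginals` is a hypothetical name, and the detection probability and clutter density are assumed placeholder values. The exhaustive enumeration over joint events is exactly the exponential cost that motivates the integer-linear-program reformulation discussed above.

```python
import itertools
import numpy as np

def jpda_marginals(likelihood, p_detect=0.9, clutter=1e-3):
    """Marginal association probabilities beta[t, m] for T targets and
    M measurements via exhaustive enumeration of joint events.
    likelihood[t, m] = p(measurement m | target t); the last column of
    the result is the missed-detection probability for each target."""
    T, M = likelihood.shape
    beta = np.zeros((T, M + 1))            # last column: missed detection
    # A joint event assigns each target one measurement index or 'miss' (M),
    # with no measurement shared by two targets (the exclusion principle).
    for event in itertools.product(range(M + 1), repeat=T):
        used = [m for m in event if m < M]
        if len(used) != len(set(used)):    # measurement used twice: infeasible
            continue
        p = 1.0
        for t, m in enumerate(event):
            p *= p_detect * likelihood[t, m] if m < M else (1 - p_detect)
        p *= clutter ** (M - len(used))    # unassigned measurements = clutter
        for t, m in enumerate(event):
            beta[t, m] += p
    return beta / beta.sum(axis=1, keepdims=True)

# Two targets, two measurements: target 0 matches measurement 0 well,
# target 1 matches measurement 1 well.
beta = jpda_marginals(np.array([[0.8, 0.1], [0.1, 0.7]]))
```

Each track is then updated with the probability-weighted combination of all measurements rather than a single hard assignment, which is what makes the JPDAF robust to ambiguous, cluttered scenes.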
|
Human tracking over camera networks: a review <s> MHT <s> In multiple-object tracking applications, it is essential to address the problem of associating targets and observation data. For visual tracking of multiple targets which involves objects that split and merge, a target may be associated with multiple measurements and many targets may be associated with a single measurement. The space of such data association is exponential in the number of targets and exhaustive enumeration is impractical. We pose the association problem as a bipartite graph edge covering problem given the targets and the object detection information. We propose an efficient method of maintaining multiple association hypotheses with the highest probabilities over all possible histories of associations. Our approach handles objects entering and exiting the field of view, merging and splitting objects, as well as objects that are detected as fragmented parts. Experimental results are given for tracking multiple players in a soccer game and for tracking people with complex interaction in a surveillance setting. It is shown through quantitative evaluation that our method tracks through varying degrees of interactions among the targets with high success rate. <s> BIB001 </s> Human tracking over camera networks: a review <s> MHT <s> Multiple object tracking is a fundamental subsystem of many higher level applications such as traffic monitoring, people counting, robotic vision and many more. This paper explains in details the methodology of building a robust hierarchical multiple hypothesis tracker for tracking multiple objects in the videos. The main novelties of our approach are anchor-based track initialization, prediction assistance for unconfirmed track and two virtual measurements for confirmed track. The system is built mainly to deal with the problems of merge, split, fragments and occlusion. 
The system is divided into two levels where the first level obtains the measurement input from foreground segmentation and clustered optical flow. Only K-best hypothesis and one-to-one association are considered. Two more virtual measurements are constructed to help track retention rate for the second level, which are based on predicted state and division of occluded foreground segments. Track based K-best hypothesis with multiple associations are considered for more comprehensive observation assignment. Histogram intersection testing is performed to limit the tracker bounding box expansion. Simulation results show that all our algorithms perform well in the surroundings mentioned above. Two performance metrics are used; multiple-object tracking accuracy (MOTA) and multiple-object tracking precision (MOTP). Our tracker have performed the best compared to the benchmark trackers in both performance evaluation metrics. The main weakness of our algorithms is the heavy processing requirement. <s> BIB002 </s> Human tracking over camera networks: a review <s> MHT <s> This paper revisits the classical multiple hypotheses tracking (MHT) algorithm in a tracking-by-detection framework. The success of MHT largely depends on the ability to maintain a small list of potential hypotheses, which can be facilitated with the accurate object detectors that are currently available. We demonstrate that a classical MHT implementation from the 90's can come surprisingly close to the performance of state-of-the-art methods on standard benchmark datasets. In order to further utilize the strength of MHT in exploiting higher-order information, we introduce a method for training online appearance models for each track hypothesis. We show that appearance models can be learned efficiently via a regularized least squares framework, requiring only a few extra operations for each hypothesis branch. 
We obtain state-of-the-art results on popular tracking-by-detection datasets such as PETS and the recent MOT challenge. <s> BIB003
|
MHT is another widely used technique for data association in multi-target tracking. It maintains several correspondence hypotheses for each object at each video frame and establishes the targets' correspondence over several frames of observations. However, the MHT has a very high computational load since it exhaustively enumerates all possible associations. To reduce the computational requirements, Zúñiga et al. propose a real-time MHT-based multi-human tracking approach, which can reliably track multiple human objects even in noisy environments. The proposed approach takes advantage of a dual object model, combining 2D and 3D features through reliability measures, to generate tracking hypotheses for the moving human objects in the scene. Moreover, the approach can manage many-to-many human object correspondences in real time. Kim et al. BIB003 revisit the MHT technique in a tracking-by-detection framework and propose a novel and more efficient MHT algorithm, which embeds online-learned appearance models for each track hypothesis through a regularized least squares framework, and hence prunes the hypothesis space more effectively and accurately so as to reduce the ambiguities in data association. However, the above MHT algorithms still have difficulty handling complex interactions between objects. To address this issue, Joo et al. BIB001 propose a multiple-association-based MHT algorithm, relaxing the association constraint of conventional MHT to allow association of a single target with multiple measurements and of multiple targets with a single measurement. More specifically, the proposed method treats the data association among multiple objects as a minimum-weight bipartite graph edge cover, defined as a subset of edges such that each vertex is incident on at least one edge and the sum of the weights in the subset is minimum, given an edge-weighted graph.
In addition, they develop a polynomial-time algorithm to generate only the best multiple-association hypotheses, achieving robust and real-time target tracking. Zulkifley et al. BIB002 propose a hierarchical two-level MHT for multiple-object tracking. The first level adopts foreground segmentation and clustered optical flow detection to generate observations, so as to obtain stable velocity values and to filter out false tracks. The second level combines the outputs of the first level with two additional virtual measurements, based on appearance modeling and a large foreground blob, to find the best combination of the observations. In short, the MHT algorithm has wide practical application in multi-target tracking; it can not only track a variable number of objects but also deal with the occlusion problem. However, it has prohibitively high computational requirements, especially as the number of associated objects increases.
|
Human tracking over camera networks: a review <s> FNF <s> We propose a network flow based optimization method for data association needed for multiple object tracking. The maximum-a-posteriori (MAP) data association problem is mapped into a cost-flow network with a non-overlap constraint on trajectories. The optimal data association is found by a min-cost flow algorithm in the network. The network is augmented to include an explicit occlusion model(EOM) to track with long-term inter-object occlusions. A solution to the EOM-based network is found by an iterative approach built upon the original algorithm. Initialization and termination of trajectories and potential false observations are modeled by the formulation intrinsically. The method is efficient and does not require hypotheses pruning. Performance is compared with previous results on two public pedestrian datasets to show its improvement. <s> BIB001 </s> Human tracking over camera networks: a review <s> FNF <s> We analyze the computational problem of multi-object tracking in video sequences. We formulate the problem using a cost function that requires estimating the number of tracks, as well as their birth and death states. We show that the global solution can be obtained with a greedy algorithm that sequentially instantiates tracks using shortest path computations on a flow network. Greedy algorithms allow one to embed pre-processing steps, such as nonmax suppression, within the tracking algorithm. Furthermore, we give a near-optimal algorithm based on dynamic programming which runs in time linear in the number of objects and linear in the sequence length. Our algorithms are fast, simple, and scalable, allowing us to process dense input data. This results in state-of-the-art performance. <s> BIB002 </s> Human tracking over camera networks: a review <s> FNF <s> We propose a method for global multi-target tracking that can incorporate higher-order track smoothness constraints such as constant velocity. 
Our problem formulation readily lends itself to path estimation in a trellis graph, but unlike previous methods, each node in our network represents a candidate pair of matching observations between consecutive frames. Extra constraints on binary flow variables in the graph result in a problem that can no longer be solved by min-cost network flow. We therefore propose an iterative solution method that relaxes these extra constraints using Lagrangian relaxation, resulting in a series of problems that ARE solvable by min-cost flow, and that progressively improve towards a high-quality solution to our original optimization problem. We present experimental results showing that our method outperforms the standard network-flow formulation as well as other recent algorithms that attempt to incorporate higher-order smoothness constraints. <s> BIB003 </s> Human tracking over camera networks: a review <s> FNF <s> This paper presents a general formulation for a minimum cost data association problem which associates data features via one-to-one, m-to-one and one-to-n links with minimum total cost of the links. A motivating example is a problem of tracking multiple interacting nanoparticles imaged on video frames, where particles can aggregate into one particle or a particle can be split into multiple particles. Many existing multitarget tracking methods are capable of tracking non-interacting targets or tracking interacting targets of restricted degrees of interactions. The proposed formulation solves a multitarget tracking problem for general degrees of inter-object interactions. The formulation is in the form of a binary integer programming problem. We propose a polynomial time solution approach that can obtain a good relaxation solution of the binary integer programming, so the approach can be applied for multitarget tracking problems of a moderate size (for hundreds of targets over tens of time frames). 
The resulting solution is always integral and obtains a better duality gap than the simple linear relaxation solution of the corresponding problem. The proposed method was validated through applications to simulated multitarget tracking problems and a real multitarget tracking problem. <s> BIB004
|
In recent years, it has become increasingly popular to solve target association problems with the FNF, which is widely applied to multi-target tracking. Zhang et al. BIB001 propose an explicit occlusion model (EOM)-based minimal-cost FNF to achieve robust multi-human tracking. The proposed approach maps the maximum a posteriori (MAP) data association problem into a cost-flow network with a non-overlap constraint on trajectories and adopts a min-cost flow algorithm to find the globally optimal trajectory association in the network, given a set of human object detection results in each video frame as input observations, where observation likelihoods and transition probabilities are modeled as flow costs and non-overlapping trajectory hypotheses are modeled as disjoint flow paths. In addition, the proposed approach constructs an EOM by adding occlusion nodes and constraints to the network to solve long-term inter-object occlusion problems, and thus achieves real-time and robust multi-human tracking. Following the min-cost flow approach of the EOM, Pirsiavash et al. BIB002 formulate the computational problem of multi-object tracking with a cost function that requires estimating the number of tracks as well as the objects' birth states (i.e., a new object entering the FOV) and death states (i.e., a tracked object exiting the FOV). A greedy but globally optimal algorithm, which adopts shortest-path computations in a min-cost flow framework, is used for tracking a variable number of human objects. An example of tracking a variable number of human objects in an outdoor scene is shown in Fig. 9. However, the above methods do not allow for path smoothness constraints. To address this, Butt et al.
BIB003 develop a graph formulation that encodes constant-velocity constraints to evaluate path smoothness over three adjacent frames, where candidate pairs of matching observations are viewed as nodes in the graph, allowing each graph edge to encode an observation-based cost; they adopt the principle of Lagrangian relaxation to form a modified-cost network framework for global multi-human tracking. However, the above methods impose the constraint that one measurement is associated with only one target, i.e., one-to-one data association. To deal with many-to-one or one-to-many data associations, Park et al. BIB004 propose a general formulation, in the form of binary integer programming, that handles a min-cost data association problem with one-to-one, many-to-one, and one-to-many data associations (also called multi-way data associations) to track multiple interacting targets in video frames. The proposed method adopts Lagrangian dual relaxation to solve the binary integer programming problem, and hence achieves an integer-valued solution with a smaller duality gap than classical linear programming (LP) relaxation, so as to improve the accuracy of the data associations. However, multi-way data association makes real-time multi-human tracking difficult to achieve. In short, FNF-based tracking performance depends heavily on reliable detection; when missed detections or long-term occlusions occur, tracking performance deteriorates significantly.
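The simplest instance of the min-cost flow formulation, linking detections between just two consecutive frames, reduces to a min-cost assignment problem. The sketch below is an illustrative simplification of the FNF idea, not any cited system: centroid distance stands in for the negative log association likelihood, and `link_detections` and `max_dist` are hypothetical names and values.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def link_detections(prev_boxes, curr_boxes, max_dist=50.0):
    """Frame-to-frame data association as a min-cost assignment, the
    two-frame special case of the min-cost network-flow formulation.
    prev_boxes, curr_boxes: (N, 2) and (M, 2) arrays of centroids.
    Returns matched (prev_idx, curr_idx) pairs; pairs costlier than
    max_dist are left unmatched and treated as track death / birth."""
    # Pairwise centroid distances play the role of flow edge costs.
    cost = np.linalg.norm(
        prev_boxes[:, None, :] - curr_boxes[None, :, :], axis=2)
    rows, cols = linear_sum_assignment(cost)   # globally optimal matching
    return [(r, c) for r, c in zip(rows, cols) if cost[r, c] <= max_dist]

prev = np.array([[0.0, 0.0], [100.0, 100.0]])
curr = np.array([[101.0, 99.0], [1.0, 1.0]])
links = link_detections(prev, curr)
```

The full network-flow formulation generalizes this by chaining such edges over the whole sequence and adding source/sink edges for births and deaths, so trajectories of varying length fall out of a single global optimization.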
|
Human tracking over camera networks: a review <s> Human tracking across non-overlapping cameras <s> We analyze the computational problem of multi-object tracking in video sequences. We formulate the problem using a cost function that requires estimating the number of tracks, as well as their birth and death states. We show that the global solution can be obtained with a greedy algorithm that sequentially instantiates tracks using shortest path computations on a flow network. Greedy algorithms allow one to embed pre-processing steps, such as nonmax suppression, within the tracking algorithm. Furthermore, we give a near-optimal algorithm based on dynamic programming which runs in time linear in the number of objects and linear in the sequence length. Our algorithms are fast, simple, and scalable, allowing us to process dense input data. This results in state-of-the-art performance. <s> BIB001 </s> Human tracking over camera networks: a review <s> Human tracking across non-overlapping cameras <s> Intelligent multi-camera video surveillance is a multidisciplinary field related to computer vision, pattern recognition, signal processing, communication, embedded computing and image sensors. This paper reviews the recent development of relevant technologies from the perspectives of computer vision and pattern recognition. The covered topics include multi-camera calibration, computing the topology of camera networks, multi-camera tracking, object re-identification, multi-camera activity analysis and cooperative video surveillance both with active and static cameras. Detailed descriptions of their technical challenges and comparison of different solutions are provided. It emphasizes the connection and integration of different modules in various environments and application scenarios. According to the most recent works, some problems can be jointly solved in order to improve the efficiency and accuracy. 
With the fast development of surveillance systems, the scales and complexities of camera networks are increasing and the monitored environments are becoming more and more complicated and crowded. This paper discusses how to face these emerging challenges. <s> BIB002
|
Fig. 9 Illustration of tracking a variable number of human objects in an outdoor scene, including estimated track births and deaths BIB001
Human tracking across non-overlapping cameras establishes the correspondence of detected/tracked human objects between two non-overlapping cameras so as to successfully perform label handoff. Based on the approaches used for target matching, human tracking across cameras can be divided into three main categories: human re-id, CLM-based tracking, and GM-based tracking. Human re-id identifies whether a human captured by one camera is the same person as one captured by another camera. Human image pairs captured by two different cameras often vary greatly in appearance due to changes in illumination and viewpoint as well as intra-class variability in shape and pose. Examples from the VIPeR dataset are shown in Fig. 10. Current research on human re-id primarily focuses on two aspects BIB002 : one is extracting discriminative visual features to characterize human appearance and shape; the other is identifying suitable distance metrics that maximize the likelihood of a correct correspondence. However, most visual features are either insufficiently discriminative for cross-view matching or insufficiently robust to viewpoint changes, which poses a significant challenge for automated human re-id. Distance metric learning shifts the focus from designing feature descriptors to learning distance metrics that maximize human matching accuracy, thereby improving re-id performance. However, most distance metric learning requires pairwise supervised labeling of training datasets. This becomes infeasible as the size of the datasets or the number of camera pairs grows, since the labeling requires a large amount of manual effort.
CLM-based tracking tracks humans by establishing link (correlation) models between two adjacent cameras, or among multiple neighboring cameras, to compensate for the feature differences arising from different cameras. It is mainly applicable to tracking humans across multiple static cameras. Current research on CLM-based tracking primarily exploits temporal and spatial relationships to reduce mismatches in cross-camera tracking, as well as appearance relationships to compensate for the appearance differences between two adjacent cameras. The CLM can be estimated in a supervised learning manner, i.e., by manually labeling the human objects' correspondence in given training data in advance, or in an unsupervised learning manner, i.e., without manually labeling the correspondence. As a result, compared to the supervised learning-based CLM, which needs substantial human labeling effort, especially as the size of the datasets or the number of camera pairs increases, the unsupervised learning-based CLM is more feasible for building self-organized and scalable large-scale camera networks. GM-based tracking tracks humans through a graph modeling technique that forms a solvable GM from input observations (detections, tracklets, trajectories, or pairs) to handle data association across cameras; the GM is composed of nodes, edges, and weights, and is solved with an optimization method under a MAP estimation framework to obtain optimal or suboptimal solutions. This tracking method can effectively track humans in complex scenes involving occlusion, crowding, and interference from similar human appearances. However, it is difficult to obtain the optimal solution of data association across cameras. Table 4 lists the human tracking algorithms across non-overlapping cameras.
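A toy illustration of appearance-based cross-camera matching for re-id: per-stripe color histograms (a crude stand-in for the descriptors surveyed in the next section) ranked by histogram-intersection similarity. `stripe_histograms` and `rank_gallery` are assumed names, and real systems would add metric learning on top of such features rather than use raw intersection.

```python
import numpy as np

def stripe_histograms(image, n_stripes=6, bins=8):
    """Toy appearance descriptor: per-channel color histograms over
    horizontal stripes, preserving coarse spatial layout (a much
    simplified stand-in for ELF/SDALF-style features).
    image: (H, W, C) uint8 array."""
    H = image.shape[0]
    feats = []
    for s in range(n_stripes):
        stripe = image[s * H // n_stripes:(s + 1) * H // n_stripes]
        for c in range(image.shape[2]):
            h, _ = np.histogram(stripe[..., c], bins=bins, range=(0, 256))
            feats.append(h / max(h.sum(), 1))   # normalize each histogram
    return np.concatenate(feats)

def rank_gallery(probe_feat, gallery_feats):
    """Rank gallery entries by histogram-intersection similarity,
    best match first (the core operation in re-id evaluation)."""
    sims = [np.minimum(probe_feat, g).sum() for g in gallery_feats]
    return np.argsort(sims)[::-1]

# A probe seen in camera A against a two-person gallery from camera B:
# gallery[1] has the same dominant color as the probe.
probe = np.full((60, 20, 3), 200, dtype=np.uint8)
gallery = [np.full((60, 20, 3), 10, dtype=np.uint8),
           np.full((60, 20, 3), 200, dtype=np.uint8)]
order = rank_gallery(stripe_histograms(probe),
                     [stripe_histograms(g) for g in gallery])
```

Replacing the intersection score with a learned Mahalanobis-style metric is exactly the distance-metric-learning step described above.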
|
Human tracking over camera networks: a review <s> Feature extraction <s> Matching of single individuals as they move across disjoint camera views is a challenging task in video surveillance. In this paper, we present a novel algorithm capable of matching single individuals in such a scenario based on appearance features. In order to reduce the variable illumination effects in a typical disjoint camera environment, a cumulative color histogram transformation is first applied to the segmented moving object. Then, an incremental major color spectrum histogram representation (IMCSHR) is used to represent the appearance of a moving object and cope with small pose changes occurring along the track. An IMCHSR-based similarity measurement algorithm is also proposed to measure the similarity of any two segmented moving objects. A final step of post-matching integration along the object's track is eventually applied. Experimental results show that the proposed approach proved capable of providing correct matching in typical situations. <s> BIB001 </s> Human tracking over camera networks: a review <s> Feature extraction <s> In this work we develop appearance models for computing the similarity between image regions containing deformable objects of a given class in realtime. We introduce the concept of shape and appearance context. The main idea is to model the spatial distribution of the appearance relative to each of the object parts. Estimating the model entails computing occurrence matrices. We introduce a generalization of the integral image and integral histogram frameworks, and prove that it can be used to dramatically speed up occurrence computation. We demonstrate the ability of this framework to recognize an individual walking across a network of cameras. Finally, we show that the proposed approach outperforms several other methods. 
<s> BIB002 </s> Human tracking over camera networks: a review <s> Feature extraction <s> Viewpoint invariant pedestrian recognition is an important yet under-addressed problem in computer vision. This is likely due to the difficulty in matching two objects with unknown viewpoint and pose. This paper presents a method of performing viewpoint invariant pedestrian recognition using an efficiently and intelligently designed object representation, the ensemble of localized features (ELF). Instead of designing a specific feature by hand to solve the problem, we define a feature space using our intuition about the problem and let a machine learning algorithm find the best representation. We show how both an object class specific representation and a discriminative recognition model can be learned using the AdaBoost algorithm. This approach allows many different kinds of simple features to be combined into a single similarity function. The method is evaluated using a viewpoint invariant pedestrian recognition dataset and the results are shown to be superior to all previous benchmarks for both recognition and reacquisition of pedestrians. <s> BIB003 </s> Human tracking over camera networks: a review <s> Feature extraction <s> In this paper, we present an appearance-based method for person re-identification. It consists in the extraction of features that model three complementary aspects of the human appearance: the overall chromatic content, the spatial arrangement of colors into stable regions, and the presence of recurrent local motifs with high entropy. All this information is derived from different body parts, and weighted opportunely by exploiting symmetry and asymmetry perceptual principles. In this way, robustness against very low resolution, occlusions and pose, viewpoint and illumination changes is achieved. The approach applies to situations where the number of candidates varies continuously, considering single images or bunch of frames for each individual. 
It has been tested on several public benchmark datasets (ViPER, iLIDS, ETHZ), gaining new state-of-the-art performances. <s> BIB004 </s> Human tracking over camera networks: a review <s> Feature extraction <s> We propose a novel methodology for re-identification, based on Pictorial Structures (PS). Whenever face or other biometric information is missing, humans recognize an individual by selectively focusing on the body parts, looking for part-to-part correspondences. We want to take inspiration from this strategy in a re-identification context, using PS to achieve this objective. For single image re-identification, we adopt PS to localize the parts, extract and match their descriptors. When multiple images of a single individual are available, we propose a new algorithm to customize the fit of PS on that specific person, leading to what we call a Custom Pictorial Structure (CPS). CPS learns the appearance of an individual, improving the localization of its parts, thus obtaining more reliable visual characteristics for re-identification. It is based on the statistical learning of pixel attributes collected through spatio-temporal reasoning. The use of PS and CPS leads to state-of-the-art results on all the available public benchmarks, and opens a fresh new direction for research on re-identification. <s> BIB005 </s> Human tracking over camera networks: a review <s> Feature extraction <s> Visually identifying a target individual reliably in a crowded environment observed by a distributed camera network is critical to a variety of tasks in managing business information, border control, and crime prevention. Automatic re-identification of a human candidate from public space CCTV video is challenging due to spatiotemporal visual feature variations and strong visual similarity between different people, compounded by low-resolution and poor quality video data. 
In this work, we propose a novel method for re-identification that learns a selection and weighting of mid-level semantic attributes to describe people. Specifically, the model learns an attribute-centric, parts-based feature representation. This differs from and complements existing low-level features for re-identification that rely purely on bottom-up statistics for feature selection, which are limited in discriminating and identifying reliably visual appearances of target people appearing in different camera views under certain degrees of occlusion due to crowdedness. Our experiments demonstrate the effectiveness of our approach compared to existing feature representations when applied to benchmarking datasets. <s> BIB006 </s> Human tracking over camera networks: a review <s> Feature extraction <s> In this paper, we propose a new approach for matching images observed in different camera views with complex cross-view transforms and apply it to person re-identification. It jointly partitions the image spaces of two camera views into different configurations according to the similarity of cross-view transforms. The visual features of an image pair from different views are first locally aligned by being projected to a common feature space and then matched with softly assigned metrics which are locally optimized. The features optimal for recognizing identities are different from those for clustering cross-view transforms. They are jointly learned by utilizing sparsity-inducing norm and information theoretical regularization. This approach can be generalized to the settings where test images are from new camera views, not the same as those in the training set. Extensive experiments are conducted on public datasets and our own dataset. Comparisons with the state-of-the-art metric learning and person re-identification methods show the superior performance of our approach. <s> BIB007
|
Extracting discriminative and robust features from raw pixel data in an image/video has become one of the key tasks in human re-id. Many feature types have been proposed for human re-id, such as color BIB001 , texture BIB007 , shape BIB002 , global features BIB001 BIB002 , regional features BIB005 , patch-based features BIB007 , and semantic features BIB006 . In general, compared to other features, the color feature dominates under slight lighting changes, since it is robust to viewpoint changes. Texture and shape features are stable under significant lighting changes, but they are sensitive to viewpoint changes and occlusion. Global features, which reflect the global statistical characteristics of human appearance, have some invariance to changes in viewpoint and pose, but their discriminative power is limited due to the loss of the spatial information that represents human body structure. Regional features and patch-based features increase the discriminative power further by taking into account the spatial information derived from partitioning the whole human region into several different regions, such as horizontal stripes and localized patches. Semantic features have better discriminative power and robustness to cross-view variations. However, semantic features require more labeling effort, and their generalization capability is therefore limited. When executing cross-view human matching, a human's appearance normally changes significantly due to changes in illumination and viewpoint, so a single feature type is usually not enough to identify human objects across views. Most human re-id approaches therefore integrate several feature types to improve cross-view matching accuracy and robustness by exploiting the complementary nature of various features. Gray and Tao BIB003 propose the ensemble of localized features (ELF) to deal with viewpoint variations across cameras.
More specifically, the ELF integrates RGB, YCbCr, and HSV color features, and two kinds of texture features extracted through Schmid and Gabor filters with different radii and scales. An effective feature selection is performed through the AdaBoost machine learning algorithm to find the most discriminating features out of a large pool of color and texture features. Farenzena et al. BIB004 propose the Symmetry-Driven Accumulation of Local Features (SDALF) to describe human appearance across cameras. The SDALF encodes three complementary visual characteristics of the human appearance, including the overall chromatic content represented through an HSV color histogram, the spatial arrangement of colors into stable regions represented
|
Human tracking over camera networks: a review <s> GM-based tracking <s> Conventional tracking approaches assume proximity in space, time and appearance of objects in successive observations. However, observations of objects are often widely separated in time and space when viewed from multiple non-overlapping cameras. To address this problem, we present a novel approach for establishing object correspondence across non-overlapping cameras. Our multicamera tracking algorithm exploits the redundance in paths that people and cars tend to follow, e.g. roads, walk-ways or corridors, by using motion trends and appearance of objects, to establish correspondence. Our system does not require any inter-camera calibration, instead the system learns the camera topology and path probabilities of objects using Parzen windows, during a training phase. Once the training is complete, correspondences are assigned using the maximum a posteriori (MAP) estimation framework. The learned parameters are updated with changing trajectory patterns. Experiments with real world videos are reported, which validate the proposed approach. <s> BIB001 </s> Human tracking over camera networks: a review <s> GM-based tracking <s> The traditional multi-camera object tracking contains two steps: single camera object tracking (SCT) and inter-camera object tracking (ICT). The ICT performance strongly relies on the great results of SCT. In practice, most of current SCT methods are unperfect and products much more fragments. In this paper, a novel solution using a global tracklet association is proposed, which can provide a good ICT performance when the SCT results are not perfect. The proposed solution is also available in non-overlapping views through a new tracklet representation and experiments shows the effectiveness of the proposed novel solution in real scene. 
<s> BIB002 </s> Human tracking over camera networks: a review <s> GM-based tracking <s> Non-overlapping multi-camera visual object tracking typically consists of two steps: single camera object tracking and inter-camera object tracking. Most of tracking methods focus on single camera object tracking, which happens in the same scene, while for real surveillance scenes, inter-camera object tracking is needed and single camera tracking methods can not work effectively. In this paper, we try to improve the overall multi-camera object tracking performance by a global graph model with an improved similarity metric. Our method treats the similarities of single camera tracking and inter-camera tracking differently and obtains the optimization in a global graph model. The results show that our method can work better even in the condition of poor single camera object tracking. <s> BIB003 </s> Human tracking over camera networks: a review <s> GM-based tracking <s> Tracking multiple targets across nonoverlapping cameras aims at estimating the trajectories of all targets, and maintaining their identity labels consistent while they move from one camera to another. Matching targets from different cameras can be very challenging, as there might be significant appearance variation and the blind area between cameras makes the target’s motion less predictable. Unlike most of the existing methods that only focus on modeling the appearance and spatiotemporal cues for inter-camera tracking, this paper presents a novel online learning approach that considers integrating high-level contextual information into the tracking system. The tracking problem is formulated using an online learned conditional random field (CRF) model that minimizes a global energy cost. Besides low-level information, social grouping behavior is explored in order to maintain targets’ identities as they move across cameras. 
In the proposed method, pairwise grouping behavior of targets is first learned within each camera. During inter-camera tracking, track associations that maintain single camera grouping consistencies are preferred. In addition, we introduce an iterative algorithm to find a good solution for the CRF model. Comparison experiments on several challenging real-world multicamera video sequences show that the proposed method is effective and outperforms the state-of-the-art approaches. <s> BIB004
|
To track humans through partite graph matching based on input observations (detections, tracklets, trajectories, or pairs), GM-based tracking using an optimization framework is also applied to human tracking across non-overlapping cameras. Javed et al. BIB001 propose to establish human objects' correspondences across non-overlapping cameras through the MAP estimation framework based on human motion trends and the appearance of human objects. More specifically, the proposed method adopts Parzen windows, i.e., kernel density estimators, to estimate inter-camera space-time probabilities from the training data between each pair of cameras, and models the changes in human appearance using the distances between color models. To estimate the human correspondences across non-overlapping cameras, the proposed method then casts the problem of finding the hypothesis that maximizes the a posteriori probability as finding the best path in a directed graph. In addition, to keep up with changing human motion and appearance patterns, the proposed method continuously updates the learned parameters during human tracking across non-overlapping cameras. Since the above method only focuses on appearance and spatio-temporal cues, Chen et al. BIB004 combine high-level contextual information, called social grouping behavior, with the traditionally used appearance and spatio-temporal cues in a non-overlapping inter-camera human tracking system, and adopt an online learned conditional random field model that minimizes a global energy cost to associate tracks of the same person from different cameras, thereby effectively achieving human tracking across non-overlapping cameras.
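The Parzen-window space-time modeling step above can be sketched with a one-dimensional kernel density estimate of the inter-camera travel time, multiplied by an appearance likelihood to form a MAP-style correspondence score. The Gaussian kernel, the bandwidth value, and the function names below are illustrative assumptions, not details of the BIB001 implementation:

```python
import numpy as np

def kde_transition_prob(train_gaps, query_gap, bandwidth=2.0):
    """Parzen-window (Gaussian-kernel) density estimate of the inter-camera
    travel time, evaluated at query_gap, from observed training gaps."""
    gaps = np.asarray(train_gaps, dtype=float)
    kernels = np.exp(-0.5 * ((query_gap - gaps) / bandwidth) ** 2)
    return kernels.sum() / (len(gaps) * bandwidth * np.sqrt(2.0 * np.pi))

def map_correspondence_score(appearance_sim, train_gaps, query_gap):
    """MAP-style score: appearance likelihood times the space-time prior."""
    return appearance_sim * kde_transition_prob(train_gaps, query_gap)
```

A candidate re-appearing after a travel time close to those seen in training thus receives a higher score than one whose gap is implausible, even at equal appearance similarity.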
The above methods adopt the trajectories obtained from single camera human tracking to achieve inter-camera data association, and hence the overall tracking performance depends on the results of single camera human tracking; especially in challenging scenes, the direct disturbance of false positives and fragments will seriously degrade the overall tracking performance. Such an example of human tracking across non-overlapping cameras on NLPR 4 is shown in Fig. 14. To deal with human tracklet mismatching and missing issues (as shown in Fig. 15 ) across non-overlapping cameras, Chen et al. BIB002 propose a global tracklet association for human tracking across non-overlapping cameras to improve the overall tracking performance. More specifically, the proposed method adopts fragmentary tracklets as the inputs, based on a piecewise major color spectrum histogram representation (PMCSHR), and models global tracklet association as a global MAP problem, which is mapped into a cost-flow network and solved by a min-cost flow algorithm. In addition, to better achieve tracklet matching across multiple camera views, a minimum uncertainty gap-based measurement, i.e., using the lowest and highest similarity to define the lower and upper bounds of the similarity of two tracklets to obtain a distance metric, is applied to computing the matching result of two tracklets' PMCSHRs. Built upon the research of PMCSHR BIB002 , Chen et al. BIB003 equalize the similarity metrics in the global graph based on appearance and motion features, and hence further reduce the number of mismatch errors in non-overlapping inter-camera human tracking so as to further improve tracking performance across non-overlapping cameras.
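The cost-flow formulation above can be illustrated with a small self-contained sketch: a toy successive-shortest-path min-cost flow solver matches the tracklets of one camera to those of another at minimum total cost. The solver, node layout, and unit capacities are generic textbook choices under stated assumptions; the PMCSHR distances and the exact network construction of BIB002 are not reproduced here:

```python
class MinCostFlow:
    """Tiny successive-shortest-path min-cost flow solver (illustrative)."""

    def __init__(self, n):
        self.n = n
        self.graph = [[] for _ in range(n)]  # edge: [to, capacity, cost, rev]

    def add_edge(self, u, v, cap, cost):
        self.graph[u].append([v, cap, cost, len(self.graph[v])])
        self.graph[v].append([u, 0, -cost, len(self.graph[u]) - 1])

    def flow(self, s, t, maxf):
        total_cost = 0
        while maxf > 0:
            # Bellman-Ford (SPFA) shortest path by cost on the residual graph
            dist = [float("inf")] * self.n
            dist[s] = 0
            prevv, preve = [0] * self.n, [0] * self.n
            in_queue, queue = [False] * self.n, [s]
            while queue:
                u = queue.pop(0)
                in_queue[u] = False
                for i, (v, cap, cost, _) in enumerate(self.graph[u]):
                    if cap > 0 and dist[u] + cost < dist[v]:
                        dist[v] = dist[u] + cost
                        prevv[v], preve[v] = u, i
                        if not in_queue[v]:
                            queue.append(v)
                            in_queue[v] = True
            if dist[t] == float("inf"):
                break  # no more augmenting paths
            d, v = maxf, t
            while v != s:  # bottleneck capacity along the path
                d = min(d, self.graph[prevv[v]][preve[v]][1])
                v = prevv[v]
            v = t
            while v != s:  # push d units of flow
                edge = self.graph[prevv[v]][preve[v]]
                edge[1] -= d
                self.graph[edge[0]][edge[3]][1] += d
                v = prevv[v]
            maxf -= d
            total_cost += d * dist[t]
        return total_cost

def associate_tracklets(cost):
    """Match tracklets of camera A (rows) to camera B (columns) by mapping
    the association onto a cost-flow network: source -> A-tracklets ->
    B-tracklets -> sink, all with unit capacity; matched pairs are the
    saturated cross edges."""
    n, m = len(cost), len(cost[0])
    s, t = n + m, n + m + 1
    mcf = MinCostFlow(n + m + 2)
    for i in range(n):
        mcf.add_edge(s, i, 1, 0)
    for j in range(m):
        mcf.add_edge(n + j, t, 1, 0)
    for i in range(n):
        for j in range(m):
            mcf.add_edge(i, n + j, 1, cost[i][j])
    mcf.flow(s, t, min(n, m))
    return [(i, e[0] - n) for i in range(n) for e in mcf.graph[i]
            if n <= e[0] < n + m and e[1] == 0]
```

On a 2x2 cost matrix with cheap diagonal entries, the solver recovers the diagonal matching, mirroring how the global MAP problem is solved once tracklet distances are available.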
Table 6 lists several quantitative comparison results of GM-based tracking across non-overlapping cameras on the NLPR datasets, using multiple camera tracking accuracy (MCTA) to evaluate the performance of GM-based tracking, where a higher MCTA indicates better tracking performance.
|
Human tracking over camera networks: a review <s> MAP optimization solution framework <s> We revisit the problem of specific object recognition using color distributions. In some applications-such as specific person identification-it is highly likely that the color distributions will be multimodal and hence contain a special structure. Although the color distribution changes under different lighting conditions, some aspects of its structure turn out to be invariants. We refer to this structure as an intradistribution structure, and show that it is invariant under a wide range of imaging conditions while being discriminative enough to be practical. Our signature uses shape context descriptors to represent the intradistribution structure. Assuming the widely used diagonal model, we validate that our signature is invariant under certain illumination changes. Experimentally, we use color information as the only cue to obtain good recognition performance on publicly available databases covering both indoor and outdoor conditions. Combining our approach with the complementary covariance descriptor, we demonstrate results exceeding the state-of-the-art performance on the challenging VIPeR and CAVIAR4REID databases. <s> BIB001 </s> Human tracking over camera networks: a review <s> MAP optimization solution framework <s> Color naming, which relates colors with color names, can help people with a semantic analysis of images in many computer vision applications. In this paper, we propose a novel salient color names based color descriptor (SCNCD) to describe colors. SCNCD utilizes salient color names to guarantee that a higher probability will be assigned to the color name which is nearer to the color. Based on SCNCD, color distributions over color names in different color spaces are then obtained and fused to generate a feature representation. Moreover, the effect of background information is employed and analyzed for person re-identification. 
With a simple metric learning method, the proposed approach outperforms the state-of-the-art performance (without user’s feedback optimization) on two challenging datasets (VIPeR and PRID 450S). More importantly, the proposed feature can be obtained very fast if we compute SCNCD of each color in advance. <s> BIB002 </s> Human tracking over camera networks: a review <s> MAP optimization solution framework <s> Person re-identification is an important technique towards automatic search of a person's presence in a surveillance video. Two fundamental problems are critical for person re-identification, feature representation and metric learning. An effective feature representation should be robust to illumination and viewpoint changes, and a discriminant metric should be learned to match various person images. In this paper, we propose an effective feature representation called Local Maximal Occurrence (LOMO), and a subspace and metric learning method called Cross-view Quadratic Discriminant Analysis (XQDA). The LOMO feature analyzes the horizontal occurrence of local features, and maximizes the occurrence to make a stable representation against viewpoint changes. Besides, to handle illumination variations, we apply the Retinex transform and a scale invariant texture operator. To learn a discriminant metric, we propose to learn a discriminant low dimensional subspace by cross-view quadratic discriminant analysis, and simultaneously, a QDA metric is learned on the derived subspace. We also present a practical computation method for XQDA, as well as its regularization. Experiments on four challenging person re-identification databases, VIPeR, QMUL GRID, CUHK Campus, and CUHK03, show that the proposed method improves the state-of-the-art rank-1 identification rates by 2.2%, 4.88%, 28.91%, and 31.55% on the four databases, respectively. 
<s> BIB003 </s> Human tracking over camera networks: a review <s> MAP optimization solution framework <s> Feature representation and metric learning are two critical components in person re-identification models. In this paper, we focus on the feature representation and claim that hand-crafted histogram features can be complementary to Convolutional Neural Network (CNN) features. We propose a novel feature extraction model called Feature Fusion Net (FFN) for pedestrian image representation. In FFN, back propagation makes CNN features constrained by the handcrafted features. Utilizing color histogram features (RGB, HSV, YCbCr, Lab and YIQ) and texture features (multi-scale and multi-orientation Gabor features), we get a new deep feature representation that is more discriminative and compact. Experiments on three challenging datasets (VIPeR, CUHK01, PRID450s) validates the effectiveness of our proposal. <s> BIB004
|
through maximally stable color regions (MSCR), and the presence of recurrent local motifs with high entropy represented through recurrent highly structured patches (RHSP), where the symmetry and asymmetry property is considered to handle viewpoint variations. Kviatkovsky et al. BIB001 propose to use color invariants (ColorInv) to perform human re-id. The ColorInv combines three component signatures over log color space, including color histogram, covariance descriptor, and parts-based shape context (PartsSC), to describe human appearance, where the PartsSC, as an invariant shape descriptor using different parts of a human object, is used to describe the discriminative intra-distribution structure of color distributions. Yang et al. BIB002 propose the salient color names-based color descriptor (SCNCD) for human re-id to deal with illumination changes across cameras, where the SCNCD and color histograms computed in four different color spaces, i.e., original RGB, rgb, l1l2l3, and HSV, are fused to describe color features of human appearance. Note that the salient color names indicate that a color only has a certain probability of being assigned to several nearest color names, and that the closer the color name is to the color, the higher probability the color has of being assigned to this color name. Liao et al. BIB003 propose an effective feature representation of human appearance called Local Maximal Occurrence (LOMO) for human re-id, where the LOMO analyzes local color and texture features' horizontal occurrence and maximizes the occurrence so as to obtain a robust feature representation against viewpoint changes, based on an HSV color histogram and the scale invariant local ternary pattern (SILTP) texture descriptor.
Such an illustration of the LOMO feature extraction method is shown in Fig. 11 . Wu et al. BIB004 propose the Feature Fusion Net (FFN) to describe human appearance for human re-id, where the FFN combines a convolutional neural network (CNN) deep feature with handcrafted features, including color histograms computed in five different color spaces, i.e., RGB, HSV, YCbCr, Lab and YIQ, and Gabor texture descriptors with multiple scales and orientations. The CNN deep feature is constrained by the handcrafted features through backpropagation to form a more discriminative feature fusion deep neural network. In short, discriminant multi-feature extraction with complementary nature helps to improve the accuracy of human re-id. However, the constructed feature vectors have a very high dimension, resulting in a very high computation requirement.
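The maximal-occurrence idea behind LOMO can be conveyed with a simplified sketch that uses only hue histograms over sliding windows within each horizontal band; the real LOMO uses joint HSV histograms, SILTP, and a multi-scale pyramid, so the function name and parameters here are illustrative assumptions:

```python
import numpy as np

def lomo_like_feature(hsv_image, band_height=16, bins=8):
    """Simplified LOMO-style descriptor: slide windows across each horizontal
    band, compute a hue histogram per window, and keep the element-wise
    maximum over the band ("maximal occurrence"), which tolerates the
    horizontal shifts caused by viewpoint changes."""
    h, w, _ = hsv_image.shape
    feature = []
    for top in range(0, h - band_height + 1, band_height):
        band = hsv_image[top:top + band_height]
        maxed = np.zeros(bins)
        for left in range(0, w - band_height + 1, band_height // 2):
            window = band[:, left:left + band_height, 0]  # hue channel only
            hist, _ = np.histogram(window, bins=bins, range=(0, 256), density=True)
            maxed = np.maximum(maxed, hist)
        feature.append(maxed)
    return np.concatenate(feature)
```

Concatenating the per-band maxima yields one vector per image; the real descriptor is far higher-dimensional, which is exactly the computational burden noted above.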
|
Human tracking over camera networks: a review <s> Distance metric learning <s> In this paper, we raise important issues on scalability and the required degree of supervision of existing Mahalanobis metric learning methods. Often rather tedious optimization procedures are applied that become computationally intractable on a large scale. Further, if one considers the constantly growing amount of data it is often infeasible to specify fully supervised labels for all data points. Instead, it is easier to specify labels in form of equivalence constraints. We introduce a simple though effective strategy to learn a distance metric from equivalence constraints, based on a statistical inference perspective. In contrast to existing methods we do not rely on complex optimization problems requiring computationally expensive iterations. Hence, our method is orders of magnitudes faster than comparable methods. Results on a variety of challenging benchmarks with rather diverse nature demonstrate the power of our method. These include faces in unconstrained environments, matching before unseen object instances and person re-identification across spatially disjoint cameras. In the latter two benchmarks we clearly outperform the state-of-the-art. <s> BIB001 </s> Human tracking over camera networks: a review <s> Distance metric learning <s> Metric learning methods, for person re-identification, estimate a scaling for distances in a vector space that is optimized for picking out observations of the same individual. This paper presents a novel approach to the pedestrian re-identification problem that uses metric learning to improve the state-of-the-art performance on standard public datasets. Very high dimensional features are extracted from the source color image. A first processing stage performs unsupervised PCA dimensionality reduction, constrained to maintain the redundancy in color-space representation. 
A second stage further reduces the dimensionality, using a Local Fisher Discriminant Analysis defined by a training set. A regularization step is introduced to avoid singular matrices during this stage. The experiments conducted on three publicly available datasets confirm that the proposed method outperforms the state-of-the-art performance, including all other known metric learning methods. Furthermore, the method is an effective way to process observations comprising multiple shots, and is non-iterative: the computation times are relatively modest. Finally, a novel statistic is derived to characterize the Match Characteristic: the normalized entropy reduction can be used to define the 'Proportion of Uncertainty Removed' (PUR). This measure is invariant to test set size and provides an intuitive indication of performance. <s> BIB002 </s> Human tracking over camera networks: a review <s> Distance metric learning <s> Re-identification of individuals across camera networks with limited or no overlapping fields of view remains challenging in spite of significant research efforts. In this paper, we propose the use, and extensively evaluate the performance, of four alternatives for re-ID classification: regularized Pairwise Constrained Component Analysis, kernel Local Fisher Discriminant Analysis, Marginal Fisher Analysis and a ranking ensemble voting scheme, used in conjunction with different sizes of sets of histogram-based features and linear, χ2 and RBF-χ2 kernels. Comparisons against the state-of-the-art show significant improvements in performance measured both in terms of Cumulative Match Characteristic curves (CMC) and Proportion of Uncertainty Removed (PUR) scores on the challenging VIPeR, iLIDS, CAVIAR and 3DPeS datasets. <s> BIB003 </s> Human tracking over camera networks: a review <s> Distance metric learning <s> Person re-identification is an important technique towards automatic search of a person's presence in a surveillance video.
Two fundamental problems are critical for person re-identification, feature representation and metric learning. An effective feature representation should be robust to illumination and viewpoint changes, and a discriminant metric should be learned to match various person images. In this paper, we propose an effective feature representation called Local Maximal Occurrence (LOMO), and a subspace and metric learning method called Cross-view Quadratic Discriminant Analysis (XQDA). The LOMO feature analyzes the horizontal occurrence of local features, and maximizes the occurrence to make a stable representation against viewpoint changes. Besides, to handle illumination variations, we apply the Retinex transform and a scale invariant texture operator. To learn a discriminant metric, we propose to learn a discriminant low dimensional subspace by cross-view quadratic discriminant analysis, and simultaneously, a QDA metric is learned on the derived subspace. We also present a practical computation method for XQDA, as well as its regularization. Experiments on four challenging person re-identification databases, VIPeR, QMUL GRID, CUHK Campus, and CUHK03, show that the proposed method improves the state-of-the-art rank-1 identification rates by 2.2%, 4.88%, 28.91%, and 31.55% on the four databases, respectively. <s> BIB004 </s> Human tracking over camera networks: a review <s> Distance metric learning <s> We propose an effective structured learning based approach to the problem of person re-identification which outperforms the current state-of-the-art on most benchmark data sets evaluated. Our framework is built on the basis of multiple low-level hand-crafted and high-level visual features. We then formulate two optimization algorithms, which directly optimize evaluation measures commonly used in person re-identification, also known as the Cumulative Matching Characteristic (CMC) curve. 
Our new approach is practical to many real-world surveillance applications as the re-identification performance can be concentrated in the range of most practical importance. The combination of these factors leads to a person re-identification system which outperforms most existing algorithms. More importantly, we advance state-of-the-art results on person re-identification by improving the rank-1 recognition rates from 40% to 50% on the iLIDS benchmark, 16% to 18% on the PRID2011 benchmark, 43% to 46% on the VIPeR benchmark, 34% to 53% on the CUHK01 benchmark and 21% to 62% on the CUHK03 benchmark. <s> BIB005 </s> Human tracking over camera networks: a review <s> Distance metric learning <s> In this paper, we propose a novel similarity measure and then introduce an efficient strategy to learn it by using only similar pairs for person verification. Unlike existing metric learning methods, we consider both the difference and commonness of an image pair to increase its discriminativeness. Under a pair-constrained Gaussian assumption, we show how to obtain the Gaussian priors (i.e., corresponding covariance matrices) of dissimilar pairs from those of similar pairs. The application of a log likelihood ratio makes the learning process simple and fast and thus scalable to large datasets. Additionally, our method is able to handle heterogeneous data well. Results on the challenging datasets of face verification (LFW and Pub-Fig) and person re-identification (VIPeR) show that our algorithm outperforms the state-of-the-art methods. <s> BIB006 </s> Human tracking over camera networks: a review <s> Distance metric learning <s> Pose variation remains one of the major factors that adversely affect the accuracy of person re-identification. Such variation is not arbitrary as body parts (e.g. head, torso, legs) have relative stable spatial distribution. Breaking down the variability of global appearance regarding the spatial distribution potentially benefits the person matching. 
We therefore learn a novel similarity function, which consists of multiple sub-similarity measurements with each taking in charge of a subregion. In particular, we take advantage of the recently proposed polynomial feature map to describe the matching within each subregion, and inject all the feature maps into a unified framework. The framework not only outputs similarity measurements for different regions, but also makes a better consistency among them. Our framework can collaborate local similarities as well as global similarity to exploit their complementary strength. It is flexible to incorporate multiple visual cues to further elevate the performance. In experiments, we analyze the effectiveness of the major components. The results on four datasets show significant and consistent improvements over the state-of-the-art methods. <s> BIB007
|
Standard metrics, such as the Euclidean distance, applied to the extracted features discussed previously for cross-view human matching in human re-id normally produce poor performance due to the potentially enormous changes in illumination, pose, and viewpoint. In order to mitigate cross-view variations and better identify more humans in human re-id, recent approaches BIB004 BIB001 BIB002 BIB003 BIB005 BIB006 BIB007 focus on learning an optimal metric model that aims at making features associated with the same human closer than features associated with different human objects. It is essential to learn a linear transformation that maps the original feature space to a new feature space so as to effectively execute human re-id. Mahalanobis metric learning is widely used to globally find such a linear transformation of the feature space. Motivated by a statistical inference perspective based on a likelihood-ratio test, Koestinger et al. BIB001 adopt equivalence constraints to learn a metric model called KISSME (keep it simple and straightforward metric). The proposed method only needs to compute two small-sized covariance matrices of dissimilar pairs and similar pairs, and is thus scalable to large datasets. Pedagadi et al. BIB002 adopt a low-dimensional manifold distance metric learning framework through unsupervised PCA dimensionality reduction and supervised local Fisher discriminant analysis (LFDA) dimensionality reduction, where the LFDA preserves the local neighborhood structure while maximizing between-class separation so as to capture the multi-class modality of the sample data, and the LFDA transformation is estimated via generalized eigenvalues. However, when this metric framework is applied to relatively small datasets, it may produce an undesirable compression of the most discriminative features. To solve this problem, by taking the merits from both the kernel method and LFDA, Xiong et al.
BIB003 further adopt kernel LFDA (KLFDA) to learn a metric model, where the KLFDA is a closed-form non-linear method that uses a kernel trick to handle large-dimensional feature vectors while maximizing a Fisher optimization criterion. The proposed method preserves discriminant features while achieving a better dimensionality reduction, and takes full advantage of the flexibility in choosing the kernel to improve the accuracy of human re-id. However, its computational speed is relatively slow, especially when using a non-linear kernel. Liao et al. BIB004 propose to learn a discriminant metric called cross-view quadratic discriminant analysis (XQDA), which aims to learn a low-dimensional subspace with cross-view data, and meanwhile learns a distance function in the low-dimensional subspace so as to measure the cross-view similarity. The proposed XQDA can be formulated as a generalized Rayleigh quotient, which can be solved by generalized eigenvalue decomposition. The above metric learning methods adopt only a single metric learning model; integrating multiple metric learning models has thus also been proposed in order to further improve the accuracy of human re-id. Paisitkriangkrai et al. BIB005 propose to learn to rank in human re-id with metric ensembles. More specifically, the proposed method first adopts several different features to train an individual base metric for each feature using a linear KISSME and a non-linear KLFDA, and then adopts two optimization approaches, i.e., a relative distance-based approach and top recognition at rank-k, to learn the weights of the base metrics. The two optimization approaches directly optimize the cumulative matching characteristic (CMC) curve, which is an evaluation measure commonly used in person re-id. The relative distance-based approach uses triplet information to optimize the relative distance, while the top recognition at rank-k approach maximizes the average rank-k recognition rate. Yang et al.
BIB006 propose large-scale similarity learning (LSSL) using similar pairs for human re-id. More specifically, the proposed method jointly learns a Mahalanobis metric and a bilinear similarity metric using the difference and commonness of an image pair to increase discrimination. Under a pair-constrained Gaussian assumption, the Gaussian priors (i.e., corresponding covariance matrices) of dissimilar pairs are obtained from those of similar pairs, and the application of a log likelihood ratio makes the whole learning process simple and fast and thus scalable to large datasets. However, the above metric learning methods focus only on a holistic metric, which discards the geometric structure of human objects and thus limits the discriminative power. To deal with this issue effectively, considering the relatively stable spatial distribution of human body parts such as the head, torso, and legs, Chen et al. BIB007 propose spatially constrained similarity learning using polynomial feature maps (SCSP) for human re-id. The proposed method, which combines a global similarity metric for the whole human body image region with multiple local similarity metrics for associating local body-part regions using multiple visual cues, executes human matching across cameras based on multiple polynomial-kernel feature maps that represent human image pairs, and aims to learn a similarity function that yields a high score for matching pairs across cameras. Such an illustration of the similarity learning using spatial constraints based on polynomial-kernel feature maps is shown in Fig. 12 . In short, distance metric learning can effectively improve the accuracy of human re-id. However, most existing distance metric learning methods for human re-id follow a supervised learning framework, where a large number of labeled matching pairs are used for training, which severely limits their scalability in real-world applications.
Moreover, a pre-trained distance metric model may not generalize well.
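As an illustration of the metric learning methods discussed above, the generalized eigenvalue decomposition at the heart of an XQDA-style formulation can be sketched as follows. This is a simplified sketch with illustrative names and a plain regularizer, not the authors' exact implementation:

```python
import numpy as np
from scipy.linalg import eigh

def xqda_like_subspace(intra_diffs, extra_diffs, dim=10, reg=1e-3):
    """Sketch: find a subspace maximizing the ratio of extra-personal to
    intra-personal variance via a generalized eigenvalue problem."""
    d = intra_diffs.shape[1]
    cov_i = intra_diffs.T @ intra_diffs / len(intra_diffs) + reg * np.eye(d)
    cov_e = extra_diffs.T @ extra_diffs / len(extra_diffs) + reg * np.eye(d)
    # Generalized Rayleigh quotient max_w (w^T cov_e w) / (w^T cov_i w)
    # -> generalized eigenproblem  cov_e w = lambda * cov_i w
    vals, vecs = eigh(cov_e, cov_i)
    order = np.argsort(vals)[::-1]            # keep largest ratios first
    W = vecs[:, order[:dim]]
    # Cross-view distance metric in the learned subspace
    M = np.linalg.inv(W.T @ cov_i @ W) - np.linalg.inv(W.T @ cov_e @ W)
    return W, M

def xqda_distance(x, y, W, M):
    diff = W.T @ (x - y)
    return float(diff @ M @ diff)
```

In use, `intra_diffs` would hold feature differences of same-person pairs across the two camera views and `extra_diffs` those of different-person pairs; candidate matches are then ranked by `xqda_distance`.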
|
Human tracking over camera networks: a review <s> Supervised learning-based CLM <s> When viewed from a system of multiple cameras with non-overlapping fields of view, the appearance of an object in one camera view is usually very different from its appearance in another camera view due to the differences in illumination, pose and camera parameters. In order to handle the change in observed colors of an object as it moves from one camera to another, we show that all brightness transfer functions from a given camera to another camera lie in a low dimensional subspace and demonstrate that this subspace can be used to compute appearance similarity. In the proposed approach, the system learns the subspace of inter-camera brightness transfer functions in a training phase during which object correspondences are assumed to be known. Once the training is complete, correspondences are assigned using the maximum a posteriori (MAP) estimation framework using both location and appearance cues. We evaluate the proposed method under several real world scenarios obtaining encouraging results. <s> BIB001 </s> Human tracking over camera networks: a review <s> Supervised learning-based CLM <s> The appearance of individuals captured by multiple non-overlapping cameras varies greatly due to pose and illumination changes between camera views. In this paper we address the problem of dealing with illumination changes in order to recover matching of individuals appearing at different camera sites. This task is challenging as accurately mapping colour changes between views requires an exhaustive set of corresponding chromatic brightness values to be collected, which is very difficult in real world scenarios. We propose a Cumulative Brightness Transfer Function (CBTF) for mapping colour between cameras located at different physical sites, which makes better use of the available colour information from a very sparse training set. 
In addition we develop a bi-directional mapping approach to obtain a more accurate similarity measure between a pair of candidate objects. We evaluate the proposed method using challenging datasets obtained from real world distributed CCTV camera networks. The results demonstrate that our bi-directional CBTF method significantly outperforms existing techniques. <s> BIB002 </s> Human tracking over camera networks: a review <s> Supervised learning-based CLM <s> Tracking across cameras with non-overlapping views is a challenging problem. Firstly, the observations of an object are often widely separated in time and space when viewed from non-overlapping cameras. Secondly, the appearance of an object in one camera view might be very different from its appearance in another camera view due to the differences in illumination, pose and camera properties. To deal with the first problem, we observe that people or vehicles tend to follow the same paths in most cases, i.e., roads, walkways, corridors etc. The proposed algorithm uses this conformity in the traversed paths to establish correspondence. The algorithm learns this conformity and hence the inter-camera relationships in the form of multivariate probability density of space-time variables (entry and exit locations, velocities, and transition times) using kernel density estimation. To handle the appearance change of an object as it moves from one camera to another, we show that all brightness transfer functions from a given camera to another camera lie in a low dimensional subspace. This subspace is learned by using probabilistic principal component analysis and used for appearance matching. The proposed approach does not require explicit inter-camera calibration, rather the system learns the camera topology and subspace of inter-camera brightness transfer functions during a training phase. 
Once the training is complete, correspondences are assigned using the maximum likelihood (ML) estimation framework using both location and appearance cues. Experiments with real world videos are reported which validate the proposed approach. <s> BIB003 </s> Human tracking over camera networks: a review <s> Supervised learning-based CLM <s> We propose a novel system for associating multi-target tracks across multiple non-overlapping cameras by an on-line learned discriminative appearance affinity model. Collecting reliable training samples is a major challenge in on-line learning since supervised correspondence is not available at runtime. To alleviate the inevitable ambiguities in these samples, Multiple Instance Learning (MIL) is applied to learn an appearance affinity model which effectively combines three complementary image descriptors and their corresponding similarity measurements. Based on the spatial-temporal information and the proposed appearance affinity model, we present an improved inter-camera track association framework to solve the "target handover" problem across cameras. Our evaluations indicate that our method have higher discrimination between different targets than previous methods. <s> BIB004 </s> Human tracking over camera networks: a review <s> Supervised learning-based CLM <s> In this paper, we present a new solution to inter-camera multiple target tracking with non-overlapping fields of view. The identities of people are maintained when they are moving from one camera to another. Instead of matching snapshots of people across cameras, we mainly explore what kind of context information from videos can be used for inter-camera tracking. We introduce two kinds of context information, spatio-temporal context and relative appearance context in this paper. 
The spatio-temporal context indicates a way of collecting samples for discriminative appearance learning where target-specific appearance models are learned to distinguish different people from each other. The relative appearance context models inter-object appearance similarities for people walking in proximity. The relative appearance model helps disambiguate individual appearance matching across cameras. We show improved performance with context information for inter-camera tracking. Our method achieves promising results in two crowded scenes compared with state-of-art methods. <s> BIB005 </s> Human tracking over camera networks: a review <s> Supervised learning-based CLM <s> Pose variation remains one of the major factors that adversely affect the accuracy of person re-identification. Such variation is not arbitrary as body parts (e.g. head, torso, legs) have relative stable spatial distribution. Breaking down the variability of global appearance regarding the spatial distribution potentially benefits the person matching. We therefore learn a novel similarity function, which consists of multiple sub-similarity measurements with each taking in charge of a subregion. In particular, we take advantage of the recently proposed polynomial feature map to describe the matching within each subregion, and inject all the feature maps into a unified framework. The framework not only outputs similarity measurements for different regions, but also makes a better consistency among them. Our framework can collaborate local similarities as well as global similarity to exploit their complementary strength. It is flexible to incorporate multiple visual cues to further elevate the performance. In experiments, we analyze the effectiveness of the major components. The results on four datasets show significant and consistent improvements over the state-of-the-art methods. <s> BIB006
|
In a supervised learning-based CLM, the correspondences of pairs of individuals across every adjacent camera-pair are known in advance from manually labeled training data, which can then be used to train the CLM. A number of studies have been reported to estimate the brightness transfer function (BTF), which is applied to compensate for the color difference between two adjacent cameras before computing the color feature distance between two observations. Javed et al. BIB001 propose to learn a low-dimensional subspace of the color brightness transfer function (BTF) from the training data for each camera-pair using probabilistic PCA. However, this method depends on training data with a wide range of brightness values to accurately model the BTF, a condition that is difficult to meet in a real-world scenario. To solve this problem, Prosser et al. BIB002 propose to adopt a cumulative brightness transfer function (CBTF) for mapping color information between adjacent cameras, which makes the best of the available color information from a very sparse training data set. This method can preserve uncommon brightness values in the training data, resulting in a more accurate representation of the color mapping function, and therefore helps to improve the accuracy of human tracking across cameras. However, it only takes into account the color information and discards the spatial structural information for human representation. In addition, building upon the research of Ref. BIB001, Javed et al. BIB003 further adopt a kernel density estimator to estimate the inter-camera space-time probabilities by computing the (e.g., walking) transition time values between pairs of correct correspondences, based on the difference between the entry and exit time stamps. However, fully supervised learning usually requires a large amount of manually labeled training data, which limits the scalability to more realistic open-world applications.
To cope with this problem, Kuo et al. BIB004 adopt multiple instance learning (MIL) to learn an appearance affinity model, which is then integrated with the spatial-temporal information to train an improved inter-camera track association framework that tackles the target handover tasks across cameras. In addition, people often walk in groups in crowded scenes, so group information is also applied to appearance matching across cameras. Cai et al. BIB005 propose context information, including spatio-temporal context and relative appearance context, for non-overlapping inter-camera human tracking. The spatio-temporal context indicates a way of collecting samples for discriminative appearance learning, and the relative appearance context, using RGB color histograms and histograms of gradients as appearance features, models inter-object appearance similarities for people walking in proximity. The proposed method can distinguish visually very similar human targets and hence clearly improves human tracking accuracy across non-overlapping cameras. In short, the supervised learning-based CLM helps to achieve robust human tracking across non-overlapping cameras. However, it is infeasible to scale up to large-scale camera networks due to the massive manual labeling effort required.
(Fig. 12 caption: Illustration of the similarity learning using spatial constraints based on a polynomial-kernel feature map BIB006. Table caption: symbols √ and × denote whether CLM-/GM-based tracking is used or not.)
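The brightness transfer estimation discussed above can be sketched with simple cumulative-histogram matching. This is a minimal illustration in the spirit of the CBTF idea, not the exact method of BIB002:

```python
import numpy as np

def brightness_transfer_function(src_pixels, dst_pixels, levels=256):
    """Sketch of a cumulative brightness transfer function between two
    cameras: map each brightness level in the source camera to the
    destination level whose cumulative frequency first reaches it."""
    h_src, _ = np.histogram(src_pixels, bins=levels, range=(0, levels))
    h_dst, _ = np.histogram(dst_pixels, bins=levels, range=(0, levels))
    c_src = np.cumsum(h_src) / max(h_src.sum(), 1)
    c_dst = np.cumsum(h_dst) / max(h_dst.sum(), 1)
    # Inverse-CDF lookup: first destination level with c_dst >= c_src[b]
    btf = np.searchsorted(c_dst, c_src).clip(0, levels - 1)
    return btf.astype(np.uint8)
```

A mapped patch (e.g., `btf[src_patch]`) would then be compared against destination-camera observations when computing color feature distances.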
|
Human tracking over camera networks: a review <s> Unsupervised learning-based CLM <s> The paper investigates the unsupervised learning of a model of activity for a multi-camera surveillance network that can be created from a large set of observations. This enables the learning algorithm to establish links between camera views associated with an activity. The learning algorithm operates in a correspondence-free manner, exploiting the statistical consistency of the observation data. The derived model is used to automatically determine the topography of a network of cameras and to provide a means for tracking targets across the "blind" areas of the network. A theoretical justification and experimental validation of the methods are provided. <s> BIB001 </s> Human tracking over camera networks: a review <s> Unsupervised learning-based CLM <s> This paper presents a scalable solution to the problem of tracking objects across spatially separated, uncalibrated, non-overlapping cameras. Unlike other approaches this technique uses an incremental learning method, to model both the colour variations and posterior probability distributions of spatio-temporal links between cameras. These operate in parallel and are then used with an appearance model of the object to track across spatially separated cameras. The approach requires no pre-calibration or batch preprocessing, is completely unsupervised, and becomes more accurate over time as evidence is accumulated. <s> BIB002 </s> Human tracking over camera networks: a review <s> Unsupervised learning-based CLM <s> A multiple-camera tracking system that tracks humans across cameras with nonoverlapping views is proposed in this paper. The systematically estimated camera link model, including transition time distribution, brightness transfer function, region mapping matrix, region matching weights, and feature fusion weights, is utilized to facilitate consistently labeling the tracked humans. 
The system is divided into two stages: in the training stage, based on an unsupervised scheme, we formulate the estimation of the camera link model as an optimization problem, in which temporal features, holistic color features, region color features, and region texture features are jointly considered. The deterministic annealing is applied to effectively search the optimal model solutions. The unsupervised learning scheme tolerates the presence of outliers in the training data well. In the testing stage, the systematic integration of multiple cues from the above features enables us to perform an effective reidentification. The camera link model can be continuously updated during tracking in the testing stage to adapt the changes of the environment. Several simulations and comparative studies demonstrate the superiority of our proposed estimation method to the others. Moreover, the complete system has been tested in a small-scale real-world camera network scenario. <s> BIB003 </s> Human tracking over camera networks: a review <s> Unsupervised learning-based CLM <s> Human tracking across multiple cameras is highly demanded for large scale video surveillance. To successfully track human across multiple uncalibrated cameras that have no overlapping field of views, a system to train more reliable camera link models is proposed in this paper. We employ a novel approach of combining multiple camera links and building bidirectional transition time distribution in the process of estimation. Through the unsupervised scheme, the system builds several camera link models simultaneously for the camera network that has multi-path in presence of the outliers. Our proposed method decreases incorrect correspondences and results in more accurate camera link model for higher tracking accuracy. The proposed algorithm shows the effectiveness by evaluating in the real-world camera network scenarios. <s> BIB004
|
In contrast to the supervised learning-based CLM, in an unsupervised learning-based CLM the correspondences of pairs of individuals across every adjacent camera-pair are unknown in advance; they can nevertheless be estimated and then used to train the CLM. The space-time and appearance relationships between adjacent cameras are usually used to learn the CLM across camera-pairs. Makris et al. BIB001 adopt the cross-correlation of the exit and entry time stamps of the training data to estimate the transition time distribution. However, they only consider a single-mode distribution, which has difficulty describing most real-world cases. Gilbert et al. BIB002 propose an incremental learning method to model the color variations and the transition time distribution between cameras. The proposed method allows human tracking accuracy to increase over time without any supervised input. However, they consider all possible correspondences within a given time window, both true and false, and hence a large amount of noise is introduced by the many false correspondences during the estimation process, resulting in unreliable model estimation. Chu et al. BIB003 adopt a transition time distribution and a brightness transfer function, based on the space-time relationship and on holistic and regional color/texture information, respectively, between a pair of directly connected cameras to estimate a CLM. A permutation matrix is introduced as an intermediate variable, which is solved by using deterministic annealing and the barrier method. This approach also takes into account outliers, i.e., those people who depart from a camera without entering the other connected camera, or enter a camera without coming from the other connected camera.
In order to make the estimated CLM more accurate and adaptive to environmental changes, through effective estimation of the feature fusion weights, the CLM can be continuously updated based on the human re-id results during tracking in the testing stage. The proposed CLM estimation method is applied in a deployed 4-camera real-world scenario with non-overlapping views, whose camera topology is shown in Fig. 13, achieving 79.5% tracking accuracy over 20 min (more than 280 people) of video testing. However, their approach to coping with the outliers only considers the link of a single pair of directly connected cameras. In many real-world camera networks, there are often several links due to multiple directly connected cameras; in this case, their estimated CLM will lose accuracy due to a higher outlier percentage. To solve this problem, building upon the research of Ref. BIB003, Lee et al. BIB004 propose to combine multi-camera links and build bidirectional transition time distributions during the estimation of the CLM between directly connected camera pairs; several camera link models are estimated simultaneously for the same deployed 4-camera real-world camera network with non-overlapping views in the presence of outliers, resulting in more accurate camera link models and achieving 87.3% tracking accuracy. In short, the unsupervised learning-based CLM helps to achieve robust human tracking across non-overlapping cameras, and can easily be applied to real-world systems with continuous updates of the link models when the conditions between cameras change. Moreover, it makes self-organized and scalable large-scale camera networks feasible, since no human labeling effort is needed.
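The transition-time component of a camera link model, as used in the methods above, can be sketched as a kernel density estimate over matched exit/entry time stamps. Function names here are illustrative, and this is a simplification of the cited estimation procedures:

```python
import numpy as np
from scipy.stats import gaussian_kde

def transition_time_model(exit_times, entry_times):
    """Sketch: fit a kernel density estimate of the transition time
    between a pair of directly connected cameras from matched
    exit/entry time stamps of training correspondences."""
    deltas = np.asarray(entry_times) - np.asarray(exit_times)
    return gaussian_kde(deltas)

def transition_likelihood(kde, exit_time, entry_time):
    """Space-time likelihood that a candidate observation pair
    corresponds to the same person."""
    return float(kde(entry_time - exit_time)[0])
```

In an association framework, such a space-time likelihood would typically be combined with an appearance similarity score inside an ML/MAP criterion.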
|
Human tracking over camera networks: a review <s> Conclusions <s> A multiple-camera tracking system that tracks humans across cameras with nonoverlapping views is proposed in this paper. The systematically estimated camera link model, including transition time distribution, brightness transfer function, region mapping matrix, region matching weights, and feature fusion weights, is utilized to facilitate consistently labeling the tracked humans. The system is divided into two stages: in the training stage, based on an unsupervised scheme, we formulate the estimation of the camera link model as an optimization problem, in which temporal features, holistic color features, region color features, and region texture features are jointly considered. The deterministic annealing is applied to effectively search the optimal model solutions. The unsupervised learning scheme tolerates the presence of outliers in the training data well. In the testing stage, the systematic integration of multiple cues from the above features enables us to perform an effective reidentification. The camera link model can be continuously updated during tracking in the testing stage to adapt the changes of the environment. Several simulations and comparative studies demonstrate the superiority of our proposed estimation method to the others. Moreover, the complete system has been tested in a small-scale real-world camera network scenario. <s> BIB001 </s> Human tracking over camera networks: a review <s> Conclusions <s> Tracking multiple targets across nonoverlapping cameras aims at estimating the trajectories of all targets, and maintaining their identity labels consistent while they move from one camera to another. Matching targets from different cameras can be very challenging, as there might be significant appearance variation and the blind area between cameras makes the target’s motion less predictable. 
Unlike most of the existing methods that only focus on modeling the appearance and spatiotemporal cues for inter-camera tracking, this paper presents a novel online learning approach that considers integrating high-level contextual information into the tracking system. The tracking problem is formulated using an online learned conditional random field (CRF) model that minimizes a global energy cost. Besides low-level information, social grouping behavior is explored in order to maintain targets’ identities as they move across cameras. In the proposed method, pairwise grouping behavior of targets is first learned within each camera. During inter-camera tracking, track associations that maintain single camera grouping consistencies are preferred. In addition, we introduce an iterative algorithm to find a good solution for the CRF model. Comparison experiments on several challenging real-world multicamera video sequences show that the proposed method is effective and outperforms the state-of-the-art approaches. <s> BIB002
|
This paper provides an extensive review of existing research efforts on human tracking over camera networks, covering all the core image/vision technologies, such as generative trackers, discriminative trackers, human re-id, CLM-based tracking, and GM-based tracking. We discuss the most recent developments of these technologies and compare the pros/cons of different solutions. (Fig. 13 caption: Camera topology. Blue broken lines denote four links, and red ellipses denote the corresponding entry or exit zones. Black rectangles are the other entry or exit zones that have no link between the two cameras BIB001.) In spite of the great progress made on human tracking over camera networks, including human tracking within a camera and human tracking across non-overlapping cameras, there are still many technical challenges that need to be resolved, especially for real-world camera networks. For example, (1) when a human target is totally occluded for a long time or the background is extremely complex in the same camera scene, it is difficult to extract robust and discriminant features that denote human targets, resulting in a decline in performance for human tracking within a camera; (2) extracting robust and discriminant features adaptive to changes in illumination, viewpoint, occlusion, background clutter, and image quality/resolution across non-overlapping cameras is still a challenging issue; (3) most distance metric models in human re-id learned from an initially annotated camera-pair are difficult to expand or adapt to a new camera-pair due to differences in illumination and viewpoint. (Figure caption: (a) Human tracklet mismatching; (b) Human tracklet missing. Bounding boxes with the same color indicate the same human, and the dashed lines illustrate the trajectories generated by human targets walking across different cameras BIB002.) Moreover, these models cannot be updated adaptively as the real-world environment changes.
Also, it is impractical to manually label a large amount of training data for every camera-pair in a large camera network; (4) so far, the performance of human re-id is still far from satisfactory; for example, the rank-1 accuracy of the state of the art, based on cumulative matching score evaluation, is less than 60% on the representative VIPeR dataset, which brings huge challenges for human tracking across non-overlapping cameras when spatio-temporal reasoning between cameras is unreliable, especially for human tracking across multiple moving cameras, since the mapping between two cameras changes with the cameras' movement; (5) the larger the spatio-temporal separation between camera views, the greater the chance that a human appears with large appearance changes in different camera views, making it difficult to track humans across non-overlapping cameras; (6) most existing research efforts on human tracking across non-overlapping cameras are based on small camera networks composed of no more than five cameras, and how to extend these techniques to human tracking over larger-scale camera networks remains an open question. In view of the above unsolved technical challenges, future research directions on human tracking over camera networks can be summarized as follows: 1) Robust and discriminant feature fusion adaptive to camera scene changes for human tracking over camera networks. 2) Robust and discriminant spatio-temporal and appearance context information for inter-camera human tracking. 3) Effective distance metric learning fusion to improve human re-id accuracy. 4) Online human tracking across non-overlapping cameras using unsupervised learning. 5) Effective global data association for human tracking over camera networks. 6) Human tracking on larger-scale camera networks, as well as benchmark datasets and comprehensive experimental evaluations on larger-scale camera networks.
Authors' contributions LH conceived and designed the study, and wrote the manuscript. WGW and J-NH provided the technical advice. RM revised the manuscript. MYY and KH provided some references. All authors read and approved the final manuscript.
|
Crowded Scene Analysis: A Survey <s> I. INTRODUCTION <s> Analysis of motion patterns is an effective approach for anomaly detection and behavior prediction. Current approaches for the analysis of motion patterns depend on known scenes, where objects move in predefined ways. It is highly desirable to automatically construct object motion patterns which reflect the knowledge of the scene. In this paper, we present a system for automatically learning motion patterns for anomaly detection and behavior prediction based on a proposed algorithm for robustly tracking multiple objects. In the tracking algorithm, foreground pixels are clustered using a fast accurate fuzzy K-means algorithm. Growing and prediction of the cluster centroids of foreground pixels ensure that each cluster centroid is associated with a moving object in the scene. In the algorithm for learning motion patterns, trajectories are clustered hierarchically using spatial and temporal information and then each motion pattern is represented with a chain of Gaussian distributions. Based on the learned statistical motion patterns, statistical methods are used to detect anomalies and predict behaviors. Our system is tested using image sequences acquired, respectively, from a crowded real traffic scene and a model traffic scene. Experimental results show the robustness of the tracking algorithm, the efficiency of the algorithm for learning motion patterns, and the encouraging performance of algorithms for anomaly detection and behavior prediction. <s> BIB001 </s> Crowded Scene Analysis: A Survey <s> I. INTRODUCTION <s> In surveillance situations, computer vision systems are often deployed to help humans perform their tasks more effectively. In a typical installation human observers are required to simultaneously monitor a number of video signals. Psychophysical research indicates that there are severe limitations in the ability of humans to monitor simultaneous signals. 
Do these same limitations extend to surveillance? We present a method for evaluating human surveillance performance in a situation that mimics the demands of real-world surveillance. A single computer monitor contained either nine display cells or four display cells. Each cell contained a stream of 2 to 4 moving objects. Observers were instructed to signal when a target event occurred, that is, when one of the objects entered a small square "forbidden" region in the center of the display. Target events could occur individually or in groups of 2 or 3 temporally close events. The results indicate that the observers missed many targets (60%) when required to monitor 9 displays and many fewer when monitoring 4 displays (20%). Further, there were costs associated with target events occurring in close temporal succession. Understanding these limitations would help computer vision researchers to design algorithms and human-machine interfaces that result in improved overall performance. <s> BIB002 </s> Crowded Scene Analysis: A Survey <s> I. INTRODUCTION <s> This paper presents a survey of trajectory-based activity analysis for visual surveillance. It describes techniques that use trajectory data to define a general set of activities that are applicable to a wide range of scenes and environments. Events of interest are detected by building a generic topographical scene description from underlying motion structure as observed over time. The scene topology is automatically learned and is distinguished by points of interest and motion characterized by activity paths. The methods we review are intended for real-time surveillance through definition of a diverse set of events for further analysis triggering, including virtual fencing, speed profiling, behavior classification, anomaly detection, and object interaction. <s> BIB003 </s> Crowded Scene Analysis: A Survey <s> I.
INTRODUCTION <s> We propose a novel unsupervised learning framework to model activities and interactions in crowded and complicated scenes. Hierarchical Bayesian models are used to connect three elements in visual surveillance: low-level visual features, simple "atomic" activities, and interactions. Atomic activities are modeled as distributions over low-level visual features, and multi-agent interactions are modeled as distributions over atomic activities. These models are learnt in an unsupervised way. Given a long video sequence, moving pixels are clustered into different atomic activities and short video clips are clustered into different interactions. In this paper, we propose three hierarchical Bayesian models, Latent Dirichlet Allocation (LDA) mixture model, Hierarchical Dirichlet Process (HDP) mixture model, and Dual Hierarchical Dirichlet Processes (Dual-HDP) model. They advance existing language models, such as LDA [1] and HDP [2]. Our data sets are challenging video sequences from crowded traffic scenes and train station scenes with many kinds of activities co-occurring. Without tracking and human labeling effort, our framework completes many challenging visual surveillance tasks of broad interest, such as: (1) discovering typical atomic activities and interactions; (2) segmenting long video sequences into different interactions; (3) segmenting motions into different activities; (4) detecting abnormality; and (5) supporting high-level queries on activities and interactions. <s> BIB004 </s> Crowded Scene Analysis: A Survey <s> I. INTRODUCTION <s> We present a novel method for the discovery and statistical representation of motion patterns in a scene observed by a static camera. Related methods involving learning of patterns of activity rely on trajectories obtained from object detection and tracking systems, which are unreliable in complex scenes of crowded motion.
We propose a mixture model representation of salient patterns of optical flow, and present an algorithm for learning these patterns from dense optical flow in a hierarchical, unsupervised fashion. Using low level cues of noisy optical flow, K-means is employed to initialize a Gaussian mixture model for temporally segmented clips of video. The components of this mixture are then filtered and instances of motion patterns are computed using a simple motion model, by linking components across space and time. Motion patterns are then initialized and membership of instances in different motion patterns is established by using KL divergence between mixture distributions of pattern instances. Finally, a pixel level representation of motion patterns is proposed by deriving conditional expectation of optical flow. Results of extensive experiments are presented for multiple surveillance sequences containing numerous patterns involving both pedestrian and vehicular traffic. <s> BIB005 </s> Crowded Scene Analysis: A Survey <s> I. INTRODUCTION <s> This article presents a survey on crowd analysis using computer vision techniques, covering different aspects such as people tracking, crowd density estimation, event detection, validation, and simulation. It also reports how related the areas of computer vision and computer graphics should be to deal with current challenges in crowd analysis. <s> BIB006 </s> Crowded Scene Analysis: A Survey <s> I. INTRODUCTION <s> A novel method for crowd flow modeling and anomaly detection is proposed for both coherent and incoherent scenes. The novelty is revealed in three aspects. First, it is a unique utilization of particle trajectories for modeling crowded scenes, in which we propose new and efficient representative trajectories for modeling arbitrarily complicated crowd flows. 
Second, chaotic dynamics are introduced into the crowd context to characterize complicated crowd motions by regulating a set of chaotic invariant features, which are reliably computed and used for detecting anomalies. Third, a probabilistic framework for anomaly detection and localization is formulated. The overall work-flow begins with particle advection based on optical flow. Then particle trajectories are clustered to obtain representative trajectories for a crowd flow. Next, the chaotic dynamics of all representative trajectories are extracted and quantified using chaotic invariants known as maximal Lyapunov exponent and correlation dimension. A probabilistic model is learned from this chaotic feature set, and finally, a maximum likelihood estimation criterion is adopted to identify a query video of a scene as normal or abnormal. Furthermore, an effective anomaly localization algorithm is designed to locate the position and size of an anomaly. Experiments are conducted on a known crowd data set, and results show that our method achieves higher accuracy in anomaly detection and can effectively localize anomalies. <s> BIB007 </s> Crowded Scene Analysis: A Survey <s> I. INTRODUCTION <s> This paper addresses the problem of fully automated mining of public space video data, a highly desirable capability under contemporary commercial and security considerations. This task is especially challenging due to the complexity of the object behaviors to be profiled, the difficulty of analysis under the visual occlusions and ambiguities common in public space video, and the computational challenge of doing so in real-time. We address these issues by introducing a new dynamic topic model, termed a Markov Clustering Topic Model (MCTM). The MCTM builds on existing dynamic Bayesian network models and Bayesian topic models, and overcomes their drawbacks on sensitivity, robustness and efficiency.
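The chaotic invariants above include the maximal Lyapunov exponent, which measures how fast nearby trajectories diverge. A minimal estimate (assuming two already-extracted representative trajectories; our own simplification, not the cited implementation) fits the slope of log-separation against time:

```python
import numpy as np

def max_lyapunov(traj_a, traj_b, dt=1.0):
    """Estimate the maximal Lyapunov exponent from two nearby trajectories
    (T x 2 arrays) via a least-squares fit of log separation vs. time."""
    sep = np.linalg.norm(traj_a - traj_b, axis=1)
    t = np.arange(len(sep)) * dt
    slope, _ = np.polyfit(t, np.log(sep), 1)
    return slope

# Synthetic check: separation grows like 1e-6 * exp(0.3 t),
# so the estimated exponent should be ~0.3.
t = np.arange(50, dtype=float)
base = np.stack([t, np.zeros_like(t)], axis=1)
pert = base + np.stack([1e-6 * np.exp(0.3 * t), np.zeros_like(t)], axis=1)
lam = max_lyapunov(base, pert)
```

A large positive exponent indicates chaotic (rapidly diverging) crowd motion, which is what the anomaly features above exploit.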
Specifically, our model profiles complex dynamic scenes by robustly clustering visual events into activities and these activities into global behaviours with temporal dynamics. A Gibbs sampler is derived for offline learning with unlabeled training data and a new approximation to online Bayesian inference is formulated to enable dynamic scene understanding and behaviour mining in new video data online in real-time. The strength of this model is demonstrated by unsupervised learning of dynamic scene models for four complex and crowded public scenes, and successful mining of behaviors and detection of salient events in each. <s> BIB008 </s> Crowded Scene Analysis: A Survey <s> I. INTRODUCTION <s> Crowd is a unique group of individual or something involves community or society. The phenomena of the crowd are very familiar in a variety of research discipline such as sociology, civil and physic. Nowadays, it becomes the most active-oriented research and trendy topic in computer vision. Traditionally, three processing steps involve in crowd analysis, and these include pre-processing, object detection and event/behavior recognition. Meanwhile, the common process for analysis in video sequence of crowd information extraction consists of Pre-Processing, Object Tracking, and Event/Behavior Recognition. In terms of behavior detection, the crowd density estimation, crowd motion detection, crowd tracking and crowd behavior recognition are adopted. In this paper, we give the general framework and taxonomy of pattern in detecting abnormal behavior in a crowd scene. This study presents the state of art of crowd analysis, taxonomy of the common approach of the crowd analysis and it can be useful to researchers and would serve as a good introduction related to the field undertaken. <s> BIB009 </s> Crowded Scene Analysis: A Survey <s> I. 
INTRODUCTION <s> In this paper, a new Mixture model of Dynamic pedestrian-Agents (MDA) is proposed to learn the collective behavior patterns of pedestrians in crowded scenes. Collective behaviors characterize the intrinsic dynamics of the crowd. From the agent-based modeling, each pedestrian in the crowd is driven by a dynamic pedestrian-agent, which is a linear dynamic system with its initial and termination states reflecting a pedestrian's belief of the starting point and the destination. Then the whole crowd is modeled as a mixture of dynamic pedestrian-agents. Once the model is unsupervisedly learned from real data, MDA can simulate the crowd behaviors. Furthermore, MDA can well infer the past behaviors and predict the future behaviors of pedestrians given their trajectories only partially observed, and classify different pedestrian behaviors in the scene. The effectiveness of MDA and its applications are demonstrated by qualitative and quantitative experiments on the video surveillance dataset collected from the New York Grand Central Station. <s> BIB010 </s> Crowded Scene Analysis: A Survey <s> I. INTRODUCTION <s> Road scene segmentation is important in computer vision for different applications such as autonomous driving and pedestrian detection. Recovering the 3D structure of road scenes provides relevant contextual information to improve their understanding. In this paper, we use a convolutional neural network based algorithm to learn features from noisy labels to recover the 3D scene layout of a road image. The novelty of the algorithm relies on generating training labels by applying an algorithm trained on a general image dataset to classify on-board images. Further, we propose a novel texture descriptor based on a learned color plane fusion to obtain maximal uniformity in road areas. Finally, acquired (off-line) and current (on-line) information are combined to detect road areas in single images.
From quantitative and qualitative experiments, conducted on publicly available datasets, it is concluded that convolutional neural networks are suitable for learning 3D scene layout from noisy labels and provide a relative improvement of 7% compared to the baseline. Furthermore, combining color planes provides a statistical description of road areas that exhibits maximal uniformity and provides a relative improvement of 8% compared to the baseline. Finally, the improvement is even larger when acquired and current information from a single image are combined. <s> BIB011 </s> Crowded Scene Analysis: A Survey <s> I. INTRODUCTION <s> A method is proposed for identifying five crowd behaviors (bottlenecks, fountainheads, lanes, arches, and blocking) in visual scenes. In the algorithm, a scene is overlaid by a grid of particles initializing a dynamical system defined by the optical flow. Time integration of the dynamical system provides particle trajectories that represent the motion in the scene; these trajectories are used to locate regions of interest in the scene. Linear approximation of the dynamical system provides behavior classification through the Jacobian matrix; the eigenvalues determine the dynamic stability of points in the flow and each type of stability corresponds to one of the five crowd behaviors. The eigenvalues are only considered in the regions of interest, consistent with the linear approximation and the implicated behaviors. The algorithm is repeated over sequential clips of a video in order to record changes in eigenvalues, which may imply changes in behavior. The method was tested on over 60 crowd and traffic videos. <s> BIB012 </s> Crowded Scene Analysis: A Survey <s> I. INTRODUCTION <s> This paper presents a novel method to extract dominant motion patterns (MPs) and the main entry/exit areas from a surveillance video.
The method first computes motion histograms for each pixel and then converts it into orientation distribution functions (ODFs). Given these ODFs, a novel particle meta-tracking procedure is launched which produces meta-tracks, i.e. particle trajectories. As opposed to conventional tracking which focuses on individual moving objects, meta-tracking uses particles to follow the dominant flow of the traffic. In a last step, a novel method is used to simultaneously identify the main entry/exit areas and recover the predominant MPs. The meta-tracking procedure is a unique way to connect low-level motion features to long-range MPs. This kind of tracking is inspired by brain fiber tractography which has long been used to find dominant connections in the brain. Our method is fast, simple to implement, and works both on sparse and extremely crowded scenes. It also works on highly structured scenes (highways, traffic-light corners, etc.) as well as on chaotic scenes. <s> BIB013 </s> Crowded Scene Analysis: A Survey <s> I. INTRODUCTION <s> Over the past decades, a wide attention has been paid to crowd control and management in the intelligent video surveillance area. Among the tasks for automatic surveillance video analysis, crowd motion modeling lays a crucial foundation for numerous subsequent analysis but encounters many unsolved challenges due to occlusions among pedestrians, complicated motion patterns in crowded scenarios, etc. Addressing the unsolved challenges, the authors propose a novel spatio-temporal viscous fluid field to model crowd motion patterns by exploring both appearance of crowd behaviors and interaction among pedestrians. Large-scale crowd events are hereby recognized based on characteristics of the fluid field. First, a spatio-temporal variation matrix is proposed to measure the local fluctuation of video signals in both spatial and temporal domains. 
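The first step of the meta-tracking method above, turning per-pixel motion histograms into orientation distribution functions (ODFs), is essentially a bin-wise normalization followed by reading off the dominant direction. A toy NumPy sketch (names and bin layout are our own assumptions):

```python
import numpy as np

def to_odf(hist, eps=1e-12):
    """Normalize per-pixel orientation histograms (H x W x B) into
    orientation distribution functions that sum to 1 over the B bins."""
    hist = hist.astype(float)
    return hist / (hist.sum(axis=-1, keepdims=True) + eps)

def dominant_orientation(odf):
    """Angle (radians) of the strongest bin of each pixel's ODF,
    assuming B bins evenly spanning [0, 2*pi)."""
    n_bins = odf.shape[-1]
    return np.argmax(odf, axis=-1) * (2 * np.pi / n_bins)

# Toy 1x2 field with 8 bins: left pixel votes bin 0, right pixel bin 2.
hist = np.zeros((1, 2, 8))
hist[0, 0, 0] = 5          # mostly rightward motion (angle 0)
hist[0, 1, 2] = 3          # mostly upward motion (bin 2 = pi/2)
odf = to_odf(hist)
angles = dominant_orientation(odf)
```

The meta-tracking particles would then repeatedly step along these per-pixel dominant directions to trace the dominant flow.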
After that, eigenvalue analysis is applied on the matrix to extract the principal fluctuations resulting in an abstract fluid field. Interaction force is then explored based on shear force in viscous fluid, incorporating with the fluctuations to characterize motion properties of a crowd. The authors then construct a codebook by clustering neighboring pixels with similar spatio-temporal features, and consequently, crowd behaviors are recognized using the latent Dirichlet allocation model. The convincing results obtained from the experiments on published datasets demonstrate that the proposed method obtains high-quality results for large-scale crowd behavior perception in terms of both robustness and effectiveness. <s> BIB014
|
WITH the increase of population and the diversity of human activities, crowded scenes have become more frequent in the real world than ever. This brings enormous challenges to public management, security, and safety. Some examples of crowded scenes are shown in Figure 1 . Humans have the ability to extract useful information about behavior patterns in the surveillance area, monitor the scene for abnormal situations in real time, and provide the potential for immediate response BIB008 . However, psychophysical research indicates that humans are severely limited in their ability to monitor simultaneous signals BIB002 . Extremely crowded scenes require monitoring an excessive number of individuals and their activities, which is a significant challenge even for a human observer. In the past decade, automated scene understanding and analysis has attracted much research attention in the computer vision community BIB005 - BIB010 . One important application is intelligent surveillance in place of traditional passive video surveillance. Although many algorithms have been developed to track, recognize and understand the behaviors of various objects in video BIB006 , they were mainly designed for common scenes with a low population density BIB001 , , BIB003 . When it comes to crowded scenes, these problems cannot be handled well, since the large number of individuals involved not only causes detection and tracking to fail, but also greatly increases computational complexity. Under such circumstances, crowded scene analysis is specifically addressed as a distinct topic. Driven by practical demand, it is becoming an important research direction and has already attracted considerable effort BIB006 , BIB007 - BIB009 . The opportunity for such study has never been better.
As noted in BIB005 , , scene understanding may refer to scene layout (locating roads, buildings, sidewalks) BIB011 , motion patterns (vehicles turning, pedestrians crossing) BIB005 , , BIB004 , BIB013 and scene status (crowd congestion, split, merge, etc.) BIB012 , BIB014 . In this paper, building on previous studies, we will elaborate on the key aspects of crowded scene analysis in automated video surveillance.
|
Crowded Scene Analysis: A Survey <s> A. Real-World Applications <s> This paper argues that a comprehensive approach to crowd safety design, management and risk assessment needs to integrate psychology and engineering frames of reference. Psychology and engineering are characteristically mutually exclusive in their focus on the perspective of crowd members who think and behave (psychology) or on static and dynamic objects (engineering). Engineering places as much emphasis on the physical environment as psychology negates the relationship between the physical environment and people. This paper stresses the need to address the relationship between (A) design and engineering x (B) communications technology x (C) crowd management x (D) crowd behaviour and movement. Theories of crowd psychology are briefly reviewed with particular reference to crowd ingress and egress and misconceptions about 'panic' or irrational behaviour. Assumptions about panic reinforce an emphasis on the control of a crowd, as if a crowd is a homogeneous mass of bodies or 'ballbearings', rather than the management of a crowd as a collection of individuals and social groups who need accurate and timely information if they are to remain safe. Particular emphasis is put on the fact that the time for a crowd to escape from a situation of potential entrapment is a function of T (Time to escape) = t1 (time to start to move) + t2 (time to move to and pass through exits), rather than T = t2. This is illustrated by reference to research of escape behaviour in the Summerland fire and underground station evacuations. The paper concludes by stressing the need to validate computer simulations of crowd movement and escape behaviour against psychological as well as engineering criteria. <s> BIB001 </s> Crowded Scene Analysis: A Survey <s> A. 
Real-World Applications <s> The study of crowd dynamics is interesting because of the various self-organization phenomena resulting from the interactions of many pedestrians, which may improve or obstruct their flow. Besides formation of lanes of uniform walking direction and oscillations at bottlenecks at moderate densities, it was recently discovered that stop-and-go waves [D. Helbing et al., Phys. Rev. Lett. 97, 168001 (2006)] and a phenomenon called"crowd turbulence"can occur at high pedestrian densities [D. Helbing et al., Phys. Rev. E 75, 046109 (2007)]. Although the behavior of pedestrian crowds under extreme conditions is decisive for the safety of crowds during the access to or egress from mass events as well as for situations of emergency evacuation, there is still a lack of empirical studies of extreme crowding. Therefore, this paper discusses how one may study high-density conditions based on suitable video data. This is illustrated at the example of pilgrim flows entering the previous Jamarat Bridge in Mina, 5 kilometers from the Holy Mosque in Makkah, Saudi-Arabia. Our results reveal previously unexpected pattern formation phenomena and show that the average individual speed does not go to zero even at local densities of 10 persons per square meter. Since the maximum density and flow are different from measurements in other countries, this has implications for the capacity assessment and dimensioning of facilities for mass events. When conditions become congested, the flow drops significantly, which can cause stop-and-go waves and a further increase of the density until critical crowd conditions are reached. Then,"crowd turbulence"sets in, which may trigger crowd disasters. <s> BIB002 </s> Crowded Scene Analysis: A Survey <s> A. Real-World Applications <s> In this work the results of a bottleneck experiment with pedestrians are presented in the form of total times, fluxes, specific fluxes, and time gaps. 
A main aim was to find the dependence of these values on the bottleneck width. The results show a linear decline of the specific flux with increasing width as long as only one person at a time can pass, and a constant value for larger bottleneck widths. Differences between small (one person at a time) and wide bottlenecks (two persons at a time) were also found in the distribution of time gaps. <s> BIB003 </s> Crowded Scene Analysis: A Survey <s> A. Real-World Applications <s> In this paper we introduce a novel method to detect and localize abnormal behaviors in crowd videos using Social Force model. For this purpose, a grid of particles is placed over the image and it is advected with the space-time average of optical flow. By treating the moving particles as individuals, their interaction forces are estimated using social force model. The interaction force is then mapped into the image plane to obtain Force Flow for every pixel in every frame. Randomly selected spatio-temporal volumes of Force Flow are used to model the normal behavior of the crowd. We classify frames as normal and abnormal by using a bag of words approach. The regions of anomalies in the abnormal frames are localized using interaction forces. The experiments are conducted on a publicly available dataset from University of Minnesota for escape panic scenarios and a challenging dataset of crowd videos taken from the web. The experiments show that the proposed method captures the dynamics of the crowd behavior successfully. In addition, we have shown that the social force approach outperforms similar approaches based on pure optical flow. <s> BIB004 </s> Crowded Scene Analysis: A Survey <s> A. Real-World Applications <s> Efficient analysis of human behavior in video surveillance scenes is a very challenging problem. Most traditional approaches fail when applied in real conditions and contexts like amounts of persons, appearance ambiguity, and occlusion. 
In this work, we propose to deal with this problem by modeling the global motion information obtained from optical flow vectors. The obtained direction and magnitude models learn the dominant motion orientations and magnitudes at each spatial location of the scene and are used to detect the major motion patterns. The applied region-based segmentation algorithm groups local blocks that share the same motion direction and speed and allows a subregion of the scene to appear in different patterns. The second part of the approach consists in the detection of events related to groups of people which are merge, split, walk, run, local dispersion, and evacuation by analyzing the instantaneous optical flow vectors and comparing the learned models. The approach is validated and experimented on standard datasets of the computer vision community. The qualitative and quantitative results are discussed. <s> BIB005 </s> Crowded Scene Analysis: A Survey <s> A. Real-World Applications <s> A novel method for crowd flow modeling and anomaly detection is proposed for both coherent and incoherent scenes. The novelty is revealed in three aspects. First, it is a unique utilization of particle trajectories for modeling crowded scenes, in which we propose new and efficient representative trajectories for modeling arbitrarily complicated crowd flows.
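The region-based segmentation step above groups neighboring blocks that share a motion direction. A simple stand-in for that idea (a 4-connected flood fill over block-wise dominant angles; illustrative only, not the cited algorithm):

```python
import numpy as np
from collections import deque

def segment_by_direction(angle, thresh=np.pi / 8):
    """Group 4-connected grid blocks whose dominant flow directions differ
    by less than `thresh` radians; returns an integer label map."""
    h, w = angle.shape
    labels = -np.ones((h, w), dtype=int)
    cur = 0
    for sy in range(h):
        for sx in range(w):
            if labels[sy, sx] != -1:
                continue
            labels[sy, sx] = cur
            q = deque([(sy, sx)])
            while q:
                y, x = q.popleft()
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if 0 <= ny < h and 0 <= nx < w and labels[ny, nx] == -1:
                        d = abs(angle[ny, nx] - angle[y, x])
                        d = min(d, 2 * np.pi - d)   # wrap-around distance
                        if d < thresh:
                            labels[ny, nx] = cur
                            q.append((ny, nx))
            cur += 1
    return labels

# Toy field: left half flows right (0 rad), right half flows up (pi/2).
angle = np.zeros((4, 4))
angle[:, 2:] = np.pi / 2
labels = segment_by_direction(angle)
```

A real system would also compare block speeds and let a block belong to several temporal patterns, as the abstract describes.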
Next, the chaotic dynamics of all representative trajectories are extracted and quantified using chaotic invariants known as maximal Lyapunov exponent and correlation dimension. A probabilistic model is learned from this chaotic feature set, and finally, a maximum likelihood estimation criterion is adopted to identify a query video of a scene as normal or abnormal. Furthermore, an effective anomaly localization algorithm is designed to locate the position and size of an anomaly. Experiments are conducted on a known crowd data set, and results show that our method achieves higher accuracy in anomaly detection and can effectively localize anomalies. <s> BIB006 </s> Crowded Scene Analysis: A Survey <s> A. Real-World Applications <s> Based on the Lagrangian framework for fluid dynamics, a streakline representation of flow is presented to solve computer vision problems involving crowd and traffic flow. Streaklines are traced in a fluid flow by injecting color material, such as smoke or dye, which is transported with the flow and used for visualization. In the context of computer vision, streaklines may be used in a similar way to transport information about a scene, and they are obtained by repeatedly initializing a fixed grid of particles at each frame, then moving both current and past particles using optical flow. Streaklines are the locus of points that connect particles which originated from the same initial position. In this paper, a streakline technique is developed to compute several important aspects of a scene, such as flow and potential functions using the Helmholtz decomposition theorem. This leads to a representation of the flow that more accurately recognizes spatial and temporal changes in the scene, compared with other commonly used flow representations.
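The streakline construction just described, inject a particle at every seed in every frame and then advect all live particles with the flow, can be sketched directly. The flow function below is a stand-in for per-frame optical flow (a minimal sketch, not the cited implementation):

```python
import numpy as np

def streaklines(flow_fn, seeds, n_frames, dt=1.0):
    """Trace streaklines: at every frame a fresh particle is injected at each
    seed, and all live particles are advected by the (possibly time-varying)
    flow. Returns {seed_index: array of current particle positions}."""
    particles = {i: [] for i in range(len(seeds))}
    for t in range(n_frames):
        for i, s in enumerate(seeds):
            particles[i].append(np.array(s, dtype=float))  # inject at seed
        for i in particles:                                # advect everyone
            particles[i] = [p + dt * flow_fn(p, t) for p in particles[i]]
    return {i: np.stack(ps) for i, ps in particles.items()}

# Constant rightward flow: each streakline is a straight horizontal trail
# whose particles have ages n_frames..1, hence x = 5, 4, 3, 2, 1.
flow = lambda p, t: np.array([1.0, 0.0])
lines = streaklines(flow, seeds=[(0.0, 0.0)], n_frames=5)
trail = lines[0]
```

Under a time-varying flow the trail bends, which is exactly what makes streaklines more sensitive to temporal changes than instantaneous flow or pathlines.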
Applications of the technique to segmentation and behavior analysis provide comparison to previously employed techniques, showing that the streakline method outperforms the state-of-the-art in segmentation, and opening a new domain of application for crowd analysis based on potentials. <s> BIB007 </s> Crowded Scene Analysis: A Survey <s> A. Real-World Applications <s> Over the past decades, a wide attention has been paid to crowd control and management in the intelligent video surveillance area. Among the tasks for automatic surveillance video analysis, crowd motion modeling lays a crucial foundation for numerous subsequent analysis but encounters many unsolved challenges due to occlusions among pedestrians, complicated motion patterns in crowded scenarios, etc. Addressing the unsolved challenges, the authors propose a novel spatio-temporal viscous fluid field to model crowd motion patterns by exploring both appearance of crowd behaviors and interaction among pedestrians. Large-scale crowd events are hereby recognized based on characteristics of the fluid field. First, a spatio-temporal variation matrix is proposed to measure the local fluctuation of video signals in both spatial and temporal domains. After that, eigenvalue analysis is applied on the matrix to extract the principal fluctuations resulting in an abstract fluid field. Interaction force is then explored based on shear force in viscous fluid, incorporating with the fluctuations to characterize motion properties of a crowd. The authors then construct a codebook by clustering neighboring pixels with similar spatio-temporal features, and consequently, crowd behaviors are recognized using the latent Dirichlet allocation model. The convincing results obtained from the experiments on published datasets demonstrate that the proposed method obtains high-quality results for large-scale crowd behavior perception in terms of both robustness and effectiveness. <s> BIB008
|
Research on crowded scene analysis could lead to many critical applications. 1) Visual Surveillance: Many places of security interest, such as railway stations and shopping malls, are very crowded. Conventional surveillance systems may fail under high object densities, in terms of both accuracy and computation. The results of crowd behavior analysis can be leveraged for crowd flux statistics and congestion analysis BIB008 , BIB005 , anomaly detection and alarming BIB006 - BIB007 , BIB004 , etc. 2) Crowd Management: In mass gatherings such as music festivals and sports events, crowded scene analysis can be used to develop crowd management strategies and assist the movement of the crowd or individuals, to avoid crowd disasters and ensure public safety BIB002 , . 3) Public Space Design: The analysis of crowd dynamics and its relevant findings BIB001 , BIB003 can provide guidelines for public space design, and therefore increase the efficiency and safety of train stations, airport terminals, theaters, public buildings, and mass events in the future.
|
Crowded Scene Analysis: A Survey <s> B. Problems and Motivations <s> Analysis of motion patterns is an effective approach for anomaly detection and behavior prediction. Current approaches for the analysis of motion patterns depend on known scenes, where objects move in predefined ways. It is highly desirable to automatically construct object motion patterns which reflect the knowledge of the scene. In this paper, we present a system for automatically learning motion patterns for anomaly detection and behavior prediction based on a proposed algorithm for robustly tracking multiple objects. In the tracking algorithm, foreground pixels are clustered using a fast accurate fuzzy K-means algorithm. Growing and prediction of the cluster centroids of foreground pixels ensure that each cluster centroid is associated with a moving object in the scene. In the algorithm for learning motion patterns, trajectories are clustered hierarchically using spatial and temporal information and then each motion pattern is represented with a chain of Gaussian distributions. Based on the learned statistical motion patterns, statistical methods are used to detect anomalies and predict behaviors. Our system is tested using image sequences acquired, respectively, from a crowded real traffic scene and a model traffic scene. Experimental results show the robustness of the tracking algorithm, the efficiency of the algorithm for learning motion patterns, and the encouraging performance of algorithms for anomaly detection and behavior prediction. <s> BIB001 </s> Crowded Scene Analysis: A Survey <s> B. Problems and Motivations <s> Computer vision algorithms have played a pivotal role in commercial video surveillance systems for a number of years. However, a common weakness among these systems is their inability to handle crowded scenes. 
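The first abstract above represents each learned motion pattern as a chain of Gaussian distributions and detects anomalies statistically. A minimal version of that idea (diagonal Gaussians, one per time step over aligned trajectories; our own simplification, not the cited system):

```python
import numpy as np

def fit_chain(trajs):
    """Fit a chain of Gaussians: one (mean, variance) per time step over a
    set of aligned training trajectories (N x T x 2)."""
    mu = trajs.mean(axis=0)                      # T x 2 means
    var = trajs.var(axis=0) + 1e-6               # T x 2 diagonal variances
    return mu, var

def loglik(traj, mu, var):
    """Mean Gaussian log-likelihood of one trajectory (T x 2) under the chain;
    a low score marks the trajectory as anomalous."""
    ll = -0.5 * (np.log(2 * np.pi * var) + (traj - mu) ** 2 / var)
    return ll.sum(axis=1).mean()

# Training pattern: 50 slightly noisy walks along the x-axis.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 10)
normal = np.stack([np.stack([t, 0.05 * rng.standard_normal(10)], 1)
                   for _ in range(50)])
mu, var = fit_chain(normal)
on_pattern = np.stack([t, np.zeros(10)], 1)      # follows the learned lane
off_pattern = np.stack([t, np.ones(10)], 1)      # drifts far off the lane
```

Thresholding the score against the training-set likelihoods would give the anomaly decision and, per step, a rough localization of where the trajectory departs from the pattern.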
In this thesis, we have developed algorithms that overcome some of the challenges encountered in videos of crowded environments such as sporting events, religious festivals, parades, concerts, train stations, airports, and malls. We adopt a top-down approach by first performing a global-level analysis that locates dynamically distinct crowd regions within the video. This knowledge is then employed in the detection of abnormal behaviors and tracking of individual targets within crowds. In addition, the thesis explores the utility of contextual information necessary for persistent tracking and re-acquisition of objects in crowded scenes. For the global-level analysis, a framework based on Lagrangian Particle Dynamics is proposed to segment the scene into dynamically distinct crowd regions or groupings. For this purpose, the spatial extent of the video is treated as a phase space of a time-dependent dynamical system in which transport from one region of the phase space to another is controlled by the optical flow. Next, a grid of particles is advected forward in time through the phase space using a numerical integration to generate a "flow map". The flow map relates the initial positions of particles to their final positions. The spatial gradients of the flow map are used to compute a Cauchy Green Deformation tensor that quantifies the amount by which the neighboring particles diverge over the length of the integration. The maximum eigenvalue of the tensor is used to construct a forward Finite Time Lyapunov Exponent (FTLE) field that reveals the Attracting Lagrangian Coherent Structures (LCS). The same process is repeated by advecting the particles backward in time to obtain a backward FTLE field that reveals the repelling LCS. The attracting and repelling LCS are the time dependent invariant manifolds of the phase space and correspond to the boundaries between dynamically distinct crowd flows.
The forward and backward FTLE fields are combined to obtain one scalar field that is segmented using a watershed segmentation algorithm to obtain the labeling of distinct crowd-flow segments. Next, abnormal behaviors within the crowd are localized by detecting changes in the number of crowd-flow segments over time. Next, the global-level knowledge of the scene generated by the crowd-flow segmentation is used as an auxiliary source of information for tracking an individual target within a crowd. This is achieved by developing a scene structure-based force model. This force model captures the notion that an individual, when moving in a particular scene, is subjected to global and local forces that are functions of the layout of that scene and the locomotive behavior of other individuals in his or her vicinity. The key ingredients of the force model are three floor fields that are inspired by research in the field of evacuation dynamics; namely, Static Floor Field (SFF), Dynamic Floor Field (DFF), and Boundary Floor Field (BFF). These fields determine the probability of moving from one location to the next by converting the long-range forces into local forces. The SFF specifies regions of the scene that are attractive in nature, such as an exit location. The DFF, which is based on the idea of active walker models, corresponds to the virtual traces created by the movements of nearby individuals in the scene. The BFF specifies influences exhibited by the barriers within the scene, such as walls and no-entry areas. By combining influence from all three fields with the available appearance information, we are able to track individuals in high-density crowds. The results are reported on real-world sequences of marathons and railway stations that contain thousands of people. A comparative analysis with respect to an appearance-based mean shift tracker is also conducted by generating the ground truth.
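The FTLE construction described in this thesis abstract reduces, per grid point, to an eigenvalue of the Cauchy-Green deformation tensor built from flow-map gradients. A NumPy sketch on an analytic saddle flow, where the true stretching exponent (and hence the FTLE) is 1 everywhere (a hedged illustration, not the thesis code):

```python
import numpy as np

def ftle(phi_x, phi_y, x, y, T):
    """Finite-Time Lyapunov Exponent field from a flow map.
    phi_x, phi_y: final particle positions on the grid (H x W);
    x, y: 1-D grid coordinates; T: integration time."""
    dpxdy, dpxdx = np.gradient(phi_x, y, x)   # axis 0 varies with y, axis 1 with x
    dpydy, dpydx = np.gradient(phi_y, y, x)
    H, W = phi_x.shape
    out = np.empty((H, W))
    for i in range(H):
        for j in range(W):
            J = np.array([[dpxdx[i, j], dpxdy[i, j]],
                          [dpydx[i, j], dpydy[i, j]]])
            C = J.T @ J                        # Cauchy-Green deformation tensor
            lam_max = np.linalg.eigvalsh(C)[-1]
            out[i, j] = np.log(np.sqrt(lam_max)) / abs(T)
    return out

# Saddle flow (u, v) = (x, -y): flow map phi_T(x, y) = (x e^T, y e^-T),
# so neighboring particles separate like e^T and the FTLE is 1 everywhere.
T = 2.0
x = np.linspace(-1, 1, 21)
y = np.linspace(-1, 1, 21)
X, Y = np.meshgrid(x, y)
field = ftle(X * np.exp(T), Y * np.exp(-T), x, y, T)
```

In the crowd setting the flow map comes from numerically advecting the particle grid through the optical flow, and ridges of this field mark the boundaries between dynamically distinct crowd flows.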
The result of this analysis demonstrates the benefit of using floor fields in crowded scenes. The occurrence of occlusion is very frequent in crowded scenes due to a high number of interacting objects. To overcome this challenge, we propose an algorithm that has been developed to augment a generic tracking algorithm to perform persistent tracking in crowded environments. The algorithm exploits the contextual knowledge, which is divided into two categories consisting of motion context (MC) and appearance context (AC). The MC is a collection of trajectories that are representative of the motion of the occluded or unobserved object. These trajectories belong to other moving individuals in a given environment. The MC is constructed using a clustering scheme based on the Lyapunov Characteristic Exponent (LCE), which measures the mean exponential rate of convergence or divergence of the nearby trajectories in a given state space. Next, the MC is used to predict the location of the occluded or unobserved object in a regression framework. It is important to note that the LCE is used for measuring divergence between a pair of particles while the FTLE field is obtained by computing the LCE for a grid of particles. The appearance context (AC) of a target object consists of its own appearance history and appearance information of the other objects that are occluded. The intent is to make the appearance descriptor of the target object more discriminative with respect to other unobserved objects, thereby reducing the possible confusion between the unobserved objects upon re-acquisition. This is achieved by learning the distribution of the intra-class variation of each occluded object using all of its previous observations. In addition, a distribution of inter-class variation for each target-unobservable object pair is constructed. Finally, the re-acquisition decision is made using both the MC and the AC. <s> BIB002 </s> Crowded Scene Analysis: A Survey <s> B.
Problems and Motivations <s> In the year 1999 the world population reached 6 billion, doubling the previous census estimate of 1960. Recently, the United States Census Bureau issued a revised forecast for world population showing a projected growth to 9.4 billion by 2050 (US Census Bureau, http://www.census.gov/ipc/www/worldpop.html). Different research disci- plines have studied the crowd phenomenon and its dynamics from a social, psychological and computational standpoint respectively. This paper presents a survey on crowd analysis methods employed in computer vision research and discusses perspectives from other research disciplines and how they can contribute to the computer vision approach. <s> BIB003 </s> Crowded Scene Analysis: A Survey <s> B. Problems and Motivations <s> This article presents a survey on crowd analysis using computer vision techniques, covering different aspects such as people tracking, crowd density estimation, event detection, validation, and simulation. It also reports how related the areas of computer vision and computer graphics should be to deal with current challenges in crowd analysis. <s> BIB004 </s> Crowded Scene Analysis: A Survey <s> B. Problems and Motivations <s> We propose an unsupervised learning framework to infer motion patterns in videos and in turn use them to improve tracking of moving objects in sequences from static cameras. Based on tracklets, we use a manifold learning method Tensor Voting to infer the local geometric structures in (x, y) space, and embed tracklet points into (x, y, θ) space, where θ represents motion direction. In this space, points automatically form intrinsic manifold structures, each of which corresponds to a motion pattern. To define each group, a novel robustmanifold grouping algorithm is proposed. 
Tensor Voting is performed to provide multiple geometric cues which formulate multiple similarity kernels between any pair of points, and a spectral clustering technique is used in this multiple kernel setting. The grouping algorithm achieves better performance than state-of-the-art methods in our applications. Extracted motion patterns can then be used as a prior to improve the performance of any object tracker. It is especially useful to reduce false alarms and ID switches. Experiments are performed on challenging real-world sequences, and a quantitative analysis of the results shows the framework effectively improves state-of-the-art tracker. <s> BIB005 </s> Crowded Scene Analysis: A Survey <s> B. Problems and Motivations <s> This paper presents a multi-output regression model for crowd counting in public scenes. Existing counting by regression methods either learn a single model for global counting, or train a large number of separate regressors for localised density estimation. In contrast, our single regression model based approach is able to estimate people count in spatially localised regions and is more scalable without the need for training a large number of regressors proportional to the number of local regions. In particular, the proposed model automatically learns the functional mapping between interdependent low-level features and multi-dimensional structured outputs. The model is able to discover the inherent importance of different features for people counting at different spatial locations. Extensive evaluations on an existing crowd analysis benchmark dataset and a new more challenging dataset demonstrate the effectiveness of our approach. <s> BIB006 </s> Crowded Scene Analysis: A Survey <s> B. Problems and Motivations <s> This chapter presents a review and systematic comparison of the state of the art on crowd video analysis. 
The rationale of our review is justified by a recent increase in intelligent video surveillance algorithms capable of analysing automatically visual streams of very crowded and cluttered scenes, such as those of airport concourses, railway stations, shopping malls and the like. Since the safety and security of potentially very crowded public spaces have become a priority, computer vision researchers have focused their research on intelligent solutions. The aim of this chapter is to propose a critical review of existing literature pertaining to the automatic analysis of complex and crowded scenes. The literature is divided into two broad categories: the macroscopic and the microscopic modelling approach. The effort is meant to provide a reference point for all computer vision practitioners currently working on crowd analysis. We discuss the merits and weaknesses of various approaches for each topic and provide a recommendation on how existing methods can be improved. <s> BIB007 </s> Crowded Scene Analysis: A Survey <s> B. Problems and Motivations <s> The paper presents a method for estimating the number of moving people in a scene for video surveillance applications. The method performance has been characterized on the public database used for the PETS 2009 and 2010 international competitions; the proposed method has been compared, on the same database, with the PETS competitions participants. The system exhibits a high accuracy, and revealed to be so fast that it can be used in real time surveillance applications. The rationale of the method lies on the extraction of suited scale-invariant feature points and the successive selection among them of the moving ones, under the hypothesis that the latter are associated to moving people. The perspective distortions are taken into account by dividing the input frames into smaller horizontal zones, each having (approximately) the same perspective effects.
Therefore, the evaluation of the number of people is separately carried out for each zone, and the results are summed up. The most important peculiarity of the proposed method is the availability of a simple training procedure using a brief video sequence that shows a person walking around in the scene; the procedure automatically evaluates all the parameters needed by the system, thus making the method particularly suited for end-user applications. <s> BIB008
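The floor-field force model summarized earlier in this block (SFF, DFF, BFF) can be illustrated with a minimal sketch. This is an illustrative toy, not the authors' implementation: the coupling constants `k_s`, `k_d`, `k_b` and the exponential combination of fields into per-cell move probabilities are assumptions in the style of standard floor-field cellular-automaton models from evacuation dynamics.

```python
import math

def move_probabilities(sff, dff, bff, pos, k_s=1.0, k_d=1.0, k_b=1.0):
    """Probability of stepping into each 4-neighbour cell, combining the
    Static (SFF), Dynamic (DFF) and Boundary (BFF) floor fields.
    Fields are 2-D lists indexed [row][col]; higher values = more attractive.
    The coupling constants weigh the relative influence of each field."""
    r, c = pos
    neighbours = [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]
    # Combine the three long-range fields into a local score per neighbour.
    scores = [math.exp(k_s * sff[nr][nc] + k_d * dff[nr][nc] + k_b * bff[nr][nc])
              for nr, nc in neighbours]
    total = sum(scores)
    # Normalise so the scores form a probability distribution over moves.
    return list(zip(neighbours, [s / total for s in scores]))
```

With an SFF peak marking an exit to the right of the current cell and flat DFF/BFF, the move toward the exit receives the highest probability, which matches the intuition that the SFF converts a long-range attraction into a local bias.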
|
Video analysis and scene understanding usually involve object detection, tracking and behavior recognition BIB001 , . For crowded scenes, due to extreme clutter, severe occlusions and ambiguities, conventional methods without special considerations for these factors are not appropriate. As Ali pointed out BIB002 , the mechanics of human crowds are complex, as a crowd exhibits both dynamic and psychological characteristics, which are often goal-directed. This makes it very challenging to determine an appropriate level of granularity for modeling the dynamics of a crowd. Another challenge in crowded scene analysis is that the specific crowd behaviors that need to be detected and classified may be both rare and subtle , and in most surveillance scenarios, these behaviors offer few examples to learn from. These challenges have drawn attention at recent conferences, and several relevant scientific papers have been published in academic journals. In this paper, we explore these problems through a comprehensive review and general discussion. The state-of-the-art technical advances in crowded scene analysis will be covered. Feature extraction, segmentation and model learning are considered core problems addressed in visual behavior analysis . We will discuss the methods in crowded scene analysis with regard to these basic issues. It is noted that survey papers BIB004 , BIB003 , , BIB007 relevant to the topic of crowd analysis have been written in the past few years. Zhan et al. BIB003 presented a survey in 2008 on crowd analysis methods in computer vision. They covered the techniques of crowd density estimation, pedestrian/crowd recognition and crowd tracking. They paid much attention to the perspectives that other research disciplines, such as sociology, psychology and computer graphics, bring to the computer vision approach. The paper BIB004 by Junior et al. in 2010 also presented a survey on a wide range of computer vision techniques for crowd analysis, covering people tracking, crowd density estimation, event detection, validation and simulation. They devoted large sections to reporting how closely the areas of computer vision and computer graphics should be related to deal with challenges in crowd analysis. Later, in 2011, Sjarif et al. wrote a survey with more emphasis on abnormal behavior detection in crowded scenes. Recently, Thida et al. BIB007 gave a general review on crowd video analysis, providing some valuable summaries. However, many important recent works were missed, and the descriptions of the methods were rather brief. In this study, we seek to create a more focused review of recent publications on high-level crowded scene understanding, related to motion and behavior analysis in crowd videos. Several important recent works from 2010 onward, which have not been surveyed previously, will be covered and compared. In order to better elaborate this topic, we divide it into three subtopics according to the purpose of the task: motion pattern segmentation, crowd behavior recognition and anomaly detection. These three aspects are closely related to each other in crowded scene analysis BIB005 . Differently from previous surveys such as BIB003 , crowd counting or density estimation, a closely related topic BIB006 - BIB008 , is not covered in this survey. We intend to cover the area related to the analysis of behaviors and activities in crowd videos, which is broad enough on its own. The reviewed methods all concern behaviors or activities and are usually based on motion features, whereas in crowd counting static visual features can be important. As an indispensable basis for each crowded scene analysis method, feature representation is discussed and summarized separately before elaboration of the three subtopics, since feature representations can be shared by different methods.
To clearly situate the problem at hand for the readers, we follow a task-oriented way of categorizing the methods into three subtopics. In addition, some background knowledge and available physical crowd models, which can be utilized in the analysis methods, are provided beforehand. Figure 2 illustrates the diagram of crowded scene analysis and reveals what this paper is going to elaborate and explain.
|
Crowded Scene Analysis: A Survey <s> II. KNOWLEDGE OF THE CROWD <s> This paper presents a model of crowd behavior to simulate the motion of a generic population in a specific environment. The individual parameters are created by a distributed random behavioral model which is determined by few parameters. This paper explores an approach based on the relationship between the autonomous virtual humans of a crowd and the emergent behavior originated from it. We have used some concepts from sociology to represent some specific behaviors and represent the visual output. We applied our model in two applications: a graphic called sociogram that visualizes our population during the simulation, and a simple visit to a museum. In addition, we discuss some aspects about human crowd collision. <s> BIB001 </s> Crowded Scene Analysis: A Survey <s> II. KNOWLEDGE OF THE CROWD <s> This work presents an approach for generating video evidence of dangerous situations in crowded scenes. The scenarios of interest are those with high safety risk such as blocked exit, collapse of a person in the crowd, and escape panic. Real visual evidence for these scenarios is rare or unsafe to reproduce in a controllable way. Thus there is a need for simulation to allow training and validation of computer vision systems applied to crowd monitoring. The results shown here demonstrate how to simulate the most important aspects of crowds for performance analysis of computer based video surveillance systems. <s> BIB002 </s> Crowded Scene Analysis: A Survey <s> II. KNOWLEDGE OF THE CROWD <s> This paper presents a target tracking framework for unstructured crowded scenes. Unstructured crowded scenes are defined as those scenes where the motion of a crowd appears to be random with different participants moving in different directions over time. This means each spatial location in such scenes supports more than one, or multi-modal, crowd behavior. 
The case of tracking in structured crowded scenes, where the crowd moves coherently in a common direction, and the direction of motion does not vary over time, was previously handled in [1]. In this work, we propose to model various crowd behavior (or motion) modalities at different locations of the scene by employing Correlated Topic Model (CTM) of [16]. In our construction, words correspond to low level quantized motion features and topics correspond to crowd behaviors. It is then assumed that motion at each location in an unstructured crowd scene is generated by a set of behavior proportions, where behaviors represent distributions over low-level motion features. This way any one location in the scene may support multiple crowd behavior modalities and can be used as prior information for tracking. Our approach enables us to model a diverse set of unstructured crowd domains, which range from cluttered time-lapse microscopy videos of cell populations in vitro, to footage of crowded sporting events. <s> BIB003 </s> Crowded Scene Analysis: A Survey <s> II. KNOWLEDGE OF THE CROWD <s> Crowd is a unique group of individual or something involves community or society. The phenomena of the crowd are very familiar in a variety of research discipline such as sociology, civil and physic. Nowadays, it becomes the most active-oriented research and trendy topic in computer vision. Traditionally, three processing steps involve in crowd analysis, and these include pre-processing, object detection and event/behavior recognition. Meanwhile, the common process for analysis in video sequence of crowd information extraction consists of Pre-Processing, Object Tracking, and Event/Behavior Recognition. In terms of behavior detection, the crowd density estimation, crowd motion detection, crowd tracking and crowd behavior recognition are adopted. In this paper, we give the general framework and taxonomy of pattern in detecting abnormal behavior in a crowd scene. 
This study presents the state of the art of crowd analysis and a taxonomy of the common approaches to it; it can be useful to researchers and would serve as a good introduction to the field. <s> BIB004
|
Crowded scenes can be divided into two categories according to the motion of the crowd BIB003 : structured and unstructured. In structured crowded scenes, the crowd moves coherently in a common direction, the motion direction does not vary frequently, and each spatial location of the scene contains only one main crowd behavior over time. Unstructured crowded scenes are those with chaotic or random crowd motion, where participants move in different directions at different times, and each spatial location contains multiple crowd behaviors BIB004 . Figure 1(a) shows structured crowded scenes, while Figure 1(b) shows unstructured ones. Obviously, they have different dynamic and visual characteristics. The crowd has been defined as "a large group of individuals in the same physical environment, sharing a common goal" BIB001 . It can be viewed hierarchically: individuals are collected into groups, and the resulting groups are collected into a crowd, with a set of motivations and basic rules BIB002 . This representation permits a flexible analysis of a large spectrum of crowd densities and complicated behaviors. The analysis of a crowd can be conducted at the macroscopic or microscopic level. At the macroscopic level, we are interested in the global motion of a mass of people, without concerning ourselves with the movements of any individual; at the microscopic level, we are concerned with the movements of each individual pedestrian and analyze based on their collective information. The analysis of crowded scenes can involve knowledge from both the vision area and crowd dynamics. In addition to the vision algorithms usually adopted in conventional scene analysis, physical models from crowd dynamics can also be utilized. Below, we introduce some available knowledge, such as models from crowd dynamics, as well as their application in crowded scene analysis.
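The structured/unstructured distinction above can be made concrete with a small illustrative sketch: at a given spatial location, count how many significant motion-direction modes the observed flow vectors form. A single dominant mode suggests structured motion; several modes suggest unstructured motion. The 8-bin quantization and the 20% significance threshold below are illustrative assumptions, not part of any cited method.

```python
from collections import Counter

def direction_modes(angles_deg, n_bins=8, min_share=0.2):
    """Count significant motion-direction modes at one spatial location.

    angles_deg: motion directions (degrees) observed at this location.
    A bin counts as a mode when it holds at least `min_share` of the samples.
    One mode -> structured-like motion; several modes -> unstructured-like.
    """
    # Quantize each direction into one of n_bins equal angular sectors.
    bins = Counter(int(a % 360) * n_bins // 360 for a in angles_deg)
    total = sum(bins.values())
    # Count the sectors that carry a significant share of the motion.
    return sum(1 for c in bins.values() if c / total >= min_share)
```

For example, directions clustered near 0 degrees (a marathon-like flow) yield one mode, while directions spread over 0, 90, 180 and 270 degrees (a concourse-like scene) yield several.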
|