A survey on communication technologies and requirements for internet of electric vehicles <s> Introduction <s> Economics and environmental incentives, as well as advances in technology, are reshaping the traditional view of industrial systems. The anticipation of a large penetration of plug-in hybrid electric vehicles (PHEVs) and plug-in electric vehicles (PEVs) into the market brings up many technical problems that are highly related to industrial information technologies within the next ten years. There is a need for an in-depth understanding of the electrification of transportation in the industrial environment. It is important to consolidate the practical and the conceptual knowledge of industrial informatics in order to support the emerging electric vehicle (EV) technologies. This paper presents a comprehensive overview of the electrification of transportation in an industrial environment. In addition, it provides a comprehensive survey of the EVs in the field of industrial informatics systems, namely: 1) charging infrastructure and PHEV/PEV batteries; 2) intelligent energy management; 3) vehicle-to-grid; and 4) communication requirements. Moreover, this paper presents a future perspective of industrial information technologies to accelerate the market introduction and penetration of advanced electric drive vehicles. <s> BIB001 </s> A survey on communication technologies and requirements for internet of electric vehicles <s> Introduction <s> A communication infrastructure is an essential part to the success of the emerging smart grid. A scalable and pervasive communication infrastructure is crucial in both construction and operation of a smart grid. In this paper, we present the background and motivation of communication infrastructures in smart grid systems. We also summarize major requirements that smart grid communications must meet. From the experience of several industrial trials on smart grid with communication infrastructures, we expect that the traditional carbon fuel based power plants can cooperate with emerging distributed renewable energy such as wind, solar, etc, to reduce the carbon fuel consumption and consequent green house gas such as carbon dioxide emission. The consumers can minimize their expense on energy by adjusting their intelligent home appliance operations to avoid the peak hours and utilize the renewable energy instead. We further explore the challenges for a communication infrastructure as the part of a complex smart grid system. Since a smart grid system might have over millions of consumers and devices, the demand of its reliability and security is extremely critical. Through a communication infrastructure, a smart grid can improve power reliability and quality to eliminate electricity blackout. Security is a challenging issue since the on-going smart grid systems facing increasing vulnerabilities as more and more automation, remote monitoring/controlling and supervision entities are interconnected. <s> BIB002 </s> A survey on communication technologies and requirements for internet of electric vehicles <s> Introduction <s> Information and communication technologies (ICT) represent a fundamental element in the growth and performance of smart grids. 
A sophisticated, reliable and fast communication infrastructure is, in fact, necessary for the connection among the huge amount of distributed elements, such as generators, substations, energy storage systems and users, enabling a real time exchange of data and information necessary for the management of the system and for ensuring improvements in terms of efficiency, reliability, flexibility and investment return for all those involved in a smart grid: producers, operators and customers. This paper overviews the issues related to the smart grid architecture from the perspective of potential applications and the communications requirements needed for ensuring performance, flexible operation, reliability and economics. <s> BIB003
|
As the dependence on a single energy source (crude oil) exposes economies to an unstable global oil market and increases environmental concerns, there has been a growing interest in pushing electric vehicles into mainstream acceptance. The motivation for the electrification of transportation is multifaceted: electricity can be generated from diverse and domestic resources, electricity prices have been relatively stable over the last two decades, and electric miles are cheaper and cleaner. Therefore, the internet of electric vehicles is expected to capture a sizable market share in the next decade. In fact, the study in estimates that there will be around 50 million grid-enabled vehicles by the year 2040. Accordingly, there is a pressing need to deploy charging networks that can accommodate the projected demand. For instance, [4] reports an initiative to build a statewide charging station network in California. Similarly, Estonia is building Europe's largest fast-charging station network, with 200 nodes. The number of EV charging stations is expected to exceed four million in Europe and 11 million worldwide by the year 2020. However, as the power grid becomes more congested due to the introduction of EVs, managing and controlling the corresponding demand must be carefully aligned with the available resources. Although the long-term solution involves upgrading power grid components, considering the potential cost of such investments, the practical near-term solution is to develop intelligent control and scheduling techniques that aid power grid operations. The realization of such frameworks requires appropriate communication architectures that enable reliable interaction between the grid and EV drivers, so that power flow can be optimally controlled under varying network conditions. A handful of surveys have discussed general smart grid communication requirements, standards, and protocols for household demand management BIB002 BIB003 BIB001 . However, the case for EVs is unique: electric vehicles can be mobile, and a typical EV demand is large; in fact, it can exceed the daily energy consumption of two households. More importantly, the sustainability of power grid operations is essential for human life. Therefore, careful attention is required.
2 Internet of electric vehicles and the current power grid
|
A survey on communication technologies and requirements for internet of electric vehicles <s> Power generation and electricity prices 2.2.1 Current status <s> Topics considered include characteristics of power generation units, transmission losses, generation with limited energy supply, control of generation, and power system security. This book is a graduate-level text in electric power engineering as regards to planning, operating, and controlling large scale power generation and transmission systems. Material used was generated in the post-1966 period. Many (if not most) of the chapter problems require a digital computer. A background in steady-state power circuit analysis is required. <s> BIB001 </s> A survey on communication technologies and requirements for internet of electric vehicles <s> Power generation and electricity prices 2.2.1 Current status <s> Electric power and energy engineering has as a basic tenet the relief of humankind of its burden, and the transmission and processing of information. In this paper, the salient advances of the first century of the electrification of the World is reviewed with special emphasis on the contemporary design and operation of electric power and energy systems. The advancements of power engineering are reviewed from a 2012 perspective, and issues of sustainability and the utilization of renewable resources are discussed. <s> BIB002
|
According to the US National Academy of Engineering, the power grid is 'the supreme engineering achievement of the twentieth century'. Currently, close to 3,200 utility companies serve more than 143 million customers in the United States. To meet the growing customer demand, the required power supply is generated from diverse resources, including coal, nuclear, hydro, natural gas, and lately renewable sources such as wind and solar BIB002 . Depending on the efficiency and the unit generation cost, power generation can be roughly divided into base load, intermediate load, and peak-hour load. Factors that affect the dispatch of a specific generation asset include variable operation and maintenance (O&M) costs, flexibility (fast vs. slow start generators), environmental 'head-room', and the distance to load and transmission. To meet the base load demand, utilities employ large-scale (≥400 MW) and low-cost generation assets (e.g., nuclear, hydro, coal). Moreover, base load generation is characterized by a high load factor (the percentage of hours that a power plant runs at full capacity). For intermediate load generation (the difference between the expected customer demand and the base load generation), power plants with lower load factors (typically around 50%), such as combined cycle combustion turbines fueled by natural gas, are employed BIB001 . Finally, utilities may need to employ additional generation assets to accommodate customer demand during peak hours. For this purpose, fast-start, high-cost, and usually environmentally unfriendly assets are employed. They are characterized by a low load factor (5% to 10%), which leads to decreased utilization and hence an increased ratio of peak to average demand. Consequently, the use of such assets gradually increases the average kWh electricity price. A real-world scenario is illustrated in Figure 1a.
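To make the merit-order dispatch and load-factor notions above concrete, the following sketch stacks base, intermediate, and peaking capacity against an hourly demand curve and computes each asset class's load factor as defined above. All capacities and the demand profile are assumed, illustrative numbers, not figures from the survey or the cited works.

```python
# Illustrative sketch (not from the survey): dispatching assumed base,
# intermediate, and peaking capacities against a made-up hourly demand curve
# and computing each asset class's load factor (share of full-capacity hours).

# Merit order: cheapest assets are dispatched first. Capacities in MW (assumed).
merit_order = [
    ("base (nuclear/hydro/coal)", 400.0),
    ("intermediate (gas combined cycle)", 150.0),
    ("peaking (gas combustion turbine)", 100.0),
]

# Assumed hourly system demand over one day, in MW.
demand = [420, 400, 390, 385, 390, 410, 470, 520, 560, 580, 590, 600,
          610, 620, 630, 640, 650, 640, 610, 580, 540, 500, 460, 430]

supplied = {name: 0.0 for name, _ in merit_order}
for d in demand:
    remaining = d
    for name, capacity in merit_order:
        used = min(remaining, capacity)
        supplied[name] += used          # MWh produced by this class in this hour
        remaining -= used

for name, capacity in merit_order:
    load_factor = supplied[name] / (capacity * len(demand))
    print(f"{name:35s} load factor = {load_factor:5.1%}")
```

Under these assumed numbers the base plants run near full capacity almost all day, while the peaking units are needed only for a few afternoon hours, which mirrors the low peak-unit load factors discussed above.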
|
A survey on communication technologies and requirements for internet of electric vehicles <s> Impact of the EV penetration <s> Plug-in hybrid vehicles (PHEVs) are being developed around the world; much work is going on to optimize engine and battery operations for efficient operation, both during discharge and when grid electricity is available for recharging. However, there has generally been the expectation that the grid will not be greatly affected by the use of the vehicles, because the recharging would only occur during offpeak hours, or the number of vehicles will grow slowly enough that capacity planning will respond adequately. But this expectation does not incorporate that endusers will have control of the time of recharging and the inclination for people will be to plug in when convenient for them, rather than when utilities would prefer. It is important to understand the ramifications of introducing a number of plug-in hybrid vehicles onto the grid. Depending on when and where the vehicles are plugged in, they could cause local or regional constraints on the grid. They could require both the addition of new electric capacity along with an increase in the utilization of existing capacity. Local distribution grids will see a change in their utilization pattern, and some lines or substations may become overloaded sooner than expected. Furthermore, the type ofmore » generation used to recharge the vehicles will be different depending on the region of the country and timing when the PHEVs recharge. We conducted an analysis of what the grid impact may be in 2018 with one million PHEVs added to the VACAR sub-region of the Southeast Electric Reliability Council, a region that includes South Carolina, North Carolina, and much of Virginia. To do this, we used the Oak Ridge Competitive Electricity Dispatch model, which simulates the hourly dispatch of power generators to meet demand for a region over a given year. Depending on the vehicle, its battery, the charger voltage level, amperage, and duration, the impact on regional electricity demand varied from 1,400 to 6,000 MW. If recharging occurred in the early evening, then peak loads were raised and demands were met largely by combustion turbines and combined cycle plants. Nighttime recharging had less impact on peak loads and generation adequacy, but the increased use of coal-fired generation changed the relative amounts of air emissions. Costs of generation also fluctuated greatly depending on the timing. However, initial analysis shows that even charging at peak times may be less costly than using gasoline to operate the vehicles. Even if the overall region may have sufficient generating power, the region's transmission system or distribution lines to different areas may not be large enough to handle this new type of load. A largely residential feeder circuit may not be sized to have a significant proportion of its customers adding 1.4 to 6 kW loads that would operate continuously for two to six hours beginning in the early evening. On a broader scale, the transmission lines feeding the local substations may be similarly constrained if they are not sized to respond to this extra growth in demand. This initial analysis identifies some of the complexities in analyzing the integrated system of PHEVs and the grid. Depending on the power level, timing, and duration of the PHEV connection to the grid, there could be a wide variety of impacts on grid constraints, capacity needs, fuel types used, and emissions generated. 
This paper provides a brief description of plug-in hybrid vehicle characteristics in Chapter 2. Various charging strategies for vehicles are discussed, with a consequent impact on the grid. In Chapter 3 we describe the future electrical demand for a region of the country and the impact on this demand with a number of plug-in hybrids. We apply that demand to an inventory of power plants for the region using the Oak Ridge Competitive Electricity Dispatch (ORCED) model to evaluate the change in power production and emissions. In Chapter 4 we discuss the impact of demand increases on local distribution systems. In Chapter 5 we conclude and provide insights into the impacts of plug-ins. Future tasks will be proposed to better define the interaction electricity and transportation, and how society can better prepare for their confluence.« less <s> BIB001 </s> A survey on communication technologies and requirements for internet of electric vehicles <s> Impact of the EV penetration <s> The combination of high oil costs, concerns about oil security and availability, and air quality issues related to vehicle emissions are driving interest in plug-in hybrid electric vehicles (PHEVs). PHEVs are similar to conventional hybrid electric vehicles, but feature a larger battery and plug-in charger that allows electricity from the grid to replace a portion of the petroleum-fueled drive energy. PHEVs may derive a substantial fraction of their miles from grid-derived electricity, but without the range restrictions of pure battery electric vehicles. As of early 2007, production of PHEVs is essentially limited to demonstration vehicles and prototypes. However, the technology has received considerable attention from the media, national security interests, environmental organizations, and the electric power industry. The use of PHEVs would represent a significant potential shift in the use of electricity and the operation of electric power systems. Electrification of the transportation sector could increase generation capacity and transmission and distribution (T&D) requirements, especially if vehicles are charged during periods of high demand. This study is designed to evaluate several of these PHEV-charging impacts on utility system operations within the Xcel Energy Colorado service territory. <s> BIB002 </s> A survey on communication technologies and requirements for internet of electric vehicles <s> Impact of the EV penetration <s> This research report specifically examines the CO2 and NOx emissions of switching a significant number of Vermont vehicles from gasoline to electricity. In addition to the environmental and social impacts, the reliance on petroleum to fuel Vermont vehicles impacts the state’s economy and the pocket-books of consumers. Drivers in Vermont spent more than $1.1 billion to fuel vehicles in 2007, an increase of about $500 million dollars from 2002. Changing the fuel in Vermont vehicles can address both emissions and economic issues. Advances in electric drive systems and energy storage devices have made plug-in hybrid electric vehicles (PHEVs) a reality. Building on the success of hybrid electric vehicles, PHEVs allow the consumer to charge the vehicle’s battery pack directly from the electric grid rather than from the vehicle’s gas engine. This research report looks at the ability of the Vermont electric grid to handle large numbers of PHEVs, and at the emissions impact and end-user economic costs. 
<s> BIB003 </s> A survey on communication technologies and requirements for internet of electric vehicles <s> Impact of the EV penetration <s> As Plug-in Hybrid Vehicles (PHEVs) take a greater share in the personal automobile market, their penetration levels may bring potential challenges to electric utility especially at the distribution level. This paper examines the impact of charging PHEVs on a distribution transformer under different charging scenarios. The simulation results indicate that at the PHEV penetration level of interest, new load peaks will be created, which in some cases may exceed the distribution transformer capacity. In order to keep the PHEVs from causing harmful new peaks, thus making the system more secure and efficient, several PHEV charging profiles are analyzed and some possible demand management solutions, including PHEV stagger charge and household load control, are explored. <s> BIB004 </s> A survey on communication technologies and requirements for internet of electric vehicles <s> Impact of the EV penetration <s> Alternative vehicles, such as plug-in hybrid electric vehicles, are becoming more popular. The batteries of these plug-in hybrid electric vehicles are to be charged at home from a standard outlet or on a corporate car park. These extra electrical loads have an impact on the distribution grid which is analyzed in terms of power losses and voltage deviations. Without coordination of the charging, the vehicles are charged instantaneously when they are plugged in or after a fixed start delay. This uncoordinated power consumption on a local scale can lead to grid problems. Therefore, coordinated charging is proposed to minimize the power losses and to maximize the main grid load factor. The optimal charging profile of the plug-in hybrid electric vehicles is computed by minimizing the power losses. As the exact forecasting of household loads is not possible, stochastic programming is introduced. Two main techniques are analyzed: quadratic and dynamic programming. <s> BIB005 </s> A survey on communication technologies and requirements for internet of electric vehicles <s> Impact of the EV penetration <s> This tutorial aims at explaining how signal processing techniques can be used to manage EVs connected to the smart grid. It also introduces the main issues and challenges related with the operation of EVs in the presence of a smart grid infrastructure and how signal processing techniques can be applied in this context. <s> BIB006 </s> A survey on communication technologies and requirements for internet of electric vehicles <s> Impact of the EV penetration <s> Electric Vehicles (EVs) charged in a manner that is optimal to the power system will tend to increase the utilization of the lowest cost power generating units on the system, which in turn encourages investment in these preferable forms of generation. Were these gains to be substantial, they could be reflected in future charging tariffs as a means of encouraging EV ownership. However, where the impact of EVs is being quantified, much of the system benefit can only be observed where generator scheduling is performed by unit-commitment based methods. By making use of a rapid, yet robust unit-commitment algorithm, in the context of a capacity expansion procedure, this paper quantifies the impact of EVs for a variety of demand and wind time-series, relative fuel costs and EV penetrations. Typically, the net-cost of EV charging increases with EV penetration and CO2 cost, and falls with increasing wind. 
Frequently however these relationships do not apply, where changes in an input often lead to step-changes in the optimal plant mix. The impact of EVs is thus strongly dependent on the dynamics of the underlying generation portfolio. <s> BIB007
|
There are a handful of studies investigating the impact of electric vehicle charging on power generation BIB002 BIB003 BIB007 . According to BIB002 , plug-in hybrid electric vehicles (assuming all vehicles are PHEV20s with a 7.2-kWh battery pack) can increase the total load by 2.7% and the peak load by 2.5% in Colorado. On the other hand, the battery sizes of pure EVs range from 16 to 52 kWh, which means that the actual impacts will be more severe. Similarly, BIB001 shows that if 5% of the EV population charges at the same time, there will be a 5-GW increase in total power demand by the year 2018 in the VACAR region (Virginia - North Carolina - South Carolina). Overall, uncontrolled EV charging will decrease the utilization of low-cost generation assets, increase the peak-to-average load ratio, and increase the power generation cost. Potential impacts of EV demand on the cost of the power grid are presented in Figure 1b. According to a study conducted by the US Department of Energy, in the Western Interconnection network alone, one third of the lines experienced congestion at least once during the year of the study, and 17% of the lines were congested at least 10% of the time. This study also shows that the situation is even more severe in the Eastern Interconnection, as the infrastructure is older and the network is not designed for long-distance delivery of power. On the other hand, the growth in EV load, along with the deployment of new generators, requires a capacity expansion in the transmission network. However, due to economic and political reasons, the required investments may not be realized in the short term. Past experience shows that new transmission projects can cost billions of dollars and may be stalled if the cost allocation and the recovery of investments are not properly planned. Consequently, uncontrolled EV demand will allow transmission bottlenecks to emerge. These bottlenecks will increase electricity costs and the risk of blackouts. Since EVs will mostly be charged at parking lots or customer premises, the distribution grid is the part of the network to which most electric vehicles will be connected. Uncontrolled EV charging could stress the distribution grid, cause system failures such as transformer and line overloading, and deteriorate power quality (e.g., large voltage deviations, harmonics, etc.). Considering that EV penetration is going to be geographically clustered, the negative impacts will be more severe in certain regions BIB005 BIB004 . For instance, the US distribution grid is designed to serve three to five houses per transformer BIB006 . Since charging one EV doubles the daily load of a typical house, the additional load introduced by EVs will pose further challenges. A typical scenario is illustrated in Figure 2, where five houses are served by a 37.5-kVA transformer. If just two level-2 chargers are used concurrently, the local transformer will be overloaded. The frequent occurrence of such events increases power losses and voltage deviations, and decreases transformer lifetime (high loading leads to high operating temperature) BIB005 . In BIB004 , the authors present a comprehensive study on the impacts of a variety of EV charging scenarios on the required transformer upgrades and transformer efficiency.
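The transformer scenario above can be checked with simple arithmetic. In the sketch below, the 37.5-kVA transformer and the five-house layout come from the text, while the power factor, per-house evening peak, and level-2 charger rating are assumed values chosen for illustration.

```python
# Back-of-the-envelope check of the Figure 2 scenario: the 37.5-kVA transformer
# and five-house layout come from the text; the power factor, per-house evening
# peak, and level-2 charger rating are assumed values for illustration.

TRANSFORMER_KVA = 37.5
POWER_FACTOR = 0.95                    # assumption
capacity_kw = TRANSFORMER_KVA * POWER_FACTOR

house_peak_kw = 5.0                    # assumed evening peak per household
level2_charger_kw = 6.6                # common level-2 charging rate (assumption)

for n_chargers in range(4):
    load_kw = 5 * house_peak_kw + n_chargers * level2_charger_kw
    status = "overloaded" if load_kw > capacity_kw else "within rating"
    print(f"{n_chargers} concurrent level-2 chargers -> {load_kw:5.1f} kW "
          f"({status}, limit {capacity_kw:.1f} kW)")
```

With these assumed household loads, a single charger keeps the feeder within its rating, while two concurrent level-2 chargers push the load past the transformer limit, consistent with the scenario described in the text.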
|
A survey on communication technologies and requirements for internet of electric vehicles <s> Opportunities <s> Many believe the electric power system is undergoing a profound change driven by a number of needs. There's the need for environmental compliance and energy conservation. We need better grid reliability while dealing with an aging infrastructure. And we need improved operational effi ciencies and customer service. The changes that are happening are particularly signifi cant for the electricity distribution grid, where "blind" and manual operations, along with the electromechanical components, will need to be transformed into a "smart grid." This transformation will be necessary to meet environmental targets, to accommodate a greater emphasis on demand response (DR), and to support plug-in hybrid electric vehicles (PHEVs) as well as distributed generation and storage capabilities. It is safe to say that these needs and changes present the power industry with the biggest challenge it has ever faced. On one hand, the transition to a smart grid has to be evolutionary to keep the lights on; on the other hand, the issues surrounding the smart grid are signifi cant enough to demand major changes in power systems operating philosophy. <s> BIB001 </s> A survey on communication technologies and requirements for internet of electric vehicles <s> Opportunities <s> Researchers have proposed that fleets of plug-in hybrid vehicles could be used to perform ancillary services for the electric grid. In many of these studies, the vehicles are able to accrue revenue for performing these grid stabilization services, which would offset the increased purchase cost of plug-in hybrid vehicles. To date, all such studies have assumed a vehicle command architecture that allows for a direct and deterministic communication between the grid system operator and the vehicle. This work compares this direct, deterministic vehicle command architecture to an aggregative vehicle command architecture on the bases of the availability, reliability and value of vehicle-provided ancillary services. This research incorporates a new level of detail into the modeling of vehicle-to-grid ancillary services by incorporating probabilistic vehicle travel models, time series ancillary services pricing, and a consideration of ancillary services reliability. Results show that including an aggregating entity in the command and contracting architecture can improve the scale and reliability of vehicle-to-grid ancillary services, thereby making vehicle-to-grid ancillary services more compatible with the current ancillary services market. However, the aggregative architecture has the deleterious effect of reducing the revenue accrued by plug-in vehicle owners relative to the default architectures. <s> BIB002 </s> A survey on communication technologies and requirements for internet of electric vehicles <s> Opportunities <s> Abstract This study proposes an intelligent PEV charging scheme that significantly reduces power system cost while maintaining reliability compared to the widely discussed valley-fill method of aggregated charging in the early morning. This study considers optimal PEV integration into the New York Independent System Operator's (NYISO) day-ahead and real-time wholesale energy markets for 21 days in June, July, and August of 2006, a record-setting summer for peak load. NYISO market and load data is used to develop a statistical Locational Marginal Price (LMP) and wholesale energy cost model. 
This model considers the high cost of ramping generators at peak-load and the traditional cost of steady-state operation, resulting in a framework with two competing cost objectives. Results show that intelligent charging assigns roughly 80% of PEV load to valley hours to take advantage of low steady-state cost, while placing the remaining 20% equally at shoulder and peak hours to reduce ramping cost. Compared to unregulated PEV charging, intelligent charging reduces system cost by 5–16%; a 4–9% improvement over the flat valley-fill approach. Moreover, a Charge Flexibility Constraint (CFC), independent of market modeling, is constructed from a vehicle-at-home profile and the mixture of Level 1 and Level 2 charging infrastructure. The CFC is found to severely restrict the ability to charge vehicles during the morning load valley. This study further shows that adding more Level 2 chargers without regulating PEV charging will significantly increase wholesale energy cost. Utilizing the proposed intelligent PEV charging method, there is a noticeable reduction in system cost if the penetration of Level 2 chargers is increased from 70/30 to 50/50 (Level 1/Level 2). However, the system benefit is drastically diminished for higher penetrations of Level 2 chargers. <s> BIB003 </s> A survey on communication technologies and requirements for internet of electric vehicles <s> Opportunities <s> Motivated by the power-grid-side challenges in the integration of electric vehicles, we propose a decentralized protocol for negotiating day-ahead charging schedules for electric vehicles. The overall goal is to shift the load due to electric vehicles to fill the overnight electricity demand valley. In each iteration of the proposed protocol, electric vehicles choose their own charging profiles for the following day according to the price profile broadcast by the utility, and the utility updates the price profile to guide their behavior. This protocol is guaranteed to converge, irrespective of the specifications (e.g., maximum charging rate and deadline) of electric vehicles. At convergence, the l 2 norm of the aggregated demand is minimized, and the aggregated demand profile is as “flat” as it can possibly be. The proposed protocol needs no coordination among the electric vehicles, hence requires low communication and computation capability. Simulation results demonstrate convergence to optimal collections of charging profiles within few iterations. <s> BIB004 </s> A survey on communication technologies and requirements for internet of electric vehicles <s> Opportunities <s> In this paper, the problem of grid-to-vehicle energy exchange between a smart grid and plug-in electric vehicle groups (PEVGs) is studied using a noncooperative Stackelberg game. In this game, on the one hand, the smart grid, which acts as a leader, needs to decide on its price so as to optimize its revenue while ensuring the PEVGs' participation. On the other hand, the PEVGs, which act as followers, need to decide on their charging strategies so as to optimize a tradeoff between the benefit from battery charging and the associated cost. Using variational inequalities, it is shown that the proposed game possesses a socially optimal Stackelberg equilibrium in which the grid optimizes its price while the PEVGs choose their equilibrium strategies. A distributed algorithm that enables the PEVGs and the smart grid to reach this equilibrium is proposed and assessed by extensive simulations. 
Further, the model is extended to a time-varying case that can incorporate and handle slowly varying environments. <s> BIB005 </s> A survey on communication technologies and requirements for internet of electric vehicles <s> Opportunities <s> Vehicle-to-grid provides a viable approach that feeds the battery energy stored in electric vehicles (EVs) back to the power grid. Meanwhile, since EVs are mobile, the energy in EVs can be easily transported from one place to another. Based on these two observations, we introduce a novel concept called EV energy network for energy transmission and distribution using EVs. We present a concrete example to illustrate the usage of an EV energy network, and then study the optimization problem of how to deploy energy routers in an EV energy network. We prove that the problem is NP-hard and develop a greedy heuristic solution. Simulations using real-world data shows that our method is efficient. <s> BIB006 </s> A survey on communication technologies and requirements for internet of electric vehicles <s> Opportunities <s> This paper investigates the distribution system impacts of electric vehicle (EV) charging. The analysis is based on a large number of operational distribution networks in The Netherlands. Future load profiles have been constructed by adding different EV charging profiles to household loads and solving the power flows to assess the network impacts on various network levels. The results indicate that controlled charging of EVs leads to significant reduction of overloaded network components that have to be replaced, but the impact varies per network level. Overall, in the uncontrolled charging scenarios roughly two times more replacements are needed compared to the controlled charging scenario. Furthermore, it was shown that for the controlled charging scenario the overall reduction in net present value due to energy losses and the replacement of overloaded network components is approximately 20% in comparison with the uncontrolled charging scenario. The results suggest that the deployment of a flexible and intelligent distribution network is a cost-beneficial way to accommodate large penetrations of EVs. <s> BIB007 </s> A survey on communication technologies and requirements for internet of electric vehicles <s> Opportunities <s> Electric vehicles (EVs) are regarded as one of the most effective tools to reduce the oil demands and gas emissions. And they are welcome in the near future for general road transportation. When EVs are connected to the power grid for charging and/or discharging, they become gridable EVs (GEVs). These GEVs will bring a great impact to our society and thus human life. This paper investigates and discusses the opportunities and challenges of GEVs connecting with the grid, namely, the vehicle-to-home (V2H), vehicle-to-vehicle (V2V), and vehicle-to-grid (V2G) technologies. The key is to provide the methodologies, approaches, and foresights for the emerging technologies of V2H, V2V, and V2G. <s> BIB008 </s> A survey on communication technologies and requirements for internet of electric vehicles <s> Opportunities <s> Electric Vehicles (EVs) charged in a manner that is optimal to the power system will tend to increase the utilization of the lowest cost power generating units on the system, which in turn encourages investment in these preferable forms of generation. Were these gains to be substantial, they could be reflected in future charging tariffs as a means of encouraging EV ownership. 
However, where the impact of EVs is being quantified, much of the system benefit can only be observed where generator scheduling is performed by unit-commitment based methods. By making use of a rapid, yet robust unit-commitment algorithm, in the context of a capacity expansion procedure, this paper quantifies the impact of EVs for a variety of demand and wind time-series, relative fuel costs and EV penetrations. Typically, the net-cost of EV charging increases with EV penetration and CO2 cost, and falls with increasing wind. Frequently however these relationships do not apply, where changes in an input often lead to step-changes in the optimal plant mix. The impact of EVs is thus strongly dependent on the dynamics of the underlying generation portfolio. <s> BIB009 </s> A survey on communication technologies and requirements for internet of electric vehicles <s> Opportunities <s> Electric storage units constitute a key element in the emerging smart grid system. In this paper, the interactions and energy trading decisions of a number of geographically distributed storage units are studied using a novel framework based on game theory. In particular, a noncooperative game is formulated between storage units, such as plug-in hybrid electric vehicles, or an array of batteries that are trading their stored energy. Here, each storage unit's owner can decide on the maximum amount of energy to sell in a local market so as to maximize a utility that reflects the tradeoff between the revenues from energy trading and the accompanying costs. Then in this energy exchange market between the storage units and the smart grid elements, the price at which energy is traded is determined via an auction mechanism. The game is shown to admit at least one Nash equilibrium and a novel algorithm that is guaranteed to reach such an equilibrium point is proposed. Simulation results show that the proposed approach yields significant performance improvements, in terms of the average utility per storage unit, reaching up to 130.2% compared to a conventional greedy approach. <s> BIB010
|
The aforementioned effects can be mitigated through the deployment of the necessary smart grid communication technologies, which enable EV users to take advantage of low prices during off-peak hours. In such applications, known as valley filling, grid operators encourage customers to postpone their EV charging to low-demand periods, aiming to increase overall power grid efficiency. There are many opportunities for valley-filling applications. The US power grid uses its maximum generation capacity only around 5% of the time BIB001 . If optimal valley-filling programs are employed, almost 73% of the vehicles in the US could be substituted by EVs. Such an approach requires EVs to be charged during the night, when the aggregate power demand is low. For instance, the authors in BIB003 propose an EV charging framework for valley-filling applications in New York State with EV market penetrations varying from 5% to 40%. They show that intelligently scheduling EV charging at off-peak hours increases the utilization of low-cost generation and hence lowers the wholesale energy cost. In a similar study, the authors of BIB009 argue that the savings gained from intelligent charging of EVs could be reflected in charging tariffs, which would promote EV ownership. Furthermore, the work presented in BIB004 proposes a valley-filling algorithm and models the customer-to-grid interaction via pricing demand signals. The introduction of bidirectional chargers enables electric vehicles to transfer energy back to the grid (V2G) or to other electric vehicles (V2V) BIB008 . The utilization of such ancillary services can aid transmission operations, mainly by reducing congestion during peak hours. For example, a group of vehicles can sell part of their stored energy to other EVs that are in urgent need. In this way, energy trading via V2V eliminates the need to draw power from bulk power plants, and hence the associated transmission losses are minimized. For instance, the studies in BIB005 BIB010 present mathematical frameworks to model energy trading in a V2V scenario, where groups of EVs determine the amount of energy to exchange and negotiate the unit price. Moreover, EVs can transport their stored energy from one location to another, which can support the grid via V2G applications. For example, BIB006 proposes an EV energy network that exploits the ability of electric vehicles to transport energy to regions of high energy consumption. In this way, the required grid upgrades can be deferred and carried out gradually over time. Intelligent control mechanisms (presented in the next section) can mitigate the aforementioned effects. Such frameworks require both parties (the EVs and the grid) to communicate. According to BIB007 , controlling EV charging can reduce the number of congested (overloaded) network components that need to be replaced, and hence eliminate the need for costly upgrades. It is further shown that controlled EV charging can reduce the cost of energy losses by approximately 20% compared to uncontrolled charging. In addition, EVs can be seen as distributed energy storage media, which are essential for ancillary smart grid applications such as the integration of renewable energy resources and frequency regulation BIB002 . We provide a summary of the negative impacts of uncontrolled EV charging in Figure 3.
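As a rough illustration of valley filling, the sketch below greedily assigns each vehicle's charging energy to the least-loaded hours of its plug-in window. It is a simplified stand-in for the optimization-based schemes cited above (e.g., BIB003, BIB004), and the base-load profile and EV parameters are assumptions, not values from those papers.

```python
# Simplified valley-filling sketch: each EV's energy is greedily assigned to
# the least-loaded hours of its plug-in window. This is a toy stand-in for the
# optimization-based schemes in BIB003/BIB004; the base-load profile and the
# EV parameters below are assumptions for illustration.

base_load = [60, 55, 52, 50, 50, 55, 70, 85, 90, 92, 95, 96,
             97, 98, 100, 102, 105, 108, 104, 98, 90, 80, 72, 65]   # kW

# (arrival hour, departure hour, energy needed in kWh, max charging rate in kW)
evs = [(18, 7, 24.0, 6.6), (20, 6, 16.0, 3.3), (22, 8, 30.0, 6.6)]

def window_hours(arrive, depart):
    """Hours during which the EV is plugged in (window may wrap past midnight)."""
    h = arrive
    while h != depart:
        yield h
        h = (h + 1) % 24

load = list(base_load)
for arrive, depart, energy, rate in evs:
    remaining = energy
    hours = list(window_hours(arrive, depart))
    while remaining > 1e-9 and hours:
        h = min(hours, key=lambda t: load[t])   # fill the current "valley" first
        delta = min(rate, remaining)            # at most one hour at `rate` kW
        load[h] += delta
        remaining -= delta
        hours.remove(h)

print(f"peak load before: {max(base_load)} kW, after valley filling: {max(load):.1f} kW")
```

Because the EV load is pushed into the overnight valley, the aggregate peak stays essentially unchanged in this toy example, which is the behavior the cited valley-filling schemes formalize as an optimization problem.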
|
A survey on communication technologies and requirements for internet of electric vehicles <s> Technical objectives <s> Plug-in hybrid electric vehicles are a midterm solution to reduce the transportation sector's dependency on oil. However, if implemented in a large scale without control, peak load increases significantly and the grid may be overloaded. Two algorithms to address this problem are proposed and analyzed. Both are based on a forecast of future electricity prices and use dynamic programming to find the economically optimal solution for the vehicle owner. The first optimizes the charging time and energy flows. It reduces daily electricity cost substantially without increasing battery degradation. The latter also takes into account vehicle to grid support as a means of generating additional profits by participating in ancillary service markets. Constraints caused by vehicle utilization as well as technical limitations are taken into account. An analysis, based on data of the California independent system operator, indicates that smart charge timing reduces daily electricity costs for driving from $0.43 to $0.2. Provision of regulating power substantially improves plug-in hybrid electric vehicle economics and the daily profits amount to $1.71, including the cost of driving. <s> BIB001 </s> A survey on communication technologies and requirements for internet of electric vehicles <s> Technical objectives <s> This paper uses a new unit commitment model which can simulate the interactions among plug-in hybrid electric vehicles (PHEVs), wind power, and demand response (DR). Four PHEV charging scenarios are simulated for the Illinois power system: (1) unconstrained charging, (2) 3-hour delayed constrained charging, (3) smart charging, and (4) smart charging with DR. The PHEV charging is assumed to be optimally controlled by the system operator in the latter two scenarios, along with load shifting and shaving enabled by DR programs. The simulation results show that optimally dispatching the PHEV charging load can significantly reduce the total operating cost of the system. With DR programs in place, the operating cost can be further reduced. <s> BIB002 </s> A survey on communication technologies and requirements for internet of electric vehicles <s> Technical objectives <s> With the advent of the plug-in hybrid electric vehicles (PHEVs), the vehicle-to-grid (V2G) technology is attracting increasing attention recently. It is believed that the V2G option can aid to improve the efficiency and reliability of the power grid, as well as reduce overall cost and carbon emission. In this paper, the possibility of smoothing out the load variance in a household microgrid by regulating the charging patterns of family PHEVs is investigated. First, the mathematic model of the problem is built up. Then, the case study is conducted, which demonstrates that, by regulating the charging profiles of the PHEVs, the variance of load power can be dramatically reduced. Third, the energy losses and the subsidy mechanism are discussed. Finally, the impacts of the requested net charging quantities and the battery capacity of PHEVs on the performance of the regulated charging are investigated. <s> BIB003
|
The technical control objectives are usually related to the operating limits of the physical power grid assets. The most common objective functions are minimizing energy losses, controlling voltage deviations, reducing the peak-to-average load ratio, smoothing the consumer demand, and supporting renewable energy generation BIB003 BIB001 BIB002 . For vehicle-to-grid and vehicle-to-vehicle applications, the technical objectives also include battery degradation and aging, thermal stability, etc.
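A generic load-flattening formulation, written here in our own notation rather than taken from any specific cited paper, captures several of these technical objectives (peak-to-average reduction and demand smoothing) at once:

```latex
% Generic load-flattening objective; notation is ours, not taken from a
% specific cited paper.
\begin{equation*}
\begin{aligned}
\min_{\{x_{n,t}\}} \quad & \sum_{t=1}^{T} \Big( D_t + \sum_{n=1}^{N} x_{n,t} \Big)^{2} \\
\text{s.t.} \quad & 0 \le x_{n,t} \le \bar{x}_n, \qquad
\sum_{t=1}^{T} x_{n,t} = E_n \quad \forall n,
\end{aligned}
\end{equation*}
```

where D_t is the non-EV base load in slot t, x_{n,t} is the charging power of EV n in slot t, \bar{x}_n is its maximum charging rate, and E_n is the energy it must receive before departure. Minimizing the sum of squared total load flattens the aggregate profile, which indirectly reduces losses and the peak-to-average ratio.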
|
A survey on communication technologies and requirements for internet of electric vehicles <s> Economical objectives <s> Plug-in hybrid electric vehicles are a midterm solution to reduce the transportation sector's dependency on oil. However, if implemented in a large scale without control, peak load increases significantly and the grid may be overloaded. Two algorithms to address this problem are proposed and analyzed. Both are based on a forecast of future electricity prices and use dynamic programming to find the economically optimal solution for the vehicle owner. The first optimizes the charging time and energy flows. It reduces daily electricity cost substantially without increasing battery degradation. The latter also takes into account vehicle to grid support as a means of generating additional profits by participating in ancillary service markets. Constraints caused by vehicle utilization as well as technical limitations are taken into account. An analysis, based on data of the California independent system operator, indicates that smart charge timing reduces daily electricity costs for driving from $0.43 to $0.2. Provision of regulating power substantially improves plug-in hybrid electric vehicle economics and the daily profits amount to $1.71, including the cost of driving. <s> BIB001 </s> A survey on communication technologies and requirements for internet of electric vehicles <s> Economical objectives <s> We study decentralized plug-in electric vehicle (PEV) charging control, wherein the system operator (SO) sends price-based signals to a load aggregator (LA) that optimizes charging of a PEV fleet. We study a pricing scheme that conveys price and quantity information to the LA and compare it to a simpler price-only scheme. We prove that the price/quantity-based mechanism can yield a socially optimal solution. We also examine several numerical case studies to demonstrate the superior performance of the price/quantity-based scheme. The price/quantity scheme yields nearly identical PEV charging costs compared to the social optima, whereas the price-only scheme is highly sensitive to the choice of a regularization penalty term that is needed to ensure convergence. We also show that the time to compute an equilibrium with the price-only mechanism can be up to two orders of magnitude greater than with the price/quantity scheme and can involve 24 times more information exchange between the SO and LA. <s> BIB002 </s> A survey on communication technologies and requirements for internet of electric vehicles <s> Economical objectives <s> In order to push Electric Vehicles (EVs) into the mainstream, the wide deployment of charging stations that can serve multiple classes of customers (e.g. fast charge, slow charge etc.) and provide a certain level of Quality of Service (QoS) is required. However, the operation of the power grid becoming more strenuous due to the addition of new large loads represented by EVs. Hence in this paper we propose a control and resource provisioning framework that can alleviate the strain on the power grid. We propose two design problems; first one considers a charging station located in a big metropolitan with a large and highly stochastic EV demand. For this case, we propose a pricing based control mechanism to maximize the total aggregated utility by controlling the arrival rates. Second case provides a capacity planning framework for stations located in small cities where arrival rates can be obtained via profiling studies. 
At each model, station draws a constant power from the grid and provides QoS guarantees, namely blocking probability, to each class. Hence total stochastic demand is replaced with a deterministic one, by sacrificing to reject a very few percentage of customers. Our results indicate that significant gains can be obtained with the proposed model. <s> BIB003
|
The objective functions that fall into this category are usually linked to the energy market participants: consumers, producers, retailers, etc. The main objectives include minimizing electricity generation and consumption costs. In this case, the objectives are usually modeled with utility functions, and the goal is to develop a charging tariff such that the total cost of charging is minimized compared to the uncontrolled case BIB001 BIB002 BIB003 . It is noteworthy that both types of objectives are ultimately reflected in electricity prices. Hence, in some cases, the technical objectives are coupled with the economic objectives. Nodal pricing is a good example, where technical aspects (the distance to generators, the congestion of transmission lines, etc.) are translated into cost functions and optimal prices are derived in a more holistic manner.
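As a minimal illustration of the economic objective, the sketch below charges a single EV during the cheapest hours of its plug-in window under an assumed time-of-use tariff. The prices, energy requirement, and charging rate are hypothetical and are not taken from the cited studies.

```python
# Minimal illustration of the economic objective: a single EV fills the
# cheapest hours of its plug-in window first under an assumed time-of-use
# tariff. Prices, energy need, and charging rate are hypothetical.

prices = {h: 0.22 for h in range(24)}                          # $/kWh on-peak (assumed)
prices.update({h: 0.09 for h in [22, 23, 0, 1, 2, 3, 4, 5]})   # off-peak (assumed)

energy_needed_kwh = 20.0
max_rate_kw = 6.6
plugged_hours = [19, 20, 21, 22, 23, 0, 1, 2, 3, 4, 5, 6]      # 7 pm to 7 am

remaining, cost = energy_needed_kwh, 0.0
for h in sorted(plugged_hours, key=lambda t: prices[t]):       # cheapest hours first
    if remaining <= 0:
        break
    delta = min(max_rate_kw, remaining)                        # energy taken this hour
    cost += delta * prices[h]
    remaining -= delta

uncontrolled_cost = energy_needed_kwh * 0.22    # rough comparison: all energy at on-peak price
print(f"smart charging: ${cost:.2f} vs. uncontrolled (on-peak): ${uncontrolled_cost:.2f}")
```

Under these assumed prices the smart schedule buys all of its energy off-peak, and the same logic extends to fleet-level tariff design once the aggregate demand of many EVs is considered.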
|
A survey on communication technologies and requirements for internet of electric vehicles <s> Centralized control <s> Alternative vehicles based on internal combustion engines (ICE), such as the hybrid electric vehicle (HEV), the plug-in hybrid electric vehicle (PHEV) and the fuel-cell electric vehicle (FCEV), are becoming increasingly popular. HEVs are currently commercially available and PHEVs will be the next phase in the evolution of hybrid and electric vehicles. The batteries of the PHEVs are designed to be charged at home, from a standard outlet in the garage, or on a corporate car park. The electrical consumption for charging PHEVs may take up to 5% of the total electrical consumption in Belgium by 2030. These extra electrical loads have an impact on the distribution grid which is analyzed in terms of power losses and voltage deviations. Firstly, the uncoordinated charging is described where the vehicles are charged immediately when they are plugged in or after a fixed start delay. This uncoordinated power consumption on a local scale can lead to grid problems. Therefore coordinated charging is proposed to minimize the power losses and to maximize the main grid load factor. The optimal charge profile of the PHEVs is computed by minimizing the power losses. The exact forecasting of household loads is not possible, so stochastic programming is introduced. <s> BIB001 </s> A survey on communication technologies and requirements for internet of electric vehicles <s> Centralized control <s> The main sources of emission today are from the electric power and transportation sectors. One of the main goals of a cyber-physical energy system (CPES) is the integration of renewable energy sources and gridable vehicles (GVs) to maximize emission reduction. GVs can be used as loads, sources and energy storages in CPES. A large CPES is very complex considering all conventional and green distributed energy resources, dynamic data from sensors, and smart operations (e.g., charging/discharging, control, etc.) from/to the grid to reduce both cost and emission. If large number of GVs are connected to the electric grid randomly, peak load will be very high. The use of conventional thermal power plants will be economically expensive and environmentally unfriendly to sustain the electrified transportation. Intelligent scheduling and control of elements of energy systems have great potential for evolving a sustainable integrated electricity and transportation infrastructure. The maximum utilization of renewable energy sources using GVs for sustainable CPES (minimum cost and emission) is presented in this paper. Three models are described and results of the smart grid model show the highest potential for sustainability. <s> BIB002 </s> A survey on communication technologies and requirements for internet of electric vehicles <s> Centralized control <s> At present, the power grid has tight control over its dispatchable generation capacity but a very coarse control on the demand. Energy consumers are shielded from making price-aware decisions, which degrades the efficiency of the market. This state of affairs tends to favor fossil fuel generation over renewable sources. Because of the technological difficulties of storing electric energy, the quest for mechanisms that would make the demand for electricity controllable on a day-to-day basis is gaining prominence. The goal of this paper is to provide one such mechanisms, which we call Digital Direct Load Scheduling (DDLS). 
DDLS is a direct load control mechanism in which we unbundle individual requests for energy and digitize them so that they can be automatically scheduled in a cellular architecture. Specifically, rather than storing energy or interrupting the job of appliances, we choose to hold requests for energy in queues and optimize the service time of individual appliances belonging to a broad class which we refer to as "deferrable loads". The function of each neighborhood scheduler is to optimize the time at which these appliances start to function. This process is intended to shape the aggregate load profile of the neighborhood so as to optimize an objective function which incorporates the spot price of energy, and also allows distributed energy resources to supply part of the generation dynamically. <s> BIB003 </s> A survey on communication technologies and requirements for internet of electric vehicles <s> Centralized control <s> The problem of scheduling for the large scale charging of electric vehicles with renewable sources is considered. A new online charging algorithm referred to as Threshold Admission with Greedy Scheduling (TAGS) is proposed by formulating the charging problem as one of deadline scheduling with admission control and variable charging capacities. TAGS has low computation cost and requires no prior knowledge on the distributions of arrival traffic, battery charging (service) time, and available energy from renewable sources. It has a reserve dispatch algorithm designed to compensate the intermittency of renewable sources. Performance of TAGS is compared with benchmark scheduling algorithms such as the Earliest Deadline First (EDF) and the First Come First Serve (FCFS) with aggressive and conservative reserve dispatch algorithms. <s> BIB004 </s> A survey on communication technologies and requirements for internet of electric vehicles <s> Centralized control <s> The introduction of plug-in hybrid electric vehicles (PHEVs) and electric vehicles (EVs), commonly referred to as plug-in electric vehicles (PEVs), could trigger a stepwise electrification of the whole transportation sector. However, the potential impact of PEV charging on the electric grid is not fully known, yet. This paper presents an iterative approach, which integrates a PEV electricity demand model and a power system simulation to reveal potential bottlenecks in the electric grid caused by PEV energy demand. An agent-based traffic demand model is used to model the electricity demand of each vehicle over the day. An approach based on interconnected multiple energy carrier systems is used as a model for a possible future energy system. Experiments demonstrate that the model is sensitive to policy changes, e.g., changes in electricity price result in modified charging patterns. By implementing an intelligent vehicle charging solution it is demonstrated how new charging schemes can be designed and tested using the proposed framework. <s> BIB005 </s> A survey on communication technologies and requirements for internet of electric vehicles <s> Centralized control <s> In order to increase the penetration of electric vehicles, a network of fast charging stations that can provide drivers with a certain level of quality of service (QoS) is needed. However, given the strain that such a network can exert on the power grid, and the mobility of loads represented by electric vehicles, operating it efficiently is a challenging and complex problem. 
In this paper, we examine a network of charging stations equipped with an energy storage device and propose a scheme that allocates power to them from the grid, as well as routes customers. We examine three scenarios, gradually increasing their complexity. In the first one, all stations have identical charging capabilities and energy storage devices, draw constant power from the grid and no routing decisions of customers are considered. It represents the current state of affairs and serves as a baseline for evaluating the performance of the proposed scheme. In the second scenario, power to the stations is allocated in an optimal manner from the grid and in addition a certain percentage of customers can be routed to nearby stations. In the final scenario, optimal allocation of both power from the grid and customers to stations is considered. The three scenarios are evaluated using real traffic traces corresponding to weekday rush hour from a large metropolitan area in the US. The results indicate that the proposed scheme offers substantial improvements of performance compared to the current mode of operation; namely, more customers can be served with the same amount of power, thus enabling the station operators to increase their profitability. Further, the scheme provides guarantees to customers in terms of the probability of being blocked (and hence not served) by the closest charging station to their location. Overall, the paper addresses key issues related to the efficient operation, both from the perspective of the power grid and the drivers satisfaction, of a network of charging stations. <s> BIB006
Centralized control employs a central authority (dispatcher) that, to a large extent, controls and mandates the EV charging rate, start time, etc. System-level decisions, such as the desired state of charge and the charging intervals, are taken so that all charging jobs finish by a certain deadline (e.g., by 7 am). The main advantages of centralized control are higher utilization of power grid resources and real-time monitoring of operating conditions across the network. On the other hand, enabling such functionalities requires an advanced communication network. The studies presented in BIB006 BIB003 BIB002 BIB005 BIB001 are examples of centralized scheduling. These studies differ in the assumptions they make: interruptible vs. uninterruptible load, constant vs. varying charging rate, and preemptive vs. non-preemptive jobs. The management of EV fleets (e.g., school buses, postal service vehicles, etc.) is a good example of centralized control. In this case, fleet owners can draw contracts with the utility operators and receive discounts; in return, the utility can orchestrate the EV demand according to network conditions to minimize its operating cost. Moreover, a deadline scheduling policy with admission control has been proposed and compared with the classic earliest deadline first and first come first serve policies. Similarly, the authors of BIB004 use an admission control algorithm called Threshold Admission with Greedy Scheduling; in addition, their model incorporates renewable energy resources to charge the electric vehicles. A toy example of such deadline-based centralized dispatch is sketched below.
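To make the centralized dispatch idea concrete, the following toy sketch (in Python, with hypothetical fleet data and feeder capacity; it is not the algorithm of any cited work) grants charging power slot by slot under a shared feeder cap and compares an earliest-deadline-first ordering against a first-come-first-serve ordering:

```python
# Toy centralized dispatcher: in every time slot, rank the plugged-in EVs and
# grant charging power up to a shared feeder cap. Fleet data and the cap are
# hypothetical; one slot corresponds to one hour.
from dataclasses import dataclass

@dataclass
class EV:
    name: str
    arrival: int       # slot index when the EV plugs in
    deadline: int      # slot by which charging must be complete (exclusive)
    energy: float      # kWh still required
    rate: float = 3.3  # kW maximum charging rate

def simulate(evs, feeder_cap_kw, horizon, policy):
    """policy(ev) returns a sort key; lower key means served first."""
    evs = [EV(**vars(e)) for e in evs]            # work on copies
    for t in range(horizon):
        active = [e for e in evs if e.arrival <= t < e.deadline and e.energy > 0]
        remaining_cap = feeder_cap_kw
        for ev in sorted(active, key=policy):
            power = min(ev.rate, ev.energy, remaining_cap)
            ev.energy -= power
            remaining_cap -= power
    return [e.name for e in evs if e.energy > 1e-9]   # deadlines missed

fleet = [EV("busA", 0, 6, 13), EV("busB", 0, 4, 10),
         EV("van1", 1, 5, 12), EV("van2", 2, 8, 8)]
cap = 7.0   # kW available for EV charging on this feeder

print("EDF  misses:", simulate(fleet, cap, 10, policy=lambda e: e.deadline))
print("FCFS misses:", simulate(fleet, cap, 10, policy=lambda e: e.arrival))
```

With these toy numbers, the earliest-deadline-first ordering meets every deadline, whereas first-come-first-serve strands one vehicle; quantifying this kind of gap at scale is exactly what the admission-control studies above do.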
A survey on communication technologies and requirements for internet of electric vehicles <s> Distributed control <s> Deployment of PHEV will initiate an integration of transportation and power systems. Intuitively, the PHEVs will constitute an additional demand to the electricity grid, potentially violating converter or line capacities when recharging. Smart management schemes can alleviate possible congestions in power systems, intelligently distributing available energy. As PHEV are inherently independent entities, an agent based approach is expedient. Nonlinear pricing will be adapted to model and manage recharging behavior of large numbers of autonomous PHEV agents connecting in one urban area modelled as an energy hub. The scheme will incorporate price dependability. An aggregation entity, with no private information about its customers, will manage the PHEV agents whose individual parameters will be based on technical constraints and individual objectives. Analysis of the management scheme will give implications for PHEV modelling and integration schemes as well as tentative ideas of possible repercussions on power systems. <s> BIB001 </s> A survey on communication technologies and requirements for internet of electric vehicles <s> Distributed control <s> This paper discusses conceptual frameworks for actively involving highly distributed loads in power system control actions. The context for load control is established by providing an overview of system control objectives, including economic dispatch, automatic generation control, and spinning reserve. The paper then reviews existing initiatives that seek to develop load control programs for the provision of power system services. We then discuss some of the challenges to achieving a load control scheme that balances device-level objectives with power system-level objectives. One of the central premises of the paper is that, in order to achieve full responsiveness, direct load control (as opposed to price response) is required to enable fast time scale, predictable control opportunities, especially for the provision of ancillary services such as regulation and contingency reserves. Centralized, hierarchical, and distributed control architectures are discussed along with benefits and disadvantages, especially in relation to integration with the legacy power system control architecture. Implications for the supporting communications infrastructure are also considered. Fully responsive load control is illustrated in the context of thermostatically controlled loads and plug-in electric vehicles. <s> BIB002 </s> A survey on communication technologies and requirements for internet of electric vehicles <s> Distributed control <s> Plug-in hybrid electric vehicles are a midterm solution to reduce the transportation sector's dependency on oil. However, if implemented in a large scale without control, peak load increases significantly and the grid may be overloaded. Two algorithms to address this problem are proposed and analyzed. Both are based on a forecast of future electricity prices and use dynamic programming to find the economically optimal solution for the vehicle owner. The first optimizes the charging time and energy flows. It reduces daily electricity cost substantially without increasing battery degradation. The latter also takes into account vehicle to grid support as a means of generating additional profits by participating in ancillary service markets. 
Constraints caused by vehicle utilization as well as technical limitations are taken into account. An analysis, based on data of the California independent system operator, indicates that smart charge timing reduces daily electricity costs for driving from $0.43 to $0.2. Provision of regulating power substantially improves plug-in hybrid electric vehicle economics and the daily profits amount to $1.71, including the cost of driving. <s> BIB003 </s> A survey on communication technologies and requirements for internet of electric vehicles <s> Distributed control <s> Motivated by the power-grid-side challenges in the integration of electric vehicles, we propose a decentralized protocol for negotiating day-ahead charging schedules for electric vehicles. The overall goal is to shift the load due to electric vehicles to fill the overnight electricity demand valley. In each iteration of the proposed protocol, electric vehicles choose their own charging profiles for the following day according to the price profile broadcast by the utility, and the utility updates the price profile to guide their behavior. This protocol is guaranteed to converge, irrespective of the specifications (e.g., maximum charging rate and deadline) of electric vehicles. At convergence, the l 2 norm of the aggregated demand is minimized, and the aggregated demand profile is as “flat” as it can possibly be. The proposed protocol needs no coordination among the electric vehicles, hence requires low communication and computation capability. Simulation results demonstrate convergence to optimal collections of charging profiles within few iterations. <s> BIB004 </s> A survey on communication technologies and requirements for internet of electric vehicles <s> Distributed control <s> There is expected to be a large penetration of Plug-in Hybrid Electric Vehicles (PHEVs) into the market in the near future. As a result, many technical problems related to the impact of this technology on the power grid need to be addressed. The anticipating large penetration of PHEV into our societies will add a substantial energy load to power grids, as well as add substantial energy resources that can be utilized. There is also a need for in-depth study on PHEVs in term of Smart Grid environment. In this paper, we propose an algorithm for optimally managing a large number of PHEVs (i.e., 500) charging at a municipal parking station. We used Particle Swarm Optimization (PSO) to intelligently allocate energy to the PHEVs. We considered constraints such as energy price, remaining battery capacity, and remaining charging time. A mathematical framework for the objective function (i.e., maximizing the average State-of-Charge at the next time step) is also given. We characterized the performance of our PSO algorithm using a MATLAB simulation, and compared it with other techniques. <s> BIB005 </s> A survey on communication technologies and requirements for internet of electric vehicles <s> Distributed control <s> In this paper, the problem of grid-to-vehicle energy exchange between a smart grid and plug-in electric vehicle groups (PEVGs) is studied using a noncooperative Stackelberg game. In this game, on the one hand, the smart grid, which acts as a leader, needs to decide on its price so as to optimize its revenue while ensuring the PEVGs' participation. On the other hand, the PEVGs, which act as followers, need to decide on their charging strategies so as to optimize a tradeoff between the benefit from battery charging and the associated cost. 
Using variational inequalities, it is shown that the proposed game possesses a socially optimal Stackelberg equilibrium in which the grid optimizes its price while the PEVGs choose their equilibrium strategies. A distributed algorithm that enables the PEVGs and the smart grid to reach this equilibrium is proposed and assessed by extensive simulations. Further, the model is extended to a time-varying case that can incorporate and handle slowly varying environments. <s> BIB006 </s> A survey on communication technologies and requirements for internet of electric vehicles <s> Distributed control <s> To address the grid-side challenges associated with the anticipated high electric vehicle (EV) penetration level, various charging protocols have been proposed in the literature. Most if not all of these protocols assume continuous charging rates and allow intermittent charging. However, due to charging technology limitations, EVs can only be charged at a fixed rate, and the intermittency in charging shortens the battery lifespan. We consider these charging requirements, and formulate EV charging scheduling as a discrete optimization problem. We propose a stochastic distributed algorithm to approximately solve the optimal EV charging scheduling problem in an iterative procedure. In each iteration, the transformer receives charging profiles computed by the EVs in the previous iteration, and broadcasts the corresponding normalized total demand to the EVs; each EV generates a probability distribution over its potential charging profiles accordingly, and samples from the distribution to obtain a new charging profile. We prove that this stochastic algorithm almost surely converges to one of its equilibrium charging profiles, and each of its equilibrium charging profiles has a negligible sub-optimality ratio. Case studies corroborate our theoretical results. <s> BIB007 </s> A survey on communication technologies and requirements for internet of electric vehicles <s> Distributed control <s> We propose a decentralized algorithm to optimally schedule electric vehicle (EV) charging. The algorithm exploits the elasticity of electric vehicle loads to fill the valleys in electric load profiles. We first formulate the EV charging scheduling problem as an optimal control problem, whose objective is to impose a generalized notion of valley-filling, and study properties of optimal charging profiles. We then give a decentralized algorithm to iteratively solve the optimal control problem. In each iteration, EVs update their charging profiles according to the control signal broadcast by the utility company, and the utility company alters the control signal to guide their updates. The algorithm converges to optimal charging profiles (that are as “flat” as they can possibly be) irrespective of the specifications (e.g., maximum charging rate and deadline) of EVs, even if EVs do not necessarily update their charging profiles in every iteration, and use potentially outdated control signal when they update. Moreover, the algorithm only requires each EV solving its local problem, hence its implementation requires low computation capability. We also extend the algorithm to track a given load profile and to real-time implementation. <s> BIB008 </s> A survey on communication technologies and requirements for internet of electric vehicles <s> Distributed control <s> This paper develops a strategy to coordinate the charging of autonomous plug-in electric vehicles (PEVs) using concepts from non-cooperative games. 
The foundation of the paper is a model that assumes PEVs are cost-minimizing and weakly coupled via a common electricity price. At a Nash equilibrium, each PEV reacts optimally with respect to a commonly observed charging trajectory that is the average of all PEV strategies. This average is given by the solution of a fixed point problem in the limit of infinite population size. The ideal solution minimizes electricity generation costs by scheduling PEV demand to fill the overnight non-PEV demand “valley”. The paper's central theoretical result is a proof of the existence of a unique Nash equilibrium that almost satisfies that ideal. This result is accompanied by a decentralized computational algorithm and a proof that the algorithm converges to the Nash equilibrium in the infinite system limit. Several numerical examples are used to illustrate the performance of the solution strategy for finite populations. The examples demonstrate that convergence to the Nash equilibrium occurs very quickly over a broad range of parameters, and suggest this method could be useful in situations where frequent communication with PEVs is not possible. The method is useful in applications where fully centralized control is not possible, but where optimal or near-optimal charging patterns are essential to system operation. <s> BIB009 </s> A survey on communication technologies and requirements for internet of electric vehicles <s> Distributed control <s> With the interest from car owners in going green being at an all-time high, electric vehicles (EVs) are flooding the automobile market. One of the primary concerns in owning an EV is the availability of charging infrastructure while away from home. There has been a renewed interest in managing and pricing the usage of shared commercial EV chargers, while maximizing the operator's profits. Towards this end, we propose a combined pricing-scheduling quadratic integer programming (QIP) model that iteratively prices and schedules EV charging. A pricing module is used to accept/reject charging requests and control the right number and types (arrival-departure times, charge demand etc.) of EVs to charge. The scheduling module ensures that the demand can be met subject to price-demand sensitivity and other scheduling constraints. Once the EVs to be accepted have been finalized and their permit prices determined, the scheduling module can be run every night once the day-to-day arrival and departure times of each EV is revealed to the operator. <s> BIB010 </s> A survey on communication technologies and requirements for internet of electric vehicles <s> Distributed control <s> The operation of the power grid is becoming more stressed, due to the addition of new large loads represented by electric vehicles (EVs) and a more intermittent supply due to the incorporation of renewable sources. As a consequence, the coordination and control of projected EV demand in a network of fast charging stations becomes a critical and challenging problem. In this paper, we introduce a game theoretic based decentralized control mechanism to alleviate negative impacts from the EV demand. 
The proposed mechanism takes into consideration the nonuniform spatial distribution of EVs that induces uneven power demand at each charging facility, and aims to: 1) avoid straining grid resources by offering price incentives, so that customers accept being routed to less busy stations; 2) maximize total revenue by serving more customers with the same amount of grid resources; and 3) provide charging service to customers with a certain level of quality-of-service (QoS), the latter defined as the long term customer blocking probability. We examine three scenarios of increased complexity that gradually approximate real world settings. The obtained results show that the proposed framework leads to substantial performance improvements in terms of the aforementioned goals when compared to current state of affairs. <s> BIB011
Decentralized control allows customers to choose their individual charging patterns. Decisions can be based on the price of electricity or the time of the day. This method eliminates the need for a third-party controller (dispatcher) and for complex monitoring techniques. Since decisions are taken individually, game-theoretic models are extensively employed. The works presented in BIB006 BIB011 use a Stackelberg game to model the interactions between the system operator (leader), who sets the prices and has the first-move advantage, and the individual EVs (followers), who respond to price changes by adjusting their demand. Another popular approach is to seek a Nash equilibrium, in which pricing emerges from the maximization of individual utility functions BIB008 BIB002 . Other employed models include mean field games, potential games, and network routing games BIB008 BIB002 BIB003 BIB001 BIB007 BIB004 BIB009 . In addition to the scheduling of night-time charging, there is interest in the large-scale charging of groups of stationary EVs (park and charge). For instance, BIB005 uses particle swarm optimization to allocate power to EVs in a parking lot, and the authors of BIB010 propose a combined pricing-scheduling quadratic integer programming model to determine optimal prices and schedules for managing EV demand in large-scale parking lots. A simplified numerical sketch of a price-guided decentralized protocol is given below.
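To illustrate the price-guided decentralized protocols discussed above, the following simplified sketch (hypothetical base load and fleet parameters; a Frank-Wolfe-style simplification rather than the exact algorithm of any cited work) lets the utility broadcast the aggregate load as a price signal and lets each EV respond using only its own energy need and rate limit:

```python
# Simplified decentralized valley-filling sketch. The utility broadcasts the
# aggregate load as a price signal; each EV fills its cheapest slots subject to
# its own constraints, and profiles are averaged with a diminishing step size.
# All numbers are hypothetical.
import numpy as np

T = 24                                         # one-hour slots over a day
rng = np.random.default_rng(0)
base_load = 60 + 25 * np.sin(np.linspace(0, 2 * np.pi, T))   # kW, non-EV demand
N = 20                                         # number of EVs
energy_need = rng.uniform(8, 16, N)            # kWh each EV must receive
max_rate = 3.3                                 # kW per-slot charging limit

def best_response(price, need):
    """Each EV fills its cheapest slots first, up to its per-slot rate limit."""
    profile = np.zeros(T)
    remaining = need
    for t in np.argsort(price):
        amount = min(max_rate, remaining)
        profile[t] = amount
        remaining -= amount
        if remaining <= 0:
            break
    return profile

profiles = np.zeros((N, T))
for k in range(200):
    aggregate = base_load + profiles.sum(axis=0)
    price = aggregate                          # broadcast marginal-cost-like signal
    step = 2.0 / (k + 2)                       # diminishing step (Frank-Wolfe averaging)
    for n in range(N):
        target = best_response(price, energy_need[n])
        profiles[n] += step * (target - profiles[n])

# Valley-filling keeps the peak (almost) unchanged while the mean rises,
# so the peak-to-average ratio improves.
final = base_load + profiles.sum(axis=0)
print(f"peak-to-average ratio, non-EV load only  : {base_load.max() / base_load.mean():.3f}")
print(f"peak-to-average ratio, with scheduled EVs: {final.max() / final.mean():.3f}")
```

Because the EVs move their demand into the least loaded slots, the aggregate profile flattens, which is the valley-filling behavior targeted by the cited schemes, while each EV only needs the broadcast price and its own constraints.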
A survey on communication technologies and requirements for internet of electric vehicles <s> Scale of the problem <s> This paper discusses conceptual frameworks for actively involving highly distributed loads in power system control actions. The context for load control is established by providing an overview of system control objectives, including economic dispatch, automatic generation control, and spinning reserve. The paper then reviews existing initiatives that seek to develop load control programs for the provision of power system services. We then discuss some of the challenges to achieving a load control scheme that balances device-level objectives with power system-level objectives. One of the central premises of the paper is that, in order to achieve full responsiveness, direct load control (as opposed to price response) is required to enable fast time scale, predictable control opportunities, especially for the provision of ancillary services such as regulation and contingency reserves. Centralized, hierarchical, and distributed control architectures are discussed along with benefits and disadvantages, especially in relation to integration with the legacy power system control architecture. Implications for the supporting communications infrastructure are also considered. Fully responsive load control is illustrated in the context of thermostatically controlled loads and plug-in electric vehicles. <s> BIB001 </s> A survey on communication technologies and requirements for internet of electric vehicles <s> Scale of the problem <s> This paper develops a strategy to coordinate the charging of autonomous plug-in electric vehicles (PEVs) using concepts from non-cooperative games. The foundation of the paper is a model that assumes PEVs are cost-minimizing and weakly coupled via a common electricity price. At a Nash equilibrium, each PEV reacts optimally with respect to a commonly observed charging trajectory that is the average of all PEV strategies. This average is given by the solution of a fixed point problem in the limit of infinite population size. The ideal solution minimizes electricity generation costs by scheduling PEV demand to fill the overnight non-PEV demand “valley”. The paper's central theoretical result is a proof of the existence of a unique Nash equilibrium that almost satisfies that ideal. This result is accompanied by a decentralized computational algorithm and a proof that the algorithm converges to the Nash equilibrium in the infinite system limit. Several numerical examples are used to illustrate the performance of the solution strategy for finite populations. The examples demonstrate that convergence to the Nash equilibrium occurs very quickly over a broad range of parameters, and suggest this method could be useful in situations where frequent communication with PEVs is not possible. The method is useful in applications where fully centralized control is not possible, but where optimal or near-optimal charging patterns are essential to system operation. <s> BIB002
The scale of the control framework can vary from the individual customer level to the entire transmission level. We classify the scale of the problem into three categories. • Transmission scale: At this scale, transmission system operators and wholesale energy markets operate. Accordingly, the control techniques applied consider thousands of EVs located in large geographical regions. The primary goal at this scale is to develop pricing policies that achieve optimal valley-filling during the night BIB001 BIB002 .
A survey on communication technologies and requirements for internet of electric vehicles <s> EV-electric vehicle supply equipment <s> In the US, more than 10,000 electric vehicles (EV) have been delivered to consumers during the first three quarters of 2011. A large majority of these vehicles are battery electric, often requiring 220 volt charging. Though the vehicle manufacturers and charging station manufacturers have provided consumers options for charging preferences, there are no existing communications between consumers and the utilities to manage the charging demand. There is also wide variation between manufacturers in their approach to support vehicle charging. There are in-vehicle networks, charging station networks, utility networks each using either cellular, Wi-Fi, ZigBee or other proprietary communication technology with no standards currently available for interoperability. The current situation of ad-hoc solutions is a major barrier to the wide adoption of electric vehicles. SAE, the International Standards Organization/International Electrotechnical Commission (ISO/IEC), ANSI, National Institute of Standards and Technology (NIST) and several industrial organizations are working towards the development of interoperability standards. PNNL has participated in the development and testing of these standards in an effort to accelerate the adoption and development of communication modules. <s> BIB001 </s> A survey on communication technologies and requirements for internet of electric vehicles <s> EV-electric vehicle supply equipment <s> With the increase in momentum in the transformation of the current grid to smart grid, there is an immediate need of proper standards in place for various distributed resources of energy. Electric vehicles are one such resource and have tremendous potential to play a part in the transformation of the grid and also to make the customers participate in clean technology initiatives. As with all new technologies, equipment, or processes, there is a requirement of body of standards that will govern the functioning of the electric vehicles and will also pave the way for easy assimilation into the fabric of consumer's lifestyle and vendors alike. <s> BIB002
Communication at the customer premises takes place at several interfaces. The first group contains the standards and technologies for communication between the electric vehicle and the electric vehicle supply equipment (EVSE), which are required for energy transfer monitoring and management, billing information, and authorization. Standardization is required for the fast adoption of EVs and the proper functioning of the electric vehicle network components. The Society of Automotive Engineers (SAE) has defined the communication standards that apply while an EV is being charged. We describe these standards below BIB001 .
• SAE J2293: This standard covers the functionalities and architectures required for the EV energy transfer system.
• SAE J2836/1 and J2847/1: Define use cases and requirements for communications between EVs and the power grid, primarily for energy transfer. The central focus is on grid-optimized energy transfer that guarantees drivers have enough energy while minimizing the stress on the grid.
• SAE J2836/2 and J2847/2: Define the use cases and requirements for the communications between electric vehicles and an off-board DC charger.
• SAE J2836/3 and J2847/3: Identify use cases and additional messages for DC energy transfer when the EV acts as a distributed energy resource, i.e., reverse (vehicle-to-grid) power flow.
• SAE J2931: Defines the digital communication requirements between the EV and off-board devices. SAE J2931/1 covers power line communications for EVs.
• SAE J2931/2: Defines the requirements for physical layer communications with in-band signaling between the EV and the EVSE.
In Figure 7, an overview of the SAE communication standards is presented. For instance, the J2836/1 use cases for utility programs may include time-of-use, real-time pricing, or critical peak pricing programs BIB002 . Moreover, the International Electrotechnical Commission (IEC) is developing several standards for the DC fast charging option. IEC 61851-23 presents the requirements for grid connections and the communication architecture for fast charging, and IEC 61851-24 defines the digital communications between the EV and the EVSE.
A survey on communication technologies and requirements for internet of electric vehicles <s> Energy management unit to power grid <s> Energy and resource management is an important and growing research area at the intersection of conservation, sustainable design, alternative energy production, and social behavior. Energy consumption can be significantly reduced by simply changing how occupants inhabit and use buildings, with little or no additional costs. Reflecting this fact, an emerging measure of grid energy capacity is the negawatt: a unit of power saved by increasing efficiency or reducing consumption.Visualization clearly has an important role in enabling residents to understand and manage their energy use. This role is tied to providing real-time feedback of energy use, which encourages people to conserve energy.The challenge is to understand not only what kinds of visualizations are most effective but also where and how they fit into a larger information system to help residents make informed decisions. In this article, we also examine the effective display of home energy-use data using a net-zero solar-powered home (North House) and the Adaptive Living Interface System (ALIS), North House's information backbone. <s> BIB001 </s> A survey on communication technologies and requirements for internet of electric vehicles <s> Energy management unit to power grid <s> We investigate the use of white spaces in the TV spectrum for Advanced Meter Infrastructure (AMI) communications. We provide a design for using white spaces for AMI and show its benefits in terms of bandwidth, deployment, and cost. We also discuss ongoing work on applying machine learning classification techniques to improve the attack resilience of spectrum data fusion in the proposed architecture. <s> BIB002 </s> A survey on communication technologies and requirements for internet of electric vehicles <s> Energy management unit to power grid <s> Are Power Line Communications (PLC) a good candidate for Smart Grid applications? The objective of this paper is to address this important question. To do so, we provide an overview of what PLC can deliver today by surveying its history and describing the most recent technological advances in the area. We then address Smart Grid applications as instances of sensor networking and network control problems and discuss the main conclusions one can draw from the literature on these subjects. The application scenario of PLC within the Smart Grid is then analyzed in detail. Because a necessary ingredient of network planning is modeling, we also discuss two aspects of engineering modeling that relate to our question. The first aspect is modeling the PLC channel through fading models. The second aspect we review is the Smart Grid control and traffic modeling problem which allows us to achieve a better understanding of the communications requirements. Finally, this paper reports recent studies on the electrical and topological properties of a sample power distribution network. Power grid topological studies are very important for PLC networking as the power grid is not only the information source but also the information delivery system-a unique feature when PLC is used for the Smart Grid. <s> BIB003 </s> A survey on communication technologies and requirements for internet of electric vehicles <s> Energy management unit to power grid <s> Propelled by the need to reduce the impact on the environment and improve energy efficiency, we see an impetus toward enabling a smart grid. 
One of the key constituents of the smart grid is the automated metering infrastructure, which is expected to facilitate the transport of meter readings from meters to the utility provider, and (potentially) control information in the other direction. A range of communication technologies are being considered for realizing AMI networks with no clear winner so far. This article provides an overview of some of the candidate solutions and proposes a mesh-radio based solution. The proposed solution is an enhanced version of the RPL protocol and exhibits self-organizing characteristics, and is practical and therefore attractive from a deployment perspective. Additionally, we also discuss network operational issues to improve robustness and scalability, as well as fault recovery due to link failure. <s> BIB004 </s> A survey on communication technologies and requirements for internet of electric vehicles <s> Energy management unit to power grid <s> A communication infrastructure is an essential part to the success of the emerging smart grid. A scalable and pervasive communication infrastructure is crucial in both construction and operation of a smart grid. In this paper, we present the background and motivation of communication infrastructures in smart grid systems. We also summarize major requirements that smart grid communications must meet. From the experience of several industrial trials on smart grid with communication infrastructures, we expect that the traditional carbon fuel based power plants can cooperate with emerging distributed renewable energy such as wind, solar, etc, to reduce the carbon fuel consumption and consequent green house gas such as carbon dioxide emission. The consumers can minimize their expense on energy by adjusting their intelligent home appliance operations to avoid the peak hours and utilize the renewable energy instead. We further explore the challenges for a communication infrastructure as the part of a complex smart grid system. Since a smart grid system might have over millions of consumers and devices, the demand of its reliability and security is extremely critical. Through a communication infrastructure, a smart grid can improve power reliability and quality to eliminate electricity blackout. Security is a challenging issue since the on-going smart grid systems facing increasing vulnerabilities as more and more automation, remote monitoring/controlling and supervision entities are interconnected. <s> BIB005 </s> A survey on communication technologies and requirements for internet of electric vehicles <s> Energy management unit to power grid <s> Information and communication technologies (ICT) represent a fundamental element in the growth and performance of smart grids. A sophisticated, reliable and fast communication infrastructure is, in fact, necessary for the connection among the huge amount of distributed elements, such as generators, substations, energy storage systems and users, enabling a real time exchange of data and information necessary for the management of the system and for ensuring improvements in terms of efficiency, reliability, flexibility and investment return for all those involved in a smart grid: producers, operators and customers. This paper overviews the issues related to the smart grid architecture from the perspective of potential applications and the communications requirements needed for ensuring performance, flexible operation, reliability and economics. 
<s> BIB006 </s> A survey on communication technologies and requirements for internet of electric vehicles <s> Energy management unit to power grid <s> The operation and control of the next generation electrical grids will depend on a complex network of computers, software, and communication technologies. Being compromised by a malicious adversary would cause significant damage, including extended power outages and destruction of electrical equipment. Moreover, the implementation of the smart grid will include the deployment of many new enabling technologies such as advanced sensors and metering, and the integration of distributed generation resources. Such technologies and various others will require the addition and utilization of multiple communication mechanisms and infrastructures that may suffer from serious cyber vulnerabilities. These need to be addressed in order to increase the security and thus the greatest adoption and success of the smart grid. In this article, we focus on the communication security aspect, which deals with the distribution component of the smart grid. Consequently, we target the network security of the advanced metering infrastructure coupled with the data communication toward the transmission infrastructure. We discuss the security and feasibility aspects of possible communication mechanisms that could be adopted on that subpart of the grid. By accomplishing this, the correlated vulnerabilities in these systems could be remediated, and associated risks may be mitigated for the purpose of enhancing the cyber security of the future electric grid. <s> BIB007 </s> A survey on communication technologies and requirements for internet of electric vehicles <s> Energy management unit to power grid <s> The American National Standards Institute (ANSI) announced the publication of Standardization Roadmap for Electric Vehicles?Version 2.0, developed by its Electric Vehicles Standards Panel (EVSP). Available as a free download, the document tracks the progress of the implementation of recommendations made in the roadmap version 1.0, released in April 2012, and identifies additional areas where there is a perceived need for standardization work to help facilitate the safe mass deployment of electric vehicles (EVs) and charging infrastructure in the United States. <s> BIB008 </s> A survey on communication technologies and requirements for internet of electric vehicles <s> Energy management unit to power grid <s> Spectrum today is allocated in frequency blocks that serve either licensed or unlicensed services. This static spectrum allocation has limited resources to support the exponential increase in wireless devices. In this article, we present the IEEE 802.11af standard, which defines international specifications for spectrum sharing among unlicensed white space devices (WSDs) and licensed services in the TV white space band. Spectrum sharing is conducted through the regulation of unlicensed WSDs by a geolocation database (GDB), the implementation of which differs among regulatory domains. The main difference between regulatory domains is the timescale in which WSDs are controlled by the GDB, resulting in different TVWS availability and WSD operating parameters. The IEEE 802.11af standard provides a common operating architecture and mechanisms for WSDs to satisfy multiple regulatory domains. This standard opens a new approach to treat spectrum as a single entity shared seamlessly by heterogeneous services. 
<s> BIB009 </s> A survey on communication technologies and requirements for internet of electric vehicles <s> Energy management unit to power grid <s> ZigBee provides a simple and reliable solution for the advanced measuring infrastructures. However, the current routing algorithms cannot fully satisfy the requirements of the application, and the characteristics of the node deployment and the data flows should be more considered. In this paper, we propose a minimum physical distance (MPD) delivery protocol based on the ZigBee specification in the smart grid to optimize the transmission of the monitoring and command packets which are from or to the ZigBee coordinator (ZC). The physical depth, which is introduced to indicate the least hops to the ZC, and the transmission paths are decided based on the neighbour table information. The simulation results show that the MPD could improve the performance of the monitoring and controlling packet transmission, it provided high reliability and short paths, the bits sent by the devices except the coordinator were reduced and the end-to-end delay was also shortened. <s> BIB010
Visualization of energy consumption clearly helps customers to understand the cost of their energy usage. However, optimal decisions can only be taken by automated management systems BIB001 . Energy management units (EMUs) enable the customer-to-power-grid interaction: customers can monitor, control, and optimize their energy consumption. Even though energy management systems have been on the market for a few decades, their widespread adoption has gained pace with the recent advances in the smart grid; recent developments in EMUs are surveyed in the literature. The EVSE connects to the EMU via a home area network (HAN). The most popular technologies for the HAN are ZigBee [79, BIB010 ], 802.11-based wireless local area networks (WLAN), and femtocells. ZigBee offers the required coverage (30 to 40 m) and data rate (250 Kbps) with low power usage and deployment cost; in fact, it has a considerable market share in the utility world BIB005 BIB006 . The ubiquity of 802.11-capable devices makes WLAN a strong candidate for the HAN; the details of the WLAN technology are given in the next section. A comprehensive summary is presented in Table 3. In these comparison tables, low (L) denotes latency below 250 ms, throughput below 500 Kbps, and fewer than 100 nodes per backhaul node; medium (M) denotes latency of 250 ms to 1 s, throughput of 500 to 1,500 Kbps, and 100 to 1,000 nodes per backhaul node; high (H) denotes latency above 1 s, throughput above 1,500 Kbps, and more than 1,000 nodes per backhaul node; a wireless mesh network can be implemented with WiFi nodes. Femtocells are usually employed as access points of cellular networks. This technology uses the customer's broadband connection (DSL, cable, etc.) to connect to the wireless carrier's core network; this way, femtocells offer the indoor coverage and capacity required for smart grid applications. Communication technologies for home area networks, with a special focus on security, are presented in BIB007 . For residential charging, the communication between the EMU and the power grid is supported by the existing advanced metering infrastructure (AMI) network BIB008 . There are several candidates for this purpose.
Power line communications (PLC): PLC is a strong candidate for the EMU-to-grid interaction. The main motivation for PLC is that the already existing grid infrastructure reaches every EMU that wants to charge an EV. There are three different types of PLC technologies, classified by the frequency band used and the data rate. Broadband PLC uses the 1.8 to 250 MHz frequency band, and its physical data rate varies from a few megabits to hundreds of megabits per second. Narrowband PLC operates in the 3 to 500 kHz band and provides lower data rates. The third type of PLC is the ultra narrowband technology, which is also the oldest of the three; it only provides data rates of around a hundred bits per second. Several million PLC-based devices have already been deployed globally BIB003 . Moreover, for EV-to-EVSE communications, PLC supports an apparent physical association that cannot be achieved by its wireless alternatives. Another distinctive advantage is that the cost of PLC deployment is relatively low when compared to other wireline options and can be comparable to that of wireless technologies. However, PLC has several disadvantages. First, the communication medium is harsh and noisy. Second, transformers cause high attenuation, which limits the range of the communication. Repeaters can be employed to overcome this problem, but the additional cost should be taken into account beforehand.
The final disadvantage is that regulations in some countries limit the use of PLC; for instance, PLC is not allowed in indoor environments in Japan BIB004 .
White-space networking: The long-term assignment of wireless spectrum to parties such as digital TV broadcasters has created an inefficient use of the spectrum. The authors of BIB002 propose to use TV white spaces to meet the communication requirements between users and the grid. IEEE 802.22 is the wireless regional area network (WRAN) standard that uses white spaces in the spectrum. This technology offers the following benefits: it allows high data rates in a cost-effective way; it has deep penetration and long-range transmission capabilities, which would eliminate the need for complex designs between the EMUs and the data aggregation units; and high coverage can easily be achieved. IEEE 802.11af, also referred to as 'White-Fi' and 'Super Wi-Fi', is a recent proposal that allows WLAN operation in the TV white space spectrum in the VHF and UHF bands BIB009 . It uses cognitive radio technology to transmit on unused TV channels, with the standard taking measures to limit interference to primary users such as analog TV, digital TV, and wireless microphones. However, white-space networking is challenging: the available white spaces must be detected, interference with the incumbents must be avoided, the underlying network should be able to operate over varying bandwidths, and there are issues related to the operation and management of the network BIB004 BIB002 .
Wired infrastructure: Another option is to build a dedicated wired infrastructure. Dedicated communication links give utilities full control over the network and reduce the reliance on communication infrastructures operated by third parties. However, building such a wired infrastructure is very costly. On the other hand, if two-way communications is going to be a part of the power grid for the next century, it might be logical to build such an infrastructure gradually over time.
Customer's broadband: One school of thought suggests using commodity broadband technologies, e.g., digital subscriber lines (DSL) or cable. The capital expenditures (CAPEX) in this case are lower, as the main communication infrastructure has already been deployed. Moreover, commodity broadband technologies use the Internet Protocol (IP), so they can easily be connected to other ubiquitous IP-based communication networks. In a recent deployment, a DSL network was used as the underlying communication technology in Boulder, Colorado [89] . Nonetheless, there are several handicaps: the number of broadband connections is lower than the number of power meters, especially in developing countries, and the downtimes in some deployments are unacceptable for critical smart grid applications.
Other technologies: Mesh networks BIB004 have been proposed as an alternative communication technology for AMI networks. Mesh networks tend to combine different forms of wireless access, i.e., IEEE 802.11, 3G/4G/5G, and mesh-type radio configurations; the choice is subject to technical, strategic, and even legal constraints. We present a detailed overview of such technologies in the next sections. In Table 4, we present an overview of candidate technologies and network technologies such as 3G/GSM and 4G/LTE (accessed via smartphone apps). An overview of the communication technologies for garage charging is presented in Figure 8 and summarized in Tables 5 and 6.
Note that the communication requirements for the EV-to-EVSE link are on the order of milliseconds, while the EVSE-to-EMU communication can occur on the order of seconds. Finally, the EMU can communicate with the grid on the order of minutes (typically every 15 min); a rough sizing sketch for this link is given below. In the next section, we will provide a comprehensive overview of such communication requirements.
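As a back-of-the-envelope illustration of the EMU-to-grid reporting load, the sketch below estimates the average throughput needed at a data aggregation point; the message size and the number of meters per aggregator are assumptions made only for this example:

```python
# Rough sizing sketch for the EMU-to-grid link: how much throughput does a
# data aggregation point need if every EMU reports once per interval?
# Message size and meter count are hypothetical assumptions.
message_bytes = 2_000           # assumed size of one EMU report incl. protocol overhead
reporting_interval_s = 15 * 60  # EMUs report every 15 minutes
meters_per_aggregator = 1_000   # assumed number of EMUs behind one aggregation point

messages_per_second = meters_per_aggregator / reporting_interval_s
average_throughput_bps = messages_per_second * message_bytes * 8

print(f"average load: {messages_per_second:.2f} msg/s, "
      f"{average_throughput_bps / 1e3:.1f} kbps per aggregation point")
# Note: the peak load can be far higher if reports are synchronized rather than
# spread over the interval, which is why randomizing report times matters.
```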
A survey on communication technologies and requirements for internet of electric vehicles <s> System reliability and availability <s> The world is becoming more dependent on wireless and mobile services, but the ability of wireless network infrastructures to handle the growing demand is questionable. As wireless and mobile services grow, weaknesses in network infrastructures become clearer. Failures not only affect current voice and data use but could also limit emerging wireless applications such as e-commerce and high-bandwidth Internet access. As wireless and mobile systems play greater roles in emergency response, including 911 and enhanced 911 services, network failures take on life-or-death significance. Therefore, in addition to directing some attention to designing survivable wireless and mobile networks, developers must also keep in mind that increasingly pervasive and demanding services will further escalate the importance of reliability and survivability requirements. The authors explain several options providers must consider to decrease the number of network failures and to cope with failures when they do occur. <s> BIB001 </s> A survey on communication technologies and requirements for internet of electric vehicles <s> System reliability and availability <s> Reliability and survivability are the two important attributes of cellular networks. In the existing literature, these measures were studied through the simulation. In this paper, we construct an analytical model to determine reliability and survivability attributes of third generation and beyond Universal Mobile Telecommunication Systems (UMTS) networks. Hierarchical architecture of UMTS networks is modeled using stochastic models such as Markov chains, semi-Markov process, reliability block diagrams and Markov reward models to obtain these attributes. The model can be tailored to evaluate the reliability and survivability attributes of other beyond third generation cellular networks such as All-IP UMTS networks and CDMA2000. Numerical results illustrate the applicability of the proposed analytical model. It is observed that incorporating fault tolerance increases the network reliability and survivability. The results are useful for reliable topological design of UMTS networks. In addition, it can help the guarantee of network connectivity after any failure, without over dimensioning the networks. Moreover, it might have some impact from the point of view of the design and evaluation of UMTS infrastructures. <s> BIB002 </s> A survey on communication technologies and requirements for internet of electric vehicles <s> System reliability and availability <s> Plug-in hybrid electric vehicles (PHEV) are becoming gradually more attractive than internal combustion engine vehicles, even though the current electrical grid is not potentially able to support the required power demand increase to introduce charging stations. Acknowledging that design and development of charging stations has crucial importance, this paper introduces a candidate PHEV charging station architecture, along with a quantitative stochastic model, that allows us to analyze the performance of the system by using arguments from queuing theory and economics. A relevant component of the proposed architecture is the capability of the charging stations to store excess power obtained from the grid. 
The goal is to design a general architecture which will be able to sustain grid stability, while providing a required level of quality of service; and to describe a general methodology to analyze the performance of such stations with respect to the traffic characteristics, energy storage size, pricing and cost parameters. Our results indicate that significant gains in net cost/profit and useful insights can be made with the right choice of storage size. Such considerations are crucial in this early stage of designing the smart grid and charging stations of the future. <s> BIB003
The successful management of EVs requires a reliable and highly available IoEV. A loss of availability terminates the grid-to-customer interaction. During these isolation periods, customers are not able to receive electricity prices and hence cannot optimally adjust and schedule their electricity usage. In fact, the cost of unavailability can be even more severe: in garage charging scenarios, uncontrolled EV charging may lead to unwanted peaks and may overload some of the grid components, such as the distribution transformer. Considering the aforementioned use cases, [100] explores the reliability requirements for home-charging EV applications. The authors show that 11 different messages are used and that the minimum reliability requirement varies between 98.8% and 99.5%. This variation arises because some messages, such as vehicle identification number (VIN) information requests and error messages related to the EV charging rate, require higher availability than the other types. The loss of connectivity for mobile EVs is even more critical. Unavailability prevents customers from locating and scheduling charging stations. Similarly, it may lead to suboptimal station selection both for the customers (more expensive charging) and for the grid operator (busy stations or long waiting lines may cause customer dissatisfaction) BIB003 . There are a handful of studies that quantify the cost of poor communication system performance. For instance, garage charging applications use the AMI network; a related study presents a generic AMI communication network, performs an availability analysis for each component (e.g., home area network, 3G network, etc.), and quantifies the cost of unavailability due to suboptimal power allocation. There are also quite a few studies that present performance evaluations of the related wireless communication technologies (e.g., UMTS) BIB002 BIB001 . A similar approach can be applied to mobile EV networks to quantify the cost of suboptimal charging station selections. On the other hand, redundancy in the design may help to improve the system reliability: redundant communication links can be employed between critical nodes, for example, between the data aggregation units and the utility or between control centers. We present the overall system in Figure 11, which illustrates the negative effects of communication unavailability: uncontrolled charging (left panel), suboptimal charging station selection (middle panel), and the inability to support the storage required for load shifting (right panel). A simple availability calculation along these lines is sketched below.
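As a minimal illustration of such an availability analysis, the sketch below computes the end-to-end availability of a serial HAN/neighborhood-network/backhaul chain and the gain from a redundant backhaul link; all availability figures are hypothetical:

```python
# Hypothetical component availabilities for an AMI-style communication chain;
# the actual figures depend on the deployed technologies and service agreements.
han = 0.999        # home area network (e.g., ZigBee or WLAN)
nan = 0.995        # neighborhood network (e.g., mesh or PLC)
backhaul = 0.99    # cellular or wired backhaul to the utility

def serial(*components):
    """End-to-end availability when every component must be up."""
    a = 1.0
    for c in components:
        a *= c
    return a

def parallel(a1, a2):
    """Availability with two redundant links (up if at least one is up)."""
    return 1 - (1 - a1) * (1 - a2)

baseline = serial(han, nan, backhaul)
with_redundant_backhaul = serial(han, nan, parallel(backhaul, backhaul))

print(f"baseline end-to-end availability : {baseline:.4f}")
print(f"with redundant backhaul link     : {with_redundant_backhaul:.4f}")
```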
A survey on communication technologies and requirements for internet of electric vehicles <s> Quality-of-service <s> A new medium access control (MAC) protocol is proposed for quality-of-service (QoS) support in wireless local area networks (WLAN). The protocol is an alternative to the recent enhancement 802.11e. A new priority policy provides the system with better performance by simulating time division multiple access (TDMA) functionality. Collisions are reduced and starvation of low-priority classes is prevented by a distributed admission control algorithm. The model performance is found analytically extending previous work on this matter. The results show that a better organization of resources is achieved through this scheme. Throughput analysis is verified with OPNET simulations. <s> BIB001 </s> A survey on communication technologies and requirements for internet of electric vehicles <s> Quality-of-service <s> The main contribution of this work is to compare and enhance known methods for performance analysis of the IEEE 802.11e MAC layer, such as the use of Markov chains, queuing theory, and probabilistic analysis. It is the first paper that bases its outputs upon comparison of metrics such as complexity, flexibility, and accuracy, leading to the novel use of a metamodeling comparison. For the analysis, complexity theory and the L-square distance method for accuracy are used. In addition, the proposed analyses carry by themselves scientific interest, because they are extended enhancements with the latest EDCA parameters. A form of the PMF of the MAC delay and first-order moments are found using the PGF complex frequency domain function. The analyses incorporate a Gaussian erroneous channel in order to reflect the real conditions of the MAC layer. <s> BIB002 </s> A survey on communication technologies and requirements for internet of electric vehicles <s> Quality-of-service <s> In order to meet the requirements of 4G mobile networks targeted by the cellular layer of IMT-advanced, next generation mobile WiMAX devices based on IEEE 802.16m will incorporate sophisticated signal processing, seamless handover functionalities between heterogeneous technologies and advanced mobility mechanisms. This survey provides a description of key projected features of the physical (PHY) and medium access control (MAC) layers of 802.16m, as a major candidate for providing aggregate rates at the range of Gbps to high-speed mobile users. Moreover, a new unified method for simulation modeling, namely the evaluation methodology (EVM), introduced in 802.16m, is also presented. <s> BIB003 </s> A survey on communication technologies and requirements for internet of electric vehicles <s> Quality-of-service <s> The fourth generation wireless communication systems have been deployed or are soon to be deployed in many countries. However, with an explosion of wireless mobile devices and services, there are still some challenges that cannot be accommodated even by 4G, such as the spectrum crisis and high energy consumption. Wireless system designers have been facing the continuously increasing demand for high data rates and mobility required by new wireless applications and therefore have started research on fifth generation wireless systems that are expected to be deployed beyond 2020. 
In this article, we propose a potential cellular architecture that separates indoor and outdoor scenarios, and discuss various promising technologies for 5G wireless communication systems, such as massive MIMO, energy-efficient communications, cognitive radio networks, and visible light communications. Future challenges facing these potential technologies are also discussed. <s> BIB004
The quality-of-service (QoS) needs are gradually increasing as EVs gain widespread acceptance. Since both the centralized and the decentralized control of EVs is carried out via price signals, a degradation in communication system performance can be costly. QoS requirements for general smart grid communications have been defined in terms of communication delays and outage probabilities. The QoS requirements can be slightly different for the mobile EVs and for the grid operator. For instance, IEEE P2030 states that an EV can afford a few seconds of latency to retrieve location, pricing, and availability information; however, in order to respond to the huge number of queries (whose number depends on the EV penetration level), the grid operator has to receive the information in a timely manner. Even though today's mobile broadband technologies (e.g., 3G/HSPA/EV-DO) promise high-throughput and low-latency communications, on some occasions there can be a degradation in the user experience. This is attributed to network capacity saturation in some areas; it has been projected that customer demand will exceed the network capacity in most metropolitan areas in the coming years. This will force time-critical data transfers from EVs to compete with other bandwidth-demanding applications such as video streaming and voice over IP. On the other hand, the most recent mobile WiMAX/LTE technologies can support the necessary QoS requirements. More specifically, WiMAX offers four different QoS levels, namely BIB003 (1) unsolicited grant service (UGS), (2) real-time polling service (rtPS), (3) non-real-time polling service (nrtPS), and (4) best effort (BE). UGS can support low latency and low jitter and can prioritize the EV-charging-related data transfer. However, 4G technologies are not available everywhere, and only a limited, though growing, number of devices support 4G connectivity. Finally, some discussion about new 5G technologies is already under way BIB004 . In some areas, wireless mesh networks have been deployed using different versions of the IEEE 802.11 protocol. The cost of building such an infrastructure is low, and no spectrum license is required, since these networks operate in the unlicensed 2.4 GHz or 5 GHz bands. These networks can provide application access priority (starting from 802.11e and, more recently, with 802.11ac), but they do not guarantee any strict QoS BIB002 BIB001 . In addition, they have a limited range, which means that vehicles that want to communicate through them may be in wireless blind spots. A rough way to check an EV-query latency budget against cell load is sketched below.
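As a rough way to check such latency budgets against cell load, the sketch below uses a simple M/M/1 approximation for the delay seen by EV price/availability queries that share a cell with background traffic; all rates are hypothetical and the model ignores many radio-level details:

```python
# Minimal sketch, assuming Poisson query arrivals and exponentially distributed
# service times (M/M/1), to check an EV-query latency budget against cell load.
# All rates are hypothetical placeholders, not measurements of any real network.

def mm1_mean_delay(arrival_rate, service_rate):
    """Mean sojourn time (queueing + service) of an M/M/1 queue, in seconds."""
    if arrival_rate >= service_rate:
        return float("inf")          # queue is unstable; delay grows without bound
    return 1.0 / (service_rate - arrival_rate)

service_rate = 200.0                 # messages/s the cell can serve for this traffic class
background = 150.0                   # messages/s of competing (non-EV) traffic
latency_budget = 2.0                 # seconds an EV can tolerate for pricing/availability info

for evs_per_second in (10, 30, 49, 60):
    delay = mm1_mean_delay(background + evs_per_second, service_rate)
    status = "OK" if delay <= latency_budget else "violated"
    print(f"EV query rate {evs_per_second:3d}/s -> mean delay {delay:7.3f} s ({status})")
```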
|
A survey on communication technologies and requirements for internet of electric vehicles <s> Cyber-physical security <s> A smart grid is a new form of electricity network with high fidelity power-flow control, self-healing, and energy reliability and energy security using digital communications and control technology. To upgrade an existing power grid into a smart grid, it requires significant dependence on intelligent and secure communication infrastructures. It requires security frameworks for distributed communications, pervasive computing and sensing technologies in smart grid. However, as many of the communication technologies currently recommended to use by a smart grid is vulnerable in cyber security, it could lead to unreliable system operations, causing unnecessary expenditure, even consequential disaster to both utilities and consumers. In this paper, we summarize the cyber security requirements and the possible vulnerabilities in smart grid communications and survey the current solutions on cyber security for smart grid communications. <s> BIB001 </s> A survey on communication technologies and requirements for internet of electric vehicles <s> Cyber-physical security <s> Smart grid is a promising power delivery infrastructure integrated with communication and information technologies. Its bi-directional communication and electricity flow enable both utilities and customers to monitor, predict, and manage energy usage. It also advances energy and environmental sustainability through the integration of vast distributed energy resources. Deploying such a green electric system has enormous and far-reaching economic and social benefits. Nevertheless, increased interconnection and integration also introduce cyber-vulnerabilities into the grid. Failure to address these problems will hinder the modernization of the existing power system. In order to build a reliable smart grid, an overview of relevant cyber security and privacy issues is presented. Based on current literatures, several potential research fields are discussed at the end of this paper. <s> BIB002 </s> A survey on communication technologies and requirements for internet of electric vehicles <s> Cyber-physical security <s> An efficient dependable smart power grid relies on the secure real-time data collection and transmission service provided by a monitoring system. In such a system, the measuring units, such as phasor measurement units (PMUs) and smart meters (SMs), are critical. These measuring equipments function as sensors in the smart grid. Data exchanges between these sensors and the central controller are protected by various security protocols. These protocols usually contain computationally intensive cryptographic algorithms that cause heavy energy overhead to the sensor nodes. Since PMUs and SMs are mostly energy-constrained, the problem of how to ensure the secure communication with minimum energy cost becomes a critical issue for the functionality of the whole smart grid. In this article, we focus on the low power secure communication of the PMUs and SMs. We take two wireless sensor platforms as examples to experimentally investigate the approaches and principles of reconciling the two conflicting system requirements-communication security and low energy consumptions. The proposed methods are general ones and applicable to other energy-constrained yet security sensitive systems. 
<s> BIB003 </s> A survey on communication technologies and requirements for internet of electric vehicles <s> Cyber-physical security <s> The smart grid system is composed of the power infrastructure and communication infrastructure and thus is characterized by the flow of electric power and information, respectively. Although there is no doubt that the wireless communication architecture will play a significant role in smart grid, the wireless network introduces additional vulnerabilities, given the scale of potential threats. Therefore, the physical layer security issue is of first priority in the study of smart grid and has already attracted substantial attention in the industry and academia. In this paper, we aimed to present a general overview of the physical layer security in wireless smart grid and cover the effective countermeasures proposed in the literature of smart grid to date. We first investigate the security challenges from malicious attacks. Specifically, two typical forms of malicious attack in smart grid, namely, jamming and bad data injecting, are studied. In addition, the related countermeasures against these malicious attacks are illustrated. Further, we analyze the state of the art of the privacy issues in smart grid. The private information and privacy concerns are introduced, and then the effective solutions to privacy security are provided. Finally, voltage regulation, a security topic that has been hardly studied in the wireless smart grid domain, is presented. We expect that the work presented here will advance the research on smart grid security. Copyright © 2013 John Wiley & Sons, Ltd. <s> BIB004
|
The power grid is vital to human life, and with the integration of information systems, the power grid becomes a huge cyber-physical system. The grid's unique nature poses a new series of security challenges. The components of the power grid are vulnerable to a variety of new cyber-security threats that could affect national security, public safety, and revenues. There has been an increasing interest in smart grid security aspects BIB001 BIB002 [120] BIB003 BIB004 . In [120] , the authors present a cyber-physical security overview of the smart grid communication infrastructure. Su, 2012 [119] presents security threats for electric vehicle networks and concludes that electric vehicle networks have the following security requirements: (1) availability (discussed in the previous section); (2) confidentiality (prevent attackers from obtaining private information); (3) integrity (block unauthorized users from changing the data); and (4) authenticity. If the security of EV network communication is not provided at a high level, an adversary can impact the EV network in various ways. A hacker can route customers to a specific charging station to create chaos for drivers. Similar to a home appliance, garage charging is also programmed to fill up the EV battery when the price is low. An adversary can launch an attack that injects negative prices to increase the power usage of automated appliances, which may result in a peak or spike in electricity usage. Similarly, price modification can cause instabilities in V2G energy trading. In BIB004 , the authors present the security threats in the physical layer of wireless communications for smart grid applications. Moreover, defines the attack types for smart grid communication networks and introduces three different kinds of smart grid attacks: • Data injection: The attacks in this category falsify meter measurements (e.g., garage charging) to mislead the power grid operator. The main purpose of this type of attack is to create revenue loss BIB002 . In the second volume of NISTIR 7628 , NIST documents a comprehensive overview of guidelines for smart grid cyber-security. This document contains several use cases concerning the security issues of EV charging. In , the authors evaluated the effectiveness of the NISTIR framework for an electric vehicle charging infrastructure case. They claim that the NISTIR 7628 framework is not strong enough in device authentication and in protecting the location privacy of mobile EVs.
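As a concrete illustration of the integrity and authenticity requirements above, the following minimal Python sketch signs a price signal with an HMAC so that tampered or injected prices (such as the negative-price attack mentioned earlier) can be rejected by the receiver. The message format and the pre-shared key are assumptions made only for illustration; a real deployment would rely on proper key management or a PKI, which is outside the scope of this sketch.

```python
import hmac
import hashlib
import json

# Hypothetical shared key between the utility back end and the charging point
# (assumption for illustration; not a recommendation for key distribution).
SHARED_KEY = b"demo-key-not-for-production"

def sign_price_signal(price_cents_per_kwh: float, valid_until: str) -> dict:
    """Attach an HMAC tag so modified price signals can be detected."""
    body = json.dumps({"price": price_cents_per_kwh, "valid_until": valid_until},
                      sort_keys=True).encode()
    tag = hmac.new(SHARED_KEY, body, hashlib.sha256).hexdigest()
    return {"body": body.decode(), "tag": tag}

def verify_price_signal(msg: dict) -> bool:
    """Reject messages whose tag does not match the body (e.g., injected prices)."""
    expected = hmac.new(SHARED_KEY, msg["body"].encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, msg["tag"])

if __name__ == "__main__":
    msg = sign_price_signal(12.0, "2014-06-01T18:00Z")
    print("genuine message accepted?", verify_price_signal(msg))      # -> True
    msg["body"] = msg["body"].replace("12.0", "-5.0")                 # attacker injects a negative price
    print("tampered message accepted?", verify_price_signal(msg))     # -> False
```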
|
A survey on communication technologies and requirements for internet of electric vehicles <s> Measurement-based studies <s> Today's mobile, wireless, and ad-hoc communications often exhibit extreme characteristics challenging assumptions underlying the traditional way of end-to-end communication protocol design in the Internet. One specific scenario is Internet access from moving vehicles on the road as we are researching in the drive-thru Internet project. Using wireless LAN as a broadly available access technology leads to intermittent - largely unpredictable and usually short-lived - connectivity, yet providing high performance while available. To allow Internet applications to deal reasonably well with such intermittent connectivity patterns, we have introduced a supportive drive-thru architecture. A key component is a "session" protocol offering persistent end-to-end communications even in the presence of interruptions. In this paper, we present the design of the persistent connectivity management protocol (PCMP) and report on findings from our implementation. <s> BIB001
|
The previous paragraphs show that wide-area wireless communication technologies will play a predominant role in EV network communications. On the other hand, since the number of mobile Internet users has flourished, the user experience has deviated significantly from theoretical results. Hence, there is a need for detailed measurement-based studies to understand and predict the performance of wireless technologies and to quantify the effects of performance degradation. There are only a handful of measurement-based studies that focus on the performance of wireless networks (WiFi, 3G (UMTS), EV-DO, and WiMAX) BIB001 . In , the authors conducted a measurement study to evaluate the performance of mobile Internet access with 3G (UMTS) and WiFi networks. The measurements were carried out in Seattle, San Francisco, and Amherst. Across all cities, the average availability of 3G and WiFi was 87% and 11%, respectively. The details of their findings are presented in Table 7 . They then proposed a hybrid framework to improve the availability of 3G by augmenting it with WiFi. Similarly, BIB001 presents an architecture to improve the end-user experience by exploiting (i) channel diversity, (ii) wireless network service provider diversity, and (iii) technology diversity (UMTS, CDMA, etc.). Their results show that the proposed Mobile Access Router architecture decreases the blackout periods considerably and increases the average throughput. In addition, shows the results of a city-wide mobile Internet experiment. The mobile nodes in their test bed employ both EV-DO and WiFi interfaces. Their focus is on measuring signal latency and TCP throughput performance. Their results indicate that average latencies vary between 150 and 400 ms and that mobile TCP throughput is around 752 Kbps.
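End-to-end latency and throughput figures of the kind reported above can be approximated with very simple tooling. The Python sketch below estimates TCP connection-setup latency and single-flow download throughput; the target host, URL, and sample count are illustrative assumptions, and the cited studies used their own, considerably more elaborate measurement setups.

```python
import time
import socket
import urllib.request

def tcp_connect_latency(host: str, port: int = 80, samples: int = 5) -> float:
    """Rough TCP handshake latency in milliseconds, averaged over several samples."""
    total = 0.0
    for _ in range(samples):
        start = time.monotonic()
        with socket.create_connection((host, port), timeout=5):
            total += (time.monotonic() - start) * 1000.0
    return total / samples

def download_throughput(url: str) -> float:
    """Approximate downlink throughput in kbit/s for a single HTTP fetch."""
    start = time.monotonic()
    data = urllib.request.urlopen(url, timeout=10).read()
    elapsed = time.monotonic() - start
    return (len(data) * 8 / 1000.0) / elapsed

if __name__ == "__main__":
    # example.com is used purely as a placeholder measurement target.
    print("latency   [ms]  :", round(tcp_connect_latency("example.com"), 1))
    print("throughput [kbps]:", round(download_throughput("http://example.com/"), 1))
```

Repeating such probes while moving through a coverage area gives a coarse picture of availability and blackout periods, which is essentially what the measurement studies above quantify at much larger scale.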
|
A Survey on the Progress of Testing Techniques and Methods for Wireless Sensor Networks <s> I. INTRODUCTION <s> We consider the design of channel codes for improving the data rate and/or the reliability of communications over fading channels using multiple transmit antennas. Data is encoded by a channel code and the encoded data is split into n streams that are simultaneously transmitted using n transmit antennas. The received signal at each receive antenna is a linear superposition of the n transmitted signals perturbed by noise. We derive performance criteria for designing such codes under the assumption that the fading is slow and frequency nonselective. Performance is shown to be determined by matrices constructed from pairs of distinct code sequences. The minimum rank among these matrices quantifies the diversity gain, while the minimum determinant of these matrices quantifies the coding gain. The results are then extended to fast fading channels. The design criteria are used to design trellis codes for high data rate wireless communication. The encoding/decoding complexity of these codes is comparable to trellis codes employed in practice over Gaussian channels. The codes constructed here provide the best tradeoff between data rate, diversity advantage, and trellis complexity. Simulation results are provided for 4 and 8 PSK signal sets with data rates of 2 and 3 bits/symbol, demonstrating excellent performance that is within 2-3 dB of the outage capacity for these channels using only 64 state encoders. <s> BIB001 </s> A Survey on the Progress of Testing Techniques and Methods for Wireless Sensor Networks <s> I. INTRODUCTION <s> Wireless distributed microsensor systems will enable the reliable monitoring of a variety of environments for both civil and military applications. In this paper, we look at communication protocols, which can have significant impact on the overall energy dissipation of these networks. Based on our findings that the conventional protocols of direct transmission, minimum-transmission-energy, multi-hop routing, and static clustering may not be optimal for sensor networks, we propose LEACH (Low-Energy Adaptive Clustering Hierarchy), a clustering-based protocol that utilizes randomized rotation of local cluster based station (cluster-heads) to evenly distribute the energy load among the sensors in the network. LEACH uses localized coordination to enable scalability and robustness for dynamic networks, and incorporates data fusion into the routing protocol to reduce the amount of information that must be transmitted to the base station. Simulations show the LEACH can achieve as much as a factor of 8 reduction in energy dissipation compared with conventional outing protocols. In addition, LEACH is able to distribute energy dissipation evenly throughout the sensors, doubling the useful system lifetime for the networks we simulated. <s> BIB002 </s> A Survey on the Progress of Testing Techniques and Methods for Wireless Sensor Networks <s> I. INTRODUCTION <s> Research results in wireless sensor networks are primarily gained from simulations and theoretical considerations. Currently, the community begins to realize that the results need to be validated in testbeds. Testbeds can be also used to directly gather knowledge with sensor network experiments on real hardware and a real environmental context. This survey gives an overview of different approaches to build testbeds and experimentation environments regarding different research foci. 
We discuss emerging testbed requirements and present existing solutions of the community. The overview is complemented with a discussion of common design decisions concerning architectures and experimentation support in current testbeds. A look on future trends and developments in wireless sensor network testbeds concludes this paper. This survey is intended to help researchers to attain own results in real-world experiments with wireless sensor networks. The reader gains a comprehensive overview on existing testbeds and practical knowledge documented in the referenced literature. The survey is laying the foundation for design decisions while developing an own testbed using the examples of described approaches. <s> BIB003 </s> A Survey on the Progress of Testing Techniques and Methods for Wireless Sensor Networks <s> I. INTRODUCTION <s> Traditional tracking solutions in wireless sensor networks based on fixed sensors have several critical problems. First, due to the mobility of targets, a lot of sensors have to keep being active to track targets in all potential directions, which causes excessive energy consumption. Second, when there are holes in the deployment area, targets may fail to be detected when moving into holes. Third, when targets stay at certain positions for a long time, sensors surrounding them have to suffer heavier work pressure than do others, which leads to a bottleneck for the entire network. To solve these problems, a few mobile sensors are introduced to follow targets directly for tracking because the energy capacity of mobile sensors is less constrained and they can detect targets closely with high tracking quality. Based on a realistic detection model, a solution of scheduling mobile sensors and fixed sensors for target tracking is proposed. Moreover, the movement path of mobile sensors has a provable performance bound compared to the optimal solution. Results of extensive simulations show that mobile sensors can improve tracking quality even if holes exist in the area and can reduce energy consumption of sensors effectively. <s> BIB004 </s> A Survey on the Progress of Testing Techniques and Methods for Wireless Sensor Networks <s> I. INTRODUCTION <s> Fast data collection is one of the most important research issues for wireless sensor networks (WSNs). In this paper, a time-division-multiple-access-based energy consumption balancing algorithm is proposed for the general $k$ -hop WSNs, where one data packet is collected in one cycle. The optimal $k$ that achieves the longest network life is obtained through our theoretical analysis. Required timeslots (TSs), maximum energy consumption, and residual network energy are all thoroughly analyzed in this paper. Theoretical analysis and simulation results demonstrate the effectiveness of the proposed algorithm in terms of energy efficiency and TS scheduling. <s> BIB005 </s> A Survey on the Progress of Testing Techniques and Methods for Wireless Sensor Networks <s> I. INTRODUCTION <s> A sensor-cloud system is a combination of wireless sensor networks and cloud computing that is equipped with ubiquitous physical sensing ability, high-speed computation, huge storage, and so on. However, sensor-cloud systems suffer from various types of malicious attacks that can cause sensor communications to become unreliable. Establishing a trust evaluation method to ensure members’ reliability in sensor cloud is an effective way to resist malicious attacks. 
However, most current trust evaluation methods are constrained to specific attacks or applications, and they lack compatibility, verifiability, and scalability. To solve these problems, we formulated the trust evaluation issue as a multiple linear regression problem. Considering energy restrictions, we adopt fog nodes to assist in the trust computation. Moreover, the least squares algorithm is used to find the fitting function between the communication feature and the trust value. The experimental results show that our approach can find the best trust evaluation model and improve the compatibility, verifiability, and accuracy of trust evaluation. <s> BIB006 </s> A Survey on the Progress of Testing Techniques and Methods for Wireless Sensor Networks <s> I. INTRODUCTION <s> Abstract In recent years, Sensor–Cloud System (SCS) has become a hot research issue. In this system, there are some cyber security problems that can be well solved by the trust mechanism. However, there are still some deficiencies in existing trust mechanisms, especially for the SCS underlying structure. We proposed a fog-based hierarchical trust mechanism for these cyber security deficiencies. This hierarchical mechanism consists of two parts, trust in the underlying structure and trust between cloud service providers (CSPs) and sensor service providers (SSPs). For trust in the underlying structure, the behavior monitoring part is established and implemented in Wireless Sensor Networks (WSNs), and the fine-grained and complicated data analysis part is moved to the fog layer. For trust between CSPs and SSPs, it focuses more on the real-time comparison of service parameters, the gathering of exception information in WSNs, the targeted quantitative evaluation of entities and so on. The experimental results indicate that this fog-based hierarchical structure performs well in saving network energy, detecting malicious nodes rapidly and recovering misjudgment nodes in an acceptable delay. Furthermore, the reliability of edge nodes is well guaranteed by data analyses in the fog layer and an evaluation strategy based on similar service records is put forward. <s> BIB007 </s> A Survey on the Progress of Testing Techniques and Methods for Wireless Sensor Networks <s> I. INTRODUCTION <s> The Internet of Things (IoT)-Cloud combines the IoT and cloud computing, which not only enhances the IoT’s capability but also expands the scope of its applications. However, it exhibits significant security and efficiency problems that must be solved. Internal attacks account for a large fraction of the associated security problems, however, traditional security strategies are not capable of addressing these attacks effectively. Moreover, as repeated/similar service requirements become greater in number, the efficiency of IoT-Cloud services is seriously affected. In this paper, a novel architecture that integrates a trust evaluation mechanism and service template with a balance dynamics based on cloud and edge computing is proposed to overcome these problems. In this architecture, the edge network and the edge platform are designed in such a way as to reduce resource consumption and ensure the extensibility of trust evaluation mechanism, respectively. To improve the efficiency of IoT-Cloud services, the service parameter template is established in the cloud and the service parsing template is established in the edge platform. 
Moreover, the edge network can assist the edge platform in establishing service parsing templates based on the trust evaluation mechanism and meet special service requirements. The experimental results illustrate that this edge-based architecture can improve both the security and efficiency of IoT-Cloud systems. <s> BIB008 </s> A Survey on the Progress of Testing Techniques and Methods for Wireless Sensor Networks <s> I. INTRODUCTION <s> Social networks are very important social cyberspaces for people. Currently, information-centric networks (ICN) are the main trend of next-generation networks, which promote traditional social networks to information-centric social networks (IC-SN). Because of the complexity and openness of social networks, the filtering of security services for users is a key issue. However, existing schemes were proposed for traditional social networks and cannot satisfy the new requirements of IC-SN including extendibility, data mobility, use of non-IP addresses, and flexible deployment. To address this challenge, a fog-computing-based content-aware filtering method for security services, FCSS, is proposed in information centric social networks. In FCSS, the assessment and content- matching schemes and the fog-computing-based content-aware filtering scheme is proposed for security services in IC-SN. FCSS contributes to IC-SN as follows. First, fog computing is introduced into IC-SN to shifting intelligence and resources from remote servers to network edge, which provides low-latency for security service filtering and end to end communications. Second, content-label technology based efficient content-aware filtering scheme is adapted for edge of IN-SN to realize accurate filtering for security services. The simulations and evaluations show the advantages of FCSS in terms of hit ratio, filtering delay, and filtering accuracy. <s> BIB009 </s> A Survey on the Progress of Testing Techniques and Methods for Wireless Sensor Networks <s> I. INTRODUCTION <s> The development of cloud computing pours great vitality into traditional wireless sensor networks (WSNs). The integration of WSNs and cloud computing has received a lot of attention from both academia and industry. However, collecting data from WSNs to cloud is not sustainable. Due to the weak communication ability of WSNs, uploading big sensed data to the cloud within the limited time becomes a bottleneck. Moreover, the limited power of sensor usually results in a short lifetime of WSNs. To solve these problems, we propose to use multiple mobile sinks (MSs) to help with data collection. We formulate a new problem which focuses on collecting data from WSNs to cloud within a limited time and this problem is proved to be NP-hard. To reduce the delivery latency caused by unreasonable task allocation, a time adaptive schedule algorithm (TASA) for data collection via multiple MSs is designed, with several provable properties. In TASA, a non-overlapping and adjustable trajectory is projected for each MS. In addition, a minimum cost spanning tree (MST) based routing method is designed to save the transmission cost. We conduct extensive simulations to evaluate the performance of the proposed algorithm. The results show that the TASA can collect the data from WSNs to Cloud within the limited latency and optimize the energy consumption, which makes the sensor-cloud sustainable. <s> BIB010 </s> A Survey on the Progress of Testing Techniques and Methods for Wireless Sensor Networks <s> I. 
INTRODUCTION <s> Abstract The powerful computing and storage capability of cloud computing can inject new vitality into wireless sensor networks (WSNs) and have motivated a series of new applications. However, data collection from WSNs to the Cloud is a bottleneck because the poor communication ability of WSNs, especially in delay-sensitive applications, limits their further development and applications. We propose a fog structure composed of multiple mobile sinks. Mobile sinks act as fog nodes to bridge the gap between WSNs and the Cloud. They cooperate with each other to set up a multi-input multi-output (MIMO) network, aiming to maximize the throughput and minimize the transmission latency. We district collecting zones for all sinks and then assign sensors to the corresponding sinks. For those assigned sensors, hops and energy consumption are considered to solve the hopspot problem. Sensor data are uploaded to the Cloud synchronously through sinks. The problem is proved to be NP-hard, and we design an approximation algorithm to solve this problem with several provable properties. We also designed a detailed routing algorithm for sensors considering hops and energy consumption. We compare our method to several traditional solutions. Extensive experimental results suggest that the proposed method significantly outperforms traditional solutions. <s> BIB011
|
Wireless Sensor Networks (WSNs) BIB011 , BIB005 are a kind of distributed sensor network and an important technical form of the underlying network of the IoT. With the rapid development of network technology, networks face increasingly complex problems. WSNs BIB008 are data-centric and closely related to the information-centric Internet BIB009 . Therefore, it is necessary to learn from information-centric network architectures when studying wireless sensor network technology. In the early stage of WSN research, owing to the lack of testing tools and of large numbers of available nodes, the feasibility of algorithms, protocols, and applications was mainly verified and evaluated through theoretical analysis. Due to the high computational complexity of the mathematical models, considerable simplification is needed when applying these models to practical problems, which can reduce the reliability of theoretical performance analysis. Subsequently, various operating systems and simulation tools suitable for WSNs made simulation and physical testing possible. However, the WSN application environment is complex and variable, and the wireless channel is easily disturbed BIB004 , which makes it difficult for simulation testing to yield highly reliable and trustworthy evaluation results. By building network testing platforms based on real sensor nodes, protocols and algorithms can be verified during the actual application process. Such platforms not only capture all the factors that affect the network state, but also avoid the theoretical errors caused by model simplification , BIB007 . This provides a basis for the study of information-centric WSNs, which differ from data-centric traditional WSNs. Therefore, people are increasingly concerned about testing platforms and testing technology for wireless sensor networks BIB003 . In recent years, micro-sensor BIB002 , wireless communication BIB001 , BIB006 , computing, and other related technologies have experienced rapid development. The software and hardware resources and protocol elements in WSN applications are expanding, and the heterogeneous characteristics of real deployments impose more comprehensive requirements on the testing platform. Testing technology and performance evaluation are core elements of the testing platform, and they also face enormous challenges. In order to conduct more flexible and precise tests BIB010 , these two aspects still require constant improvement. This paper mainly discusses the information-centric Heterogeneous Wireless Sensor Network Test Bed (HWSNTB) platform and summarizes the related testing requirements, testing technologies, and performance evaluation. The relationship between WSNs and the information-centric IoT (IC-IoT) is explained at the technical level. Based on existing testing platforms, the heterogeneity of the platforms is analyzed. Several aspects are explained in detail with specific testing platforms, and the applicability of different platforms is discussed. Finally, the performance characteristics of heterogeneous testing platforms and their conformity with the testing requirements are compared, and some related problems are pointed out for future research.
The challenges faced by information-centric WSNs are summarized, and a new idea for WSN development is put forward.
|
A Survey on the Progress of Testing Techniques and Methods for Wireless Sensor Networks <s> 4) EXPERIMENTAL LEVEL <s> Experimentally driven research for wireless sensor networks is invaluable to provide benchmarking and comparison of new ideas. An increasingly common tool in support of this is a testbed composed of real hardware devices which increases the realism of evaluation. However, due to hardware costs the size and heterogeneity of these testbeds is usually limited. In addition, a testbed typically has a relatively static configuration in terms of its network topology and its software support infrastructure, which limits the utility of that testbed to specific case-studies. We propose a novel approach that can be used to (i) interconnect a large number of small testbeds to provide a federated testbed of very large size, (ii) support the interconnection of heterogeneous hardware into a single testbed, and (iii) virtualise the physical testbed topology and thus minimise the need to relocate devices. We present the most important design issues of our approach and evaluate its performance. Our results indicate that testbed virtualisation can be achieved with high efficiency and without hindering the realism of experiments. <s> BIB001 </s> A Survey on the Progress of Testing Techniques and Methods for Wireless Sensor Networks <s> 4) EXPERIMENTAL LEVEL <s> Abstract In recent years, Sensor–Cloud System (SCS) has become a hot research issue. In this system, there are some cyber security problems that can be well solved by the trust mechanism. However, there are still some deficiencies in existing trust mechanisms, especially for the SCS underlying structure. We proposed a fog-based hierarchical trust mechanism for these cyber security deficiencies. This hierarchical mechanism consists of two parts, trust in the underlying structure and trust between cloud service providers (CSPs) and sensor service providers (SSPs). For trust in the underlying structure, the behavior monitoring part is established and implemented in Wireless Sensor Networks (WSNs), and the fine-grained and complicated data analysis part is moved to the fog layer. For trust between CSPs and SSPs, it focuses more on the real-time comparison of service parameters, the gathering of exception information in WSNs, the targeted quantitative evaluation of entities and so on. The experimental results indicate that this fog-based hierarchical structure performs well in saving network energy, detecting malicious nodes rapidly and recovering misjudgment nodes in an acceptable delay. Furthermore, the reliability of edge nodes is well guaranteed by data analyses in the fog layer and an evaluation strategy based on similar service records is put forward. <s> BIB002
|
Debugging: The effective real-time transmission and storage of testing data are the basis for debugging on the testing platform. While the testing platform is running, it is crucial to output textual status information and the contents of related registers and variables in real time, and this information needs to be saved in a database for post-mortem analysis.
Reproducibility: In many cases, the same experiment needs to be repeated in the same environment to obtain accurate testing results BIB002 . For example, when testing a certain parameter, a more accurate conclusion can be obtained by comparing the results for different parameter values. Reproducibility also allows researchers to quickly build their own experiments on the basis of previous ones and thus speed up the research process.
Concurrency: For some large-scale testing platforms, concurrent operation that supports multiple users and multiple experiments can maximize resource utilization and save experimental time. Since the platform has been virtualized BIB001 , it can better support concurrent experiments.
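As a small illustration of the debugging and reproducibility requirements, the Python sketch below logs per-node status reports to a local SQLite database tagged with an experiment identifier, so that repeated runs of the same experiment can later be compared off-line. The schema, table, and field names are illustrative assumptions and are not taken from any particular testbed.

```python
import sqlite3
import time
import json

def open_log(path: str = "testbed_log.db") -> sqlite3.Connection:
    """Open (or create) the post-mortem log database."""
    conn = sqlite3.connect(path)
    conn.execute("""CREATE TABLE IF NOT EXISTS node_status (
                        ts REAL, experiment_id TEXT, node_id INTEGER,
                        registers TEXT, variables TEXT)""")
    return conn

def log_status(conn, experiment_id, node_id, registers, variables):
    """Store one status report; the experiment_id lets repeated runs of the
    same experiment be compared for reproducibility."""
    conn.execute("INSERT INTO node_status VALUES (?, ?, ?, ?, ?)",
                 (time.time(), experiment_id, node_id,
                  json.dumps(registers), json.dumps(variables)))
    conn.commit()

if __name__ == "__main__":
    conn = open_log(":memory:")  # in-memory database for the demonstration
    log_status(conn, "exp-42-run-1", 7, {"TXPOWER": 3}, {"queue_len": 12})
    print(conn.execute("SELECT node_id, variables FROM node_status").fetchall())
```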
|
A Survey on the Progress of Testing Techniques and Methods for Wireless Sensor Networks <s> B. TESTING TECHNOLOGY <s> The testing of programs in wireless sensor networks (WSN) is an important means to assure quality but is a challenging process. As pervasive computing has been identified as a notable trend in computing, investigations on effective software testing techniques for WSN are essential. In particular, energy is a crucial and scarce resource in WSN nodes. Programs running correctly but failing to meet the energy constraintsmay still be problematic. As such, testing techniques for power-aware applications are useful; otherwise, the quickly depleted device batteries will need frequent replacements, hence challenging the effectiveness of automation. Since current testing techniques do not consider the issue of energy constraints, their automation in the WSN domain warrants further investigation. ::: ::: This paper proposes a novel power-aware technique built on top of the notion of metamorphic testing to alleviate both the test oracle issue and the power-awareness issue. It tests the functions of programs in WSN nodes that are in close proximity, and uses the data consolidation criteria of data aggregation in programs as the basis for verifying test results. The power-aware transmissions of intermediate and final test data as well as the computation required for verification of test results are directly supported by the WSN programs. Our proposed technique has been strategically designed to blend in with the special features of the WSN environment. <s> BIB001 </s> A Survey on the Progress of Testing Techniques and Methods for Wireless Sensor Networks <s> B. TESTING TECHNOLOGY <s> ISA100.11 a industrial wireless network standard is based on a deterministic scheduling mechanism.For the timeslot delay caused by deterministic scheduling,a routing algorithm is presented for industrial environments.According to timeslot,superframe,links,channel and data retransmission of deterministic scheduling mechanisms that affect the design of the routing algorithm,the algorithm selects the link quality,timeslot delay and retransmission delay as the routing criteria and finds the optimum communication path by k shortest paths algorithm.Theoretical analysis and experimental verification show that the optimal paths selected by the algorithm not only have high link quality and low retransmission delay,but also meet the requirements of the deterministic scheduling.The algorithm can effectively solve the problem of packet loss and transmission delay during data transmission,and provide a valuable solution for efficient data transmission based on determinacy.更多还原 <s> BIB002 </s> A Survey on the Progress of Testing Techniques and Methods for Wireless Sensor Networks <s> B. TESTING TECHNOLOGY <s> Abstract In recent years, Sensor–Cloud System (SCS) has become a hot research issue. In this system, there are some cyber security problems that can be well solved by the trust mechanism. However, there are still some deficiencies in existing trust mechanisms, especially for the SCS underlying structure. We proposed a fog-based hierarchical trust mechanism for these cyber security deficiencies. This hierarchical mechanism consists of two parts, trust in the underlying structure and trust between cloud service providers (CSPs) and sensor service providers (SSPs). 
For trust in the underlying structure, the behavior monitoring part is established and implemented in Wireless Sensor Networks (WSNs), and the fine-grained and complicated data analysis part is moved to the fog layer. For trust between CSPs and SSPs, it focuses more on the real-time comparison of service parameters, the gathering of exception information in WSNs, the targeted quantitative evaluation of entities and so on. The experimental results indicate that this fog-based hierarchical structure performs well in saving network energy, detecting malicious nodes rapidly and recovering misjudgment nodes in an acceptable delay. Furthermore, the reliability of edge nodes is well guaranteed by data analyses in the fog layer and an evaluation strategy based on similar service records is put forward. <s> BIB003 </s> A Survey on the Progress of Testing Techniques and Methods for Wireless Sensor Networks <s> B. TESTING TECHNOLOGY <s> Edge-centric computing (ECC) and content- centric networking (CCN) will be the most important technologies in future 5G networks. However, due to different architectures and protocols, it is still a challenge to fuse ECC and CCN together and provide manageable and flexible services. In this article, we present ECCN, an orchestrating scheme that integrates ECC and CCN into a hierarchical structure with software defined networking (SDN). We introduce the SDN technology into the hierarchical structure to decouple data and control planes of ECC and CCN, and then design an SDN protocol to control the data forwarding. We also implement two demonstration applications in our testbed to evaluate the ECCN scheme. The experimental results from the testbed applications, and extensive simulations show ECCN outperforms original structures. <s> BIB004 </s> A Survey on the Progress of Testing Techniques and Methods for Wireless Sensor Networks <s> B. TESTING TECHNOLOGY <s> Deep learning is a promising approach for extracting accurate information from raw sensor data from IoT devices deployed in complex environments. Because of its multilayer structure, deep learning is also appropriate for the edge computing environment. Therefore, in this article, we first introduce deep learning for IoTs into the edge computing environment. Since existing edge nodes have limited processing capability, we also design a novel offloading strategy to optimize the performance of IoT deep learning applications with edge computing. In the performance evaluation, we test the performance of executing multiple deep learning tasks in an edge computing environment with our strategy. The evaluation results show that our method outperforms other optimization solutions on deep learning for IoT. <s> BIB005 </s> A Survey on the Progress of Testing Techniques and Methods for Wireless Sensor Networks <s> B. TESTING TECHNOLOGY <s> Abstract The powerful computing and storage capability of cloud computing can inject new vitality into wireless sensor networks (WSNs) and have motivated a series of new applications. However, data collection from WSNs to the Cloud is a bottleneck because the poor communication ability of WSNs, especially in delay-sensitive applications, limits their further development and applications. We propose a fog structure composed of multiple mobile sinks. Mobile sinks act as fog nodes to bridge the gap between WSNs and the Cloud. They cooperate with each other to set up a multi-input multi-output (MIMO) network, aiming to maximize the throughput and minimize the transmission latency. 
We district collecting zones for all sinks and then assign sensors to the corresponding sinks. For those assigned sensors, hops and energy consumption are considered to solve the hopspot problem. Sensor data are uploaded to the Cloud synchronously through sinks. The problem is proved to be NP-hard, and we design an approximation algorithm to solve this problem with several provable properties. We also designed a detailed routing algorithm for sensors considering hops and energy consumption. We compare our method to several traditional solutions. Extensive experimental results suggest that the proposed method significantly outperforms traditional solutions. <s> BIB006
|
Due to differences among the protocols and standards formulated by different standardization organizations, and due to the lack of authoritative and complete common standards, the related tests have become extremely difficult. In order to unify testing standards as soon as possible, the Standards Working Group on Sensor Networks (WGSN) has set up a testing specification project group (PG11) to conduct research and promote the development of the sensor network testing standard system. According to the sensor network standard system framework put forward by WGSN, the testing part includes the following three aspects: 1) Conformance testing. It is used to check whether the functions of certain devices in the sensor network, such as RFID tags, network gateways, and smart terminals, meet the standards, and to determine the degree of consistency between the implementation of the object under test and the standard. It mainly includes RF conformance and protocol conformance testing. 2) Interoperability testing. It is used to verify whether the network device under test has all the functions that the user needs. The interoperability test of the entire network is completed by observing the interaction process between the device under test and standard devices on the network interface. 3) System testing. It is responsible for testing the performance, security, and functionality of the entire network to determine whether each module can meet the relevant business requirements in actual applications, and also to identify possible points of failure and insecurity, thereby improving system availability. Wireless sensor network testing technology is of great significance for the development, operation, and maintenance of network systems. On the one hand, it helps analyze network behaviors, locate network failures or bottlenecks, and optimize network operation. On the other hand, it helps evaluate network performance, understand network operation patterns, and plan network deployment . It is also instructive for the development of related technologies BIB001 . The following tests serve different purposes, and some existing testing techniques are introduced. 1) For testing the protocol conformance of IPv6-based WSNs, the authors of the cited work select a series of measurement instruments, such as WSN multi-node simulators, gateway simulators, vector signal generators and analyzers, power meters, data acquisition analyzers, and conformance testing instruments, as testing tools to build a verification platform. 2) When testing protocol conformance and interoperability, the heterogeneity, dynamics, and application diversity of sensor networks need to be taken into account. The authors of the cited work propose a unified testing architecture consisting of testing managers and testing agents. The testing agent is used to match different protocols and physical interfaces; it can work independently, and a test can also be completed jointly by multiple testing agents. The testing manager performs centralized management of the testing agents and configures different testing applications. This standard provides unified testing cases for WIA-PA, 6LowPAN, and ISA100.11a BIB002 . The specific testing system is shown in Figure 1 .
3) To evaluate and test the feasibility of a sensor network middleware design method, and to verify whether the services provided are efficient and reliable, the authors of the cited work combine the ISO/IEC 9126 standard with the characteristics of sensor network applications, map the former onto the latter to find their similar parts, and propose testing criteria and testing content for sensor network middleware. 4) For testing the performance and security of routing protocols in wireless multi-hop networks, the authors of the cited work achieve comprehensive testing of multiple protocols based on routing algebra and a unified routing model. The protocol rule, parameter, test, and analysis libraries are designed with a modular architecture, and the tests are carried out with different testing methods while preserving scalability and compatibility. More importantly, the analysis of the results is automated, which reduces human error. Figure 2 shows the structure of the core testing processing module of the platform. It separates configuration data, testing result data, and forwarded data, and effectively avoids interference from other data streams during the testing process. 5) For testing platform performance, the authors of the cited work developed a wireless sensor network experiment bed, JmoteNet, which can carry out program image download, node programming, and testing data collection BIB003 , BIB006 through a wired back-end control network, and which uses a lightweight measurement module embedded in the node to efficiently obtain performance parameters such as power, throughput, delay, packet loss rate, and network topology. 6) For testing the reliability and fault tolerance of WSNs, Huang adopts the fault injection (FI) testing technique. This technique is based on a specific fault model: faults are generated artificially and deliberately and applied during system testing. The purpose is to accelerate the occurrence of system failures, to observe and record the system's response to the injected faults, and thereby to validate and evaluate the system through analysis (a toy sketch of this idea is given below). 7) In order to achieve zero interference during testing, Zhao et al. designed a high-precision testing backplane. Using internal interception technology (the testing backplane directly captures the interconnection signals of the sensor nodes) and an additional transmission network, transparent, high-precision testing of WSNs during operation is achieved, which allows signal analysis, protocol verification, and accurate performance evaluation of WSNs. Figure 3 shows the modules involved in the interaction between the platform's remote clients and the testing server. The remote access clients include multiple groups of testing applications, such as event replay and performance evaluation. According to specific needs, these testing applications use a subscription mechanism to access the testing data on the testing server through the existing network, and then analyze and process it. From this summary it can be seen that most existing testing techniques are not universal because they are shaped by the characteristics of particular WSNs, even when based on edge computing BIB004 , BIB005 . Therefore, there is still much room for the study of universal testing models.
In order to have a clearer understanding of the testing technology, the related technologies mentioned above are summarized in Table 2 .
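As mentioned in item 6) above, fault injection deliberately perturbs the system under test according to a fault model and observes how it reacts. The following minimal Python sketch wraps a send function and randomly drops or corrupts packets with configurable probabilities; the fault model, probabilities, and packet format are illustrative assumptions and do not correspond to any cited testbed.

```python
import random

class FaultInjector:
    """Toy fault-injection wrapper around a send function: drops or corrupts
    packets according to a simple probabilistic fault model (assumed values)."""

    def __init__(self, send_fn, drop_prob=0.1, corrupt_prob=0.05, seed=1):
        self.send_fn = send_fn
        self.drop_prob = drop_prob
        self.corrupt_prob = corrupt_prob
        self.rng = random.Random(seed)          # fixed seed -> reproducible fault pattern
        self.stats = {"sent": 0, "dropped": 0, "corrupted": 0}

    def send(self, packet: bytes) -> None:
        self.stats["sent"] += 1
        r = self.rng.random()
        if r < self.drop_prob:
            self.stats["dropped"] += 1
            return                               # fault: packet silently lost
        if r < self.drop_prob + self.corrupt_prob:
            packet = bytes([packet[0] ^ 0xFF]) + packet[1:]   # fault: bit flip in first byte
            self.stats["corrupted"] += 1
        self.send_fn(packet)

if __name__ == "__main__":
    received = []
    fi = FaultInjector(received.append, drop_prob=0.2)
    for i in range(100):
        fi.send(bytes([i % 256, 0xAB]))
    print(fi.stats, "delivered:", len(received))
```

In a real test campaign, the protocol stack under evaluation would sit behind such a wrapper, and the recorded statistics and the system's recovery behavior would feed the reliability and fault-tolerance analysis.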
|
A Survey on the Progress of Testing Techniques and Methods for Wireless Sensor Networks <s> C. PERFORMANCE EVALUATION <s> As a complex network consisting of sensing,processing and communication,wireless sensor network is driven by various applications and highly requires new QoS(Quality of Service) guarantees.However,unlike traditional Internet,its unique characteristics have brought unprecedented challenges in the area of QoS research.First a hierarchical description(user level,network level and node level) and a comprehensive specification for QoS parameters are presented in this paper.Then the mapping relationship between QoS parameters is carefully analyzed.Finally,a hierarchical QoS architecture is proposed for systemic QoS support in wireless sensor networks. <s> BIB001 </s> A Survey on the Progress of Testing Techniques and Methods for Wireless Sensor Networks <s> C. PERFORMANCE EVALUATION <s> battery power of nodes and nodes automatically move to the docking station if the power drops below a certain threshold. Secondly, for outdoor testbeds, the solar panels can be used to auto-recharge the batteries. Thirdly, for localization the centralized or distributed mechanisms can be employed and finally, an interface is required so that the user can perform the experiment using testbed interface. The existing interfaces of most of the MWSNTs are the "on the site interfaces" which means the interfaces are located at the testbed site and cannot be accessed remotely. However, most of the static WSN testbeds provideremote, online interface, such as Quri Nettestbed (2). Several initiatives are already taken to address the above mentioned challenges, in the development of various testbeds. A few ofsuch testbeds are included in this paper in order to give an idea about the kind of workalready done and what are the future trends in research. The rest of the paper is organized as follows. In Section II, a brief study of testbeds for different selected parameters (such as infrastructure, deployment, mobility, auto-recharging, localization, collision, cost, and user interface) is presented. In Section III, a quantitative and qualitative comparison of selected testbeds is shown in tabular form. Section IV concludes the paper, highlighting current trends and a few suggestions for future work in development of MWSNTs. <s> BIB002 </s> A Survey on the Progress of Testing Techniques and Methods for Wireless Sensor Networks <s> C. PERFORMANCE EVALUATION <s> With the development of new technologies, these last years have witnessed the emergence of a new paradigm: the Internet of Things (IoT) and of the physical world. We are now able to communicate and interact with our surrounding environment through the use of multiple tiny sensors, RFID technologies or small wireless robots. This allows a set of new applications and usages to be envisioned ranging from logistic and traceability purposes to emergency and rescue operations going through the monitoring of volcanos or forest fires. However, all this comes with several technical and scientific issues like how to ensure the reliability of wireless communications in disturbed environments, how to manage efficiently the low resources (energy, memory, etc) or how to set a safe and sustainable maintenance. All these issues are addressed by researchers all around the world but solutions designed for IoT need to face real experimentations to be validated. 
To ease such experimentations for IoT, several experimental test beds have been deployed offering diverse and heterogeneous services and tools. This article studies the different requirements and features such facilities should offer and survey the different experimental facilities currently available for the community, the different hardware used (as sensors and robots) and the scope of their services. We expect this survey assist a potential user to easily choose the one to use regarding his own needs. Finally, we identify existing gaps and difficulties and investigate new directions for such facilities. <s> BIB003 </s> A Survey on the Progress of Testing Techniques and Methods for Wireless Sensor Networks <s> C. PERFORMANCE EVALUATION <s> In order to improve the performance of wireless sensor network, this paper proposed a novel performance evaluation method based on energy efficiency and delay analysis in wireless sensor networks. Firstly, samples of wireless sensor network are collected, secondly, the network energy efficiency and network delay are used for the evaluation index of network performance, finally, neural network is used to establish evaluation model based on learning samples and the simulation experiments are carried out to test the performance. The results show that compared with other evaluation algorithm, the results of the proposed method are more reliable and scientific. <s> BIB004 </s> A Survey on the Progress of Testing Techniques and Methods for Wireless Sensor Networks <s> C. PERFORMANCE EVALUATION <s> Wireless sensor networks (WSNs) have a significant potential in diverse applications. In contrast to WSNs in a small-scale setting, the real-world adoption of large-scale WSNs is quite slow particularly due to the lack of robustness of protocols at all levels. Upon the demanding need for their experimental verification and evaluation, researchers have developed numerous WSN testbeds. While each individual WSN testbed contributes to the progress with its own unique innovation, still a missing element is an analysis on the overall system architecture and methodologies that can lead to systematic advances. This paper seeks to provide a framework to reason about the evolving WSN testbeds from the architectural perspective. We define three core requirements for WSN testbeds, which are scalability, flexibility, and efficiency. Then, we establish a taxonomy of WSN testbeds that represents the architectural design space by a hierarchy of design domains and associated design approaches. Through a comprehensive literature survey of existing prominent WSN testbeds, we examine their best practices for each design approach in our taxonomy. Finally, we qualitatively evaluate WSN testbeds for their responsiveness to the aforementioned core requirements by assessing the influence by each design approach on the core requirements and suggest future directions of research. <s> BIB005 </s> A Survey on the Progress of Testing Techniques and Methods for Wireless Sensor Networks <s> C. PERFORMANCE EVALUATION <s> As an alternative to current wired-based networks, wireless sensor networks (WSNs) are becoming an increasingly compelling platform for engineering structural health monitoring (SHM) due to relatively low-cost, easy installation, and so forth. However, there is still an unaddressed challenge: the application-specific dependability in terms of sensor fault detection and tolerance. 
The dependability is also affected by a reduction on the quality of monitoring when mitigating WSN constrains (e.g., limited energy, narrow bandwidth). We address these by designing a dependable distributed WSN framework for SHM (called DependSHM ) and then examining its ability to cope with sensor faults and constraints. We find evidence that faulty sensors can corrupt results of a health event (e.g., damage) in a structural system without being detected. More specifically, we bring attention to an undiscovered yet interesting fact, i.e., the real measured signals introduced by one or more faulty sensors may cause an undamaged location to be identified as damaged (false positive) or a damaged location as undamaged (false negative) diagnosis. This can be caused by faults in sensor bonding, precision degradation, amplification gain, bias, drift, noise, and so forth. In DependSHM , we present a distributed automated algorithm to detect such types of faults, and we offer an online signal reconstruction algorithm to recover from the wrong diagnosis. Through comprehensive simulations and a WSN prototype system implementation, we evaluate the effectiveness of DependSHM . <s> BIB006 </s> A Survey on the Progress of Testing Techniques and Methods for Wireless Sensor Networks <s> C. PERFORMANCE EVALUATION <s> Indoor localization has attracted increasing research attentions in the recent years. However, many important issues still need to be further studied to keep pace with new requirements and technica... <s> BIB007
|
Like traditional networks, information-centric WSNs also need to provide quality of service (QoS) for different users and applications. The differences are the node resources, communication capabilities, and processing capabilities. In the WSNs, they are extremely limited, and the maximum effort can only be made to balance the performance in all aspects. With the expansion of the application field of WSN, many applications have put forward higher requirements for QoS, such as multimedia applications and real-time monitoring systems BIB006 . So qualitative evaluation or quantitative research of their performance are of great importance. For applications, the focus is on coverage, measurement accuracy BIB007 , and the optimum number of active nodes. For the network, the main indices are commonly used endto-end delay, packet loss rate, bandwidth, and throughput. Wen et al. BIB001 systematized it and found the intrinsic relationship, which provides a theoretical reference for analysis and design for QoS guarantee and cross-layer optimization in specific network applications. Figure 4 shows the WSNs hierarchical QoS indicator. Whether it is the testing of various physical parameters or algorithms, protocols, and related functions and performance of different applications, it is necessary to select appropriate performance indicators and testing results for comparison and analysis, which can determine whether to meet the testing requirements. Usually, we choose higher correlation, more commonly used or typical parameters and the accuracy of the results obtained by different evaluation methods are different, usually decided by the accepted values of the parameters or the comparison with the actual situation to get the final conclusion. Wu et al. used the modeling mechanism based on Performance Evaluation Process Algebra (PEPA) to analyze and evaluate the network throughput, utilization rate and response time. PEPA has a compositional description technology that can describe a system model as a set of processes that interact through execution actions to evaluate whether a process is performing correctly and timely. In order to accurately evaluate the overall network performance, Wang and Wang BIB004 proposed a performance evaluation method based on energy efficiency and delay for WSNs. Firstly, the influence of channel error rate, packet retransmission mechanism and collision rate on network performance are analyzed comprehensively. Then two evaluation indexes of network energy efficiency and delay are constructed, and are weighted by entropy method. The neural network with strong nonlinear approximation capability is used to establish a network performance evaluation model. Then the influence of various factors on network performance is analyzed under different packet length conditions. In summary, HWSNTB is crucial to meet the needs of complex and multivariate testing and promote the research of WSNs in practicability and related technology. And, it is imperative to study cross platform, multi-technology integration, large-scale and feasible heterogeneous wireless sensor networks testing platform. At present, scholars at home and abroad mainly focus on mobile BIB002 , BIB003 , different platform research and development and system architecture BIB005 , but the research on platform heterogeneity is not in-depth. The heterogeneous testing platform can accomplish a variety of testing tasks and realize resources reuse and sharing, and reduce the deployment overhead in essence. 
In summary, heterogeneous WSN testbeds (HWSNTB) are crucial for meeting complex, multivariate testing needs and for advancing WSN research toward practicality, and it is imperative to study cross-platform, multi-technology, large-scale, and feasible heterogeneous testing platforms. At present, researchers mainly focus on mobility BIB002 , BIB003 and on the development and system architecture of individual platforms BIB005 , but research on platform heterogeneity itself remains shallow. A heterogeneous testing platform can carry out a variety of testing tasks, enable resource reuse and sharing, and fundamentally reduce deployment overhead. Therefore, in our work, we analyze the heterogeneity of the platform in detail. Figure 5 shows the structure of a general WSN testing platform, which consists of three parts: the area covered by the sensor nodes under test, the communication facilities for data transmission, and the server required for testing. Sensor nodes transmit sensed data to the server over wireless links, and local or remote users access the server or the nodes to obtain the information of interest. By analyzing this whole process, the sources of platform heterogeneity can be explored further.
|
A Survey on the Progress of Testing Techniques and Methods for Wireless Sensor Networks <s> A. HETEROGENEITY OF HARDWARE RESOURCE <s> Wireless Sensor Networks (WSN), an element of pervasive computing, are presently being used on a large scale to monitor real-time environmental status. However these sensors operate under extreme energy constraints and are designed by keeping an application in mind. Designing a new wireless sensor node is extremely challenging task and involves assessing a number of different parameters required by the target application, which includes range, antenna type, target technology, components, memory, storage, power, life time, security, computational capability, communication technology, power, size, programming interface and applications. This paper analyses commercially (and research prototypes) available wireless sensor nodes based on these parameters and outlines research directions in this area. <s> BIB001 </s> A Survey on the Progress of Testing Techniques and Methods for Wireless Sensor Networks <s> A. HETEROGENEITY OF HARDWARE RESOURCE <s> This paper describes a WSN platform architecture uniquely designed and implemented for the Internet of Things (IoTs). The paper elaborates on all the architectural design decisions and challenges across the three major divisions of the platform, that is, the middleware, hardware, and network layer. The result of this research is a unique WSN platform, Sprouts, which is rugged, cost effective, versatile, open source, and multistandard WSN platform that offers a step forward towards interoperable WSN platforms for the IoTs. Sprouts' architecture leverages state of the art technologies in hardware and network standards and builds upon our module-oriented DREAMS middleware architecture. Sprouts presents a much needed new approach that is different than the de-facto MSP430/AVR and Zigbee-based WSNs, and we discusses the reasons behind the necessary changes to meet the needs of IoT. Sprouts was tested in the harsh industrial environment of the Oil-Sands and showcased at the Ontario Centre of Excellence (OCE) Discovery of 2011. <s> BIB002 </s> A Survey on the Progress of Testing Techniques and Methods for Wireless Sensor Networks <s> A. HETEROGENEITY OF HARDWARE RESOURCE <s> Abstract The recent emergence of cloud computing has drastically influenced everyone’s perception of infrastructure architectures, data transmission and other aspects. With the advent of both mobile networks and cloud computing, the computationally-intensive services are moving to the cloud, and the end user’s mobile device is used as an interface to access these services. However, cyber threats are also becoming various and sophisticated, which will endanger the security of users’ private data. In traditional service mode, users’ data is totally stored in the cloud, they lose the right of control on their data and face cyber threats such as data loss and malicious modification. To this end, we propose a novel cloud storage scheme based on fog computing. In our scheme, user’s private data is separately stored in the cloud and fog servers. By this way, the integrity, availability and confidentiality of user’s data can be ensured because the data is retrieved from cloud as well as fog, which is safer. We implement a system prototype and design a series of mechanisms. Extensive experiments results also validate the proposed scheme and methods. <s> BIB003
|
The whole testing platform involves two kinds of hardware resources: sensor nodes on the one hand, and supporting facilities on the other, such as PCs, mobile portable devices, USB hubs, access points (APs), gateways, and adapters. Different testing platforms use different numbers and types of hardware resources. As the main carriers of data perception, sensor nodes must be able to independently collect and process various parameters of the physical world. Low-level nodes are only responsible for data acquisition, and therefore need low power consumption, long working hours, and large memory, whereas gateway nodes must offer high computing power, high processing speed, and a wide communication range. For the energy supply module, the main considerations are whether energy can be harvested from the environment, whether a built-in energy consumption measurement module is available to regulate consumption, and which energy-saving modes (such as dormancy or power-aware operation) can be configured. Potdar et al. BIB001 compared specific nodes against the parameters required by the target application, covering the key design and communication technologies such as antenna design, module components, storage, power, security BIB003 , remote programming, and interfaces. In addition, Farooq also introduced multimedia nodes, including MeshEye and WiCa. Table 3 summarizes the related parameters of the nodes commonly used in testing platforms. It can be seen that most of these nodes have a single, rigid structure and rely on very similar techniques. To make nodes more flexible and meet diversified requirements, Kouche BIB002 improved the node structure by comparing existing processor and communication-module technologies, resulting in the Sprouts node and providing a valuable reference for node design.
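As a toy illustration of this kind of parameter matching, the snippet below filters candidate nodes against application requirements; the spec values are approximate, commonly cited figures and the field names are our own invention, not the exact contents of Table 3.

# Approximate, illustrative node parameters (not the exact values of Table 3).
NODE_SPECS = {
    "TelosB": {"ram_kb": 10,  "flash_kb": 48,    "radio": "802.15.4", "mcu": "MSP430"},
    "MicaZ":  {"ram_kb": 4,   "flash_kb": 128,   "radio": "802.15.4", "mcu": "ATmega128L"},
    "Imote2": {"ram_kb": 256, "flash_kb": 32768, "radio": "802.15.4", "mcu": "PXA271"},
}

def candidate_nodes(req, specs=NODE_SPECS):
    """Return the nodes whose hardware parameters satisfy the application needs."""
    return [name for name, s in specs.items()
            if s["ram_kb"] >= req.get("min_ram_kb", 0)
            and s["flash_kb"] >= req.get("min_flash_kb", 0)
            and s["radio"] == req.get("radio", s["radio"])]

print(candidate_nodes({"min_ram_kb": 8, "radio": "802.15.4"}))  # -> ['TelosB', 'Imote2']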
|
A Survey on the Progress of Testing Techniques and Methods for Wireless Sensor Networks <s> B. HETEROGENEITY OF SOFTWARE RESOURCES <s> Wireless sensor networks are composed of large numbers of tiny networked devices that communicate untethered. For large scale networks, it is important to be able to download code into the network dynamically. We present Contiki, a lightweight operating system with support for dynamic loading and replacement of individual programs and services. Contiki is built around an event-driven kernel but provides optional preemptive multithreading that can be applied to individual processes. We show that dynamic loading and unloading is feasible in a resource constrained environment, while keeping the base system lightweight and compact. <s> BIB001 </s> A Survey on the Progress of Testing Techniques and Methods for Wireless Sensor Networks <s> B. HETEROGENEITY OF SOFTWARE RESOURCES <s> Sensor network nodes exhibit characteristics of both embedded systems and general-purpose systems. They must use little energy and be robust to environmental conditions, while also providing common services that make it easy to write applications. In TinyOS, the current state of the art in sensor node operating systems, reusable components implement common services, but each node runs a single statically-linked system image, making it hard to run multiple applications or incrementally update applications. We present SOS, a new operating system for mote-class sensor nodes that takes a more dynamic point on the design spectrum. SOS consists of dynamically-loaded modules and a common kernel, which implements messaging, dynamic memory, and module loading and unloading, among other services. Modules are not processes: they are scheduled cooperatively and there is no memory protection. Nevertheless, the system protects against common module bugs using techniques such as typed entry points, watchdog timers, and primitive resource garbage collection. Individual modules can be added and removed with minimal system interruption. We describe SOS's design and implementation, discuss tradeoffs, and compare it with TinyOS and with the Mate virtual machine. Our evaluation shows that despite the dynamic nature of SOS and its higher-level kernel interface, its long term total usage nearly identical to that of systems such as Mate and TinyOS. <s> BIB002 </s> A Survey on the Progress of Testing Techniques and Methods for Wireless Sensor Networks <s> B. HETEROGENEITY OF SOFTWARE RESOURCES <s> The MANTIS MultimodAl system for NeTworks of In-situ wireless Sensors provides a new multithreaded cross-platform embedded operating system for wireless sensor networks. As sensor networks accommodate increasingly complex tasks such as compression/aggregation and signal processing, preemptive multithreading in the MANTIS sensor OS (MOS) enables micro sensor nodes to natively interleave complex tasks with time-sensitive tasks, thereby mitigating the bounded buffer producer-consumer problem. To achieve memory efficiency, MOS is implemented in a lightweight RAM footprint that fits in less than 500 bytes of memory, including kernel, scheduler, and network stack. To achieve energy efficiency, the MOS power-efficient scheduler sleeps the microcontroller after all active threads have called the MOS sleep() function, reducing current consumption to the µA range. A key MOS design feature is flexibility in the form of cross-platform support and testing across PCs, PDAs, and different micro sensor platforms. 
Another key MOS design feature is support for remote management of in-situ sensors via dynamic reprogramming and remote login. <s> BIB003 </s> A Survey on the Progress of Testing Techniques and Methods for Wireless Sensor Networks <s> B. HETEROGENEITY OF SOFTWARE RESOURCES <s> We present TinyOS, a flexible, application-specific operating system for sensor networks, which form a core component of ambient intelligence systems. Sensor networks consist of (potentially) thousands of tiny, low-power nodes, each of which execute concurrent, reactive programs that must operate with severe memory and power constraints. The sensor network challenges of limited resources, event-centric concurrent applications, and low-power operation drive the design of TinyOS. Our solution combines flexible, fine-grain components with an execution model that supports complex yet safe concurrent operations. TinyOS meets these challenges well and has become the platform of choice for sensor network research; it is in use by over a hundred groups worldwide, and supports a broad range of applications and research topics. We provide a qualitative and quantitative evaluation of the system, showing that it supports complex, concurrent programs with very low memory requirements (many applications fit within 16KB of memory, and the core OS is 400 bytes) and efficient, low-power operation.We present our experiences with TinyOS as a platform for sensor network innovation and applications. <s> BIB004 </s> A Survey on the Progress of Testing Techniques and Methods for Wireless Sensor Networks <s> B. HETEROGENEITY OF SOFTWARE RESOURCES <s> In this paper, we survey the current state-of-the-art in middleware and systems for Wireless Sensor Networks (WSN). We provide a discussion on the definition ofWSN middleware, design issues associated with it, and the taxonomies commonly used to categorize it. We also present a categorization of a number of such middleware platforms, using middleware functionalities and challenges which we think will play a crucial role in developing software for WSN in the near future. Finally, we provide a short discussion on WSN middleware trends. <s> BIB005
|
A testing platform cannot be built without the support of various software resources. In general, the software must execute and monitor the testing tasks and manage and allocate resources through the cooperation of different functional modules running on the underlying hardware. These resources can be divided into operating systems, middleware, and servers. Strictly speaking, a server combines software and hardware, but it is classified here as a software resource because its main role in the testing platform is to provide services and manage storage. The operating systems (OS) include Windows and Linux on PCs, as well as dedicated OSs for WSNs. Owing to the particular nature of WSNs and the resource limitations of the nodes, purpose-built operating systems are needed that can manage the node processor, memory, peripheral communication interfaces, and energy, and support a variety of specific upper-layer applications. At present, TinyOS BIB004 , SOS BIB002 , Contiki BIB001 , and Mantis BIB003 are the most common. Middleware, the system software sitting between the operating system and the application, provides a unified runtime platform and a friendly development environment by shielding the heterogeneity of the underlying components. It narrows the gap between applications and the underlying equipment and solves the cross-platform interoperability problem. Its main roles are to support node programming and to provide quality of service, data management, resource management, remote communication with nodes, control of the WSN topology, and security protection. More importantly, it can offer mechanisms such as effective interaction between tasks and the network, task decomposition, cooperative work among nodes, and abstraction of heterogeneity. However, the dozen or so existing middleware systems support platform heterogeneity only in theory, so support for heterogeneity is still lacking and deserves further in-depth study BIB005 .
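To make the idea of shielding heterogeneity concrete, the sketch below shows one possible middleware-style abstraction in which application code talks to a uniform driver interface while OS-specific back-ends hide the TinyOS- or Contiki-specific access details. The interface and class names are our own illustration (with simulated readings standing in for real protocol access), not the API of any existing middleware.

from abc import ABC, abstractmethod
import random

class NodeDriver(ABC):
    """Uniform interface the middleware exposes, regardless of the node's OS."""
    @abstractmethod
    def read_sensor(self, channel: str) -> float: ...

class TinyOSDriver(NodeDriver):
    def read_sensor(self, channel):
        # A real back-end would speak TinyOS's serial/Active Message protocol;
        # here we simply return a simulated reading.
        return 20.0 + random.random()

class ContikiDriver(NodeDriver):
    def read_sensor(self, channel):
        # A real back-end might use Contiki's shell or a CoAP resource.
        return 20.0 + random.random()

def collect(nodes, channel="temperature"):
    """Application code sees only NodeDriver, never the per-OS details."""
    return {nid: drv.read_sensor(channel) for nid, drv in nodes.items()}

print(collect({"n1": TinyOSDriver(), "n2": ContikiDriver()}))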
|
A Survey on the Progress of Testing Techniques and Methods for Wireless Sensor Networks <s> 1) WSNTB <s> In this paper, we design and implement a testbed to realize various experiments in heterogeneous wireless sensor networks. Our implementation includes hardware infrastructure and software framework. The hardware infrastructure consists of servers, gateways with converters, and sensor nodes in three-tier. Our testbed can support different sensor nodes with USB or RS232 interface. Users can experiment with real hardware resources and interactive with our testbed in real-time. Here, we deploy two kinds of self-designed sensor nodes, Octopus I and Octopus II, in our testbed. The Octopus sensor nodes are compatible with IEEE 802.15.4/ZigBee standard for experiments. The software framework composes with three main layers: services interface layer, testbed core layer, and resource access layer. Our testbed allows users to customize their applications for specific sensor nodes and experiments locally with remote hardware resource. Users can freely choose the number of nodes and assign a period of processing time through our testbed Website. By using our testbed, users can save lots time from creating an experiment environment, reduce the hardware expense, enhance the devices utilization rate, and fasten on the verification of experiment results. <s> BIB001 </s> A Survey on the Progress of Testing Techniques and Methods for Wireless Sensor Networks <s> 1) WSNTB <s> As an extension for Internet of Things (IoT), Internet of Vehicles (IoV) achieves unified management in smart transportation area. With the development of IoV, an increasing number of vehicles are connected to the network. Large scale IoV collects data from different places and various attributes, which conform with heterogeneous nature of big data in size, volume, and dimensionality. Big data collection between vehicle and application platform becomes more and more frequent through various communication technologies, which causes evolving security attack. However, the existing protocols in IoT cannot be directly applied in big data collection in large scale IoV. The dynamic network structure and growing amount of vehicle nodes increases the complexity and necessary of the secure mechanism. In this paper, a secure mechanism for big data collection in large scale IoV is proposed for improved security performance and efficiency. To begin with, vehicles need to register in the big data center to connect into the network. Afterward, vehicles associate with big data center via mutual authentication and single sign-on algorithm. Two different secure protocols are proposed for business data and confidential data collection. The collected big data is stored securely using distributed storage. The discussion and performance evaluation result shows the security and efficiency of the proposed secure mechanism. <s> BIB002
|
WSNTB BIB001 is a reconfigurable heterogeneous sensor network testing platform developed at National Tsing Hua University in Taiwan. WSNTB is composed of a server layer, a gateway layer, and a node layer. It uses the self-made Octopus I and Octopus II nodes, which support ZigBee-compliant communication, connect to Ethernet through USB or RS232 interfaces, and run TinyOS as well as the self-developed LOSs operating system; middleware was developed to help users handle their experiments. The platform provides two access routes: 1) a local mode that allows users to select specific nodes locally according to their needs, in which the platform prompts the user to add a remote serial port when an experiment starts and then transmits data directly through this port; 2) a web interface through which the related operations are carried out. The platform includes 2 WSNs and 3 gateways, and users are free to choose the configuration. Net4501 and Net4801 single-board computers are used as gateways and a ZyXEL ES-108A is used as the LAN gateway. VIP users can use high-priority bandwidth. The software structure comprises a service interface layer, a testing platform core layer, and a resource access layer. To ensure security, private IP addresses are used for communication with the database. The platform also contains an event reminder module to keep control of the experimental process accurate and real-time. A simulation server is essential to the local mode and the real-time control protocol; Microsoft VB and the Java run-time library are installed to support the TinyOS and Cygwin environments, and virtual COM software (creating virtual ports) is used for the local mode. To enable automatic restart, each node is equipped with a hardware reset device. The nodes have their own energy consumption measurement modules, and the platform verifies protocols and algorithms by collecting data and presenting it in visualization software BIB002 .
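The remote serial port that WSNTB adds in local mode could, in principle, be consumed by a small script such as the one below, which connects to a TCP-forwarded serial stream and logs whatever the node prints. The host name and port are placeholders, and the access details of the real testbed may differ.

import socket
import time

HOST, PORT = "testbed.example.edu", 20001   # placeholders, not real WSNTB endpoints

def log_serial(host=HOST, port=PORT, seconds=30, out="node.log"):
    """Capture the node's serial output exposed over TCP for a fixed duration."""
    deadline = time.time() + seconds
    with socket.create_connection((host, port), timeout=5) as sock, open(out, "wb") as f:
        sock.settimeout(1.0)
        while time.time() < deadline:
            try:
                chunk = sock.recv(4096)
            except socket.timeout:
                continue                     # no data yet, keep waiting
            if not chunk:
                break                        # remote side closed the stream
            f.write(chunk)

if __name__ == "__main__":
    log_serial()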
|
A Survey on the Progress of Testing Techniques and Methods for Wireless Sensor Networks <s> 1) LabVIEW <s> This paper describes the development of a mobile sensor network test-bed at the Automation and Robotics Research Institute (University of Texas at Arlington). LabVIEW high-level programming language is used to program, control and monitor a variety of off-the-shelf hardware platforms (both sensor motes and mobile robots). The test-bed is composed of two independent mobile sensor networks connected to the same base station. The first network has controlled mobility and performs environmental monitoring tasks. The second network has random mobility and acts as an unpredictable source of events for the first network. After providing a detailed description of the hardware and software design of our test-bed, we describe two case studies in mobile sensor network research which we are currently implementing on our test-bed, namely potential field localization and discrete event coordination. <s> BIB001 </s> A Survey on the Progress of Testing Techniques and Methods for Wireless Sensor Networks <s> 1) LabVIEW <s> This paper proposes an energy-efficient routing mechanism by introducing intentional mobility to wireless sensor networks (WSNs) with obstacles. In the sensing field, Mobile Data Collectors (MDCs) can freely move for collecting data from sensors. An MDC begins its periodical movement from the base station and finally returns and transports the data to the base station. In physical environments, the sensing field may contain various obstacles. A research challenge is how to find an obstacle-avoiding shortest tour for the MDC. Firstly, we obtain the same size grid cells by dividing the network region. Secondly, according to the line sweep technique, the spanning graph is easily constructed. The spanning graph composed of some grid cells usually includes the shortest search path for the MDC. Then, based on the spanning graph, we can construct a complete graph by Warshall-Floyd algorithm. Finally, we present a heuristic tour-planning algorithm on the basis of the complete graph. Through simulation, the validity of our method is verified. This paper contributes in providing an energy-efficient routing mechanism for the WSNs with obstacles. <s> BIB002
|
This testbed BIB001 is built with LabVIEW, which is widely used to simplify deployment and design and which benefits resource reuse and platform migration, since the testbed adopts off-the-shelf hardware and the LabVIEW software package. The user interface and system management tools are developed in the LabVIEW programming environment to give users a high-level view of the network, and the interface program is installed on the management PC. Once a node has been programmed separately through its RS-232 interface, the LabVIEW application records and visualizes the testing data in real time. The platform consists of two independent networks, each connected to a base station node attached to the PC through a serial port. The first network contains 15 MicaZ nodes configured to relay data in multi-hop mode; the second contains 8 Cricket nodes that communicate with the base station in single-hop mode. Four of the MicaZ nodes are mounted on Acroname robots or a Cybermotion sentry robot, which provide controlled mobility, and the initial positions of the robots are known. The Cricket nodes have ultrasonic positioning modules that measure the relative positions between nodes, and two of them are carried by people to track their locations. Wireless communication between the robots and the MicaZ nodes takes place at 433 MHz. The LabVIEW interface allows TinyOS programs to be installed on the nodes and provides a GUI for visual analysis of runtime data. For the mobile nodes BIB002 , library functions for sending commands and receiving data are provided; whenever a new node is added to a robot or a new localization algorithm is implemented on it, the corresponding library functions are updated accordingly. Two experiments were carried out on the platform to verify precise node localization and to adapt to changes in network topology: a dynamic model is proposed to estimate the absolute and relative positions of nodes, and a matrix-based discrete event control strategy is used to handle node mobility, node addition or deletion, and network failures caused by poor communication links.
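The "send command / receive data" library functions mentioned above could look roughly like the following wrapper for a robot-mounted node. The JSON-over-TCP wire format, port number, and command names are assumptions made purely for illustration and do not reflect the LabVIEW implementation.

import json
import socket

class MobileNodeClient:
    """Hypothetical command/telemetry wrapper for a robot-mounted sensor node."""

    def __init__(self, host, port=9000):
        self.addr = (host, port)

    def _call(self, message):
        # One request/response exchange per command, newline-delimited JSON.
        with socket.create_connection(self.addr, timeout=2) as s:
            s.sendall((json.dumps(message) + "\n").encode())
            return json.loads(s.makefile().readline())

    def move(self, dx, dy):
        return self._call({"cmd": "move", "dx": dx, "dy": dy})

    def pose(self):
        return self._call({"cmd": "pose"})   # e.g. {"x": ..., "y": ..., "theta": ...}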
|
A Survey on the Progress of Testing Techniques and Methods for Wireless Sensor Networks <s> 2) WISEBED <s> In this paper we present an overview of WISEBED, a large-scale wireless sensor network testbed, which is currently being built for research purposes. This project is led by a number of European Universities and Research Institutes, hoping to provide scientists, researchers and companies with an environment to conduct experiments with, in order to evaluate and validate their sensor network-related work. The initial planning of the project includes a large, heterogeneous testbed, consisting of at least 9 geographically disparate networks that include both sensor and actuator nodes, and scaling in the order of thousands (currently being in total 550 nodes). We present here the overall architecture of WISEBED, focusing on certain aspects of the software ecosystem surrounding the project, such as the Open Federation Alliance, which will enable a view of the whole testbed, or parts of it, as single entities, and the testbed’s tight integration with the Shawn network simulator. We also present examples of the actual hardware used currently in the testbed and outline the architecture of two of the testbed’s sites. <s> BIB001 </s> A Survey on the Progress of Testing Techniques and Methods for Wireless Sensor Networks <s> 2) WISEBED <s> Wireless sensor networks promise great success in many areas from environmental monitoring to medical and military applications. Forest fire detection is one of these areas where many of the ongoing WSN research is focused today. Unfortunately, most of these studies choose simulating their proposed solutions instead of doing experiments in real testbed environments, since that kind of setup exposes additional difficulties. Our previous work, named FireSense, proposed a fire detection algorithm, which was shown to be successful in terms of simulation results. In this study, we take FireSense to a real outdoor testbed for further analysis of its effectiveness in terms of various parameters such as link and node failures, topology and physical configuration changes, wind direction, ignition point position and sampling period variations. <s> BIB002 </s> A Survey on the Progress of Testing Techniques and Methods for Wireless Sensor Networks <s> 2) WISEBED <s> SensorScope is a turnkey solution for environmental monitoring systems, based on a wireless sensor network and resulting from a collaboration between environmental and network researchers. Given the interest in climate change, environmental monitoring is a domain where sensor networks will have great impact by providing high resolution spatio-temporal data for long periods of time. SensorScope is such a system, which has already been successfully deployed multiple times in various environments (e.g., mountainous, urban). Here, we describe the overall hardware and software architectures and especially focus on the sensor network itself. We also describe one of our most prominent deployments, on top of a rock glacier in Switzerland, which resulted in the description of a micro-climate phenomenon leading to cold air release from a rock-covered glacier in a region of high alpine risks. Another focus of this paper is the description of what happened behind the scenes to turn SensorScope from a laboratory experiment into successful outdoor deployments in harsh environments. Illustrated by various examples, we point out many lessons learned while working on the project. 
We indicate the importance of simple code, well suited to the application, as well as the value of close interaction with end-users in planning and running the network and finally exploiting the data. <s> BIB003 </s> A Survey on the Progress of Testing Techniques and Methods for Wireless Sensor Networks <s> 2) WISEBED <s> Research in the area of Wireless Sensor Networks (WSNs) has become more and more driven by real-world experimental evaluations rather than network simulation. Numerous testbeds of WSNs have been set up in the past decade, often with very much differing architectural design and hardware. The Testbed Management Architecture for Wireless Sensor Networks (TARWIS) presented in this paper provides the most crucial management and scheduling functionalities for WSN testbeds, independent from the testbed architecture and the sensor node's operating systems. These functionalities are: a consistent notion of users and user groups, resource reservation features, support for reprogramming and reconfiguration of the nodes, provisions to debug and remotely reset sensor nodes in case of node failures, as well as a solution for collecting and storing experimental data. We describe the workflow of using a TARWIS on a WSN testbed over the entire experimentation life cycle, starting from resource reservation over experiment definition to the collection of real-world experimental data. <s> BIB004 </s> A Survey on the Progress of Testing Techniques and Methods for Wireless Sensor Networks <s> 2) WISEBED <s> Wireless sensor network (WSN) has emerged as one of the most promising technologies for the future. This has been enabled by advances in technology and availability of small, inexpensive, and smart sensors resulting in cost effective and easily deployable WSNs. However, researchers must address a variety of challenges to facilitate the widespread deployment of WSN technology in real-world domains. In this survey, we give an overview of wireless sensor networks and their application domains including the challenges that should be addressed in order to push the technology further. Then we review the recent technologies and testbeds for WSNs. Finally, we identify several open research issues that need to be investigated in future. ::: ::: Our survey is different from existing surveys in that we focus on recent developments in wireless sensor network technologies. We review the leading research projects, standards and technologies, and platforms. Moreover, we highlight a recent phenomenon in WSN research that is to explore synergy between sensor networks and other technologies and explain how this can help sensor networks achieve their full potential. This paper intends to help new researchers entering the domain of WSNs by providing a comprehensive survey on recent developments. <s> BIB005 </s> A Survey on the Progress of Testing Techniques and Methods for Wireless Sensor Networks <s> 2) WISEBED <s> This paper introduces the FIT IoT-LAB testbed, an open testbed composed of 2728 low-power wireless nodes and 117 mobile robots available for experimenting with large-scale wireless IoT technologies, ranging from low-level protocols to advanced Internet services. IoT-LAB is built to accelerate the development of tomorrow's IoT technology by offering an accurate open-access and open-source multi-user scientific tool. The IoT-LAB testbed is deployed in 6 sites across France. 
Each site features different node and hardware capabilities, but all sites are interconnected and available through the same web portal, common REST interfaces and consistent CLI tools. The result is a heterogeneous testing environment, which covers a large spectrum of IoT use cases and applications. IoT-LAB is a one-of-a-kind facility, allowing anyone to test their solution at scale, experiment and fine-tune new networking concept. <s> BIB006
|
WISEBED BIB001 , similar to SensLAB, consists of 9 separate testing platforms located in different regions of Europe and combined on the basis of platform virtualization and virtual links. The platform is deployed with 550 nodes, including static nodes such as iSense, TelosB, MicaZ, and Tmote Sky, and mobile nodes such as the Roomba Robot 530, SunSPOT, and Moway. These nodes carry a variety of sensors and wireless chips, such as the CC2420 (2.4 GHz) and CC1100 (868 MHz). The backbone network includes both wired and wireless links (Ethernet, IEEE 802.15.4, WiFi). The system has a hierarchical structure; each layer is composed of one or more sibling testing platforms that are mainly responsible for responding to different event commands and for communicating with other platforms, and the lowest layer is made up of the node devices running iSense firmware, Contiki, and TinyOS. Figure 8 shows the software configuration of one platform. Each independent platform is controlled by a portal server, and the portal servers of the geographically distributed platforms are connected by an overlay network. An overlay node exposes the same interface as the portal server, so users can access the unified distributed testing platform through the overlay network, while a single platform can be accessed directly through its portal server. The services provided by the inner layer of the portal server include gateway access to the nodes (IEEE 802.15.4, RS232) and connection to local storage (XML files, RDBMS) for debug history and access lists; the outer layer provides user services to operate the platform and to access the portal servers through a common IP interface. The platform uses the TARWIS BIB004 management system to manage resources and to provide multi-user access, online configuration and scheduling of experiments, automatic data query, and real-time monitoring; the system is independent of node type and OS. Web services provide platform authentication, authorization (using the well-known federated identity system Shibboleth), user management, and network control, debugging, and configuration (WSN API). Each entity in the network has a unique identifier (a URN). The Shawn simulator acts as a virtual testing platform and communicates through virtual links: messages sent by the nodes are routed to the local portal server of the testing platform, which compares them with the description of the virtual testing platform to determine the adjacent nodes; at the same time, it uses LQI computations to decide whether to drop or alter parts of the message before delivering it to the target node. Wiselib is a platform-independent generic algorithm library; it provides general APIs for implementing algorithms, can be compiled on different software and hardware platforms, is written in C++, and supports a wide range of platforms and OSs.
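The per-message decision taken on WISEBED's virtual links can be mimicked with a few lines of code. The mapping from LQI to loss probability and the small corruption chance below are invented for illustration and are not the rules actually used by Shawn or the portal servers.

import random

def deliver_over_virtual_link(payload: bytes, lqi: int, rng=random.random):
    """Forward, corrupt, or drop a message depending on the link's LQI (0-255)."""
    lqi = max(0, min(255, lqi))
    p_loss = 1.0 - lqi / 255.0                # poorer link quality -> more loss
    r = rng()
    if r < p_loss:
        return None                           # message dropped
    if payload and r < p_loss + 0.05:         # small chance of single-byte corruption
        i = int(rng() * len(payload))
        payload = payload[:i] + bytes([payload[i] ^ 0xFF]) + payload[i + 1:]
    return payload

print(deliver_over_virtual_link(b"hello", lqi=200))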
3) IoT-LAB
IoT-LAB BIB006 is suitable for testing a wide range of IoT applications. It is the logical evolution of SensLAB and deploys a large number of nodes and mobile robots, distributed over 6 different sites in France. IoT-LAB nodes are interconnected through a backbone network that powers them and connects them to the servers, and the management software provides real-time access to the nodes. An IoT-LAB node consists of three modules: the Open Node (ON), the Gateway (GW), and the Control Node (CN). The ON is a low-power device reprogrammed by users, the GW is a small Linux computer, and the CN controls the ON and monitors its energy consumption. The platform offers static nodes such as WSN430, M3, and A8, and mobile nodes such as Turtlebot and WiFibot; the robots can use infrared beacons and cameras to find their charging docks. The nodes support five operating systems: FreeRTOS, Contiki, TinyOS, RIOT, and OpenWSN. By controlling the CN, users can configure the frequency and number of measurements, activate the radio monitoring mode, and control the reception and forwarding of node data packets; they can also collect data with the open-source measurement library and encapsulate captured packets with the ZEP protocol. The GW module provides a REST-based management interface implementing all API instructions (a brief illustrative sketch of such access appears at the end of this subsection), and it connects to the ON's JTAG port to run an OpenOCD GDB server, which enables users to debug nodes remotely. The robots run ROS on Linux and expose the same management interface; they localize themselves by counting wheel rotations, record position information in the OML format, and synchronize it to the back-end software over WiFi. The platform, interconnected by VPN, has one main site and six sub-sites. The main site is responsible for user authentication (an LDAP directory tree) and a private domain name system (DNS), and it interacts with the open-source resource manager OAR, which the platform uses to manage authorization, users, and resources. After registering an account, users can deploy their experiments on the IoT-LAB website. They can use the CLI tools directly to edit source code, install firmware on nodes, access node serial ports, log into A8 nodes through SSH, and debug M3 nodes remotely with OpenOCD and GDB. OML files are used to store the energy consumption data, and Wireshark is used to analyze the monitored traffic. GPS modules on some A8 nodes provide precise end-to-end time synchronization for accurately monitoring and evaluating communication protocols. Two experiments were carried out on the platform: the first measures the effect of WiFi traffic on an IEEE 802.15.4 network, detecting interference from other communication technologies on the same frequency channel by observing the nodes' RSSI values; the second uses M3 nodes to detect and track people or mobile nodes, mainly relying on on-board sensors and an RF localization algorithm. Figure 9 shows the system architecture of IoT-LAB. Most of the testing platforms described above are deployed in laboratories, but several large-scale field deployments are also available. For example, FireSenseTB BIB002 is designed to detect forest fires and to emulate fire scenarios, and SensorScope BIB003 is a powerful outdoor environmental monitoring system built to sustain long-term monitoring in harsh environments. There are also advanced WSN research projects, such as SmartSantander, GlacsWeb, and e-SENSE, which are summarized in BIB005 .
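As an illustration of what programmatic access through a REST interface such as IoT-LAB's might look like, the snippet below lists a user's running experiments. The base URL and resource path are placeholders rather than the documented IoT-LAB endpoints, so the real API must be checked before use.

import requests

API = "https://api.example-iotlab.org"   # placeholder base URL, not the real endpoint

def list_running_experiments(user, password):
    """Query the (assumed) experiments resource, authenticating with HTTP Basic auth."""
    r = requests.get(f"{API}/experiments", params={"state": "Running"},
                     auth=(user, password), timeout=10)
    r.raise_for_status()
    return r.json()

# Example (requires valid credentials and the real endpoint):
# print(list_running_experiments("alice", "secret"))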
|
The many faces of data-centric workflow optimization: a survey <s> Introduction <s> The past decade has witnessed a growing trend in designing and using workflow systems with a focus on supporting the scientific research process in bioinformatics and other areas of life sciences. The aim of these systems is mainly to simplify access, control and orchestration of remote distributed scientific data sets using remote computational resources, such as EBI web services. In this paper we present the state of the art in the field by reviewing six such systems: Discovery Net, Taverna, Triana, Kepler, Yawl and BPEL. We provide a high-level framework for comparing the systems based on their control flow and data flow properties with a view of both informing future research in the area by academic researchers and facilitating the selection of the most appropriate system for a specific application task by practitioners. <s> BIB001 </s> The many faces of data-centric workflow optimization: a survey <s> Introduction <s> Scientific workflow systems have become a necessary tool for many applications, enabling the composition and execution of complex analysis on distributed resources. Today there are many workflow systems, often with overlapping functionality. A key issue for potential users of workflow systems is the need to be able to compare the capabilities of the various available tools. There can be confusion about system functionality and the tools are often selected without a proper functional analysis. In this paper we extract a taxonomy of features from the way scientists make use of existing workflow systems and we illustrate this feature set by providing some examples taken from existing workflow systems. The taxonomy provides end users with a mechanism by which they can assess the suitability of workflow in general and how they might use these features to make an informed choice about which workflow system would be a good choice for their particular application. <s> BIB002 </s> The many faces of data-centric workflow optimization: a survey <s> Introduction <s> This chapter describes a design methodology for business processes and workflows that focuses first on “business artifacts”, which represent key (real or conceptual) business entities, including both the business-relevant data about them and their macro-level lifecycles. Individual workflow services (a.k.a. tasks) are then incorporated, by specifying how they operate on the artifacts and fit into their lifecycles. The resulting workflow is specified in a particular artifact-centric workflow model, which is introduced using an extended example. At the logical level this workflow model is largely declarative, in contrast with most traditional workflow models which are procedural and/or graph-based. The chapter includes a discussion of how the declarative, artifact-centric workflow specification can be mapped into an optimized physical realization. <s> BIB003 </s> The many faces of data-centric workflow optimization: a survey <s> Introduction <s> Business Intelligence (BI) refers to technologies, tools, and practices for collecting, integrating, analyzing, and presenting large volumes of information to enable better decision making. Today's BI architecture typically consists of a data warehouse (or one or more data marts), which consolidates data from several operational databases, and serves a variety of front-end querying, reporting, and analytic tools. 
The back-end of the architecture is a data integration pipeline for populating the data warehouse by extracting data from distributed and usually heterogeneous operational sources; cleansing, integrating and transforming the data; and loading it into the data warehouse. Since BI systems have been used primarily for off-line, strategic decision making, the traditional data integration pipeline is a oneway, batch process, usually implemented by extract-transform-load (ETL) tools. The design and implementation of the ETL pipeline is largely a labor-intensive activity, and typically consumes a large fraction of the effort in data warehousing projects. Increasingly, as enterprises become more automated, data-driven, and real-time, the BI architecture is evolving to support operational decision making. This imposes additional requirements and tradeoffs, resulting in even more complexity in the design of data integration flows. These include reducing the latency so that near real-time data can be delivered to the data warehouse, extracting information from a wider variety of data sources, extending the rigidly serial ETL pipeline to more general data flows, and considering alternative physical implementations. We describe the requirements for data integration flows in this next generation of operational BI system, the limitations of current technologies, the research challenges in meeting these requirements, and a framework for addressing these challenges. The goal is to facilitate the design and implementation of optimal flows to meet business requirements. <s> BIB004 </s> The many faces of data-centric workflow optimization: a survey <s> Introduction <s> Despite an increasing interest in scientific workflow technologies in recent years, workflow design remains a challenging, slow, and often error-prone process, thus limiting the speed of further adoption of scientific workflows. Based on practical experience with data-driven workflows, we identify and illustrate a number of recurring scientific workflow design challenges, i.e., parameter-rich functions; data assembly, disassembly, and cohesion; conditional execution; iteration; and, more generally, workflow evolution. In conventional approaches, such challenges usually lead to the introduction of different types of "shims", i.e., intermediary workflow steps that act as adapters between otherwise incorrectly wired components. However, relying heavily on the use of shims leads to brittle (i.e., change-intolerant) workflow designs that are hard to comprehend and maintain. To this end, we present a general workflow design paradigm called virtual data assembly lines (VDAL). In this paper, we show how the VDAL approach can overcome common scientific workflow design challenges and improve workflow designs by exploiting (i) a semistructured, nested data model like XML, (ii) a flexible, statically analyzable configuration mechanism (e.g., an XQuery fragment), and (iii) an underlying virtual assembly line model that is resilient to workflow and data changes. The approach has been implemented as Kepler/COMAD, and applied to improve the design of complex, real-world workflows. <s> BIB005 </s> The many faces of data-centric workflow optimization: a survey <s> Introduction <s> As business intelligence becomes increasingly essential for organizations and as it evolves from strategic to operational, the complexity of Extract-Transform-Load (ETL) processes grows. In consequence, ETL engagements have become very time consuming, labor intensive, and costly. 
At the same time, additional requirements besides functionality and performance need to be considered in the design of ETL processes. In particular, the design quality needs to be determined by an intricate combination of different metrics like reliability, maintenance, scalability, and others. Unfortunately, there are no methodologies, modeling languages or tools to support ETL design in a systematic, formal way for achieving these quality requirements. The current practice handles them with ad-hoc approaches only based on designers' experience. This results in either poor designs that do not meet the quality objectives or costly engagements that require several iterations to meet them. A fundamental shift that uses automation in the ETL design task is the only way to reduce the cost of these engagements while obtaining optimal designs. Towards this goal, we present a novel approach to ETL design that incorporates a suite of quality metrics, termed QoX, at all stages of the design process. We discuss the challenges and tradeoffs among QoX metrics and illustrate their impact on alternative designs. <s> BIB006 </s> The many faces of data-centric workflow optimization: a survey <s> Introduction <s> BI technologies are essential to running today's businesses and this technology is going through sea changes. <s> BIB007 </s> The many faces of data-centric workflow optimization: a survey <s> Introduction <s> Next generation business intelligence involves data flows that span different execution engines, contain complex functionality like data/text analytics, machine learning operations, and need to be optimized against various objectives. Creating correct analytic data flows in such an environment is a challenging task and is both labor-intensive and time-consuming. Optimizing these flows is currently an ad-hoc process where the result is largely dependent on the abilities and experience of the flow designer. Our previous work addressed analytic flow optimization for multiple objectives over a single execution engine. This paper focuses on optimizing flows for a single objective, namely performance, over multiple execution engines. We consider flows that span a DBMS, a Map-Reduce engine, and an orchestration engine (e.g., an ETL tool or scripting language). This configuration is emerging as a common paradigm used to combine analysis of unstructured data with analysis of structured data (e.g., NoSQL plus SQL). We present flow transformations that model data shipping, function shipping, and operation decomposition and we describe how flow graphs are generated for multiple engines. Performance results for various configurations demonstrate the benefit of optimization. <s> BIB008 </s> The many faces of data-centric workflow optimization: a survey <s> Introduction <s> To remain competitive, enterprises are evolving their business intelligence systems to provide dynamic, near realtime views of business activities. To enable this, they deploy complex workflows of analytic data flows that access multiple storage repositories and execution engines and that span the enterprise and even outside the enterprise. We call these multi-engine flows hybrid flows. Designing and optimizing hybrid flows is a challenging task. Managing a workload of hybrid flows is even more challenging since their execution engines are likely under different administrative domains and there is no single point of control. To address these needs, we present a Hybrid Flow Management System (HFMS). 
It is an independent software layer over a number of independent execution engines and storage repositories. It simplifies the design of analytic data flows and includes optimization and executor modules to produce optimized executable flows that can run across multiple execution engines. HFMS dispatches flows for execution and monitors their progress. To meet service level objectives for a workload, it may dynamically change a flow's execution plan to avoid processing bottlenecks in the computing infrastructure. We present the architecture of HFMS and describe its components. To demonstrate its potential benefit, we describe performance results for running sample batch workloads with and without HFMS. The ability to monitor multiple execution engines and to dynamically adjust plans enables HFMS to provide better service guarantees and better system utilization. <s> BIB009 </s> The many faces of data-centric workflow optimization: a survey <s> Introduction <s> of the ETL products in the market today provide tools for design of ETL workflows, with very little or no support for opti- mization of such workflows. Optimization of ETL workflows pose several new challenges compared to traditional query optimization in database systems. There have been many attempts both in the industry and the research community to support cost-based opti- mization techniques for ETL Workflows, but with limited success. Non-availability of source statistics in ETL is one of the major chal- lenges that precludes the use of a cost based optimization strategy. However, the basic philosophy of ETL workflows of design once and execute repeatedly allows interesting possibilities for determin- ing the statistics of the input. In this paper, we propose a frame- work to determine various sets of statistics to collect for a given workflow, using which the optimizer can estimate the cost of any alternative plan for the workflow. The initial few runs of the work- flow are used to collect the statistics and future runs are optimized based on the learned statistics. Since there can be several alterna- tive sets of statistics that are sufficient, we propose an optimization framework to choose a set of statistics that can be measured with the least overhead. We experimentally demonstrate the effective- ness and efficiency of the proposed algorithms. <s> BIB010
|
Workflows aim to model and execute real-world intertwined or interconnected processes, referred to as tasks or activities. While this is still the case, workflows play an increasingly significant role in processing very large volumes of data, possibly under highly demanding requirements. Scientific workflow systems tailored to data-intensive e-science applications have been around since the last decade, e.g., BIB001 BIB002 . This trend is nowadays complemented by the evolution of workflow technology to serve (big) data analysis, in settings such as business intelligence, e.g., BIB007 , and business process management, e.g., BIB003 . Additionally, massively parallel engines, such as Spark, are becoming increasingly popular for designing and executing workflows. Broadly, there are two big workflow categories, namely control-centric and data-centric. A workflow is commonly represented as a directed graph, where each task corresponds to a node in the graph and the edges represent the control flow or the data flow, respectively. Control-centric workflows are most often encountered in business process management, and they emphasize the passing of control across tasks and gateway semantics, such as branching execution, iterations, and so on; transmitting and sharing data across tasks is a second-class citizen. In control-centric workflows, only a subset of the graph nodes correspond to activities, while the remainder denote events and gateways, as in the BPMN standard. In data-centric workflows (or workflows for data analytics, or simply data flows 1 ), the graph is typically acyclic (a directed acyclic graph, DAG). The nodes of the DAG represent solely actions related to the manipulation, transformation, access, and storage of data, e.g., as in BIB004 BIB008 BIB005 and in popular data flow systems, such as Pentaho Data Integration (Kettle) and Spark. The tokens passing through the tasks correspond to processed data, and control is modeled implicitly by assuming that each task may start executing when all or part of its input becomes available. This survey considers data-centric flows exclusively. Executing data-centric flows efficiently is far from a trivial issue. Even in the most widely used data flow tools, flows are commonly designed manually. Suboptimal designs stem from the complexity of such flows and from the fact that, in some applications, flow designers are not systems experts and consequently tend to design with only semantic correctness in mind. In addition, when flows execute in a dynamic environment, a design optimized in the past may behave suboptimally in the future due to changing conditions BIB010 BIB009 . The issues above call for a paradigm shift in the way data flow management systems are engineered; more specifically, there is a growing demand for automated optimization of flows. An analogy can be drawn with database query processing, where declarative statements, e.g., in SQL, are automatically parsed, optimized, and then passed on to the execution engine. But data flow optimization is more complex, because tasks need not belong to a predefined set of algebraic operators with clear semantics and there may be arbitrary dependencies constraining their execution order. In addition, data flows may have optimization criteria other than performance, such as reliability and freshness, depending on business objectives and execution environments BIB006 .
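To make the DAG view of a data flow and its dependency constraints concrete, the toy example below encodes tasks with their prerequisite tasks, derives one valid execution order, and checks that a candidate ordering respects every precedence constraint. The task names are invented, and any ordering produced this way is merely feasible, not optimized.

from graphlib import TopologicalSorter   # standard library, Python 3.9+

# Toy data flow: each task is mapped to the set of tasks that must precede it.
flow = {
    "extract":   set(),
    "clean":     {"extract"},
    "sentiment": {"clean"},
    "join":      {"clean", "sentiment"},
    "load":      {"join"},
}

def respects_constraints(flow, candidate):
    """True if the candidate task ordering honours every precedence constraint."""
    pos = {t: i for i, t in enumerate(candidate)}
    return all(pos[p] < pos[t] for t, preds in flow.items() for p in preds)

order = list(TopologicalSorter(flow).static_order())
print(order)                                   # one feasible execution order
print(respects_constraints(flow, order))       # True

Flow optimization techniques, by contrast, search among the many feasible orderings (and other plan choices) for one that is best under a given objective.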
This survey covers optimization techniques 2 applicable to data flows, including database query optimization techniques that consider arbitrary plan operators, e.g., user-defined functions (UDFs), and dependencies between them. In contrast, we do not aim to cover techniques that optimize only specific types of tasks, such as filters, joins, and so on; the techniques covered in this survey do not necessarily rely on any type of algebraic task modeling. The contribution of this survey is a taxonomy of data flow optimization techniques that refer to the flow plan generation layer. In addition, we provide a concise overview of the existing approaches with a view to (i) explaining the technical details and the distinct features of each approach in a way that facilitates result synthesis, and (ii) highlighting strengths, weaknesses, and areas deserving more attention from the community. The main findings are that, on the one hand, big advances have been made and most aspects of data flow optimization have started to be investigated; on the other hand, data flow optimization is still a technology in evolution. Contrary to query optimization, research so far seems to be less systematic and mainly consists of ad hoc techniques, the combination of which is unclear. The structure of the rest of this article is as follows. The next section describes the survey methodology and provides details about the exact context considered. Section 3 presents a taxonomy of existing optimizations that take place before flow enactment. Section 4 describes the state-of-the-art techniques grouped by the main optimization mechanism they employ. Section 5 presents the ways in which optimization proposals for data-centric workflows have been evaluated. Section 6 highlights our findings. Section 7 touches upon tangential flow optimization-related techniques that have recently been developed, along with scheduling optimizations taking place during flow execution. Section 8 reviews surveys that have been conducted in related areas, and finally, Sect. 9 concludes the paper.
|
The many faces of data-centric workflow optimization: a survey <s> Techniques covered <s> Web services are becoming a standard method of sharing data and functionality among loosely-coupled systems. We propose a general-purpose Web Service Management System (WSMS) that enables querying multiple web services in a transparent and integrated fashion. This paper tackles a first basic WSMS problem: query optimization for Select-Project-Join queries spanning multiple web services. Our main result is an algorithm for arranging a query's web service calls into a pipelined execution plan that optimally exploits parallelism among web services to minimize the query's total running time. Surprisingly, the optimal plan can be found in polynomial time even in the presence of arbitrary precedence constraints among web services, in contrast to traditional query optimization where the analogous problem is NP-hard. We also give an algorithm for determining the optimal granularity of data "chunks" to be used for each web service call. Experiments with an initial prototype indicate that our algorithms can lead to significant performance improvement over more straightforward techniques. <s> BIB001 </s> The many faces of data-centric workflow optimization: a survey <s> Techniques covered <s> Background: Systematic literature studies have become common in software engineering, and hence it is important to understand how to conduct them efficiently and reliably. Objective: This paper presents guidelines for conducting literature reviews using a snowballing approach, and they are illustrated and evaluated by replicating a published systematic literature review. Method: The guidelines are based on the experience from conducting several systematic literature reviews and experimenting with different approaches. Results: The guidelines for using snowballing as a way to search for relevant literature was successfully applied to a systematic literature review. Conclusions: It is concluded that using snowballing, as a first search strategy, may very well be a good alternative to the use of database searches. <s> BIB002
|
The main part of this survey covers all the data flow optimization techniques that meet the following criteria, to the best of the authors' knowledge: -They refer to the WEP generation layer in the architecture described above; that is, the focus is on optimizations performed before execution rather than during execution. -They are applicable to any type of task rather than being tailored to specific types, such as filters and joins, or to an algebraic modeling of tasks. -The partial ordering of the flow tasks is subject to dependency (or, equivalently, precedence) constraints between tasks, as is typically the case, for example, in scientific and data analysis flows; these constraints denote whether a specific task must precede another task in the flow plan. We surveyed all types of venues where relevant techniques are published. Most of the covered works come from the broader data management and e-science community, but there are proposals from other areas, such as algorithms. We also include techniques that were proposed without generic data flows in mind, but meet our criteria and thus are applicable to generic data flows. An example is the proposal for queries over Web Services (WSs) in BIB001 . The main keywords we searched for are: "workflow optimization," "flow optimization," "query optimization AND constraints," and "query optimization AND UDF," while we applied snowballing in both directions BIB002 using both the reference list of and the citations to each paper.
|
The many faces of data-centric workflow optimization: a survey <s> Technique dimensions considered <s> In data integration systems, queries posed to a mediator need to be translated into a sequence of queries to the underlying data sources. In a heterogeneous environment, with sources of diverse and limited query capabilities, not all the translations are feasible. In this paper, we study the problem of finding feasible and efficient query plans for mediator systems. We consider conjunctive queries on mediators and model the source capabilities through attribute-binding adornments. We use a simple cost model that focuses on the major costs in mediation systems, those involved with sending queries to sources and getting answers back. Under this metric, we develop two algorithms for source query sequencing - one based on a simple greedy strategy and another based on a partitioning scheme. The first algorithm produces optimal plans in some scenarios, and we show a linear bound on its worst case performance when it misses optimal plans. The second algorithm generates optimal plans in more scenarios, while having no bound on the margin by which it misses the optimal plans. We also report on the results of the experiments that study the performance of the two algorithms. <s> BIB001 </s> The many faces of data-centric workflow optimization: a survey <s> Technique dimensions considered <s> We consider the problem of query optimization in the presence of limitations on access patterns to the data (i.e., when one must provide values for one of the attributes of a relation in order to obtain tuples). We show that in the presence of limited access patterns we must search a space of annotated query plans , where the annotations describe the inputs that must be given to the plan. We describe a theoretical and experimental analysis of the resulting search space and a novel query optimization algorithm that is designed to perform well under the different conditions that may arise. The algorithm searches the set of annotated query plans, pruning invalid and non-viable plans as early as possible in the search space, and it also uses a best-first search strategy in order to produce a first complete plan early in the search. We describe experiments to illustrate the performance of our algorithm. <s> BIB002 </s> The many faces of data-centric workflow optimization: a survey <s> Technique dimensions considered <s> The advent of Cloud computing as a new model of service provisioning in distributed systems encourages researchers to investigate its benefits and drawbacks on executing scientific applications such as workflows. One of the most challenging problems in Clouds is workflow scheduling, i.e., the problem of satisfying the QoS requirements of the user as well as minimizing the cost of workflow execution. We have previously designed and analyzed a two-phase scheduling algorithm for utility Grids, called Partial Critical Paths (PCP), which aims to minimize the cost of workflow execution while meeting a user-defined deadline. However, we believe Clouds are different from utility Grids in three ways: on-demand resource provisioning, homogeneous networks, and the pay-as-you-go pricing model. In this paper, we adapt the PCP algorithm for the Cloud environment and propose two workflow scheduling algorithms: a one-phase algorithm which is called IaaS Cloud Partial Critical Paths (IC-PCP), and a two-phase algorithm which is called IaaS Cloud Partial Critical Paths with Deadline Distribution (IC-PCPD2). 
Both algorithms have a polynomial time complexity which make them suitable options for scheduling large workflows. The simulation results show that both algorithms have a promising performance, with IC-PCP performing better than IC-PCPD2 in most cases. Highlights? We propose two workflow scheduling algorithms for IaaS Clouds. ? The algorithms aim to minimize the workflow execution cost while meeting a deadline. ? The pricing model of the Clouds is considered which is based on a time interval. ? The algorithms are compared with a list heuristic through simulation. ? The experiments show the promising performance of both algorithms. <s> BIB003 </s> The many faces of data-centric workflow optimization: a survey <s> Technique dimensions considered <s> Analyzing big data with the help of automated data flows attracts a lot of attention because of the growing need for end-to-end processing of this data. Modern data flows may consist of a high number of tasks and it is difficult for flow designers to define an efficient execution order of the tasks manually given that, typically, there is significant freedom in the valid positioning for some of the tasks. Several automated execution plan enumeration techniques have been proposed. These solutions can be broadly classified into three categories, each having significant limitations: (i) the optimizations are based on rewrite rules similar to those used in databases, such as filter and projection push-down, but these rules cover only the flow tasks that correspond to extended relational algebra operators. To cover arbitrary tasks, the solutions (ii) either rely on simple heuristics, or (iii) they exhaustively check all orderings, and thus cannot scale. We target the second category and we propose an efficient and polynomial cost-based task ordering solution for flows with arbitrary tasks seen as black boxes. We evaluated our proposals using both real runs and simulations, and the results show that we can achieve speed-ups of orders of magnitude, especially for flows with a high number of tasks even for relatively low flexibility in task positioning. <s> BIB004
|
We assume that the user initially defines the flow either in a high-level non-executable form or in an executable form that is not optimized. The role of the optimizations considered is to transform the initial flow into an optimized, ready-to-be-executed one BIB003 . Analogously to query optimization, it is convenient to distinguish between high-level and low-level flow details. The former capture essential flow parts, such as the final task sequencing, at a higher level than that of complete execution details, whereas the latter include all the information needed for execution. In order to drive the optimization, a set of metadata is assumed to be in place. This metadata can be statistics, e.g., cost per task invocation and size of task output per input data item, information about the dependency constraints between tasks, that is, a partial order of tasks, which must always be preserved to ensure semantic correctness, or other types of information as explained in this survey. To characterize optimizations that take place before the flow execution (or enactment), we pose a set of complementary questions when examining each existing proposal, aiming to shed light on and cover all the main aspects of interest: 1. What is the effect on the execution plan?, which aims to identify the type of incurred enhancements to the initial flow plan. 2. Why?, which asks for the objectives of the optimization. 3. How?, which aims to clarify the type of the solution. 4. When?, to distinguish between cases where the WEP generation phase takes place strictly before the WEP execution one, and where these phases are interleaved. 5. Where is the flow executed?, which refers to the execution environment. 6. What are the requirements?, which refers to the input flow metadata needed in order to apply the optimization. 7. In which application domain?, which refers to the domain that the technique initially targets. We regard each of the above questions as a different dimension. As such, we derive seven dimensions: (i) the Mechanisms, referring to the process through which an initial flow is transformed into an optimized one; (ii) the Objectives, which capture the one or more criteria of the optimization process; (iii) the Solution Types, defining whether an optimization solution is accurate or approximate with respect to the underlying formulation of the optimization problem; (iv) the Adaptivity during the flow execution; (v) the Execution Environment of the flow and its distribution; (vi) the Metadata necessary to apply the optimization technique; and finally, (vii) the Application Domain, for which each optimization technique is initially proposed. Note that we do not separately discuss techniques from data integration that optimize the plan after it has been devised, such as BIB001 or BIB002 , since they are subsumed by Kougka and Gounaris BIB004 .
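To make this taxonomy concrete, the short sketch below encodes the seven dimensions as a simple Python record and classifies one hypothetical technique along them; it is purely illustrative, and all names and values are ours rather than part of any cited system.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class TechniqueProfile:
    """Illustrative record describing one optimization technique along the
    seven dimensions used in this survey (field names are ours)."""
    mechanisms: List[str]      # 1. effect on the execution plan (e.g., "task ordering")
    objectives: List[str]      # 2. why: the optimization criteria
    solution_type: str         # 3. how: "accurate" or "approximate"
    adaptivity: str            # 4. when: "pre-execution" or "interleaved with execution"
    environment: str           # 5. where: execution environment and its distribution
    metadata: List[str]        # 6. requirements: input metadata needed
    application_domain: str    # 7. domain the technique initially targets

# A purely hypothetical classification of a rank-based re-ordering heuristic:
example = TechniqueProfile(
    mechanisms=["task ordering"],
    objectives=["minimize sum of task costs"],
    solution_type="approximate",
    adaptivity="pre-execution",
    environment="centralized, single engine",
    metadata=["dependency constraints", "cost per invocation", "selectivity"],
    application_domain="generic data flows",
)
```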
|
The many faces of data-centric workflow optimization: a survey <s> Flow optimization mechanisms <s> Many systems for big data analytics employ a data flow abstraction to define parallel data processing tasks. In this setting, custom operations expressed as user-defined functions are very common. We address the problem of performing data flow optimization at this level of abstraction, where the semantics of operators are not known. Traditionally, query optimization is applied to queries with known algebraic semantics. In this work, we find that a handful of properties, rather than a full algebraic specification, suffice to establish reordering conditions for data processing operators. We show that these properties can be accurately estimated for black box operators by statically analyzing the general-purpose code of their user-defined functions. ::: ::: We design and implement an optimizer for parallel data flows that does not assume knowledge of semantics or algebraic properties of operators. Our evaluation confirms that the optimizer can apply common rewritings such as selection reordering, bushy join-order enumeration, and limited forms of aggregation push-down, hence yielding similar rewriting power as modern relational DBMS optimizers. Moreover, it can optimize the operator order of nonrelational data flows, a unique feature among today's systems. <s> BIB001 </s> The many faces of data-centric workflow optimization: a survey <s> Flow optimization mechanisms <s> Next generation business intelligence involves data flows that span different execution engines, contain complex functionality like data/text analytics, machine learning operations, and need to be optimized against various objectives. Creating correct analytic data flows in such an environment is a challenging task and is both labor-intensive and time-consuming. Optimizing these flows is currently an ad-hoc process where the result is largely dependent on the abilities and experience of the flow designer. Our previous work addressed analytic flow optimization for multiple objectives over a single execution engine. This paper focuses on optimizing flows for a single objective, namely performance, over multiple execution engines. We consider flows that span a DBMS, a Map-Reduce engine, and an orchestration engine (e.g., an ETL tool or scripting language). This configuration is emerging as a common paradigm used to combine analysis of unstructured data with analysis of structured data (e.g., NoSQL plus SQL). We present flow transformations that model data shipping, function shipping, and operation decomposition and we describe how flow graphs are generated for multiple engines. Performance results for various configurations demonstrate the benefit of optimization. <s> BIB002 </s> The many faces of data-centric workflow optimization: a survey <s> Flow optimization mechanisms <s> To remain competitive, enterprises are evolving in order to quickly respond to changing market conditions and customer needs. In this new environment, a single centralized data warehouse is no longer sufficient. Next generation business intelligence involves data flows that span multiple, diverse processing engines, that contain complex functionality like data/text analytics, machine learning operations, and that need to be optimized against various objectives. A common example is the use of Hadoop to analyze unstructured text and merging these results with relational database queries over the data warehouse. 
We refer to these multi-engine analytic data flows as hybrid flows. Currently, it is a cumbersome task to create and run hybrid flows. Custom scripts must be written to dispatch tasks to the individual processing engines and to exchange intermediate results. So, designing correct hybrid flows is a challenging task. Optimizing such flows is even harder. Additionally, when the underlying computing infrastructure changes, existing flows likely need modification and reoptimization. The current, ad-hoc design approach cannot scale as hybrid flows become more commonplace. To address this challenge, we are building a platform to design and manage hybrid flows. It supports the logical design of hybrid flows in which implementation details are not exposed. It generates code for the underlying processing engines and orchestrates their execution. But the key enabling technology in the platform is an optimizer that converts the logical flow to an executable form that is optimized for the underlying infrastructure according to user-specified objectives. In this paper, we describe challenges in designing the optimizer and our solutions. We illustrate the optimizer through a real-world use case. We present a logical design and optimized designs for the use case. We show how the performance of the use case varies depending on the system configuration and how the optimizer is able to generate different optimized flows for different configurations. <s> BIB003 </s> The many faces of data-centric workflow optimization: a survey <s> Flow optimization mechanisms <s> To remain competitive, enterprises are evolving their business intelligence systems to provide dynamic, near realtime views of business activities. To enable this, they deploy complex workflows of analytic data flows that access multiple storage repositories and execution engines and that span the enterprise and even outside the enterprise. We call these multi-engine flows hybrid flows. Designing and optimizing hybrid flows is a challenging task. Managing a workload of hybrid flows is even more challenging since their execution engines are likely under different administrative domains and there is no single point of control. To address these needs, we present a Hybrid Flow Management System (HFMS). It is an independent software layer over a number of independent execution engines and storage repositories. It simplifies the design of analytic data flows and includes optimization and executor modules to produce optimized executable flows that can run across multiple execution engines. HFMS dispatches flows for execution and monitors their progress. To meet service level objectives for a workload, it may dynamically change a flow's execution plan to avoid processing bottlenecks in the computing infrastructure. We present the architecture of HFMS and describe its components. To demonstrate its potential benefit, we describe performance results for running sample batch workloads with and without HFMS. The ability to monitor multiple execution engines and to dynamically adjust plans enables HFMS to provide better service guarantees and better system utilization. <s> BIB004 </s> The many faces of data-centric workflow optimization: a survey <s> Flow optimization mechanisms <s> Data-intensive flows are increasingly encountered in various settings, including business intelligence and scientific scenarios. At the same time, flow technology is evolving. 
Instead of resorting to monolithic solutions, current approaches tend to employ multiple execution engines, such as Hadoop clusters, traditional DBMSs, and stand-alone tools. We target the problem of allocating flow activities to specific heterogeneous and interdependent execution engines while minimizing the flow execution cost. To date, the state-of-the-art is limited to simple heuristics. Although the problem is intractable, we propose practical anytime solutions that are capable of outperforming those simple heuristics and yielding allocation plans in seconds even when optimizing large flows on ordinary machines. Moreover, we prove the NP-hardness of the problem in the generic case and we propose an exact polynomial solution for a specific form of flows, namely, linear flows. We thoroughly evaluate our solutions in both real-world and flows synthetic, and the results show the superiority of our solutions. Especially in real-world scenarios, we can decrease execution time up to more than 3 times. A set of anytime algorithms for yielding mappings of flow nodes to execution engines.An optimal solution with polynomial complexity for linear flows.Evaluation using both real and synthetic flows in a wide range of settings.Proof of the NP-hardness of the problem. <s> BIB005
|
A data flow is typically represented as a directed acyclic graph (DAG) defined as G = (V, E), where V denotes the nodes of the graph, corresponding to a set of tasks, and E represents a set of pairs of nodes, where each pair denotes the data flow between two tasks. If a task outputs data that cannot be directly consumed by a subsequent task, then data transformation needs to take place through a third task; no data transformation takes place through an edge. Each graph element, either a vertex or an edge, is associated with properties, such as how exactly it is implemented, for which execution engine, and under which configuration. Data flow optimization is a multi-dimensional problem, and its multiple dimensions are broadly divided according to the two flow specification levels. Consequently, we identify the optimization of the high-level (or logical) flow plan and the low-level (or physical) flow plan, and each type of optimization mechanism can affect the sets V and E of the workflow graph and their properties. The logical flow optimization types are largely based on workflow structure reformations, while preserving any dependency constraints between tasks; structure reformations are reflected as modifications in V and E. The output of the optimized flow needs to be semantically equivalent to the output of the initial flow, which practically means that the two flows receive the same input data and produce the same output data, regardless of the way this result is produced. Given that data manipulation takes place only in the context of tasks, logical flow optimization is task-oriented. The logical optimization types are characterized as follows (summarized also in Fig. 2 ): -Task Ordering, where we change the sequence of the tasks by applying a set of partial (re)orderings. -Task Introduction, where new tasks are introduced in the data flow plan in order, for example, to minimize the data to be processed and thus the overall execution cost. -Task Removal, which can be deemed the opposite of task introduction. A task can be safely removed from the flow if it does not actually contribute to its result dataset. -Task Merge, the optimization action of grouping flow tasks into a single task without changing the semantics, for example, to minimize the overall flow execution cost or to mitigate the overhead of enacting multiple tasks. -Task Decomposition, where a set of grouped tasks is split into multiple flow tasks with less complex functionality in order to generate more efficient sub-tasks. This is the opposite of the merge action and may provide more optimization opportunities, as discussed in BIB001 BIB002 , because of the potential increase in the number of valid (re)orderings. At the low level, a wide range of implementation aspects need to be specified so that the flow can be later executed (see also Fig. 3 ): -Task Implementation Selection, which is one of the most significant lower-level problems in flow optimization. This optimization type includes the selection of the exact, logically equivalent, task implementation for each task that will satisfy the defined optimization objectives BIB002 . A well-known counterpart in database optimization is choosing the exact join algorithm (e.g., hash-join, sort-merge-join, nested loops). -Execution Engine Selection, where we have to decide the type of processing engine to execute each task. The need for such optimization stems from the availability of multiple options in modern data-intensive flows BIB005 BIB003 .
Common choices nowadays include DBMSs and massively parallel engines, such as Hadoop clusters, in addition to the execution engines bundled with data flow management systems. -Execution Engine Configuration, where we decide on configuration details of the execution environment, such as the bandwidth, CPU, and memory to be reserved during execution, or the number of cores allocated BIB004 . The fact that the optimization types are task-oriented must not lead to the misinterpretation that they are unsuitable for data flows. Again, we draw an analogy with query optimization, where the main techniques, e.g., dynamic programming for join ordering, filter push-down, and so on, are operator-oriented; nevertheless, such an approach has proven sufficient for making query plans capable of processing terabytes of data.
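As a minimal illustration of the graph model and of what a validity check for a task-ordering transformation involves, the sketch below (our own simplified code, not taken from any of the surveyed systems) represents a flow as a DAG whose vertices carry both logical and physical properties and verifies that a candidate ordering preserves the dependency constraints.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional, Tuple

@dataclass
class Task:
    name: str
    cost: float = 1.0                      # statistical metadata: cost per input record
    selectivity: float = 1.0               # statistical metadata: output per input record
    implementation: Optional[str] = None   # physical property: selected implementation
    engine: Optional[str] = None           # physical property: selected execution engine

@dataclass
class DataFlow:
    tasks: Dict[str, Task] = field(default_factory=dict)
    edges: List[Tuple[str, str]] = field(default_factory=list)         # data flows between tasks
    dependencies: List[Tuple[str, str]] = field(default_factory=list)  # (a, b): a must precede b

    def add_task(self, task: Task) -> None:
        self.tasks[task.name] = task

    def respects_dependencies(self, ordering: List[str]) -> bool:
        """True iff the given task ordering preserves all precedence constraints."""
        position = {name: i for i, name in enumerate(ordering)}
        return all(position[a] < position[b] for a, b in self.dependencies)

# Example: a selective filter may be moved before an expensive task if no dependency forbids it.
flow = DataFlow()
for t in (Task("extract"), Task("sentiment", cost=10.0), Task("filter", selectivity=0.1)):
    flow.add_task(t)
flow.dependencies = [("extract", "sentiment"), ("extract", "filter")]
print(flow.respects_dependencies(["extract", "filter", "sentiment"]))  # True
```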
|
The many faces of data-centric workflow optimization: a survey <s> Metadata <s> Extraction-transformation-loading (ETL) tools are pieces of software responsible for the extraction of data from several sources, their cleansing, customization, and insertion into a data warehouse. In this paper, we derive into the logical optimization of ETL processes, modeling it as a state-space search problem. We consider each ETL workflow as a state and fabricate the state space through a set of correct state transitions. Moreover, we provide an exhaustive and two heuristic algorithms toward the minimization of the execution cost of an ETL workflow. The heuristic algorithm with greedy characteristics significantly outperforms the other two algorithms for a large set of experimental cases. <s> BIB001 </s> The many faces of data-centric workflow optimization: a survey <s> Metadata <s> Data-intensive analytic flows, such as populating a datawarehouse or analyzing a click stream at runtime, are very common in modern business intelligence scenarios. Current state-of-the-art data flow management techniques rely on the users to specify the flow structure without performing automated optimization of that structure. In this work, we introduce a declarative way to specify flows, which is based on annotated descriptions of the output schema of each flow activity. We show that our approach is adequate to capture both a wide-range of arbitrary data transformations, which cannot be supported by traditional relational operators, and the precedence constraints between the various stages in the flow. Moreover, we show that we can express the flows as annotated queries and thus apply precedence-aware query optimization algorithms. We propose an approach to optimizing linear conceptual data flows by producing a parallel execution plan and our evaluation results show that we can speedup the flow execution by up to an order of magnitude compared to existing techniques. <s> BIB002
|
The set of metadata includes the information needed to apply the optimizations and, as such, can be regarded as pre-conditions that must hold. The most basic input requirement of the optimization solutions is an initial set V of tasks. However, additional metadata regarding the flow graph are typically required as well. These metadata are both qualitative and quantitative (statistical), as discussed below. Qualitative metadata include: -Dependencies, which explicitly define which vertices in the graph should always precede other vertices. Typically, the definition of dependencies comes in the form of an auxiliary graph. -Task schemata, which refer to the schema of the data input and/or output of each task. Note that dependencies may be derived from task schemata through simple processing BIB001 , especially if they contain information about which schema elements are bound or free BIB002 . However, task schemata may serve purposes beyond deriving dependencies, e.g., checking whether a task contributes to the final desired output of the flow. -Task profile, which refers to information about the execution logic of the task, that is, the manner in which it manipulates its input data, e.g., obtained through analysis of the commands implementing each task. If there are no such metadata, the task is considered a black box. Otherwise, information, e.g., about which attributes are read and which are written, can be extracted.
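As an illustration of how dependencies can be derived from task schemata through simple processing, the following sketch (ours, with hypothetical attribute and task names) assumes that a task depends on the task that generates an attribute it consumes.

```python
from typing import Dict, List, Set, Tuple

# Task schemata: attributes each task consumes and generates (hypothetical example).
consumes: Dict[str, Set[str]] = {
    "extract": set(), "filter": {"lang"}, "sentiment": {"text"}, "report": {"score"},
}
generates: Dict[str, Set[str]] = {
    "extract": {"text", "lang"}, "filter": set(), "sentiment": {"score"}, "report": set(),
}

def derive_dependencies(tasks: List[str]) -> List[Tuple[str, str]]:
    """A task b depends on a task a if a generates an attribute that b consumes."""
    deps = []
    for b in tasks:
        for attr in consumes[b]:
            for a in tasks:
                if a != b and attr in generates[a]:
                    deps.append((a, b))
                    break   # simplification: each attribute is produced by a single task
    return deps

print(derive_dependencies(["extract", "filter", "sentiment", "report"]))
# [('extract', 'filter'), ('extract', 'sentiment'), ('sentiment', 'report')]
```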
|
The many faces of data-centric workflow optimization: a survey <s> Task ordering <s> We consider the problem of optimally arranging a collection of query operators into a pipelined execution plan in the presence of precedence constraints among the operators. The goal of our optimization is to maximize the rate at which input data items can be processed through the pipelined plan. We consider two different scenarios: one in which each operator is fixed to run on a separate machine, and the other in which all operators run on the same machine. Due to parallelism in the former scenario, the cost of a plan is given by the maximum (or {\em bottleneck}) cost incurred by any operator in the plan. In the latter scenario, the cost of a plan is given by the {\em sum} of the costs incurred by the operators in the plan. These two different cost metrics lead to fundamentally different optimization problems: Under the bottleneck cost metric, we give a general, polynomial-time greedy algorithm that always finds the optimal plan. However, under the sum cost metric, the problem is much harder: We show that it is unlikely that any polynomial-time algorithm can approximate the optimal plan to within a factor smaller than $O(n^{\theta})$, where $n$ is the number of operators, and $\theta$ is some positive constant. Finally, under the sum cost metric, for the special case when the selectivity of each operator lies in $[\epsilon,1-\epsilon]$, we give an algorithm that produces a $2$-approximation to the optimal plan but has running time exponential in $1/\epsilon$. <s> BIB001 </s> The many faces of data-centric workflow optimization: a survey <s> Task ordering <s> Web services are becoming a standard method of sharing data and functionality among loosely-coupled systems. We propose a general-purpose Web Service Management System (WSMS) that enables querying multiple web services in a transparent and integrated fashion. This paper tackles a first basic WSMS problem: query optimization for Select-Project-Join queries spanning multiple web services. Our main result is an algorithm for arranging a query's web service calls into a pipelined execution plan that optimally exploits parallelism among web services to minimize the query's total running time. Surprisingly, the optimal plan can be found in polynomial time even in the presence of arbitrary precedence constraints among web services, in contrast to traditional query optimization where the analogous problem is NP-hard. We also give an algorithm for determining the optimal granularity of data "chunks" to be used for each web service call. Experiments with an initial prototype indicate that our algorithms can lead to significant performance improvement over more straightforward techniques. <s> BIB002 </s> The many faces of data-centric workflow optimization: a survey <s> Task ordering <s> This paper deals with pipelined queries over services. The execution plan of such queries defines an order in which the services are called. We present the theoretical underpinnings of a newly proposed algorithm that produces the optimal linear ordering corresponding to a query being executed in a decentralized manner, i.e., when the services communicate directly with each other. The optimality is defined in terms of query response time, which is determined by the bottleneck service in the plan. The properties discussed in this work allow a branch-and-bound approach to be very efficient. 
<s> BIB003 </s> The many faces of data-centric workflow optimization: a survey <s> Task ordering <s> In the parallel pipelined filter ordering problem, we are given a set of n filters that run in parallel. The filters need to be applied to a stream of elements, to determine which elements pass all filters. Each filter has a rate limit ri on the number of elements it can process per unit time, and a selectivity pi, which is the probability that a random element will pass the filter. The goal is to maximize throughput. This problem appears naturally in a variety of settings, including parallel query optimization in databases and query processing over Web services. ::: We present an O(n3) algorithm for this problem, given tree-structured precedence constraints on the filters. This extends work of Condon et al. [2009] and Kodialam [2001], who presented algorithms for solving the problem without precedence constraints. Our algorithm is combinatorial and produces a sparse solution. Motivated by join operators in database queries, we also give algorithms for versions of the problem in which “filter” selectivities may be greater than or equal to 1. ::: We prove a strong connection between the more classical problem of minimizing total work in sequential filter ordering (A), and the parallel pipelined filter ordering problem (B). More precisely, we prove that A is solvable in polynomial time for a given class of precedence constraints if and only if B is as well. This equivalence allows us to show that B is NP-Hard in the presence of arbitrary precedence constraints (since A is known to be NP-Hard in that setting). <s> BIB004
|
The goal of Task Ordering is typically specified as that of optimizing an objective function, possibly under certain constraints. A common feature of all proposals is that they assign a metric m(v_i) to each vertex v_i ∈ V, i = 1, …, n. To date, task ordering techniques have been employed to optimize performance. More specifically, all aspects of performance that we introduced previously have been investigated: the minimization of the sum of execution costs of either all tasks (both with and without constraints) or the tasks that belong to the critical path, the minimization of the maximum task cost, and the maximization of the throughput. Table 3 summarizes the objective functions for these metrics that have been employed by approaches to task ordering in data flow optimization to date. Existing techniques can be modeled at an abstract level uniformly as follows. The metric m refers either to costs (denoted as c(v_i)) or to throughput values (denoted as f(v_i)). Costs are expressed in either time or abstract units, whereas throughput is expressed as the number of records (or tuples) processed per time unit. A more generic modeling assigns a cost to each vertex v_i along with its outgoing edges e_ij, j = 1, …, n (denoted as c(v_i, e_ij)). These objective functions correspond to problems with different algorithmic complexities. Specifically, the problems that target the minimization of the sum of the vertex costs are intractable BIB001 . Moreover, Burge et al. BIB001 discuss that "it is unlikely that any polynomial time algorithm can approximate the optimal plan to within a factor of O(n^θ)," where θ is some positive constant. The generic bottleneck minimization problem is intractable as well BIB003 . However, bottleneck minimization based only on vertex costs, as well as the other two objective functions, can be optimally solved in polynomial time BIB004 BIB002 . Independently of the exact optimization objectives, all the known optimization techniques in this category assume the existence of dependency constraints between the tasks, either explicitly or implicitly through the definition of task schemata. For the cost or throughput metadata, some techniques rely on the existence of lower-level information, such as selectivity (see Sect. 4.1.5).
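To make the two most common objective functions concrete, the sketch below (ours; it assumes a linear ordering and the usual convention that the data volume reaching a task is the product of the selectivities of its predecessors) computes the sum-cost and the bottleneck-cost of a given plan.

```python
from typing import List, Tuple

Task = Tuple[float, float]   # (cost per input record c(v_i), selectivity sel(v_i))

def sum_cost(ordering: List[Task]) -> float:
    """Sum-of-costs objective: each task is charged for the data surviving its predecessors."""
    total, volume = 0.0, 1.0
    for cost, sel in ordering:
        total += volume * cost
        volume *= sel
    return total

def bottleneck_cost(ordering: List[Task]) -> float:
    """Bottleneck objective: the most loaded task dominates in a pipelined, parallel execution."""
    worst, volume = 0.0, 1.0
    for cost, sel in ordering:
        worst = max(worst, volume * cost)
        volume *= sel
    return worst

plan = [(1, 1.0), (10, 1.0), (1, 0.1), (1, 1.0), (5, 0.15)]
print(round(sum_cost(plan), 2), bottleneck_cost(plan))   # 12.6 10.0
```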
|
The many faces of data-centric workflow optimization: a survey <s> Techniques for minimizing the sum of costs <s> State-of-the-art optimization approaches for relational database systems, e.g., those used in systems such as OBE, SQL/DS, and commercial INGRES. when used for queries in non-traditional database applications, suffer from two problems. First, the time complexity of their optimization algorithms, being combinatoric, is exponential in the number of relations to be joined in the query. Their cost is therefore prohibitive in situations such as deductive databases and logic oriented languages for knowledge bases, where hundreds of joins may be required. The second problem with the traditional approaches is that, albeit effective in their specific domain, it is not clear whether they can be generalized to different scenarios (e.g. parallel evaluation) since they lack a formal model to define the assumptions and critical factors on which their valiclity depends. This paper proposes a solution to these problems by presenting (i) a formal model and a precise statement of the optimization problem that delineates the assumptions and limitations of the previous approaches, and (ii) a quadratic-tinie algorithm th& determines the optimum join order for acyclic queries. The approach proposed is robust; in particular, it is shown that it remains heuristically effective for cyclic queries as well. <s> BIB001 </s> The many faces of data-centric workflow optimization: a survey <s> Techniques for minimizing the sum of costs <s> Object-relational database management systems allow knowledgeable users to define new data types as well as new methods (operators) for the types. This flexibility produces an attendant complexity, which must be handled in new ways for an object-relational database management system to be efficient. In this article we study techniques for optimizing queries that contain time-consuming methods. The focus of traditional query optimizers has been on the choice of join methods and orders; selections have been handled by “pushdown” rules. These rules apply selections in an arbitrary order before as many joins as possible, using th e assumption that selection takes no time. However, users of object-relational systems can embed complex methods in selections. Thus selections may take significant amounts of time, and the query optimization model must be enhanced. In this article we carefully define a query cost framework that incorporates both selectivity and cost estimates for selections. We develop an algorithm called Predicate Migration, and prove that it produces optimal plans for queries with expensive methods. We then describe our implementation of Predicate Migration in the commercial object-relational database management system Illustra, and discuss practical issues that affect our earlier assumptions. We compare Predicate Migration to a variety of simplier optimization techniques, and demonstrate that Predicate Migration is the best general solution to date. The alternative techniques we present may be useful for constrained workloads. <s> BIB002 </s> The many faces of data-centric workflow optimization: a survey <s> Techniques for minimizing the sum of costs <s> In data integration systems, queries posed to a mediator need to be translated into a sequence of queries to the underlying data sources. In a heterogeneous environment, with sources of diverse and limited query capabilities, not all the translations are feasible. 
In this paper, we study the problem of finding feasible and efficient query plans for mediator systems. We consider conjunctive queries on mediators and model the source capabilities through attribute-binding adornments. We use a simple cost model that focuses on the major costs in mediation systems, those involved with sending queries to sources and getting answers back. Under this metric, we develop two algorithms for source query sequencing - one based on a simple greedy strategy and another based on a partitioning scheme. The first algorithm produces optimal plans in some scenarios, and we show a linear bound on its worst case performance when it misses optimal plans. The second algorithm generates optimal plans in more scenarios, while having no bound on the margin by which it misses the optimal plans. We also report on the results of the experiments that study the performance of the two algorithms. <s> BIB003 </s> The many faces of data-centric workflow optimization: a survey <s> Techniques for minimizing the sum of costs <s> Extraction-transformation-loading (ETL) tools are pieces of software responsible for the extraction of data from several sources, their cleansing, customization, and insertion into a data warehouse. In this paper, we derive into the logical optimization of ETL processes, modeling it as a state-space search problem. We consider each ETL workflow as a state and fabricate the state space through a set of correct state transitions. Moreover, we provide an exhaustive and two heuristic algorithms toward the minimization of the execution cost of an ETL workflow. The heuristic algorithm with greedy characteristics significantly outperforms the other two algorithms for a large set of experimental cases. <s> BIB004 </s> The many faces of data-centric workflow optimization: a survey <s> Techniques for minimizing the sum of costs <s> Extract-Transform-Load (ETL) processes play an important role in data warehousing. Typically, design work on ETL has focused on performance as the sole metric to make sure that the ETL process finishes within an allocated time window. However, other quality metrics are also important and need to be considered during ETL design. In this paper, we address ETL design for performance plus fault-tolerance and freshness. There are many reasons why an ETL process can fail and a good design needs to guarantee that it can be recovered within the ETL time window. How to make ETL robust to failures is not trivial. There are different strategies that can be used and they each have different costs and benefits. In addition, other metrics can affect the choice of a strategy; e.g., higher freshness reduces the time window for recovery. The design space is too large for informal, ad-hoc approaches. In this paper, we describe our QoX optimizer that considers multiple design strategies and finds an ETL design that satisfies multiple objectives. In particular, we define the optimizer search space, cost functions, and search algorithms. Also, we illustrate its use through several experiments and we show that it produces designs that are very near optimal. <s> BIB005 </s> The many faces of data-centric workflow optimization: a survey <s> Techniques for minimizing the sum of costs <s> An ETL process is used to extract data from various sources, transform it and load it into a Data Warehouse. In this paper, we analyse an ETL flow and observe that only some of the dependencies in an ETL flow are essential while others are basically represents the flow of data. 
For the linear flows, we exploit the underlying dependency graph and develop a greedy heuristic technique to determine a reordering that significantly improves the quality of the flow. Rather than adopting a state-space search approach, we use the cost functions and selectivities to determine the best option at each position in a right-to-left manner. To deal with complex flows, we identify activities that can be transferred between linear segments in it and position those activities appropriately. We then use the re-orderings of the linear segments to obtain a cost-optimal semantically equivalent flow for a given complex flow. Experimental evaluation has shown that by using the proposed techniques, ETL flows can be better optimized and with much less effort compared to existing methods. <s> BIB006 </s> The many faces of data-centric workflow optimization: a survey <s> Techniques for minimizing the sum of costs <s> Many systems for big data analytics employ a data flow abstraction to define parallel data processing tasks. In this setting, custom operations expressed as user-defined functions are very common. We address the problem of performing data flow optimization at this level of abstraction, where the semantics of operators are not known. Traditionally, query optimization is applied to queries with known algebraic semantics. In this work, we find that a handful of properties, rather than a full algebraic specification, suffice to establish reordering conditions for data processing operators. We show that these properties can be accurately estimated for black box operators by statically analyzing the general-purpose code of their user-defined functions. ::: ::: We design and implement an optimizer for parallel data flows that does not assume knowledge of semantics or algebraic properties of operators. Our evaluation confirms that the optimizer can apply common rewritings such as selection reordering, bushy join-order enumeration, and limited forms of aggregation push-down, hence yielding similar rewriting power as modern relational DBMS optimizers. Moreover, it can optimize the operator order of nonrelational data flows, a unique feature among today's systems. <s> BIB007 </s> The many faces of data-centric workflow optimization: a survey <s> Techniques for minimizing the sum of costs <s> Next generation business intelligence involves data flows that span different execution engines, contain complex functionality like data/text analytics, machine learning operations, and need to be optimized against various objectives. Creating correct analytic data flows in such an environment is a challenging task and is both labor-intensive and time-consuming. Optimizing these flows is currently an ad-hoc process where the result is largely dependent on the abilities and experience of the flow designer. Our previous work addressed analytic flow optimization for multiple objectives over a single execution engine. This paper focuses on optimizing flows for a single objective, namely performance, over multiple execution engines. We consider flows that span a DBMS, a Map-Reduce engine, and an orchestration engine (e.g., an ETL tool or scripting language). This configuration is emerging as a common paradigm used to combine analysis of unstructured data with analysis of structured data (e.g., NoSQL plus SQL). We present flow transformations that model data shipping, function shipping, and operation decomposition and we describe how flow graphs are generated for multiple engines. 
Performance results for various configurations demonstrate the benefit of optimization. <s> BIB008 </s> The many faces of data-centric workflow optimization: a survey <s> Techniques for minimizing the sum of costs <s> Abstract Recent years have seen an increased interest in large-scale analytical data flows on non-relational data. These data flows are compiled into execution graphs scheduled on large compute clusters. In many novel application areas the predominant building blocks of such data flows are user-defined predicates or functions (U df s). However, the heavy use of U df s is not well taken into account for data flow optimization in current systems. S ofa is a novel and extensible optimizer for U df -heavy data flows. It builds on a concise set of properties for describing the semantics of Map/Reduce-style U df s and a small set of rewrite rules, which use these properties to find a much larger number of semantically equivalent plan rewrites than possible with traditional techniques. A salient feature of our approach is extensibility: we arrange user-defined operators and their properties into a subsumption hierarchy, which considerably eases integration and optimization of new operators. We evaluate S ofa on a selection of U df -heavy data flows from different domains and compare its performance to three other algorithms for data flow optimization. Our experiments reveal that S ofa finds efficient plans, outperforming the best plans found by its competitors by a factor of up to six. <s> BIB009 </s> The many faces of data-centric workflow optimization: a survey <s> Techniques for minimizing the sum of costs <s> Modern data analysis is increasingly employing data-intensive flows for processing very large volumes of data. As the data flows become more and more complex and operate in a highly dynamic environment, we argue that we need to resort to automated cost-based optimization solutions rather than relying on efficient designs by human experts. We further demonstrate that the current state-of-the-art in flow optimizations needs to be extended and we propose a promising direction for optimizing flows at the logical level, and more specifically, for deciding the sequence of flow tasks. <s> BIB010 </s> The many faces of data-centric workflow optimization: a survey <s> Techniques for minimizing the sum of costs <s> We present Stratosphere, an open-source software stack for parallel data analysis. Stratosphere brings together a unique set of features that allow the expressive, easy, and efficient programming of analytical applications at very large scale. Stratosphere's features include "in situ" data processing, a declarative query language, treatment of user-defined functions as first-class citizens, automatic program parallelization and optimization, support for iterative programs, and a scalable and efficient execution engine. Stratosphere covers a variety of "Big Data" use cases, such as data warehousing, information extraction and integration, data cleansing, graph analysis, and statistical analysis applications. In this paper, we present the overall system architecture design decisions, introduce Stratosphere through example queries, and then dive into the internal workings of the system's components that relate to extensibility, programming model, optimization, and query execution. We experimentally compare Stratosphere against popular open-source alternatives, and we conclude with a research outlook for the next years. 
<s> BIB011 </s> The many faces of data-centric workflow optimization: a survey <s> Techniques for minimizing the sum of costs <s> Analyzing big data with the help of automated data flows attracts a lot of attention because of the growing need for end-to-end processing of this data. Modern data flows may consist of a high number of tasks and it is difficult for flow designers to define an efficient execution order of the tasks manually given that, typically, there is significant freedom in the valid positioning for some of the tasks. Several automated execution plan enumeration techniques have been proposed. These solutions can be broadly classified into three categories, each having significant limitations: (i) the optimizations are based on rewrite rules similar to those used in databases, such as filter and projection push-down, but these rules cover only the flow tasks that correspond to extended relational algebra operators. To cover arbitrary tasks, the solutions (ii) either rely on simple heuristics, or (iii) they exhaustively check all orderings, and thus cannot scale. We target the second category and we propose an efficient and polynomial cost-based task ordering solution for flows with arbitrary tasks seen as black boxes. We evaluated our proposals using both real runs and simulations, and the results show that we can achieve speed-ups of orders of magnitude, especially for flows with a high number of tasks even for relatively low flexibility in task positioning. <s> BIB012 </s> The many faces of data-centric workflow optimization: a survey <s> Techniques for minimizing the sum of costs <s> Modern data flows generalize traditional Extract-Transform-Load and data integration workflows in order to enable end-to-end data processing and analytics. The more complex they become, the more pressing the need for automated optimization solutions. Optimizing data flows comes in several forms, among which, optimal task ordering is one of the most challenging ones. We take a practical approach; motivated by real-world examples, such as those captured by the TPC-DI benchmark, we argue that exhaustive non-scalable solutions are indeed a valid choice for chain flows. Our contribution is that we thoroughly discuss the three main directions for exhaustive enumeration of task ordering alternatives, namely backtracking, dynamic programming and topological sorting, and we provide concrete evidence up to which size and level of flexibility of chain flows they can be applied. <s> BIB013
|
Regarding the minimization of the sum of the vertex costs (first row in Table 3 ), both accurate and heuristic optimization solutions have been proposed for this intractable problem; apparently the former are not scalable. An accurate task ordering solution is the application of dynamic programming; dynamic programming is extensively used in query optimization, and such a technique has been proposed for generic data flows in BIB010 . The rationale of this algorithm is to calculate the cost of task subsets of size n based on subsets of size n − 1. For each of these subsets, we keep only the optimal solution that satisfies the dependency constraints. This solution has exponential complexity even for simple linear non-distributed flows (O(2^n)) but, for small values of n, it is applicable and fast. Another optimization technique is the exhaustive production of all the topological sortings in a way that each sorting is produced from the previous one with the minimal amount of changes; this approach has also been employed to optimize flows in BIB010 BIB012 . Despite having a worst-case complexity of O(n!), it is more scalable than the dynamic programming solution, especially for flows with many dependency constraints between tasks. Another exhaustive technique is to define the problem as a state space search one BIB004 . In such a space, each possible task ordering is modeled as a distinct state and all states are eventually visited. Similar to the optimization proposals described previously, this technique is not scalable either. Another form of task re-ordering is when a single input/output task is moved before or after a multi-input or a multi-output task BIB004 BIB005 . An example case is when two copies of a proliferate single input/output task are originally placed in the two inputs of a binary fork operation and, after re-ordering, are moved after the fork. In such a case, the two task copies moved downstream are merged into a single one. As another example, a single input/output task placed after a multi-input task can be moved upstream, e.g., when a filter task placed after a binary fork is moved upstream to both fork input branches (or to just one, based on their predicates). This is similar to traditional query optimization, where a selective operation can be moved before an expensive operation like a join. The branch-and-bound task ordering technique is similar to the dynamic programming one in that it builds a complete flow by appending tasks to smaller sub-flows. To this end, it examines only sub-flows that satisfy the dependency constraints and applies a set of recursive calls until all promising data flow plans have been generated, employing early pruning. Such an optimization technique has been applied in BIB007 BIB009 for executing parallel scientific workflows efficiently, as part of a logical optimizer that is integrated into the Stratosphere system BIB011 , the predecessor of Apache Flink. An interesting feature of this approach is that, following common practice from database systems, it performs static task analysis (i.e., task profiling) in order to yield statistics and fine-grained dependency constraints between tasks, going further than the knowledge that can be derived from simply examining the task schemata. For practical reasons, the four accurate techniques described above are not a good fit for medium and large flows, e.g., with over 15-20 tasks.
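The following brute-force sketch (ours, purely illustrative and deliberately non-scalable) captures the essence of the exact approaches above: it enumerates the orderings that satisfy the dependency constraints and keeps the one minimizing the sum of costs; the dynamic-programming and topological-sorting techniques explore the same space far more cleverly.

```python
from itertools import permutations
from typing import Dict, Tuple

tasks: Dict[str, Tuple[float, float]] = {    # name -> (cost per record, selectivity)
    "t1": (1, 1.0), "t2": (10, 1.0), "t3": (1, 0.1), "t4": (1, 1.0), "t5": (5, 0.15),
}
dependencies = [("t1", "t2"), ("t1", "t3")]  # hypothetical constraints: t1 precedes t2 and t3

def valid(order: Tuple[str, ...]) -> bool:
    pos = {t: i for i, t in enumerate(order)}
    return all(pos[a] < pos[b] for a, b in dependencies)

def sum_cost(order: Tuple[str, ...]) -> float:
    total, volume = 0.0, 1.0
    for t in order:
        cost, sel = tasks[t]
        total += volume * cost
        volume *= sel
    return total

# O(n!) enumeration of all valid orderings: feasible only for small flows, as noted above.
best = min((o for o in permutations(tasks) if valid(o)), key=sum_cost)
print(best, round(sum_cost(best), 2))
```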
In these cases, the space of possible solutions is large and needs to be pruned. Thus, heuristic algorithms have been presented to find near-optimal solutions for larger data flows. For example, Simitsis et al. BIB004 propose a task ordering technique based on state transitions, which correspond to orderings that differ only in the position of two adjacent tasks. Such transitions are equivalent to a heuristic that swaps every pair of adjacent tasks, if this change yields a lower cost, always preserving the defined dependency constraints, until no further changes can be applied. This heuristic, initially proposed for ETL flows, can be applied to parallel and distributed execution environments with streaming or batch input data. Interestingly, this technique is combined with another set of heuristics using additional optimization techniques, such as task merge. In general, this heuristic is shown to be capable of yielding significant improvements. Its complexity is O(n^2), but there can be no guarantee for how much its solutions may deviate from the optimal one. There is another family of techniques minimizing the sum of the task costs by ordering the tasks based on their rank value, typically defined as rank(v_i) = (1 − sel(v_i))/c(v_i), i.e., the selectivity benefit per unit of cost. The first examples of these techniques were initially proposed for optimizing queries containing UDFs, where dependency constraints between pairs of a join and a UDF are considered BIB002 . However, they can be applied to data flows by considering flow tasks as UDFs and performing straightforward extensions. For example, an extended version of such a technique, also discussed in BIB010 , builds a flow incrementally in n steps instead of starting from a complete flow and performing changes. In each step, the next task to be appended is the one with the maximum rank value among those for which all prerequisite tasks have already been included. This results in a greedy heuristic of O(n^2) time complexity. This heuristic has been extended by Kougka et al. BIB012 with techniques that leverage the query optimization algorithm for join ordering by Krishnamurthy et al. BIB001 with appropriate post-processing steps, in order to yield novel and more efficient task ordering algorithms for data flows. In BIB006 , a similar rationale is followed, with the difference that the execution plan is built from the sink to the source task. Both proposals build linear plans, i.e., plans in the form of a chain with a single source and a single sink. These proposals for generic or traditional ETL data flows are essentially similar to the Chain algorithm proposed by Yerneni et al. BIB003 for choosing the order of accessing remote data sources in online data integration scenarios. Interestingly, in BIB003 , it is explained that such techniques are n-competitive, i.e., they can deviate from the optimal plan by up to n times. The incurred performance improvements can be significant. Consider the example in Fig. 4 , where the cost per single input tweet of the five steps is 1, 10, 1, 1, and 5 units, respectively, and the selectivities are 1, 1, 0.1, 1, and 0.15, respectively. Assuming the steps are executed in the order listed, the average cost per initial tweet amounts to 1 + 10 + 1 + 0.1 + 0.5 = 12.6 units, whereas re-ordering the flow so that the selective steps are applied as early as the dependency constraints permit reduces this sum considerably. In summary, for ordering arbitrary flow tasks in order to minimize the sum of the task costs, any of the above solutions can be used. If the flow is small, exhaustive solutions are applicable BIB013 ; otherwise, the techniques in BIB012 are the ones that seem capable of yielding the best plans.
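A minimal sketch of the rank-based greedy heuristic just described is given below (our own illustrative code); it uses the cost and selectivity values of the Fig. 4 example and assumes, purely for illustration, that only the first step is constrained to precede the others.

```python
from typing import Dict, List, Tuple

steps: Dict[str, Tuple[float, float]] = {   # name -> (cost per input tweet, selectivity)
    "s1": (1, 1.0), "s2": (10, 1.0), "s3": (1, 0.1), "s4": (1, 1.0), "s5": (5, 0.15),
}
dependencies = [("s1", t) for t in ("s2", "s3", "s4", "s5")]   # assumed: s1 is the source step

def rank(name: str) -> float:
    cost, sel = steps[name]
    return (1.0 - sel) / cost      # selectivity benefit per unit of cost

def greedy_order() -> List[str]:
    """Append, at each step, the maximum-rank task whose prerequisites are already placed."""
    order: List[str] = []
    remaining = set(steps)
    while remaining:
        eligible = [t for t in remaining
                    if all(a in order for a, b in dependencies if b == t)]
        nxt = max(eligible, key=rank)
        order.append(nxt)
        remaining.remove(nxt)
    return order

print(greedy_order())   # selective steps are scheduled as early as the constraints allow
```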
Finally, minimizing the sum of the task costs also appears in multi-criteria proposals that additionally consider reliability, and more specifically fault tolerance BIB008 BIB005. These proposals employ a further constraint in the objective function, denoted as function g() (see the second row in Table 3); in these proposals, g() bounds the number of faults that can be tolerated in a specific time period. The strategy for exploring the search space of different orderings extends the techniques proposed by Simitsis et al. BIB004.
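For contrast with the heuristics, the sketch below illustrates the flavor of the exact dynamic-programming approach discussed earlier: it computes, for every dependency-closed subset of tasks, the cheapest linear ordering by extending optimal sub-plans that are one task smaller, which makes the O(2^n) state space explicit. The task statistics and names are hypothetical.

```python
from itertools import combinations

# task name -> (selectivity, cost per input unit, prerequisite tasks)
TASKS = {
    "A": (1.0, 2.0, set()),
    "B": (0.2, 4.0, {"A"}),
    "C": (1.0, 1.0, set()),
    "D": (0.5, 6.0, {"B", "C"}),
}

def dp_order(tasks):
    names = list(tasks)
    full = frozenset(names)
    # best[S] = (cost of cheapest linear plan over subset S,
    #            surviving data fraction after S, ordering as a tuple)
    best = {frozenset(): (0.0, 1.0, ())}
    for size in range(1, len(names) + 1):
        for combo in combinations(names, size):
            s, entry = frozenset(combo), None
            for last in combo:                   # try each task as the last one
                sel, cpi, prereq = tasks[last]
                rest = s - {last}
                if rest not in best or not prereq <= rest:
                    continue                     # violates a dependency constraint
                c_rest, frac, order = best[rest]
                cand = (c_rest + frac * cpi, frac * sel, order + (last,))
                if entry is None or cand[0] < entry[0]:
                    entry = cand
            if entry is not None:
                best[s] = entry
    return best[full]

cost, _, order = dp_order(TASKS)
print(order, cost)
```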
|
The many faces of data-centric workflow optimization: a survey <s> Techniques for minimizing the bottleneck cost <s> Web services are becoming a standard method of sharing data and functionality among loosely-coupled systems. We propose a general-purpose Web Service Management System (WSMS) that enables querying multiple web services in a transparent and integrated fashion. This paper tackles a first basic WSMS problem: query optimization for Select-Project-Join queries spanning multiple web services. Our main result is an algorithm for arranging a query's web service calls into a pipelined execution plan that optimally exploits parallelism among web services to minimize the query's total running time. Surprisingly, the optimal plan can be found in polynomial time even in the presence of arbitrary precedence constraints among web services, in contrast to traditional query optimization where the analogous problem is NP-hard. We also give an algorithm for determining the optimal granularity of data "chunks" to be used for each web service call. Experiments with an initial prototype indicate that our algorithms can lead to significant performance improvement over more straightforward techniques. <s> BIB001 </s> The many faces of data-centric workflow optimization: a survey <s> Techniques for minimizing the bottleneck cost <s> The problem of ordering expensive predicates (or filter ordering) has recently received renewed attention due to emerging computing paradigms such as processing engines for queries over remote Web Services, and cloud and grid computing. The optimization of pipelined plans over services differs from traditional optimization significantly, since execution takes place in parallel and thus the query response time is determined by the slowest node in the plan, which is called the bottleneck node. Although polynomial algorithms have been proposed for several variants of optimization problems in this setting, the fact that communication links are typically heterogeneous in wide-area environments has been largely overlooked. The authors propose an attempt to optimize linear orderings of services when the services communicate directly with each other and the communication links are heterogeneous. The authors propose a novel optimal algorithm to solve this problem efficiently. The evaluation of the proposal shows that it can result in significant reductions of the response time. <s> BIB002 </s> The many faces of data-centric workflow optimization: a survey <s> Techniques for minimizing the bottleneck cost <s> In this paper, we explore the complexity of mapping filtering streaming applications on large-scale homogeneous and heterogeneous platforms, with a particular emphasis on communication models and their impact. Filtering applications are streaming applications where each node also has a selectivity which either increases or decreases the size of its input data set. This selectivity makes the problem of scheduling these applications more challenging than the more studied problem of scheduling “non-filtering” streaming workflows. We address the complexity of the following two problems: ::: Optimization: Given a filtering workflow, how can one compute the mapping and schedule that minimize the period or latency? A solution to this problem requires generating both the mapping and the associated operation list—the order in which each processor executes its assigned tasks. ::: ::: ::: ::: We address this general problem in two steps. 
First, we address the simplified model without communication cost. In this case, the evaluation problems are easy, and the optimization problems have polynomial complexity on homogeneous platforms. However, we show that the optimization problems become NP-hard on heterogeneous platforms. Second, we consider platforms with communication costs. Clearly, due to the previous results, the optimization problems on heterogeneous platforms are still NP-hard. Therefore we come back to homogeneous platforms and extend the framework with three significant realistic communication models. Now even evaluation problems become difficult, because the mapping must now be enriched with an operation list that provides the time-steps at which each computation and each communication occurs in the system: determining the best operation list has a combinatorial nature. Not too surprisingly, optimization problems are NP-hard too. Altogether, this paper provides a comprehensive overview of the additional difficulties induced by heterogeneity and communication costs. <s> BIB003 </s> The many faces of data-centric workflow optimization: a survey <s> Techniques for minimizing the bottleneck cost <s> This paper deals with pipelined queries over services. The execution plan of such queries defines an order in which the services are called. We present the theoretical underpinnings of a newly proposed algorithm that produces the optimal linear ordering corresponding to a query being executed in a decentralized manner, i.e., when the services communicate directly with each other. The optimality is defined in terms of query response time, which is determined by the bottleneck service in the plan. The properties discussed in this work allow a branch-and-bound approach to be very efficient. <s> BIB004 </s> The many faces of data-centric workflow optimization: a survey <s> Techniques for minimizing the bottleneck cost <s> Extract-Transform-Load (ETL) processes play an important role in data warehousing. Typically, design work on ETL has focused on performance as the sole metric to make sure that the ETL process finishes within an allocated time window. However, other quality metrics are also important and need to be considered during ETL design. In this paper, we address ETL design for performance plus fault-tolerance and freshness. There are many reasons why an ETL process can fail and a good design needs to guarantee that it can be recovered within the ETL time window. How to make ETL robust to failures is not trivial. There are different strategies that can be used and they each have different costs and benefits. In addition, other metrics can affect the choice of a strategy; e.g., higher freshness reduces the time window for recovery. The design space is too large for informal, ad-hoc approaches. In this paper, we describe our QoX optimizer that considers multiple design strategies and finds an ETL design that satisfies multiple objectives. In particular, we define the optimizer search space, cost functions, and search algorithms. Also, we illustrate its use through several experiments and we show that it produces designs that are very near optimal. <s> BIB005 </s> The many faces of data-centric workflow optimization: a survey <s> Techniques for minimizing the bottleneck cost <s> The development of workflow management systems (WfMSs) for the effective and efficient management of workflows in wide-area infrastructures has received a lot of attention in recent years. 
Existing WfMSs provide tools that simplify the workflow composition and enactment actions, while they support the execution of complex tasks on remote computational resources usually through calls to web services (WSs). Nowadays, an increasing number of WfMSs employ pipelining during the workflow execution. In this work, we focus on improving the performance of long-running workflows consisting of multiple pipelined calls to remote WSs when the execution takes place in a totally decentralized manner. The novelty of our algorithm lies in the fact that it considers the network heterogeneity, and although the optimization problem becomes more complex, it is capable of finding an optimal solution in a short time. Our proposal is evaluated through a real prototype deployed on PlanetLab, and the experimental results are particularly encouraging. <s> BIB006 </s> The many faces of data-centric workflow optimization: a survey <s> Techniques for minimizing the bottleneck cost <s> In the parallel pipelined filter ordering problem, we are given a set of n filters that run in parallel. The filters need to be applied to a stream of elements, to determine which elements pass all filters. Each filter has a rate limit ri on the number of elements it can process per unit time, and a selectivity pi, which is the probability that a random element will pass the filter. The goal is to maximize throughput. This problem appears naturally in a variety of settings, including parallel query optimization in databases and query processing over Web services. ::: We present an O(n3) algorithm for this problem, given tree-structured precedence constraints on the filters. This extends work of Condon et al. [2009] and Kodialam [2001], who presented algorithms for solving the problem without precedence constraints. Our algorithm is combinatorial and produces a sparse solution. Motivated by join operators in database queries, we also give algorithms for versions of the problem in which “filter” selectivities may be greater than or equal to 1. ::: We prove a strong connection between the more classical problem of minimizing total work in sequential filter ordering (A), and the parallel pipelined filter ordering problem (B). More precisely, we prove that A is solvable in polynomial time for a given class of precedence constraints if and only if B is as well. This equivalence allows us to show that B is NP-Hard in the presence of arbitrary precedence constraints (since A is known to be NP-Hard in that setting). <s> BIB007
|
Regarding the problem of minimizing the maximum task cost (third row in Table 3), which acts as the performance bottleneck, there is a task ordering mechanism initially proposed for the parallel execution of online WSs represented as queries BIB001. The rationale of this technique is to push the selective flow tasks (i.e., those with sel < 1) to an earlier stage of the execution plan in order to prune the input dataset of each service. Based on the selectivity values, the output of a service may be dispatched to multiple other services to be executed in parallel or in sequence; the algorithm runs in O(n^5) time in the worst case. The problem is formulated in a way that renders it tractable, and the solution is exact. Another optimization technique that considers task ordering for online queries over Web Services appears in BIB006 BIB002. The relevant rows of Table 3 summarize these objective functions:

Bottleneck cost: min max c(v_i), i = 1 … n BIB003 BIB001; with edge costs, min max c(v_i, e_ij), i = 1 … n BIB006 BIB002
Critical path cost: min c(v_i), where v_i belongs to the critical path BIB003
Throughput: max f(v_i), i = 1 … n BIB007

The formulation in BIB006 BIB002 extends the one proposed by Srivastava et al. BIB001 in that it also considers edge costs. This modification renders the problem intractable BIB004. Its practical value is that edge costs naturally capture the data transmission between tasks in a distributed setting. The solution proposed by Tsamoura et al. BIB006 BIB002 consists of a branch-and-bound optimization approach with advanced heuristics for early pruning; despite its exponential complexity, it is shown to apply to flows with hundreds of tasks, for reasonable probability distributions of vertex and edge costs.

The techniques for minimizing the bottleneck cost can be combined with those for the minimization of the sum of the costs. More specifically, the pipelined tasks can be grouped together and, for the corresponding sub-flow, the optimization can be performed according to the bottleneck cost metric; then, these groups of tasks can be optimized considering the sum of their costs. This essentially leads to a hybrid objective function that aims to minimize the sum of the costs of segments of pipelining operators, where each segment cost is defined according to the bottleneck cost. A heuristic combining the two metrics has appeared in BIB005.
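The bottleneck objective can be illustrated with a small sketch: for a pipelined linear plan, the bottleneck cost is the largest per-initial-tuple amount of work performed by any single task once upstream selectivities have reduced its input, and, for a toy flow, an exhaustive search over dependency-respecting orderings minimizes it. Edge (data shipping) costs, which make the real problem intractable, are omitted, and all numbers are illustrative.

```python
from itertools import permutations

# task name -> (selectivity, cost per input data unit, prerequisite tasks)
TASKS = {
    "extract": (1.0, 1.0, set()),
    "enrich":  (1.0, 8.0, {"extract"}),
    "filter":  (0.2, 2.0, {"extract"}),
}

def respects_constraints(order):
    seen = set()
    for t in order:
        if not TASKS[t][2] <= seen:
            return False
        seen.add(t)
    return True

def bottleneck_cost(order):
    # Work per initial input tuple done by each task; the most loaded
    # task determines the throughput of the whole pipeline.
    frac, worst = 1.0, 0.0
    for t in order:
        sel, cpi, _ = TASKS[t]
        worst = max(worst, frac * cpi)
        frac *= sel
    return worst

best = min((p for p in permutations(TASKS) if respects_constraints(p)),
           key=bottleneck_cost)
print(best, bottleneck_cost(best))
```

Placing the selective filter before the expensive enrich task lowers the bottleneck from 8 to 2 work units per initial tuple in this example, mirroring the "push selective tasks early" rationale described above.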
|
The many faces of data-centric workflow optimization: a survey <s> Task cost models <s> In data integration systems, queries posed to a mediator need to be translated into a sequence of queries to the underlying data sources. In a heterogeneous environment, with sources of diverse and limited query capabilities, not all the translations are feasible. In this paper, we study the problem of finding feasible and efficient query plans for mediator systems. We consider conjunctive queries on mediators and model the source capabilities through attribute-binding adornments. We use a simple cost model that focuses on the major costs in mediation systems, those involved with sending queries to sources and getting answers back. Under this metric, we develop two algorithms for source query sequencing - one based on a simple greedy strategy and another based on a partitioning scheme. The first algorithm produces optimal plans in some scenarios, and we show a linear bound on its worst case performance when it misses optimal plans. The second algorithm generates optimal plans in more scenarios, while having no bound on the margin by which it misses the optimal plans. We also report on the results of the experiments that study the performance of the two algorithms. <s> BIB001 </s> The many faces of data-centric workflow optimization: a survey <s> Task cost models <s> An ETL process is used to extract data from various sources, transform it and load it into a Data Warehouse. In this paper, we analyse an ETL flow and observe that only some of the dependencies in an ETL flow are essential while others are basically represents the flow of data. For the linear flows, we exploit the underlying dependency graph and develop a greedy heuristic technique to determine a reordering that significantly improves the quality of the flow. Rather than adopting a state-space search approach, we use the cost functions and selectivities to determine the best option at each position in a right-to-left manner. To deal with complex flows, we identify activities that can be transferred between linear segments in it and position those activities appropriately. We then use the re-orderings of the linear segments to obtain a cost-optimal semantically equivalent flow for a given complex flow. Experimental evaluation has shown that by using the proposed techniques, ETL flows can be better optimized and with much less effort compared to existing methods. <s> BIB002 </s> The many faces of data-centric workflow optimization: a survey <s> Task cost models <s> Extract-Transform-Load (ETL) processes play an important role in data warehousing. Typically, design work on ETL has focused on performance as the sole metric to make sure that the ETL process finishes within an allocated time window. However, other quality metrics are also important and need to be considered during ETL design. In this paper, we address ETL design for performance plus fault-tolerance and freshness. There are many reasons why an ETL process can fail and a good design needs to guarantee that it can be recovered within the ETL time window. How to make ETL robust to failures is not trivial. There are different strategies that can be used and they each have different costs and benefits. In addition, other metrics can affect the choice of a strategy; e.g., higher freshness reduces the time window for recovery. The design space is too large for informal, ad-hoc approaches. 
In this paper, we describe our QoX optimizer that considers multiple design strategies and finds an ETL design that satisfies multiple objectives. In particular, we define the optimizer search space, cost functions, and search algorithms. Also, we illustrate its use through several experiments and we show that it produces designs that are very near optimal. <s> BIB003 </s> The many faces of data-centric workflow optimization: a survey <s> Task cost models <s> Data-intensive analytic flows, such as populating a datawarehouse or analyzing a click stream at runtime, are very common in modern business intelligence scenarios. Current state-of-the-art data flow management techniques rely on the users to specify the flow structure without performing automated optimization of that structure. In this work, we introduce a declarative way to specify flows, which is based on annotated descriptions of the output schema of each flow activity. We show that our approach is adequate to capture both a wide-range of arbitrary data transformations, which cannot be supported by traditional relational operators, and the precedence constraints between the various stages in the flow. Moreover, we show that we can express the flows as annotated queries and thus apply precedence-aware query optimization algorithms. We propose an approach to optimizing linear conceptual data flows by producing a parallel execution plan and our evaluation results show that we can speedup the flow execution by up to an order of magnitude compared to existing techniques. <s> BIB004 </s> The many faces of data-centric workflow optimization: a survey <s> Task cost models <s> Modern data analysis is increasingly employing data-intensive flows for processing very large volumes of data. As the data flows become more and more complex and operate in a highly dynamic environment, we argue that we need to resort to automated cost-based optimization solutions rather than relying on efficient designs by human experts. We further demonstrate that the current state-of-the-art in flow optimizations needs to be extended and we propose a promising direction for optimizing flows at the logical level, and more specifically, for deciding the sequence of flow tasks. <s> BIB005 </s> The many faces of data-centric workflow optimization: a survey <s> Task cost models <s> Analyzing big data with the help of automated data flows attracts a lot of attention because of the growing need for end-to-end processing of this data. Modern data flows may consist of a high number of tasks and it is difficult for flow designers to define an efficient execution order of the tasks manually given that, typically, there is significant freedom in the valid positioning for some of the tasks. Several automated execution plan enumeration techniques have been proposed. These solutions can be broadly classified into three categories, each having significant limitations: (i) the optimizations are based on rewrite rules similar to those used in databases, such as filter and projection push-down, but these rules cover only the flow tasks that correspond to extended relational algebra operators. To cover arbitrary tasks, the solutions (ii) either rely on simple heuristics, or (iii) they exhaustively check all orderings, and thus cannot scale. We target the second category and we propose an efficient and polynomial cost-based task ordering solution for flows with arbitrary tasks seen as black boxes. 
We evaluated our proposals using both real runs and simulations, and the results show that we can achieve speed-ups of orders of magnitude, especially for flows with a high number of tasks even for relatively low flexibility in task positioning. <s> BIB006
|
Orthogonally to the objective functions in Table 3, different cost models can be employed to derive c(v_i), the cost of the ith task v_i. The important point is that a task cost model can be used as a component in any cost-based optimization technique, regardless of whether it has been employed in the original work proposing that technique. A common assumption is that c(v_i) depends on the volume of data processed by v_i, but this can be expressed in several ways:

c(v_i) = (∏_{v_j ∈ T_prec_i} sel_j) · cpi_i: this cost model defines the cost of the ith task as the product of (i) the cost per input data unit (cpi_i) and (ii) the product of the selectivities sel of the preceding tasks; T_prec_i is the set of all the tasks between the data sources and v_i. This cost model is explicitly used in proposals such as BIB004 BIB005 BIB006 BIB002 BIB001.

c(v_i) = rs(v_i): in this case, the cost model is defined as the size of the results (rs) of v_i; it is used in BIB001, where each task is a remote database query.

c(v_i) = a weighted sum of the three main cost components, namely the CPU, I/O, and data shipping costs. Further, CPU(v_i) can be elaborated and specified as in BIB003. That model explicitly covers task parallelization and splits the cost of a task into the processing cost (proc) and the cost to partition and merge data (part); the former cost is divided into a part that depends on the input size and a fixed one. The proposal in BIB003 also considers the tasks in the flow that add recovery points or create replicas by providing specific formulas for them.
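The sketch below spells out two of the cost models above in code; the weights and component values of the weighted-sum model are placeholders rather than calibrated figures, and the task names are hypothetical.

```python
# Cost model 1: c(v_i) = (product of selectivities of preceding tasks) * cpi_i
def selectivity_based_costs(order, sel, cpi):
    costs, frac = {}, 1.0
    for t in order:
        costs[t] = frac * cpi[t]   # charged per data unit that reaches the task
        frac *= sel[t]
    return costs

# Cost model 3: c(v_i) = w_cpu*CPU(v_i) + w_io*IO(v_i) + w_ship*SHIP(v_i)
def weighted_sum_cost(cpu, io, ship, w_cpu=1.0, w_io=1.0, w_ship=1.0):
    return w_cpu * cpu + w_io * io + w_ship * ship

order = ["parse", "classify", "filter"]
sel = {"parse": 1.0, "classify": 1.0, "filter": 0.1}
cpi = {"parse": 1.0, "classify": 10.0, "filter": 1.0}
print(selectivity_based_costs(order, sel, cpi))
print(weighted_sum_cost(cpu=5.0, io=2.0, ship=0.5))
```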
|
The many faces of data-centric workflow optimization: a survey <s> Additional remarks <s> Extraction-transformation-loading (ETL) tools are pieces of software responsible for the extraction of data from several sources, their cleansing, customization, and insertion into a data warehouse. In this paper, we derive into the logical optimization of ETL processes, modeling it as a state-space search problem. We consider each ETL workflow as a state and fabricate the state space through a set of correct state transitions. Moreover, we provide an exhaustive and two heuristic algorithms toward the minimization of the execution cost of an ETL workflow. The heuristic algorithm with greedy characteristics significantly outperforms the other two algorithms for a large set of experimental cases. <s> BIB001 </s> The many faces of data-centric workflow optimization: a survey <s> Additional remarks <s> Abstract Integration flows are used to propagate data between heterogeneous operational systems or to consolidate data into data warehouse infrastructures. In order to meet the increasing need of up-to-date information, many messages are exchanged over time. The efficiency of those integration flows is therefore crucial to handle the high load of messages and to reduce message latency. State-of-the-art strategies to address this performance bottleneck are based on incremental statistic maintenance and periodic cost-based re-optimization. This also achieves adaptation to unknown statistics and changing workload characteristics, which is important since integration flows are deployed for long time horizons. However, the major drawbacks of periodic re-optimization are many unnecessary re-optimization steps and missed optimization opportunities due to adaptation delays. In this paper, we therefore propose the novel concept of on-demand re-optimization. We exploit optimality conditions from the optimizer in order to (1) monitor optimality of the current plan, and (2) trigger directed re-optimization only if necessary. Furthermore, we introduce the PlanOptimalityTree as a compact representation of optimality conditions that enables efficient monitoring and exploitation of these conditions. As a result and in contrast to existing work, re-optimization is immediately triggered but only if a new plan is certain to be found. Our experiments show that we achieve near-optimal re-optimization overhead and fast workload adaptation. <s> BIB002
|
Regarding the execution environment, since the task (re-)ordering techniques refer to the logical WEP level, they can be applied to both centralized and distributed flow execution environments. However, in parallel and distributed environments, the data communication cost needs to be considered. The difference between these environments with regard to the communication cost is that, in the latter, this cost depends on both the sender and the receiver task and, as such, needs to be represented not as a component of the vertex cost but as a property of the edge cost. Additionally, very few techniques, e.g., BIB001, explicitly consider re-orderings between single input/output tasks and multi-input or multi-output tasks. Finally, none of the task ordering techniques discussed is adaptive, that is, none considers workflow re-optimization during the execution phase. In general, adaptive flow optimization is a subarea in its infancy. However, Böhm et al. BIB002 have proposed solutions for choosing when to trigger re-optimization, which, in principle, can be coupled with any cost-based flow optimization technique.
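As a rough illustration of on-demand re-optimization (a deliberate simplification, not the actual mechanism of Böhm et al.), the following sketch re-checks whether the deployed task ordering still satisfies the rank-based optimality condition for constraint-free linear plans once runtime statistics have been updated, and triggers re-optimization only when that condition is violated. Task names and statistics are assumptions.

```python
def rank(sel, cpi):
    return (1.0 - sel) / cpi

def plan_still_optimal(order, stats):
    # Optimality condition for a linear, constraint-free plan:
    # ranks must be non-increasing along the deployed ordering.
    ranks = [rank(*stats[t]) for t in order]
    return all(r1 >= r2 for r1, r2 in zip(ranks, ranks[1:]))

deployed = ["filter_a", "filter_b"]
observed = {"filter_a": (0.9, 1.0),   # (observed selectivity, cost per input unit)
            "filter_b": (0.1, 1.0)}

if not plan_still_optimal(deployed, observed):
    print("statistics drifted -- trigger re-optimization")
```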
|
The many faces of data-centric workflow optimization: a survey <s> Task introduction <s> Extraction-transformation-loading (ETL) tools are pieces of software responsible for the extraction of data from several sources, their cleansing, customization, and insertion into a data warehouse. In this paper, we derive into the logical optimization of ETL processes, modeling it as a state-space search problem. We consider each ETL workflow as a state and fabricate the state space through a set of correct state transitions. Moreover, we provide an exhaustive and two heuristic algorithms toward the minimization of the execution cost of an ETL workflow. The heuristic algorithm with greedy characteristics significantly outperforms the other two algorithms for a large set of experimental cases. <s> BIB001 </s> The many faces of data-centric workflow optimization: a survey <s> Task introduction <s> In this paper, we deal with the problem of determining the best possible physical implementation of an ETL workflow, given its logical-level description and an appropriate cost model as inputs. We formulate the problem as a state-space problem and provide a suitable solution for this task. We further extend this technique by intentionally introducing sorter activities in the workflow in order to search for alternative physical implementations with lower cost. We experimentally assess our method based on a principled organization of test suites. <s> BIB002 </s> The many faces of data-centric workflow optimization: a survey <s> Task introduction <s> Extract-Transform-Load (ETL) processes play an important role in data warehousing. Typically, design work on ETL has focused on performance as the sole metric to make sure that the ETL process finishes within an allocated time window. However, other quality metrics are also important and need to be considered during ETL design. In this paper, we address ETL design for performance plus fault-tolerance and freshness. There are many reasons why an ETL process can fail and a good design needs to guarantee that it can be recovered within the ETL time window. How to make ETL robust to failures is not trivial. There are different strategies that can be used and they each have different costs and benefits. In addition, other metrics can affect the choice of a strategy; e.g., higher freshness reduces the time window for recovery. The design space is too large for informal, ad-hoc approaches. In this paper, we describe our QoX optimizer that considers multiple design strategies and finds an ETL design that satisfies multiple objectives. In particular, we define the optimizer search space, cost functions, and search algorithms. Also, we illustrate its use through several experiments and we show that it produces designs that are very near optimal. <s> BIB003 </s> The many faces of data-centric workflow optimization: a survey <s> Task introduction <s> Abstract Recent years have seen an increased interest in large-scale analytical data flows on non-relational data. These data flows are compiled into execution graphs scheduled on large compute clusters. In many novel application areas the predominant building blocks of such data flows are user-defined predicates or functions (U df s). However, the heavy use of U df s is not well taken into account for data flow optimization in current systems. S ofa is a novel and extensible optimizer for U df -heavy data flows. 
It builds on a concise set of properties for describing the semantics of Map/Reduce-style U df s and a small set of rewrite rules, which use these properties to find a much larger number of semantically equivalent plan rewrites than possible with traditional techniques. A salient feature of our approach is extensibility: we arrange user-defined operators and their properties into a subsumption hierarchy, which considerably eases integration and optimization of new operators. We evaluate S ofa on a selection of U df -heavy data flows from different domains and compare its performance to three other algorithms for data flow optimization. Our experiments reveal that S ofa finds efficient plans, outperforming the best plans found by its competitors by a factor of up to six. <s> BIB004 </s> The many faces of data-centric workflow optimization: a survey <s> Task introduction <s> We describe Cumulon, a system aimed at helping users develop and deploy matrix-based data analysis programs in a public cloud. A key feature of Cumulon is its end-to-end support for the so-called spot instances---machines whose market price fluctuates over time but is usually much lower than the regular fixed price. A user sets a bid price when acquiring spot instances, and loses them as soon as the market price exceeds the bid price. While spot instances can potentially save cost, they are difficult to use effectively, and run the risk of not finishing work while costing more. Cumulon provides a highly elastic computation and storage engine on top of spot instances, and offers automatic cost-based optimization of execution, deployment, and bidding strategies. Cumulon further quantifies how the uncertainty in the market price translates into the cost uncertainty of its recommendations, and allows users to specify their risk tolerance as an optimization constraint. <s> BIB005
|
Task introduction has been proposed for three reasons. Firstly, to achieve fault tolerance through the introduction of recovery points and replicator tasks in online ETLs BIB003. For recovery points, a new node storing the current flow state is inserted in the flow in order to assist recovery from failures without recomputing the flow from scratch. Adding a recovery point (at a specific place in the plan) depends on a cost function that compares the projected recovery cost in case of failure against the cost to maintain the recovery point. Additionally, the replicator nodes produce copies of specified sub-flows in order to tolerate local failures when no recovery points can be inserted, e.g., because the associated overhead would increase the execution time above a threshold. In both cases of task introduction, the semantics of the flow remain unchanged. The proposed technique extends the state space search in BIB001 after having pruned the search space. The objective function employed is the constrained sum cost one (2nd row in Table 3), where the constraint is on the number of places where a failure can occur. The cost model explicitly covers the recovery maintenance overhead (last case in Sect. 4.1.5). The key idea behind the pruning of the search space is first to apply task re-ordering and then to detect all the promising places to add recovery points based on heuristic rules. An example of the technique is shown in Fig. 6: suppose that we examine the introduction of up to two recovery points. The two possible places are just after the Sort and Join tasks, respectively. Assume that the most beneficial place is the first one, denoted as RP 1. Then, given RP 1, RP 2 is discarded because it incurs a higher cost than re-executing the Join task. Similarly to the recovery points above, the technique proposed by Huang et al. BIB005 introduces operations that copy intermediate data from transient nodes to primary ones, using a cluster of machines containing both transient and primary cloud machines; the former can be reclaimed by the cloud provider at any time, whereas the latter are allocated to the flow throughout its execution.

Secondly, task introduction has been employed by Rheinländer et al. BIB004 to automatically insert explicit filtering tasks when the user has not initially introduced them. This becomes possible with a sophisticated task profiling mechanism employed in that proposal, which allows the system to detect that some data are not actually needed. The goal is to optimize a sum cost objective function, but the technique is orthogonal to any objective function aiming at performance improvement. For example, in Fig. 6, we introduce a filtering task if the final report needs only a subset of the initial data, e.g., if it refers to a specific range of products.

Thirdly, task introduction can be combined with Implementation Selection (Sect. 4.6). An example appears in BIB002, where the purpose is to exploit the benefit of processing sorted records. To this end, it explores the possibility of introducing new vertices, called sorters, and then choosing task implementations that assume sorted input; the overhead of inserting the new tasks is outweighed by the benefits of sort-based implementations. In Fig. 6, we add such a sorter task just before the Join if a sort-based join implementation and sorted report output are preferred.
Proactively ordering data to reduce the overall cost has been used in traditional database query optimization, and it seems to be profitable for ETL flows as well. Finally, all three of these techniques can be combined; in the example of Fig. 6, all of them can be applied simultaneously, yielding the complete plan shown in the figure.
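The recovery-point decision described for the first flavor of task introduction is, at its core, a cost comparison; the sketch below captures it in a simplified form, with made-up figures for maintenance overhead, failure probability, and the cost of recomputing the upstream sub-flow.

```python
def worth_adding_recovery_point(maintenance_cost, failure_prob, recompute_cost):
    """Add a recovery point only if maintaining it is cheaper than the
    expected cost of recomputing the upstream sub-flow after a failure."""
    expected_recovery_saving = failure_prob * recompute_cost
    return maintenance_cost < expected_recovery_saving

# Candidate places for a recovery point (e.g., after Sort, after Join).
candidates = {
    "after_sort": {"maintenance_cost": 4.0, "recompute_cost": 120.0},
    "after_join": {"maintenance_cost": 9.0, "recompute_cost": 20.0},
}
failure_prob = 0.05

for place, c in candidates.items():
    if worth_adding_recovery_point(c["maintenance_cost"], failure_prob,
                                   c["recompute_cost"]):
        print("insert recovery point", place)
```

With these illustrative numbers, only the first candidate is kept, mirroring the Fig. 6 narrative in which RP 1 is beneficial while RP 2 is discarded.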
|
The many faces of data-centric workflow optimization: a survey <s> Task removal <s> This paper describes the Pegasus framework that can be used to map complex scientific workflows onto distributed resources. Pegasus enables users to represent the workflows at an abstract level without needing to worry about the particulars of the target execution systems. The paper describes general issues in mapping applications and the functionality of Pegasus. We present the results of improving application performance through workflow restructuring which clusters multiple tasks in a workflow into single entities. A real-life astronomy application is used as the basis for the study. <s> BIB001 </s> The many faces of data-centric workflow optimization: a survey <s> Task removal <s> Recently, utility Grids have emerged as a new model of service provisioning in heterogeneous distributed systems. In this model, users negotiate with service providers on their required Quality of Service and on the corresponding price to reach a Service Level Agreement. One of the most challenging problems in utility Grids is workflow scheduling, i.e., the problem of satisfying the QoS of the users as well as minimizing the cost of workflow execution. In this paper, we propose a new QoS-based workflow scheduling algorithm based on a novel concept called Partial Critical Paths (PCP), that tries to minimize the cost of workflow execution while meeting a user-defined deadline. The PCP algorithm has two phases: in the deadline distribution phase it recursively assigns subdeadlines to the tasks on the partial critical paths ending at previously assigned tasks, and in the planning phase it assigns the cheapest service to each task while meeting its subdeadline. The simulation results show that the performance of the PCP algorithm is very promising. <s> BIB002 </s> The many faces of data-centric workflow optimization: a survey <s> Task removal <s> Abstract Recent years have seen an increased interest in large-scale analytical data flows on non-relational data. These data flows are compiled into execution graphs scheduled on large compute clusters. In many novel application areas the predominant building blocks of such data flows are user-defined predicates or functions (U df s). However, the heavy use of U df s is not well taken into account for data flow optimization in current systems. S ofa is a novel and extensible optimizer for U df -heavy data flows. It builds on a concise set of properties for describing the semantics of Map/Reduce-style U df s and a small set of rewrite rules, which use these properties to find a much larger number of semantically equivalent plan rewrites than possible with traditional techniques. A salient feature of our approach is extensibility: we arrange user-defined operators and their properties into a subsumption hierarchy, which considerably eases integration and optimization of new operators. We evaluate S ofa on a selection of U df -heavy data flows from different domains and compare its performance to three other algorithms for data flow optimization. Our experiments reveal that S ofa finds efficient plans, outperforming the best plans found by its competitors by a factor of up to six. <s> BIB003 </s> The many faces of data-centric workflow optimization: a survey <s> Task removal <s> BackgroundScientific workflows management systems are increasingly used to specify and manage bioinformatics experiments. 
Their programming model appeals to bioinformaticians, who can use them to easily specify complex data processing pipelines. Such a model is underpinned by a graph structure, where nodes represent bioinformatics tasks and links represent the dataflow. The complexity of such graph structures is increasing over time, with possible impacts on scientific workflows reuse. In this work, we propose effective methods for workflow design, with a focus on the Taverna model. We argue that one of the contributing factors for the difficulties in reuse is the presence of "anti-patterns", a term broadly used in program design, to indicate the use of idiomatic forms that lead to over-complicated design. The main contribution of this work is a method for automatically detecting such anti-patterns, and replacing them with different patterns which result in a reduction in the workflow's overall structural complexity. Rewriting workflows in this way will be beneficial both in terms of user experience (easier design and maintenance), and in terms of operational efficiency (easier to manage, and sometimes to exploit the latent parallelism amongst the tasks).ResultsWe have conducted a thorough study of the workflows structures available in Taverna, with the aim of finding out workflow fragments whose structure could be made simpler without altering the workflow semantics. We provide four contributions. Firstly, we identify a set of anti-patterns that contribute to the structural workflow complexity. Secondly, we design a series of refactoring transformations to replace each anti-pattern by a new semantically-equivalent pattern with less redundancy and simplified structure. Thirdly, we introduce a distilling algorithm that takes in a workflow and produces a distilled semantically-equivalent workflow. Lastly, we provide an implementation of our refactoring approach that we evaluate on both the public Taverna workflows and on a private collection of workflows from the BioVel project.ConclusionWe have designed and implemented an approach to improving workflow structure by way of rewriting preserving workflow semantics. Future work includes considering our refactoring approach during the phase of workflow design and proposing guidelines for designing distilled workflows. <s> BIB004
|
A set of optimization proposals support the idea of removing a task or a set of tasks from the workflow execution plan, without changing the semantics, in order to improve performance. These techniques have been proposed mostly for offline scientific workflows, where it is common to reuse tasks or sub-flows from previous workflows without necessarily examining whether all the included tasks are actually necessary or whether some results are already present. Three techniques adopt this rationale BIB004 BIB001 BIB003, and they are discussed in turn.

The idea of Rheinländer et al. BIB003 is to remove one or more tasks until the workflow consists only of tasks that are necessary for the production of the desired output. This implies that the execution result remains the same regardless of the changes that have been applied. It aims to protect users that have carelessly copied data flow tasks from previous flows. In Fig. 7, we see that, initially, the example data flow contains an Extract Dates task, which is not actually necessary. The heuristic of Deelman et al. BIB001 has been proposed for a parallel execution environment and is one of the few dynamic techniques allowing the re-optimization of the workflow during its execution. At runtime, it checks whether any intermediate results already exist at some node, thus making part of the flow obsolete. Both BIB003 and BIB001 are rule-based and do not directly target an objective function.

Another approach to task removal is to detect duplicate tasks, i.e., tasks performing exactly the same operation, and keep only a single copy in the execution plan BIB004. Such duplication may be caused by carelessly combining existing smaller flows from a repository, e.g., myExperiment BIB002. A necessary condition for ensuring that there are no precedence violations is that these tasks must be free of dependency constraints, which is checked with the help of the task schemata. Such a heuristic has O(n^2) time complexity.
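A minimal sketch of the two removal mechanisms above follows: pruning tasks from which no required output is reachable, and keeping a single copy of duplicate tasks. The flow, the required outputs, and the notion of a task "signature" are hypothetical.

```python
# Flow as an adjacency list: task -> set of downstream tasks.
flow = {
    "extract":       {"extract_dates", "filter"},
    "extract_dates": set(),          # produces data that nobody consumes
    "filter":        {"report"},
    "report":        set(),
}
required_outputs = {"report"}

def prune_unnecessary(flow, required):
    # Keep only tasks from which some required output is reachable.
    def reaches_required(t, seen=frozenset()):
        if t in required:
            return True
        return any(reaches_required(s, seen | {t})
                   for s in flow[t] if s not in seen)
    keep = {t for t in flow if reaches_required(t)}
    return {t: {s for s in succ if s in keep}
            for t, succ in flow.items() if t in keep}

def deduplicate(tasks_with_signatures):
    # Keep a single copy of dependency-free tasks doing exactly the same work.
    seen, kept = set(), []
    for task, signature in tasks_with_signatures:
        if signature not in seen:
            seen.add(signature)
            kept.append(task)
    return kept

print(prune_unnecessary(flow, required_outputs))
print(deduplicate([("clean_1", "lowercase+trim"), ("clean_2", "lowercase+trim")]))
```

On this toy flow, the unnecessary Extract Dates task is pruned, matching the Fig. 7 example, and only one of the two identical cleaning tasks is retained.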
|
The many faces of data-centric workflow optimization: a survey <s> Task merge <s> Extraction-transformation-loading (ETL) tools are pieces of software responsible for the extraction of data from several sources, their cleansing, customization, and insertion into a data warehouse. In this paper, we derive into the logical optimization of ETL processes, modeling it as a state-space search problem. We consider each ETL workflow as a state and fabricate the state space through a set of correct state transitions. Moreover, we provide an exhaustive and two heuristic algorithms toward the minimization of the execution cost of an ETL workflow. The heuristic algorithm with greedy characteristics significantly outperforms the other two algorithms for a large set of experimental cases. <s> BIB001 </s> The many faces of data-centric workflow optimization: a survey <s> Task merge <s> In order to optimize their revenues and profits, an increasing number of businesses organize their business activities in terms of business processes. Typically, they automate important business tasks by orchestrating a number of applications and data stores. Obviously, the performance of a business process is directly dependent on the efficiency of data access, data processing, and data management. ::: ::: In this paper, we propose a framework for the optimization of data processing in business processes. We introduce a set of rewrite rules that transform a business process in such a way that an improved execution with respect to data management can be achieved without changing the semantics of the original process. These rewrite rules are based on a semi-procedural process graph model that externalizes data dependencies as well as control flow dependencies of a business process. Furthermore, we present a multi-stage control strategy for the optimization process. We illustrate the benefits and opportunities of our approach through a prototype implementation. Our experimental results demonstrate that independent of the underlying database system performance gains of orders of magnitude are achievable by reasoning about data and control in a unified framework. <s> BIB002 </s> The many faces of data-centric workflow optimization: a survey <s> Task merge <s> Extract-Transform-Load (ETL) processes play an important role in data warehousing. Typically, design work on ETL has focused on performance as the sole metric to make sure that the ETL process finishes within an allocated time window. However, other quality metrics are also important and need to be considered during ETL design. In this paper, we address ETL design for performance plus fault-tolerance and freshness. There are many reasons why an ETL process can fail and a good design needs to guarantee that it can be recovered within the ETL time window. How to make ETL robust to failures is not trivial. There are different strategies that can be used and they each have different costs and benefits. In addition, other metrics can affect the choice of a strategy; e.g., higher freshness reduces the time window for recovery. The design space is too large for informal, ad-hoc approaches. In this paper, we describe our QoX optimizer that considers multiple design strategies and finds an ETL design that satisfies multiple objectives. In particular, we define the optimizer search space, cost functions, and search algorithms. Also, we illustrate its use through several experiments and we show that it produces designs that are very near optimal. 
<s> BIB003 </s> The many faces of data-centric workflow optimization: a survey <s> Task merge <s> Next generation business intelligence involves data flows that span different execution engines, contain complex functionality like data/text analytics, machine learning operations, and need to be optimized against various objectives. Creating correct analytic data flows in such an environment is a challenging task and is both labor-intensive and time-consuming. Optimizing these flows is currently an ad-hoc process where the result is largely dependent on the abilities and experience of the flow designer. Our previous work addressed analytic flow optimization for multiple objectives over a single execution engine. This paper focuses on optimizing flows for a single objective, namely performance, over multiple execution engines. We consider flows that span a DBMS, a Map-Reduce engine, and an orchestration engine (e.g., an ETL tool or scripting language). This configuration is emerging as a common paradigm used to combine analysis of unstructured data with analysis of structured data (e.g., NoSQL plus SQL). We present flow transformations that model data shipping, function shipping, and operation decomposition and we describe how flow graphs are generated for multiple engines. Performance results for various configurations demonstrate the benefit of optimization. <s> BIB004 </s> The many faces of data-centric workflow optimization: a survey <s> Task merge <s> There is a growing trend of performing analysis on large datasets using workflows composed of MapReduce jobs connected through producer-consumer relationships based on data. This trend has spurred the development of a number of interfaces---ranging from program-based to query-based interfaces---for generating MapReduce workflows. Studies have shown that the gap in performance can be quite large between optimized and unoptimized workflows. However, automatic cost-based optimization of MapReduce workflows remains a challenge due to the multitude of interfaces, large size of the execution plan space, and the frequent unavailability of all types of information needed for optimization. ::: ::: We introduce a comprehensive plan space for MapReduce workflows generated by popular workflow generators. We then propose Stubby, a cost-based optimizer that searches selectively through the subspace of the full plan space that can be enumerated correctly and costed based on the information available in any given setting. Stubby enumerates the plan space based on plan-to-plan transformations and an efficient search algorithm. Stubby is designed to be extensible to new interfaces and new types of optimizations, which is a desirable feature given how rapidly MapReduce systems are evolving. Stubby's efficiency and effectiveness have been evaluated using representative workflows from many domains. <s> BIB005 </s> The many faces of data-centric workflow optimization: a survey <s> Task merge <s> To remain competitive, enterprises are evolving their business intelligence systems to provide dynamic, near realtime views of business activities. To enable this, they deploy complex workflows of analytic data flows that access multiple storage repositories and execution engines and that span the enterprise and even outside the enterprise. We call these multi-engine flows hybrid flows. Designing and optimizing hybrid flows is a challenging task. 
Managing a workload of hybrid flows is even more challenging since their execution engines are likely under different administrative domains and there is no single point of control. To address these needs, we present a Hybrid Flow Management System (HFMS). It is an independent software layer over a number of independent execution engines and storage repositories. It simplifies the design of analytic data flows and includes optimization and executor modules to produce optimized executable flows that can run across multiple execution engines. HFMS dispatches flows for execution and monitors their progress. To meet service level objectives for a workload, it may dynamically change a flow's execution plan to avoid processing bottlenecks in the computing infrastructure. We present the architecture of HFMS and describe its components. To demonstrate its potential benefit, we describe performance results for running sample batch workloads with and without HFMS. The ability to monitor multiple execution engines and to dynamically adjust plans enables HFMS to provide better service guarantees and better system utilization. <s> BIB006 </s> The many faces of data-centric workflow optimization: a survey <s> Task merge <s> A complex analytic data flow may perform multiple, inter-dependent tasks where each task uses a different processing engine. Such a multi-engine flow, termed a hybrid flow, may comprise subflows written in more than one programming language. However, as the number and variety of these engines grow, developing and maintaining hybrid flows at the physical level becomes increasingly challenging. To address this problem, we present BabbleFlow, a system for enabling flow design at a logical level and automatic translation to physical flows. BabbleFlow translates a hybrid flow expressed in a number of languages to a semantically equivalent hybrid flow expressed in the same or a different set of languages. To this end, it composes the multiple physical flows of a hybrid flow into a single logical representation expressed in a unified flow language called xLM. In doing so, it enables a number of graph transformations such as (de-)composition and optimization. Then, it converts the, possibly transformed, xLM data flow graph into an executable form by expressing it in one or more target programming languages. <s> BIB007 </s> The many faces of data-centric workflow optimization: a survey <s> Task merge <s> A complex analytic flow in a modern enterprise may perform multiple, logically independent, tasks where each task uses a different processing engine. We term these multi-engine flows hybrid flows. Using multiple processing engines has advantages such as rapid deployment, better performance, lower cost, and so on. However, as the number and variety of these engines grows, developing and maintaining hybrid flows is a significant challenge because they are specified at a physical level and, so are hard to design and may break as the infrastructure evolves. We address this problem by enabling flow design at a logical level and automatic translation to physical flows. There are three main challenges. First, we describe how flows can be represented at a logical level, abstracting away details of any underlying processing engine. Second, we show how a physical flow, expressed in a programming language or some design GUI, can be imported and converted to a logical flow. 
In particular, we show how a hybrid flow comprising subflows in different languages can be imported and composed as a single, logical flow for subsequent manipulation. Third, we describe how a logical flow is translated into one or more physical flows for execution by the processing engines. The paper concludes with experimental results and example transformations that demonstrate the correctness and utility of our system. <s> BIB008
|
Task merge has also been employed for improving the performance of the workflow execution plan. The main technique is to apply rewriting rules that merge tasks with similar functions into one bigger task. There are three techniques in this group, all tailored to a specific setting; as such, it is unclear whether they can be combined.

First, in BIB002, tasks that encapsulate invocations to an underlying database are merged so that fewer (and more complex) invocations take place. This rule-based heuristic has been proposed for business processes, for which it is common to access various data stores, and such invocations incur a large time overhead. Second, a related technique has been proposed for SQL statements in commercial data integration products. The rationale of this idea is to group the SQL statements into a bigger query in order to push the task functionalities to the best processing engine. Both of these approaches derive the necessary information about the functionality of each task with the help of task profiling and produce larger queries employing standard database technology. For example, instead of processing a series of SQL queries to transform data, it is preferable to create a single bigger query. As previously, the proposed optimization is a heuristic that does not explicitly optimize any objective function. A generalization of this idea to languages beyond SQL is presented by Simitsis et al. BIB004 BIB006, and a programming language translator has been described by Jovanovic et al. BIB007 BIB008. Third, Lim et al. BIB005 present a heuristic, non-exhaustive solution for merging MapReduce jobs. Merging occurs at two levels: first, an attempt is made to transform MapReduce jobs into Map-only jobs; then, sharing common Map or Reduce tasks is investigated. These two aspects are examined with the help of a two-phase heuristic technique.

Finally, in the optimizations in BIB001 BIB003, which rely on a state space search as described previously, adjacent tasks that should not be separated may be grouped together during optimization. The aim of this type of merging is not to produce a flow execution plan with fewer and more complex tasks (i.e., no actual task merge optimization takes place), but to reduce the search space so that the optimization is sped up; after optimization, the merged tasks are split again.
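As a simplified illustration of the SQL-oriented merge flavor, the sketch below collapses maximal runs of adjacent tasks in a linear plan that can be pushed to the same engine into a single composite invocation; the task names and engine annotations are assumptions rather than details of any cited system.

```python
# A linear plan where each task is annotated with the engine it targets.
plan = [
    ("load_orders",    "sql"),
    ("filter_year",    "sql"),
    ("join_customers", "sql"),
    ("score_risk",     "python"),
    ("aggregate",      "sql"),
]

def merge_adjacent(plan, mergeable_engine="sql"):
    """Collapse maximal runs of adjacent tasks targeting the same engine
    into one composite task (e.g., a single bigger SQL statement)."""
    merged, run = [], []
    for task, engine in plan:
        if engine == mergeable_engine:
            run.append(task)
            continue
        if run:
            merged.append(("+".join(run), mergeable_engine))
            run = []
        merged.append((task, engine))
    if run:
        merged.append(("+".join(run), mergeable_engine))
    return merged

print(merge_adjacent(plan))
# [('load_orders+filter_year+join_customers', 'sql'),
#  ('score_risk', 'python'), ('aggregate', 'sql')]
```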
|
The many faces of data-centric workflow optimization: a survey <s> Task decomposition <s> Many systems for big data analytics employ a data flow abstraction to define parallel data processing tasks. In this setting, custom operations expressed as user-defined functions are very common. We address the problem of performing data flow optimization at this level of abstraction, where the semantics of operators are not known. Traditionally, query optimization is applied to queries with known algebraic semantics. In this work, we find that a handful of properties, rather than a full algebraic specification, suffice to establish reordering conditions for data processing operators. We show that these properties can be accurately estimated for black box operators by statically analyzing the general-purpose code of their user-defined functions. We design and implement an optimizer for parallel data flows that does not assume knowledge of semantics or algebraic properties of operators. Our evaluation confirms that the optimizer can apply common rewritings such as selection reordering, bushy join-order enumeration, and limited forms of aggregation push-down, hence yielding similar rewriting power as modern relational DBMS optimizers. Moreover, it can optimize the operator order of nonrelational data flows, a unique feature among today's systems. <s> BIB001 </s> The many faces of data-centric workflow optimization: a survey <s> Task decomposition <s> Next generation business intelligence involves data flows that span different execution engines, contain complex functionality like data/text analytics, machine learning operations, and need to be optimized against various objectives. Creating correct analytic data flows in such an environment is a challenging task and is both labor-intensive and time-consuming. Optimizing these flows is currently an ad-hoc process where the result is largely dependent on the abilities and experience of the flow designer. Our previous work addressed analytic flow optimization for multiple objectives over a single execution engine. This paper focuses on optimizing flows for a single objective, namely performance, over multiple execution engines. We consider flows that span a DBMS, a Map-Reduce engine, and an orchestration engine (e.g., an ETL tool or scripting language). This configuration is emerging as a common paradigm used to combine analysis of unstructured data with analysis of structured data (e.g., NoSQL plus SQL). We present flow transformations that model data shipping, function shipping, and operation decomposition and we describe how flow graphs are generated for multiple engines. Performance results for various configurations demonstrate the benefit of optimization. <s> BIB002 </s> The many faces of data-centric workflow optimization: a survey <s> Task decomposition <s> Recent years have seen an increased interest in large-scale analytical data flows on non-relational data. These data flows are compiled into execution graphs scheduled on large compute clusters. In many novel application areas the predominant building blocks of such data flows are user-defined predicates or functions (UDFs). However, the heavy use of UDFs is not well taken into account for data flow optimization in current systems. SOFA is a novel and extensible optimizer for UDF-heavy data flows.
It builds on a concise set of properties for describing the semantics of Map/Reduce-style UDFs and a small set of rewrite rules, which use these properties to find a much larger number of semantically equivalent plan rewrites than possible with traditional techniques. A salient feature of our approach is extensibility: we arrange user-defined operators and their properties into a subsumption hierarchy, which considerably eases integration and optimization of new operators. We evaluate SOFA on a selection of UDF-heavy data flows from different domains and compare its performance to three other algorithms for data flow optimization. Our experiments reveal that SOFA finds efficient plans, outperforming the best plans found by its competitors by a factor of up to six. <s> BIB003 </s> The many faces of data-centric workflow optimization: a survey <s> Task decomposition <s> To remain competitive, enterprises are evolving in order to quickly respond to changing market conditions and customer needs. In this new environment, a single centralized data warehouse is no longer sufficient. Next generation business intelligence involves data flows that span multiple, diverse processing engines, that contain complex functionality like data/text analytics, machine learning operations, and that need to be optimized against various objectives. A common example is the use of Hadoop to analyze unstructured text and merging these results with relational database queries over the data warehouse. We refer to these multi-engine analytic data flows as hybrid flows. Currently, it is a cumbersome task to create and run hybrid flows. Custom scripts must be written to dispatch tasks to the individual processing engines and to exchange intermediate results. So, designing correct hybrid flows is a challenging task. Optimizing such flows is even harder. Additionally, when the underlying computing infrastructure changes, existing flows likely need modification and reoptimization. The current, ad-hoc design approach cannot scale as hybrid flows become more commonplace. To address this challenge, we are building a platform to design and manage hybrid flows. It supports the logical design of hybrid flows in which implementation details are not exposed. It generates code for the underlying processing engines and orchestrates their execution. But the key enabling technology in the platform is an optimizer that converts the logical flow to an executable form that is optimized for the underlying infrastructure according to user-specified objectives. In this paper, we describe challenges in designing the optimizer and our solutions. We illustrate the optimizer through a real-world use case. We present a logical design and optimized designs for the use case. We show how the performance of the use case varies depending on the system configuration and how the optimizer is able to generate different optimized flows for different configurations. <s> BIB004 </s> The many faces of data-centric workflow optimization: a survey <s> Task decomposition <s> To remain competitive, enterprises are evolving their business intelligence systems to provide dynamic, near realtime views of business activities. To enable this, they deploy complex workflows of analytic data flows that access multiple storage repositories and execution engines and that span the enterprise and even outside the enterprise. We call these multi-engine flows hybrid flows. Designing and optimizing hybrid flows is a challenging task.
Managing a workload of hybrid flows is even more challenging since their execution engines are likely under different administrative domains and there is no single point of control. To address these needs, we present a Hybrid Flow Management System (HFMS). It is an independent software layer over a number of independent execution engines and storage repositories. It simplifies the design of analytic data flows and includes optimization and executor modules to produce optimized executable flows that can run across multiple execution engines. HFMS dispatches flows for execution and monitors their progress. To meet service level objectives for a workload, it may dynamically change a flow's execution plan to avoid processing bottlenecks in the computing infrastructure. We present the architecture of HFMS and describe its components. To demonstrate its potential benefit, we describe performance results for running sample batch workloads with and without HFMS. The ability to monitor multiple execution engines and to dynamically adjust plans enables HFMS to provide better service guarantees and better system utilization. <s> BIB005
|
An advanced optimization functionality is Task Decomposition, according to which the operations of a task are split into multiple tasks; this results in a modification of the set V of vertices. This mechanism has appeared in BIB001 BIB003 as a pre-processing step, before the task ordering takes place. Its advantage is that it opens up opportunities for ordering, i.e., it does not optimize an objective function on its own, but it enables more profitable task orderings. Task decomposition is also employed by Simitsis et al. BIB002 BIB004 BIB005 . In these proposals, complex analysis tasks, such as the sentiment analysis presented in previous examples, can be split into a sequence of finer-granularity tasks, such as tokenization and part-of-speech tagging. Note that both these techniques are tightly coupled to the task implementation platform assumed.
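As a rough illustration of the decomposition mechanism (a hypothetical sketch only; the task names and the dictionary-based flow representation are simplifications introduced here, not the data structures of the cited systems), the snippet below replaces a coarse-grained vertex of a flow graph with a chain of finer-grained vertices, rewiring the incoming and outgoing edges:

```python
# Minimal sketch: decompose one vertex of a DAG into a chain of finer-grained tasks.
# The flow is a dict mapping each vertex to its list of successors. Names are illustrative.

def decompose_task(flow, task, subtasks):
    """Replace 'task' with the chain subtasks[0] -> ... -> subtasks[-1]."""
    successors = flow.pop(task)
    # Redirect edges that pointed to the original task to the first subtask.
    for v, succs in flow.items():
        flow[v] = [subtasks[0] if s == task else s for s in succs]
    # Wire the subtasks as a chain and attach the original successors at the end.
    for a, b in zip(subtasks, subtasks[1:]):
        flow[a] = [b]
    flow[subtasks[-1]] = successors
    return flow

if __name__ == "__main__":
    flow = {"extract": ["sentiment_analysis"], "sentiment_analysis": ["report"], "report": []}
    print(decompose_task(flow, "sentiment_analysis",
                         ["tokenization", "pos_tagging", "polarity_scoring"]))
```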
|
The many faces of data-centric workflow optimization: a survey <s> Task implementation selection <s> QoS-based service selection mechanisms will play an essential role in service-oriented architectures, as e-Business applications want to use services that most accurately meet their requirements. Standard approaches in this field typically are based on the prediction of services’ performance from the quality advertised by providers as well as from feedback of users on the actual levels of QoS delivered to them. The key issue in this setting is to detect and deal with false ratings by dishonest providers and users, which has only received limited attention so far. In this paper, we present a new QoS-based semantic web service selection and ranking solution with the application of a trust and reputation management method to address this problem. We will give a formal description of our approach and validate it with experiments which demonstrate that our solution yields high-quality results under various realistic cheating behaviors. <s> BIB001 </s> The many faces of data-centric workflow optimization: a survey <s> Task implementation selection <s> Web services are becoming a standard method of sharing data and functionality among loosely-coupled systems. We propose a general-purpose Web Service Management System (WSMS) that enables querying multiple web services in a transparent and integrated fashion. This paper tackles a first basic WSMS problem: query optimization for Select-Project-Join queries spanning multiple web services. Our main result is an algorithm for arranging a query's web service calls into a pipelined execution plan that optimally exploits parallelism among web services to minimize the query's total running time. Surprisingly, the optimal plan can be found in polynomial time even in the presence of arbitrary precedence constraints among web services, in contrast to traditional query optimization where the analogous problem is NP-hard. We also give an algorithm for determining the optimal granularity of data "chunks" to be used for each web service call. Experiments with an initial prototype indicate that our algorithms can lead to significant performance improvement over more straightforward techniques. <s> BIB002 </s> The many faces of data-centric workflow optimization: a survey <s> Task implementation selection <s> In this paper, we deal with the problem of determining the best possible physical implementation of an ETL workflow, given its logical-level description and an appropriate cost model as inputs. We formulate the problem as a state-space problem and provide a suitable solution for this task. We further extend this technique by intentionally introducing sorter activities in the workflow in order to search for alternative physical implementations with lower cost. We experimentally assess our method based on a principled organization of test suites. <s> BIB003 </s> The many faces of data-centric workflow optimization: a survey <s> Task implementation selection <s> The advent of Grid environments made feasible the solution of computational intensive problems in a reliable and cost-effective way. As workflow systems carry out more complex and mission-critical applications, Quality of Service (QoS) analysis serves to ensure that each application meets user requirements. 
In that frame, we present a novel algorithm which allows the mapping of workflow processes to Grid provided services assuring at the same time end-to-end provision of QoS based on user-defined parameters and preferences. We also demonstrate the operation of the implemented algorithm and evaluate its effectiveness using a Grid scenario, based on a 3D image rendering application. <s> BIB004 </s> The many faces of data-centric workflow optimization: a survey <s> Task implementation selection <s> Where can I attend an interesting database workshop close to a sunny beach? Who are the strongest experts on service computing based upon their recent publication record and accepted European projects? Can I spend an April weekend in a city served by a low-cost direct flight from Milano offering a Mahler's symphony? We regard the above queries as multi-domain queries, i.e., queries that can be answered by combining knowledge from two or more domains (such as: seaside locations, flights, publications, accepted projects, conference offerings, and so on). This information is available on the Web, but no general-purpose software system can accept the above queries nor compute the answer. At the most, dedicated systems support specific multi-domain compositions (e.g., Google-local locates information such as restaurants and hotels upon geographic maps). ::: ::: This paper presents an overall framework for multi-domain queries on the Web. We address the following problems: (a) expressing multi-domain queries with an abstract formalism, (b) separating the treatment of "search" services within the model, by highlighting their differences from "exact" Web services, (c) explaining how the same query can be mapped to multiple "query plans", i.e., a well-defined scheduling of service invocations, possibly in parallel, which complies with their access limitations and preserves the ranking order in which search services return results; (d) introducing cross-domain joins as first-class operation within plans; (e) evaluating the query plans against several cost metrics so as to choose the most promising one for execution. This framework adapts to a variety of application contexts, ranging from end-user-oriented mash-up scenarios up to complex application integration scenarios. <s> BIB005 </s> The many faces of data-centric workflow optimization: a survey <s> Task implementation selection <s> Grid computing is increasingly considered as a promising next-generation computational platform that supports wide-area parallel and distributed computing. In grid environments, applications are always regarded as workflows. The problem of scheduling workflows in terms of certain quality of service (QoS) requirements is challenging and it significantly influences the performance of grids. By now, there have been some algorithms for grid workflow scheduling, but most of them can only tackle the problems with a single QoS parameter or with small-scale workflows. In this frame, this paper aims at proposing an ant colony optimization (ACO) algorithm to schedule large-scale workflows with various QoS parameters. This algorithm enables users to specify their QoS preferences as well as define the minimum QoS thresholds for a certain application. The objective of this algorithm is to find a solution that meets all QoS constraints and optimizes the user-preferred QoS parameter. 
Based on the characteristics of workflow scheduling, we design seven new heuristics for the ACO approach and propose an adaptive scheme that allows artificial ants to select heuristics based on pheromone values. Experiments are done in ten workflow applications with at most 120 tasks, and the results demonstrate the effectiveness of the proposed algorithm. <s> BIB006 </s> The many faces of data-centric workflow optimization: a survey <s> Task implementation selection <s> Extract-Transform-Load (ETL) processes play an important role in data warehousing. Typically, design work on ETL has focused on performance as the sole metric to make sure that the ETL process finishes within an allocated time window. However, other quality metrics are also important and need to be considered during ETL design. In this paper, we address ETL design for performance plus fault-tolerance and freshness. There are many reasons why an ETL process can fail and a good design needs to guarantee that it can be recovered within the ETL time window. How to make ETL robust to failures is not trivial. There are different strategies that can be used and they each have different costs and benefits. In addition, other metrics can affect the choice of a strategy; e.g., higher freshness reduces the time window for recovery. The design space is too large for informal, ad-hoc approaches. In this paper, we describe our QoX optimizer that considers multiple design strategies and finds an ETL design that satisfies multiple objectives. In particular, we define the optimizer search space, cost functions, and search algorithms. Also, we illustrate its use through several experiments and we show that it produces designs that are very near optimal. <s> BIB007 </s> The many faces of data-centric workflow optimization: a survey <s> Task implementation selection <s> Cloud Computing is promising as a new style of collaborative environment. Efficient Workflow Scheduling is crucial for achieving high performance in Cloud Computing environment. In spite of workflow scheduling has been widely studied. And various algorithms have been proposed to optimize execution time and cost. However the existing cloud services are owned and operated by third-party organizations or enterprises in a closed network. The uncertainty and unreliability existed in the network has caused great threat to the applications. Therefore trust services-oriented strategies must also be considered in workflow scheduling. This paper proposes a Trust services-oriented multi-objectives Workflow Scheduling (TMOWS) model. And a case study has been given to explain the proposed model. <s> BIB008 </s> The many faces of data-centric workflow optimization: a survey <s> Task implementation selection <s> Contemporary continuous dataflow systems use elastic scaling on distributed cloud resources to handle variable data rates and to meet applications' needs while attempting to maximize resource utilization. However, virtualized clouds present an added challenge due to the variability in resource performance -- over time and space -- thereby impacting the application's QoS. Elastic use of cloud resources and their allocation to continuous dataflow tasks need to adapt to such infrastructure dynamism. In this paper, we develop the concept of "dynamic dataflows" as an extension to continuous dataflows that utilizes alternate tasks and allows additional control over the dataflow's cost and QoS. 
We formalize an optimization problem to perform both deployment and runtime cloud resource management for such dataflows, and define an objective function that allows trade-off between the application's value against resource cost. We present two novel heuristics, local and global, based on the variable sized bin packing heuristics to solve this NP-hard problem. We evaluate the heuristics against a static allocation policy for a dataflow with different data rate profiles that is simulated using VM performance traces from a private cloud data center. The results show that the heuristics are effective in intelligently utilizing cloud elasticity to mitigate the effect of both input data rate and cloud resource performance variabilities on QoS. <s> BIB009 </s> The many faces of data-centric workflow optimization: a survey <s> Task implementation selection <s> Presenting the concept and design and implementation of configurable intelligent optimization algorithms in manufacturing systems, this book provides a new configuration method to optimize manufacturing processes. It provides a comprehensive elaboration of basic intelligent optimization algorithms, and demonstrates how their improvement, hybridization and parallelization can be applied to manufacturing. Furthermore, various applications of these intelligent optimization algorithms are exemplified in detail, chapter by chapter. The intelligent optimization algorithm is not just a single algorithm; instead it is a general advanced optimization mechanism which is highly scalable with robustness and randomness. Therefore, this book demonstrates the flexibility of these algorithms, as well as their robustness and reusability in order to solve mass complicated problems in manufacturing. Since the genetic algorithm was presented decades ago, a large number of intelligent optimization algorithms and their improvements have been developed. However, little work has been done to extend their applications and verify their competence in solving complicated problems in manufacturing. This book will provide an invaluable resource to students, researchers, consultants and industry professionals interested in engineering optimization. It will also be particularly useful to three groups of readers: algorithm beginners, optimization engineers and senior algorithm designers. It offers a detailed description of intelligent optimization algorithms to algorithm beginners; recommends new configurable design methods for optimization engineers, and provides future trends and challenges of the new configuration mechanism to senior algorithm designers. <s> BIB010 </s> The many faces of data-centric workflow optimization: a survey <s> Task implementation selection <s> Knowledge Discovery in Databases is a complex process that involves many different data processing and learning operators. Today's Knowledge Discovery Support Systems can contain several hundred operators. A major challenge is to assist the user in designing workflows which are not only valid but also - ideally - optimize some performance measure associated with the user goal. In this paper we present such a system. The system relies on a meta-mining module which analyses past data mining experiments and extracts meta-mining models which associate dataset characteristics with workflow descriptors in view of workflow performance optimization. The meta-mining model is used within a data mining workflow planner, to guide the planner during the workflow planning. 
We learn the meta-mining models using a similarity learning approach, and extract the workflow descriptors by mining the workflows for generalized relational patterns accounting also for domain knowledge provided by a data mining ontology. We evaluate the quality of the data mining workflows that the system produces on a collection of real world datasets coming from biology and show that it produces workflows that are significantly better than alternative methods that can only do workflow selection and not planning. <s> BIB011 </s> The many faces of data-centric workflow optimization: a survey <s> Task implementation selection <s> Integrating heterogeneous data sets has been a significant barrier to many analytics tasks, due to the variety in structure and level of cleanliness of raw data sets requiring one-off ETL code. We propose HiperFuse, which significantly automates the data integration process by providing a declarative interface, robust type inference, extensible domain-specific data models, and a data integration planner which optimizes for plan completion time. The proposed tool is designed for schema-less data querying, code reuse within specific domains, and robustness in the face of messy unstructured data. To demonstrate the tool and its reference implementation, we show the requirements and execution steps for a use case in which IP addresses from a web clickstream log are joined with census data to obtain average income for particular site visitors (IPs), and offer preliminary performance results and qualitative comparisons to existing data integration and ETL tools. <s> BIB012
|
A set of optimization techniques target the Implementation Selection mechanism. At a high level, the problem is that there exist multiple equivalent candidate implementations for each task and we need to decide which ones to employ in the execution plan. The issue of whether the different implementations may produce different results is orthogonal to this discussion, as long as all implementations are acceptable to the user; however, we mostly refer to settings where equivalence also implies the production of the same result set. For example, a task encapsulating a call to a remote WS can contact multiple equivalent WSs, or a task may be implemented to run either in a single-threaded or in a multi-threaded mode. These techniques typically require as input metadata the vertex costs of each task implementation alternative. Suppose that, for each task, there are m alternatives. This leads to a total of O(m^n) combinations; thus, a key challenge is to cope with the exponential search space. In general, the number of alternatives for each task may be different and the total number of combinations is the product of these numbers. For example, in Fig. 8, there are four and three alternatives (Impl_1, . . . , Impl_n) for the Sentiment Analysis and Lookup Product tasks, respectively, corresponding to twelve combinations. It is important to note that, conceptually, the choice of the implementation of each task is orthogonal to decisions on task ordering and the rest of the high-level optimization mechanisms. As such, the techniques in this section can be combined with techniques from the previous sections. A brute-force, and thus exponential-complexity, approach to finding the optimal physical implementation of each flow task before its execution has appeared in BIB003 . This approach models the problem as a state space search and, although it assumes that the sum cost objective function is to be optimized, it can support other objective functions too. An interesting feature of this solution is that it explicitly explores the potential benefit of processing sorted data. Also, the ordering and task introduction algorithm in BIB007 allows for choosing parallel flavors of tasks. The parallel flavors, apart from cloning the tasks as many times as the chosen degree of partitioned parallelism, explicitly consider issues such as splitting the input data, distributing them across all clones, and merging all their outputs. These issues are reflected in an elaborate cost function, as mentioned previously, which is used to decide whether parallelization is beneficial. In addition to the optimization techniques above, there is a set of multi-objective optimization approaches for Implementation Selection. These multi-objective heuristics, apart from the vertex cost, require further metadata that depend on the specified optimization objectives. For example, several multi-objective optimization approaches have been proposed for flows where each task is essentially an invocation to an online WS that may not always be available; in such settings, the aim of the optimizer is the selection of the best service for each service type, taking into account both performance and availability metadata. Three proposals that target this specific environment are BIB004 BIB008 BIB001 . To achieve scalability, each task is checked in isolation, thus resulting in O(nm) time complexity, but at the expense of finding locally optimal solutions only. Kyriazis et al. BIB004 consider availability, performance, and cost for each task. As initial metadata, scalar values for each objective and for candidate services are assumed to be in place. The main focus of the proposed solution is (i) on normalizing and scaling the initial values for each of the objectives and (ii) on devising an iterative improvement algorithm for making the final decisions for each task. The multi-objective function is either the optimization of a single criterion under constraints on the others or the optimization of all the objectives at the same time. However, in both cases, no optimality guarantees (e.g., finding a Pareto optimal solution) are provided. The proposal in BIB001 is similar in not guaranteeing Pareto optimal solutions. It considers performance, availability, and reliability for each candidate WS, where each criterion is weighted and contributes to a single scalar value, according to which services are ordered. The notion of reliability in this proposal is based on trustworthiness. BIB008 is another service selection proposal that considers three objectives, namely performance, monetary cost, and reliability in terms of successful execution. The service metadata are normalized, and the proposed technique employs a max-min heuristic that aims to select a service based on its smallest normalized value. An additional common feature of the proposals in BIB004 BIB008 BIB001 is that no objective function is explicitly targeted. Another multi-objective optimization approach to choosing the best implementation of each task consists of linear-complexity heuristics BIB009 . The main value of those heuristics is that they are designed to be applied on the fly, thus forming one of the few existing adaptive data flow optimization proposals. Additionally, the technique proposed by Braga et al. BIB005 extends the task ordering approach in BIB002 so that, for each task, the most appropriate implementation is first selected. None of these proposals employs a specific objective function either. Finally, multi-objective WS selection can also be performed with the help of ant colony optimization algorithms; an example of applying this optimization technique for selecting WS instantiations between multiple candidates, in a setting where the workflows mainly consist of a series of remote WS invocations, appears in BIB006 , which is further extended by Tao et al. BIB010 .
Fig. 8 An example where Task Implementation Selection is applicable: there are four equivalent ways to implement sentiment analysis and three ways to extract product ids
Based on the above descriptions, two main observations can be drawn regarding the majority of the techniques. Firstly, they address a multi-objective problem. Secondly, they are proposed for a WS application domain. The latter may imply that transferring the results to data flows where tasks exchange big volumes of data directly may not be straightforward. As a final note, there are numerous proposals that perform task implementation selection considering specific types of tasks, such as classification tasks in data mining data flows (e.g., BIB011 ), and file descriptors in ETLs (e.g., BIB012 ). We do not discuss such techniques in detail, because they do not meet the criteria in Sect. 2.2; further, when generalized to arbitrary tasks, they typically correspond to non-interesting enumeration solutions.
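For the per-task, locally greedy style of multi-objective implementation selection discussed above, a minimal sketch could look as follows; the criteria, weights, and candidate metadata are illustrative assumptions, and the normalization/weighting scheme is only one of many possible choices rather than the exact procedure of any cited proposal:

```python
# Minimal sketch: per-task selection among candidate implementations using
# normalized, weighted criteria; cost-like metrics are minimized, benefit-like maximized.
# Checking each task in isolation keeps the complexity at O(n*m). Numbers are illustrative.

def normalize(values, higher_is_better):
    lo, hi = min(values), max(values)
    if hi == lo:
        return [1.0] * len(values)
    scaled = [(v - lo) / (hi - lo) for v in values]
    return scaled if higher_is_better else [1.0 - s for s in scaled]

def select_implementations(tasks, weights):
    """For each task, pick the candidate with the best weighted normalized score."""
    plan = {}
    for task, candidates in tasks.items():
        names = list(candidates)
        scores = [0.0] * len(names)
        for criterion, (weight, higher_is_better) in weights.items():
            col = normalize([candidates[n][criterion] for n in names], higher_is_better)
            scores = [s + weight * c for s, c in zip(scores, col)]
        plan[task] = max(zip(names, scores), key=lambda p: p[1])[0]
    return plan

if __name__ == "__main__":
    tasks = {
        "sentiment_analysis": {
            "impl_1": {"runtime": 120, "availability": 0.99, "cost": 5.0},
            "impl_2": {"runtime": 80,  "availability": 0.95, "cost": 9.0},
        },
        "lookup_product": {
            "impl_1": {"runtime": 40, "availability": 0.97, "cost": 1.0},
            "impl_2": {"runtime": 25, "availability": 0.90, "cost": 2.5},
        },
    }
    weights = {"runtime": (0.5, False), "availability": (0.3, True), "cost": (0.2, False)}
    print(select_implementations(tasks, weights))
```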
|
The many faces of data-centric workflow optimization: a survey <s> Execution engine selection <s> Next generation business intelligence involves data flows that span different execution engines, contain complex functionality like data/text analytics, machine learning operations, and need to be optimized against various objectives. Creating correct analytic data flows in such an environment is a challenging task and is both labor-intensive and time-consuming. Optimizing these flows is currently an ad-hoc process where the result is largely dependent on the abilities and experience of the flow designer. Our previous work addressed analytic flow optimization for multiple objectives over a single execution engine. This paper focuses on optimizing flows for a single objective, namely performance, over multiple execution engines. We consider flows that span a DBMS, a Map-Reduce engine, and an orchestration engine (e.g., an ETL tool or scripting language). This configuration is emerging as a common paradigm used to combine analysis of unstructured data with analysis of structured data (e.g., NoSQL plus SQL). We present flow transformations that model data shipping, function shipping, and operation decomposition and we describe how flow graphs are generated for multiple engines. Performance results for various configurations demonstrate the benefit of optimization. <s> BIB001 </s> The many faces of data-centric workflow optimization: a survey <s> Execution engine selection <s> To remain competitive, enterprises are evolving in order to quickly respond to changing market conditions and customer needs. In this new environment, a single centralized data warehouse is no longer sufficient. Next generation business intelligence involves data flows that span multiple, diverse processing engines, that contain complex functionality like data/text analytics, machine learning operations, and that need to be optimized against various objectives. A common example is the use of Hadoop to analyze unstructured text and merging these results with relational database queries over the data warehouse. We refer to these multi-engine analytic data flows as hybrid flows. Currently, it is a cumbersome task to create and run hybrid flows. Custom scripts must be written to dispatch tasks to the individual processing engines and to exchange intermediate results. So, designing correct hybrid flows is a challenging task. Optimizing such flows is even harder. Additionally, when the underlying computing infrastructure changes, existing flows likely need modification and reoptimization. The current, ad-hoc design approach cannot scale as hybrid flows become more commonplace. To address this challenge, we are building a platform to design and manage hybrid flows. It supports the logical design of hybrid flows in which implementation details are not exposed. It generates code for the underlying processing engines and orchestrates their execution. But the key enabling technology in the platform is an optimizer that converts the logical flow to an executable form that is optimized for the underlying infrastructure according to user-specified objectives. In this paper, we describe challenges in designing the optimizer and our solutions. We illustrate the optimizer through a real-world use case. We present a logical design and optimized designs for the use case. 
We show how the performance of the use case varies depending on the system configuration and how the optimizer is able to generate different optimized flows for different configurations. <s> BIB002 </s> The many faces of data-centric workflow optimization: a survey <s> Execution engine selection <s> To remain competitive, enterprises are evolving their business intelligence systems to provide dynamic, near realtime views of business activities. To enable this, they deploy complex workflows of analytic data flows that access multiple storage repositories and execution engines and that span the enterprise and even outside the enterprise. We call these multi-engine flows hybrid flows. Designing and optimizing hybrid flows is a challenging task. Managing a workload of hybrid flows is even more challenging since their execution engines are likely under different administrative domains and there is no single point of control. To address these needs, we present a Hybrid Flow Management System (HFMS). It is an independent software layer over a number of independent execution engines and storage repositories. It simplifies the design of analytic data flows and includes optimization and executor modules to produce optimized executable flows that can run across multiple execution engines. HFMS dispatches flows for execution and monitors their progress. To meet service level objectives for a workload, it may dynamically change a flow's execution plan to avoid processing bottlenecks in the computing infrastructure. We present the architecture of HFMS and describe its components. To demonstrate its potential benefit, we describe performance results for running sample batch workloads with and without HFMS. The ability to monitor multiple execution engines and to dynamically adjust plans enables HFMS to provide better service guarantees and better system utilization. <s> BIB003 </s> The many faces of data-centric workflow optimization: a survey <s> Execution engine selection <s> We present Cumulon, a system designed to help users rapidly develop and intelligently deploy matrix-based big-data analysis programs in the cloud. Cumulon features a flexible execution model and new operators especially suited for such workloads. We show how to implement Cumulon on top of Hadoop/HDFS while avoiding limitations of MapReduce, and demonstrate Cumulon's performance advantages over existing Hadoop-based systems for statistical data analysis. To support intelligent deployment in the cloud according to time/budget constraints, Cumulon goes beyond database-style optimization to make choices automatically on not only physical operators and their parameters, but also hardware provisioning and configuration settings. We apply a suite of benchmarking, simulation, modeling, and search techniques to support effective cost-based optimization over this rich space of deployment plans. <s> BIB004 </s> The many faces of data-centric workflow optimization: a survey <s> Execution engine selection <s> Data-intensive flows are increasingly encountered in various settings, including business intelligence and scientific scenarios. At the same time, flow technology is evolving. Instead of resorting to monolithic solutions, current approaches tend to employ multiple execution engines, such as Hadoop clusters, traditional DBMSs, and stand-alone tools. We target the problem of allocating flow activities to specific heterogeneous and interdependent execution engines while minimizing the flow execution cost. 
To date, the state-of-the-art is limited to simple heuristics. Although the problem is intractable, we propose practical anytime solutions that are capable of outperforming those simple heuristics and yielding allocation plans in seconds even when optimizing large flows on ordinary machines. Moreover, we prove the NP-hardness of the problem in the generic case and we propose an exact polynomial solution for a specific form of flows, namely, linear flows. We thoroughly evaluate our solutions in both real-world and synthetic flows, and the results show the superiority of our solutions. Especially in real-world scenarios, we can decrease execution time up to more than 3 times. A set of anytime algorithms for yielding mappings of flow nodes to execution engines. An optimal solution with polynomial complexity for linear flows. Evaluation using both real and synthetic flows in a wide range of settings. Proof of the NP-hardness of the problem. <s> BIB005 </s> The many faces of data-centric workflow optimization: a survey <s> Execution engine selection <s> We describe Cumulon, a system aimed at helping users develop and deploy matrix-based data analysis programs in a public cloud. A key feature of Cumulon is its end-to-end support for the so-called spot instances---machines whose market price fluctuates over time but is usually much lower than the regular fixed price. A user sets a bid price when acquiring spot instances, and loses them as soon as the market price exceeds the bid price. While spot instances can potentially save cost, they are difficult to use effectively, and run the risk of not finishing work while costing more. Cumulon provides a highly elastic computation and storage engine on top of spot instances, and offers automatic cost-based optimization of execution, deployment, and bidding strategies. Cumulon further quantifies how the uncertainty in the market price translates into the cost uncertainty of its recommendations, and allows users to specify their risk tolerance as an optimization constraint. <s> BIB006 </s> The many faces of data-centric workflow optimization: a survey <s> Execution engine selection <s> Recently, we have witnessed workflows from science and other data-intensive applications emerging on Infrastructure-as-a-Service (IaaS) clouds, and many workflow service providers offering workflow-as-a-service (WaaS). The major concern of WaaS providers is to minimize the monetary cost of executing workflows in the IaaS clouds. The selection of virtual machines (instances) types significantly affects the monetary cost and performance of running a workflow. Moreover, IaaS cloud environment is dynamic, with high performance dynamics caused by the interference from concurrent executions and price dynamics like spot prices offered by Amazon EC2. Therefore, we argue that WaaS providers should have the notion of offering probabilistic performance guarantees for individual workflows to explicitly expose the performance and cost dynamics of IaaS clouds to users. We develop a scheduling system called Dyna to minimize the expected monetary cost given the user-specified probabilistic deadline guarantees. Dyna includes an A*-based instance configuration method for performance dynamics, and a hybrid instance configuration refinement for using spot instances.
Experimental results with three scientific workflow applications on Amazon EC2 and a cloud simulator demonstrate (1) the ability of Dyna on satisfying the probabilistic deadline guarantees required by the users; (2) the effectiveness on reducing monetary cost in comparison with the existing approaches. <s> BIB007 </s> The many faces of data-centric workflow optimization: a survey <s> Execution engine selection <s> Spark has become one of the main options for large-scale analytics running on top of shared-nothing clusters. This work aims to make a deep dive into the parallelism configuration and shed light on the behavior of parallel spark jobs. It is motivated by the fact that running a Spark application on all the available processors does not necessarily imply lower running time, while may entail waste of resources. We first propose analytical models for expressing the running time as a function of the number of machines employed. We then take another step, namely to present novel algorithms for configuring dynamic partitioning with a view to minimizing resource consumption without sacrificing running time beyond a user-defined limit. The problem we target is NP-hard. To tackle it, we propose a greedy approach after introducing the notions of dependency graphs and of the benefit from modifying the degree of partitioning at a stage; complementarily, we investigate a randomized approach. Our polynomial solutions are capable of judiciously use the resources that are potentially at user’s disposal and strike interesting trade-offs between running time and resource consumption. Their efficiency is thoroughly investigated through experiments based on real execution data. <s> BIB008
|
The techniques in this category focus on choosing the best execution engine for executing the data flow tasks in distributed environments where multiple options exist. For example, assume that the sentiment analysis in our running example can take place on either a DBMS server or a MapReduce cluster. As previously, for the techniques using this mechanism, the vertex cost of each task on each candidate execution engine is a necessary piece of metadata for the optimization algorithm. Also, the corresponding techniques are orthogonal to optimizations referring to the high-level execution plan aspects. For those tasks that can be executed by multiple engines, an exhaustive solution can be adopted for optimally allocating the tasks of a flow to different execution engines in order to meet multiple objectives. The drawback is that an exhaustive solution in general does not scale for a large number of flow tasks and execution engines, similarly to the case of task implementation selection. To overcome this, a set of heuristics can be used for pruning the search space BIB001 BIB002 BIB003 . This technique aims to improve not only the performance, but also the reliability of ETL workflows in terms of fault tolerance. Additionally, a multi-objective solution for optimizing the monetary cost and the performance is to check all the possible execution plans that satisfy a specific time constraint; this approach cannot scale for execution plans with a high number of operators. The objective functions are those mentioned in Sect. 4.1. The same approach to deciding the execution engine can be used to choose the task implementation in BIB001 BIB002 BIB003 . Anytime single-objective heuristics for choosing between multiple engines have been proposed by Kougka et al. BIB005 . Such heuristics take into account, apart from vertex costs, the edge costs and constraints on the capability of an engine to execute certain tasks, and are coupled with a pseudo-polynomial dynamic programming algorithm that can find the optimal allocation for a specific form of DAG, namely linear flows (a simplified sketch of this idea for linear flows is given below). The objective function minimizes the sum of the costs of both tasks and edges, extending the definition in Table 3: min Σ c(v_i, e_ij), where i, j = 1 . . . n. An extension in BIB008 explains how these techniques can be adapted to optimizing the degree of parallelism in Spark flows, taking into account two criteria. A different approach to engine selection has appeared in commercial tools. There, the default option is for ETL operators to execute on a specialized data integration server, unless a heuristic decides to delegate the execution of some of the tasks to the underlying databases, after merging the tasks and reformulating them as a single query. Finally, the engine selection mechanism can be employed in combination with the configuration of execution engine parameters. An example technique is presented by Huang et al. BIB004 , where the initial optimization step deals with the decision of the best type of execution engine and then the configuration parameters are defined, as analyzed in Sect. 4.8. This technique is extended by Huang et al. BIB006 , which focuses on how to decide on the usage of a specific type of cloud machines, namely spot instances. The problem of deciding whether to employ spot instances in clouds is also considered by Zhou et al. BIB007 .
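To illustrate the flavor of engine allocation for linear flows (a simplified sketch under the assumption of a strictly linear task chain and engine-dependent transfer costs; it is not the algorithm of BIB005 , and all cost figures and engine names are fabricated), consider the following dynamic program that minimizes the sum of vertex and edge costs:

```python
# Minimal sketch: dynamic programming over a linear flow to assign each task to an
# engine, minimizing task (vertex) costs plus data-transfer (edge) costs.
# All cost figures and engine names are illustrative.

def allocate_linear_flow(tasks, engines, vertex_cost, edge_cost):
    """tasks: ordered list; vertex_cost[(task, engine)]; edge_cost[(eng1, eng2)] per edge."""
    # best[e] = minimal total cost of the prefix if the last task so far runs on engine e
    best = {e: vertex_cost[(tasks[0], e)] for e in engines}
    choice = [{e: None for e in engines}]
    for task in tasks[1:]:
        new_best, new_choice = {}, {}
        for e in engines:
            # pick the cheapest predecessor engine, accounting for the transfer cost
            prev = min(engines, key=lambda p: best[p] + edge_cost[(p, e)])
            new_best[e] = best[prev] + edge_cost[(prev, e)] + vertex_cost[(task, e)]
            new_choice[e] = prev
        best, choice = new_best, choice + [new_choice]
    # backtrack the optimal assignment
    last = min(best, key=best.get)
    plan = [last]
    for step in reversed(choice[1:]):
        plan.append(step[plan[-1]])
    return list(zip(tasks, reversed(plan))), best[last]

if __name__ == "__main__":
    tasks = ["extract", "sentiment_analysis", "report"]
    engines = ["dbms", "mapreduce"]
    vertex_cost = {("extract", "dbms"): 2, ("extract", "mapreduce"): 5,
                   ("sentiment_analysis", "dbms"): 9, ("sentiment_analysis", "mapreduce"): 3,
                   ("report", "dbms"): 1, ("report", "mapreduce"): 4}
    edge_cost = {("dbms", "dbms"): 0, ("mapreduce", "mapreduce"): 0,
                 ("dbms", "mapreduce"): 2, ("mapreduce", "dbms"): 2}
    print(allocate_linear_flow(tasks, engines, vertex_cost, edge_cost))
```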
|
The many faces of data-centric workflow optimization: a survey <s> Execution engine configuration <s> Extract-Transform-Load (ETL) processes play an important role in data warehousing. Typically, design work on ETL has focused on performance as the sole metric to make sure that the ETL process finishes within an allocated time window. However, other quality metrics are also important and need to be considered during ETL design. In this paper, we address ETL design for performance plus fault-tolerance and freshness. There are many reasons why an ETL process can fail and a good design needs to guarantee that it can be recovered within the ETL time window. How to make ETL robust to failures is not trivial. There are different strategies that can be used and they each have different costs and benefits. In addition, other metrics can affect the choice of a strategy; e.g., higher freshness reduces the time window for recovery. The design space is too large for informal, ad-hoc approaches. In this paper, we describe our QoX optimizer that considers multiple design strategies and finds an ETL design that satisfies multiple objectives. In particular, we define the optimizer search space, cost functions, and search algorithms. Also, we illustrate its use through several experiments and we show that it produces designs that are very near optimal. <s> BIB001 </s> The many faces of data-centric workflow optimization: a survey <s> Execution engine configuration <s> MapReduce has emerged as a viable competitor to database systems in big data analytics. MapReduce programs are being written for a wide variety of application domains including business data processing, text analysis, natural language processing, Web graph and social network analysis, and computational science. However, MapReduce systems lack a feature that has been key to the historical success of database systems, namely, cost-based optimization. A major challenge here is that, to the MapReduce system, a program consists of black-box map and reduce functions written in some programming language like C++, Java, Python, or Ruby. We introduce, to our knowledge, the first Cost-based Optimizer for simple to arbitrarily complex MapReduce programs. We focus on the optimization opportunities presented by the large space of configuration parameters for these programs. We also introduce a Profiler to collect detailed statistical information from unmodified MapReduce programs, and a What-if Engine for fine-grained cost estimation. All components have been prototyped for the popular Hadoop MapReduce system. The effectiveness of each component is demonstrated through a comprehensive evaluation using representative MapReduce programs from various application domains. <s> BIB002 </s> The many faces of data-centric workflow optimization: a survey <s> Execution engine configuration <s> Scheduling data processing workflows (dataflows) on the cloud is a very complex and challenging task. It is essentially an optimization problem, very similar to query optimization, that is characteristically different from traditional problems in two aspects: Its space of alternative schedules is very rich, due to various optimization opportunities that cloud computing offers; its optimization criterion is at least two-dimensional, with monetary cost of using the cloud being at least as important as query completion time. 
In this paper, we study scheduling of dataflows that involve arbitrary data processing operators in the context of three different problems: 1) minimize completion time given a fixed budget, 2) minimize monetary cost given a deadline, and 3) find trade-offs between completion time and monetary cost without any a-priori constraints. We formulate these problems and present an approximate optimization framework to address them that uses resource elasticity in the cloud. To investigate the effectiveness of our approach, we incorporate the devised framework into a prototype system for dataflow evaluation and instantiate it with several greedy, probabilistic, and exhaustive search algorithms. Finally, through several experiments that we have conducted with the prototype elastic optimizer on numerous scientific and synthetic dataflows, we identify several interesting general characteristics of the space of alternative schedules as well as the advantages and disadvantages of the various search algorithms. The overall results are quite promising and indicate the effectiveness of our approach. <s> BIB003 </s> The many faces of data-centric workflow optimization: a survey <s> Execution engine configuration <s> There is a growing trend of performing analysis on large datasets using workflows composed of MapReduce jobs connected through producer-consumer relationships based on data. This trend has spurred the development of a number of interfaces---ranging from program-based to query-based interfaces---for generating MapReduce workflows. Studies have shown that the gap in performance can be quite large between optimized and unoptimized workflows. However, automatic cost-based optimization of MapReduce workflows remains a challenge due to the multitude of interfaces, large size of the execution plan space, and the frequent unavailability of all types of information needed for optimization. ::: ::: We introduce a comprehensive plan space for MapReduce workflows generated by popular workflow generators. We then propose Stubby, a cost-based optimizer that searches selectively through the subspace of the full plan space that can be enumerated correctly and costed based on the information available in any given setting. Stubby enumerates the plan space based on plan-to-plan transformations and an efficient search algorithm. Stubby is designed to be extensible to new interfaces and new types of optimizations, which is a desirable feature given how rapidly MapReduce systems are evolving. Stubby's efficiency and effectiveness have been evaluated using representative workflows from many domains. <s> BIB004 </s> The many faces of data-centric workflow optimization: a survey <s> Execution engine configuration <s> Contemporary continuous dataflow systems use elastic scaling on distributed cloud resources to handle variable data rates and to meet applications' needs while attempting to maximize resource utilization. However, virtualized clouds present an added challenge due to the variability in resource performance -- over time and space -- thereby impacting the application's QoS. Elastic use of cloud resources and their allocation to continuous dataflow tasks need to adapt to such infrastructure dynamism. In this paper, we develop the concept of "dynamic dataflows" as an extension to continuous dataflows that utilizes alternate tasks and allows additional control over the dataflow's cost and QoS. 
We formalize an optimization problem to perform both deployment and runtime cloud resource management for such dataflows, and define an objective function that allows trade-off between the application's value against resource cost. We present two novel heuristics, local and global, based on the variable sized bin packing heuristics to solve this NP-hard problem. We evaluate the heuristics against a static allocation policy for a dataflow with different data rate profiles that is simulated using VM performance traces from a private cloud data center. The results show that the heuristics are effective in intelligently utilizing cloud elasticity to mitigate the effect of both input data rate and cloud resource performance variabilities on QoS. <s> BIB005 </s> The many faces of data-centric workflow optimization: a survey <s> Execution engine configuration <s> We present Cumulon, a system designed to help users rapidly develop and intelligently deploy matrix-based big-data analysis programs in the cloud. Cumulon features a flexible execution model and new operators especially suited for such workloads. We show how to implement Cumulon on top of Hadoop/HDFS while avoiding limitations of MapReduce, and demonstrate Cumulon's performance advantages over existing Hadoop-based systems for statistical data analysis. To support intelligent deployment in the cloud according to time/budget constraints, Cumulon goes beyond database-style optimization to make choices automatically on not only physical operators and their parameters, but also hardware provisioning and configuration settings. We apply a suite of benchmarking, simulation, modeling, and search techniques to support effective cost-based optimization over this rich space of deployment plans. <s> BIB006 </s> The many faces of data-centric workflow optimization: a survey <s> Execution engine configuration <s> MapReduce based data-intensive computing solutions are increasingly deployed as production systems. Unlike Internet companies who invent and adopt the technology from the very beginning, traditional enterprises demand easy-to-use software due to the limited capabilities of administrators. Automatic job optimization software for MapReduce is a promising technique to satisfy such requirements. In this paper, we introduce a toolkit from IBM, called MRTuner, to enable holistic optimization for MapReduce jobs. In particular, we propose a novel Producer-Transporter-Consumer (PTC) model, which characterizes the tradeoffs in the parallel execution among tasks. We also carefully investigate the complicated relations among about twenty parameters, which have significant impact on the job performance. We design an efficient search algorithm to find the optimal execution plan. Finally, we conduct a thorough experimental evaluation on two different types of clusters using the HiBench suite which covers various Hadoop workloads from GB to TB size levels. The results show that the search latency of MRTuner is a few orders of magnitude faster than that of the state-of-the-art cost-based optimizer, and the effectiveness of the optimized execution plan is also significantly improved. <s> BIB007 </s> The many faces of data-centric workflow optimization: a survey <s> Execution engine configuration <s> Declarative large-scale machine learning (ML) aims at flexible specification of ML algorithms and automatic generation of hybrid runtime plans ranging from single node, in-memory computations to distributed computations on MapReduce (MR) or similar frameworks. 
State-of-the-art compilers in this context are very sensitive to memory constraints of the master process and MR cluster configuration. Different memory configurations can lead to significant performance differences. Interestingly, resource negotiation frameworks like YARN allow us to explicitly request preferred resources including memory. This capability enables automatic resource elasticity, which is not just important for performance but also removes the need for a static cluster configuration, which is always a compromise in multi-tenancy environments. In this paper, we introduce a simple and robust approach to automatic resource elasticity for large-scale ML. This includes (1) a resource optimizer to find near-optimal memory configurations for a given ML program, and (2) dynamic plan migration to adapt memory configurations during runtime. These techniques adapt resources according to data, program, and cluster characteristics. Our experiments demonstrate significant improvements up to 21x without unnecessary over-provisioning and low optimization overhead. <s> BIB008 </s> The many faces of data-centric workflow optimization: a survey <s> Execution engine configuration <s> Extract-Transform-Load (ETL) handles large amounts of data and manages workload through dataflows. ETL dataflows are widely regarded as complex and expensive operations in terms of time and system resources. In order to minimize the time and the resources required by ETL dataflows, this paper presents an optimization framework using partitioning and parallelization. The framework first partitions an ETL dataflow into multiple execution trees according to the characteristics of ETL constructs, then within an execution tree pipelined parallelism and shared cache are used to optimize the partitioned dataflow. Furthermore, multi-threading is used in component-based optimization. The experimental results show that the proposed framework can achieve 4.7 times faster than the ordinary ETL dataflows (without using the proposed partitioning and optimization methods), and is comparable to the similar ETL tools. <s> BIB009 </s> The many faces of data-centric workflow optimization: a survey <s> Execution engine configuration <s> Data analytics has recently grown to include increasingly sophisticated techniques, such as machine learning and advanced statistics. Users frequently express these complex analytics tasks as workflows of user-defined functions (UDFs) that specify each algorithmic step. However, given typical hardware configurations and dataset sizes, the core challenge of complex analytics is no longer sheer data volume but rather the computation itself, and the next generation of analytics frameworks must focus on optimizing for this computation bottleneck. While query compilation has gained widespread popularity as a way to tackle the computation bottleneck for traditional SQL workloads, relatively little work addresses UDF-centric workflows in the domain of complex analytics. ::: ::: In this paper, we describe a novel architecture for automatically compiling workflows of UDFs. We also propose several optimizations that consider properties of the data, UDFs, and hardware together in order to generate different code on a case-by-case basis. To evaluate our approach, we implemented these techniques in Tupleware, a new high-performance distributed analytics system, and our benchmarks show performance improvements of up to three orders of magnitude compared to alternative systems. <s> BIB010
|
This type of flow optimization has recently received attention due to the increasing number of parallel data flow platforms, such as Hadoop and Spark. The Engine Configuration mechanism can serve as a complementary component of an optimization technique that applies implementation or engine selection, and in general, it can be combined with the other optimization mechanisms. For example, the rationale of the heuristic presented by Kumbhare et al. BIB005 (based on variable-sized bin packing) is also to decide the best implementation for each task and then dynamically configure the resources, such as the number of CPU cores allocated, for executing the tasks. A common feature of all the solutions in this section is that they deal with parallelism, but from different perspectives depending on the exact execution environment. A specific type of engine configuration, namely deciding the degree of parallelism in MapReduce-like clusters for each task, together with parameters such as the number of slots on each node, appears in BIB006 . The time complexity of this optimization technique is exponential. The search is repeated for each different type of machine (i.e., each different type of execution engine), assuming a context where several heterogeneous clusters are at the user's disposal. Both of these techniques have been proposed for cloud environments and aim to optimize multiple criteria. In general, execution engines come with a large number of configuration parameters, and fine-tuning them is a challenging task. For example, MapReduce systems may have more than one hundred configuration parameters. The proposal in BIB007 aims to provide a principled approach to their configuration. Given the number of MapReduce slots and hardware details, the proposed algorithm initially checks all combinations of four key parameters, such as the number of map and reduce waves, and whether to use compression or not. Then, the values of a dozen other configuration parameters that have a significant impact on performance are derived. The overall goal is to reduce the execution time, taking into account the pipelined nature of MapReduce execution. An alternative configuration technique is employed by Lim et al. BIB004 , which leverages the what-if engine initially proposed by Herodotou et al. BIB002 . This engine is responsible for configuring execution settings, such as memory allocation and the number of map and reduce tasks, by answering questions on real and hypothetical input parameters using a random search algorithm. What-if analysis is also employed by Huang et al. BIB008 for optimally tuning memory configurations. The distinctive feature of this proposal is that it is dynamic, in the sense that it can take decisions at runtime, leading to task migrations. In a more traditional ETL setting, apart from the optimizations described previously, an additional optimization mechanism has been proposed by Simitsis et al. BIB001 in order to define the degree of parallelism. Specifically, due to the large size of data that a workflow has to process, the data are partitioned and processed following the intra-operator parallelism paradigm. Partitioned parallelism is considered profitable whenever the overhead of data partitioning and merging does not outweigh the expected benefits. Sometimes, it might be worth investigating whether splitting an input dataset into partitions could reduce the latency of ETL flow execution on a single server as well; an example study can be found in BIB009 .
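To give a concrete, if simplified, flavor of such parameter search, the following sketch enumerates a handful of MapReduce-style knobs against a toy analytical cost model. It is only an illustration of the general what-if approach: the parameter names (map_waves, reduce_waves, map_output_compression), the cost formula, and all constants are assumptions made here for exposition and do not reproduce the models of the cited proposals.

```python
from itertools import product

# Illustrative what-if style search over a few MapReduce-like knobs.
# The cost model below is a toy analytical formula (assumed), not the one
# used by the cited configuration optimizers.

def estimate_runtime(cfg, input_gb, map_slots, reduce_slots):
    """Rough analytical estimate of job runtime in seconds (assumed model)."""
    map_waves = cfg["map_waves"]
    reduce_waves = cfg["reduce_waves"]
    map_time = (input_gb / (map_slots * map_waves)) * 60 * map_waves
    shuffle_gb = input_gb * (0.5 if cfg["map_output_compression"] else 1.0)
    shuffle_time = shuffle_gb * 8          # network/disk cost per GB (assumed)
    reduce_time = (shuffle_gb / (reduce_slots * reduce_waves)) * 45 * reduce_waves
    cpu_overhead = 20 if cfg["map_output_compression"] else 0  # (de)compression cost
    return map_time + shuffle_time + reduce_time + cpu_overhead

def search_configuration(input_gb, map_slots, reduce_slots):
    """Exhaustively check combinations of a few key parameters (what-if style)."""
    space = {
        "map_waves": [1, 2, 4],
        "reduce_waves": [1, 2],
        "map_output_compression": [True, False],
    }
    best_cfg, best_cost = None, float("inf")
    for values in product(*space.values()):
        cfg = dict(zip(space.keys(), values))
        cost = estimate_runtime(cfg, input_gb, map_slots, reduce_slots)
        if cost < best_cost:
            best_cfg, best_cost = cfg, cost
    return best_cfg, best_cost

if __name__ == "__main__":
    cfg, secs = search_configuration(input_gb=200, map_slots=40, reduce_slots=20)
    print(f"chosen configuration: {cfg}, estimated runtime: {secs:.0f}s")
```

In a two-step scheme such as the one described above, the remaining, less influential parameters would then be derived deterministically from the chosen key-parameter values.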
Another approach to choosing the degree of parallelism appears in BIB003 , where a set of greedy and simulated annealing heuristics is proposed. This proposal considers two objectives, performance and monetary cost, assuming that resources are offered by a public cloud at a certain price. The objective function targets either the minimization of the sum of the task costs constrained by a defined monetary budget, or the minimization of the monetary cost under a constraint on runtime. Additionally, both metrics can be optimized simultaneously using an appropriate objective function, which expresses the speedup obtained when the budget is increased. Another optimization technique, proposed in BIB010 , operates at the processor level; more specifically, it introduces heuristics that drive compiler decisions on whether to execute low-level commands in a pipelined fashion or to employ SIMD (single instruction, multiple data) parallelism. Interestingly, these optimizations are coupled with traditional database-like ones at a higher level, such as pushing selections as early as possible.
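As a rough illustration of the bi-objective reasoning discussed above, the sketch below enumerates candidate degrees of parallelism for a single partitioned task and keeps the fastest option whose monetary cost fits a given budget. The Amdahl-style speedup model, the repartitioning overhead term, and the price are illustrative assumptions and are not taken from the cited heuristics, which explore a much richer search space.

```python
def runtime_with_parallelism(base_runtime_h, dop, serial_fraction=0.1,
                             repartition_overhead_h=0.05):
    """Amdahl-style runtime estimate (hours) for a task split across `dop` workers.
    The overhead term models data partitioning and merging costs (assumed)."""
    parallel_part = base_runtime_h * (1 - serial_fraction) / dop
    serial_part = base_runtime_h * serial_fraction
    overhead = repartition_overhead_h * (dop - 1)
    return serial_part + parallel_part + overhead

def choose_degree_of_parallelism(base_runtime_h, price_per_machine_hour,
                                 budget, max_dop=32):
    """Pick the fastest degree of parallelism whose monetary cost stays in budget."""
    best = None
    for dop in range(1, max_dop + 1):
        t = runtime_with_parallelism(base_runtime_h, dop)
        cost = t * dop * price_per_machine_hour
        if cost <= budget and (best is None or t < best[1]):
            best = (dop, t, cost)
    return best  # None if even dop=1 exceeds the budget

if __name__ == "__main__":
    result = choose_degree_of_parallelism(base_runtime_h=10.0,
                                          price_per_machine_hour=0.5,
                                          budget=12.0)
    print("degree of parallelism, runtime (h), cost ($):", result)
```

The overhead term also captures the profitability condition mentioned for the ETL setting: increasing the degree of parallelism stops paying off once the partitioning and merging overhead outweighs the reduction in processing time.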
|
The many faces of data-centric workflow optimization: a survey <s> Evaluation approaches <s> This paper describes the Pegasus framework that can be used to map complex scientific workflows onto distributed resources. Pegasus enables users to represent the workflows at an abstract level without needing to worry about the particulars of the target execution systems. The paper describes general issues in mapping applications and the functionality of Pegasus. We present the results of improving application performance through workflow restructuring which clusters multiple tasks in a workflow into single entities. A real-life astronomy application is used as the basis for the study. <s> BIB001 </s> The many faces of data-centric workflow optimization: a survey <s> Evaluation approaches <s> Where can I attend an interesting database workshop close to a sunny beach? Who are the strongest experts on service computing based upon their recent publication record and accepted European projects? Can I spend an April weekend in a city served by a low-cost direct flight from Milano offering a Mahler's symphony? We regard the above queries as multi-domain queries, i.e., queries that can be answered by combining knowledge from two or more domains (such as: seaside locations, flights, publications, accepted projects, conference offerings, and so on). This information is available on the Web, but no general-purpose software system can accept the above queries nor compute the answer. At the most, dedicated systems support specific multi-domain compositions (e.g., Google-local locates information such as restaurants and hotels upon geographic maps). ::: ::: This paper presents an overall framework for multi-domain queries on the Web. We address the following problems: (a) expressing multi-domain queries with an abstract formalism, (b) separating the treatment of "search" services within the model, by highlighting their differences from "exact" Web services, (c) explaining how the same query can be mapped to multiple "query plans", i.e., a well-defined scheduling of service invocations, possibly in parallel, which complies with their access limitations and preserves the ranking order in which search services return results; (d) introducing cross-domain joins as first-class operation within plans; (e) evaluating the query plans against several cost metrics so as to choose the most promising one for execution. This framework adapts to a variety of application contexts, ranging from end-user-oriented mash-up scenarios up to complex application integration scenarios. <s> BIB002 </s> The many faces of data-centric workflow optimization: a survey <s> Evaluation approaches <s> Extraction---Transform---Load (ETL) processes comprise complex data workflows, which are responsible for the maintenance of a Data Warehouse. A plethora of ETL tools is currently available constituting a multi-million dollar market. Each ETL tool uses its own technique for the design and implementation of an ETL workflow, making the task of assessing ETL tools extremely difficult. In this paper, we identify common characteristics of ETL workflows in an effort of proposing a unified evaluation method for ETL. We also identify the main points of interest in designing, implementing, and maintaining ETL workflows. Finally, we propose a principled organization of test suites based on the TPC-H schema for the problem of experimenting with ETL workflows. 
<s> BIB003 </s> The many faces of data-centric workflow optimization: a survey <s> Evaluation approaches <s> Grid computing is increasingly considered as a promising next-generation computational platform that supports wide-area parallel and distributed computing. In grid environments, applications are always regarded as workflows. The problem of scheduling workflows in terms of certain quality of service (QoS) requirements is challenging and it significantly influences the performance of grids. By now, there have been some algorithms for grid workflow scheduling, but most of them can only tackle the problems with a single QoS parameter or with small-scale workflows. In this frame, this paper aims at proposing an ant colony optimization (ACO) algorithm to schedule large-scale workflows with various QoS parameters. This algorithm enables users to specify their QoS preferences as well as define the minimum QoS thresholds for a certain application. The objective of this algorithm is to find a solution that meets all QoS constraints and optimizes the user-preferred QoS parameter. Based on the characteristics of workflow scheduling, we design seven new heuristics for the ACO approach and propose an adaptive scheme that allows artificial ants to select heuristics based on pheromone values. Experiments are done in ten workflow applications with at most 120 tasks, and the results demonstrate the effectiveness of the proposed algorithm. <s> BIB004 </s> The many faces of data-centric workflow optimization: a survey <s> Evaluation approaches <s> In this paper, we explore the complexity of mapping filtering streaming applications on large-scale homogeneous and heterogeneous platforms, with a particular emphasis on communication models and their impact. Filtering applications are streaming applications where each node also has a selectivity which either increases or decreases the size of its input data set. This selectivity makes the problem of scheduling these applications more challenging than the more studied problem of scheduling “non-filtering” streaming workflows. We address the complexity of the following two problems: ::: Optimization: Given a filtering workflow, how can one compute the mapping and schedule that minimize the period or latency? A solution to this problem requires generating both the mapping and the associated operation list—the order in which each processor executes its assigned tasks. ::: ::: ::: ::: We address this general problem in two steps. First, we address the simplified model without communication cost. In this case, the evaluation problems are easy, and the optimization problems have polynomial complexity on homogeneous platforms. However, we show that the optimization problems become NP-hard on heterogeneous platforms. Second, we consider platforms with communication costs. Clearly, due to the previous results, the optimization problems on heterogeneous platforms are still NP-hard. Therefore we come back to homogeneous platforms and extend the framework with three significant realistic communication models. Now even evaluation problems become difficult, because the mapping must now be enriched with an operation list that provides the time-steps at which each computation and each communication occurs in the system: determining the best operation list has a combinatorial nature. Not too surprisingly, optimization problems are NP-hard too. 
Altogether, this paper provides a comprehensive overview of the additional difficulties induced by heterogeneity and communication costs. <s> BIB005 </s> The many faces of data-centric workflow optimization: a survey <s> Evaluation approaches <s> Extract-Transform-Load (ETL) processes play an important role in data warehousing. Typically, design work on ETL has focused on performance as the sole metric to make sure that the ETL process finishes within an allocated time window. However, other quality metrics are also important and need to be considered during ETL design. In this paper, we address ETL design for performance plus fault-tolerance and freshness. There are many reasons why an ETL process can fail and a good design needs to guarantee that it can be recovered within the ETL time window. How to make ETL robust to failures is not trivial. There are different strategies that can be used and they each have different costs and benefits. In addition, other metrics can affect the choice of a strategy; e.g., higher freshness reduces the time window for recovery. The design space is too large for informal, ad-hoc approaches. In this paper, we describe our QoX optimizer that considers multiple design strategies and finds an ETL design that satisfies multiple objectives. In particular, we define the optimizer search space, cost functions, and search algorithms. Also, we illustrate its use through several experiments and we show that it produces designs that are very near optimal. <s> BIB006 </s> The many faces of data-centric workflow optimization: a survey <s> Evaluation approaches <s> Scheduling data processing workflows (dataflows) on the cloud is a very complex and challenging task. It is essentially an optimization problem, very similar to query optimization, that is characteristically different from traditional problems in two aspects: Its space of alternative schedules is very rich, due to various optimization opportunities that cloud computing offers; its optimization criterion is at least two-dimensional, with monetary cost of using the cloud being at least as important as query completion time. In this paper, we study scheduling of dataflows that involve arbitrary data processing operators in the context of three different problems: 1) minimize completion time given a fixed budget, 2) minimize monetary cost given a deadline, and 3) find trade-offs between completion time and monetary cost without any a-priori constraints. We formulate these problems and present an approximate optimization framework to address them that uses resource elasticity in the cloud. To investigate the effectiveness of our approach, we incorporate the devised framework into a prototype system for dataflow evaluation and instantiate it with several greedy, probabilistic, and exhaustive search algorithms. Finally, through several experiments that we have conducted with the prototype elastic optimizer on numerous scientific and synthetic dataflows, we identify several interesting general characteristics of the space of alternative schedules as well as the advantages and disadvantages of the various search algorithms. The overall results are quite promising and indicate the effectiveness of our approach. <s> BIB007 </s> The many faces of data-centric workflow optimization: a survey <s> Evaluation approaches <s> In the parallel pipelined filter ordering problem, we are given a set of n filters that run in parallel. 
The filters need to be applied to a stream of elements, to determine which elements pass all filters. Each filter has a rate limit ri on the number of elements it can process per unit time, and a selectivity pi, which is the probability that a random element will pass the filter. The goal is to maximize throughput. This problem appears naturally in a variety of settings, including parallel query optimization in databases and query processing over Web services. ::: We present an O(n3) algorithm for this problem, given tree-structured precedence constraints on the filters. This extends work of Condon et al. [2009] and Kodialam [2001], who presented algorithms for solving the problem without precedence constraints. Our algorithm is combinatorial and produces a sparse solution. Motivated by join operators in database queries, we also give algorithms for versions of the problem in which “filter” selectivities may be greater than or equal to 1. ::: We prove a strong connection between the more classical problem of minimizing total work in sequential filter ordering (A), and the parallel pipelined filter ordering problem (B). More precisely, we prove that A is solvable in polynomial time for a given class of precedence constraints if and only if B is as well. This equivalence allows us to show that B is NP-Hard in the presence of arbitrary precedence constraints (since A is known to be NP-Hard in that setting). <s> BIB008 </s> The many faces of data-centric workflow optimization: a survey <s> Evaluation approaches <s> Many systems for big data analytics employ a data flow abstraction to define parallel data processing tasks. In this setting, custom operations expressed as user-defined functions are very common. We address the problem of performing data flow optimization at this level of abstraction, where the semantics of operators are not known. Traditionally, query optimization is applied to queries with known algebraic semantics. In this work, we find that a handful of properties, rather than a full algebraic specification, suffice to establish reordering conditions for data processing operators. We show that these properties can be accurately estimated for black box operators by statically analyzing the general-purpose code of their user-defined functions. ::: ::: We design and implement an optimizer for parallel data flows that does not assume knowledge of semantics or algebraic properties of operators. Our evaluation confirms that the optimizer can apply common rewritings such as selection reordering, bushy join-order enumeration, and limited forms of aggregation push-down, hence yielding similar rewriting power as modern relational DBMS optimizers. Moreover, it can optimize the operator order of nonrelational data flows, a unique feature among today's systems. <s> BIB009 </s> The many faces of data-centric workflow optimization: a survey <s> Evaluation approaches <s> There is a growing trend of performing analysis on large datasets using workflows composed of MapReduce jobs connected through producer-consumer relationships based on data. This trend has spurred the development of a number of interfaces---ranging from program-based to query-based interfaces---for generating MapReduce workflows. Studies have shown that the gap in performance can be quite large between optimized and unoptimized workflows. 
However, automatic cost-based optimization of MapReduce workflows remains a challenge due to the multitude of interfaces, large size of the execution plan space, and the frequent unavailability of all types of information needed for optimization. ::: ::: We introduce a comprehensive plan space for MapReduce workflows generated by popular workflow generators. We then propose Stubby, a cost-based optimizer that searches selectively through the subspace of the full plan space that can be enumerated correctly and costed based on the information available in any given setting. Stubby enumerates the plan space based on plan-to-plan transformations and an efficient search algorithm. Stubby is designed to be extensible to new interfaces and new types of optimizations, which is a desirable feature given how rapidly MapReduce systems are evolving. Stubby's efficiency and effectiveness have been evaluated using representative workflows from many domains. <s> BIB010 </s> The many faces of data-centric workflow optimization: a survey <s> Evaluation approaches <s> Next generation business intelligence involves data flows that span different execution engines, contain complex functionality like data/text analytics, machine learning operations, and need to be optimized against various objectives. Creating correct analytic data flows in such an environment is a challenging task and is both labor-intensive and time-consuming. Optimizing these flows is currently an ad-hoc process where the result is largely dependent on the abilities and experience of the flow designer. Our previous work addressed analytic flow optimization for multiple objectives over a single execution engine. This paper focuses on optimizing flows for a single objective, namely performance, over multiple execution engines. We consider flows that span a DBMS, a Map-Reduce engine, and an orchestration engine (e.g., an ETL tool or scripting language). This configuration is emerging as a common paradigm used to combine analysis of unstructured data with analysis of structured data (e.g., NoSQL plus SQL). We present flow transformations that model data shipping, function shipping, and operation decomposition and we describe how flow graphs are generated for multiple engines. Performance results for various configurations demonstrate the benefit of optimization. <s> BIB011 </s> The many faces of data-centric workflow optimization: a survey <s> Evaluation approaches <s> Researchers working on the planning, scheduling, and execution of scientific workflows need access to a wide variety of scientific workflows to evaluate the performance of their implementations. This paper provides a characterization of workflows from six diverse scientific applications, including astronomy, bioinformatics, earthquake science, and gravitational-wave physics. The characterization is based on novel workflow profiling tools that provide detailed information about the various computational tasks that are present in the workflow. This information includes I/O, memory and computational characteristics. Although the workflows are diverse, there is evidence that each workflow has a job type that consumes the most amount of runtime. The study also uncovered inefficiency in a workflow component implementation, where the component was re-reading the same data multiple times. 
<s> BIB012 </s> The many faces of data-centric workflow optimization: a survey <s> Evaluation approaches <s> Abstract Recent years have seen an increased interest in large-scale analytical data flows on non-relational data. These data flows are compiled into execution graphs scheduled on large compute clusters. In many novel application areas the predominant building blocks of such data flows are user-defined predicates or functions (UDFs). However, the heavy use of UDFs is not well taken into account for data flow optimization in current systems. Sofa is a novel and extensible optimizer for UDF-heavy data flows. It builds on a concise set of properties for describing the semantics of Map/Reduce-style UDFs and a small set of rewrite rules, which use these properties to find a much larger number of semantically equivalent plan rewrites than possible with traditional techniques. A salient feature of our approach is extensibility: we arrange user-defined operators and their properties into a subsumption hierarchy, which considerably eases integration and optimization of new operators. We evaluate Sofa on a selection of UDF-heavy data flows from different domains and compare its performance to three other algorithms for data flow optimization. Our experiments reveal that Sofa finds efficient plans, outperforming the best plans found by its competitors by a factor of up to six. <s> BIB013 </s> The many faces of data-centric workflow optimization: a survey <s> Evaluation approaches <s> To remain competitive, enterprises are evolving in order to quickly respond to changing market conditions and customer needs. In this new environment, a single centralized data warehouse is no longer sufficient. Next generation business intelligence involves data flows that span multiple, diverse processing engines, that contain complex functionality like data/text analytics, machine learning operations, and that need to be optimized against various objectives. A common example is the use of Hadoop to analyze unstructured text and merging these results with relational database queries over the data warehouse. We refer to these multi-engine analytic data flows as hybrid flows. Currently, it is a cumbersome task to create and run hybrid flows. Custom scripts must be written to dispatch tasks to the individual processing engines and to exchange intermediate results. So, designing correct hybrid flows is a challenging task. Optimizing such flows is even harder. Additionally, when the underlying computing infrastructure changes, existing flows likely need modification and reoptimization. The current, ad-hoc design approach cannot scale as hybrid flows become more commonplace. To address this challenge, we are building a platform to design and manage hybrid flows. It supports the logical design of hybrid flows in which implementation details are not exposed. It generates code for the underlying processing engines and orchestrates their execution. But the key enabling technology in the platform is an optimizer that converts the logical flow to an executable form that is optimized for the underlying infrastructure according to user-specified objectives. In this paper, we describe challenges in designing the optimizer and our solutions. We illustrate the optimizer through a real-world use case. We present a logical design and optimized designs for the use case.
We show how the performance of the use case varies depending on the system configuration and how the optimizer is able to generate different optimized flows for different configurations. <s> BIB014 </s> The many faces of data-centric workflow optimization: a survey <s> Evaluation approaches <s> To remain competitive, enterprises are evolving their business intelligence systems to provide dynamic, near realtime views of business activities. To enable this, they deploy complex workflows of analytic data flows that access multiple storage repositories and execution engines and that span the enterprise and even outside the enterprise. We call these multi-engine flows hybrid flows. Designing and optimizing hybrid flows is a challenging task. Managing a workload of hybrid flows is even more challenging since their execution engines are likely under different administrative domains and there is no single point of control. To address these needs, we present a Hybrid Flow Management System (HFMS). It is an independent software layer over a number of independent execution engines and storage repositories. It simplifies the design of analytic data flows and includes optimization and executor modules to produce optimized executable flows that can run across multiple execution engines. HFMS dispatches flows for execution and monitors their progress. To meet service level objectives for a workload, it may dynamically change a flow's execution plan to avoid processing bottlenecks in the computing infrastructure. We present the architecture of HFMS and describe its components. To demonstrate its potential benefit, we describe performance results for running sample batch workloads with and without HFMS. The ability to monitor multiple execution engines and to dynamically adjust plans enables HFMS to provide better service guarantees and better system utilization. <s> BIB015 </s> The many faces of data-centric workflow optimization: a survey <s> Evaluation approaches <s> We present Cumulon, a system designed to help users rapidly develop and intelligently deploy matrix-based big-data analysis programs in the cloud. Cumulon features a flexible execution model and new operators especially suited for such workloads. We show how to implement Cumulon on top of Hadoop/HDFS while avoiding limitations of MapReduce, and demonstrate Cumulon's performance advantages over existing Hadoop-based systems for statistical data analysis. To support intelligent deployment in the cloud according to time/budget constraints, Cumulon goes beyond database-style optimization to make choices automatically on not only physical operators and their parameters, but also hardware provisioning and configuration settings. We apply a suite of benchmarking, simulation, modeling, and search techniques to support effective cost-based optimization over this rich space of deployment plans. <s> BIB016 </s> The many faces of data-centric workflow optimization: a survey <s> Evaluation approaches <s> BackgroundScientific workflows management systems are increasingly used to specify and manage bioinformatics experiments. Their programming model appeals to bioinformaticians, who can use them to easily specify complex data processing pipelines. Such a model is underpinned by a graph structure, where nodes represent bioinformatics tasks and links represent the dataflow. The complexity of such graph structures is increasing over time, with possible impacts on scientific workflows reuse. 
In this work, we propose effective methods for workflow design, with a focus on the Taverna model. We argue that one of the contributing factors for the difficulties in reuse is the presence of "anti-patterns", a term broadly used in program design, to indicate the use of idiomatic forms that lead to over-complicated design. The main contribution of this work is a method for automatically detecting such anti-patterns, and replacing them with different patterns which result in a reduction in the workflow's overall structural complexity. Rewriting workflows in this way will be beneficial both in terms of user experience (easier design and maintenance), and in terms of operational efficiency (easier to manage, and sometimes to exploit the latent parallelism amongst the tasks).ResultsWe have conducted a thorough study of the workflows structures available in Taverna, with the aim of finding out workflow fragments whose structure could be made simpler without altering the workflow semantics. We provide four contributions. Firstly, we identify a set of anti-patterns that contribute to the structural workflow complexity. Secondly, we design a series of refactoring transformations to replace each anti-pattern by a new semantically-equivalent pattern with less redundancy and simplified structure. Thirdly, we introduce a distilling algorithm that takes in a workflow and produces a distilled semantically-equivalent workflow. Lastly, we provide an implementation of our refactoring approach that we evaluate on both the public Taverna workflows and on a private collection of workflows from the BioVel project.ConclusionWe have designed and implemented an approach to improving workflow structure by way of rewriting preserving workflow semantics. Future work includes considering our refactoring approach during the phase of workflow design and proposing guidelines for designing distilled workflows. <s> BIB017 </s> The many faces of data-centric workflow optimization: a survey <s> Evaluation approaches <s> Data-intensive flows are increasingly encountered in various settings, including business intelligence and scientific scenarios. At the same time, flow technology is evolving. Instead of resorting to monolithic solutions, current approaches tend to employ multiple execution engines, such as Hadoop clusters, traditional DBMSs, and stand-alone tools. We target the problem of allocating flow activities to specific heterogeneous and interdependent execution engines while minimizing the flow execution cost. To date, the state-of-the-art is limited to simple heuristics. Although the problem is intractable, we propose practical anytime solutions that are capable of outperforming those simple heuristics and yielding allocation plans in seconds even when optimizing large flows on ordinary machines. Moreover, we prove the NP-hardness of the problem in the generic case and we propose an exact polynomial solution for a specific form of flows, namely, linear flows. We thoroughly evaluate our solutions in both real-world and flows synthetic, and the results show the superiority of our solutions. Especially in real-world scenarios, we can decrease execution time up to more than 3 times. A set of anytime algorithms for yielding mappings of flow nodes to execution engines.An optimal solution with polynomial complexity for linear flows.Evaluation using both real and synthetic flows in a wide range of settings.Proof of the NP-hardness of the problem. 
<s> BIB018 </s> The many faces of data-centric workflow optimization: a survey <s> Evaluation approaches <s> Data analytics has recently grown to include increasingly sophisticated techniques, such as machine learning and advanced statistics. Users frequently express these complex analytics tasks as workflows of user-defined functions (UDFs) that specify each algorithmic step. However, given typical hardware configurations and dataset sizes, the core challenge of complex analytics is no longer sheer data volume but rather the computation itself, and the next generation of analytics frameworks must focus on optimizing for this computation bottleneck. While query compilation has gained widespread popularity as a way to tackle the computation bottleneck for traditional SQL workloads, relatively little work addresses UDF-centric workflows in the domain of complex analytics. ::: ::: In this paper, we describe a novel architecture for automatically compiling workflows of UDFs. We also propose several optimizations that consider properties of the data, UDFs, and hardware together in order to generate different code on a case-by-case basis. To evaluate our approach, we implemented these techniques in Tupleware, a new high-performance distributed analytics system, and our benchmarks show performance improvements of up to three orders of magnitude compared to alternative systems. <s> BIB019 </s> The many faces of data-centric workflow optimization: a survey <s> Evaluation approaches <s> Recently, we have witnessed workflows from science and other data-intensive applications emerging on Infrastructure-as-a-Service (IaaS) clouds, and many workflow service providers offering workflow-as-a-service (WaaS). The major concern of WaaS providers is to minimize the monetary cost of executing workflows in the IaaS clouds. The selection of virtual machines (instances) types significantly affects the monetary cost and performance of running a workflow. Moreover, IaaS cloud environment is dynamic , with high performance dynamics caused by the interference from concurrent executions and price dynamics like spot prices offered by Amazon EC2. Therefore, we argue that WaaS providers should have the notion of offering probabilistic performance guarantees for individual workflows to explicitly expose the performance and cost dynamics of IaaS clouds to users. We develop a scheduling system called Dyna to minimize the expected monetary cost given the user-specified probabilistic deadline guarantees. Dyna includes an ${A^\star}$ -based instance configuration method for performance dynamics, and a hybrid instance configuration refinement for using spot instances. Experimental results with three scientific workflow applications on Amazon EC2 and a cloud simulator demonstrate (1) the ability of Dyna on satisfying the probabilistic deadline guarantees required by the users; (2) the effectiveness on reducing monetary cost in comparison with the existing approaches. <s> BIB020
|
The purpose of this section is to describe the approaches that the authors of the proposals have followed to evaluate their work. Due to the diversity of the objectives and the lack of a common and comprehensive evaluation approach and benchmark, the proposals are, in general, not comparable to each other; therefore, no performance evaluation results are presented. We can divide the proposals into three categories (see also Fig. 9, which depicts the three main evaluation approaches followed and the aspects discussed for the experimental one). The first category includes the optimization proposals that are theoretical in nature and whose results are not accompanied by experiments. Examples of this category are BIB005 BIB008 . The second category consists of optimizations that have found their way into data flow tools; the only examples in this category are . The third category covers the majority of the proposals, for which an experimental evaluation has been provided. We are mostly interested in three aspects of such experiments, namely the workflow type used in the experiments, the data type used to instantiate the workflows, and the implementation environment of the experiments. In Table 4 , the experimental evaluation approaches are summarized, along with the maximum DAG size (in terms of number of tasks) employed. Specifically, the implementation environment defines the execution environment of a workflow during the evaluation procedure. The environment can be a real-world one, which involves either the customization of an existing system to support the proposed optimization solutions or the design of a prototype system, that is, a new platform, possibly designed from scratch and tailored to support the evaluation. A common alternative is the simulation of a real execution environment. Discussing the pros and cons of each approach is out of our scope, but in general, simulations allow experimentation with a broader range of flow types, whereas real experiments can better reveal the actual benefits of optimizations in practice. The workflows considered are either synthetic or real-world. In the former case, arbitrary DAGs are produced, e.g., based on the guidelines in BIB003 . In the latter case, the flow structure follows real-world cases. For example, the evaluation of BIB004 BIB017 BIB001 BIB007 BIB018 BIB020 is based on real-world scientific workflows, such as the Montage and CyberShake ones described in BIB012 . Other examples of real-world workflows are derived from TPC-H queries (used for some of the evaluation experiments in BIB009 BIB010 BIB013 along with real-world text mining and information extraction examples). In BIB011 BIB014 BIB006 BIB015 , the evaluation of the optimization proposals is based on workflows that represent arbitrary, real-world data transformations and text analytics. The case studies in BIB019 BIB010 include standard analytical algorithms, such as PageRank, k-means, logistic regression, and naive Bayes. The datasets used for workflow execution may affect the evaluation results, since they specify the range of the statistical metadata considered. The processed datasets can be either synthetic or real ones extracted from repositories, such as the Twitter repository with sample data of real tweets. Examples of real datasets used in BIB009 BIB013 include biomedical texts, a set of Wikipedia articles, and datasets from DBpedia. Additionally, Braga et al.
BIB002 have evaluated the proposed optimization techniques using real data extracted from www.conference-service.com, www.bookings.com, and www.accuweather.com. Typically, when employing standard scientific flows, the datasets used are also fixed; however, in BIB018 a wide range of artificially created metadata has been used to cover more cases. As shown in Table 4 , a large portion of the optimization techniques has been evaluated by executing workflows in a simulated environment. The real environments that have been employed include, among others, ETL tools, such as Kettle and Talend, extensions to MapReduce, tailored prototypes, and DBMSs. Finally, for many techniques, only small data flows comprising no more than 15 nodes were used, or the information regarding the size of the flows could not be derived. In the latter case, this might be due to the fact that well-known algorithms have been used (e.g., k-means in BIB019 and matrix multiplication in BIB016 ) without explaining how these algorithms are internally translated to data flows. All experiments with workflows comprising hundreds of tasks used synthetic datasets.
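Since a large share of the surveyed techniques is evaluated on synthetic flows, a simple generator along the following lines is often sufficient to produce random DAGs annotated with per-task cost and selectivity metadata. The layered structure and the parameter ranges below are illustrative assumptions rather than the exact generators used in the cited evaluations.

```python
import random

def generate_synthetic_flow(num_tasks=100, max_width=5, seed=42):
    """Generate a random layered DAG: tasks carry cost and selectivity metadata,
    and edges only point from earlier layers to later ones, so the graph is acyclic."""
    rng = random.Random(seed)
    tasks, edges, layers = [], [], []
    t = 0
    while t < num_tasks:
        width = min(rng.randint(1, max_width), num_tasks - t)
        layer = list(range(t, t + width))
        for task_id in layer:
            tasks.append({
                "id": task_id,
                "cost_per_tuple": rng.uniform(0.1, 5.0),   # assumed range
                "selectivity": rng.uniform(0.1, 1.2),      # >1 models generating tasks
            })
        if layers:  # connect each new task to at least one task of the previous layer
            for task_id in layer:
                src = rng.choice(layers[-1])
                edges.append((src, task_id))
        layers.append(layer)
        t += width
    return tasks, edges

if __name__ == "__main__":
    tasks, edges = generate_synthetic_flow(num_tasks=20)
    print(len(tasks), "tasks,", len(edges), "edges")
```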
|
The many faces of data-centric workflow optimization: a survey <s> Discussion on findings <s> Extraction-transformation-loading (ETL) tools are pieces of software responsible for the extraction of data from several sources, their cleansing, customization, and insertion into a data warehouse. In this paper, we derive into the logical optimization of ETL processes, modeling it as a state-space search problem. We consider each ETL workflow as a state and fabricate the state space through a set of correct state transitions. Moreover, we provide an exhaustive and two heuristic algorithms toward the minimization of the execution cost of an ETL workflow. The heuristic algorithm with greedy characteristics significantly outperforms the other two algorithms for a large set of experimental cases. <s> BIB001 </s> The many faces of data-centric workflow optimization: a survey <s> Discussion on findings <s> We present the NIMO system that automatically learns cost models for predicting the execution time of computational-science applications running on large-scale networked utilities such as computational grids. Accurate cost models are important for selecting efficient plans for executing these applications on the utility. Computational-science applications are often scripts (written, e.g., in languages like Perl or Matlab) connected using a workflow-description language, and therefore, pose different challenges compared to modeling the execution of plans for declarative queries with well-understood semantics. NIMO generates appropriate training samples for these applications to learn fairly-accurate cost models quickly using statistical learning techniques. NIMO's approach is active and noninvasive: it actively deploys and monitors the application under varying conditions, and obtains its training data from passive instrumentation streams that require no changes to the operating system or applications. Our experiments with real scientific applications demonstrate that NIMO significantly reduces the number of training samples and the time to learn fairly-accurate cost models. <s> BIB002 </s> The many faces of data-centric workflow optimization: a survey <s> Discussion on findings <s> Extract-Transform-Load (ETL) processes play an important role in data warehousing. Typically, design work on ETL has focused on performance as the sole metric to make sure that the ETL process finishes within an allocated time window. However, other quality metrics are also important and need to be considered during ETL design. In this paper, we address ETL design for performance plus fault-tolerance and freshness. There are many reasons why an ETL process can fail and a good design needs to guarantee that it can be recovered within the ETL time window. How to make ETL robust to failures is not trivial. There are different strategies that can be used and they each have different costs and benefits. In addition, other metrics can affect the choice of a strategy; e.g., higher freshness reduces the time window for recovery. The design space is too large for informal, ad-hoc approaches. In this paper, we describe our QoX optimizer that considers multiple design strategies and finds an ETL design that satisfies multiple objectives. In particular, we define the optimizer search space, cost functions, and search algorithms. Also, we illustrate its use through several experiments and we show that it produces designs that are very near optimal. 
<s> BIB003 </s> The many faces of data-centric workflow optimization: a survey <s> Discussion on findings <s> Estimation of the execution time is an important part of the workflow scheduling problem. The aim of this paper is to highlight common problems in estimating the workflow execution time and propose a solution that takes into account the complexity and the stochastic aspects of the workflow components as well as their runtime. The solution proposed in this paper addresses the problems at different levels from a task to a workflow, including the error measurement and the theory behind the estimation algorithm. The proposed makespan estimation algorithm can be integrated easily into a wide class of schedulers as a separate module. We use a dual stochastic representation, characteristic/distribution function, in order to combine task estimates into the overall workflow makespan. Additionally, we propose the workflow reductions—operations on a workflow graph that do not decrease the accuracy of the estimates but simplify the graph structure, hence increasing the performance of the algorithm. Another very important feature of our work is that we integrate the described estimation schema into earlier developed scheduling algorithm GAHEFT and experimentally evaluate the performance of the enhanced solution in the real environment using the CLAVIRE platform. <s> BIB004 </s> The many faces of data-centric workflow optimization: a survey <s> Discussion on findings <s> of the ETL products in the market today provide tools for design of ETL workflows, with very little or no support for opti- mization of such workflows. Optimization of ETL workflows pose several new challenges compared to traditional query optimization in database systems. There have been many attempts both in the industry and the research community to support cost-based opti- mization techniques for ETL Workflows, but with limited success. Non-availability of source statistics in ETL is one of the major chal- lenges that precludes the use of a cost based optimization strategy. However, the basic philosophy of ETL workflows of design once and execute repeatedly allows interesting possibilities for determin- ing the statistics of the input. In this paper, we propose a frame- work to determine various sets of statistics to collect for a given workflow, using which the optimizer can estimate the cost of any alternative plan for the workflow. The initial few runs of the work- flow are used to collect the statistics and future runs are optimized based on the learned statistics. Since there can be several alterna- tive sets of statistics that are sufficient, we propose an optimization framework to choose a set of statistics that can be measured with the least overhead. We experimentally demonstrate the effective- ness and efficiency of the proposed algorithms. <s> BIB005 </s> The many faces of data-centric workflow optimization: a survey <s> Discussion on findings <s> Scientific workflows, which capture large computational problems, may be executed on large-scale distributed systems such as Clouds. Determining the amount of resources to be provisioned for the execution of scientific workflows is a key component to achieve cost-efficient resource management and good performance. In this paper, a performance prediction model is presented to estimate execution time of scientific workflows for a different number of resources, taking into account their structure as well as their system-dependent characteristics. 
In the evaluation, three real-world scientific workflows are used to compare the estimated makespan calculated by the model with the actual makespan achieved on different system configurations of Amazon EC2. The results show that the proposed model can predict execution time with an error of less than 20% for over 96.8% of the experiments. <s> BIB006 </s> The many faces of data-centric workflow optimization: a survey <s> Discussion on findings <s> Business intelligence (BI) systems depend on efficient integration of disparate and often heterogeneous data. The integration of data is governed by data-intensive flows and is driven by a set of information requirements. Designing such flows is in general a complex process, which due to the complexity of business environments is hard to be done manually. In this paper, we deal with the challenge of efficient design and maintenance of data-intensive flows and propose an incremental approach, namely CoAl , for semi-automatically consolidating data-intensive flows satisfying a given set of information requirements. CoAl works at the logical level and consolidates data flows from either high-level information requirements or platform-specific programs. As CoAl integrates a new data flow, it opts for maximal reuse of existing flows and applies a customizable cost model tuned for minimizing the overall cost of a unified solution. We demonstrate the efficiency and effectiveness of our approach through an experimental evaluation using our implemented prototype. <s> BIB007 </s> The many faces of data-centric workflow optimization: a survey <s> Discussion on findings <s> Although the modern data flows are executed in parallel and distributed environments, e.g. on a multi-core machine or on the cloud, current cost models, e.g., those considered by state-of-the-art data flow optimization techniques, do not accurately reflect the response time of real data flow execution in these execution environments. This is mainly due to the fact that the impact of parallelism, and more specifically, the impact of concurrent task execution on the running time is not adequately modeled. In this work, we propose a cost modeling solution that aims to accurately reflect the response time of a data flow that is executed in parallel. We focus on the single multi-core machine environment provided by modern business intelligence tools, such as Pentaho Kettle, but our approach can be extended to massively parallel and distributed settings. The distinctive features of our proposal is that we model both time overlaps and the impact of concurrency on task running times in a combined manner; the latter is appropriately quantified and its significance is exemplified. <s> BIB008
|
Data flow optimization is a research area with high potential for further improvements, given the increasing role of data flows in modern data-driven applications. In this survey, we have listed more than thirty research proposals, most of which have been published after 2010. In the previous sections, we mostly focused on the merits and the technical details of each proposal. They can lead to performance improvements and, more importantly, they have the potential to lift the burden of manually fixing all implementation details from the data flow designers, which is a key motivation for automated optimization solutions. In this section, we complement any remarks made before with a list of additional observations, which may also serve as a description of directions for further research:
- In principle, the techniques described previously can serve as building blocks toward more holistic solutions. For instance, task ordering can, in principle, be combined with (i) additional high-level mechanisms, such as task introduction, removal, merge, and decomposition; and (ii) low-level mechanisms, such as engine configuration, thus yielding added benefits. The main issue arising when mechanisms are combined is the increased complexity. An approach to mitigating the complexity is a two-phase approach, as is common in database query optimization. An additional issue is to determine which mechanism should be explored first. For some mechanisms, this is straightforward, e.g., decomposition should precede task ordering, and task removal should be placed afterward. But for mechanisms such as configuration, this is unclear; e.g., whether it is beneficial to first configure low-level details before higher-level ones remains an open issue.
- In general, there is little work on low-complexity, holistic, and multi-objective solutions. Toward this direction, Simitsis et al. BIB003 consider more than one objective and combine mechanisms at both high- and low-level execution plan details; for instance, both task ordering and engine configuration are addressed in the same technique. But clearly more work is needed here. In general, most of the techniques have been developed in isolation, each one typically assuming a specific setting and targeting a subset of optimization aspects. This, together with the lack of a commonly agreed benchmark, makes it difficult to understand how exactly they compare to each other, how the various proposals can be combined in a common framework, and how they interplay.
- There seems to be no common approach to evaluating the optimization proposals. Some proposals have not been adequately tested in terms of scalability, since they have considered only small graphs. In some data flow evaluations, workloads inspired by benchmarks such as TPC-DI/DS have been employed, but as most of the authors report as well, it is doubtful whether these benchmarks can completely capture all dimensions of the problem. There is a growing need for the development of systematic and broadly adopted techniques to evaluate optimization techniques for data flows.
- A significant part of the techniques covered in this survey has not been incorporated into tools, nor exploited commercially. Most of the optimization techniques described here, especially those regarding the high-level execution plan details, have not been implemented in real data flow systems apart from very few exceptions, as explained earlier. Hence, the full potential and practical value of the proposals have not been investigated under actual execution conditions, despite the fact that the evaluation results thus far show improvements by several orders of magnitude over non-optimized plans.
- A plethora of objective functions and cost models have been investigated, which, to a large extent, are compatible with each other, despite the fact that the original proposals have examined them in isolation. However, it is unclear whether any such cost model can capture aspects, such as the execution time of parallel data flows, which are very common nowadays, in a fairly accurate manner. A more sophisticated cost model should take into account sequential, pipelined, and partitioned execution in a unified manner, essentially combining the sum, bottleneck, and critical path cost metrics. Early work on this topic has appeared in BIB008 . Optimizing multiple flows simultaneously is another area requiring attention; an initial effort is described by Jovanovic et al. BIB007 , which builds upon the task ordering solutions of BIB001 .
- There is early work on statistics collection BIB004 BIB005 BIB006 BIB002 , but clearly there is more to be done here, given that without appropriate statistics, cost-based optimization becomes problematic and prone to significant errors.
- On the other hand, a different school of thought advocates that, in contrast to relational databases, automated optimization cannot help flow optimization in practice, due to flow complexity and the increased difficulty in maintaining flow statistics and developing accurate cost models. Based on that, there are a number of commercial flow execution engines (e.g., ETL tools) that, instead of offering a flow optimizer, provide users with tips and best practices. No doubt, this is an interesting point, but we consider this category to be out of the scope of this work.
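To make the last point about cost models more tangible, the following sketch computes the three classic metrics (sum, bottleneck, and critical path) for a task graph annotated with estimated task times. Reporting or weighting them jointly is only a naive stand-in for the unified treatment of sequential, pipelined, and partitioned execution advocated above; the task names and times in the example are made up.

```python
from functools import lru_cache

def flow_cost_metrics(task_times, edges):
    """Compute three classic data flow cost metrics for a DAG.
    task_times: {task_id: estimated_time}; edges: iterable of (src, dst)."""
    children = {t: [] for t in task_times}
    has_parent = {t: False for t in task_times}
    for src, dst in edges:
        children[src].append(dst)
        has_parent[dst] = True

    total_work = sum(task_times.values())   # sum metric (sequential execution)
    bottleneck = max(task_times.values())   # slowest task (pipelined execution)

    @lru_cache(maxsize=None)
    def longest_path_from(task):
        # critical path metric: longest chain of dependent tasks
        if not children[task]:
            return task_times[task]
        return task_times[task] + max(longest_path_from(c) for c in children[task])

    roots = [t for t, parented in has_parent.items() if not parented]
    critical_path = max(longest_path_from(r) for r in roots)
    return {"sum": total_work, "bottleneck": bottleneck, "critical_path": critical_path}

if __name__ == "__main__":
    times = {"extract": 4.0, "clean": 2.0, "join": 5.0, "aggregate": 1.5, "load": 3.0}
    deps = [("extract", "clean"), ("clean", "join"),
            ("clean", "aggregate"), ("join", "load"), ("aggregate", "load")]
    print(flow_cost_metrics(times, deps))
```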
|
The many faces of data-centric workflow optimization: a survey <s> Optimization in massively parallel data flow systems <s> There is a growing need for ad-hoc analysis of extremely large data sets, especially at internet companies where innovation critically depends on being able to analyze terabytes of data collected every day. Parallel database products, e.g., Teradata, offer a solution, but are usually prohibitively expensive at this scale. Besides, many of the people who analyze this data are entrenched procedural programmers, who find the declarative, SQL style to be unnatural. The success of the more procedural map-reduce programming model, and its associated scalable implementations on commodity hardware, is evidence of the above. However, the map-reduce paradigm is too low-level and rigid, and leads to a great deal of custom user code that is hard to maintain, and reuse. We describe a new language called Pig Latin that we have designed to fit in a sweet spot between the declarative style of SQL, and the low-level, procedural style of map-reduce. The accompanying system, Pig, is fully implemented, and compiles Pig Latin into physical plans that are executed over Hadoop, an open-source, map-reduce implementation. We give a few examples of how engineers at Yahoo! are using Pig to dramatically reduce the time required for the development and execution of their data analysis tasks, compared to using Hadoop directly. We also report on a novel debugging environment that comes integrated with Pig, that can lead to even higher productivity gains. Pig is an open-source, Apache-incubator project, and available for general use. <s> BIB001 </s> The many faces of data-centric workflow optimization: a survey <s> Optimization in massively parallel data flow systems <s> Many systems for big data analytics employ a data flow abstraction to define parallel data processing tasks. In this setting, custom operations expressed as user-defined functions are very common. We address the problem of performing data flow optimization at this level of abstraction, where the semantics of operators are not known. Traditionally, query optimization is applied to queries with known algebraic semantics. In this work, we find that a handful of properties, rather than a full algebraic specification, suffice to establish reordering conditions for data processing operators. We show that these properties can be accurately estimated for black box operators by statically analyzing the general-purpose code of their user-defined functions. ::: ::: We design and implement an optimizer for parallel data flows that does not assume knowledge of semantics or algebraic properties of operators. Our evaluation confirms that the optimizer can apply common rewritings such as selection reordering, bushy join-order enumeration, and limited forms of aggregation push-down, hence yielding similar rewriting power as modern relational DBMS optimizers. Moreover, it can optimize the operator order of nonrelational data flows, a unique feature among today's systems. <s> BIB002 </s> The many faces of data-centric workflow optimization: a survey <s> Optimization in massively parallel data flow systems <s> Abstract Recent years have seen an increased interest in large-scale analytical data flows on non-relational data. These data flows are compiled into execution graphs scheduled on large compute clusters. In many novel application areas the predominant building blocks of such data flows are user-defined predicates or functions (U df s). 
However, the heavy use of UDFs is not well taken into account for data flow optimization in current systems. Sofa is a novel and extensible optimizer for UDF-heavy data flows. It builds on a concise set of properties for describing the semantics of Map/Reduce-style UDFs and a small set of rewrite rules, which use these properties to find a much larger number of semantically equivalent plan rewrites than possible with traditional techniques. A salient feature of our approach is extensibility: we arrange user-defined operators and their properties into a subsumption hierarchy, which considerably eases integration and optimization of new operators. We evaluate Sofa on a selection of UDF-heavy data flows from different domains and compare its performance to three other algorithms for data flow optimization. Our experiments reveal that Sofa finds efficient plans, outperforming the best plans found by its competitors by a factor of up to six. <s> BIB003 </s> The many faces of data-centric workflow optimization: a survey <s> Optimization in massively parallel data flow systems <s> SystemML aims at declarative, large-scale machine learning (ML) on top of MapReduce, where high-level ML scripts with R-like syntax are compiled to programs of MR jobs. The declarative specification of ML algorithms enables---in contrast to existing large-scale machine learning libraries---automatic optimization. SystemML's primary focus is on data parallelism but many ML algorithms inherently exhibit opportunities for task parallelism as well. A major challenge is how to efficiently combine both types of parallelism for arbitrary ML scripts and workloads. In this paper, we present a systematic approach for combining task and data parallelism for large-scale machine learning on top of MapReduce. We employ a generic Parallel FOR construct (ParFOR) as known from high performance computing (HPC). Our core contributions are (1) complementary parallelization strategies for exploiting multi-core and cluster parallelism, as well as (2) a novel cost-based optimization framework for automatically creating optimal parallel execution plans. Experiments on a variety of use cases showed that this achieves both efficiency and scalability due to automatic adaptation to ad-hoc workloads and unknown data characteristics. <s> BIB004
|
A specific class of data flow systems consists of massively parallel processing (MPP) engines, such as Spark and Hadoop. These data flow systems can scale to a large number of computing nodes and are specifically tailored to big data management, taking care of parallelism efficiency and fault tolerance issues. They accept their input in a declarative form (e.g., PigLatin BIB001 , Hive, SparkSQL), which is then automatically transformed into an executable DAG. Several optimizations take place during this transformation. We broadly classify these optimizations into two categories. The first category comprises database-like optimizations, such as pushing filtering tasks as early as possible, choosing the join implementation, and using index tables; the first of these corresponds to task ordering and the latter two to implementation selection. This can be regarded as a direct technology transfer from databases to parallel data flows and, to date, these optimizations do not cover arbitrary user-defined transformations. The second category is specific to the parallel execution environment, with a view to minimizing the amount of data read from disk, transmitted over the network, and being processed. For example, Spark groups pipelining tasks into larger units (called stages) to benefit from pipelined parallelism. Also, it leverages cached data and columnar storage, performs compression, and reduces the amount of data transmitted during data shuffling through early partial aggregation, when this is possible. Grouping tasks into pipelining stages is a case of runtime scheduling. Early partial aggregation can be deemed as a task introduction technique. The other forms of optimization (leveraging cached data, columnar storage, and compression) can be deemed as specific forms of implementation selection. Flink is another system employing optimizations, but it has not yet incorporated all the (advanced) optimization proposals in its predecessor projects, as described in BIB002 BIB003 . The proposal in BIB004 is another example that proposes optimizations for a specific operator, namely ParFOR. We do not include these techniques in Tables 1 and 2 because they apply to specific DAG instances and have not matured enough to benefit generic data flows including arbitrary tasks. Finally, in terms of scheduling tools for data-intensive flows, several software artefacts have started emerging, such as Apache Oozie and Cascading. We also do not cover these because they refer to the WEP execution rather than the WEP generation layer.
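For illustration, the following minimal PySpark sketch (assuming a Spark installation; the input path and column names are hypothetical) expresses a declarative flow whose physical plan exhibits the optimizations discussed above, namely filter pushdown towards the scan and partial aggregation before the shuffle.

```python
# Minimal sketch of a declarative Spark data flow; inspecting its plan reveals
# the optimizations discussed above (filter pushdown, early partial aggregation).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("flow-optimization-sketch").getOrCreate()

# Hypothetical Parquet source with 'country', 'customer_id' and 'amount' columns.
orders = spark.read.parquet("/data/orders.parquet")

flow = (orders
        .filter(F.col("country") == "GR")       # pushed towards the scan by the optimizer
        .groupBy("customer_id")
        .agg(F.sum("amount").alias("total")))    # partial aggregation precedes the shuffle

# The extended plan shows the pushed filters at the scan node and a partial
# hash aggregate before the exchange (shuffle) node.
flow.explain(True)
```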
|
The many faces of data-centric workflow optimization: a survey <s> Techniques facilitating data-centric flow optimization <s> We present the NIMO system that automatically learns cost models for predicting the execution time of computational-science applications running on large-scale networked utilities such as computational grids. Accurate cost models are important for selecting efficient plans for executing these applications on the utility. Computational-science applications are often scripts (written, e.g., in languages like Perl or Matlab) connected using a workflow-description language, and therefore, pose different challenges compared to modeling the execution of plans for declarative queries with well-understood semantics. NIMO generates appropriate training samples for these applications to learn fairly-accurate cost models quickly using statistical learning techniques. NIMO's approach is active and noninvasive: it actively deploys and monitors the application under varying conditions, and obtains its training data from passive instrumentation streams that require no changes to the operating system or applications. Our experiments with real scientific applications demonstrate that NIMO significantly reduces the number of training samples and the time to learn fairly-accurate cost models. <s> BIB001 </s> The many faces of data-centric workflow optimization: a survey <s> Techniques facilitating data-centric flow optimization <s> Extract-Transform-Load (ETL) activities are software modules responsible for populating a data warehouse with operational data, which have undergone a series of transformations on their way to the warehouse. The whole process is very complex and of significant importance for the design and maintenance of the data warehouse. A plethora of commercial ETL tools are already available in the market. However, each one of them follows a different approach for the modeling of ETL activities; i.e., of the building blocks of an ETL workflow. As a result, so far there is no standard or unified approach for describing such activities. In this paper, we are working towards the identification of generic properties that characterize ETL activities. In doing so, we follow a black-box approach and provide a taxonomy that characterizes ETL activities in terms of the relationship of their input to their output and provide a normal form that is based on interpreted semantics for the black box activities. Finally, we show how the proposed taxonomy can be used in the construction of larger modules, i.e., ETL archetype patterns, which can be used for the composition and optimization of ETL workflows. <s> BIB002 </s> The many faces of data-centric workflow optimization: a survey <s> Techniques facilitating data-centric flow optimization <s> Extraction---Transform---Load (ETL) processes comprise complex data workflows, which are responsible for the maintenance of a Data Warehouse. A plethora of ETL tools is currently available constituting a multi-million dollar market. Each ETL tool uses its own technique for the design and implementation of an ETL workflow, making the task of assessing ETL tools extremely difficult. In this paper, we identify common characteristics of ETL workflows in an effort of proposing a unified evaluation method for ETL. We also identify the main points of interest in designing, implementing, and maintaining ETL workflows.
Finally, we propose a principled organization of test suites based on the TPC-H schema for the problem of experimenting with ETL workflows. <s> BIB003 </s> The many faces of data-centric workflow optimization: a survey <s> Techniques facilitating data-centric flow optimization <s> Extract-Transform-Load (ETL) processes play an important role in data warehousing. Typically, design work on ETL has focused on performance as the sole metric to make sure that the ETL process finishes within an allocated time window. However, other quality metrics are also important and need to be considered during ETL design. In this paper, we address ETL design for performance plus fault-tolerance and freshness. There are many reasons why an ETL process can fail and a good design needs to guarantee that it can be recovered within the ETL time window. How to make ETL robust to failures is not trivial. There are different strategies that can be used and they each have different costs and benefits. In addition, other metrics can affect the choice of a strategy; e.g., higher freshness reduces the time window for recovery. The design space is too large for informal, ad-hoc approaches. In this paper, we describe our QoX optimizer that considers multiple design strategies and finds an ETL design that satisfies multiple objectives. In particular, we define the optimizer search space, cost functions, and search algorithms. Also, we illustrate its use through several experiments and we show that it produces designs that are very near optimal. <s> BIB004 </s> The many faces of data-centric workflow optimization: a survey <s> Techniques facilitating data-centric flow optimization <s> Due to the growing complexity of scientific workflows, it is important to provide abstraction levels to aid scientists to compose these workflows. By doing this, we isolate scientists from infrastructure issues and let them focus on their domain of expertise when composing the workflow. Although using abstract workflows is a first step, there are many open issues, such as the ones related to semantics. Adding semantics to abstract workflows enables the explicit representation of which activities can be linked to each other, or which activities are similar to each other. Existing approaches address either the representation of abstract workflows or using domain ontologies to add semantics to workflow activities, but not both. In the latter case, these approaches focus only on adding semantics to executable workflows, instead of abstract ones. This makes it difficult to group executable workflows into a common abstract representation in the conceptual level. This article proposes coupling a workflow ontology, named SciFlow, to an abstract workflow representation named Experiment Line and implemented in the GExpLine tool. This is a step towards semantic mechanisms, helping scientists to identify equivalent activities or grouping executable activities into one abstract activity with the same semantics. <s> BIB005 </s> The many faces of data-centric workflow optimization: a survey <s> Techniques facilitating data-centric flow optimization <s> Modern business intelligence systems integrate a variety of data sources using multiple data execution engines. A common example is the use of Hadoop to analyze unstructured text and merging the results with relational database queries over a data warehouse. These analytic data flows are generalizations of ETL flows. We refer to multi-engine data flows as hybrid flows. 
In this paper, we present our benchmark infrastructure for hybrid flows and illustrate its use with an example hybrid flow. We then present a collection of parameters to describe hybrid flows. Such parameters are needed to define and run a hybrid flows benchmark. An inherent difficulty in benchmarking ETL flows is the diversity of operators offered by ETL engines. However, a commonality for all engines is extract and load operations, operations which rely on data and function shipping. We propose that by focusing on these two operations for hybrid flows, it may be feasible to revisit the ETL benchmark effort and thus, enable comparison of flows for modern business intelligence applications. We believe our framework may be a useful step toward an industry standard benchmark for ETL flows. <s> BIB006 </s> The many faces of data-centric workflow optimization: a survey <s> Techniques facilitating data-centric flow optimization <s> Abstract Scientific workflows have emerged as an important tool for combining the computational power with data analysis for all scientific domains in e-science, especially in the life sciences. They help scientists to design and execute complex in silico experiments. However, with rising complexity it becomes increasingly impractical to optimize scientific workflows by trial and error. To address this issue, we propose to insert a new optimization phase into the common scientific workflow life cycle. This paper describes the design and implementation of an automated optimization framework for scientific workflows to implement this phase. Our framework was integrated into Taverna, a life-science oriented workflow management system and offers a versatile programming interface (API), which enables easy integration of arbitrary optimization methods. We have used this API to develop an example plugin for parameter optimization that is based on a Genetic Algorithm. Two use cases taken from the areas of structural bioinformatics and proteomics demonstrate how our framework facilitates setup, execution, and monitoring of workflow parameter optimization in high performance computing e-science environments. <s> BIB007 </s> The many faces of data-centric workflow optimization: a survey <s> Techniques facilitating data-centric flow optimization <s> Abstract Recent years have seen an increased interest in large-scale analytical data flows on non-relational data. These data flows are compiled into execution graphs scheduled on large compute clusters. In many novel application areas the predominant building blocks of such data flows are user-defined predicates or functions (UDFs). However, the heavy use of UDFs is not well taken into account for data flow optimization in current systems. Sofa is a novel and extensible optimizer for UDF-heavy data flows. It builds on a concise set of properties for describing the semantics of Map/Reduce-style UDFs and a small set of rewrite rules, which use these properties to find a much larger number of semantically equivalent plan rewrites than possible with traditional techniques. A salient feature of our approach is extensibility: we arrange user-defined operators and their properties into a subsumption hierarchy, which considerably eases integration and optimization of new operators. We evaluate Sofa on a selection of UDF-heavy data flows from different domains and compare its performance to three other algorithms for data flow optimization.
Our experiments reveal that Sofa finds efficient plans, outperforming the best plans found by its competitors by a factor of up to six. <s> BIB008 </s> The many faces of data-centric workflow optimization: a survey <s> Techniques facilitating data-centric flow optimization <s> Estimation of the execution time is an important part of the workflow scheduling problem. The aim of this paper is to highlight common problems in estimating the workflow execution time and propose a solution that takes into account the complexity and the stochastic aspects of the workflow components as well as their runtime. The solution proposed in this paper addresses the problems at different levels from a task to a workflow, including the error measurement and the theory behind the estimation algorithm. The proposed makespan estimation algorithm can be integrated easily into a wide class of schedulers as a separate module. We use a dual stochastic representation, characteristic/distribution function, in order to combine task estimates into the overall workflow makespan. Additionally, we propose the workflow reductions—operations on a workflow graph that do not decrease the accuracy of the estimates but simplify the graph structure, hence increasing the performance of the algorithm. Another very important feature of our work is that we integrate the described estimation schema into earlier developed scheduling algorithm GAHEFT and experimentally evaluate the performance of the enhanced solution in the real environment using the CLAVIRE platform. <s> BIB009 </s> The many faces of data-centric workflow optimization: a survey <s> Techniques facilitating data-centric flow optimization <s> Most of the ETL products in the market today provide tools for design of ETL workflows, with very little or no support for optimization of such workflows. Optimization of ETL workflows pose several new challenges compared to traditional query optimization in database systems. There have been many attempts both in the industry and the research community to support cost-based optimization techniques for ETL Workflows, but with limited success. Non-availability of source statistics in ETL is one of the major challenges that precludes the use of a cost based optimization strategy. However, the basic philosophy of ETL workflows of design once and execute repeatedly allows interesting possibilities for determining the statistics of the input. In this paper, we propose a framework to determine various sets of statistics to collect for a given workflow, using which the optimizer can estimate the cost of any alternative plan for the workflow. The initial few runs of the workflow are used to collect the statistics and future runs are optimized based on the learned statistics. Since there can be several alternative sets of statistics that are sufficient, we propose an optimization framework to choose a set of statistics that can be measured with the least overhead. We experimentally demonstrate the effectiveness and efficiency of the proposed algorithms. <s> BIB010 </s> The many faces of data-centric workflow optimization: a survey <s> Techniques facilitating data-centric flow optimization <s> Scientific workflows, which capture large computational problems, may be executed on large-scale distributed systems such as Clouds. Determining the amount of resources to be provisioned for the execution of scientific workflows is a key component to achieve cost-efficient resource management and good performance.
In this paper, a performance prediction model is presented to estimate execution time of scientific workflows for a different number of resources, taking into account their structure as well as their system-dependent characteristics. In the evaluation, three real-world scientific workflows are used to compare the estimated makespan calculated by the model with the actual makespan achieved on different system configurations of Amazon EC2. The results show that the proposed model can predict execution time with an error of less than 20% for over 96.8% of the experiments. <s> BIB011
|
Statistical metadata, such as cost per task invocation and selectivity, play a significant role in data flow optimization as discussed previously. References BIB009 BIB010 BIB011 BIB001 deal with statistics collection and with modeling the execution cost of workflows; such issues are essential components in performing sophisticated flow optimization. Vassiliadis et al. BIB002 analyze the properties of tasks, e.g., multiple-input vs single-input ones; such properties, along with dependency constraint information, complement statistics as the basis on top of which optimization solutions can be built. In principle, algebraic approaches to workflow execution and modeling facilitate flow optimization, e.g., in establishing dependency constraints. Examples of such proposals appear in BIB008 . The techniques that we discuss go beyond any type of modeling; however, when an algebraic approach is followed, further operator-specific optimizations become possible, capitalizing on the vast literature of query optimization as already mentioned. Some techniques allow for choosing among multiple implementations of the same task using ontologies, rather than performing cost-based or heuristic optimization BIB005 . In , improving the flow with the help of user interactions is discussed. Additionally, in , different scheduling strategies to account for data shipping between tasks are presented, without however proposing an optimization algorithm that decides which strategy should be employed. Apart from the optimizations described in Sect. 4, the proposal in BIB004 also considers the objective of data freshness. To this end, the proposal optimizes the activation time of ETL data flows, so that the changes in data sources are reflected in the state of a Data Warehouse within a time window. Nevertheless, this type of optimization objective leads to techniques that do not focus on optimizing the flow execution plan per se, which is the main topic of this survey. Benchmarks for evaluating optimization techniques are proposed in BIB003 BIB006 . Finally, in BIB007 , the significant role of correct parameter configuration in large-scale workflow execution is identified and relevant approaches are proposed. Proper tuning of the data flow execution environment is orthogonal and complementary to the optimization of the flow execution plan.
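To make the role of such statistics concrete, the sketch below (in Python, with made-up per-tuple costs and selectivities that are not drawn from any cited work) evaluates the sum-cost of a linear flow of two commutative filtering tasks under both possible orderings; without reasonably accurate estimates of these quantities, a cost-based optimizer cannot tell the two plans apart.

```python
def linear_flow_cost(tasks, input_size):
    """Sum-cost of a linear flow: each task processes the tuples surviving its predecessors."""
    total, size = 0.0, float(input_size)
    for cost_per_tuple, selectivity in tasks:
        total += cost_per_tuple * size
        size *= selectivity
    return total

# Illustrative statistics for two commutative filtering tasks over 1M input tuples.
cheap_selective = (1.0, 0.1)   # cheap task that keeps only 10% of its input
expensive_loose = (5.0, 0.9)   # expensive task that keeps 90% of its input

print(linear_flow_cost([cheap_selective, expensive_loose], 1_000_000))  # 1,500,000 cost units
print(linear_flow_cost([expensive_loose, cheap_selective], 1_000_000))  # 5,900,000 cost units
```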
|
The many faces of data-centric workflow optimization: a survey <s> On scheduling optimizations in data-centric flows <s> Automatic construction of workflows on the Grid is a challenging task. The problems that have to be solved are manifold: How can existing services be integrated into a workflow that is able to accomplish a specific task? How can an optimal workflow be constructed with respect to changing resource characteristics during the optimization process? How to cope with dynamically changing or incomplete knowledge of the goal function of the optimization process? and finally: How to react to service failures during workflow execution? In this paper, we propose a method to optimize a workflow based on a heuristic A* approach that allows to react to dynamics in the environment. Changes in the Grid infrastructure and in the users' requirements can be handled during the optimization process as well as during the execution of the workflow. Our algorithm also allows the workflow to recover from failing resources during the execution phase. Copyright © 2008 John Wiley & Sons, Ltd. <s> BIB001 </s> The many faces of data-centric workflow optimization: a survey <s> On scheduling optimizations in data-centric flows <s> Grid computing is increasingly considered as a promising next-generation computational platform that supports wide-area parallel and distributed computing. In grid environments, applications are always regarded as workflows. The problem of scheduling workflows in terms of certain quality of service (QoS) requirements is challenging and it significantly influences the performance of grids. By now, there have been some algorithms for grid workflow scheduling, but most of them can only tackle the problems with a single QoS parameter or with small-scale workflows. In this frame, this paper aims at proposing an ant colony optimization (ACO) algorithm to schedule large-scale workflows with various QoS parameters. This algorithm enables users to specify their QoS preferences as well as define the minimum QoS thresholds for a certain application. The objective of this algorithm is to find a solution that meets all QoS constraints and optimizes the user-preferred QoS parameter. Based on the characteristics of workflow scheduling, we design seven new heuristics for the ACO approach and propose an adaptive scheme that allows artificial ants to select heuristics based on pheromone values. Experiments are done in ten workflow applications with at most 120 tasks, and the results demonstrate the effectiveness of the proposed algorithm. <s> BIB002 </s> The many faces of data-centric workflow optimization: a survey <s> On scheduling optimizations in data-centric flows <s> Pipelined workflows are a popular programming paradigm for parallel applications. In these workflows, the computation is divided into several stages, and these stages are connected to each other through first-in first-out channels. In order to execute these workflows on a parallel machine, we must first determine the mapping of the stages onto the various processors on the machine. After finding the mapping, we must compute the schedule, i.e., the order in which the various stages execute on their assigned processors. In this paper, we assume that the mapping is given and explore the latter problem of scheduling, particularly for linear workflows. Linear workflows are those in which dependencies between stages can be represented by a linear graph. 
The objective of the scheduling algorithm is either to minimize the period (the inverse of the throughput), or to minimize the latency (response time), or both. We consider two realistic execution models: the one-port model (all operations are serialized) and the multi-port model (bounded communication capacities and communication/computation overlap). In both models, finding a schedule to minimize the latency is easy. However, computing the schedule to minimize the period is NP-hard in the one-port model, but can be done in polynomial time in the multi-port model. We also present an approximation algorithm to minimize the period in the one-port model. Finally, the bi-criteria problem, which consists in finding a schedule respecting a given period and a given latency, is NP-hard in both models. <s> BIB003 </s> The many faces of data-centric workflow optimization: a survey <s> On scheduling optimizations in data-centric flows <s> This paper aims to address the problem of scheduling large workflows onto multiple execution sites with storage constraints. Three heuristics are proposed to first partition the workflow into sub-workflows. Three estimators and two schedulers are then used to schedule sub-workflows to the execution sites. Performance with three real-world workflows shows that this approach is able to satisfy storage constraints and improve the overall runtime by up to 48% over a default whole-workflow scheduling. <s> BIB004 </s> The many faces of data-centric workflow optimization: a survey <s> On scheduling optimizations in data-centric flows <s> Many computation-intensive scientific applications feature complex workflows of distributed computing modules with intricate execution dependencies. Such scientific workflows must be mapped and executed in shared environments to support distributed scientific collaborations. We formulate workflow mapping as an optimization problem for latency minimization, whose difficulty essentially arises from the topological matching nature in the spatial domain, which is further compounded by the resource sharing complicacy in the temporal dimension. We conduct a rigorous analysis of the resource sharing dynamics in workflow executions, which constitutes the base for a workflow mapping algorithm to minimize the end-to-end delay. The correctness of the dynamics analysis is verified in comparison with an approximate solution, a dynamic system simulation program, and a real network deployment, and the performance superiority of the proposed mapping solution is illustrated by extensive comparisons with existing methods using both simulations and experiments. <s> BIB005 </s> The many faces of data-centric workflow optimization: a survey <s> On scheduling optimizations in data-centric flows <s> Recently, utility Grids have emerged as a new model of service provisioning in heterogeneous distributed systems. In this model, users negotiate with service providers on their required Quality of Service and on the corresponding price to reach a Service Level Agreement. One of the most challenging problems in utility Grids is workflow scheduling, i.e., the problem of satisfying the QoS of the users as well as minimizing the cost of workflow execution. In this paper, we propose a new QoS-based workflow scheduling algorithm based on a novel concept called Partial Critical Paths (PCP), that tries to minimize the cost of workflow execution while meeting a user-defined deadline. 
The PCP algorithm has two phases: in the deadline distribution phase it recursively assigns subdeadlines to the tasks on the partial critical paths ending at previously assigned tasks, and in the planning phase it assigns the cheapest service to each task while meeting its subdeadline. The simulation results show that the performance of the PCP algorithm is very promising. <s> BIB006 </s> The many faces of data-centric workflow optimization: a survey <s> On scheduling optimizations in data-centric flows <s> We propose a new heuristic called Resubmission Impact to support fault tolerant execution of scientific workflows in heterogeneous parallel and distributed computing environments. In contrast to related approaches, our method can be effectively used on new or unfamiliar environments, even in the absence of historical executions or failure trace models. On top of this method, we propose a dynamic enactment and rescheduling heuristic able to execute workflows with a high degree of fault tolerance, while taking into account soft deadlines. Simulated experiments of three real-world workflows in the Austrian Grid demonstrate that our method significantly reduces the resource waste compared to conservative task replication and resubmission techniques, while having a comparable makespan and only a slight decrease in the success probability. On the other hand, the dynamic enactment method manages to successfully meet soft deadlines in faulty environments in the absence of historical failure trace information or models. <s> BIB007 </s> The many faces of data-centric workflow optimization: a survey <s> On scheduling optimizations in data-centric flows <s> Extract-transform-load (ETL) workflows model the population of enterprise data warehouses with information gathered from a large variety of heterogeneous data sources. ETL workflows are complex design structures that run under strict performance requirements and their optimization is crucial for satisfying business objectives. In this paper, we deal with the problem of scheduling the execution of ETL activities (a.k.a. transformations, tasks, operations), with the goal of minimizing ETL execution time and allocated memory. We investigate the effects of four scheduling policies on different flow structures and configurations and experimentally show that the use of different scheduling policies may improve ETL performance in terms of memory consumption and execution time. First, we examine a simple, fair scheduling policy. Then, we study the pros and cons of two other policies: the first opts for emptying the largest input queue of the flow and the second for activating the operation (a.k.a. activity) with the maximum tuple consumption rate. Finally, we examine a fourth policy that combines the advantages of the latter two in synergy with flow parallelization. <s> BIB008 </s> The many faces of data-centric workflow optimization: a survey <s> On scheduling optimizations in data-centric flows <s> As system scales and application complexity grow, managing and processing simulation data has become a significant challenge. While recent approaches based on data staging and in-situ/in-transit data processing are promising, dynamic data volumes and distributions, such as those occurring in AMR-based simulations, make the efficient use of these techniques challenging. In this paper we propose cross-layer adaptations that address these challenges and respond at runtime to dynamic data management requirements. 
Specifically we explore (1) adaptations of the spatial resolution at which the data is processed, (2) dynamic placement and scheduling of data processing kernels, and (3) dynamic allocation of in-transit resources. We also exploit coordinated approaches that dynamically combine these adaptations at the different layers. We evaluate the performance of our adaptive cross-layer management approach on the Intrepid IBM-BlueGene/P and Titan Cray-XK7 systems using Chombo-based AMR applications, and demonstrate its effectiveness in improving overall time-to-solution and increasing resource efficiency. <s> BIB009 </s> The many faces of data-centric workflow optimization: a survey <s> On scheduling optimizations in data-centric flows <s> Effective scheduling is a key concern for the execution of performance-driven grid applications such as workflows. In this paper, we first define the workflow scheduling problem and describe the existing heuristic-based and metaheuristic-based workflow scheduling strategies in grids. Then, we propose a dynamic critical-path-based adaptive workflow scheduling algorithm for grids, which determines efficient mapping of workflow tasks to grid resources dynamically by calculating the critical path in the workflow task graph at every step. Using simulation, we compared the performance of the proposed approach with the existing approaches, discussed in this paper for different types and sizes of workflows. The results demonstrate that the heuristic-based scheduling techniques can adapt to the dynamic nature of resource and avoid performance degradation in dynamically changing grid environments. Finally, we outline a hybrid heuristic combining the features of the proposed adaptive scheduling technique with metaheuristics for optimizing execution cost and time as well as meeting the users requirements to efficiently manage the dynamism and heterogeneity of the hybrid cloud environment. Copyright © 2013 John Wiley & Sons, Ltd. <s> BIB010 </s> The many faces of data-centric workflow optimization: a survey <s> On scheduling optimizations in data-centric flows <s> The advent of Cloud computing as a new model of service provisioning in distributed systems encourages researchers to investigate its benefits and drawbacks on executing scientific applications such as workflows. One of the most challenging problems in Clouds is workflow scheduling, i.e., the problem of satisfying the QoS requirements of the user as well as minimizing the cost of workflow execution. We have previously designed and analyzed a two-phase scheduling algorithm for utility Grids, called Partial Critical Paths (PCP), which aims to minimize the cost of workflow execution while meeting a user-defined deadline. However, we believe Clouds are different from utility Grids in three ways: on-demand resource provisioning, homogeneous networks, and the pay-as-you-go pricing model. In this paper, we adapt the PCP algorithm for the Cloud environment and propose two workflow scheduling algorithms: a one-phase algorithm which is called IaaS Cloud Partial Critical Paths (IC-PCP), and a two-phase algorithm which is called IaaS Cloud Partial Critical Paths with Deadline Distribution (IC-PCPD2). Both algorithms have a polynomial time complexity which make them suitable options for scheduling large workflows. The simulation results show that both algorithms have a promising performance, with IC-PCP performing better than IC-PCPD2 in most cases. Highlights:
We propose two workflow scheduling algorithms for IaaS Clouds. The algorithms aim to minimize the workflow execution cost while meeting a deadline. The pricing model of the Clouds is considered which is based on a time interval. The algorithms are compared with a list heuristic through simulation. The experiments show the promising performance of both algorithms. <s> BIB011 </s> The many faces of data-centric workflow optimization: a survey <s> On scheduling optimizations in data-centric flows <s> The ultimate goal of cloud providers by providing resources is increasing their revenues. This goal leads to a selfish behavior that negatively affects the users of a commercial multicloud environment. In this paper, we introduce a pricing model and a truthful mechanism for scheduling single tasks considering two objectives: monetary cost and completion time. With respect to the social cost of the mechanism, i.e., minimizing the completion time and monetary cost, we extend the mechanism for dynamic scheduling of scientific workflows. We theoretically analyze the truthfulness and the efficiency of the mechanism and present extensive experimental results showing significant impact of the selfish behavior of the cloud providers on the efficiency of the whole system. The experiments conducted using real-world and synthetic workflow applications demonstrate that our solutions dominate in most cases the Pareto-optimal solutions estimated by two classical multiobjective evolutionary algorithms. <s> BIB012 </s> The many faces of data-centric workflow optimization: a survey <s> On scheduling optimizations in data-centric flows <s> The elasticity of Cloud infrastructures makes them a suitable platform for execution of deadline-constrained workflow applications, because resources available to the application can be dynamically increased to enable application speedup. Existing research in execution of scientific workflows in Clouds either try to minimize the workflow execution time ignoring deadlines and budgets or focus on the minimization of cost while trying to meet the application deadline. However, they implement limited contingency strategies to correct delays caused by underestimation of tasks execution time or fluctuations in the delivered performance of leased public Cloud resources. To mitigate effects of performance variation of resources on soft deadlines of workflow applications, we propose an algorithm that uses idle time of provisioned resources and budget surplus to replicate tasks. Simulation experiments with four well-known scientific workflows show that the proposed algorithm increases the likelihood of deadlines being met and reduces the total execution time of applications as the budget available for replication increases. <s> BIB013 </s> The many faces of data-centric workflow optimization: a survey <s> On scheduling optimizations in data-centric flows <s> A workflow is a systematic computation or a data-intensive application that has a regular computation and data access patterns. It is a key to design scalable scheduling algorithms in Cloud environments to address these runtime regularities effectively. While existing researches ignore to join the tasks scheduling and the optimization of data management for workflow, little attention has been paid so far to understand the combination between the two. The proposed scheme indicates that the coordination between task computation and data management can improve the scheduling performance.
Our model considers data management to obtain satisfactory makespan on multiple datacenters. At the same time, our adaptive data-dependency analysis can reveal parallelization opportunities. In this paper, we introduce an adaptive data-aware scheduling (ADAS) strategy for workflow applications. It consists of a set-up stage which builds the clusters for the workflow tasks and datasets, and a run-time stage which makes the overlapped execution for the workflows. Through rigorous performance evaluation studies, we demonstrate that our strategy can effectively improve the workflow completion time and utilization of resources in a Cloud environment. <s> BIB014
|
In general, data flow execution engines tend to have built-in scheduling policies, which are not configured on a per-flow basis. In principle, such policies can be extended to take into account the specific characteristics of data flows, where the placement of data and the transmission of data across tasks, represented by the DAG edges, require special attention BIB004 . For example, in BIB008 , a set of scheduling strategies is proposed for improving the performance of Extract-Transform-Load (ETL) workflows running on a single machine through the minimization of memory consumption and execution time. As it is difficult to execute ETL tasks in a pipelined fashion due to the blocking nature of some of them, the authors suggest splitting the workflow into several sub-flows and applying different scheduling policies to them, if necessary. Finally, in BIB009 , the placement of data management tasks is decided according to the memory availability of resources, taking into account the trade-off between co-locating tasks and the increased memory consumption when running multiple tasks on the same physical computational node. A large set of scheduling proposals target specific execution environments. For example, the technique in BIB005 targets shared resource environments. Proposals, such as BIB013 BIB002 BIB010 BIB001 BIB014 , are specific to grid and cloud data-centric flow scheduling. Agrawal et al. BIB003 discuss optimal time schedules given a fixed allocation of tasks to engines, provided that the tasks belong to a linear workflow. Also, a set of optimization algorithms for scheduling flows based on deadline and time constraints is analyzed in BIB011 BIB006 . Another flow scheduling optimization proposal is presented in BIB007 , which relies on soft deadline rescheduling in order to deal with the problem of fault tolerance in flow executions. In BIB013 , an optimization technique is proposed for minimizing the performance fluctuations that might occur due to resource diversity, while also considering deadlines. Additionally, there is a set of scheduling techniques based on multi-objective optimization, e.g., BIB012 .
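As a rough illustration of the critical-path-based heuristics mentioned above, the following Python sketch implements a generic list scheduler for a task DAG on a fixed number of execution slots. It is not the exact algorithm of any cited proposal, and the toy DAG and task costs are assumptions made purely for illustration.

```python
# Generic critical-path-driven list scheduling sketch for a task DAG.
from collections import defaultdict

def upward_rank(dag, cost):
    """Length of the longest (critical) path from each task to any sink."""
    rank = {}
    def rec(t):
        if t not in rank:
            rank[t] = cost[t] + max((rec(s) for s in dag[t]), default=0.0)
        return rank[t]
    for t in cost:
        rec(t)
    return rank

def list_schedule(dag, cost, n_slots):
    """Greedily start the ready task with the highest rank on the earliest-free slot."""
    rank = upward_rank(dag, cost)
    preds = defaultdict(set)
    for t, succs in dag.items():
        for s in succs:
            preds[s].add(t)
    done, finish, slots = set(), {}, [0.0] * n_slots
    while len(done) < len(cost):
        ready = [t for t in cost if t not in finish and preds[t] <= done]
        t = max(ready, key=lambda x: rank[x])
        i = min(range(n_slots), key=lambda j: slots[j])
        start = max([slots[i]] + [finish[p] for p in preds[t]])
        slots[i] = finish[t] = start + cost[t]
        done.add(t)
    return finish  # task -> finish time; the makespan is max(finish.values())

# Toy flow: extract -> {clean, enrich} -> join -> load, on two slots.
dag = {"extract": ["clean", "enrich"], "clean": ["join"], "enrich": ["join"],
       "join": ["load"], "load": []}
cost = {"extract": 4, "clean": 3, "enrich": 6, "join": 2, "load": 1}
print(list_schedule(dag, cost, n_slots=2))
```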
|
The many faces of data-centric workflow optimization: a survey <s> Related work <s> Object-relational database management systems allow knowledgeable users to define new data types as well as new methods (operators) for the types. This flexibility produces an attendant complexity, which must be handled in new ways for an object-relational database management system to be efficient. In this article we study techniques for optimizing queries that contain time-consuming methods. The focus of traditional query optimizers has been on the choice of join methods and orders; selections have been handled by "pushdown" rules. These rules apply selections in an arbitrary order before as many joins as possible, using the assumption that selection takes no time. However, users of object-relational systems can embed complex methods in selections. Thus selections may take significant amounts of time, and the query optimization model must be enhanced. In this article we carefully define a query cost framework that incorporates both selectivity and cost estimates for selections. We develop an algorithm called Predicate Migration, and prove that it produces optimal plans for queries with expensive methods. We then describe our implementation of Predicate Migration in the commercial object-relational database management system Illustra, and discuss practical issues that affect our earlier assumptions. We compare Predicate Migration to a variety of simpler optimization techniques, and demonstrate that Predicate Migration is the best general solution to date. The alternative techniques we present may be useful for constrained workloads. <s> BIB001 </s> The many faces of data-centric workflow optimization: a survey <s> Related work <s> Workflow technologies are emerging as the dominant approach to coordinate groups of distributed services. However with a space filled with competing specifications, standards and frameworks from multiple domains, choosing the right tool for the job is not always a straightforward task. Researchers are often unaware of the range of technology that already exists and focus on implementing yet another proprietary workflow system. As an antidote to this common problem, this paper presents a concise survey of existing workflow technology from the business and scientific domain and makes a number of key suggestions towards the future development of scientific workflow systems. <s> BIB002 </s> The many faces of data-centric workflow optimization: a survey <s> Related work <s> Scientific workflow systems have become a necessary tool for many applications, enabling the composition and execution of complex analysis on distributed resources. Today there are many workflow systems, often with overlapping functionality. A key issue for potential users of workflow systems is the need to be able to compare the capabilities of the various available tools. There can be confusion about system functionality and the tools are often selected without a proper functional analysis. In this paper we extract a taxonomy of features from the way scientists make use of existing workflow systems and we illustrate this feature set by providing some examples taken from existing workflow systems. The taxonomy provides end users with a mechanism by which they can assess the suitability of workflow in general and how they might use these features to make an informed choice about which workflow system would be a good choice for their particular application.
<s> BIB003 </s> The many faces of data-centric workflow optimization: a survey <s> Related work <s> The past decade has witnessed a growing trend in designing and using workflow systems with a focus on supporting the scientific research process in bioinformatics and other areas of life sciences. The aim of these systems is mainly to simplify access, control and orchestration of remote distributed scientific data sets using remote computational resources, such as EBI web services. In this paper we present the state of the art in the field by reviewing six such systems: Discovery Net, Taverna, Triana, Kepler, Yawl and BPEL. We provide a high-level framework for comparing the systems based on their control flow and data flow properties with a view of both informing future research in the area by academic researchers and facilitating the selection of the most appropriate system for a specific application task by practitioners. <s> BIB004 </s> The many faces of data-centric workflow optimization: a survey <s> Related work <s> Nowadays, technologies such as grid and cloud computing infrastructures and service-oriented architectures have become adequately mature and have been adopted by a large number of enterprizes and organizations [2,19,36]. A Web Service (WS) is a software system designed to support interoperable machine-to-machine interaction over a network and is implemented using open standards and protocols. WSs became popular data management entities; some of their benefits are interoperability and reuseability. <s> BIB005 </s> The many faces of data-centric workflow optimization: a survey <s> Related work <s> Grids designed for computationally demanding scientific applications started experimental phases ten years ago and have been continuously delivering computing power to a wide range of applications for more than half of this time. The observation of their emergence and evolution reveals actual constraints and successful approaches to task mapping across administrative boundaries. Beyond differences in distributions, services, protocols, and standards, a common architecture is outlined. Application-agnostic infrastructures built for resource registration, identification, and access control dispatch delegation to grid sites. Efficient task mapping is managed by large, autonomous applications or collaborations that temporarily infiltrate resources for their own benefits. <s> BIB006 </s> The many faces of data-centric workflow optimization: a survey <s> Related work <s> Data-intensive flows are central processes in today's business intelligence (BI) systems, deploying different technologies to deliver data, from a multitude of data sources, in user-preferred and analysis-ready formats. To meet complex requirements of next generation BI systems, we often need an effective combination of the traditionally batched extract-transform-load (ETL) processes that populate a data warehouse (DW) from integrated data sources, and more real-time and operational data flows that integrate source data at runtime. Both academia and industry thus must have a clear understanding of the foundations of data-intensive flows and the challenges of moving towards next generation BI environments. In this paper we present a survey of today's research on data-intensive flows and the related fundamental fields of database theory. The study is based on a proposed set of dimensions describing the important challenges of data-intensive flows in the next generation BI setting.
As a result of this survey, we envision an architecture of a system for managing the lifecycle of data-intensive flows. The results further provide a comprehensive understanding of data-intensive flows, recognizing challenges that still are to be addressed, and how the current solutions can be applied for addressing these challenges. <s> BIB007
|
To the best of our knowledge, there is no prior survey or overview article on data flow optimization; however, there are several surveys on related topics. Related work falls into two categories: (i) surveys on generic DAG scheduling and on narrow-scope scheduling problems, which are also encountered in data flow optimization; and (ii) overviews of workflow systems. DAG scheduling is a persistent topic in computing and has received renewed attention due to the emergence of Grid and cloud infrastructures, which allow for the usage of remote computational resources. For such distributed settings, the proposals tend to refer to the WEP execution layer and to focus on mapping computational tasks, ignoring the data transfer between them, or assume a non-pipelined mode of execution that does not fit well into a data-centric flow setting . A more recent survey of task mapping is presented in BIB006 , which discusses techniques that assign tasks to resources for efficient execution in Grids under demanding requirements and resource allocation constraints, such as the dependencies between the tasks, resource reservation, and so on. In , an overview of the pipelined workflow time scheduling problem is presented, where the problem formulation targets streaming applications. In order to compare the effectiveness of the proposed optimization techniques, they present a taxonomy of workflow optimization techniques taking into account workflow characteristics, such as the structure of the flow (i.e., linear, fork, tree-shaped DAGs), the computation requirements, the size of data to be transferred between tasks, the parallel or sequential task execution mode, and the possibility of executing task replicas. Additionally, the taxonomy takes into consideration a performance model that describes whether the optimization aims at a single or multiple objectives, such as throughput, latency, reliability, and so on. However, in data-centric flows, tasks are activated upon receipt of input data and not as a result of an activation message from a controller, as assumed in . None of the surveys above provides a systematic study of the optimizations at the WEP generation layer. The second class of related work deals with a broader-scope presentation of workflow systems. The survey in BIB003 aims to present a taxonomy of workflow system features and capabilities to allow end users to choose the best option for each application. Specifically, the taxonomy is inspired by the workflow lifecycle and categorizes the workflow systems according to the lifecycle phase they are capable of supporting. However, the optimizations considered suffer from the same limitations as those in . Similarly, in BIB002 , an evaluation of the current workflow technology is also described, considering both scientific and business workflow frameworks. The control and data flow mechanisms and capabilities of workflow systems both for e-science, e.g., Taverna and Triana, and business processes, e.g., YAWL and BPEL-based engines, are discussed in BIB004 . Another study discusses how leading commercial tools in the data analysis market handle SQL statements, as a means to perform data management tasks within workflows. Liu et al. focus on scientific workflows, which are an essential part of data flows, but do not delve into the details of optimization. Finally, Jovanovic et al. BIB007 present a survey that aims to highlight the challenges of modern data flows through different data flow scenarios.
Additionally, related data flow optimization techniques are summarized, but not surveyed, in order to underline the importance of low data latency in Business Intelligence (BI) processes, while an architecture of next generation BI systems that manage the complexity of modern data flows in such systems is proposed. A survey on modeling and processing ETL workflows focuses on the detailed description of the conceptual and logical modeling of ETLs. Conceptual modeling refers to the initial design of ETL processes by using UML diagrams, while logical modeling refers to the design of ETL processes taking into account the required constraints. This survey discusses the generic problems in ETL data flows, including optimization issues in minimizing the execution time of an ETL workflow and the resumption in case of failures during the processing of large amounts of data. Data flow optimization also bears similarities to query optimization over Web Services (WSs) BIB005 , especially when the valid orderings of the calls to the WSs are subject to dependency constraints. This survey includes the WS-related techniques that can also be applied to data flows. Part of the optimizations covered in this survey can be deemed as generalizations of the corresponding techniques in database queries. An example is the correspondence between pushing selections down in the query plan and moving filtering tasks as close to the data sources as possible . Comprehensive surveys on database query optimization are in , whereas lists of semantic equivalence rules between expressions of relational operators that provide the basis for query optimization can be found in classical database textbooks (e.g., ). However, as discussed in the introduction, there are essential differences between database queries and data flows, which cannot be described as expressions over a limited set of elementary operations. At a higher level, data flow optimization covers more mechanisms (e.g., task decomposition and engine selection) and a broader setting with regard to the criteria considered and the metadata required. Nevertheless, it is arguable that data flow task ordering bears similarities to the optimization of database queries containing user-defined functions (UDFs) (or expensive predicates), as reported in BIB001 . This similarity is based on the intrinsic correspondence between UDFs and data flow tasks, but there are two main differences. First, the dependency constraints considered in BIB001 refer to pairs of a join and a UDF, rather than to pairs of UDFs. As such, when joins are removed and only UDFs are considered, the techniques described in these proposals are reduced to unconstrained filter ordering. Second, the straightforward extensions to the proposals BIB001 are already covered and improved by solutions targeting data flow task ordering explicitly, as discussed in Sect. 4.1.
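To make the reduction to unconstrained filter ordering concrete, the following small Python sketch (with invented task statistics) applies the classical rank-based rule that orders commutative, independent filters by (selectivity - 1)/cost in ascending order; this is the special case to which the expensive-predicate techniques of BIB001 boil down once joins are removed, and it is shown here only as an illustration, not as the algorithm of any particular surveyed system.

```python
# Rank-based ordering of commutative, independent filtering tasks (UDF-like filters).
# The per-tuple costs and selectivities below are invented for illustration.
filters = [
    {"name": "udf_geo_lookup", "cost": 5.0, "selectivity": 0.9},
    {"name": "udf_is_fraud",   "cost": 1.0, "selectivity": 0.1},
    {"name": "udf_parse_log",  "cost": 2.0, "selectivity": 0.5},
]

# rank = (selectivity - 1) / cost; ordering by ascending rank minimizes the expected cost.
ordered = sorted(filters, key=lambda f: (f["selectivity"] - 1.0) / f["cost"])
print([f["name"] for f in ordered])  # ['udf_is_fraud', 'udf_parse_log', 'udf_geo_lookup']
```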
|
A survey on fingerprint minutiae-based local matching for verification and identification <s> Introduction <s> Fingerprint verification is one of the most reliable personal identification methods. However, manual fingerprint verification is incapable of meeting today's increasing performance requirements. An automatic fingerprint identification system (AFIS) is needed. This paper describes the design and implementation of an online fingerprint verification system which operates in two stages: minutia extraction and minutia matching. An improved version of the minutia extraction algorithm proposed by Ratha et al. (1995), which is much faster and more reliable, is implemented for extracting features from an input fingerprint image captured with an online inkless scanner. For minutia matching, an alignment-based elastic matching algorithm has been developed. This algorithm is capable of finding the correspondences between minutiae in the input image and the stored template without resorting to exhaustive search and has the ability of adaptively compensating for the nonlinear deformations and inexact pose transformations between fingerprints. The system has been tested on two sets of fingerprint images captured with inkless scanners. The verification accuracy is found to be acceptable. Typically, a complete fingerprint verification procedure takes, on an average, about eight seconds on a SPARC 20 workstation. These experimental results show that our system meets the response time requirements of online verification with high accuracy. <s> BIB001 </s> A survey on fingerprint minutiae-based local matching for verification and identification <s> Introduction <s> Proposes a fingerprint minutia matching technique, which matches the fingerprint minutiae by using both the local and global structures of minutiae. The local structure of a minutia describes a rotation and translation invariant feature of the minutia in its neighborhood. It is used to find the correspondence of two minutiae sets and increase the reliability of the global matching. The global structure of minutiae reliably determines the uniqueness of fingerprint. Therefore, the local and global structures of minutiae together provide a solid basis for reliable and robust minutiae matching. The proposed minutiae matching scheme is suitable for an online processing due to its high processing speed. Experimental results show the performance of the proposed technique. <s> BIB002 </s> A survey on fingerprint minutiae-based local matching for verification and identification <s> Introduction <s> Fingerprint identification is based on two basic premises: (1) persistence and (2) individuality. We address the problem of fingerprint individuality by quantifying the amount of information available in minutiae features to establish a correspondence between two fingerprint images. We derive an expression which estimates the probability of a false correspondence between minutiae-based representations from two arbitrary fingerprints belonging to different fingers. 
Our results show that (1) contrary to the popular belief, fingerprint matching is not infallible and leads to some false associations, (2) while there is an overwhelming amount of discriminatory information present in the fingerprints, the strength of the evidence degrades drastically with noise in the sensed fingerprint images, (3) the performance of the state-of-the-art automatic fingerprint matchers is not even close to the theoretical limit, and (4) because automatic fingerprint verification systems based on minutia use only a part of the discriminatory information present in the fingerprints, it may be desirable to explore additional complementary representations of fingerprints for automatic matching. <s> BIB003 </s> A survey on fingerprint minutiae-based local matching for verification and identification <s> Introduction <s> A major new professional reference work on fingerprint security systems and technology from leading international researchers in the field. Handbook provides authoritative and comprehensive coverage of all major topics, concepts, and methods for fingerprint security systems. This unique reference work is an absolutely essential resource for all biometric security professionals, researchers, and systems administrators. <s> BIB004 </s> A survey on fingerprint minutiae-based local matching for verification and identification <s> Introduction <s> Fingerprints have been an invaluable tool for law enforcement and forensics for over a century, motivating research into automated fingerprint-based identification in the early 1960s. More recently, fingerprints have found an application in biometric systems. Biometrics is the automatic identification of an individual based on physiological or behavioural characteristics. Due to its security-related applications and the current world political climate, biometrics is presently the subject of intense research by private and academic institutions. Fingerprints are emerging as the most common and trusted biometric for personal identification. The main objective of this paper is to review the extensive research that has been done on automated fingerprint matching over the last four decades. In particular, the focus is on minutiae-based algorithms. Minutiae features contain most of a fingerprint’s individuality, and are consequently the most important fingerprint feature for verification systems. Minutiae extraction, matching algorithms, and verification performance are discussed in detail, with open problems and future directions identified. <s> BIB005 </s> A survey on fingerprint minutiae-based local matching for verification and identification <s> Introduction <s> Fingerprint matching is an important problem in fingerprint identification. A set of minutiae is usually used to represent a fingerprint. Most existing fingerprint identification systems match two fingerprints using minutiae-based method. Typically, they choose a reference minutia from the template fingerprint and the query fingerprint, respectively. When matching the two sets of minutiae, the template and the query, firstly reference minutiae pair is aligned coordinately and directionally, and secondly the matching score of the rest minutiae is evaluated. This method guarantees satisfactory alignments of regions adjacent to the reference minutiae. However, the alignments of regions far away from the reference minutiae are usually not so satisfactory. 
In this paper, we propose a minutia matching method based on global alignment of multiple pairs of reference minutiae. These reference minutiae are commonly distributed in various fingerprint regions. When matching, these pairs of reference minutiae are to be globally aligned, and those region pairs far away from the original reference minutiae will be aligned more satisfactorily. Experiment shows that this method leads to improvement in system identification performance. <s> BIB006 </s> A survey on fingerprint minutiae-based local matching for verification and identification <s> Introduction <s> This paper presents a front-end filtering algorithm for fingerprint identification, which uses orientation field and dominant ridge distance as retrieval features. We propose a new distance measure that better quantifies the similarity evaluation between two orientation fields than the conventional Euclidean and Manhattan distance measures. Furthermore, fingerprints in the data base are clustered to facilitate a fast retrieval process that avoids exhaustive comparisons of an input fingerprint with all fingerprints in the data base. This makes the proposed approach applicable to large databases. Experimental results on the National Institute of Standards and Technology data base-4 show consistent better retrieval performance of the proposed approach compared to other continuous and exclusive fingerprint classification methods as well as minutia-based indexing schemes <s> BIB007 </s> A survey on fingerprint minutiae-based local matching for verification and identification <s> Introduction <s> Fingerprint matching has been successfully used by law enforcement for more than a century. The technology is now finding many other applications such as identity management and access control. The authors describe an automated fingerprint recognition system and identify key challenges and research opportunities in the field. <s> BIB008 </s> A survey on fingerprint minutiae-based local matching for verification and identification <s> Introduction <s> Fingerprint systems have received a great deal of research and attracted many researchers’ effort since they provide a powerful tool for access control and security and for practical applications. A literature review of the techniques used to extract the features of fingerprint as well as recognition techniques is given in this paper. Some of the reviewed research articles have used traditional methods such as recognition techniques, whereas the other articles have used neural networks methods. In addition, fingerprint techniques of enhancement are introduced. <s> BIB009
|
Automatic fingerprint recognition has been one of the best known and most widely used biometric authentication systems over the last decades. It has been employed for personal verification and identification with great success . A vast number of applications incorporate fingerprint recognition as a core component, such as forensics, building access control, ATM authentication or secure payment BIB004 . Other human characteristics can also be used as traits of a biometric system, such as the person's face, the retina or iris , the voice, etc. No single trait stands out as the best one. However, on average, fingerprints offer good capabilities in all the properties analyzed by experts and excellent results in distinctiveness BIB003 , permanence and overall performance BIB004 . Although the recognition is not as accurate as with some other traits, it provides a good balance between accuracy, speed, resource requirements and robustness. Regardless of the type of task, either verification BIB001 (one-to-one comparison) or identification (search for an input fingerprint in a database) BIB007 , it is necessary to perform a sequence of operations to build a template database and later use the system. Assuming that a database exists and that proper enrollments have already been taken, the order of the operations for both tasks is: capture of the fingerprint, a feature extraction stage, matching, and a pre-selection or filtering step (which is associated with identification tasks only). The capture of the fingerprint obtains an image that is not usually stored as such in the database. Instead, a feature extraction process is applied to obtain up to three levels of features BIB009 : level 1 features provide, at the global level, information on singular points and ridge line flow or orientation; level 2 features, at a local level, refer to minutiae details, which usually correspond to bifurcations and ridge endings; and level 3 features, at the very fine level, include details inside the ridges such as width, shape, curvature, dots, etc. The latter are only observable in high-resolution images. Once a set of features is extracted from the fingerprint image, the final goal is to find (or confirm) the identity of a person whose fingerprint has been previously enrolled into the system. The matching mechanism is responsible for providing a similarity score between two fingerprints. Most matching efforts rely on minutiae details, although there are other types of matching methods based on image correlation, other types of features and even level 3 features. Minutiae matching consists of finding the alignment between two templates that results in the maximum number of minutiae pairings. Furthermore, minutiae matching can be classified as local or global BIB002 , aligned or not BIB006 , etc.; all the categories will be detailed in this paper. Many fingerprint matching algorithms have been proposed in the literature, and the operations with features they use are sometimes similar or even repeated. Despite the existence of some reviews on the topic, such as BIB005 BIB004 BIB008 , they are not explicitly focused on matching, and the characteristics of the methods are not completely studied or categorized. This issue may lead to a lack of unification and even to the proposal of very similar matching methods in the future. Moreover, there have been few attempts to compare them empirically.
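As an illustration of the minutiae matching principle just described (finding the alignment that maximizes the number of minutiae pairings), the following is a simplified, brute-force sketch. It assumes each minutia is an (x, y, angle) triple, ignores minutia type and quality, and the tolerance values are arbitrary placeholders rather than values recommended by any particular method.

```python
import math

def transform(minutiae, dx, dy, dtheta):
    """Rotate each minutia by dtheta around the origin, then translate by (dx, dy)."""
    cos_t, sin_t = math.cos(dtheta), math.sin(dtheta)
    out = []
    for x, y, theta in minutiae:
        xr = cos_t * x - sin_t * y + dx
        yr = sin_t * x + cos_t * y + dy
        out.append((xr, yr, (theta + dtheta) % (2 * math.pi)))
    return out

def pair_count(template, query, dist_tol=15.0, ang_tol=math.radians(20)):
    """Greedy one-to-one pairing of two already-aligned minutiae sets."""
    used, pairs = set(), 0
    for qx, qy, qt in query:
        best, best_d = None, None
        for i, (tx, ty, tt) in enumerate(template):
            if i in used:
                continue
            d = math.hypot(qx - tx, qy - ty)
            a = abs((qt - tt + math.pi) % (2 * math.pi) - math.pi)  # wrapped angle difference
            if d <= dist_tol and a <= ang_tol and (best_d is None or d < best_d):
                best, best_d = i, d
        if best is not None:
            used.add(best)
            pairs += 1
    return pairs

def match_score(template, query):
    """Try every (template, query) minutia pair as a reference alignment and keep
    the alignment yielding the most pairings (brute-force search over alignments)."""
    best = 0
    for tx, ty, tt in template:
        for qx, qy, qt in query:
            dtheta = tt - qt
            cos_t, sin_t = math.cos(dtheta), math.sin(dtheta)
            # Translation that maps the rotated query minutia onto the template minutia.
            dx = tx - (cos_t * qx - sin_t * qy)
            dy = ty - (sin_t * qx + cos_t * qy)
            best = max(best, pair_count(template, transform(query, dx, dy, dtheta)))
    # Normalize so that the score lies in [0, 1].
    return best / max(1, len(template), len(query))
```

Real matchers avoid this quadratic search over candidate reference pairs, for instance by pre-aligning with singular points or by exploiting local minutiae structures, as discussed later in this survey.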
In this sense, the motivation of this paper can be divided into three main objectives:
• To gather and briefly describe all the matching methods proposed in the specialized literature.
• To offer a complete taxonomy based on the main processes and properties observed in the matching methods. This allows us to understand the reasons for choosing the most suitable matching algorithm depending on the circumstances.
• To conduct an empirical study analyzing the most important local minutiae-based matching algorithms in terms of accuracy and speed (throughput) when they are applied to both verification and identification tasks.
The rest of this paper is organized as follows. Section 2 provides the necessary background on fingerprint minutiae matching. In Section 3, we introduce the main properties and the taxonomy for the matching methods. Next, Section 4 overviews the current trends in fingerprint matching. In Section 5, experiments on several data sets compare some of the most important local minutiae-based matching methods. Finally, Section 6 concludes the paper, including some remarks on theory and practice as well as future research directions. Additional material for the paper can be found at http://sci2s.ugr.es/MatchingReview/.
|
A survey on fingerprint minutiae-based local matching for verification and identification <s> References <s> With the current rapid growth in multimedia technology, there is an imminent need for efficient techniques to search and query large image databases. Because of their unique and peculiar needs, image databases cannot be treated in a similar fashion to other types of digital libraries. The contextual dependencies present in images, and the complex nature of two-dimensional image data make the representation issues more difficult for image databases. An invariant representation of an image is still an open research issue. For these reasons, it is difficult to find a universal content-based retrieval technique. Current approaches based on shape, texture, and color for indexing image databases have met with limited success. Further, these techniques have not been adequately tested in the presence of noise and distortions. A given application domain offers stronger constraints for improving the retrieval performance. Fingerprint databases are characterized by their large size as well as noisy and distorted query images. Distortions are very common in fingerprint images due to elasticity of the skin. In this paper, a method of indexing large fingerprint image databases is presented. The approach integrates a number of domain-specific high-level features such as pattern class and ridge density at higher levels of the search. At the lowest level, it incorporates elastic structural feature-based matching for indexing the database. With a multilevel indexing approach, we have been able to reduce the search space. The search engine has also been implemented on Splash 2-a field programmable gate array (FPGA)-based array processor to obtain near-ASIC level speed of matching. Our approach has been tested on a locally collected test data and on NIST-9, a large fingerprint database available in the public domain. <s> BIB001 </s> A survey on fingerprint minutiae-based local matching for verification and identification <s> References <s> Fingerprint verification is one of the most reliable personal identification methods. However, manual fingerprint verification is incapable of meeting today's increasing performance requirements. An automatic fingerprint identification system (AFIS) is needed. This paper describes the design and implementation of an online fingerprint verification system which operates in two stages: minutia extraction and minutia matching. An improved version of the minutia extraction algorithm proposed by Ratha et al. (1995), which is much faster and more reliable, is implemented for extracting features from an input fingerprint image captured with an online inkless scanner. For minutia matching, an alignment-based elastic matching algorithm has been developed. This algorithm is capable of finding the correspondences between minutiae in the input image and the stored template without resorting to exhaustive search and has the ability of adaptively compensating for the nonlinear deformations and inexact pose transformations between fingerprints. The system has been tested on two sets of fingerprint images captured with inkless scanners. The verification accuracy is found to be acceptable. Typically, a complete fingerprint verification procedure takes, on an average, about eight seconds on a SPARC 20 workstation. These experimental results show that our system meets the response time requirements of online verification with high accuracy. 
<s> BIB002 </s> A survey on fingerprint minutiae-based local matching for verification and identification <s> References <s> Fingerprint matching is one of the most important problems in AFIS. In general, we use minutiae such as ridge endings and ridge bifurcation to represent a fingerprint and do fingerprint matching through minutiae matching. We propose a minutiae matching algorithm which modified Jain et al.'s algorithm (1997). Our algorithm can better distinguish two images from different fingers and is more robust to nonlinear deformation. Experiments done on a set of fingerprint images captured with an inkless scanner shows that our algorithm is fast and has high accuracy. <s> BIB003 </s> A survey on fingerprint minutiae-based local matching for verification and identification <s> References <s> Abstract In this paper, a fuzzy bipartite weighted graph model is proposed to solve fingerprint verification problem. A fingerprint image is preprocessed first to form clusters of feature points, which are called feature point clusters. Twenty-four attributes are extracted for each feature point cluster. The attributes are characterized by fuzzy values. Attributes of an input image to be verified are considered as the set of left nodes in a fuzzy bipartite weighted graph, and the attributes of claimed template fingerprint image are considered as the set of right nodes in the graph. The fingerprint verification problem is thus converted into a fuzzy bipartite weighted graph matching problem. A matching algorithm is proposed for the fuzzy bipartite weighted graph model to find an optimal matching with a goodness score. Experimental results reveal the feasibility of the proposed approach in fingerprint verification. <s> BIB004 </s> A survey on fingerprint minutiae-based local matching for verification and identification <s> References <s> This paper addresses the improvement on the matching accuracy and speed of generalized Hough transform for the biometric identification applications. The difficulties encountered in generalized Hough transform are investigated and a new hierarchical Hough transform algorithm is proposed, which is faster and more accurate compared to conventional generalized Hough transform. <s> BIB005 </s> A survey on fingerprint minutiae-based local matching for verification and identification <s> References <s> An important step in automatic fingerprint identification system (AFIS) is fingerprint matching. The task of fingerprint matching is to verify whether two fingerprints are coming from same finger. In this paper we detail and discuss the fingerprint matching algorithm. A minutia matching algorithm is proposed which modified the algorithm presented by Jain et al. In this algorithm, in order to reduce the effect of noise and false minutiae, block orientation and ridge information are introduced into the minutiae-based matching algorithm in a simple but reliable way. Ridge information which is some sampled points in the ridge are used to align two fingerprints, in order to avoid misalign two fingerprints, we use block orientation to correct the ridge alignment. Experiments on the database FVC2002 show the performance. <s> BIB006 </s> A survey on fingerprint minutiae-based local matching for verification and identification <s> References <s> Fast and robust fingerprint matching is a challenging task today in fingerprint-based biometric systems. A fingerprint matching algorithm compares two given fingerprints and returns either a degree of similarity or a binary decision. 
Minutiae-based fingerprint matching is the most well-known and widely used method. This paper reveals a new technique of fingerprint matching, using an efficient data structure, combining the minutiae representation with the individual usefulness of each minutia, to make the matching more powerful. Experimental results exhibit the strength of this method. <s> BIB007 </s> A survey on fingerprint minutiae-based local matching for verification and identification <s> References <s> In this paper, it is provided statistical evidence that supports fingerprint minutiae matching algorithms which use line segments formed by pairs of minutiae as candidates for pivots. This pivots are used to superimpose the analyzed fingerprint templates. Also in this work an algorithm to improve the matcher performance for uncontrolled acquisition systems is proposed. This algorithm employs a technique to sort the minutiae list in templates increasing the chances that corresponding line segments in two templates are tested in the early algorithm iterations. The analysis and the proposed algorithm are validated with data from FVC2000 and FVC2002 databases. <s> BIB008 </s> A survey on fingerprint minutiae-based local matching for verification and identification <s> References <s> We propose a fast fingerprint matching methodology based on localizing the matching regions in captured fingerprint images. The determination of the locations of such regions relies on the accurate detection of reference points in the images together with a priori knowledge of the complete fingerprint obtained during the fingerprint enrollment procedure. The relationship between authentication reliability and region size is studied experimentally. Results show that sufficiently accurate fingerprint matching can be achieved using very small bitmaps, making it possible to implement very fast fingerprint authentication systems using relatively slow embedding processors. <s> BIB009 </s> A survey on fingerprint minutiae-based local matching for verification and identification <s> References <s> Fingerprint matching is an important problem in fingerprint identification. A set of minutiae is usually used to represent a fingerprint. Most existing fingerprint identification systems match two fingerprints using minutiae-based method. Typically, they choose a reference minutia from the template fingerprint and the query fingerprint, respectively. When matching the two sets of minutiae, the template and the query, firstly reference minutiae pair is aligned coordinately and directionally, and secondly the matching score of the rest minutiae is evaluated. This method guarantees satisfactory alignments of regions adjacent to the reference minutiae. However, the alignments of regions far away from the reference minutiae are usually not so satisfactory. In this paper, we propose a minutia matching method based on global alignment of multiple pairs of reference minutiae. These reference minutiae are commonly distributed in various fingerprint regions. When matching, these pairs of reference minutiae are to be globally aligned, and those region pairs far away from the original reference minutiae will be aligned more satisfactorily. Experiment shows that this method leads to improvement in system identification performance. 
<s> BIB010 </s> A survey on fingerprint minutiae-based local matching for verification and identification <s> References <s> Dealing with non-linear distortion in fingerprint images is a major difficulty for automated fingerprint verification systems. While this distortion can be a nuisance in minutiae matching systems, it is a major concern when matching smaller structures, such as points along ridges and pore locations. In this paper we show that a simple transformation derived from a Taylor series expansion can be used in conjunction with a set of corresponding minutia points to improve the correspondence of finer fingerprint details within a fingerprint image. This is demonstrated by applying the transformation to a database of fingerprint images and examining the ridge and pore match scores with and without the transformation. The results of our study show that this transform does provide a noticeable increase in matching accuracy. <s> BIB011 </s> A survey on fingerprint minutiae-based local matching for verification and identification <s> References <s> In this paper we present a new adaptive hybrid energy-based fingerprint matching system, which combines both minutiae information available in a fingerprint with the information of the local ridges in their vicinity. A more continuous representation of fingerprints can be obtained by using an energy-based rectangular tessellation with non-overlapped squared cells. However, a fixed tessellation is not efficient in handling non-linear deformations in fingerprints for which we propose an adaptive matching technique that uses dynamic rectangular tessellation to handle them. Each time a match is not found the dynamic tessellation increases its cell size until there is a match or cell size is greater than image size where the fingerprint is rejected. The basic idea of this system is to divide the fingerprint-matching problem into several small sub-problems that involve the use of cell energy minimization for which an iterative schema is devised. At each minimization step this schema optimizes its local energy according to the previous estimate and the observed image features. Minutiae and local ridges in their vicinity, produce different amounts of energy which form the energy vectors of the fingerprint image. In this work, we focus on the difficult problem of recognizing known fingerprints while rejecting unknown ones. Our system was tested on FVC2000 benchmark database of fingerprints and showed promising results. We show that matching performance can be improved by using energy vectors and adaptive matching, where adaptive matching reduces the effect of intra-class variations between different impressions of the same fingerprint image and energy vectors can efficiently represent fingerprints by using both information extracted from the minutiae and their local surrounding ridges. <s> BIB012 </s> A survey on fingerprint minutiae-based local matching for verification and identification <s> References <s> Minutiae pattern remains a widely used representation of a fingerprint. The research on minutiae matching never stops due to its complexity and intractability. In this paper, an efficient fingerprint minutiae matching algorithm is proposed. To obtain reliable reference minutiae pairs, the bank of coordinate systems is introduced. The coordinate systems bank is derived from the original minutiae features and applied to get more useful information about the minutiae. 
To improve the accuracy of minutiae matching, a global optimum alignment approach is developed, which is targeted on the alignment of the set of reference minutiae pairs. Experimental results show that this algorithm achieves excellent performance with high matching speed and high matching reliability. <s> BIB013 </s> A survey on fingerprint minutiae-based local matching for verification and identification <s> References <s> Many fingerprint matching algorithms have been reported in articles in recent years. And people did fingerprint images matching through minutiae matching in most of the algorithms. In this paper, we proposed a new fingerprint minutiae matching algorithm, which is fast, accurate and suitable for the real time fingerprint identification system. In this algorithm we used the core point to determine the reference point and used a round bounding box for matching. Experiments done on a set of fingerprint images captured with a scanner showed that our algorithm is faster and more accurate than Xiping Luo's algorithm. <s> BIB014 </s> A survey on fingerprint minutiae-based local matching for verification and identification <s> References <s> Utilizing more information other than minutiae is much helpful for large-scale fingerprint recognition applications. In this paper, we proposed a polynomial model to approximate the density map of fingerprints and used the model's parameters as a novel kind of feature for fingerprint representation. Thus, the density information can be utilized into the matching stage with a low additional storage cost. A decision-level fusion scheme is further used to combine the density map matching with conventional minutiae-based matching and experimental results showed a much better performance than using single minutiae-based matching. <s> BIB015 </s> A survey on fingerprint minutiae-based local matching for verification and identification <s> References <s> As an important feature, orientation field describes the global structure of fingerprints. It provides robust discriminatory information other than traditional widely-used minutiae points. However, there are few works explicitly incorporating this information into fingerprint matching stage, partly due to the difficulty of saving the orientation field in the feature template. In this paper, we propose a novel representation for fingerprints which includes both minutiae and model-based orientation field. Then, fingerprint matching can be done by combining the decisions of the matchers based on the global structure (orientation field) and the local cue (minutiae). We have conducted a set of experiments on large-scale databases and made thorough comparisons with the state-of-the-arts. Extensive experimental results show that combining these local and global discriminative information can largely improve the performance. The proposed system is more robust and accurate than conventional minutiae-based methods, and also better than the previous works which implicitly incorporate the orientation information. In this system, the feature template takes less than 420 bytes, and the feature extraction and matching procedures can be done in about 0.30 s. We also show that the global orientation field is beneficial to the alignment of the fingerprints which are either incomplete or poor-qualitied. <s> BIB016 </s> A survey on fingerprint minutiae-based local matching for verification and identification <s> References <s> Fingerprint registration is a critical step in fingerprint matching. 
Although a variety of registration alignment algorithms have been proposed, accurate fingerprint registration remains an unresolved problem. We propose a new algorithm for fingerprint registration using orientation field. This algorithm finds the correct alignment by maximization of mutual information between features extracted from orientation fields of template and input fingerprint images. Orientation field, representing the flow of ridges, is a relatively stable global feature of fingerprint images. This method uses the statistics and distribution of global feature of fingerprint images so that it is robust to image quality and local changes in images. The primary characteristic of this method is that it uses this stable global feature to align fingerprints, and that its behavior may resemble the way humans compare fingerprints. Experimental results show that the occurrence of misalignment is dramatically reduced and that registration accuracy is greatly improved at the same time, leading to enhanced matching performance. <s> BIB017 </s> A survey on fingerprint minutiae-based local matching for verification and identification <s> References <s> This paper presents a novel approach to fingerprint alignment based on the optimization of cost functions. The optimization is performed in two stages: the first stage provides a robust initial registration based on non-minutiae features, and the second stage proceeds by fine tuning the alignment parameters using minutiae. This approach represents a significant departure from traditional fingerprint matching algorithms that rely heavily on minutiae features for both registration and verification. The resulting algorithm is not only simple and intuitive, but is also robust, efficient, and accurate. Several alternative alignment algorithms have been implemented, and their results are compared using an FVC2002 dataset. An EER of 1.6% has been achieved for the proposed algorithm. <s> BIB018 </s> A survey on fingerprint minutiae-based local matching for verification and identification <s> References <s> Fingerprint matching is still a challenging problem for reliable person authentication because of the complex distortions involved in two impressions of the same finger. In this paper, we propose a fingerprint-matching approach based on genetic algorithms (GA), which tries to find the optimal transformation between two different fingerprints. In order to deal with low-quality fingerprint images, which introduce significant occlusion and clutter of minutiae features, we design a fitness function based on the local properties of each triplet of minutiae. The experimental results on National Institute of Standards and Technology fingerprint database, NIST-4, not only show that the proposed approach can achieve good performance even when a large portion of fingerprints in the database are of poor quality, but also show that the proposed approach is better than another approach, which is based on mean-squared error estimation. <s> BIB019 </s> A survey on fingerprint minutiae-based local matching for verification and identification <s> References <s> Minutiae point pattern matching is the most common approach for fingerprint verification. Although many minutiae point pattern matching algorithms have been proposed, reliable automatic fingerprint verification remains as a challenging problem, both with respect to recovering the optimal alignment and the construction of an adequate matching function. 
In this paper, we develop a memetic fingerprint matching algorithm (MFMA) which aims to identify the optimal or near optimal global matching between two minutiae sets. Within the MFMA, we first introduce an efficient matching operation to produce an initial population of local alignment configurations by examining local features of minutiae. Then, we devise a hybrid evolutionary procedure by combining the use of the global search functionality of a genetic algorithm with a local improvement operator to search for the optimal or near optimal global alignment. Finally, we define a reliable matching function for fitness computation. The proposed algorithm was evaluated by means of a series of experiments conducted on the FVC2002 database and compared with previous work. Experimental results confirm that the MFMA is an effective and practical matching algorithm for fingerprint verification. The algorithm is faster and more accurate than a traditional genetic-algorithm-based method. It is also more accurate than a number of other methods implemented for comparison, though our method generally requires more computational time in performing fingerprint matching. <s> BIB020 </s> A survey on fingerprint minutiae-based local matching for verification and identification <s> References <s> Fingerprint matching has been approached using various criteria based on different extracted features. However, robust and accurate fingerprint matching is still a challenging problem. In this paper, we propose an improved integrated method which operates by first suggesting a consensus matching function, which combines different matching criteria based on heterogeneous features. We then devise a genetically guided approach to optimise the consensus matching function for simultaneous fingerprint alignment and verification. Since different features usually offer complementary information about the matching task, the consensus function is expected to improve the reliability of fingerprint matching. A related motivation for proposing such a function is to build a robust criterion that can perform well over a variety of different fingerprint matching instances. Additionally, by employing the global search functionality of a genetic algorithm along with a local matching operation for population initialisation, we aim to identify the optimal or near optimal global alignment between two fingerprints. The proposed algorithm is evaluated by means of a series of experiments conducted on public domain collections of fingerprint images and compared with previous work. Experimental results show that the consensus function can lead to a substantial improvement in performance while the local matching operation helps to identify promising initial alignment configurations, thereby speeding up the verification process. The resulting algorithm is more accurate than several other proposed methods which have been implemented for comparison. <s> BIB021 </s> A survey on fingerprint minutiae-based local matching for verification and identification <s> References <s> In this paper, we proposed a new method using long digital straight segments (LDSSs) for fingerprint recognition based on such a discovery that LDSSs in fingerprints can accurately characterize the global structure of fingerprints. Different from the estimation of orientation using the slope of the straight segments, the length of LDSSs provides a measure for stability of the estimated orientation. 
In addition, each digital straight segment can be represented by four parameters: x-coordinate, y-coordinate, slope and length. As a result, only about 600 bytes are needed to store all the parameters of LDSSs of a fingerprint, as is much less than the storage orientation field needs. Finally, the LDSSs can well capture the structural information of local regions. Consequently, LDSSs are more feasible to apply to the matching process than orientation fields. The experiments conducted on fingerprint databases FVC2002 DB3a and DB4a show that our method is effective. <s> BIB022 </s> A survey on fingerprint minutiae-based local matching for verification and identification <s> References <s> Fingerprints and palmprints are the most common authentic biometrics for personal identification, especially for forensic security. Previous research have been proposed to speed up the searching process in fingerprint and palmprint identification systems, such as those based on classification or indexing, in which the deterioration of identification accuracy is hard to avert. In this paper, a novel hierarchical minutiae matching algorithm for fingerprint and palmprint identification systems is proposed. This method decomposes the matching step into several stages and rejects many false fingerprints or palmprints on different stages, thus it can save much time while preserving a high identification rate. Experimental results show that the proposed algorithm can save almost 50% searching time compared with traditional methods and illustrate its effectiveness. <s> BIB023 </s> A survey on fingerprint minutiae-based local matching for verification and identification <s> References <s> Incorporation of global features in minutia-based fingerprint recognition schemes enhances their recognition capability but at the expense of a substantially increased complexity. In this paper, we introduce a novel low-complexity multilevel structural technique for fingerprint recognition by first decomposing a fingerprint image into regions based on only some of the global features and then formulating multilevel feature vectors to represent the fingerprint by employing both the global and local features. A fast multilevel matching algorithm based on the new fingerprint representation is proposed. In order to show the effectiveness of the proposed scheme, extensive experiments are conducted using challenging benchmark databases from the 2002, 2004 and 2006 Fingerprint Verification Competitions (FVC2002, FVC2004 and FVC2006), and the results compared with those of some state-of-the-art schemes. The experimental results show that the average template size of the fingerprint representation is only 253bytes, whereas the average enrollment and matching time is about 0.23s. The proposed scheme is shown to yield recognition accuracy higher than that provided by the existing schemes at a lower cost. <s> BIB024
|
Recently, most of the proposals of fingerprint minutiae matching designed to be implemented in real systems have given up the idea of global matching in favor of local matching. Nevertheless, although the focus of this paper is to review the properties and methods belonging to local minutiae matching, we also provide an enumeration of the most influential global minutiae matching methods proposed in the specialized literature (see Table 1).
Table 1. Most influential global minutiae matching methods (references and main property):
• BIB001 BIB005 : Hough transform-based approaches
• BIB002 BIB003 BIB006 : Ridge-based relative pre-alignment
• BIB004 BIB010 : Global matching of clusters of minutiae
• BIB007 BIB008 BIB013 : Algebraic geometry-based approaches
• BIB009 BIB014 : Singularity-based relative pre-alignment
• BIB011 : Warping modeling-based approaches
• BIB012 : Minutiae matching with tesselated local information
• BIB015 : Global minutiae matching with image correlation
• BIB016 BIB017 BIB018 BIB022 : Orientation image-based relative pre-alignment
• BIB019 BIB020 BIB021 : Global matching by evolutionary algorithms
• Weighted global matching with adjustment of scores
• BIB023 BIB024 : Hierarchical and/or multilevel minutiae matching
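For the first category in Table 1, the core idea can be sketched as vote accumulation over a discretized space of rigid transformations (a simplified generalized Hough transform). This is an illustrative sketch only, with arbitrary bin sizes and no handling of angular wrap-around between bins, and it does not reproduce the specific refinements of the cited works.

```python
import math
from collections import Counter

def hough_alignment(template, query, pos_bin=10.0, ang_bin=math.radians(10)):
    """Each (template, query) minutia pair votes for the discretized (dx, dy, dtheta)
    that would bring the two minutiae into correspondence; the cell with the most
    votes is returned as the best rigid alignment hypothesis (sets assumed non-empty)."""
    votes = Counter()
    for tx, ty, tt in template:
        for qx, qy, qt in query:
            dtheta = (tt - qt) % (2 * math.pi)
            cos_t, sin_t = math.cos(dtheta), math.sin(dtheta)
            dx = tx - (cos_t * qx - sin_t * qy)
            dy = ty - (sin_t * qx + cos_t * qy)
            cell = (round(dx / pos_bin), round(dy / pos_bin), round(dtheta / ang_bin))
            votes[cell] += 1
    (bx, by, bt), support = votes.most_common(1)[0]
    # Return the center of the winning cell and its vote count (the alignment support).
    return (bx * pos_bin, by * pos_bin, bt * ang_bin), support
```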
|
A survey on fingerprint minutiae-based local matching for verification and identification <s> Feature Extraction Techniques <s> A fast parallel thinning algorithm is proposed in this paper. It consists of two subiterations: one aimed at deleting the south-east boundary points and the north-west corner points while the other one is aimed at deleting the north-west boundary points and the south-east corner points. End points and pixel connectivity are preserved. Each pattern is thinned down to a skeleton of unitary thickness. Experimental results show that this method is very effective. 12 references. <s> BIB001 </s> A survey on fingerprint minutiae-based local matching for verification and identification <s> Feature Extraction Techniques <s> The skeleton of a digital figure can often be regarded as a convenient alternative to the figure itself. It is useful both to diminish drastically the amount of data to be handled, and to simplify the computational procedures required for description and classification purposes. Thinning a digital figure down to its skeleton is a time-consuming process when conventional sequential computers are employed. The procedure we propose allows one to speed up the thinning transformation, and to get a well-shaped skeleton. After cleaning of the input picture has been performed, the pixels of the figure are labeled according to their distance from the background, and a set, whose pixels are symmetrically placed with respect to distinct contour parts of the figure, is found. This set is then given a linear structure by applying topology preserving removal operations. Finally, a pruning step, regarding branches not relevant in the framework of the problem domain, completes the process. The resulting skeleton is a labeled set of pixels which is shown to possess all the required properties, particularly those concerning connectedness, topology, and shape. Moreover, the original figure can almost completely be recovered by means of a reverse distance transformation. Only a fixed and small number of sequential passes through the picture is necessary to achieve the goal. The computational effort is rather modest, and the use of the proposed algorithm turns out to be more advantageous the greater the width of the figure to be thinned. <s> BIB002 </s> A survey on fingerprint minutiae-based local matching for verification and identification <s> Feature Extraction Techniques <s> Two parallel thinning algorithms are presented and evaluated in this article. The two algorithms use two-subiteration approaches: (1) alternatively deleting north and east and then south and west boundary pixels and (2) alternately applying a thinning operator to one of two subfields. Image connectivities are proven to be preserved and the algorithms' speed and medial curve thinness are compared to other two-subiteration approaches and a fully parallel approach. Both approaches produce very thin medial curves and the second achieves the fastest overall parallel thinning. <s> BIB003 </s> A survey on fingerprint minutiae-based local matching for verification and identification <s> Feature Extraction Techniques <s> Abstract A reliable method for extracting structural features from fingerprint images is presented. Viewing fingerprint images as a textured image, an orientation flow field is computed. The rest of the stages in the algorithm use the flow field to design adaptive filters for the input image. To accurately locate ridges, a waveform projection-based ridge segmentation algorithm is used. 
The ridge skeleton image is obtained and smoothed using morphological operators to detect the features. A large number of spurious features from the detected set of minutiae is deleted by a postprocessing stage. The performance of the proposed algorithm has been evaluated by computing a “goodness index” (GI) which compares the results of automatic extraction with manually extracted ground truth. The significance of the observed GI values is determined by comparing the index for a set of fingerprints against the GI values obtained under a baseline distribution. The detected features are observed to be reliable and accurate. <s> BIB004 </s> A survey on fingerprint minutiae-based local matching for verification and identification <s> Feature Extraction Techniques <s> In order to ensure that the performance of an automatic fingerprint identification/verification system will be robust with respect to the quality of input fingerprint images, it is essential to incorporate a fingerprint enhancement algorithm in the minutiae extraction module. We present a fast fingerprint enhancement algorithm, which can adaptively improve the clarity of ridge and valley structures of input fingerprint images based on the estimated local ridge orientation and frequency. We have evaluated the performance of the image enhancement algorithm using the goodness index of the extracted minutiae and the accuracy of an online fingerprint verification system. Experimental results show that incorporating the enhancement algorithm improves both the goodness index and the verification accuracy. <s> BIB005 </s> A survey on fingerprint minutiae-based local matching for verification and identification <s> Feature Extraction Techniques <s> This paper introduces a new efficient method for estimating the local ridge-line density in digital images. A mathematical characterization of the local frequency of sinusoidal signals is given, and a 2D-model is developed in order to approximate the ridge-line patterns. Experimental results obtained through a discrete implementation of the method are presented both in terms of accuracy and efficiency. <s> BIB006 </s> A survey on fingerprint minutiae-based local matching for verification and identification <s> Feature Extraction Techniques <s> Fingerprint matching is one of the most important problems in Fingerprint Identification System (AFIS). In this paper a new method of the reference point alignment has been presented. A new approach of reference point localization is based on so-called identification masks which have been composed on the basis of analysis of biometric characteristic of human finger. Construction of such masks has been presented.Experiments show that our approach locates a unique reference point with high accuracy for all types of fingerprints. Generally, fingerprint matching consists with three steps: core (reference) point detection, filter the image using a bank Gabor filters, and comparison with imprint pattern. It seems, that today, the Gabor filtering gives the best results in fingerprint recognition. The proposed method was evaluated and tested on various fingerprint images, included in the FVC2000 fingerprint database. Performed results with representative investigations have been compared. <s> BIB007 </s> A survey on fingerprint minutiae-based local matching for verification and identification <s> Feature Extraction Techniques <s> Fingerprint enhancement is a critical step in fingerprint identification. 
Most of the existing enhancement algorithms are based on local ridge direction. The main drawback of these methods lies in the fact that false estimate of local ridge direction will lead to poor enhancement. But the estimate of local ridge directions is unreliable in the areas corrupted by noise where enhancement is most needed. In this paper, we proposed a rule-based method to do fingerprint enhancement. We introduced human knowledge about fingerprints into the enhancement process in the form of rules and simulate what an expert will do to enhance a fingerprint image. In our method, the skeleton image is used to give ridge connection information for the enhancement of the binary image. Experiments show our algorithm is fast and has excellent performance. <s> BIB008 </s> A survey on fingerprint minutiae-based local matching for verification and identification <s> Feature Extraction Techniques <s> Most of the current fingerprint identification and verification systems performs fingerprint matching based on different attributes of the minutia details present in fingerprints. The minutiae (i.e. ridge endings and ridge bifurcations) are usually detected in the thinned binary image of the fingerprint. Due to the presence of noise as well as the use of different preprocessing stages the thinned binary image contains a large number of false minutiae which may highly decrease the matching performance of the system. A new algorithm of fingerprint image postprocessing is proposed. The algorithm operates onto the thinned binary image of the fingerprint in order to eliminate the false minutiae. The proposed algorithm is able to detect and cancel the minutiae associated with most of the false minutia structures which may be encountered in the thinned fingerprint image. <s> BIB009 </s> A survey on fingerprint minutiae-based local matching for verification and identification <s> Feature Extraction Techniques <s> The first subject of the paper is the estimation of a high resolution directional field of fingerprints. Traditional methods are discussed and a method, based on principal component analysis, is proposed. The method not only computes the direction in any pixel location, but its coherence as well. It is proven that this method provides exactly the same results as the "averaged square-gradient method" that is known from literature. Undoubtedly, the existence of a completely different equivalent solution increases the insight into the problem's nature. The second subject of the paper is singular point detection. A very efficient algorithm is proposed that extracts singular points from the high-resolution directional field. The algorithm is based on the Poincare index and provides a consistent binary decision that is not based on postprocessing steps like applying a threshold on a continuous resemblance measure for singular points. Furthermore, a method is presented to estimate the orientation of the extracted singular points. The accuracy of the methods is illustrated by experiments on a live-scanned fingerprint database. <s> BIB010 </s> A survey on fingerprint minutiae-based local matching for verification and identification <s> Feature Extraction Techniques <s> True minutiae extraction in fingerprint image is critical to the performance of an automated identification system. Generally, a set of endings and bifurcations (both called feature points) can be obtained by the thinning image from which the true minutiae of the fingerprint are extracted by using the rules based on the structure of ridges. 
However, considering some false and true minutiae have similar ridge structures in the thinning image, in a lot of cases, we have to explore their difference in the binary image or the original gray image. In this paper, we first define the different types of feature points and analyze the properties of their ridge structures in both thinning and binary images for the purpose of distinguishing the true and false minutiae. Based on the knowledge of these properties, a fingerprint post-processing approach is developed to eliminate the false minutiae and at the same time improve the thinning image for further application. Many experiments are performed and the results have shown the great effectiveness of the approach. <s> BIB011 </s> A survey on fingerprint minutiae-based local matching for verification and identification <s> Feature Extraction Techniques <s> A major new professional reference work on fingerprint security systems and technology from leading international researchers in the field. Handbook provides authoritative and comprehensive coverage of all major topics, concepts, and methods for fingerprint security systems. This unique reference work is an absolutely essential resource for all biometric security professionals, researchers, and systems administrators. <s> BIB012 </s> A survey on fingerprint minutiae-based local matching for verification and identification <s> Feature Extraction Techniques <s> An algorithm for the segmentation of fingerprints and a criterion for evaluating the block feature are presented. The segmentation uses three block features: the block clusters degree, the block mean information, and the block variance. An optimal linear classifier has been trained for the classification per block and the criteria of minimal number of misclassified samples are used. Morphology has been applied as post processing to reduce the number of classification errors. The algorithm is tested on FVC2002 database, only 2.45% of the blocks are misclassified, while the postprocessing further reduces this ratio. Experiments have shown that the proposed segmentation method performs very well in rejecting false fingerprint features from the noisy background. <s> BIB013 </s> A survey on fingerprint minutiae-based local matching for verification and identification <s> Feature Extraction Techniques <s> In this work, we introduce a new approach to automatic fingerprint classification. The directional image is partitioned into "homogeneous" connected regions according to the fingerprint topology, thus giving a synthetic representation which can be exploited as a basis for the classification. A set of dynamic masks, together with an optimization criterion, are used to guide the partitioning. The adaptation of the masks produces a numerical vector representing each fingerprint as a multidimensional point, which can be conceived as a continuous classification. Different search strategies are discussed to efficiently retrieve fingerprints both with continuous and exclusive classification. Experimental results have been given for the most commonly used fingerprint databases and the new method has been compared with other approaches known in the literature: As to fingerprint retrieval based on continuous classification, our method gives the best performance and exhibits a very high robustness. 
<s> BIB014 </s> A survey on fingerprint minutiae-based local matching for verification and identification <s> Feature Extraction Techniques <s> In this paper, we propose to use the fingerprint valley instead of ridge for the binarization-thinning process to extract fingerprint minutiae. We first use several preprocessing steps on the binary image in order to eliminate the spurious lakes and dots, and to reduce the spurious islands, bridges, and spurs in the skeleton image. By removing all the bug pixels introduced at the thinning stage, our algorithm can detect a maximum number of minutiae from the fingerprint skeleton using the Rutovitz Crossing Number. This allows the true minutiae preserved and false minutiae removed in later postprocessing stages. Finally, using the intrinsic duality property of fingerprint image we develop several postprocessing techniques to efficiently remove spurious minutiae. Especially, we define an H-point structure to remove several types of spurious minutiae including bridge, triangle, ladder, and wrinkle all together. Experimental results clearly demonstrate the effectiveness of the new algorithms. <s> BIB015 </s> A survey on fingerprint minutiae-based local matching for verification and identification <s> Feature Extraction Techniques <s> Fingerprint image-quality checking is one of the most important issues in fingerprint recognition because recognition is largely affected by the quality of fingerprint images. In the past, many related fingerprint-quality checking methods have typically considered the condition of input images. However, when using the preprocessing algorithm, ridge orientation may sometimes be extracted incorrectly. Unwanted false minutiae can be generated or some true minutiae may be ignored, which can also affect recognition performance directly. Therefore, in this paper, we propose a novel quality-checking algorithm which considers the condition of the input fingerprints and orientation estimation errors. In the experiments, the 2-D gradients of the fingerprint images were first separated into two sets of 1-D gradients. Then, the shapes of the probability density functions of these gradients were measured in order to determine fingerprint quality. We used the FVC2002 database and synthetic fingerprint images to evaluate the proposed method in three ways: 1) estimation ability of quality; 2) separability between good and bad regions; and 3) verification performance. Experimental results showed that the proposed method yielded a reasonable quality index in terms of the degree of quality degradation. Also, the proposed method proved superior to existing methods in terms of separability and verification performance. <s> BIB016 </s> A survey on fingerprint minutiae-based local matching for verification and identification <s> Feature Extraction Techniques <s> Singular points detection is the most challenging and important process in biometrics fingerprint verification and identification systems. Singular points are used for fingerprint classification, fingerprint matching and fingerprint alignment. This paper overcomes problems of the previous methods of miss- deducting or deducting spurious singular points. We propose a novel algorithm for singular point detection based on the fingerprint orientation field reliability. The algorithm starts by enhancing the fingerprint image using the short time Fourier Transform analysis (STFT), followed by calculating the orientation field reliability and locating the singular points. 
Experimental results have proven that the proposed algorithm locates singular points in a fingerprint image with high accuracy and can even locate the secondary core and delta if they exist. <s> BIB017 </s> A survey on fingerprint minutiae-based local matching for verification and identification <s> Feature Extraction Techniques <s> Fingerprint minutiae extraction is a critical issue in fingerprint recognition. Both missing and spurious minutiae hinder the posterior matching process. Spurious minutiae are more frequent than missing ones, but they can be removed by post-processing. In this work, we study the usage of a state-of-the-art minutiae extractor, MINDTCT, and we analyze its major drawback: the presence of spurious minutiae lying on the borders of the fingerprint and out its area. In order to overcome this problem, we use two different filtering approaches based on the convex hull of the minutiae and the segmentation of the fingerprint. We will analyze, supported by an exhaustive experimental study, the efficacy of these methods to remove spurious minutiae. We will evaluate both the effect on different state-of-the-art matchers and the goodness of the minutiae, by comparing the extracted minutiae with the ground-truth ones. For this purpose, the experiments have been performed on several databases of both real and synthetic fingerprints. The filters used allow us to remove spurious minutiae, resulting in more accurate results even in the case of robust matchers. The EER is improved up to 2% for good quality databases, and up to 25% for FVC databases. Additionally, the matching time is accelerated, since less minutiae are processed, attaining up to a 60% runtime reduction for the tested database. <s> BIB018
|
This section briefly identifies the subset of feature extraction techniques most frequently used in conjunction with fingerprint minutiae matching. An exhaustive review of existing techniques can be found in BIB012 . Below we summarize the most representative algorithms according to their usage in practice and in subsequent matching approaches proposed in the literature (a minimal sketch of the minutiae extraction step is given after the list):
• Fingerprint segmentation BIB013 .
• Local orientation map estimation BIB004 BIB010 .
• Local ridge frequency estimation BIB005 BIB006 .
• Singular and core point detection BIB007 BIB017 .
• Alignment of local orientations and ridge frequencies BIB014 .
• Fingerprint binarization BIB005 .
• Fingerprint skeletonization BIB001 BIB003 BIB008 .
• Minutiae extraction BIB002 .
• Spurious minutiae removal BIB009 BIB011 BIB015 BIB016 BIB018 .
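To make the minutiae extraction step more concrete, the following is a minimal sketch, assuming a thinned binary ridge skeleton stored in a NumPy array, of the classical Rutovitz crossing-number rule mentioned in BIB015 ; it is an illustration rather than the exact procedure of any of the cited works, and the border margin is an assumption of the example.

```python
import numpy as np

def crossing_number_minutiae(skeleton, margin=10):
    """Detect minutiae on a thinned (one-pixel-wide) binary ridge skeleton.

    Rutovitz crossing number CN at a ridge pixel:
      CN = 1 -> ridge ending, CN = 3 -> bifurcation.
    `skeleton` is a 2-D array with ridge pixels set to 1; `margin` skips the borders.
    """
    endings, bifurcations = [], []
    rows, cols = skeleton.shape
    for y in range(margin, rows - margin):
        for x in range(margin, cols - margin):
            if skeleton[y, x] != 1:
                continue
            # 8-neighbourhood visited in circular order, wrapping back to the start
            p = [skeleton[y - 1, x], skeleton[y - 1, x + 1], skeleton[y, x + 1],
                 skeleton[y + 1, x + 1], skeleton[y + 1, x], skeleton[y + 1, x - 1],
                 skeleton[y, x - 1], skeleton[y - 1, x - 1]]
            cn = sum(abs(int(p[i]) - int(p[(i + 1) % 8])) for i in range(8)) // 2
            if cn == 1:
                endings.append((x, y))
            elif cn == 3:
                bifurcations.append((x, y))
    return endings, bifurcations
```

In practice this raw list still contains spurious minutiae near the borders and in noisy regions, which is precisely why the post-processing step listed above is needed.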
|
A survey on fingerprint minutiae-based local matching for verification and identification <s> Taxonomy of Minutiae-Based Local Matching Methods <s> Proposes a fingerprint minutia matching technique, which matches the fingerprint minutiae by using both the local and global structures of minutiae. The local structure of a minutia describes a rotation and translation invariant feature of the minutia in its neighborhood. It is used to find the correspondence of two minutiae sets and increase the reliability of the global matching. The global structure of minutiae reliably determines the uniqueness of fingerprint. Therefore, the local and global structures of minutiae together provide a solid basis for reliable and robust minutiae matching. The proposed minutiae matching scheme is suitable for an online processing due to its high processing speed. Experimental results show the performance of the proposed technique. <s> BIB001 </s> A survey on fingerprint minutiae-based local matching for verification and identification <s> Taxonomy of Minutiae-Based Local Matching Methods <s> Fingerprint matching is challenging as the matcher has to minimize two competing error rates: the False Accept Rate and the False Reject Rate. We propose a novel, efficient, accurate and distortion-tolerant fingerprint authentication technique based on graph representation. Using the fingerprint minutiae features, a labeled, and weighted graph of minutiae is constructed for both the query fingerprint and the reference fingerprint. In the first phase, we obtain a minimum set of matched node pairs by matching their neighborhood structures. In the second phase, we include more pairs in the match by comparing distances with respect to matched pairs obtained in first phase. An optional third phase, extending the neighborhood around each feature, is entered if we cannot arrive at a decision based on the analysis in first two phases. The proposed algorithm has been tested with excellent results on a large private livescan database obtained with optical scanners. <s> BIB002 </s> A survey on fingerprint minutiae-based local matching for verification and identification <s> Taxonomy of Minutiae-Based Local Matching Methods <s> We introduce a novel fingerprint representation scheme that relies on describing the orientation field of the fingerprint pattern with respect to each minutia detail. This representation allows the derivation of a similarity function between minutiae that is used to identify corresponding features and evaluate the resemblance between two fingerprint impressions. A fingerprint matching algorithm, based on the proposed representation, is developed and tested with a series of experiments conducted on two public domain collections of fingerprint images. The results reveal that our method can achieve good performance on these data collections and that it outperforms other alternative approaches implemented for comparison. <s> BIB003
|
To date, more than 80 minutiae-based local matching methods have been proposed in the specialized literature. This section enumerates and categorizes them according to the properties studied above. Table 2 lists the methods reviewed in this paper. In this field, authors rarely give a name to their proposal, so we use the reference of the paper as its identifier. As Table 2 shows, most proposals use the Texture-based topology, with the method proposed in BIB003 as the main baseline. Regarding the other topologies, almost all the NN and Radius approaches derive from the matchers of BIB001 and BIB002 . Concerning consolidation and the additional features, all categories are spread over the methods without a clear pattern. Access to the RP is more common in recent methods, whereas the RC and the Types of minutiae have fallen out of use in recent years because they are not consistent across different prints obtained from the same finger. Finally, few techniques require parameter learning.
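To illustrate what a nearest-neighbour (NN) local structure looks like, the sketch below builds, for every minutia, a small rotation- and translation-invariant descriptor from its K nearest neighbours, in the spirit of the local structures used by matchers such as BIB001 ; the minutia format (x, y, theta in radians) and the choice of K are assumptions of the example, not a prescription of any particular method.

```python
import numpy as np

def nn_local_structures(minutiae, k=2):
    """Build a K-nearest-neighbour local descriptor for each minutia.

    `minutiae` is an (N, 3) array of rows (x, y, theta).
    For each neighbour the descriptor stores:
      - Euclidean distance to the neighbour,
      - angle of the segment to the neighbour, relative to the central minutia direction,
      - neighbour orientation relative to the central minutia direction.
    All three values are invariant to translation and rotation of the print.
    """
    m = np.asarray(minutiae, dtype=float)
    descriptors = []
    for i, (x, y, theta) in enumerate(m):
        d = np.hypot(m[:, 0] - x, m[:, 1] - y)
        d[i] = np.inf                          # exclude the minutia itself
        neighbours = np.argsort(d)[:k]
        feats = []
        for j in neighbours:
            seg_angle = np.arctan2(m[j, 1] - y, m[j, 0] - x)
            rel_pos = (seg_angle - theta) % (2 * np.pi)
            rel_ori = (m[j, 2] - theta) % (2 * np.pi)
            feats.append((d[j], rel_pos, rel_ori))
        descriptors.append(feats)
    return descriptors
```

Two such local structures can then be compared with simple tolerances on each component, and the surviving correspondences consolidated into a global score, which is essentially the two-stage scheme followed by most of the matchers in Table 2.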
|
A survey on fingerprint minutiae-based local matching for verification and identification <s> Correlation-based Techniques and Matching without Minutiae <s> An effective fingerprint verification system is presented. It assumes that an existing reference fingerprint image must validate the identity of a person by means of a test fingerprint image acquired online and in real-time using minutiae matching. The matching system consists of two main blocks: The first allows for the extraction of essential information from the reference image off-line, the second performs the matching itself online. The information is obtained from the reference image by filtering and careful minutiae extraction procedures. The fingerprint identification is based on triangular matching to cope with the strong deformation of fingerprint images due to static friction or finger rolling. The matching is finally validated by dynamic time warping. Results reported on the NIST Special Database 4 reference set, featuring 85 percent correct verification (15 percent false negative) and 0.05 percent false positive, demonstrate the effectiveness of the verification technique. <s> BIB001 </s> A survey on fingerprint minutiae-based local matching for verification and identification <s> Correlation-based Techniques and Matching without Minutiae <s> Biometrics-based verification, especially fingerprint-based identification, is receiving a lot of attention. There are two major shortcomings of the traditional approaches to fingerprint representation. For a considerable fraction of population, the representations based on explicit detection of complete ridge structures in the fingerprint are difficult to extract automatically. The widely used minutiae-based representation does not utilize a significant component of the rich discriminatory information available in the fingerprints. Local ridge structures cannot be completely characterized by minutiae. Further, minutiae-based matching has difficulty in quickly matching two fingerprint images containing a different number of unregistered minutiae points. The proposed filter-based algorithm uses a bank of Gabor filters to capture both local and global details in a fingerprint as a compact fixed length FingerCode. The fingerprint matching is based on the Euclidean distance between the two corresponding FingerCodes and hence is extremely fast. We are able to achieve a verification accuracy which is only marginally inferior to the best results of minutiae-based algorithms published in the open literature. Our system performs better than a state-of-the-art minutiae-based system when the performance requirement of the application system does not demand a very low false acceptance rate. Finally, we show that the matching performance can be improved by combining the decisions of the matchers based on complementary (minutiae-based and filter-based) fingerprint information. <s> BIB002 </s> A survey on fingerprint minutiae-based local matching for verification and identification <s> Correlation-based Techniques and Matching without Minutiae <s> An optical modeless Automatic Fingerprint Identification Systems (AFISs) is presented. The system uses a segmented Fourier-Mellin transform to preprocess the images. Image identification is performed using a HAusdorff-Voronoi NETwork (HAVNET), an artificial neural network designed for two-dimensional binary pattern recognition. 
<s> BIB003 </s> A survey on fingerprint minutiae-based local matching for verification and identification <s> Correlation-based Techniques and Matching without Minutiae <s> For the alignment of two fingerprints certain landmark points are needed. These should be automaticaly extracted with low misidentification rate. As landmarks we suggest the prominent symmetry points (singular points, SPs) in the fingerprints. We identify an SP by its symmetry properties. SPs are extracted from the complex orientation field estimated from the global structure of the fingerprint, i.e. the overall pattern of the ridges and valleys. Complex filters, applied to the orientation field in multiple resolution scales, are used to detect the symmetry and the type of symmetry. Experimental results are reported. <s> BIB004 </s> A survey on fingerprint minutiae-based local matching for verification and identification <s> Correlation-based Techniques and Matching without Minutiae <s> Biometric authentication can provide an added level of security and/or ease of convenience in access control applications. Fingerprints are a popular choice among the biometric features and have been successfully used in criminal identification. In access control applications, we are interested in obtaining digital live-scan fingerprints from sensors, rather than the inked fingerprints usually used in criminal identification. In this paper, we evaluate the performance of composite correlation filters in fingerprint verification for access control applications. The NIST Special Database 24, obtained from an optical fingerprint sensor, is used to evaluate the performance of fingerprint verification in the presence of distortion. <s> BIB005 </s> A survey on fingerprint minutiae-based local matching for verification and identification <s> Correlation-based Techniques and Matching without Minutiae <s> SUMMARY This paper presents an algorithm for fingerprint matching using the Phase-Only Correlation (POC) function. One of the most difficult problems in human identification by fingerprints has been that the matching performance is significantly influenced by fingertip surface condition, which may vary depending on environmental or personal causes. This paper proposes a new fingerprint matching algorithm using phase spectra of fingerprint images. The proposed algorithm is highly robust against fingerprint image degradation due to inadequate fingertip conditions. A set of experiments is carried out using fingerprint images captured by a pressure sensitive fingerprint sensor. The proposed algorithm exhibits efficient identification performance even for difficult fingerprint images that could not <s> BIB006 </s> A survey on fingerprint minutiae-based local matching for verification and identification <s> Correlation-based Techniques and Matching without Minutiae <s> A new fingerprint matching method is proposed in this paper, with which two fingerprint skeleton images are matched directly. In this method, an associate table is introduced to describe the relation of a ridge with its neighbor ridges, so the whole ridge pattern can be easily handed. In addition, two unique similarity measures, one for ridge curves, another for ridge patterns, are defined with the elastic distortion taken into account. Experiment results on several databases demonstrate the effectiveness and robustness of the proposed method. 
<s> BIB007 </s> A survey on fingerprint minutiae-based local matching for verification and identification <s> Correlation-based Techniques and Matching without Minutiae <s> The fingerprint matching using the original FingerCode generation has proved its effectiveness but it suffers from some limitations such as the reference point localization and the recourse to the relative fingerprint pre-alignment stage. In this paper, we propose a new hybrid fingerprint matching technique based on minutiae texture maps according to their orientations. Therefore, rather than exploiting the eight fixed directions of Gabor filters for all original fingerprint images filtering process, we construct absolute images starting from the minutiae localizations and orientations to generate our weighting oriented Minutiae Codes. The extracted features are invariant to translation and rotation, which allows us avoiding the fingerprint pair relative alignment stage. Results are presented demonstrating significant improvements in fingerprint matching accuracy through public fingerprint databases. <s> BIB008 </s> A survey on fingerprint minutiae-based local matching for verification and identification <s> Correlation-based Techniques and Matching without Minutiae <s> Fingerprint friction ridge details are generally described in a hierarchical order at three different levels, namely, level 1 (pattern), level 2 (minutia points), and level 3 (pores and ridge contours). Although latent print examiners frequently take advantage of level 3 features to assist in identification, automated fingerprint identification systems (AFIS) currently rely only on level 1 and level 2 features. In fact, the Federal Bureau of Investigation's (FBI) standard of fingerprint resolution for AFIS is 500 pixels per inch (ppi), which is inadequate for capturing level 3 features, such as pores. With the advances in fingerprint sensing technology, many sensors are now equipped with dual resolution (500 ppi/1,000 ppi) scanning capability. However, increasing the scan resolution alone does not necessarily provide any performance improvement in fingerprint matching, unless an extended feature set is utilized. As a result, a systematic study to determine how much performance gain one can achieve by introducing level 3 features in AFIS is highly desired. We propose a hierarchical matching system that utilizes features at all the three levels extracted from 1,000 ppi fingerprint scans. Level 3 features, including pores and ridge contours, are automatically extracted using Gabor filters and wavelet transform and are locally matched using the iterative closest point (ICP) algorithm. Our experiments show that level 3 features carry significant discriminatory information. There is a relative reduction of 20 percent in the equal error rate (EER) of the matching system when level 3 features are employed in combination with level 1 and 2 features. This significant performance gain is consistently observed across various quality fingerprint images <s> BIB009 </s> A survey on fingerprint minutiae-based local matching for verification and identification <s> Correlation-based Techniques and Matching without Minutiae <s> In this work, we present a novel hybrid fingerprint matcher system based on local binary patterns. 
The two fingerprints to be matched are first aligned using their minutiae, then the images are decomposed in several overlapping sub-windows, each sub-window is convolved with a bank of Gabor filters and, finally, the invariant local binary patterns histograms are extracted from the convolved images. Extensive experiments conducted over the four FVC2002 fingerprint databases show the effectiveness of the proposed hybrid approach with respect to the well-known Tico's minutiae matcher and other image-based approaches. Moreover, a BioHashing approach have been designed using the proposed fixed-length feature vector and very interesting performance has been obtained by combining it with the Tico's minutiae matcher. <s> BIB010 </s> A survey on fingerprint minutiae-based local matching for verification and identification <s> Correlation-based Techniques and Matching without Minutiae <s> The spectral minutiae representation is a method to represent a minutiae set as a fixed-length feature vector, which is invariant to translation, and in which rotation and scaling become translations, so that they can be easily compensated for. These characteristics enable the combination of fingerprint recognition systems with template protection schemes that require as an input a fixed-length feature vector. Based on the spectral minutiae features, this paper introduces two feature reduction algorithms: the Column Principal Component Analysis and the Line Discrete Fourier Transform feature reductions, which can efficiently compress the template size with a reduction rate of 94%. With reduced features, we can also achieve a fast minutiae-based matching algorithm. This paper presents the performance of the spectral minutiae fingerprint recognition system and shows a matching speed with 125 000 comparisons per second on a PC with Intel Pentium D processor 2.80 GHz and 1 GB of RAM. This fast operation renders our system suitable as a preselector for a large-scale fingerprint identification system, thus significantly reducing the time to perform matching, especially in systems operating at geographical level (e.g., police patrolling) or in complex critical environments (e.g., airports). <s> BIB011 </s> A survey on fingerprint minutiae-based local matching for verification and identification <s> Correlation-based Techniques and Matching without Minutiae <s> Considering fingerprint matching as a classification problem, the extreme learning machine (ELM) is a powerful classifier for assigning inputs to their corresponding classes, which offers better generalization performance, much faster learning speed, and minimal human intervention, and is therefore able to overcome the disadvantages of other gradient-based, standard optimization-based, and least squares-based learning techniques, such as high computational complexity, difficult parameter tuning, and so on. This paper proposes a novel fingerprint recognition system by first applying the ELM and Regularized ELM (R-ELM) to fingerprint matching to overcome the demerits of traditional learning methods. The proposed method includes the following steps: effective preprocessing, extraction of invariant moment features, and PCA for feature selection. Finally, ELM and R-ELM are used for fingerprint matching. Experimental results show that the proposed methods have a higher matching accuracy and are less time-consuming; thus, they are suitable for real-time processing. 
Other comparative studies involving traditional methods also show that the proposed methods with ELM and R-ELM outperform the traditional ones. <s> BIB012 </s> A survey on fingerprint minutiae-based local matching for verification and identification <s> Correlation-based Techniques and Matching without Minutiae <s> Fingerprint matching is an important and essential step in automated fingerprint recognition systems (AFRSs). The noise and distortion of captured fingerprints and the inaccurate of extracted features make fingerprint matching a very difficult problem. With the advent of high-resolution fingerprint imaging techniques and the increasing demand for high security, sweat pores have been recently attracting increasing attention in automatic fingerprint recognition. Therefore, this paper takes fingerprint pore matching as an example to show the robustness of our proposed matching method to the errors caused by the fingerprint representation. This method directly matches pores in fingerprints by adopting a coarse-to-fine strategy. In the coarse matching step, a tangent distance and sparse representation-based matching method (denoted as TD-Sparse) is proposed to compare pores in the template and test fingerprint images and establish one-to-many pore correspondences between them. The proposed TD-Sparse method is robust to noise and distortions in fingerprint images. In the fine matching step, false pore correspondences are further excluded by a weighted RANdom SAmple Consensus (WRANSAC) algorithm in which the weights of pore correspondences are determined based on the dis-similarity between the pores in the correspondences. The experimental results on two databases of high-resolution fingerprints demonstrate that the proposed method can achieve much higher recognition accuracy compared with other state-of-the-art pore matching methods. <s> BIB013 </s> A survey on fingerprint minutiae-based local matching for verification and identification <s> Correlation-based Techniques and Matching without Minutiae <s> Most fingerprint recognition systems are based on matching the location and orientation attributes of minutia points. In this paper, we propose a localized minutiae phase spectrum representation that encodes the local minutiae structure in the neighborhood of a given minutia point as a fixed-length binary code. Since this representation is invariant to global transformations (e.g., translation and rotation), the correspondences between the minutia points from two different fingerprints can be easily established based on the similarity (Hamming distance) between their phase spectral codes. In addition to determining the local minutiae similarities, a global similarity score can also be computed by aligning the query to the template based on the estimated correspondences and finding the similarity between the global phase spectra of the aligned minutiae sets. While the local similarity scores are robust to nonlinear fingerprint distortion, the global similarity score captures the highly distinctive spatial relationships between all the minutia points. Therefore, a combination of these two similarity measures provides high recognition accuracy. Experiments on the public-domain FVC2002 databases (DB1 and DB2) and FVC2006-DB2 demonstrate that the proposed approach achieves less than 1% Equal Error Rate, which is better than the state-of-the-art fingerprint matchers using only the location and orientation of minutia points. 
<s> BIB014 </s> A survey on fingerprint minutiae-based local matching for verification and identification <s> Correlation-based Techniques and Matching without Minutiae <s> This paper presents results on direct optical matching, using Fourier transforms and neural networks for matching fingerprints for authentication. Direct optical correlations and hybrid optical neural network correlation are used in the matching system. The test samples used in the experiments are the fingerprints taken from NIST database SD-9. These images, in both binary and gray-level forms, are stored in a VanderLugt correlator (A. VanderLugt, Signal detection by complex spatial filtering, IEEE Trans. Inform. Theory IT-10 (1964) 139-145). Tests of typical cross correlations and autocorrelation sensitivity for both binary and 8 bit gray images are presented. When Fourier transform (FT) correlations are used to generate features that are localized to parts of each fingerprint and combined using a neural network classification network and separate class-by-class matching networks, 90.9% matching accuracy is obtained on a test set of 200,000 image pairs. These results are obtained on images using 512 pixel resolution. The effect of image quality and resolution are tested using 256 and 128 pixel images, and yield accuracy of 89.3 and 88.7%. The 128-pixel images show only ridge flow and have no reliably detectable ridge endings or bifurcations and are therefore not suitable for minutia matching. This demonstrates that Fourier transform matching and neural networks can be used to match fingerprints which have too low image quality to be matched using minutia-based methods. Since more than 258,000 images were used to test each hybrid system, this is the largest test to date of FT matching for fingerprints. Published by Elsevier Science Ltd. <s> BIB015
|
Generically, matching by correlation superimposes two fingerprint images and computes their similarity as the correlation between corresponding pixels under different alignments. However, this apparently simple operation rarely yields acceptable results, mainly because of undesirable changes in global structure, brightness and contrast, which depend on distortion and skin condition. Moreover, the process may involve a high computational cost. The specialized literature offers several alternatives designed to alleviate these problems. For example, to mitigate distortion, some proposals use local windows around the minutiae BIB001 , singular point alignment before correlation BIB004 or advanced correlation filters BIB005 . To reduce the computational complexity, the correlation is performed on local regions in the Fourier domain BIB015 , or by using the Fourier-Mellin transform to maintain rotation and translation invariance BIB003 , the symmetric phase-only filter to reduce noise BIB006 , or the curvelet transform . A recent and promising trend transforms minutiae positions and orientations into spectral representations, i.e., fixed-length feature vectors invariant to translation, rotation and scale, which can then be compressed with dimensionality reduction techniques to speed up matching BIB011 BIB014 . Other approaches perform fingerprint matching without minutiae, relying on so-called texture information; the most popular is the FingerCode approach BIB002 , which combines a tessellation centered on the core point with a bank of Gabor filters to capture texture information. FingerCode features have been reused in later research BIB008 BIB010 BIB012 . Isolated orientation or ridge information BIB007 can also be used for matching. Finally, when high-resolution images are available, level-3 features such as sweat pores, dots and incipient ridges can be used instead of minutiae BIB009 BIB013 .
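As an illustration of Fourier-domain matching, the sketch below computes the phase-only correlation (POC) between two equally sized images, in the spirit of BIB006 ; it only estimates the best translation and its peak score, and deliberately ignores rotation and distortion handling, which complete systems must add.

```python
import numpy as np

def phase_only_correlation(img_a, img_b):
    """Return the POC peak value and the shift (dy, dx) that maximizes it.

    Both inputs are 2-D arrays of the same shape. The cross-power spectrum is
    normalized to unit magnitude, so only phase information contributes, which
    makes the score less sensitive to brightness and contrast changes.
    """
    fa = np.fft.fft2(img_a)
    fb = np.fft.fft2(img_b)
    cross = fa * np.conj(fb)
    cross /= np.abs(cross) + 1e-12          # keep phase only
    poc = np.real(np.fft.ifft2(cross))
    peak_idx = np.unravel_index(np.argmax(poc), poc.shape)
    return poc[peak_idx], peak_idx
```

A sharp, high peak indicates that the two impressions come from the same finger under a (roughly) rigid translation; a flat correlation surface indicates an impostor pair or a strongly distorted genuine pair.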
|
A survey on fingerprint minutiae-based local matching for verification and identification <s> Fingerprint Indexing <s> We are concerned with accurate and efficient indexing of fingerprint images. We present a model-based approach, which efficiently retrieves correct hypotheses using novel features of triangles formed by the triplets of minutiae as the basic representation unit. The triangle features that we use are its angles, handedness, type, direction, and maximum side. Geometric constraints based on other characteristics of minutiae are used to eliminate false correspondences. Experimental results on live-scan fingerprint images of varying quality and NIST special database 4 (NIST-4) show that our indexing approach efficiently narrows down the number of candidate hypotheses in the presence of translation, rotation, scale, shear, occlusion, and clutter. We also perform scientific experiments to compare the performance of our approach with another prominent indexing approach and show that the performance of our approach is better for both the live scan database and the ink based database NIST-4. <s> BIB001 </s> A survey on fingerprint minutiae-based local matching for verification and identification <s> Fingerprint Indexing <s> Fingerprint indexing is a key technique in automatic fingerprint identification systems (AFIS). However, handling fingerprint distortion is still a problem. This paper concentrates on a more accurate fingerprint indexing algorithm that efficiently retrieves the top N possible matching candidates from a huge database. To this end, we design a novel feature based on minutia neighborhood structure (we call this minutia detail and it contains richer minutia information) and a more stable triangulation algorithm (low-order Delaunay triangles, consisting of order 0 and 1 Delaunay triangles), which are both insensitive to fingerprint distortion. The indexing features include minutia detail and attributes of low-order Delaunay triangle (its handedness, angles, maximum edge, and related angles between orientation field and edges). Experiments on databases FVC2002 and FVC2004 show that the proposed algorithm considerably narrows down the search space in fingerprint databases and is stable for various fingerprints. We also compared it with other indexing approaches, and the results show our algorithm has better performance, especially on fingerprints with distortion. <s> BIB002 </s> A survey on fingerprint minutiae-based local matching for verification and identification <s> Fingerprint Indexing <s> This paper describes a new fingerprint indexing approach based on vector and scalar features, obtained from ridge-line orientations and frequencies. A carefully designed set of features and ad-hoc score measures allow the proposed indexing algorithm to be extremely effective and efficient, as confirmed by the results of extensive experiments. The new method markedly outperforms competing state-of-the-art techniques over six publicly available data sets. Furthermore, it can scale to large databases without losing accuracy: on a standard PC, a search over one million fingerprints takes less than 1 s. <s> BIB003 </s> A survey on fingerprint minutiae-based local matching for verification and identification <s> Fingerprint Indexing <s> This correspondence proposes new candidate list reduction criteria for fingerprint indexing approaches. 
The basic idea is that, given a query fingerprint, the initial set of scores produced by an indexer could contain useful information to reduce the candidate list. Novel reduction criteria have been proposed, and extensive experiments have been carried out over five publicly available benchmarks, using two state-of-the-art fingerprint indexing techniques. Although quite simple, the proposed criteria achieved remarkable results, allowing a substantial reduction of the candidate list: for instance, at 1% error rate, the average penetration rate of a state-of-the-art minutiae-based indexer decreases from 27% to 3.9% on FVC2000 DB2. The new reduction criteria are applicable to any indexing approach, since they only require a list of scores as input. <s> BIB004 </s> A survey on fingerprint minutiae-based local matching for verification and identification <s> Fingerprint Indexing <s> Orientation fields can be used to describe interleaved ridge and valley patterns of fingerprint image, providing features useful for fingerprint recognition. However, for tasks such as fingerprint indexing, additional image alignment is often required to avoid confounding effects caused by pose differences. In this paper, we propose to employ a set of polar complex moments (PCMs) for extraction of rotation invariant fingerprint representation. PCMs are capable of describing fingerprint ridge flow structures, including singular regions, and are tolerant to spurious orientations in noisy fingerprints. From the orientation fields, a set of rotation moment invariants are derived to form a feature vector for comprehensive fingerprint structural description. This feature vector gives a compact and rotation invariant representation that is important for pose-robust fingerprint indexing. A clustering-based fingerprint indexing scheme is employed to facilitate efficient and effective retrieval of the most likely candidates from a fingerprint database. Our experimental results on NIST and FVC fingerprint databases indicate that the proposed invariant representation improves the performance of fingerprint indexing as compared to state-of-the-art methods. <s> BIB005
|
Fingerprint indexing arises from the need for quick access to the fingerprint template database in identification tasks. Some indexing techniques use partial information provided by the extracted minutiae and build local structures centered on each minutia to establish similarity relationships between fingerprints and index keys. This allows candidate templates to be ranked, increasing the probability of retrieving the truly paired fingerprint. In fact, these approaches can be viewed as minutiae-based matching approaches when the matching score is made proportional to the number of coincident local structures. The pure indexing proposals found in the literature are based on minutiae triplets, which use triangle-based characteristics, such as side lengths, angles and handedness BIB001 , to compute similarity among fingerprints, and triangulations to improve efficiency BIB002 . Other indexing approaches rely on LO BIB005 and RF BIB003 . Finally, several criteria for narrowing the candidate list obtained from indexing are evaluated in BIB004 .
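To give a flavour of triplet-based indexing, the following sketch derives a translation- and rotation-invariant key from a triplet of minutiae using sorted side lengths, the largest internal angle and the handedness, in the spirit of the triangle features of BIB001 ; the quantization steps are arbitrary assumptions for the example, not values taken from the cited works.

```python
import numpy as np

def triangle_key(p1, p2, p3, q=8.0):
    """Build an invariant index key from a minutiae triplet.

    p1, p2, p3 are (x, y) minutiae positions, assumed to be given in a
    consistent order (otherwise the handedness sign flips). The key combines
    the sorted side lengths (quantized with step `q` pixels), the largest
    internal angle (quantized in 5-degree bins) and the handedness, all of
    which are invariant to translation and rotation of the fingerprint.
    """
    pts = [np.asarray(p, dtype=float) for p in (p1, p2, p3)]
    a = np.linalg.norm(pts[1] - pts[2])
    b = np.linalg.norm(pts[0] - pts[2])
    c = np.linalg.norm(pts[0] - pts[1])
    s1, s2, s3 = sorted((a, b, c))
    # The largest angle is opposite the longest side (law of cosines).
    cos_max = (s1 ** 2 + s2 ** 2 - s3 ** 2) / (2 * s1 * s2 + 1e-12)
    max_angle = np.degrees(np.arccos(np.clip(cos_max, -1.0, 1.0)))
    v1, v2 = pts[1] - pts[0], pts[2] - pts[0]
    handedness = 1 if (v1[0] * v2[1] - v1[1] * v2[0]) > 0 else -1
    return (int(s1 // q), int(s2 // q), int(s3 // q), int(max_angle // 5), handedness)
```

Triplets of a query fingerprint vote for the templates that contain the same (or a neighbouring) key, and the templates that accumulate the most votes form the candidate list passed to the fine matcher.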
|
A survey on fingerprint minutiae-based local matching for verification and identification <s> Current Progress in Matching <s> An automatic personal identification system based solely on fingerprints or faces is often not able to meet the system performance requirements. Face recognition is fast but not reliable while fingerprint verification is reliable but inefficient in database retrieval. We have developed a prototype biometric system which integrates faces and fingerprints. The system overcomes the limitations of face recognition systems as well as fingerprint verification systems. The integrated prototype system operates in the identification mode with an admissible response time. The identity established by the system is more reliable than the identity established by a face recognition system. In addition, the proposed decision fusion schema enables performance improvement by integrating multiple cues with different confidence measures. Experimental results demonstrate that our system performs very well. <s> BIB001 </s> A survey on fingerprint minutiae-based local matching for verification and identification <s> Current Progress in Matching <s> Abstract Integration of various fingerprint matching algorithms is a viable method to improve the performance of a fingerprint verification system. Different fingerprint matching algorithms are often based on different representations of the input fingerprints and hence complement each other. We use the logistic transform to integrate the output scores from three different fingerprint matching algorithms. Experiments conducted on a large fingerprint database confirm the effectiveness of the proposed integration scheme. <s> BIB002 </s> A survey on fingerprint minutiae-based local matching for verification and identification <s> Current Progress in Matching <s> In this paper, a parallel-matching processor architecture with early jump-out (EJO) control is proposed to carry out high-speed biometric fingerprint database retrieval. The processor performs the fingerprint retrieval by using minutia point matching. An EJO method is applied to the proposed architecture to speed up the large database retrieval. The processor is implemented on a Xilinx Virtex-E, and occupies 6,825 slices and runs at up to 65 MHz. The software/hardware co-simulation benchmark with a database of 10,000 fingerprints verifies that the matching speed can achieve the rate of up to 1.22 million fingerprints per second. EJO results in about a 22% gain in computing efficiency. <s> BIB003 </s> A survey on fingerprint minutiae-based local matching for verification and identification <s> Current Progress in Matching <s> The evidential value of palmprints in forensics is clear as about 30% of the latents recovered from crime scenes are from palms. While palmprint-based personal authentication systems have been developed, they mostly deal with low resolution (about 100 ppi) palmprints and only perform full-to-full matching. We propose a latent-to-full palmprint matching system that is needed in forensics. Our system deals with palmprints captured at 500 ppi and uses minutiae as features. Latent palmprint matching is a challenging problem because latents lifted at crime scenes are of poor quality, cover small area of palms and have complex background. Other difficulties include the presence of many creases and a large number of minutiae in palmprints. A robust algorithm to estimate ridge direction and frequency in palmprints is developed. 
This facilitates minutiae extraction even in poor quality palmprints. A fixed-length minutia descriptor, MinutiaCode, is utilized to capture distinctive information around each minutia and an alignment-based matching algorithm is used to match palmprints. Two sets of partial palmprints (150 live-scan partial palmprints and 100 latents) are matched to a background database of 10,200 full palmprints to test the proposed system. Rank-1 recognition rates of 78.7% and 69%, respectively, were achieved for live-scan palmprints and latents. <s> BIB004 </s> A survey on fingerprint minutiae-based local matching for verification and identification <s> Current Progress in Matching <s> Fingerprint matching is often affected by the presence of intrinsically low quality fingerprints and various distortions introduced during the acquisition process. An effective approach to account for within-class variations is by capturing multiple enrollment impressions of a finger. The focus of this work is on effectively combining minutiae information from multiple impressions of the same finger in order to increase coverage area, restore missing minutiae, and eliminate spurious ones. We propose a new, minutiae-based, template synthesis algorithm which merges several enrollment feature sets into a ''super-template''. We have performed extensive experiments and comparisons to demonstrate the effectiveness of the proposed approach using a challenging public database (i.e., FVC2000 Db1) which contains small area, low quality fingerprints. <s> BIB005 </s> A survey on fingerprint minutiae-based local matching for verification and identification <s> Current Progress in Matching <s> Latent fingerprint identification is of critical importance to law enforcement agencies in identifying suspects: Latent fingerprints are inadvertent impressions left by fingers on surfaces of objects. While tremendous progress has been made in plain and rolled fingerprint matching, latent fingerprint matching continues to be a difficult problem. Poor quality of ridge impressions, small finger area, and large nonlinear distortion are the main difficulties in latent fingerprint matching compared to plain or rolled fingerprint matching. We propose a system for matching latent fingerprints found at crime scenes to rolled fingerprints enrolled in law enforcement databases. In addition to minutiae, we also use extended features, including singularity, ridge quality map, ridge flow map, ridge wavelength map, and skeleton. We tested our system by matching 258 latents in the NIST SD27 database against a background database of 29,257 rolled fingerprints obtained by combining the NIST SD4, SD14, and SD27 databases. The minutiae-based baseline rank-1 identification rate of 34.9 percent was improved to 74 percent when extended features were used. In order to evaluate the relative importance of each extended feature, these features were incrementally used in the order of their cost in marking by latent experts. The experimental results indicate that singularity, ridge quality map, and ridge flow map are the most effective features in improving the matching accuracy. <s> BIB006 </s> A survey on fingerprint minutiae-based local matching for verification and identification <s> Current Progress in Matching <s> It is now established that photo-response nonuniformity noise pattern can be reliably used as a fingerprint to identify an image sensor. The large size and random nature of sensor fingerprints, however, make them inconvenient to store. 
Further, associated fingerprint matching method can be computationally expensive, especially for applications that involve large-scale databases. To address these limitations, we propose to represent sensor fingerprints in binary-quantized form. It is shown through both analytical study and simulations that the reduction in matching accuracy due to quantization is insignificant as compared to conventional approaches. Experiments on actual sensor fingerprint data are conducted to confirm that only a slight increase occurred in the probability of error and to demonstrate the computational efficacy of the approach. <s> BIB007 </s> A survey on fingerprint minutiae-based local matching for verification and identification <s> Current Progress in Matching <s> During the past decade, many efforts have been made to use palmprints as a biometric modality. However, most of the existing palmprint recognition systems are based on encoding and matching creases, which are not as reliable as ridges. This affects the use of palmprints in large-scale person identification applications where the biometric modality needs to be distinctive as well as insensitive to changes in age and skin conditions. Recently, several ridge-based palmprint matching algorithms have been proposed to fill the gap. Major contributions of these systems include reliable orientation field estimation in the presence of creases and the use of multiple features in matching, while the matching algorithms adopted in these systems simply follow the matching algorithms for fingerprints. However, palmprints differ from fingerprints in several aspects: 1) Palmprints are much larger and thus contain a large number of minutiae, 2) palms are more deformable than fingertips, and 3) the quality and discrimination power of different regions in palmprints vary significantly. As a result, these matchers are unable to appropriately handle the distortion and noise, despite heavy computational cost. Motivated by the matching strategies of human palmprint experts, we developed a novel palmprint recognition system. The main contributions are as follows: 1) Statistics of major features in palmprints are quantitatively studied, 2) a segment-based matching and fusion algorithm is proposed to deal with the skin distortion and the varying discrimination power of different palmprint regions, and 3) to reduce the computational complexity, an orientation field-based registration algorithm is designed for registering the palmprints into the same coordinate system before matching and a cascade filter is built to reject the nonmated gallery palmprints in early stage. The proposed matcher is tested by matching 840 query palmprints against a gallery set of 13,736 palmprints. Experimental results show that the proposed matcher outperforms the existing matchers a lot both in matching accuracy and speed. <s> BIB008 </s> A survey on fingerprint minutiae-based local matching for verification and identification <s> Current Progress in Matching <s> Palmprint recognition is a challenging problem, mainly due to low quality of the pattern, large nonlinear distortion between different impressions of the same palm and large image size, which makes feature extraction and matching computationally demanding. This paper introduces a high-resolution palmprint recognition system based on minutiae. 
The proposed system follows the typical sequence of steps used in fingerprint recognition, but each step has been specifically designed and optimized to process large palmprint images with a good tradeoff between accuracy and speed. A sequence of robust feature extraction steps allows to reliably detect minutiae; moreover, the matching algorithm is very efficient and robust to skin distortion, being based on a local matching strategy and an efficient and compact representation of the minutiae. Experimental results show that the proposed system compares very favorably with the state of the art. <s> BIB009 </s> A survey on fingerprint minutiae-based local matching for verification and identification <s> Current Progress in Matching <s> Although several fingerprint template protection methods have been proposed in the literature, the problem is still unsolved, since enforcing nonreversibility tends to produce an excessive drop in accuracy. Furthermore, unlike fingerprint verification, whose performance is assessed today with public benchmarks and protocols, performance of template protection approaches is often evaluated in heterogeneous scenarios, thus making it very difficult to compare existing techniques. In this paper, we propose a novel protection technique for Minutia Cylinder-Code (MCC), which is a well-known local minutiae representation. A sophisticate algorithm is designed to reverse MCC (i.e., recovering original minutiae positions and angles). Systematic experimentations show that the new approach compares favorably with state-of-the-art methods in terms of accuracy and, at the same time, provides a good protection of minutiae information and is robust against masquerade attacks. <s> BIB010 </s> A survey on fingerprint minutiae-based local matching for verification and identification <s> Current Progress in Matching <s> This paper describes an embedded minutia-based matching algorithm using the reference point neighborhoods minutiae. The proposed matching algorithm is implemented in restricted environments such as smart card devices requiring careful monitoring of both memory and processing time usage. The proposed algorithm uses a circular tessellation to encode fingerprint features in neighborhood minutia localization binary codes. The objective of the present study is the development of a new matching approach which reduces both computing time and required space memory for fingerprint matching on Java Card. The main advantage of our approach is avoiding the implicit alignment of fingerprint images during the matching process while improving the fingerprint verification accuracy. Tests carried out on the public fingerprint databases DB1-a and DB2-a of FVC2002 have shown the effectiveness of our approach. <s> BIB011 </s> A survey on fingerprint minutiae-based local matching for verification and identification <s> Current Progress in Matching <s> Identifying suspects based on impressions of fingers lifted from crime scenes (latent prints) is extremely important to law enforcement agencies. Latents are usually partial fingerprints with small area, contain nonlinear distortion, and are usually smudgy and blurred. Due to some of these characteristics, they have a significantly smaller number of minutiae points (one of the most important features in fingerprint matching) and therefore it can be extremely difficult to automatically match latents to plain or rolled fingerprints that are stored in law enforcement databases. 
Our goal is to develop a latent matching algorithm that uses only minutiae information. The proposed approach consists of following three modules: (i) align two sets of minutiae by using a descriptor-based Hough Transform; (ii) establish the correspondences between minutiae; and (iii) compute a similarity score. Experimental results on NIST SD27 show that the proposed algorithm outperforms a commercial fingerprint matcher. <s> BIB012 </s> A survey on fingerprint minutiae-based local matching for verification and identification <s> Current Progress in Matching <s> Fingerprints and palmprints are the most common authentic biometrics for personal identification, especially for forensic security. Previous research have been proposed to speed up the searching process in fingerprint and palmprint identification systems, such as those based on classification or indexing, in which the deterioration of identification accuracy is hard to avert. In this paper, a novel hierarchical minutiae matching algorithm for fingerprint and palmprint identification systems is proposed. This method decomposes the matching step into several stages and rejects many false fingerprints or palmprints on different stages, thus it can save much time while preserving a high identification rate. Experimental results show that the proposed algorithm can save almost 50% searching time compared with traditional methods and illustrate its effectiveness. <s> BIB013 </s> A survey on fingerprint minutiae-based local matching for verification and identification <s> Current Progress in Matching <s> With the availability of live-scan palmprint technology, high resolution palmprint recognition has started to receive significant attention in forensics and law enforcement. In forensic applications, latent palmprints provide critical evidence as it is estimated that about 30 percent of the latents recovered at crime scenes are those of palms. Most of the available high-resolution palmprint matching algorithms essentially follow the minutiae-based fingerprint matching strategy. Considering the large number of minutiae (about 1,000 minutiae in a full palmprint compared to about 100 minutiae in a rolled fingerprint) and large area of foreground region in full palmprints, novel strategies need to be developed for efficient and robust latent palmprint matching. In this paper, a coarse to fine matching strategy based on minutiae clustering and minutiae match propagation is designed specifically for palmprint matching. To deal with the large number of minutiae, a local feature-based minutiae clustering algorithm is designed to cluster minutiae into several groups such that minutiae belonging to the same group have similar local characteristics. The coarse matching is then performed within each cluster to establish initial minutiae correspondences between two palmprints. Starting with each initial correspondence, a minutiae match propagation algorithm searches for mated minutiae in the full palmprint. The proposed palmprint matching algorithm has been evaluated on a latent-to-full palmprint database consisting of 446 latents and 12,489 background full prints. The matching results show a rank-1 identification accuracy of 79.4 percent, which is significantly higher than the 60.8 percent identification accuracy of a state-of-the-art latent palmprint matching algorithm on the same latent database. 
The average computation time of our algorithm for a single latent-to-full match is about 141 ms for genuine match and 50 ms for impostor match, on a Windows XP desktop system with 2.2-GHz CPU and 1.00-GB RAM. The computation time of our algorithm is an order of magnitude faster than a previously published state-of-the-art-algorithm. <s> BIB014 </s> A survey on fingerprint minutiae-based local matching for verification and identification <s> Current Progress in Matching <s> In this paper we investigate the question of combining multi-sample matching results obtained during repeated attempts of fingerprint based authentication. In order to utilize the information corresponding to multiple input templates in a most efficient way, we propose a minutiae-based matching state model which uses relationship between test templates and enrolled template. The principle of this algorithm is that matching parameters, i.e the sets of matched minutiae, between these templates should be consistent in genuine matchings. Experiments are performed on FVC2002 fingerprint databases. Result shows that the system utilizing the proposed matching state model is able to outperform the original system with raw matching scores. Likelihood ratio and multilayer perceptron are used as combination methods. <s> BIB015 </s> A survey on fingerprint minutiae-based local matching for verification and identification <s> Current Progress in Matching <s> We propose here a novel system for protecting fingerprint privacy by combining two different fingerprints into a new identity. In the enrollment, two fingerprints are captured from two different fingers. We extract the minutiae positions from one fingerprint, the orientation from the other fingerprint, and the reference points from both fingerprints. Based on this extracted information and our proposed coding strategies, a combined minutiae template is generated and stored in a database. In the authentication, the system requires two query fingerprints from the same two fingers which are used in the enrollment. A two-stage fingerprint matching process is proposed for matching the two query fingerprints against a combined minutiae template. By storing the combined minutiae template, the complete minutiae feature of a single fingerprint will not be compromised when the database is stolen. Furthermore, because of the similarity in topology, it is difficult for the attacker to distinguish a combined minutiae template from the original minutiae templates. With the help of an existing fingerprint reconstruction approach, we are able to convert the combined minutiae template into a real-look alike combined fingerprint. Thus, a new virtual identity is created for the two different fingerprints, which can be matched using minutiae-based fingerprint matching algorithms. The experimental results show that our system can achieve a very low error rate with FRR = 0.4% at FAR = 0.1%. Compared with the state-of-the-art technique, our work has the advantage in creating a better new virtual identity when the two different fingerprints are randomly chosen. <s> BIB016 </s> A survey on fingerprint minutiae-based local matching for verification and identification <s> Current Progress in Matching <s> Fingerprints are the biometric features most used for identification. They can be characterized through some particular elements called minutiae. The identification of a given fingerprint requires the matching of its minutiae against the minutiae of other fingerprints. 
Hence, fingerprint matching is a key process. The efficiency of current matching algorithms does not allow their use in large fingerprint databases; to apply them, a breakthrough in running performance is necessary. Nowadays, the minutia cylinder-code (MCC) is the best performing algorithm in terms of accuracy. However, a weak point of this algorithm is its computational requirements. In this paper, we present a GPU fingerprint matching system based on MCC. The many-core computing framework provided by CUDA on NVIDIA Tesla and GeForce hardware platforms offers an opportunity to enhance fingerprint matching. Through a thorough and careful data structure, computation and memory transfer design, we have developed a system that keeps its accuracy and reaches a speed-up up to 100.8× compared with a reference sequential CPU implementation. A rigorous empirical study over captured and synthetic fingerprint databases shows the efficiency of our proposal. These results open up a whole new field of possibilities for reliable real time fingerprint identification in large databases. <s> BIB017 </s> A survey on fingerprint minutiae-based local matching for verification and identification <s> Current Progress in Matching <s> Fingerprint matching has emerged as an effective tool for human recognition due to the uniqueness, universality and invariability of fingerprints. Many different approaches have been proposed in the literature to determine faithfully if two fingerprint images belong to the same person. Among them, minutiae-based matchers highlight as the most relevant techniques because of their discriminative capabilities, providing precise results. However, performing a fingerprint identification over a large database can be an inefficient task due to the lack of scalability and high computing times of fingerprint matching algorithms.In this paper, we propose a distributed framework for fingerprint matching to tackle large databases in a reasonable time. It provides a general scheme for any kind of matcher, so that its precision is preserved and its time of response can be reduced.To test the proposed system, we conduct an extensive study that involves both synthetic and captured fingerprint databases, which have different characteristics, analyzing the performance of three well-known minutiae-based matchers within the designed framework. With the available hardware resources, our distributed model is able to address up to 400000 fingerprints in approximately half a second. Additional details are provided at http://sci2s.ugr.es/ParallelMatching. HighlightsA two-level parallel AFIS is proposed to deal with large databases.It makes possible to perform pattern identifications in arbitrarily large databases.The framework is flexible for any kind of databases, and any matching algorithm.The achieved speedup is nearly linear.The framework performs 400000 matchings in 0.5s, with no precision loss. <s> BIB018
|
Nowadays, the matching field continues to progress, offering new developments to improve personal identification. In the following, we briefly mention different matching-related issues currently being tackled:
• Accelerating fingerprint matching: many efforts have been made to speed up the matching process, for instance by means of FPGA-based BIB003 or GPU-based BIB017 parallel architectures, or distributed computing BIB018 (see the sketch after this list).
• Fingerprint matching in embedded systems: sensors BIB007 and smart cards BIB011 .
• Latent fingerprint matching: a harder problem because latents are inadvertent impressions left by fingers on surfaces BIB006 BIB012 .
• Palmprint matching: based on ridges BIB008 or minutiae BIB009 BIB013 , with effective approaches also for latent palmprint matching BIB004 BIB014 .
• Combinations with other traits and multiple matching: fusion with face recognition BIB001 , combination of multiple matchers BIB002 , multiple samples BIB015 and minutiae-based template synthesis for matching BIB005 .
• Privacy protection in fingerprint matching: it seeks to avoid traditional encryption, whose decryption step exposes the fingerprint to an attacker. Two recent examples are the protection scheme for the MCC representation BIB010 and the combination of two different fingerprints into a new identity based on minutiae, orientations and singular points BIB016 .
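Regarding the acceleration item, identification over a large gallery is naturally parallel at the template level, since each query/template comparison is independent. The sketch below distributes an arbitrary match(query, template) function (assumed to be a picklable, module-level function returning a similarity score) over CPU cores; it is the coarse-grained counterpart of the FPGA, GPU and distributed designs cited above, not a reproduction of any of them.

```python
from concurrent.futures import ProcessPoolExecutor
from functools import partial

def _score(query, match, entry):
    """Score one (template_id, template) pair against the query."""
    template_id, template = entry
    return template_id, match(query, template)

def identify(query, gallery, match, top_n=10, workers=4):
    """Rank gallery templates by similarity to `query` using parallel matching.

    `gallery` is a list of (template_id, template) pairs and `match` is any
    picklable matching function returning a numeric score (higher = more
    similar). On Windows, call this under an `if __name__ == "__main__":` guard.
    """
    score_one = partial(_score, query, match)
    with ProcessPoolExecutor(max_workers=workers) as pool:
        scores = list(pool.map(score_one, gallery))
    return sorted(scores, key=lambda item: item[1], reverse=True)[:top_n]
```

The same decomposition underlies candidate-list generation in indexing schemes: the gallery is first pruned by the index and only the surviving templates are scored in parallel.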
|
A survey on fingerprint minutiae-based local matching for verification and identification <s> Databases <s> Two years after the first edition, a new Fingerprint Verification Competition (FVC2002) was organized by the authors, with the aim of determining the state-of-the-art in this challenging pattern recognition application. The experience and the feedback received from FVC2000 allowed the authors to improve the organization of FVC2002 and to capture the attention of a significantly higher number of academic and commercial organizations (33 algorithms were submitted). This paper discusses the FVC2002 database, the test protocol and the main differences between FVC2000 and FVC2002. The algorithm performance evaluation will be presented at the 16/sup th/ ICPR. <s> BIB001 </s> A survey on fingerprint minutiae-based local matching for verification and identification <s> Databases <s> A new technology evaluation of fingerprint verification algorithms has been organized following the approach of the previous FVC2000 and FVC2002 evaluations, with the aim of tracking the quickly evolving state-of-the-art of fingerprint recognition systems. Three sensors have been used for data collection, including a solid state sweeping sensor, and two optical sensors of different characteristics. The competition included a new category dedicated to ”light” systems, characterized by limited computational and storage resources. This paper summarizes the main activities of the FVC2004 organization and provides a first overview of the evaluation. Results will be further elaborated and officially presented at the International Conference on Biometric Authentication (Hong Kong) on July 2004. <s> BIB002
|
We have used a wide variety of databases to test the performance and behavior of the matching algorithms. Table 3 presents their characteristics, showing their size and the average number of minutiae of the template and input fingerprints. First, we apply the algorithms over twelve of the well-known FVC databases, using the first impression of each finger as the template and the other seven impressions as inputs. These databases are designed for verification competitions, and therefore their fingerprints are deliberately of poor quality. More information about the FVC databases can be found in BIB001 BIB002 . Four additional databases, captured by the authors' research groups, are used for the study. They simulate a real identification environment with consented fingerprint captures of reasonable quality. All of them are composed of the same fingers, captured by four different sensors (Table 4) . A total of 308 people participated in the study. The fingerprints of the thumb, forefinger and middle finger of both hands were captured along three different sessions. After removing the failed captures, we selected three random input fingerprints per session and a single template fingerprint for each finger and sensor. In this manner we obtain four final databases that contain the same 1228 fingers captured by four different sensors.
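For concreteness, the template/input split described above can be expressed as a small helper. This is only an illustrative sketch: the impostor pairing shown is one common choice, not necessarily the exact protocol used in the experiments.

```python
def build_protocol(db):
    """db maps a finger id to its list of impressions (eight per finger in
    the FVC databases). Impression 1 becomes the template; the remaining
    impressions become the inputs, as in the protocol described above."""
    templates = {fid: imps[0] for fid, imps in db.items()}
    inputs = {fid: imps[1:] for fid, imps in db.items()}
    genuine = [(templates[fid], q) for fid in db for q in inputs[fid]]
    impostor = [(templates[a], inputs[b][0])   # illustrative impostor pairing
                for a in db for b in db if a != b]
    return templates, genuine, impostor
```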
|
A survey on fingerprint minutiae-based local matching for verification and identification <s> Accuracy measures <s> A printed circuit assembly has a flexible printed circuit carried on a rigid panel which includes an open ended cavity. A flexible conductor strip of the printed circuit is wiped into the cavity by a sheet metal female terminal upon insertion of the terminal into one end of the cavity. The terminal has a box-like portion for receiving a male terminal inserted into the opposite end of the cavity, a first resilient tongue for biasing a terminal received in the box-like portion against an interior surface thereof and a pair of resilient tongues for biasingly engaging the conductor strip wiped into the cavity. The terminal also includes a transverse portion which biases the conductor strip against the panel outside of the cavity. Flat or partispherical dimples for contacting the conductor strip may be utilized and the terminal may include a ferrule portion. <s> BIB001 </s> A survey on fingerprint minutiae-based local matching for verification and identification <s> Accuracy measures <s> While methods for comparing two learning algorithms on a single data set have been scrutinized for quite some time already, the issue of statistical tests for comparisons of more algorithms on multiple data sets, which is even more essential to typical machine learning studies, has been all but ignored. This article reviews the current practice and then theoretically and empirically examines several suitable tests. Based on that, we recommend a set of simple, yet safe and robust non-parametric tests for statistical comparisons of classifiers: the Wilcoxon signed ranks test for comparison of two classifiers and the Friedman test with the corresponding post-hoc tests for comparison of more classifiers over multiple data sets. Results of the latter can also be neatly presented with the newly introduced CD (critical difference) diagrams. <s> BIB002 </s> A survey on fingerprint minutiae-based local matching for verification and identification <s> Accuracy measures <s> In a recently published paper in JMLR, Demˇ sar (2006) recommends a set of non-parametric statistical tests and procedures which can be safely used for comparing the performance of classifiers over multiple data sets. After studying the paper, we realize that the paper correctly introduces the basic procedures and some of the most advanced ones when comparing a control method. However, it does not deal with some advanced topics in depth. Regarding these topics, we focus on more powerful proposals of statistical procedures for comparing n n classifiers. Moreover, we illustrate an easy way of obtaining adjusted and comparable p-values in multiple comparison procedures. <s> BIB003
|
The accuracy of a fingerprint matcher can be measured from two different perspectives:
• Verification: matching two fingerprints to determine whether or not they correspond to the same finger.
• Identification: finding the match of an input fingerprint in a database by comparing it to all the templates.
Each perspective employs different accuracy measures. In this paper, we use the following verification measures:
• False Matching Rate (FMR): rate of different fingerprints that are considered to be the same by the matcher. Each possible score has an associated FMR; the higher the score, the lower the FMR.
• False Non-Matching Rate (FNMR): rate of corresponding fingerprints that are erroneously considered different.
• Equal-Error Rate (EER): value (corresponding to a certain score threshold) at which FMR and FNMR are equal.
• ROC: curve that plots the Genuine Matching Rate (GMR = 1 − FNMR) versus the FMR.
• FMR100: lowest achievable FNMR for FMR ≤ 1%.
• FMR1000: lowest achievable FNMR for FMR ≤ 0.1%.
• ZeroFMR: lowest achievable FNMR for FMR = 0%.
Within an identification process, most of the accuracy measures are related to the rank, which is the position of the genuine score when all the obtained scores are ordered in descending order. In other words, the rank is the minimum number of database fingerprints that have to be returned by the identification system to ensure that the correct identity is included. We use the following identification accuracy measures:
• True Positive Rate (TPR): percentage of test fingerprints that are correctly identified in the database when only the best matching score is retrieved; it is the rate obtained when using a rank of 1.
• R100: lowest rank that yields an error lower than 1%.
• ZeroR: lowest rank that yields no errors.
• Cumulative Match Curve (CMC): curve that represents the error associated with each rank.
The optimum value for R100 and ZeroR is 1, whereas the worst one is the size of the database. In addition to these values, the average matching time is also important to determine whether a matching algorithm is suitable for a certain identification system. For reasons of space and concision, not all of these measures are presented in the paper. The full set of results is accessible at http://sci2s.ugr.es/MatchingReview/. Statistical tests allow a fair comparison between the methods to be established and significant differences to be detected. In this paper, we use the nonparametric tests recommended in BIB002 BIB003 , which are claimed to be simple, safe and robust. Furthermore, we apply the Friedman test BIB001 to measure the differences between the methods with a multiple comparison analysis. The Holm procedure is applied to find out which algorithms differ significantly.
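A minimal sketch of how these verification and identification measures can be computed from raw genuine and impostor scores is given below. The interpolation and tie-handling conventions of the official FVC protocol are simplified here, so the figures it produces are only approximations.

```python
import numpy as np

def verification_measures(genuine, impostor):
    """genuine/impostor: arrays of matching scores (higher = more similar).
    Returns (EER, FMR100, ZeroFMR) as defined above."""
    genuine, impostor = np.asarray(genuine, float), np.asarray(impostor, float)
    thresholds = np.unique(np.concatenate([genuine, impostor]))
    thresholds = np.append(thresholds, thresholds[-1] + 1e-9)  # ensures an FMR = 0 point
    fmr = np.array([(impostor >= t).mean() for t in thresholds])
    fnmr = np.array([(genuine < t).mean() for t in thresholds])
    i = np.argmin(np.abs(fmr - fnmr))
    eer = (fmr[i] + fnmr[i]) / 2
    fmr100 = fnmr[fmr <= 0.01].min()   # lowest FNMR with FMR <= 1%
    zero_fmr = fnmr[fmr == 0.0].min()  # lowest FNMR with FMR = 0%
    return eer, fmr100, zero_fmr

def cmc_curve(rankings, true_ids, max_rank):
    """rankings: for each query, the database identities ordered by decreasing
    score (the true identity is assumed to be enrolled). cmc[0] is the TPR."""
    hits = np.zeros(max_rank)
    for ranking, truth in zip(rankings, true_ids):
        pos = list(ranking).index(truth)   # 0-based rank of the genuine identity
        if pos < max_rank:
            hits[pos:] += 1
    return hits / len(true_ids)
```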
|
A survey on fingerprint minutiae-based local matching for verification and identification <s> Verification <s> Fingerprint matching is challenging as the matcher has to minimize two competing error rates: the False Accept Rate and the False Reject Rate. We propose a novel, efficient, accurate and distortion-tolerant fingerprint authentication technique based on graph representation. Using the fingerprint minutiae features, a labeled, and weighted graph of minutiae is constructed for both the query fingerprint and the reference fingerprint. In the first phase, we obtain a minimum set of matched node pairs by matching their neighborhood structures. In the second phase, we include more pairs in the match by comparing distances with respect to matched pairs obtained in first phase. An optional third phase, extending the neighborhood around each feature, is entered if we cannot arrive at a decision based on the analysis in first two phases. The proposed algorithm has been tested with excellent results on a large private livescan database obtained with optical scanners. <s> BIB001 </s> A survey on fingerprint minutiae-based local matching for verification and identification <s> Verification <s> We introduce a novel fingerprint representation scheme that relies on describing the orientation field of the fingerprint pattern with respect to each minutia detail. This representation allows the derivation of a similarity function between minutiae that is used to identify corresponding features and evaluate the resemblance between two fingerprint impressions. A fingerprint matching algorithm, based on the proposed representation, is developed and tested with a series of experiments conducted on two public domain collections of fingerprint images. The results reveal that our method can achieve good performance on these data collections and that it outperforms other alternative approaches implemented for comparison. <s> BIB002
|
Tables 6 and 7 present the EER and FMR100, respectively, as the error percentage obtained for all tested algorithms over the 700 input fingerprints of each FVC database. The best result for each database is stressed in boldface. Additionally, Figure 1 plots the ROC curve for the most difficult FVC database (FVC2002 db3a, which obtains the highest average EER). Bozorth3 is the best performing algorithm in general. If we focus on the EER, MCC also obtains good results, while Deng is more accurate in terms of FMR100. (Parameter settings used: Ratha BIB001 : Neigh=6, F_min=0.4, TM=8, RelDist=0.2, RidgesDiff=10, EdgesDiff=0.1, MisMatch=10000; Tico BIB002 : THR_V=25, Block=16, NumRadius=4, THR_t=π, THR_Dist=6, MTI=6, µ=0.25.) The ROC curves show that Bozorth3, MCC and Deng dominate all methods, followed by Jiang. These four algorithms are substantially different from each other. For example, MCC uses cylinders as its local structure, while Deng uses the texture, and Jiang and Bozorth3 use the nearest neighbors. The consolidation type is also different. However, it is noteworthy how they differ in the use of additional features: Jiang and Deng use both the minutia type and the ridge count, while Bozorth3 and MCC only use the basic minutia information. It is also interesting that, even though MCC+L1 obtains good results when the GMR is high, it does not improve the results obtained with the bare use of MCC. Note that the MCC+L1 algorithm uses a different, less accurate variant of MCC (with binary encoding and a different consolidation), meant to be very efficiently implemented on hardware. This indicates that none of the characteristics described in Subsection 3.1 can be discarded as worse than the rest: the verification performance is determined by the matching algorithm as a whole, and each local structure and consolidation can supply useful information. Nevertheless, the use of additional features does not always lead to more accurate results. Along with the accuracy, the computational performance is a very important characteristic of a fingerprint matching algorithm, especially when it has to deal with large fingerprint databases. Table 8 summarizes the average matching times for the tests performed so far. Note that these times are measured as computational time, and are therefore not affected by the parallel framework in which the tests have been carried out. We can notice that in all cases Jiang is the fastest algorithm, followed by Qi. The former performs a simple consolidation and does not use any additional features, which makes the computation very fast. The latter does not involve any consolidation, and therefore performs the whole matching process from a local point of view. At the other extreme, Tan's algorithm is extremely slow, especially for databases with more minutiae per fingerprint. This algorithm computes all the triplets of the fingerprints and compares them. This computation grows combinatorially with the number of minutiae and therefore takes a long time for fingerprints with many minutiae (a small count of this growth is sketched after this paragraph). This is an example of an algorithm that could be improved by a previous minutiae filtering. It is curious to note that Qi's algorithm is very fast, although it also uses triplets. However, it includes a first candidate selection using the texture, avoiding the creation of all possible triplets. If we compare the overall performance of the algorithms, we can observe that the consolidation bears a high weight in the runtime. Complex consolidations require more computing time, as for MCC, Deng and Tico.
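To make the combinatorial growth concrete, the snippet below simply counts the triplets that a matcher enumerating all minutia triplets must build. The minutiae counts are arbitrary illustrative values, not figures from the paper.

```python
from math import comb

# Triplets a triplet-enumerating matcher (such as Tan's) must build:
for n in (20, 40, 60, 80):
    print(f"{n:>2} minutiae -> {comb(n, 3):>6} triplets")
# 20 -> 1140, 40 -> 9880, 60 -> 34220, 80 -> 82160.
# Comparing two fingerprints with ~80 minutiae each then means up to
# ~82160^2 (about 7e9) triplet pairs, which explains the long runtimes
# and why a prior minutiae filtering or candidate selection helps so much.
```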
Another observation that can be made is that MCC+L1 is considerably faster than MCC. This is due to the structure of MCC+L1, which first compares the L1 features of the fingerprints and applies MCC only if they are similar enough. This hierarchical matching saves a lot of computing time, but also explains why MCC+L1 is often less accurate than MCC. Table 9 shows the results of the statistical tests for several accuracy measures, highlighting Bozorth3, MCC and Deng as the best algorithms; the Friedman p-values for these measures are 6.18e-11, 6.13e-11, 5.33e-11 and 7.34e-11. Tables 10 and 11 summarize the R100 and TPR values, respectively. Finally, Figure 2 displays the CMC curves for the FVC2002 db3a database. It is curious to observe that, while MCC+L1 is the best algorithm if we focus on the rank, MCC obtains better numeric results (for example for FVC2000 db4a), and Deng and Bozorth3 have higher TPR in most cases. The CMC curves explain this behavior: for low ranks, Deng and Bozorth3 perform better, and therefore have a higher TPR; MCC is slightly below Deng in accuracy, while MCC+L1 only obtains good results for very high ranks. Moving on to the captured databases, MCC obtains the best results for all measures and databases except DB1, in which Bozorth3 is better, and the ROC curves follow the same behavior. Jiang gets the worst values among the tested algorithms. MCC and Bozorth3 only use the basic minutiae information to build their local structures, while Deng takes into account texture information and some minutiae peculiarities such as the ridge count and the type. Therefore, the fact that Deng is able to obtain good results with the FVC databases, even though it is outperformed by MCC and Bozorth3 for the captured ones, suggests that the texture is less affected than the minutiae in the FVC bad quality images. It is also noteworthy that Jiang and Deng perform better with the DB3 and DB4 databases (plain fingerprints), while Bozorth3 excels on DB1 (swipe fingerprints), and MCC obtains better results with DB2 (rolled fingerprints). This could happen due to the convex hull computation carried out by MCC, which filters the minutiae on the borders of the fingerprint. Bozorth3, Deng and Jiang do not carry out any special treatment of those areas, which are more prone to errors. In all cases, the DB1 database (captured with a narrow swipe sensor) is the most difficult one for verification. As for the computing times, we observe the same behavior as with the FVC databases (Table 14) . Jiang is the fastest algorithm, followed by Bozorth3, MCC and Deng, which involve more complex consolidations and more information.
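The time savings of this hierarchical scheme can be sketched in a few lines; `cheap_score`, `full_score` and the threshold below are hypothetical placeholders, not the actual L1 and MCC definitions.

```python
def hierarchical_match(query, template, cheap_score, full_score, prefilter=0.3):
    """Two-stage matching in the spirit of MCC+L1: a cheap comparison first,
    and the expensive matcher only for promising candidates."""
    s = cheap_score(query, template)   # e.g. a compact binary descriptor
    if s < prefilter:
        return s                       # early reject: saves most of the runtime
    return full_score(query, template) # full, slower matcher
```

The accuracy trade-off is visible directly in the early-reject branch: any genuine pair whose cheap score falls below the threshold never reaches the full matcher.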
|
A survey on fingerprint minutiae-based local matching for verification and identification <s> Analysis and Empirical Results on Captured Databases <s> Proposes a fingerprint minutia matching technique, which matches the fingerprint minutiae by using both the local and global structures of minutiae. The local structure of a minutia describes a rotation and translation invariant feature of the minutia in its neighborhood. It is used to find the correspondence of two minutiae sets and increase the reliability of the global matching. The global structure of minutiae reliably determines the uniqueness of fingerprint. Therefore, the local and global structures of minutiae together provide a solid basis for reliable and robust minutiae matching. The proposed minutiae matching scheme is suitable for an online processing due to its high processing speed. Experimental results show the performance of the proposed technique. <s> BIB001 </s> A survey on fingerprint minutiae-based local matching for verification and identification <s> Analysis and Empirical Results on Captured Databases <s> This paper presents a novel minutiae matching approach to fingerprint verification. Given an input or a template fingerprint image, minutiae are extracted first. Using Delaunay triangulation, each fingerprint is then represented as a special connected graph with each node being a minutia point and each edge connecting two minutiae. Such a graph is used to define the neighborhood of a minutia that facilitates a local-structure-based matching of two minutiae from input and template fingerprints respectively. The possible alignment of an edge in input graph and an edge in template graph can be identified efficiently. A global matching score between two fingerprints is finally calculated by using an aligned-edge-guided triangle matching procedure. The effectiveness of the proposed approach is confirmed by a benchmark test on FVC2000 and FVC2002 databases. <s> BIB002 </s> A survey on fingerprint minutiae-based local matching for verification and identification <s> Analysis and Empirical Results on Captured Databases <s> In this paper, we introduce the Minutia Cylinder-Code (MCC): a novel representation based on 3D data structures (called cylinders), built from minutiae distances and angles. The cylinders can be created starting from a subset of the mandatory features (minutiae position and direction) defined by standards like ISO/IEC 19794-2 (2005). Thanks to the cylinder invariance, fixed-length, and bit-oriented coding, some simple but very effective metrics can be defined to compute local similarities and to consolidate them into a global score. Extensive experiments over FVC2006 databases prove the superiority of MCC with respect to three well-known techniques and demonstrate the feasibility of obtaining a very effective (and interoperable) fingerprint recognition implementation for light architectures. <s> BIB003
|
In the preceding section, the algorithms of Bozorth3 , Jiang BIB001 , Deng BIB002 and MCC BIB003 were highlighted as the most accurate for the FVC databases, as they are statistically better than the other methods both for verification and identification. This section presents a deeper study on the four captured databases described above, focusing on these four algorithms. Table 13 presents the results obtained in terms of EER, FMR100 and FMR1000. Figure 4 displays the ROC curves. Note that the error values for these databases are far better than those obtained for the FVC ones, which are designed for test purposes and whose quality is deliberately poor.
|
A Survey on Energy Efficient Narrowband Internet of Things (NBIoT): Architecture, Application and Challenges <s> I. INTRODUCTION <s> Ubiquitous sensing enabled by Wireless Sensor Network (WSN) technologies cuts across many areas of modern day living. This offers the ability to measure, infer and understand environmental indicators, from delicate ecologies and natural resources to urban environments. The proliferation of these devices in a communicating-actuating network creates the Internet of Things (IoT), wherein sensors and actuators blend seamlessly with the environment around us, and the information is shared across platforms in order to develop a common operating picture (COP). Fueled by the recent adaptation of a variety of enabling wireless technologies such as RFID tags and embedded sensor and actuator nodes, the IoT has stepped out of its infancy and is the next revolutionary technology in transforming the Internet into a fully integrated Future Internet. As we move from www (static pages web) to web2 (social networking web) to web3 (ubiquitous computing web), the need for data-on-demand using sophisticated intuitive queries increases significantly. This paper presents a Cloud centric vision for worldwide implementation of Internet of Things. The key enabling technologies and application domains that are likely to drive IoT research in the near future are discussed. A Cloud implementation using Aneka, which is based on interaction of private and public Clouds is presented. We conclude our IoT vision by expanding on the need for convergence of WSN, the Internet and distributed computing directed at technological research community. <s> BIB001 </s> A Survey on Energy Efficient Narrowband Internet of Things (NBIoT): Architecture, Application and Challenges <s> I. INTRODUCTION <s> This paper provides an overview of the Internet of Things (IoT) with emphasis on enabling technologies, protocols, and application issues. The IoT is enabled by the latest developments in RFID, smart sensors, communication technologies, and Internet protocols. The basic premise is to have smart sensors collaborate directly without human involvement to deliver a new class of applications. The current revolution in Internet, mobile, and machine-to-machine (M2M) technologies can be seen as the first phase of the IoT. In the coming years, the IoT is expected to bridge diverse technologies to enable new applications by connecting physical objects together in support of intelligent decision making. This paper starts by providing a horizontal overview of the IoT. Then, we give an overview of some technical details that pertain to the IoT enabling technologies, protocols, and applications. Compared to other survey papers in the field, our objective is to provide a more thorough summary of the most relevant protocols and application issues to enable researchers and application developers to get up to speed quickly on how the different protocols fit together to deliver desired functionalities without having to go through RFCs and the standards specifications. We also provide an overview of some of the key IoT challenges presented in the recent literature and provide a summary of related research work. Moreover, we explore the relation between the IoT and other emerging technologies including big data analytics and cloud and fog computing. We also present the need for better horizontal integration among IoT services. 
Finally, we present detailed service use-cases to illustrate how the different protocols presented in the paper fit together to deliver desired IoT services. <s> BIB002 </s> A Survey on Energy Efficient Narrowband Internet of Things (NBIoT): Architecture, Application and Challenges <s> I. INTRODUCTION <s> In recent year, the Internet of Things (IoT) has drawn significant research attention. IoT is considered as a part of the Internet of the future and will comprise billions of intelligent communicating `things'. The future of the Internet will consist of heterogeneously connected devices that will further extend the borders of the world with physical entities and virtual components. The Internet of Things (IoT) will empower the connected things with new capabilities. In this survey, the definitions, architecture, fundamental technologies, and applications of IoT are systematically reviewed. Firstly, various definitions of IoT are introduced; secondly, emerging techniques for the implementation of IoT are discussed; thirdly, some open issues related to the IoT applications are explored; finally, the major challenges which need addressing by the research community and corresponding potential solutions are investigated. <s> BIB003 </s> A Survey on Energy Efficient Narrowband Internet of Things (NBIoT): Architecture, Application and Challenges <s> I. INTRODUCTION <s> An ultra-low power (ULP), energy-harvesting system-on-chip, that can operate in various application scenarios, is needed for enabling the trillions of Internet-of-Things (IoT) devices. However, energy from the ambient sources is little and system power consumption is high. Circuits and system development require an optimal use of available energy. In this paper, we present circuits that can improve the energy utilization in an IoT device by providing improvements at critical points of the flow of harvested energy. A boost converter circuit, that can harvest energy from 10-mV input voltage and a few nanowatt of input power, makes more harvested energy available for the IoT device. A single-inductor-multiple-output buck-boost converter provides high-efficiency and low-voltage power management solution to put most of the harvested energy for system use. A real time clock and ULP bandgap reference circuit significantly reduce the standby power consumption. The proposed ULP circuits are developed in 130-nm CMOS technology. The combined effects of these circuits and the system design technique can improve the life-time of an example IoT device by over four times in higher power consumption mode and over 70 times in ULP mode. <s> BIB004 </s> A Survey on Energy Efficient Narrowband Internet of Things (NBIoT): Architecture, Application and Challenges <s> I. INTRODUCTION <s> In this paper, we review the background and state-of-the-art of the narrow-band Internet of Things (NB-IoT). We first introduce NB-IoT general background, development history, and standardization. Then, we present NB-IoT features through the review of current national and international studies on NB-IoT technology, where we focus on basic theories and key technologies, i.e., connection count analysis theory, delay analysis theory, coverage enhancement mechanism, ultra-low power consumption technology, and coupling relationship between signaling and data. 
Subsequently, we compare several performances of NB-IoT and other wireless and mobile communication technologies in aspects of latency, security, availability, data transmission rate, energy consumption, spectral efficiency, and coverage area. Moreover, we analyze five intelligent applications of NB-IoT, including smart cities, smart buildings, intelligent environment monitoring, intelligent user services, and smart metering. Finally, we summarize security requirements of NB-IoT, which need to be solved urgently. These discussions aim to provide a comprehensive overview of NB-IoT, which can help readers to understand clearly the scientific problems and future research directions of NB-IoT. <s> BIB005 </s> A Survey on Energy Efficient Narrowband Internet of Things (NBIoT): Architecture, Application and Challenges <s> I. INTRODUCTION <s> Human-generated information has been the main interest of the wireless communication technologies designs for decades. However, we are currently witnessing the emerge of an entirely different paradigm of communication introduced by machines, and hence, the name machine type communication (MTC). Such paradigm arises as a result of the new applications included in the Internet-of-Things (IoT) framework. Among the enabling technologies of the IoT, cellular-based communication is the most promising and more efficient. This is justified by the currently well-developed and mature radio access networks, along with the large capacities and flexibility of the offered data rates to support a large variety of applications. On the other hand, several radio-access-network groups put efforts to optimize the 3GPP LTE standard to accommodate for the new challenges by introducing new communication categories paving the way to support the machine-to-machine communication within the IoT framework. In this paper, we provide a step-by-step tutorial discussing the development of MTC design across different releases of LTE and the newly introduced user equipment categories, namely, MTC category (CAT-M) and narrowband IoT category (CAT-N). We start by briefly discussing the different physical channels of the legacy LTE. Then we provide a comprehensive and up-to-date background for the most recent standard activities to specify CAT-M and CAT-N technologies. We also emphasize on some of necessary concepts used in the new specifications, such as the narrowband concept used in CAT-M and the frequency hopping. Finally, we identify and discuss some of the open research challenges related to the implementation of the new technologies in real life scenarios. <s> BIB006 </s> A Survey on Energy Efficient Narrowband Internet of Things (NBIoT): Architecture, Application and Challenges <s> I. INTRODUCTION <s> In this article, a review of commercial devices on the edge of the Internet of Things (IoT), or IoT nodes, is presented in terms of hardware requirements. IoT nodes are the interface between the IoT and the physical world (e.g., sensor nodes). To this aim, we introduce a wide survey of existing devices made publicly available for the further analysis of trends and state of the art. This data-driven approach permits developing quantitative insight into the big picture of the current status of IoT nodes. The analysis shows that an order (ultimately two orders) of magnitude gap needs to be filled in terms of size, lifetime, and cost (energy efficiency) to ultimately make IoT nodes truly ubiquitous and trigger the widely expected exponential growth of the IoT ecosystem. 
Overall, this article presents a view from the edge of the IoT and a glimpse of its tipping point. <s> BIB007 </s> A Survey on Energy Efficient Narrowband Internet of Things (NBIoT): Architecture, Application and Challenges <s> I. INTRODUCTION <s> Recently, the Internet of Things (IoT) concept has attracted a lot of attention due to its capability to translate our physical world into a digital cyber world with meaningful information. The IoT devices are smaller in size, sheer in number, contain less memory, use less energy, and have more computational capabilities. These scarce resources for IoT devices are powered by small operating systems (OSs) that are specially designed to support the IoT devices’ diverse applications and operational requirements. These IoT OSs are responsible for managing the constrained resources of IoT devices efficiently and in a timely manner. In this paper, discussions on IoT devices and OS resource management are provided. In detail, the resource management mechanisms of the state-of-the-art IoT OSs, such as Contiki, TinyOS, and FreeRTOS, are investigated. The different dimensions of their resource management approaches (including process management, memory management, energy management, communication management, and file management) are studied, and their advantages and limitations are highlighted. <s> BIB008 </s> A Survey on Energy Efficient Narrowband Internet of Things (NBIoT): Architecture, Application and Challenges <s> I. INTRODUCTION <s> Energy harvesting technology provides a promising solution to enable internet of battery-less things (IoBT), as the lifetime and size of batteries become major limiting factors in the design and effective operation of internet of things (IoT). However, with constrained energy buffer size, the variation of ambient energy availability and wireless communication cast adverse effect on the operation of IoBT. There is a pressing demand for developing IoBT-specialized power management. In this paper, we propose a novel predictive power management (PPM) framework combining optimal working point, deviation aware predictive energy allocation, and energy efficient transmission power control. The optimal working point guarantees minimum power loss of IoBT systems. By predictively budgeting the available energy and using the optimal working point as a set-point, PPM mitigates the prediction error so that both power failure time and system power loss is minimized. The transmission power control module of PPM improves energy efficiency by dynamically selecting optimal transmission power level with minimum energy consumption. Real-world harvesting profiles are tested to validate the effectiveness of PPM. The results indicate that compared with the previous predictive power managers, PPM incurs up to $17.49\%$ reduction in system power loss and $93.88\%$ less power failure time while maintaining a high energy utilization rate. PPM also achieves $9.4\%$ to $23.22\%$ of maximum improvement of transmission energy efficiency compared with the state-of-the-art transmission power control schemes. <s> BIB009 </s> A Survey on Energy Efficient Narrowband Internet of Things (NBIoT): Architecture, Application and Challenges <s> I. INTRODUCTION <s> The Internet of Things (IoT) is expected to play an important role in the construction of next generation mobile communication services, and is currently used in various services. However, the power-hungry battery significantly limits the lifetime of IoT devices. 
Among the various lifetime extension techniques, this paper discusses mobile charging, which enables wireless power transfer based on radio frequency with mobile chargers (MCs). MCs function as traveling target IoT networks that provide energy to battery-operated IoT devices. However, MCs with an energy-constrained battery result in limitation of travel-time. This paper formulates a problem to minimize energy consumption for charging IoT devices by determining the path of motion of an MC and efficient charging points, and proves that the problem is NP-hard. An efficient algorithm, named best charging efficiency (BCE), is proposed to solve the problem and the upper bound of the BCE algorithm is guaranteed using the duality of linear programming. In addition, an improved BCE algorithm called branching second best efficiency algorithm with additional searching techniques is introduced. Finally, this paper analyzes the difference in performance among the proposed algorithms, optimal solutions, and the existing algorithm and concludes that the performance of the proposed algorithm is near optimal, within 1% of difference ratio in terms of charging efficiency and delay. <s> BIB010 </s> A Survey on Energy Efficient Narrowband Internet of Things (NBIoT): Architecture, Application and Challenges <s> I. INTRODUCTION <s> Abstract The paradigm of Internet of Things (IoT) is on rapid rise in today’s world of communication. Every networking device is being connected to the Internet to develop specific and dedicated applications. Data from these devices, called as IoT devices, is transmitted to the Internet through IoT Gateways (IGWs). IGWs support all the technologies in an IoT network. In order to reduce the cost involved with the deployment of IGWs, specialized low-cost devices called Solution Specific Gateways (SSGWs) are also employed alongside IGWs. These SSGWs are similar to IGWs except they support a subset of technologies supported by IGWs. A large number of applications are being designed which require IGWs and SSGWs to be deployed in remote areas. More often than not, gateways in such areas have to be run on battery power. Hence, power needs to be conserved in such networks for extending network life along with maintaining total connectivity. In this paper, we propose a dynamic spanning tree based algorithm for power-aware connectivity called SpanIoTPower-Connect which determines (near) optimal power consumption in battery-powered IoT networks. SpanIoTPower-Connect computes the spanning tree in the network in a greedy manner in order to minimize the power consumption and achieve total connectivity. Additionally, we propose an algorithm to conserve power in dynamic IoT networks where the connectivity demand changes with time. Our simulation results show that our algorithm performs better than Static Spanning Tree based algorithm for power-aware connectivity (Static ST) and a naive connectivity algorithm where two neighboring SSGWs are connected through every available technology. To the best of our knowledge, our work is the first attempt at achieving power-aware connectivity in battery-powered dynamic IoT networks. <s> BIB011
|
5G is on the horizon, and IoT will seize the spotlight there, as IoT devices will form a notable segment of the 5G network. The radical evolution of the current Internet into a network of interconnected objects lets those objects harvest information from the environment and interact with the physical world. The mammoth interest in connecting sensors, actuators, meters, cars, appliances, and so on to the Internet results in the Internet of Things BIB002 , BIB003 . According to IEA-4E (Electronic Devices and Networks Annex), the number of network-connected devices will reach 50 billion by 2020 . Thus enterprises are ushering in the modern era of automation, which will evidently change our daily lives by providing solutions in multiple sectors such as health, agriculture, retail, vehicles, industry, power grid, underwater monitoring, buildings, homes, environment, transportation and the smart home BIB001 . By 2024, the IoT industry is expected to generate a revenue of USD 4.3 billion, and this figure is expected to keep growing over the years. However, deploying such a gigantic IoT ecosystem brings in various challenges, such as cost-efficient, robust and flexible connectivity, interoperability of heterogeneous hardware, diverse security mechanisms BIB007 , and long battery life. The major deployment obstruction, however, is the constrained resource availability of IoT devices, i.e., limited energy, limited computation and limited processing capabilities BIB008 . Most importantly, the flavor of IoT turns bitter especially due to limited energy, as this leads to unanticipated human intervention. Hence the burning issue of efficient energy utilization is getting enormous traction from academia and industry. In the literature, various techniques have been proposed to tackle this critical issue, such as energy harvesting, sporadic transmission, resource allocation and clustering. Ju and Zhang BIB009 suggested the technique of predictive analysis with energy harvesting to make the battery of the IoT device obsolete. Na et al. BIB010 suggested a technique to charge the device wirelessly using RF. Further, power-aware connectivity using a spanning tree algorithm is suggested by Karthikeya et al. BIB011 . Shafiee et al. BIB004 designed circuits to manage the power of devices whose energy source is ambient. Another crucial challenge is to handle the diverse requirements of a wide range of IoT applications. Today there are two evident classes of IoT applications: Critical IoT and Massive IoT. Critical IoT applications such as autonomous driving or remote surgery require very low latency with ultra-high reliability. Massive IoT (M-IoT) applications like smart buildings, logistics, tracking and fleet management, smart agriculture, etc. require low-cost devices with reduced complexity that consume low power, wide coverage including currently uncovered areas, and performance flexibility to handle multiple applications with different latency and throughput requirements. According to current statistics, low data rate (<100 kbps) applications, i.e., M-IoT, are expected to form 60% of IoT connections in 2020 in comparison to medium and high data rate applications BIB005 . Hence most of the traffic will originate from cost-effective low-bit-rate services that will serve the connected world. Since massive communication between various low-power IoT devices differs from H2H (Human-to-Human) communication in terms of delay sensitivity and traffic pattern, it cannot be supported adequately by conventional cellular technologies.
As cellular technologies are designed for a different category of terminal, one that operates at a high data rate and consumes high power, their device complexity and thereby their cost are also high, even though they would aid in the deployment of IoT. Apart from this, technologies such as Wi-Fi, BLE, ZigBee, etc., offer short-range communication, though they consume less power; however, deploying them repeatedly to extend coverage increases the cost. Hence they are not a cost-effective solution for applications with wide coverage requirements like M-IoT. Thus a technology is required that can support massive connections at reduced cost with low power consumption and provide enhanced coverage . LPWAN technologies are the ones that can support this emerging market, as they allow low-power devices to communicate at low data rates over a wide area with a radius of several kilometers. LPWA offers abilities such as indoor penetration: as it uses the sub-1 GHz band, the signal propagates more reliably and with less power consumption than a 2.4 GHz signal (a rough path-loss comparison illustrating this point is sketched below). Low power consumption is achieved at the cost of low data rates and latencies at the higher end, in seconds or minutes. This ability is achieved by duty cycling, by (usually) using a star topology, by allowing devices to connect directly to the BS (thereby also removing the need for gateways and relays), or by reducing the burden on the device side (by using fog or edge computing). Further, for low cost, CAPEX (capital expenditure) and OPEX (operational expenditure) are reduced by lowering the hardware complexity, deploying fewer (LPWA) base stations, etc. Massive connectivity and QoS are two abilities where improvement is still required. Standards bodies like ETSI, 3GPP, IEEE and IETF are actively working on LPWA technologies. A range of applications such as precision monitoring, smart city, home automation, industrial asset monitoring, logistics, wildlife monitoring and smart metering is being catered for by LPWA technologies. As LPWA is expected to rule the market in the coming years, competition among LPWA technologies is becoming severe. Recently, to improve the foundation of IoT through cellular networks and to tussle with the existing proprietary LPWA technologies (LoRa, SigFox, RPMA, etc.), 3GPP has introduced three technologies in Rel-13, namely eMTC (enhanced Machine Type Communication), NB-IoT, and EC-GSM (Extended Coverage GSM). These cellular technologies operate in licensed spectrum and reuse the existing LTE infrastructure. Among the available connectivity technologies, NBIoT, specifically tailored for the emerging M-IoT market, is one of the most promising massive LPWA technologies for data perception and acquisition in low data rate applications. NBIoT can handle massive connections with low power consumption and provide wide-area coverage with deep indoor penetration and nomadic mobility BIB006 - . Most outstandingly, it offers reliable service by using licensed bands and avoids congestion problems. NB-IoT reduces the hardware complexity by 90% compared with LTE Cat-1. Moreover, NBIoT can coexist with the existing GSM and LTE networks, hence also reducing deployment cost. Besides, this new RAT (Radio Access Technology) reduces the signaling required for the transmission of data with respect to the conventional system. Altogether, NB-IoT can reduce the cost and energy consumption, which are the chief limitations of cellular network technology for IoT devices. The present communication contributes toward the next-generation green NBIoT and delivers a comprehensive, enriched study of NBIoT.
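The sub-GHz propagation advantage mentioned above can be illustrated with the standard free-space path-loss formula. Real links also depend on fading, obstacles and antenna gains, so the numbers below are only indicative.

```python
from math import log10

def fspl_db(distance_km: float, freq_mhz: float) -> float:
    """Free-space path loss: 20*log10(d_km) + 20*log10(f_MHz) + 32.44 dB."""
    return 20 * log10(distance_km) + 20 * log10(freq_mhz) + 32.44

d = 1.0  # km
print(f"868 MHz : {fspl_db(d, 868):.1f} dB")   # ~91.2 dB
print(f"2.4 GHz : {fspl_db(d, 2400):.1f} dB")  # ~100.0 dB
# Roughly 9 dB less free-space loss at 868 MHz over the same distance,
# one reason sub-GHz LPWA links reach farther and penetrate buildings
# better at the same transmit power.
```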
This extensive survey elaborates on resource allocation and energy efficiency techniques. Along with this, a detailed comparative analysis of IoT connectivity technologies against NBIoT is also presented. Further, two novel application-specific energy-efficient approaches are proposed, namely ''Zonal Thermal Pattern Analysis'' (ZTPA) and ''Energy Efficiency Adaptive Health Monitoring System'' (E2AHMS). Along with this, a real-time hardware implementation of the health application is given to support the proposed energy-efficient approach. This paper would be beneficial to readers working in this area. As per the literature survey and the research available in the area of IoT, a lot of work has been done to deliver services like smart health, smart agriculture, smart vehicles, etc. In the current scenario, IoT has become the first choice of industry, research and academia. However, there still exists a big gap, similar to WSN, i.e., energy optimization and energy-efficient networking, which acts as a big stumbling block for IoT networks. This becomes even more challenging as IoT devices have limited resource availability. This motivated us to study resource allocation techniques in detail. Furthermore, implementation of IoT on a massive scale would inflate carbon emissions in one way or the other. Hence, to realize the vision of green communication, energy efficiency techniques will play an important role. Apart from this, to cater to the new specificities of IoT (i.e., low-cost devices, wide coverage, long battery life, massive connection support) in a cost-effective way, NBIoT (the 3GPP-standardized LPWA technology) presents a good candidature. In this paper, as an effort toward green IoT, two energy efficiency techniques using NBIoT are also proposed, related to two specific areas, health and agriculture, which directly or indirectly impact human beings.
|
A Survey on Energy Efficient Narrowband Internet of Things (NBIoT): Architecture, Application and Challenges <s> II. INTERNET OF THINGS <s> Smart world is envisioned as an era in which objects (e.g., watches, mobile phones, computers, cars, buses, and trains) can automatically and intelligently serve people in a collaborative manner. Paving the way for smart world, Internet of Things (IoT) connects everything in the smart world. Motivated by achieving a sustainable smart world, this paper discusses various technologies and issues regarding green IoT, which further reduces the energy consumption of IoT. Particularly, an overview regarding IoT and green IoT is performed first. Then, the hot green information and communications technologies (ICTs) (e.g., green radio-frequency identification, green wireless sensor network, green cloud computing, green machine to machine, and green data center) enabling green IoT are studied, and general green ICT principles are summarized. Furthermore, the latest developments and future vision about sensor cloud, which is a novel paradigm in green IoT, are reviewed and introduced, respectively. Finally, future research directions and open problems about green IoT are presented. Our work targets to be an enlightening and latest guidance for research with respect to green IoT and smart world. <s> BIB001 </s> A Survey on Energy Efficient Narrowband Internet of Things (NBIoT): Architecture, Application and Challenges <s> II. INTERNET OF THINGS <s> The Internet of Things (IoT) is seen as a set of vertical application domains that share a limited number of common basic functionalities. In this view, consumer-centric solutions, platforms, data management, and business models have to be developed and consolidated in order to deploy effective solutions in the specific fields. The availability of low-cost general-purpose processing and storage systems with sensing/actuation capabilities coupled with communication capabilities are broadening the possibilities of IoT, leading to open systems that will be highly programmable and virtualized, and will support large numbers of application programming interfaces (APIs). IoT emerges as a set of integrated technologies — new exciting solutions and services that are set to change the way people live and produce goods. IoT is viewed by many as a fruitful technological sector in order to generate revenues. IoT covers a large wealth of consumer-centric technologies, and it is applicable to an even larger set of application domains. Innovation will be nurtured and driven by the possibilities offered by the combination of increased technological capabilities, new business models, and the rise of new ecosystems. The articles in this special section focus on several promising approaches to sensors, actuators, and new consumer devices. New communication capabilities (from short-range to LPWAN to 4G and 5G networks, with NB-IoT). <s> BIB002 </s> A Survey on Energy Efficient Narrowband Internet of Things (NBIoT): Architecture, Application and Challenges <s> II. INTERNET OF THINGS <s> In recent years, a tremendous amount of development has been seen in the field of Internet of Things (IoT) enabled solutions, triggering the advent of novel applications. This evolutionary trend has been led in part by radio frequency identification (RFID) and smart technologies, among others. Inspired by the endeavors in this area, the results reported here apply to a smart architecture for patient tracking and monitoring within hospitals. 
The aim is to implement RFID based solutions to track patients' location and movement, as well as expensive equipment, in hospitals. Experimental results are reported for a novel patient tracking system that uses multiple RFID tags in bracelets worn by patients to reliably determine their location within a certain area. We also present results on processing the phase of the received signal in order to detect movement of a patient. <s> BIB003
|
Communication and sensing abilities have evolved gradually due to advancements in technology. The evolution of IoT (Fig. 2) started with RFID (radio frequency identification), which was the first technology to realize the machine-to-machine concept. RFID detects and identifies the tagged entity wirelessly, by means of the data it transmits. Recently, RFID has been used in numerous IoT applications such as gesture detection BIB002 , patient RFID tracking systems BIB003 , and smart restaurants. RFID cannot sense critical environmental parameters, and this gives rise to the requirement for sensing technologies. Thereafter, several technical fields, including embedded computing, hardware miniaturization and wireless networking, aided in augmenting the capability of real-world things to sense, think, process and act, thus making things smart. Usually, these smart things have low computation capability, so clouds are used to offload the computational tasks, which in turn reduces the energy consumption. Today, the integration of RFID, WSN and MCC (mobile cloud computing), together with advancements in technology, has generated the umbrella term IoT. This concept is significantly revolutionizing the technical and business world, as it can offer anytime, anywhere, anything service. The term IoT, also known as ''cyber-physical systems (CPS)'', was first coined by Kevin Ashton in 1999 . In this paradigm, objects/things around us are connected to the Internet BIB001 . The IoT network, comprising billions of devices, is highly heterogeneous. The essence of this heterogeneity can be visualized by the fact that devices from different vendors run on diverse platforms even if they are doing the same task, and as a result they generate radically different ontologies. Thus, for IoT to speak a common language, a unified standardized IoT architecture is required to ensure interoperability and security. Many architectural layouts have been proposed for IoT, but none has converged to a standardized architecture. Some well-known IoT architectures, such as RAMI 4.0, IIRA, IoT-ARM, P2413, the Arrowhead Framework, WSO2, Microsoft Azure, the Internet-of-Everything reference model and the Intel IoT Platform Reference Architecture, are available. Among the various available architectures, this paper presents a five-layer architecture (Fig. 3) . Thus the IoT flow can be defined as first identification (to provide a clear identity to each device), then sensing (to gather data from the physical environment), thereafter communication, then integration of services, and finally extraction of knowledge. The new IoT specificities, like low power and wide range, low deployment and operational costs, long battery life (10 mA RX current, 100 nA sleep current) and low bit rates, differ from conventional networks in many aspects and bring in many challenges (a rough battery-lifetime estimate built from the quoted current figures is sketched below). Besides this, massive connectivity will further aggravate the problem. However, a solution is appearing on the horizon: 3GPP NB-IoT, which can cater to these requirements in an effective manner. As IoT has a diverse and wide range of requirements, one solution will not fit all. Therefore, in the next section, a detailed comparative analysis of NBIoT and other IoT-related technologies is discussed.
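Using the RX and sleep currents quoted above (10 mA and 100 nA), a back-of-the-envelope lifetime estimate for a duty-cycled node looks as follows. The battery capacity and the daily radio-on time are assumptions, and real figures would be lower once TX current peaks and battery self-discharge are included.

```python
def lifetime_years(battery_mah=2400.0,     # assumed capacity (roughly two AA cells)
                   active_ma=10.0,         # RX current quoted in the text
                   sleep_ma=100e-6,        # 100 nA sleep current quoted in the text
                   active_s_per_day=300.0  # assumed radio-on time per day
                   ):
    """Rough battery-life estimate for a duty-cycled IoT node."""
    day = 86400.0
    avg_ma = (active_ma * active_s_per_day + sleep_ma * (day - active_s_per_day)) / day
    return battery_mah / avg_ma / 24.0 / 365.0

print(f"{lifetime_years():.1f} years")  # ~7.9 years under these assumptions
```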
|
A Survey on Energy Efficient Narrowband Internet of Things (NBIoT): Architecture, Application and Challenges <s> III. COMPARISON OF NBIoT WITH OTHER IoT CONNECTING TECHNOLOGIES <s> The Internet of Things (IoT) incorporates multiple long-range, short-range, and personal area wireless networks and technologies into the designs of IoT applications. This enables numerous business opportunities in fields as diverse as e-health, smart cities, smart homes, among many others. This research analyses some of the major evolving and enabling wireless technologies in the IoT. Particularly, it focuses on ZigBee, 6LoWPAN, Bluetooth Low Energy, LoRa, and the different versions of Wi-Fi including the recent IEEE 802.11ah protocol. The studies evaluate the capabilities and behaviours of these technologies regarding various metrics including the data range and rate, network size, RF Channels and Bandwidth, and power consumption. It is concluded that there is a need to develop a multifaceted technology approach to enable interoperable and secure communications in the IoT. <s> BIB001 </s> A Survey on Energy Efficient Narrowband Internet of Things (NBIoT): Architecture, Application and Challenges <s> III. COMPARISON OF NBIoT WITH OTHER IoT CONNECTING TECHNOLOGIES <s> Low power and long range machine-to-machine (M2M) communication techniques are expected to provide ubiquitous connections for the wireless devices. In this paper, three major low power and long range M2M solutions are surveyed. The first type of solutions is referred to as the low power wide area (LPWA) network. The design of the LPWA techniques features low cost, low data rate, long communication range, and low power consumption. The second type of solutions is the IEEE 802.11ah which features higher data rates using a wider bandwidth than the LPWA-based solutions. The third type of solutions is operated under the cellular network infrastructure. Based on the analysis of the pros and cons of the enabling technologies of the surveyed M2M solutions, as well as the corresponding deployment strategies, the gaps in knowledge are identified. The paper also presents a summary of the research directions for improving the performance of the surveyed low power and long range M2M communication technologies. <s> BIB002 </s> A Survey on Energy Efficient Narrowband Internet of Things (NBIoT): Architecture, Application and Challenges <s> III. COMPARISON OF NBIoT WITH OTHER IoT CONNECTING TECHNOLOGIES <s> Low power wide area (LPWA) networks are attracting a lot of attention primarily because of their ability to offer affordable connectivity to the low-power devices distributed over very large geographical areas. In realizing the vision of the Internet of Things, LPWA technologies complement and sometimes supersede the conventional cellular and short range wireless technologies in performance for various emerging smart city and machine-to-machine applications. This review paper presents the design goals and the techniques, which different LPWA technologies exploit to offer wide-area coverage to low-power devices at the expense of low data rates. We survey several emerging LPWA technologies and the standardization activities carried out by different standards development organizations (e.g., IEEE, IETF, 3GPP, ETSI) as well as the industrial consortia built around individual LPWA technologies (e.g., LoRa Alliance, Weightless-SIG, and Dash7 alliance). We further note that LPWA technologies adopt similar approaches, thus sharing similar limitations and challenges. 
This paper expands on these research challenges and identifies potential directions to address them. While the proprietary LPWA technologies are already hitting the market with large nationwide roll-outs, this paper encourages an active engagement of the research community in solving problems that will shape the connectivity of tens of billions of devices in the next decade. <s> BIB003 </s> A Survey on Energy Efficient Narrowband Internet of Things (NBIoT): Architecture, Application and Challenges <s> III. COMPARISON OF NBIoT WITH OTHER IoT CONNECTING TECHNOLOGIES <s> Abstract By 2020, more than twenty five billion devices would be connected through wireless communications. In accordance with the rapid growth of the internet of things (IoT) market, low power wide area (LPWA) technologies have become popular. In various LPWA technologies, narrowband (NB)-IoT and long range (LoRa) are two leading technologies. In this paper, we provide a comprehensive survey on NB-IoT and LoRa as efficient solutions connecting the devices. It is shown that unlicensed LoRa has advantages in terms of battery lifetime, capacity, and cost. Meanwhile, licensed NB-IoT offers benefits in terms of QoS, latency, reliability, and range. <s> BIB004 </s> A Survey on Energy Efficient Narrowband Internet of Things (NBIoT): Architecture, Application and Challenges <s> III. COMPARISON OF NBIoT WITH OTHER IoT CONNECTING TECHNOLOGIES <s> The Internet of Things (IoT) is a promising technology which tends to revolutionize and connect the global world via heterogeneous smart devices through seamless connectivity. The current demand for machine-type communications (MTC) has resulted in a variety of communication technologies with diverse service requirements to achieve the modern IoT vision. More recent cellular standards like long-term evolution (LTE) have been introduced for mobile devices but are not well suited for low-power and low data rate devices such as the IoT devices. To address this, there is a number of emerging IoT standards. Fifth generation (5G) mobile network, in particular, aims to address the limitations of previous cellular standards and be a potential key enabler for future IoT. In this paper, the state-of-the-art of the IoT application requirements along with their associated communication technologies are surveyed. In addition, the third generation partnership project cellular-based low-power wide area solutions to support and enable the new service requirements for Massive to Critical IoT use cases are discussed in detail, including extended coverage global system for mobile communications for the Internet of Things, enhanced machine-type communications, and narrowband-Internet of Things. Furthermore, 5G new radio enhancements for new service requirements and enabling technologies for the IoT are introduced. This paper presents a comprehensive review related to emerging and enabling technologies with main focus on 5G mobile networks that is envisaged to support the exponential traffic growth for enabling the IoT. The challenges and open research directions pertinent to the deployment of massive to critical IoT applications are also presented in coming up with an efficient context-aware congestion control mechanism. <s> BIB005 </s> A Survey on Energy Efficient Narrowband Internet of Things (NBIoT): Architecture, Application and Challenges <s> III. 
COMPARISON OF NBIoT WITH OTHER IoT CONNECTING TECHNOLOGIES <s> Motivated by the increasing variance of suggested Internet of Things (IoT) applications and the lack of suitability of current wireless technologies in scalable, long range deployments, a number of diverging Low Power Wide Area (LPWA) technologies have been developed. These technologies promise to enable a scalable high range network on cheap low power devices, facilitating the development of a ubiquitous IoT. This paper provides a definition of this new LPWA paradigm, presents a systematic approach to defined suitable use cases, and undertakes a detailed comparison of current LPWA standards, including the primary technologies, upcoming cellular options, and remaining proprietary solutions. <s> BIB006
|
A myriad of IoT connectivity solutions is available to support a wide range of IoT applications with diverse requirements. To select the optimal technology for an application, various factors such as power consumption, security, deployment cost, communication range, data rate, throughput, and latency must therefore be considered BIB001 , BIB005 . A comparison of IoT connectivity solutions against these factors is given in . Among short-range wireless technologies, Wi-Fi uses radio waves for communication and operates in either the 2.4 GHz or 5 GHz band. The existing Wi-Fi standards 802.11 a/b/g/n/ac are constrained in both range and energy efficiency and are therefore not well suited to applications with low power consumption requirements. The IEEE has consequently introduced two new protocols, 802.11ah and 802.11ax, that can support thousands of devices at low power consumption; these two are considered the biggest move of the IEEE towards the massive IoT market. The Wi-Fi standard 802.11ah (HaLow) provides a range of 1 km and can easily penetrate barriers and obstacles, at data rates ranging from 150 kbps to 40 Mbps. The 802.11ax solution will provide four times the capacity and data rate with improved efficiency. These new solutions have great potential to capture the M2M market. Bluetooth Low Energy (BLE) is another wireless standard that offers the characteristics most IoT applications need, i.e., low power consumption and low cost, but only over a short range. BLE allows both connectionless and connection-oriented communication. It fragments data into small packets and then transmits them over the radio interface, which enables BLE to support data services with large payloads. It is used in applications such as wearables and locating things. ZigBee is a simple, low-cost, low-power wireless option. It is based on the IEEE 802.15.4 standard and can support about 65,000 nodes per network. Its range can be extended by using repeaters in a mesh topology, but this increases the deployment cost . ZigBee 3.0 is the newly introduced solution; it ensures device interoperability, is reliable and robust, and above all is green. Z-Wave is another pioneer of short-range communication, especially in the home-automation industry, as it provides a reliable way to wirelessly control multiple home appliances at low power. Z-Wave operates in the sub-gigahertz frequency range, around 900 MHz, and can support up to approximately 232 nodes per network. Frequently Listening Receiver Slave (FLiRS) nodes enable low-latency communication by employing a unique beaming technology, so that a device can transit from sleep to fully awake mode within one second; this makes Z-Wave suitable for short-range IoT applications. The Z-Wave 700 series can offer a range of 300 feet and can therefore connect gadgets placed outside the home and far off into the yard . Despite offering low power consumption and low cost, these short-range wireless technologies are limited in supporting long-distance communication, and hence offer limited mobility and deployment possibilities for the device. Moreover, extending the range requires repeaters to be deployed densely, which is extortionately expensive. LPWA technologies, in contrast, offer mass deployment, long range, and long battery life while requiring only low bandwidth. Hence LPWA fills a gap in the IoT connectivity landscape BIB002 , BIB003 . LoRa is developed by Semtech; to deploy a network, a NetID must be issued by the LoRa Alliance. It uses an Adaptive Data Rate (ADR) scheme which helps in extending battery lifetime.
It is based on the Chirp Spread Spectrum modulation technique, which retains the same low-power characteristics as FSK modulation yet considerably increases the communication range. LoRaWAN, being asynchronous, lets devices remain in sleep mode for as long as the application desires. Another advantage of LoRa is that its base stations provide wide coverage. However, since LoRa operates in unlicensed spectrum it cannot provide QoS, nor can it support dense networks, as it employs an ALOHA-based MAC protocol BIB004 . SigFox is another technology operating in the unlicensed sub-1 GHz band. It follows a cloud-based approach and, like LoRa, uses ALOHA, but with a restriction on the maximum number of messages a device may send. The prime advantage of SigFox is that it does not spend energy on sensing the medium, which saves energy; however, it requires the user to purchase a subscription. To provide long-range communication it uses ultra-narrowband modulation, transmitting each message in a 100 Hz-wide channel at a data rate of 100 or 600 bps depending on the region. Ingenu is another LPWAN technology; it offers high throughput and high scalability and uses the patented Random Phase Multiple Access (RPMA) scheme. RPMA improves the SNR by minimizing the overlap between transmitted signals. It is slightly more complex, as it uses a TDMA-based MAC protocol to efficiently allocate radio resources [34] . Another LPWAN technology, Weightless-P, uses cognitive radio and TV white spaces, so that devices utilize the bands as opportunistic users without causing interference to the primary user. It supports two-way communication and thereby acknowledges all transmissions; further, to maintain QoS and assure reliability, Automatic Repeat reQuest (ARQ) and Forward Error Correction (FEC) are used BIB006 . Turning to existing cellular technology, LTE could not provide solutions for M-IoT applications, i.e., low power consumption, low data rates, and wide coverage at low cost. 3GPP therefore proposed the licensed LTE-M/eMTC in Rel-12 to cater to the requirements of machine-type communication, which was further optimized in Rel-13. It leverages the existing LTE network, follows narrowband operation for reception and transmission, and provides extended coverage. To reduce power consumption it uses PSM (power saving mode) and eDRX mode . NBIoT (Narrowband IoT) is another licensed technology introduced by 3GPP in Rel-13. In comparison to other technologies, NB-IoT offers the features required by mMTC at low cost: (i) coverage in challenging positions such as underground or in basements; (ii) improved power saving mechanisms to enhance battery life; and (iii) simplified network procedures to reduce UE complexity. Apart from this, to support small data transfers it has optimized the CIoT User Plane and Control Plane. Another technology, EC-GSM, is also proposed by 3GPP as an extension of the existing GSM technology; it is suitable for many applications but overburdens the device and the network . Conclusion: For M-IoT applications, each LPWAN technology is promising in one way or the other for wide-area solutions, but picking a clear winner is difficult. LoRa and SigFox, being proprietary technologies, have an established market and give tough competition to 3GPP NBIoT, EC-GSM-IoT, and others. However, proprietary technologies operate in unlicensed spectrum, which suffers from duty-cycle constraints; this limits the transmission time and also incurs high interference, as the spectrum is shared between multiple technologies.
NBIoT, in contrast, operates in licensed spectrum and can support massive numbers of connections, operates at low data rates (though higher than LoRa), offers wide coverage with deep indoor penetration, performs well in dense areas, consumes less power than LoRa and SigFox, provides very good link budgets, good scalability, and QoS, and, apart from this, requires only a software update to the LTE or existing RAN infrastructure. It is thus ready for immediate roll-out in the market, which is essential for reducing deployment cost and time , . This ascertains that NBIoT has the ability to capture the IoT-LPWAN market, specifically in comparison to other LPWAN technologies.
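To make the selection criteria at the start of this comparison concrete, the short Python sketch below ranks a few of the discussed technologies against weighted application requirements. The attribute values, weights, and the scoring rule are illustrative assumptions only, not figures taken from the cited comparisons.

```python
# Illustrative sketch: ranking IoT connectivity options against application needs.
# All attribute values and weights are rough assumptions for demonstration only.

TECHS = {
    #            range_km  data_rate_kbps  battery_yrs  licensed
    "BLE":       (0.1,      1000,           1,           False),
    "ZigBee":    (0.1,      250,            2,           False),
    "LoRaWAN":   (10,       50,             10,          False),
    "SigFox":    (30,       0.6,            10,          False),
    "NB-IoT":    (15,       60,             10,          True),
    "LTE-M":     (10,       1000,           5,           True),
}

def score(tech, needs):
    """Return a crude weighted score for one technology against the needs dict."""
    rng, rate, batt, licensed = TECHS[tech]
    s = 0.0
    s += needs["w_range"] * min(rng / needs["range_km"], 1.0)
    s += needs["w_rate"] * min(rate / needs["rate_kbps"], 1.0)
    s += needs["w_battery"] * min(batt / needs["battery_yrs"], 1.0)
    if needs["needs_qos"] and licensed:          # licensed spectrum ~ better QoS
        s += needs["w_qos"]
    return s

# Example: a smart-metering style application (long range, tiny data, long battery life).
needs = {"range_km": 5, "rate_kbps": 10, "battery_yrs": 10, "needs_qos": True,
         "w_range": 0.3, "w_rate": 0.1, "w_battery": 0.4, "w_qos": 0.2}
ranked = sorted(TECHS, key=lambda t: score(t, needs), reverse=True)
print(ranked)
```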
|
A Survey on Energy Efficient Narrowband Internet of Things (NBIoT): Architecture, Application and Challenges <s> IV. NBIoT: NARROW BAND INTERNET OF THINGS <s> In this paper, we review the background and state-of-the-art of the narrow-band Internet of Things (NB-IoT). We first introduce NB-IoT general background, development history, and standardization. Then, we present NB-IoT features through the review of current national and international studies on NB-IoT technology, where we focus on basic theories and key technologies, i.e., connection count analysis theory, delay analysis theory, coverage enhancement mechanism, ultra-low power consumption technology, and coupling relationship between signaling and data. Subsequently, we compare several performances of NB-IoT and other wireless and mobile communication technologies in aspects of latency, security, availability, data transmission rate, energy consumption, spectral efficiency, and coverage area. Moreover, we analyze five intelligent applications of NB-IoT, including smart cities, smart buildings, intelligent environment monitoring, intelligent user services, and smart metering. Finally, we summarize security requirements of NB-IoT, which need to be solved urgently. These discussions aim to provide a comprehensive overview of NB-IoT, which can help readers to understand clearly the scientific problems and future research directions of NB-IoT. <s> BIB001 </s> A Survey on Energy Efficient Narrowband Internet of Things (NBIoT): Architecture, Application and Challenges <s> IV. NBIoT: NARROW BAND INTERNET OF THINGS <s> Internet of things (IoT) changes significantly the requirements for connectivity, mainly with regards to long battery life, low device cost, low deployment cost, extended coverage and support for a massive number of devices. Driven from these requirements, several different cellular and non-cellular low power wide area network (LPWAN) solutions are emerging and competing for IoT business and the overall connectivity market. Motivated by this, in this thesis, we review and compare the pros and the cons for 2 specific LPWANs, LoRaWAN and Narrowband IoT, as well as, we discuss their suitability for different IoT applications. Finally, we will simulate the LoRaWAN with the help of Matlab platform and compare it to an existing simulation of Narrowband IoT, to see the differences. <s> BIB002
|
NB-IoT is the latest technology, identified and standardized within a short time span in response to customer requirements and the pressure to compete with non-3GPP proprietary technologies. NBIoT can efficiently support the market of M-IoT applications. It is an independent radio interface, tightly connected with LTE, which also shows in its integration into the current LTE specifications BIB001 , , BIB002 , . It can address the needs of mMTC (massive Machine Type Communication) through the following features (Fig. 4) : NBIoT can support massive connections (more than 52K per channel): in the MTC communication model, users transfer small amounts of data at low frequency and are insensitive to latency, so multiple users can camp on one cell. Further, NBIoT supports two schemes, multi-tone and single-tone transmission. This offers the flexibility to schedule 12 subcarriers with a subcarrier spacing of 15 kHz, or 48 subcarriers with a subcarrier spacing of 3.75 kHz under single-tone scheduling. Hence the eNodeB can support a large number of users in parallel. NBIoT uses a bandwidth of 180 kHz and operates in HD-FDD: in NBIoT, the UL/DL bandwidth is restricted to 180 kHz, so a less complex transceiver can be used. Further, the device operates in half-duplex mode and thus cannot receive and transmit simultaneously; this further decreases device cost, as no duplexer is required. NBIoT is designed to provide prolonged battery life: power consumption is reduced by using the eDRX (extended discontinuous reception) and PSM (power saving mode) features. In extended discontinuous reception, the UE monitors the paging channel only periodically, whereas in PSM the device enters a deep-sleep state and is not reachable by the network. NBIoT provides an extended coverage of 20 dB compared to GPRS (especially deep indoor penetration): to provide wide coverage, the transport block can be transmitted multiple times, i.e., up to 128 times in UL and 2048 times in DL. This improves the signal-to-interference-plus-noise ratio (SINR) and enables proper decoding of the signal. To determine the number of repetitions for coverage extension, three classes are defined: CE level 0, CE level 1, and CE level 2; for each of these classes, the number of repetitions is regulated separately. Besides this soft retransmission, the network bandwidth is also reduced. Based on these two features, a 20 dB coverage enhancement is achieved, where ∼7 dB comes from network bandwidth reduction and ∼13 dB from the allowed repeated transmissions. NBIoT offers operating-mode flexibility: in order to coexist with LTE and 2G, three deployment modes are available: standalone, in-band, and guard-band. NBIoT does not support any modulation scheme higher than QPSK, which keeps device complexity, and thereby cost, low. Moreover, to keep the PAPR (peak-to-average power ratio) low in the UL, π/2-BPSK and π/4-QPSK are used. NBIoT supports low-data-rate applications, which removes the requirement for high-capacity flash memory and hence reduces the chip area and thereby the cost of devices. NBIoT operates in a licensed band and can provide a telecommunication level of security. NBIoT achieves signaling optimization: in addition to the existing RRC, NBIoT uses DONAS (data over non-access stratum) and RRC (radio resource control) suspend/resume for signaling optimization, where DONAS enables the user to transmit data without activating a user plane and also supports sporadic data transmissions. RRC suspend/resume is a user-plane optimization procedure that introduces an efficient way to disable and restore the user plane.
Further, NB-IoT is designed for sporadic transmission of small messages between the device and the network. It is presumed that the device can exchange its small messages via a single cell, which removes the need for handoffs; if a cell change is nevertheless required, the device must first go into the idle state and then restart the cell selection process. Besides this, NB-IoT does not support other E-UTRA functions such as inter-RAT mobility, handover, dual connectivity, and CQI (Channel Quality Indicator) reporting.
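As a rough numerical illustration of how repetition and narrowband operation translate into link-budget gain (the mechanism behind the ~20 dB coverage extension described above), the following sketch computes both contributions with the common 10·log10(N) approximation. The chosen bandwidths and repetition count are examples and do not exactly reproduce the ~7 dB / ~13 dB split quoted in the text.

```python
import math

def repetition_gain_db(n_repetitions):
    """Ideal combining gain from repeating a transport block n times (10*log10(N))."""
    return 10 * math.log10(n_repetitions)

def bandwidth_reduction_gain_db(ref_bw_hz, nb_bw_hz):
    """Noise-floor reduction obtained by shrinking the occupied bandwidth."""
    return 10 * math.log10(ref_bw_hz / nb_bw_hz)

# Example: 15 kHz single-tone uplink vs. a 180 kHz reference carrier, plus 16 repetitions.
bw_gain = bandwidth_reduction_gain_db(180e3, 15e3)   # ~10.8 dB
rep_gain = repetition_gain_db(16)                     # ~12.0 dB
print(f"bandwidth gain ~{bw_gain:.1f} dB, repetition gain ~{rep_gain:.1f} dB, "
      f"total ~{bw_gain + rep_gain:.1f} dB")
```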
|
A Survey on Energy Efficient Narrowband Internet of Things (NBIoT): Architecture, Application and Challenges <s> A. BACKGROUND OF NBIoT <s> Narrowband internet of things (NB-IoT) is an emerging cellular technology that will provide improved coverage for massive number of low-throughput low-cost devices with low device power consumption in delay-tolerant applications. A new single tone signal with frequency hopping has been designed for NB-IoT physical random access channel (NPRACH). In this letter we describe this new NPRACH design and explain in detail the design rationale. We further propose possible receiver algorithms for NPRACH detection and time-of-arrival estimation. Simulation results on NPRACH performance including detection rate, false alarm rate, and time-of-arrival estimation accuracy are presented to shed light on the overall potential of NB-IoT systems. <s> BIB001 </s> A Survey on Energy Efficient Narrowband Internet of Things (NBIoT): Architecture, Application and Challenges <s> A. BACKGROUND OF NBIoT <s> Many use cases in the Internet of Things (IoT) will require or benefit from location information, making positioning a vital dimension of the IoT. The 3GPP has dedicated a significant effort during its Release 14 to enhance positioning support for its IoT technologies to further improve the 3GPPbased IoT eco-system. In this article, we identify the design challenges of positioning support in LTE-M and NB-IoT, and overview the 3GPP's work in enhancing the positioning support for LTE-M and NB-IoT. We focus on OTDOA, which is a downlink based positioning method. We provide an overview of the OTDOA architecture and protocols, summarize the designs of OTDOA positioning reference signals, and present simulation results to illustrate the positioning performance. <s> BIB002 </s> A Survey on Energy Efficient Narrowband Internet of Things (NBIoT): Architecture, Application and Challenges <s> A. BACKGROUND OF NBIoT <s> We propose an enhanced access reservation protocol (ARP) with a partial preamble transmission mechanism for the narrow band Internet of Things (NB-IoT) systems. The proposed ARP can enhance the ARP performance by mitigating the occurrence of preamble collisions, while being compatible with the conventional NB-IoT ARP. We provide an analytical model that captures the performance of the proposed ARP in terms of false alarm, misdetection, and collision probabilities. Moreover, we investigate a tradeoff between the misdetection and the collision probabilities, and optimize the proposed ARP according to the system loads. The results show that the proposed ARP outperforms the conventional NB-IoT ARP, in particular, at heavier system loads. <s> BIB003 </s> A Survey on Energy Efficient Narrowband Internet of Things (NBIoT): Architecture, Application and Challenges <s> A. BACKGROUND OF NBIoT <s> We derive the uplink system model for In-band and Guard-band narrowband Internet of Things (NB-IoT). The results reveal that the actual channel frequency response (CFR) is not a simple Fourier transform of the channel impulse response, due to sampling rate mismatch between the NB-IoT user and long term evolution (LTE) base station. Consequently, a new channel equalization algorithm is proposed based on the derived effective CFR. In addition, the interference is derived analytically to facilitate the co-existence of NB-IoT and LTE signals. This letter provides an example and guidance to support network slicing and service multiplexing in the physical layer. 
<s> BIB004 </s> A Survey on Energy Efficient Narrowband Internet of Things (NBIoT): Architecture, Application and Challenges <s> A. BACKGROUND OF NBIoT <s> Narrowband Internet-of-Things (NB-IoT) is one of the emerging 5G technologies, but might introduce narrowband interference (NBI) to existing broadband systems, such as long-term evolution advanced (LTE-A) systems. Thus, the mitigation of the NB-IoT interference to LTE-A is an important issue for the harmonic coexistence and compatibility between 4G and 5G. In this paper, a newly emerged sparse approximation technique, block sparse Bayesian learning (BSBL), is utilized to estimate the NB-IoT interference in LTE-A systems. The block sparse representation of the NBI is constituted through the proposed temporal differential measuring approach, and the BSBL theory is utilized to recover the practical block sparse NBI. A BSBL-based method, partition estimated BSBL, is proposed. With the aid of the estimated block partition beforehand, the Bayesian parameters are obtained to yield the NBI estimation. The intra-block correlation (IBC) is considered to facilitate the recovery. Moreover, exploiting the inherent structure of the identical IBC matrix, another method of informative BSBL is proposed to further improve the accuracy, which does not require prior estimation of the block partition. Reported simulation results demonstrate that the proposed methods are effective in canceling the NB-IoT interference in LTE-A systems, and significantly outperform other conventional methods. <s> BIB005 </s> A Survey on Energy Efficient Narrowband Internet of Things (NBIoT): Architecture, Application and Challenges <s> A. BACKGROUND OF NBIoT <s> LPWAN is a type of wireless telecommunication network designed to allow long range communications with relaxed requirements on data rate and latency between the core network and a high-volume of battery-operated devices. This article first reviews the leading LPWAN technologies on both unlicensed spectrum (SIGFOX, and LoRa) and licensed spectrum (LTE-M and NB-IoT). Although these technologies differ in many aspects, they do have one thing in common: they all utilize the narrow-band transmission mechanism as a leverage to achieve three fundamental goals, that is, high system capacity, long battery life, and wide coverage. This article introduces an effective bandwidth concept that ties these goals together with the transmission bandwidth, such that these contradicting goals are balanced for best overall system performance. <s> BIB006 </s> A Survey on Energy Efficient Narrowband Internet of Things (NBIoT): Architecture, Application and Challenges <s> A. BACKGROUND OF NBIoT <s> With the proliferation of mobile devices, indoor fingerprinting-based localization has caught considerable interest on account of its high precision. Meanwhile, channel state information (CSI), as a promising positioning characteristic, has been gradually adopted as an enhanced channel metric in indoor positioning schemes. In this paper, we propose a CSI amplitude fingerprinting-based localization algorithm in Narrowband Internet of Things system, in which we optimize a centroid algorithm based on CSI propagation model. In particular, in the fingerprint matching, we utilize the method of multidimensional scaling (MDS) analysis to calculate the Euclidean distance and time-reversal resonating strength between the target point and the reference points and then employ the ${K}$ -nearest neighbor (KNN) algorithm for location estimation. 
By conjugate gradient method, moreover, we optimize the localization error of triangular centroid algorithm and combine the positioning result with MDS and KNN’s estimated position to get the final estimated position. Experiment results show that compared to some existing localization methods, our proposed algorithm can effectively reduce positioning error. <s> BIB007
|
NBIoT is in the initial stage of deployment, and there is still room for optimization. The various works carried out in the context of NBIoT optimization are discussed in this section. Song et al. BIB007 proposed an indoor localization algorithm for NB-IoT systems using channel state information (CSI) fingerprinting. The proposed algorithm estimates the position with low complexity by observing the similarity between two CSI values and then converting the similarity into a relative value; to improve the position estimate, a triangular centroid algorithm and K-nearest neighbors are used. As a result, the positioning error was reduced in comparison with existing techniques. Further, to enable a node in the idle state to transmit data, Lin et al. BIB002 proposed an efficient SDT (Smallest Data Transmission) technique to increase the number of supportable devices. In this scheme the eNB broadcasts the maximum SDT size in the SIB (system information block), together with a group of SDT ids. The UE then checks the SDT value and determines whether to use this technique or not; if suitable, it initiates random access, and the eNB acknowledges with an RA response comprising an uplink resource grant. On successful reception the UE sends the PDU and starts a timer, and upon receiving the response within time it compares its id with the group ids. As a result, the authors found that more devices could be supported than with conventional control-plane solutions, whilst using limited uplink resources. However, massive connections result in a high probability of collision; thus Kim et al. BIB003 proposed an enhanced Access Reservation Protocol (EARP) with a partial preamble transmission mechanism. The proposed protocol reduces the collision probability, but at the cost of degraded detection ability. The authors found that EARP results in effective resource utilization with respect to the system load: when the load is low it reduces the probability of misdetection, whereas under heavy load the collision probability is reduced although the detection probability deteriorates. Further, Zhang et al. BIB004 analyzed the interference caused by NBIoT user equipment to LTE users. The authors considered only the in-band and guard-band modes and proposed an algorithm for ISI removal, i.e., channel equalization. Based on their observations, the authors stated that LTE-UE performance degrades due to the different sampling rate of the NBIoT user; in addition, the level of interference depends on two important factors, namely power attenuation and the guard band. A critical observation about the bit error rate was made for an LTE device: its BER improves as the guard-band bandwidth increases, whereas for IoT devices the BER remains unaffected. Liu et al. BIB005 , addressing the same issue of interference between NBIoT and LTE devices, proposed a framework based on Block Sparse Bayesian Learning (BSBL) and two algorithms for signal recovery. The authors observed that by cancelling the interfering NB-IoT signal, the LTE device could operate more effectively. Further, Lin et al. BIB001 proposed a single-tone design for the NPRACH (narrowband physical random access channel), and this work was integrated into 3GPP Rel-13. To reduce the overhead due to the cyclic prefix (CP), N samples are repeated n times and a single CP corresponding to these samples is then added collectively; as a result, the CP length is reduced.
After this, single-carrier hopping is used, in which fixed hopping is applied within a symbol group and pseudo-random hopping is applied between different symbol groups. Further, the time of arrival (ToA) and carrier frequency offset (CFO) are jointly estimated to determine misdetection and false alarm. As a result, the authors observed that with the proposed design the detection probability increases to up to 99% while the false-alarm probability drops well below 0.1%. Further, Yang et al. BIB006 suggested using the effective bandwidth (EB) to collectively handle all the components of LPWAN design, i.e., battery life, coverage, transmission (Tx) bandwidth, and spectral efficiency. According to the suggested concept, if the Tx bandwidth is greater than the EB, the system capacity is ultimately reduced; on the other hand, if the Tx bandwidth is less than the EB, more transmission time is required. However, the journey of optimization does not end here: various open issues still need to be addressed to realize NBIoT's success, some of which are discussed in Section VII.
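The effective-bandwidth trade-off suggested by Yang et al. can be illustrated with a back-of-the-envelope calculation: for a fixed transmit power and a fixed modulation, shrinking the bandwidth buys link margin (coverage) but stretches the time on air (battery). The sketch below assumes a 23 dBm device, a 150 dB coupling loss, a 5 dB noise figure, and a QPSK-like spectral efficiency, all of which are illustrative values rather than figures from BIB006.

```python
import math

def link_margin_db(tx_power_dbm, path_loss_db, noise_figure_db, bw_hz, required_snr_db):
    """Margin left after subtracting thermal noise (kTB), receiver NF and required SNR."""
    noise_dbm = -174 + 10 * math.log10(bw_hz) + noise_figure_db
    rx_dbm = tx_power_dbm - path_loss_db
    return rx_dbm - noise_dbm - required_snr_db

def time_on_air_ms(payload_bits, bw_hz, spectral_eff_bps_per_hz):
    """Transmission time for a payload at a fixed (MCS-determined) spectral efficiency."""
    return 1e3 * payload_bits / (bw_hz * spectral_eff_bps_per_hz)

# Narrower bandwidth buys link margin (coverage) but stretches the time on air (battery).
for bw in (3.75e3, 15e3, 180e3):
    margin = link_margin_db(23, 150, 5, bw, required_snr_db=0)
    t = time_on_air_ms(800, bw, 1.0)
    print(f"BW {bw/1e3:6.2f} kHz: link margin {margin:6.1f} dB, time on air {t:8.1f} ms")
```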
|
A Survey on Energy Efficient Narrowband Internet of Things (NBIoT): Architecture, Application and Challenges <s> B. HISTORY OF STANDARDIZATION <s> An ultra-low power (ULP), energy-harvesting system-on-chip, that can operate in various application scenarios, is needed for enabling the trillions of Internet-of-Things (IoT) devices. However, energy from the ambient sources is little and system power consumption is high. Circuits and system development require an optimal use of available energy. In this paper, we present circuits that can improve the energy utilization in an IoT device by providing improvements at critical points of the flow of harvested energy. A boost converter circuit, that can harvest energy from 10-mV input voltage and a few nanowatt of input power, makes more harvested energy available for the IoT device. A single-inductor-multiple-output buck-boost converter provides high-efficiency and low-voltage power management solution to put most of the harvested energy for system use. A real time clock and ULP bandgap reference circuit significantly reduce the standby power consumption. The proposed ULP circuits are developed in 130-nm CMOS technology. The combined effects of these circuits and the system design technique can improve the life-time of an example IoT device by over four times in higher power consumption mode and over 70 times in ULP mode. <s> BIB001 </s> A Survey on Energy Efficient Narrowband Internet of Things (NBIoT): Architecture, Application and Challenges <s> B. HISTORY OF STANDARDIZATION <s> In this paper, we review the background and state-of-the-art of the narrow-band Internet of Things (NB-IoT). We first introduce NB-IoT general background, development history, and standardization. Then, we present NB-IoT features through the review of current national and international studies on NB-IoT technology, where we focus on basic theories and key technologies, i.e., connection count analysis theory, delay analysis theory, coverage enhancement mechanism, ultra-low power consumption technology, and coupling relationship between signaling and data. Subsequently, we compare several performances of NB-IoT and other wireless and mobile communication technologies in aspects of latency, security, availability, data transmission rate, energy consumption, spectral efficiency, and coverage area. Moreover, we analyze five intelligent applications of NB-IoT, including smart cities, smart buildings, intelligent environment monitoring, intelligent user services, and smart metering. Finally, we summarize security requirements of NB-IoT, which need to be solved urgently. These discussions aim to provide a comprehensive overview of NB-IoT, which can help readers to understand clearly the scientific problems and future research directions of NB-IoT. <s> BIB002 </s> A Survey on Energy Efficient Narrowband Internet of Things (NBIoT): Architecture, Application and Challenges <s> B. HISTORY OF STANDARDIZATION <s> Energy harvesting technology provides a promising solution to enable internet of battery-less things (IoBT), as the lifetime and size of batteries become major limiting factors in the design and effective operation of internet of things (IoT). However, with constrained energy buffer size, the variation of ambient energy availability and wireless communication cast adverse effect on the operation of IoBT. There is a pressing demand for developing IoBT-specialized power management. 
In this paper, we propose a novel predictive power management (PPM) framework combining optimal working point, deviation aware predictive energy allocation, and energy efficient transmission power control. The optimal working point guarantees minimum power loss of IoBT systems. By predictively budgeting the available energy and using the optimal working point as a set-point, PPM mitigates the prediction error so that both power failure time and system power loss is minimized. The transmission power control module of PPM improves energy efficiency by dynamically selecting optimal transmission power level with minimum energy consumption. Real-world harvesting profiles are tested to validate the effectiveness of PPM. The results indicate that compared with the previous predictive power managers, PPM incurs up to $17.49\%$ reduction in system power loss and $93.88\%$ less power failure time while maintaining a high energy utilization rate. PPM also achieves $9.4\%$ to $23.22\%$ of maximum improvement of transmission energy efficiency compared with the state-of-the-art transmission power control schemes. <s> BIB003 </s> A Survey on Energy Efficient Narrowband Internet of Things (NBIoT): Architecture, Application and Challenges <s> B. HISTORY OF STANDARDIZATION <s> The Internet of Things (IoT) is expected to play an important role in the construction of next generation mobile communication services, and is currently used in various services. However, the power-hungry battery significantly limits the lifetime of IoT devices. Among the various lifetime extension techniques, this paper discusses mobile charging, which enables wireless power transfer based on radio frequency with mobile chargers (MCs). MCs function as traveling target IoT networks that provide energy to battery-operated IoT devices. However, MCs with an energy-constrained battery result in limitation of travel-time. This paper formulates a problem to minimize energy consumption for charging IoT devices by determining the path of motion of an MC and efficient charging points, and proves that the problem is NP-hard. An efficient algorithm, named best charging efficiency (BCE), is proposed to solve the problem and the upper bound of the BCE algorithm is guaranteed using the duality of linear programming. In addition, an improved BCE algorithm called branching second best efficiency algorithm with additional searching techniques is introduced. Finally, this paper analyzes the difference in performance among the proposed algorithms, optimal solutions, and the existing algorithm and concludes that the performance of the proposed algorithm is near optimal, within 1% of difference ratio in terms of charging efficiency and delay. <s> BIB004 </s> A Survey on Energy Efficient Narrowband Internet of Things (NBIoT): Architecture, Application and Challenges <s> B. HISTORY OF STANDARDIZATION <s> Abstract The paradigm of Internet of Things (IoT) is on rapid rise in today’s world of communication. Every networking device is being connected to the Internet to develop specific and dedicated applications. Data from these devices, called as IoT devices, is transmitted to the Internet through IoT Gateways (IGWs). IGWs support all the technologies in an IoT network. In order to reduce the cost involved with the deployment of IGWs, specialized low-cost devices called Solution Specific Gateways (SSGWs) are also employed alongside IGWs. These SSGWs are similar to IGWs except they support a subset of technologies supported by IGWs. 
A large number of applications are being designed which require IGWs and SSGWs to be deployed in remote areas. More often than not, gateways in such areas have to be run on battery power. Hence, power needs to be conserved in such networks for extending network life along with maintaining total connectivity. In this paper, we propose a dynamic spanning tree based algorithm for power-aware connectivity called SpanIoTPower-Connect which determines (near) optimal power consumption in battery-powered IoT networks. SpanIoTPower-Connect computes the spanning tree in the network in a greedy manner in order to minimize the power consumption and achieve total connectivity. Additionally, we propose an algorithm to conserve power in dynamic IoT networks where the connectivity demand changes with time. Our simulation results show that our algorithm performs better than Static Spanning Tree based algorithm for power-aware connectivity (Static ST) and a naive connectivity algorithm where two neighboring SSGWs are connected through every available technology. To the best of our knowledge, our work is the first attempt at achieving power-aware connectivity in battery-powered dynamic IoT networks. <s> BIB005
|
Before standardization, NBIoT was proposed in two formats, NBIoT-OFDMA and NBIoT-M2M, in 2014, where Qualcomm offered NBIoT-OFDMA and Huawei and Vodafone jointly offered NBIoT-M2M. These two formats were merged to form NB-CIoT (narrowband cellular IoT), which required a new chipset and had no backward compatibility with previous LTE releases ( BIB003 BIB004 BIB005 BIB001 BIB002 ). Further, another format, NBIoT-LTE, was proposed in 2015 which is fully compatible with the existing LTE specifications. Finally, in 2016, 3GPP agreed on the two proposals and delivered the standardized technology NBIoT. This technology has evolved over years of improvement. In this section, the roadmap of standardization (from Rel-8 to Rel-15) is discussed to trace the improvements made in these releases to support MTC. Standardization of LTE started in 2005, when 3GPP began in-depth research on MTC in the core network. In 2009, Rel-8 was standardized and brought features such as a peak DL data rate of 300 Mbps with the ability to operate in paired and unpaired frequency bands, and thus spectral flexibility with bandwidths ranging from 1.4 to 20 MHz. A new IP-based network, LTE, was introduced, which was a significant step toward IMT-A. In addition, to cater to the needs of multiple users, multiple-antenna operation has been part of the picture since the first 3GPP LTE release . However, the basic requirements of low-cost devices and low power consumption were left unattended. Thereafter, Rel-9 was standardized in 2010 and introduced additional features such as the public warning system (PWS), location identification services, and the SON (Self Organizing Network) feature to improve network configuration, and it added new spectrum bands. Further, in 2011, Rel-10 (also known as LTE-Advanced) was launched to extensively improve the throughput of the LTE system; for evaluation, Rel-10 was submitted to the ITU-R in 2010. In Rel-10, to enhance the bandwidth and bit rate, carrier aggregation was introduced for both TDD and FDD [52] , and to reduce interference, eICIC (enhanced Inter-Cell Interference Coordination) was added, in which ABS (Almost Blank Subframes) are used to confine data to a specific layer of the cell. Other features such as enhanced MIMO were also introduced. Further, Rel-11 was frozen in 2013. An important feature, CoMP (Coordinated Multi-Point operation), was announced to facilitate coordinated scheduling and transmission across multiple points, and the EPDCCH (Enhanced Physical Downlink Control Channel) was added to increase downlink capacity. Furthermore, Rel-12 was standardized in 2015. In this release, NAICS (Network Assisted Interference Cancellation and Suppression) was added to manage interference, and to achieve QoE, new UE categories were introduced with 50% reduced cost in comparison to Rel-8 Cat-1 devices. In support of machine-type communication, PSM (power saving mode) was also introduced to conserve energy. Finally, in Rel-13 , [57] , to further cater to the needs of machine-type communication, new features were added to the existing Rel-8-12 baseline: SCPTM (single-cell point-to-multipoint), indoor positioning, reduced latency, dual connectivity, and enhanced CA. LAA (licensed assisted access) was also introduced to allow operators to offload traffic to unlicensed spectrum via small cells without relying on WLAN. Two new categories of devices were introduced in Rel-13, Cat-M1 (eMTC) and Cat-NB1 (NBIoT), where for Cat-M1 devices the system bandwidth is confined to 1.4 MHz and the coverage is enhanced by 15 dB.
Altogether, these changes reduced device complexity and cost and enabled operators to reach terminal devices in poor coverage locations such as basements. To assure long battery life, the eDRX (extended Discontinuous Reception) feature was introduced, in which the device monitors the DL signals only for a short span of time and otherwise remains in the sleep state. For the further evolution of LTE to support mMTC, Rel-14 was also standardized with additional technologies and improvements to existing features, such as additional broadcast services, enhanced positioning, reduced latency, and massive multi-antenna systems. Currently, Rel-15 is under the standardization process and is expected to be finalized by September 2018 . The expected features of Rel-15 are listed in Table 2 .
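To indicate why eDRX and PSM are central to the multi-year battery-life targets mentioned above, the following back-of-the-envelope estimate averages the current draw over one daily reporting cycle; every number (currents, durations, battery capacity) is an illustrative assumption, and real deployments are further limited by battery self-discharge.

```python
def battery_life_years(capacity_mah, phases):
    """phases: list of (duration_s, current_ma) per reporting cycle; crude lifetime estimate."""
    cycle_s = sum(d for d, _ in phases)
    charge_mas = sum(d * i for d, i in phases)      # mA*seconds drawn per cycle
    avg_ma = charge_mas / cycle_s
    return (capacity_mah / avg_ma) / (24 * 365)     # mAh / mA = hours -> years

# One uplink report per day: a short Tx/Rx burst, a few eDRX paging occasions,
# and PSM deep sleep for the rest of the day (all currents are rough assumptions).
cycle = [
    (15.0,          200.0),    # connected Tx/Rx burst at full power
    (60.0,          1.0),      # eDRX: brief periodic paging monitoring
    (86400 - 75.0,  0.005),    # PSM deep sleep (~5 uA)
]
print(f"Estimated lifetime: ~{battery_life_years(5000, cycle):.1f} years on a 5000 mAh cell")
```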
|
A Survey on Energy Efficient Narrowband Internet of Things (NBIoT): Architecture, Application and Challenges <s> A. NBIoT RESOURCE ALLOCATION TECHNIQUES <s> Narrowband Internet of Things (NB-IoT) is a new narrow-band radio technology introduced in the Third Generation Partnership Project release 13 to the 5th generation evolution for providing low-power wide-area IoT. In NB-IoT systems, repeating transmission data or control signals has been considered as a promising approach for enhancing coverage. Considering the new feature of repetition, link adaptation for NB-IoT systems needs to be performed in 2-D, i.e., the modulation and coding scheme (MCS) and the repetition number. Therefore, existing link adaptation schemes without consideration of the repetition number are no longer applicable. In this paper, a novel uplink link adaptation scheme with the repetition number determination is proposed, which is composed of the inner loop link adaptation and the outer loop link adaptation, to guarantee transmission reliability and improve throughput of NB-IoT systems. In particular, the inner loop link adaptation is designed to cope with block error ratio variation by periodically adjusting the repetition number. The outer loop link adaptation coordinates the MCS level selection and the repetition number determination. Besides, key technologies of uplink scheduling, such as power control and transmission gap, are analyzed, and a simple single-tone scheduling scheme is proposed. Link-level simulations are performed to validate the performance of the proposed uplink link adaptation scheme. The results show that our proposed uplink link adaptation scheme for NB-IoT systems outperforms the repetition-dominated method and the straightforward method, particularly for good channel conditions and larger packet sizes. Specifically, it can save more than 14% of the active time and resource consumption compared with the repetition-dominated method and save more than 46% of the active time and resource consumption compared with the straightforward method. <s> BIB001 </s> A Survey on Energy Efficient Narrowband Internet of Things (NBIoT): Architecture, Application and Challenges <s> A. NBIoT RESOURCE ALLOCATION TECHNIQUES <s> Internet of things (IoT) is a way of connecting everything around us together, which would be widely applied in the 5G era. Narrow band IoT (NB-IoT) is one of the solutions introduced from 3rd Generation Partner Project (3GPP) Release-13 to fulfill this concept. Massive-connection and narrow band operation are two key features of NB-IoT. With current NB-IoT design, certain resources within narrow band are dedicated for transmitting paging information of massive connections. This leads to overload of the dedicated resources, and consequently increased padding bits and low resource efficiency. Moreover, UE power consumption would also rise due to the extra effort to decode larger packets. To solve above problems, a new resource allocation method is proposed, which includes a new definition of paging resource set and corresponding resource selection method. Link level simulation is conducted to show the benefit of our proposals. It is observed that approximately power consumption could be saved by 80% and the resource efficiency could be improved by 30.5% by utilizing our proposed methods. <s> BIB002 </s> A Survey on Energy Efficient Narrowband Internet of Things (NBIoT): Architecture, Application and Challenges <s> A. 
NBIoT RESOURCE ALLOCATION TECHNIQUES <s> Narrowband Internet of Things (NB-IoT) is the prominent technology that fits the requirements of future IoT networks. However, due to the limited spectrum (i.e., 180 kHz) availability for NB-IoT systems, one of the key issues is how to efficiently use these resources to support massive IoT devices? Furthermore, in NB-IoT, to reduce the computation complexity and to provide coverage extension, the concept of time offset and repetition has been introduced. Considering these new features, the existing resource management schemes are no longer applicable. Moreover, the allocation of frequency band for NB-IoT within LTE band, or as a standalone, might not be synchronous in all the cells, resulting in intercell interference (ICI) from the neighboring cells’ LTE users or NB-IoT users (synchronous case). In this paper, first a theoretical framework for the upper bound on the achievable data rate is formulated in the presence of control channel and repetition factor. From the conducted analysis, it is shown that the maximum achievable data rates are 89.2 Kbps and 92 Kbps for downlink and uplink, respectively. Second, we propose an interference aware resource allocation for NB-IoT by formulating the rate maximization problem considering the overhead of control channels, time offset, and repetition factor. Due to the complexity of finding the globally optimum solution of the formulated problem, a sub-optimal solution with an iterative algorithm based on cooperative approaches is proposed. The proposed algorithm is then evaluated to investigate the impact of repetition factor, time offset and ICI on the NB-IoT data rate, and energy consumption. Furthermore, a detailed comparison between the non-cooperative, cooperative, and optimal scheme (i.e., no repetition) is also presented. It is shown through the simulation results that the cooperative scheme provides up to 8% rate improvement and 17% energy reduction as compared with the non-cooperative scheme. <s> BIB003
|
Yu et al. BIB001 proposed a UL link adaptation scheme that takes the repetition factor into consideration to address two important aspects, throughput and reliability. The scheme works in two loops, inner and outer: the inner loop modifies the repetition factor to track the block error ratio, whereas the outer loop concentrates on selecting the modulation and coding scheme level and additionally decides the repetition factor by accounting for acknowledged and unacknowledged packets. As a result, the authors observed that the proposed technique can save 46% more resources and active time in comparison with straightforward techniques; however, they did not consider the effect of inter-channel interference in the evaluation. To handle massive connections effectively, NBIoT uses a multi-PRB design, in which most UEs use the anchor PRB (which satisfies the channel raster condition) to obtain paging and the system information block. This leads to overloading of the anchor PRB, and the overloading of this narrowband resource in turn leads to under-utilization of resources and increased power consumption. To overcome this issue, Liu et al. BIB002 suggested a resource allocation technique that balances the paging load by using non-anchor PRBs: existing users keep using the anchor PRB, while new NBIoT users use a non-anchor PRB for paging in idle mode. As a result, it was observed that resource utilization improved by 30% and power consumption was reduced by 80%. Malik et al. BIB003 proposed an interference-aware resource allocation scheme that accounts for the repetition number and time offset. Since NBIoT uses a linear modulation scheme (QPSK), its spectral efficiency is low. The authors first evaluated the theoretical maximum attainable data rates to be 89.2 kbps for the downlink and 92 kbps for the uplink. In addition, they analyzed the performance of a cooperative scheme in comparison with non-cooperative and optimal schemes and found that the cooperative scheme can reduce energy consumption by 17% and improve the data rate by up to 8%.
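A compact sketch of the two-loop idea behind the uplink link adaptation of BIB001 is given below: an inner loop nudges the repetition number according to the observed block error ratio, while an outer loop re-selects the MCS using ACK/NACK feedback. The thresholds, step rules, and tables are chosen purely for illustration and are not the exact algorithm of the cited work.

```python
REPETITIONS = [1, 2, 4, 8, 16, 32, 64, 128]      # allowed NPUSCH repetition numbers
MCS_LEVELS = list(range(0, 13))                   # coarse stand-in for the MCS table

class TwoLoopLinkAdaptation:
    """Illustrative inner/outer loop; not the algorithm of BIB001 verbatim."""
    def __init__(self, target_bler=0.1):
        self.target_bler = target_bler
        self.rep_idx = 3          # start at 8 repetitions
        self.mcs = 4

    def inner_loop(self, observed_bler):
        """Periodically adjust repetitions to track the BLER target."""
        if observed_bler > self.target_bler and self.rep_idx < len(REPETITIONS) - 1:
            self.rep_idx += 1     # too many errors -> repeat more
        elif observed_bler < self.target_bler / 2 and self.rep_idx > 0:
            self.rep_idx -= 1     # comfortable margin -> save resources
        return REPETITIONS[self.rep_idx]

    def outer_loop(self, acked):
        """Coordinate the MCS level with the repetition number using ACK/NACK feedback."""
        if acked and self.rep_idx == 0 and self.mcs < MCS_LEVELS[-1]:
            self.mcs += 1         # link is good even without repetitions
        elif not acked and self.mcs > 0:
            self.mcs -= 1         # back off before adding yet more repetitions
        return self.mcs

la = TwoLoopLinkAdaptation()
print(la.inner_loop(observed_bler=0.25), la.outer_loop(acked=False))
```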
|
A Survey on Energy Efficient Narrowband Internet of Things (NBIoT): Architecture, Application and Challenges <s> B. IoT BASED RESOURCE ALLOCATION TECHNIQUES <s> Undoubtedly, the Internet of Things (IoT) is the next big revolution in the field of wireless communication networks. IoT is an invisible network, which connects the physical world to the virtual world. Seamless Internet connectivity is essential between these two worlds for IoT to become a reality. In this aspect, long-term evolution advanced (LTE-A) is a promising technology, which meets the requirements of IoT. However, an exponential increase in the number of IoT devices will increase the energy consumption at base stations in LTE-A. Therefore, in this letter, we study the downlink energy efficiency aspect of LTE-A in the IoT networks. We propose a power control and resource block allocation scheme called breathing for an IoT network. Simulation results have shown that breathing performs better than the traditional maximum power allocation scheme and greedy power reduction scheme in terms of energy efficiency, system throughput, and system blocking. <s> BIB001 </s> A Survey on Energy Efficient Narrowband Internet of Things (NBIoT): Architecture, Application and Challenges <s> B. IoT BASED RESOURCE ALLOCATION TECHNIQUES <s> The Internet of Things (IoT) heralds a vision of future Internet where all physical things/devices are connected via a network to promote a heightened level of awareness about our world and dramatically improve our daily lives. Nonetheless, most wireless technologies in unlicensed band cannot provision ubiquitous and quality IoT services. In contrast, cellular networks support large-scale, quality of service guaranteed, and secured communications. However, tremendous proximal communications via local base stations (BSs) will lead to severe traffic congestion and huge energy consumption in conventional cellular networks. Device-to-device (D2D) communications can potentially offload traffic from and reduce energy consumption of BSs. In order to realize the vision of a truly global IoT, we propose a novel architecture, i.e., overlay-based green relay assisted D2D communications with dual batteries in heterogeneous cellular networks. By optimally allocating the network resource, our proposed resource allocation method provisions the IoT services and minimizes the overall energy consumption of the pico relay BSs. By balancing the residual green energy among the pico relay BSs, the green energy utilization has been maximized; this furthest saves the on-grid energy. Finally, we validate the performance of the proposed architecture through extensive simulations. <s> BIB002
|
Liu and Ansari BIB002 proposed a green approach by developing an overlay-based architecture that uses relay BSs powered by solar energy. Each relay BS stores the harvested energy in two batteries, used alternately, switching whenever one is exhausted. This omits grid-based energy utilization and also cuts power transmission costs. In this approach, the source-destination (SD) pairs are classified into two groups based on CSI: direct D2D pairs, which do not use a relay for transmission, and pairs that use a relay BS for transmitting information. When allocating bandwidth, direct D2D pairs are given more preference than relay-based D2D transmissions. As a result, the authors observed that as the number of SD pairs using direct D2D communication increases, very little bandwidth is left for the relay-based pairs, which deteriorates their performance. In addition, with a large number of direct D2D pairs the green energy is exhausted quickly; to balance the residual energy, relay BSs with sufficient leftover green energy transmit electricity via power lines to the relay BSs that have run out of green energy. Hence the proposed architecture maximizes green energy utilization and further saves on-grid energy. Further, Kotagi et al. BIB001 proposed a breathing technique to reduce BS energy consumption by assigning the transmission (Tx) power efficiently. First, the SINR of the user is compared with a pre-specified SINR threshold, and if it is greater, the Tx power is modified accordingly. Resources are then allocated according to the breathing pattern, i.e., IoT devices are arranged in increasing (inhaling) or decreasing (exhaling) order of their required Tx power and then mapped to resource blocks. In addition, if the available resources are fewer than required, the user/device with a heavy request and the highest Tx power is blocked. Moreover, the transmission power is reduced whenever the SINR of a device is greater than the threshold, which in turn reduces co-channel interference. The authors found that with the proposed approach the overall energy consumption of the BS is reduced and the throughput is improved.
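The breathing technique of Kotagi et al. can be sketched as follows: devices comfortably above the SINR threshold get their transmit power trimmed, devices are then ordered by required power (inhaling/exhaling), and requests that no longer fit into the available resource blocks are blocked. The data structure, numbers, and the simplified admission rule below are assumptions made for illustration.

```python
def breathing_allocate(devices, rb_budget, sinr_threshold_db, power_step_db=1.0,
                       inhale=True):
    """devices: list of dicts with 'id', 'tx_power_dbm', 'sinr_db', 'rb_demand'."""
    # 1) Trim transmit power for devices comfortably above the SINR threshold.
    for d in devices:
        if d["sinr_db"] > sinr_threshold_db:
            d["tx_power_dbm"] -= power_step_db
    # 2) Order by required transmit power (ascending = inhaling, descending = exhaling).
    ordered = sorted(devices, key=lambda d: d["tx_power_dbm"], reverse=not inhale)
    # 3) Map devices to resource blocks; block requests that exceed what is left.
    granted, blocked = [], []
    for d in ordered:
        if d["rb_demand"] <= rb_budget:
            rb_budget -= d["rb_demand"]
            granted.append(d["id"])
        else:
            blocked.append(d["id"])
    return granted, blocked

devs = [{"id": i, "tx_power_dbm": 10 + i, "sinr_db": 5 + i, "rb_demand": 2 + i % 3}
        for i in range(6)]
print(breathing_allocate(devs, rb_budget=10, sinr_threshold_db=8))
```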
|
A Survey on Energy Efficient Narrowband Internet of Things (NBIoT): Architecture, Application and Challenges <s> VOLUME 7, 2019 <s> In the near future, cellular machine-to-machine (M2M) communications are expected to play an important role to realize the Internet of Things (IoT). Due to the exponentially growing number of machine nodes, however, one of challenging issues is to accommodate their random access (RA) requests without severely degrading the performance of human-to-human (H2H) communications. In this letter, we propose an enhanced RA scheme with spatial group based reusable preamble allocation (ERA-SGRPA), which reuses the preamble resources based on a spatial grouping during the RA procedure. By allocating the identical preamble set to the groups which are far apart from each other, the ERA-SGRPA scheme enables the given preamble set to be efficiently reused. The performance evaluation shows that the ERA-SGRPA scheme significantly lowers the collision probability and reduces the access delay and can accommodate a significantly larger number of machine nodes without degrading the performance of H2H communications. <s> BIB001 </s> A Survey on Energy Efficient Narrowband Internet of Things (NBIoT): Architecture, Application and Challenges <s> VOLUME 7, 2019 <s> Internet-of-Things (IoT) devices can be equipped with multiple heterogeneous network interfaces. An overwhelmingly large amount of services may demand some or all of these interfaces’ available resources. Herein, we present a precise mathematical formulation of assigning services to interfaces with heterogeneous resources in one or more rounds. For reasonable instance sizes, the presented formulation produces optimal solutions for this computationally hard problem. We prove the NP-completeness of the problem and develop two algorithms to approximate the optimal solution for big instance sizes. The first algorithm allocates the most demanding service requirements first, considering the average cost of interfaces’ resources. The second one calculates the demanding resource shares and allocates the most demanding of them first by choosing randomly among equally demanding shares. Finally, we provide simulation results giving insight into services splitting over different interfaces for both cases. <s> BIB002 </s> A Survey on Energy Efficient Narrowband Internet of Things (NBIoT): Architecture, Application and Challenges <s> VOLUME 7, 2019 <s> With the proliferation of portable and mobile IoT devices and their increasing processing capability, we witness that the edge of network is moving to the IoT gateways and smart devices. To avoid Big Data issues (e.g. high latency of cloud based IoT), the processing of the captured data is starting from the IoT edge node. However, the available processing capabilities and energy resources are still limited and do not allow to fully process the data on-board. It calls for offloading some portions of computation to the gateway or servers. Due to the limited bandwidth of the IoT gateways, choosing the offloading levels of connected devices and allocating bandwidth to them is a challenging problem. This paper proposes a technique for managing computation offloading in a local IoT network under bandwidth constraints. The existing bandwidth allocation and computation offloading management techniques underutilize the gateway's resources (e.g. bandwidth) due to the fragmentation issue. This issue stems from the discrete coarse-grained choices (i.e. offloading levels) on the IoT end nodes. 
Our proposed technique addresses this issue, and utilizes the available resources of the gateway effectively. The experimental results show on average 1 hour (up to 1.5 hour) improvement in battery life of edge devices. The utilization of gateway's bandwidth increased by 40%. <s> BIB003 </s> A Survey on Energy Efficient Narrowband Internet of Things (NBIoT): Architecture, Application and Challenges <s> VOLUME 7, 2019 <s> While the huge number of machine-to-machine (M2M) devices connects to the LTE system, the bursty random access attempts from them are potential to cause severe random access collisions and degrade the successful probability as well as delay of network connection establishment. To address this issue, current LTE spec has specified access class barring (ACB) and enhanced access barring (EAB), but both methods lack a contention detection method to decide when to activate it as well as a proper way to dynamically adjust the parameter. In this work, we first proposed a contention detection method that can work with current long-term evolution (LTE) standard and is easy to be implemented. To further improve the random access performance in dynamic world, we proposed a context-aware dynamic resource allocation (CADRA) mechanism, which is a two-phase method to resolve random access contention: Phase I for the estimation of random access attempts, and Phase II for resource allocation. By CADRA, high resource efficiency and low random access delay can be achieved without prior knowledge about random access arrival traffic, and thus this approach is competent in diverse applications and scenarios of Internet of Things (IoT). Simulation results show that the proposed contention detection method works great with ACB and EAB. Our proposed CADRA has good performance in resource efficiency and delay while satisfying success probability. <s> BIB004 </s> A Survey on Energy Efficient Narrowband Internet of Things (NBIoT): Architecture, Application and Challenges <s> VOLUME 7, 2019 <s> Deployment of massive machine-to-machine (M2M) user equipments (UEs) in the current cellular network may cause overload in the radio access network (RAN). Access class barring (ACB) is an effective solution for reducing the RAN overload. In this letter, we propose an extended random access (RA) scheme to increase access success probability of M2M UEs by efficient use of available uplink radio resources. The proposed scheme allocates the available radio resources to the access-attempting UEs in two stages. In the first stage, the evolved node B (eNB) grants the available uplink resources to the UEs that have passed the ACB check. Then in the second stage, UEs that did not pass the ACB check utilize the remained unscheduled resources from the first stage. Simulation results show that the proposed scheme increases the number of successful requests and decreases the total service time of a traffic burst. <s> BIB005 </s> A Survey on Energy Efficient Narrowband Internet of Things (NBIoT): Architecture, Application and Challenges <s> VOLUME 7, 2019 <s> Cellular machine-to-machine (M2M) communication can be one of the major candidate technologies to develop an Internet of Things (IoT) platform. A massive number of machine nodes access the cellular networks and typically send/receive small-sized data. In this situation, severe random access (RA) overload and radio resource shortage problems may occur if there is no evolution in the conventional cellular system. 
Focusing on RA, we need a larger number of preambles, as well as a more efficient resource allocation scheme in order to accommodate a significantly large number of RA requests from machine nodes. In this paper, we propose a non-orthogonal resource allocation (NORA) scheme, combined with our spatial group based RA (SGRA) mechanism, in order to provide a sufficiently large number of preambles at the first step of the RA procedure and non-orthogonally allocate physical uplink shared channel resources at the second step of the RA procedure. As a result, the proposed SGRA-NORA scheme can significantly increase the RA success probability, compared with that of the conventional RA scheme. <s> BIB006 </s> A Survey on Energy Efficient Narrowband Internet of Things (NBIoT): Architecture, Application and Challenges <s> VOLUME 7, 2019 <s> Fog computing is a promising architecture to provide economical and low latency data services for future Internet of Things (IoT)-based network systems. Fog computing relies on a set of low-power fog nodes (FNs) that are located close to the end users to offload the services originally targeting at cloud data centers. In this paper, we consider a specific fog computing network consisting of a set of data service operators (DSOs) each of which controls a set of FNs to provide the required data service to a set of data service subscribers (DSSs). How to allocate the limited computing resources of FNs to all the DSSs to achieve an optimal and stable performance is an important problem. Therefore, we propose a joint optimization framework for all FNs, DSOs, and DSSs to achieve the optimal resource allocation schemes in a distributed fashion. In the framework, we first formulate a Stackelberg game to analyze the pricing problem for the DSOs as well as the resource allocation problem for the DSSs. Under the scenarios that the DSOs can know the expected amount of resource purchased by the DSSs, a many-to-many matching game is applied to investigate the pairing problem between DSOs and FNs. Finally, within the same DSO, we apply another layer of many-to-many matching between each of the paired FNs and serving DSSs to solve the FN-DSS pairing problem. Simulation results show that our proposed framework can significantly improve the performance of the IoT-based network systems. <s> BIB007 </s> A Survey on Energy Efficient Narrowband Internet of Things (NBIoT): Architecture, Application and Challenges <s> VOLUME 7, 2019 <s> The proliferation of the Internet of Things (IoT) demands a diverse and wide range of requirements in terms of latency, reliability, energy efficiency, etc. Future IoT systems must have the ability to deal with the challenging requirements of both users and applications. Cognitive fifth generation (5G) network is envisioned to play a key role in leveraging the performance of IoT systems. IoT systems in cognitive 5G network are expected to provide flexible delivery of broad services and robust operations under highly dynamic conditions. In this paper, we present multiband cooperative spectrum sensing and resource allocation framework for IoT in cognitive 5G networks. Multiband approach can significantly reduce energy consumption for spectrum sensing compared to the traditional single-band scheme. We formulate an optimization problem to determine a minimum number of channels to be sensed by each IoT node in multiband approach to minimize the energy consumption for spectrum sensing while satisfying probabilities of detection and false alarm requirements. 
We then propose a cross-layer reconfiguration scheme (CLRS) for dynamic resource allocation in IoT applications with different quality-of-service (QoS) requirements including data rate, latency, reliability, economic price, and environment cost. The potential game is employed for cross-layer reconfiguration, in which IoT nodes are considered as the players. The proposed CLRS efficiently allocate resources to satisfy QoS requirements through opportunistic spectrum access. Finally, extensive simulation results are presented to demonstrate the benefits offered by the proposed framework for IoT systems. <s> BIB008 </s> A Survey on Energy Efficient Narrowband Internet of Things (NBIoT): Architecture, Application and Challenges <s> VOLUME 7, 2019 <s> The Internet of Things gateways with multi-radio facilities in wireless networks can simultaneously communicate using multiple available channels. This feature enhances the carrying capacity of wireless links and thus increases the overall network throughput. However, designing an efficient resource allocation strategy is a complex task due to the decisive behavior of interference. There is only a limited number of available channels; therefore, the resource allocation requires careful planning to mitigate the effect of interference. This research proposes a backtracking search-based resource allocation scheme that maps resource allocation to the constraint satisfaction problem. Some of the resource allocation constraints are applied as soft constraints which are relaxed to find a feasible solution, provided the perfect allocation of limited resources is not possible. The proposed approach has been benchmarked through simulations and the results prove the effectiveness of the proposed approach especially in dense multi-hop network deployments. <s> BIB009 </s> A Survey on Energy Efficient Narrowband Internet of Things (NBIoT): Architecture, Application and Challenges <s> VOLUME 7, 2019 <s> The IoT is a novel platform for making objects more intelligent by connecting to the Internet. However, mass connections, big data processing, and huge power consumption restrict the development of IoT. In order to address these challenges, this article proposes a novel ECIoT architecture. To further enhance the system performance, radio resource and computational resource management in ECIoT are also investigated. According to the characteristics of the ECIoT, we mainly focus on admission control, computational resource allocation, and power control. To improve the performance of ECIoT, cross-layer dynamic stochastic network optimization is studied to maximize the system utility, based on the Lyapunov stochastic optimization approach. Evaluation results are provided which demonstrate that the proposed resource allocation scheme can improve throughput, reduce end-to-end delay, and also achieve an average throughput and delay trade-off. Finally, the future research topics of resource management in ECIoT are discussed. <s> BIB010
|
Further, Samie et al. BIB003 highlight the underutilization of gateway bandwidth. Because IoT devices have limited computation capability, part of their computation is offloaded to the edge node, i.e., the gateway; however, owing to the gateway's limited bandwidth and the coarse-grained offloading levels of the devices, the gateway's bandwidth remains underutilized. The authors therefore developed an algorithm that allocates bandwidth to, or reclaims it from, a device depending on the device's battery level. As a result, they found that battery life was extended by up to 1.5 hours and bandwidth utilization improved by 40%. Ejaz and Ibnkahla BIB008 proposed a framework for cognitive 5G networks (C-5GN). In this framework, a multiband sensing approach is used in which sensing is shared among nodes so that each node senses an equal and reduced number of channels, thereby lowering energy consumption, while it is still ensured that every channel is sensed by at least one node. In addition, a game-based cross-layer reconfiguration scheme (CLRS) is used to satisfy the diverse QoS requirements (latency, data rate, etc.) of each node. The authors observed that the proposed technique reduces the energy consumed during spectrum sensing and provides QoS economically. Pang et al. BIB004 proposed a complete contention resolution approach based on context-aware dynamic resource allocation (CADRA). An overload scenario is first detected by analyzing the number of non-empty preambles; if overload is detected, CADRA is invoked. It works in two phases: first, the number of random access (RACH) attempts is estimated from the successfully received and the empty preambles; then, in the second phase, resources are allocated. The authors observed that throughput increased with only moderate packet delay. Jang et al. BIB006 proposed a non-orthogonal resource allocation scheme combined with spatial group based random access (SGRA). SGRA divides the cell into spatial groups and detects the preambles transmitted by users in a given spatial group using a Zadoff-Chu sequence shifted by the spatial group number. Once the preambles used by a spatial group are determined with an ordered time-delay metric, subgroups are formed. The authors found that the proposed approach considerably increases the random access success probability. Further, Morvari and Ghasemi BIB005 proposed a resource allocation scheme that works in two stages and makes efficient use of the available uplink (UL) resources (a toy sketch of this two-stage idea follows this paragraph). In the first stage, users must pass the access class barring (ACB) check; only users that pass are permitted to select a preamble (PA) and use a physical uplink shared channel opportunity, while the others select a special PA. In the second stage, these remaining users take the ACB check again and, if they pass, may request the UL shared channel resources left unscheduled in the first stage. It is inferred that the number of requests that can be handled increases while the resource wastage caused by PA collisions drops. Another extended RA scheme with spatial group based preamble allocation is proposed in BIB001 . This scheme reuses a preamble set when the distance between nodes belonging to different spatial groups is greater than a pre-specified minimum distance. In addition, access delay is reduced, otherwise unused resources are utilized, and the collision probability is lowered by assigning distinct UL resources for transmission.
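To make the two-stage ACB idea of BIB005 concrete, the following minimal Python sketch simulates devices that either pass the ACB check and obtain resources in the first stage or retry in the second stage and pick up the leftover, unscheduled resources. The function name, the ACB factor, and the numbers of devices and resource blocks are illustrative assumptions, not values from the cited paper, and the sketch abstracts away preambles and scheduling grants entirely.

```python
import random

def two_stage_allocation(num_devices, num_resources, acb_factor, seed=0):
    """Toy model of a two-stage ACB-based uplink allocation (illustrative only)."""
    rng = random.Random(seed)
    devices = list(range(num_devices))
    # Stage 1: devices that pass the ACB check contend for the available resources.
    passed = [d for d in devices if rng.random() < acb_factor]
    granted_stage1 = passed[:num_resources]
    leftover = num_resources - len(granted_stage1)
    # Stage 2: devices that failed ACB retry and may use the leftover, unscheduled resources.
    blocked = [d for d in devices if d not in passed]
    retried = [d for d in blocked if rng.random() < acb_factor]
    granted_stage2 = retried[:max(leftover, 0)]
    return granted_stage1, granted_stage2

s1, s2 = two_stage_allocation(num_devices=200, num_resources=60, acb_factor=0.3)
print(f"stage-1 grants: {len(s1)}, stage-2 grants from leftovers: {len(s2)}")
```

The point of the sketch is only that resources left idle after the first ACB round are not wasted but offered to the devices that were initially barred.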
Interference is also a prevalent challenge for IoT device communication. Taking this into consideration, Iqbal et al. BIB009 proposed a constraint-based approach to improve the performance of IoT networks. To solve the resource allocation problem, the authors use a backtracking-search algorithm named Soft-GORA: the channel with the minimum cost at the top of an ordered list is tentatively allocated to a link and then checked against the hard constraints; if the constraints cannot be satisfied, the allocated channel is removed and the next best option is tried. By relaxing a few soft constraints, the authors observed improved throughput and reduced delay. Li et al. BIB010 proposed an edge-computing-based architecture together with a scheme that jointly controls the admission of new packets, the allocation of resources, and the power control. For this, the resource management controller at the fog node uses three pieces of information: the service, the queue state, and the channel state information (CSI). With the proposed scheme, the authors found that a trade-off between throughput and delay is achieved. Zhang et al. BIB007 proposed a joint optimization framework for the selection and allocation of resources among fog nodes (FNs), data service subscribers (DSSs), and data service operators (DSOs). They developed a Stackelberg game algorithm in which the DSOs provide computing resources (CR) to the DSSs, and the DSSs compete for additional computing resource blocks (CRBs) when they need more; a further matching game algorithm handles the pairing. It was found that the resulting system achieves high performance and utilizes its resources optimally. Further, cognitive 5G networks (C-5GN) are also expected to play a vital role in the success of IoT. Angelakis et al. BIB002 developed two algorithms, RAND-INT and average-cost allocation, to optimally allocate heterogeneous interface resources to services; both reduce the total cost of resource utilization and interface activation (a sketch of the greedy "heaviest demand first" idea appears after this paragraph). RAND-INT normalizes the demands to obtain an allocation order, serves the heaviest demand first by comparing the utilization costs of the interfaces, and then serves the remaining demands either by splitting them among two or more interfaces or not. The second algorithm assigns resources by averaging the total cost incurred. The authors studied the impact of the activation cost on how services are split and distributed among the interfaces and found that, for random services with high demand, the total interface cost increases linearly with the number of services. The techniques discussed above show that resource allocation can enhance the communication, connectivity, and collaboration of IoT devices. The chief objective of resource allocation techniques is to minimize the per-node power consumption, BER, and delay while maximizing throughput and network utilization. These techniques are summarized in Table 6.
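The greedy "heaviest demand first" ordering used by RAND-INT can be pictured with the small sketch below. It is a simplified, assumption-laden version: the service names, capacities, and costs are invented, and splitting a service across several interfaces (which the cited work BIB002 does support) is deliberately not modelled here.

```python
def greedy_service_assignment(demands, interface_capacity, interface_cost):
    """Assign the most demanding services first to the cheapest feasible interface.

    demands: dict service -> required resource units
    interface_capacity / interface_cost: dicts keyed by interface name
    Illustrative only; services are not split across interfaces in this sketch.
    """
    remaining = dict(interface_capacity)
    assignment, total_cost = {}, 0.0
    for service, need in sorted(demands.items(), key=lambda kv: kv[1], reverse=True):
        # pick the feasible interface with the lowest per-unit utilization cost
        feasible = [i for i, cap in remaining.items() if cap >= need]
        if not feasible:
            assignment[service] = None  # could not be served without splitting
            continue
        best = min(feasible, key=lambda i: interface_cost[i])
        remaining[best] -= need
        assignment[service] = best
        total_cost += need * interface_cost[best]
    return assignment, total_cost

demands = {"video": 8, "telemetry": 2, "firmware": 5}
caps = {"nbiot": 6, "wifi": 12}
costs = {"nbiot": 1.0, "wifi": 2.5}
print(greedy_service_assignment(demands, caps, costs))
```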
|
A Survey on Energy Efficient Narrowband Internet of Things (NBIoT): Architecture, Application and Challenges <s> 1) SLEEP/WAKEUP TECHNIQUES <s> In this article, we present a complete design for an optimized energy-efficient sensor for use in Internet of Things networks based on the concept of radio wake-up, with embedded addressing capability. Experimental and analytical results demonstrate the validity of the proposed design in terms of low power consumption and efficient functionality compared to existing solutions that are based on periodic wake-up patterns. <s> BIB001 </s> A Survey on Energy Efficient Narrowband Internet of Things (NBIoT): Architecture, Application and Challenges <s> 1) SLEEP/WAKEUP TECHNIQUES <s> Abstract Internet of Things (IoT) technologies can facilitate the preventive conservation of cultural heritage (CH) by enabling the management of data collected from electronic sensors. This work presents an IoT architecture for this purpose. Firstly, we discuss the requirements from the artwork standpoint, data acquisition, cloud processing and data visualization to the end user. The results presented in this work focuses on the most critical aspect of the architecture, which are the sensor nodes. We designed a solution based on LoRa and Sigfox technologies to produce the minimum impact in the artwork, achieving a lifespan of more than 10 years. The solution will be capable of scaling the processing and storage resources, deployed either in a public or on-premise cloud, embedding complex predictive models. This combination of technologies can cope with different types of cultural heritage environments. <s> BIB002
|
Khodr et al. BIB001 point out that although duty cycling reduces power consumption, it also reduces network reactivity. The authors therefore proposed a sensor design based on RF wake-up technology, in which the sensor is woken only when an address decoder confirms that the address carried by the received signal matches that of the IoT device. False wake-up calls are thus avoided, reducing energy consumption (a toy sketch of this address-match check follows below). Perles et al. BIB002 carried out a case study on cultural heritage and proposed an architecture that extends the sensor lifespan to up to 20 years using the unlicensed LPWAN technologies LoRa and Sigfox. To keep energy consumption low, the proposed architecture wakes up the microcontroller only when a new packet arrives. In addition, it uses a cluster management system, Elastic Cloud Computing Cluster (EC3), which can deploy and un-deploy nodes on demand, i.e., nodes are deployed when the load increases and un-deployed when they are idle.
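The address-decoder idea of BIB001 amounts to a simple address-match check before the main radio and MCU are powered up. In the sketch below, the frame layout (one 0xAA preamble byte followed by one address byte) and the node address are hypothetical; a real RF wake-up receiver performs this comparison in ultra-low-power hardware rather than in software.

```python
NODE_ADDRESS = 0b1011_0010  # hypothetical 8-bit node address

def should_wake(received_frame: bytes, node_address: int = NODE_ADDRESS) -> bool:
    """Wake the main radio/MCU only when the wake-up frame carries our address.

    Frames addressed to other nodes (false wake-up calls) are ignored, so the
    node stays in its low-power state. The frame layout is purely illustrative.
    """
    if len(received_frame) < 2 or received_frame[0] != 0xAA:
        return False          # malformed frame: stay asleep
    return received_frame[1] == node_address

print(should_wake(bytes([0xAA, 0b1011_0010])))  # True  -> wake up
print(should_wake(bytes([0xAA, 0b0001_0001])))  # False -> keep sleeping
```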
|
A Survey on Energy Efficient Narrowband Internet of Things (NBIoT): Architecture, Application and Challenges <s> 2) COGNITIVE RADIO TECHNIQUES <s> Due to the drastic growth and an upsurge in the wireless communication devices in the world in recent years, there is a high demand of uninterrupted and intelligent connectivity in a self-organising manner amongst the users. It becomes more challenging for the emerging users because of scarcity of bandwidth. To overcome the unforbidden challenges in the advanced technologies like smart cities, 5G and Internet of Things (IoT), Cognitive Radio provides the solution to achieve high throughput and continuous connectivity for reliable communication. A primary challenge in the Cognitive Radio (CR) technology is the identification of dependable Data Channels (DCHs) for Secondary Users (SUs) communication amongst the available channels, and the continuation of communication when the Primary Users (PUs) return. The objective of every SU is to intelligently choose reliable DCHs, thereby ensuring reliable connectivity and successful transfer of data frames across the cognitive networks. The proposed Reliable, Intelligent and Smart Cognitive Radio protocol consumes less computational time and transmits energy with high throughput, as compared to the benchmark Cognitive Radio MAC (CR-MAC) protocols. This paper provides new applications of CR technology for IoT and proposes new and effective solutions to the real challenges in CR technology that will make IoT more affordable and applicable. HighlightsWe introduce Novel Channel Selection Criteria for IoTCR ProtocolsWe introduce Backup Channel Techniques for the SUs when PUs turn ON on their respective channels.We propose a model for integration of IoT with CR technology.The proposed approach has achieved higher throughput and data rate as compared to other benchmark protocols.The proposed model can be utilised for other technologies such as Smart city, e-health, communication, 5G, it, IoE, etc., and make these technologies highly effective for the users around the world. <s> BIB001 </s> A Survey on Energy Efficient Narrowband Internet of Things (NBIoT): Architecture, Application and Challenges <s> 2) COGNITIVE RADIO TECHNIQUES <s> Cyber-physical Internet of things system (CPIoTS), as an evolution of Internet of things (IoT), plays a significant role in industrial area to support the interoperability and interaction of various machines (e.g., sensors, actuators, and controllers) by providing seamless connectivity with low bandwidth requirement. The fifth generation (5G) is a key enabling technology to revolutionize the future of industrial CPIoTS. In this paper, a communication framework based on 5G is presented to support the deployment of CPIoTS with a central controller. Based on this framework, multiple sensors and actuators can establish communication links with the central controller in full-duplex mode. To accommodate the signal data in the available channel band, the resource allocation problem is formulated as a mixed integer nonconvex programming problem, aiming to maximize the sum energy efficiency of CPIoTS. By introducing the transformation, we decompose the resource allocation problem into power allocation and channel allocation. Moreover, we consider an energy-efficient power allocation algorithm based on game theory and Dinkelbach's algorithm. 
Finally, to reduce the computational complexity, the channel allocation is modeled as a three-dimensional matching problem, and solved by iterative Hungarian method with virtual devices (IHM-VD). A comparison is performed with well-known existing algorithms to demonstrate the performance of the proposed one. The simulation results validate the efficiency of our proposed model, which significantly outperforms other benchmark algorithms in terms of meeting the energy efficiency and the QoS requirements. <s> BIB002
|
Further, Qureshi et al. BIB001 discussed a cognitive radio approach in which unlicensed (secondary) users opportunistically access spectrum holes that are not used by the licensed (primary) users. To achieve good throughput, uninterrupted connectivity, and reliable communication for both licensed and unlicensed users, the concept of the backup data channel (BDC) was introduced: if the licensed user returns, the unlicensed user resumes its communication on the BDC (a minimal sketch of this switch is given below). The communication time needed by secondary users to complete a task is thereby reduced, which lowers the energy consumption of the cognitive radio network and increases throughput. Furthermore, Li et al. BIB002 proposed a 5G-based framework for industrial cyber-physical IoT in which the central control unit acts as both the cloud and the central data processor and collects the sensed data from the physical world. The authors split resource allocation into two sub-problems, power allocation and channel allocation, which are solved separately: energy-efficient power allocation is achieved using game theory and Dinkelbach's algorithm, after which an iterative Hungarian method with virtual devices (IHM-VD) is used for channel allocation. The authors showed that IHM-VD performs outstandingly.
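The backup data channel behaviour can be sketched as a simple channel-selection rule: keep the current data channel while the primary user is absent, otherwise move to the first free backup channel. The function and the channel numbers below are illustrative assumptions; the actual protocol in BIB001 also handles spectrum sensing, negotiation between secondary users, and error cases.

```python
def next_channel(current, backup_list, pu_active):
    """Return the channel a secondary user should transmit on.

    current: the data channel currently in use
    backup_list: ordered backup data channels (BDCs) reserved in advance
    pu_active: callable(channel) -> True if the licensed (primary) user is back
    """
    if not pu_active(current):
        return current                      # keep using the current channel
    for ch in backup_list:
        if not pu_active(ch):
            return ch                       # resume on the first free backup channel
    return None                             # no channel available: defer transmission

busy = {5}                                   # pretend the PU reappeared on channel 5
print(next_channel(5, backup_list=[7, 3], pu_active=lambda ch: ch in busy))  # -> 7
```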
|
A Survey on Energy Efficient Narrowband Internet of Things (NBIoT): Architecture, Application and Challenges <s> 3) ROUTING TECHNIQUES <s> Topology control is one of the significant research topics in traditional wireless networks. The primary purpose of topology control ensures the connectivity of wireless nodes participated in the network. Low-power Internet of Things communication networks look like wireless network environments in which the main communication devices are wireless devices with limited energy like battery. In this paper, we propose a distributed topology control algorithm by merging the combinatorial block design from a design theory with the multiples of 2. The proposed technique especially focuses on asynchronous and asymmetric neighbor discovery. The concept of block design is used to generate the neighbor discovery schedule when a target duty cycle is given. In addition, the multiples of 2 are applied to overcome the challenge of the block design and support asymmetric operation. We analyze the worst case discovery latency and energy consumption numerically by calculating the total number of slots and wake-up slots based on the given duty cycle. It shows that our proposed method has the smallest total number of slots and wake-up slots among existing representative neighbor discovery protocols. The numerical analysis represents the proposed technique find neighbors quickly with minimum battery power compared with other protocols for distributed topology control. For future research direction, we could perform a simulation study or real experiment to investigate the best parameter for choosing the multiple of a certain number. <s> BIB001 </s> A Survey on Energy Efficient Narrowband Internet of Things (NBIoT): Architecture, Application and Challenges <s> 3) ROUTING TECHNIQUES <s> In this paper, a simple energy-efficient physical layer modulation scheme termed as random number modulation (RNM) is proposed. It is a class of new systems that harness the randomness of pseudo random number generators (RNGs) for efficient communication, and adds a new degree of freedom to digital communication systems. It is also highly adaptable to high-rate, low-latency, and low-rate, high-energy efficiency scenarios. This paper includes a detailed system model, preliminary performance analysis, and extended discussions. The performance of the proposed system is analyzed in terms of symbol and block error probabilities, energy efficiency, and latency. It is shown that there is a fundamental tradeoff between the energy efficiency and the latency of the proposed system, and they can be easily software programmable allowing devices to adapt to different circumstances and environments rapidly. Based upon the basic system model herein, it is anticipated that more sophisticated RNM systems can be developed. <s> BIB002 </s> A Survey on Energy Efficient Narrowband Internet of Things (NBIoT): Architecture, Application and Challenges <s> 3) ROUTING TECHNIQUES <s> Wireless sensor networks (WSNs) distribute hundreds to thousands of inexpensive micro-sensor nodes in their regions, and these nodes are important parts of Internet of Things (IoT). In WSN-assisted IoT, the nodes are resource constrained in many ways, such as storage resources, computing resources, energy resources, and so on. Robust routing protocols are required to maintain a long network lifetime and achieve higher energy utilization. 
In this paper, we propose a new energy-efficient centroid-based routing protocol (EECRP) for WSN-assisted IoT to improve the performance of the network. The proposed EECRP includes three key parts: a new distributed cluster formation technique that enables the self-organization of local nodes, a new series of algorithms for adapting clusters and rotating the cluster head based on the centroid position to evenly distribute the energy load among all sensor nodes, and a new mechanism to reduce the energy consumption for long-distance communications. In particular, the residual energy of nodes is considered in EECRP for calculating the centroid′s position. Our simulation results indicate that EECRP performs better than LEACH, LEACH-C, and GEEC. In addition, EECRP is suitable for networks that require a long lifetime and whose base station (BS) is located in the network. <s> BIB003 </s> A Survey on Energy Efficient Narrowband Internet of Things (NBIoT): Architecture, Application and Challenges <s> 3) ROUTING TECHNIQUES <s> Reliable and energy-efficient data forwarding is significant for industrial Internet of Things (IoT) applications. A routing protocol called Network Coding and Power Control based Routing (NCPCR) is presented for unreliable wireless networks to save energy. The proposed NCPCR incorporates network coding mechanism and considers dynamic transmit power and the number of packet transmissions. In addition to the optimal transmit power, we derive the probability of successful decoding an encoded packet to achieve the network coding gain. The proposed NCPCR adopts the derived network coding gain in making intelligent decisions on whether to apply network coding or not such that energy consumption is significantly reduced. Simulation results show that the proposed NCPCR outperforms existing routing protocols in terms of lower energy consumption. <s> BIB004
|
Further, Yi et al. BIB001 proposed a distributed topology control algorithm in which nodes discover their neighbors asymmetrically and asynchronously; multiples of 2 are used to align the wake-up schedules. The authors observed that energy consumption is reduced because fewer wake-up slots are needed to discover neighboring nodes. A major drawback of continuous sensing (i.e., non-selective monitoring) is the unfiltered data it produces, which causes significant energy consumption in the sensors. Basnayaka and Haas BIB002 proposed a physical-layer modulation scheme, random number modulation (RNM), that operates in two modes, energy-efficient and low-latency; switching between the modes is governed by the constellation order and the bits produced by a pseudo-random number generator. The approach trades latency for energy efficiency, since a user may transmit a data block only when it matches the sequence produced by the random number generator and must otherwise wait. Shen et al. BIB003 proposed a clustering-based protocol, EECRP, for WSN-assisted IoT to enhance network performance. A cluster head is first selected by comparing each node's residual energy with a pre-specified threshold, and then the node's distance from the energy-based centroid of the network is considered, which distributes the energy load evenly across the network (a minimal sketch of this selection step is given below). In addition, the base station broadcasts a maximum distance value to all nodes in order to avoid long-distance communication. As a result, the overall energy consumption of the network is found to be lower than that of established protocols such as LEACH. In extension to this, Tian et al. BIB004 discussed when network coding should be used to reduce energy consumption and proposed a network coding and power control based routing (NCPCR) protocol. With this protocol, intermediate nodes drop duplicate packets while keeping the relevant information in their buffers, and the source node selects the path with the minimum distance for transmitting data packets. Energy consumption is reduced because fewer packets need to be retransmitted and the shortest path is chosen; in addition, the protocol can handle scalability.
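A minimal sketch of the EECRP-style cluster-head choice described above follows. The node positions, energies, and the threshold are made-up numbers, and the real protocol BIB003 additionally rotates the cluster head over time and bounds long-distance communication, which this sketch omits.

```python
import math

def elect_cluster_head(nodes, energy_threshold):
    """Pick a cluster head using residual energy and distance to the energy-based centroid.

    nodes: list of dicts {"id", "x", "y", "energy"}; values are illustrative.
    Only nodes above the energy threshold are eligible; among them, the node
    closest to the energy-weighted centroid is chosen, which spreads the load
    while keeping intra-cluster links short.
    """
    eligible = [n for n in nodes if n["energy"] >= energy_threshold]
    if not eligible:
        return None
    total_e = sum(n["energy"] for n in nodes)
    cx = sum(n["x"] * n["energy"] for n in nodes) / total_e
    cy = sum(n["y"] * n["energy"] for n in nodes) / total_e
    return min(eligible, key=lambda n: math.hypot(n["x"] - cx, n["y"] - cy))

nodes = [{"id": 1, "x": 0, "y": 0, "energy": 0.9},
         {"id": 2, "x": 4, "y": 3, "energy": 0.5},
         {"id": 3, "x": 2, "y": 1, "energy": 0.8}]
print(elect_cluster_head(nodes, energy_threshold=0.6)["id"])
```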
|
A Survey on Energy Efficient Narrowband Internet of Things (NBIoT): Architecture, Application and Challenges <s> 2) MATHEMATICAL ANALYSIS FOR ENERGY EFFICIENT SMART AGRICULTURE <s> Ubiquitous sensing enabled by Wireless Sensor Network (WSN) technologies cuts across many areas of modern day living. This offers the ability to measure, infer and understand environmental indicators, from delicate ecologies and natural resources to urban environments. The proliferation of these devices in a communicating-actuating network creates the Internet of Things (IoT), wherein sensors and actuators blend seamlessly with the environment around us, and the information is shared across platforms in order to develop a common operating picture (COP). Fueled by the recent adaptation of a variety of enabling wireless technologies such as RFID tags and embedded sensor and actuator nodes, the IoT has stepped out of its infancy and is the next revolutionary technology in transforming the Internet into a fully integrated Future Internet. As we move from www (static pages web) to web2 (social networking web) to web3 (ubiquitous computing web), the need for data-on-demand using sophisticated intuitive queries increases significantly. This paper presents a Cloud centric vision for worldwide implementation of Internet of Things. The key enabling technologies and application domains that are likely to drive IoT research in the near future are discussed. A Cloud implementation using Aneka, which is based on interaction of private and public Clouds is presented. We conclude our IoT vision by expanding on the need for convergence of WSN, the Internet and distributed computing directed at technological research community. <s> BIB001 </s> A Survey on Energy Efficient Narrowband Internet of Things (NBIoT): Architecture, Application and Challenges <s> 2) MATHEMATICAL ANALYSIS FOR ENERGY EFFICIENT SMART AGRICULTURE <s> Extensive research has not been done on propagation modeling for natural short- and tall-grassy environments for the purpose of wireless sensor deployment. This study is essential for efficiently deploying wireless sensors in different applications such as tracking the grazing habits of cows on the grass or monitoring sporting activities. This study proposes empirical path loss models for wireless sensor deployments in grassy environments. The proposed models are compared with the theoretical models to demonstrate their inaccuracy in predicting the path loss between sensor nodes deployed in natural grassy environments. The results show that the theoretical model values deviate from the proposed model values by 12%-42%. In addition, the results of the proposed models are compared with those of the experimental results obtained from similar natural grassy terrains at different locations resulting in similar outcomes. Finally, the results of the proposed models are compared with those of the previous studies and other terrain models such as those in dense tree environments. These comparisons show that there is a significant difference in path loss and empirical model parameters. The proposed models as well as the measured data can be used for efficient planning and future deployments of wireless sensor networks in similar grass terrains. 
<s> BIB002 </s> A Survey on Energy Efficient Narrowband Internet of Things (NBIoT): Architecture, Application and Challenges <s> 2) MATHEMATICAL ANALYSIS FOR ENERGY EFFICIENT SMART AGRICULTURE <s> In this article, a review of commercial devices on the edge of the Internet of Things (IoT), or IoT nodes, is presented in terms of hardware requirements. IoT nodes are the interface between the IoT and the physical world (e.g., sensor nodes). To this aim, we introduce a wide survey of existing devices made publicly available for the further analysis of trends and state of the art. This data-driven approach permits developing quantitative insight into the big picture of the current status of IoT nodes. The analysis shows that an order (ultimately two orders) of magnitude gap needs to be filled in terms of size, lifetime, and cost (energy efficiency) to ultimately make IoT nodes truly ubiquitous and trigger the widely expected exponential growth of the IoT ecosystem. Overall, this article presents a view from the edge of the IoT and a glimpse of its tipping point. <s> BIB003
|
Sensor nodes SN_1, SN_2, SN_3, ..., SN_n are deployed in the field and, as per the proposed model, it is assumed that the cluster heads (CH_1, CH_2, and CH_3) of disjoint sets of sensors communicate with a processing unit (PU), which in turn communicates with the BS. Assume that k PRBs (physical resource blocks) are required when NBIoT technology is used. As the signal propagates from SN_i to the NBIoT node, path loss (PL) occurs, and it depends on the position of the sensor. If the sensor is placed under the soil, the soil absorbs the signal, and the resulting path loss can be evaluated from the soil's complex permittivity, where P*_C denotes the complex permittivity (cp), P_real_C its real part, P_dipolarloss the dipolar (relaxation) loss, and P_D the relative dielectric permittivity BIB002 . The permittivity of soil is strongly affected by parameters such as water content, signal frequency, soil composition, and soil conductivity; hence it is difficult to measure accurately. The overall path loss budget is expressed in terms of T_p (transmission power), T_g (transmitter gain), R_g (receiver gain), PL_fs (path loss due to free-space propagation of the signal), PL_M (path loss due to the medium), and α (the path loss between the NBIoT node and the eNB). The path loss due to the medium is further decomposed into WD_s, the loss caused by the difference between the signal wavelength in soil and in free space, and TL_s, the transmission loss. If the sensor is placed above the ground surface, the plants act as scatterers and contribute an obstacle-dependent path loss term, where OH_T_i represents the obstacle height at time i ∈ {1, 2, 3}: i = 1 corresponds to the phase when the seeds have just been sown, so there is no plant and hence no obstacle; i = 2 corresponds to the budding phase, where the plant has a certain height; and i = 3 corresponds to the fully grown plant. The obstacle path-loss parameter therefore increases with the plant height. Here d_0 is the reference distance between the sensing node and the NBIoT node (refer to Fig. 20), and α is the path loss between the NBIoT node and the eNB. The SINR of sensing node k can then be expressed as SINR_k = (S_T * G_k) / (N + I_T), where N is the thermal noise, I_T the total interference, S_T the transmitted signal power, and G_k the channel gain between sensor node k and the NBIoT node. The total interference I_T comprises the interference that occurs within a zone, between zones, and due to soil reflectivity. The channel capacity (C) then follows from the Shannon theorem as C = B * log2(1 + SINR_k), where B is the bandwidth of the allocated PRBs (a small numeric example is given below). Based on these formulations, we conclude that the capacity of a sensor deployed in an agricultural area is adversely affected by the interference generated by adjacent sensor nodes, scatterers, etc.
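As a small numeric illustration of the SINR and Shannon-capacity relations above, the sketch below plugs illustrative values into C = B * log2(1 + SINR). The 180 kHz bandwidth corresponds to one NB-IoT physical resource block; the transmit power, channel gain, interference, and noise figures are assumptions chosen only to show the calculation, not measurements.

```python
import math

def channel_capacity(tx_power_mw, channel_gain, interference_mw, noise_mw, bandwidth_hz):
    """Shannon capacity of one link: C = B * log2(1 + SINR)."""
    sinr = (tx_power_mw * channel_gain) / (interference_mw + noise_mw)
    return bandwidth_hz * math.log2(1.0 + sinr), sinr

cap, sinr = channel_capacity(tx_power_mw=200.0, channel_gain=1e-7,
                             interference_mw=4e-6, noise_mw=1e-6,
                             bandwidth_hz=180e3)
print(f"SINR = {sinr:.1f}, capacity ~ {cap / 1e3:.1f} kbit/s")
```

Raising the interference term in this toy calculation directly lowers the SINR and hence the achievable capacity, which is the effect described in the text for densely deployed agricultural sensors.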
|
A Survey on Energy Efficient Narrowband Internet of Things (NBIoT): Architecture, Application and Challenges <s> a: ENERGY CONSUMPTION EVALUATION <s> Energy consumption is the core issue in wireless sensor networks (WSN). To generate a node energy model that can accurately reveal the energy consumption of sensor nodes is an extremely important part of protocol development, system design and performance evaluation in WSNs. In this paper, by studying component energy consumption in different node states and within state transitions, the authors present the energy models of the node core components, including processors, RF modules and sensors. Furthermore, this paper reveals the energy correlations between node components, and then establishes the node energy model based on the event-trigger mechanism. Finally, the authors simulate the energy models of node components and then evaluate the energy consumption of network protocols based on this node energy model. The proposed model can be used to analyze the WSNs energy consumption, to evaluate communication protocols, to deploy nodes and then to construct WSN applications. <s> BIB001 </s> A Survey on Energy Efficient Narrowband Internet of Things (NBIoT): Architecture, Application and Challenges <s> a: ENERGY CONSUMPTION EVALUATION <s> The limited sensor node energy and the large number of nodes with dynamic network topology information have always been the important design concerns in Wireless Sensor Networks (WSN). Node clustering is an effective way to tackle with the two issues by grouping the nodes into hierarchies in order to reduce communication distance and the amount of message. This paper mainly focuses on the unification of the node energy consumption in WSN. The distributions of the energy consumption for various scenarios in the hierarchical network are analyzed for the first time and two main reasons are found leading to the asymmetry of the energy consumption among nodes. One is the energy consumption from the communications between nodes and base station, and the other is that from the cluster head for receiving data from other nodes. It is concluded that the probability of the node acting as cluster head should depend on the distribution of the head's energy consumption, and a variable sampling space oriented to the potential number of cluster heads is established thereafter. Furthermore, a new clustering algorithm, the Segment Equalization Clustering based on Cluster Head Energy Consumption (SECHEC) algorithm is proposed, which can effectively improve the network lifetime and ensure the availability of the system within its entire lifespan. <s> BIB002 </s> A Survey on Energy Efficient Narrowband Internet of Things (NBIoT): Architecture, Application and Challenges <s> a: ENERGY CONSUMPTION EVALUATION <s> The main goal in this paper is to obtain low power consumption of wireless sensor nodes and collected distributed data in environmental parameters monitoring. Communication module and the controller should be in idle state as long as possible when they are not active. In design and development of Wireless Sensor Networks (WSNs), one of the main challenges is to achieve long lasting battery lifetime. The purpose of this work is to develop a low maintenance and low cost wireless sensor network system which would be used for optimization of greenhouse crop production. 
<s> BIB003 </s> A Survey on Energy Efficient Narrowband Internet of Things (NBIoT): Architecture, Application and Challenges <s> a: ENERGY CONSUMPTION EVALUATION <s> Abstract The paradigm of Internet of Things (IoT) is on rapid rise in today’s world of communication. Every networking device is being connected to the Internet to develop specific and dedicated applications. Data from these devices, called as IoT devices, is transmitted to the Internet through IoT Gateways (IGWs). IGWs support all the technologies in an IoT network. In order to reduce the cost involved with the deployment of IGWs, specialized low-cost devices called Solution Specific Gateways (SSGWs) are also employed alongside IGWs. These SSGWs are similar to IGWs except they support a subset of technologies supported by IGWs. A large number of applications are being designed which require IGWs and SSGWs to be deployed in remote areas. More often than not, gateways in such areas have to be run on battery power. Hence, power needs to be conserved in such networks for extending network life along with maintaining total connectivity. In this paper, we propose a dynamic spanning tree based algorithm for power-aware connectivity called SpanIoTPower-Connect which determines (near) optimal power consumption in battery-powered IoT networks. SpanIoTPower-Connect computes the spanning tree in the network in a greedy manner in order to minimize the power consumption and achieve total connectivity. Additionally, we propose an algorithm to conserve power in dynamic IoT networks where the connectivity demand changes with time. Our simulation results show that our algorithm performs better than Static Spanning Tree based algorithm for power-aware connectivity (Static ST) and a naive connectivity algorithm where two neighboring SSGWs are connected through every available technology. To the best of our knowledge, our work is the first attempt at achieving power-aware connectivity in battery-powered dynamic IoT networks. <s> BIB004
|
To elect a cluster head from the n sensor nodes in each disjoint set of sensing nodes, the power level of each node is determined and the node with the highest power level is selected. The power level deteriorates because of energy dissipation, so we first evaluate the total energy consumed by each node (the energy model used here is given in BIB001 ):

EC_Total = EC_Sensing + EC_Processing + EC_Transmission + EC_Sleeping + EC_Switching (10)

The energy consumed during sensing is

EC_Sensing = EC_Sample * n (11)

where EC_Sample is the energy consumed per sample and n is the number of samples. Similarly, the energy consumed during transmission (EC_Transmission) depends on the supply voltage S_v, the current I, and the time T required for transmission, i.e., EC_Transmission = S_v * I * T. The energy consumed while switching from the active to the sleep state is BIB003

EC_Switching = T_Swt * (PC_Active + PC_Sleep) / 2

Hence the total energy consumption can be evaluated using eq. (10) and, using the relation E = P * T, the power level of each node can be obtained, where T includes the time spent switching, sleeping, active, sensing, and processing. The CH can then be selected by comparing the power levels of all nodes (a minimal numerical sketch of this selection is given after this paragraph). Another aspect that must be included is whether the node has already acted as CH; we therefore also evaluate the energy consumed when a node acts as CH, EC_CH, which depends on the message length L_MSG and on the energy consumed by the transmitter when sending the message to the processing point (PP), or by the receiver when the PP receives the data BIB002 . Our proposed approach E²AHMS can be used not only for patients who require continuous monitoring, but also for health monitoring of fit persons who want to keep track of their health and prevent severe disease. In both cases the network lifetime plays a vital role, and it can be improved by using the proposed optimized energy-efficient algorithm. E²AHMS works in an energy-efficient manner by adaptively using the transmission power required during the different postural mobility states (considered before). To understand this, it is first necessary to understand the difference in energy requirements between the cases under consideration, where E_obst_i denotes the energy consumed due to obstacles, shadowing, and multipath effects.
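The additive energy model and the power-level comparison for CH election can be illustrated with a toy calculation. All numeric inputs below (sample energy, supply voltage, currents, times, initial energy budgets) are placeholders chosen for the example, not values from the cited energy models.

```python
def node_energy_consumption(n_samples, e_sample, v_supply, i_tx, t_tx,
                            t_switch, p_active, p_sleep, e_processing, e_sleeping):
    """Per-node energy following the additive model in the text.

    EC_Total = EC_Sensing + EC_Processing + EC_Transmission + EC_Sleeping + EC_Switching
    with EC_Sensing = EC_Sample * n, EC_Transmission = V * I * T,
    and EC_Switching = T_Swt * (P_active + P_sleep) / 2.
    """
    ec_sensing = e_sample * n_samples
    ec_transmission = v_supply * i_tx * t_tx
    ec_switching = t_switch * (p_active + p_sleep) / 2.0
    return ec_sensing + e_processing + ec_transmission + e_sleeping + ec_switching

nodes = {"SN1": 5.0, "SN2": 5.0, "SN3": 5.0}             # initial energy budgets (J)
spent = {n: node_energy_consumption(100, 1e-4, 3.3, 0.02, t_tx,
                                    1e-3, 0.05, 1e-4, 0.01, 0.002)
         for n, t_tx in zip(nodes, (0.8, 0.3, 0.5))}      # different TX times per node
residual = {n: nodes[n] - spent[n] for n in nodes}
cluster_head = max(residual, key=residual.get)            # highest remaining power level
print(residual, "-> CH:", cluster_head)
```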
|
A Survey on Energy Efficient Narrowband Internet of Things (NBIoT): Architecture, Application and Challenges <s> CASE B.3 (RELATIVELY MOBILE): <s> Smart world is envisioned as an era in which objects (e.g., watches, mobile phones, computers, cars, buses, and trains) can automatically and intelligently serve people in a collaborative manner. Paving the way for smart world, Internet of Things (IoT) connects everything in the smart world. Motivated by achieving a sustainable smart world, this paper discusses various technologies and issues regarding green IoT, which further reduces the energy consumption of IoT. Particularly, an overview regarding IoT and green IoT is performed first. Then, the hot green information and communications technologies (ICTs) (e.g., green radio-frequency identification, green wireless sensor network, green cloud computing, green machine to machine, and green data center) enabling green IoT are studied, and general green ICT principles are summarized. Furthermore, the latest developments and future vision about sensor cloud, which is a novel paradigm in green IoT, are reviewed and introduced, respectively. Finally, future research directions and open problems about green IoT are presented. Our work targets to be an enlightening and latest guidance for research with respect to green IoT and smart world. <s> BIB001 </s> A Survey on Energy Efficient Narrowband Internet of Things (NBIoT): Architecture, Application and Challenges <s> CASE B.3 (RELATIVELY MOBILE): <s> The Internet of Things (IoT) is a promising technology which tends to revolutionize and connect the global world via heterogeneous smart devices through seamless connectivity. The current demand for machine-type communications (MTC) has resulted in a variety of communication technologies with diverse service requirements to achieve the modern IoT vision. More recent cellular standards like long-term evolution (LTE) have been introduced for mobile devices but are not well suited for low-power and low data rate devices such as the IoT devices. To address this, there is a number of emerging IoT standards. Fifth generation (5G) mobile network, in particular, aims to address the limitations of previous cellular standards and be a potential key enabler for future IoT. In this paper, the state-of-the-art of the IoT application requirements along with their associated communication technologies are surveyed. In addition, the third generation partnership project cellular-based low-power wide area solutions to support and enable the new service requirements for Massive to Critical IoT use cases are discussed in detail, including extended coverage global system for mobile communications for the Internet of Things, enhanced machine-type communications, and narrowband-Internet of Things. Furthermore, 5G new radio enhancements for new service requirements and enabling technologies for the IoT are introduced. This paper presents a comprehensive review related to emerging and enabling technologies with main focus on 5G mobile networks that is envisaged to support the exponential traffic growth for enabling the IoT. The challenges and open research directions pertinent to the deployment of massive to critical IoT applications are also presented in coming up with an efficient context-aware congestion control mechanism. <s> BIB002
|
In this state, the patient is walking, moving with the help of a wheelchair, or carrying out routine activities. NLOS transmission is therefore more likely, caused by obstacles in the surroundings, sensor links that change rapidly due to body movement, and similar effects; this leads to more loss of information and thereby higher energy consumption. The energy consumed in the mobile state additionally includes E_relink_i, the energy spent on re-linking between the sensor nodes and the gateway. It is thus clear from the above discussion that, owing to the multipath effect and the varying channel conditions, different postural movements lead to different energy consumption. In the proposed technique E²AHMS, the transmission power is therefore used adaptively depending on the level of postural movement, which is determined using graph theory, whereas conventionally data is transmitted with the same power regardless of whether LOS communication is possible and whether the channel conditions are good. Hence the proposed approach is an energy-efficient approach for the health monitoring application, and it is also supported by the following mathematical formulation. Here d_0 is the reference distance from the NBIoT coordinator, and S_dB is the shadowing factor added because of the movement of body parts; in the sleeping state it is negligible. F_3 denotes an additional factor accounting for multipath propagation of the signal. The path loss is the difference between the transmitted and the received power, PL = P_Tx − P_RX. In the conventional system, the same transmission power is used in all cases whether required or not, but in the proposed E²AHMS technique the transmission power is used adaptively depending on postural mobility, so power is used in an optimized way (a toy sketch of this posture-dependent power selection is given at the end of this subsection). The SNR for each considered case study is evaluated as the ratio of the received power to the noise power, and the channel capacity C_case_i for the cases i ∈ {B.1, B.2, B.3} then follows from the Shannon theorem as C_case_i = B * log2(1 + SNR_case_i), from which the energy efficiency can be evaluated. It is clear from these relations that high mobility results in higher energy consumption; hence, with the proposed technique, power is saved and the battery lifetime can be extended, giving an energy-efficient solution. Based on the same idea, a real-time hardware implementation has been carried out, which is discussed in the next section.

C. REAL TIME HARDWARE IMPLEMENTATION FOR PROPOSED WORK

1) HARDWARE SYSTEM MODEL

Fig. 22 gives an overview of the real-time implementation of the proposed approach. Two-axis accelerometer sensors and a pulse sensor are used for the real-time analysis. To determine the patient's postural mobility, accelerometer sensors are placed on the right hand and right leg, while a pulse sensor measures the patient's pulse rate. To send the health information to the eNB in an energy-efficient way, the proposed approach differentiates between the different postures (Fig. 22 (a)-(e)) using edge computing, so that energy is utilized efficiently by using the optimal amount of device transmission power, as suggested in the proposed approach, to transmit the information to the eNB. The health information is then forwarded to the centralized cloud, then to the application server, and finally to the doctor, who tracks the patient's record in order to avoid adverse situations.
Finally, the doctor prescribes medicine if required; otherwise, a message is sent to the user confirming that he or she is fine.
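A toy sketch of the posture-dependent transmit-power selection in E²AHMS follows. The posture classes, the accelerometer-variance thresholds, and the dBm levels are assumed for illustration only and are not taken from the paper; the intent is merely to show how a detected posture can map to an adaptive transmission power instead of a fixed maximum level.

```python
# Hypothetical per-posture transmit-power levels (dBm); mapping and thresholds
# are illustrative assumptions, not values from the proposed system.
TX_POWER_DBM = {"sleeping": -10, "static": 0, "mobile": 8}

def classify_posture(accel_variance):
    """Crude posture classifier from accelerometer variance (toy thresholds)."""
    if accel_variance < 0.02:
        return "sleeping"
    if accel_variance < 0.2:
        return "static"
    return "mobile"

def select_tx_power(accel_variance):
    """Pick transmit power adaptively instead of always using the maximum level."""
    return TX_POWER_DBM[classify_posture(accel_variance)]

for var in (0.005, 0.1, 0.6):
    print(f"accel variance {var:>5}: posture={classify_posture(var):8s} "
          f"tx power={select_tx_power(var)} dBm")
```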
|
Survey Based Classification of Bug Triage Approaches <s> Bug-repot Triage <s> Bug triage, deciding what to do with an incoming bug report, is taking up increasing amount of developer resources in large open-source projects. In this paper, we propose to apply machine learning techniques to assist in bug triage by using text categorization to predict the developer that should work on the bug based on the bug’s description. We demonstrate our approach on a collection of 15,859 bug reports from a large open-source project. Our evaluation shows that our prototype, using supervised Bayesian learning, can correctly predict 30% of the report assignments to <s> BIB001 </s> Survey Based Classification of Bug Triage Approaches <s> Bug-repot Triage <s> bug report is typically assigned to a single developer who is then responsible for fixing the bug. In Mozilla and Eclipse, between 37%-44% of bug reports are "tossed" (reassigned) to other developers, for example because the bug has been assigned by accident or another developer with additional expertise is needed. In any case, tossing increases the time-to-correction for a bug. In this paper, we introduce a graph model based on Markov chains, which captures bug tossing history. This model has several desirable qualities. First, it reveals developer networks which can be used to discover team structures and to find suitable experts for a new task. Second, it helps to better assign developers to bug reports. In our experiments with 445,000 bug reports, our model reduced tossing events, by up to 72%. In addition, the model increased the prediction accuracy by up to 23 percentage points compared to traditional bug triaging approaches. <s> BIB002 </s> Survey Based Classification of Bug Triage Approaches <s> Bug-repot Triage <s> SUMMARY The paper presents an approach to recommend a ranked list of expert developers to assist in the implementation of software change requests (e.g., bug reports and feature requests). An Information Retrieval (IR)-based concept location technique is first used to locate source code entities, e.g., files and classes, relevant to a given textual description of a change request. The previous commits from version control repositories of these entities are then mined for expert developers. The role of the IR method in selectively reducing the mining space is different from previous approaches that textually index past change requests and/or commits. The approach is evaluated on change requests from three open-source systems: ArgoUML, Eclipse ,a ndKOffice, across a range of accuracy criteria. The results show that the overall accuracies of the correctly recommended developers are between 47 and 96% for bug reports, and between 43 and 60% for feature requests. Moreover, comparison results with two other recommendation alternatives show that the presented approach outperforms them with a substantial margin. Project leads or developers can use this approach in maintenance tasks immediately after the receipt of a change request in a free-form text. Copyright q 2011 John Wiley & Sons, Ltd. <s> BIB003 </s> Survey Based Classification of Bug Triage Approaches <s> Bug-repot Triage <s> A key collaborative hub for many software development projects is the bug report repository. Although its use can improve the software development process in a number of ways, reports added to the repository need to be triaged. A triager determines if a report is meaningful. 
Meaningful reports are then organized for integration into the project's development process. To assist triagers with their work, this article presents a machine learning approach to create recommenders that assist with a variety of decisions aimed at streamlining the development process. The recommenders created with this approach are accurate; for instance, recommenders for which developer to assign a report that we have created using this approach have a precision between 70p and 98p over five open source projects. As the configuration of a recommender for a particular project can require substantial effort and be time consuming, we also present an approach to assist the configuration of such recommenders that significantly lowers the cost of putting a recommender in place for a project. We show that recommenders for which developer should fix a bug can be quickly configured with this approach and that the configured recommenders are within 15p precision of hand-tuned developer recommenders. <s> BIB004 </s> Survey Based Classification of Bug Triage Approaches <s> Bug-repot Triage <s> Empirical studies indicate that automating the bug assignment process has the potential to significantly reduce software evolution effort and costs. Prior work has used machine learning techniques to automate bug assignment but has employed a narrow band of tools which can be ineffective in large, long-lived software projects. To redress this situation, in this paper we employ a comprehensive set of machine learning tools and a probabilistic graph-based model (bug tossing graphs) that lead to highly-accurate predictions, and lay the foundation for the next generation of machine learning-based bug assignment. Our work is the first to examine the impact of multiple machine learning dimensions (classifiers, attributes, and training history) along with bug tossing graphs on prediction accuracy in bug assignment. We validate our approach on Mozilla and Eclipse, covering 856,259 bug reports and 21 cumulative years of development. We demonstrate that our techniques can achieve up to 86.09% prediction accuracy in bug assignment and significantly reduce tossing path lengths. We show that for our data sets the Naive Bayes classifier coupled with product-component features, tossing graphs and incremental learning performs best. Next, we perform an ablative analysis by unilaterally varying classifiers, features, and learning model to show their relative importance of on bug assignment accuracy. Finally, we propose optimization techniques that achieve high prediction accuracy while reducing training and prediction time. <s> BIB005 </s> Survey Based Classification of Bug Triage Approaches <s> Bug-repot Triage <s> Generally speaking, the larger-scale open source development projects support both developers and users to report bugs in an open bug repository. Each report that appears in this repository must be triaged for fixing it. However, with huge amount of bugs are reported every day, the workload of developers is so high. In addition, most of bug reports were not assigned to correct developers for fixing so that these bugs need to be re-assigned to another developer. If the number of re-assignments to developers is large, the bug fixing time is increased. So "who are appropriate developers for fixing bug?" is an important question for bug triage. In this paper, we propose an automated developer recommendation approach for bug triage. 
The major contribution of our paper is to build the concept profile(CP) for extracting the bug concepts with topic terms from the documents produced by related bug reports, and we find the important developers with the high probability of fixing the given bug by using social network(SN). As a result, we get a ranked list of appropriate developers for bug fixing according to their expertise and fixing cost. The evaluation results show that our approach outperforms other developer recommendation methods. <s> BIB006 </s> Survey Based Classification of Bug Triage Approaches <s> Bug-repot Triage <s> Large open source software projects receive abundant rates of submitted bug reports. Triaging these incoming reports manually is error-prone and time consuming. The goal of bug triaging is to assign potentially experienced developers to new-coming bug reports. To reduce time and cost of bug triaging, we present an automatic approach to predict a developer with relevant experience to solve the new coming report. In this paper, we investigate the use of five term selection methods on the accuracy of bug assignment. In addition, we re-balance the load between developers based on their experience. We conduct experiments on four real datasets. The experimental results show that by selecting a small number of discriminating terms, the F-score can be significantly improved. <s> BIB007 </s> Survey Based Classification of Bug Triage Approaches <s> Bug-repot Triage <s> Bugs are prevalent in software systems and improving time efficiency in bug fixing is desired. We performed an analysis on 11,115 bug records of Eclipse JDT and found that bug resolution time is log-normally distributed and varies across fixers, technical topics, and bug severity levels. We then propose FixTime, a novel method for bug assignment. The key of FixTime is a topicbased, log-normal regression model for predicting defect resolution time on which FixTime is based to make fixing assignment recommendations. Preliminary results suggest that FixTime has higher prediction accuracy than existing approaches. <s> BIB008 </s> Survey Based Classification of Bug Triage Approaches <s> Bug-repot Triage <s> For complex and popular software, project teams could receive a large number of bug reports. It is often tedious and costly to manually assign these bug reports to developers who have the expertise to fix the bugs. Many bug triage techniques have been proposed to automate this process. In this paper, we describe our study on applying conventional bug triage techniques to projects of different sizes. We find that the effectiveness of a bug triage technique largely depends on the size of a project team (measured in terms of the number of developers). The conventional bug triage methods become less effective when the number of developers increases. To further improve the effectiveness of bug triage for large projects, we propose a novel recommendation method called Bug Fixer, which recommends developers for a new bug report based on historical bug-fix information. Bug Fixer constructs a Developer-Component-Bug (DCB) network, which models the relationship between developers and source code components, as well as the relationship between the components and their associated bugs. A DCB network captures the knowledge of "who fixed what, where". For a new bug report, Bug Fixer uses a DCB network to recommend to triager a list of suitable developers who could fix this bug. 
We evaluate Bug Fixer on three large-scale open source projects and two smaller industrial projects. The experimental results show that the proposed method outperforms the existing methods for large projects and achieves comparable performance for small projects. <s> BIB009 </s> Survey Based Classification of Bug Triage Approaches <s> Bug-repot Triage <s> Abstract Context Feature location aims to identify the source code location corresponding to the implementation of a software feature. Many existing feature location methods apply text retrieval to determine the relevancy of the features to the text data extracted from the software repositories. One of the preprocessing activities in text retrieval is term-weighting, which is used to adjust the importance of a term within a document or corpus. Common term-weighting techniques may not be optimal to deal with text data from software repositories due to the origin of term-weighting techniques from a natural language context. Objective This paper describes how the consideration of when the terms were used in the repositories, under the condition of weighting only the noun terms, can improve a feature location approach. Method We propose a feature location approach using a new term-weighting technique that takes into account how recently a term has been used in the repositories. In this approach, only the noun terms are weighted to reduce the dataset volume and avoid dealing with dimensionality reduction. Results An empirical evaluation of the approach on four open-source projects reveals improvements to the accuracy, effectiveness and performance up to 50%, 17%, and 13%, respectively, when compared to the commonly-used Vector Space Model approach. The comparison of the proposed term-weighting technique with the Term Frequency-Inverse Document Frequency technique shows accuracy, effectiveness, and performance improvements as much as 15%, 10%, and 40%, respectively. The investigation of using only noun terms, instead of using all terms, in the proposed approach also indicates improvements up to 28%, 21%, and 58% on accuracy, effectiveness, and performance, respectively. Conclusion In general, the use of time in the weighting of terms, along with the use of only the noun terms, makes significant improvements to a feature location approach that relies on textual information. <s> BIB010 </s> Survey Based Classification of Bug Triage Approaches <s> Bug-repot Triage <s> Improve automatic bug assignment (ABA) accuracy by using metadata in term weighting.Improve accuracy of common term-weighting technique, tf-idf, up to 14%.Recommend a light method for ABA based on the new term-weighting technique.Outperform the ML and IR methods by recommended method up to 55%. Bug assignment is one of the important activities in bug triaging that aims to assign bugs to the appropriate developers for fixing. Many recommended automatic bug assignment approaches are based on text analysis methods such as machine learning and information retrieval methods. Most of these approaches use term-weighting techniques, such as term frequency-inverse document frequency (tf-idf), to determine the value of terms. However, the existing term-weighting techniques only deal with frequency of terms without considering the metadata associated with the terms that exist in software repositories. This paper aims to improve automatic bug assignment by using time-metadata in tf-idf (Time-tf-idf). 
In the Time-tf-idf technique, the recency of using the term by the developer is considered in determining the values of the developer expertise. An evaluation of the recommended automatic bug assignment approach that uses Time-tf-idf, called ABA-Time-tf-idf, was conducted on three open-source projects. The evaluation shows accuracy and mean reciprocal rank (MRR) improvements of up to 11.8% and 8.94%, respectively, in comparison to the use of tf-idf. Moreover, the ABA-Time-tf-idf approach outperforms the accuracy and MRR of commonly used approaches in automatic bug assignment by up to 45.52% and 55.54%, respectively. Consequently, consideration of time-metadata in term weighting reasonably leads to improvements in automatic bug assignment. <s> BIB011 </s> Survey Based Classification of Bug Triage Approaches <s> Bug-repot Triage <s> Abstract —In this paper, we propose a semi-supervised text classification approach for bug triage to avoid the deficiency of labeled bug reports in existing supervised approaches. This new approach combines naive Bayes classifier and expectation-maximization to take advantage of both labeled and unlabeled bug reports. This approach trains a classifier with a fraction of labeled bug reports. Then the approach iteratively labels numerous unlabeled bug reports and trains a new classifier with labels of all the bug reports. We also employ a weighted recommendation list to boost the performance by imposing the weights of multiple developers in training the classifier. Experimental results on bug reports of Eclipse show that our new approach outperforms existing supervised approaches in terms of classification accuracy. Keywords- automatic bug triage; expectation-maximization; semi-supervised text classification; weighted recommendation list I. I NTRODUCTION Most of large software projects employ a bug tracking system (bug repository) to manage bugs and developers. In software development and maintenance, a bug repository is a significant software repository for storing the bugs submitted by <s> BIB012
|
To resolve a new bug, it must be assigned to a relevant developer who has appropriate experience in resolving similar types of bugs. The bug assignment process can be done manually, but this is labor-intensive, error-prone and time-consuming. To reduce the time and cost of the bug assignment process, the first automatic bug triager was proposed by Cubranic and Murphy BIB001 . Thereafter, many automatic bug triage approaches were proposed that are based on machine learning BIB003 BIB004 BIB002 BIB005 BIB012 , metadata BIB011 BIB008 BIB009 BIB010 , or developer profiles BIB006 BIB007 . These are shown in Table 1 .
Table 1. Classification of bug triage approaches
Machine Learning Based Approach: Assigning change requests to software developers BIB003 ; Automatic assignment of work item; Reducing the Effort of Bug Report Triage: Recommenders for Development-Oriented Decisions BIB004 ; Highly-accurate Bug Triage using Machine Learning; Improving bug triage with Bug tossing Graphs BIB002 ; Automated, highly-accurate, bug assignment using machine learning and tossing graphs BIB005 ; An Approach to Improving Bug Assignment with bug tossing graph and bug similarities; Novel metrics for bug triage; Automatic Bug Triage using Semi-Supervised Text Classification BIB012
Meta-Data Based Approach: COSTRIAGE: A Cost-Aware Triage Algorithm for Bug Reporting; A time based approach to Automatic Bug Report Assignment BIB011 ; Topic-based, time aware bug assignment BIB008 ; Improving automatic bug assignment using time-metadata in term weights; Effective Bug Triage based on Historical Bug-Fix information BIB009 ; Automatic Bug Assignment Using Information Extraction Methods; A Noun based approach to feature location using time aware term-weighting BIB010
Profile Based Approach: An Automated Bug Triage Approach: A Concept Profile and Social network Based Developer Recommendation BIB006 ; Bug report assignee Recommendation using Activity Profile; A Hybrid Bug Triage Algorithm for Developer recommendation; Efficient Bug Triaging Using Text Mining BIB007
2.1.5. Bug-report Duplication
A newly reported bug in an issue tracking system can be a duplicate bug, i.e. one that shares the same root cause as a master or existing bug. A duplicate bug may originate from the same root cause as an existing bug but exhibit a different failure, or it may describe the same failure as an existing bug . In practice, however, duplicate bugs can be avoided only when the developer knows about all the existing bugs, which is practically impossible. An important task of bug triage is therefore to detect duplicate bugs and remove them in order to save developers' time in fixing bugs and to reduce triaging cost.
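Duplicate detection of this kind is commonly implemented by measuring the textual similarity between a newly filed report and the reports already stored in the repository. The following is a minimal sketch of that idea in Python, using TF-IDF vectors and cosine similarity; the sample reports and the 0.7 similarity threshold are illustrative assumptions and do not correspond to any particular approach surveyed here.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Existing repository reports and a newly filed report (invented examples).
existing_reports = [
    "NullPointerException when opening the preferences dialog",
    "Crash on startup after upgrading to the latest build",
    "Toolbar icons are rendered blurry on high-DPI displays",
]
new_report = "Preferences dialog throws a NullPointerException on open"

vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform(existing_reports + [new_report])

# Cosine similarity of the new report (last row) against every stored report.
scores = cosine_similarity(matrix[-1], matrix[:-1]).ravel()
for report, score in zip(existing_reports, scores):
    label = "possible duplicate" if score > 0.7 else "distinct"
    print(f"{score:.2f}  {label}: {report}")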
|
Survey Based Classification of Bug Triage Approaches <s> Bug Report Prioritization <s> A key collaborative hub for many software development projects is the bug report repository. Although its use can improve the software development process in a number of ways, reports added to the repository need to be triaged. A triager determines if a report is meaningful. Meaningful reports are then organized for integration into the project's development process. To assist triagers with their work, this article presents a machine learning approach to create recommenders that assist with a variety of decisions aimed at streamlining the development process. The recommenders created with this approach are accurate; for instance, recommenders for which developer to assign a report that we have created using this approach have a precision between 70p and 98p over five open source projects. As the configuration of a recommender for a particular project can require substantial effort and be time consuming, we also present an approach to assist the configuration of such recommenders that significantly lowers the cost of putting a recommender in place for a project. We show that recommenders for which developer should fix a bug can be quickly configured with this approach and that the configured recommenders are within 15p precision of hand-tuned developer recommenders. <s> BIB001 </s> Survey Based Classification of Bug Triage Approaches <s> Bug Report Prioritization <s> The large number of new bug reports received in bug repositories of software systems makes their management a challenging task. Handling these reports manually is time consuming, and often results in delaying the resolution of important bugs. To address this issue, a recommender may be developed which automatically prioritizes the new bug reports. In this paper, we propose and evaluate a classification based approach to build such a recommender. We use the Naive Bayes and Support Vector Machine (SVM) classifiers, and present a comparison to evaluate which classifier performs better in terms of accuracy. Since a bug report contains both categorical and text features, another evaluation we perform is to determine the combination of features that better determines the priority of a bug. To evaluate the bug priority recommender, we use precision and recall measures and also propose two new measures, Nearest False Negatives (NFN) and Nearest False Positives (NFP), which provide insight into the results produced by precision and recall. Our findings are that the results of SVM are better than the Naive Bayes algorithm for text features, whereas for categorical features, Naive Bayes performance is better than SVM. The highest accuracy is achieved with SVM when categorical and text features are combined for training. <s> BIB002 </s> Survey Based Classification of Bug Triage Approaches <s> Bug Report Prioritization <s> Large open source bug tracking systems receives large number of bug reports daily. Managing these huge numbers of incoming bug reports is a challenging task. Dealing with these reports manually consumes time and resources which leads to delaying the resolution of important bugs which are crucial and need to be identified and resolved earlier. Bug triaging is an important process in software maintenance. Some bugs are important and need to be fixed right away, whereas others are minor and their fixes could be postponed until resources are available. 
Most automatic bug assignment approaches do not take the priority of bug reports in their consideration. Assigning bug reports based on their priority may play an important role in enhancing the bug triaging process. In this paper, we present an approach to predict the priority of a reported bug using different machine learning algorithms namely Naive Bayes, Decision Trees, and Random Forest. We also investigate the effect of using two feature sets on the classification accuracy. We conduct experimental evaluation using open-source projects namely Eclipse and Fire fox. The experimental evaluation shows that the proposed approach is feasible in predicting the priority of bug reports. It also shows that feature-set-2 outperformsfeature-set-1. Moreover, both Random Forests and Decision Trees outperform Naive Bayes. <s> BIB003
|
Handling the large number of newly submitted bug reports hosted in bug repositories is a difficult and time-consuming task. To address this problem, developers may assign each bug a priority (P1, P2, P3, P4 or P5) based on the importance of the bug to the system. Various bug priority recommenders have been proposed using SVM and Naïve Bayes classification BIB002 BIB001 BIB003 .
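A minimal sketch of such a priority recommender is given below: report summaries are converted to TF-IDF features and an SVM, one of the classifiers used in the cited studies, is trained to predict the priority label. The toy reports and labels are invented for illustration; real recommenders also combine categorical fields such as product and component with the text.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Invented training examples: report summaries with their priority labels.
summaries = [
    "Data loss when saving a project under heavy load",
    "Crash when clicking the export button twice",
    "Typo in the about dialog text",
    "Minor misalignment of the status bar icon",
]
priorities = ["P1", "P1", "P4", "P5"]

# TF-IDF text features feeding a linear SVM classifier.
recommender = make_pipeline(TfidfVectorizer(), LinearSVC())
recommender.fit(summaries, priorities)

print(recommender.predict(["Application crashes and corrupts the workspace file"]))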
|
Survey Based Classification of Bug Triage Approaches <s> . Machine Learning Based Approaches <s> Bug triage, deciding what to do with an incoming bug report, is taking up increasing amount of developer resources in large open-source projects. In this paper, we propose to apply machine learning techniques to assist in bug triage by using text categorization to predict the developer that should work on the bug based on the bug’s description. We demonstrate our approach on a collection of 15,859 bug reports from a large open-source project. Our evaluation shows that our prototype, using supervised Bayesian learning, can correctly predict 30% of the report assignments to <s> BIB001 </s> Survey Based Classification of Bug Triage Approaches <s> . Machine Learning Based Approaches <s> bug report is typically assigned to a single developer who is then responsible for fixing the bug. In Mozilla and Eclipse, between 37%-44% of bug reports are "tossed" (reassigned) to other developers, for example because the bug has been assigned by accident or another developer with additional expertise is needed. In any case, tossing increases the time-to-correction for a bug. In this paper, we introduce a graph model based on Markov chains, which captures bug tossing history. This model has several desirable qualities. First, it reveals developer networks which can be used to discover team structures and to find suitable experts for a new task. Second, it helps to better assign developers to bug reports. In our experiments with 445,000 bug reports, our model reduced tossing events, by up to 72%. In addition, the model increased the prediction accuracy by up to 23 percentage points compared to traditional bug triaging approaches. <s> BIB002 </s> Survey Based Classification of Bug Triage Approaches <s> . Machine Learning Based Approaches <s> Empirical studies indicate that automating the bug assignment process has the potential to significantly reduce software evolution effort and costs. Prior work has used machine learning techniques to automate bug assignment but has employed a narrow band of tools which can be ineffective in large, long-lived software projects. To redress this situation, in this paper we employ a comprehensive set of machine learning tools and a probabilistic graph-based model (bug tossing graphs) that lead to highly-accurate predictions, and lay the foundation for the next generation of machine learning-based bug assignment. Our work is the first to examine the impact of multiple machine learning dimensions (classifiers, attributes, and training history) along with bug tossing graphs on prediction accuracy in bug assignment. We validate our approach on Mozilla and Eclipse, covering 856,259 bug reports and 21 cumulative years of development. We demonstrate that our techniques can achieve up to 86.09% prediction accuracy in bug assignment and significantly reduce tossing path lengths. We show that for our data sets the Naive Bayes classifier coupled with product-component features, tossing graphs and incremental learning performs best. Next, we perform an ablative analysis by unilaterally varying classifiers, features, and learning model to show their relative importance of on bug assignment accuracy. Finally, we propose optimization techniques that achieve high prediction accuracy while reducing training and prediction time. <s> BIB003
|
Various bug triaging approaches are based on machine learning techniques for assigning a bug report to an experienced developer who has enough knowledge to fix the bug. In these techniques, previously resolved bug reports are used as input to train a classifier, and the trained classifier then classifies new bug reports and assigns them to relevant developers. According to the survey, the first machine learning approach was presented by Cubranic and Murphy BIB001 and is based on bug reports. Machine learning techniques can be categorized into three types, namely supervised learning, unsupervised learning and reinforcement learning. In our survey, we focus only on supervised learning approaches, which cover several widely used algorithms: naïve Bayes, support vector machines, tossing graphs, the vector space model, etc. In 2009, the bug tossing concept was defined and described by Jeong BIB002 . According to the authors' survey, in Mozilla and Eclipse around 37%-44% of bug reports are tossed again; using Jeong's tossing model, tossing events can be reduced by up to 72% and automatic prediction accuracy improved by up to 23 percentage points. This is the first work on tossing graphs, and it used a basic classifier without considering the inter-feedback process and developer activity. Furthermore, Pamela BIB003 extended this work to remove some of these limitations by using a fine-grained, multi-feature tossing graph (product, component and activity days are extra attributes on an edge) with incremental updating, which is able to improve accuracy up to 86.09% and reduce tossing path lengths by up to 83.28% in Eclipse and 86.67% in Mozilla. This approach is not applicable to small projects because of their limited numbers of bug features on the nodes of the multi-feature tossing graph, i.e. product, component and activity, and it is not able to handle the developer load-balancing problem. To remove the limitations of Jeong BIB002 , Liguo also proposed an approach that uses both the tossing graph and the vector space model. To measure the path between the assignee and the developer, a weight-based breadth-first algorithm is used, and bug tossing length can be reduced by up to 84% with this approach. However, they evaluated the approach on only two open source projects (Eclipse and Mozilla), considering very few bug report features for the similarity measure, and its success on closed source projects remains to be shown. V. Akila proposed an approach based on the bug tossing graph that uses metrics to reduce the hop path and route a bug to the correct developer along the best (optimal) route. In this respect, Levenshtein similarity achieved the best correlation coefficient value with respect to precision (.9714 for the 30% data set and .9671 for the 20% data set). However, this approach has no indicator for measuring the strength of the retrieved paths when the extracted paths of more than one developer have the same distance to the original path.
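The central data structure in the tossing-graph approaches discussed above is a set of transition probabilities estimated from historical tossing paths. The sketch below shows that estimation step in its simplest form; the example paths are invented, and the actual models are Markov-chain based and enriched with edge attributes such as product, component and developer activity.

from collections import Counter, defaultdict

# Invented tossing paths; the last developer on each path fixed the bug.
tossing_paths = [
    ["alice", "bob", "carol"],
    ["alice", "carol"],
    ["bob", "carol"],
    ["alice", "bob", "dave"],
]

# Count observed tosses src -> dst and normalise them into probabilities.
edge_counts = defaultdict(Counter)
for path in tossing_paths:
    for src, dst in zip(path, path[1:]):
        edge_counts[src][dst] += 1

tossing_prob = {
    src: {dst: count / sum(counts.values()) for dst, count in counts.items()}
    for src, counts in edge_counts.items()
}
print(tossing_prob["alice"])  # e.g. bob ~0.67, carol ~0.33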
|
Survey Based Classification of Bug Triage Approaches <s> Survey Based Classification of Bug Triage Approaches (Asmita Yadav) <s> SUMMARY The paper presents an approach to recommend a ranked list of expert developers to assist in the implementation of software change requests (e.g., bug reports and feature requests). An Information Retrieval (IR)-based concept location technique is first used to locate source code entities, e.g., files and classes, relevant to a given textual description of a change request. The previous commits from version control repositories of these entities are then mined for expert developers. The role of the IR method in selectively reducing the mining space is different from previous approaches that textually index past change requests and/or commits. The approach is evaluated on change requests from three open-source systems: ArgoUML, Eclipse ,a ndKOffice, across a range of accuracy criteria. The results show that the overall accuracies of the correctly recommended developers are between 47 and 96% for bug reports, and between 43 and 60% for feature requests. Moreover, comparison results with two other recommendation alternatives show that the presented approach outperforms them with a substantial margin. Project leads or developers can use this approach in maintenance tasks immediately after the receipt of a change request in a free-form text. Copyright q 2011 John Wiley & Sons, Ltd. <s> BIB001 </s> Survey Based Classification of Bug Triage Approaches <s> Survey Based Classification of Bug Triage Approaches (Asmita Yadav) <s> Abstract —In this paper, we propose a semi-supervised text classification approach for bug triage to avoid the deficiency of labeled bug reports in existing supervised approaches. This new approach combines naive Bayes classifier and expectation-maximization to take advantage of both labeled and unlabeled bug reports. This approach trains a classifier with a fraction of labeled bug reports. Then the approach iteratively labels numerous unlabeled bug reports and trains a new classifier with labels of all the bug reports. We also employ a weighted recommendation list to boost the performance by imposing the weights of multiple developers in training the classifier. Experimental results on bug reports of Eclipse show that our new approach outperforms existing supervised approaches in terms of classification accuracy. Keywords- automatic bug triage; expectation-maximization; semi-supervised text classification; weighted recommendation list I. I NTRODUCTION Most of large software projects employ a bug tracking system (bug repository) to manage bugs and developers. In software development and maintenance, a bug repository is a significant software repository for storing the bugs submitted by <s> BIB002
|
Jifeng BIB002 presented another machine learning mechanism that combines the naïve Bayes classifier with expectation-maximization (EM) and utilizes both labeled and unlabeled bug reports. They reported an improvement in classification accuracy of only up to 6% with this classifier; it could not achieve a sufficient improvement over the naïve Bayes classifier, so the experimental results obtained are not up to the mark for real-world applications. This may be due to the use of inappropriate bug reports, wrong assumptions of the EM model for real-world data, and the selection of irrelevant developers as bug fixers. Jonas proposed a semi-automatic assignment approach that is based on a unified model and retrieves relevant artifacts by using the history of bug reports and the association between work items (newly reported bugs) and developers. At a certain time, a snapshot of the project is captured to assign all the work items that finally have a fixed state. Two classifiers, SVM (support vector machine) and naïve Bayes, both give better results for a certain project state (state-based: a snapshot of the project at a certain time). For the datasets UNICASE, DOLLI and King Tale, the SVM classifier produced 38%, 28.9% and 37.4% accuracy, and on the same datasets naïve Bayes achieved 39.1%, 29.7% and 37.8% accuracy, respectively. However, this model-based approach is only applicable to work items that are linked to the functional requirements of the work item and is therefore not directly applicable in scenarios where such links do not exist. Although various techniques had been proposed for concept location based on searching abstract system dependence graphs, such as static and dynamic techniques, IR-based techniques, etc., no earlier work had applied concept location techniques to the problem of expert developer recommendation. In 2011, a concept location approach was proposed by Huzefa BIB001 . Here, the authors present an approach to recommend a ranked list of expert developers together with the relevant source code. A source-code corpus is created from comments and identifiers extracted from the source code and is indexed by Latent Semantic Indexing (LSI). xFinder and xFactor are used for expert developer recommendation to prepare the ranked list of relevant developers. Three open source systems, namely KOffice, Eclipse and ArgoUML, are used for evaluation, and accuracies of 95%, 82% and 80%, respectively, are recorded by this recommendation system. xFinder is used to increase rank effectiveness, and in 50% of cases the first relevant developer is found in the first position of the ranked list. The ranked list is prepared at four granularity levels, i.e. file, package, system and overall. In a few cases, location-based tools do not return exactly the relevant source code or class file for bug fixing; 88.89% request-level accuracy is achieved. A developer's expertise, knowledge and experience are extracted only from his or her previous contributions to bug fixing; still, this is not a sufficient condition for correctly assigning a developer to resolve a bug, because in some cases a developer has resolved more than one bug and therefore has several committer IDs, and it is a challenge for xFinder to select only one committer ID to identify the developer. The approach's effectiveness on business projects also remains to be shown. This approach also suffers from the cold start problem, i.e. in the case of a new type of bug or a new developer, none of the historical data is helpful for identifying the relevant features. To improve bug triage accuracy, an activity-profile method has been described by Hoda . In that method, the ranked list of developers is based on developer expertise in the bug report topic. It offers a new way to save bug-fixing time by recommending and adding new developers who are willing to volunteer and have sufficient knowledge for bug resolution. In experiments, this proposed approach achieved a better hit ratio, i.e. 88%, compared to LDA- and SVM-based activity profile techniques; although, to further improve the accuracy of the triager, an ensemble classifier (e.g. SVM+LDA) could be a better option than a single classifier.
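For the LSI-based concept location step used in the developer recommendation approach discussed above, a rough sketch is shown below: source-code documents are indexed with TF-IDF followed by a truncated SVD (an LSI approximation), ranked against the text of a change request, and mapped to candidate developers. The tiny corpus, the query and the file-to-committer mapping are illustrative assumptions; the actual xFinder and xFactor metrics are considerably more elaborate.

from sklearn.decomposition import TruncatedSVD
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
from sklearn.pipeline import make_pipeline

# Invented source-code "documents" (identifiers and comments per file).
code_docs = {
    "ui/PreferencesDialog.java": "preferences dialog settings apply cancel listener",
    "core/ProjectSaver.java": "save project serialize workspace file writer",
    "render/IconPainter.java": "icon paint scale dpi toolbar image",
}
committers = {  # last known committer per file (illustrative mapping)
    "ui/PreferencesDialog.java": "alice",
    "core/ProjectSaver.java": "bob",
    "render/IconPainter.java": "carol",
}
change_request = "saving the project corrupts the workspace file"

# TF-IDF followed by a truncated SVD approximates LSI indexing.
lsi = make_pipeline(TfidfVectorizer(), TruncatedSVD(n_components=2))
doc_vectors = lsi.fit_transform(list(code_docs.values()))
query_vector = lsi.transform([change_request])

scores = cosine_similarity(query_vector, doc_vectors).ravel()
for path, score in sorted(zip(code_docs, scores), key=lambda item: -item[1]):
    print(f"{score:.2f}  {path}  ->  {committers[path]}")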
|
Survey Based Classification of Bug Triage Approaches <s> Meta-Data Based <s> Fast and accurate localization of software defects continues to be a difficult problem since defects can emanate from a large variety of sources and can often be intricate in nature. In this paper, we show how version histories of a software project can be used to estimate a prior probability distribution for defect proneness associated with the files in a given version of the project. Subsequently, these priors are used in an IR (Information Retrieval) framework to determine the posterior probability of a file being the cause of a bug. We first present two models to estimate the priors, one from the defect histories and the other from the modification histories, with both types of histories as stored in the versioning tools. Referring to these as the base models, we then extend them by incorporating a temporal decay into the estimation of the priors. We show that by just including the base models, the mean average precision (MAP) for bug localization improves by as much as 30%. And when we also factor in the time decay in the estimates of the priors, the improvements in MAP can be as large as 80%. <s> BIB001 </s> Survey Based Classification of Bug Triage Approaches <s> Meta-Data Based <s> Large open source software projects receive abundant rates of submitted bug reports. Triaging these incoming reports manually is error-prone and time consuming. The goal of bug triaging is to assign potentially experienced developers to new-coming bug reports. To reduce time and cost of bug triaging, we present an automatic approach to predict a developer with relevant experience to solve the new coming report. In this paper, we investigate the use of five term selection methods on the accuracy of bug assignment. In addition, we re-balance the load between developers based on their experience. We conduct experiments on four real datasets. The experimental results show that by selecting a small number of discriminating terms, the F-score can be significantly improved. <s> BIB002 </s> Survey Based Classification of Bug Triage Approaches <s> Meta-Data Based <s> Software maintenance starts as soon as the first artifacts are delivered and is essential for the success of the software. However, keeping maintenance activities and their related artifacts on track comes at a high cost. In this respect, change request CR repositories are fundamental in software maintenance. They facilitate the management of CRs and are also the central point to coordinate activities and communication among stakeholders. However, the benefits of CR repositories do not come without issues, and commonly occurring ones should be dealt with, such as the following: duplicate CRs, the large number of CRs to assign, or poorly described CRs. Such issues have led researchers to an increased interest in investigating CR repositories, by considering different aspects of software development and CR management. In this paper, we performed a systematic mapping study to characterize this research field. We analyzed 142 studies, which we classified in two ways. First, we classified the studies into different topics and grouped them into two dimensions: challenges and opportunities. Second, the challenge topics were classified in accordance with an existing taxonomy for information retrieval models. In addition, we investigated tools and services for CR management, to understand whether and how they addressed the topics identified. Copyright © 2013 John Wiley & Sons, Ltd. 
<s> BIB003 </s> Survey Based Classification of Bug Triage Approaches <s> Meta-Data Based <s> Abstract Context Feature location aims to identify the source code location corresponding to the implementation of a software feature. Many existing feature location methods apply text retrieval to determine the relevancy of the features to the text data extracted from the software repositories. One of the preprocessing activities in text retrieval is term-weighting, which is used to adjust the importance of a term within a document or corpus. Common term-weighting techniques may not be optimal to deal with text data from software repositories due to the origin of term-weighting techniques from a natural language context. Objective This paper describes how the consideration of when the terms were used in the repositories, under the condition of weighting only the noun terms, can improve a feature location approach. Method We propose a feature location approach using a new term-weighting technique that takes into account how recently a term has been used in the repositories. In this approach, only the noun terms are weighted to reduce the dataset volume and avoid dealing with dimensionality reduction. Results An empirical evaluation of the approach on four open-source projects reveals improvements to the accuracy, effectiveness and performance up to 50%, 17%, and 13%, respectively, when compared to the commonly-used Vector Space Model approach. The comparison of the proposed term-weighting technique with the Term Frequency-Inverse Document Frequency technique shows accuracy, effectiveness, and performance improvements as much as 15%, 10%, and 40%, respectively. The investigation of using only noun terms, instead of using all terms, in the proposed approach also indicates improvements up to 28%, 21%, and 58% on accuracy, effectiveness, and performance, respectively. Conclusion In general, the use of time in the weighting of terms, along with the use of only the noun terms, makes significant improvements to a feature location approach that relies on textual information. <s> BIB004 </s> Survey Based Classification of Bug Triage Approaches <s> Meta-Data Based <s> For complex and popular software, project teams could receive a large number of bug reports. It is often tedious and costly to manually assign these bug reports to developers who have the expertise to fix the bugs. Many bug triage techniques have been proposed to automate this process. In this paper, we describe our study on applying conventional bug triage techniques to projects of different sizes. We find that the effectiveness of a bug triage technique largely depends on the size of a project team (measured in terms of the number of developers). The conventional bug triage methods become less effective when the number of developers increases. To further improve the effectiveness of bug triage for large projects, we propose a novel recommendation method called Bug Fixer, which recommends developers for a new bug report based on historical bug-fix information. Bug Fixer constructs a Developer-Component-Bug (DCB) network, which models the relationship between developers and source code components, as well as the relationship between the components and their associated bugs. A DCB network captures the knowledge of "who fixed what, where". For a new bug report, Bug Fixer uses a DCB network to recommend to triager a list of suitable developers who could fix this bug. 
We evaluate Bug Fixer on three large-scale open source projects and two smaller industrial projects. The experimental results show that the proposed method outperforms the existing methods for large projects and achieves comparable performance for small projects. <s> BIB005 </s> Survey Based Classification of Bug Triage Approaches <s> Meta-Data Based <s> Bugs are prevalent in software systems and improving time efficiency in bug fixing is desired. We performed an analysis on 11,115 bug records of Eclipse JDT and found that bug resolution time is log-normally distributed and varies across fixers, technical topics, and bug severity levels. We then propose FixTime, a novel method for bug assignment. The key of FixTime is a topicbased, log-normal regression model for predicting defect resolution time on which FixTime is based to make fixing assignment recommendations. Preliminary results suggest that FixTime has higher prediction accuracy than existing approaches. <s> BIB006 </s> Survey Based Classification of Bug Triage Approaches <s> Meta-Data Based <s> Improve automatic bug assignment (ABA) accuracy by using metadata in term weighting.Improve accuracy of common term-weighting technique, tf-idf, up to 14%.Recommend a light method for ABA based on the new term-weighting technique.Outperform the ML and IR methods by recommended method up to 55%. Bug assignment is one of the important activities in bug triaging that aims to assign bugs to the appropriate developers for fixing. Many recommended automatic bug assignment approaches are based on text analysis methods such as machine learning and information retrieval methods. Most of these approaches use term-weighting techniques, such as term frequency-inverse document frequency (tf-idf), to determine the value of terms. However, the existing term-weighting techniques only deal with frequency of terms without considering the metadata associated with the terms that exist in software repositories. This paper aims to improve automatic bug assignment by using time-metadata in tf-idf (Time-tf-idf). In the Time-tf-idf technique, the recency of using the term by the developer is considered in determining the values of the developer expertise. An evaluation of the recommended automatic bug assignment approach that uses Time-tf-idf, called ABA-Time-tf-idf, was conducted on three open-source projects. The evaluation shows accuracy and mean reciprocal rank (MRR) improvements of up to 11.8% and 8.94%, respectively, in comparison to the use of tf-idf. Moreover, the ABA-Time-tf-idf approach outperforms the accuracy and MRR of commonly used approaches in automatic bug assignment by up to 45.52% and 55.54%, respectively. Consequently, consideration of time-metadata in term weighting reasonably leads to improvements in automatic bug assignment. <s> BIB007
|
Bug metadata has been used by many researchers to propose methods for automatic bug assignment. Metadata contains all types of bug-related details, such as bug timestamps (when the bug was filed, when it was tossed between developers, and when it was finally fixed), bug history, bug comments, developer sparseness, etc. Term-weighting techniques are mostly used by researchers to determine term frequency, and this textual information is used to prepare a ranked list of relevant developers for bug fixing. Term frequency-inverse document frequency (tf-idf) is a common term-weighting technique for bug assignment, as explained by Cavalcanti BIB003 , but tf-idf does not consider the timestamp, i.e. the time at which a term was used. Ramin & Anvik BIB007 presented an approach called ABA-Time-tf-idf (Automatic Bug Assignment using the Time-tf-idf term-weighting technique). They considered the time when terms were used by developers when assigning weights to terms during the triaging process. Important information such as the time difference between the developer's last activity and the new bug's reporting date, and the time spent fixing previous bugs, is extracted from the time-metadata to decide the developer ranking. The results of the ABA-Time-tf-idf approach indicated improvements of 26-37.2%, 3.4-14.4%, 5.6-17.2% and 12.6-19.8% in comparison to the average accuracies of the SVM, NB, VSM and SUM approaches, respectively, on five random data sets of Eclipse projects. This methodology assumes that the developer who committed the changes to the repository is the actual fixer of the bug report. In some projects, however, a few developers work as gatekeepers who alone have permission to commit to the source code, which means the developer who changes the source code and the one who commits the change(s) to the software repository can be different. Another time-stamp-based improvement is presented by Ramin & Anvik : a developer recommendation system based on the similarity (weight) between the new bug's terms and previous bug-report information (the corpus). They considered important details about a bug such as the term's creation time, its modification time and who changed it; this information is used to calculate developer activity at various periods of the project's life. If developer A used the term "switch" to fix a bug two years ago and developer B used it 8 months ago on another bug containing the same term, then developer B is more appropriate to fix a new bug that contains the same term "switch". The TNBA (time-aware noun-based bug assignment) approach outperformed the TNBA (no-time), tf-idf and VSM (time) approaches by as much as 14%, 12% and 48% on the Eclipse, Netbeans and ArgoUML projects, respectively. This approach only works on exact matching in text processing and cannot handle approximate matching, e.g. by using an ontology to handle synonyms in the bug text. All information such as developer identifiers, timestamps and commit comments is stored in the metadata, but only Sisman & Kak BIB001 had used time-metadata for a feature location approach. Sima, Sai & Ramin BIB004 proposed an approach that weights and ranks source-code locations based on both their textual similarity with a change request and the use of time-metadata. It uses only the noun terms for weighting in order to reduce the dataset volume. This approach gives much better results than tf-idf techniques, by up to 15%, 10% and 14% in terms of accuracy, effectiveness and performance, respectively.
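The recency-weighting idea behind these time-aware techniques, illustrated by the "switch" example above, can be sketched as follows. The exponential decay and the 180-day half-life are assumptions made for illustration; they are not the exact Time-tf-idf or TNBA formulations of the cited works.

from datetime import date

HALF_LIFE_DAYS = 180  # assumed half-life of developer expertise in a term

def recency_weight(last_used: date, today: date) -> float:
    """Weight that halves every HALF_LIFE_DAYS since the term was last used."""
    age_days = (today - last_used).days
    return 0.5 ** (age_days / HALF_LIFE_DAYS)

# term -> {developer: date the developer last fixed a bug containing the term}
last_use = {
    "switch": {"dev_a": date(2015, 3, 1), "dev_b": date(2016, 9, 1)},
}

today = date(2017, 5, 1)
new_bug_terms = ["switch"]

scores = {}
for term in new_bug_terms:
    for developer, when in last_use.get(term, {}).items():
        scores[developer] = scores.get(developer, 0.0) + recency_weight(when, today)

# dev_b used the term more recently, so it ranks above dev_a.
print(sorted(scores.items(), key=lambda item: -item[1]))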
Another approach to automatic bug assignment is given by Ramin , using information extraction from bug metadata. Here, they calculated the similarities between bug reports and commits, found similar phrases that can be used to link a bug report to a specific commit, and finally determined the exact location of the new bug along with the relevant developer to fix it. For the experimental work, three open source projects were considered, namely Eclipse, Mozilla and Gnome, and recall levels of 62%, 43% and 41%, respectively, were achieved. Mamdouh BIB002 proposed a bug triaging approach based on text mining concepts that uses five term selection methods (log odds ratio, chi-square, term frequency-relevance frequency, mutual information and the distinguishing feature selector) to predict an experienced developer to resolve a bug. According to this approach, chi-square (χ2) gives better results than the other selection methods in terms of F-score; it improved the F-score by 6.2%, 38.2%, 26.5% and 12.1% for the open source systems Eclipse-SWT, Eclipse-UI, Netbeans and Maemo, respectively. Historical bug-fix information is used by Hao BIB005 in BugFixer to construct a Developer-Component-Bug (DCB) network. In this DCB network, they establish a relationship between developers and source code components as well as the relation between source code components and bugs; the approach then calculates the similarity between the new bug and existing bugs. This approach correctly ranked the bugs in Eclipse by up to 42.36% for the first recommendation list, but it suffers from the cold start problem when a new bug or a new developer has no previous historical information. In 2014, Tung BIB006 worked on the problem of determining the amount of time required to fix a bug, a problem also mentioned and discussed by Jin in the context of bug triaging. There, the focus was only on achieving better accuracy and cost without considering the time complexity of the problem; the approach was evaluated on four different open source projects, namely Apache, Eclipse, Linux kernel and Mozilla, and achieved better accuracy while reducing the cost of triage by 30%. However, it offered no solution for handling bug resolution time. Tung BIB006 therefore proposed a topic-based, log-normal regression model (a combination of CosTriage and a regression model) that can predict the resolution time of a given bug if it is already assigned to a given developer.
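The log-normal assumption behind the resolution-time model mentioned above can be illustrated with a simple sketch: fit an ordinary linear regression to the logarithm of historical fix times and exponentiate the prediction for a new bug. The two features and the toy data are illustrative assumptions; the cited FixTime model is topic-based and considerably richer.

import numpy as np
from sklearn.linear_model import LinearRegression

# Invented historical bugs: columns are (severity level, component id),
# targets are the observed fix times in days.
X = np.array([[1, 0], [2, 0], [2, 1], [4, 1], [5, 2], [3, 2]])
fix_days = np.array([30.0, 14.0, 20.0, 3.0, 1.0, 7.0])

# Regress the logarithm of the fix time (the log-normal assumption).
model = LinearRegression().fit(X, np.log(fix_days))

new_bug = np.array([[2, 1]])  # severity-2 bug in component 1
predicted_days = float(np.exp(model.predict(new_bug))[0])
print(f"predicted resolution time: {predicted_days:.1f} days")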
|
Survey on clinical prediction models for diabetes prediction <s> Different prediction models used for diabetes <s> A wide range of computational methods and tools for data analysis are available. In this study we took advantage of those available technological advancements to develop prediction models for the prediction of a Type-2 Diabetic Patient. We aim to investigate how the diabetes incidents are affected by patients' characteristics and measurements. Efficient predictive modeling is required for medical researchers and practitioners. This study proposes Hybrid Prediction Model (HPM) which uses Simple K-means clustering algorithm aimed at validating chosen class label of given data (incorrectly classified instances are removed, i.e. pattern extracted from original data) and subsequently applying the classification algorithm to the result set. C4.5 algorithm is used to build the final classifier model by using the k-fold cross-validation method. The Pima Indians diabetes data was obtained from the University of California at Irvine (UCI) machine learning repository datasets. A wide range of different classification methods have been applied previously by various researchers in order to find the best performing algorithm on this dataset. The accuracies achieved have been in the range of 59.4-84.05%. However the proposed HPM obtained a classification accuracy of 92.38%. In order to evaluate the performance of the proposed method, sensitivity and specificity performance measures that are used commonly in medical classification studies were used. <s> BIB001 </s> Survey on clinical prediction models for diabetes prediction <s> Different prediction models used for diabetes <s> The medical data are multidimensional, and are represented by a large number of features. Hundreds of independent features (parameters) in these high dimensional databases need to be simultaneously considered and analyzed, for valuable decision-making information in medical prediction. Most data mining methods depend on a set of features that define the behavior of the learning algorithm and directly or indirectly influence the complexity of the resulting models. Hence, to improve the efficiency and accuracy of mining task on high dimensional data, the data must be preprocessed by an efficient dimensionality reduction method. The aim of this study is to improve the diagnostic accuracy of diabetes disease by selecting informative features of Pima Indians Diabetes Dataset. This study proposes a Hybrid Prediction Model with F-score feature selection approach to identify the optimal feature subset of the Pima Indians Diabetes dataset. The features of diabetes dataset are ranked using F-score and the feature subset that gives the minimal clustering error is the optimal feature subset of the dataset. The correctly classified instances determine the pattern for diagnosis and are used for further classification process. The improved performance of the Support Vector Machine classifier measured in terms of Accuracy of the classifier, Sensitivity, Specificity and Area Under Curve (AUC) proves that the proposed feature approach indeed improves the performance of classification. The proposed prediction model achieves a predictive accuracy of 98.9427 and it is the highest predictive accuracy for diabetes dataset compared to other models in literature for this problem. <s> BIB002 </s> Survey on clinical prediction models for diabetes prediction <s> Different prediction models used for diabetes <s> description of patterns. 
In this study, decision tree method was used to predict patients with developing diabetes. The dataset used is the Pima Indians Diabetes Data Set, which collects the information of patients with and without developing diabetes. The study goes through two phases. The first phase is data preprocessing including attribute identification and selection, handling missing values, and numerical discretization. The second phase is a diabetes prediction model construction using the decision tree method. Weka software was used throughout all the phases of this study. <s> BIB003 </s> Survey on clinical prediction models for diabetes prediction <s> Different prediction models used for diabetes <s> Continuous glucose monitoring (CGM) by suitable portable sensors plays a central role in the treatment of diabetes, a disease currently affecting more than 350 million people worldwide. Noninvasive CGM (NI-CGM), in particular, is appealing for reasons related to patient comfort (no needles are used) but challenging. NI-CGM prototypes exploiting multisensor approaches have been recently proposed to deal with physiological and environmental disturbances. In these prototypes, signals measured noninvasively (e.g., skin impedance, temperature, optical skin properties, etc.) are combined through a static multivariate linear model for estimating glucose levels. In this work, by exploiting a dataset of 45 experimental sessions acquired in diabetic subjects, we show that regularisation-based techniques for the identification of the model, such as the least absolute shrinkage and selection operator (better known as LASSO), Ridge regression, and Elastic-Net regression, improve the accuracy of glucose estimates with respect to techniques, such as partial least squares regression, previously used in the literature. More specifically, the Elastic-Net model (i.e., the model identified using a combination of and norms) has the best results, according to the metrics widely accepted in the diabetes community. This model represents an important incremental step toward the development of NI-CGM devices effectively usable by patients. <s> BIB004 </s> Survey on clinical prediction models for diabetes prediction <s> Different prediction models used for diabetes <s> Diabetes complications often afflict diabetes patients seriously: over 68% of diabetes-related mortality is caused by diabetes complications. In this paper, we study the problem of automatically diagnosing diabetes complications from patients' lab test results. The objective problem has two main challenges: 1) feature sparseness: a patient only undergoes 1:26% lab tests on average, and 65:5% types of lab tests are performed on samples from less than 10 patients; 2) knowledge skewness: it lacks comprehensive detailed domain knowledge of the association between diabetes complications and lab tests. To address these challenges, we propose a novel probabilistic model called Sparse Factor Graph Model (SparseFGM). SparseFGM projects sparse features onto a lower-dimensional latent space, which alleviates the problem of sparseness. SparseFGM is also able to capture the associations between complications and lab tests, which help handle the knowledge skewness. We evaluate the proposed model on a large collections of real medical records. SparseFGM outperforms (+20% by F1) baselines significantly and gives detailed associations between diabetes complications and lab tests. 
<s> BIB005 </s> Survey on clinical prediction models for diabetes prediction <s> Different prediction models used for diabetes <s> Diabetes Mellitus or Diabetes has been portrayed as worse than Cancer and HIV (Human Immunodeficiency Virus). It develops when there are high blood sugar levels over a prolonged period. Recently, it has been quoted as a risk factor for developing Alzheimer, and a leading cause for blindness & kidney failure. Prevention of the disease is a hot topic for research in the healthcare community. Many techniques have been discovered to find the causes of diabetes and cure it. This research paper is a discussion on establishing a relationship between diabetes risk likely to be developed from a person's daily lifestyle activities such as his/her eating habits, sleeping habits, physical activity along with other indicators like BMI (Body Mass Index), waist circumference etc. Initially, a Chi-Squared Test of Independence was performed followed by application of the CART (Classification and Regression Trees) machine learning algorithm on the data and finally using Cross-Validation, the bias in the results was removed. <s> BIB006 </s> Survey on clinical prediction models for diabetes prediction <s> Different prediction models used for diabetes <s> Background/Objectives: Different methods can be applied to create predictive models for the clinical data with binary outcome variable. This research aims to explore the process of constructing the modified predictive model of Logistic Regression (LR). Method/Statistical Analysis: To improve the accuracy of prediction, the Distance based Outlier Detection (DBOD) is used for pre-processing and Bipolar Sigmoid Function calculated using Neuro based Weight Activation Function is used in Logistic Regression instead of Sigmoid Function. Datasets were collected from clinical laboratory of AR Hospital in Madurai for the three years 2012, 2013 and 2014 are used for analysis. Data pre-processing is done to avoid the existence of insignificant data in the dataset. The detected outliers, using DBOD method are treated using a method closest to the normal range. A comparative study among different distance measures likes Euclidean and Manhattan etc. are done for DBOD method. The pre-processed data finally is fed as input to the Logistic Regression model. Maximum likelihood estimation is used to fit the model. Logistic Model is built from the Sigmoid Function using the Regression Coefficients. The accuracy of the model is evaluated by 10 fold cross validation. Findings: Logistic Model is built from the Sigmoid Function using the Regression Coefficients, produces the accuracy of 79%. The Sigmoid Function calculated using Random Weight Function provides the prediction accuracy of 84.2% and the Bipolar Sigmoid Function calculated using Neuro based Weight Activation function provides the prediction accuracy of 90.4%. On comparison, Bipolar Sigmoid Function calculated using Neuro weight activation function outperforms well than the Sigmoid Function calculated using regression coefficients. Improvements/Applications: The accuracy of Logistic Regression is improved from 79% to 90.4%. The most important factors: Erythrocyte Sedimentation Rate (ESR) and Estimation of Mean blood Glucose are identified from positive subjects of Diabetes Mellitus. The analysis is done for the 31 Diabetes Disease attributes of three years dataset. 
<s> BIB007 </s> Survey on clinical prediction models for diabetes prediction <s> Different prediction models used for diabetes <s> Nowadays, diabetes disease is considered one of the key reasons of death among the people in the world. The availability of extensive medical information leads to the search for proper tools to support physicians to diagnose diabetes disease accurately. This research aimed at improving the diagnostic accuracy and reducing diagnostic miss-classification based on the extracted significant diabetes features. Feature selection is critical to the superiority of classifiers founded through knowledge discovery approaches, thereby solving the classification problems relating to diabetes patients. This study proposed an integration approach between the SVM technique and K-means clustering algorithms to diagnose diabetes disease. Experimental results achieved high accuracy for differentiating the hidden patterns of the Diabetic and Non-diabetic patients compared with the modern diagnosis methods in term of the performance measure. The T-test statistical method obtained significant improvement results based on K-SVM technique when tested on the UCI Pima Indian standard dataset. <s> BIB008 </s> Survey on clinical prediction models for diabetes prediction <s> Different prediction models used for diabetes <s> Currently in the healthcare industry different data mining methods are used to mine the interesting pattern of diseases using the statistical medical data with the help of different machine learning techniques. The conventional disease diagnosis system uses the perception and experience of doctor without using the complex clinical data. The proposed system assists doctor to predict disease correctly and the prediction makes patients and medical insurance providers benefited. This research focuses on to diagnosis diabetes disease as it is a great threat to human life worldwide. The system uses the Decision Tree and K-Nearest Neighbor (KNN) Algorithms as supervised classification model. Finally, the proposed system calculates and compares the accuracy of C4.5 and KNN and the experimental result demonstrates that the C4.5 provides better accuracy for diagnosis diabetes. For the clinical database, the Pima Indians Dataset is used in this research. <s> BIB009
|
A multi-stage adjustment model with a low misclassification rate, which predicts which persons are most likely to develop diabetes, was built using the KoGES dataset . A physiological model that predicts the blood glucose level 30 min in advance was developed from the data of five patients by training an SVR on physiological features; it produced better results than the physicians . Another type of predictive model is the sparse factor graph model, which not only forecasts diabetes complications but also discovers the underlying associations between diabetes complications and lab test types. All algorithms were implemented in C++, and all experiments were performed on a Mac running Mac OS X with an Intel Core i7 2.66 GHz processor and 4 GB of memory. The dataset used for the experiments was collected from a geriatric hospital and spans 1 year, with 181,933 medical records, 35,525 patients and 1945 types of lab tests; 60% of the data was used for training the model and the rest for testing. The proposed model addresses two challenges: feature sparseness and knowledge skewness BIB005 . A hybrid model was developed to predict whether a diagnosed patient may develop diabetes within 5 years. The model was built with the WEKA tool on the PIMA Indian diabetes dataset and achieved 92.38% accuracy BIB001 ; the details of the hybrid model are shown in Fig. 3 . Another hybrid prediction model produces an optimal feature subset, which helps in detecting diabetes with high accuracy. That model was also implemented with the WEKA tool on the PIMA Indian diabetes dataset and achieved an accuracy of 98.9247%. The procedure adopted by the authors is to first preprocess the dataset, compute F-score values of the features, select the features with high F-scores as discriminative features, use the k-means algorithm to select the feature subset that gives the minimum clustering error, and finally apply SVM for classification BIB002 , as shown in Fig. 4 . In paper the authors used two types of neural networks to determine which yields the more accurate classifier for predicting diabetes: a multilayer neural network and a probabilistic neural network. The Pima Indian diabetes dataset, with two classes and 768 samples, was used; 576 samples were used for training and 192 for testing. The proposed methods were shown to perform better than previous methods. In paper the author developed a prediction model based on a Hybrid-Twin Support Vector Machine (H-TSVM), which predicts whether a new patient is suffering from diabetes or not. The Pima dataset was used for the experiments, and the kernel function is what distinguishes this method from others; the classifier achieves an accuracy of 87.46%. In paper the author proposed a prediction model that classifies type 2 diabetic treatment plans into three groups: insulin, diet and medication. The dataset, from the JABER ABN ABU ALIZ clinic centre, contains 318 medical records. The model was developed with the WEKA tool using the J48 classifier and produced an accuracy of 70.8%. In paper BIB007 the author developed a prediction model that predicts which types of disease a diabetic patient may develop. To build the model, a dataset spanning 3 years was collected from AR hospital, with 739 patient records and 31 attributes.
The preprocessed data, after deleting outliers using distance-based outlier detection (DBOD), is given as input to a logistic regression model built with a Bipolar Sigmoid Function calculated using a Neuro-based Weight Activation function. The model produced a prediction accuracy of 90.4%. In paper a tool, FNC, was developed for diagnosing diabetes, as shown in Fig. 5 ; to combine the results from the three approaches, a rule-based algorithm was applied to all three techniques to improve the accuracy, and the best accuracy was obtained with case-based reasoning. In paper BIB008 the author developed a hybrid model, K-SVM. The important criterion that distinguishes this model from other methods is its feature selection algorithm. The PIMA dataset was used for the experiments, and the diagnosis accuracies using K-SVM were 99.74, 99.78, and 99.81% for learning experiments with 50, 60, and 70% of the data respectively, and 99.82, 99.85, and 99.90% for testing experiments with 50, 60, and 70% of the data respectively. In paper the authors developed a model that predicts whether a person will develop diabetes by considering daily lifestyle activities. To build the prediction model, the PIMA diabetes dataset was used and the CART (Classification and Regression Trees) machine learning classifier was applied; the proposed model achieved an accuracy of 75%. In paper the authors developed a model that predicts whether a person will develop diabetes or not, again using the PIMA diabetes dataset. In the proposed method a controlled binning technique is first applied and then multiple regression is used to improve the accuracy of the model; after incorporating all techniques an accuracy of 77.85% was achieved. The controlled binning technique, which is the innovative idea of that paper, is calculated using Eqs. (1) and (2). In paper BIB003 the authors developed a decision tree model for the diagnosis of type 2 diabetes using the Pima Indian diabetes dataset. Pre-processing techniques such as attribute identification and selection, handling of missing values, and numerical discretization were used to improve the quality of the data, and the Weka tool was used with the J48 decision tree classifier. In paper the authors developed a prediction model using neural networks to classify and diagnose the onset and progression of diabetes, based on data from 545 patients of a diabetes clinic. They first trained and tested neural networks with different numbers of neurons and found that a network with seven neurons produced the highest accuracy. A memetic algorithm was then used to update the weights, which improved the accuracy of the model from 88.0 to 93.2%. This model was also compared with other models, and the neural network with seven neurons combined with the memetic algorithm was found to be the best model (Figs. 6, 7) . In paper BIB009 the authors developed an expert healthcare predictive decision support system that predicts diabetes. The model is trained on the Pima diabetes dataset using decision tree and K-nearest neighbor algorithms, and the C4.5 algorithm achieved 90.43% accuracy. In paper BIB006 the authors developed a prediction model using the Chi-squared test to find not only dependencies between factors but also independences. CART is then applied to build a prediction model with 75% accuracy. Data was collected through questionnaires from 200 people and the model was built using the R tool.
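Several of the hybrid models surveyed above (the F-score/k-means/SVM procedure and K-SVM in particular) follow a common pattern: rank the features, choose a subset using a clustering criterion, and then train an SVM on that subset. The Python sketch below illustrates only this general pattern; it is not the authors' implementation, the per-feature normalisation of the k-means inertia is a simplification of their clustering-error criterion, and loading the Pima data from OpenML (dataset name "diabetes") is an assumption made for demonstration.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import fetch_openml
from sklearn.feature_selection import f_classif
from sklearn.model_selection import cross_val_score
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Load the Pima Indians diabetes data (assumes the OpenML "diabetes" dataset
# and a working network connection).
X, y = fetch_openml("diabetes", version=1, return_X_y=True, as_frame=False)
X = StandardScaler().fit_transform(X)

# Step 1: rank features by their ANOVA F-score against the class label.
f_scores, _ = f_classif(X, y)
ranked = np.argsort(f_scores)[::-1]

# Step 2: for each candidate set of top-ranked features, use k-means inertia
# (normalised per feature here as a simple stand-in for the papers' own
# clustering-error criterion) to pick a subset.
best_subset, best_score = ranked[:2], np.inf
for m in range(2, X.shape[1] + 1):
    subset = ranked[:m]
    inertia = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X[:, subset]).inertia_ / m
    if inertia < best_score:
        best_subset, best_score = subset, inertia

# Step 3: train and evaluate an SVM on the selected feature subset.
acc = cross_val_score(SVC(kernel="rbf"), X[:, best_subset], y, cv=10).mean()
print(f"selected features: {best_subset.tolist()}, 10-fold CV accuracy: {acc:.3f}")
```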
In paper BIB004 the authors developed an elastic net model that improves the accuracy of glucose estimation. They collected a dataset of 45 experimental sessions from diabetic patients using a noninvasive glucose device, i.e., no blood sample is taken. Three models were constructed using the regularized methods LASSO, Ridge and Elastic Net. The elastic net model was compared with LASSO, Ridge and partial least squares regression and was found to be the best. From all of the techniques and prediction models discussed above, we want a prediction model that predicts diabetes for a diagnosed person. Since this output is obtained as a function of time, we would like to use a regression model. Of all regression models, Elastic Net is the most useful because categorical, numerical and image or signal data can be given as input to the model. Elastic net regression is a combination of LASSO (Least Absolute Shrinkage and Selection Operator) and Ridge regression; thus it supports both shrinkage of coefficients and the grouping effect.
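As a rough illustration of how the elastic net combines the two penalties, the following Python sketch fits scikit-learn's ElasticNetCV on synthetic data standing in for the (non-public) 45-session noninvasive sensor dataset; the number of channels, the noise level and the column semantics are assumptions made purely for demonstration.

```python
import numpy as np
from sklearn.linear_model import ElasticNetCV
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_sessions, n_channels = 45, 120            # hypothetical: many correlated sensor channels
X = rng.normal(size=(n_sessions, n_channels))
coef = np.zeros(n_channels)
coef[:10] = rng.normal(size=10)             # only a few channels actually carry signal
y = X @ coef + rng.normal(scale=0.5, size=n_sessions)   # stand-in "glucose" target

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# l1_ratio interpolates between Ridge (0) and LASSO (1); cross-validation
# chooses both the mixing ratio and the regularisation strength alpha.
model = ElasticNetCV(l1_ratio=[0.1, 0.5, 0.9], cv=5).fit(X_tr, y_tr)
print(f"l1_ratio={model.l1_ratio_}, alpha={model.alpha_:.4f}, "
      f"held-out R^2={model.score(X_te, y_te):.3f}")
```

Cross-validating over l1_ratio lets the model move between Ridge-like and LASSO-like behaviour, which is exactly the shrinkage-plus-grouping trade-off mentioned above.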
|
Survey on clinical prediction models for diabetes prediction <s> Conclusions <s> Clinical decision-making needs available information to be the guidance for physicians. Nowadays, data mining method is applied in medical research in order to analyze large volume of medical data. This study attempts to use data mining method to analyze the databank of Diabetes disease and diagnose the Diabetes disease. This study involves the implementation of FCM and SVM and testing it on a set of medical data related to diabetes diagnosis problem. The medical data is taken from UCI repository, consists of 9 input attributes related to clinical diagnosis of diabetes, and one output attribute which indicates whether the patient is diagnosed with the diabetes or not. The whole data set consists of 768 cases. <s> BIB001 </s> Survey on clinical prediction models for diabetes prediction <s> Conclusions <s> Background/Objectives: Different methods can be applied to create predictive models for the clinical data with binary outcome variable. This research aims to explore the process of constructing the modified predictive model of Logistic Regression (LR). Method/Statistical Analysis: To improve the accuracy of prediction, the Distance based Outlier Detection (DBOD) is used for pre-processing and Bipolar Sigmoid Function calculated using Neuro based Weight Activation Function is used in Logistic Regression instead of Sigmoid Function. Datasets were collected from clinical laboratory of AR Hospital in Madurai for the three years 2012, 2013 and 2014 are used for analysis. Data pre-processing is done to avoid the existence of insignificant data in the dataset. The detected outliers, using DBOD method are treated using a method closest to the normal range. A comparative study among different distance measures likes Euclidean and Manhattan etc. are done for DBOD method. The pre-processed data finally is fed as input to the Logistic Regression model. Maximum likelihood estimation is used to fit the model. Logistic Model is built from the Sigmoid Function using the Regression Coefficients. The accuracy of the model is evaluated by 10 fold cross validation. Findings: Logistic Model is built from the Sigmoid Function using the Regression Coefficients, produces the accuracy of 79%. The Sigmoid Function calculated using Random Weight Function provides the prediction accuracy of 84.2% and the Bipolar Sigmoid Function calculated using Neuro based Weight Activation function provides the prediction accuracy of 90.4%. On comparison, Bipolar Sigmoid Function calculated using Neuro weight activation function outperforms well than the Sigmoid Function calculated using regression coefficients. Improvements/Applications: The accuracy of Logistic Regression is improved from 79% to 90.4%. The most important factors: Erythrocyte Sedimentation Rate (ESR) and Estimation of Mean blood Glucose are identified from positive subjects of Diabetes Mellitus. The analysis is done for the 31 Diabetes Disease attributes of three years dataset. <s> BIB002 </s> Survey on clinical prediction models for diabetes prediction <s> Conclusions <s> Nowadays, diabetes disease is considered one of the key reasons of death among the people in the world. The availability of extensive medical information leads to the search for proper tools to support physicians to diagnose diabetes disease accurately. This research aimed at improving the diagnostic accuracy and reducing diagnostic miss-classification based on the extracted significant diabetes features. 
Feature selection is critical to the superiority of classifiers founded through knowledge discovery approaches, thereby solving the classification problems relating to diabetes patients. This study proposed an integration approach between the SVM technique and K-means clustering algorithms to diagnose diabetes disease. Experimental results achieved high accuracy for differentiating the hidden patterns of the Diabetic and Non-diabetic patients compared with the modern diagnosis methods in term of the performance measure. The T-test statistical method obtained significant improvement results based on K-SVM technique when tested on the UCI Pima Indian standard dataset. <s> BIB003
|
In this paper a detailed description of predictive modeling is presented, covering both traditional and hybrid prediction models, and it is shown that hybrid models produce higher accuracy than traditional models. Researchers who intend to work on developing clinical prediction models will benefit from this survey. There is wide scope for the development of clinical prediction models, especially for diabetes, as it is a growing disease in developing countries like India. From the survey of the above papers we can identify many gaps that remain to be filled: usage of larger datasets , outlier detection , improving the prediction model , integration of optimization techniques into hybrid prediction models BIB003 , implementation of prediction models for other diseases on Android mobile BIB002 , development of prediction models that include type 1 treatment plans with more attributes , and usage of datasets with multiple classes BIB001 .
|
Privacy Preserving Web Query Log Publishing: A Survey on Anonymization Techniques <s> Privacy-Preserving Data Publishing <s> Abstract This article discusses theory and method of complementary cell suppression and related topics in statistical disclosure control. Emphasis is placed on the development of methods that are theoretically broad but also practical to implement. The approach draws from areas of discrete mathematics and linear optimization theory. <s> BIB001 </s> Privacy Preserving Web Query Log Publishing: A Survey on Anonymization Techniques <s> Privacy-Preserving Data Publishing <s> Often a data holder, such as a hospital or bank, needs to share person-specific records in such a way that the identities of the individuals who are the subjects of the data cannot be determined. One way to achieve this is to have the released records adhere to k- anonymity, which means each released record has at least (k-1) other records in the release whose values are indistinct over those fields that appear in external data. So, k- anonymity provides privacy protection by guaranteeing that each released record will relate to at least k individuals even if the records are directly linked to external information. This paper provides a formal presentation of combining generalization and suppression to achieve k-anonymity. Generalization involves replacing (or recoding) a value with a less specific but semantically consistent value. Suppression involves not releasing a value at all. The Preferred Minimal Generalization Algorithm (MinGen), which is a theoretical algorithm presented herein, combines these techniques to provide k-anonymity protection with minimal distortion. The real-world algorithms Datafly and µ-Argus are compared to MinGen. Both Datafly and µ-Argus use heuristics to make approximations, and so, they do not always yield optimal results. It is shown that Datafly can over distort data and µ-Argus can additionally fail to provide adequate protection. <s> BIB002
|
Researchers in the field of privacy-preserving data publishing focus on designing techniques to publish data that is as useful as possible while preserving the privacy of individuals . Publishing the data itself, instead of publishing data mining results, is much more useful and interesting because many other analyses can be performed on such data. Thus the published data should be potentially useful for many data analysis objectives, which makes privacy-preserving data publishing challenging. The process of anonymization BIB001 refers to hiding the identity (or sensitive information) of individuals. Removing explicit identifiers (such as name) is not sufficient, since non-identifying personal data (such as age, gender and zipcode) can be combined with publicly available data to identify an individual BIB002 . The combination of such non-explicit identifiers is called the quasi-identifier (QI) attributes , which can be used to link an individual to some sensitive attribute (SA) such as his disease.
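The following toy example (with made-up records and attribute names) shows how such a linkage works in practice: joining the released table with a public list on the QI attributes immediately re-identifies the individuals.

```python
import pandas as pd

published = pd.DataFrame({        # released table: names removed, SA kept
    "age": [34, 47, 34],
    "gender": ["F", "M", "M"],
    "zipcode": ["53703", "53711", "53703"],
    "disease": ["flu", "HIV", "cancer"]})

public_list = pd.DataFrame({      # e.g. a public voter registration list
    "name": ["Alice", "Bob"],
    "age": [34, 47],
    "gender": ["F", "M"],
    "zipcode": ["53703", "53711"]})

# Joining on the quasi-identifier re-identifies the individuals and links
# them to their sensitive attribute.
linked = public_list.merge(published, on=["age", "gender", "zipcode"])
print(linked)   # Alice -> flu, Bob -> HIV
```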
|
Privacy Preserving Web Query Log Publishing: A Survey on Anonymization Techniques <s> Data Privacy Models <s> There has been increasing interest in the problem of building accurate data mining models over aggregate data, while protecting privacy at the level of individual records. One approach for this problem is to randomize the values in individual records, and only disclose the randomized values. The model is then built over the randomized data, after first compensating for the randomization (at the aggregate level). This approach is potentially vulnerable to privacy breaches: based on the distribution of the data, one may be able to learn with high confidence that some of the randomized records satisfy a specified property, even though privacy is preserved on average.In this paper, we present a new formulation of privacy breaches, together with a methodology, "amplification", for limiting them. Unlike earlier approaches, amplification makes it is possible to guarantee limits on privacy breaches without any knowledge of the distribution of the original data. We instantiate this methodology for the problem of mining association rules, and modify the algorithm from [9] to limit privacy breaches without knowledge of the data distribution. Next, we address the problem that the amount of randomization required to avoid privacy breaches (when mining association rules) results in very long transactions. By using pseudorandom generators and carefully choosing seeds such that the desired items from the original transaction are present in the randomized transaction, we can send just the seed instead of the transaction, resulting in a dramatic drop in communication and storage cost. Finally, we define new information measures that take privacy breaches into account when quantifying the amount of privacy preserved by randomization. <s> BIB001 </s> Privacy Preserving Web Query Log Publishing: A Survey on Anonymization Techniques <s> Data Privacy Models <s> Publishing data about individuals without revealing sensitive information about them is an important problem. In recent years, a new definition of privacy called \kappa-anonymity has gained popularity. In a \kappa-anonymized dataset, each record is indistinguishable from at least k—1 other records with respect to certain "identifying" attributes. In this paper we show with two simple attacks that a \kappa-anonymized dataset has some subtle, but severe privacy problems. First, we show that an attacker can discover the values of sensitive attributes when there is little diversity in those sensitive attributes. Second, attackers often have background knowledge, and we show that \kappa-anonymity does not guarantee privacy against attackers using background knowledge. We give a detailed analysis of these two attacks and we propose a novel and powerful privacy definition called \ell-diversity. In addition to building a formal foundation for \ell-diversity, we show in an experimental evaluation that \ell-diversity is practical and can be implemented efficiently. <s> BIB002 </s> Privacy Preserving Web Query Log Publishing: A Survey on Anonymization Techniques <s> Data Privacy Models <s> In 1977 Dalenius articulated a desideratum for statistical databases: nothing about an individual should be learnable from the database that cannot be learned without access to the database. We give a general impossibility result showing that a formalization of Dalenius' goal along the lines of semantic security cannot be achieved. 
Contrary to intuition, a variant of the result threatens the privacy even of someone not in the database. This state of affairs suggests a new measure, differential privacy, which, intuitively, captures the increased risk to one's privacy incurred by participating in a database. The techniques developed in a sequence of papers [8, 13, 3], culminating in those described in [12], can achieve any desired level of privacy under this measure. In many cases, extremely accurate information about the database can be provided while simultaneously ensuring very high levels of privacy <s> BIB003 </s> Privacy Preserving Web Query Log Publishing: A Survey on Anonymization Techniques <s> Data Privacy Models <s> The k-anonymity privacy requirement for publishing microdata requires that each equivalence class (i.e., a set of records that are indistinguishable from each other with respect to certain "identifying" attributes) contains at least k records. Recently, several authors have recognized that k-anonymity cannot prevent attribute disclosure. The notion of l-diversity has been proposed to address this; l-diversity requires that each equivalence class has at least l well-represented values for each sensitive attribute. In this paper we show that l-diversity has a number of limitations. In particular, it is neither necessary nor sufficient to prevent attribute disclosure. We propose a novel privacy notion called t-closeness, which requires that the distribution of a sensitive attribute in any equivalence class is close to the distribution of the attribute in the overall table (i.e., the distance between the two distributions should be no more than a threshold t). We choose to use the earth mover distance measure for our t-closeness requirement. We discuss the rationale for t-closeness and illustrate its advantages through examples and experiments. <s> BIB004
|
We explain some well-known approaches to prevent privacy attacks. The notion of k-anonymity is a solution to record linkage attacks, where the QI of each record should be the same as that of at least k-1 other records. This ensures that the probability of linking an individual to a specific record based on the QI is at most 1/k. As a solution to attribute linkage attacks, the ℓ-diversity notion BIB002 requires each group of records with the same QI to have at least ℓ "well-represented" SA values. This ensures that there are at least ℓ distinct values for the SA in each such group, and thus automatically satisfies k-anonymity with k = ℓ. However, ℓ-diversity cannot prevent attribute linkage attacks if the overall distribution of a SA is skewed. As a solution, the notion of t-closeness BIB004 requires the distribution of a sensitive attribute in any group on the QID to be close to the distribution of the attribute in the overall table. The (ρ1, ρ2)-privacy BIB001 guarantees that if the attacker's prior knowledge of a SA value before the data release is at most ρ1, then after seeing the released data his posterior knowledge is bounded by ρ2, where 0 < ρ1 < ρ2 < 1. The notion of ε-differential privacy BIB003 guarantees that the addition or removal of a single record in the database will not significantly change the results of statistical analyses; this assures record owners that submitting their personal information to the database carries very little additional risk.
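To make these notions concrete, the short sketch below checks the k-anonymity and ℓ-diversity actually achieved by a released table. The column names are assumptions for illustration, and distinct ℓ-diversity is used as the simplest instantiation of "well-represented" SA values.

```python
import pandas as pd

def k_anonymity(df, qi_cols):
    """Size of the smallest QI group, i.e. the k actually achieved by the release."""
    return int(df.groupby(qi_cols).size().min())

def l_diversity(df, qi_cols, sa_col):
    """Minimum number of distinct SA values per QI group (distinct l-diversity)."""
    return int(df.groupby(qi_cols)[sa_col].nunique().min())

release = pd.DataFrame({
    "age":     ["30-39", "30-39", "30-39", "40-49", "40-49", "40-49"],
    "zipcode": ["537**", "537**", "537**", "537**", "537**", "537**"],
    "disease": ["flu", "cancer", "flu", "HIV", "flu", "cancer"]})

print(k_anonymity(release, ["age", "zipcode"]))            # 3 -> the table is 3-anonymous
print(l_diversity(release, ["age", "zipcode"], "disease")) # 2 -> only 2-diverse
```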
|
Privacy Preserving Web Query Log Publishing: A Survey on Anonymization Techniques <s> Data Anonymization Techniques <s> Abstract This article discusses theory and method of complementary cell suppression and related topics in statistical disclosure control. Emphasis is placed on the development of methods that are theoretically broad but also practical to implement. The approach draws from areas of discrete mathematics and linear optimization theory. <s> BIB001 </s> Privacy Preserving Web Query Log Publishing: A Survey on Anonymization Techniques <s> Data Anonymization Techniques <s> Today's globally networked society places great demands on the dissemination and sharing of information. While in the past released information was mostly in tabular and statistical form, many situations call for the release of specific data (microdata). In order to protect the anonymity of the entities (called respondents) to which information refers, data holders often remove or encrypt explicit identifiers such as names, addresses, and phone numbers. Deidentifying data, however, provides no guarantee of anonymity. Released information often contains other data, such as race, birth date, sex, and ZIP code, that can be linked to publicly available information to reidentify respondents and inferring information that was not intended for disclosure. In this paper we address the problem of releasing microdata while safeguarding the anonymity of respondents to which the data refer. The approach is based on the definition of k-anonymity. A table provides k-anonymity if attempts to link explicitly identifying information to its content map the information to at least k entities. We illustrate how k-anonymity can be provided without compromising the integrity (or truthfulness) of the information released by using generalization and suppression techniques. We introduce the concept of minimal generalization that captures the property of the release process not distorting the data more than needed to achieve k-anonymity, and present an algorithm for the computation of such a generalization. We also discuss possible preference policies to choose among different minimal generalizations. <s> BIB002 </s> Privacy Preserving Web Query Log Publishing: A Survey on Anonymization Techniques <s> Data Anonymization Techniques <s> Often a data holder, such as a hospital or bank, needs to share person-specific records in such a way that the identities of the individuals who are the subjects of the data cannot be determined. One way to achieve this is to have the released records adhere to k- anonymity, which means each released record has at least (k-1) other records in the release whose values are indistinct over those fields that appear in external data. So, k- anonymity provides privacy protection by guaranteeing that each released record will relate to at least k individuals even if the records are directly linked to external information. This paper provides a formal presentation of combining generalization and suppression to achieve k-anonymity. Generalization involves replacing (or recoding) a value with a less specific but semantically consistent value. Suppression involves not releasing a value at all. The Preferred Minimal Generalization Algorithm (MinGen), which is a theoretical algorithm presented herein, combines these techniques to provide k-anonymity protection with minimal distortion. The real-world algorithms Datafly and µ-Argus are compared to MinGen. 
Both Datafly and µ-Argus use heuristics to make approximations, and so, they do not always yield optimal results. It is shown that Datafly can over distort data and µ-Argus can additionally fail to provide adequate protection. <s> BIB003 </s> Privacy Preserving Web Query Log Publishing: A Survey on Anonymization Techniques <s> Data Anonymization Techniques <s> Data on individuals and entities are being collected widely. These data can contain information that explicitly identifies the individual (e.g., social security number). Data can also contain other kinds of personal information (e.g., date of birth, zip code, gender) that are potentially identifying when linked with other available data sets. Data are often shared for business or legal reasons. This paper addresses the important issue of preserving the anonymity of the individuals or entities during the data dissemination process. We explore preserving the anonymity by the use of generalizations and suppressions on the potentially identifying portions of the data. We extend earlier works in this area along various dimensions. First, satisfying privacy constraints is considered in conjunction with the usage for the data being disseminated. This allows us to optimize the process of preserving privacy for the specified usage. In particular, we investigate the privacy transformation in the context of data mining applications like building classification and regression models. Second, our work improves on previous approaches by allowing more flexible generalizations for the data. Lastly, this is combined with a more thorough exploration of the solution space using the genetic algorithm framework. These extensions allow us to transform the data so that they are more useful for their intended purpose while satisfying the privacy constraints. <s> BIB004 </s> Privacy Preserving Web Query Log Publishing: A Survey on Anonymization Techniques <s> Data Anonymization Techniques <s> There has been increasing interest in the problem of building accurate data mining models over aggregate data, while protecting privacy at the level of individual records. One approach for this problem is to randomize the values in individual records, and only disclose the randomized values. The model is then built over the randomized data, after first compensating for the randomization (at the aggregate level). This approach is potentially vulnerable to privacy breaches: based on the distribution of the data, one may be able to learn with high confidence that some of the randomized records satisfy a specified property, even though privacy is preserved on average.In this paper, we present a new formulation of privacy breaches, together with a methodology, "amplification", for limiting them. Unlike earlier approaches, amplification makes it is possible to guarantee limits on privacy breaches without any knowledge of the distribution of the original data. We instantiate this methodology for the problem of mining association rules, and modify the algorithm from [9] to limit privacy breaches without knowledge of the data distribution. Next, we address the problem that the amount of randomization required to avoid privacy breaches (when mining association rules) results in very long transactions. By using pseudorandom generators and carefully choosing seeds such that the desired items from the original transaction are present in the randomized transaction, we can send just the seed instead of the transaction, resulting in a dramatic drop in communication and storage cost. 
Finally, we define new information measures that take privacy breaches into account when quantifying the amount of privacy preserved by randomization. <s> BIB005 </s> Privacy Preserving Web Query Log Publishing: A Survey on Anonymization Techniques <s> Data Anonymization Techniques <s> The technique of k-anonymization has been proposed in the literature as an alternative way to release public information, while ensuring both data privacy and data integrity. We prove that two general versions of optimal k-anonymization of relations are NP-hard, including the suppression version which amounts to choosing a minimum number of entries to delete from the relation. We also present a polynomial time algorithm for optimal k-anonymity that achieves an approximation ratio independent of the size of the database, when k is constant. In particular, it is a O(k log k)-approximation where the constant in the big-O is no more than 4, However, the runtime of the algorithm is exponential in k. A slightly more clever algorithm removes this condition, but is a O(k log m)-approximation, where m is the degree of the relation. We believe this algorithm could potentially be quite fast in practice. <s> BIB006 </s> Privacy Preserving Web Query Log Publishing: A Survey on Anonymization Techniques <s> Data Anonymization Techniques <s> Privacy becomes a more and more serious concern in applications involving microdata. Recently, efficient anonymization has attracted much research work. Most of the previous methods use global recoding, which maps the domains of the quasi-identifier attributes to generalized or changed values. However, global recoding may not always achieve effective anonymization in terms of discernability and query answering accuracy using the anonymized data. Moreover, anonymized data is often for analysis. As well accepted in many analytical applications, different attributes in a data set may have different utility in the analysis. The utility of attributes has not been considered in the previous methods.In this paper, we study the problem of utility-based anonymization. First, we propose a simple framework to specify utility of attributes. The framework covers both numeric and categorical data. Second, we develop two simple yet efficient heuristic local recoding methods for utility-based anonymization. Our extensive performance study using both real data sets and synthetic data sets shows that our methods outperform the state-of-the-art multidimensional global recoding methods in both discernability and query answering accuracy. Furthermore, our utility-based method can boost the quality of analysis using the anonymized data. <s> BIB007 </s> Privacy Preserving Web Query Log Publishing: A Survey on Anonymization Techniques <s> Data Anonymization Techniques <s> This paper presents a novel technique, anatomy, for publishing sensitive data. Anatomy releases all the quasi-identifier and sensitive values directly in two separate tables. Combined with a grouping mechanism, this approach protects privacy, and captures a large amount of correlation in the microdata. We develop a linear-time algorithm for computing anatomized tables that obey the l-diversity privacy requirement, and minimize the error of reconstructing the microdata. Extensive experiments confirm that our technique allows significantly more effective data analysis than the conventional publication method based on generalization. 
Specifically, anatomy permits aggregate reasoning with average error below 10%, which is lower than the error obtained from a generalized table by orders of magnitude. <s> BIB008 </s> Privacy Preserving Web Query Log Publishing: A Survey on Anonymization Techniques <s> Data Anonymization Techniques <s> Privacy is a serious concern when microdata need to be released for ad hoc analyses. The privacy goals of existing privacy protection approaches (e.g., k-anonymity and l-diversity) are suitable only for categorical sensitive attributes. Since applying them directly to numerical sensitive attributes (e.g., salary) may result in undesirable information leakage, we propose privacy goals to better capture the need of privacy protection for numerical sensitive attributes. Complementing the desire for privacy is the need to support ad hoc aggregate analyses over microdata. Existing generalization-based anonymization approaches cannot answer aggregate queries with reasonable accuracy. We present a general framework of permutation-based anonymization to support accurate answering of aggregate queries and show that, for the same grouping, permutation-based techniques can always answer aggregate queries more accurately than generalization-based approaches. We further propose several criteria to optimize permutations for accurate answering of aggregate queries, and develop efficient algorithms for each criterion. <s> BIB009
|
We explain three major techniques to guarantee privacy notions. Generalization and Suppression: In suppression we delete some values, and in generalization we replace some values with less specific (more general) values. For generalization, values of categorical attributes are replaced according to a given taxonomy, such as the one shown in Figure 2 , while values of numerical attributes are usually replaced with an interval containing the original values. In full-domain generalization BIB002 , BIB003 , all values in an attribute are generalized to the same level of the taxonomy tree. For example, with the taxonomy in Figure 2 , if Beef and Chicken are generalized to Meat, then Apple, Orange and Banana should be generalized to Fruit. In subtree generalization , BIB004 , either all child nodes or none are generalized. For example, in Figure 2 , this scheme requires that if Beef is generalized to Meat, then the other child node, Chicken, would also be generalized to Meat, but Apple and Orange, which are child nodes of Fruit, can remain ungeneralized. In cell generalization BIB007 , also known as "local recoding", only some instances of a value are generalized, in contrast to "global recoding" in which, if a value is generalized, all its instances are generalized. The major suppression techniques are record suppression , BIB004 , BIB002 , and cell suppression (or local suppression) BIB001 BIB006 , which suppress an entire record or only some instances of a given value in the database, respectively. Anatomization and Permutation: In anatomization BIB008 , neither the QI nor the SAs are modified; instead the QI data and the SA data are published in two separate tables: a QI table containing the quasi-identifier attributes and a SA table containing the sensitive attributes, where both tables share a common GroupID attribute. In the permutation method BIB009 , records are partitioned into groups and then their SA values are shuffled within each group. Perturbation and Randomization: In perturbation the original data values are replaced with synthetic data values in such a way that the statistical information is preserved. The additive noise technique [17] alters a sensitive numerical value, such as salary, by adding a random value drawn from some distribution. The data swapping method, in which SA values of records are exchanged, can protect both numerical and categorical attributes . The authors in BIB005 also proposed a randomization approach based on data swapping to limit the attacker's background knowledge when inferring sensitive attributes.
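The sketch below illustrates two of these operations on made-up records: taxonomy-based generalization over the Beef/Chicken/Apple taxonomy of Figure 2, and additive-noise perturbation of a numerical attribute. It is a minimal illustration of the ideas, not any particular algorithm from the cited works.

```python
import random

# child -> parent edges of the taxonomy tree (root: "Food"), mirroring Figure 2
TAXONOMY = {"Beef": "Meat", "Chicken": "Meat",
            "Apple": "Fruit", "Orange": "Fruit", "Banana": "Fruit",
            "Meat": "Food", "Fruit": "Food"}

def generalize(value, levels):
    """Climb `levels` steps up the taxonomy. Applying this to every instance of a
    value corresponds to global recoding; applying it to only some instances
    would be cell generalization (local recoding)."""
    for _ in range(levels):
        value = TAXONOMY.get(value, value)   # stops at the root
    return value

def add_noise(value, scale):
    """Additive-noise perturbation: add a zero-mean random value so that
    aggregate statistics are approximately preserved."""
    return value + random.gauss(0.0, scale)

records = [("Beef", 52000), ("Apple", 48000), ("Chicken", 61000)]
anonymized = [(generalize(item, 1), round(add_noise(salary, 1000)))
              for item, salary in records]
print(anonymized)   # e.g. [('Meat', 52231), ('Fruit', 47892), ('Meat', 60710)]
```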
|
Privacy Preserving Web Query Log Publishing: A Survey on Anonymization Techniques <s> Contributions and Paper Organization <s> In this paper we study the privacy preservation properties of aspecific technique for query log anonymization: token-based hashing. In this approach, each query is tokenized, and then a secure hash function is applied to each token. We show that statistical techniques may be applied to partially compromise the anonymization. We then analyze the specific risks that arise from these partial compromises, focused on revelation of identity from unambiguous names, addresses, and so forth, and the revelation of facts associated with an identity that are deemed to be highly sensitive. Our goal in this work is two fold: to show that token-based hashing is unsuitable for anonymization, and to present a concrete analysis of specific techniques that may be effective in breaching privacy, against which other anonymization schemes should be measured. <s> BIB001 </s> Privacy Preserving Web Query Log Publishing: A Survey on Anonymization Techniques <s> Contributions and Paper Organization <s> In recent years, privacy preserving data mining has become very important because of the proliferation of large amounts of data on the internet. Many data sets are inherently high dimensional, which are challenging to different privacy preservation algorithms. However, some domains of such data sets also have some special properties which make the use of sketch based techniques particularly useful. In this paper, we present a new method for privacy preserving data mining of text and binary data with the use of a sketch based approach. The special properties of such data sets which are exploited are that of sparsity; according to this property, only a small percentage of the attributes have non-zero values. We formalize an anonymity model for the sketch based approach, and utilize it in order to construct sketch based privacy preserving representations of the original data. This representation allows accurate computation of a number of important data mining primitives such as the dot product. Therefore, it can be used for a variety of data mining algorithms such as clustering and classification. We illustrate the effectiveness of our approach on a number of real and synthetic data sets. We show that the accuracy of data mining algorithms is preserved by the transformation even in the presence of increasing data dimensionality. <s> BIB002 </s> Privacy Preserving Web Query Log Publishing: A Survey on Anonymization Techniques <s> Contributions and Paper Organization <s> Existing research on privacy-preserving data publishing focuses on relational data: in this context, the objective is to enforce privacy-preserving paradigms, such as k- anonymity and lscr-diversity, while minimizing the information loss incurred in the anonymizing process (i.e. maximize data utility). However, existing techniques adopt an indexing- or clustering- based approach, and work well for fixed-schema data, with low dimensionality. Nevertheless, certain applications require privacy-preserving publishing of transaction data (or basket data), which involves hundreds or even thousands of dimensions, rendering existing methods unusable. We propose a novel anonymization method for sparse high-dimensional data. We employ a particular representation that captures the correlation in the underlying data, and facilitates the formation of anonymized groups with low information loss. 
We propose an efficient anonymization algorithm based on this representation. We show experimentally, using real-life datasets, that our method clearly outperforms existing state-of-the-art in terms of both data utility and computational overhead. <s> BIB003 </s> Privacy Preserving Web Query Log Publishing: A Survey on Anonymization Techniques <s> Contributions and Paper Organization <s> In this paper we study the problem of protecting privacy in the publication of set-valued data. Consider a collection of transactional data that contains detailed information about items bought together by individuals. Even after removing all personal characteristics of the buyer, which can serve as links to his identity, the publication of such data is still subject to privacy attacks from adversaries who have partial knowledge about the set. Unlike most previous works, we do not distinguish data as sensitive and non-sensitive, but we consider them both as potential quasi-identifiers and potential sensitive data, depending on the point of view of the adversary. We define a new version of the k-anonymity guarantee, the km-anonymity, to limit the effects of the data dimensionality and we propose efficient algorithms to transform the database. Our anonymization model relies on generalization instead of suppression, which is the most common practice in related works on such data. We develop an algorithm which finds the optimal solution, however, at a high cost which makes it inapplicable for large, realistic problems. Then, we propose two greedy heuristics, which scale much better and in most of the cases find a solution close to the optimal. The proposed algorithms are experimentally evaluated using real datasets. <s> BIB004 </s> Privacy Preserving Web Query Log Publishing: A Survey on Anonymization Techniques <s> Contributions and Paper Organization <s> This paper considers the problem of publishing "transaction data" for research purposes. Each transaction is an arbitrary set of items chosen from a large universe. Detailed transaction data provides an electronic image of one's life. This has two implications. One, transaction data are excellent candidates for data mining research. Two, use of transaction data would raise serious concerns over individual privacy. Therefore, before transaction data is released for data mining, it must be made anonymous so that data subjects cannot be re-identified. The challenge is that transaction data has no structure and can be extremely high dimensional. Traditional anonymization methods lose too much information on such data. To date, there has been no satisfactory privacy notion and solution proposed for anonymizing transaction data. This paper proposes one way to address this issue. <s> BIB005 </s> Privacy Preserving Web Query Log Publishing: A Survey on Anonymization Techniques <s> Contributions and Paper Organization <s> Set-valued data, in which a set of values are associated with an individual, is common in databases ranging from market basket data, to medical databases of patients' symptoms and behaviors, to query engine search logs. Anonymizing this data is important if we are to reconcile the conflicting demands arising from the desire to release the data for study and the desire to protect the privacy of individuals represented in the data. Unfortunately, the bulk of existing anonymization techniques, which were developed for scenarios in which each individual is associated with only one sensitive value, are not well-suited for set-valued data. 
In this paper we propose a top-down, partition-based approach to anonymizing set-valued data that scales linearly with the input size and scores well on an information-loss data quality metric. We further note that our technique can be applied to anonymize the infamous AOL query logs, and discuss the merits and challenges in anonymizing query logs using our approach. <s> BIB006 </s> Privacy Preserving Web Query Log Publishing: A Survey on Anonymization Techniques <s> Contributions and Paper Organization <s> The question of how to publish an anonymized search log was brought to the forefront by a well-intentioned, but privacy-unaware AOL search log release. Since then a series of ad-hoc techniques have been proposed in the literature, though none are known to be provably private. In this paper, we take a major step towards a solution: we show how queries, clicks and their associated perturbed counts can be published in a manner that rigorously preserves privacy. Our algorithm is decidedly simple to state, but non-trivial to analyze. On the opposite side of privacy is the question of whether the data we can safely publish is of any use. Our findings offer a glimmer of hope: we demonstrate that a non-negligible fraction of queries and clicks can indeed be safely published via a collection of experiments on a real search log. In addition, we select an application, keyword generation, and show that the keyword suggestions generated from the perturbed data resemble those generated from the original data. <s> BIB007 </s> Privacy Preserving Web Query Log Publishing: A Survey on Anonymization Techniques <s> Contributions and Paper Organization <s> Web query log data contain information useful to research; however, release of such data can re-identify the search engine users issuing the queries. These privacy concerns go far beyond removing explicitly identifying information such as name and address, since non-identifying personal data can be combined with publicly available information to pinpoint to an individual. In this work we model web query logs as unstructured transaction data and present a novel transaction anonymization technique based on clustering and generalization techniques to achieve the k-anonymity privacy. We conduct extensive experiments on the AOL query log data. Our results show that this method results in a higher data utility compared to the state-of-the-art transaction anonymization methods. <s> BIB008 </s> Privacy Preserving Web Query Log Publishing: A Survey on Anonymization Techniques <s> Contributions and Paper Organization <s> Privacy protection in publishing transaction data is an important problem. A key feature of transaction data is the extreme sparsity, which renders any single technique ineffective in anonymizing such data. Among recent works, some incur high information loss, some result in data hard to interpret, and some suffer from performance drawbacks. This paper proposes to integrate generalization and suppression to reduce information loss. However, the integration is non-trivial. We propose novel techniques to address the efficiency and scalability challenges. Extensive experiments on real world databases show that this approach outperforms the state-of-the-art methods, including global generalization, local generalization, and total suppression. In addition, transaction data anonymized by this approach can be analyzed by standard data mining tools, a property that local generalization fails to provide. 
<s> BIB009 </s> Privacy Preserving Web Query Log Publishing: A Survey on Anonymization Techniques <s> Contributions and Paper Organization <s> The publication of transaction data, such as market basket data, medical records, and query logs, serves the public benefit. Mining such data allows for the derivation of association rules that connect certain items to others with measurable confidence. Still, this type of data analysis poses a privacy threat; an adversary having partial information on a person's behavior may confidently associate that person to an item deemed to be sensitive. Ideally, an anonymization of such data should lead to an inference-proof version that prevents the association of individuals to sensitive items, while otherwise allowing for truthful associations to be derived. Original approaches to this problem were based on value perturbation, damaging data integrity. Recently, value generalization has been proposed as an alternative; still, approaches based on it have assumed either that all items are equally sensitive, or that some are sensitive and can be known to an adversary only by association, while others are non-sensitive and can be known directly. Yet in reality there is a distinction between sensitive and non-sensitive items, but an adversary may possess information on any of them. Most critically, no antecedent method aims at a clear inference-proof privacy guarantee. In this paper, we propose ρ-uncertainty, the first, to our knowledge, privacy concept that inherently safeguards against sensitive associations without constraining the nature of an adversary's knowledge and without falsifying data. The problem of achieving ρ-uncertainty with low information loss is challenging because it is natural. A trivial solution is to suppress all sensitive items. We develop more sophisticated schemes. In a broad experimental study, we show that the problem is solved non-trivially by a technique that combines generalization and suppression, which also achieves favorable results compared to a baseline perturbation-based scheme. <s> BIB010 </s> Privacy Preserving Web Query Log Publishing: A Survey on Anonymization Techniques <s> Contributions and Paper Organization <s> The publication of Web search logs is very useful for the scientific research community, but to preserve the users' privacy, logs have to be submitted to an anonymization process. Random query swapping is a common technique used to protect logs that provides k-anonymity to the users in exchange for loss of utility. With the assumption that by swapping queries semantically close this utility loss can be reduced, we introduce a novel protection method that semantically microaggregates the logs using the Open Directory Project. That is, we extend a common method used in statistical disclosure control to protect search logs from a semantic perspective. The method has been tested with a random subset of AOL search logs, and it has been observed that new logs improve the data usefulness. <s> BIB011
|
In this survey, we provide an overview of the recent studies in privacy-preserving Web query log publishing. We explain privacy notions, attacks, and the utility challenges in query log anonymization, and we categorize the recent privacy-preserving query log publishing techniques into transactional and non-transactional anonymity approaches. The rest of the paper is organized as follows. In Section 2, we study the problem of query log anonymization and its challenges. We categorize the existing anonymization methods in Section 3 and summarize and discuss these methods in Section 4. We conclude the paper in Section 5. The problem of Web query-log anonymization has been examined by BIB001 , , and from the Web community, with a focus on privacy attacks, and by , BIB002 , BIB003 , BIB004 , BIB005 , BIB006 , BIB007 , BIB008 , BIB009 , BIB010 , and BIB011 from the database community, with a focus on transaction database anonymization. In this survey we study both groups of works and categorize them into non-transactional and transactional anonymity models, respectively. However, the major part belongs to the transactional model, meaning that we treat query logs as transaction data (unstructured data without a fixed set of attributes), where each transaction represents a query and each item represents a query term. Such data is a rich source for many data mining applications, such as association rule mining and search recommendations.
|
Privacy Preserving Web Query Log Publishing: A Survey on Anonymization Techniques <s> Data Utility Challenge <s> Data on individuals and entities are being collected widely. These data can contain information that explicitly identifies the individual (e.g., social security number). Data can also contain other kinds of personal information (e.g., date of birth, zip code, gender) that are potentially identifying when linked with other available data sets. Data are often shared for business or legal reasons. This paper addresses the important issue of preserving the anonymity of the individuals or entities during the data dissemination process. We explore preserving the anonymity by the use of generalizations and suppressions on the potentially identifying portions of the data. We extend earlier works in this area along various dimensions. First, satisfying privacy constraints is considered in conjunction with the usage for the data being disseminated. This allows us to optimize the process of preserving privacy for the specified usage. In particular, we investigate the privacy transformation in the context of data mining applications like building classification and regression models. Second, our work improves on previous approaches by allowing more flexible generalizations for the data. Lastly, this is combined with a more thorough exploration of the solution space using the genetic algorithm framework. These extensions allow us to transform the data so that they are more useful for their intended purpose while satisfying the privacy constraints. <s> BIB001 </s> Privacy Preserving Web Query Log Publishing: A Survey on Anonymization Techniques <s> Data Utility Challenge <s> K-Anonymity has been proposed as a mechanism for protecting privacy in microdata publishing, and numerous recoding "models" have been considered for achieving 𝑘anonymity. This paper proposes a new multidimensional model, which provides an additional degree of flexibility not seen in previous (single-dimensional) approaches. Often this flexibility leads to higher-quality anonymizations, as measured both by general-purpose metrics and more specific notions of query answerability. Optimal multidimensional anonymization is NP-hard (like previous optimal 𝑘-anonymity problems). However, we introduce a simple greedy approximation algorithm, and experimental results show that this greedy algorithm frequently leads to more desirable anonymizations than exhaustive optimal algorithms for two single-dimensional models. <s> BIB002 </s> Privacy Preserving Web Query Log Publishing: A Survey on Anonymization Techniques <s> Data Utility Challenge <s> We investigate the subtle cues to user identity that may be exploited in attacks on the privacy of users in web search query logs. We study the application of simple classifiers to map a sequence of queries into the gender, age, and location of the user issuing the queries. We then show how these classifiers may be carefully combined at multiple granularities to map a sequence of queries into a set of candidate users that is 300-600 times smaller than random chance would allow. We show that this approach remains accurate even after removing personally identifiable information such as names/numbers or limiting the size of the query log. We also present a new attack in which a real-world acquaintance of a user attempts to identify that user in a large query log, using personal information. 
We show that combinations of small pieces of information about terms a user would probably search for can be highly effective in identifying the sessions of that user. We conclude that known schemes to release even heavily scrubbed query logs that contain session information have significant privacy risks. <s> BIB003 </s> Privacy Preserving Web Query Log Publishing: A Survey on Anonymization Techniques <s> Data Utility Challenge <s> Set-valued data, in which a set of values are associated with an individual, is common in databases ranging from market basket data, to medical databases of patients' symptoms and behaviors, to query engine search logs. Anonymizing this data is important if we are to reconcile the conflicting demands arising from the desire to release the data for study and the desire to protect the privacy of individuals represented in the data. Unfortunately, the bulk of existing anonymization techniques, which were developed for scenarios in which each individual is associated with only one sensitive value, are not well-suited for set-valued data. In this paper we propose a top-down, partition-based approach to anonymizing set-valued data that scales linearly with the input size and scores well on an information-loss data quality metric. We further note that our technique can be applied to anonymize the infamous AOL query logs, and discuss the merits and challenges in anonymizing query logs using our approach. <s> BIB004 </s> Privacy Preserving Web Query Log Publishing: A Survey on Anonymization Techniques <s> Data Utility Challenge <s> Web query log data contain information useful to research; however, release of such data can re-identify the search engine users issuing the queries. These privacy concerns go far beyond removing explicitly identifying information such as name and address, since non-identifying personal data can be combined with publicly available information to pinpoint to an individual. In this work we model web query logs as unstructured transaction data and present a novel transaction anonymization technique based on clustering and generalization techniques to achieve the k-anonymity privacy. We conduct extensive experiments on the AOL query log data. Our results show that this method results in a higher data utility compared to the state-of-the-art transaction anonymization methods. <s> BIB005 </s> Privacy Preserving Web Query Log Publishing: A Survey on Anonymization Techniques <s> Data Utility Challenge <s> Thank you very much for reading introduction to privacy preserving data publishing concepts and techniques. Maybe you have knowledge that, people have search numerous times for their favorite books like this introduction to privacy preserving data publishing concepts and techniques, but end up in harmful downloads. Rather than enjoying a good book with a cup of tea in the afternoon, instead they cope with some malicious bugs inside their desktop computer. <s> BIB006
|
Query log (or transaction data) anonymization aims at preserving privacy while maintaining data utility and reducing information loss. However, measuring the utility of anonymized query logs is not always straightforward. For suppression methods, the information loss can be a simple count of suppressed items. For generalization-based techniques, various metrics have been proposed to measure the quality of generalized data, including the classification metric, the generalized loss metric BIB001 , and the discernibility metric. Loss measures specific to transaction anonymization include the normalized certainty penalty BIB004 and group generalization distortion BIB005 . Itemset-based utility BIB006 is another measure, which captures frequent itemsets in transaction data. Apart from these utility measures, BIB006 mentioned two further aspects of the practical usefulness of anonymized data. The first is the truthfulness of results, i.e., analysis results (such as the support of a frequent itemset) derived from the anonymized data also hold on the original data. The second is value exclusiveness, i.e., the items in the modified data are exclusive of each other; this property matters for many data mining tasks based on counting queries. For example, the local recoding transformation BIB002 does not have this property. Consider Figure 2 : a local recoding can generalize some occurrences of "Apple" and some occurrences of "Orange" to "Fruit". It is then impossible to count the number of transactions containing "Apple" or "Orange" from the modified data. The major challenge for all query-log anonymization is reducing the significant information loss of the anonymized data. Every dimension (any search term) is both potentially sensitive and a potential QID attribute usable for record or attribute linkage, so traditional privacy models such as k-anonymity would require including all dimensions in a single QID. Consequently, a large amount of data has to be suppressed or generalized to the top-most values in order to satisfy k-anonymity, even for small values of k. Although removing sensitive terms based on the semantics of the search term and its context can help increase the utility of the anonymized data, the removed sensitive terms can still be predicted from the user's other queries BIB003 .
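To make the generalization loss concrete, the following minimal Python sketch computes a simple per-item penalty in the spirit of the normalized certainty penalty discussed above; the toy taxonomy, item names, and function names are illustrative assumptions, not taken from the cited papers.

TAXONOMY = {                      # parent -> children (toy taxonomy, assumed)
    "Food":  ["Fruit", "Dairy"],
    "Fruit": ["Apple", "Orange"],
    "Dairy": ["Milk", "Cheese"],
}

def leaves_under(item):
    """Number of leaf items represented by a (possibly generalized) item."""
    children = TAXONOMY.get(item)
    if not children:                               # item is a leaf
        return 1
    return sum(leaves_under(c) for c in children)

TOTAL_LEAVES = leaves_under("Food")                # 4 leaves in this toy taxonomy

def item_penalty(item):
    """0 for an ungeneralized leaf, 1 for the taxonomy root."""
    n = leaves_under(item)
    return 0.0 if n == 1 else n / TOTAL_LEAVES

def transaction_loss(generalized_transaction):
    """Aggregate penalty of one generalized transaction."""
    return sum(item_penalty(i) for i in generalized_transaction)

print(transaction_loss(["Fruit", "Milk"]))         # 2/4 + 0 -> 0.5
print(transaction_loss(["Food"]))                  # 4/4     -> 1.0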
|
Privacy Preserving Web Query Log Publishing: A Survey on Anonymization Techniques <s> Data Privacy Challenge <s> Privacy is an important issue when one wants to make use of data that involves individuals' sensitive information. Research on protecting the privacy of individuals and the confidentiality of data has received contributions from many fields, including computer science, statistics, economics, and social science. In this paper, we survey research work in privacy-preserving data publishing. This is an area that attempts to answer the problem of how an organization, such as a hospital, government agency, or insurance company, can release data to the public without violating the confidentiality of personal information. We focus on privacy criteria that provide formal safety guarantees, present algorithms that sanitize data to make it safe for release while preserving useful information, and discuss ways of analyzing the sanitized data. Many challenges still remain. This survey provides a summary of the current state-of-the-art, based on which we expect to see advances in years to come. <s> BIB001 </s> Privacy Preserving Web Query Log Publishing: A Survey on Anonymization Techniques <s> Data Privacy Challenge <s> Thank you very much for reading introduction to privacy preserving data publishing concepts and techniques. Maybe you have knowledge that, people have search numerous times for their favorite books like this introduction to privacy preserving data publishing concepts and techniques, but end up in harmful downloads. Rather than enjoying a good book with a cup of tea in the afternoon, instead they cope with some malicious bugs inside their desktop computer. <s> BIB002
|
Anonymized query log data raises privacy issues that are even more important than the utility issues above. First, assuming an adversary with very strong background knowledge can drastically reduce the utility of the anonymized data. Therefore, some researchers consider a bounded adversary with limited background knowledge (e.g., bounded by a maximum number of known items) BIB002 . Although this assumption can be realistic, it does not hold against an unbounded adversary, and privacy is then breached. Second, as discussed in BIB001 , an adversary can create multiple accounts and generate many queries through them to craft special query patterns (such as many infrequent queries or a distinguishable signature); when the search log is sanitized and released, the adversary can use those patterns to obtain private information about other users. Such active attacks are still not well studied.
|
Privacy Preserving Web Query Log Publishing: A Survey on Anonymization Techniques <s> Coherence Method <s> This paper considers the problem of publishing "transaction data" for research purposes. Each transaction is an arbitrary set of items chosen from a large universe. Detailed transaction data provides an electronic image of one's life. This has two implications. One, transaction data are excellent candidates for data mining research. Two, use of transaction data would raise serious concerns over individual privacy. Therefore, before transaction data is released for data mining, it must be made anonymous so that data subjects cannot be re-identified. The challenge is that transaction data has no structure and can be extremely high dimensional. Traditional anonymization methods lose too much information on such data. To date, there has been no satisfactory privacy notion and solution proposed for anonymizing transaction data. This paper proposes one way to address this issue. <s> BIB001 </s> Privacy Preserving Web Query Log Publishing: A Survey on Anonymization Techniques <s> Coherence Method <s> We consider the problem of publishing sensitive transaction data with privacy preservation. High dimensionality of transaction data poses unique challenges on data privacy and data utility. On one hand, re-identification attacks tend to use a subset of items that infrequently occur in transactions, called moles. On the other hand, data mining applications typically depend on subsets of items that frequently occur in transactions, called nuggets. Thus the problem is how to eliminate all moles while retaining nuggets as much as possible. A challenge is that moles and nuggets are multi-dimensional with exponential growth and are tangled together by shared items. We present a novel and scalable solution to this problem. The novelty lies in a compact border data structure that eliminates the need of generating all moles and nuggets. <s> BIB002
|
The coherence method BIB001 eliminates both record linkage and attribute linkage attacks. The (h, k, p)-coherence privacy criterion requires that any subset of at most p non-sensitive items is contained in at least k transactions, and that at most h percent of these transactions contain some sensitive item. This ensures that, for an attacker with power p, the probability of linking an individual to a transaction is bounded by 1/k and the probability of linking an individual to a sensitive item is bounded by h. Let β denote the adversary's background knowledge that a transaction contains some non-sensitive items. An attack is modeled in the form β → e, where e is a sensitive item. Let Sup(β) denote the support of β, i.e., the number of transactions containing β. Then P(β → e) = Sup(β ∪ {e}) / Sup(β) is the probability that a transaction contains e given that it contains β. The breach probability of β, denoted P_breach(β), is the maximum P(β → e) over all sensitive items e. Assume the adversary's background knowledge contains at most p non-sensitive items, i.e., |β| ≤ p. If Sup(β) < k, the adversary is able to link an individual to a transaction (record linkage attack), and if P_breach(β) > h, the adversary is able to link an individual to a sensitive item (attribute linkage attack). A mole is any piece of background knowledge (of size at most p) that enables such a linking attack: for a setting of (h, k, p), an itemset β with |β| ≤ p and Sup(β) > 0 is a mole if Sup(β) < k or P_breach(β) > h. The data D is (h, k, p)-coherent if D contains no moles, and coherence aims at eliminating all moles. The authors of BIB001 applied the total item suppression technique to enforce (h, k, p)-coherence. Total suppression of an item deletes the item from all transactions containing it. Although total suppression results in high information loss when the data is sparse, it has two useful properties: (1) it eliminates all moles containing the suppressed item, and (2) it keeps the support of any remaining itemset equal to its support in the original data. The latter implies that any result derived from the modified data also holds on the original one, which is not the case for partial suppression. Since finding an optimal solution to (h, k, p)-coherence, i.e., one with the minimum number of suppressed items, is NP-hard BIB001 , the authors proposed a heuristic solution. They defined minimal moles as moles that contain no proper subset that is itself a mole; removing all minimal moles is sufficient to remove all moles. An algorithm similar to the well-known Apriori algorithm for mining frequent itemsets was used to find all minimal moles. One problem with the coherence method is scalability, given the exponential growth in the number of itemsets. In a similar work, BIB002 suggested using sets of maximal and minimal itemsets, called borders. These borders are typically much smaller than the full sets of itemsets they represent, so their solution requires much less space and time.
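The following minimal Python sketch illustrates the (h, k, p)-coherence condition with a brute-force check over all itemsets of size at most p. It is an illustration only, not the Apriori-style or border-based algorithms of BIB001 and BIB002 ; the item names and the convention that h is given as a fraction are assumptions made for the example.

from itertools import combinations

def is_coherent(transactions, sensitive, h, k, p):
    """transactions: list of item sets; sensitive: set of sensitive items;
    h is given here as a fraction (e.g. 0.5 rather than 50%).
    Returns False as soon as a mole (violating itemset of size <= p) is found."""
    public = set().union(*transactions) - sensitive
    for size in range(1, p + 1):
        for beta in combinations(sorted(public), size):
            beta = set(beta)
            matching = [t for t in transactions if beta <= t]
            if not matching:                       # Sup(beta) = 0: not a mole
                continue
            if len(matching) < k:                  # record linkage risk
                return False
            for e in sensitive:
                conf = sum(1 for t in matching if e in t) / len(matching)
                if conf > h:                       # attribute linkage risk
                    return False
    return True

db = [{"orange", "chicken", "hiv"},
      {"orange", "chicken"},
      {"orange", "milk"}]
print(is_coherent(db, sensitive={"hiv"}, h=0.5, k=2, p=2))   # False: {'milk'} is a mole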
|
Privacy Preserving Web Query Log Publishing: A Survey on Anonymization Techniques <s> Band Matrix Approach <s> Existing research on privacy-preserving data publishing focuses on relational data: in this context, the objective is to enforce privacy-preserving paradigms, such as k- anonymity and lscr-diversity, while minimizing the information loss incurred in the anonymizing process (i.e. maximize data utility). However, existing techniques adopt an indexing- or clustering- based approach, and work well for fixed-schema data, with low dimensionality. Nevertheless, certain applications require privacy-preserving publishing of transaction data (or basket data), which involves hundreds or even thousands of dimensions, rendering existing methods unusable. We propose a novel anonymization method for sparse high-dimensional data. We employ a particular representation that captures the correlation in the underlying data, and facilitates the formation of anonymized groups with low information loss. We propose an efficient anonymization algorithm based on this representation. We show experimentally, using real-life datasets, that our method clearly outperforms existing state-of-the-art in terms of both data utility and computational overhead. <s> BIB001
|
An anonymization method is proposed in BIB001 to prevent attribute linkage attacks for high-dimensional data with sensitive items, using a band matrix technique. In a band matrix, non-zero entries are confined to a diagonal band, with zero entries on either side. Here, rows correspond to transactions and columns correspond to items, with a 0/1 value in each entry. In this method, items are divided into sensitive and non-sensitive items. A non-sensitive transaction contains no sensitive items, while a sensitive transaction contains at least one sensitive item. A transaction set T has privacy degree p if the probability of associating any transaction t ∈ T with a particular sensitive item does not exceed 1/p. To achieve this privacy requirement, BIB001 proposed two phases: (1) transforming the data into a band matrix with respect to the non-sensitive attributes (using the Reverse Cuthill-McKee algorithm), and (2) grouping each sensitive transaction with non-sensitive transactions or with sensitive transactions carrying different sensitive items. The intuition behind the band matrix formation is that it organizes the data so that consecutive transactions are very likely to share many non-sensitive items, which results in a smaller reconstruction error. For the second phase, a greedy algorithm based on the "one-occurrence-per-group" heuristic was proposed in BIB001 , which allows only one occurrence of each sensitive item in a group.
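As a rough illustration of the band-matrix formation phase, the sketch below reorders transactions with SciPy's Reverse Cuthill-McKee routine applied to a transaction-similarity graph. Building the graph from shared-item counts is an assumption made for this example and not necessarily the exact construction used in BIB001 .

import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import reverse_cuthill_mckee

# rows = transactions, columns = non-sensitive items (0/1 entries)
A = csr_matrix(np.array([[1, 1, 0, 0],
                         [0, 0, 1, 1],
                         [1, 1, 1, 0],
                         [0, 0, 1, 1]]))

# Transaction-similarity graph (shared-item counts); RCM then yields a row
# ordering with a narrow band, so that neighbouring rows overlap heavily.
similarity = A @ A.T
order = reverse_cuthill_mckee(similarity, symmetric_mode=True)
print(order)                  # a permutation of the row indices
print(A.toarray()[order])     # transactions reordered so neighbours are similar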
|
Privacy Preserving Web Query Log Publishing: A Survey on Anonymization Techniques <s> k m -Anonymity <s> Privacy becomes a more and more serious concern in applications involving microdata. Recently, efficient anonymization has attracted much research work. Most of the previous methods use global recoding, which maps the domains of the quasi-identifier attributes to generalized or changed values. However, global recoding may not always achieve effective anonymization in terms of discernability and query answering accuracy using the anonymized data. Moreover, anonymized data is often for analysis. As well accepted in many analytical applications, different attributes in a data set may have different utility in the analysis. The utility of attributes has not been considered in the previous methods.In this paper, we study the problem of utility-based anonymization. First, we propose a simple framework to specify utility of attributes. The framework covers both numeric and categorical data. Second, we develop two simple yet efficient heuristic local recoding methods for utility-based anonymization. Our extensive performance study using both real data sets and synthetic data sets shows that our methods outperform the state-of-the-art multidimensional global recoding methods in both discernability and query answering accuracy. Furthermore, our utility-based method can boost the quality of analysis using the anonymized data. <s> BIB001 </s> Privacy Preserving Web Query Log Publishing: A Survey on Anonymization Techniques <s> k m -Anonymity <s> In this paper we study the problem of protecting privacy in the publication of set-valued data. Consider a collection of transactional data that contains detailed information about items bought together by individuals. Even after removing all personal characteristics of the buyer, which can serve as links to his identity, the publication of such data is still subject to privacy attacks from adversaries who have partial knowledge about the set. Unlike most previous works, we do not distinguish data as sensitive and non-sensitive, but we consider them both as potential quasi-identifiers and potential sensitive data, depending on the point of view of the adversary. We define a new version of the k-anonymity guarantee, the km-anonymity, to limit the effects of the data dimensionality and we propose efficient algorithms to transform the database. Our anonymization model relies on generalization instead of suppression, which is the most common practice in related works on such data. We develop an algorithm which finds the optimal solution, however, at a high cost which makes it inapplicable for large, realistic problems. Then, we propose two greedy heuristics, which scale much better and in most of the cases find a solution close to the optimal. The proposed algorithms are experimentally evaluated using real datasets. <s> BIB002
|
To address record linkage attacks on transaction data, BIB002 proposed the k^m-anonymity notion, which assumes that any subset of items can be used as background knowledge. Unlike the coherence and band matrix approaches, this method does not distinguish data as sensitive and non-sensitive; instead, every item is treated both as a potential quasi-identifier and as potentially sensitive. Like the coherence method, it assumes that an adversary knows at most m items as background knowledge. A transaction database is k^m-anonymous if, for any set of up to m items, at least k transactions in the published database contain those items. This privacy notion can be viewed as a special case of (h, k, p)-coherence with h = 100% and p = m, meaning that a subset of items that violates k^m-anonymity is a mole under the coherence model. The anonymization method in BIB002 applies generalization in the form of a global recoding scheme: when a child node is generalized to its parent, all of its sibling nodes are also generalized to the parent node, and the generalization is applied to all transactions in the database. Each generalization thus corresponds to a possible horizontal cut of the taxonomy tree. The information loss of a cut is measured using the normalized certainty penalty loss metric BIB001 , which captures the degree of generalization of an item i by the percentage of leaf nodes under i in the item taxonomy. If a cut results in a k^m-anonymous database, then all of its more general cuts also result in a k^m-anonymous database; this is called the monotonicity property of cuts BIB002 . The k^m-anonymization problem is to find a k^m-anonymous transformation with minimum information loss. By the monotonicity property, and to avoid unnecessary information loss, as soon as a cut satisfying the k^m-anonymity constraint is found there is no need to consider more general cuts. Generating the set of all possible cuts and checking the anonymity violation for every subset of up to m items is not practical for large, realistic problems. The authors therefore proposed a greedy heuristic called Apriori anonymization (AA), based on the apriori principle: if an itemset J of size i violates the anonymity requirement, then every superset of J also violates it. AA explores the space of itemsets in an apriori, bottom-up fashion: before checking whether ℓ-itemsets (ℓ = 2, ..., m) violate the anonymity requirement, it first eliminates any anonymity violations caused by (ℓ−1)-itemsets. This drastically reduces the number of itemsets that must be checked at higher levels, since detailed items may already have been generalized.
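The sketch below shows a brute-force test of the k^m-anonymity condition by counting the support of every itemset of size at most m. It is illustrative only and does not implement the Apriori anonymization heuristic of BIB002 ; the item names are assumptions.

from itertools import combinations
from collections import Counter

def km_violations(transactions, k, m):
    """Return the itemsets of size <= m supported by fewer than k transactions
    (these are the combinations that would force further generalization)."""
    support = Counter()
    for t in transactions:
        items = sorted(set(t))
        for size in range(1, m + 1):
            for sub in combinations(items, size):
                support[sub] += 1
    return [set(s) for s, c in support.items() if c < k]

db = [{"apple", "milk"}, {"apple", "milk"}, {"apple", "orange"}]
print(km_violations(db, k=2, m=2))
# {'orange'} and {'apple', 'orange'} have support 1 < k, so k^m-anonymity fails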
|
Privacy Preserving Web Query Log Publishing: A Survey on Anonymization Techniques <s> Transactional k-Anonymity <s> Today's globally networked society places great demands on the dissemination and sharing of information. While in the past released information was mostly in tabular and statistical form, many situations call for the release of specific data (microdata). In order to protect the anonymity of the entities (called respondents) to which information refers, data holders often remove or encrypt explicit identifiers such as names, addresses, and phone numbers. Deidentifying data, however, provides no guarantee of anonymity. Released information often contains other data, such as race, birth date, sex, and ZIP code, that can be linked to publicly available information to reidentify respondents and inferring information that was not intended for disclosure. In this paper we address the problem of releasing microdata while safeguarding the anonymity of respondents to which the data refer. The approach is based on the definition of k-anonymity. A table provides k-anonymity if attempts to link explicitly identifying information to its content map the information to at least k entities. We illustrate how k-anonymity can be provided without compromising the integrity (or truthfulness) of the information released by using generalization and suppression techniques. We introduce the concept of minimal generalization that captures the property of the release process not distorting the data more than needed to achieve k-anonymity, and present an algorithm for the computation of such a generalization. We also discuss possible preference policies to choose among different minimal generalizations. <s> BIB001 </s> Privacy Preserving Web Query Log Publishing: A Survey on Anonymization Techniques <s> Transactional k-Anonymity <s> Often a data holder, such as a hospital or bank, needs to share person-specific records in such a way that the identities of the individuals who are the subjects of the data cannot be determined. One way to achieve this is to have the released records adhere to k- anonymity, which means each released record has at least (k-1) other records in the release whose values are indistinct over those fields that appear in external data. So, k- anonymity provides privacy protection by guaranteeing that each released record will relate to at least k individuals even if the records are directly linked to external information. This paper provides a formal presentation of combining generalization and suppression to achieve k-anonymity. Generalization involves replacing (or recoding) a value with a less specific but semantically consistent value. Suppression involves not releasing a value at all. The Preferred Minimal Generalization Algorithm (MinGen), which is a theoretical algorithm presented herein, combines these techniques to provide k-anonymity protection with minimal distortion. The real-world algorithms Datafly and µ-Argus are compared to MinGen. Both Datafly and µ-Argus use heuristics to make approximations, and so, they do not always yield optimal results. It is shown that Datafly can over distort data and µ-Argus can additionally fail to provide adequate protection. <s> BIB002 </s> Privacy Preserving Web Query Log Publishing: A Survey on Anonymization Techniques <s> Transactional k-Anonymity <s> K-Anonymity has been proposed as a mechanism for protecting privacy in microdata publishing, and numerous recoding "models" have been considered for achieving 𝑘anonymity. 
This paper proposes a new multidimensional model, which provides an additional degree of flexibility not seen in previous (single-dimensional) approaches. Often this flexibility leads to higher-quality anonymizations, as measured both by general-purpose metrics and more specific notions of query answerability. Optimal multidimensional anonymization is NP-hard (like previous optimal 𝑘-anonymity problems). However, we introduce a simple greedy approximation algorithm, and experimental results show that this greedy algorithm frequently leads to more desirable anonymizations than exhaustive optimal algorithms for two single-dimensional models. <s> BIB003 </s> Privacy Preserving Web Query Log Publishing: A Survey on Anonymization Techniques <s> Transactional k-Anonymity <s> In this paper we study the problem of protecting privacy in the publication of set-valued data. Consider a collection of transactional data that contains detailed information about items bought together by individuals. Even after removing all personal characteristics of the buyer, which can serve as links to his identity, the publication of such data is still subject to privacy attacks from adversaries who have partial knowledge about the set. Unlike most previous works, we do not distinguish data as sensitive and non-sensitive, but we consider them both as potential quasi-identifiers and potential sensitive data, depending on the point of view of the adversary. We define a new version of the k-anonymity guarantee, the km-anonymity, to limit the effects of the data dimensionality and we propose efficient algorithms to transform the database. Our anonymization model relies on generalization instead of suppression, which is the most common practice in related works on such data. We develop an algorithm which finds the optimal solution, however, at a high cost which makes it inapplicable for large, realistic problems. Then, we propose two greedy heuristics, which scale much better and in most of the cases find a solution close to the optimal. The proposed algorithms are experimentally evaluated using real datasets. <s> BIB004 </s> Privacy Preserving Web Query Log Publishing: A Survey on Anonymization Techniques <s> Transactional k-Anonymity <s> Set-valued data, in which a set of values are associated with an individual, is common in databases ranging from market basket data, to medical databases of patients' symptoms and behaviors, to query engine search logs. Anonymizing this data is important if we are to reconcile the conflicting demands arising from the desire to release the data for study and the desire to protect the privacy of individuals represented in the data. Unfortunately, the bulk of existing anonymization techniques, which were developed for scenarios in which each individual is associated with only one sensitive value, are not well-suited for set-valued data. In this paper we propose a top-down, partition-based approach to anonymizing set-valued data that scales linearly with the input size and scores well on an information-loss data quality metric. We further note that our technique can be applied to anonymize the infamous AOL query logs, and discuss the merits and challenges in anonymizing query logs using our approach. <s> BIB005
|
The assumption of bounded adversarial background knowledge in the coherence and k^m-anonymity methods has two limitations. First, in many cases it is not possible to determine this bound in advance. Second, these methods can ensure k-anonymity (with p or m set to the maximum transaction length in the database) only if the adversary's background knowledge is limited to the presence of items. If the background knowledge concerns the "absence" of items, the adversary may exclude transactions and focus on fewer than k transactions. For example, consider an adversary who knows that Bob has bought "Orange" and "Chicken" but has not bought "Milk". Suppose three transactions contain "Orange" and "Chicken", two of which also contain "Milk". The adversary can exclude the two transactions containing "Milk" and link the remaining transaction to Bob. Here the protection intended by k^m-anonymity with k=2 and m=3 is defeated, even though m equals the maximum transaction length. The k-anonymity approach in BIB005 , which we refer to as the Partition method, avoids this problem since all transactions in the same equivalence class are identical. It extends the original k-anonymity for relational data BIB001 BIB002 to transactional k-anonymity for "set-valued data", in which a set of values is associated with an individual. A transaction database D is k-anonymous if every transaction in D occurs at least k times. The authors of BIB005 showed that every database satisfying k-anonymity also satisfies k^m-anonymity for all values of m, whereas the reverse does not always hold. The Partition method extends the top-down Mondrian algorithm BIB003 for relational data. If several items are generalized to the same item, only one occurrence of the generalized item is kept in the generalized transaction. The algorithm starts with a single partition containing all transactions, with all items generalized to the root of the taxonomy. It then recursively splits a partition by specializing a taxonomy node for all transactions in the partition, and distributes all transactions with the same specialized item to the same sub-partition. At the end of the distribution, small sub-partitions with fewer than k transactions are merged into a special leftover sub-partition to be redistributed. The partitioning stops when further splitting would violate the k-anonymity condition. Unlike Apriori anonymization BIB004 , the Partition approach follows a local recoding scheme.
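Checking transactional k-anonymity itself is straightforward, as the following minimal sketch shows; it is only the check, not the Partition algorithm, and the modeling of transactions as sets and the item names are assumptions.

from collections import Counter

def is_transactionally_k_anonymous(transactions, k):
    """Every (generalized) transaction must occur at least k times."""
    counts = Counter(frozenset(t) for t in transactions)
    return all(c >= k for c in counts.values())

db = [{"fruit"}, {"fruit"}, {"fruit", "food"}]
print(is_transactionally_k_anonymous(db, k=2))   # False: {'fruit', 'food'} occurs once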
|
Privacy Preserving Web Query Log Publishing: A Survey on Anonymization Techniques <s> Clustering-Based k-Anonymity <s> Set-valued data, in which a set of values are associated with an individual, is common in databases ranging from market basket data, to medical databases of patients' symptoms and behaviors, to query engine search logs. Anonymizing this data is important if we are to reconcile the conflicting demands arising from the desire to release the data for study and the desire to protect the privacy of individuals represented in the data. Unfortunately, the bulk of existing anonymization techniques, which were developed for scenarios in which each individual is associated with only one sensitive value, are not well-suited for set-valued data. In this paper we propose a top-down, partition-based approach to anonymizing set-valued data that scales linearly with the input size and scores well on an information-loss data quality metric. We further note that our technique can be applied to anonymize the infamous AOL query logs, and discuss the merits and challenges in anonymizing query logs using our approach. <s> BIB001 </s> Privacy Preserving Web Query Log Publishing: A Survey on Anonymization Techniques <s> Clustering-Based k-Anonymity <s> Web query log data contain information useful to research; however, release of such data can re-identify the search engine users issuing the queries. These privacy concerns go far beyond removing explicitly identifying information such as name and address, since non-identifying personal data can be combined with publicly available information to pinpoint to an individual. In this work we model web query logs as unstructured transaction data and present a novel transaction anonymization technique based on clustering and generalization techniques to achieve the k-anonymity privacy. We conduct extensive experiments on the AOL query log data. Our results show that this method results in a higher data utility compared to the state-of-the-art transaction anonymization methods. <s> BIB002
|
The Partition method suffers from significant information loss for two reasons. First, it stops partitioning the data at a high level of the taxonomy, because the exponential branching involved in generating sub-partitions quickly diminishes the size of a sub-partition and causes k-anonymity violations; this is especially true for query logs with a large and diverse item universe. Second, it does not deal with item duplication in the generalized transaction, whereas preserving term frequency (as much as possible) is important for many applications, such as the TF-IDF weighting used by ranking algorithms. The authors of BIB002 adopted the privacy notion of transactional k-anonymity BIB001 and proposed a clustering approach to query log anonymization as a solution to these shortcomings of the Partition method. The main idea in BIB002 is to group "similar" transactions together in order to reduce the amount of generalization and suppression required to make them identical. For example, the generalized transaction for <Apple> and <Milk> is <Food>, while for <Apple> and <Orange> it is <Fruit>; clearly the former entails more information loss. Transaction anonymization can therefore be treated as a clustering problem in which each cluster must contain at least k transactions and these transactions should be "similar". They define a transaction as a bag of items (thus allowing duplicate items). A transaction t' is a generalized transaction of a transaction t if each item i' ∈ t' represents (the generalization of) one "distinct" item i ∈ t. This transaction model differs from BIB001 in two ways. First, it allows duplicate items in a transaction and in its generalized transaction: if t' = <Fruit, Fruit> is a generalized transaction of t, then t' represents two leaf items under Fruit in t. Second, it allows items in a transaction to lie on the same path in the item taxonomy while each item still represents a distinct leaf item: the transaction <Fruit, Food> is interpreted as Fruit representing a leaf item under Fruit, and Food representing a leaf item under Food that is not represented by Fruit. The least common generalization (LCG) was proposed as a way to measure the similarity of a subset of transactions. The LCG of a set of transactions S is a common generalized transaction of all transactions in S such that no more special common generalized transaction exists. The authors devised an efficient linear-time bottom-up item generalization algorithm to compute the LCG. They also proposed group generalization distortion (GGD) as a measure capturing both the generalization and the suppression distortion of a set of transactions, and formulated transaction anonymization as the problem of clustering a set of transactions into clusters of size at least k such that the sum of the GGD of the LCG over all clusters is minimized. Since the problem is NP-hard, they presented a heuristic linear-time algorithm, called Clump, which, unlike Partition, preserves duplicate items after generalization.
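The example below illustrates why grouping similar transactions is cheaper: it computes the least common ancestor of two items in a toy taxonomy, which is the basic operation underlying the LCG. The real bottom-up LCG algorithm of BIB002 handles bags of items and suppression, so this is only a simplified sketch with an assumed taxonomy layout.

PARENT = {"Apple": "Fruit", "Orange": "Fruit", "Milk": "Dairy",
          "Fruit": "Food", "Dairy": "Food", "Food": None}

def ancestors(item):
    """The item itself followed by its ancestors up to the taxonomy root."""
    path = [item]
    while PARENT[item] is not None:
        item = PARENT[item]
        path.append(item)
    return path

def least_common_ancestor(a, b):
    anc_a = ancestors(a)
    for x in ancestors(b):
        if x in anc_a:
            return x

print(least_common_ancestor("Apple", "Orange"))   # Fruit  (little distortion)
print(least_common_ancestor("Apple", "Milk"))     # Food   (more distortion)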
|
Privacy Preserving Web Query Log Publishing: A Survey on Anonymization Techniques <s> Heuristic Generalization with Heuristic Suppression <s> In this paper we study the problem of protecting privacy in the publication of set-valued data. Consider a collection of transactional data that contains detailed information about items bought together by individuals. Even after removing all personal characteristics of the buyer, which can serve as links to his identity, the publication of such data is still subject to privacy attacks from adversaries who have partial knowledge about the set. Unlike most previous works, we do not distinguish data as sensitive and non-sensitive, but we consider them both as potential quasi-identifiers and potential sensitive data, depending on the point of view of the adversary. We define a new version of the k-anonymity guarantee, the km-anonymity, to limit the effects of the data dimensionality and we propose efficient algorithms to transform the database. Our anonymization model relies on generalization instead of suppression, which is the most common practice in related works on such data. We develop an algorithm which finds the optimal solution, however, at a high cost which makes it inapplicable for large, realistic problems. Then, we propose two greedy heuristics, which scale much better and in most of the cases find a solution close to the optimal. The proposed algorithms are experimentally evaluated using real datasets. <s> BIB001 </s> Privacy Preserving Web Query Log Publishing: A Survey on Anonymization Techniques <s> Heuristic Generalization with Heuristic Suppression <s> This paper considers the problem of publishing "transaction data" for research purposes. Each transaction is an arbitrary set of items chosen from a large universe. Detailed transaction data provides an electronic image of one's life. This has two implications. One, transaction data are excellent candidates for data mining research. Two, use of transaction data would raise serious concerns over individual privacy. Therefore, before transaction data is released for data mining, it must be made anonymous so that data subjects cannot be re-identified. The challenge is that transaction data has no structure and can be extremely high dimensional. Traditional anonymization methods lose too much information on such data. To date, there has been no satisfactory privacy notion and solution proposed for anonymizing transaction data. This paper proposes one way to address this issue. <s> BIB002 </s> Privacy Preserving Web Query Log Publishing: A Survey on Anonymization Techniques <s> Heuristic Generalization with Heuristic Suppression <s> Privacy protection in publishing transaction data is an important problem. A key feature of transaction data is the extreme sparsity, which renders any single technique ineffective in anonymizing such data. Among recent works, some incur high information loss, some result in data hard to interpret, and some suffer from performance drawbacks. This paper proposes to integrate generalization and suppression to reduce information loss. However, the integration is non-trivial. We propose novel techniques to address the efficiency and scalability challenges. Extensive experiments on real world databases show that this approach outperforms the state-of-the-art methods, including global generalization, local generalization, and total suppression. 
In addition, transaction data anonymized by this approach can be analyzed by standard data mining tools, a property that local generalization fails to provide. <s> BIB003
|
Motivated by the limitations of k^m-anonymity, the authors of BIB003 proposed to integrate the global generalization technique of BIB001 with the total item suppression technique of BIB002 for enforcing k^m-anonymity. They applied the full subtree generalization technique BIB001 , meaning that a generalization solution Cut is defined by a cut on the taxonomy tree with exactly one item on every root-to-leaf path. Since full subtree generalization can suffer from excessive distortion in the presence of outliers, suppressing a few outlier items reduces the information loss caused by excessive generalization. They therefore also applied total item suppression, which removes some items of Cut from all transactions. The loss metric aggregates both generalization and suppression loss. The anonymized data is derived in two steps: first, the items are generalized with respect to the Cut, and then some items of the Cut are suppressed in all transactions. Since the number of cuts of a taxonomy is exponential in the number of items, and enumerating the suppression scenarios for a single cut is also intractable, the authors provided a heuristic approach to address these issues.
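The following sketch illustrates the combination of a full-subtree cut with total item suppression on a toy database. The cut, the choice of suppressed item, and the item names are assumptions for illustration and do not reflect the heuristic search of BIB003 .

# The assumed cut keeps Milk and Caviar at leaf level and generalizes all
# fruits to Fruit; Caviar is then treated as an outlier and totally suppressed.
CUT = {"Apple": "Fruit", "Orange": "Fruit", "Milk": "Milk", "Caviar": "Caviar"}

def generalize(transaction):
    return {CUT[i] for i in transaction}

def anonymize(transactions, suppressed):
    return [generalize(t) - suppressed for t in transactions]

db = [{"Apple", "Milk"}, {"Orange", "Milk"}, {"Apple", "Caviar"}]
print(anonymize(db, suppressed={"Caviar"}))
# e.g. [{'Fruit', 'Milk'}, {'Fruit', 'Milk'}, {'Fruit'}]: suppressing the rare
# item avoids generalizing everything up to the taxonomy root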
|
Privacy Preserving Web Query Log Publishing: A Survey on Anonymization Techniques <s> Sketch-based Anonymization <s> In recent years, privacy preserving data mining has become very important because of the proliferation of large amounts of data on the internet. Many data sets are inherently high dimensional, which are challenging to different privacy preservation algorithms. However, some domains of such data sets also have some special properties which make the use of sketch based techniques particularly useful. In this paper, we present a new method for privacy preserving data mining of text and binary data with the use of a sketch based approach. The special properties of such data sets which are exploited are that of sparsity; according to this property, only a small percentage of the attributes have non-zero values. We formalize an anonymity model for the sketch based approach, and utilize it in order to construct sketch based privacy preserving representations of the original data. This representation allows accurate computation of a number of important data mining primitives such as the dot product. Therefore, it can be used for a variety of data mining algorithms such as clustering and classification. We illustrate the effectiveness of our approach on a number of real and synthetic data sets. We show that the accuracy of data mining algorithms is preserved by the transformation even in the presence of increasing data dimensionality. <s> BIB001
|
The sketch-based privacy-preserving approach BIB001 reduces the dimensionality of the data by producing a much smaller number of features to represent it. This technique is particularly effective for high-dimensional sparse data such as query logs. The idea is to replace a user's search history by a set of sketches. Two privacy criteria are associated with this technique: δ-anonymity and k-variance. δ-Anonymity ensures that the uncertainty in the reconstructed value of each term frequency is at least δ. As noted in BIB001 , a disadvantage of δ-anonymity is that it treats each user independently, regardless of whether there are other users similar to him or her. The authors argued that it is desirable to give outliers (users who use unique terms) more protection than users who are similar to many others. They therefore defined k-variance, which ensures that a user's sanitized search history cannot easily be distinguished from those of its k nearest neighbors, and described algorithms for achieving δ-anonymity and k-variance using suppression.
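As a rough illustration of the sketch idea, the code below replaces sparse term-frequency vectors by short random-projection sketches from which dot products can still be estimated. The projection used here is an assumption for illustration only; it is not the exact sketch construction or the δ-anonymity/k-variance mechanism of BIB001 .

import numpy as np

rng = np.random.default_rng(0)
VOCAB, SKETCH_LEN = 10_000, 64
R = rng.choice([-1.0, 1.0], size=(SKETCH_LEN, VOCAB))   # shared random signs

def sketch(freq_vector):
    """Short random-projection summary of a user's term-frequency vector."""
    return (R @ freq_vector) / np.sqrt(SKETCH_LEN)

u = np.zeros(VOCAB); u[[3, 17, 42]] = [5, 2, 1]          # user A's term counts
v = np.zeros(VOCAB); v[[3, 99]] = [4, 7]                 # user B's term counts
print(u @ v)                  # true dot product: 20.0
print(sketch(u) @ sketch(v))  # noisy estimate recovered from the sketches alone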
|
Privacy Preserving Web Query Log Publishing: A Survey on Anonymization Techniques <s> Differential Privacy <s> In 1977 Dalenius articulated a desideratum for statistical databases: nothing about an individual should be learnable from the database that cannot be learned without access to the database. We give a general impossibility result showing that a formalization of Dalenius' goal along the lines of semantic security cannot be achieved. Contrary to intuition, a variant of the result threatens the privacy even of someone not in the database. This state of affairs suggests a new measure, differential privacy, which, intuitively, captures the increased risk to one's privacy incurred by participating in a database. The techniques developed in a sequence of papers [8, 13, 3], culminating in those described in [12], can achieve any desired level of privacy under this measure. In many cases, extremely accurate information about the database can be provided while simultaneously ensuring very high levels of privacy <s> BIB001 </s> Privacy Preserving Web Query Log Publishing: A Survey on Anonymization Techniques <s> Differential Privacy <s> The question of how to publish an anonymized search log was brought to the forefront by a well-intentioned, but privacy-unaware AOL search log release. Since then a series of ad-hoc techniques have been proposed in the literature, though none are known to be provably private. In this paper, we take a major step towards a solution: we show how queries, clicks and their associated perturbed counts can be published in a manner that rigorously preserves privacy. Our algorithm is decidedly simple to state, but non-trivial to analyze. On the opposite side of privacy is the question of whether the data we can safely publish is of any use. Our findings offer a glimmer of hope: we demonstrate that a non-negligible fraction of queries and clicks can indeed be safely published via a collection of experiments on a real search log. In addition, we select an application, keyword generation, and show that the keyword suggestions generated from the perturbed data resemble those generated from the original data. <s> BIB002
|
Differential privacy BIB001 is one of the state-of-the-art techniques for ensuring privacy and is more robust to attacks than other existing privacy definitions. The notion of differential privacy was applied to search queries in BIB002 , which adds random noise to statistics of the search log such as term frequencies. The noise is drawn independently from the Laplace distribution with mean zero and an appropriate scaling parameter. The algorithm's output contains frequent queries together with noisy statistics of the queries and the clicked URLs.
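A minimal sketch of this style of release is shown below: Laplace noise is added to query counts and only queries whose noisy count passes a threshold are published. The noise scale, threshold, and example queries are assumptions for illustration and do not reproduce the exact algorithm or privacy analysis of BIB002 .

import numpy as np

rng = np.random.default_rng(0)

def noisy_release(query_counts, epsilon, threshold):
    """Add Laplace noise (scale assumed to be 1/epsilon) to each query count
    and publish only queries whose noisy count reaches the threshold."""
    released = {}
    for query, count in query_counts.items():
        noisy = count + rng.laplace(loc=0.0, scale=1.0 / epsilon)
        if noisy >= threshold:
            released[query] = noisy
    return released

counts = {"weather": 1200, "flu symptoms": 45, "john smith 123 main st": 1}
print(noisy_release(counts, epsilon=1.0, threshold=20))
# frequent queries survive with perturbed counts; rare, identifying ones do not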
|
Privacy Preserving Web Query Log Publishing: A Survey on Anonymization Techniques <s> ρ-Uncertainty <s> Existing research on privacy-preserving data publishing focuses on relational data: in this context, the objective is to enforce privacy-preserving paradigms, such as k- anonymity and lscr-diversity, while minimizing the information loss incurred in the anonymizing process (i.e. maximize data utility). However, existing techniques adopt an indexing- or clustering- based approach, and work well for fixed-schema data, with low dimensionality. Nevertheless, certain applications require privacy-preserving publishing of transaction data (or basket data), which involves hundreds or even thousands of dimensions, rendering existing methods unusable. We propose a novel anonymization method for sparse high-dimensional data. We employ a particular representation that captures the correlation in the underlying data, and facilitates the formation of anonymized groups with low information loss. We propose an efficient anonymization algorithm based on this representation. We show experimentally, using real-life datasets, that our method clearly outperforms existing state-of-the-art in terms of both data utility and computational overhead. <s> BIB001 </s> Privacy Preserving Web Query Log Publishing: A Survey on Anonymization Techniques <s> ρ-Uncertainty <s> This paper considers the problem of publishing "transaction data" for research purposes. Each transaction is an arbitrary set of items chosen from a large universe. Detailed transaction data provides an electronic image of one's life. This has two implications. One, transaction data are excellent candidates for data mining research. Two, use of transaction data would raise serious concerns over individual privacy. Therefore, before transaction data is released for data mining, it must be made anonymous so that data subjects cannot be re-identified. The challenge is that transaction data has no structure and can be extremely high dimensional. Traditional anonymization methods lose too much information on such data. To date, there has been no satisfactory privacy notion and solution proposed for anonymizing transaction data. This paper proposes one way to address this issue. <s> BIB002 </s> Privacy Preserving Web Query Log Publishing: A Survey on Anonymization Techniques <s> ρ-Uncertainty <s> The publication of transaction data, such as market basket data, medical records, and query logs, serves the public benefit. Mining such data allows for the derivation of association rules that connect certain items to others with measurable confidence. Still, this type of data analysis poses a privacy threat; an adversary having partial information on a person's behavior may confidently associate that person to an item deemed to be sensitive. Ideally, an anonymization of such data should lead to an inference-proof version that prevents the association of individuals to sensitive items, while otherwise allowing for truthful associations to be derived. Original approaches to this problem were based on value perturbation, damaging data integrity. Recently, value generalization has been proposed as an alternative; still, approaches based on it have assumed either that all items are equally sensitive, or that some are sensitive and can be known to an adversary only by association, while others are non-sensitive and can be known directly. 
Yet in reality there is a distinction between sensitive and non-sensitive items, but an adversary may possess information on any of them. Most critically, no antecedent method aims at a clear inference-proof privacy guarantee. In this paper, we propose ρ-uncertainty, the first, to our knowledge, privacy concept that inherently safeguards against sensitive associations without constraining the nature of an adversary's knowledge and without falsifying data. The problem of achieving ρ-uncertainty with low information loss is challenging because it is natural. A trivial solution is to suppress all sensitive items. We develop more sophisticated schemes. In a broad experimental study, we show that the problem is solved non-trivially by a technique that combines generalization and suppression, which also achieves favorable results compared to a baseline perturbation-based scheme. <s> BIB003
|
The privacy notion of ρ-uncertainty BIB003 ensures that the confidence of any sensitive association rule is at most ρ, while truthful association rules can still be derived. Like the works in BIB001 and BIB002 , it distinguishes between public (non-sensitive) and private (sensitive) items. Formally, a ρ-uncertain transaction set D does not allow an attacker knowing any subset of a transaction t ∈ D to infer a sensitive item in t with confidence higher than ρ. The authors proposed a technique that combines global generalization over non-sensitive items with selective global suppression of some items. This notion is similar to (h, k, p)-coherence; however, the ρ-uncertainty model also allows an adversary with prior knowledge of the private items.
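The brute-force check below illustrates the ρ-uncertainty condition by bounding the confidence of every candidate sensitive rule β → e. For the sake of a tiny example the antecedent size is capped, which the actual definition does not do, and the item names are assumptions.

from itertools import combinations

def rho_uncertain(transactions, sensitive, rho, max_antecedent=2):
    """Check that no rule beta -> e with e sensitive exceeds confidence rho.
    The cap on the antecedent size is only to keep this toy example small."""
    items = set().union(*transactions)
    for size in range(1, max_antecedent + 1):
        for beta in combinations(sorted(items), size):
            beta = set(beta)
            matching = [t for t in transactions if beta <= t]
            if not matching:
                continue
            for e in sensitive - beta:
                conf = sum(1 for t in matching if e in t) / len(matching)
                if conf > rho:
                    return False
    return True

db = [{"bread", "hiv"}, {"bread", "milk"}, {"bread", "milk", "hiv"}]
print(rho_uncertain(db, sensitive={"hiv"}, rho=0.7))   # True for this toy data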
|
Privacy Preserving Web Query Log Publishing: A Survey on Anonymization Techniques <s> Privacy Preservation <s> In recent years, privacy preserving data mining has become very important because of the proliferation of large amounts of data on the internet. Many data sets are inherently high dimensional, which are challenging to different privacy preservation algorithms. However, some domains of such data sets also have some special properties which make the use of sketch based techniques particularly useful. In this paper, we present a new method for privacy preserving data mining of text and binary data with the use of a sketch based approach. The special properties of such data sets which are exploited are that of sparsity; according to this property, only a small percentage of the attributes have non-zero values. We formalize an anonymity model for the sketch based approach, and utilize it in order to construct sketch based privacy preserving representations of the original data. This representation allows accurate computation of a number of important data mining primitives such as the dot product. Therefore, it can be used for a variety of data mining algorithms such as clustering and classification. We illustrate the effectiveness of our approach on a number of real and synthetic data sets. We show that the accuracy of data mining algorithms is preserved by the transformation even in the presence of increasing data dimensionality. <s> BIB001 </s> Privacy Preserving Web Query Log Publishing: A Survey on Anonymization Techniques <s> Privacy Preservation <s> This paper considers the problem of publishing "transaction data" for research purposes. Each transaction is an arbitrary set of items chosen from a large universe. Detailed transaction data provides an electronic image of one's life. This has two implications. One, transaction data are excellent candidates for data mining research. Two, use of transaction data would raise serious concerns over individual privacy. Therefore, before transaction data is released for data mining, it must be made anonymous so that data subjects cannot be re-identified. The challenge is that transaction data has no structure and can be extremely high dimensional. Traditional anonymization methods lose too much information on such data. To date, there has been no satisfactory privacy notion and solution proposed for anonymizing transaction data. This paper proposes one way to address this issue. <s> BIB002 </s> Privacy Preserving Web Query Log Publishing: A Survey on Anonymization Techniques <s> Privacy Preservation <s> Existing research on privacy-preserving data publishing focuses on relational data: in this context, the objective is to enforce privacy-preserving paradigms, such as k- anonymity and lscr-diversity, while minimizing the information loss incurred in the anonymizing process (i.e. maximize data utility). However, existing techniques adopt an indexing- or clustering- based approach, and work well for fixed-schema data, with low dimensionality. Nevertheless, certain applications require privacy-preserving publishing of transaction data (or basket data), which involves hundreds or even thousands of dimensions, rendering existing methods unusable. We propose a novel anonymization method for sparse high-dimensional data. We employ a particular representation that captures the correlation in the underlying data, and facilitates the formation of anonymized groups with low information loss. 
We propose an efficient anonymization algorithm based on this representation. We show experimentally, using real-life datasets, that our method clearly outperforms existing state-of-the-art in terms of both data utility and computational overhead. <s> BIB003 </s> Privacy Preserving Web Query Log Publishing: A Survey on Anonymization Techniques <s> Privacy Preservation <s> Set-valued data, in which a set of values are associated with an individual, is common in databases ranging from market basket data, to medical databases of patients' symptoms and behaviors, to query engine search logs. Anonymizing this data is important if we are to reconcile the conflicting demands arising from the desire to release the data for study and the desire to protect the privacy of individuals represented in the data. Unfortunately, the bulk of existing anonymization techniques, which were developed for scenarios in which each individual is associated with only one sensitive value, are not well-suited for set-valued data. In this paper we propose a top-down, partition-based approach to anonymizing set-valued data that scales linearly with the input size and scores well on an information-loss data quality metric. We further note that our technique can be applied to anonymize the infamous AOL query logs, and discuss the merits and challenges in anonymizing query logs using our approach. <s> BIB004 </s> Privacy Preserving Web Query Log Publishing: A Survey on Anonymization Techniques <s> Privacy Preservation <s> Privacy is an important issue when one wants to make use of data that involves individuals' sensitive information. Research on protecting the privacy of individuals and the confidentiality of data has received contributions from many fields, including computer science, statistics, economics, and social science. In this paper, we survey research work in privacy-preserving data publishing. This is an area that attempts to answer the problem of how an organization, such as a hospital, government agency, or insurance company, can release data to the public without violating the confidentiality of personal information. We focus on privacy criteria that provide formal safety guarantees, present algorithms that sanitize data to make it safe for release while preserving useful information, and discuss ways of analyzing the sanitized data. Many challenges still remain. This survey provides a summary of the current state-of-the-art, based on which we expect to see advances in years to come. <s> BIB005 </s> Privacy Preserving Web Query Log Publishing: A Survey on Anonymization Techniques <s> Privacy Preservation <s> The question of how to publish an anonymized search log was brought to the forefront by a well-intentioned, but privacy-unaware AOL search log release. Since then a series of ad-hoc techniques have been proposed in the literature, though none are known to be provably private. In this paper, we take a major step towards a solution: we show how queries, clicks and their associated perturbed counts can be published in a manner that rigorously preserves privacy. Our algorithm is decidedly simple to state, but non-trivial to analyze. On the opposite side of privacy is the question of whether the data we can safely publish is of any use. Our findings offer a glimmer of hope: we demonstrate that a non-negligible fraction of queries and clicks can indeed be safely published via a collection of experiments on a real search log. 
In addition, we select an application, keyword generation, and show that the keyword suggestions generated from the perturbed data resemble those generated from the original data. <s> BIB006 </s> Privacy Preserving Web Query Log Publishing: A Survey on Anonymization Techniques <s> Privacy Preservation <s> The publication of transaction data, such as market basket data, medical records, and query logs, serves the public benefit. Mining such data allows for the derivation of association rules that connect certain items to others with measurable confidence. Still, this type of data analysis poses a privacy threat; an adversary having partial information on a person's behavior may confidently associate that person to an item deemed to be sensitive. Ideally, an anonymization of such data should lead to an inference-proof version that prevents the association of individuals to sensitive items, while otherwise allowing for truthful associations to be derived. Original approaches to this problem were based on value perturbation, damaging data integrity. Recently, value generalization has been proposed as an alternative; still, approaches based on it have assumed either that all items are equally sensitive, or that some are sensitive and can be known to an adversary only by association, while others are non-sensitive and can be known directly. Yet in reality there is a distinction between sensitive and non-sensitive items, but an adversary may possess information on any of them. Most critically, no antecedent method aims at a clear inference-proof privacy guarantee. In this paper, we propose ρ-uncertainty, the first, to our knowledge, privacy concept that inherently safeguards against sensitive associations without constraining the nature of an adversary's knowledge and without falsifying data. The problem of achieving ρ-uncertainty with low information loss is challenging because it is natural. A trivial solution is to suppress all sensitive items. We develop more sophisticated schemes. In a broad experimental study, we show that the problem is solved non-trivially by a technique that combines generalization and suppression, which also achieves favorable results compared to a baseline perturbation-based scheme. <s> BIB007
|
In this survey we considered two models for query log anonymization: the non-transactional model and the transactional model. Although the techniques discussed under the non-transactional model in Section 3.1 protect privacy to some extent, they lack a formal privacy guarantee. For example, the release of the AOL query log still led to the re-identification of a search engine user even after user identifiers were hashed . This is because the query content itself may be used together with publicly available information for linking attacks. In the transactional model we consider Web query logs as unstructured transaction data and therefore approach query-log anonymization from the transaction database anonymization point of view. Such modeling, however, might not be ideal, since there are strong correlations between keywords within a query (arising from natural language) and between queries within a single session, which is not true of typical transactions. Exploiting these correlations could lead to better solutions in the future. Among the previous work in transaction anonymization, the coherence approach BIB002 can prevent both record linkage and attribute linkage attacks; the band matrix BIB003 and ρ-uncertainty BIB007 approaches also target attribute linkage attacks. Both k^m-anonymization and k-anonymization do not distinguish data as sensitive and non-sensitive, but treat items as potential QI and SA. In fact, determining which items are sensitive is not always possible in many real applications, considering the huge size of the item universe. The adversary's background knowledge is bounded in coherence and k^m-anonymization, while in the band matrix and k-anonymization approaches the attacker's knowledge is not limited. A security issue with bounded knowledge in coherence and k^m-anonymization was pointed out in BIB004: if the background knowledge concerns the "absence" of items, the attacker may use it to exclude transactions and focus on fewer than k transactions. The HgHs approach also has this privacy issue. For the sketch-based privacy-preserving approach BIB001, the authors in BIB005 argued that one should be careful about releasing the (pseudo)randomly generated values used in the sanitization process of BIB001, since these may allow attackers to reconstruct the original data, which is a privacy breach. While applying differential privacy to search queries BIB006 is very promising, like every existing privacy definition it is susceptible to active attacks. The assumption that users behave honestly may lead to a privacy breach: if an attacker creates multiple accounts and issues a private query, such as someone else's credit card number, among his first queries from each of them, the search engine could end up publishing this private data.
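To make the bounded-knowledge guarantee concrete, the following is a minimal sketch (not taken from any of the surveyed papers) of a check for k^m-anonymity over set-valued data: every itemset of at most m items that occurs in the data must be supported by at least k transactions. The function name `is_km_anonymous` and the toy query-log data are illustrative assumptions.

```python
from itertools import combinations
from collections import Counter

def is_km_anonymous(transactions, k, m):
    """Check k^m-anonymity for set-valued data: every itemset of size <= m
    that appears in the data must be contained in at least k transactions.

    transactions: list of sets of items (e.g., query keywords per user).
    """
    support = Counter()
    for t in transactions:
        items = sorted(t)
        for size in range(1, m + 1):
            for combo in combinations(items, size):
                support[combo] += 1
    # Any observed itemset supported by fewer than k transactions is a violation
    # (unobserved itemsets have support 0, which is also allowed).
    return all(count >= k for count in support.values())

# Toy example: each set stands for the keywords issued by one pseudonymous user.
logs = [{"flights", "paris"}, {"flights", "paris"}, {"flights", "rome"}]
print(is_km_anonymous(logs, k=2, m=2))  # False: ("flights", "rome") occurs in only one transaction
```

An anonymizer based on this check would generalize or suppress items until the predicate holds, which is essentially the strategy of the k^m-anonymization approaches discussed above.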
|
Privacy Preserving Web Query Log Publishing: A Survey on Anonymization Techniques <s> Utility Preservation <s> We are given a large database of customer transactions. Each transaction consists of items purchased by a customer in a visit. We present an efficient algorithm that generates all significant association rules between items in the database. The algorithm incorporates buffer management and novel estimation and pruning techniques. We also present results of applying this algorithm to sales data obtained from a large retailing company, which shows the effectiveness of the algorithm. <s> BIB001 </s> Privacy Preserving Web Query Log Publishing: A Survey on Anonymization Techniques <s> Utility Preservation <s> In recent years, privacy preserving data mining has become very important because of the proliferation of large amounts of data on the internet. Many data sets are inherently high dimensional, which are challenging to different privacy preservation algorithms. However, some domains of such data sets also have some special properties which make the use of sketch based techniques particularly useful. In this paper, we present a new method for privacy preserving data mining of text and binary data with the use of a sketch based approach. The special properties of such data sets which are exploited are that of sparsity; according to this property, only a small percentage of the attributes have non-zero values. We formalize an anonymity model for the sketch based approach, and utilize it in order to construct sketch based privacy preserving representations of the original data. This representation allows accurate computation of a number of important data mining primitives such as the dot product. Therefore, it can be used for a variety of data mining algorithms such as clustering and classification. We illustrate the effectiveness of our approach on a number of real and synthetic data sets. We show that the accuracy of data mining algorithms is preserved by the transformation even in the presence of increasing data dimensionality. <s> BIB002 </s> Privacy Preserving Web Query Log Publishing: A Survey on Anonymization Techniques <s> Utility Preservation <s> We consider the problem of publishing sensitive transaction data with privacy preservation. High dimensionality of transaction data poses unique challenges on data privacy and data utility. On one hand, re-identification attacks tend to use a subset of items that infrequently occur in transactions, called moles. On the other hand, data mining applications typically depend on subsets of items that frequently occur in transactions, called nuggets. Thus the problem is how to eliminate all moles while retaining nuggets as much as possible. A challenge is that moles and nuggets are multi-dimensional with exponential growth and are tangled together by shared items. We present a novel and scalable solution to this problem. The novelty lies in a compact border data structure that eliminates the need of generating all moles and nuggets. <s> BIB003 </s> Privacy Preserving Web Query Log Publishing: A Survey on Anonymization Techniques <s> Utility Preservation <s> This paper considers the problem of publishing "transaction data" for research purposes. Each transaction is an arbitrary set of items chosen from a large universe. Detailed transaction data provides an electronic image of one's life. This has two implications. One, transaction data are excellent candidates for data mining research. 
Two, use of transaction data would raise serious concerns over individual privacy. Therefore, before transaction data is released for data mining, it must be made anonymous so that data subjects cannot be re-identified. The challenge is that transaction data has no structure and can be extremely high dimensional. Traditional anonymization methods lose too much information on such data. To date, there has been no satisfactory privacy notion and solution proposed for anonymizing transaction data. This paper proposes one way to address this issue. <s> BIB004 </s> Privacy Preserving Web Query Log Publishing: A Survey on Anonymization Techniques <s> Utility Preservation <s> Set-valued data, in which a set of values are associated with an individual, is common in databases ranging from market basket data, to medical databases of patients' symptoms and behaviors, to query engine search logs. Anonymizing this data is important if we are to reconcile the conflicting demands arising from the desire to release the data for study and the desire to protect the privacy of individuals represented in the data. Unfortunately, the bulk of existing anonymization techniques, which were developed for scenarios in which each individual is associated with only one sensitive value, are not well-suited for set-valued data. In this paper we propose a top-down, partition-based approach to anonymizing set-valued data that scales linearly with the input size and scores well on an information-loss data quality metric. We further note that our technique can be applied to anonymize the infamous AOL query logs, and discuss the merits and challenges in anonymizing query logs using our approach. <s> BIB005 </s> Privacy Preserving Web Query Log Publishing: A Survey on Anonymization Techniques <s> Utility Preservation <s> Web query log data contain information useful to research; however, release of such data can re-identify the search engine users issuing the queries. These privacy concerns go far beyond removing explicitly identifying information such as name and address, since non-identifying personal data can be combined with publicly available information to pinpoint to an individual. In this work we model web query logs as unstructured transaction data and present a novel transaction anonymization technique based on clustering and generalization techniques to achieve the k-anonymity privacy. We conduct extensive experiments on the AOL query log data. Our results show that this method results in a higher data utility compared to the state-of-the-art transaction anonymization methods. <s> BIB006 </s> Privacy Preserving Web Query Log Publishing: A Survey on Anonymization Techniques <s> Utility Preservation <s> The publication of Web search logs is very useful for the scientific research community, but to preserve the users' privacy, logs have to be submitted to an anonymization process. Random query swapping is a common technique used to protect logs that provides k-anonymity to the users in exchange for loss of utility. With the assumption that by swapping queries semantically close this utility loss can be reduced, we introduce a novel protection method that semantically microaggregates the logs using the Open Directory Project. That is, we extend a common method used in statistical disclosure control to protect search logs from a semantic perspective. The method has been tested with a random subset of AOL search logs, and it has been observed that new logs improve the data usefulness. 
<s> BIB007 </s> Privacy Preserving Web Query Log Publishing: A Survey on Anonymization Techniques <s> Utility Preservation <s> Introduction to Privacy-Preserving Data Publishing: Concepts and Techniques. <s> BIB008
|
As discussed in Section 2.2.1, important utility factors for the anonymized data are item generalization/suppression loss, truthfulness, itemset utility, value exclusiveness, and item frequency. The authors in BIB003 and BIB004 assume that the taxonomy tree for transaction data tends to be flat with a large fanout, and thus decided to use item suppression instead of generalization; in this case, generalization loses more information than item suppression. However, if the transaction database is very sparse, the item suppression used by coherence may cause a large information loss. If the data is sparse and the taxonomy is "slim" and "tall", the generalization schemes in k^m-anonymization and k-anonymization could work better, whereas if the taxonomy is "short" and "wide", generalization causes a larger information loss BIB008 BIB006 . Data analysis on anonymized data is considered truthful with respect to the original data if the analysis results obtained from the modified data hold on the original data BIB008 . The coherence and k^m-anonymity approaches guarantee truthful analysis, while this is not the case for k-anonymization. The analysis of frequent itemsets BIB001 , i.e., items that co-occur frequently in transactions, has wide application in data mining tasks such as association rule mining and search recommendations; thus preserving itemsets is an important utility factor. Among the discussed approaches, coherence and band matrix can preserve such itemset utility. The local generalization in the transactional k-anonymity approach has a smaller information loss than global generalization; however, the anonymized data does not have value exclusiveness, which is important for many data mining algorithms. This means that new algorithms must be designed to analyze such data BIB008 . Most of the previous work in transaction data anonymization does not deal with item duplication, meaning that the frequency of a term in a query cannot be preserved well, which affects utilities such as count query results. For example, the information loss of the k-anonymity method in BIB005 can be high due to item generalization and the elimination of duplicate generalized items. The latter source of information loss is not captured by the usual information loss metrics for relational data, where no attribute value is eliminated by generalization. The authors in BIB006 , however, designed their anonymization method in a way that preserves item frequency. There is no guarantee of minimum data distortion in the semantic microaggregation technique BIB007 when computing the cluster centroids; moreover, the authors did not consider item generalization and its cost in their model. For the sketch-based privacy-preserving approach BIB002 , it remains to be seen whether it would be useful for anonymizing real search logs and, when only sanitized search logs are available, what kinds of search log analysis can still be conducted with acceptable accuracy.
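Two of the utility factors named above, suppression loss and itemset utility, are easy to quantify in practice. The sketch below (an illustration, not a metric defined in the surveyed papers; the function names `suppression_loss`, `frequent_pairs`, and `itemset_utility` are assumptions) measures the fraction of item occurrences removed by suppression and the share of originally frequent item pairs that remain frequent after anonymization.

```python
from itertools import combinations
from collections import Counter

def suppression_loss(original, anonymized):
    """Fraction of item occurrences removed by suppression (a simple loss measure)."""
    orig_count = sum(len(t) for t in original)
    anon_count = sum(len(t) for t in anonymized)
    return (orig_count - anon_count) / orig_count if orig_count else 0.0

def frequent_pairs(transactions, min_support):
    """Pairs of items that co-occur in at least `min_support` transactions."""
    support = Counter()
    for t in transactions:
        for pair in combinations(sorted(t), 2):
            support[pair] += 1
    return {pair for pair, count in support.items() if count >= min_support}

def itemset_utility(original, anonymized, min_support=2):
    """Share of originally frequent pairs that are still frequent after anonymization."""
    before = frequent_pairs(original, min_support)
    after = frequent_pairs(anonymized, min_support)
    return len(before & after) / len(before) if before else 1.0

# Toy example: suppressing the rare item "rome" costs 1/6 of item occurrences
# but keeps the only frequent pair ("flights", "paris") intact.
orig = [{"flights", "paris"}, {"flights", "paris"}, {"flights", "rome"}]
anon = [{"flights", "paris"}, {"flights", "paris"}, {"flights"}]
print(suppression_loss(orig, anon), itemset_utility(orig, anon))
```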
|
Differential Evolution in Wireless Communications: A Review <s> Evolutionary Computation <s> From the Publisher: ::: Many scientists and engineers now use the paradigms of evolutionary computation (genetic agorithms, evolution strategies, evolutionary programming, genetic programming, classifier systems, and combinations or hybrids thereof) to tackle problems that are either intractable or unrealistically time consuming to solve through traditional computational strategies. Recently there have been vigorous initiatives to promote cross-fertilization between the EC paradigms, and also to combine these paradigms with other approaches such as neural networks to create hybrid systems with enhanced capabilities. To address the need for speedy dissemination of new ideas in these fields, and also to assist in cross-disciplinary communications and understanding, Oxford University Press and the Institute of Physics have joined forces to create a major reference publication devoted to EC fundamentals, models, algorithms and applications. This work is intended to become the standard reference resource for the evolutionary computation community. The Handbook of Evolutionary Computation will be available in loose-leaf print form, as well as in an electronic version that combines both CD-ROM and on-line (World Wide Web) acess to its contents. Regularly published supplements will be available on a subscription basis. <s> BIB001 </s> Differential Evolution in Wireless Communications: A Review <s> Evolutionary Computation <s> The goal of this expository paper is to bring forth the basic current elements of soft computing (fuzzy logic, neural networks, genetic algorithms and genetic programming) and the current applications in intelligent control. Fuzzy sets and fuzzy logic and their applications to control systems have been documented. Other elements of soft computing, such as neural networks and genetic algorithms, are also treated for the novice reader. Each topic will have a number of relevant references of as many key contributors as possible. <s> BIB002 </s> Differential Evolution in Wireless Communications: A Review <s> Evolutionary Computation <s> Chapter 2: An Introduction to Genetic Algorithms for Engineering Applications Chapter 3: Memetic Algorithms Chapter 4: Scatter Search and Path Relinking: Foundations and Advanced Designs Chapter 5: Ant Colony Optimization Chapter 6: Differential Evolution Chapter 7: SOMA-Self-Organizing Migrating Algorithm Chapter 8: Discrete Particle Swarm Optimization:Illustrated by the Traveling Salesman Problem <s> BIB003 </s> Differential Evolution in Wireless Communications: A Review <s> Evolutionary Computation <s> Various efforts to integrate biological knowledge into networks of interactions have produced a lively microbial systems biology. Putting molecular biology and computer sciences in perspective, we review another trend in systems biology, in which recursivity and information replace the usual concepts of differential equations, feedback and feedforward loops and the like. Noting that the processes of gene expression separate the genome from the cell machinery, we analyse the role of the separation between machine and program in computers. However, computers do not make computers. For cells to make cells requires a specific organization of the genetic program, which we investigate using available knowledge. 
Microbial genomes are organized into a paleome (the name emphasizes the role of the corresponding functions from the time of the origin of life), comprising a constructor and a replicator, and a cenome (emphasizing community-relevant genes), made up of genes that permit life in a particular context. The cell duplication process supposes rejuvenation of the machine and replication of the program. The paleome also possesses genes that enable information to accumulate in a ratchet-like process down the generations. The systems biology must include the dynamics of information creation in its future developments. <s> BIB004 </s> Differential Evolution in Wireless Communications: A Review <s> Evolutionary Computation <s> This Portfolio selection Problem (PSP) remains an intractable research problem in finance and economics and often regarded as NP-hard problem in optimization and computational intelligence. This paper solved the extended Markowitz mean- variance portfolio selection model with an efficient Metaheuristics method of Generalized Differential Evolution 3 (GDE3). The extended Markowitz mean- variance portfolio selection model consists of four constraints: bounds on holdings, cardinality, minimum transaction lots, and expert opinion. There is no research in literature that had ever engaged the set of four constraints with GDE3 to solve PSP. This paper is the first to conduct the study in this direction. The first three sets of constraints have been presented in other researches in literatures. This paper introduced expert opinion constraint to existing portfolio selection models and solved with GDE3. The computational results obtained in this research study show improved performance when compared with other Metaheuristics methods of Genetic algorithm (GA), Simulated Annealing (SA), Tabu Search (TS) and Particle Swarm Optimization (PSO). <s> BIB005 </s> Differential Evolution in Wireless Communications: A Review <s> Evolutionary Computation <s> Today's maintenance workforce operates in a complex business environment and relies on metrics that indirectly link equipment breakdown, fluctuating production rate, demand uncertainties and fluctuating raw material requirements. This has triggered a change in the scope as well as the substance of maintenance workforce theory and practice, and the necessary requirement to promote a full understanding of maintenance workforce optimization of some seemingly non-polynomial hard problems. Theorizing is essential on the near optimal solution techniques for the maintenance workforce problem. In this paper, a fuzzy goal programming model is proposed and used in formulating a single objective function for maintenance workforce optimization with stochastic constraint consideration. The performance of the proposed model was verified using data obtained from a production system and simulated annealing (SA) as a solution method. The results obtained using SA and differential evolution (DE) were compared on the basis of computational time and quality of solution. We observed that the SA results outperform those of the DE algorithm. Based on the results obtained, the proposed model has the capacity to generate reliable information for preventive and breakdown workforce maintenance planning. <s> BIB006 </s> Differential Evolution in Wireless Communications: A Review <s> Evolutionary Computation <s> Explanations based on low-level interacting elements are valuable and powerful since they contribute to identify the key mechanisms of biological functions. 
However, many dynamic systems based on low-level interacting elements with unambiguous, finite, and complete information of initial states generate future states that cannot be predicted, implying an increase of complexity and open-ended evolution. Such systems are like Turing machines, that overlap with dynamical systems that cannot halt. We argue that organisms find halting conditions by distorting these mechanisms, creating conditions for a constant creativity that drives evolution. We introduce a modulus of elasticity to measure the changes in these mechanisms in response to changes in the computed environment. We test this concept in a population of predators and predated cells with chemotactic mechanisms and demonstrate how the selection of a given mechanism depends on the entire population. We finally explore this concept in different frameworks and postulate that the identification of predictive mechanisms is only successful with small elasticity modulus. <s> BIB007 </s> Differential Evolution in Wireless Communications: A Review <s> Evolutionary Computation <s> This paper draws on the “human reliability” concept as a structure for gaining insight into the maintenance workforce assessment in a process industry. Human reliability hinges on developing the reliability of humans to a threshold that guides the maintenance workforce to execute accurate decisions within the limits of resources and time allocations. This concept offers a worthwhile point of deviation to encompass three elegant adjustments to literature model in terms of maintenance time, workforce performance and return-on-workforce investments. These fully explain the results of our influence. The presented structure breaks new grounds in maintenance workforce theory and practice from a number of perspectives. First, we have successfully implemented fuzzy goal programming (FGP) and differential evolution (DE) techniques for the solution of optimisation problem in maintenance of a process plant for the first time. The results obtained in this work showed better quality of solution from the DE algorithm compared with those of genetic algorithm and particle swarm optimisation algorithm, thus expressing superiority of the proposed procedure over them. Second, the analytical discourse, which was framed on stochastic theory, focusing on specific application to a process plant in Nigeria is a novelty. The work provides more insights into maintenance workforce planning during overhaul rework and overtime maintenance activities in manufacturing systems and demonstrated capacity in generating substantially helpful information for practice. <s> BIB008
|
Physical phenomena are routinely studied using models. These models are sometimes complex because of the many parameters that constitute them. At times, the analysis of such models is prohibitively complex and wastes computation time. The insights and benefits accruing from the analysis of complex models have pushed researchers to look for a viable alternative, namely nature . Natural processes have inspired researchers to develop computational processes, methods and algorithms that can be used to solve complex problems or to provide the best available (optimum) solution of their models BIB001 . Because of the complexity of some models, optimisation becomes the only viable alternative, since such models admit many candidate solutions . Optimisation is central to many natural processes: evolution, memes and adaptation imply that organisms attempt to adjust over time so as to fit optimally into their environment despite constraints on space, food, shelter, mates and so on. The study of these natural processes gave birth to the notion of computational intelligence, of which evolutionary computation is a subfield . EC is a family of iterative algorithms inspired mostly by biological evolution and employed mostly in global optimisation [5] . Optimisation, in turn, is a branch of applied mathematics that deals with the optimisation of a given function or set of functions based on some predefined criteria. Global optimisation focuses on finding the maximum or minimum over all input values, while local optimisation deals with finding local minima or maxima. In EC, an initial set of candidate solutions is generated and iteratively updated to minimise or maximise the given function . Each new generation is produced by stochastically removing weaker or less desirable solutions (survival of the fittest) and introducing random changes, iterating until a termination criterion that guarantees a feasible solution is met . EC techniques can produce highly optimised solutions given a wide range of constraints and complex objective functions . This makes EC a suitable tool for solving multi-dimensional problems and advanced optimisation BIB007 . The choice of an EC technique is mainly based on the nature of the problem to be solved and the corresponding data structures. EC techniques perform well as higher-level procedures designed to find, select or determine a heuristic (partial search algorithm) that obtains a near-optimal solution BIB002 . EC works with incomplete, partial or imperfect information and limited computational capacity BIB004 . However, EC does not guarantee that an optimal (exact) solution will be obtained BIB003 . Differential evolution (DE) is one of the most widely used EC techniques BIB008 BIB006 BIB005 . The biological processes of evolution, mutation and adaptation inspired the development of DE. The aim of this review is to critically analyse the different areas where DE has been applied in wireless communications.
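The generate-mutate-select loop described above can be stated in a few lines. The following is a minimal, generic sketch of such a loop (not a specific algorithm from this review); the function name `evolve`, the search bounds, and the parameter values are illustrative assumptions.

```python
import random

def evolve(fitness, dim, pop_size=20, generations=100, sigma=0.1):
    """A minimal generic evolutionary loop: random initialisation, random
    perturbation (mutation), and survival-of-the-fittest selection (minimisation)."""
    population = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(generations):
        for i, parent in enumerate(population):
            # Mutate the parent and keep the better of parent and child.
            child = [x + random.gauss(0, sigma) for x in parent]
            if fitness(child) < fitness(parent):
                population[i] = child
    return min(population, key=fitness)

# Example: minimise the sphere function f(x) = sum(x_i^2).
best = evolve(lambda x: sum(v * v for v in x), dim=3)
print(best)
```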
|
Differential Evolution in Wireless Communications: A Review <s> Let <s> Differential evolution (DE) is well known as a simple and efficient scheme for global optimization over continuous spaces. It has reportedly outperformed a few evolutionary algorithms (EAs) and other search heuristics like the particle swarm optimization (PSO) when tested over both benchmark and real-world problems. DE, however, is not completely free from the problems of slow and/or premature convergence. This paper describes a family of improved variants of the DE/target-to-best/1/bin scheme, which utilizes the concept of the neighborhood of each population member. The idea of small neighborhoods, defined over the index-graph of parameter vectors, draws inspiration from the community of the PSO algorithms. The proposed schemes balance the exploration and exploitation abilities of DE without imposing serious additional burdens in terms of function evaluations. They are shown to be statistically significantly better than or at least comparable to several existing DE variants as well as a few other significant evolutionary computing techniques over a test suite of 24 benchmark functions. The paper also investigates the applications of the new DE variants to two real-life problems concerning parameter estimation for frequency modulated sound waves and spread spectrum radar poly-phase code design. <s> BIB001 </s> Differential Evolution in Wireless Communications: A Review <s> Let <s> A new differential evolution (DE) algorithm, JADE, is proposed to improve optimization performance by implementing a new mutation strategy ldquoDE/current-to-p bestrdquo with optional external archive and updating control parameters in an adaptive manner. The DE/current-to-pbest is a generalization of the classic ldquoDE/current-to-best,rdquo while the optional archive operation utilizes historical data to provide information of progress direction. Both operations diversify the population and improve the convergence performance. The parameter adaptation automatically updates the control parameters to appropriate values and avoids a user's prior knowledge of the relationship between the parameter settings and the characteristics of optimization problems. It is thus helpful to improve the robustness of the algorithm. Simulation results show that JADE is better than, or at least comparable to, other classic or adaptive DE algorithms, the canonical particle swarm optimization, and other evolutionary algorithms from the literature in terms of convergence performance for a set of 20 benchmark problems. JADE with an external archive shows promising results for relatively high dimensional problems. In addition, it clearly shows that there is no fixed control parameter setting suitable for various problems or even at different optimization stages of a single problem. <s> BIB002 </s> Differential Evolution in Wireless Communications: A Review <s> Let <s> Differential evolution (DE) is an efficient and powerful population-based stochastic search technique for solving optimization problems over continuous space, which has been widely applied in many scientific and engineering fields. However, the success of DE in solving a specific problem crucially depends on appropriately choosing trial vector generation strategies and their associated control parameter values. Employing a trial-and-error scheme to search for the most suitable strategy and its associated parameter settings requires high computational costs. 
Moreover, at different stages of evolution, different strategies coupled with different parameter settings may be required in order to achieve the best performance. In this paper, we propose a self-adaptive DE (SaDE) algorithm, in which both trial vector generation strategies and their associated control parameter values are gradually self-adapted by learning from their previous experiences in generating promising solutions. Consequently, a more suitable generation strategy along with its parameter settings can be determined adaptively to match different phases of the search process/evolution. The performance of the SaDE algorithm is extensively evaluated (using codes available from P. N. Suganthan) on a suite of 26 bound-constrained numerical optimization problems and compares favorably with the conventional DE and several state-of-the-art parameter adaptive DE variants. <s> BIB003
|
Let f : R^D -> R be the fitness function, which must be minimised subject to some equality, inequality or bound constraints. The function maps a candidate solution, represented as a vector in a D-dimensional space, to a real number as output. DE offers several advantages (a minimal implementation sketch follows this list):
1. DE is simpler and easier to implement when compared with other evolutionary algorithms. It is easy to code in different programming languages and can easily be understood by novices.
2. Its performance is better than that of most EC techniques in terms of convergence, computational speed, accuracy and robustness BIB001 . This makes it a suitable candidate for handling unimodal, multimodal, homogeneous, non-homogeneous, separable and non-separable systems BIB002 .
3. The number of control parameters (Cr, F, and NP) in DE is very small. This has helped to reduce the computational burden associated with the method BIB003 .
4. The low space complexity of DE has made it useful for solving large-scale, nonlinear and multi-dimensional optimisation problems.
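The sketch below shows the classic DE/rand/1/bin scheme with the three control parameters NP (population size), F (scale factor) and Cr (crossover rate) mentioned above, applied to minimising f. The function name `de_rand_1_bin`, the search bounds and the parameter values are illustrative assumptions rather than settings from the reviewed works.

```python
import random

def de_rand_1_bin(fitness, dim, bounds=(-5.0, 5.0), NP=30, F=0.5, Cr=0.9, generations=200):
    """Classic DE/rand/1/bin minimisation loop with control parameters NP, F and Cr."""
    lo, hi = bounds
    pop = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(NP)]
    cost = [fitness(x) for x in pop]
    for _ in range(generations):
        for i in range(NP):
            # Mutation: difference of two random individuals added to a third (DE/rand/1).
            a, b, c = random.sample([j for j in range(NP) if j != i], 3)
            mutant = [pop[a][d] + F * (pop[b][d] - pop[c][d]) for d in range(dim)]
            # Binomial crossover with rate Cr; j_rand guarantees at least one mutant gene.
            j_rand = random.randrange(dim)
            trial = [mutant[d] if (random.random() < Cr or d == j_rand) else pop[i][d]
                     for d in range(dim)]
            trial = [min(max(v, lo), hi) for v in trial]  # keep the trial within bounds
            # Greedy selection: the trial replaces the target vector if it is no worse.
            f_trial = fitness(trial)
            if f_trial <= cost[i]:
                pop[i], cost[i] = trial, f_trial
    best = min(range(NP), key=lambda i: cost[i])
    return pop[best], cost[best]

# Example: minimise the sphere function in 5 dimensions.
x_best, f_best = de_rand_1_bin(lambda x: sum(v * v for v in x), dim=5)
print(x_best, f_best)
```

Most of the DE variants surveyed in the following subsection modify one of three ingredients of this loop: the mutation strategy, the crossover operator, or the way F, Cr and NP are adapted during the run.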
|
Differential Evolution in Wireless Communications: A Review <s> Different variants of differential evolution <s> We explain the biology and physics underlying the chemotactic (foraging) behavior of E. coli bacteria. We explain a variety of bacterial swarming and social foraging behaviors and discuss the control system on the E. coli that dictates how foraging should proceed. Next, a computer program that emulates the distributed optimization process represented by the activity of social bacterial foraging is presented. To illustrate its operation, we apply it to a simple multiple-extremum function minimization problem and briefly discuss its relationship to some existing optimization algorithms. The article closes with a brief discussion on the potential uses of biomimicry of social foraging to develop adaptive controllers and cooperative control strategies for autonomous vehicles. For this, we provide some basic ideas and invite the reader to explore the concepts further. <s> BIB001 </s> Differential Evolution in Wireless Communications: A Review <s> Different variants of differential evolution <s> Previous studies have shown that differential evolution is an efficient, effective and robust evolutionary optimization method. However, the convergence rate of differential evolution in optimizing a computationally expensive objective function still does not meet all our requirements, and attempting to speed up DE is considered necessary. In this paper, a new local search operation, trigonometric mutation, is proposed and embedded into the differential evolution algorithm. This modification enables the algorithm to get a better trade-off between the convergence rate and the robustness. Thus it can be possible to increase the convergence velocity of the differential evolution algorithm and thereby obtain an acceptable solution with a lower number of objective function evaluations. Such an improvement can be advantageous in many real-world problems where the evaluation of a candidate solution is a computationally expensive operation and consequently finding the global optimum or a good sub-optimal solution with the original differential evolution algorithm is too time-consuming, or even impossible within the time available. In this article, the mechanism of the trigonometric mutation operation is presented and analyzed. The modified differential evolution algorithm is demonstrated in cases of two well-known test functions, and is further examined with two practical training problems of neural networks. The obtained numerical simulation results are providing empirical evidences on the efficiency and effectiveness of the proposed modified differential evolution algorithm. <s> BIB002 </s> Differential Evolution in Wireless Communications: A Review <s> Different variants of differential evolution <s> Parallel processing has emerged as a key enabling technology in modern computing. Recent software advances have allowed collections of heterogeneous computers to be used as a concurrent computational resource. In this work we explore how differential evolution can be parallelized, using a ring-network topology, so as to improve both the speed and the performance of the method. Experimental results indicate that the extent of information exchange among subpopulations assigned to different processor nodes, bears a significant impact on the performance of the algorithm. Furthermore, not all the mutation strategies of the differential evolution algorithm are equally sensitive to the value of this parameter. 
<s> BIB003 </s> Differential Evolution in Wireless Communications: A Review <s> Different variants of differential evolution <s> Opposition-based learning as a new scheme for machine intelligence is introduced. Estimates and counter-estimates, weights and opposite weights, and actions versus counter-actions are the foundation of this new approach. Examples are provided. Possibilities for extensions of existing learning algorithms are discussed. Preliminary results are provided <s> BIB004 </s> Differential Evolution in Wireless Communications: A Review <s> Different variants of differential evolution <s> Reinforcement learning is a machine intelligence scheme for learning in highly dynamic, probabilistic environments. By interaction with the environment, reinforcement agents learn optimal control policies, especially in the absence of a priori knowledge and/or a sufficiently large amount of training data. Despite its advantages, however, reinforcement learning suffers from a major drawback – high calculation cost because convergence to an optimal solution usually requires that all states be visited frequently to ensure that policy is reliable. This is not always possible, however, due to the complex, high-dimensional state space in many applications. This paper introduces opposition-based reinforcement learning, inspired by opposition-based learning, to speed up convergence. Considering opposite actions simultaneously enables individual states to be updated more than once shortening exploration and expediting convergence. Three versions of Q-learning algorithm will be given as examples. Experimental results for the grid world problem of different sizes demonstrate the superior performance of the proposed approach. <s> BIB005 </s> Differential Evolution in Wireless Communications: A Review <s> Different variants of differential evolution <s> The ability of differential evolution (DE) to perform well in continuous-valued search spaces is well documented. The arithmetic reproduction operator used by differential evolution is simple, however, the manner in which the operator is defined, makes it practically impossible to effectively apply the standard DE to other problem spaces. An interesting and unique mapping method is examined which will enable the DE algorithm to operate within binary space. Using angle modulation, a bit string can be generated using a trigonometric generating function. The DE is used to evolve the coefficients to the trigonometric function, thereby allowing a mapping from continuous-space to binary-space. Instead of evolving the higher-dimensional binary solution directly, angle modulation is used together with DE to reduce the complexity of the problem into a 4-dimensional continuous-valued problem. Experimental results indicate the effectiveness of the technique and the viability for the DE to operate in binary space. <s> BIB006 </s> Differential Evolution in Wireless Communications: A Review <s> Different variants of differential evolution <s> Most reported studies on differential evolution (DE) are obtained using low-dimensional problems, e.g., smaller than 100, which are relatively small for many real-world problems. In this paper we propose two new efficient DE variants, named DECC-I and DECC-II, for high-dimensional optimization (up to 1000 dimensions). The two algorithms are based on a cooperative coevolution framework incorporated with several novel strategies. The new strategies are mainly focus on problem decomposition and subcomponents cooperation. 
Experimental results have shown that these algorithms have superior performance on a set of widely used benchmark functions. <s> BIB007 </s> Differential Evolution in Wireless Communications: A Review <s> Different variants of differential evolution <s> Evolutionary algorithms (EAs) are well-known optimization approaches to deal with nonlinear and complex problems. However, these population-based algorithms are computationally expensive due to the slow nature of the evolutionary process. This paper presents a novel algorithm to accelerate the differential evolution (DE). The proposed opposition-based DE (ODE) employs opposition-based learning (OBL) for population initialization and also for generation jumping. In this work, opposite numbers have been utilized to improve the convergence rate of DE. A comprehensive set of 58 complex benchmark functions including a wide range of dimensions is employed for experimental verification. The influence of dimensionality, population size, jumping rate, and various mutation strategies are also investigated. Additionally, the contribution of opposite numbers is empirically verified. We also provide a comparison of ODE to fuzzy adaptive DE (FADE). Experimental results confirm that the ODE outperforms the original DE and FADE in terms of convergence speed and solution accuracy. <s> BIB008 </s> Differential Evolution in Wireless Communications: A Review <s> Different variants of differential evolution <s> We propose a crossover-based adaptive local search (LS) operation for enhancing the performance of standard differential evolution (DE) algorithm. Incorporating LS heuristics is often very useful in designing an effective evolutionary algorithm for global optimization. However, determining a single LS length that can serve for a wide range of problems is a critical issue. We present a LS technique to solve this problem by adaptively adjusting the length of the search, using a hill-climbing heuristic. The emphasis of this paper is to demonstrate how this LS scheme can improve the performance of DE. Experimenting with a wide range of benchmark functions, we show that the proposed new version of DE, with the adaptive LS, performs better, or at least comparably, to classic DE algorithm. Performance comparisons with other LS heuristics and with some other well-known evolutionary algorithms from literature are also presented. <s> BIB009 </s> Differential Evolution in Wireless Communications: A Review <s> Different variants of differential evolution <s> Differential evolution (DE) is well known as a simple and efficient scheme for global optimization over continuous spaces. It has reportedly outperformed a few evolutionary algorithms (EAs) and other search heuristics like the particle swarm optimization (PSO) when tested over both benchmark and real-world problems. DE, however, is not completely free from the problems of slow and/or premature convergence. This paper describes a family of improved variants of the DE/target-to-best/1/bin scheme, which utilizes the concept of the neighborhood of each population member. The idea of small neighborhoods, defined over the index-graph of parameter vectors, draws inspiration from the community of the PSO algorithms. The proposed schemes balance the exploration and exploitation abilities of DE without imposing serious additional burdens in terms of function evaluations. 
They are shown to be statistically significantly better than or at least comparable to several existing DE variants as well as a few other significant evolutionary computing techniques over a test suite of 24 benchmark functions. The paper also investigates the applications of the new DE variants to two real-life problems concerning parameter estimation for frequency modulated sound waves and spread spectrum radar poly-phase code design. <s> BIB010 </s> Differential Evolution in Wireless Communications: A Review <s> Different variants of differential evolution <s> Differential evolution (DE) is an efficient and powerful population-based stochastic search technique for solving optimization problems over continuous space, which has been widely applied in many scientific and engineering fields. However, the success of DE in solving a specific problem crucially depends on appropriately choosing trial vector generation strategies and their associated control parameter values. Employing a trial-and-error scheme to search for the most suitable strategy and its associated parameter settings requires high computational costs. Moreover, at different stages of evolution, different strategies coupled with different parameter settings may be required in order to achieve the best performance. In this paper, we propose a self-adaptive DE (SaDE) algorithm, in which both trial vector generation strategies and their associated control parameter values are gradually self-adapted by learning from their previous experiences in generating promising solutions. Consequently, a more suitable generation strategy along with its parameter settings can be determined adaptively to match different phases of the search process/evolution. The performance of the SaDE algorithm is extensively evaluated (using codes available from P. N. Suganthan) on a suite of 26 bound-constrained numerical optimization problems and compares favorably with the conventional DE and several state-of-the-art parameter adaptive DE variants. <s> BIB011 </s> Differential Evolution in Wireless Communications: A Review <s> Different variants of differential evolution <s> A new differential evolution (DE) algorithm, JADE, is proposed to improve optimization performance by implementing a new mutation strategy ldquoDE/current-to-p bestrdquo with optional external archive and updating control parameters in an adaptive manner. The DE/current-to-pbest is a generalization of the classic ldquoDE/current-to-best,rdquo while the optional archive operation utilizes historical data to provide information of progress direction. Both operations diversify the population and improve the convergence performance. The parameter adaptation automatically updates the control parameters to appropriate values and avoids a user's prior knowledge of the relationship between the parameter settings and the characteristics of optimization problems. It is thus helpful to improve the robustness of the algorithm. Simulation results show that JADE is better than, or at least comparable to, other classic or adaptive DE algorithms, the canonical particle swarm optimization, and other evolutionary algorithms from the literature in terms of convergence performance for a set of 20 benchmark problems. JADE with an external archive shows promising results for relatively high dimensional problems. In addition, it clearly shows that there is no fixed control parameter setting suitable for various problems or even at different optimization stages of a single problem. 
<s> BIB012 </s> Differential Evolution in Wireless Communications: A Review <s> Different variants of differential evolution <s> The barebones differential evolution (BBDE) is a new, almost parameter-free optimization algorithm that is a hybrid of the barebones particle swarm optimizer and differential evolution. Differential evolution is used to mutate, for each particle, the attractor associated with that particle, defined as a weighted average of its personal and neighborhood best positions. The performance of the proposed approach is investigated and compared with differential evolution, a Von Neumann particle swarm optimizer and a barebones particle swarm optimizer. The experiments conducted show that the BBDE provides excellent results with the added advantage of little, almost no parameter tuning. Moreover, the performance of the barebones differential evolution using the ring and Von Neumann neighborhood topologies is investigated. Finally, the application of the BBDE to the real-world problem of unsupervised image classification is investigated. Experimental results show that the proposed approach performs very well compared to other state-of-the-art clustering algorithms in all measured criteria. <s> BIB013 </s> Differential Evolution in Wireless Communications: A Review <s> Different variants of differential evolution <s> This paper presents a novel discrete differential evolution (DDE) algorithm for solving the no-wait flow shop scheduling problems with makespan and maximum tardiness criteria. First, the individuals in the DDE algorithm are represented as discrete job permutations, and new mutation and crossover operators are developed based on this representation. Second, an elaborate one-to-one selection operator is designed by taking into account the domination status of a trial individual with its counterpart target individual as well as an archive set of the non-dominated solutions found so far. Third, a simple but effective local search algorithm is developed to incorporate into the DDE algorithm to stress the balance between global exploration and local exploitation. In addition, to improve the efficiency of the scheduling algorithm, several speed-up methods are devised to evaluate a job permutation and its whole insert neighborhood as well as to decide the domination status of a solution with the archive set. Computational simulation results based on the well-known benchmarks and statistical performance comparisons are provided. It is shown that the proposed DDE algorithm is superior to a recently published hybrid differential evolution (HDE) algorithm [Qian B, Wang L, Huang DX, Wang WL, Wang X. An effective hybrid DE-based algorithm for multi-objective flow shop scheduling with limited buffers. Computers & Operations Research 2009;36(1):209-33] and the well-known multi-objective genetic local search algorithm (IMMOGLS2) [Ishibuchi H, Yoshida I, Murata T. Balance between genetic search and local search in memetic algorithms for multiobjective permutation flowshop scheduling. IEEE Transactions on Evolutionary Computation 2003;7(2):204-23] in terms of searching quality, diversity level, robustness and efficiency. Moreover, the effectiveness of incorporating the local search into the DDE algorithm is also investigated. 
<s> BIB014 </s> Differential Evolution in Wireless Communications: A Review <s> Different variants of differential evolution <s> This paper presents extensive experiments on a hybrid optimization algorithm (DEPSO) we recently developed by combining the advantages of two powerful population-based metaheuristics—differential evolution (DE) and particle swarm optimization (PSO). The hybrid optimizer achieves on-the-fly adaptation of evolution methods for individuals in a statistical learning way. Two primary parameters for the novel algorithm including its learning period and population size are empirically analyzed. The dynamics of the hybrid optimizer is revealed by tracking and analyzing the relative success ratio of PSO versus DE in the optimization of several typical problems. The comparison between the proposed DEPSO and its competitors involved in our previous research is enriched by using multiple rotated functions. Benchmark tests involving scalability test validate that the DEPSO is competent for the global optimization of numerical functions due to its high optimization quality and wide applicability. <s> BIB015 </s> Differential Evolution in Wireless Communications: A Review <s> Different variants of differential evolution <s> Differential evolution (DE) is a fast and robust evolutionary algorithm for global optimization. It has been widely used in many areas. Biogeography-based optimization (BBO) is a new biogeography inspired algorithm. It mainly uses the biogeography-based migration operator to share the information among solutions. In this paper, we propose a hybrid DE with BBO, namely DE/BBO, for the global numerical optimization problem. DE/BBO combines the exploration of DE with the exploitation of BBO effectively, and hence it can generate the promising candidate solutions. To verify the performance of our proposed DE/BBO, 23 benchmark functions with a wide range of dimensions and diverse complexities are employed. Experimental results indicate that our approach is effective and efficient. Compared with other state-of-the-art DE approaches, DE/BBO performs better, or at least comparably, in terms of the quality of the final solutions and the convergence rate. In addition, the influence of the population size, dimensionality, different mutation schemes, and the self-adaptive control parameters of DE are also studied. <s> BIB016 </s> Differential Evolution in Wireless Communications: A Review <s> Different variants of differential evolution <s> Hybridization with other different algorithms is an interesting direction for the improvement of differential evolution (DE). In this paper, a hybrid DE based on the one-step k-means clustering, called clustering-based DE (CDE), is presented for the unconstrained global optimization problems. The one-step k-means clustering acts as several multi-parent crossover operators to utilize the information of the population efficiently, and hence it can enhance the performance of DE. To validate the performance of our approach, 30 benchmark functions of a wide range of dimensions and diversity complexities are employed. Experimental results indicate that our approach is effective and efficient. Compared with other state-of-the-art DE approaches, our approach performs better, or at least comparably, in terms of the quality of the final solutions and the reduction of the number of fitness function evaluations (NFFEs). 
<s> BIB017 </s> Differential Evolution in Wireless Communications: A Review <s> Different variants of differential evolution <s> This paper proposed an algorithm called DE-GSA. The proposed algorithm incorporates both the concepts from Differential evolution algorithm (DE) and Gravitation search algorithm (GSA), updating particles not only by DE operators but also by GSA mechanisms. The proposed algorithm is tested on several benchmark functions including unimodal and multimodal test functions, multimodal test function with fix dimension, and some real life problems. Then, experimental results have shown that the proposed algorithm is both efficient and effective. <s> BIB018 </s> Differential Evolution in Wireless Communications: A Review <s> Different variants of differential evolution <s> Differential evolution (DE) and particle swarm optimization (PSO) are two formidable population-based optimizers (POs) that follow different philosophies and paradigms, which are successfully and widely applied in scientific and engineering research. The hybridization between DE and PSO represents a promising way to create more powerful optimizers, especially for specific problem solving. In the past decade, numerous hybrids of DE and PSO have emerged with diverse design ideas from many researchers. This paper attempts to comprehensively review the existing hybrids based on DE and PSO with the goal of collection of different ideas to build a systematic taxonomy of hybridization strategies. Taking into account five hybridization factors, i.e., the relationship between parent optimizers, hybridization level, operating order (OO), type of information transfer (TIT), and type of transferred information (TTI), we propose several classification mechanisms and a versatile taxonomy to differentiate and analyze various hybridization strategies. A large number of hybrids, which include the hybrids of DE and PSO and several other representative hybrids, are categorized according to the taxonomy. The taxonomy can be utilized not only as a tool to identify different hybridization strategies, but also as a reference to design hybrid optimizers. The tradeoff between exploration and exploitation regarding hybridization design is discussed and highlighted. Based on the taxonomy proposed, this paper also indicates several promising lines of research that are worthy of devotion in future. <s> BIB019 </s> Differential Evolution in Wireless Communications: A Review <s> Different variants of differential evolution <s> Global optimization methods play an important role to solve many real-world problems. However, the implementation of single methods is excessively preventive for high dimensionality and nonlinear problems, especially in term of the accuracy of finding best solutions and convergence speed performance. In recent years, hybrid optimization methods have shown potential achievements to overcome such challenges. In this paper, a new hybrid optimization method called Hybrid Evolutionary Firefly Algorithm (HEFA) is proposed. The method combines the standard Firefly Algorithm (FA) with the evolutionary operations of Differential Evolution (DE) method to improve the searching accuracy and information sharing among the fireflies. The HEFA method is used to estimate the parameters in a complex and nonlinear biological model to address its effectiveness in high dimensional and nonlinear problem. 
Experimental results showed that the accuracy of finding the best solution and convergence speed performance of the proposed method is significantly better compared to those achieved by the existing methods. <s> BIB020 </s> Differential Evolution in Wireless Communications: A Review <s> Different variants of differential evolution <s> This paper presents a hybrid approach based on appropriately combining Differential Evolution algorithms and Tissue P Systems (DETPS for short), used for solving a class of constrained manufacturing parameter optimization problems. DETPS uses a network membrane structure, evolution and communication rules like in a tissue P system to specify five widely used DE variants respectively put inside five cells of the tissue membrane system. Each DE variant independently evolves in a cell according to its own evolutionary mechanism and its parameters are dynamically adjusted in the process of evolution. DETPS applies the channels connecting the five cells of the tissue membrane system to implement communication in the process of evolution. Twenty-one benchmark problems taken from the specialized literature related to constrained manufacturing parameter optimization are used to test the DETPS performance. Experimental results show that DETPS is superior or competitive to twenty-two optimization algorithms recently reported in the literature. <s> BIB021 </s> Differential Evolution in Wireless Communications: A Review <s> Different variants of differential evolution <s> In order to overcome the poor exploitation of the krill herd (KH) algorithm, a hybrid differential evolution KH (DEKH) method has been developed for function optimization. The improvement involves adding a new hybrid differential evolution (HDE) operator into the krill, updating process for the purpose of dealing with optimization problems more efficiently. The introduced HDE operator inspires the intensification and lets the krill perform local search within the defined region. DEKH is validated by 26 functions. From the results, the proposed methods are able to find more accurate solution than the KH and other methods. In addition, the robustness of the DEKH algorithm and the influence of the initial population size on convergence and performance are investigated by a series of experiments. <s> BIB022 </s> Differential Evolution in Wireless Communications: A Review <s> Different variants of differential evolution <s> Hybridizing of the optimization algorithms provides a scope to improve the searching abilities of the resulting method. The purpose of this paper is to develop a novel hybrid optimization algorithm entitled hybrid robust differential evolution (HRDE) by adding positive properties of the Taguchi's method to the differential evolution algorithm for minimizing the production cost associated with multi-pass turning problems. The proposed optimization approach is applied to two case studies for multi-pass turning operations to illustrate the effectiveness and robustness of the proposed algorithm in machining operations. The results reveal that the proposed hybrid algorithm is more effective than particle swarm optimization algorithm, immune algorithm, hybrid harmony search algorithm, hybrid genetic algorithm, scatter search algorithm, genetic algorithm and integration of simulated annealing and Hooke-Jeevespatter search. 
<s> BIB023 </s> Differential Evolution in Wireless Communications: A Review <s> Different variants of differential evolution <s> Abstract In this paper, a novel metaheuristic optimization methodology is proposed to solve large scale nonconvex economic dispatch problem. The proposed approach is based on a hybrid shuffled differential evolution (SDE) algorithm which combines the benefits of shuffled frog leaping algorithm and differential evolution. The proposed algorithm integrates a novel differential mutation operator specifically designed to effectively address the problem under study. In order to validate the SDE methodology, detailed simulation results obtained on three standard test systems 13, 40, and 140-unit test system are presented and discussed. Transmission losses are considered along with valve point loading effects for 13 and 40-unit test systems and calculated using B-coefficient matrix. A comparative analysis with other settled nature-inspired solution algorithms demonstrates the superior performance of the proposed methodology in terms of both solution accuracy and convergence performances. <s> BIB024 </s> Differential Evolution in Wireless Communications: A Review <s> Different variants of differential evolution <s> Abstract This paper proposes a hybrid of genetic algorithm (GA) and differential evolution (DE), termed hGADE, to solve one of the most important power system optimization problems known as the unit commitment (UC) scheduling. The UC problem is a nonlinear mixed-integer combinatorial high-dimensional and highly constrained optimization problem consisting of both binary UC variables and continuous power dispatch variables. Although GA is more capable of efficiently handling binary variables, the performance of DE is more remarkable in real parameter optimization. Thus, in the proposed algorithm hGADE, the binary UC variables are evolved using GA while the continuous power dispatch variables are evolved using DE. Two different variants of hGADE are presented by hybridizing GA with two classical variants of DE algorithm. Additionally, in this paper a problem specific heuristic initial population generation method and a replacement strategy based on preservation of infeasible solutions in the population are incorporated to enhance the search capability of the hybridized variants on the UC problem. The scalability of the proposed algorithm hGADE is demonstrated by testing on systems with generating units in the range of 10 up to 100 in one-day scheduling period and the simulation results demonstrate that hGADE algorithm can provide a system operator with remarkable cost savings as compared to the best approaches in the literature. Finally, an ensemble optimizer based on combination of hGADE variants is implemented to further amplify the performance of the presented algorithm. <s> BIB025 </s> Differential Evolution in Wireless Communications: A Review <s> Different variants of differential evolution <s> In this paper, we propose a novel hybrid multi-objective immune algorithm with adaptive differential evolution, named ADE-MOIA, in which the introduction of differential evolution (DE) into multi-objective immune algorithm (MOIA) combines their respective advantages and thus enhances the robustness to solve various kinds of MOPs. In ADE-MOIA, in order to effectively cooperate DE with MOIA, we present a novel adaptive DE operator, which includes a suitable parent selection strategy and a novel adaptive parameter control approach. 
When performing DE operation, two parents are respectively picked from the current evolved and dominated population in order to provide a correct evolutionary direction. Moreover, based on the evolutionary progress and the success rate of offspring, the crossover rate and scaling factor in DE operator are adaptively varied for each individual. The proposed adaptive DE operator is able to improve both of the convergence speed and population diversity, which are validated by the experimental studies. When comparing ADE-MOIA with several nature-inspired heuristic algorithms, such as NSGA-II, SPEA2, AbYSS, MOEA/D-DE, MIMO and D2MOPSO, simulations show that ADE-MOIA performs better on most of 21 well-known benchmark problems. Differential evolution is embedded into the multi-objective immune algorithm.A suitable parent selection strategy provides a correct evolutionary direction.A novel adaptive control approach enhances the algorithmic robustness. <s> BIB026 </s> Differential Evolution in Wireless Communications: A Review <s> Different variants of differential evolution <s> We propose a BPNN with adaptive differential evolution (ADE) for time series forecasting.ADE is used to search for global initial connection weights and thresholds of BPNN.The proposed ADE-BPNN is effective for improving forecasting accuracy. The back propagation neural network (BPNN) can easily fall into the local minimum point in time series forecasting. A hybrid approach that combines the adaptive differential evolution (ADE) algorithm with BPNN, called ADE-BPNN, is designed to improve the forecasting accuracy of BPNN. ADE is first applied to search for the global initial connection weights and thresholds of BPNN. Then, BPNN is employed to thoroughly search for the optimal weights and thresholds. Two comparative real-life series data sets are used to verify the feasibility and effectiveness of the hybrid method. The proposed ADE-BPNN can effectively improve forecasting accuracy relative to basic BPNN, autoregressive integrated moving average model (ARIMA), and other hybrid models. <s> BIB027 </s> Differential Evolution in Wireless Communications: A Review <s> Different variants of differential evolution <s> Fireworks algorithm (FA) is a relatively new swarm-based metaheuristic for global optimization. The algorithm is inspired by the phenomenon of fireworks display and has a promising performance on a number of benchmark functions. However, in the sense of swarm intelligence, the individuals including fireworks and sparks are not well-informed by the whole swarm. In this paper we develop an improved version of the FA by combining with differential evolution (DE) operators: mutation, crossover, and selection. At each iteration of the algorithm, most of the newly generated solutions are updated under the guidance of two different vectors that are randomly selected from highly ranked solutions, which increases the information sharing among the individual solutions to a great extent. Experimental results show that the DE operators can improve diversity and avoid prematurity effectively, and the hybrid method outperforms both the FA and the DE on the selected benchmark functions. <s> BIB028 </s> Differential Evolution in Wireless Communications: A Review <s> Different variants of differential evolution <s> The Asynchronous Differential Evolution (ADE) is based on Differential Evolution (DE) with some variations. 
In ADE the population is updated as soon as a vector with better fitness is found hence the algorithm works asynchronously. ADE leads to stronger exploration and supports parallel optimization. In this paper ADE is embedded with the trigonometric mutation operator (TMO) to enhance the convergence rate of basic ADE. The proposed hybridized algorithm is termed as ADE-TMO. The algorithm is verified over widely used 10 benchmark functions referred from the literature. The simulated results show that ADE-TMO perform better than basic ADE and other state-of-art algorithms. <s> BIB029 </s> Differential Evolution in Wireless Communications: A Review <s> Different variants of differential evolution <s> Abstract Many real-world problems can be formulated as optimization problems. Such problems pose a challenge for researchers in the design of efficient algorithms capable of finding the best solution with the least computational cost. In this paper, a new evolutionary algorithm is proposed that combines the explorative and exploitative capabilities of two evolutionary algorithms, Cultural Algorithm (CA) and Differential Evolution (DE) algorithm. This hybridization follows the HTH (High-level Teamwork Hybrid) nomenclature in which two meta-heuristics are executed in parallel. The new algorithm named as CADE, manages an overall population which is shared between CA and DE simultaneously. Four modified knowledge sources have been used in proposed CA which are: topographical, situational, normative and domain. The role of the used acceptance function in belief space is to select the knowledge of the best individuals to update the current knowledge. A novel quality function is used to determine the participation ratio for both CA and DE, and then a competitive selection takes place in order to select the proportion of function evaluations allocated for each technique. This collaborative synergy emerges between the DE and CA techniques and is shown to improve the quality of solutions, beyond what each of these two algorithms alone. The performance of the algorithm is evaluated on a set of 50 scalable optimization problems taken from two sources. The first set of 35 came from existing benchmark sets available in the literature. The second set came from the 2014 IEEE Single Function optimization competition. The overall results show that CADE has a favorable performance and scalability behaviors when compared to other recent state-of-the-art algorithms. CADE's overall performance ranked at number 1 for each of the two sets of problems. It is suggested that CADE's success across such a broad spectrum of problem types and complexities bodes well for its application to new and novel applications. <s> BIB030 </s> Differential Evolution in Wireless Communications: A Review <s> Different variants of differential evolution <s> Abstract Artificial Bee Colony (ABC) and Differential Evolution (DE) are two very popular and efficient meta-heuristic algorithms. However, both algorithms have been applied to various science and engineering optimization problems, extensively, the algorithms suffer from premature convergence, unbalanced exploration-exploitation, and sometimes slow convergence speed. Hybridization of ABC and DE may provide a platform for developing a meta-heuristic algorithm with better convergence speed and a better balance between exploration and exploitation capabilities. This paper proposes a hybridization of ABC and DE algorithms to develop a more efficient meta-heuristic algorithm than ABC and DE. 
In the proposed hybrid algorithm, Hybrid Artificial Bee Colony with Differential Evolution (HABCDE), the onlooker bee phase of ABC is inspired from DE. Employed bee phase is modified by employing the concept of the best individual while scout bee phase has also been modified for higher exploration. The proposed HABCDE has been tested over 20 test problems and 4 real-world optimization problems. The performance of HABCDE is compared with the basic version of ABC and DE. The results are also compared with state-of-the-art algorithms, namely Covariance Matrix Adaptation Evolution Strategy (CMA-ES), Particle Swarm Optimization (PSO), Biogeography Based Optimization (BBO) and Spider Monkey Optimization (SMO) to establish the superiority of the proposed algorithm. For further validation of the proposed hybridization, the experimental results are also compared with other hybrid versions of ABC and DE, namely ABC-DE, DE-BCO and HDABCA and with modified ABC algorithms, namely Best-So-Far ABC (BSFABC), Gbest guided ABC (GABC) and modified ABC (MABC). Results indicate that HABCDE would be a competitive algorithm in the field of meta-heuristics. <s> BIB031 </s> Differential Evolution in Wireless Communications: A Review <s> Different variants of differential evolution <s> Multiple sequence alignments (MSA) are used in the analysis of molecular evolution and sequence structure relationships. In this paper, a hybrid algorithm, Differential Evolution - Simulated Annealing (DESA) is applied in optimizing multiple sequence alignments (MSAs) based on structural information, non-gaps percentage and totally conserved columns. DESA is a robust algorithm characterized by self-organization, mutation, crossover, and SA-like selection scheme of the strategy parameters. Here, the MSA problem is treated as a multi-objective optimization problem of the hybrid evolutionary algorithm, DESA. Thus, we name the algorithm as DESA-MSA. Simulated sequences and alignments were generated to evaluate the accuracy and efficiency of DESA-MSA using different indel sizes, sequence lengths, deletion rates and insertion rates. The proposed hybrid algorithm obtained acceptable solutions particularly for the MSA problem evaluated based on the three objectives. <s> BIB032 </s> Differential Evolution in Wireless Communications: A Review <s> Different variants of differential evolution <s> Grey Wolf Optimizer (GWO), developed by Mirjalili et al. (Adv Eng Softw 69:46–61, 2014 [1]), is a recently developed nature-inspired technique based on leadership hierarchy of grey wolves. In this paper, Grey Wolf Optimizer has been hybridized with differential evolution (DE) mutation, and two versions, namely DE-GWO and gDE-GWO, have been proposed to avoid the stagnation of the solution. To evaluate the performance of both the proposed versions, a set of 23 well-known benchmark problems has been taken. The comparison of obtained results between original GWO and proposed hybridized versions of GWO is done with the help of Wilcoxon signed-rank test. The results conclude that the proposed hybridized version gDE-GWO of GWO has better potential to solve these benchmark test problems compared to GWO and DE-GWO. <s> BIB033 </s> Differential Evolution in Wireless Communications: A Review <s> Different variants of differential evolution <s> Abstract This paper presents an evolutionary algorithm employing differential evolution to solve nonlinear optimisation problems with (or without) constraints and multiple objectives. 
New decision strategies to compare candidate solutions are developed that take into account all constraints and objective functions. The new constraint handling strategy uses the concept of Pareto dominance to rank the candidate solutions based on their constraint violation value. In order to improve the performance of the algorithm, a set of genetic operators and differential evolution operators are combined. In addition, the paper proposes an algorithm to perform parallel evolution in a way that the diversity of the final population is preserved after migrations. Another goal of the algorithm is to handle problems with a mix of integers and real-valued variables. Numerical experiments investigate the robustness and the performance of the algorithm through multiple benchmark optimisation problems. Finally, two engineering applications are studied, namely: (1) the topology optimisation of trusses; and (2) the economical dispatch problem in power generation. Results show that the algorithm is capable of handling optimisation problems with a mix of integer and real-valued variables with constraints and multiple objectives. <s> BIB034 </s> Differential Evolution in Wireless Communications: A Review <s> Different variants of differential evolution <s> In recent years, advanced technology is increasing rapidly, especially in the field of smart grids. A home energy management systems are implemented in homes for scheduling of power for cost minimization. In this paper, for management of home energy we propose a meta-heuristic technique which is hybrid of existing techniques enhanced differential evolution (EDE) and earthworm optimization algorithm (EWA) and it is named as earthworm EWA (EEDE). Simulations show that EWA performed better in term of reducing cost and EDE performed better in reducing peak to average ratio (PAR). However proposed scheme outperformed in terms of both cost and PAR. For evaluating the performance of proposed technique a home energy system proposed by us. In our work we are considering a single home, consists of many appliances. Appliances are categorized into two groups: Interruptible and un-interruptible. Simulations and results show that both algorithms performed well in terms of reducing costs and PAR. We also measured waiting time to find out user comfort and energy consumption. <s> BIB035 </s> Differential Evolution in Wireless Communications: A Review <s> Different variants of differential evolution <s> Differential evolution (DE) has been proven to be a powerful and efficient stochastic search technique for global numerical optimization. However, choosing the optimal control parameters of DE is a time-consuming task because they are problem depended. DE may have a strong ability in exploring the search space and locating the promising area of global optimum but may be slow at exploitation. Thus, in this paper, we propose a Gaussian Cauchy differential evolution (GCDE). It is a hybrid of a modified bare-bones swarm optimizers and the differential evolution algorithm. It takes advantage of the good exploration searching ability of DE and the good exploitation ability of bare-bones optimization. Moreover, the parameters in GCDE are generated by the function of Gaussian distribution and Cauchy distribution. In addition, the parameters dynamically change according to the quality of the current search solution. The performance of proposed method is compared with three differential evolution algorithms and three bare-bones technique based optimizers. 
Comprehensive experimental results show that the proposed approach is better than, or at least comparable to, other classic DE variants when considering the quality of search solutions on a set of benchmark problems. <s> BIB036 </s> Differential Evolution in Wireless Communications: A Review <s> Different variants of differential evolution <s> This paper considers an energy-efficient bi-objective unrelated parallel machine scheduling problem to minimize both makespan and total energy consumption. The parallel machines are speed-scaling. To solve the problem, we propose a memetic differential evolution (MDE) algorithm. Since the problem involves assigning jobs to machines and selecting an appropriate processing speed level for each job, we characterize each individual by two vectors: a job-machine assignment vector and a speed vector. To accelerate the convergence of the algorithm, only the speed vector of each individual evolves and a list scheduling heuristic is applied to derive its job-machine assignment vector based on its speed vector. To further enhance the algorithm, we propose efficient speed adjusting and job-machine swap heuristics and integrate them into the algorithm as a local search approach by an adaptive meta-Lamarckian learning strategy. Computational results reveal that the incorporation of list scheduling heuristic and local search greatly strengthens the algorithm. Computational experiments also show that the proposed MDE algorithm outperforms SPEA-II and NSGA-II significantly. <s> BIB037 </s> Differential Evolution in Wireless Communications: A Review <s> Different variants of differential evolution <s> Protein–ligand docking is a molecular modeling technique that is used to predict the conformation of a small molecular ligand at the binding pocket of a protein receptor. There are many protein–ligand docking tools, among which AutoDock Vina is the most popular open-source docking software. In recent years, there have been numerous attempts to optimize the search process in AutoDock Vina by means of heuristic optimization methods, such as genetic and particle swarm optimization algorithms. This study, for the first time, explores the use of cuckoo search (CS) to solve the protein–ligand docking problem. The result of this study is CuckooVina, an enhanced conformational search algorithm that hybridizes cuckoo search with differential evolution (DE). Extensive tests using two benchmark datasets, PDBbind 2012 and Astex Diverse set, show that CuckooVina improves the docking performances in terms of RMSD, binding affinity, and success rate compared to Vina though it requires about 9–15% more time to complete a run than Vina. CuckooVina predicts more accurate docking poses with higher binding affinities than PSOVina with similar success rates. CuckooVina's slower convergence but higher accuracy suggest that it is better able to escape from local energy minima and improves the problem of premature convergence. As a summary, our results assure that the hybrid CS–DE process to continuously generate diverse solutions is a good strategy to maintain the proper balance between global and local exploitation required for the ligand conformational search. <s> BIB038 </s> Differential Evolution in Wireless Communications: A Review <s> Different variants of differential evolution <s> Clustering is an unsupervised data mining task which groups objects in the unlabeled dataset based on some proximity measure.
Many nature-inspired population-based optimization algorithms have been employed to solve clustering problems. However, few of them lack in balancing exploration and exploitation in global search space in their original form. Differential Evolution (DE) is a nature-inspired population-based global search optimization method which is suitable to explore the solution in global search space. However, it lacks in exploiting the solution. To overcome this deficiency, few literatures incorporate local search algorithms in DE to achieve a good solution in the search space. In this work, we have performed a comparative study to show effectiveness of local search algorithms, such as chaotic local search, Levy flight, and Golden Section Search with DE to balance exploration and exploitation in the search space for clustering problem. We employ an internal validity measure, Sum of Squared Error (SSE), to evaluate the quality of cluster which is based on the compactness of the cluster. We select F-measure and rand index as external validity measures. Extensive results are compared based on six real datasets from UCI machine learning repository. <s> BIB039 </s> Differential Evolution in Wireless Communications: A Review <s> Different variants of differential evolution <s> The differential evaluation (DE) algorithm is a population-based very well-known meta-heuristic, proposed to fix the complex real-world optimization problems. This paper presents a variant of DE, inspired by the black-hole (BH) phenomenon in space and named as Black-Hole Gbest DE algorithm (BHGDE). In BHGDE, the realization of Black-Hole improves the exploration capability, while maintaining the original exploitation capability of the DE algorithm. The efficiency, reliability, accuracy, and robustness of the anticipated BHGDE algorithm are analyzed while simulating it over 15 complex benchmark functions of different modality and characteristics. The competitiveness of the newly anticipated BHGDE algorithm is proved by comparing the simulated results with the DE and its two recent variants, namely Opposition-based Differential Evolution (ODE) and Hybrid Artificial Bee Colony algorithm with Differential Evolution (HABCDE) algorithms. To check the robustness of the propounded BHGDE, it is implemented to solve the problem of path planning of the robots starting from the source node to the destination node. <s> BIB040
|
Since the introduction of DE, researchers have continued to propose variants of the algorithm without necessarily altering its foundation. The nonlinear and complex nature of some problems has led researchers to push back the boundaries of DE. The main variants are outlined below.

Differential evolution using trigonometric mutation: This was proposed by BIB002 to speed up DE by occasionally replacing the standard mutation with a fitness-weighted (trigonometric) perturbation of three randomly chosen target vectors BIB029 .

Differential evolution using arithmetic recombination: This is a departure from the traditional binomial crossover used in DE. Recombination can be either continuous or arithmetic, and the trial vector is expressed as a linear combination of the components of the donor and target vectors . The coefficients of the combination can be either random variables or constants .

DE/rand/1/either-or algorithm: This variant is designed so that trial vectors that are pure mutants and trial vectors that are pure recombinants are mutually exclusive . It appears to perform better than classical DE BIB010 .

Opposition-based differential evolution (ODE): DE usually starts from random guesses and works with no prior information about the actual optimum BIB004 . Faster convergence can be obtained by simultaneously checking the fitness of the opposite solution BIB008 : the initial candidate is chosen as the fitter of the random guess and its opposite BIB005 . The same process can also be applied before the generation of each new individual of the population.

Differential evolution with neighborhood-based mutation: This introduces a neighbourhood model into the mutation so that exploitation of promising regions is balanced against the search of new regions of the multidimensional search space. The variant is effective for two reasons: the search uses the information already gathered to move towards the optima, and it controls how new information is introduced into the population . The idea is to prevent the search from favouring only vectors within the neighbourhood and to extend it to other areas.

Differential evolution with adaptive selection of mutation strategies: This variant self-adapts both the trial-vector generation strategies and the associated control parameter values to produce new candidate solutions BIB011 . It does so by learning from historical data and exploiting the patterns found to guide the search (a form of unsupervised learning).

Adaptive DE with DE/current-to-pbest mutation: This uses an external archive to keep a record of successful and failed solutions and uses that record to update the control parameters. Doing so helps to create new candidate solutions that speed up convergence and removes the need for arbitrary manual tuning of the control parameters BIB012 .
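To make the mechanics of two of these variants concrete, the following is a minimal Python sketch (not the implementation of any of the cited works) that combines opposition-based initialisation with a JADE-style DE/current-to-pbest/1 mutation backed by an external archive. The sphere objective, the search bounds and the values chosen for the population size, F, CR and p are illustrative assumptions only.

```python
# Minimal sketch: opposition-based initialisation + DE/current-to-pbest/1 with archive.
# Objective, bounds and all parameter values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def sphere(x):
    return float(np.sum(x ** 2))

def opposition_init(pop_size, dim, low, high, objective):
    """Keep the fitter half of each random guess and its opposite point."""
    pop = rng.uniform(low, high, (pop_size, dim))
    opposite = low + high - pop                       # opposite point within [low, high]
    both = np.vstack([pop, opposite])
    fitness = np.array([objective(x) for x in both])
    best_idx = np.argsort(fitness)[:pop_size]
    return both[best_idx], fitness[best_idx]

def current_to_pbest_mutation(pop, fitness, archive, i, F=0.5, p=0.2):
    """DE/current-to-pbest/1: move towards one of the p*100% best vectors;
    the second member of the difference pair may come from the archive."""
    pop_size = len(pop)
    p_best_pool = np.argsort(fitness)[: max(1, int(p * pop_size))]
    x_pbest = pop[rng.choice(p_best_pool)]
    r1 = rng.integers(pop_size)
    union = pop if len(archive) == 0 else np.vstack([pop, archive])
    r2 = rng.integers(len(union))
    return pop[i] + F * (x_pbest - pop[i]) + F * (pop[r1] - union[r2])

# Tiny usage example: one generation of trial-vector construction and selection.
low, high, dim, pop_size, CR = -5.0, 5.0, 10, 20, 0.9
pop, fit = opposition_init(pop_size, dim, low, high, sphere)
archive = []                                          # stores replaced (inferior) parents
for i in range(pop_size):
    donor = current_to_pbest_mutation(pop, fit, np.array(archive), i)
    cross = rng.random(dim) < CR
    cross[rng.integers(dim)] = True                   # binomial crossover, at least one gene
    trial = np.clip(np.where(cross, donor, pop[i]), low, high)
    f_trial = sphere(trial)
    if f_trial <= fit[i]:                             # greedy selection
        archive.append(pop[i].copy())
        pop[i], fit[i] = trial, f_trial
print("best fitness after one generation:", fit.min())
```

In a full JADE implementation the archive would additionally be truncated to the population size and F and CR would be resampled adaptively around the values that produced successful trials; those details are omitted here to keep the sketch short.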
Hybrid differential evolution algorithms: Hybrid algorithms, in general, combine two or more parent algorithms to produce a new one (the offspring), which is expected to outperform its parents because it is built upon their best features. Hybridisation in DE takes three forms.
• Hybridisation with other EC algorithms: such as particle swarm optimisation BIB015 BIB019 , cultural algorithms BIB030 , biogeography-based optimisation BIB016 , the earthworm optimisation algorithm BIB035 , the bacterial foraging optimisation algorithm BIB001 , bare-bones BIB013 and modified bare-bones swarm optimizers BIB036 , the artificial bee colony algorithm BIB031 and the genetic algorithm BIB025 . Others are: tissue membrane systems BIB021 , artificial immune systems BIB026 , the firefly algorithm BIB020 , simulated annealing BIB032 , neural networks BIB027 , the bat algorithm , the krill herd algorithm BIB022 , memetic-inspired systems BIB037 , fireworks optimisation BIB028 , the cuckoo search algorithm BIB038 and the grey wolf optimizer BIB033 .
• The use of local search techniques within DE to improve the algorithm's ability to exploit the information it has already collected and to push towards the optimum solution BIB009 . The local search is usually embedded in the crossover stage of DE, which increases the likelihood that fitter offspring are found in a small neighbourhood around the candidate solutions and reduces the number of fitness function evaluations. Such hybridisation has also been carried out with neighbourhood search BIB007 , the Taguchi operator BIB023 , clustering BIB017 , shuffled frog leaping BIB024 , and chaotic local search, Levy flight and golden section search BIB039 .
• Hybridisation with non-Darwinian methods: DE has been hybridized with non-evolutionary techniques such as black-hole-inspired systems BIB040 , the gravitation search algorithm BIB018 and radial basis function response surfaces .

Differential evolution for discrete and binary optimisation: DE was originally created for real-valued parameters. Over the years, several researchers have modified it to tackle binary and discrete optimisation problems. This can be achieved by truncating or approximating the parameter values at objective function evaluation, by discretising continuous parameters , through bicriteria formulations BIB014 or through purely binary encodings BIB006 ; a minimal discretisation sketch is given at the end of this subsection. Most applications of DE in this context are to job and machine scheduling problems.

Parallel differential evolution: DE can be modified to solve problems concurrently, in either hardware or software. This is done by decomposing complex problems into smaller sub-problems . Speed, accuracy and a reduction in the cost of objective function evaluation are the motivations behind parallel DE BIB003 . Other EC algorithms can be employed alongside it for greater accuracy . Parallel DE ensures that the heterogeneous nature of the final population is preserved after migrations. In addition, the method allows optimisation problems with a mix of integer and real-valued parameters to be solved BIB034 .
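The discretisation route mentioned above (evolving real-valued vectors but truncating or thresholding them only when the objective is evaluated) can be illustrated with a short sketch. The 0/1 knapsack instance, the penalty term and all parameter values below are illustrative assumptions rather than material from the cited studies.

```python
# Minimal sketch of binary optimisation with DE: the population stays real-valued in
# [0, 1]; each vector is thresholded to 0/1 only when the objective is evaluated.
# The small knapsack instance and every parameter value are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)

values = np.array([10, 13, 7, 8, 12, 9])      # hypothetical item values
weights = np.array([5, 8, 3, 4, 7, 6])        # hypothetical item weights
capacity = 15

def knapsack_cost(x_real):
    bits = (x_real > 0.5).astype(int)         # discretise at evaluation time only
    weight = int(weights @ bits)
    value = int(values @ bits)
    penalty = 1000 * max(0, weight - capacity) # penalise overweight selections
    return -value + penalty                    # minimisation form

dim, pop_size, F, CR, generations = len(values), 20, 0.7, 0.9, 100
pop = rng.random((pop_size, dim))
fit = np.array([knapsack_cost(x) for x in pop])

for _ in range(generations):
    for i in range(pop_size):
        r1, r2, r3 = rng.choice(pop_size, size=3, replace=False)
        donor = pop[r1] + F * (pop[r2] - pop[r3])          # DE/rand/1 mutation
        cross = rng.random(dim) < CR
        cross[rng.integers(dim)] = True                    # binomial crossover
        trial = np.clip(np.where(cross, donor, pop[i]), 0.0, 1.0)
        f_trial = knapsack_cost(trial)
        if f_trial <= fit[i]:                              # greedy selection
            pop[i], fit[i] = trial, f_trial

best = pop[np.argmin(fit)]
print("selected items:", (best > 0.5).astype(int), "cost:", fit.min())
```

More elaborate binary DE schemes map each donor component through a sigmoid to a bit-flip probability instead of using a fixed 0.5 threshold, but the principle of keeping the difference-vector arithmetic continuous and discretising only at evaluation time is the same.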
|