Columns: aid (string, 9-15 chars), mid (string, 7-10 chars), abstract (string, 78-2.56k chars), related_work (string, 92-1.77k chars), ref_abstract (dict).
1007.4890
1640740452
As energy proportional computing gradually extends the success of DVFS (Dynamic voltage and frequency scaling) to the entire system, DVFS control algorithms will play a key role in reducing server clusters' power consumption. The focus of this paper is to provide accurate cluster-level DVFS control for power saving in a server cluster. To achieve this goal, we propose a request tracing approach that online classifies the major causal path patterns of a multi-tier service and monitors their performance data as a guide for accurate DVFS control. The request tracing approach significantly decreases the time cost of performance profiling experiments that aim to establish the empirical performance model. Moreover, it decreases the controller complexity so that we can introduce a much simpler feedback controller, which only relies on the single-node DVFS modulation at a time as opposed to varying multiple CPU frequencies simultaneously. Based on the request tracing approach, we present a hybrid DVFS control system that combines an empirical performance model for fast modulation at different load levels and a simpler feedback controller for adaption. We implement a prototype of the proposed system, called PowerTracer, and conduct extensive experiments on a 3-tier platform. Our experimental results show that PowerTracer outperforms its peer in terms of power saving and system performance.
The closest work to this paper is @cite_27 , which proposed a coordinated distributed DVS policy based on a feedback controller for three-tier web server systems. However, that work fails to provide accurate DVFS control for two reasons: first, the simple DVS algorithm uses CPU utilization as the indicator for deciding which server's clock frequency should be scaled, while the optimized algorithm is difficult to apply because of its complexity; second, both algorithms take the average server-side latency of all requests as the controlled variable, whereas our experiments show that the mass of requests falls into a number of different patterns. The authors of @cite_24 proposed a multi-mode energy management scheme for multi-tier server clusters that exploits DVS together with multiple sleep states. The authors of @cite_7 devised a service prioritization scheme for multi-tier web server clusters that assigns different priorities to client classes based on their performance requirements. The authors of @cite_6 developed a simple metric, the frequency gradient, that can predict the impact of changes in processor frequency on the end-to-end transaction response times of multi-tier applications.
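To make the feedback-control idea concrete, here is a minimal sketch of a latency-driven, one-node-at-a-time DVFS loop. It is not the controller from @cite_27 or from PowerTracer; measure_latency, pick_bottleneck_tier, and set_frequency are hypothetical stubs standing in for the monitoring and cpufreq plumbing a real system would provide, and the frequency levels and latency target are assumed values.

# Toy one-node-at-a-time DVFS feedback loop (illustrative only).
import random
import time

FREQ_LEVELS_GHZ = [1.2, 1.6, 2.0, 2.4, 2.8]   # assumed available frequency steps
TARGET_LATENCY_MS = 50.0                       # assumed server-side latency target
DEAD_BAND = 0.1                                # +/-10% band to avoid oscillation

def measure_latency(tier):
    """Hypothetical stub: average server-side latency (ms) of one tier's requests."""
    return random.uniform(30.0, 80.0)          # stand-in for real tracing data

def pick_bottleneck_tier(tiers):
    """Hypothetical stub: the tier whose requests dominate end-to-end latency."""
    return random.choice(tiers)

def set_frequency(tier, ghz):
    """Hypothetical stub: apply the chosen frequency to every node of the tier."""
    print(f"set {tier} to {ghz} GHz")

def control_step(tiers, level):
    tier = pick_bottleneck_tier(tiers)
    latency = measure_latency(tier)
    if latency > TARGET_LATENCY_MS * (1 + DEAD_BAND):
        level[tier] = min(level[tier] + 1, len(FREQ_LEVELS_GHZ) - 1)   # speed up
    elif latency < TARGET_LATENCY_MS * (1 - DEAD_BAND):
        level[tier] = max(level[tier] - 1, 0)                          # save power
    set_frequency(tier, FREQ_LEVELS_GHZ[level[tier]])

tiers = ["web", "app", "db"]
level = {t: len(FREQ_LEVELS_GHZ) - 1 for t in tiers}   # start at maximum frequency
for _ in range(20):
    control_step(tiers, level)
    time.sleep(0.1)                                    # shortened control period

The dead band keeps the controller from oscillating between adjacent frequency levels when the measured latency hovers near the target.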
{ "cite_N": [ "@cite_24", "@cite_27", "@cite_6", "@cite_7" ], "mid": [ "", "2128769472", "2024882059", "2135668185" ], "abstract": [ "", "The energy and cooling costs of Web server farms are among their main financial expenditures. This paper explores the benefits of dynamic voltage scaling (DVS) for power management in server farms. Unlike previous work, which addressed DVS on individual servers and on load-balanced server replicas, this paper addresses DVS in multistage service pipelines. Contemporary Web server installations typically adopt a three-tier architecture in which the first tier presents a Web interface, the second executes scripts that implement business logic, and the third serves database accesses. From a user's perspective, only the end-to-end response across the entire pipeline is relevant. This paper presents a rigorous optimization methodology and an algorithm for minimizing the total energy expenditure of the multistage pipeline subject to soft end-to-end response-time constraints. A distributed power management service is designed and evaluated on a real three-tier server prototype for coordinating DVS settings in a way that minimizes global energy consumption while meeting end-to-end delay constraints. The service is shown to consume as much as 30 percent less energy compared to the default (Linux) energy saving policy", "Dynamic voltage and frequency scaling (DVFS) is a well-known technique for gaining energy savings on desktop and laptop computers. However, its use in server settings requires careful consideration of any potential impacts on end-to-end service performance of hosted applications. In this paper, we develop a simple metric called the gradient\" that allows prediction of the impact of changes in processor frequency on the end-to-end transaction response times of multitier applications. We show how frequency gradients can be measured on a running system in a push-button manner without any prior knowledge of application semantics, structure, or configuration settings. Using experimental results, we demonstrate that the frequency gradients provide accurate predictions, and enable end-to-end performance-aware DVFS for mulitier applications.", "This paper investigates the design issues and energy savings benefits of service prioritization in multi-tier Web server clusters. In many services, classes of clients can be naturally assigned different priorities based on their performance requirements. We show that if the whole multi-tier system is effectively prioritized, additional power and energy savings are realizable while keeping an existing cluster-wide energy management technique, through exploiting the different performance requirements of separate service classes. We find a simple prioritization scheme to be highly effective without requiring intrusive modifications to the system. In order to quantify its benefits, we perform extensive experimental evaluation on a real testbed. It is shown that the scheme significantly improves both total system power savings and energy efficiency, at the same time as improving throughput and enabling the system to meet per-class performance requirements." ] }
1007.4890
1640740452
As energy proportional computing gradually extends the success of DVFS (Dynamic voltage and frequency scaling) to the entire system, DVFS control algorithms will play a key role in reducing server clusters' power consumption. The focus of this paper is to provide accurate cluster-level DVFS control for power saving in a server cluster. To achieve this goal, we propose a request tracing approach that online classifies the major causal path patterns of a multi-tier service and monitors their performance data as a guide for accurate DVFS control. The request tracing approach significantly decreases the time cost of performance profiling experiments that aim to establish the empirical performance model. Moreover, it decreases the controller complexity so that we can introduce a much simpler feedback controller, which only relies on the single-node DVFS modulation at a time as opposed to varying multiple CPU frequencies simultaneously. Based on the request tracing approach, we present a hybrid DVFS control system that combines an empirical performance model for fast modulation at different load levels and a simpler feedback controller for adaption. We implement a prototype of the proposed system, called PowerTracer, and conduct extensive experiments on a 3-tier platform. Our experimental results show that PowerTracer outperforms its peer in terms of power saving and system performance.
The work in @cite_20 improved energy efficiency by powering down some servers when the desired quality of service can be met with fewer servers. The authors of @cite_23 used request batching to conserve energy during periods of low workload intensity. Facing the challenges posed by connection servers, the authors of @cite_21 designed a server provisioning algorithm that dynamically turns on a minimum number of servers, together with a load dispatching algorithm that distributes load among the running machines. To integrate independently designed energy-saving policies, the authors of @cite_36 presented a mechanism, called adaptation graph analysis, for identifying potential incompatibilities between composed adaptation policies.
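As a rough illustration of the provisioning idea (a hand-rolled sketch, not the algorithm of @cite_21), the snippet below sizes the active server set from a predicted request rate and spreads the load evenly across it; the per-server capacity and the spare margin are assumed parameters.

# Toy dynamic server provisioning: keep just enough servers on for the load.
import math

def servers_needed(predicted_req_rate, per_server_capacity, spare_fraction=0.1):
    """Minimum number of active servers, plus a spare margin for load spikes."""
    base = math.ceil(predicted_req_rate / per_server_capacity)
    return max(1, math.ceil(base * (1 + spare_fraction)))

def dispatch(load, active_servers):
    """Spread load evenly across the servers that are kept on."""
    share = load / len(active_servers)
    return {server: share for server in active_servers}

# Example: 4200 req/s predicted, each server assumed to handle 500 req/s.
n = servers_needed(4200, 500)                      # -> 10 with a 10% margin
plan = dispatch(4200, [f"srv{i}" for i in range(n)])
print(n, plan)

The cited work additionally models the cost of migrating long-lived connections and the power states of the machines, which this sketch ignores.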
{ "cite_N": [ "@cite_36", "@cite_21", "@cite_23", "@cite_20" ], "mid": [ "2097271361", "1590860274", "1597560875", "2120613668" ], "abstract": [ "The increased complexity of performance-sensitive software systems leads to increased use of automated adaptation policies in lieu of manual performance tuning. Composition of adaptive components into larger adaptive systems, however, presents challenges that arise from potential incompatibilities among the respective adaptation policies. Consequently, unstable or poorly-tuned feedback loops may result that cause performance deterioration. This paper (i) presents a mechanism, called adaptation graph analysis, for identifying potential incompatibilities between composed adaptation policies and (ii) illustrates a general design methodology for co-adaptation that resolves such incompatibilities. Our results are demonstrated by a case study on energy minimization in multi-tier Web server farms subject to soft real-time constraints. Two independently efficient energy saving policies (an on off policy that switches machines off when not needed and a dynamic voltage scaling policy) are shown to conflict leading to increased energy consumption when combined. Our adaptation graph analysis predicts the problem, and our co-adaptation design methodology finds a solution that improves performance. Experimental results from a 17-server farm running the industry standard TPC-W e-commerce benchmark show that co-adaptation renders a cut-down in energy consumption by more than 50 , when workload is not high, while maintaining latency within acceptable bounds. The paper serves as a proof of concept of the proposed conflict-identification and resolution methodology and an invitation to further investigate a science for composing adaptive systems.", "Energy consumption in hosting Internet services is becoming a pressing issue as these services scale up. Dynamic server provisioning techniques are effective in turning off unnecessary servers to save energy. Such techniques, mostly studied for request-response services, face challenges in the context of connection servers that host a large number of long-lived TCP connections. In this paper, we characterize unique properties, performance, and power models of connection servers, based on a real data trace collected from the deployed Windows Live Messenger. Using the models, we design server provisioning and load dispatching algorithms and study subtle interactions between them. We show that our algorithms can save a significant amount of energy without sacrificing user experiences.", "Energy management for servers is now necessary for technical, financial, and environmental reasons. This paper describes three policies designed to reduce energy consumption in Web servers. The policies employ two power management mechanisms: dynamic voltage scaling (DVS), an existing mechanism, and request batching, a new mechanism introduced in this paper. The first policy uses DVS in isolation, except that we extend recently introduced task-based DVS policies for use in server environments with many concurrent tasks. The second policy uses request batching to conserve energy during periods of low workload intensity. The third policy uses both DVS and request batching mechanisms to reduce processor energy usage over a wide range of workload intensities. All the policies trade off system responsiveness to save energy. 
However, the policies employ the mechanisms in a feedback-driven control framework in order to conserve energy while maintaining a given quality of service level, as defined by a percentile-level response time. We evaluate the policies using Salsa, a web server simulator that has been extensively validated for both energy and response time against measurements from a commodity web server. Three daylong static web workloads from real web server systems are used to quantify the energy savings: the Nagano Olympics98 web server, a financial services company web site, and a disk intensive web workload. Our results show that when required to maintain a 90th-percentile response time of 50ms, the DVS and request batching policies save from 8.7 to 38 and from 3.1 to 27 respectively of the CPU energy used by the base system. The two polices provide these savings for complementary workload intensities. The combined policy is effective for all three workloads across a broad range of intensities, saving from 17 to 42 of the CPU energy.", "Power-performance optimization is a relatively new problem area particularly in the context of server clusters. Power-aware request distribution is a method of scheduling service requests among servers in a cluster so that energy consumption is minimized, while maintaining a particular level of performance. Energy efficiency is obtained by powering-down some servers when the desired quality of service can be met with fewer servers. We have found that it is critical to take into account the system and workload factors during both the design and the evaluation of such request distribution schemes. We identify the key system and workload factors that impact such policies and their effectiveness in saving energy. We measure a web cluster running an industry-standard commercial web workload to demonstrate that understanding this system-workload context is critical to performing valid evaluations and even for improving the energy-saving schemes." ] }
1007.4890
1640740452
As energy proportional computing gradually extends the success of DVFS (Dynamic voltage and frequency scaling) to the entire system, DVFS control algorithms will play a key role in reducing server clusters' power consumption. The focus of this paper is to provide accurate cluster-level DVFS control for power saving in a server cluster. To achieve this goal, we propose a request tracing approach that online classifies the major causal path patterns of a multi-tier service and monitors their performance data as a guide for accurate DVFS control. The request tracing approach significantly decreases the time cost of performance profiling experiments that aim to establish the empirical performance model. Moreover, it decreases the controller complexity so that we can introduce a much simpler feedback controller, which only relies on the single-node DVFS modulation at a time as opposed to varying multiple CPU frequencies simultaneously. Based on the request tracing approach, we present a hybrid DVFS control system that combines an empirical performance model for fast modulation at different load levels and a simpler feedback controller for adaption. We implement a prototype of the proposed system, called PowerTracer, and conduct extensive experiments on a 3-tier platform. Our experimental results show that PowerTracer outperforms its peer in terms of power saving and system performance.
The authors of @cite_14 showed that co-scheduling VMs with heterogeneous characteristics on the same physical node is beneficial from both an energy-efficiency and a performance point of view. The authors of @cite_12 proposed Virtual Batching, a novel request batching solution for virtualized servers with primarily light workloads. The authors of @cite_11 proposed a two-layer control architecture based on well-established control theory, and the authors of @cite_9 proposed Co-Con, a cluster-level control architecture that coordinates individual power and performance control loops for virtualized server clusters. The authors of @cite_8 developed an adaptive resource control system that dynamically adjusts the resource shares allocated to individual tiers in order to meet application-level QoS goals; in their later work @cite_29 , they presented AutoControl, a resource control system that automatically adapts to dynamic workload changes to achieve application SLOs.
{ "cite_N": [ "@cite_14", "@cite_8", "@cite_9", "@cite_29", "@cite_12", "@cite_11" ], "mid": [ "2129846162", "2163012480", "2139052027", "2137240782", "2085307822", "2169151081" ], "abstract": [ "In this paper, we present vGreen, a multi-tiered software system for energy efficient computing in virtualized environments. It comprises of novel hierarchical metrics that capture power and performance characteristics of virtual and physical machines, and policies, which use it for energy efficient virtual machine scheduling across the whole deployment. We show through real life implementation on a state of the art testbed of server machines that vGreen can improve both performance and system level energy savings by 20 and 15 across benchmarks with varying characteristics.", "Data centers are often under-utilized due to over-provisioning as well as time-varying resource demands of typical enterprise applications. One approach to increase resource utilization is to consolidate applications in a shared infrastructure using virtualization. Meeting application-level quality of service (QoS) goals becomes a challenge in a consolidated environment as application resource needs differ. Furthermore, for multi-tier applications, the amount of resources needed to achieve their QoS goals might be different at each tier and may also depend on availability of resources in other tiers. In this paper, we develop an adaptive resource control system that dynamically adjusts the resource shares to individual tiers in order to meet application-level QoS goals while achieving high resource utilization in the data center. Our control system is developed using classical control theory, and we used a black-box system modeling approach to overcome the absence of first principle models for complex enterprise applications and systems. To evaluate our controllers, we built a testbed simulating a virtual data center using Xen virtual machines. We experimented with two multi-tier applications in this virtual data center: a two-tier implementation of RUBiS, an online auction site, and a two-tier Java implementation of TPC-W. Our results indicate that the proposed control system is able to maintain high resource utilization and meets QoS goals in spite of varying resource demands from the applications.", "Today's data centers face two critical challenges. First, various customers need to be assured by meeting their required service-level agreements such as response time and throughput. Second, server power consumption must be controlled in order to avoid failures caused by power capacity overload or system overheating due to increasing high server density. However, existing work controls power and application-level performance separately, and thus, cannot simultaneously provide explicit guarantees on both. In addition, as power and performance control strategies may come from different hardware software vendors and coexist at different layers, it is more feasible to coordinate various strategies to achieve the desired control objectives than relying on a single centralized control strategy. This paper proposes Co-Con, a novel cluster-level control architecture that coordinates individual power and performance control loops for virtualized server clusters. To emulate the current practice in data centers, the power control loop changes hardware power states with no regard to the application-level performance. 
The performance control loop is then designed for each virtual machine to achieve the desired performance even when the system model varies significantly due to the impact of power control. Co-Con configures the two control loops rigorously, based on feedback control theory, for theoretically guaranteed control accuracy and system stability. Empirical results on a physical testbed demonstrate that Co-Con can simultaneously provide effective control on both application-level performance and underlying power consumption.", "Virtualized data centers enable sharing of resources among hosted applications. However, it is difficult to satisfy service-level objectives(SLOs) of applications on shared infrastructure, as application workloads and resource consumption patterns change over time. In this paper, we present AutoControl, a resource control system that automatically adapts to dynamic workload changes to achieve application SLOs. AutoControl is a combination of an online model estimator and a novel multi-input, multi-output (MIMO) resource controller. The model estimator captures the complex relationship between application performance and resource allocations, while the MIMO controller allocates the right amount of multiple virtualized resources to achieve application SLOs. Our experimental evaluation with RUBiS and TPC-W benchmarks along with production-trace-driven workloads indicates that AutoControl can detect and mitigate CPU and disk I O bottlenecks that occur over time and across multiple nodes by allocating each resource accordingly. We also show that AutoControl can be used to provide service differentiation according to the application priorities during resource contention.", "Many power management strategies have been proposed for enterprise servers based on dynamic voltage and frequency scaling (DVFS), but those solutions cannot further reduce the energy consumption of a server when the server processor is already at the lowest DVFS level and the server utilization is still low (e.g., 5 or lower). To achieve improved energy efficiency, request batching can be conducted to group received requests into batches and put the processor into sleep between the batches. However, it is challenging to perform request batching on a virtualized server because different virtual machines on the same server may have different workload intensities. Hence, putting the shared processor into sleep may severely impact the performance of all the virtual machines. This paper proposes Virtual Batching, a novel request batching solution for virtualized servers with primarily light workloads. Our solution dynamically allocates CPU resources such that all the virtual machines can have approximately the same performance level relative to their allowed peak values. Based on this uniform level, our solution determines the time length for periodically batching incoming requests and putting the processor into sleep. When the workload intensity changes from light to moderate, request batching is automatically switched to DVFS to increase processor frequency for performance guarantees. Empirical results based on a hardware testbed and real trace files show that Virtual Batching can achieve the desired performance with more energy conservation than several well-designed baselines, e.g., 63 more, on average, than a solution based on DVFS only.", "Both power and performance are important concerns for enterprise data centers. 
While various management strategies have been developed to effectively reduce server power consumption by transitioning hardware components to lower power states, they cannot be directly applied to today's data centers that rely on virtualization technologies. Virtual machines running on the same physical server are correlated because the state transition of any hardware component will affect the application performance of all the virtual machines. As a result, reducing power solely based on the performance level of one virtual machine may cause another to violate its performance specification. This paper proposes PARTIC, a two-layer control architecture designed based on well-established control theory. The primary control loop adopts a multi-input multi-output control approach to maintain load balancing among all virtual machines so that they can have approximately the same performance level relative to their allowed peak values. The secondary performance control loop then manipulates CPU frequency for power efficiency based on the uniform performance level achieved by the primary loop. Empirical results demonstrate that PARTIC can effectively reduce server power consumption while achieving required application-level performance for virtualized enterprise servers." ] }
1007.4890
1640740452
As energy proportional computing gradually extends the success of DVFS (Dynamic voltage and frequency scaling) to the entire system, DVFS control algorithms will play a key role in reducing server clusters' power consumption. The focus of this paper is to provide accurate cluster-level DVFS control for power saving in a server cluster. To achieve this goal, we propose a request tracing approach that online classifies the major causal path patterns of a multi-tier service and monitors their performance data as a guide for accurate DVFS control. The request tracing approach significantly decreases the time cost of performance profiling experiments that aim to establish the empirical performance model. Moreover, it decreases the controller complexity so that we can introduce a much simpler feedback controller, which only relies on the single-node DVFS modulation at a time as opposed to varying multiple CPU frequencies simultaneously. Based on the request tracing approach, we present a hybrid DVFS control system that combines an empirical performance model for fast modulation at different load levels and a simpler feedback controller for adaption. We implement a prototype of the proposed system, called PowerTracer, and conduct extensive experiments on a 3-tier platform. Our experimental results show that PowerTracer outperforms its peer in terms of power saving and system performance.
Related work focuses on the performance optimization problem, while our work addresses the power optimization problem under performance constraints @cite_24 . The authors of @cite_2 pursue power efficiency at a larger scale by leveraging statistical properties of concurrent resource usage across a collection of systems (an ensemble). The authors of @cite_13 present a technique that controls the peak power consumption of a high-density server. The authors of @cite_22 @cite_32 propose a cluster-level power controller that shifts power among servers based on their performance needs, while keeping the total power of the cluster below a constraint. The authors of @cite_10 develop mechanisms to better utilize the installed power infrastructure, and the authors of @cite_25 explore a combination of statistical multiplexing techniques to improve the utilization of the power hierarchy within a data center. The authors of @cite_4 propose and validate a power management solution that coordinates different individual energy-saving approaches. The authors of @cite_17 present the aggregate power usage characteristics of large collections of servers (up to 15 thousand) for different classes of applications over a period of approximately six months, and use modeling to attack data-center-level power provisioning inefficiencies.
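The power-shifting idea of @cite_22 @cite_32 can be caricatured as a budgeted allocation problem. The sketch below is our own simplification, not the MIMO controller from those papers: it splits a fixed cluster power budget among servers in proportion to a per-server performance-need score, clamped to assumed per-server limits.

# Toy cluster-level power shifting under a total budget (illustrative only).
def allocate_power(needs, total_budget, p_min, p_max):
    """Split total_budget (W) across servers in proportion to their 'need' scores,
    clamped to [p_min, p_max] per server; leftover budget is redistributed once."""
    total_need = sum(needs.values()) or 1.0
    alloc = {s: max(p_min, min(p_max, total_budget * n / total_need))
             for s, n in needs.items()}
    leftover = total_budget - sum(alloc.values())
    if leftover > 1e-9:                          # hand spare watts to uncapped servers
        room = {s: p_max - a for s, a in alloc.items() if a < p_max}
        if room:
            total_room = sum(room.values())
            for s, r in room.items():
                alloc[s] += leftover * r / total_room
    return alloc

# Example: three servers, 600 W cluster budget, 100-250 W allowed per server.
print(allocate_power({"web": 1.0, "app": 2.0, "db": 3.0}, 600, 100, 250))

The cited controllers instead derive the allocation from a control-theoretic formulation with stability guarantees; the proportional split here only illustrates the shift-under-a-cap behaviour.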
{ "cite_N": [ "@cite_4", "@cite_22", "@cite_10", "@cite_32", "@cite_24", "@cite_2", "@cite_13", "@cite_25", "@cite_17" ], "mid": [ "2038337218", "2100948980", "2106675331", "2114013570", "", "", "1977556410", "2052220570", "2118955868" ], "abstract": [ "Power delivery, electricity consumption, and heat management are becoming key challenges in data center environments. Several past solutions have individually evaluated different techniques to address separate aspects of this problem, in hardware and software, and at local and global levels. Unfortunately, there has been no corresponding work on coordinating all these solutions. In the absence of such coordination, these solutions are likely to interfere with one another, in unpredictable (and potentially dangerous) ways. This paper seeks to address this problem. We make two key contributions. First, we propose and validate a power management solution that coordinates different individual approaches. Using simulations based on 180 server traces from nine different real-world enterprises, we demonstrate the correctness, stability, and efficiency advantages of our solution. Second, using our unified architecture as the base, we perform a detailed quantitative sensitivity analysis and draw conclusions about the impact of different architectures, implementations, workloads, and system design choices.", "Power control is becoming a key challenge for effectively operating a modern data center. In addition to reducing operation costs, precisely controlling power consumption is an essential way to avoid system failures caused by power capacity overload or overheating due to increasing high-density. Control-theoretic techniques have recently shown a lot of promise on power management thanks to their better control performance and theoretical guarantees on control accuracy and system stability. However, existing work over-simplifies the problem by controlling a single server independently from others. As a result, at the cluster level where multiple servers are correlated by common workloads and share common power supplies, power cannot be shared to improve application performance. In this paper, we propose a cluster-level power controller that shifts power among servers based on their performance needs, while controlling the total power of the cluster to be lower than a constraint. Our controller features a rigorous design based on an optimal multi-input-multi-output control theory. Empirical results demonstrate that our controller outperforms two state-of-the-art controllers, by having better application performance and more accurate power control.", "Data center power infrastructure incurs massive capital costs, which typically exceed energy costs over the life of the facility. To squeeze maximum value from the infrastructure, researchers have proposed over-subscribing power circuits, relying on the observation that peak loads are rare. To ensure availability, these proposals employ power capping, which throttles server performance during utilization spikes to enforce safe power budgets. However, because budgets must be enforced locally -- at each power distribution unit (PDU) -- local utilization spikes may force throttling even when power delivery capacity is available elsewhere. Moreover, the need to maintain reserve capacity for fault tolerance on power delivery paths magnifies the impact of utilization spikes. 
In this paper, we develop mechanisms to better utilize installed power infrastructure, reducing reserve capacity margins and avoiding performance throttling. Unlike conventional high-availability data centers, where collocated servers share identical primary and secondary power feeds, we reorganize power feeds to create shuffled power distribution topologies. Shuffled topologies spread secondary power feeds over numerous PDUs, reducing reserve capacity requirements to tolerate a single PDU failure. Second, we propose Power Routing, which schedules IT load dynamically across redundant power feeds to: (1) shift slack to servers with growing power demands, and (2) balance power draw across AC phases to reduce heating and improve electrical stability. We describe efficient heuristics for scheduling servers to PDUs (an NP-complete problem). Using data collected from nearly 1000 servers in three production facilities, we demonstrate that these mechanisms can reduce the required power infrastructure capacity relative to conventional high-availability data centers by 32 without performance degradation.", "Power control is becoming a key challenge for effectively operating a modern data center. In addition to reducing operating costs, precisely controlling power consumption is an essential way to avoid system failures caused by power capacity overload or overheating due to increasing high server density. Control-theoretic techniques have recently shown a lot of promise for power management because of their better control performance and theoretical guarantees on control accuracy and system stability. However, existing work oversimplifies the problem by controlling a single server independently from others. As a result, at the enclosure level where multiple high-density servers are correlated by common workloads and share common power supplies, power cannot be shared to improve application performance. In this paper, we propose an enclosure-level power controller that shifts power among servers based on their performance needs, while controlling the total power of the enclosure to be lower than a constraint. Our controller features a rigorous design based on an optimal Multi-Input-Multi-Output (MIMO) control theory. We present detailed control problem formulation and transformation to a standard constrained least-squares problem, as well as stability analysis in the face of significant workload variations. We then conduct extensive experiments on a physical testbed to compare our controller with three state-of-the-art controllers: a heuristic-based MIMO control solution, a Single-Input-Single-Output (SISO) control solution, and an improved SISO controller with simple power shifting among servers. Our empirical results demonstrate that our controller outperforms all the three baselines by having more accurate power control and up to 11.8 percent better benchmark performance.", "", "", "", "Current capacity planning practices based on heavy over-provisioning of power infrastructure hurt (i) the operational costs of data centers as well as (ii) the computational work they can support. We explore a combination of statistical multiplexing techniques to improve the utilization of the power hierarchy within a data center. At the highest level of the power hierarchy, we employ controlled underprovisioning and over-booking of power needs of hosted workloads. At the lower levels, we introduce the novel notion of soft fuses to flexibly distribute provisioned power among hosted workloads based on their needs. 
Our techniques are built upon a measurement-driven profiling and prediction framework to characterize key statistical properties of the power needs of hosted workloads and their aggregates. We characterize the gains in terms of the amount of computational work (CPU cycles) per provisioned unit of power Computation per Provisioned Watt (CPW). Our technique is able to double the CPWoffered by a Power Distribution Unit (PDU) running the e-commerce benchmark TPC-W compared to conventional provisioning practices. Over-booking the PDU by 10 based on tails of power profiles yields a further improvement of 20 . Reactive techniques implemented on our Xen VMM-based servers dynamically modulate CPU DVFS states to ensure power draw below the limits imposed by soft fuses. Finally, information captured in our profiles also provide ways of controlling application performance degradation despite overbooking. The 95th percentile of TPC-W session response time only grew from 1.59 sec to 1.78 sec--a degradation of 12 .", "Large-scale Internet services require a computing infrastructure that can beappropriately described as a warehouse-sized computing system. The cost ofbuilding datacenter facilities capable of delivering a given power capacity tosuch a computer can rival the recurring energy consumption costs themselves.Therefore, there are strong economic incentives to operate facilities as closeas possible to maximum capacity, so that the non-recurring facility costs canbe best amortized. That is difficult to achieve in practice because ofuncertainties in equipment power ratings and because power consumption tends tovary significantly with the actual computing activity. Effective powerprovisioning strategies are needed to determine how much computing equipmentcan be safely and efficiently hosted within a given power budget. In this paper we present the aggregate power usage characteristics of largecollections of servers (up to 15 thousand) for different classes ofapplications over a period of approximately six months. Those observationsallow us to evaluate opportunities for maximizing the use of the deployed powercapacity of datacenters, and assess the risks of over-subscribing it. We findthat even in well-tuned applications there is a noticeable gap (7 - 16 )between achieved and theoretical aggregate peak power usage at the clusterlevel (thousands of servers). The gap grows to almost 40 in wholedatacenters. This headroom can be used to deploy additional compute equipmentwithin the same power budget with minimal risk of exceeding it. We use ourmodeling framework to estimate the potential of power management schemes toreduce peak power and energy usage. We find that the opportunities for powerand energy savings are significant, but greater at the cluster-level (thousandsof servers) than at the rack-level (tens). Finally we argue that systems needto be power efficient across the activity range, and not only at peakperformance levels." ] }
1007.4290
2949331335
The paper introduces the sweeping preconditioner, which is highly efficient for iterative solutions of the variable coefficient Helmholtz equation including very high frequency problems. The first central idea of this novel approach is to construct an approximate factorization of the discretized Helmholtz equation by sweeping the domain layer by layer, starting from an absorbing layer or boundary condition. Given this specific order of factorization, the second central idea of this approach is to represent the intermediate matrices in the hierarchical matrix framework. In two dimensions, both the construction and the application of the preconditioners are of linear complexity. The GMRES solver with the resulting preconditioner converges in an amazingly small number of iterations, which is essentially independent of the number of unknowns. This approach is also extended to the three dimensional case with some success. Numerical results are provided in both two and three dimensions to demonstrate the efficiency of this new approach.
The most efficient direct methods for solving the discretized Helmholtz systems are the multifrontal methods or their pivoted versions @cite_31 @cite_19 @cite_39 . The multifrontal methods exploit the locality of the discrete operator and construct an @math factorization based on a hierarchical partitioning of the domain. Their computational costs depend quite strongly on the dimensionality. In 2D, for a problem with @math unknowns, a multifrontal method takes @math steps and @math storage space. The prefactor is usually rather small, making the multifrontal methods effectively the default choice for the 2D Helmholtz problem. In 3D, for a problem with @math unknowns, a multifrontal method takes @math steps and @math storage space. For large scale 3D problems, they can be very costly.
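For orientation, the sketch below factors a small 2D Helmholtz discretization with a general sparse direct solver. SciPy's splu wraps SuperLU, which is supernodal rather than multifrontal, so this is only a stand-in; it still exhibits the fill-in-driven growth in time and memory under grid refinement that the complexity counts above describe. The grid size and wavenumber are illustrative choices.

# Sparse direct solve of a 2D Helmholtz discretization (stand-in for multifrontal).
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import splu

def helmholtz_2d(n, k):
    """5-point -Laplacian minus k^2 on an n x n grid of the unit square."""
    h = 1.0 / (n + 1)
    d2 = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n)) / h**2
    eye = sp.identity(n)
    lap = sp.kron(eye, d2) + sp.kron(d2, eye)
    return (lap - k**2 * sp.identity(n * n)).tocsc()

n, k = 127, 20.0
A = helmholtz_2d(n, k)
lu = splu(A)                      # factorization cost and fill-in grow superlinearly in N = n^2
b = np.ones(n * n)
x = lu.solve(b)
print("residual:", np.linalg.norm(A @ x - b))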
{ "cite_N": [ "@cite_19", "@cite_31", "@cite_39" ], "mid": [ "", "2063675347", "2031990962" ], "abstract": [ "", "On etend la methode frontale pour resoudre des systemes lineaires d'equations en permettant a plus d'un front d'apparaitre en meme temps", "This paper presents an overview of the multifrontal method for the solution of large sparse symmetric positive definite linear systems. The method is formulated in terms of frontal matrices, update matrices, and an assembly tree. Formal definitions of these notions are given based on the sparse matrix structure. Various advances to the basic method are surveyed. They include the role of matrix reorderings, the use of supernodes, and other implementatjon techniques. The use of the method in different computational environments is also described." ] }
1007.4290
2949331335
The paper introduces the sweeping preconditioner, which is highly efficient for iterative solutions of the variable coefficient Helmholtz equation including very high frequency problems. The first central idea of this novel approach is to construct an approximate factorization of the discretized Helmholtz equation by sweeping the domain layer by layer, starting from an absorbing layer or boundary condition. Given this specific order of factorization, the second central idea of this approach is to represent the intermediate matrices in the hierarchical matrix framework. In two dimensions, both the construction and the application of the preconditioners are of linear complexity. The GMRES solver with the resulting preconditioner converges in an amazingly small number of iterations, which is essentially independent of the number of unknowns. This approach is also extended to the three dimensional case with some success. Numerical results are provided in both two and three dimensions to demonstrate the efficiency of this new approach.
There has been a surge of developments in iterative methods for solving the Helmholtz equation. The following discussion is by no means complete; more details can be found in @cite_32 .
{ "cite_N": [ "@cite_32" ], "mid": [ "2095572167" ], "abstract": [ "In this paper we survey the development of fast iterative solvers aimed at solving 2D 3D Helmholtz problems. In the first half of the paper, a survey on some recently developed methods is given. The second half of the paper focuses on the development of the shifted Laplacian preconditioner used to accelerate the convergence of Krylov subspace methods applied to the Helmholtz equation. Numerical examples are given for some difficult problems, which had not been solved iteratively before." ] }
1007.4290
2949331335
The paper introduces the sweeping preconditioner, which is highly efficient for iterative solutions of the variable coefficient Helmholtz equation including very high frequency problems. The first central idea of this novel approach is to construct an approximate factorization of the discretized Helmholtz equation by sweeping the domain layer by layer, starting from an absorbing layer or boundary condition. Given this specific order of factorization, the second central idea of this approach is to represent the intermediate matrices in the hierarchical matrix framework. In two dimensions, both the construction and the application of the preconditioners are of linear complexity. The GMRES solver with the resulting preconditioner converges in an amazingly small number of iterations, which is essentially independent of the number of unknowns. This approach is also extended to the three dimensional case with some success. Numerical results are provided in both two and three dimensions to demonstrate the efficiency of this new approach.
Several other methods @cite_40 @cite_11 @cite_15 leverage the idea of domain decomposition. These methods are typically well suited to parallel implementation, as the computation in each subdomain can essentially be done independently. However, the convergence rates of these methods are usually quite slow @cite_32 .
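To show only the subdomain-solve structure, here is a toy overlapping alternating Schwarz iteration on a 1D Poisson surrogate with plain Dirichlet interface values; the Helmholtz variants cited above instead use Sommerfeld-type transmission conditions or DtN maps, which this sketch does not attempt.

# Toy alternating Schwarz on 1D Poisson: -u'' = 1, u(0) = u(1) = 0 (structure only).
import numpy as np

n = 99                                   # interior grid points
h = 1.0 / (n + 1)
f = np.ones(n)
u = np.zeros(n)

def solve_subdomain(u, a, b):
    """Solve the tridiagonal block on interior indices a..b, using current
    neighbour values of u as Dirichlet data on the artificial interfaces."""
    m = b - a + 1
    A = (np.diag(2.0 * np.ones(m)) + np.diag(-np.ones(m - 1), 1)
         + np.diag(-np.ones(m - 1), -1)) / h**2
    rhs = f[a:b + 1].copy()
    if a > 0:
        rhs[0] += u[a - 1] / h**2        # left interface value
    if b < n - 1:
        rhs[-1] += u[b + 1] / h**2       # right interface value
    u[a:b + 1] = np.linalg.solve(A, rhs)

for it in range(30):                     # sweep the two overlapping subdomains
    solve_subdomain(u, 0, 59)            # subdomain 1: indices 0..59
    solve_subdomain(u, 39, n - 1)        # subdomain 2: indices 39..98 (overlap 39..59)

x = np.arange(1, n + 1) * h
print("max error:", np.abs(u - 0.5 * x * (1 - x)).max())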
{ "cite_N": [ "@cite_40", "@cite_15", "@cite_32", "@cite_11" ], "mid": [ "2084463157", "1994793184", "2095572167", "" ], "abstract": [ "We present an iterative domain decomposition method to solve the Helmholtz equation and related optimal control problems. The proof of convergence of this method relies on energy techniques. This method leads to efficient algorithms for the numerical resolution of harmonic wave propagation problems in homogeneous and heterogeneous media.", "A new domain decomposition method is presented for the exterior Helmholtz problem. The nonlocal Dirichlet-to-Neumann (DtN) map is used as a nonreflecting condition on the outer computational boundary. The computational domain is divided into nonoverlapping subdomains with Sommerfeld-type conditions on the adjacent subdomain boundaries to ensure uniqueness. An iterative scheme is developed, where independent subdomain boundary-value problems are obtained by applying the DtN operator to values from the previous iteration. The independent problems are then discretized with finite elements and can be solved concurrently. Numerical results are presented for a two-dimensional model problem, and both the solution accuracy and convergence rate are investigated.", "In this paper we survey the development of fast iterative solvers aimed at solving 2D 3D Helmholtz problems. In the first half of the paper, a survey on some recently developed methods is given. The second half of the paper focuses on the development of the shifted Laplacian preconditioner used to accelerate the convergence of Krylov subspace methods applied to the Helmholtz equation. Numerical examples are given for some difficult problems, which had not been solved iteratively before.", "" ] }
1007.4290
2949331335
The paper introduces the sweeping preconditioner, which is highly efficient for iterative solutions of the variable coefficient Helmholtz equation including very high frequency problems. The first central idea of this novel approach is to construct an approximate factorization of the discretized Helmholtz equation by sweeping the domain layer by layer, starting from an absorbing layer or boundary condition. Given this specific order of factorization, the second central idea of this approach is to represent the intermediate matrices in the hierarchical matrix framework. In two dimensions, both the construction and the application of the preconditioners are of linear complexity. The GMRES solver with the resulting preconditioner converges in an amazingly small number of iterations, which is essentially independent of the number of unknowns. This approach is also extended to the three dimensional case with some success. Numerical results are provided in both two and three dimensions to demonstrate the efficiency of this new approach.
Another class of methods @cite_35 @cite_14 @cite_12 @cite_0 , which has attracted a lot of attention recently, preconditions the Helmholtz operator with a shifted Laplacian operator, \[ -\Delta - \frac{\omega^2}{c^2(x)}\,(1 + i\alpha), \qquad \alpha > 0, \] in order to improve the spectral properties of the discrete Helmholtz system. Since the shifted Laplacian operator is elliptic, standard algorithms such as multigrid can be used for its inversion. These methods offer quite significant improvements in convergence rate, but the reported iteration counts typically still grow linearly with respect to @math and are much larger than the iteration counts produced by the sweeping preconditioner.
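A minimal sketch of the shifted-Laplacian idea on a small 2D problem follows. The preconditioner is inverted exactly here with a sparse LU factorization, whereas the cited works apply a multigrid cycle; the shift value 0.5, the grid size, and the wavenumber are illustrative assumptions.

# Shifted-Laplacian preconditioned GMRES for a 2D Helmholtz problem (sketch).
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import gmres, splu, LinearOperator

def discrete_operator(n, k, shift=0.0):
    """5-point -Laplacian minus k^2*(1 + i*shift) on an n x n unit-square grid."""
    h = 1.0 / (n + 1)
    d2 = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n)) / h**2
    eye = sp.identity(n)
    lap = sp.kron(eye, d2) + sp.kron(d2, eye)
    return (lap - k**2 * (1.0 + 1j * shift) * sp.identity(n * n)).tocsc()

n, k = 127, 20.0
A = discrete_operator(n, k)                 # Helmholtz operator (no shift)
M = discrete_operator(n, k, shift=0.5)      # shifted Laplacian; 0.5 is an assumed choice
M_lu = splu(M)                              # the cited works invert M with multigrid instead
prec = LinearOperator(A.shape, matvec=M_lu.solve, dtype=complex)

b = np.zeros(n * n, dtype=complex)
b[(n * n) // 2] = 1.0                       # point source in the middle of the grid
x, info = gmres(A, b, M=prec, restart=50, maxiter=2000)
print("info:", info, " residual:", np.linalg.norm(A @ x - b) / np.linalg.norm(b))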
{ "cite_N": [ "@cite_0", "@cite_35", "@cite_14", "@cite_12" ], "mid": [ "950823788", "2069595221", "2042199958", "2142094322" ], "abstract": [ "Using a finite element method to solve the Helmholtz equation leads to a sparse system of equations which in three dimensions is too large to solve directly. It is also non-Hermitian and highly indefinite and consequently difficult to solve iteratively. The approach taken in this paper is to precondition this linear system with a new preconditioner and then solve it iteratively using a Krylov subspace method. Numerical analysis shows the preconditioner to be effective on a simple 1D test problem, and results are presented showing considerable convergence acceleration for a number of different Krylov methods for more complex problems in 2D, as well as for the more general problem of harmonic disturbances to a non-stagnant steady flow.", "An iterative algorithm for the solution of the Helmholtz equation is developed. The algorithm is based on a preconditioned conjugate gradient iteration for the normal equations. The preconditioning is based on an SSOR sweep for the discrete Laplacian. Numerical results are presented for a wide variety of problems of physical interest and demonstrate the effectiveness of the algorithm.", "An iterative solution method, in the form of a preconditioner for a Krylov subspace method, is presented for the Helmholtz equation. The preconditioner is based on a Helmholtz-type differential operator with a complex term. A multigrid iteration is used for approximately inverting the preconditioner. The choice of multigrid components for the corresponding preconditioning matrix with a complex diagonal is validated with Fourier analysis. Multigrid analysis results are verified by numerical experiments. High wavenumber Helmholtz problems in heterogeneous media are solved indicating the performance of the preconditioner.", "In 1983, a preconditioner was proposed [J. Comput. Phys. 49 (1983) 443] based on the Laplace operator for solving the discrete Helmholtz equation efficiently with CGNR. The preconditioner is especially effective for low wavenumber cases where the linear system is slightly indefinite. Laird [Preconditioned iterative solution of the 2D Helmholtz equation, First Year's Report, St. Hugh's College, Oxford, 2001] proposed a preconditioner where an extra term is added to the Laplace operator. This term is similar to the zeroth order term in the Helmholtz equation but with reversed sign. In this paper, both approaches are further generalized to a new class of preconditioners, the so-called \"shifted Laplace\" preconditioners of the form Δφ-αk2φ with α ∈ C. Numerical experiments for various wavenumbers indicate the effectiveness of the preconditioner. The preconditioner is evaluated in combination with GMRES, Bi-CGSTAB, and CGNR." ] }
1007.4290
2949331335
The paper introduces the sweeping preconditioner, which is highly efficient for iterative solutions of the variable coefficient Helmholtz equation including very high frequency problems. The first central idea of this novel approach is to construct an approximate factorization of the discretized Helmholtz equation by sweeping the domain layer by layer, starting from an absorbing layer or boundary condition. Given this specific order of factorization, the second central idea of this approach is to represent the intermediate matrices in the hierarchical matrix framework. In two dimensions, both the construction and the application of the preconditioners are of linear complexity. The GMRES solver with the resulting preconditioner converges in an amazingly small number of iterations, which is essentially independent of the number of unknowns. This approach is also extended to the three dimensional case with some success. Numerical results are provided in both two and three dimensions to demonstrate the efficiency of this new approach.
Several other constructions of preconditioners @cite_10 @cite_4 @cite_28 are based on incomplete LU (ILU) decompositions, i.e., generating only a small portion of the entries of the LU factorization of the discrete Helmholtz operator and applying this ILU decomposition as a preconditioner. Recent approaches based on ILUT (incomplete LU factorization with thresholding) and ARMS (algebraic recursive multilevel solver) have been reported in @cite_28 . These ILU preconditioners bring down the number of iterations quite significantly; however, the iteration count still typically scales linearly in @math . In connection with the ILU preconditioners, the sweeping preconditioner can be viewed as an approximate LU (ALU) preconditioner: instead of keeping only a few selected entries, it approximates the whole inverse operator more accurately in a more sophisticated and effective form, thus resulting in substantially better convergence properties.
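As a rough stand-in for the ILUT/ARMS preconditioners of @cite_28, the sketch below applies SciPy's spilu (a drop-tolerance incomplete LU) to a small 2D Helmholtz discretization and uses it inside GMRES; drop_tol and fill_factor are illustrative values, not parameters taken from the cited work.

# ILU-preconditioned GMRES for a 2D Helmholtz discretization (illustrative values).
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import gmres, spilu, LinearOperator

def helmholtz_2d(n, k):
    """5-point -Laplacian minus k^2 on an n x n unit-square grid (complex storage)."""
    h = 1.0 / (n + 1)
    d2 = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n)) / h**2
    eye = sp.identity(n)
    return (sp.kron(eye, d2) + sp.kron(d2, eye)
            - k**2 * sp.identity(n * n)).astype(complex).tocsc()

n, k = 127, 20.0
A = helmholtz_2d(n, k)
ilu = spilu(A, drop_tol=1e-4, fill_factor=20)   # keep only part of the true LU fill-in
prec = LinearOperator(A.shape, matvec=ilu.solve, dtype=complex)

b = np.zeros(n * n, dtype=complex)
b[(n * n) // 2] = 1.0                           # point source
x, info = gmres(A, b, M=prec, restart=50, maxiter=2000)
print("info:", info, " residual:", np.linalg.norm(A @ x - b) / np.linalg.norm(b))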
{ "cite_N": [ "@cite_28", "@cite_10", "@cite_4" ], "mid": [ "2056296875", "", "2063547741" ], "abstract": [ "Linear systems which originate from the simulation of wave propagation phenomena can be very difficult to solve by iterative methods. These systems are typically complex valued and they tend to be highly indefinite, which renders the standard ILU-based preconditioners ineffective. This paper presents a study of ways to enhance standard preconditioners by altering the diagonal by imaginary shifts. Prior work indicates that modifying the diagonal entries during the incomplete factorization process, by adding to it purely imaginary values can improve the quality of the preconditioner in a substantial way. Here we propose simple algebraic heuristics to perform the shifting and test these techniques with the ARMS and ILUT preconditioners. Comparisons are made with applications stemming from the diffraction of an acoustic wave incident on a bounded obstacle (governed by the Helmholtz Wave Equation).", "", "We present an incomplete LU preconditioner for solving discretized Helmholtz problems. The preconditioner is based on an analytic factorization of the Helmholtz operator. This allows us to take the physical properties of the acoustics problem modeled by the Helmholtz equation into account in the preconditioner. We show how the parameters in the preconditioner can be chosen in order to make it effective. Numerical experiments show that the new preconditioner leads to convergent iterative methods even for large wave numbers, and it outperforms classical ILU preconditioners by a large margin." ] }
1007.4291
2950304429
This paper introduces a new sweeping preconditioner for the iterative solution of the variable coefficient Helmholtz equation in two and three dimensions. The algorithms follow the general structure of constructing an approximate @math factorization by eliminating the unknowns layer by layer starting from an absorbing layer or boundary condition. The central idea of this paper is to approximate the Schur complement matrices of the factorization using moving perfectly matched layers (PMLs) introduced in the interior of the domain. Applying each Schur complement matrix is equivalent to solving a quasi-1D problem with a banded LU factorization in the 2D case and to solving a quasi-2D problem with a multifrontal method in the 3D case. The resulting preconditioner has linear application cost and the preconditioned iterative solver converges in a number of iterations that is essentially independent of the number of unknowns or the frequency. Numerical results are presented in both two and three dimensions to demonstrate the efficiency of this new preconditioner.
There is a vast literature on developing efficient algorithms for the Helmholtz equation. A partial list of significant advances includes @cite_19 @cite_22 @cite_20 @cite_3 @cite_18 @cite_23 @cite_4 @cite_0 @cite_12 @cite_9 . We refer to the review article @cite_16 and our previous paper @cite_24 for a detailed discussion. The brief discussion below is restricted to the approaches that are closely related to the one proposed in this paper.
{ "cite_N": [ "@cite_18", "@cite_4", "@cite_22", "@cite_9", "@cite_3", "@cite_0", "@cite_19", "@cite_24", "@cite_23", "@cite_16", "@cite_12", "@cite_20" ], "mid": [ "2151478420", "2042199958", "2084463157", "1987397719", "", "950823788", "2069595221", "2098102734", "", "2095572167", "", "175176216" ], "abstract": [ "Standard multigrid algorithms have proven ineffective for the solution of discretizations of Helmholtz equations. In this work we modify the standard algorithm by adding GMRES iterations at coarse levels and as an outer iteration. We demonstrate the algorithm's effectiveness through theoretical analysis of a model problem and experimental results. In particular, we show that the combined use of GMRES as a smoother and outer iteration produces an algorithm whose performance depends relatively mildly on wave number and is robust for normalized wave numbers as large as 200. For fixed wave numbers, it displays grid-independent convergence rates and has costs proportional to the number of unknowns.", "An iterative solution method, in the form of a preconditioner for a Krylov subspace method, is presented for the Helmholtz equation. The preconditioner is based on a Helmholtz-type differential operator with a complex term. A multigrid iteration is used for approximately inverting the preconditioner. The choice of multigrid components for the corresponding preconditioning matrix with a complex diagonal is validated with Fourier analysis. Multigrid analysis results are verified by numerical experiments. High wavenumber Helmholtz problems in heterogeneous media are solved indicating the performance of the preconditioner.", "We present an iterative domain decomposition method to solve the Helmholtz equation and related optimal control problems. The proof of convergence of this method relies on energy techniques. This method leads to efficient algorithms for the numerical resolution of harmonic wave propagation problems in homogeneous and heterogeneous media.", "Abstract The diagonal forms are constructed for the translation operators for the Helmholtz equation in three dimensions. While the operators themselves have a fairly complicated structure (described somewhat incompletely by the classical addition theorems for the Bessel functions), their diagonal forms turn out to be quite simple. These diagonal forms are realized as generalized integrals, possess straightforward physical interpretations, and admit stable numerical implementation. This paper uses the obtained analytical apparatus to construct an algorithm for the rapid application to arbitrary vectors of matrices resulting from the discretization of integral equations of the potential theory for the Helmholtz equation in three dimensions. It is an extension to the three-dimensional case of the results of Rokhlin (J. Complexity4(1988), 12-32), where a similar apparatus is developed in the two-dimensional case.", "", "Using a finite element method to solve the Helmholtz equation leads to a sparse system of equations which in three dimensions is too large to solve directly. It is also non-Hermitian and highly indefinite and consequently difficult to solve iteratively. The approach taken in this paper is to precondition this linear system with a new preconditioner and then solve it iteratively using a Krylov subspace method. 
Numerical analysis shows the preconditioner to be effective on a simple 1D test problem, and results are presented showing considerable convergence acceleration for a number of different Krylov methods for more complex problems in 2D, as well as for the more general problem of harmonic disturbances to a non-stagnant steady flow.", "An iterative algorithm for the solution of the Helmholtz equation is developed. The algorithm is based on a preconditioned conjugate gradient iteration for the normal equations. The preconditioning is based on an SSOR sweep for the discrete Laplacian. Numerical results are presented for a wide variety of problems of physical interest and demonstrate the effectiveness of the algorithm.", "The paper introduces the sweeping preconditioner, which is highly efficient for iterative solutions of the variable-coefficient Helmholtz equation including very-high-frequency problems. The first central idea of this novel approach is to construct an approximate factorization of the discretized Helmholtz equation by sweeping the domain layer by layer, starting from an absorbing layer or boundary condition. Given this specific order of factorization, the second central idea is to represent the intermediate matrices in the hierarchical matrix framework. In two dimensions, both the construction and the application of the preconditioners are of linear complexity. The generalized minimal residual method (GMRES) solver with the resulting preconditioner converges in an amazingly small number of iterations, which is essentially independent of the number of unknowns. This approach is also extended to the three-dimensional case with some success. Numerical results are provided in both two and three dimensions to demonstrate the efficiency of this new approach.", "", "In this paper we survey the development of fast iterative solvers aimed at solving 2D 3D Helmholtz problems. In the first half of the paper, a survey on some recently developed methods is given. The second half of the paper focuses on the development of the shifted Laplacian preconditioner used to accelerate the convergence of Krylov subspace methods applied to the Helmholtz equation. Numerical examples are given for some difficult problems, which had not been solved iteratively before.", "", "Multigrid methods are known for their high efficiency in the solution of definite elliptic problems. However, difficulties that appear in highly indefinite problems, such as standing wave equations, cause a total loss of efficiency in the standard multigrid solver. The aim of this paper is to isolate these difficulties, analyze them, suggest how to deal with them, and then test the suggestions with numerical experiments. The modified multigrid methods introduced here exhibit the same high convergence rates as usually obtained for definite elliptic problems, for nearly the same cost. They also yield a very efficient treatment of the radiation boundary conditions." ] }
1007.4291
2950304429
This paper introduces a new sweeping preconditioner for the iterative solution of the variable coefficient Helmholtz equation in two and three dimensions. The algorithms follow the general structure of constructing an approximate @math factorization by eliminating the unknowns layer by layer starting from an absorbing layer or boundary condition. The central idea of this paper is to approximate the Schur complement matrices of the factorization using moving perfectly matched layers (PMLs) introduced in the interior of the domain. Applying each Schur complement matrix is equivalent to solving a quasi-1D problem with a banded LU factorization in the 2D case and to solving a quasi-2D problem with a multifrontal method in the 3D case. The resulting preconditioner has linear application cost and the preconditioned iterative solver converges in a number of iterations that is essentially independent of the number of unknowns or the frequency. Numerical results are presented in both two and three dimensions to demonstrate the efficiency of this new preconditioner.
The most efficient direct methods for solving the discrete Helmholtz systems are the multifrontal methods or their pivoted versions @cite_17 @cite_7 @cite_21 . The multifrontal methods exploit the locality of the discrete operator and construct an @math factorization based on a hierarchical partitioning of the domain. The cost of a multifrontal method depends strongly on the number of dimensions. For a 2D problem with @math unknowns, a multifrontal method takes @math flops and @math storage space. The prefactor is usually rather small, making the multifrontal methods effectively the default choice for most 2D Helmholtz problems. However, for a 3D problem with @math unknowns, a multifrontal method requires @math flops and @math storage space, which can be very costly for large scale 3D problems.
{ "cite_N": [ "@cite_21", "@cite_7", "@cite_17" ], "mid": [ "2031990962", "", "2063675347" ], "abstract": [ "This paper presents an overview of the multifrontal method for the solution of large sparse symmetric positive definite linear systems. The method is formulated in terms of frontal matrices, update matrices, and an assembly tree. Formal definitions of these notions are given based on the sparse matrix structure. Various advances to the basic method are surveyed. They include the role of matrix reorderings, the use of supernodes, and other implementatjon techniques. The use of the method in different computational environments is also described.", "", "On etend la methode frontale pour resoudre des systemes lineaires d'equations en permettant a plus d'un front d'apparaitre en meme temps" ] }
1007.4935
2951609900
This paper formalizes the optimal base problem, presents an algorithm to solve it, and describes its application to the encoding of Pseudo-Boolean constraints to SAT. We demonstrate the impact of integrating our algorithm within the Pseudo-Boolean constraint solver MINISAT+. Experimentation indicates that our algorithm scales to bases involving numbers up to 1,000,000, improving on the restriction in MINISAT+ to prime numbers up to 17. We show that, while for many examples primes up to 17 do suffice, encoding with respect to optimal bases reduces the CNF sizes and improves the subsequent SAT solving time for many examples.
Recent work @cite_4 encodes Pseudo-Boolean constraints via "totalizers" similar to sorting networks, determined by the representation of the coefficients in an underlying base. Here the authors choose the standard base 2 representation of numbers. It is straightforward to generalize their approach to an arbitrary mixed base, and our algorithm is directly applicable. In @cite_9 the author considers the @math cost function and analyzes the size of representing the natural numbers up to @math with (a particular class of) mixed radix bases. Our Lemma may lead to a contribution in that context.
{ "cite_N": [ "@cite_9", "@cite_4" ], "mid": [ "2021464560", "1479859531" ], "abstract": [ "We introduce a new approach to the study of sum-of-digits functions for integral nonstationary bases and apply it to certain classical examples, namely, to the Cantor and Ostrowski number systems.", "This paper answers affirmatively the open question of the existence of a polynomial size CNF encoding of pseudo-Boolean (PB) constraints such that generalized arc consistency (GAC) is maintained through unit propagation (UP). All previous encodings of PB constraints either did not allow UP to maintain GAC, or were of exponential size in the worst case. This paper presents an encoding that realizes both of the desired properties. From a theoretical point of view, this narrows the gap between the expressive power of clauses and the one of pseudo-Boolean constraints." ] }
1007.3229
2949765894
We consider several WLAN stations associated at rates r1, r2, ..., rk with an Access Point. Each station is downloading a long file from a local server, located on the LAN to which the AP is attached. We model these simultaneous TCP-controlled transfers using a Markov Chain. Our analytical approach leads to a procedure to compute aggregate download throughput numerically, and the results match simulations very well.
In the first group, all WLAN entities (STAs and the AP) are assumed to be saturated, i.e., each entity is backlogged permanently. Bianchi @cite_9 , @cite_4 and @cite_10 consider this saturated traffic model. However, our interest is in modelling aggregate throughput, and the saturated traffic model does not capture the situation well.
{ "cite_N": [ "@cite_10", "@cite_9", "@cite_4" ], "mid": [ "2126345586", "2162598825", "" ], "abstract": [ "In wireless LANs (WLANs), the medium access control (MAC) protocol is the main element that determines the efficiency in sharing the limited communication bandwidth of the wireless channel. In this paper we focus on the efficiency of the IEEE 802.11 standard for WLANs. Specifically, we analytically derive the average size of the contention window that maximizes the throughput, hereafter theoretical throughput limit, and we show that: 1) depending on the network configuration, the standard can operate very far from the theoretical throughput limit; and 2) an appropriate tuning of the backoff algorithm can drive the IEEE 802.11 protocol close to the theoretical throughput limit. Hence we propose a distributed algorithm that enables each station to tune its backoff algorithm at run-time. The performances of the IEEE 802.11 protocol, enhanced with our algorithm, are extensively investigated by simulation. Specifically, we investigate the sensitiveness of our algorithm to some network configuration parameters (number of active stations, presence of hidden terminals). Our results indicate that the capacity of the enhanced protocol is very close to the theoretical upper bound in all the configurations analyzed.", "The IEEE has standardized the 802.11 protocol for wireless local area networks. The primary medium access control (MAC) technique of 802.11 is called the distributed coordination function (DCF). The DCF is a carrier sense multiple access with collision avoidance (CSMA CA) scheme with binary slotted exponential backoff. This paper provides a simple, but nevertheless extremely accurate, analytical model to compute the 802.11 DCF throughput, in the assumption of finite number of terminals and ideal channel conditions. The proposed analysis applies to both the packet transmission schemes employed by DCF, namely, the basic access and the RTS CTS access mechanisms. In addition, it also applies to a combination of the two schemes, in which packets longer than a given threshold are transmitted according to the RTS CTS mechanism. By means of the proposed model, we provide an extensive throughput performance evaluation of both access mechanisms of the 802.11 protocol.", "" ] }
1007.3240
2079995598
An asynchronous, variational method for simulating elastica in complex contact and impact scenarios is developed. Asynchronous Variational Integrators [1] (AVIs) are extended to handle contact forces by associating different time steps to forces instead of to spatial elements. By discretizing a barrier potential by an infinite sum of nested quadratic potentials, these extended AVIs are used to resolve contact while obeying momentum- and energy-conservation laws. A series of two- and three-dimensional examples illustrate the robustness and good energy behavior of the method.
The simplest contact models for finite element simulation follow the early analytical work of Hertz @cite_21 in assuming frictionless contact of planar (or nearly planar) surfaces with small strain. In this regime, several approaches have been explored to arrive at a weak formulation of contact; for a high-level survey of these approaches, see for example the overview by Belytschko et al. @cite_25 or Wriggers @cite_12 . The first of these is the use of penalty forces, described for instance by Oden @cite_13 and by Kikuchi and Oden @cite_17 . The penalty approach results in a contact force proportional to an arbitrary parameter and to the rate of interpenetration, or in more general formulations to an arbitrary function of the rate of interpenetration and the interpenetration depth; Belytschko and Neal @cite_0 discuss the choice of this parameter in Section 8. Recent work by Belytschko et al. @cite_39 uses moving least squares to construct an implicit smooth contact surface, from which the interpenetration distance is evaluated. Peric and Owen @cite_30 describe how to equip penalty forces with a Coulomb friction model.
{ "cite_N": [ "@cite_30", "@cite_21", "@cite_39", "@cite_0", "@cite_13", "@cite_25", "@cite_12", "@cite_17" ], "mid": [ "2135957539", "", "2106411615", "2166197075", "979213183", "1589288858", "2040124556", "1520204308" ], "abstract": [ "The friction forces are assumed to follow the Coulomb law, with a slip criterion treated in the context of a standard return mapping algorithm. Consistent linearization of the field equations is performed which leads to a fully implicit scheme with non-symmetric tangent stiffness which preserves asymptotic quadratic convergence of the Newton-Raphson method.", "", "A new algorithm has been developed for smoothing the surfaces in finite element formulations of contact-impact. A key feature of this method is that the smoothing is done implicitly by constructing smooth signed distance functions for the bodies. These functions are then employed for the computation of the gap and other variables needed for implementation of contact-impact. The smoothed signed distance functions are constructed by a moving least-squares approximation with a polynomial basis. Results show that when nodes are placed on a surface, the surface can be reproduced with an error of about one per cent or less with either a quadratic or a linear basis. With a quadratic basis, the method exactly reproduces a circle or a sphere even for coarse meshes. Results are presented for contact problems involving the contact of circular bodies. Copyright (C) 2002 John Wiley Sons, Ltd.", "Contact-impact algorithms, which are sometimes called slideline algorithms, are a computationally time-consuming part of many explicit simulations of non-linear problems because they involve many branches, so they are not amenable to vectorization, which is essential for speed on supercomputers. The pinball algorithm is a simplified slideline algorithm which is readily vectorized. Its major idea is to embed pinballs in surface elements and to enforce the impenetrability condition only to pinballs. It can be implemented in either a Lagrange multiplier or penalty method. It is shown that, in any Lagrange multiplier method, no iterations are needed to define the contact surface. Examples of solutions and running times are given.", "This paper reviews recent results of Oden, Kikuchi, and Song [4] on the use of exterior penalty methods as a basis for finite element approximations of contact problems in linear elasticity.", "Preface. List of Boxes. Introduction. Lagrangian and Eulerian Finite Elements in One Dimension. Continuum Mechanics. Lagrangian Meshes. Constitutive Models Solution Methods and Stability. Arbitrary Lagrangian Eulerian Formulations. Element Technology. Beams and Shells. Contact--Impact. Appendix 1: Voigt Notation. Appendix 2: Norms. Appendix 3: Element Shape Functions. Glossary. References. Index.", "The numerical treatment of contact problems involves the formulation of the geometry, the statement of interface laws, the variational formulation and the development of algorithms. In this paper we give an overview with regard to the different topics which are involved when contact problems have to be simulated. To be most general we will derive a geometrical model for contact which is valid for large deformations. Furthermore interface laws will be discussed for the normal and tangential stress components in the contact area. Different variational formulations can be applied to treat the variational inequalities due to contact. Several of these different techniques will be presented. 
Furthermore the discretization of a contact problem in time and space is of great importance and has to be chosen with regard to the nature of the contact problem. Thus the standard discretization schemes will be discussed as well as techiques to search for contact in case of large deformations.", "Introduction Signorini's problem Minimization methods and their variants Finite element approximations Orderings, Trace Theorems, Green's Formulas and korn's Inequalities Signorini's problem revisited Signorini's problem for incompressible materials Alternate variational principles for Signorini's problem Contact problems for large deflections of elastic plates Some special contact problems with friction Contact problems with nonclassical friction laws Contact problems involving deformations and nonlinear materials Dynamic friction problems Rolling contact problems Concluding comments." ] }
1007.3240
2079995598
An asynchronous, variational method for simulating elastica in complex contact and impact scenarios is developed. Asynchronous Variational Integrators [1] (AVIs) are extended to handle contact forces by associating different time steps to forces instead of to spatial elements. By discretizing a barrier potential by an infinite sum of nested quadratic potentials, these extended AVIs are used to resolve contact while obeying momentum- and energy-conservation laws. A series of two- and three-dimensional examples illustrate the robustness and good energy behavior of the method.
Seeking to exactly enforce non-penetration along the contact surface leads to generalizations of the method of Lagrange multipliers. Hughes et al. @cite_9 and Nour-Omid and Wriggers @cite_42 provide an overview of this approach in the context of contact response. Such constraint enforcement can be viewed as a penalty force in the limit of infinite stiffness, impossible to attain in practice since the system becomes ill-conditioned. Taylor and Papadopoulos @cite_32 consider persistent contact by extending the Newmark method to treat jump conditions in kinematic fields, thus reducing undesirable oscillatory modes. However, the effects of these modifications on numerical dissipation and long-time energy behavior are not considered.
{ "cite_N": [ "@cite_9", "@cite_42", "@cite_32" ], "mid": [ "2056375289", "2082653392", "2077411108" ], "abstract": [ "Abstract We present a finite element method for a class of contact-impact problems. Theoretical background and numerical implementation features are discussed. In particular, we consider the basic ideas of contact-impact, the assumptions which define the class of problems we deal with, spatial and temporal discretizations of the bodies involved, special problems concerning the contact of bodies of different dimensions, discrete impact and release conditions, and solution of the nonlinear algebraic problem. Several sample problems are presented which demonstrate the accuracy and versatility of the algorithm.", "Abstract The merits and limitations of some existing procedures for the solution of contact problems, modeled by the finite element method, are examined. Based on the Lagrangian multiplier method, a partitioning scheme can be used to obtain a small system of equation for the Lagrange multipliers which is then solved by the conjugate gradient method. A two-level contact algorithm is employed which first linearizes the nonlinear contact problem to obtain a linear contact problem that is in turn solved by the Newton method. The performance of the algorithm compared to some existing procedures is demonstrated on some test problems.", "This paper addresses the formulation and discrete approximation of dynamic contact impact initial-value problems. The continuous problem is presented in the context of non-linear kinematics. Standard semi-discrete time integrators are introduced and are shown to be unsuccessful in modelling the kinematic constraints imposed on the interacting bodies during persistent contact. A procedure that bypasses the aforementioned difficulty is proposed by means of a novel variational formulation. Numerical simulations are conducted and the results are reported and discussed." ] }
1007.3240
2079995598
An asynchronous, variational method for simulating elastica in complex contact and impact scenarios is developed. Asynchronous Variational Integrators [1] (AVIs) are extended to handle contact forces by associating different time steps to forces instead of to spatial elements. By discretizing a barrier potential by an infinite sum of nested quadratic potentials, these extended AVIs are used to resolve contact while obeying momentum- and energy-conservation laws. A series of two- and three-dimensional examples illustrate the robustness and good energy behavior of the method.
Non-smooth contact requires special consideration, since in the non-smooth regime there is no straightforward way of defining a contact normal or penetration distance. Simo et al. @cite_3 discretize the contact surface into segments over which they assume constant contact pressure; this formulation allows them to handle non-node-to-node contact using a perturbed Lagrangian. Kane et al. @cite_10 apply non-smooth analysis to resolve contact constraints between sharp objects. Pandolfi et al. @cite_15 extend the work of Kane et al. by describing a variational model for non-smooth contact with friction. Cirak and West @cite_43 decompose contact resolution into an impenetrability-enforcement step and a momentum-transfer step, thereby exactly enforcing non-interpenetration while nearly conserving momentum and energy.
{ "cite_N": [ "@cite_43", "@cite_15", "@cite_10", "@cite_3" ], "mid": [ "2099876657", "2075921599", "2046156462", "2030010167" ], "abstract": [ "We propose a new explicit contact algorithm for finite element discretized solids and shells with smooth and non-smooth geometries. The equations of motion are integrated in time with a predictor-corrector-type algorithm. After each predictor step, the impenetrability constraints and the exchange of momenta between the impacting bodies are considered and enforced independently. The geometrically inadmissible penetrations are removed using closest point projections or similar updates. Penetration is measured using the signed volume of intersection described by the contacting surface elements, which is well-defined for both smooth and non-smooth geometries. For computing the instantaneous velocity changes that occur during the impact event, we introduce the decomposition contact response method. This enables the closed-form solution of the jump equations at impact, and applies to non-frictional as well as frictional contact, as exemplified by the Coulomb frictional model. The overall algorithm has excellent momentum and energy conservation characteristics, as several numerical examples demonstrate. Copyright © 2005 John Wiley & Sons, Ltd.", "The present work extends the non-smooth contact class of algorithms introduced by to the case of friction. The formulation specifically addresses contact geometries, e.g. involving multiple collisions between tightly packed non-smooth bodies, for which neither normals nor gap functions can be properly defined. A key aspect of the approach is that the incremental displacements follow from a minimum principle. The objective function comprises terms which account for inertia, strain energy, contact, friction and external forcing. The Euler–Lagrange equations corresponding to this extended variational principle are shown to be consistent with the equations of motion of solids in frictional contact. In addition to its value as a basis for formulating numerical algorithms, the variational framework offers theoretical advantages as regards the selection of trajectories in cases of non-uniqueness. We present numerical and analytical examples which demonstrate the good momentum and energy conservation characteristics of the numerical algorithms, as well as the ability of the approach to account for stick and slip conditions.", "This work develops robust contact algorithms capable of dealing with complex contact situations involving several bodies with corners. Amongst the mathematical tools we bring to bear on the problem is nonsmooth analysis, following Clarke (F.H. Clarke. Optimization and nonsmooth analysis. John Wiley and Sons, New York, 1983.). We specifically address contact geometries for which both the use of normals and gap functions have difficulties and therefore precludes the application of most contact algorithms proposed to date. Such situations arise in applications such as fragmentation, where angular fragments undergo complex collision sequences before they scatter. We demonstrate the robustness and versatility of the nonsmooth contact algorithms developed in this paper with the aid of selected two and three-dimensional applications.", "Making use of a perturbed Lagrangian formulation, a finite element procedure for contact problems is developed for the general case in which node-to-node contact no longer holds. 
The proposed procedure leads naturally to a discretization of the contact interface into contact segments. Within the context of a bilinear interpolation for the displacement field, a mixed finite element approximation is introduced by assuming discont naous contact pressure, constant on the contact segment. Because of this piece-wise constant approximation, the gap function enters into the formulation in an ‘average’ sense instead of through a point-wise definition. Numerical examples are presented that illustrate the performance of the proposed procedure. Current finite element formulations for contact problems based on either the classical Lagrange parameter procedure [l-3, 12, 203 or the penalty-function method [4-6,11], are characterized by a point-wise enforcement of the contact-constraint condition, in the sense that penetration of the bodies is established on a nodal basis. Moreover, in this methods the recovery of the contact pressure over the element from the contact nodal forces generally requires an additional procedure. Within the framework of classical Lagrange multiplier methods the contact condition is exactly satisfied by transforming the constrained problem into an unconstrained one with the introduction of additional variables (Lagrange multipliers). These extra variables add computational effort to the solution process which often requires special procedures to handle the presence of zero diagonal terms. Penalty methods, on the other hand, enable one to transform the constrained problem into an unconstrained one without introducing additional variables. The constraint condition is now satisfied only approximately for finite values of the penalty parameter. The main difhculty associated with these methods, however, lies in the poor conditioning of the problem as the penalty is increased to more accurately enforce the constraint condition. This is a well-understood phenomenon, particularly in the context of the incompressible and nearly incompressible problem in solid and fluid mechanics (e.g. see [l&22,25] for a review). Recently, augmented Lagrangian procedures have been proposed as a promising way to partially overcome these difficulties and ‘regularize’ the penalty formulation (e.g. see the survey in [7,8]). Within the framework of Zinearited e u c , it is possible to restrict the finite element formulation of contact problems by assuming that node-to-node contact occurs. This is in fact the case often considered in the literature [l, 2, 4, 6, 12, 20, 211. In the general context of fully" ] }
1007.3240
2079995598
An asynchronous, variational method for simulating elastica in complex contact and impact scenarios is developed. Asynchronous Variational Integrators [1] (AVIs) are extended to handle contact forces by associating different time steps to forces instead of to spatial elements. By discretizing a barrier potential by an infinite sum of nested quadratic potentials, these extended AVIs are used to resolve contact while obeying momentum- and energy-conservation laws. A series of two- and three-dimensional examples illustrate the robustness and good energy behavior of the method.
Several authors have explored a structure-preserving approach to solving the contact problem. Barth et al. @cite_20 consider an adaptive-step-size algorithm that preserves the time-reversible symmetry of the RATTLE algorithm, and demonstrate an application to an elastic rod interacting with a Lennard-Jones potential. Kane et al. @cite_35 show that the Newmark method, for all parameters, is variational, and construct two two-step dissipative integrators that yield good energy decay. Laursen and Love @cite_24 , by taking into account velocity discontinuities that occur at contact interfaces, develop a momentum- and energy-preserving method for simulating frictionless contact. This paper shares with these last approaches the viewpoint that structured integration, with its associated conservation guarantees, is an invaluable tool for accurately simulating dynamic systems with contact.
{ "cite_N": [ "@cite_24", "@cite_35", "@cite_20" ], "mid": [ "2067309337", "2124969394", "2047976760" ], "abstract": [ "The value of energy and momentum conserving algorithms has been well established for the analysis of highly non-linear systems, including those characterized by the nonsmooth non-linearities of an impact event. This work proposes an improved integration scheme for frictionless dynamic contact, seeking to preserve the stability properties of exact energy and momentum conservation without the heretofore unavoidable compromise of violating geometric admissibility as established by the contact constraints. The physically motivated introduction of a discrete contact velocity provides an algorithmic framework that ensures exact conservation locally while remaining independent of the choice of constraint treatment, thus making full conservation equally possible in conjunction with a penalty regularization as with an exact Lagrange multiplier enforcement. The discrete velocity effects are incorporated as a post-convergence update to the system velocities, and thus have no direct effect on the non-linear solution of the displacement equilibrium equation. The result is a robust implicit algorithmic treatment of dynamic frictionless impact, appropriate for large deformations and fully conservative for a range of geometric constraints. Copyright © 2001 John Wiley & Sons, Ltd.", "The purpose of this work is twofold. First, we demonstrate analytically that the classical Newmark family as well as related integration algorithms are variational in the sense of the Veselov formulation of discrete mechanics. Such variational algorithms are well known to be symplectic and momentum preserving and to often have excellent global energy behavior. This analytical result is veried through numerical examples and is believed to be one of the primary reasons that this class of algorithms performs so well. Second, we develop algorithms for mechanical systems with forcing, and in particular, for dissipative systems. In this case, we develop integrators that are based on a discretization of the Lagrange d'Alembert principle as well as on a variational formulation of dissipation. It is demonstrated that these types of structured integrators have good numerical behavior in terms of obtaining the correct amounts by which the energy changes over the integration run.", "This article considers the design and implementation of variable-timestep methods for simulating holonomically constrained mechanical systems. Symplectic variable stepsizes are briefly discussed, and we consider time-reparameterization techniques employing a time-reversible (symmetric) integration method to solve the equations of motion. We give several numerical examples, including a simulation of an elastic (inextensible, unshearable) rod undergoing large deformations and collisions with the sides of a bounding box. Numerical experiments indicate that adaptive stepping can significantly smooth the numerical energy and improve the overall efficiency of the simulation." ] }
1007.2503
2952286870
We study the problem of ranking with submodular valuations. An instance of this problem consists of a ground set @math , and a collection of @math monotone submodular set functions @math , where each @math . An additional ingredient of the input is a weight vector @math . The objective is to find a linear ordering of the ground set elements that minimizes the weighted cover time of the functions. The cover time of a function is the minimal number of elements in the prefix of the linear ordering that form a set whose corresponding function value is greater than a unit threshold value. Our main contribution is an @math -approximation algorithm for the problem, where @math is the smallest non-zero marginal value that any function may gain from some element. Our algorithm orders the elements using an adaptive residual updates scheme, which may be of independent interest. We also prove that the problem is @math -hard to approximate, unless P = NP. This implies that the outcome of our algorithm is optimal up to constant factors.
Submodular functions arise naturally in operations research and combinatorial optimization. One of the most extensively studied questions is how to minimize a submodular function. A series of results demonstrate that this task can be performed efficiently, either by the ellipsoid algorithm @cite_31 or through strongly polynomial time combinatorial algorithms @cite_18 @cite_5 @cite_11 @cite_24 @cite_13 @cite_17 . Recently, there has been a surge of interest in understanding the limits of tractability of minimization problems in which the classic linear objective function is replaced by a submodular one (see, e.g., @cite_26 @cite_32 @cite_20 @cite_4 ). Notably, these submodular problems are typically considerably harder to approximate than their linear counterparts. For example, the minimum spanning tree problem, which is polynomial-time solvable with linear cost functions, is @math -hard to approximate with submodular cost functions @cite_20 , and the sparsest cut problem, which admits an @math -approximation algorithm when the cost is linear @cite_3 , becomes @math -hard to approximate with submodular costs @cite_26 . Our work extends the tools and techniques in this line of research. In particular, our results establish a computational separation of logarithmic order between the submodular setting and the linear setting, which admits a constant factor approximation @cite_33 .
{ "cite_N": [ "@cite_18", "@cite_26", "@cite_4", "@cite_33", "@cite_17", "@cite_32", "@cite_3", "@cite_24", "@cite_5", "@cite_31", "@cite_13", "@cite_20", "@cite_11" ], "mid": [ "2035575256", "2106752752", "1572332593", "", "2070208123", "2278268662", "2573057963", "2050511894", "", "2012329067", "", "1974657465", "2022049946" ], "abstract": [ "We give a strongly polynomial-time algorithm minimizing a submodular function f given by a value-giving oracle. The algorithm does not use the ellipsoid method or any other linear programming method. No bound on the complexity of the values of f is needed to be known a priori. The number of oracle calls is bounded by a polynomial in the size of the underlying set.", "We introduce several generalizations of classical computer science problems obtained by replacing simpler objective functions with general submodular functions.The new problems include submodular load balancing, which generalizes load balancing or minimum-makespan scheduling, submodular sparsest cut and submodular balanced cut, which generalize their respective graph cut problems, as well as submodular function minimization with a cardinality lower bound. We establish upper and lower bounds for the approximability of these problems with a polynomial number of queries to a function-value oracle.The approximation guarantees for most of our algorithms are of the order of radic(n ln n). We show that this is the inherent difficulty of the problems by proving matching lower bounds.We also give an improved lower bound for the problem of approximately learning a monotone submodular function. In addition, we present an algorithm for approximately learning submodular functions with special structure, whose guarantee is close to the lower bound. Although quite restrictive, the class of functions with this structure includes the ones that are used for lower bounds both by us and in previous work. This demonstrates that if there are significantly stronger lower bounds for this problem, they rely on more general submodular functions.", "This paper addresses the problems of minimizing nonnegative submodular functions under covering constraints, which generalize the vertex cover, edge cover, and set cover problems. We give approximation algorithms for these problems exploiting the discrete convexity of submodular functions. We first present a rounding 2-approximation algorithm for the submodular vertex cover problem based on the half-integrality of the continuous relaxation problem, and show that the rounding algorithm can be performed by one application of submodular function minimization on a ring family. We also show that a rounding algorithm and a primal-dual algorithm for the submodular cost set cover problem are both constant factor approximation algorithms if the maximum frequency is fixed. In addition, we give an essentially tight lower bound on the approximability of the submodular edge cover problem.", "", "This paper presents a strongly polynomial algorithm for submodular function minimization using only additions, subtractions, comparisons, and oracle calls for function values.", "", "This paper shows how to compute O(√log n)-approximations to the S PAR S EST CUT and BALANCED SEPARATOR problems in O(n 2 ) time, thus improving upon the recent algorithm of Arora, Rao, and Vazirani [Proceedings of the 36th Annual ACM Symposium on Theory of Computing, 2004, pp. 222-231]. Their algorithm uses semidefinite programming and requires O(n 9.5 ) time. 
Our algorithm relies on efficiently finding expander flows in the graph and does not solve semidefinite programs. The existence of expander flows was also established by Arora, Rao, and Vazirani [Proceedings of the 36th Annual ACM Symposium on Theory of Computing, 2004, pp. 222-231].", "We consider the problem of minimizing a submodular function f defined on a set V with n elements. We give a combinatorial algorithm that runs in O(n 5EO + n 6) time, where EO is the time to evaluate f(S) for some @math . This improves the previous best strongly polynomial running time by more than a factor of n. We also extend our result to ring families.", "", "L. G. Khachiyan recently published a polynomial algorithm to check feasibility of a system of linear inequalities. The method is an adaptation of an algorithm proposed by Shor for non-linear optimization problems. In this paper we show that the method also yields interesting results in combinatorial optimization. Thus it yields polynomial algorithms for vertex packing in perfect graphs; for the matching and matroid intersection problems; for optimum covering of directed cuts of a digraph; for the minimum value of a submodular set function; and for other important combinatorial problems. On the negative side, it yields a proof that weighted fractional chromatic number is NP-hard.", "", "Applications in complex systems such as the Internet have spawned a recent interest in studying situations involving multiple agents with their individual cost or utility functions. In this paper, we introduce an algorithmic framework for studying combinatorial optimization problems in the presence of multiple agents with submodular cost functions. We study several fundamental covering problems in this framework and establish upper and lower bounds on their approximability.", "Combinatorial strongly polynomial algorithms for minimizing submodular functions have been developed by Iwata, Fleischer, and Fujishige (IFF) and by Schrijver. The IFF algorithm employs a scaling scheme for submodular functions, whereas Schrijver's algorithm achieves strongly polynomial bound with the aid of distance labeling. Subsequently, Fleischer and Iwata have described a push relabel version of Schrijver's algorithm to improve its time complexity. This paper combines the scaling scheme with the push relabel framework to yield a faster combinatorial algorithm for submodular function minimization. The resulting algorithm improves over the previously best known bound by essentially a linear factor in the size of the underlying ground set." ] }
1007.2964
1494146003
Let F be a family of Borel measurable functions on a complete separable metric space. The gap (or fat-shattering) dimension of F is a combinatorial quantity that measures the extent to which functions f in F can separate finite sets of points at a predefined resolution gamma > 0. We establish a connection between the gap dimension of F and the uniform convergence of its sample averages under ergodic sampling. In particular, we show that if the gap dimension of F at resolution gamma > 0 is finite, then for every ergodic process the sample averages of functions in F are eventually within 10 gamma of their limiting expectations uniformly over the class F. If the gap dimension of F is finite for every resolution gamma > 0 then the sample averages of functions in F converge uniformly to their limiting expectations. We assume only that F is uniformly bounded and countable (or countably approximable). No smoothness conditions are placed on F, and no assumptions beyond ergodicity are placed on the sampling processes. Our results extend existing work for i.i.d. processes.
Alon et al. @cite_8 considered the relationship between the gap dimension and the learnability of classes of uniformly bounded functions under independent sampling. In particular, they showed that if @math is a family of functions @math satisfying suitable measurability conditions, and such that @math is finite for some @math , then @math when @math . Here @math is the family of all i.i.d. processes taking values in @math . Conversely, if @math , they showed that this convergence fails to hold for every @math . Further connections between the gap dimension and different notions of learnability (in the i.i.d. case) can be found in @cite_2 and the references therein. Talagrand @cite_3 and Mendelson and Vershynin @cite_16 showed that the @math covering numbers of a uniformly bounded set of functions can be bounded in terms of its weak gap dimension.
{ "cite_N": [ "@cite_16", "@cite_2", "@cite_3", "@cite_8" ], "mid": [ "2093294140", "2059811159", "2022644078", "" ], "abstract": [ "We solve Talagrand’s entropy problem: the L2-covering numbers of every uniformly bounded class of functions are exponential in its shattering dimension. This extends Dudley’s theorem on classes of 0,1 -valued functions, for which the shattering dimension is the Vapnik-Chervonenkis dimension. In convex geometry, the solution means that the entropy of a convex body K is controlled by the maximal dimension of a cube of a fixed side contained in the coordinate projections of K. This has a number of consequences, including the optimal Elton’s Theorem and estimates on the uniform central limit theorem in the real valued case.", "We consider the problem of learning real-valued functions from random examples when the function values are corrupted with noise. With mild conditions on independent observation noise, we provide characterizations of the learnability of a real-valued function class in terms of a generalization of the Vapnik-Chervonenkis dimension, the fat shattering function, introduced by Kearns and Schapire. We show that, given some restrictions on the noise, a function class is learnable in our model if and only if its fat-shattering function is finite. With different (also quite mild) restrictions, satisfied for example by gaussian noise, we show that a function class is learnable from polynomially many examples if and only if its fat-shattering function grows polynomially. We prove analogous results in an agnostic setting, where there is no assumption of an underlying function class.", "Given a bounded class of functions, we introduce a combinatorial quantity (related to the idea of Vapnik--Chervonenkis classes) that is much more explicit than the Koltchinskii--Pollard entropy, but is proved to be essentially of the same order.", "" ] }
1007.2964
1494146003
Let F be a family of Borel measurable functions on a complete separable metric space. The gap (or fat-shattering) dimension of F is a combinatorial quantity that measures the extent to which functions f in F can separate finite sets of points at a predefined resolution gamma > 0. We establish a connection between the gap dimension of F and the uniform convergence of its sample averages under ergodic sampling. In particular, we show that if the gap dimension of F at resolution gamma > 0 is finite, then for every ergodic process the sample averages of functions in F are eventually within 10 gamma of their limiting expectations uniformly over the class F. If the gap dimension of F is finite for every resolution gamma > 0 then the sample averages of functions in F converge uniformly to their limiting expectations. We assume only that F is uniformly bounded and countable (or countably approximable). No smoothness conditions are placed on F, and no assumptions beyond ergodicity are placed on the sampling processes. Our results extend existing work for i.i.d. processes.
Adams and Nobel @cite_0 established the theorem in the special case where the elements of @math are indicator functions of subsets of @math . The problem simplifies in this case, as @math is zero for @math , and equal to the VC-dimension of @math if @math . If @math has finite VC-dimension, their results imply that @math for every ergodic process @math . For uniformly bounded families @math they show that @math for every ergodic process @math if @math , or if @math is a VC-graph class (cf. @cite_5 ).
{ "cite_N": [ "@cite_0", "@cite_5" ], "mid": [ "2094096036", "2010029425" ], "abstract": [ "We show that if X is a complete separable metric space and C is a countable family of Borel subsets of X with finite VC dimension, then, for every stationary ergodic process with values in X, the relative frequencies of sets C ∈ C converge uniformly to their limiting probabilities. Beyond ergodicity, no assumptions are imposed on the sampling process, and no regularity conditions are imposed on the elements of C. The result extends existing work of Vapnik and Chervonenkis, among others, who have studied uniform convergence for i.i.d. and strongly mixing processes. Our method of proof is new and direct: it does not rely on symmetrization techniques, probability inequalities or mixing conditions. The uniform convergence of relative frequencies for VC-major and VC-graph classes of functions under ergodic sampling is established as a corollary of the basic result for sets.", "I Functional on Stochastic Processes.- 1. Stochastic Processes as Random Functions.- Notes.- Problems.- II Uniform Convergence of Empirical Measures.- 1. Uniformity and Consistency.- 2. Direct Approximation.- 3. The Combinatorial Method.- 4. Classes of Sets with Polynomial Discrimination.- 5. Classes of Functions.- 6. Rates of Convergence.- Notes.- Problems.- III Convergence in Distribution in Euclidean Spaces.- 1. The Definition.- 2. The Continuous Mapping Theorem.- 3. Expectations of Smooth Functions.- 4. The Central Limit Theorem.- 5. Characteristic Functions.- 6. Quantile Transformations and Almost Sure Representations.- Notes.- Problems.- IV Convergence in Distribution in Metric Spaces.- 1. Measurability.- 2. The Continuous Mapping Theorem.- 3. Representation by Almost Surely Convergent Sequences.- 4. Coupling.- 5. Weakly Convergent Subsequences.- Notes.- Problems.- V The Uniform Metric on Spaces of Cadlag Functions.- 1. Approximation of Stochastic Processes.- 2. Empirical Processes.- 3. Existence of Brownian Bridge and Brownian Motion.- 4. Processes with Independent Increments.- 5. Infinite Time Scales.- 6. Functional of Brownian Motion and Brownian Bridge.- Notes.- Problems.- VI The Skorohod Metric on D(0, ?).- 1. Properties of the Metric.- 2. Convergence in Distribution.- Notes.- Problems.- VII Central Limit Theorems.- 1. Stochastic Equicontinuity.- 2. Chaining.- 3. Gaussian Processes.- 4. Random Covering Numbers.- 5. Empirical Central Limit Theorems.- 6. Restricted Chaining.- Notes.- Problems.- VIII Martingales.- 1. A Central Limit Theorem for Martingale-Difference Arrays.- 2. Continuous Time Martingales.- 3. Estimation from Censored Data.- Notes.- Problems.- Appendix A Stochastic-Order Symbols.- Appendix B Exponential Inequalities.- Notes.- Problems.- Appendix C Measurability.- Notes.- Problems.- References.- Author Index." ] }
1007.1049
1860703980
Gradecast is a simple three-round algorithm presented by Feldman and Micali. The current work presents a very simple synchronous algorithm that utilizes Gradecast to achieve Byzantine agreement. Two small variations of the presented algorithm lead to improved algorithms for solving the Approximate agreement problem and the Multi-consensus problem. An optimal approximate agreement algorithm was presented by Fekete, which supports up to 1 n Byzantine nodes and has message complexity of O(n^k), where n is the number of nodes and k is the number of rounds. Our solution to the approximate agreement problem is optimal,
Approximate Agreement: Approximate agreement was presented in @cite_12 . The synchronous solution provided in @cite_12 supports @math and the convergence rate is @math per round, which asymptotically is @math after @math rounds. To easily compare the different algorithms, we consider the number of rounds it takes to reach convergence of @math . For @cite_12 , within @math rounds the algorithm ensures all non-faulty nodes have converged to @math . The message complexity of @cite_12 is @math in each of the @math rounds.
{ "cite_N": [ "@cite_12" ], "mid": [ "2126906505" ], "abstract": [ "This paper considers a variant of the Byzantine Generals problem, in which processes start with arbitrary real values rather than Boolean values or values from some bounded range, and in which approximate, rather than exact, agreement is the desired goal. Algorithms are presented to reach approximate agreement in asynchronous, as well as synchronous systems. The asynchronous agreement algorithm is an interesting contrast to a result of , who show that exact agreement with guaranteed termination is not attainable in an asynchronous system with as few as one faulty process. The algorithms work by successive approximation, with a provable convergence rate that depends on the ratio between the number of faulty processes and the total number of processes. Lower bounds on the convergence rate for algorithms of this form are proved, and the algorithms presented are shown to be optimal." ] }
1007.1049
1860703980
Gradecast is a simple three-round algorithm presented by Feldman and Micali. The current work presents a very simple synchronous algorithm that utilizes Gradecast to achieve Byzantine agreement. Two small variations of the presented algorithm lead to improved algorithms for solving the Approximate agreement problem and the Multi-consensus problem. An optimal approximate agreement algorithm was presented by Fekete, which supports up to 1 n Byzantine nodes and has message complexity of O(n^k), where n is the number of nodes and k is the number of rounds. Our solution to the approximate agreement problem is optimal,
In @cite_11 several results are presented. First, for Byzantine failures there is a solution that tolerates @math and converges to @math within @math rounds. For crash failures, @cite_11 provides a solution tolerating @math that converges to @math within @math rounds. The message complexity of both algorithms is @math . Moreover, @cite_11 shows a lower bound for the case of @math rounds to reach @math convergence.
{ "cite_N": [ "@cite_11" ], "mid": [ "1976693492" ], "abstract": [ "This paper introduces some algorithms to solve crash-failure, failure-by-omission and Byzantine failure versions of the Byzantine Generals or consensus problem, where non-faulty processors need only arrive at values that are close together rather than identical. For each failure model and each value ofS, we give at-resilient algorithm usingS rounds of communication. IfS=t+1, exact agreement is obtained. In the algorithms for the failure-by-omission and Byzantine failure models, each processor attempts to identify the faulty processors and corrects values transmited by them to reduce the amount of disagreement. We also prove lower bounds for each model, to show that each of our algorithms has a convergence rate that is asymptotic to the best possible in that model as the number of processors increases." ] }
1007.1049
1860703980
Gradecast is a simple three-round algorithm presented by Feldman and Micali. The current work presents a very simple synchronous algorithm that utilizes Gradecast to achieve Byzantine agreement. Two small variations of the presented algorithm lead to improved algorithms for solving the Approximate agreement problem and the Multi-consensus problem. An optimal approximate agreement algorithm was presented by Fekete, which supports up to 1 n Byzantine nodes and has message complexity of O(n^k), where n is the number of nodes and k is the number of rounds. Our solution to the approximate agreement problem is optimal,
Multi Consensus: The algorithm Multi-Consensus presented in @cite_17 solves @math sequential consensuses within @math rounds and is resilient to @math . However, @cite_17 assumes that the starts of the different @math consensuses are synchronized, a property that cannot be ensured when a consensus stops early. In the current paper we show how to adapt ideas from @cite_5 such that our solution does not require synchronized starts of the different consensuses.
{ "cite_N": [ "@cite_5", "@cite_17" ], "mid": [ "2044436778", "1972328077" ], "abstract": [ "In a seminal paper, Feldman and Micali show an n-party Byzantine agreement protocol in the plain model that tolerates t", "Are randomized consensus algorithms more powerful than deterministic ones? Seemingly so, since randomized algorithms exist that reach consensus in expected constant number of rounds, whereas the deterministic counterparts are constrained by the r ≥ t + 1 lower bound in the number of communication rounds, where t is the maximum number of faults to be tolerated." ] }
1007.1049
1860703980
Gradecast is a simple three-round algorithm presented by Feldman and Micali. The current work presents a very simple synchronous algorithm that utilizes Gradecast to achieve Byzantine agreement. Two small variations of the presented algorithm lead to improved algorithms for solving the Approximate agreement problem and the Multi-consensus problem. An optimal approximate agreement algorithm was presented by Fekete, which supports up to 1 n Byzantine nodes and has message complexity of O(n^k), where n is the number of nodes and k is the number of rounds. Our solution to the approximate agreement problem is optimal,
In summary, a main contribution of this work is its simplicity. Using gradecast as a building block we present a very simple basic algorithm that solves the consensus problem, and two small variations of it that solve multi consensus and approximate agreement. All three algorithms support @math , have the early-stopping property and are asymptotically optimal in their running time (up to a constant multiplicative factor). Aside from the simplicity, the presented algorithms have the following properties: (1) the basic algorithm solves the consensus problem and terminates within @math rounds; (2) the first variation solves the approximate agreement problem, with convergence rate of @math , per @math rounds (within @math rounds it converges to @math ); its message complexity is @math per @math rounds, as opposed to @math of the previous best known results, and the solution dynamically adapts to the number of failures at each round; (3) the second variation solves @math sequential consensuses within @math rounds, and efficiently overcomes the requirement of synchronized starts of the consensus instances (a requirement assumed by @cite_17 ).
{ "cite_N": [ "@cite_17" ], "mid": [ "1972328077" ], "abstract": [ "Are randomized consensus algorithms more powerful than deterministic ones? Seemingly so, since randomized algorithms exist that reach consensus in expected constant number of rounds, whereas the deterministic counterparts are constrained by the r ≥ t + 1 lower bound in the number of communication rounds, where t is the maximum number of faults to be tolerated." ] }
1007.1261
2952211532
Developing data mining algorithms that are suitable for cloud computing platforms is currently an active area of research, as is developing cloud computing platforms appropriate for data mining. Currently, the most common benchmark for cloud computing is the Terasort (and related) benchmarks. Although the Terasort Benchmark is quite useful, it was not designed for data mining per se. In this paper, we introduce a benchmark called MalStone that is specifically designed to measure the performance of cloud computing middleware that supports the type of data intensive computing common when building data mining models. We also introduce MalGen, which is a utility for generating data on clouds that can be used with MalStone.
One of the motivations for choosing 10 billion 100-byte records is that the TeraSort Benchmark @cite_10 (sometimes called the Terabyte Sort Benchmark) also uses 10 billion 100-byte records.
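To make the record format concrete, here is a minimal Python sketch that emits TeraSort-style 100-byte records; the 10-byte-key / 90-byte-payload split is the commonly described benchmark layout but is stated here as an assumption, and the function names are ours rather than part of MalGen or TeraGen.

```python
# Hedged sketch: emit TeraSort-style records, each exactly 100 bytes
# (assumed split: 10 random key bytes + 90 payload bytes).  Real MalGen /
# TeraGen tooling is more elaborate; this only illustrates the record shape.
import os

RECORD_LEN, KEY_LEN = 100, 10

def make_record(seq: int) -> bytes:
    key = os.urandom(KEY_LEN)
    payload = str(seq).encode().ljust(RECORD_LEN - KEY_LEN, b".")[: RECORD_LEN - KEY_LEN]
    return key + payload

def generate(path: str, num_records: int) -> None:
    with open(path, "wb") as out:
        for i in range(num_records):
            out.write(make_record(i))

if __name__ == "__main__":
    generate("sample.dat", 1000)          # 1000 records = 100 KB
    assert os.path.getsize("sample.dat") == 1000 * RECORD_LEN
```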
{ "cite_N": [ "@cite_10" ], "mid": [ "156237147" ], "abstract": [ "A bonding agent for a ceramic decalcomania comprises a balanced combination of a fast acting solvent and a moderating agent. A thickening agent may also be present. The decalcomania may be applied to the article either before or after a glaze is applied. The use of such a bonding agent for adhering a decalcomania to ceramic ware permits glaze firing without the necessity of first removing the organic material in the decalcomania and fusing the pigment to the ceramic article by prior heat treatment." ] }
1007.1261
2952211532
Developing data mining algorithms that are suitable for cloud computing platforms is currently an active area of research, as is developing cloud computing platforms appropriate for data mining. Currently, the most common benchmark for cloud computing is the Terasort (and related) benchmarks. Although the Terasort Benchmark is quite useful, it was not designed for data mining per se. In this paper, we introduce a benchmark called MalStone that is specifically designed to measure the performance of cloud computing middleware that supports the type of data intensive computing common when building data mining models. We also introduce MalGen, which is a utility for generating data on clouds that can be used with MalStone.
The paper by Provos et al. @cite_6 describes a system for detecting drive-by malware that uses MapReduce. Specifically, MapReduce is used to extract links from a large collection of crawled web pages. These links are then analyzed using heuristics to identify a relatively small number of suspect web sites. These suspect web sites are then tested using Internet Explorer to retrieve web pages in a virtual machine that is instrumented. This allows those web sites resulting in drive-by infections to be directly monitored. In contrast, the work described in this paper is quite different. The work here uses Hadoop and MapReduce to compute the statistic from a collection of log files generated in one of the illustrative implementations of .
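As a rough illustration of the kind of log-file aggregation described above, the sketch below computes a hypothetical per-site ratio (the fraction of visiting entities later flagged as compromised) using plain-Python map and reduce functions; the log record layout, the field names, and the statistic itself are illustrative assumptions, not the definition used by MalStone or by Provos et al.

```python
# Hedged sketch of a MapReduce-style aggregation over visit logs.
# Assumed record: (entity_id, site_id, timestamp, flagged_after_visit).
# The per-site statistic computed here (share of visitors later flagged)
# is only a stand-in for the unnamed statistic in the text.
from collections import defaultdict
from typing import Iterable, Tuple

Record = Tuple[str, str, int, bool]

def map_phase(records: Iterable[Record]):
    for entity, site, _ts, flagged in records:
        yield site, (entity, flagged)

def reduce_phase(pairs) -> dict:
    by_site = defaultdict(list)
    for site, value in pairs:
        by_site[site].append(value)
    stats = {}
    for site, visits in by_site.items():
        visitors = {e for e, _ in visits}
        flagged = {e for e, f in visits if f}
        stats[site] = len(flagged) / len(visitors)
    return stats

logs = [("e1", "siteA", 10, True), ("e2", "siteA", 11, False),
        ("e3", "siteB", 12, False), ("e1", "siteB", 13, True)]
print(reduce_phase(map_phase(logs)))   # {'siteA': 0.5, 'siteB': 0.5}
```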
{ "cite_N": [ "@cite_6" ], "mid": [ "1491237615" ], "abstract": [ "As more users are connected to the Internet and conduct their daily activities electronically, computer users have become the target of an underground economy that infects hosts with malware or adware for financial gain. Unfortunately, even a single visit to an infected web site enables the attacker to detect vulnerabilities in the user's applications and force the download a multitude of malware binaries. Frequently, this malware allows the adversary to gain full control of the compromised systems leading to the ex-filtration of sensitive information or installation of utilities that facilitate remote control of the host. We believe that such behavior is similar to our traditional understanding of botnets. However, the main difference is that web-based malware infections are pull-based and that the resulting command feedback loop is looser. To characterize the nature of this rising thread, we identify the four prevalent mechanisms used to inject malicious content on popular web sites: web server security, user contributed content, advertising and third-party widgets. For each of these areas, we present examples of abuse found on the Internet. Our aim is to present the state of malware on the Web and emphasize the importance of this rising threat." ] }
1007.1261
2952211532
Developing data mining algorithms that are suitable for cloud computing platforms is currently an active area of research, as is developing cloud computing platforms appropriate for data mining. Currently, the most common benchmark for cloud computing is the Terasort (and related) benchmarks. Although the Terasort Benchmark is quite useful, it was not designed for data mining per se. In this paper, we introduce a benchmark called MalStone that is specifically designed to measure the performance of cloud computing middleware that supports the type of data intensive computing common when building data mining models. We also introduce MalGen, which is a utility for generating data on clouds that can be used with MalStone.
The paper @cite_12 describes how several standard data mining algorithms can be implemented using MapReduce, but this paper does not describe a computation similar to the statistic.
{ "cite_N": [ "@cite_12" ], "mid": [ "2109722477" ], "abstract": [ "We are at the beginning of the multicore era. Computers will have increasingly many cores (processors), but there is still no good programming framework for these architectures, and thus no simple and unified way for machine learning to take advantage of the potential speed up. In this paper, we develop a broadly applicable parallel programming method, one that is easily applied to many different learning algorithms. Our work is in distinct contrast to the tradition in machine learning of designing (often ingenious) ways to speed up a single algorithm at a time. Specifically, we show that algorithms that fit the Statistical Query model [15] can be written in a certain \"summation form,\" which allows them to be easily parallelized on multicore computers. We adapt Google's map-reduce [7] paradigm to demonstrate this parallel speed up technique on a variety of learning algorithms including locally weighted linear regression (LWLR), k-means, logistic regression (LR), naive Bayes (NB), SVM, ICA, PCA, gaussian discriminant analysis (GDA), EM, and backpropagation (NN). Our experimental results show basically linear speedup with an increasing number of processors." ] }
1007.1345
1862426967
In this paper we propose an improved approximation scheme for the Vector Bin Packing problem (VBP), based on the combination of (near-)optimal solution of the Linear Programming (LP) relaxation and a greedy (modified first-fit) heuristic. The Vector Bin Packing problem of higher dimension (d 2) is not known to have asymptotic polynomial-time approximation schemes (unless P = NP). Our algorithm improves over the previously-known guarantee of (ln d + 1 + epsilon) by [1] for higher dimensions (d > 2). We provide a (1) approximation scheme for certain set of inputs for any dimension d. More precisely, we provide a 2-OPT algorithm, a result which is irrespective of the number of dimensions d.
The one-dimensional bin packing problem has been studied extensively. Fernandez de la Vega and Lueker @cite_0 gave the first asymptotic polynomial-time approximation scheme (APTAS). They put forward a rounding technique that allowed them to reduce the problem of packing large items to finding an optimum packing of just a constant number of items (at a cost of @math times OPT). Their algorithm was later improved by Karmarkar and Karp @cite_6 to a (1+ @math )-OPT bound.
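For context, the greedy baseline mentioned in the abstract (a modified first-fit heuristic) is easy to state; the following Python sketch implements plain first-fit decreasing for the one-dimensional case only, and is not the rounding-based APTAS of the cited works.

```python
# Hedged sketch: first-fit decreasing for 1-D bin packing with unit-capacity
# bins.  This is the simple greedy baseline, not the APTAS of Fernandez de la
# Vega-Lueker or the Karmarkar-Karp refinement discussed above.
from typing import List

def first_fit_decreasing(items: List[float]) -> List[List[float]]:
    bins: List[List[float]] = []
    loads: List[float] = []
    for item in sorted(items, reverse=True):
        for i, load in enumerate(loads):
            if load + item <= 1.0 + 1e-12:        # fits in an existing bin
                bins[i].append(item)
                loads[i] += item
                break
        else:                                      # open a new bin
            bins.append([item])
            loads.append(item)
    return bins

print(first_fit_decreasing([0.5, 0.7, 0.5, 0.2, 0.4, 0.2]))
# [[0.7, 0.2], [0.5, 0.5], [0.4, 0.2]] -> 3 bins
```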
{ "cite_N": [ "@cite_0", "@cite_6" ], "mid": [ "2145028977", "2123068254" ], "abstract": [ "For any listL ofn numbers in (0, 1) letL* denote the minimum number of unit capacity bins needed to pack the elements ofL. We prove that, for every positive e, there exists anO(n)-time algorithmS such that, ifS(L) denotes the number of bins used byS forL, thenS(L) L*≦1+e for anyL providedL* is sufficiently large.", "We present several polynomial-time approximation algorithms for the one-dimensional bin-packing problem. using a subroutine to solve a certain linear programming relaxation of the problem. Our main results are as follows: There is a polynomial-time algorithm A such that A(I) ≤ OPT(I) + O(log2 OPT(I)). There is a polynomial-time algorithm A such that, if m(I) denotes the number of distinct sizes of pieces occurring in instance I, then A(I) ≤ OPT(I) + O(log2 m(I)). There is an approximation scheme which accepts as input an instance I and a positive real number e, and produces as output a packing using as most (1 + e) OPT(I) + O(e-2) bins. Its execution time is O(e-c n log n), where c is a constant. These are the best asymptotic performance bounds that have been achieved to date for polynomial-time bin-packing. Each of our algorithms makes at most O(log n) calls on the LP relaxation subroutine and takes at most O(n log n) time for other operations. The LP relaxation of bin packing was solved efficiently in practice by Gilmore and Gomory. We prove its membership in P, despite the fact that it has an astronomically large number of variables." ] }
1007.1501
2951529406
In revenue maximization of selling a digital product in a social network, the utility of an agent is often considered to have two parts: a private valuation, and linearly additive influences from other agents. We study the incomplete information case where agents know a common distribution about others' private valuations, and make decisions simultaneously. The "rational behavior" of agents in this case is captured by the well-known Bayesian Nash equilibrium. Two challenging questions arise: how to compute an equilibrium and how to optimize a pricing strategy accordingly to maximize the revenue assuming agents follow the equilibrium? In this paper, we mainly focus on the natural model where the private valuation of each agent is sampled from a uniform distribution, which turns out to be already challenging. Our main result is a polynomial-time algorithm that can exactly compute the equilibrium and the optimal price, when pairwise influences are non-negative. If negative influences are allowed, computing any equilibrium even approximately is PPAD-hard. Our algorithm can also be used to design an FPTAS for optimizing discriminative price profile.
@cite_4 studied the properties of the optimal prices over time with network externality and strategic agents. They show that the seller might set a low introductory price to attract a critical mass of agents. Another notable body of work in computer science is the influence maximization problem (e.g. @cite_9 and @cite_13 ), in which a set of @math seeds is selected to maximize the total influence according to some stochastic propagation model.
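Since the seed-selection problem is only mentioned in passing, a brief sketch may help: the following Python code greedily selects k seeds under a Monte-Carlo-estimated independent cascade model. The toy graph, the edge probabilities, and the simulation budget are illustrative assumptions rather than the setup of the cited papers.

```python
# Hedged sketch: greedy seed selection for influence maximization under an
# independent cascade model, with Monte Carlo estimation of the spread.
# Graph, edge probabilities and simulation budget are made up for illustration.
import random

def simulate_spread(graph, seeds, rng):
    active, frontier = set(seeds), list(seeds)
    while frontier:
        nxt = []
        for u in frontier:
            for v, p in graph.get(u, []):
                if v not in active and rng.random() < p:
                    active.add(v)
                    nxt.append(v)
        frontier = nxt
    return len(active)

def greedy_seeds(graph, k, trials=200, seed=0):
    rng = random.Random(seed)
    nodes = set(graph) | {v for nbrs in graph.values() for v, _ in nbrs}
    chosen = set()
    for _ in range(k):
        best, best_gain = None, -1.0
        for cand in nodes - chosen:
            est = sum(simulate_spread(graph, chosen | {cand}, rng)
                      for _ in range(trials)) / trials
            if est > best_gain:
                best, best_gain = cand, est
        chosen.add(best)
    return chosen

toy = {"a": [("b", 0.4), ("c", 0.4)], "b": [("d", 0.3)], "c": [("d", 0.3)], "d": []}
print(greedy_seeds(toy, k=2))
```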
{ "cite_N": [ "@cite_13", "@cite_9", "@cite_4" ], "mid": [ "", "2097568888", "1989863137" ], "abstract": [ "", "Selfish routing is a classical mathematical model of how self-interested users might route traffic through a congested network. The outcome of selfish routing is generally inefficient, in that it fails to optimize natural objective functions. The price of anarchy is a quantitative measure of this inefficiency. We survey recent work that analyzes the price of anarchy of selfish routing. We also describe related results on bounding the worst-possible severity of a phenomenon called Braess's Paradox, and on three techniques for reducing the price of anarchy of selfish routing. This survey concentrates on the contributions of the author's PhD thesis, but also discusses several more recent results in the area.", "Abstract How should a monopolist price a durable good or a new technology that is subject to network externalities? In particular, should the monopolist set a low “introductory price” to attract a “critical mass” of adopters? In this paper, we provide intuition as to when and why introductory pricing might occur in the presence of network externalities. Incomplete information about demand or asymmetric information about costs is necessary for introductory pricing to occur in equilibrium when consumers are small." ] }
1007.1501
2951529406
In revenue maximization of selling a digital product in a social network, the utility of an agent is often considered to have two parts: a private valuation, and linearly additive influences from other agents. We study the incomplete information case where agents know a common distribution about others' private valuations, and make decisions simultaneously. The "rational behavior" of agents in this case is captured by the well-known Bayesian Nash equilibrium. Two challenging questions arise: how to compute an equilibrium and how to optimize a pricing strategy accordingly to maximize the revenue assuming agents follow the equilibrium? In this paper, we mainly focus on the natural model where the private valuation of each agent is sampled from a uniform distribution, which turns out to be already challenging. Our main result is a polynomial-time algorithm that can exactly compute the equilibrium and the optimal price, when pairwise influences are non-negative. If negative influences are allowed, computing any equilibrium even approximately is PPAD-hard. Our algorithm can also be used to design an FPTAS for optimizing discriminative price profile.
The concept and existence of pessimistic and optimistic equilibria is not new. For instance, in analogous problems with externalities, Milgrom and Roberts @cite_10 and Vives @cite_2 have established the existence of such equilibria in the complete information setting. Notice that our pricing problem, when restricted to complete information, can be trivially solved by an iterative method.
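The iterative method alluded to for the complete-information case can be made concrete with a small sketch. Assuming the additive utility form from the abstract with non-negative influences, repeatedly adding every agent whose private value plus current influence meets the price is a monotone operation and therefore converges; starting from the empty buyer set yields the least (pessimistic) fixed point, while starting from the full set would yield the optimistic one. The utility form and the starting point are assumptions spelled out in the comments.

```python
# Hedged sketch: computing the pessimistic (least) equilibrium of the
# complete-information game by iteration.  Assumed utility: agent i buys at
# price p iff  v[i] + sum of non-negative influences from current buyers >= p.
# Starting from the empty buyer set and only ever adding agents is monotone,
# so the iteration converges to the least fixed point.
from typing import Dict, List, Set

def pessimistic_equilibrium(v: List[float],
                            w: Dict[int, Dict[int, float]],
                            price: float) -> Set[int]:
    buyers: Set[int] = set()
    changed = True
    while changed:
        changed = False
        for i in range(len(v)):
            if i in buyers:
                continue
            influence = sum(w.get(i, {}).get(j, 0.0) for j in buyers)
            if v[i] + influence >= price:
                buyers.add(i)
                changed = True
    return buyers

values = [0.9, 0.4, 0.2]
influences = {1: {0: 0.5}, 2: {1: 0.7}}   # w[i][j]: influence of j's purchase on i
print(pessimistic_equilibrium(values, influences, price=0.8))   # {0, 1, 2}
```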
{ "cite_N": [ "@cite_10", "@cite_2" ], "mid": [ "2130752181", "2132275236" ], "abstract": [ "The authors study a rich class of noncooperative games that includes models of oligopoly competition, macroeconomic coordination failures, arms races, bank runs, technology adoption and diffusion, R&D competition, pretrial bargaining, coordination in teams, and many others. For all these games, the sets of pure strategy Nash equilibria, correlated equilibria, and rationalizable strategies have identical bounds. Also, for a class of models of dynamic adaptive choice behavior that encompasses both best-response dynamics and Bayesian learning, the players' choices lie eventually within the same bounds. These bounds are shown to vary monotonically with certain exogenous parameters. Copyright 1990 by The Econometric Society.", "Abstract Using lattice-theoretical methods, we analyze the existence and order structure of Nash equilibria of non-cooperative games where payoffs satisfy certain monotonicity properties (which are directly related to strategic complementarities) but need not be quasiconcave. In games with strategic complementarities the equilibrium set is always non-empty and has an order structure which ranges from the existence of a minimum and a maxinum element to being a complete lattice. Some stability properties of equilibria are also pointed out." ] }
1007.1501
2951529406
In revenue maximization of selling a digital product in a social network, the utility of an agent is often considered to have two parts: a private valuation, and linearly additive influences from other agents. We study the incomplete information case where agents know a common distribution about others' private valuations, and make decisions simultaneously. The "rational behavior" of agents in this case is captured by the well-known Bayesian Nash equilibrium. Two challenging questions arise: how to compute an equilibrium and how to optimize a pricing strategy accordingly to maximize the revenue assuming agents follow the equilibrium? In this paper, we mainly focus on the natural model where the private valuation of each agent is sampled from a uniform distribution, which turns out to be already challenging. Our main result is a polynomial-time algorithm that can exactly compute the equilibrium and the optimal price, when pairwise influences are non-negative. If negative influences are allowed, computing any equilibrium even approximately is PPAD-hard. Our algorithm can also be used to design an FPTAS for optimizing discriminative price profile.
Hartline, Mirrokni and Sundararajan @cite_1 study the explore and exploit framework. In their model the seller offers the product to the agents in a sequential manner, and assumes all agents are myopic, i.e., each agent makes its decision based on the known results of the previous agents in the sequence. As they have pointed out, if the pricing strategy of the seller and the private value distributions of the subsequent agents are publicly known, the agents can make more "informed" decisions than the myopic ones. In contrast to them, we consider "perfectly rational" agents in the simultaneous-move game, where agents make decisions in anticipation of what others may do, given their beliefs about other agents' valuations.
{ "cite_N": [ "@cite_1" ], "mid": [ "2110373679" ], "abstract": [ "We discuss the use of social networks in implementing viral marketing strategies. While influence maximization has been studied in this context (see Chapter 24 of [10]), we study revenue maximization, arguably, a more natural objective. In our model, a buyer's decision to buy an item is influenced by the set of other buyers that own the item and the price at which the item is offered. We focus on algorithmic question of finding revenue maximizing marketing strategies. When the buyers are completely symmetric, we can find the optimal marketing strategy in polynomial time. In the general case, motivated by hardness results, we investigate approximation algorithms for this problem. We identify a family of strategies called influence-and-exploit strategies that are based on the following idea: Initially influence the population by giving the item for free to carefully a chosen set of buyers. Then extract revenue from the remaining buyers using a 'greedy' pricing strategy. We first argue why such strategies are reasonable and then show how to use recently developed set-function maximization techniques to find the right set of buyers to influence." ] }
1007.1501
2951529406
In revenue maximization of selling a digital product in a social network, the utility of an agent is often considered to have two parts: a private valuation, and linearly additive influences from other agents. We study the incomplete information case where agents know a common distribution about others' private valuations, and make decisions simultaneously. The "rational behavior" of agents in this case is captured by the well-known Bayesian Nash equilibrium. Two challenging questions arise: how to compute an equilibrium and how to optimize a pricing strategy accordingly to maximize the revenue assuming agents follow the equilibrium? In this paper, we mainly focus on the natural model where the private valuation of each agent is sampled from a uniform distribution, which turns out to be already challenging. Our main result is a polynomial-time algorithm that can exactly compute the equilibrium and the optimal price, when pairwise influences are non-negative. If negative influences are allowed, computing any equilibrium even approximately is PPAD-hard. Our algorithm can also be used to design an FPTAS for optimizing discriminative price profile.
@cite_0 also use the explore and exploit framework, and study a similar problem; potential buyers do not arrive sequentially as in @cite_1 , but can choose to buy the product with some probability only when recommended by friends.
{ "cite_N": [ "@cite_0", "@cite_1" ], "mid": [ "2952466854", "2110373679" ], "abstract": [ "We study the use of viral marketing strategies on social networks to maximize revenue from the sale of a single product. We propose a model in which the decision of a buyer to buy the product is influenced by friends that own the product and the price at which the product is offered. The influence model we analyze is quite general, naturally extending both the Linear Threshold model and the Independent Cascade model, while also incorporating price information. We consider sales proceeding in a cascading manner through the network, i.e. a buyer is offered the product via recommendations from its neighbors who own the product. In this setting, the seller influences events by offering a cashback to recommenders and by setting prices (via coupons or discounts) for each buyer in the social network. Finding a seller strategy which maximizes the expected revenue in this setting turns out to be NP-hard. However, we propose a seller strategy that generates revenue guaranteed to be within a constant factor of the optimal strategy in a wide variety of models. The strategy is based on an influence-and-exploit idea, and it consists of finding the right trade-off at each time step between: generating revenue from the current user versus offering the product for free and using the influence generated from this sale later in the process. We also show how local search can be used to improve the performance of this technique in practice.", "We discuss the use of social networks in implementing viral marketing strategies. While influence maximization has been studied in this context (see Chapter 24 of [10]), we study revenue maximization, arguably, a more natural objective. In our model, a buyer's decision to buy an item is influenced by the set of other buyers that own the item and the price at which the item is offered. We focus on algorithmic question of finding revenue maximizing marketing strategies. When the buyers are completely symmetric, we can find the optimal marketing strategy in polynomial time. In the general case, motivated by hardness results, we investigate approximation algorithms for this problem. We identify a family of strategies called influence-and-exploit strategies that are based on the following idea: Initially influence the population by giving the item for free to carefully a chosen set of buyers. Then extract revenue from the remaining buyers using a 'greedy' pricing strategy. We first argue why such strategies are reasonable and then show how to use recently developed set-function maximization techniques to find the right set of buyers to influence." ] }
1007.1501
2951529406
In revenue maximization of selling a digital product in a social network, the utility of an agent is often considered to have two parts: a private valuation, and linearly additive influences from other agents. We study the incomplete information case where agents know a common distribution about others' private valuations, and make decisions simultaneously. The "rational behavior" of agents in this case is captured by the well-known Bayesian Nash equilibrium. Two challenging questions arise: how to compute an equilibrium and how to optimize a pricing strategy accordingly to maximize the revenue assuming agents follow the equilibrium? In this paper, we mainly focus on the natural model where the private valuation of each agent is sampled from a uniform distribution, which turns out to be already challenging. Our main result is a polynomial-time algorithm that can exactly compute the equilibrium and the optimal price, when pairwise influences are non-negative. If negative influences are allowed, computing any equilibrium even approximately is PPAD-hard. Our algorithm can also be used to design an FPTAS for optimizing discriminative price profile.
Recently, @cite_20 consider a multi-stage model in which the seller sets a different price for each stage. In contrast to @cite_1 , within each stage agents are "perfectly rational", which is characterized by the pessimistic equilibrium in our setting with complete information. As mentioned in @cite_20 , they did not consider the case where a rational agent may defer her decision to later stages in order to improve her utility.
{ "cite_N": [ "@cite_1", "@cite_20" ], "mid": [ "2110373679", "141957916" ], "abstract": [ "We discuss the use of social networks in implementing viral marketing strategies. While influence maximization has been studied in this context (see Chapter 24 of [10]), we study revenue maximization, arguably, a more natural objective. In our model, a buyer's decision to buy an item is influenced by the set of other buyers that own the item and the price at which the item is offered. We focus on algorithmic question of finding revenue maximizing marketing strategies. When the buyers are completely symmetric, we can find the optimal marketing strategy in polynomial time. In the general case, motivated by hardness results, we investigate approximation algorithms for this problem. We identify a family of strategies called influence-and-exploit strategies that are based on the following idea: Initially influence the population by giving the item for free to carefully a chosen set of buyers. Then extract revenue from the remaining buyers using a 'greedy' pricing strategy. We first argue why such strategies are reasonable and then show how to use recently developed set-function maximization techniques to find the right set of buyers to influence.", "We study the optimal pricing for revenue maximization over social networks in the presence of positive network externalities. In our model, the value of a digital good for a buyer is a function of the set of buyers who have already bought the item. In this setting, a decision to buy an item depends on its price and also on the set of other buyers that have already owned that item. The revenue maximization problem in the context of social networks has been studied by Hartline, Mirrokni, and Sundararajan [4], following the previous line of research on optimal viral marketing over social networks [5,6,7]. We consider the Bayesian setting in which there are some prior knowledge of the probability distribution on the valuations of buyers. In particular, we study two iterative pricing models in which a seller iteratively posts a new price for a digital good (visible to all buyers). In one model, re-pricing of the items are only allowed at a limited rate. For this case, we give a FPTAS for the optimal pricing strategy in the general case. In the second model, we allow very frequent re-pricing of the items. We show that the revenue maximization problem in this case is inapproximable even for simple deterministic valuation functions. In the light of this hardness result, we present constant and logarithmic approximation algorithms when the individual distributions are identical." ] }
1007.1501
2951529406
In revenue maximization of selling a digital product in a social network, the utility of an agent is often considered to have two parts: a private valuation, and linearly additive influences from other agents. We study the incomplete information case where agents know a common distribution about others' private valuations, and make decisions simultaneously. The "rational behavior" of agents in this case is captured by the well-known Bayesian Nash equilibrium. Two challenging questions arise: how to compute an equilibrium and how to optimize a pricing strategy accordingly to maximize the revenue assuming agents follow the equilibrium? In this paper, we mainly focus on the natural model where the private valuation of each agent is sampled from a uniform distribution, which turns out to be already challenging. Our main result is a polynomial-time algorithm that can exactly compute the equilibrium and the optimal price, when pairwise influences are non-negative. If negative influences are allowed, computing any equilibrium even approximately is PPAD-hard. Our algorithm can also be used to design an FPTAS for optimizing discriminative price profile.
If the value of the product does not exhibit social influence, the seller can maximize the revenue by following the optimal auction of the seminal work of Myerson @cite_5 . Truthful auction mechanisms have also been studied for digital goods, where one can achieve a constant fraction of the profit of the optimal fixed price @cite_21 @cite_19 . As for computing equilibria of problems that are guaranteed to reach an equilibrium through iterative methods, many of them, for instance the famous congestion game, are proved to be PLS-hard @cite_7 .
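In the single-buyer special case, Myerson's result reduces to posting the price p that maximizes p(1 - F(p)); the sketch below confirms this numerically for a value distributed uniformly on [0, 1] (where the optimum is 1/2). The uniform distribution and the grid search are our illustrative choices, not the general mechanism of the cited work.

```python
# Hedged sketch: optimal posted price for one buyer with value ~ U[0, 1],
# i.e. the price p maximizing expected revenue p * (1 - F(p)).  For U[0, 1]
# this is p = 1/2; the grid search below just confirms it numerically.
def optimal_price_uniform(steps: int = 10_000) -> float:
    best_p, best_rev = 0.0, 0.0
    for k in range(steps + 1):
        p = k / steps
        rev = p * (1.0 - p)          # F(p) = p on [0, 1]
        if rev > best_rev:
            best_p, best_rev = p, rev
    return best_p

print(optimal_price_uniform())       # approximately 0.5
```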
{ "cite_N": [ "@cite_19", "@cite_5", "@cite_21", "@cite_7" ], "mid": [ "1994475402", "2029050771", "1480158300", "2145297839" ], "abstract": [ "We investigate the class of single-round, sealed-bid auctions for a set of identical items to bidders who each desire one unit. We adopt the worst-case competitive framework defined by [9, 5] that compares the profit of an auction to that of an optimal single-price sale of least two items. In this paper, we first derive an optimal auction for three items, answering an open question from [8]. Second, we show that the form of this auction is independent of the competitive framework used. Third, we propose a schema for converting a given limited-supply auction into an unlimited supply auction. Applying this technique to our optimal auction for three items, we achieve an auction with a competitive ratio of 3.25, which improves upon the previously best-known competitive ratio of 3.39 from [7]. Finally, we generalize a result from [8] and extend our understanding of the nature of the optimal competitive auction by showing that the optimal competitive auction occasionally offers prices that are higher than all bid values.", "This paper considers the problem faced by a seller who has a single object to sell to one of several possible buyers, when the seller has imperfect information about how much the buyers might be willing to pay for the object. The seller's problem is to design an auction game which has a Nash equilibrium giving him the highest possible expected utility. Optimal auctions are derived in this paper for a wide class of auction design problems.", "We study a class of single-round, sealed-bid auctions for a set of identical items. We adopt the worst case competitive framework defined by [6,3] that compares the profit of an auction to that of an optimal single price sale to at least two bidders. In this framework, we give a lower bound of 2.42 (an improvement from the bound of 2 given in [3]) on the competitive ratio of any truthful auction, one where each bidders best strategy is to declare the true maximum value an item is worth to them. This result contrasts with the 3.39 competitive ratio of the best known truthful auction [4].", "We investigate from the computational viewpoint multi-player games that are guaranteed to have pure Nash equilibria. We focus on congestion games, and show that a pure Nash equilibrium can be computed in polynomial time in the symmetric network case, while the problem is PLS-complete in general. We discuss implications to non-atomic congestion games, and we explore the scope of the potential function method for proving existence of pure Nash equilibria." ] }
1007.1604
2096065541
We study the dynamics of information (or virus) dissemination by @math mobile agents performing independent random walks on an @math -node grid. We formulate our results in terms of two scenarios: broadcasting and gossiping. In the broadcasting scenario, the mobile agents are initially placed uniformly at random among the grid nodes. At time 0, one agent is informed of a rumor and starts a random walk. When an informed agent meets an uninformed agent, the latter becomes informed and starts a new random walk. We study the broadcasting time of the system, that is, the time it takes for all agents to know the rumor. In the gossiping scenario, each agent is given a distinct rumor at time 0 and all agents start random walks. When two agents meet, they share all rumors they are aware of. We study the gossiping time of the system, that is, the time it takes for all agents to know all rumors. We prove that both the broadcasting and the gossiping times are @math w.h.p., thus achieving a tight characterization up to logarithmic factors. Previous results for the grid provided bounds which were weaker and only concerned average times. In the context of virus infection, a corollary of our results is that static and dynamically moving agents are infected at about the same speed.
With the advent of mobile ad-hoc networks there has been growing interest in studying information dissemination in dynamic scenarios, where a number of agents move either in a continuous space or along the nodes of some underlying graph and exchange information when their positions satisfy a specified proximity constraint. In @cite_15 @cite_2 the authors study the time it takes to broadcast information from one of @math mobile agents to all others. The agents move on a square grid of @math nodes and in each time step, an agent can (a) exchange information with all agents at distance at most @math from it, and (b) move to any random node at distance at most @math from its current position. The results in these papers only apply to a very dense scenario where the number of agents is linear in the number of grid nodes (i.e., @math ). They show that the broadcasting time is @math w.h.p., when @math and @math @cite_15 , and it is @math w.h.p., when @math @cite_2 . These results crucially rely on @math , which implies that the range of agents' communications or movements at each step defines a connected graph.
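To make the broadcasting scenario concrete, the following simulation places k agents on a torus grid, lets them perform independent random walks, and counts the steps until every agent is informed; the torus wrap-around, the contact rule (agents meeting on the same node), and the parameter values are simplifying assumptions, not the exact models of the cited works.

```python
# Hedged sketch: simulate rumor broadcasting among k agents performing
# independent random walks on a sqrt(n) x sqrt(n) torus.  Agents that land on
# the same node share the rumor.  Parameters and the torus wrap-around are
# illustrative simplifications of the grid models discussed above.
import random

def broadcast_time(side: int, k: int, seed: int = 0) -> int:
    rng = random.Random(seed)
    pos = [(rng.randrange(side), rng.randrange(side)) for _ in range(k)]
    informed = [False] * k
    informed[0] = True                     # agent 0 starts with the rumor
    steps = 0
    moves = [(0, 1), (0, -1), (1, 0), (-1, 0)]
    while not all(informed):
        for i, (x, y) in enumerate(pos):
            dx, dy = rng.choice(moves)
            pos[i] = ((x + dx) % side, (y + dy) % side)
        occupied = {}
        for i, p in enumerate(pos):
            occupied.setdefault(p, []).append(i)
        for agents in occupied.values():   # co-located agents exchange the rumor
            if any(informed[i] for i in agents):
                for i in agents:
                    informed[i] = True
        steps += 1
    return steps

print(broadcast_time(side=20, k=40))       # n = 400 grid nodes, 40 agents
```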
{ "cite_N": [ "@cite_15", "@cite_2" ], "mid": [ "2952627339", "2152628545" ], "abstract": [ "Markovian evolving graphs are dynamic-graph models where the links among a fixed set of nodes change during time according to an arbitrary Markovian rule. They are extremely general and they can well describe important dynamic-network scenarios. We study the speed of information spreading in the \"stationary phase\" by analyzing the completion time of the \"flooding mechanism\". We prove a general theorem that establishes an upper bound on flooding time in any stationary Markovian evolving graph in terms of its node-expansion properties. We apply our theorem in two natural and relevant cases of such dynamic graphs. \"Geometric Markovian evolving graphs\" where the Markovian behaviour is yielded by \"n\" mobile radio stations, with fixed transmission radius, that perform independent random walks over a square region of the plane. \"Edge-Markovian evolving graphs\" where the probability of existence of any edge at time \"t\" depends on the existence (or not) of the same edge at time \"t-1\". In both cases, the obtained upper bounds hold \"with high probability\" and they are nearly tight. In fact, they turn out to be tight for a large range of the values of the input parameters. As for geometric Markovian evolving graphs, our result represents the first analytical upper bound for flooding time on a class of concrete mobile networks.", "We consider Mobile Ad-hoc NETworks (MANETs) formed by n nodes that move independently at random over a finite square region of the plane. Nodes exchange data if they are at distance at most r within each other, where r > 0 is the node transmission radius . The flooding time is the number of time steps required to broadcast a message from a source node to every node of the network. Flooding time is an important measure of the speed of information spreading in dynamic networks. We derive a nearly-tight upper bound on the flooding time which is a decreasing function of the maximal velocity of the nodes. It turns out that, when the node velocity is \"sufficiently\" high, even if the node transmission radius r is far below the connectivity threshold , the flooding time does not asymptotically depend on r . So, flooding can be very fast even though every snapshot (i.e. the static random geometric graph at any fixed time) of the MANET is fully disconnected. Our result is the first analytical evidence of the fact that high, random node mobility strongly speed-up information spreading and, at the same time, let nodes save energy ." ] }
1007.1604
2096065541
We study the dynamics of information (or virus) dissemination by @math mobile agents performing independent random walks on an @math -node grid. We formulate our results in terms of two scenarios: broadcasting and gossiping. In the broadcasting scenario, the mobile agents are initially placed uniformly at random among the grid nodes. At time 0, one agent is informed of a rumor and starts a random walk. When an informed agent meets an uninformed agent, the latter becomes informed and starts a new random walk. We study the broadcasting time of the system, that is, the time it takes for all agents to know the rumor. In the gossiping scenario, each agent is given a distinct rumor at time 0 and all agents start random walks. When two agents meet, they share all rumors they are aware of. We study the gossiping time of the system, that is, the time it takes for all agents to know all rumors. We prove that both the broadcasting and the gossiping times are @math w.h.p., thus achieving a tight characterization up to logarithmic factors. Previous results for the grid provided bounds which were weaker and only concerned average times. In the context of virus infection, a corollary of our results is that static and dynamically moving agents are infected at about the same speed.
Finally, a related line of research deals with the cover time of a random walk on a graph, that is, the expected time by which all of the graph nodes have been visited by the random walk. (See @cite_14 for a comprehensive account of the relevant literature.) The cover time is strictly related to the hitting time @cite_6 , namely the average time required by a random walk to reach a specified node. For @math -node meshes, it is known that the hitting time is @math , while the cover time is @math @cite_8 @cite_3 . Bounds on the speed-up achieved on the cover time by multiple random walks as opposed to a single one are proved in @cite_4 @cite_5 .
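The cover-time notion can likewise be illustrated with a short simulation of a single random walk; the sketch below uses a two-dimensional torus as a simplification of the mesh, and the reported number is one empirical sample rather than the expectation.

```python
# Hedged sketch: empirically sample the cover time of one random walk on a
# sqrt(n) x sqrt(n) torus (a simplification of the mesh discussed above).
import random

def cover_time(side: int, seed: int = 1) -> int:
    rng = random.Random(seed)
    x, y = 0, 0
    visited = {(0, 0)}
    steps = 0
    while len(visited) < side * side:
        dx, dy = rng.choice([(0, 1), (0, -1), (1, 0), (-1, 0)])
        x, y = (x + dx) % side, (y + dy) % side
        visited.add((x, y))
        steps += 1
    return steps

print(cover_time(side=16))   # one sample of the cover time for n = 256 nodes
```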
{ "cite_N": [ "@cite_14", "@cite_4", "@cite_8", "@cite_3", "@cite_6", "@cite_5" ], "mid": [ "", "2160504298", "2138144746", "2048572907", "2094887806", "1564693713" ], "abstract": [ "", "We pose a new and intriguing question motivated by distributed computing regarding random walks on graphs: How long does it take for several independent random walks, starting from the same vertex, to cover an entire graph? We study the cover time - the expected time required to visit every node in a graph at least once - and we show that for a large collection of interesting graphs, running many random walks in parallel yields a speed-up in the cover time that is linear in the number of parallel walks. We demonstrate that an exponential speed-up is sometimes possible, but that some natural graphs allow only a logarithmic speed-up. A problem related to ours (in which the walks start from some probablistic distribution on vertices) was previously studied in the context of space efficient algorithms for undirected s-t-connectivity and our results yield, in certain cases, an improvement upon some of the earlier bounds.", "A general technique for proving lower bounds on expected covering times of random walks on graphs in terms of expected hitting times between vertices is given. This technique is used to prove (i) A tight bound of @math for the two-dimensional torus; (ii) A tight bound of @math for trees with maximum degree @math ; (iii) Tight bounds of @math for rapidly mixing walks on vertex transitive graphs, where @math denotes the maximum expected hitting time between vertices.In addition to these new results, the technique allows several known lower bounds on cover times to be systematically proved, often in a much simpler way.Finally, a different technique is used to prove an @math lower bound on the cover time, where @math is the second largest eigenvalue of the transition matrix. This was previously known only in the case where the walk starts in the stationary distribution [...", "", "[20th Annual Symposium on Foundations of Computer Science, IEEE Computer Society Press, Los Alamitos, CA, 1979, pp. 218--223] posed the following question: \"The reachability problem for undirected graphs can be solved in log space and @math time [ @math is the number of edges and @math is the number of vertices] by a probabilistic algorithm that simulates a random walk, or in linear time and space by a conventional deterministic graph traversal algorithm. Is there a spectrum of time-space trade-offs between these extremes?\" This question is answered in the affirmative for sparse graphs by presentation of an algorithm that is faster than the random walk by a factor essentially proportional to the size of its workspace. For denser graphs, this algorithm is faster than the random walk but the speed-up factor is smaller.", "We study the cover time of multiple random walks. Given a graph G of n vertices, assume that k independent random walks start from the same vertex. The parameter of interest is the speed-up defined as the ratio between the cover time of one and the cover time of k random walks. Recently developed several bounds that are based on the quotient between the cover time and maximum hitting times. Their technique gives a speed-up of *** (k ) on many graphs, however, for many graph classes, k has to be bounded by @math . They also conjectured that, for any 1 ≤ k ≤ n , the speed-up is at most @math on any graph. 
As our main results, we prove the following: We present a new lower bound on the speed-up that depends on the mixing-time. It gives a speed-up of *** (k ) on many graphs, even if k is as large as n . We prove that the speed-up is @math on any graph. Under rather mild conditions, we can also improve this bound to @math , matching exactly the conjecture of We find the correct order of the speed-up for any value of 1 ≤ k ≤ n on hypercubes, random graphs and expanders. For d -dimensional torus graphs (d > 2), our bounds are tight up to a factor of @math . Our findings also reveal a surprisingly sharp dichotomy on several graphs (including d -dim. torus and hypercubes): up to a certain threshold the speed-up is k , while there is no additional speed-up above the threshold." ] }
1007.0159
1490959288
Often, when modelling a system there are properties and operations that are related to a group of objects rather than to a single object. In this paper we extend Java with Swarm Behavior, a new composition operator that associates behavior with a collection of instances. The lookup resolution of swarm behavior is based on the element type of a collection and is thus orthogonal to the collection hierarchy.
Swarm Behavior is not the same as array programming (projection of message sends). The principle behind array programming is that the same operation is applied to an entire array of data, without the need for explicit loops. Apart from ancient languages such as APL, or mathematical software such as MATLAB and Mathematica, array programming has been recently applied in the context of dynamic languages by FScript @cite_1 , a Smalltalk-based scripting language for OSX, and by the ECMAScript for XML (E4X) specification, an extension of JavaScript.
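To illustrate what "projection of message sends" means in practice, here is a tiny Python sketch in which a wrapper forwards a method call to every element of a collection; it mirrors the array-programming idea only loosely and is not FScript or E4X syntax, and the class and method names are invented for the example.

```python
# Hedged sketch: projecting a message send over a whole collection, in the
# spirit of array programming (loosely modelled on FScript-style projection,
# not actual FScript or E4X syntax).
class Each:
    def __init__(self, items):
        self._items = items

    def __getattr__(self, name):
        def broadcast(*args, **kwargs):
            # One logical "send" is applied to every element of the collection.
            return [getattr(item, name)(*args, **kwargs) for item in self._items]
        return broadcast

class Account:
    def __init__(self, balance):
        self.balance = balance
    def deposit(self, amount):
        self.balance += amount
        return self.balance

accounts = [Account(10), Account(20), Account(30)]
print(Each(accounts).deposit(5))   # [15, 25, 35]
```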
{ "cite_N": [ "@cite_1" ], "mid": [ "2027416917" ], "abstract": [ "Array programming shines in its ability to express computations at a high-level of abstraction, allowing one to manipulate and query whole sets of data at once. This paper presents the OPA model that enhances object-oriented programming with array programming features. The goal of OPA is to determine a minimum set of modifications that must be made to the traditional object model in order to take advantage of the possibilities of array programming. It is based on a minimal extension of method invocation and the definition of a kernel of methods implementing fundamental array programming operations. The OPA model presents a generalization of traditional message passing in the sense that a message can be send to an entire set of objects. The model is validated in FS, a new scripting language." ] }
1007.0159
1490959288
Often, when modelling a system there are properties and operations that are related to a group of objects rather than to a single object. In this paper we extend Java with Swarm Behavior, a new composition operator that associates behavior with a collection of instances. The lookup resolution of swarm behavior is based on the element type of a collection and is thus orthogonal to the collection hierarchy.
Traits @cite_7 are collections of methods (behavior) that can be composed into classes. Traits are normally seen as entities that are composed by the developer when designing the system. More dynamic notions of traits have been explored where traits are installed or retracted at runtime @cite_2 . As traits are applied to classes, they provide behavior to all instances of the class, similar to normal methods. Thus traits alone do not help to model the behavior of a collection of objects. It would be interesting to explore a combination of traits with swarm behavior. Traits could be used to structure the swarm behavior. As classes are composed of traits, groups could be composed from traits to support reuse and limit code-duplication.
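Since Python has no trait construct, the sketch below approximates the idea with stateless mixin classes composed into a class; it is meant only to illustrate composing collections of methods, not the trait semantics of the cited work (in particular it has no conflict resolution or flattening), and the class names are invented.

```python
# Hedged sketch: trait-like reuse via stateless mixins.  This only illustrates
# composing collections of methods into classes; real traits additionally give
# explicit conflict resolution and flattening, which mixins do not.
class ComparableTrait:
    def __gt__(self, other):  return other < self
    def __ge__(self, other):  return not (self < other)
    def __le__(self, other):  return not (other < self)

class PrintableTrait:
    def describe(self):
        return f"{type(self).__name__}({vars(self)})"

class Money(ComparableTrait, PrintableTrait):
    def __init__(self, amount):
        self.amount = amount
    def __lt__(self, other):
        return self.amount < other.amount

a, b = Money(3), Money(7)
print(a < b, a >= b, b.describe())   # True False Money({'amount': 7})
```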
{ "cite_N": [ "@cite_7", "@cite_2" ], "mid": [ "2111898165", "26503509" ], "abstract": [ "Inheritance is well-known and accepted as a mechanism for reuse in object-oriented languages. Unfortunately, due to the coarse granularity of inheritance, it may be difficult to decompose an application into an optimal class hierarchy that maximizes software reuse. Existing schemes based on single inheritance, multiple inheritance, or mixins, all pose numerous problems for reuse. To overcome these problems we propose traits, pure units of reuse consisting only of methods. We develop a formal model of traits that establishes how traits can be composed, either to form other traits, or to form classes. We also outline an experimental validation in which we apply traits to refactor a nontrivial application into composable units.", "On the one hand, traits are a powerful way of structuring classes. Traits support the reuse of method collections over several classes. However, traits cannot be used when specifying unanticipated changes to an application. On the other hand, classboxes are a new module system that supports the local redefinition of classes: a collection of classes can be locally extended with variables and or methods and the existing clients do not get impacted by changes. However, an extension applied to a class by a classbox cannot be reused for other classes. This paper describes how combining Traits and Classboxes supports the safe introduction of crosscutting collaborations: safe because the existing clients of the classes do not get impacted, crosscutting because collaborations between several classes can be put in place in a unanticipated manner. In the resulting system, a collaboration is represented by a classbox and a role by a trait." ] }
1007.0614
2952627676
We propose an online form of the cake cutting problem. This models situations where players arrive and depart during the process of dividing a resource. We show that well known fair division procedures like cut-and-choose and the Dubins-Spanier moving knife procedure can be adapted to apply to such online problems. We propose some desirable properties that online cake cutting procedures might possess like online forms of proportionality and envy-freeness, and identify which properties are in fact possessed by the different online cake procedures.
There is an extensive literature on fair division and cake cutting procedures. See, for instance, @cite_0 for an introduction. There has, however, been considerably less work on fair division problems similar to those considered here.
{ "cite_N": [ "@cite_0" ], "mid": [ "2022749618" ], "abstract": [ "Cutting a cake, dividing up the property in an estate, determining the borders in an international dispute - such problems of fair division are ubiquitous. Fair Division treats all these problems and many more through a rigorous analysis of a variety of procedures for allocating goods (or 'bads' like chores), or deciding who wins on what issues, when there are disputes. Starting with an analysis of the well-known cake-cutting procedure, 'I cut, you choose', the authors show how it has been adapted in a number of fields and then analyze fair-division procedures applicable to situations in which there are more than two parties, or there is more than one good to be divided. In particular they focus on procedures which provide 'envy-free' allocations, in which everybody thinks he or she has received the largest portion and hence does not envy anybody else. They also discuss the fairness of different auction and election procedures." ] }
1006.5188
1674808786
We tackle the problem of multi-class relational sequence learning using relevant patterns discovered from a set of labelled sequences. To deal with this problem, firstly each relational sequence is mapped into a feature vector using the result of a feature construction method. Since, the efficacy of sequence learning algorithms strongly depends on the features used to represent the sequences, the second step is to find an optimal subset of the constructed features leading to high classification accuracy. This feature selection task has been solved adopting a wrapper approach that uses a stochastic local search algorithm embedding a naive Bayes classifier. The performance of the proposed method applied to a real-world dataset shows an improvement when compared to other established methods, such as hidden Markov models, Fisher kernels and conditional random fields for relational sequences.
In @cite_11 a logic language, SeqLog, for mining sequences of logical atoms is presented, together with the inductive mining system MineSeqLog, which combines principles of the level-wise search algorithm with version spaces in order to find all patterns that satisfy a constraint, using an optimal refinement operator for SeqLog. SeqLog is a logic representational framework that adopts two operators to represent the sequences: one to indicate that an atom is the direct successor of another, and the other to say that an atom occurs somewhere after another. Furthermore, based on this language, the notions of subsumption and entailment and a fixpoint semantics are given.
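As a rough, non-authoritative illustration of the two sequence operators described above (direct successor versus occurs somewhere after), the following sketch matches patterns built from these two relations against a sequence of ground atoms represented as strings; the concrete syntax is invented, and SeqLog's logical machinery (variables, unification, background clauses) is deliberately left out.

```python
# Hedged sketch: matching a pattern over a sequence of ground atoms using two
# operators in the spirit of SeqLog: "next" (direct successor) and "after"
# (occurs somewhere later).  Atoms are plain strings; variables, unification
# and background knowledge are intentionally omitted.
from typing import List, Tuple

Pattern = List[Tuple[str, str]]   # (operator, atom) pairs

def matches(sequence: List[str], pattern: Pattern) -> bool:
    def search(pos: int, remaining: Pattern) -> bool:
        if not remaining:
            return True
        op, atom = remaining[0]
        if op == "next":
            return pos < len(sequence) and sequence[pos] == atom \
                and search(pos + 1, remaining[1:])
        # "after": the atom may occur anywhere at or beyond pos
        return any(sequence[i] == atom and search(i + 1, remaining[1:])
                   for i in range(pos, len(sequence)))

    return search(0, pattern)

seq = ["open(door)", "walk(hall)", "pick(key)", "open(chest)"]
print(matches(seq, [("after", "open(door)"), ("next", "walk(hall)"),
                    ("after", "open(chest)")]))   # True
print(matches(seq, [("after", "pick(key)"), ("next", "open(door)")]))  # False
```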
{ "cite_N": [ "@cite_11" ], "mid": [ "1498433609" ], "abstract": [ "A logical language, SeqLog, for mining and querying sequential data and databases is presented. In SeqLog, data takes the form of a sequence of logical atoms, background knowledge can be specified using Datalog style clauses and sequential queries or patterns correspond to subsequences of logical atoms. SeqLog is then used as the representation language for the inductive database mining system MineSeqLog. Inductive queries in MineSeqLog take the form of a conjunction of a monotonic and an anti-monotonic constraint on sequential patterns. Given such an inductive query, MineSeqLog computes the borders of the solution space. MineSeqLog uses variants of the famous level-wise algorithm together with ideas from version spaces to realize this. Finally, we report on a number of experiments in the domains of user-modelling that validate the approach." ] }
1006.5188
1674808786
We tackle the problem of multi-class relational sequence learning using relevant patterns discovered from a set of labelled sequences. To deal with this problem, firstly each relational sequence is mapped into a feature vector using the result of a feature construction method. Since, the efficacy of sequence learning algorithms strongly depends on the features used to represent the sequences, the second step is to find an optimal subset of the constructed features leading to high classification accuracy. This feature selection task has been solved adopting a wrapper approach that uses a stochastic local search algorithm embedding a naive Bayes classifier. The performance of the proposed method applied to a real-world dataset shows an improvement when compared to other established methods, such as hidden Markov models, Fisher kernels and conditional random fields for relational sequences.
These works, even if related to ours, take into account the feature construction problem only. Here, however, we combine a feature construction process with a feature selection algorithm maximising the predictive accuracy of a probabilistic model. Systems very similar to our approach are those that combine probabilistic models with a relational description, such as logical hidden Markov models (LoHMMs) @cite_14 , Fisher kernels for logical sequences @cite_10 , and relational conditional random fields @cite_2 , which are purposely designed for relational sequence learning.
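To make the wrapper idea concrete, the sketch below runs a very small stochastic local search (random single-bit flips with greedy acceptance) over feature subsets, scoring each subset by cross-validated naive Bayes accuracy with scikit-learn; the synthetic boolean data, the Bernoulli variant, and the search budget are all illustrative assumptions rather than the configuration used in the paper.

```python
# Hedged sketch: wrapper feature selection via stochastic local search (random
# single-bit flips with greedy acceptance), scored by cross-validated naive
# Bayes accuracy.  The synthetic boolean features stand in for the
# pattern-based features constructed from relational sequences.
import numpy as np
from sklearn.naive_bayes import BernoulliNB
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(200, 30))          # 30 candidate boolean features
y = (X[:, 0] & X[:, 3] | X[:, 7]).astype(int)   # only a few features matter

def score(mask: np.ndarray) -> float:
    if not mask.any():
        return 0.0
    return cross_val_score(BernoulliNB(), X[:, mask], y, cv=5).mean()

mask = rng.integers(0, 2, size=X.shape[1]).astype(bool)
best = score(mask)
for _ in range(300):                            # stochastic local search budget
    flip = rng.integers(X.shape[1])
    candidate = mask.copy()
    candidate[flip] = ~candidate[flip]
    s = score(candidate)
    if s >= best:                               # accept non-worsening moves
        mask, best = candidate, s

print("selected features:", np.flatnonzero(mask), "cv accuracy:", round(best, 3))
```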
{ "cite_N": [ "@cite_14", "@cite_10", "@cite_2" ], "mid": [ "2164781796", "1597090307", "1604179321" ], "abstract": [ "Logical hidden Markov models (LOHMMs) upgrade traditional hidden Markov models to deal with sequences of structured symbols in the form of logical atoms, rather than flat characters. This note formally introduces LOHMMs and presents solutions to the three central inference problems for LOHMMs: evaluation, most likely hidden state sequence and parameter estimation. The resulting representation and algorithms are experimentally evaluated on problems from the domain of bioinformatics.", "One approach to improve the accuracy of classifications based on generative models is to combine them with successful discriminative algorithms. Fisher kernels were developed to combine generative models with a currently very popular class of learning algorithms, kernel methods. Empirically, the combination of hidden Markov models with support vector machines has shown promising results. So far, however, Fisher kernels have only been considered for sequences over flat alphabets. This is mostly due to the lack of a method for computing the gradient of a generative model over structured sequences. In this paper, we show how to compute the gradient of logical hidden Markov models, which allow for the modelling of logical sequences, i.e., sequences over an alphabet of logical atoms. Experiments show a considerable improvement over results achieved without Fisher kernels for logical sequences.", "Conditional Random Fields (CRFs) provide a powerful instrument for labeling sequences. So far, however, CRFs have only been considered for labeling sequences over flat alphabets. In this paper, we describe TildeCRF, the first method for training CRFs on logical sequences, i.e., sequences over an alphabet of logical atoms. TildeCRF's key idea is to use relational regression trees in 's gradient tree boosting approach. Thus, the CRF potential functions are represented as weighted sums of relational regression trees. Experiments show a significant improvement over established results achieved with hidden Markov models and Fisher kernels for logical sequences." ] }
1006.5188
1674808786
We tackle the problem of multi-class relational sequence learning using relevant patterns discovered from a set of labelled sequences. To deal with this problem, firstly each relational sequence is mapped into a feature vector using the result of a feature construction method. Since, the efficacy of sequence learning algorithms strongly depends on the features used to represent the sequences, the second step is to find an optimal subset of the constructed features leading to high classification accuracy. This feature selection task has been solved adopting a wrapper approach that uses a stochastic local search algorithm embedding a naive Bayes classifier. The performance of the proposed method applied to a real-world dataset shows an improvement when compared to other established methods, such as hidden Markov models, Fisher kernels and conditional random fields for relational sequences.
In @cite_10 an extension of classical Fisher kernels, which work on sequences over flat alphabets, has been proposed in order to make them able to model logical sequences, i.e., sequences over an alphabet of logical atoms. Fisher kernels were developed to combine generative models with kernel methods, and have shown promising results for the combination of support vector machines with (logical) hidden Markov models and Bayesian networks. Subsequently, in @cite_14 the same authors proposed an algorithm for selecting LoHMMs from data. HMMs @cite_16 are one of the most popular methods for analysing sequential data, but they can only handle sequences of flat unstructured symbols. The proposed logical extension @cite_20 overcomes this weakness by handling sequences of structured symbols by means of a probabilistic ILP framework.
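The Fisher kernel idea, representing a sequence by the gradient of the log-likelihood of a generative model and comparing sequences through that gradient, can be illustrated with a much simpler generative model than a logical HMM. The sketch below uses a plain unigram multinomial model, ignores the simplex constraint, and omits the Fisher information normalization, so it should be read as a toy analogy only; the alphabet and parameters are made up.

```python
# Hedged sketch: Fisher-score features from a toy unigram (multinomial) model.
# log p(x | theta) = sum_k count_k(x) * log theta_k, so the score w.r.t.
# theta_k is count_k(x) / theta_k (simplex constraint and Fisher information
# normalization are ignored).  A logical HMM as in the cited work would
# replace this generative model.
import numpy as np

alphabet = ["a", "b", "c"]
theta = np.array([0.5, 0.3, 0.2])                # assumed model parameters

def fisher_score(seq: str) -> np.ndarray:
    counts = np.array([seq.count(s) for s in alphabet], dtype=float)
    return counts / theta                         # d/dtheta_k log p(seq | theta)

def fisher_kernel(s1: str, s2: str) -> float:
    return float(fisher_score(s1) @ fisher_score(s2))

print(fisher_kernel("aabc", "abbb"))   # similarity of two sequences
print(fisher_kernel("aabc", "ccca"))
```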
{ "cite_N": [ "@cite_20", "@cite_14", "@cite_10", "@cite_16" ], "mid": [ "2963975801", "2164781796", "1597090307", "2105594594" ], "abstract": [ "Many real world sequences such as protein secondary structures or shell logs exhibit a rich internal structures. Traditional probabilistic models of sequences, however, consider sequences of flat symbols only. Logical hidden Markov models have been proposed as one solution. They deal with logical sequences, i.e., sequences over an alphabet of logical atoms. This comes at the expense of a more complex model selection problem. Indeed, different abstraction levels have to be explored. In this paper, we propose a novel method for selecting logical hidden Markov models from data called SAGEM. SAGEM combines generalized expectation maximization, which optimizes parameters, with structure search for model selection using inductive logic programming refinement operators. We provide convergence and experimental results that show SAGEM's effectiveness.", "Logical hidden Markov models (LOHMMs) upgrade traditional hidden Markov models to deal with sequences of structured symbols in the form of logical atoms, rather than flat characters. This note formally introduces LOHMMs and presents solutions to the three central inference problems for LOHMMs: evaluation, most likely hidden state sequence and parameter estimation. The resulting representation and algorithms are experimentally evaluated on problems from the domain of bioinformatics.", "One approach to improve the accuracy of classifications based on generative models is to combine them with successful discriminative algorithms. Fisher kernels were developed to combine generative models with a currently very popular class of learning algorithms, kernel methods. Empirically, the combination of hidden Markov models with support vector machines has shown promising results. So far, however, Fisher kernels have only been considered for sequences over flat alphabets. This is mostly due to the lack of a method for computing the gradient of a generative model over structured sequences. In this paper, we show how to compute the gradient of logical hidden Markov models, which allow for the modelling of logical sequences, i.e., sequences over an alphabet of logical atoms. Experiments show a considerable improvement over results achieved without Fisher kernels for logical sequences.", "The basic theory of Markov chains has been known to mathematicians and engineers for close to 80 years, but it is only in the past decade that it has been applied explicitly to problems in speech processing. One of the major reasons why speech models, based on Markov chains, have not been developed until recently was the lack of a method for optimizing the parameters of the Markov model to match observed signal patterns. Such a method was proposed in the late 1960's and was immediately applied to speech processing in several research institutions. Continued refinements in the theory and implementation of Markov modelling techniques have greatly enhanced the method, leading to a wide range of applications of these models. It is the purpose of this tutorial paper to give an introduction to the theory of Markov models, and to illustrate how they have been applied to problems in speech recognition." ] }
1006.5188
1674808786
We tackle the problem of multi-class relational sequence learning using relevant patterns discovered from a set of labelled sequences. To deal with this problem, firstly each relational sequence is mapped into a feature vector using the result of a feature construction method. Since, the efficacy of sequence learning algorithms strongly depends on the features used to represent the sequences, the second step is to find an optimal subset of the constructed features leading to high classification accuracy. This feature selection task has been solved adopting a wrapper approach that uses a stochastic local search algorithm embedding a naive Bayes classifier. The performance of the proposed method applied to a real-world dataset shows an improvement when compared to other established methods, such as hidden Markov models, Fisher kernels and conditional random fields for relational sequences.
Finally, in @cite_2 an extension of conditional random fields (CRFs) to logical sequences has been proposed. For sequence labelling tasks, CRFs are a better alternative to HMMs because they make it relatively easy to model arbitrary dependencies in the input space. CRFs are undirected graphical models that, instead of learning a generative model as HMMs do, learn a discriminative model designed to handle non-independent input features. In @cite_2 , the authors lifted CRFs to the relational case by representing the potential functions as sums of relational regression trees learnt by a relational regression tree learner.
{ "cite_N": [ "@cite_2" ], "mid": [ "1604179321" ], "abstract": [ "Conditional Random Fields (CRFs) provide a powerful instrument for labeling sequences. So far, however, CRFs have only been considered for labeling sequences over flat alphabets. In this paper, we describe TildeCRF, the first method for training CRFs on logical sequences, i.e., sequences over an alphabet of logical atoms. TildeCRF's key idea is to use relational regression trees in 's gradient tree boosting approach. Thus, the CRF potential functions are represented as weighted sums of relational regression trees. Experiments show a significant improvement over established results achieved with hidden Markov models and Fisher kernels for logical sequences." ] }
1006.4228
2950740994
With the rapid proliferation of broadband wireless services, it is of paramount importance to understand how fast data can be sent through a wireless local area network (WLAN). Thanks to a large body of research following the seminal work of Bianchi, WLAN throughput under saturated traffic condition has been well understood. By contrast, prior investigations on throughput performance under unsaturated traffic condition was largely based on phenomenological observations, which lead to a common misconception that WLAN can support a traffic load as high as saturation throughput, if not higher, under non-saturation condition. In this paper, we show through rigorous analysis that this misconception may result in unacceptable quality of service: mean packet delay and delay jitter may approach infinity even when the traffic load is far below the saturation throughput. Hence, saturation throughput is not a sound measure of WLAN capacity under non-saturation condition. To bridge the gap, we define safe-bounded-mean-delay (SBMD) throughput and safe-bounded-delay-jitter (SBDJ) throughput that reflect the actual network capacity users can enjoy when they require finite mean delay and delay jitter, respectively. Our earlier work proved that in a WLAN with multi-packet reception (MPR) capability, saturation throughput scales super-linearly with the MPR capability of the network. This paper extends the investigation to the non-saturation case and shows that super-linear scaling also holds for SBMD and SBDJ throughputs. Our results here complete the demonstration of MPR as a powerful capacity-enhancement technique for WLAN under both saturation and non-saturation conditions.
Previous work on delay analysis can be divided into two main threads: the medium-access delay of head-of-line (HOL) packets under the saturation condition and the queueing delay (also referred to as packet delay hereafter) under the non-saturation condition. In saturated systems, the mean medium-access delay is easily derived as the reciprocal of the saturation throughput @cite_19 @cite_8 . More recently, Sakurai and Vu derived the moments and the generating function of the medium-access delay under saturation. It was found that the EB mechanism induces a heavy-tailed delay distribution. A similar observation was made by Yang and Yum in @cite_10 when binary EB is deployed. In this paper, we show that in unsaturated WLANs the packet delay distribution also exhibits heavy-tailed behavior. It is precisely for this reason that the sustainable throughput subject to finite mean delay and delay jitter may differ from the saturation throughput.
{ "cite_N": [ "@cite_19", "@cite_10", "@cite_8" ], "mid": [ "2104428131", "1998010325", "" ], "abstract": [ "This letter presents a new approach to evaluate the throughput delay performance of the 802.11 distributed coordination function (DCF). Our approach relies on elementary conditional probability arguments rather than bidimensional Markov chains (as proposed in previous models) and can be easily extended to account for backoff operation more general than DCF's one.", "We derive the closed-form delay distributions of slotted ALOHA and nonpersistent carrier sense multiple access (CSMA) protocols under steady state. Three retransmission policies are analyzed. We find that under a binary exponential backoff retransmission policy, finite average delay and finite delay variance can be guaranteed for G<2S and G<4S 3, respectively, where G is the channel traffic and S is the channel throughput. As an example, in slotted ALOHA, S<(ln2) 2 and S<3(ln4-ln3) 4 are the operating ranges for finite first and second delay moments. In addition, the blocking probability and delay performance as a function of r sub max (maximum number of retransmissions allowed) is also derived.", "" ] }
1006.4937
2950622007
We consider the problem of scheduling in multihop wireless networks subject to interference constraints. We consider a graph based representation of wireless networks, where scheduled links adhere to the K-hop link interference model. We develop a distributed greedy heuristic for this scheduling problem. Further, we show that this distributed greedy heuristic computes the exact same schedule as the centralized greedy heuristic.
Scheduling and routing algorithms allocate resources to competing flows in multihop wireless networks. Research into scheduling, routing and congestion control is several decades old, but has seen a lot of activity following the seminal paper of Tassiulas and Ephremides @cite_7 . One possible way to schedule links in a wireless network is to use spatial time division multiple access (STDMA) along with the physical interference model. While the physical interference model allows more aggressive scheduling, it has been shown that no localized distributed algorithm can solve the problem of building a feasible schedule under this model @cite_0 . Since the seminal paper by Gupta and Kumar @cite_6 , the protocol model of wireless networks has been studied extensively. Research has shown that a @math -hop link interference model can be used to effectively capture the protocol model @cite_2 .
{ "cite_N": [ "@cite_0", "@cite_6", "@cite_7", "@cite_2" ], "mid": [ "2144812583", "2137775453", "2105177639", "2059739072" ], "abstract": [ "It is known that CSMA CA channel access schemes are not well suited to meet the high traffic demand of wireless mesh networks. One possible way to increase traffic carrying capacity is to use a spatial TDMA (STDMA) approach in conjunction with the physical interference model, which allows more aggressive scheduling than the protocol interference model on which CSMA CA is based. While an efficient centralized solution for STDMA with physical interference has been recently proposed, no satisfactory distributed approaches have been introduced so far. In this paper, we first prove that no localized distributed algorithm can solve the problem of building a feasible schedule under the physical interference model. Motivated by this, we design a global primitive, called SCREAM, which is used to verify the feasibility of a schedule during an iterative distributed scheduling procedure. Based on this primitive, we present two distributed protocols for efficient, distributed scheduling under the physical interference model, and we prove an approximation bound for one of the protocols. We also present extensive packet-level simulation results, which show that our protocols achieve schedule lengths very close to those of the centralized algorithm and have running times that are practical for mesh networks.", "When n identical randomly located nodes, each capable of transmitting at W bits per second and using a fixed range, form a wireless network, the throughput spl lambda (n) obtainable by each node for a randomly chosen destination is spl Theta (W spl radic (nlogn)) bits per second under a noninterference protocol. If the nodes are optimally placed in a disk of unit area, traffic patterns are optimally assigned, and each transmission's range is optimally chosen, the bit-distance product that can be transported by the network per second is spl Theta (W spl radic An) bit-meters per second. Thus even under optimal circumstances, the throughput is only spl Theta (W spl radic n) bits per second for each node for a destination nonvanishingly far away. Similar results also hold under an alternate physical model where a required signal-to-interference ratio is specified for successful receptions. Fundamentally, it is the need for every node all over the domain to share whatever portion of the channel it is utilizing with nodes in its local neighborhood that is the reason for the constriction in capacity. Splitting the channel into several subchannels does not change any of the results. Some implications may be worth considering by designers. Since the throughput furnished to each user diminishes to zero as the number of users is increased, perhaps networks connecting smaller numbers of users, or featuring connections mostly with nearby neighbors, may be more likely to be find acceptance.", "The stability of a queueing network with interdependent servers is considered. The dependency among the servers is described by the definition of their subsets that can be activated simultaneously. Multihop radio networks provide a motivation for the consideration of this system. The problem of scheduling the server activation under the constraints imposed by the dependency among servers is studied. 
The performance criterion of a scheduling policy is its throughput that is characterized by its stability region, that is, the set of vectors of arrival and service rates for which the system is stable. A policy is obtained which is optimal in the sense that its stability region is a superset of the stability region of every other scheduling policy, and this stability region is characterized. The behavior of the network is studied for arrival rates that lie outside the stability region. Implications of the results in certain types of concurrent database and parallel processing systems are discussed. >", "We consider the problem of throughput-optimal scheduling in wireless networks subject to interference constraints. We model the interference using a family of K -hop interference models. We define a K-hop interference model as one for which no two links within K hops can successfully transmit at the same time (Note that IEEE 802.11 DCF corresponds to a 2-hop interference model.) .For a given K, a throughput-optimal scheduler needs to solve a maximum weighted matching problem subject to the K-hop interference constraints. For K=1, the resulting problem is the classical Maximum Weighted Matching problem, that can be solved in polynomial time. However, we show that for K>1,the resulting problems are NP-Hard and cannot be approximated within a factor that grows polynomially with the number of nodes. Interestingly, we show that for specific kinds of graphs, that can be used to model the underlying connectivity graph of a wide range of wireless networks, the resulting problems admit polynomial time approximation schemes. We also show that a simple greedy matching algorithm provides a constant factor approximation to the scheduling problem for all K in this case. We then show that under a setting with single-hop traffic and no rate control, the maximal scheduling policy considered in recent related works can achieve a constant fraction of the capacity region for networks whose connectivity graph can be represented using one of the above classes of graphs. These results are encouraging as they suggest that one can develop distributed algorithms to achieve near optimal throughput in case of a wide range of wireless networks." ] }
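Since the @math -hop interference model recurs throughout the following entries, a small sketch of how a link conflict graph could be derived from a connectivity graph may help. The hop-distance convention used below (two links conflict when some pair of their endpoints is strictly fewer than K hops apart, so K = 1 reduces to the node-exclusive model) is one of several conventions in the literature and should be treated as an assumption.

```python
# Sketch: build a K-hop link conflict graph from an adjacency-list connectivity graph.
from collections import deque

def hop_distances(adj, src):
    """BFS hop counts from src."""
    dist, q = {src: 0}, deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def conflict_graph(adj, links, K):
    """Links conflict if some pair of endpoints is fewer than K hops apart."""
    dist = {u: hop_distances(adj, u) for u in adj}
    conflicts = {l: set() for l in links}
    for i, (a, b) in enumerate(links):
        for (c, d) in links[i + 1:]:
            if min(dist[x].get(y, K) for x in (a, b) for y in (c, d)) < K:
                conflicts[(a, b)].add((c, d))
                conflicts[(c, d)].add((a, b))
    return conflicts

adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}   # a 5-node path
links = [(0, 1), (1, 2), (2, 3), (3, 4)]
print(conflict_graph(adj, links, K=1))                     # node-exclusive conflicts
```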
1006.4937
2950622007
We consider the problem of scheduling in multihop wireless networks subject to interference constraints. We consider a graph based representation of wireless networks, where scheduled links adhere to the K-hop link interference model. We develop a distributed greedy heuristic for this scheduling problem. Further, we show that this distributed greedy heuristic computes the exact same schedule as the centralized greedy heuristic.
A commonly used model is the @math -hop link interference model, in which two links that are not within @math hops of each other can communicate simultaneously, and the capacity of a link is a constant value when there is no interference @cite_15 , @cite_13 , @cite_8 , @cite_1 , @cite_12 , @cite_10 . In @cite_15 , the Maximal Matching (MM) scheduling algorithm is used under the node-exclusive interference model. This algorithm can operate in a distributed fashion and is proven to achieve at least one half of the achievable throughput. This has motivated subsequent research on distributed algorithms with provable performance @cite_13 , @cite_8 , @cite_10 , @cite_1 , @cite_12 .
{ "cite_N": [ "@cite_8", "@cite_10", "@cite_1", "@cite_15", "@cite_13", "@cite_12" ], "mid": [ "2118844846", "207247871", "2088693732", "1901187898", "", "2005076479" ], "abstract": [ "We consider wireless networks with a special type of spectral allocation, where the only constraint is that a node cannot transmit to more than one receiver at a time and cannot receive more than one transmission at a time. We introduce a scheduling algorithm called regulated maximal matching which is fully distributed and guarantees a throughput that is at least half of the throughput achievable by a centralized algorithm.", "Abstract— We consider wireless networks with interference constraints. The network consists of a set of links and a set of users who generate packets that traverse these links. Each user is associated with a route consisting of a sequence of links. The links are subject to the usual interference constraints: (i) if link l interferes with link k, then link k also interferes with link l, and (ii) two links that interfere with each other cannot transmit simultaneously. The interference set of a link is defined to be the set of links that interfere with the link, along with the link itself. A greedy scheduler is one which selects an arbitrary set of links to transmit subject only to the interference constraint. We use a traffic regulator at each link along the route of each flow which shapes the traffic of the flow. We prove that the network is queue-length stable under any maximal greedy scheduling policy provided that the total arrival rate in the interference set of each link is less than one.", "The scheduling problem in multi-hop wireless networks has been extensively investigated. Although throughput optimal scheduling solutions have been developed in the literature, they are unsuitable for multi-hop wireless systems because they are usually centralized and have very high complexity. In this paper, we develop a random-access based scheduling scheme that utilizes local information. The important features of this scheme include constant-time complexity, distributed operations, and a provable performance guarantee. Analytical results show that it guarantees a larger fraction of the optimal throughput performance than the state-of-the-art. Through simulations with both single-hop and multi-hop traffics, we observe that the scheme provides high throughput, close to that of a well-known highly efficient centralized greedy solution called the greedy maximal scheduler.", "In this paper, we study cross-layer design for rate control in multihop wireless networks. In our previous work, we have developed an optimal cross-layered rate control scheme that jointly computes both the rate allocation and the stabilizing schedule that controls the resources at the underlying layers. However, the scheduling component in this optimal cross-layered rate control scheme has to solve a complex global optimization problem at each time, and hence is too computationally expensive for online implementation. In this paper, we study how the performance of cross-layer rate control can be impacted if the network can only use an imperfect (and potentially distributed) scheduling component that is easier to implement. We study both the case when the number of users in the system is fixed and the case with dynamic arrivals and departures of the users, and we establish desirable results on the performance bounds of cross-layered rate control with imperfect scheduling. 
Compared with a layered approach that does not design rate control and scheduling together, our cross-layered approach has provably better performance bounds, and substantially outperforms the layered approach. The insights drawn from our analyses also enable us to design a fully distributed cross-layered rate control and scheduling algorithm for a restrictive interference model.", "", "We propose two new distributed scheduling policies for ad hoc wireless networks that can achieve provable capacity regions. Known scheduling policies that guarantee comparable capacity regions are either centralized or need computation time that increases with the size of the network. In contrast, the unique feature of the proposed distributed scheduling policies is that they are constant-time policies, i.e., the time needed for computing a schedule is independent of the network size. Hence, they can be easily deployed in large networks." ] }
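A minimal sketch of a maximal-matching style scheduler under the node-exclusive (1-hop) model, in the spirit of the MM algorithm discussed above: scan backlogged links (here in decreasing queue-length order, which is just one convenient choice; any maximal matching already gives the 1/2 guarantee) and activate a link whenever neither endpoint is already busy.

```python
# Sketch: maximal matching under the node-exclusive interference model.
def maximal_matching(links, queue):
    """links: iterable of (u, v); queue: dict mapping link -> backlog."""
    matched_nodes, schedule = set(), []
    for (u, v) in sorted(links, key=lambda l: -queue.get(l, 0)):
        if queue.get((u, v), 0) > 0 and u not in matched_nodes and v not in matched_nodes:
            schedule.append((u, v))
            matched_nodes.update((u, v))
    return schedule

links = [(0, 1), (1, 2), (2, 3), (3, 4)]
queue = {(0, 1): 5, (1, 2): 9, (2, 3): 2, (3, 4): 7}
print(maximal_matching(links, queue))   # -> [(1, 2), (3, 4)]
```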
1006.4937
2950622007
We consider the problem of scheduling in multihop wireless networks subject to interference constraints. We consider a graph based representation of wireless networks, where scheduled links adhere to the K-hop link interference model. We develop a distributed greedy heuristic for this scheduling problem. Further, we show that this distributed greedy heuristic computes the exact same schedule as the centralized greedy heuristic.
Scheduling algorithms under different SINR interference models have been studied in the literature @cite_5 , @cite_9 , @cite_3 , @cite_14 , @cite_11 . In @cite_5 , the authors have proposed a simple, distributed scheduling algorithm that approximates the optimal centralized algorithm. In @cite_9 , for the logarithmic SINR interference model, the author has proposed an algorithm that is distributed and optimal when SINR values are high. The authors in @cite_3 and @cite_14 have also proposed heuristic algorithms under the target SINR interference model, where the capacity of a link is a constant value when the received SINR exceeds a threshold, and zero otherwise. In @cite_11 , the authors have explored localized distributed scheduling for the linear and logarithmic SINR models.
{ "cite_N": [ "@cite_14", "@cite_9", "@cite_3", "@cite_5", "@cite_11" ], "mid": [ "2106643974", "2150146513", "2056643664", "2165768656", "" ], "abstract": [ "Spatial reuse TDMA is an access scheme for multi-hop radio networks. The idea is to increase capacity by letting several radio terminals use the same time slot when possible. A time slot can be shared when the radio units are geographically separated such that small interference is obtained. STDMA schedules can assign transmission rights to nodes or alternatively assign transmission rights to links, i.e. transmitter receiver pairs. Here we compare these two methods and determine which one is preferable. We show that only the connectivity of the network and the input traffic load of the network is needed in order to determine whether node or link assignment is preferable.", "In a wireless ad hoc network with multihop transmissions and interference-limited link rates, can we balance power control in the physical layer and congestion control in the transport layer to enhance the overall network performance, while maintaining the stability, robustness, and architectural modularity of the network? We present a distributive power control algorithm that couples with the original TCP protocols to increase the end-to-end throughput and energy efficiency of the network. Under the rigorous framework of nonlinearly constrained optimization, we prove the convergence of this coupled system to the global optimum of joint power control and congestion control, for both synchronized and asynchronous implementations. The rate of convergence is geometric and a desirable modularity between the transport and physical layers is maintained. In particular, when the congestion control mechanism is TCP Vegas, that a simple utilization in the physical layer of the router buffer occupancy information suffices to achieve the joint optimum of this cross layer design. Both analytic results and simulations illustrate other desirable properties of the proposed algorithm, including robustness to channel outage and to path loss estimation errors, and flexibility in trading-off performance optimality for implementation simplicity.", "We consider the problem of designing distributed mechanisms for joint congestion control and resource allocation in spatial-reuse TDMA wireless networks. The design problem is posed as a utility maximization subject to link rate constraints that involve both power allocation and transmission scheduling over multiple time-slots. Starting from the performance limits of a centralized optimization based on global network information,we proceed systematically in the development of distributed and transparent protocols. In the process,we introduce a novel decomposition method for convex optimization,establish its convergence for the utility maximization problem and demonstrate how it suggests a distributed solution based on flow control optimization and incremental updates of the transmission schedule.We develop a two-step procedure for finding the schedule updates and suggest two schemes for distributed channel reservation and power control under realistic interference models. Although the final protocols are suboptimal,we isolate and quantify the performance losses incurred by each simplification and demonstrate strong performance in examples.", "We consider dynamic routing and power allocation for a wireless network with time varying channels. 
The network consists of power constrained nodes which transmit over wireless links with adaptive transmission rates. Packets randomly enter the system at each node and wait in output queues to be transmitted through the network to their destinations. We establish the capacity region of all rate matrices ( spl lambda sub ij ) that the system can stably support - where ( spl lambda sub ij ) represents the rate of traffic originating at node i and destined for node j. A joint routing and power allocation policy is developed which stabilizes the system and provides bounded average delay guarantees whenever the input rates are within this capacity region. Such performance holds for general arrival and channel state processes, even if these processes are unknown to the network controller. We then apply this control algorithm to an ad-hoc wireless network where channel variations are due to user mobility, and compare its performance with the Grossglauser-Tse (2001) relay model.", "" ] }
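For contrast with the graph-based models, the sketch below checks feasibility of a set of simultaneously active links under a target-SINR (physical) interference model with power-law path loss. The power, noise, path-loss exponent and SINR threshold values are illustrative only, not taken from the cited papers.

```python
# Sketch: target-SINR feasibility check for a set of simultaneously scheduled links.
import math

def sinr_feasible(schedule, pos, power=1.0, alpha=3.0, noise=1e-6, beta=10.0):
    """schedule: list of (tx, rx) node ids; pos: node id -> (x, y) coordinates."""
    def gain(a, b):
        d = math.dist(pos[a], pos[b])
        return power / (d ** alpha)          # power-law path loss
    for tx, rx in schedule:
        signal = gain(tx, rx)
        interference = sum(gain(other_tx, rx)
                           for other_tx, _ in schedule if other_tx != tx)
        if signal / (noise + interference) < beta:
            return False                     # this link misses its SINR target
    return True

pos = {0: (0, 0), 1: (1, 0), 2: (10, 0), 3: (11, 0)}
print(sinr_feasible([(0, 1), (2, 3)], pos))  # widely separated links -> True
```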
1006.4937
2950622007
We consider the problem of scheduling in multihop wireless networks subject to interference constraints. We consider a graph based representation of wireless networks, where scheduled links adhere to the K-hop link interference model. We develop a distributed greedy heuristic for this scheduling problem. Further, we show that this distributed greedy heuristic computes the exact same schedule as the centralized greedy heuristic.
The problem of link scheduling under the @math -hop link interference model has been shown to be NP-hard in @cite_2 , @cite_4 . Motivated by this, we explore heuristics to address the link scheduling problem. In particular, it is interesting to explore the greedy heuristic because it lends itself to a distributed implementation @cite_2 . While the idea of a distributed version of the greedy heuristic seems trivial, to the best of our knowledge it has not been described precisely in the literature. We find that the distributed greedy heuristic involves certain subtleties that make the algorithm non-trivial.
{ "cite_N": [ "@cite_4", "@cite_2" ], "mid": [ "2435603672", "2059739072" ], "abstract": [ "In this paper, we address the following question: given a specific placement of wireless nodes in physical space and a specific traffic workload, what is the maximum throughput that can be supported by the resulting network? Unlike previous work that has focused on computing asymptotic performance bounds under assumptions of homogeneity or randomness in the network topology and or workload, we work with any given network and workload specified as inputs.A key issue impacting performance is wireless interference between neighboring nodes. We model such interference using a conflict graph, and present methods for computing upper and lower bounds on the optimal throughput for the given network and workload. To compute these bounds, we assume that packet transmissions at the individual nodes can be finely controlled and carefully scheduled by an omniscient and omnipotent central entity, which is unrealistic. Nevertheless, using ns-2 simulations, we show that the routes derived from our analysis often yield noticeably better throughput than the default shortest path routes even in the presence of uncoordinated packet transmissions and MAC contention. This suggests that there is opportunity for achieving throughput gains by employing an interference-aware routing protocol.", "We consider the problem of throughput-optimal scheduling in wireless networks subject to interference constraints. We model the interference using a family of K -hop interference models. We define a K-hop interference model as one for which no two links within K hops can successfully transmit at the same time (Note that IEEE 802.11 DCF corresponds to a 2-hop interference model.) .For a given K, a throughput-optimal scheduler needs to solve a maximum weighted matching problem subject to the K-hop interference constraints. For K=1, the resulting problem is the classical Maximum Weighted Matching problem, that can be solved in polynomial time. However, we show that for K>1,the resulting problems are NP-Hard and cannot be approximated within a factor that grows polynomially with the number of nodes. Interestingly, we show that for specific kinds of graphs, that can be used to model the underlying connectivity graph of a wide range of wireless networks, the resulting problems admit polynomial time approximation schemes. We also show that a simple greedy matching algorithm provides a constant factor approximation to the scheduling problem for all K in this case. We then show that under a setting with single-hop traffic and no rate control, the maximal scheduling policy considered in recent related works can achieve a constant fraction of the capacity region for networks whose connectivity graph can be represented using one of the above classes of graphs. These results are encouraging as they suggest that one can develop distributed algorithms to achieve near optimal throughput in case of a wide range of wireless networks." ] }
1006.3678
2951880362
In this paper we propose an extension of Answer Set Programming (ASP), and in particular, of its most general logical counterpart, Quantified Equilibrium Logic (QEL), to deal with partial functions. Although the treatment of equality in QEL can be established in different ways, we first analyse the choice of decidable equality with complete functions and Herbrand models, recently proposed in the literature. We argue that this choice yields some counterintuitive effects from a logic programming and knowledge representation point of view. We then propose a variant called QELF where the set of functions is partitioned into partial and Herbrand functions (we also call constructors). In the rest of the paper, we show a direct connection to Scott's Logic of Existence and present a practical application, proposing an extension of normal logic programs to deal with partial functions and equality, so that they can be translated into function-free normal programs, being possible in this way to compute their answer sets with any standard ASP solver.
With respect to other logical characterisations of functional programming languages, the closest one is perhaps @cite_15 , from which we borrowed the separation between constructors and evaluable functions. The main difference is that @math provides a completely logical description of all operators, allowing an arbitrary syntax (including rules with negation, disjunction in the head, negation and disjunction of rules, etc.). Another important difference is that @math is constrained to strict functions, while @cite_15 is based on non-strict functions.
{ "cite_N": [ "@cite_15" ], "mid": [ "2063521547" ], "abstract": [ "Abstract We propose an approach to declarative programming which integrates the functional and relational paradigms by taking possibly non-deterministic lazy functions as the fundamental notion. Classical equational logic does not supply a suitable semantics in a natural way. Therefore, we suggest to view programs as theories in a constructor-based conditional rewriting logic. We present proof calculi and a model theory for this logic, and we prove the existence of free term models which provide an adequate intended semantics for programs. We develop a sound and strongly complete lazy narrowing calculus, which is able to support sharing without the technical overhead of graph rewriting and to identify safe cases for eager variable elimination. Moreover, we give some illustrative programming examples, and we discuss the implementability of our approach." ] }
1006.2588
2951528191
We present and analyze an agnostic active learning algorithm that works without keeping a version space. This is unlike all previous approaches where a restricted set of candidate hypotheses is maintained throughout learning, and only hypotheses from this set are ever returned. By avoiding this version space approach, our algorithm sheds the computational burden and brittleness associated with maintaining version spaces, yet still allows for substantial improvements over supervised learning for classification.
As already mentioned, our work is closely related to the previous works of @cite_13 and @cite_2 , both of which in turn draw heavily on the work of @cite_7 and @cite_18 . The algorithm from @cite_13 extends the selective sampling method of @cite_7 to the agnostic setting using generalization bounds in a manner similar to that first suggested in @cite_18 . It accesses hypotheses only through a special ERM oracle that can enforce an arbitrary number of example-based constraints; these constraints define a version space, and the algorithm only ever returns hypotheses from this space, which can be undesirable as we previously argued. Other previous algorithms with comparable performance guarantees also require similar example-based constraints (e.g., @cite_18 @cite_2 @cite_20 @cite_6 ). Our algorithm differs from these in that (i) it never restricts its attention to a version space when selecting a hypothesis to return, and (ii) it only requires an ERM oracle that enforces at most one example-based constraint, and this constraint is only used for selective sampling. Our label complexity bounds are comparable to those proved in @cite_2 (though somewhat worse than those in @cite_18 @cite_13 @cite_20 @cite_6 ).
{ "cite_N": [ "@cite_18", "@cite_7", "@cite_6", "@cite_2", "@cite_13", "@cite_20" ], "mid": [ "2157958821", "2151023586", "53630002", "2167595980", "2117756453", "1600192391" ], "abstract": [ "We state and analyze the first active learning algorithm which works in the presence of arbitrary forms of noise. The algorithm, A2 (for Agnostic Active), relies only upon the assumption that the samples are drawn i.i.d. from a fixed distribution. We show that A2 achieves an exponential improvement (i.e., requires only O (ln 1 e) samples to find an e-optimal classifier) over the usual sample complexity of supervised learning, for several settings considered before in the realizable case. These include learning threshold classifiers and learning homogeneous linear separators with respect to an input distribution which is uniform over the unit sphere.", "Active learning differs from “learning from examples” in that the learning algorithm assumes at least some control over what part of the input domain it receives information about. In some situations, active learning is provably more powerful than learning from examples alone, giving better generalization for a fixed number of training examples. In this article, we consider the problem of learning a binary concept in the absence of noise. We describe a formalism for active concept learning called selective sampling and show how it may be approximately implemented by a neural network. In selective sampling, a learner receives distribution information from the environment and queries an oracle on parts of the domain it considers “useful.” We test our implementation, called an SG-network, on three domains and observe significant improvement in generalization.", "Sequential algorithms of active learning based on the estimation of the level sets of the empirical risk are discussed in the paper. Localized Rademacher complexities are used in the algorithms to estimate the sample sizes needed to achieve the required accuracy of learning in an adaptive way. Probabilistic bounds on the number of active examples have been proved and several applications to binary classification problems are considered.", "We present a practical and statistically consistent scheme for actively learning binary classifiers under general loss functions. Our algorithm uses importance weighting to correct sampling bias, and by controlling the variance, we are able to give rigorous label complexity bounds for the learning process.", "We present an agnostic active learning algorithm for any hypothesis class of bounded VC dimension under arbitrary data distributions. Most previous work on active learning either makes strong distributional assumptions, or else is computationally prohibitive. Our algorithm extends the simple scheme of Cohn, Atlas, and Ladner [1] to the agnostic setting, using reductions to supervised learning that harness generalization bounds in a simple but subtle manner. We provide a fall-back guarantee that bounds the algorithm's label complexity by the agnostic PAC sample complexity. Our analysis yields asymptotic label complexity improvements for certain hypothesis classes and distributions. We also demonstrate improvements experimentally.", "We study the rates of convergence in classification error achievable by active learning in the presence of label noise. 
Additionally, we study the more general problem of active learning with a nested hierarchy of hypothesis classes, and propose an algorithm whose error rate provably converges to the best achievable error among classifiers in the hierarchy at a rate adaptive to both the complexity of the optimal classifier and the noise conditions. In particular, we state sufficient conditions for these rates to be dramatically faster than those achievable by passive learning." ] }
1006.2588
2951528191
We present and analyze an agnostic active learning algorithm that works without keeping a version space. This is unlike all previous approaches where a restricted set of candidate hypotheses is maintained throughout learning, and only hypotheses from this set are ever returned. By avoiding this version space approach, our algorithm sheds the computational burden and brittleness associated with maintaining version spaces, yet still allows for substantial improvements over supervised learning for classification.
Many of the previously mentioned algorithms are analyzed in the agnostic learning model, where no assumption is made about the noise distribution (see also @cite_21 ). In this setting, the label complexity of active learning algorithms cannot generally improve over supervised learners by more than a constant factor @cite_17 @cite_2 . However, under a parameterization of the noise distribution related to Tsybakov's low-noise condition @cite_15 , active learning algorithms have been shown to have improved label complexity bounds over what is achievable in the purely agnostic setting @cite_4 @cite_14 @cite_11 @cite_20 @cite_6 . We also consider this parameterization to obtain a tighter label complexity analysis.
{ "cite_N": [ "@cite_14", "@cite_4", "@cite_11", "@cite_21", "@cite_6", "@cite_2", "@cite_15", "@cite_20", "@cite_17" ], "mid": [ "2128518360", "", "2106447856", "2114232233", "53630002", "2167595980", "", "1600192391", "2583780928" ], "abstract": [ "We present a framework for margin based active learning of linear separators. We instantiate it for a few important cases, some of which have been previously considered in the literature.We analyze the effectiveness of our framework both in the realizable case and in a specific noisy setting related to the Tsybakov small noise condition.", "", "This paper analyzes the potential advantages and theoretical challenges of \"active learning\" algorithms. Active learning involves sequential sampling procedures that use information gleaned from previous samples in order to focus the sampling and accelerate the learning process relative to \"passive learning\" algorithms, which are based on nonadaptive (usually random) samples. There are a number of empirical and theoretical results suggesting that in certain situations active learning can be significantly more effective than passive learning. However, the fact that active learning algorithms are feedback systems makes their theoretical analysis very challenging. This paper aims to shed light on achievable limits in active learning. Using minimax analysis techniques, we study the achievable rates of classification error convergence for broad classes of distributions characterized by decision boundary regularity and noise conditions. The results clearly indicate the conditions under which one can expect significant gains through active learning. Furthermore, we show that the learning rates derived are tight for \"boundary fragment\" classes in d-dimensional feature spaces when the feature marginal density is bounded from above and below.", "We study the label complexity of pool-based active learning in the agnostic PAC model. Specifically, we derive general bounds on the number of label requests made by the A2 algorithm proposed by Balcan, Beygelzimer & Langford (, 2006). This represents the first nontrivial general-purpose upper bound on label complexity in the agnostic PAC model.", "Sequential algorithms of active learning based on the estimation of the level sets of the empirical risk are discussed in the paper. Localized Rademacher complexities are used in the algorithms to estimate the sample sizes needed to achieve the required accuracy of learning in an adaptive way. Probabilistic bounds on the number of active examples have been proved and several applications to binary classification problems are considered.", "We present a practical and statistically consistent scheme for actively learning binary classifiers under general loss functions. Our algorithm uses importance weighting to correct sampling bias, and by controlling the variance, we are able to give rigorous label complexity bounds for the learning process.", "", "We study the rates of convergence in classification error achievable by active learning in the presence of label noise. Additionally, we study the more general problem of active learning with a nested hierarchy of hypothesis classes, and propose an algorithm whose error rate provably converges to the best achievable error among classifiers in the hierarchy at a rate adaptive to both the complexity of the optimal classifier and the noise conditions. 
In particular, we state sufficient conditions for these rates to be dramatically faster than those achievable by passive learning.", "Most of the existing active learning algorithms are based on the realizability assumption: The learner's hypothesis class is assumed to contain a target function that perfectly classifies all training and test examples. This assumption can hardly ever be justified in practice. In this paper, we study how relaxing the realizability assumption affects the sample complexity of active learning. First, we extend existing results on query learning to show that any active learning algorithm for the realizable case can be transformed to tolerate random bounded rate class noise. Thus, bounded rate class noise adds little extra complications to active learning, and in particular exponential label complexity savings over passive learning are still possible. However, it is questionable whether this noise model is any more realistic in practice than assuming no noise at all. Our second result shows that if we move to the truly non-realizable model of statistical learning theory, then the label complexity of active learning has the same dependence Ω(1 ∈ 2 ) on the accuracy parameter e as the passive learning label complexity. More specifically, we show that under the assumption that the best classifier in the learner's hypothesis class has generalization error at most β > 0, the label complexity of active learning is Ω(β 2 ∈ 2 log(1 δ)), where the accuracy parameter e measures how close to optimal within the hypothesis class the active learner has to get and δ is the confidence parameter. The implication of this lower bound is that exponential savings should not be expected in realistic models of active learning, and thus the label complexity goals in active learning should be refined." ] }
1006.3039
2951378232
(To appear in Theory and Practice of Logic Programming (TPLP)) We introduce a systematic, concurrent execution scheme for Constraint Handling Rules (CHR) based on a previously proposed sequential goal-based CHR semantics. We establish strong correspondence results to the abstract CHR semantics, thus guaranteeing that any answer in the concurrent, goal-based CHR semantics is reproducible in the abstract CHR semantics. Our work provides the foundation to obtain efficient, parallel CHR execution schemes.
Parallel execution models of forward-chaining production rule based languages (e.g., OPS5 @cite_18 ) have been widely studied in the context of production rule systems. A production rule system is defined by a set of multi-headed production rules (analogous to CHR rules) and a set of assertions (analogous to the CHR store). Production rule systems are richer than the CHR language, featuring user-definable execution strategies and negated rule heads. This makes parallelizing production rule execution extremely difficult, because rule application is not monotonic (rules may not be applicable in a larger context). As such, many previous works on parallel production rule systems focus on efficient means of maintaining the correctness of parallel rule execution (e.g., data dependency analysis @cite_7 , sequential-to-parallel program transformation @cite_22 ) with respect to such user-specified execution strategies. These works can be classified under two approaches, namely synchronous and asynchronous parallel production systems.
{ "cite_N": [ "@cite_18", "@cite_22", "@cite_7" ], "mid": [ "1553764325", "2085596514", "" ], "abstract": [ "It has been claimed that production systems have several advantages over other representational schemes. These include the potential for general self-augmentation (i.e., learning of new behavior) and the ability to function in complex environments. The production system language, OPS, was implemented to test these claims. In this paper we explore some of the issues that bear on the design of production system languages and try to show the adequacy of OPS for its intended purpose.", "Conflict resolution is a form of global control used in production systems to achieve an efficient sequential execution of a rule-based program. This type of control is not used in parallel production system models[6, 13]. Instead, only those programs are executed which make no assumptions regarding conflict resolution. Therefore, the initial sequential rule-based programs are either executed in parallel without their conflict resolution strategy, which normally results in incorrect behavior, or the programs are transformed in an ad hoc manner to execute on a particular parallel production system model. As a result, these programs do not exhibit the parallelism hoped for [10, 13]. We believe that a second reason behind the lack of parallelism is that no formal methods of verifying the correctness of rule-based programs are utilized. Correctness is especially important when conflict resolution is no longer utilized, because it necessary to transform sequential rule-based programs into equivalent programs without conflict resolution. Also, the parallel execution of a rule-based program is more complex and demands these formal methods even more than its sequential counterpart. We are concerned with designing and developing correct rule-based programs for parallel execution. In this paper, we show the difficulty in transforming a simple sequential rule-based program to a new version of the program with no conflict resolution. Also, we show that the use of a new programming paradigm and language may result in more efficient programs which are provably correct, and can be executed in parallel.", "" ] }
1006.3039
2951378232
(To appear in Theory and Practice of Logic Programming (TPLP)) We introduce a systematic, concurrent execution scheme for Constraint Handling Rules (CHR) based on a previously proposed sequential goal-based CHR semantics. We establish strong correspondence results to the abstract CHR semantics, thus guaranteeing that any answer in the concurrent, goal-based CHR semantics is reproducible in the abstract CHR semantics. Our work provides the foundation to obtain efficient, parallel CHR execution schemes.
The most distinctive characteristic of RETE is that partial matches are computed and stored. This, together with the eager nature of RETE matching, is suitable for production rule systems, since assertions (constraints) are propagated (not deleted) by default; hence computing all matches rarely results in redundancy. Traditional CHR systems do not advocate this eager matching scheme because doing so results in many redundancies, due to overlapping simplified matching heads. Eager matching algorithms are also proved in @cite_11 to have a larger asymptotic worst-case space complexity than lazy matching algorithms.
{ "cite_N": [ "@cite_11" ], "mid": [ "2152631562" ], "abstract": [ "Production systems are an established method for encoding knowledge in an expert system. The semantics of production system languages and the concomitant algorithms for their evaluation, RETE and TREAT, enumerate the set of rule instantiations and then apply a strategy that selects a single instantiation for firing. Often rule instantiations are calculated and never fired. In a sense, the time and space required to eagerly compute these unfired instantiations is wasted. This paper presents preliminary results about a new match technique, lazy matching. The lazy match algorithm folds the selection strategy into the search for instantiations, such that only one instantiation is computed per cycle. The algorithm improves the worst-case asymptotic space complexity of incremental matching. Moreover, empirical and analytic results demonstrate that lazy matching can substantially improve the execution time of production system programs." ] }
1006.3039
2951378232
(To appear in Theory and Practice of Logic Programming (TPLP)) We introduce a systematic, concurrent execution scheme for Constraint Handling Rules (CHR) based on a previously proposed sequential goal-based CHR semantics. We establish strong correspondence results to the abstract CHR semantics, thus guaranteeing that any answer in the concurrent, goal-based CHR semantics is reproducible in the abstract CHR semantics. Our work provides the foundation to obtain efficient, parallel CHR execution schemes.
Asynchronous parallel production rule systems (e.g., Swarm @cite_22 , CREL @cite_2 ) introduce parallel rule execution via asynchronously running processors or threads. In such systems, rules can fire asynchronously (not synchronized by production cycles), hence enforcing execution strategies is more difficult and limited. Similar to implementations of goal-based CHR semantics, rule matching in such systems often uses a variant of the LEAPS @cite_11 lazy matching algorithm.
{ "cite_N": [ "@cite_11", "@cite_22", "@cite_2" ], "mid": [ "2152631562", "2085596514", "" ], "abstract": [ "Production systems are an established method for encoding knowledge in an expert system. The semantics of production system languages and the concomitant algorithms for their evaluation, RETE and TREAT, enumerate the set of rule instantiations and then apply a strategy that selects a single instantiation for firing. Often rule instantiations are calculated and never fired. In a sense, the time and space required to eagerly compute these unfired instantiations is wasted. This paper presents preliminary results about a new match technique, lazy matching. The lazy match algorithm folds the selection strategy into the search for instantiations, such that only one instantiation is computed per cycle. The algorithm improves the worst-case asymptotic space complexity of incremental matching. Moreover, empirical and analytic results demonstrate that lazy matching can substantially improve the execution time of production system programs.", "Conflict resolution is a form of global control used in production systems to achieve an efficient sequential execution of a rule-based program. This type of control is not used in parallel production system models[6, 13]. Instead, only those programs are executed which make no assumptions regarding conflict resolution. Therefore, the initial sequential rule-based programs are either executed in parallel without their conflict resolution strategy, which normally results in incorrect behavior, or the programs are transformed in an ad hoc manner to execute on a particular parallel production system model. As a result, these programs do not exhibit the parallelism hoped for [10, 13]. We believe that a second reason behind the lack of parallelism is that no formal methods of verifying the correctness of rule-based programs are utilized. Correctness is especially important when conflict resolution is no longer utilized, because it necessary to transform sequential rule-based programs into equivalent programs without conflict resolution. Also, the parallel execution of a rule-based program is more complex and demands these formal methods even more than its sequential counterpart. We are concerned with designing and developing correct rule-based programs for parallel execution. In this paper, we show the difficulty in transforming a simple sequential rule-based program to a new version of the program with no conflict resolution. Also, we show that the use of a new programming paradigm and language may result in more efficient programs which are provably correct, and can be executed in parallel.", "" ] }
1006.1551
1530289039
Climate change has been a popular topic for a number of years now. Computer Science has contributed to aiding humanity in reducing energy requirements and consequently global warming. Much of this work is through calculators which determine a user’s carbon footprint. However there are no expert systems which can offer advice in an efficient and time saving way. There are many publications which do offer advice on reducing greenhouse gas (GHG) emissions but to find the advice the reader seeks will involve reading a lot of irrelevant material. This work built an expert system (which we call EcoHomeHelper) and attempted to show that it is useful in changing people’s behaviour with respect to their GHG emissions and that they will be able to find the information in a more efficient manner. Twelve participants were used. Seven of which used the program and five who read and attempted to find advice by reading from a list. The application itself has current implementations and the concept further developed, has applications for the future.
It may appear strange to some readers to consider Social Psychology issues in Computer Science research. However, this author believes the union is at times unavoidable when building an interactive application that offers advice. In the context of Human-Computer Interaction (HCI), we can conclude from @cite_6 that people do form social-emotional relationships with such things as jewelry, clothing and other non-humanoid physical or non-physical forms. The authors further suggest that people can respond to computers in social ways, even though they may be unconscious of this behaviour, and consequently form relationships with computers. The same article goes on to argue that, owing to this social-emotional bonding with the agent, in this case the application, computers can play a role in helping people change their behaviour. In other words, computer applications can play a role in persuasion.
{ "cite_N": [ "@cite_6" ], "mid": [ "1985945240" ], "abstract": [ "This research investigates the meaning of “human-computer relationship” and presents techniques for constructing, maintaining, and evaluating such relationships, based on research in social psychology, sociolinguistics, communication and other social sciences. Contexts in which relationships are particularly important are described, together with specific benefits (like trust) and task outcomes (like improved learning) known to be associated with relationship quality. We especially consider the problem of designing for long-term interaction, and define relational agents as computational artifacts designed to establish and maintain long-term social-emotional relationships with their users. We construct the first such agent, and evaluate it in a controlled experiment with 101 users who were asked to interact daily with an exercise adoption system for a month. Compared to an equivalent task-oriented agent without any deliberate social-emotional or relationship-building skills, the relational agent was respected more, liked more, and trusted more, even after four weeks of interaction. Additionally, users expressed a significantly greater desire to continue working with the relational agent after the termination of the study. We conclude by discussing future directions for this research together with ethical and other ramifications of this work for HCI designers." ] }
1006.1592
2953176767
We analyze the capacity scaling laws of clustered ad hoc networks in which nodes are distributed according to a doubly stochastic shot-noise Cox process. We identify five different operational regimes, and for each regime we devise a communication strategy that allows to achieve a throughput to within a poly-logarithmic factor (in the number of nodes) of the maximum theoretical capacity.
In this paper, we follow the line of work in @cite_4 @cite_0 @cite_8 , analyzing the information-theoretic capacity of clustered random networks that contain significant inhomogeneities in the node spatial distribution. In particular, we consider nodes distributed according to a doubly stochastic Shot-Noise Cox Process (SNCP) over a square region whose edge size can scale with the number of nodes.
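To make the node-placement model concrete, the following is a minimal sketch of sampling a clustered point set in the spirit of a shot-noise Cox process: cluster centres are drawn from a homogeneous Poisson process, and each centre then scatters a Poisson number of nodes around itself (a Thomas-process-style special case). All parameter values (intensities, kernel width, area side) are illustrative assumptions, not quantities taken from the cited works.

```python
import numpy as np

def sample_clustered_nodes(area_side, parent_rate, mean_children, sigma, rng=None):
    """Sample a clustered point set: Poisson parent process plus Gaussian
    scattering of children around each parent (a Thomas-process-style
    special case of a shot-noise Cox process)."""
    rng = np.random.default_rng() if rng is None else rng
    # Number of cluster centres in the square [0, area_side]^2.
    n_parents = rng.poisson(parent_rate * area_side ** 2)
    parents = rng.uniform(0.0, area_side, size=(n_parents, 2))
    children = []
    for c in parents:
        k = rng.poisson(mean_children)          # nodes spawned by this centre
        children.append(c + sigma * rng.standard_normal((k, 2)))
    nodes = np.vstack(children) if children else np.empty((0, 2))
    return parents, nodes

parents, nodes = sample_clustered_nodes(area_side=10.0, parent_rate=0.1,
                                        mean_children=20, sigma=0.3)
print(len(parents), "cluster centres,", len(nodes), "nodes")
```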
{ "cite_N": [ "@cite_0", "@cite_4", "@cite_8" ], "mid": [ "", "2002649876", "2116844604" ], "abstract": [ "", "n source and destination pairs randomly located in an area want to communicate with each other. Signals transmitted from one user to another at distance r apart are subject to a power loss of r-alpha as well as a random phase. We identify the scaling laws of the information-theoretic capacity of the network when nodes can relay information for each other. In the case of dense networks, where the area is fixed and the density of nodes increasing, we show that the total capacity of the network scales linearly with n. This improves on the best known achievability result of n2 3 of Aeron and Saligrama. In the case of extended networks, where the density of nodes is fixed and the area increasing linearly with n, we show that this capacity scales as n2-alpha 2 for 2lesalpha 4. Thus, much better scaling than multihop can be achieved in dense networks, as well as in extended networks with low attenuation. The performance gain is achieved by intelligent node cooperation and distributed multiple-input multiple-output (MIMO) communication. The key ingredient is a hierarchical and digital architecture for nodal exchange of information for realizing the cooperation.", "In recent work, Ozgur, Leveque, and Tse (2007) obtained a complete scaling characterization of throughput scaling for random extended wireless networks (i.e., n nodes are placed uniformly at random in a square region of area n). They showed that for small path-loss exponents alpha isin (2,3], cooperative communication is order optimal, and for large path-loss exponents alpha > 3, multihop communication is order optimal. However, their results (both the communication scheme and the proof technique) are strongly dependent on the regularity induced with high probability by the random node placement. In this paper, we consider the problem of characterizing the throughput scaling in extended wireless networks with arbitrary node placement. As a main result, we propose a more general novel cooperative communication scheme that works for arbitrarily placed nodes. For small path-loss exponents alpha isin (2,3], we show that our scheme is order optimal for all node placements, and achieves exactly the same throughput scaling as in Ozgur. This shows that the regularity of the node placement does not affect the scaling of the achievable rates for alpha isin (2,3]. The situation is, however, markedly different for large path-loss exponents alpha > 3. We show that in this regime the scaling of the achievable per-node rates depends crucially on the regularity of the node placement. We then present a family of schemes that smoothly ldquointerpolaterdquo between multihop and cooperative communication, depending upon the level of regularity in the node placement. We establish order optimality of these schemes under adversarial node placement for alpha > 3." ] }
1006.1592
2953176767
We analyze the capacity scaling laws of clustered ad hoc networks in which nodes are distributed according to a doubly stochastic shot-noise Cox process. We identify five different operational regimes, and for each regime we devise a communication strategy that allows to achieve a throughput to within a poly-logarithmic factor (in the number of nodes) of the maximum theoretical capacity.
Third, our constructive lower bounds require employing novel scheduling and routing strategies in combination with existing cooperative communication schemes. Such strategies represent an important contribution in themselves, as they could be adopted to cope with the nodes' spatial inhomogeneity in more general topologies that cannot be described by the SNCP model considered here. Finally, we emphasize that this work extends @cite_10 @cite_12 , where we analyzed the capacity of networks in which nodes are distributed according to an SNCP model, but considered single-user communication schemes only (i.e., traditional point-to-point links).
{ "cite_N": [ "@cite_10", "@cite_12" ], "mid": [ "1491622425", "2117967300" ], "abstract": [ "We analyze the capacity scaling laws of wireless ad hoc networks comprising significant inhomogeneities in the node spatial distribution over the network area. In particular, we consider nodes placed according to a shot-noise Cox process, which allows to model the clustering behavior usually recognized in large-scale systems. For this class of networks, we introduce novel techniques to compute upper bounds to the available per-flow throughput as the number of nodes tends to infinity, which are tight in the case of interference limited systems.", "We consider static ad hoc wireless networks comprising significant inhomogeneities in the node spatial distribution over the area and analyze the scaling laws of their transport capacity as the number of nodes increases. In particular, we consider nodes placed according to a shot-noise Cox process (SNCP), which allows to model the clustering behavior usually recognized in large-scale systems. For this class of networks, we propose novel scheduling and routing schemes that approach previously computed upper bounds to the per-flow throughput as the number of nodes tends to infinity." ] }
1006.1921
2951132123
We address the problem of replicating a Voronoi diagram @math of a planar point set @math by making proximity queries, which are of three possible (in decreasing order of information content): 1. the exact location of the nearest site(s) in @math ; 2. the distance to and label(s) of the nearest site(s) in @math ; 3. a unique label for every nearest site in @math . We provide algorithms showing how queries of Type 1 and Type 2 allow an exact cloning of @math with @math queries and @math processing time. We also prove that queries of Type 3 can never exactly clone @math , but we show that with @math queries we can construct an @math -approximate cloning of @math . In addition to showing the limits of nearest-neighbor database security, our methods also provide one of the first natural algorithmic applications of retroactive data structures.
Bancilhon and Spyratos @cite_21 , Deutsch and Papakonstantinou @cite_42 , and Miklau and Suciu @cite_14 provide general models for privacy loss in information releases from a database, called , which identifies the sensitive information as a specific secret, @math . Attackers are allowed to form legal queries and ask them of the database, while the database owner tries to protect the information that these queries leak about the secret @math . Note that we are instead considering a related, but different, quantitative measure, where there is no specifically sensitive part of the data, but the data owner, Alice, is trying to limit releasing too much of her data.
{ "cite_N": [ "@cite_14", "@cite_21", "@cite_42" ], "mid": [ "2087154854", "128781889", "1570343904" ], "abstract": [ "We perform a theoretical study of the following query-view security problem: given a view V to be published, does V logically disclose information about a confidential query S? The problem is motivated by the need to manage the risk of unintended information disclosure in today's world of universal data exchange. We present a novel information-theoretic standard for query-view security. This criterion can be used to provide a precise analysis of information disclosure for a host of data exchange scenarios, including multi-party collusion and the use of outside knowledge by an adversary trying to learn privileged facts about the database. We prove a number of theoretical results for deciding security according to this standard. We also generalize our security criterion to account for prior knowledge a user or adversary may possess, and introduce techniques for measuring the magnitude of partial disclosures. We believe these results can be a foundation for practical efforts to secure data exchange frameworks, and also illuminate a nice interaction between logic and probability theory.", "This paper is concerned with protection of information in relational databases from disclosure to properly identified users. It is assumed that the only means of access to the database is through a relational query language. The objective of the paper is to formalize the notion of protection. We first describe the information content of the database by a set of propositions and their truth values. The objects to be protected are (the truth values of) certain propositions that have been declared confidential. A query violates a protected proposition if its answer modifies the knowledge of tHe user about (the truth value of) this proposition. Following this approach, we propose a model for evaluating protection systems. In this model a protection system is characterized by the type of queries it takes as its input, the type of data it can protect, the means of protection against queries (e.g. rejection or modification) and the type of protection it provides (e.g., total protection, partial protection, protection against user's inference). Some examples of the use of the model as a tool for analysis are given.", "We formulate and study a privacy guarantee to data owners, who share information with clients by publishing views of a proprietary database. The owner identi.es the sensitive proprietary data using a secret query against the proprietary database. Given an extra view, the privacy guarantee ensures that potential attackers will not learn any information about the secret that could not already be obtained from the existing views. We de.ne “learning” as the modi.cation of the attacker's a-priori probability distribution on the set of possible secrets. We assume arbitrary a-priori distributions (including distributions that correlate the existence of particular tuples) and solve the problem when secret and views are expressed as unions of conjunctive queries with non-equalities, under integrity constraints. We consider guarantees (a) for given view extents (b) for given domain of the secret and (c) independent of the domain and extents." ] }
1006.1921
2951132123
We address the problem of replicating a Voronoi diagram @math of a planar point set @math by making proximity queries, which are of three possible (in decreasing order of information content): 1. the exact location of the nearest site(s) in @math ; 2. the distance to and label(s) of the nearest site(s) in @math ; 3. a unique label for every nearest site in @math . We provide algorithms showing how queries of Type 1 and Type 2 allow an exact cloning of @math with @math queries and @math processing time. We also prove that queries of Type 3 can never exactly clone @math , but we show that with @math queries we can construct an @math -approximate cloning of @math . In addition to showing the limits of nearest-neighbor database security, our methods also provide one of the first natural algorithmic applications of retroactive data structures.
There has been considerable recent work on designing technological approaches that can help protect the privacy or intellectual property rights of a database by modifying its content. For example, several researchers (e.g., see @cite_0 @cite_16 @cite_7 @cite_24 @cite_33 @cite_11 @cite_31 ) have focused on data watermarking, which is a technique for altering the data to make it easier, after the fact, to track when someone has stolen information and published their own database from it. Alternatively, several other researchers @cite_32 @cite_23 @cite_39 @cite_17 @cite_2 @cite_10 @cite_37 @cite_28 have proposed generalization as a way of specifying a quantifiable secrecy-preservation requirement for databases. The generalization approach is to group attribute values into equivalence classes and replace each individual attribute value with its class name, thereby limiting the information that can be derived from any selection query. Our assumption in this paper, however, is that the data owner, Alice, is not interested in modifying her data, since she derives an economic interest from its accuracy. She may instead be interested in placing a reasonable limit on the number of queries any one user may ask, so that she can limit her exposure to the risk of that user cloning her data.
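As a toy illustration of the generalization idea (not the specific algorithms of the cited works), the sketch below coarsens one quasi-identifier into equivalence classes and replaces each exact value by its class label; the attribute name, bucket width and records are hypothetical.

```python
from collections import Counter

def generalize_ages(records, bucket=10):
    """Replace each exact age by a coarse range label (its equivalence class)."""
    out = []
    for r in records:
        lo = (r["age"] // bucket) * bucket
        out.append({**r, "age": f"{lo}-{lo + bucket - 1}"})
    return out

def min_class_size(records, key="age"):
    """Size of the smallest equivalence class, i.e. the k achieved for this attribute."""
    return min(Counter(r[key] for r in records).values())

data = [{"age": 23, "zip": "90001"}, {"age": 27, "zip": "90002"},
        {"age": 24, "zip": "90005"}, {"age": 35, "zip": "90001"},
        {"age": 31, "zip": "90002"}]
anon = generalize_ages(data)
print(anon)                          # every age is now a 10-year range
print("k =", min_class_size(anon))   # 2: each range covers at least 2 records
```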
{ "cite_N": [ "@cite_37", "@cite_33", "@cite_7", "@cite_28", "@cite_32", "@cite_17", "@cite_39", "@cite_0", "@cite_24", "@cite_23", "@cite_2", "@cite_31", "@cite_16", "@cite_10", "@cite_11" ], "mid": [ "2046776785", "2030834451", "2039690308", "2161229593", "", "2052806235", "56293434", "", "2120201091", "2119067110", "", "1598863324", "1993332882", "2142291269", "2100913369" ], "abstract": [ "In order to protect individuals' privacy, the technique of k-anonymization has been proposed to de-associate sensitive attributes from the corresponding identifiers. In this paper, we provide privacy-enhancing methods for creating k-anonymous tables in a distributed scenario. Specifically, we consider a setting in which there is a set of customers, each of whom has a row of a table, and a miner, who wants to mine the entire table. Our objective is to design protocols that allow the miner to obtain a k-anonymous table representing the customer data, in such a way that does not reveal any extra information that can be used to link sensitive attributes to corresponding identifiers, and without requiring a central authority who has access to all the original data. We give two different formulations of this problem, with provably private solutions. Our solutions enhance the privacy of k-anonymization in the distributed scenario by maintaining end-to-end privacy from the original customer data to the final k-anonymous results.", "This paper presents a way to embed watermarks into 2D vectordata. The watermarking system provides a high capacity and is robust against the following attacks: polyline simplifications like the Douglas-Peucker algorithm [1], moving and cropping of data and addition of small amounts of random noise. The system is designed for adding information to digital maps. The attacks mentioned above can happen during the daily work with these maps, so the watermark will not be destroyed by working with the data. Nevertheless it is possible to destroy it on purpose by using other attacks. The information is embedded by changing the x y-coordinates of datapoints within the tolerance of the data. A block code is used to reconstruct missing data when not enough datapoints are available and a synchronization provides a way to detect the information even when the map is cropped.", "Abstract.We enunciate the need for watermarking database relations to deter data piracy, identify the characteristics of relational data that pose unique challenges for watermarking, and delineate desirable properties of a watermarking system for relational data. We then present an effective watermarking technique geared for relational data. This technique ensures that some bit positions of some of the attributes of some of the tuples contain specific values. The specific bit locations and values are algorithmically determined under the control of a secret key known only to the owner of the data. This bit pattern constitutes the watermark. Only if one has access to the secret key can the watermark be detected with high probability. Detecting the watermark requires access neither to the original data nor the watermark, and the watermark can be easily and efficiently maintained in the presence of insertions, updates, and deletions. Our analysis shows that the proposed technique is robust against various forms of malicious attacks as well as benign updates to the data. 
Using an implementation running on DB2, we also show that the algorithms perform well enough to be used in real-world applications.", "Data de-identification reconciles the demand for release of data for research purposes and the demand for privacy from individuals. This paper proposes and evaluates an optimization algorithm for the powerful de-identification procedure known as k-anonymization. A k-anonymized dataset has the property that each record is indistinguishable from at least k - 1 others. Even simple restrictions of optimized k-anonymity are NP-hard, leading to significant computational challenges. We present a new approach to exploring the space of possible anonymizations that tames the combinatorics of the problem, and develop data-management strategies to reduce reliance on expensive operations such as sorting. Through experiments on real census data, we show the resulting algorithm can find optimal k-anonymizations under two representative cost measures and a wide range of k. We also show that the algorithm can produce good anonymizations in circumstances where the input data or input parameters preclude finding an optimal solution in reasonable time. Finally, we use the algorithm to explore the effects of different coding approaches and problem variations on anonymization quality and performance. To our knowledge, this is the first result demonstrating optimal k-anonymization of a non-trivial dataset under a general model of the problem.", "", "The technique of k-anonymization has been proposed in the literature as an alternative way to release public information, while ensuring both data privacy and data integrity. We prove that two general versions of optimal k-anonymization of relations are NP-hard, including the suppression version which amounts to choosing a minimum number of entries to delete from the relation. We also present a polynomial time algorithm for optimal k-anonymity that achieves an approximation ratio independent of the size of the database, when k is constant. In particular, it is a O(k log k)-approximation where the constant in the big-O is no more than 4, However, the runtime of the algorithm is exponential in k. A slightly more clever algorithm removes this condition, but is a O(k log m)-approximation, where m is the degree of the relation. We believe this algorithm could potentially be quite fast in practice.", "", "", "Watermarking allows robust and unobtrusive insertion of information in a digital document. During the last few years, techniques have been proposed for watermarking relational databases or Xml documents, where information insertion must preserve a specific measure on data (for example the mean and variance of numerical attributes). In this article we investigate the problem of watermarking databases or Xml while preserving a set of parametric queries in a specified language, up to an acceptable distortion. We first show that unrestricted databases can not be watermarked while preserving trivial parametric queries. We then exhibit query languages and classes of structures that allow guaranteed watermarking capacity, namely 1) local query languages on structures with bounded degree Gaifman graph, and 2) monadic second-order queries on trees or treelike structures. We relate these results to an important topic in computational learning theory, the VC-dimension. 
We finally consider incremental aspects of query-preserving watermarking.", "Today's globally networked society places great demands on the dissemination and sharing of information. While in the past released information was mostly in tabular and statistical form, many situations call for the release of specific data (microdata). In order to protect the anonymity of the entities (called respondents) to which information refers, data holders often remove or encrypt explicit identifiers such as names, addresses, and phone numbers. Deidentifying data, however, provides no guarantee of anonymity. Released information often contains other data, such as race, birth date, sex, and ZIP code, that can be linked to publicly available information to reidentify respondents and inferring information that was not intended for disclosure. In this paper we address the problem of releasing microdata while safeguarding the anonymity of respondents to which the data refer. The approach is based on the definition of k-anonymity. A table provides k-anonymity if attempts to link explicitly identifying information to its content map the information to at least k entities. We illustrate how k-anonymity can be provided without compromising the integrity (or truthfulness) of the information released by using generalization and suppression techniques. We introduce the concept of minimal generalization that captures the property of the release process not distorting the data more than needed to achieve k-anonymity, and present an algorithm for the computation of such a generalization. We also discuss possible preference policies to choose among different minimal generalizations.", "", "Mechanisms for privacy assurances (e.g., queries over encrypted data) are essential to a viable and secure management solution for outsourced data. On a somewhat orthogonal dimension but equally important, we find the requirement to be able to assert and protect rights over such data.", "", "k-anonymization techniques have been the focus of intense research in the last few years. An important requirement for such techniques is to ensure anonymization of data while at the same time minimizing the information loss resulting from data modifications. In this paper we propose an approach that uses the idea of clustering to minimize information loss and thus ensure good data quality. The key observation here is that data records that are naturally similar to each other should be part of the same equivalence class. We thus formulate a specific clustering problem, referred to as k-member clustering problem. We prove that this problem is NP-hard and present a greedy heuristic, the complexity of which is in O(n2). As part of our approach we develop a suitable metric to estimate the information loss introduced by generalizations, which works for both numeric and categorical data.", "we introduce a solution for relational database content rights protection through watermarking. Rights protection for relational data is of ever-increasing interest, especially considering areas where sensitive, valuable content is to be outsourced. A good example is a data mining application, where data is sold in pieces to parties specialized in mining it. Different avenues are available, each with its own advantages and drawbacks. Enforcement by legal means is usually ineffective in preventing theft of copyrighted works, unless augmented by a digital counterpart, for example, watermarking. 
While being able to handle higher level semantic constraints, such as classification preservation, our solution also addresses important attacks, such as subset selection and random and linear data changes. We introduce wmdb., a proof-of-concept implementation and its application to real-life data, namely, in watermarking the outsourced Wal-Mart sales data that we have available at our institute." ] }
1006.1921
2951132123
We address the problem of replicating a Voronoi diagram @math of a planar point set @math by making proximity queries, which are of three possible (in decreasing order of information content): 1. the exact location of the nearest site(s) in @math ; 2. the distance to and label(s) of the nearest site(s) in @math ; 3. a unique label for every nearest site in @math . We provide algorithms showing how queries of Type 1 and Type 2 allow an exact cloning of @math with @math queries and @math processing time. We also prove that queries of Type 3 can never exactly clone @math , but we show that with @math queries we can construct an @math -approximate cloning of @math . In addition to showing the limits of nearest-neighbor database security, our methods also provide one of the first natural algorithmic applications of retroactive data structures.
There has been less prior related work that takes the "black hat" perspective of this paper, which asks how quickly a data set can be discovered from seemingly minimal responses to queries about it. For example, Goodrich @cite_8 studies the problem of discovering a DNA string from genomic comparison queries, and Nohl and Evans @cite_38 study quantitative measures of the global system information that is leaked by learning the contents of multiple RFID tags generated by that system.
{ "cite_N": [ "@cite_38", "@cite_8" ], "mid": [ "2783478601", "2154357802" ], "abstract": [ "Radio Frequency Identification (RFID) systems promise large scale, automated tracking solutions but also pose a threat to customer privacy. The tree-based hash protocol proposed by Molnar and Wagner presents a scalable, privacy-preserving solution. Previous analyses of this protocol concluded that an attacker who can extract secrets from a large number of tags can compromise privacy of other tags. We propose a new metric for information leakage in RFID protocols along with a threat model that more realistically captures the goals and capabilities of potential attackers. Using this metric, we measure the information leakage in the tree-based hash protocol and estimate an attacker’s probability of success in tracking targeted individuals, considering scenarios in which multiple information sources can be combined to track an individual. We conclude that an attacker has a reasonable chance of tracking tags when the tree-based hash protocol is used.", "In this paper, we study the degree to which a genomic string, @math ,leaks details about itself any time it engages in comparison protocolswith a genomic querier, Bob, even if those protocols arecryptographically guaranteed to produce no additional information otherthan the scores that assess the degree to which @math matches stringsoffered by Bob. We show that such scenarios allow Bob to play variantsof the game of Mastermind with @math so as to learn the complete identityof @math . We show that there are a number of efficient implementationsfor Bob to employ in these Mastermind attacks, depending on knowledgehe has about the structure of @math , which show how quickly he candetermine @math . Indeed, we show that Bob can discover @math using anumber of rounds of test comparisons that is much smaller than thelength of @math , under various assumptions regarding the types of scoresthat are returned by the cryptographic protocols and whether he can useknowledge about the distribution that @math comes from, e.g., usingpublic knowledge about the properties of human DNA. We also providethe results of an experimental study we performed on a database ofmitochondrial DNA, showing the vulnerability of existing real-world DNAdata to the Mastermind attack." ] }
1006.1921
2951132123
We address the problem of replicating a Voronoi diagram @math of a planar point set @math by making proximity queries, which are of three possible (in decreasing order of information content): 1. the exact location of the nearest site(s) in @math ; 2. the distance to and label(s) of the nearest site(s) in @math ; 3. a unique label for every nearest site in @math . We provide algorithms showing how queries of Type 1 and Type 2 allow an exact cloning of @math with @math queries and @math processing time. We also prove that queries of Type 3 can never exactly clone @math , but we show that with @math queries we can construct an @math -approximate cloning of @math . In addition to showing the limits of nearest-neighbor database security, our methods also provide one of the first natural algorithmic applications of retroactive data structures.
Motivated by the problem of having a robot discover the shape of an object by touching it @cite_22 , there is a considerable amount of related work in the computational geometry literature on discovering polygonal and polyhedral shapes from probing (e.g., see @cite_9 @cite_29 @cite_43 @cite_1 @cite_12 @cite_3 @cite_4 @cite_26 @cite_41 @cite_34 ). Rather than review all of this work in detail, we point only to a representative subset of it (e.g., see @cite_9 @cite_29 @cite_1 @cite_12 @cite_3 @cite_4 @cite_41 @cite_34 ). We refer the interested reader to the survey and book chapter by Skiena @cite_18 @cite_13 , and simply mention that, with the notable exception of the work by Dobkin et al. @cite_12 , this prior work is primarily directed at discovering obstacles in a two-dimensional environment using various kinds of contact probes. Thus, although it is related, most of this prior work cannot be adapted to the problem of discovering a Voronoi diagram through nearest-neighbor probes.
{ "cite_N": [ "@cite_18", "@cite_26", "@cite_4", "@cite_22", "@cite_41", "@cite_9", "@cite_29", "@cite_1", "@cite_3", "@cite_43", "@cite_34", "@cite_13", "@cite_12" ], "mid": [ "1968632103", "2026732056", "2088549534", "1973492835", "2153050879", "2015816463", "1590151786", "2038334420", "2073564310", "1995282992", "", "", "1972923728" ], "abstract": [ "", "A testing algorithm takes a model and produces a set of points that can be used to test whether or not an unknown object is sufficiently similar to the model. A testing algorithm performs a complementary task to that performed by a learning algorithm, which takes a set of examples and builds a model that succinctly describes them. Testing can also be viewed as a type of geometric probing that uses point probes (i.e. test points) to verify that an unknown geometric object is similar to a given model. In this paper we examine the problem of verifying orthogonal shapes using test points. In particular, we give testing algorithms for sets of disjoint rectangles in two and higher dimensions and for general orthogonal shapes in 2-D and 3-D. This work is a first step towards developing efficient testing algorithms for objects with more general shapes, including those with non-orthogonal and curved surfaces.", "Abstract We prove that n+4 finger probes are sufficient to determine the shape of a convex n-gon from a finite collection of models, improving the previous result of 2n+1. Further, we show that n−1 are necessary, proving this is optimal to within an additive constant. For line probes, we show that 2n+4 probes are sufficient and 2n−3 necessary. The difference between these results is particularly interesting in light of the duality relationship between finger and line probes.", "We consider a new problem motivated by robotics: how to determine shape and position from probes. We show that 3n probes are sufficient, but 3n − 1 are necessary, to determine the shape and position of any n-gon. Under a mild assumption, 3n probes are necessary.", "Geometric probing considers problems of determining a geometric structure or some aspect of that structure from the results of a mathematical or physical measuring device, a probe. The field of geometric probing is surveyed, with results ordered by a probing model. The emphasis is on interactive reconstruction, where the results of all previous measurements are used to determine the orientation of the next probe so it provides the maximum amount of information about the structure. Through interactive reconstruction, finite determination strategies exist for such diverse models as finger, X-ray, and half-plane probes. >", "We present algorithms to reconstruct the planar cross-section of a simply connected object from data points measured by rays. The rays are semi-infinite curves representing, for example, the laser beam or the articulated arms of a robot moving around the object. This paper shows that the information provided by the rays is crucial (though generally neglected) when solving 2-dimensional reconstruction problems. The main property of the rays is that they induce a total order on the measured points. This order is shown to be computable in optimal time O(n log n). The algorithm is fully dynamic and allows the insertion or the deletion of a point in O(log n) time. From this order a polygonal approximation of the object can be deduced in a straightforward manner. 
However, if insufficient data are available or if the points belong to several connected objects, this polygonal approximation may not be a simple polygon or may intersect the rays. This can be checked in O(n log n) time. The order induced by the rays can also be used to find a strategy for discovering the exact shape of a simple (but not necessarily convex) polygon by means of a minimal number of probes. When each probe outcome consists of a contact point, a ray measuring that point and the normal to the object at the point, we have shown that 3n-3 probes are necessary and sufficient if the object has n non-colinear edges. Each probe can be determined in O(log n) time yielding an O(n log n)-time 0(n)-space algorithm. When each probe outcome consists of a contact point and a ray measuring that point but not the normal, the same strategy can still be applied. Under a mild condition, 8n-4 probes are sufficient to discover a shape that is almost surely the actual shape of the object.", "Suppose that for a set H of n unknown hyperplanes in the Euclidean d-dimensional space, a line probe is available which reports the set of intersection points of a query line with the hyperplanes. Under this model, this paper investigates the complexity to find a generic line for H and further to determine the hyperplanes in H. This problem arises in factoring the u-resultant to solve systems of polynomials (e.g., Renegar [13]). We prove that d+1 line probes are sufficient to determine H. Algorithmically, the time complexity to find a generic line and reconstruct H from O(dn) probed points of intersection is important. It is shown that a generic line can be computed in O(dn log n) time after d line probes, and by an additional d line probes, all the hyperplanes in H are reconstructed in O(dn log n) time. This result can be extended to the d-dimensional complex space. Also, concerning the factorization of the u-resultant using the partial derivatives on a generic line, we touch upon reducing the time complexity to compute the partial derivatives of the u-resultant represented as the determinant of a matrix.", "", "An X-ray probe through a polygon measures the length of intersection between a line and the polygon. This paper considers the properties of various classes of X-ray probes, and shows how they interact to give finite strategies for completely describing convex n-gons. It is shown that @math probes are sufficient to verify a specified n-gon, while for determining convex polygons @math X-ray probes are necessary and @math sufficient, with @math sufficient given that a lower bound on the size of the smallest edge of P is known.", "Abstract Let Γ be a set of convex unimodal polygons in fixed position and orientation. We prove that the problem of determining whether k finger probes are sufficient to distinguish among the polygons in Γ is NP-complete for two types of finger probes. This implies that the same results hold for most interesting classes of polygons on which finger probes can be used.", "", "", "We investigate the complexity of determining the shape and presentation (i.e. position with orientation) of convex polytopes in multi-dimensional Euclidean space using a variety of probe models." ] }
1006.1921
2951132123
We address the problem of replicating a Voronoi diagram @math of a planar point set @math by making proximity queries, which are of three possible (in decreasing order of information content): 1. the exact location of the nearest site(s) in @math ; 2. the distance to and label(s) of the nearest site(s) in @math ; 3. a unique label for every nearest site in @math . We provide algorithms showing how queries of Type 1 and Type 2 allow an exact cloning of @math with @math queries and @math processing time. We also prove that queries of Type 3 can never exactly clone @math , but we show that with @math queries we can construct an @math -approximate cloning of @math . In addition to showing the limits of nearest-neighbor database security, our methods also provide one of the first natural algorithmic applications of retroactive data structures.
By a well-known lifting method (e.g., see @cite_40 ), a 2-dimensional Voronoi diagram can be obtained as the projection to the plane of the skeleton of a 3-dimensional convex polyhedron, namely the upper envelope of the planes tangent to a certain paraboloid at the vertical liftings of the point sites. This property of Voronoi diagrams implies that the method of Dobkin et al. @cite_12 for discovering a convex polytope via ray-shooting "finger" probes can be used to discover a Voronoi diagram in the plane using nearest-neighbor queries (for instance, the queries we call "exact queries"). Translated into this context, their method results in a scheme that would use @math queries to clone a Voronoi diagram, with a time overhead that is @math .
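For completeness, the standard lifting identity behind this construction can be spelled out as follows (a textbook fact, stated here only to clarify the connection; see e.g. @cite_40 ).

```latex
% Lift each site p=(a,b) to the paraboloid z = x^2+y^2 and take the plane
% tangent to the paraboloid at the lifted point:
\[
  h_p(x,y) \;=\; 2ax + 2by - (a^2 + b^2).
\]
% For any query point q=(x,y), the vertical gap between the paraboloid and
% this plane equals the squared Euclidean distance from q to p:
\[
  (x^2 + y^2) - h_p(x,y) \;=\; (x-a)^2 + (y-b)^2 \;=\; \lVert q - p \rVert^2 ,
\]
% so the nearest site at q is the one whose tangent plane is highest at q,
% and the Voronoi diagram of S is the vertical projection of the upper
% envelope of the planes h_p, p in S.
```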
{ "cite_N": [ "@cite_40", "@cite_12" ], "mid": [ "2149906774", "1972923728" ], "abstract": [ "This introduction to computational geometry focuses on algorithms. Motivation is provided from the application areas as all techniques are related to particular applications in robotics, graphics, CAD CAM, and geographic information systems. Modern insights in computational geometry are used to provide solutions that are both efficient and easy to understand and implement.", "We investigate the complexity of determining the shape and presentation (i.e. position with orientation) of convex polytopes in multi-dimensional Euclidean space using a variety of probe models." ] }
1006.1921
2951132123
We address the problem of replicating a Voronoi diagram @math of a planar point set @math by making proximity queries, which are of three possible (in decreasing order of information content): 1. the exact location of the nearest site(s) in @math ; 2. the distance to and label(s) of the nearest site(s) in @math ; 3. a unique label for every nearest site in @math . We provide algorithms showing how queries of Type 1 and Type 2 allow an exact cloning of @math with @math queries and @math processing time. We also prove that queries of Type 3 can never exactly clone @math , but we show that with @math queries we can construct an @math -approximate cloning of @math . In addition to showing the limits of nearest-neighbor database security, our methods also provide one of the first natural algorithmic applications of retroactive data structures.
Demaine et al. @cite_35 show how a general comparison-based ordered dictionary (with successor and predecessor queries) of @math elements (which may not belong to a total order, but which can always be compared whenever they are in @math for the same time value) can be made fully retroactive in @math space, with @math query time and amortized @math update time, in the pointer machine model. Blelloch @cite_27 and Giora and Kaplan @cite_15 improve these bounds for numerical (totally ordered) items, showing how to achieve a fully retroactive ordered dictionary in @math space with @math query and update times in the RAM model. These latter results do not apply to the general comparison-based, partially ordered setting, however.
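To fix intuition about the interface these results concern (and not as a substitute for the polylogarithmic structures cited above), here is a deliberately naive fully retroactive ordered dictionary that simply replays the timeline of updates on every query; the class and method names are our own illustrative choices.

```python
import bisect

class NaiveRetroactiveDict:
    """Fully retroactive sorted set: updates carry a time t, and queries may
    refer to the state 'as of' any time.  History is replayed on each query,
    so the cost per query is linear in the history length, far from the
    polylogarithmic bounds of the structures cited above."""

    def __init__(self):
        self.ops = []  # (t, 'ins' or 'del', key), kept sorted by time

    def add_update(self, t, kind, key):
        bisect.insort(self.ops, (t, kind, key))

    def remove_update(self, t, kind, key):
        self.ops.remove((t, kind, key))

    def _state_at(self, t):
        keys = set()
        for ti, kind, key in self.ops:
            if ti > t:
                break
            if kind == 'ins':
                keys.add(key)
            else:
                keys.discard(key)
        return sorted(keys)

    def predecessor(self, key, t):
        s = self._state_at(t)
        i = bisect.bisect_left(s, key)
        return s[i - 1] if i > 0 else None

d = NaiveRetroactiveDict()
d.add_update(1, 'ins', 10)
d.add_update(3, 'ins', 20)
d.add_update(2, 'del', 10)          # retroactively delete 10 at time 2
print(d.predecessor(25, t=2.5))     # None: 10 already deleted, 20 not yet inserted
print(d.predecessor(25, t=4))       # 20
```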
{ "cite_N": [ "@cite_35", "@cite_27", "@cite_15" ], "mid": [ "2121573131", "1998117630", "2624264009" ], "abstract": [ "We introduce a new data structuring paradigm in which operations can be performed on a data structure not only in the present, but also in the past. In this new paradigm, called retroactive data structures, the historical sequence of operations performed on the data structure is not fixed. The data structure allows arbitrary insertion and deletion of operations at arbitrary times, subject only to consistency requirements. We initiate the study of retroactive data structures by formally defining the model and its variants. We prove that, unlike persistence, efficient retroactivity is not always achievable. Thus, we present efficient retroactive data structures for queues, doubly ended queues, priority queues, union-find, and decomposable search structures.", "We describe an asymptotically optimal data-structure for dynamic point location for horizontal segments. For n line-segments, queries take O(log n) time, updates take O(log n) amortized time and the data structure uses O(n) space. This is the first structure for the problem that is optimal in space and time (modulo the possibility of removing amortization). We also describe dynamic data structures for orthogonal range reporting and orthogonal intersection reporting. In both data structures for n points (segments) updates take O(log n) amortized time, queries take O(log n+k log n log log n) time, and the structures use O(n) space, where k is the size of the output. The model of computation is the unit cost RAM.", "In this paper we consider the dynamic vertical ray shooting problem, that is the task of maintaining a dynamic set S of n non intersecting horizontal line segments in the plane subject to a query that reports the first segment in S intersecting a vertical ray from a query point. We develop a linear-size structure that supports queries, insertions and deletions in O(log n) worst-case time. Our structure works in the comparison model and uses a RAM." ] }
1006.0526
2951868394
Centrality is an important notion in network analysis and is used to measure the degree to which network structure contributes to the importance of a node in a network. While many different centrality measures exist, most of them apply to static networks. Most networks, on the other hand, are dynamic in nature, evolving over time through the addition or deletion of nodes and edges. A popular approach to analyzing such networks represents them by a static network that aggregates all edges observed over some time period. This approach, however, under or overestimates centrality of some nodes. We address this problem by introducing a novel centrality metric for dynamic network analysis. This metric exploits an intuition that in order for one node in a dynamic network to influence another over some period of time, there must exist a path that connects the source and destination nodes through intermediaries at different times. We demonstrate on an example network that the proposed metric leads to a very different ranking than analysis of an equivalent static network. We use dynamic centrality to study a dynamic citations network and contrast results to those reached by static network analysis.
Time-aware ranking. Closely related to dynamic network analysis is the problem of time-aware ranking of Web pages in information retrieval. This research is motivated by the observation @cite_22 that PageRank's Web ranking algorithm is biased against newer pages, which may not have had enough time to accumulate the links that would give them a high rank. Several methods have been proposed to address the recency bias in PageRank, including @cite_22 @cite_15 @cite_29 @cite_14 . In general terms, these methods weight edges in the network by age, with newer edges contributing more heavily to a page's importance. Our motivation is different: rather than focus on improving the rank of newer nodes, we focus on defining a time-aware centrality metric that takes the temporal order of edges into account.
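As a concrete, simplified instance of this idea (not the exact scheme of any single cited paper), the sketch below runs PageRank-style power iteration with each edge's weight decaying exponentially in its age; the half-life, damping factor and toy graph are illustrative assumptions.

```python
from collections import defaultdict

def time_weighted_pagerank(edges, now, half_life=2.0, d=0.85, iters=50):
    """edges: list of (src, dst, t).  Newer edges get larger weight:
    w = 0.5 ** ((now - t) / half_life)."""
    out = defaultdict(list)              # src -> [(dst, w)]
    w_out = defaultdict(float)           # total outgoing weight per node
    nodes = set()
    for u, v, t in edges:
        w = 0.5 ** ((now - t) / half_life)
        out[u].append((v, w))
        w_out[u] += w
        nodes.update((u, v))
    n = len(nodes)
    pr = {v: 1.0 / n for v in nodes}
    for _ in range(iters):
        nxt = {v: (1 - d) / n for v in nodes}
        for u in nodes:
            if not out[u]:               # dangling node: spread mass uniformly
                for v in nodes:
                    nxt[v] += d * pr[u] / n
            else:
                for v, w in out[u]:
                    nxt[v] += d * pr[u] * w / w_out[u]
        pr = nxt
    return pr

edges = [("a", "b", 2001), ("c", "b", 2001), ("a", "c", 2009), ("b", "c", 2009)]
print(time_weighted_pagerank(edges, now=2010))  # "c" benefits from its recent in-links
```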
{ "cite_N": [ "@cite_14", "@cite_15", "@cite_29", "@cite_22" ], "mid": [ "2155467656", "2151496648", "", "1591738266" ], "abstract": [ "In web search, recency ranking refers to ranking documents by relevance which takes freshness into account. In this paper, we propose a retrieval system which automatically detects and responds to recency sensitive queries. The system detects recency sensitive queries using a high precision classifier. The system responds to recency sensitive queries by using a machine learned ranking model trained for such queries. We use multiple recency features to provide temporal evidence which effectively represents document recency. Furthermore, we propose several training methodologies important for training recency sensitive rankers. Finally, we develop new evaluation metrics for recency sensitive queries. Our experiments demonstrate the efficacy of the proposed approaches.", "Web search is probably the single most important application on the Internet. The most famous search techniques are perhaps the PageRank and HITS algorithms. These algorithms are motivated by the observation that a hyperlink from a page to another is an implicit conveyance of authority to the target page. They exploit this social phenomenon to identify quality pages, e.g., \"authority\" pages and \"hub\" pages. In this paper we argue that these algorithms miss an important dimension of the Web, the temporal dimension. The Web is not a static environment. It changes constantly. Quality pages in the past may not be quality pages now or in the future. These techniques favor older pages because these pages have many in-links accumulated over time. New pages, which may be of high quality, have few or no in-links and are left behind. Bringing new and quality pages to users is important because most users want the latest information. Research publication search has exactly the same problem. This paper studies the temporal dimension of search in the context of research publication search. We propose a number of methods deal with the problem. Our experimental results show that these methods are highly effective.", "", "This paper is aimed at the study of quantitative measures of the relation between Web structure, page recency, and quality of Web pages. Quality is studied using different link-based metrics considering their relationship with the structure of the Web and the last modification time of a page. We show that, as expected, Pagerank is biased against new pages. As a subproduct we propose a Pagerank variant that includes page recency into account and we obtain information on how recency is related with Web structure." ] }
1006.0526
2951868394
Centrality is an important notion in network analysis and is used to measure the degree to which network structure contributes to the importance of a node in a network. While many different centrality measures exist, most of them apply to static networks. Most networks, on the other hand, are dynamic in nature, evolving over time through the addition or deletion of nodes and edges. A popular approach to analyzing such networks represents them by a static network that aggregates all edges observed over some time period. This approach, however, under or overestimates centrality of some nodes. We address this problem by introducing a novel centrality metric for dynamic network analysis. This metric exploits an intuition that in order for one node in a dynamic network to influence another over some period of time, there must exist a path that connects the source and destination nodes through intermediaries at different times. We demonstrate on an example network that the proposed metric leads to a very different ranking than analysis of an equivalent static network. We use dynamic centrality to study a dynamic citations network and contrast results to those reached by static network analysis.
The authors of @cite_18 considered the temporal order of edges in the flow of information on a network. They proposed the EventRank algorithm, a modification of PageRank that takes into account a temporal sequence of events, e.g., the spread of an email message, in order to calculate the importance of nodes in a network. This approach takes into account the effect that the dynamic process unfolding on the network has on ranking. In contrast, we consider the effect that the dynamics of the network itself have on ranking. These approaches are somewhat related: our method can be said to estimate the expected value over all temporal sequences taking place on the network.
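To illustrate the kind of time-respecting paths that underlie this intuition (an illustrative check only, not the authors' centrality computation), the sketch below tests whether information at one node could have reached another through edges whose timestamps strictly increase along the path.

```python
from collections import defaultdict

def temporally_reachable(edges, src, dst):
    """edges: list of (u, v, t).  Returns True if there is a path from src to
    dst whose edge timestamps strictly increase, i.e. information present at
    src could have flowed to dst through intermediaries at later times."""
    earliest = defaultdict(lambda: float("inf"))   # earliest arrival time at each node
    earliest[src] = float("-inf")
    for u, v, t in sorted(edges, key=lambda e: e[2]):  # scan edges in time order
        if earliest[u] < t and t < earliest[v]:
            earliest[v] = t
    return earliest[dst] < float("inf")

edges = [("a", "b", 1), ("b", "c", 3), ("c", "d", 2)]
print(temporally_reachable(edges, "a", "c"))  # True: a->b at time 1, then b->c at time 3
print(temporally_reachable(edges, "a", "d"))  # False: c->d happened before b->c
```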
{ "cite_N": [ "@cite_18" ], "mid": [ "1967947187" ], "abstract": [ "Node-ranking algorithms for (social) networks do not respect the sequence of events from which the network is constructed, but rather measure rank on the aggregation of all data. For data sets that relate to the flow of information (e.g., email), this loss of information can obscure the true relative importances of individuals in the network. We present EventRank, a framework for ranking algorithms that respect event sequences and provide a natural way of tracking changes in ranking over time. We compare the performance of a number of ranking algorithms using a large organizational data set consisting of approximately 1 million emails involving over 600 users, including an evaluation of how the email-based ranking correlates with known organizational hierarchy." ] }
1006.0809
1645064312
Analysing Web graphs has applications in determining page ranks, fighting Web spam, detecting communities and mirror sites, and more. This study is however hampered by the necessity of storing a major part of huge graphs in the external memory, which prevents efficient random access to edge (hyperlink) lists. A number of algorithms involving compression techniques have thus been presented, to represent Web graphs succinctly but also providing random access. Those techniques are usually based on differential encodings of the adjacency lists, finding repeating nodes or node regions in the successive lists, more general grammar-based transformations or 2-dimensional representations of the binary matrix of the graph. In this paper we present two Web graph compression algorithms. The first can be seen as engineering of the Boldi and Vigna (2004) method. We extend the notion of similarity between link lists, and use a more compact encoding of residuals. The algorithm works on blocks of varying size (in the number of input lines) and sacrifices access time for better compression ratio, achieving more succinct graph representation than other algorithms reported in the literature. The second algorithm works on blocks of the same size, in the number of input lines, and its key mechanism is merging the block into a single ordered list. This method achieves much more attractive space-time tradeoffs.
We assume that a directed graph @math is a set of @math vertices and @math edges. The earliest works on graph compression were theoretical, and they usually dealt with specific graph classes. For example, it is known that planar graphs can be compressed into @math bits @cite_7 @cite_25 . For dense enough graphs, however, it is impossible to reach @math bits of space, i.e., to go below the space complexity of the trivial adjacency list representation. Since Jacobson's seminal thesis @cite_19 on succinct data structures, papers have appeared that take into account not only the space occupied by a graph, but also access times.
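As a quick worked comparison of the scales involved (our own illustrative numbers): for a planar graph with n vertices and m <= 3n - 6 edges, plain adjacency lists cost about m*log2(n) bits, against the linear-size (12n-bit) planar encoding mentioned in @cite_7 .

```latex
\[
  \underbrace{m\,\lceil \log_2 n \rceil}_{\text{adjacency lists}}
  \quad\text{vs.}\quad
  \underbrace{12\,n}_{\text{planar encoding}},
  \qquad\text{e.g. } n = 10^6,\ m = 3\times 10^6:\quad
  3\times 10^6 \cdot 20 = 6\times 10^{7}\ \text{bits} \approx 7.5\ \text{MB}
  \quad\text{vs.}\quad
  1.2\times 10^{7}\ \text{bits} = 1.5\ \text{MB}.
\]
```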
{ "cite_N": [ "@cite_19", "@cite_25", "@cite_7" ], "mid": [ "127947978", "2065527713", "1975373301" ], "abstract": [ "Data compression is when you take a big chunk of data and crunch it down to fit into a smaller space. That data is put on ice; you have to un-crunch the compressed data to get at it. Data optimization, on the other hand, is when you take a chunk of data plus a collection of operations you can perform on that data, and crunch it into a smaller space while retaining the ability to perform the operations efficiently. This thesis investigates the problem of data optimization for some fundamental static data types, concentrating on linked data structures such as trees. I chose to restrict my attention to static data structures because they are easier to optimize since the optimization can be performed off-line. Data optimization comes in two different flavors: concrete and abstract. Concrete optimization finds minimal representations within a given implementation of a data structure; abstract optimization seeks implementations with guaranteed economy of space and time. I consider the problem of concrete optimization of various pointer-based implementations of trees and graphs. The only legitimate use of a pointer is as a reference, so we are free to map the pieces of a linked structure into memory as we choose. The problem is to find a mapping that maximizes overlap of the pieces, and hence minimizes the space they occupy. I solve the problem of finding a minimal representation for general unordered trees where pointers to children are stored in a block of consecutive locations. The algorithm presented is based on weighted matching. I also present an analysis showing that the average number of cons-cells required to store a binary tree of n nodes as a minimal binary DAG is asymptotic to @math lg @math . Methods for representing trees of n nodes in @math ( @math ) bits that allow efficient tree-traversal are presented. I develop tools for abstract optimization based on a succinct representation for ordered sets that supports ranking and selection. These tools are put to use in a building an @math ( @math )-bit data structure that represents n-node planar graphs, allowing efficient traversal and adjacency-testing.", "We propose a fast methodology for encoding graphs with information-theoretically minimum numbers of bits. Specifically, a graph with property @math is called a @math -graph . If @math satisfies certain properties, then an n-node m-edge @math -graph G can be encoded by a binary string X such that (1) G and X can be obtained from each other in O(n log n) time, and (2) X has at most @math bits for any continuous superadditive function @math so that there are at most @math distinct @math -node @math -graphs. The methodology is applicable to general classes of graphs; this paper focuses on planar graphs. Examples of such @math include all conjunctions over the following groups of properties: (1) G is a planar graph or a plane graph; (2) @math is directed or undirected; (3) @math is triangulated, triconnected, biconnected, merely connected, or not required to be connected; (4) the nodes of G are labeled with labels from @math for @math ; (5) the edges of G are labeled with labels from @math for @math ; and (6) each node (respectively, edge) of G has at most @math self-loops (respectively, @math multiple edges). Moreover, @math and @math are not required to be O(1) for the cases of @math being a plane triangulation. 
These examples are novel applications of small cycle separators of planar graphs and are the only nontrivial classes of graphs, other than rooted trees, with known polynomial-time information-theoretically optimal coding schemes.", "Abstract It is shown that unlabeled planar graphs can be encoded using 12 n bits, and an asymptotically optimal representation is given for labeled planar graphs." ] }
1006.0809
1645064312
Analysing Web graphs has applications in determining page ranks, fighting Web spam, detecting communities and mirror sites, and more. This study is however hampered by the necessity of storing a major part of huge graphs in the external memory, which prevents efficient random access to edge (hyperlink) lists. A number of algorithms involving compression techniques have thus been presented, to represent Web graphs succinctly but also providing random access. Those techniques are usually based on differential encodings of the adjacency lists, finding repeating nodes or node regions in the successive lists, more general grammar-based transformations or 2-dimensional representations of the binary matrix of the graph. In this paper we present two Web graph compression algorithms. The first can be seen as engineering of the Boldi and Vigna (2004) method. We extend the notion of similarity between link lists, and use a more compact encoding of residuals. The algorithm works on blocks of varying size (in the number of input lines) and sacrifices access time for better compression ratio, achieving more succinct graph representation than other algorithms reported in the literature. The second algorithm works on blocks of the same size, in the number of input lines, and its key mechanism is merging the block into a single ordered list. This method achieves much more attractive space-time tradeoffs.
There are several works dedicated to Web graph compression. @cite_9 suggested ordering documents according to their URLs, to exploit the simple observation that most outgoing links actually point to another document within the same Web site. Their Connectivity Server provided linkage information for all pages indexed by the AltaVista search engine at that time. The links are represented simply by node numbers (integers) following the lexicographical order of the URLs. We note that the order of hyperlinks within a document is assumed irrelevant (as in most works on Web graph compression), hence each link list can be sorted in ascending order. As the successive numbers tend to be close, differential encoding can be applied efficiently.
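To make the differential-encoding step concrete, the following Python sketch (ours, with made-up node numbers; not code from @cite_9) shows how a sorted link list collapses into small gap values, which a variable-length integer code can then store in few bits.

```python
# Hypothetical illustration of gap (differential) encoding of an adjacency list.
def gap_encode(neighbors):
    """Sort the outgoing links and store the first id plus the successive gaps."""
    s = sorted(neighbors)
    return [s[0]] + [b - a for a, b in zip(s, s[1:])]

def gap_decode(gaps):
    """Recover the sorted adjacency list by accumulating the gaps."""
    out, current = [], 0
    for g in gaps:
        current += g
        out.append(current)
    return out

# Links of one page; the ids are close because documents are numbered in URL order.
links = [1002, 1000, 1005, 1003]
encoded = gap_encode(links)                  # [1000, 2, 1, 2] -- small, compressible values
assert gap_decode(encoded) == sorted(links)
```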
{ "cite_N": [ "@cite_9" ], "mid": [ "1976232673" ], "abstract": [ "Abstract We have built a server that provides linkage information for all pages indexed by the AltaVista search engine. In its basic operation, the server accepts a query consisting of a set L of one or more URLs and returns a list of all pages that point to pages in L (predecessors) and a list of all pages that are pointed to from pages in L (successors). More generally the server can produce the entire neighbourhood (in the graph theory sense) of L up to a given distance and can include information about all links that exist among pages in the neighbourhood. Although some of this information can be retrieved directly from Alta Vista or other search engines, these engines are not optimized for this purpose and the process of constructing the neighbourhood of a given set of pages is show and laborious. In contrast our prototype server needs less than 0.1 ms per result URL. So far we have built two applications that use the Connectivity Server: a direct interface that permits fast navigation of the Web via the predecessor successor relation, and a visualization tool for the neighbourhood of a given set of pages. We envisage numerous other applications such as ranking, visualization, and classification." ] }
1006.0809
1645064312
Analysing Web graphs has applications in determining page ranks, fighting Web spam, detecting communities and mirror sites, and more. This study is however hampered by the necessity of storing a major part of huge graphs in the external memory, which prevents efficient random access to edge (hyperlink) lists. A number of algorithms involving compression techniques have thus been presented, to represent Web graphs succinctly but also providing random access. Those techniques are usually based on differential encodings of the adjacency lists, finding repeating nodes or node regions in the successive lists, more general grammar-based transformations or 2-dimensional representations of the binary matrix of the graph. In this paper we present two Web graph compression algorithms. The first can be seen as engineering of the Boldi and Vigna (2004) method. We extend the notion of similarity between link lists, and use a more compact encoding of residuals. The algorithm works on blocks of varying size (in the number of input lines) and sacrifices access time for better compression ratio, achieving more succinct graph representation than other algorithms reported in the literature. The second algorithm works on blocks of the same size, in the number of input lines, and its key mechanism is merging the block into a single ordered list. This method achieves much more attractive space-time tradeoffs.
@cite_3 also use this technique (stating that for their data 80% of the links lead to pages within the same site). Moreover, pages within the same site tend to share large parts of their adjacency lists. To exploit this phenomenon, a given list may be encoded with a reference to another list from its neighborhood (located earlier), plus a set of additions to and deletions from the referenced list. Their most compact variant encodes an outgoing link in 5.55 bits on average, a result reported over a Web crawl consisting of 61 million URLs and 1 billion links.
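As a rough, hypothetical illustration of the reference idea (a simplification, not the exact Link Database format of @cite_3), the sketch below stores a list as a pointer to an earlier, similar list plus the few nodes to delete from and add to it; the example lists are invented.

```python
def encode_with_reference(current, reference):
    """Represent `current` as edits against an earlier, similar `reference` list."""
    cur, ref = set(current), set(reference)
    deletions = sorted(ref - cur)   # links present in the reference but not here
    additions = sorted(cur - ref)   # links of this page missing from the reference
    return deletions, additions

def decode_with_reference(reference, deletions, additions):
    return sorted((set(reference) - set(deletions)) | set(additions))

ref_list = [10, 12, 15, 20, 22]     # adjacency list of a nearby page
cur_list = [10, 12, 15, 21, 22]     # almost identical list of the current page
dels, adds = encode_with_reference(cur_list, ref_list)        # ([20], [21])
assert decode_with_reference(ref_list, dels, adds) == sorted(cur_list)
```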
{ "cite_N": [ "@cite_3" ], "mid": [ "2161088492" ], "abstract": [ "The Connectivity Server is a special-purpose database whose schema models the Web as a graph: a set of nodes (URL) connected by directed edges (hyperlinks). The Link Database provides fast access to the hyperlinks. To support easy implementation of a wide range of graph algorithms we have found it important to fit the Link Database into RAM. In the first version of the Link Database, we achieved this fit by using machines with lots of memory (8 GB), and storing each hyperlink in 32 bits. However, this approach was limited to roughly 100 million Web pages. This paper presents techniques to compress the links to accommodate larger graphs. Our techniques combine well-known compression methods with methods that depend on the properties of the Web graph. The first compression technique takes advantage of the fact that most hyperlinks on most Web pages point to other pages on the same host as the page itself. The second technique takes advantage of the fact that many pages on the same host share hyperlinks, that is, they tend to point to a common set of pages. Together, these techniques reduce space requirements to under 6 bits per link. While (de)compression adds latency to the hyperlink access time, we can still compute the strongly connected components of a 6 billion-edge graph in 22 minutes and run applications such as Kleinberg's HITS in real time. This paper describes our techniques for compressing the Link Database, and provides performance numbers for compression ratios and decompression speed." ] }
1006.0809
1645064312
Analysing Web graphs has applications in determining page ranks, fighting Web spam, detecting communities and mirror sites, and more. This study is however hampered by the necessity of storing a major part of huge graphs in the external memory, which prevents efficient random access to edge (hyperlink) lists. A number of algorithms involving compression techniques have thus been presented, to represent Web graphs succinctly but also providing random access. Those techniques are usually based on differential encodings of the adjacency lists, finding repeating nodes or node regions in the successive lists, more general grammar-based transformations or 2-dimensional representations of the binary matrix of the graph. In this paper we present two Web graph compression algorithms. The first can be seen as engineering of the Boldi and Vigna (2004) method. We extend the notion of similarity between link lists, and use a more compact encoding of residuals. The algorithm works on blocks of varying size (in the number of input lines) and sacrifices access time for better compression ratio, achieving more succinct graph representation than other algorithms reported in the literature. The second algorithm works on blocks of the same size, in the number of input lines, and its key mechanism is merging the block into a single ordered list. This method achieves much more attractive space-time tradeoffs.
One of the most efficient compression schemes for Web graphs was presented by Boldi and Vigna @cite_26 in 2003. Their method is likely to achieve around 3 bits per edge, or less, at a link access time below 1 ms on their 2.4 GHz Pentium 4 machine. Of course, the compression ratios vary from dataset to dataset. We describe the Boldi and Vigna algorithm in detail in the next section, as it is the main inspiration for our solution.
{ "cite_N": [ "@cite_26" ], "mid": [ "1994727615" ], "abstract": [ "Studying web graphs is often difficult due to their large size. Recently,several proposals have been published about various techniques that allow tostore a web graph in memory in a limited space, exploiting the inner redundancies of the web. The WebGraph framework is a suite of codes, algorithms and tools that aims at making it easy to manipulate large web graphs. This papers presents the compression techniques used in WebGraph, which are centred around referentiation and intervalisation (which in turn are dual to each other). WebGraph can compress the WebBase graph (118 Mnodes, 1 Glinks)in as little as 3.08 bits per link, and its transposed version in as littleas 2.89 bits per link." ] }
1006.0809
1645064312
Analysing Web graphs has applications in determining page ranks, fighting Web spam, detecting communities and mirror sites, and more. This study is however hampered by the necessity of storing a major part of huge graphs in the external memory, which prevents efficient random access to edge (hyperlink) lists. A number of algorithms involving compression techniques have thus been presented, to represent Web graphs succinctly but also providing random access. Those techniques are usually based on differential encodings of the adjacency lists, finding repeating nodes or node regions in the successive lists, more general grammar-based transformations or 2-dimensional representations of the binary matrix of the graph. In this paper we present two Web graph compression algorithms. The first can be seen as engineering of the Boldi and Vigna (2004) method. We extend the notion of similarity between link lists, and use a more compact encoding of residuals. The algorithm works on blocks of varying size (in the number of input lines) and sacrifices access time for better compression ratio, achieving more succinct graph representation than other algorithms reported in the literature. The second algorithm works on blocks of the same size, in the number of input lines, and its key mechanism is merging the block into a single ordered list. This method achieves much more attractive space-time tradeoffs.
Apostolico and Drovandi @cite_27 proposed an alternative Web graph ordering, reflecting a BFS traversal of the graph (starting from a random node) rather than the traditional URL-based order. They obtain quite impressive compressed graph structures, often 20--30% more compact than those from BV at comparable access speeds. Interestingly, the BFS ordering allows handling the link existential query (testing whether page @math has a link to page @math ) almost twice as fast as returning the whole neighbor list. Still, we note that a non-lexicographical ordering is harmful for compact storage of the webpage URLs themselves (a problem accompanying pure graph structure compression in most practical applications). Note also that reordering the graph is the approach followed in more recent works from the Boldi and Vigna team @cite_22 @cite_16 .
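A hypothetical sketch of the reordering idea (not the Apostolico--Drovandi code itself): renumbering nodes in BFS order tends to give neighboring pages close identifiers, which again keeps the gaps in the adjacency lists small.

```python
from collections import deque

def bfs_order(adj, start):
    """Map every node to a new id assigned in BFS order from `start`."""
    new_id, queue, seen = {}, deque([start]), {start}
    while queue:
        u = queue.popleft()
        new_id[u] = len(new_id)
        for v in adj.get(u, []):
            if v not in seen:
                seen.add(v)
                queue.append(v)
    for u in set(adj) | {v for vs in adj.values() for v in vs}:
        new_id.setdefault(u, len(new_id))   # nodes unreachable from `start`
    return new_id

def renumber(adj, new_id):
    return {new_id[u]: sorted(new_id[v] for v in vs) for u, vs in adj.items()}

adj = {0: [5, 7], 5: [7, 9], 7: [9], 9: [0]}          # toy graph with arbitrary ids
order = bfs_order(adj, start=0)
print(renumber(adj, order))   # {0: [1, 2], 1: [2, 3], 2: [3], 3: [0]} -- small gaps
```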
{ "cite_N": [ "@cite_27", "@cite_16", "@cite_22" ], "mid": [ "2018900730", "2949093975", "2000851569" ], "abstract": [ "The Web Graph is a large-scale graph that does not fit in main memory, so that lossless compression methods have been proposed for it. This paper introduces a compression scheme that combines efficient storage with fast retrieval for the information in a node. The scheme exploits the properties of the Web Graph without assuming an ordering of the URLs, so that it may be applied to more general graphs. Tests on some datasets of use achieve space savings of about 10 over existing methods.", "We continue the line of research on graph compression started with WebGraph, but we move our focus to the compression of social networks in a proper sense (e.g., LiveJournal): the approaches that have been used for a long time to compress web graphs rely on a specific ordering of the nodes (lexicographical URL ordering) whose extension to general social networks is not trivial. In this paper, we propose a solution that mixes clusterings and orders, and devise a new algorithm, called Layered Label Propagation, that builds on previous work on scalable clustering and can be used to reorder very large graphs (billions of nodes). Our implementation uses overdecomposition to perform aggressively on multi-core architecture, making it possible to reorder graphs of more than 600 millions nodes in a few hours. Experiments performed on a wide array of web graphs and social networks show that combining the order produced by the proposed algorithm with the WebGraph compression framework provides a major increase in compression with respect to all currently known techniques, both on web graphs and on social networks. These improvements make it possible to analyse in main memory significantly larger graphs.", "Abstract Since the first investigations on web-graph compression, it has been clear that the ordering of the nodes of a web graph has a fundamental influence on the compression rate (usually expressed as the number of bits per link). The authors of the LINK database [ 02], for instance, investigated three different approaches: an extrinsic ordering (URL ordering) and two intrinsic orderings based on the rows of the adjacency matrix (lexicographic and Gray code); they concluded that URL ordering has many advantages in spite of a small penalty in compression. In this paper we approach this issue in a more systematic way, testing some known orderings and proposing some new ones. Our experiments are made in the WebGraph framework [Boldi and Vigna 04], and show that the compression technique and the structure of the graph can produce significantly different results. In particular, we show that for a transposed web graph, URL ordering is significantly less effective, and that some new mixed orderi..." ] }
1005.5114
2950532014
Many social Web sites allow users to annotate the content with descriptive metadata, such as tags, and more recently to organize content hierarchically. These types of structured metadata provide valuable evidence for learning how a community organizes knowledge. For instance, we can aggregate many personal hierarchies into a common taxonomy, also known as a folksonomy, that will aid users in visualizing and browsing social content, and also to help them in organizing their own content. However, learning from social metadata presents several challenges, since it is sparse, shallow, ambiguous, noisy, and inconsistent. We describe an approach to folksonomy learning based on relational clustering, which exploits structured metadata contained in personal hierarchies. Our approach clusters similar hierarchies using their structure and tag statistics, then incrementally weaves them into a deeper, bushier tree. We study folksonomy learning using social metadata extracted from the photo-sharing site Flickr, and demonstrate that the proposed approach addresses the challenges. Moreover, comparing to previous work, the approach produces larger, more accurate folksonomies, and in addition, scales better.
Constructing ontological relations from text has long interested researchers, e.g., @cite_21 @cite_9 @cite_14 . Many of these methods exploit linguistic patterns to infer whether two keywords are related under a certain relationship. However, these approaches are not applicable to social metadata, which is usually far sparser and noisier than natural language text.
{ "cite_N": [ "@cite_14", "@cite_9", "@cite_21" ], "mid": [ "2155734303", "2144108169", "2068737686" ], "abstract": [ "This paper presents a novel metric-based framework for the task of automatic taxonomy induction. The framework incrementally clusters terms based on ontology metric, a score indicating semantic distance; and transforms the task into a multi-criteria optimization based on minimization of taxonomy structures and modeling of term abstractness. It combines the strengths of both lexico-syntactic patterns and clustering through incorporating heterogeneous features. The flexible design of the framework allows a further study on which features are the best for the task under various conditions. The experiments not only show that our system achieves higher F1-measure than other state-of-the-art systems, but also reveal the interaction between features and various types of relations, as well as the interaction between features and term abstractness.", "We propose a novel algorithm for inducing semantic taxonomies. Previous algorithms for taxonomy induction have typically focused on independent classifiers for discovering new single relationships based on hand-constructed or automatically discovered textual patterns. By contrast, our algorithm flexibly incorporates evidence from multiple classifiers over heterogenous relationships to optimize the entire structure of the taxonomy, using knowledge of a word's coordinate terms to help in determining its hypernyms, and vice versa. We apply our algorithm on the problem of sense-disambiguated noun hyponym acquisition, where we combine the predictions of hypernym and coordinate term classifiers with the knowledge in a preexisting semantic taxonomy (WordNet 2.1). We add 10,000 novel synsets to WordNet 2.1 at 84 precision, a relative error reduction of 70 over a non-joint algorithm using the same component classifiers. Finally, we show that a taxonomy built using our algorithm shows a 23 relative F-score improvement over WordNet 2.1 on an independent testset of hypernym pairs.", "We describe a method for the automatic acquisition of the hyponymy lexical relation from unrestricted text. Two goals motivate the approach: (i) avoidance of the need for pre-encoded knowledge and (ii) applicability across a wide range of text. We identify a set of lexico-syntactic patterns that are easily recognizable, that occur frequently and across text genre boundaries, and that indisputably indicate the lexical relation of interest. We describe a method for discovering these patterns and suggest that other lexical relations will also be acquirable in this way. A subset of the acquisition algorithm is implemented and the results are used to augment and critique the structure of a large hand-built thesaurus. Extensions and applications to areas such as information retrieval are suggested." ] }
1005.5114
2950532014
Many social Web sites allow users to annotate the content with descriptive metadata, such as tags, and more recently to organize content hierarchically. These types of structured metadata provide valuable evidence for learning how a community organizes knowledge. For instance, we can aggregate many personal hierarchies into a common taxonomy, also known as a folksonomy, that will aid users in visualizing and browsing social content, and also to help them in organizing their own content. However, learning from social metadata presents several challenges, since it is sparse, shallow, ambiguous, noisy, and inconsistent. We describe an approach to folksonomy learning based on relational clustering, which exploits structured metadata contained in personal hierarchies. Our approach clusters similar hierarchies using their structure and tag statistics, then incrementally weaves them into a deeper, bushier tree. We study folksonomy learning using social metadata extracted from the photo-sharing site Flickr, and demonstrate that the proposed approach addresses the challenges. Moreover, comparing to previous work, the approach produces larger, more accurate folksonomies, and in addition, scales better.
Several researchers have investigated various techniques to construct conceptual hierarchies from social metadata. Most of the previous work utilizes tag statistics as evidence. Mika @cite_6 uses a graph-based approach to construct a network of related tags, projected from either user-tag or object-tag association graphs, and then induces broader/narrower relations using betweenness centrality and set theory. Other works apply clustering techniques to tags and use their co-occurrence statistics to produce conceptual hierarchies @cite_4 . Heymann and Garcia-Molina @cite_0 use centrality in the tag similarity graph: the tag with the highest centrality is considered more abstract than one with a lower centrality, and thus should be merged into the hierarchy first, to guarantee that more abstract nodes are closer to the root. Schmitz @cite_20 applied a statistical subsumption model @cite_10 to induce hierarchical relations among tags. Since these works are based on tag statistics, they are likely to suffer from the ``popularity vs. generality'' problem, where a tag may be used more frequently not because it is more general, but because it is more popular among users.
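The centrality-first construction of @cite_0 is simple enough to sketch. The toy co-occurrence counts, the degree-style centrality and the threshold below are our own invented stand-ins, meant only to show the shape of the algorithm: more central tags enter the tree first, and each new tag attaches to the most similar tag already placed, or to the root if nothing is similar enough.

```python
cooc = {                       # hypothetical counts of tags annotating the same photo
    ("animal", "dog"): 30, ("animal", "cat"): 28, ("dog", "puppy"): 15,
    ("dog", "cat"): 5, ("animal", "puppy"): 8,
}
tags = {"animal", "dog", "cat", "puppy"}

def sim(a, b):
    return cooc.get((a, b), cooc.get((b, a), 0))

# Degree-style centrality: total similarity of a tag to all other tags.
centrality = {t: sum(sim(t, u) for u in tags if u != t) for t in tags}

THRESHOLD = 10
parent, placed = {}, []        # learned tree: tag -> parent tag ("root" at the top)
for tag in sorted(tags, key=centrality.get, reverse=True):
    if placed:
        best = max(placed, key=lambda p: sim(tag, p))
        parent[tag] = best if sim(tag, best) >= THRESHOLD else "root"
    else:
        parent[tag] = "root"
    placed.append(tag)

print(parent)   # {'animal': 'root', 'dog': 'animal', 'cat': 'animal', 'puppy': 'dog'}
```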
{ "cite_N": [ "@cite_4", "@cite_6", "@cite_0", "@cite_10", "@cite_20" ], "mid": [ "2120721759", "2167699886", "2161173831", "2034953016", "2154331289" ], "abstract": [ "Tags have recently become popular as a means of annotating and organizing Web pages and blog entries. Advocates of tagging argue that the use of tags produces a 'folksonomy', a system in which the meaning of a tag is determined by its use among the community as a whole. We analyze the effectiveness of tags for classifying blog entries by gathering the top 350 tags from Technorati and measuring the similarity of all articles that share a tag. We find that tags are useful for grouping articles into broad categories, but less effective in indicating the particular content of an article. We then show that automatically extracting words deemed to be highly relevant can produce a more focused categorization of articles. We also show that clustering algorithms can be used to reconstruct a topical hierarchy among tags, and suggest that these approaches may be used to address some of the weaknesses in current tagging systems.", "In our work the traditional bipartite model of ontologies is extended with the social dimension, leading to a tripartite model of actors, concepts and instances. We demonstrate the application of this representation by showing how community-based semantics emerges from this model through a process of graph transformation. We illustrate ontology emergence by two case studies, an analysis of a large scale folksonomy system and a novel method for the extraction of community-based ontologies from Web pages.", "Collaborative tagging systems---systems where many casual users annotate objects with free-form strings (tags) of their choosing---have recently emerged as a powerful way to label and organize large collections of data. During our recent investigation into these types of systems, we discovered a simple but remarkably effective algorithm for converting a large corpus of tags annotating objects in a tagging system into a navigable hierarchical taxonomy of tags. We first discuss the algorithm and then present a preliminary model to explain why it is so effective in these types of systems.", "Abstract : This paper presents a means of automatically deriving a hierarchical organization of concepts from a set of documents without use of training data or standard clustering techniques. Instead, salient words and phrases extracted from the documents are organized hierarchically using a type of co-occurrence known as subsumption. The resulting structure is displayed as a series of hierarchical menus. When generated from a set of retrieved documents, a user browsing the menus is provided with a detailed overview of their content in a manner distinct from existing overview and summarization techniques. The methods used to build the structure are simple, but appear to be effective: a smallscale user study reveals that the generated hierarchy possesses properties expected of such a structure in that general terms are placed at the top levels leading to related and more specific terms below. The formation and presentation of the hierarchy is described along with the user study and some other informal evaluations. The organization of a set of documents into a concept hierarchy derived automatically from the set itself is undoubtedly one goal of information retrieval. 
Were this goal to be achieved, the documents would be organized into a form somewhat like existing manually constructed subject hierarchies, such as the Library of Congress categories, or the Dewey Decimal system. The only difference being that the categories would be customized to the set of documents itself. For example, from a collection of media related articles, the category \"Entertainment\" might appear near the top level; below it, (amongst others) one might find the category \"Movies\", a type of entertainment; and below that, there could be the category \"Actors & Actresses\", an aspect of movies. As can be seen, the arrangement of the categories provides an overview of the topic structure of those articles.", "paper, we describe some promising initial results in inducing ontology from the Flickr tag vocabulary, using a subsumption-based model. We describe the utility of faceted ontology as a supplement to a tagging system and present our model and results. We propose a revised, probabilistic model using seed ontologies to induce faceted ontology, and describe how the model can integrate into the logistics of tagging communities." ] }
1005.5114
2950532014
Many social Web sites allow users to annotate the content with descriptive metadata, such as tags, and more recently to organize content hierarchically. These types of structured metadata provide valuable evidence for learning how a community organizes knowledge. For instance, we can aggregate many personal hierarchies into a common taxonomy, also known as a folksonomy, that will aid users in visualizing and browsing social content, and also to help them in organizing their own content. However, learning from social metadata presents several challenges, since it is sparse, shallow, ambiguous, noisy, and inconsistent. We describe an approach to folksonomy learning based on relational clustering, which exploits structured metadata contained in personal hierarchies. Our approach clusters similar hierarchies using their structure and tag statistics, then incrementally weaves them into a deeper, bushier tree. We study folksonomy learning using social metadata extracted from the photo-sharing site Flickr, and demonstrate that the proposed approach addresses the challenges. Moreover, comparing to previous work, the approach produces larger, more accurate folksonomies, and in addition, scales better.
Our present work, sap, differs from our earlier approach, sig @cite_11 , in several aspects. First, sap exploits more evidence, i.e., the structure and tag statistics of personal hierarchies, rather than the co-occurrence statistics of individual relations as in sig. Second, sap is based on a relational clustering approach that incrementally attaches relevant saplings to the learned folksonomies, whereas sig exhaustively determines the best path out of all possible paths from the root node to a leaf, which is computationally expensive when the learned folksonomies are deep. Last, sap demonstrates many advantages, as presented in sec:results .
{ "cite_N": [ "@cite_11" ], "mid": [ "2145500920" ], "abstract": [ "Automatic folksonomy construction from tags has attracted much attention recently. However, inferring hierarchical relations between concepts from tags has a drawback in that it is difficult to distinguish between more popular and more general concepts. Instead of tags we propose to use user-specified relations for learning folksonomy. We explore two statistical frameworks for aggregating many shallow individual hierarchies, expressed through the collection set relations on the social photosharing site Flickr, into a common deeper folksonomy that reflects how a community organizes knowledge. Our approach addresses a number of challenges that arise while aggregating information from diverse users, namely noisy vocabulary, and variations in the granularity level of the concepts expressed. Our second contribution is a method for automatically evaluating learned folksonomy by comparing it to a reference taxonomy, e.g., the Web directory created by the Open Directory Project. Our empirical results suggest that user-specified relations are a good source of evidence for learning folksonomies." ] }
1005.5114
2950532014
Many social Web sites allow users to annotate the content with descriptive metadata, such as tags, and more recently to organize content hierarchically. These types of structured metadata provide valuable evidence for learning how a community organizes knowledge. For instance, we can aggregate many personal hierarchies into a common taxonomy, also known as a folksonomy, that will aid users in visualizing and browsing social content, and also to help them in organizing their own content. However, learning from social metadata presents several challenges, since it is sparse, shallow, ambiguous, noisy, and inconsistent. We describe an approach to folksonomy learning based on relational clustering, which exploits structured metadata contained in personal hierarchies. Our approach clusters similar hierarchies using their structure and tag statistics, then incrementally weaves them into a deeper, bushier tree. We study folksonomy learning using social metadata extracted from the photo-sharing site Flickr, and demonstrate that the proposed approach addresses the challenges. Moreover, comparing to previous work, the approach produces larger, more accurate folksonomies, and in addition, scales better.
Handling mutual shortcuts by keeping the sapling that is more similar to the ancestor is similar in spirit to the minimum evolution assumption in @cite_14 . Specifically, a hierarchy should not have any sudden changes from a parent to its child concepts. Our approach is also similar to several works on ontology alignment (e.g., @cite_7 @cite_13 ). However, unlike those works, which merge a small number of deep, detailed and consistent concepts, we merge a large number of noisy and shallow concepts specified by different users.
{ "cite_N": [ "@cite_14", "@cite_13", "@cite_7" ], "mid": [ "2155734303", "2125149214", "" ], "abstract": [ "This paper presents a novel metric-based framework for the task of automatic taxonomy induction. The framework incrementally clusters terms based on ontology metric, a score indicating semantic distance; and transforms the task into a multi-criteria optimization based on minimization of taxonomy structures and modeling of term abstractness. It combines the strengths of both lexico-syntactic patterns and clustering through incorporating heterogeneous features. The flexible design of the framework allows a further study on which features are the best for the task under various conditions. The experiments not only show that our system achieves higher F1-measure than other state-of-the-art systems, but also reveal the interaction between features and various types of relations, as well as the interaction between features and term abstractness.", "There is a great deal of research on ontology integration which makes use of rich logical constraints to reason about the structural and logical alignment of ontologies. There is also considerable work on matching data instances from heterogeneous schema or ontologies. However, little work exploits the fact that ontologies include both data and structure. We aim to close this gap by presenting a new algorithm (ILIADS) that tightly integrates both data matching and logical reasoning to achieve better matching of ontologies. We evaluate our algorithm on a set of 30 pairs of OWL Lite ontologies with the schema and data matchings found by human reviewers. We compare against two systems - the ontology matching tool FCA-merge [28] and the schema matching tool COMA++ [1]. ILIADS shows an average improvement of 25 in quality over FCA-merge and a 11 improvement in recall over COMA++.", "" ] }
1005.5283
2950566770
We consider a general polling model with @math stations. The stations are served exhaustively and in cyclic order. Once a station queue falls empty, the server does not immediately switch to the next station. Rather, it waits at the station for the possible arrival of new work ("wait-and-see") and, in the case of this happening, it restarts service in an exhaustive fashion. The total time the server waits idly is set to be a fixed, deterministic parameter for each station. Switchover times and service times are allowed to follow some general distribution, respectively. In some cases, which can be characterised, this strategy yields strictly lower average queueing delay than for the exhaustive strategy, which corresponds to setting the "wait-and-see credit" equal to zero for all stations. This extends results of Pek "oz (Probability in the Engineering and Informational Sciences 13 (1999)) and of (Annals of Operations Research 112 (2002)). Furthermore, we give a lower bound for the delay for all strategies that allow the server to wait at the stations even though no work is present.
References on polling models in which the server may wait idly at a station are apparently rare. The main references for us are Peköz @cite_4 and @cite_13 .
{ "cite_N": [ "@cite_13", "@cite_4" ], "mid": [ "2099066336", "2414390159" ], "abstract": [ "We consider two-queue polling models with the special feature that a timer mechanism is employed at Q1: whenever the server polls Q1 and finds it empty, it activates a timer and remains dormant, waiting for the first arrival. If such an arrival occurs before the timer expires, a busy period starts in accordance with Q1's service discipline. However, if the timer is shorter than the interarrival time to Q1, the server does not wait any more and switches back to Q2. We consider three configurations: (i) Q1 is controlled by the 1-limited protocol while Q2 is served exhaustively, (ii) Q1 employs the exhaustive regime while Q2 follows the 1-limited procedure, and (iii) both queues are served exhaustively. In all cases, we assume Poisson arrivals and allow general service and switchover time distributions. Our main results include the queue length distributions at polling instants, the waiting time distributions and the distribution of the total workload in the system.", "A recent interesting paper Cooper, Niu, and Srinivasan @2 #! shows how for some cyclic production systems reducing setup times can surprisingly increase work in process+ There the authors show how in these situations the introduction of forced idle time can be used to optimize system performance+ Here we show that introducing forced idle time at different points during the production cycle can further improve performance in these situations and also in situations where the suggestions from @2# yield no improvement+" ] }
1005.5283
2950566770
We consider a general polling model with @math stations. The stations are served exhaustively and in cyclic order. Once a station queue falls empty, the server does not immediately switch to the next station. Rather, it waits at the station for the possible arrival of new work ("wait-and-see") and, in the case of this happening, it restarts service in an exhaustive fashion. The total time the server waits idly is set to be a fixed, deterministic parameter for each station. Switchover times and service times are allowed to follow some general distribution, respectively. In some cases, which can be characterised, this strategy yields strictly lower average queueing delay than for the exhaustive strategy, which corresponds to setting the "wait-and-see credit" equal to zero for all stations. This extends results of Pek "oz (Probability in the Engineering and Informational Sciences 13 (1999)) and of (Annals of Operations Research 112 (2002)). Furthermore, we give a lower bound for the delay for all strategies that allow the server to wait at the stations even though no work is present.
The second main reference is @cite_13 , where a polling model with @math stations is analysed. In that work, the following situation is investigated: if the server encounters an empty queue at station 1, a ``wait-and-see'' timer is activated in order to wait for the possible arrival of new messages. However, contrary to the present setup, once the server has finished some work or the timer has run out, it immediately switches to the next station. We compare the resulting delay obtained from this strategy to ours in Figure 2. We have found cases where our strategy leads to lower delay than the strategy of @cite_13 , and also cases where it performs worse. The latter is usually the case if the intensities @math are close to each other, whereas for a highly asymmetric system our strategy seems to be better. Also, for large switchover times our strategy seems to perform better than that of @cite_13 , since in this case the timer from @cite_13 is rarely activated. Unfortunately, it does not seem possible to compare the strategies directly, due to the non-explicit nature of the delay formulas in @cite_13 .
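To illustrate the mechanics of our policy (not to reproduce any analytical result), here is a minimal discrete-event sketch of the wait-and-see strategy for two stations; all parameter values are arbitrary and the code is ours, not taken from @cite_13 . Setting the credits to zero recovers the plain exhaustive policy, which makes empirical comparisons of mean delay straightforward.

```python
import random

random.seed(1)
LAMBDA = [0.3, 0.05]      # Poisson arrival rates (a highly asymmetric example)
SERVICE = [1.0, 1.0]      # mean exponential service times
SWITCH = 2.0              # deterministic switchover time between the stations
CREDIT = [3.0, 0.0]       # wait-and-see credit per station (0 = purely exhaustive)
HORIZON = 200_000.0

def poisson_arrivals(rate, horizon):
    t, out = 0.0, []
    while True:
        t += random.expovariate(rate)
        if t > horizon:
            return out
        out.append(t)

arrivals = [poisson_arrivals(r, HORIZON) for r in LAMBDA]
next_idx = [0, 0]         # index of the next unserved arrival at each station
delays = []

def serve_exhaustively(t, i):
    """Serve station i until no job that has already arrived remains."""
    while next_idx[i] < len(arrivals[i]) and arrivals[i][next_idx[i]] <= t:
        delays.append(t - arrivals[i][next_idx[i]])      # queueing delay of this job
        next_idx[i] += 1
        t += random.expovariate(1.0 / SERVICE[i])
    return t

t, station = 0.0, 0
while t < HORIZON:
    t = serve_exhaustively(t, station)
    remaining = CREDIT[station]           # wait-and-see: idle until the credit is spent,
    while remaining > 0 and next_idx[station] < len(arrivals[station]):
        nxt = arrivals[station][next_idx[station]]
        if nxt > t + remaining:           # no arrival before the credit runs out
            t, remaining = t + remaining, 0.0
        else:                             # new work arrives: pause the credit clock,
            remaining -= nxt - t          # serve exhaustively, then keep waiting
            t = serve_exhaustively(nxt, station)
    t += SWITCH
    station = 1 - station

print(f"mean queueing delay: {sum(delays) / len(delays):.2f}")
```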
{ "cite_N": [ "@cite_13" ], "mid": [ "2099066336" ], "abstract": [ "We consider two-queue polling models with the special feature that a timer mechanism is employed at Q1: whenever the server polls Q1 and finds it empty, it activates a timer and remains dormant, waiting for the first arrival. If such an arrival occurs before the timer expires, a busy period starts in accordance with Q1's service discipline. However, if the timer is shorter than the interarrival time to Q1, the server does not wait any more and switches back to Q2. We consider three configurations: (i) Q1 is controlled by the 1-limited protocol while Q2 is served exhaustively, (ii) Q1 employs the exhaustive regime while Q2 follows the 1-limited procedure, and (iii) both queues are served exhaustively. In all cases, we assume Poisson arrivals and allow general service and switchover time distributions. Our main results include the queue length distributions at polling instants, the waiting time distributions and the distribution of the total workload in the system." ] }
1005.5283
2950566770
We consider a general polling model with @math stations. The stations are served exhaustively and in cyclic order. Once a station queue falls empty, the server does not immediately switch to the next station. Rather, it waits at the station for the possible arrival of new work ("wait-and-see") and, in the case of this happening, it restarts service in an exhaustive fashion. The total time the server waits idly is set to be a fixed, deterministic parameter for each station. Switchover times and service times are allowed to follow some general distribution, respectively. In some cases, which can be characterised, this strategy yields strictly lower average queueing delay than for the exhaustive strategy, which corresponds to setting the "wait-and-see credit" equal to zero for all stations. This extends results of Pek "oz (Probability in the Engineering and Informational Sciences 13 (1999)) and of (Annals of Operations Research 112 (2002)). Furthermore, we give a lower bound for the delay for all strategies that allow the server to wait at the stations even though no work is present.
The strategy employed in @cite_13 and in the present paper is somewhat related to the so-called forced idle time. We refer, e.g., to @cite_7 @cite_5 for some work on this. However, in the present setup the server is not forced to be idle; whenever it is set to ``wait-and-see'', it resumes service as soon as new messages arrive. This is the reason we prefer the term ``wait-and-see'' rather than ``forced idle time''.
{ "cite_N": [ "@cite_5", "@cite_13", "@cite_7" ], "mid": [ "2123952734", "2099066336", "1976263021" ], "abstract": [ "We compare two versions of a symmetric two-queue polling model with switchover times and setup times. The SI version has State-Independent setups, according to which the server sets up at the polled queue whether or not work is waiting there; and the SD version has State-Dependent setups, according to which the server sets up only when work is waiting at the polled queue. Naive intuition would lead one to believe that the SD version should perform better than the SI version. We characterize the difference in the expected waiting times of these two versions, and we uncover some surprising facts. In particular, we show that, regardless of the server utilization or the service-time distribution, the SD version performs (i) the same as, (ii) worse than, or (iii) better than its SI counterpart if the switchover and setup times are, respectively, (i) both constants, (ii) variable (i.e. non-deterministic) and constant, or (iii) constant and variable. Only (iii) is consistent with naive intuition.", "We consider two-queue polling models with the special feature that a timer mechanism is employed at Q1: whenever the server polls Q1 and finds it empty, it activates a timer and remains dormant, waiting for the first arrival. If such an arrival occurs before the timer expires, a busy period starts in accordance with Q1's service discipline. However, if the timer is shorter than the interarrival time to Q1, the server does not wait any more and switches back to Q2. We consider three configurations: (i) Q1 is controlled by the 1-limited protocol while Q2 is served exhaustively, (ii) Q1 employs the exhaustive regime while Q2 follows the 1-limited procedure, and (iii) both queues are served exhaustively. In all cases, we assume Poisson arrivals and allow general service and switchover time distributions. Our main results include the queue length distributions at polling instants, the waiting time distributions and the distribution of the total workload in the system.", "Sarkar and Zangwill (1991) showed by numerical examples that reduction in setup times can, surprisingly, actually increase work in process in some cyclic production systems (that is, reduction in switchover times can increase waiting times in some polling models). We present, for polling models with exhaustive and gated service disciplines, some explicit formulas that provide additional insight and characterization of this anomaly. More specifically, we show that, for both of these models, there exist simple formulas that define for each queue a critical value z* of the mean total setup time z per cycle such that, if z < z*, then the expected waiting time at that queue will be minimized if the server is forced to idle for a constant length of time z*- z every cycle; also, for the symmetric polling model, we give a simple explicit formula for the expected waiting time and the critical value z* that minimizes it." ] }
1005.5367
2950441226
In a virtualized infrastructure where physical resources are shared, a single physical server failure will terminate several virtual servers and crippling the virtual infrastructures which contained those virtual servers. In the worst case, more failures may cascade from overloading the remaining servers. To guarantee some level of reliability, each virtual infrastructure, at instantiation, should be augmented with backup virtual nodes and links that have sufficient capacities. This ensures that, when physical failures occur, sufficient computing resources are available and the virtual network topology is preserved. However, in doing so, the utilization of the physical infrastructure may be greatly reduced. This can be circumvented if backup resources are pooled and shared across multiple virtual infrastructures, and intelligently embedded in the physical infrastructure. These techniques can reduce the physical footprint of virtual backups while guaranteeing reliability.
The reliability of overlay networks, in terms of connectivity in the overlays, has been analysed in @cite_9 . That work achieves a good estimation of the connectivity of embedded overlay networks through a Monte Carlo simulation-based algorithm. Unfortunately, it is not applicable to our problem, as we are concerned with identifying critical virtual nodes and embedding them, as well as the whole infrastructure, with reliability guarantees.
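For intuition, a Monte Carlo connectivity estimate in the spirit of @cite_9 can be sketched as follows; the toy topology, failure probability and trial count are our own assumptions, not taken from that work.

```python
import random

def connected(nodes, edges):
    """Union-find check that the surviving edges connect all nodes."""
    parent = {v: v for v in nodes}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]      # path halving
            v = parent[v]
        return v
    for a, b in edges:
        parent[find(a)] = find(b)
    return len({find(v) for v in nodes}) == 1

def estimate_reliability(nodes, edges, p_fail, trials=20_000):
    ok = 0
    for _ in range(trials):
        surviving = [e for e in edges if random.random() > p_fail]
        ok += connected(nodes, surviving)
    return ok / trials

nodes = ["a", "b", "c", "d"]                                   # toy overlay
edges = [("a", "b"), ("b", "c"), ("c", "d"), ("d", "a"), ("a", "c")]
print(estimate_reliability(nodes, edges, p_fail=0.1))          # close to 1 here
```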
{ "cite_N": [ "@cite_9" ], "mid": [ "2027480718" ], "abstract": [ "We consider network reliability in layered networks where the lower layer experiences random link failures. In layered networks, each failure at the lower layer may lead to multiple failures at the upper layer. We generalize the classical polynomial expression for network reliability to the multilayer setting. Using random sampling techniques, we develop polynomial-time approximation algorithms for the failure polynomial. Our approach gives an approximate expression for reliability as a function of the link failure probability, eliminating the need to resample for different values of the failure probability. Furthermore, it gives insight on how the routings of the logical topology on the physical topology impact network reliability. We show that maximizing the min cut of the (layered) network maximizes reliability in the low-failure-probability regime. Based on this observation, we develop algorithms for routing the logical topology to maximize reliability." ] }
1005.5367
2950441226
In a virtualized infrastructure where physical resources are shared, a single physical server failure will terminate several virtual servers and crippling the virtual infrastructures which contained those virtual servers. In the worst case, more failures may cascade from overloading the remaining servers. To guarantee some level of reliability, each virtual infrastructure, at instantiation, should be augmented with backup virtual nodes and links that have sufficient capacities. This ensures that, when physical failures occur, sufficient computing resources are available and the virtual network topology is preserved. However, in doing so, the utilization of the physical infrastructure may be greatly reduced. This can be circumvented if backup resources are pooled and shared across multiple virtual infrastructures, and intelligently embedded in the physical infrastructure. These techniques can reduce the physical footprint of virtual backups while guaranteeing reliability.
Fault tolerance is provided in data centers @cite_12 @cite_3 through special network designs that include a large, organized excess of nodes and links as redundancy. These works provide reliability to the data center as a whole, but do not customize reliability guarantees to the embedded virtual infrastructures.
{ "cite_N": [ "@cite_3", "@cite_12" ], "mid": [ "2126210439", "2123016589" ], "abstract": [ "This paper presents BCube, a new network architecture specifically designed for shipping-container based, modular data centers. At the core of the BCube architecture is its server-centric network structure, where servers with multiple network ports connect to multiple layers of COTS (commodity off-the-shelf) mini-switches. Servers act as not only end hosts, but also relay nodes for each other. BCube supports various bandwidth-intensive applications by speeding-up one-to-one, one-to-several, and one-to-all traffic patterns, and by providing high network capacity for all-to-all traffic. BCube exhibits graceful performance degradation as the server and or switch failure rate increases. This property is of special importance for shipping-container data centers, since once the container is sealed and operational, it becomes very difficult to repair or replace its components. Our implementation experiences show that BCube can be seamlessly integrated with the TCP IP protocol stack and BCube packet forwarding can be efficiently implemented in both hardware and software. Experiments in our testbed demonstrate that BCube is fault tolerant and load balancing and it significantly accelerates representative bandwidth-intensive applications.", "This paper considers the requirements for a scalable, easily manageable, fault-tolerant, and efficient data center network fabric. Trends in multi-core processors, end-host virtualization, and commodities of scale are pointing to future single-site data centers with millions of virtual end points. Existing layer 2 and layer 3 network protocols face some combination of limitations in such a setting: lack of scalability, difficult management, inflexible communication, or limited support for virtual machine migration. To some extent, these limitations may be inherent for Ethernet IP style protocols when trying to support arbitrary topologies. We observe that data center networks are often managed as a single logical network fabric with a known baseline topology and growth model. We leverage this observation in the design and implementation of PortLand, a scalable, fault tolerant layer 2 routing and forwarding protocol for data center environments. Through our implementation and evaluation, we show that PortLand holds promise for supporting a plug-and-play\" large-scale, data center network." ] }
1005.5367
2950441226
In a virtualized infrastructure where physical resources are shared, a single physical server failure will terminate several virtual servers and crippling the virtual infrastructures which contained those virtual servers. In the worst case, more failures may cascade from overloading the remaining servers. To guarantee some level of reliability, each virtual infrastructure, at instantiation, should be augmented with backup virtual nodes and links that have sufficient capacities. This ensures that, when physical failures occur, sufficient computing resources are available and the virtual network topology is preserved. However, in doing so, the utilization of the physical infrastructure may be greatly reduced. This can be circumvented if backup resources are pooled and shared across multiple virtual infrastructures, and intelligently embedded in the physical infrastructure. These techniques can reduce the physical footprint of virtual backups while guaranteeing reliability.
Meanwhile, some works target node fault tolerance at the server virtualization level. Bressoud @cite_7 was among the first to introduce fault tolerance at the hypervisor level. Two virtual slices residing on the same physical node can be made to operate in sync through the hypervisor. However, this provides reliability against software failures at most, since the slices reside on the same node.
{ "cite_N": [ "@cite_7" ], "mid": [ "2114488210" ], "abstract": [ "Protocols to implement a fault-tolerant computing system are described. These protocols augment the hypervisor of a virtual-machine manager and coordinate a primary virtual machine with its backup. No modifications to the hardware, operating system, or application programs are required. A prototype system was constructed for HP's PA-RISC instruction-set architecture. Even though the prototype was not carefully tuned, it ran programs about a factor of 2 slower than a bare machine would." ] }
1005.5367
2950441226
In a virtualized infrastructure where physical resources are shared, a single physical server failure will terminate several virtual servers and crippling the virtual infrastructures which contained those virtual servers. In the worst case, more failures may cascade from overloading the remaining servers. To guarantee some level of reliability, each virtual infrastructure, at instantiation, should be augmented with backup virtual nodes and links that have sufficient capacities. This ensures that, when physical failures occur, sufficient computing resources are available and the virtual network topology is preserved. However, in doing so, the utilization of the physical infrastructure may be greatly reduced. This can be circumvented if backup resources are pooled and shared across multiple virtual infrastructures, and intelligently embedded in the physical infrastructure. These techniques can reduce the physical footprint of virtual backups while guaranteeing reliability.
Others @cite_14 @cite_35 have made progress in duplicating virtual slices and migrating them over a network. Various duplication techniques and migration protocols were proposed for different types of applications (web servers, game servers, and benchmarking applications) @cite_35 . Remus @cite_14 and Kemari @cite_34 are two other systems that allow state synchronization between two virtual nodes for full, dedicated redundancy. However, these works focus on practical issues and do not address the resource allocation problem (in both compute capacity and bandwidth) that arises when redundant nodes reside elsewhere in the network.
{ "cite_N": [ "@cite_35", "@cite_14", "@cite_34" ], "mid": [ "2754017209", "1572904055", "" ], "abstract": [ "A method for live migration of a virtual machine includes receiving a data packet that is sent to a migrated virtual machine on the source physical machine in a stage when the migrated virtual machine is suspended, and caching the received data packet; and sending the cached data packet to the migrated virtual machine on the destination physical machine after it is sensed that the migrated virtual machine is restored at the destination, to speed up restoration of a TCP connection inside the virtual machine. The apparatus of the present disclosure includes a caching unit and a data restoration unit. The method and apparatus of the present disclosure improve a restoration speed of the TCP connection, make live migration of a virtual machine more imperceptible for users, and improve user experience.", "Allowing applications to survive hardware failure is an expensive undertaking, which generally involves reengineering software to include complicated recovery logic as well as deploying special-purpose hardware; this represents a severe barrier to improving the dependability of large or legacy applications. We describe the construction of a general and transparent high availability service that allows existing, unmodified software to be protected from the failure of the physical machine on which it runs. Remus provides an extremely high degree of fault tolerance, to the point that a running system can transparently continue execution on an alternate physical host in the face of failure with only seconds of downtime, while completely preserving host state such as active network connections. Our approach encapsulates protected software in a virtual machine, asynchronously propagates changed state to a backup host at frequencies as high as forty times a second, and uses speculative execution to concurrently run the active VM slightly ahead of the replicated system state.", "" ] }
1005.5367
2950441226
In a virtualized infrastructure where physical resources are shared, a single physical server failure will terminate several virtual servers and crippling the virtual infrastructures which contained those virtual servers. In the worst case, more failures may cascade from overloading the remaining servers. To guarantee some level of reliability, each virtual infrastructure, at instantiation, should be augmented with backup virtual nodes and links that have sufficient capacities. This ensures that, when physical failures occur, sufficient computing resources are available and the virtual network topology is preserved. However, in doing so, the utilization of the physical infrastructure may be greatly reduced. This can be circumvented if backup resources are pooled and shared across multiple virtual infrastructures, and intelligently embedded in the physical infrastructure. These techniques can reduce the physical footprint of virtual backups while guaranteeing reliability.
VNsnap @cite_4 is another method developed to take static snapshots of an entire virtual infrastructure to reliable storage in order to recover from failures. The snapshots can be stored reliably and in a distributed manner as replicas @cite_30 , or as erasure codes @cite_25 @cite_28 ; a toy sketch of erasure-coded storage follows this entry. There is no synchronization, and whether the physical infrastructure has sufficient resources to recover automatically using the saved snapshots is another question altogether.
{ "cite_N": [ "@cite_30", "@cite_28", "@cite_4", "@cite_25" ], "mid": [ "2624304035", "2145023598", "2116845782", "1966612101" ], "abstract": [ "Bigtable is a distributed storage system for managing structured data that is designed to scale to a very large size: petabytes of data across thousands of commodity servers. Many projects at Google store data in Bigtable, including web indexing, Google Earth, and Google Finance. These applications place very different demands on Bigtable, both in terms of data size (from URLs to web pages to satellite imagery) and latency requirements (from backend bulk processing to real-time data serving). Despite these varied demands, Bigtable has successfully provided a flexible, high-performance solution for all of these Google products. In this paper we describe the simple data model provided by Bigtable, which gives clients dynamic control over data layout and format, and we describe the design and implementation of Bigtable.", "A cluster of PCs can be seen as a collection of networked low cost disks; such a collection can be operated by proper software so as to provide the abstraction of a single, larger block device. By adding suitable data redundancy, such a disk collection as a whole could act as single, highly fault tolerant, distributed RAID device, providing capacity and reliability along with the convenient price performance typical of commodity clusters. We report about the design and performance of DRAID, a distributed RAID prototype running on a Gigabit Ethernet cluster of PCs. DRAID offers storage services under a single I O space (SIOS) block device abstraction. The SIOS feature implies that the storage space is accessible by each of the stations in the cluster, rather than throughout one or few end-points, with a potentially higher aggregate I O bandwidth and better suitability to parallel I O.", "A virtual networked environment (VNE) consists of virtual machines (VMs) connected by a virtual network. It has been adopted to create “virtual infrastructures” for individual users on a shared cloud computing infrastructure. The ability to take snapshots of an entire VNE — including images of the VMs with their execution, communication and storage states — yields a unique approach to reliability as a snapshot can restore the operation of an entire virtual infrastructure. We present VNsnap, a system that takes distributed snapshots of VNEs. Unlike existing distributed snapshot checkpointing solutions, VNsnap does not require any modifications to the applications, libraries, or (guest) operating systems running in the VMs. Furthermore, VNsnap incurs only seconds of downtime as much of the snapshot operation takes place concurrently with the VNE's normal operation. We have implemented VNsnap on top of Xen. Our experiments with real-world parallel and distributed applications demonstrate VNsnap's effectiveness and efficiency.", "In this correspondence, we consider the problem of constructing an erasure code for storage over a network when the data sources are distributed. Specifically, we assume that there are n storage nodes with limited memory and k < n sources generating the data. We want a data collector, who can appear anywhere in the network, to query any k storage nodes and be able to retrieve the data. We introduce decentralized erasure codes, which are linear codes with a specific randomized structure inspired by network coding on random bipartite graphs. 
We show that decentralized erasure codes are optimally sparse, and lead to reduced communication, storage and computation cost over random linear coding." ] }
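For concreteness, the sketch below is a toy illustration (a hypothetical example, not the scheme of any of the cited papers) of why erasure-coded snapshot storage can be cheaper than full replication: k data blocks plus a single XOR parity block survive the loss of any one block at a storage overhead of 1/k, versus the 2x overhead of a full replica. Real systems would use more general codes, such as the decentralized erasure codes of @cite_25 .

```python
# Toy erasure coding for snapshot blocks (hypothetical example): k data
# blocks plus one XOR parity block tolerate the loss of any single block.

def xor_blocks(blocks):
    """Bytewise XOR of equal-length byte strings."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

def encode(data_blocks):
    """Return the data blocks plus a single parity block."""
    return list(data_blocks) + [xor_blocks(data_blocks)]

def recover(stored, lost_index):
    """Rebuild the block at lost_index from the surviving ones."""
    surviving = [b for i, b in enumerate(stored) if i != lost_index]
    return xor_blocks(surviving)

if __name__ == "__main__":
    snapshot = [b"vm-state-0000000", b"vm-state-1111111", b"vm-state-2222222"]
    stored = encode(snapshot)        # 4 blocks placed on 4 different servers
    rebuilt = recover(stored, 1)     # the server holding block 1 fails
    assert rebuilt == snapshot[1]
    print("recovered:", rebuilt)
```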
1005.5367
2950441226
In a virtualized infrastructure where physical resources are shared, a single physical server failure will terminate several virtual servers and cripple the virtual infrastructures which contained those virtual servers. In the worst case, more failures may cascade from overloading the remaining servers. To guarantee some level of reliability, each virtual infrastructure, at instantiation, should be augmented with backup virtual nodes and links that have sufficient capacities. This ensures that, when physical failures occur, sufficient computing resources are available and the virtual network topology is preserved. However, in doing so, the utilization of the physical infrastructure may be greatly reduced. This can be circumvented if backup resources are pooled and shared across multiple virtual infrastructures, and intelligently embedded in the physical infrastructure. These techniques can reduce the physical footprint of virtual backups while guaranteeing reliability.
At a fundamental level, there are methods to construct topologies for redundant nodes that address both node and link reliability @cite_32 @cite_31 . Based on some input graph, additional links (or, bandwidth reservations) are introduced optimally so that the fewest are needed. However, these methods were designed for fault tolerance in multiprocessor systems, which are mostly stateless. A node failure, in this case, involves migrations or rotations among the remaining nodes to preserve the original topology. This may not be suitable in a virtualized network scenario where migrations may cause considerable disruption to parts of the network that are unaffected by the failure.
{ "cite_N": [ "@cite_31", "@cite_32" ], "mid": [ "2150155789", "2148116905" ], "abstract": [ "Structural fault tolerance (SFT) is the ability of a multiprocessor to reconfigure around faulty processors or links in order to preserve its original processor interconnection structure; In this paper, we focus on the design of SFT multiprocessors that have low switch and link overheads, but can tolerate a very large number of processor faults on the average. Most previous work has concentrated on deterministic k-fault-tolerant (k-FT) designs in which exactly k spare processors and some spare switches and links are added to construct multiprocessors that can tolerate any k processor faults. However, after k faults are reconfigured around, much of the extra links and switches can remain unutilized. It is possible within the basic node-covering framework, which was introduced by Dutt and Hayes as an efficient k-FT design method, to design FT multiprocessors that have the same amount of switches and links as, say, a two-FT deterministic design, but have s spare processors, where s spl Gt 2, so that, on the average, k= spl Theta (s) (k spl les s) processor failures can be reconfigured around. Such designs utilize the spare link and switch capacity very efficiently, and are called probabilistic FT designs. An elegant and powerful method to construct covering graphs or CG's, which are key to obtaining the probabilistic FT designs, is to use linear error-correcting codes (ECCs). We show how to construct probabilistic designs with very high average fault tolerance but low wiring and switch overhead using ECCs like the 2D-parity, full-two, 3D-parity, and full-three codes.", "Given a graph G on n nodes the authors say that a graph T on n + k nodes is a k-fault tolerant version of G, if one can embed G in any n node induced subgraph of T. Thus T can sustain k faults and still emulate G without any performance degradation. They show that for a wide range of values of n, k and d, for any graph on n nodes with maximum degree d there is a k-fault tolerant graph with maximum degree O(kd). They provide lower bounds as well: there are graphs G with maximum degree d such that any k-fault tolerant version of them has maximum degree at least Omega (d square root k). >" ] }
1005.5367
2950441226
In a virtualized infrastructure where physical resources are shared, a single physical server failure will terminate several virtual servers and cripple the virtual infrastructures which contained those virtual servers. In the worst case, more failures may cascade from overloading the remaining servers. To guarantee some level of reliability, each virtual infrastructure, at instantiation, should be augmented with backup virtual nodes and links that have sufficient capacities. This ensures that, when physical failures occur, sufficient computing resources are available and the virtual network topology is preserved. However, in doing so, the utilization of the physical infrastructure may be greatly reduced. This can be circumvented if backup resources are pooled and shared across multiple virtual infrastructures, and intelligently embedded in the physical infrastructure. These techniques can reduce the physical footprint of virtual backups while guaranteeing reliability.
Our problem formulation involves virtual network embedding @cite_5 @cite_21 @cite_13 with added node and link redundancy for reliability. In particular, our model employs path-splitting @cite_13 , which is implicitly incorporated in our multi-commodity flow formulation. Path-splitting allows a flow between two nodes to be split over multiple routes such that the aggregate flow across those routes equals the demand between the two nodes; a minimal flow-splitting sketch follows this entry. This gives more resilience to link failures and allows for graceful degradation.
{ "cite_N": [ "@cite_5", "@cite_21", "@cite_13" ], "mid": [ "2161965229", "2152415706", "2114298221" ], "abstract": [ "Recently network virtualization has been proposed as a promising way to overcome the current ossification of the Internet by allowing multiple heterogeneous virtual networks (VNs) to coexist on a shared infrastructure. A major challenge in this respect is the VN embedding problem that deals with efficient mapping of virtual nodes and virtual links onto the substrate network resources. Since this problem is known to be NP-hard, previous research focused on designing heuristic-based algorithms which had clear separation between the node mapping and the link mapping phases. This paper proposes VN embedding algorithms with better coordination between the two phases. We formulate the VN em- bedding problem as a mixed integer program through substrate network augmentation. We then relax the integer constraints to obtain a linear program, and devise two VN embedding algo- rithms D-ViNE and R-ViNE using deterministic and randomized rounding techniques, respectively. Simulation experiments show that the proposed algorithms increase the acceptance ratio and the revenue while decreasing the cost incurred by the substrate network in the long run.", "Assigning the resources of a virtual network to the components of a physical network, called Virtual Network Mapping, plays a central role in network virtualization. Existing approaches use classical heuristics like simulated annealing or attempt a two stage solution by solving the node mapping in a first stage and doing the link mapping in a second stage. The contribution of this paper is a Virtual Network Mapping (VNM) algorithm based on subgraph isomorphism detection: it maps nodes and links during the same stage. Our experimental evaluations show that this method results in better mappings and is faster than the two stage approach, especially for large virtual networks with high resource consumption which are hard to map.", "Network virtualization is a powerful way to run multiple architectures or experiments simultaneously on a shared infrastructure. However, making efficient use of the underlying resources requires effective techniques for virtual network embedding--mapping each virtual network to specific nodes and links in the substrate network. Since the general embedding problem is computationally intractable, past research restricted the problem space to allow efficient solutions, or focused on designing heuristic algorithms. In this paper, we advocate a different approach: rethinking the design of the substrate network to enable simpler embedding algorithms and more efficient use of resources, without restricting the problem space. In particular, we simplify virtual link embedding by: i) allowing the substrate network to split a virtual link over multiple substrate paths and ii) employing path migration to periodically re-optimize the utilization of the substrate network. We also explore node-mapping algorithms that are customized to common classes of virtual-network topologies. Our simulation experiments show that path splitting, path migration,and customized embedding algorithms enable a substrate network to satisfy a much larger mix of virtual networks" ] }
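As a sketch of how path splitting can be expressed, the toy linear program below (all node names, capacities, and the demand value are made up for illustration; this is not the embedding formulation of the paper) splits one demand across several candidate substrate paths so that the path flows sum to the demand while respecting per-link capacities. It uses scipy's generic LP solver.

```python
# A minimal path-splitting LP on a hypothetical substrate network.
import numpy as np
from scipy.optimize import linprog

# Candidate paths between virtual endpoints s and t, given as lists of links.
paths = [
    [("s", "a"), ("a", "t")],
    [("s", "b"), ("b", "t")],
    [("s", "a"), ("a", "b"), ("b", "t")],
]
capacity = {("s", "a"): 6, ("a", "t"): 4, ("s", "b"): 5,
            ("b", "t"): 7, ("a", "b"): 3}
demand = 9.0

links = list(capacity)
# A_ub[l, p] = 1 if path p uses link l; total flow on a link <= its capacity.
A_ub = np.array([[1.0 if l in p else 0.0 for p in paths] for l in links])
b_ub = np.array([capacity[l] for l in links])
# The split flows must add up to the demand between the two nodes.
A_eq = np.ones((1, len(paths)))
b_eq = np.array([demand])
# Objective: prefer short paths (cost = hop count of each path).
cost = np.array([len(p) for p in paths], dtype=float)

res = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * len(paths), method="highs")
if res.success:
    for p, f in zip(paths, res.x):
        print(f"{f:4.1f} units on {p}")
else:
    print("demand cannot be met even with splitting")
```

On this toy instance no single path can carry the demand of 9, but splitting it over two paths (4 units and 5 units) satisfies it, which is exactly the resilience benefit described above.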
1005.5413
2951915616
We prove that it is NP-hard to decide whether two points in a polygonal domain with holes can be connected by a wire. This implies that finding any approximation to the shortest path for a long snake amidst polygonal obstacles is NP-hard. On the positive side, we show that snake's problem is "length-tractable": if the snake is "fat", i.e., its length width ratio is small, the shortest path can be computed in polynomial time.
In VLSI, numerous extensions and generalizations of the basic problem have been considered, including routing multiple paths, routing on several levels, and routing with different constraints and objectives. It is impossible to survey all the literature on the subject; we only mention the books @cite_14 @cite_16 .
{ "cite_N": [ "@cite_14", "@cite_16" ], "mid": [ "1979696180", "2150368068" ], "abstract": [ "We consider the existence and efficient construction of bounded curvature paths traversing constant-width regions of the plane, called corridors. We make explicit a width threshold τ with the property that (a) all corridors of width at least τ admit a unit-curvature traversal and (b) for any width w < τ there exist corridors of width w with no such traversal. Applications to the design of short, but not necessarily shortest, and high clearance, but not necessarily maximum clearance, curvature-bounded paths in general polygonal domains, are also discussed.", "This pioneering study of two-dimensional wiring patterns develops powerful algorithms for the physical design of VLSI circuits. Its homotopic approach to circuit layout advances the state of the art in wire routing and layout compaction, and will inspire future research. By viewing wires as flexible connections with fixed topology, the author obtains simple and efficient algorithms for CAD problems whose previous solutions employed, unreliable or inefficient heuristics.\"Single-Layer Wire Routing and Compaction\" is the first rigorous treatment of homotopic layouts and the techniques for optimizing them. In a novel application of classical mathematics to computer science, Maley characterizes the ideal routing of a layout in terms of simple topological invariants. He derives practical algorithms from this theoretical insight. The algorithms and their underlying ideas are intuitive, widely applicable, and presented in a highly readable style.F. Miller Maley is a Research Associate in the Computer Science Department at Princeton University. \"Single-Layer Wire Routing and Compaction\" is included in the series Foundations of Computing, edited by Michael Garey and Albert Meyer." ] }
1005.5413
2951915616
We prove that it is NP-hard to decide whether two points in a polygonal domain with holes can be connected by a wire. This implies that finding any approximation to the shortest path for a long snake amidst polygonal obstacles is NP-hard. On the positive side, we show that snake's problem is "length-tractable": if the snake is "fat", i.e., its length width ratio is small, the shortest path can be computed in polynomial time.
In robotics, thick paths have been studied as routes for a circular robot. In this context, path self-overlap poses no problem, since even a self-overlapping path may be traversed by the robot; that is, in contrast to VLSI, robotics research need not insist on non-self-overlapping paths. In @cite_2 , Chew gave an efficient algorithm for finding a shortest thick path in a polygonal domain. In a sense, our algorithm for the shortest path of a short snake () may be viewed as an extension of Chew's.
{ "cite_N": [ "@cite_2" ], "mid": [ "2077163943" ], "abstract": [ "Given a robot R, a set S of obstacles, and points p and q, the Shortest Path Problem is to find the shortest path for R to move from p to q without crashing into any of the obstacles. We show that if the problem is restricted to a disc-shaped robot in the plane with nonintersecting polygons as obstacles then the shortest path can be found in time O(n 2 log n) where n is the number of edges that make up the polygonal obstacles. This matches the best time currently known for the simpler problem of finding the shortest path in the plane for a point robot." ] }
1005.5413
2951915616
We prove that it is NP-hard to decide whether two points in a polygonal domain with holes can be connected by a wire. This implies that finding any approximation to the shortest path for a long snake amidst polygonal obstacles is NP-hard. On the positive side, we show that snake's problem is "length-tractable": if the snake is "fat", i.e., its length width ratio is small, the shortest path can be computed in polynomial time.
Motion planning for an object with few degrees of freedom may be approached with cell decomposition techniques @cite_12 @cite_9 . Closest to our bounded-length snake problem is the work on path planning for a segment (rod) @cite_21 @cite_13 @cite_18 @cite_4 . Short snakes are also relevant to more recent applications in motor protein motion @cite_10 @cite_7 .
{ "cite_N": [ "@cite_18", "@cite_4", "@cite_7", "@cite_10", "@cite_9", "@cite_21", "@cite_13", "@cite_12" ], "mid": [ "141519090", "2061311037", "1515092449", "2113877752", "", "2781991398", "1993860574", "101508493" ], "abstract": [ "", "We present here a new and efficient algorithm for planning collision-free motion of a line segment (a rod or a “ladder”) in two-dimensional space amidst polygonal obstacles. The algorithm uses a different approach than those used in previous motion-planning techniques, namely, it calculates the boundary of the (three-dimensional) space of free positions of the ladder, and then uses this boundary for determining the existence of required motions, and plans such motions whenever possible. The algorithm runs in timeO(K logn) =O(n 2 logn) wheren is the number of obstacle corners and whereK is the total number of pairs of obstacle walls or corners of distance less than or equal to the length of the ladder. The algorithm has thus the same complexity as the best previously known algorithm of Leven and Sharir [5], but if the obstacles are not too cluttered together it will run much more efficiently. The algorithm also serves as an initial demonstration of the viability of the technique it uses, which we expect to be useful in obtaining efficient motion-planning algorithms for other more complex robot systems.", "Many types of cellular motility, including muscle contraction, are driven by the cyclical interaction of the motor protein myosin with actin filaments, coupled to the breakdown of ATP. It is thought that myosin binds to actin and then produces force and movement as it ‘tilts’ or ‘rocks’ into one or more subsequent, stable conformations1,2. Here we use an optical-tweezers transducer to measure the mechanical transitions made by a single myosin head while it is attached to actin. We find that two members of the myosin-I family, rat liver myosin-I of relative molecular mass 130,000 (Mr 130K) and chick intestinal brush-border myosin-I, produce movement in two distinct steps. The initial movement (of roughly 6 nanometres) is produced within 10 milliseconds of actomyosin binding, and the second step (of roughly 5.5nanometres) occurs after a variable time delay. The duration of the period following the second step is also variable and depends on the concentration of ATP. At the highest time resolution possible (about 1 millisecond), we cannot detect this second step when studying the single-headed subfragment-1 of fast skelet al muscle myosin II. The slower kinetics of myosin-I have allowed us to observe the separate mechanical states that contribute to its working stroke.", "Theoretical modeling and computer simulations of molecular motors provide insight that engineers can exploit to design and control artificial nanomotors.For obvious reasons, the study of molecular motors has been a traditional area of research in molecular cell biology and biochemistry, but in recent years, it has attracted physicists' as well as engineers. Exploring the design and mechanisms of these motors from an engineering perspective requires investigating their structure and dynamics using the fundamental principles of physics at the subcellular level. The insights gained from such fundamental research could also find practical applications in designing and manufacturing artificial nanomotors - motors whose typical size is usually in the rage of a few nanometers to a few tens of nanometers. 
In contrast to man-made macroscopically large motors, natural nanomotors have evolved over billions of years. While discussing the design, mechanism, and control of molecular motors, this article also compares their macroscopic counterparts to emphasize common features as well as differences.", "", "", "We present a new roadmap for a rod-shaped robot operating in a three-dimensional workspace, whose configuration space is diffeomorphic to R3 X S2. This roadmap is called the rod hierarchical generalized Voronoi graph (rod-HGVG) and can be used to find a path between any two points in an unknown configuration space using only the sensor data. More importantly, the rod-HGVG serves as a basis for an algorithm to explore an unknown configuration space without explicitly constructing it. Once the rod-HGVG is constructed, the planner can use it to plan a path between any two configurations. One of the challenges in defining the roadmap revolves around a homotopy theory result, which asserts that there cannot be a one-dimensional deformation retract of a non-contractible space with dimension greater than two. Instead, we define an exact cellular decomposition on the free configuration space and a deformation retract in each cell (each cell is contractible). Next, we \"connect\" the deformation retracts of each of the cells using a roadmap of the workspace. We call this roadmap a piecewise retract because it comprises many deformation retracts. Exploiting the fact that the rod-HGVG is defined in terms of workspace distance measurements, we prescribe an incremental procedure to construct the rod-HGVG that uses the distance information that can be obtained from conventional range sensors.", "1 Introduction and Overview.- 2 Configuration Space of a Rigid Object.- 3 Obstacles in Configuration Space.- 4 Roadmap Methods.- 5 Exact Cell Decomposition.- 6 Approximate Cell Decomposition.- 7 Potential Field Methods.- 8 Multiple Moving Objects.- 9 Kinematic Constraints.- 10 Dealing with Uncertainty.- 11 Movable Objects.- Prospects.- Appendix A Basic Mathematics.- Appendix B Computational Complexity.- Appendix C Graph Searching.- Appendix D Sweep-Line Algorithm.- References." ] }
1005.4774
1920911877
The market economy deals with many interacting agents such as buyers and sellers who are autonomous intelligent agents pursuing their own interests. One such multi-agent system (MAS) that plays an important role in auctions is the combinatorial auctioning system (CAS). We use this framework to define our concept of fairness in terms of what we call as "basic fairness" and "extended fairness". The assumptions of quasilinear preferences and dominant strategies are taken into consideration while explaining fairness. We give an algorithm to ensure fairness in a CAS using a Generalized Vickrey Auction (GVA). We use an algorithm of Sandholm to achieve optimality. Basic and extended fairness are then analyzed according to the dominant strategy solution concept.
The auction mechanism proposed by Biggart @cite_14 provides an economic sociology perspective. There, fairness can mean different things to bidders and to the auctioneer: the auctioneer may consider fair a process that in fact simply maximizes his revenue, whereas the bidders may consider fair a process that gives the auctioneer the least return on all items. The most important consideration overall is to sustain the community's faith in the fairness of the process. This does not mean that buyers and sellers cannot press their advantage, but they are allowed to do so only insofar as the community as a whole considers their actions appropriate and acceptable.
{ "cite_N": [ "@cite_14" ], "mid": [ "387913279" ], "abstract": [ "List of Contributors. Acknowledgments. Preface. Part I: Foundational Statements. 1. An Inquiry into the Nature and Causes of the Wealth of Nations (Adam Smith) 2. Grundrisse: Foundations of the Critique of Political Economy Selections from the Chapter on capital (Karl Max) 3. Economy and Society: An Outline of Interpretive Sociology (Max Weber) 4. The Great Transformation (Karl Polanyi) Part II: Economic Action. 5. Economic Action and Social Structure: The Problem of Embeddedness (Mark Granovetter) 6. Making Markets: Opportunism and restraint on Wall Street (Mitchel Y. Abolafia) 7. Auctions: The Social Construction of Value (Charles Smith) 8. the Structural Sources of Adventurism: The Case of the California Gold Rush (Gary G. Hamilton) 9. The Separative Self: Andocentric Bias in Neoclassical Assumptions (Paula England) Part III: Capitalist States and Globalizing Markets. 10. Weber's Last Theory of capitalism (Randall Collins) 11. Markets as Politics: A Political-Culture Approach to Market Institutions (Neil Fligstein) 12. Rethinking Capitalism (Fred Block) 13. Developing Difference: Social Organization and the rise of the Auto Industries of South Korea, Taiwan, Spain, and Argentina (Nicole Woolsey Biggart and Mauro F. Guillen) 14. Learning from Collaboration: Knowledge and Networks in the Biotechnology and Pharmaceutical Industries (Walter W. Powell) Part IV: Economic Culture and the Culture of the Economy. 15. The Forms of Capital (Pierre Bourdieu) 16. Money, Meaning, and Morality (Bruce G. Carruthers and Wendy Nelson Espeland) 17. The Social Meaning of Money (Viviana A. Zelizer) 18. Opposing Ambitions: Gender and Identity in an Alternative Organization (Sherryl Kleinman) 19. Greening the Economy from the Bottom Up? Lessons in Consumption from the Energy Case (Loren Lutzenhiser) Index." ] }
1005.4774
1920911877
The market economy deals with many interacting agents such as buyers and sellers who are autonomous intelligent agents pursuing their own interests. One such multi-agent system (MAS) that plays an important role in auctions is the combinatorial auctioning system (CAS). We use this framework to define our concept of fairness in terms of what we call as "basic fairness" and "extended fairness". The assumptions of quasilinear preferences and dominant strategies are taken into consideration while explaining fairness. We give an algorithm to ensure fairness in a CAS using a Generalized Vickrey Auction (GVA). We use an algorithm of Sandholm to achieve optimality. Basic and extended fairness are then analyzed according to the dominant strategy solution concept.
A concept of verifiable fairness in Internet auctions has been proposed by Liao and Hwang @cite_17 to promote trust in Internet auctions. The proposed scheme provides evidence about the policies actually applied, so that bidders' confidence increases and they consider the process fair. Most such auctions treat transparency of the auctioning process and its rules as the basis for ensuring fairness in the system, but a clear notion of fairness is still wanting.
{ "cite_N": [ "@cite_17" ], "mid": [ "2055445578" ], "abstract": [ "Describes a novel Internet auction model achieving verifiable fairness, a requirement aimed at enhancing the trust of bidders in auctioneers. Distrust in remote auctioneers prevents bidders from participating in Internet auctioning. According to proposed survey reports, this study presents four characteristics that render the Internet untrustworthy for bidders. These intrinsic properties suggest that auction sites not only follow auction policies, but provide customers with evidence validating that the policies are applied fairly. Evidence of verifiable fairness provides bidders with a basis for confidence in Internet auctions. Cryptographic techniques are also applied herein to establish a novel auction model with evidence to manifest and verify every step of the auctioneer. Analysis results demonstrate that the proposed model satisfies various requirements regarding fairness and privacy. Moreover, in the proposed model, the losing bids remain sealed." ] }
1005.4774
1920911877
The market economy deals with many interacting agents such as buyers and sellers who are autonomous intelligent agents pursuing their own interests. One such multi-agent system (MAS) that plays an important role in auctions is the combinatorial auctioning system (CAS). We use this framework to define our concept of fairness in terms of what we call as "basic fairness" and "extended fairness". The assumptions of quasilinear preferences and dominant strategies are taken into consideration while explaining fairness. We give an algorithm to ensure fairness in a CAS using a Generalized Vickrey Auction (GVA). We use an algorithm of Sandholm to achieve optimality. Basic and extended fairness are then analyzed according to the dominant strategy solution concept.
Fairness as a collective measure has been considered by Moulin @cite_1 , who proposes an aggregate or collective welfare measure expressed in terms of an objective standard or index. The index assumes an equivalence between this measure and a particular mix of economic and non-economic goods that yields happiness under a varying set of individual utility functions. The aim is to capture social welfare and commonwealth and incorporate them into every individual's happiness equation. Though debatable, it provides an excellent introduction to the concept of fairness; a toy comparison of such welfare indices follows this entry.
{ "cite_N": [ "@cite_1" ], "mid": [ "1577069963" ], "abstract": [ "The concept of fair division is as old as civil society itself. Aristotle's \"equal treatment of equals\" was the first step toward a formal definition of distributive fairness. The concept of collective welfare, more than two centuries old, is a pillar of modern economic analysis. Reflecting fifty years of research, this book examines the contribution of modern microeconomic thinking to distributive justice. Taking the modern axiomatic approach, it compares normative arguments of distributive justice and their relation to efficiency and collective welfare. The book begins with the epistemological status of the axiomatic approach and the four classic principles of distributive justice: compensation, reward, exogenous rights, and fitness. It then presents the simple ideas of equal gains, equal losses, and proportional gains and losses. The book discusses three cardinal interpretations of collective welfare: Bentham's \"utilitarian\" proposal to maximize the sum of individual utilities, the Nash product, and the egalitarian leximin ordering. It also discusses the two main ordinal definitions of collective welfare: the majority relation and the Borda scoring method. The Shapley value is the single most important contribution of game theory to distributive justice. A formula to divide jointly produced costs or benefits fairly, it is especially useful when the pattern of externalities renders useless the simple ideas of equality and proportionality. The book ends with two versatile methods for dividing commodities efficiently and fairly when only ordinal preferences matter: competitive equilibrium with equal incomes and egalitarian equivalence. The book contains a wealth of empirical examples and exercises." ] }
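To make the idea of an aggregate welfare index concrete, the small sketch below (toy utility numbers, assumed purely for illustration) computes three classical indices discussed in @cite_1 (the utilitarian sum, the Nash product, and the egalitarian leximin ordering) and shows that they can pick different allocations as best.

```python
# Comparing three collective-welfare indices on toy utility vectors.
from math import prod

def utilitarian(u):      # Bentham: sum of utilities
    return sum(u)

def nash_product(u):     # Nash: product of utilities
    return prod(u)

def leximin_key(u):      # leximin: compare sorted utilities lexicographically
    return tuple(sorted(u))

allocations = {
    "A": [5, 5, 5],      # perfectly equal
    "B": [9, 5, 2],      # higher total, very unequal
    "C": [6, 5, 4],      # slightly unequal, same total as A
}
for name, u in allocations.items():
    print(name, "sum:", utilitarian(u), "nash:", nash_product(u),
          "leximin key:", leximin_key(u))

# The indices need not agree on which allocation is best:
print("best by sum:    ", max(allocations, key=lambda k: utilitarian(allocations[k])))
print("best by nash:   ", max(allocations, key=lambda k: nash_product(allocations[k])))
print("best by leximin:", max(allocations, key=lambda k: leximin_key(allocations[k])))
```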
1005.4774
1920911877
The market economy deals with many interacting agents such as buyers and sellers who are autonomous intelligent agents pursuing their own interests. One such multi-agent system (MAS) that plays an important role in auctions is the combinatorial auctioning system (CAS). We use this framework to define our concept of fairness in terms of what we call as "basic fairness" and "extended fairness". The assumptions of quasilinear preferences and dominant strategies are taken into consideration while explaining fairness. We give an algorithm to ensure fairness in a CAS using a Generalized Vickrey Auction (GVA). We use an algorithm of Sandholm to achieve optimality. Basic and extended fairness are then analyzed according to the dominant strategy solution concept.
A Distributed Combinatorial Auctioning System (DCAS) consisting of auctioneers and bidders who communicate by message passing has been proposed @cite_2 . That work uses a fair division algorithm based on the DCAS concept and model, and discusses how basic and extended fairness may be achieved in distributed resource allocation.
{ "cite_N": [ "@cite_2" ], "mid": [ "1821032742" ], "abstract": [ "Combinatorial Auctions are auctions where bidders can place bids on combinations of items, called packages or bundles, rather than just on individual items. In this paper we extend this concept to distributed system, by proposing a Distributed Combinatorial Auctioning System consisting of auctioneers and bidders who communicate by message-passing. We also propose a fair division algorithm that is based on our DCAS concept and model. Our model consist of auctioneers that are distributed in the system each having local bidders. Auctioneers collect local bids for the bundles. One of the auctioneers acts obtains all the bids from other auctioneers, and performs the computations necessary for the combinatorial auction. We also briefly discuss how basic and extended fairness are implemented in resource allocation by our algorithm." ] }
1005.4774
1920911877
The market economy deals with many interacting agents such as buyers and sellers who are autonomous intelligent agents pursuing their own interests. One such multi-agent system (MAS) that plays an important role in auctions is the combinatorial auctioning system (CAS). We use this framework to define our concept of fairness in terms of what we call as "basic fairness" and "extended fairness". The assumptions of quasilinear preferences and dominant strategies are taken into consideration while explaining fairness. We give an algorithm to ensure fairness in a CAS using a Generalized Vickrey Auction (GVA). We use an algorithm of Sandholm to achieve optimality. Basic and extended fairness are then analyzed according to the dominant strategy solution concept.
The fair package assignment model proposed by Lahaie and Parkes @cite_12 is defined on items having pure complements, or super-additive valuations. The model does not address combinatorial package assignments that involve both complements and substitutes in general. It provides fairness only within a "core" that contains the set of all distributions considered competitive; no fairness is posited for other distributions. Hence bidders whose distributions lie outside the core do not get the benefit of fair assessment. In multiple-round combinatorial auctions, for example, bidders whose bids are not in the core during earlier rounds are no longer in contention in later ones. This scheme seems unfair in a fundamental way, as it effectively discriminates against bidders who cannot make it into the core. In our model only truthfulness in bidding is considered, and no bidders are distinguished based on whether their bids lie inside or outside a putative core.
{ "cite_N": [ "@cite_12" ], "mid": [ "2144859203" ], "abstract": [ "We consider the problem of fair allocation in the package assignment model, where a set of indivisible items, held by single seller, must be efficiently allocated to agents with quasi-linear utilities. A fair assignment is one that is efficient and envy-free. We consider a model where bidders have superadditive valuations, meaning that items are pure complements. Our central result is that core outcomes are fair and even coalition-fair over this domain, while fair distributions may not even exist for general valuations. Of relevance to auction design, we also establish that the core is equivalent to the set of anonymous-price competitive equilibria, and that superadditive valuations are a maximal domain that guarantees the existence of anonymous-price competitive equilibrium. Our results are analogs of core equivalence results for linear prices in the standard assignment model, and for nonlinear, non-anonymous prices in the package assignment model with general valuations." ] }
1005.5020
1637993330
In secure multi-party computation @math parties jointly evaluate an @math -variate function @math in the presence of an adversary which can corrupt up till @math parties. Almost all the works that have appeared in the literature so far assume the presence of authenticated channels between the parties. This assumption is far from realistic. Two directions of research have been borne from relaxing this (strong) assumption: (a) The adversary is virtually omnipotent and can control all the communication channels in the network, (b) Only a partially connected topology of authenticated channels is guaranteed and adversary controls a subset of the communication channels in the network. This work introduces a new setting for (unconditional) secure multiparty computation problem which is an interesting intermediate model with respect to the above well studied models from the literature (by sharing a salient feature from both the above models). We consider the problem of (unconditional) secure multi-party computation when 'some' of the communication channels connecting the parties can be corrupted passively as well as actively. For this setting, some honest parties may be connected to several other honest parties via corrupted channels and may not be able to authentically communicate with them. Such parties may not be assured the canonical guarantees of correctness or privacy. We present refined definitions of security for this new intermediate model of unconditional multiparty computation. We show how to adapt protocols for (Unconditional) secure multiparty computation to realize the definitions and also argue the tightness of the results achieved by us.
Assuming that strictly more than @math parties are honest, it has been shown that any @math -variate function can be securely computed in the information-theoretic regime @cite_0 @cite_8 . In the computational model, the corresponding results were given in @cite_5 @cite_3 .
{ "cite_N": [ "@cite_0", "@cite_5", "@cite_3", "@cite_8" ], "mid": [ "2006453614", "44936433", "2092422002", "2080911030" ], "abstract": [ "Every function of n inputs can be efficiently computed by a complete network of n processors in such a way that: If no faults occur, no set of size t n 2 of players gets any additional information (other than the function value), Even if Byzantine faults are allowed, no set of size t n 3 can either disrupt the computation or get additional information. Furthermore, the above bounds on t are tight!", "Permission to copy without fee all or part of this material is granted provided that the copies are not made or Idistributed for direct commercial advantage, the ACM copyright notice and the title of the publication and its date appear, and notice is given that copying is by permission of the Association for Computing Machimery. To copy otherwise, or to republish, requires a fee and or specfic permission. correctly run a given Turing machine hi on these 2;‘s while keeping the maximum possible pniracy about them. That is, they want to compute Y ( l,..., 2,) without revealing more about the Zi’s than it is already contained in the value y itself. For instance, if M computes the sum of the q’s, every single player should not be able to learn more than the sum of the inputs of the other parties. Here A4 ma.y very well be a probabilistic Turing machine. In this case, all playen want to agree on a single string y, selected with the right probability distribution, as M’s output.", "Two millionaires wish to know who is richer; however, they do not want to find out inadvertently any additional information about each other’s wealth. How can they carry out such a conversation? This is a special case of the following general problem. Suppose m people wish to compute the value of a function f(x1, x2, x3, . . . , xm), which is an integer-valued function of m integer variables xi of bounded range. Assume initially person Pi knows the value of xi and no other x’s. Is it possible for them to compute the value of f , by communicating among themselves, without unduly giving away any information about the values of their own variables? The millionaires’ problem corresponds to the case when m = 2 and f(x1, x2) = 1 if x1 < x2, and 0 otherwise. In this paper, we will give precise formulation of this general problem and describe three ways of solving it by use of one-way functions (i.e., functions which are easy to evaluate but hard to invert). These results have applications to secret voting, private querying of database, oblivious negotiation, playing mental poker, etc. We will also discuss the complexity question “How many bits need to be exchanged for the computation”, and describe methods to prevent participants from cheating. Finally, we study the question “What cannot be accomplished with one-way functions”. Before describing these results, we would like to put this work in perspective by first considering a unified view of secure computation in the next section.", "Under the assumption that each pair of participants can communicate secretly, we show that any reasonable multiparty protocol can be achieved if at least 2 n 3 of the participants are honest. The secrecy achieved is unconditional. It does not rely on any assumption about computational intractability." ] }
1005.5020
1637993330
In secure multi-party computation @math parties jointly evaluate an @math -variate function @math in the presence of an adversary which can corrupt up till @math parties. Almost all the works that have appeared in the literature so far assume the presence of authenticated channels between the parties. This assumption is far from realistic. Two directions of research have been borne from relaxing this (strong) assumption: (a) The adversary is virtually omnipotent and can control all the communication channels in the network, (b) Only a partially connected topology of authenticated channels is guaranteed and adversary controls a subset of the communication channels in the network. This work introduces a new setting for (unconditional) secure multiparty computation problem which is an interesting intermediate model with respect to the above well studied models from the literature (by sharing a salient feature from both the above models). We consider the problem of (unconditional) secure multi-party computation when 'some' of the communication channels connecting the parties can be corrupted passively as well as actively. For this setting, some honest parties may be connected to several other honest parties via corrupted channels and may not be able to authentically communicate with them. Such parties may not be assured the canonical guarantees of correctness or privacy. We present refined definitions of security for this new intermediate model of unconditional multiparty computation. We show how to adapt protocols for (Unconditional) secure multiparty computation to realize the definitions and also argue the tightness of the results achieved by us.
The trusted third party paradigm was proposed in @cite_5 ; it has since been extended, in its most general form, into the universal composability framework.
{ "cite_N": [ "@cite_5" ], "mid": [ "44936433" ], "abstract": [ "Permission to copy without fee all or part of this material is granted provided that the copies are not made or Idistributed for direct commercial advantage, the ACM copyright notice and the title of the publication and its date appear, and notice is given that copying is by permission of the Association for Computing Machimery. To copy otherwise, or to republish, requires a fee and or specfic permission. correctly run a given Turing machine hi on these 2;‘s while keeping the maximum possible pniracy about them. That is, they want to compute Y ( l,..., 2,) without revealing more about the Zi’s than it is already contained in the value y itself. For instance, if M computes the sum of the q’s, every single player should not be able to learn more than the sum of the inputs of the other parties. Here A4 ma.y very well be a probabilistic Turing machine. In this case, all playen want to agree on a single string y, selected with the right probability distribution, as M’s output." ] }
1005.3902
1679112856
This paper describes in details the first version of Morphonette, a new French morphological resource and a new radically lexeme-based method of morphological analysis. This research is grounded in a paradigmatic conception of derivational morphology where the morphological structure is a structure of the entire lexicon and not one of the individual words it contains. The discovery of this structure relies on a measure of morphological similarity between words, on formal analogy and on the properties of two morphological paradigms:
The construction of Morphonette uses a bootstrapping algorithm to extend an initial reliable seed. This technique has also often been used in computational morphology, for instance by @cite_19 or by @cite_13 . However, our method differs from these because it is fully lexeme-based: it makes no use of morphemes and contains no representation of them. Morphological regularities emerge from a very large set of analogies, and gathering this set is one of the contributions of the work presented in this paper. It was made possible by the measure of morphological similarity of @cite_10 , which was inspired by work on small words by @cite_11 . Our method is also close to those of @cite_16 and @cite_3 , where words are not decomposed into morphemes; both use string edit distance to identify formal similarity between words (a minimal sketch of an analogy test follows this entry). Our work is also close to that of @cite_9 , @cite_4 and @cite_0 , who use formal analogies to analyze words morphologically and to translate them.
{ "cite_N": [ "@cite_4", "@cite_10", "@cite_9", "@cite_3", "@cite_0", "@cite_19", "@cite_16", "@cite_13", "@cite_11" ], "mid": [ "2035410082", "2132396579", "2075910053", "1727944201", "2121963504", "2042143122", "2047603832", "1595341061", "2044589019" ], "abstract": [ "Handling terminology is an important matter in a translation workflow. However, current Machine Translation (MT) systems do not yet propose anything proactive upon tools which assist in managing terminological databases. In this work, we investigate several enhancements to analogical learning and test our implementation on translating medical terms. We show that the analogical engine works equally well when translating from and into a morphologically rich language, or when dealing with language pairs written in different scripts. Combining it with a phrase-based statistical engine leads to significant improvements.", "The paper presents a computational model aiming at making the morphological structure of the lexicon emerge from the formal and semantic regularities of the words it contains. The model is purely lexeme-based. The proposed morphological structure consists of (1) binary relations that connect each headword with words that are morphologically related, and especially with the members of its morphological family and its derivational series, and of (2) the analogies that hold between the words. The model has been tested on the lexicon of French using the TLFi machine readable dictionary.", "Analogical learning is based on a two-step inference process: (i) computation of a structural mapping between a new and a memorized situation; (ii) transfer of knowledge from the known to the unknown situation. This approach requires the ability to search for and exploit such mappings, hence the need to properly define analogical relationships, and to efficiently implement their computation. In this paper, we propose a unified definition for the notion of (formal) analogical proportion, which applies to a wide range of algebraic structures. We show that this definition is suitable for learning in domains involving large databases of structured data, as is especially the case in Natural Language Processing (NLP). We then present experimental results obtained on two morphological analysis tasks which demonstrate the flexibility and accuracy of this approach.", "We present an algorithm that takes an unannotated corpus as its input, and returns a ranked list of probable morphologically related pairs as its output. The algorithm tries to discover morphologically related pairs by looking for pairs that are both orthographically and semantically similar, where orthographic similarity is measured in terms of minimum edit distance, and semantic similarity is measured in terms of mutual information. The procedure does not rely on a morpheme concatenation model, nor on distributional properties of word substrings (such as affix frequency). Experiments with German and English input give encouraging results, both in terms of precision (proportion of good pairs found at various cutoff points of the ranked list), and in terms of a qualitative analysis of the types of morphological patterns discovered by the algorithm.", "While most approaches to unsupervised morphology acquisition often rely on metrics based on information theory for identifying morphemes, we describe a novel approach relying on the notion of formal analogy, that is, a relation between four forms, such as: reader is to doer as reading is to doing. 
Our assumption is that formal analogies identify pairs of morphologically related words, for instance reader reading and doer doing. Based on this assumption, our approach simply consists in identifying all the formal analogies involving the words in a lexicon. It turned out that for large lexicons, this happens to be a very time consuming task. Therefore, we report our attempts in designing practical systems based on the analogical principle. We applied our systems on the five languages of the shared task. The learning is made in an unsupervised manner, making use of the supplied lexicons only.", "This paper describes in detail an algorithm for the unsupervised learning of natural language morphology, with emphasis on challenges that are encountered in languages typologically similar to European languages. It utilizes the Minimum Description Length analysis described in Goldsmith (2001), and has been implemented in software that is available for downloading and testing.", "This paper presents a corpus-based algorithm capable of inducing inflectional morphological analyses of both regular and highly irregular forms (such as brought→bring) from distributional patterns in large monolingual text with no direct supervision. The algorithm combines four original alignment models based on relative corpus frequency, contextual similarity, weighted string similarity and incrementally retrained inflectional transduction probabilities. Starting with no paired examples for training and no prior seeding of legal morphological transformations, accuracy of the induced analyses of 3888 past-tense test cases in English exceeds 99.2 for the set, with currently over 80 accuracy on the most highly irregular forms and 99.7 accuracy on forms exhibiting non-concatenative suffixation.", "Semantic relationships like specialisation can be acquired either by word-external methods relying on the context or word-internal methods based on lexical structure. Word segments are thus a relevant cue for the automatic acquisition of semantic relationships. We have developed an unsupervised method for morphological segmentation devised for this objective. Semantic relationships are deduced from specific morphological structures based on the segments discovered. Evaluation of the validity of the semantic relationships inferred is performed against WordNet and the NCI Thesaurus.", "If work in psychology has clearly brought to light that ‘conceptual flexibility’ exists in the categorization of objects, which led to re-questioning the traditional conception of categorization which considers rigid and discontinuous categories, it is not the case in linguistics and psycholinguistics. We propose, through highlighting the role of analogy in the categorization of verbs, to defend the idea of semantic flexibility which constitutes a linguistic counterpart to psychologists' advances on categorization. Accordingly, it is shown that the production of ‘metaphoric’ verbal utterances by adults and more particularly by 2 3-year-old children reflects analogical categorization of verbs which makes it possible to argue in favour of a computational model of the role of analogy in the semantic network of the verb lexicon." ] }
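As a rough illustration of the formal side of these approaches, the sketch below (a simplified suffix-substitution view, not the full definition of formal analogy nor the actual Morphonette similarity measure) tests whether a quadruple such as reader:reading :: doer:doing obeys the same suffix rewrite, and computes a plain edit distance as a crude formal-similarity measure.

```python
# Simplified analogy test and edit-distance similarity (illustrative only).

def suffix_rule(a, b):
    """Return (strip, add) such that removing `strip` from a and adding `add` gives b."""
    i = 0
    while i < min(len(a), len(b)) and a[i] == b[i]:
        i += 1
    return a[i:], b[i:]

def satisfies_analogy(a, b, c, d):
    """Check a:b :: c:d under a single suffix substitution."""
    strip, add = suffix_rule(a, b)
    stem = c[: len(c) - len(strip)] if strip else c
    return c.endswith(strip) and stem + add == d

def levenshtein(s, t):
    """Plain edit distance, used here as a crude formal-similarity measure."""
    prev = list(range(len(t) + 1))
    for i, cs in enumerate(s, 1):
        cur = [i]
        for j, ct in enumerate(t, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1,
                           prev[j - 1] + (cs != ct)))
        prev = cur
    return prev[-1]

print(satisfies_analogy("reader", "reading", "doer", "doing"))          # True
print(satisfies_analogy("reader", "reading", "chanteur", "chanteuse"))  # False
print(levenshtein("formation", "formateur"))                            # 3
```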
1005.3902
1679112856
This paper describes in details the first version of Morphonette, a new French morphological resource and a new radically lexeme-based method of morphological analysis. This research is grounded in a paradigmatic conception of derivational morphology where the morphological structure is a structure of the entire lexicon and not one of the individual words it contains. The discovery of this structure relies on a measure of morphological similarity between words, on formal analogy and on the properties of two morphological paradigms:
The Morphonette network can also be compared to the morphological families constructed by @cite_15 , @cite_5 or @cite_8 , among others. It is also very close to Polymots, a manually constructed morphological lexicon. Polymots and Morphonette are complementary since the former primarily contains short words while the latter mainly contains long words, because of the criteria we have used to select the morphological relations.
{ "cite_N": [ "@cite_5", "@cite_15", "@cite_8" ], "mid": [ "98065071", "84197106", "2107772017" ], "abstract": [ "A railroad car dumper, suitable for dumping cars of a unit train, is disclosed. The dumper has a frame and a carriage, with tracks on the carriage to receive a car from adjacent tracks. The frame has a sidewall to support a car on the tracks during dumping with the car couplers on the axis of rotation of the dumper. The dumper frame is shiftable laterally, while the carriage remains fixed to maintain alignment of the carriage tracks with the adjacent tracks. Lateral shifting of the dumper frame while the carriage and tracks remain fixed permits a locomotive, larger than the cars to be dumped, to pass through the shifted frame.", "Electronic dictionaries offer many possibilities unavailable in paper dictionaries to view, display or access information. However, even these resources fall short when it comes to access words sharing semantic features and certain aspects of form: few applications offer the possibility to access a word via a morphologically or semantically related word. In this paper, we present such an application, POLYMOTS, a lexical database for contemporary French containing 20.000 words grouped in 2.000 families. The purpose of this resource is to group words into families on the basis of shared morpho-phonological and semantic information. Words with a common stem form a family; words in a family also share a set of common conceptual fragments (in some families there is a continuity of meaning, in others meaning is distributed). With this approach, we capitalize on the bidirectional link between semantics and morpho-phonology : the user can thus access words not only on the basis of ideas, but also on the basis of formal characteristics of the word, i.e. its morphological features. The resulting lexical database should help people learn French vocabulary and assist them to find words they are looking for, going thus beyond other existing lexical resources.", "This paper investigates a novel approach to unsupervised morphology induction relying on community detection in networks. In a first step, morphological transformation rules are automatically acquired based on graphical similarities between words. These rules encode substring substitutions for transforming one word form into another. The transformation rules are then applied to the construction of a lexical network. The nodes of the network stand for words while edges represent transformation rules. In the next step, a clustering algorithm is applied to the network to detect families of morphologically related words. Finally, morpheme analyses are produced based on the transformation rules and the word families obtained after clustering. While still in its preliminary development stages, this method obtained encouraging results at Morpho Challenge 2009, which demonstrate the viability of the approach." ] }
1005.2012
2148087609
The goal of decentralized optimization over a network is to optimize a global objective formed by a sum of local (possibly nonsmooth) convex functions using only local computation and communication. It arises in various application domains, including distributed tracking and localization, multi-agent coordination, estimation in sensor networks, and large-scale machine learning. We develop and analyze distributed algorithms based on dual subgradient averaging, and we provide sharp bounds on their convergence rates as a function of the network size and topology. Our analysis allows us to clearly separate the convergence of the optimization algorithm itself and the effects of communication dependent on the network structure. We show that the number of iterations required by our algorithm scales inversely in the spectral gap of the network, and confirm this prediction's sharpness both by theoretical lower bounds and simulations for various networks. Our approach includes the cases of deterministic optimization and communication, as well as problems with stochastic optimization and or communication.
As discussed in the introduction, other researchers have designed algorithms for solving the problem . Most previous work @cite_21 @cite_5 @cite_6 @cite_30 studies convergence of a (projected) gradient method in which each node @math in the network maintains @math and, at time @math , performs the update @math for @math . With this update, Corollary 5.5 in the paper @cite_30 shows that (we use our notation and assumptions from Theorem ) @math . The above bound is minimized by setting the stepsize @math , giving convergence rate @math . It is clear that this convergence rate is substantially slower than all the rates in Corollary .
{ "cite_N": [ "@cite_30", "@cite_5", "@cite_21", "@cite_6" ], "mid": [ "2066332749", "2044212084", "1556217901", "2140655807" ], "abstract": [ "We consider a distributed multi-agent network system where the goal is to minimize a sum of convex objective functions of the agents subject to a common convex constraint set. Each agent maintains an iterate sequence and communicates the iterates to its neighbors. Then, each agent combines weighted averages of the received iterates with its own iterate, and adjusts the iterate by using subgradient information (known with stochastic errors) of its own function and by projecting onto the constraint set.", "We study a distributed computation model for optimizing a sum of convex objective functions corresponding to multiple agents. For solving this (not necessarily smooth) optimization problem, we consider a subgradient method that is distributed among the agents. The method involves every agent minimizing his her own objective function while exchanging information locally with other agents in the network over a time-varying topology. We provide convergence results and convergence rate estimates for the subgradient method. Our convergence rate results explicitly characterize the tradeoff between a desired accuracy of the generated approximate optimal solutions and the number of iterations needed to achieve the accuracy.", "We consider the problem of cooperatively minimizing the sum of convex functions, where the functions represent local objective functions of the agents. We assume that each agent has information about his local function, and communicate with the other agents over a time-varying network topology. For this problem, we propose a distributed subgradient method that uses averaging algorithms for locally sharing information among the agents. In contrast to previous works that make worst-case assumptions about the connectivity of the agents (such as bounded communication intervals between nodes), we assume that links fail according to a given stochastic process. Under the assumption that the link failures are independent and identically distributed over time (possibly correlated across links), we provide convergence results and convergence rate estimates for our subgradient algorithm.", "We consider distributed iterative algorithms for the averaging problem over time-varying topologies. Our focus is on the convergence time of such algorithms when complete (unquantized) information is available, and on the degradation of performance when only quantized information is available. We study a large and natural class of averaging algorithms, which includes the vast majority of algorithms proposed to date, and provide tight polynomial bounds on their convergence time. We also describe an algorithm within this class whose convergence time is the best among currently available averaging algorithms for time-varying topologies. We then propose and analyze distributed averaging algorithms under the additional constraint that agents can only store and communicate quantized information, so that they can only converge to the average of the initial values of the agents within some error. We establish bounds on the error and tight bounds on the convergence time, as a function of the number of quantization levels." ] }
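The @math placeholders in the related-work paragraph of the entry above hide the iteration being compared against. As a hedged reconstruction (the symbols x_i, P, alpha and g_i are assumed notation for this sketch, not copied from the source), the distributed (projected) subgradient iteration studied in the cited works typically has the form

\[
x_i(t+1) \;=\; \Pi_{\mathcal{X}}\!\Big[\sum_{j=1}^{n} P_{ij}\, x_j(t) \;-\; \alpha(t)\, g_i(t)\Big],
\qquad g_i(t) \in \partial f_i\big(x_i(t)\big),
\]

where \(\Pi_{\mathcal{X}}\) is Euclidean projection onto the constraint set \(\mathcal{X}\), \(P\) is a doubly stochastic matrix supported on the network edges, and \(\alpha(t)\) is a stepsize; the projection is simply dropped in the unconstrained variants. Each node first averages its neighbors' primal iterates and then takes a local subgradient step, which is the update whose convergence rate the paragraph contrasts with the dual averaging rates.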
1005.2012
2148087609
The goal of decentralized optimization over a network is to optimize a global objective formed by a sum of local (possibly nonsmooth) convex functions using only local computation and communication. It arises in various application domains, including distributed tracking and localization, multi-agent coordination, estimation in sensor networks, and large-scale machine learning. We develop and analyze distributed algorithms based on dual subgradient averaging, and we provide sharp bounds on their convergence rates as a function of the network size and topology. Our analysis allows us to clearly separate the convergence of the optimization algorithm itself and the effects of communication dependent on the network structure. We show that the number of iterations required by our algorithm scales inversely in the spectral gap of the network, and confirm this prediction's sharpness both by theoretical lower bounds and simulations for various networks. Our approach includes the cases of deterministic optimization and communication, as well as problems with stochastic optimization and or communication.
The distributed dual averaging algorithm is quite different from the update . The use of the proximal function @math allows us to address problems with non-Euclidean geometry, which is useful, for example, for very high-dimensional problems or when the domain @math is the simplex (e.g. [Chapter 3] of NemirovskiYu83). The differences between the algorithms become more pronounced in the analysis. Since we use dual averaging, we can avoid some technical difficulties introduced by the projection step in the update . Precisely because of this technical issue, earlier works @cite_5 @cite_21 studied unconstrained optimization. The averaging in @math also seems essential to the faster rates our approach achieves, as well as to the ease with which we can extend our results to stochastic settings.
{ "cite_N": [ "@cite_5", "@cite_21" ], "mid": [ "2044212084", "1556217901" ], "abstract": [ "We study a distributed computation model for optimizing a sum of convex objective functions corresponding to multiple agents. For solving this (not necessarily smooth) optimization problem, we consider a subgradient method that is distributed among the agents. The method involves every agent minimizing his her own objective function while exchanging information locally with other agents in the network over a time-varying topology. We provide convergence results and convergence rate estimates for the subgradient method. Our convergence rate results explicitly characterize the tradeoff between a desired accuracy of the generated approximate optimal solutions and the number of iterations needed to achieve the accuracy.", "We consider the problem of cooperatively minimizing the sum of convex functions, where the functions represent local objective functions of the agents. We assume that each agent has information about his local function, and communicate with the other agents over a time-varying network topology. For this problem, we propose a distributed subgradient method that uses averaging algorithms for locally sharing information among the agents. In contrast to previous works that make worst-case assumptions about the connectivity of the agents (such as bounded communication intervals between nodes), we assume that links fail according to a given stochastic process. Under the assumption that the link failures are independent and identically distributed over time (possibly correlated across links), we provide convergence results and convergence rate estimates for our subgradient algorithm." ] }
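For contrast with the projected update discussed in the entry above, here is a sketch of the distributed dual averaging iteration as it is usually stated (the symbols z_i, \psi and \alpha are assumed notation for this sketch):

\[
z_i(t+1) \;=\; \sum_{j=1}^{n} P_{ij}\, z_j(t) \;+\; g_i(t),
\qquad
x_i(t+1) \;=\; \Pi^{\psi}_{\mathcal{X}}\big(z_i(t+1), \alpha(t)\big),
\]
with the proximal projection
\[
\Pi^{\psi}_{\mathcal{X}}(z, \alpha) \;=\; \arg\min_{x \in \mathcal{X}} \Big\{ \langle z, x \rangle + \tfrac{1}{\alpha}\, \psi(x) \Big\}.
\]

Because subgradients accumulate in the dual variable \(z_i\) and the projection-like step only produces the primal point \(x_i\), the analysis sidesteps the projection difficulties mentioned in the paragraph, and choosing \(\psi\) appropriately (for instance, the negative entropy when \(\mathcal{X}\) is the simplex) adapts the method to non-Euclidean geometry.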
1005.2012
2148087609
The goal of decentralized optimization over a network is to optimize a global objective formed by a sum of local (possibly nonsmooth) convex functions using only local computation and communication. It arises in various application domains, including distributed tracking and localization, multi-agent coordination, estimation in sensor networks, and large-scale machine learning. We develop and analyze distributed algorithms based on dual subgradient averaging, and we provide sharp bounds on their convergence rates as a function of the network size and topology. Our analysis allows us to clearly separate the convergence of the optimization algorithm itself and the effects of communication dependent on the network structure. We show that the number of iterations required by our algorithm scales inversely in the spectral gap of the network, and confirm this prediction's sharpness both by theoretical lower bounds and simulations for various networks. Our approach includes the cases of deterministic optimization and communication, as well as problems with stochastic optimization and or communication.
In other related work, @cite_11 establish network-dependent rates for Markov incremental gradient descent (MIGD), which maintains a single vector @math at all times. A token @math determines an active node at time @math , and at time step @math the token moves to one of its neighbors @math , each with probability @math . Letting @math , the update is @math . The authors show that with optimal setting of @math and a symmetric transition matrix @math , MIGD has convergence rate @math , where @math is the return time matrix @math . In this case, let @math denote the @math th eigenvalue of @math . The eigenvalues of @math are thus @math and @math for @math , and so we have @math . Consequently, the bound in Theorem is never weaker, and for certain graphs, our results are substantially tighter, as shown in Corollary . For @math -dimensional grids (where @math ) we have @math , whereas MIGD scales as @math . For well-connected graphs, such as expanders and the complete graph, the MIGD algorithm scales as @math , essentially a factor of @math worse than our results.
{ "cite_N": [ "@cite_11" ], "mid": [ "2049659086" ], "abstract": [ "We present an algorithm that generalizes the randomized incremental subgradient method with fixed stepsize due to Nedic and Bertsekas [SIAM J. Optim., 12 (2001), pp. 109-138]. Our novel algorithm is particularly suitable for distributed implementation and execution, and possible applications include distributed optimization, e.g., parameter estimation in networks of tiny wireless sensors. The stochastic component in the algorithm is described by a Markov chain, which can be constructed in a distributed fashion using only local information. We provide a detailed convergence analysis of the proposed algorithm and compare it with existing, both deterministic and randomized, incremental subgradient methods." ] }
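As a reading aid for the MIGD comparison in the entry above, the Markov incremental subgradient update can be sketched as follows (notation assumed for this sketch): a single iterate x(t) travels with a token, the node currently holding the token applies a local projected subgradient step, and the token is then passed to a random neighbor chosen according to the transition matrix P,

\[
x(t+1) \;=\; \Pi_{\mathcal{X}}\big[x(t) - \alpha\, g_{i(t)}(t)\big],
\qquad g_{i(t)}(t) \in \partial f_{i(t)}\big(x(t)\big),
\qquad \Pr\big[i(t+1) = j \mid i(t) = i\big] = P_{ij}.
\]

Only one node works per iteration, so the achievable rate is governed by the return times of the token's random walk rather than by how quickly averaged information diffuses through the network, which is the source of the gap quantified in the paragraph.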
1005.2603
1802422466
This paper presents a concise tutorial on spectral clustering for broad spectrum graphs which include unipartite (undirected) graph, bipartite graph, and directed graph. We show how to transform bipartite graph and directed graph into corresponding unipartite graph, therefore allowing a unified treatment to all cases. In bipartite graph, we show that the relaxed solution to the @math -way co-clustering can be found by computing the left and right eigenvectors of the data matrix. This gives a theoretical basis for @math -way spectral co-clustering algorithms proposed in the literatures. We also show that solving row and column co-clustering is equivalent to solving row and column clustering separately, thus giving a theoretical support for the claim: column clustering implies row clustering and vice versa''. And in the last part, we generalize the Ky Fan theorem---which is the central theorem for explaining spectral clustering---to rectangular complex matrix motivated by the results from bipartite graph analysis.
@cite_25 and @cite_22 mention the Ky Fan theorem in their discussions of spectral clustering. However, the role of the theorem in spectral clustering can easily be overlooked, since it is not clearly described.
{ "cite_N": [ "@cite_22", "@cite_25" ], "mid": [ "2099242680", "2139850885" ], "abstract": [ "Principal component analysis (PCA) is a widely used statistical technique for unsupervised dimension reduction. K-means clustering is a commonly used data clustering for performing unsupervised learning tasks. Here we prove that principal components are the continuous solutions to the discrete cluster membership indicators for K-means clustering. New lower bounds for K-means objective function are derived, which is the total variance minus the eigenvalues of the data covariance matrix. These results indicate that unsupervised dimension reduction is closely related to unsupervised learning. Several implications are discussed. On dimension reduction, the result provides new insights to the observed effectiveness of PCA-based data reductions, beyond the conventional noise-reduction explanation that PCA, via singular value decomposition, provides the best low-dimensional linear approximation of the data. On learning, the result suggests effective techniques for K-means data clustering. DNA gene expression and Internet newsgroups are analyzed to illustrate our results. Experiments indicate that the new bounds are within 0.5-1.5 of the optimal values.", "The popular K-means clustering partitions a data set by minimizing a sum-of-squares cost function. A coordinate descend method is then used to find local minima. In this paper we show that the minimization can be reformulated as a trace maximization problem associated with the Gram matrix of the data vectors. Furthermore, we show that a relaxed version of the trace maximization problem possesses global optimal solutions which can be obtained by computing a partial eigendecomposition of the Gram matrix, and the cluster assignment for each data vectors can be found by computing a pivoted QR decomposition of the eigenvector matrix. As a by-product we also derive a lower bound for the minimum of the sum-of-squares cost function." ] }
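Since the entry above hinges on the Ky Fan theorem, it may help to state the real symmetric case explicitly (this statement is standard and is not quoted from the cited tutorial): for a symmetric matrix \(A \in \mathbb{R}^{n \times n}\) with eigenvalues \(\lambda_1 \ge \dots \ge \lambda_n\),

\[
\max_{X \in \mathbb{R}^{n \times k},\; X^{\top} X = I_k} \operatorname{tr}\big(X^{\top} A X\big) \;=\; \sum_{i=1}^{k} \lambda_i,
\]

and the maximum is attained when the columns of \(X\) span the subspace of the \(k\) leading eigenvectors. Relaxed \(k\)-way clustering objectives, such as the k-means and trace-maximization relaxations in the cited works, are exactly maximizations of this form over matrices with orthonormal columns, so the theorem both certifies the optimum of the relaxation and identifies the spectral solution.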