Dataset columns:
aid: string (length 9 to 15)
mid: string (length 7 to 10)
abstract: string (length 78 to 2.56k)
related_work: string (length 92 to 1.77k)
ref_abstract: dict
1001.3437
2950376105
We study Hilbert-Samuel multiplicity for points of Schubert varieties in the complete flag variety, by Groebner degenerations of the Kazhdan-Lusztig ideal. In the covexillary case, we give a positive combinatorial rule for multiplicity by establishing (with a Groebner basis) a reduced and equidimensional limit whose Stanley-Reisner simplicial complex is homeomorphic to a shellable ball or sphere. We show that multiplicity counts the number of facets of this complex. We also obtain a formula for the Hilbert series of the local ring. In particular, our work gives a multiplicity rule for Grassmannian Schubert varieties, providing alternative statements and proofs to formulae of [Lakshmibai-Weyman '90], [Rosenthal-Zelevinsky '01], [Krattenthaler '01], [Kreiman-Lakshmibai '04] and [Woo-Yong '09]. We suggest extensions of our methodology to the general case.
V. Lakshmibai and J. Weyman @cite_37 and V. Kreiman and V. Lakshmibai @cite_15 utilized standard monomial theory to determine multiplicity rules for Grassmannians (actually, @cite_37 deduces a recursive rule valid for any minuscule @math ).
{ "cite_N": [ "@cite_37", "@cite_15" ], "mid": [ "2015253075", "1647571282" ], "abstract": [ "In this paper we prove the results announced in [13]. Let G be a semi- simple, simply connected algebraic group defined over an algebraically closed field k. Let T be a maximal torus, B a Bore1 subgroup, B 3 T. Let W be the Weyl group of G. Let R (resp. R+ ) be the set of roots (resp. positive roots) relative to T (resp. B). Let S be the set of simple roots in R+. Let P be a maximal parabolic subgroup in G with associated fundamental weight w. Let W, be the Weyl group of P, and Wp be the set of minimal representatives of W W,. For w E Wp, let e(w) be the point and X(w) the Schubert variety in G P associated to w. In this paper we deter- mine the multiplicity m,(w) of X(w) at e(z), where e(z) E X(w), for all minuscule P’s and also for P = Pgn, G being of type C, (here Pun denotes the maximal parabolic subgroup obtained by omitting a,). The determina- tion of m,(w) is done as follows. Let L be the ample generator of Pic(G P). A basis has been constructed for @(X(w), L”) in terms of standard monomials on X(w) (cf. [ 16, 11 I). Let U; be the unipotent subgroup of G generated by U-,, ?ET(R+ - Rp+) (here R, denotes the set of roots of P and U, denotes the unipotent subgroup of G, associated to tl E R). Then U; e(r) gives an affme neighborhood of e(z) in G P. Let A, be the affine algebra of U, e(z) and A,.. = A, &, where & is the ideal of elements of A, that vanish on X(w) n U; e(z). Let M,,, be the maximal ideal in A, H, corresponding to e(r). Then using the results of [ 16, 111, we obtain a basis of M;, &f:,+,,’ . This enables us to obtain an inductive formula for F,,,, the Hilbert polynomial of X(w) at e(T) (cf. Corollaries 3.8 and 4.11), and also express m, (w) in terms of m, (w’)‘s, X(w’)‘s being the Schubert divisors in X(w) such that e(T)E X(w’) (cf. Theorems 3.7 and 4.10). Using this we", "We give positive formulas for the restriction of a Schubert Class to a T-fixed point in the equivariant K-theory and equivariant cohomology of the Grassmannian. Our formulas rely on a result of Kodiyalam-Raghavan and Kreiman-Lakshmibai, which gives an equivariant Grobner degeneration of a Schubert variety in the neighborhood of a T-fixed point of the Grassmannian." ] }
1001.3437
2950376105
We study Hilbert-Samuel multiplicity for points of Schubert varieties in the complete flag variety, by Groebner degenerations of the Kazhdan-Lusztig ideal. In the covexillary case, we give a positive combinatorial rule for multiplicity by establishing (with a Groebner basis) a reduced and equidimensional limit whose Stanley-Reisner simplicial complex is homeomorphic to a shellable ball or sphere. We show that multiplicity counts the number of facets of this complex. We also obtain a formula for the Hilbert series of the local ring. In particular, our work gives a multiplicity rule for Grassmannian Schubert varieties, providing alternative statements and proofs to formulae of [Lakshmibai-Weyman '90], [Rosenthal-Zelevinsky '01], [Krattenthaler '01], [Kreiman-Lakshmibai '04] and [Woo-Yong '09]. We suggest extensions of our methodology to the general case.
A. Woo and the second author @cite_40 explain how the Kazhdan-Lusztig ideals of @cite_24 are compatible with the Schubert polynomial combinatorics of A. Lascoux and M.-P. Schützenberger @cite_29 @cite_14 . Moreover, a Gröbner basis theorem for arbitrary Kazhdan-Lusztig ideals was obtained, generalizing work on Schubert determinantal ideals due to @cite_17 . The squarefree initial ideal is equidimensional, and the Stanley-Reisner simplicial complex is homeomorphic to a shellable ball or sphere; more precisely, it is a subword complex as defined by A. Knutson and E. Miller @cite_18 . For special cases of Kazhdan-Lusztig varieties, and choices of @math , the @math -shuffled tableaux are the pipe dreams of S. Fomin and A. N. Kirillov @cite_5 , and our thesis subsumes the geometric explanation for these pipe dreams from @cite_17 . Similar results to @cite_17 , used in this paper, were obtained for covexillary Schubert determinantal ideals in @cite_9 .
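To give a flavor of the facet-counting rule in the simplest possible case, here is a small worked example of our own (it is illustrative only and is not taken from the cited papers). Consider the generic $2 \times 2$ matrix $(z_{ij})$ and the determinantal ideal $I = (z_{11}z_{22} - z_{12}z_{21})$, whose zero set is the quadric cone, a variety of multiplicity $2$ at the origin. Under an antidiagonal term order the initial ideal is the squarefree monomial ideal
\[
\mathrm{in}(I) = (z_{12}z_{21}),
\]
and its Stanley-Reisner complex on the vertex set $\{z_{11}, z_{12}, z_{21}, z_{22}\}$ has exactly the two facets
\[
\{z_{11}, z_{12}, z_{22}\} \quad \text{and} \quad \{z_{11}, z_{21}, z_{22}\},
\]
two triangles glued along an edge, hence a shellable ball; the number of facets equals the multiplicity, in line with the rule described above.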
{ "cite_N": [ "@cite_18", "@cite_14", "@cite_29", "@cite_9", "@cite_24", "@cite_40", "@cite_5", "@cite_17" ], "mid": [ "2019363208", "", "", "1645016350", "1991634974", "2952032366", "1217949241", "2952522043" ], "abstract": [ "Abstract Let ( Π , Σ ) be a Coxeter system. An ordered list of elements in Σ and an element in Π determine a subword complex , as introduced in Knutson and Miller (Ann. of Math. (2) (2003), to appear). Subword complexes are demonstrated here to be homeomorphic to balls or spheres, and their Hilbert series are shown to reflect combinatorial properties of reduced expressions in Coxeter groups. Two formulae for double Grothendieck polynomials, one of which appeared in Fomin and Kirillov (Proceedings of the Sixth Conference in Formal Power Series and Algebraic Combinatorics, DIMACS, 1994, pp. 183–190), are recovered in the context of simplicial topology for subword complexes. Some open questions related to subword complexes are presented.", "", "", "Let f be a polynomial of degree n in ZZ[x_1,..,x_n], typically reducible but squarefree. From the hypersurface f=0 one may construct a number of other subschemes Y by extracting prime components, taking intersections, taking unions, and iterating this procedure. We prove that if the number of solutions to f=0 in ^n is not a multiple of p, then all these intersections in ^n_ just described are reduced. (If this holds for infinitely many p, then it holds over as well.) More specifically, there is a_Frobenius splitting_ on ^n_ compatibly splitting all these subschemes Y . We determine when a Gr \"obner degeneration f_0=0 of such a hypersurface f=0 is again such a hypersurface. Under this condition, we prove that compatibly split subschemes degenerate to compatibly split subschemes, and stay reduced. Our results are strongest in the case that f's lexicographically first term is i=1 ^n x_i. Then for all large p, there is a Frobenius splitting that compatibly splits f's hypersurface and all the associated Y . The Gr \"obner degeneration Y' of each such Y is a reduced union of coordinate spaces (a Stanley-Reisner scheme), and we give a result to help compute its Gr \"obner basis. We exhibit an f whose associated Y include Fulton's matrix Schubert varieties, and recover much more easily the Gr \"obner basis theorem of [Knutson-Miller '05]. We show that in Bott-Samelson coordinates on an opposite Bruhat cell X^v_ in G B, the f defining the complement of the big cell also has initial term i=1 ^n x_i, and hence the Kazhdan-Lusztig subvarieties X^v_ w degenerate to Stanley-Reisner schemes. This recovers, in a weak form, the main result of [Knutson '08].", "We present a combinatorial and computational commutative algebra methodology for studying singularities of Schubert varieties of flag manifolds. We define the combinatorial notion of interval pattern avoidance. For “reasonable” invariants P of singularities, we geometrically prove that this governs (1) the P-locus of a Schubert variety, and (2) which Schubert varieties are globally not P. The prototypical case is P=“singular”; classical pattern avoidance applies admirably for this choice [V. Lakshmibai, B. Sandhya, Criterion for smoothness of Schubert varieties in SL(n) B, Proc. Indian Acad. Sci. Math. Sci. 100 (1) (1990) 45–52, MR 91c:14061], but is insufficient in general. Our approach is analyzed for some common invariants, including Kazhdan–Lusztig polynomials, multiplicity, factoriality, and Gorensteinness, extending [A. Woo, A. Yong, When is a Schubert variety Gorenstein?, Adv. 
Math. 207 (1) (2006) 205–220, MR 2264071]; the description of the singular locus (which was independently proved by [S. Billey, G. Warrington, Maximal singular loci of Schubert varieties in SL(n) B, Trans. Amer. Math. Soc. 335 (2003) 3915–3945, MR 2004f:14071; A. Cortez, Singularites generiques et quasi-resolutions des varietes de Schubert pour le groupe lineaire, Adv. Math. 178 (2003) 396–445, MR 2004i:14056; C. Kassel, A. Lascoux, C. Reutenauer, The singular locus of a Schubert variety, J. Algebra 269 (2003) 74–108, MR 2005f:14096; L. Manivel, Le lieu singulier des varietes de Schubert, Int. Math. Res. Not. 16 (2001) 849–871, MR 2002i:14045]) is also thus reinterpreted. Our methods are amenable to computer experimentation, based on computing with Kazhdan–Lusztig ideals (a class of generalized determinantal ideals) using Macaulay 2. This feature is supplemented by a collection of open problems and conjectures.", "Kazhdan-Lusztig ideals, a family of generalized determinantal ideals investigated in [Woo-Yong '08], provide an explicit choice of coordinates and equations encoding a neighbourhood of a torus-fixed point of a Schubert variety on a type A flag variety. Our main result is a Grobner basis for these ideals. This provides a single geometric setting to transparently explain the naturality of pipe dreams on the Rothe diagram of a permutation, and their appearance in: * combinatorial formulas [Fomin-Kirillov '94] for Schubert and Grothendieck polynomials of [Lascoux-Schutzenberger '82]; * the equivariant K-theory specialization formula of [Buch-Rimanyi '04]; and * a positive combinatorial formula for multiplicities of Schubert varieties in good cases, including those for which the associated Kazhdan-Lusztig ideal is homogeneous under the standard grading. Our results generalize (with alternate proofs) [Knutson-Miller '05]'s Grobner basis theorem for Schubert determinantal ideals and their geometric interpretation of the monomial positivity of Schubert polynomials. We also complement recent work of [Knutson '08,'09] on degenerations of Kazhdan-Lusztig varieties in general Lie type, as well as work of [Goldin '01] on equivariant localization and of [Lakshmibai-Weyman '90], [Rosenthal-Zelevinsky '01], and [Krattenthaler '01] on Grassmannian multiplicity formulas.", "A device and method is provided for the qualitative and semi-quantitative determination of the presence of phenothiazine-type drugs in urine. The article comprises an ion exchange resin which denotes cations in a reaction between a color forming reagent and the phenothiazine-type drug to produce a permanent color change. The intensity of the color change is proportional to the dose concentration.", "Our main theorems provide a single geometric setting in which polynomial representatives for Schubert classes in the integral cohomology ring of the flag manifold are determined uniquely, and have positive coefficients for geometric reasons. This results in a geometric explanation for the naturality of Schubert polynomials and their associated combinatorics. Given a permutation w in S_n, we consider a determinantal ideal I_w whose generators are certain minors in the generic n x n matrix (filled with independent variables). 
Using multidegrees' as simple algebraic substitutes for torus-equivariant cohomology classes on vector spaces, our main theorems describe, for each ideal I_w: - variously graded multidegrees and Hilbert series in terms of ordinary and double Schubert and Grothendieck polynomials; - a Grobner basis consisting of minors in the generic n x n matrix; - the Stanley-Reisner complex of the initial ideal in terms of known combinatorial diagrams associated to permutations in S_n; and - a procedure inductive on weak Bruhat order for listing the facets of this complex, thereby generating the coefficients of Schubert polynomials by a positive recursion on combinatorial diagrams. We show that the initial ideal is Cohen-Macaulay, by identifying the Stanley-Reisner complex as a special kind of subword complex in S_n'', which we define generally for arbitrary Coxeter groups, and prove to be shellable by giving an explicit vertex decomposition. We also prove geometrically a general positivity statement for multidegrees of subschemes." ] }
1001.3437
2950376105
We study Hilbert-Samuel multiplicity for points of Schubert varieties in the complete flag variety, by Groebner degenerations of the Kazhdan-Lusztig ideal. In the covexillary case, we give a positive combinatorial rule for multiplicity by establishing (with a Groebner basis) a reduced and equidimensional limit whose Stanley-Reisner simplicial complex is homeomorphic to a shellable ball or sphere. We show that multiplicity counts the number of facets of this complex. We also obtain a formula for the Hilbert series of the local ring. In particular, our work gives a multiplicity rule for Grassmannian Schubert varieties, providing alternative statements and proofs to formulae of [Lakshmibai-Weyman '90], [Rosenthal-Zelevinsky '01], [Krattenthaler '01], [Kreiman-Lakshmibai '04] and [Woo-Yong '09]. We suggest extensions of our methodology to the general case.
As an application of @cite_40 , formulae for the multigraded Hilbert series of Kazhdan-Lusztig ideals were geometrically proved, where the multigrading comes from the torus action of the invertible diagonal matrices @math . While this theorem is used in a crucial way in the present paper, in general this Hilbert series does not directly help to compute multiplicity, because this torus action is not compatible with the dilation action. However, if a Kazhdan-Lusztig ideal happens to already be homogeneous with respect to the standard grading that assigns each variable degree one, then it is automatically also the ideal of its projectivized tangent cone, and one can deduce a formula for multiplicity from this Hilbert series (homogeneity is guaranteed if @math is @math -avoiding; see [pg. 25] of Knutson:frob ). Moreover, it was explained that, in the Grassmannian cases, one can always use a parabolic moving trick to reduce to the homogeneous case. This gives an easy solution to the Grassmannian multiplicity problem, using Kazhdan-Lusztig ideals. Unfortunately, even for covexillary Schubert varieties, parabolic moving is ineffective in some small examples. The approach of this paper avoids this issue by using more direct arguments.
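To spell out the step from a standard-graded Hilbert series to multiplicity (a standard commutative-algebra fact, stated here in our own notation rather than the paper's): if the Kazhdan-Lusztig ideal is homogeneous for the standard grading, the relevant ring $R$ is standard graded of Krull dimension $d$, and its Hilbert series can be written as
\[
H_R(t) \;=\; \sum_{k \ge 0} \dim_{\Bbbk} R_k \, t^k \;=\; \frac{K(t)}{(1-t)^d}, \qquad K(1) \neq 0,
\]
with $K(t)$ a polynomial. The Hilbert-Samuel multiplicity at the distinguished cone point is then
\[
e(R) \;=\; K(1),
\]
so specializing the multigraded Hilbert series of @cite_40 to the standard grading determines the multiplicity in this homogeneous case.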
{ "cite_N": [ "@cite_40" ], "mid": [ "2952032366" ], "abstract": [ "Kazhdan-Lusztig ideals, a family of generalized determinantal ideals investigated in [Woo-Yong '08], provide an explicit choice of coordinates and equations encoding a neighbourhood of a torus-fixed point of a Schubert variety on a type A flag variety. Our main result is a Grobner basis for these ideals. This provides a single geometric setting to transparently explain the naturality of pipe dreams on the Rothe diagram of a permutation, and their appearance in: * combinatorial formulas [Fomin-Kirillov '94] for Schubert and Grothendieck polynomials of [Lascoux-Schutzenberger '82]; * the equivariant K-theory specialization formula of [Buch-Rimanyi '04]; and * a positive combinatorial formula for multiplicities of Schubert varieties in good cases, including those for which the associated Kazhdan-Lusztig ideal is homogeneous under the standard grading. Our results generalize (with alternate proofs) [Knutson-Miller '05]'s Grobner basis theorem for Schubert determinantal ideals and their geometric interpretation of the monomial positivity of Schubert polynomials. We also complement recent work of [Knutson '08,'09] on degenerations of Kazhdan-Lusztig varieties in general Lie type, as well as work of [Goldin '01] on equivariant localization and of [Lakshmibai-Weyman '90], [Rosenthal-Zelevinsky '01], and [Krattenthaler '01] on Grassmannian multiplicity formulas." ] }
1001.3437
2950376105
We study Hilbert-Samuel multiplicity for points of Schubert varieties in the complete flag variety, by Groebner degenerations of the Kazhdan-Lusztig ideal. In the covexillary case, we give a positive combinatorial rule for multiplicity by establishing (with a Groebner basis) a reduced and equidimensional limit whose Stanley-Reisner simplicial complex is homeomorphic to a shellable ball or sphere. We show that multiplicity counts the number of facets of this complex. We also obtain a formula for the Hilbert series of the local ring. In particular, our work gives a multiplicity rule for Grassmannian Schubert varieties, providing alternative statements and proofs to formulae of [Lakshmibai-Weyman '90], [Rosenthal-Zelevinsky '01], [Krattenthaler '01], [Kreiman-Lakshmibai '04] and [Woo-Yong '09]. We suggest extensions of our methodology to the general case.
While this paper focuses on type @math , our results should have analogues for other Lie types. Recent papers of A. Knutson @cite_13 @cite_34 point the way towards coordinates and equations for Kazhdan-Lusztig varieties. His papers also explain how to iteratively degenerate these varieties, although the degenerations he considers are not directly applicable in general to the multiplicity problem, since they do not degenerate the projectivized tangent cone. Finally, we remark that a notion of covexillary elements for type @math has already been examined in a paper by S. Billey and T. K. Lam @cite_7 .
{ "cite_N": [ "@cite_34", "@cite_13", "@cite_7" ], "mid": [ "", "2952970674", "1524898583" ], "abstract": [ "", "We study the intersections of general Schubert varieties X_w with permuted big cells, and give an inductive degeneration of each such \"Schubert patch\" to a Stanley-Reisner scheme. Similar results had been known for Schubert patches in various types of Grassmannians. We maintain reducedness using the results of [Knutson 2007] on automatically reduced degenerations, or through more standard cohomology-vanishing arguments. The underlying simplicial complex of the Stanley-Reisner scheme is a subword complex, as introduced for slightly different purposes in [Knutson-Miller 2004], and is homeomorphic to a ball. This gives a new proof of the Andersen-Jantzen-Soergel Billey and Graham Willems formulae for restrictions of equivariant Schubert classes to fixed points.", "In analogy with the symmetric group, we define the vexillary elements in the hyperoctahedral group to be those for which the Stanley 1 symmetric function is a single Schur Q-function. We show that the vexillary elements can be again determined by pattern avoidance conditions. These results can be extended to include the root systems of types A, B, C, and D. Finally, we give an algorithm for multiplication of Schur Q -functions with a superfied Schur function and a method for determining the shape of a vexillary signed permutation using jeu de taquin." ] }
1001.3604
1614620876
A feature-oriented product line is a family of programs that share a common set of features. A feature implements a stakeholder's requirement, represents a design decision and configuration option and, when added to a program, involves the introduction of new structures, such as classes and methods, and the refinement of existing ones, such as extending methods. With feature-oriented decomposition, programs can be generated, solely on the basis of a user's selection of features, by the composition of the corresponding feature code. A key challenge of feature-oriented product line engineering is how to guarantee the correctness of an entire feature-oriented product line, i.e., of all of the member programs generated from different combinations of features. As the number of valid feature combinations grows progressively with the number of features, it is not feasible to check all individual programs. The only feasible approach is to have a type system check the entire code base of the feature-oriented product line. We have developed such a type system on the basis of a formal model of a feature-oriented Java-like language. We demonstrate that the type system ensures that every valid program of a feature-oriented product line is well-typed and that the type system is complete.
Our calculus has been inspired by several feature-oriented languages and tools, most notably AHEAD/Jak @cite_39 , FeatureC++ @cite_8 , FeatureHouse @cite_42 , and Prehofer's feature-oriented Java extension @cite_25 . Their key aim is to separate the implementation of software artifacts, e.g., classes and methods, from the definition of features. That is, classes and refinements are not annotated or declared to belong to a feature; there is no statement in the program text that explicitly defines a connection between code and features. Instead, the mapping of software artifacts to features is established via so-called containment hierarchies, which are basically directories containing software artifacts. The advantage of this approach is that a feature's implementation can include, besides classes in the form of Java files, also other supporting documents, e.g., documentation in the form of HTML files, grammar specifications in the form of JavaCC files, or build scripts and deployment descriptors in the form of XML files @cite_39 . To this end, feature composition merges not only classes with their refinements but also other artifacts, such as HTML or XML files, with their respective refinements @cite_5 @cite_42 .
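As a schematic illustration of what composing a base feature with a refining feature produces (the class and feature names are invented, and the refinement is emulated here with plain-Java inheritance rather than actual Jak or FeatureHouse syntax):

// Feature Base contributes the class Account (conceptually stored under features/Base/Account.java).
class Account {
    int balance = 0;
    void update(int amount) { balance += amount; }
}

// Feature Logging refines Account; the composed product behaves as if the refinement
// wrapped the original method, which we emulate here with a subclass and a super call
// (roughly what a Super call in a Jak-style refinement expresses).
class AccountWithLogging extends Account {
    @Override
    void update(int amount) {
        System.out.println("update(" + amount + ")"); // added by the Logging feature
        super.update(amount);                         // original behavior from Base
    }
}

public class ComposedProduct {
    public static void main(String[] args) {
        Account a = new AccountWithLogging(); // the product generated from {Base, Logging}
        a.update(42);
        System.out.println(a.balance);        // prints 42
    }
}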
{ "cite_N": [ "@cite_8", "@cite_42", "@cite_39", "@cite_5", "@cite_25" ], "mid": [ "2071631718", "2164024695", "", "1563884893", "1591471358" ], "abstract": [ "This paper presents FeatureC++, a novel language extension to C++ that supports Feature-Oriented Programming (FOP) and Aspect-Oriented Programming (AOP). Besides well-known concepts of FOP languages, FeatureC++ contributes several novel FOP language features, in particular multiple inheritance and templates for generic programming. Furthermore, FeatureC++ solves several problems regarding incremental software development by adopting AOP concepts. Starting our considerations on solving these problems, we give a summary of drawbacks and weaknesses of current FOP languages in expressing incremental refinements. Specifically, we outline five key problems and present three approaches to solve them: Multi Mixins, Aspectual Mixin Layers, and Aspectual Mixins that adopt AOP concepts in different ways. We use FeatureC++ as a representative FOP language to explain these three approaches. Finally, we present a case study to clarify the benefits of FeatureC++ and its AOP extensions.", "Superimposition is a composition technique that has been applied successfully in many areas of software development. Although superimposition is a general-purpose concept, it has been (re)invented and implemented individually for various kinds of software artifacts. We unify languages and tools that rely on superimposition by using the language-independent model of feature structure trees (FSTs). On the basis of the FST model, we propose a general approach to the composition of software artifacts written in different languages, Furthermore, we offer a supporting framework and tool chain, called FEATUREHOUSE. We use attribute grammars to automate the integration of additional languages, in particular, we have integrated Java, C#, C, Haskell, JavaCC, and XML. Several case studies demonstrate the practicality and scalability of our approach and reveal insights into the properties a language must have in order to be ready for superimposition.", "", "Step-wise refinement is a powerful paradigm for developing a complex program from a simple program by adding features incrementally where each feature is an increment in program functionality. Existing works focus on object-oriented representations such as Java or C++ artifacts. For this paradigm to be brought to the Web, refinement should be realised for XML representations. This paper elaborates on the notion of XML refinement by addressing what and how XML can be refined. These ideas are realised in the XAK language. A Struts application serves to illustrate the approach.", "We propose a new model for flexible composition of objects from a set of features. Features are similar to (abstract) subclasses, but only provide the core functionality of a (sub)class. Overwriting other methods is viewed as resolving feature interactions and is specified separately for two features at a time. This programming model allows to compose features (almost) freely in a way which generalizes inheritance and aggregation. For a set of n features, an exponential number of different feature combinations is possible, assuming a quadratic number of interaction resolutions. We present the feature model as an extension of Java and give two translations to Java, one via inheritance and the other via aggregation. 
We further discuss parameterized features, which work nicely with our feature model and can be translated into Pizza, an extension of Java." ] }
1001.3604
1614620876
A feature-oriented product line is a family of programs that share a common set of features. A feature implements a stakeholder's requirement, represents a design decision and configuration option and, when added to a program, involves the introduction of new structures, such as classes and methods, and the refinement of existing ones, such as extending methods. With feature-oriented decomposition, programs can be generated, solely on the basis of a user's selection of features, by the composition of the corresponding feature code. A key challenge of feature-oriented product line engineering is how to guarantee the correctness of an entire feature-oriented product line, i.e., of all of the member programs generated from different combinations of features. As the number of valid feature combinations grows progressively with the number of features, it is not feasible to check all individual programs. The only feasible approach is to have a type system check the entire code base of the feature-oriented product line. We have developed such a type system on the basis of a formal model of a feature-oriented Java-like language. We demonstrate that the type system ensures that every valid program of a feature-oriented product line is well-typed and that the type system is complete.
Another class of programming languages that provide mechanisms for the definition and extension of classes and class hierarchies includes, e.g., @cite_9 , @cite_11 , and @cite_24 . The difference from feature-oriented languages is that they provide explicit language constructs for aggregating the classes that belong to a feature, e.g., family classes, classboxes, or layers. This implies that non-code software artifacts cannot be included in a feature @cite_54 . Nevertheless, our calculus still models a subset of these languages, in particular class refinement.
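For contrast, a rough plain-Java approximation of such an explicit aggregation construct (the names are invented, and real family classes, classboxes, and layers have dedicated syntax and richer semantics): the classes belonging to a feature are nested inside one enclosing construct, so the grouping lives in the program text itself, which is also why non-code artifacts cannot be placed inside it.

// A layer-like grouping, approximated by an enclosing class with nested classes.
class BaseLayer {
    static class List { /* core list behavior */ }
    static class Iterator { /* core iteration behavior */ }
}

// A second feature extends the whole group at once, again as an enclosing class.
class SynchronizedLayer {
    static class List extends BaseLayer.List { /* adds locking */ }
    static class Iterator extends BaseLayer.Iterator { /* adds locking */ }
}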
{ "cite_N": [ "@cite_24", "@cite_9", "@cite_54", "@cite_11" ], "mid": [ "2125089485", "156948308", "2065646798", "2150101804" ], "abstract": [ "Unanticipated changes to complex software systems can introduce anomalies such as duplicated code, suboptimal inheritance relationships and a proliferation of run-time downcasts. Refactoring to eliminate these anomalies may not be an option, at least in certain stages of software evolution. Classboxes are modules that restrict the visibility of changes to selected clients only, thereby offering more freedom in the way unanticipated changes may be implemented, and thus reducing the need for convoluted design anomalies. In this paper we demonstrate how classboxes can be implemented in statically-typed languages like Java. We also present an extended case study of Swing, a Java GUI package built on top of AWT, and we document the ensuing anomalies that Swing introduces. We show how Classbox J, a prototype implementation of classboxes for Java, is used to provide a cleaner implementation of Swing using local refinement rather than subclassing.", "", "Two programming paradigms are gaining attention in the overlapping fields of software product lines (SPLs) and incremental software development (ISD). Feature-oriented programming (FOP) aims at large-scale compositional programming and feature modularity in SPLs using ISD. Aspect-oriented programming (AOP) focuses on the modularization of crosscutting concerns in complex software. Although feature modules, the main abstraction mechanisms of FOP, perform well in implementing large-scale software building blocks, they are incapable of modularizing certain kinds of crosscutting concerns. This weakness is exactly the strength of aspects, the main abstraction mechanisms of AOP. We contribute a systematic evaluation and comparison of FOP and AOP. It reveals that aspects and feature modules are complementary techniques. Consequently, we propose the symbiosis of FOP and AOP and aspectual feature modules (AFMs), a programming technique that integrates feature modules and aspects. We provide a set of tools that support implementing AFMs on top of Java and C++. We apply AFMs to a nontrivial case study demonstrating their practical applicability and to justify our design choices.", "We identify three programming language abstractions for the construction of reusable components: abstract type members, explicit selftypes, and modular mixin composition. Together, these abstractions enable us to transform an arbitrary assembly of static program parts with hard references between them into a system of reusable components. The transformation maintains the structure of the original system. We demonstrate this approach in two case studies, a subject observer framework and a compiler front-end." ] }
1001.3604
1614620876
A feature-oriented product line is a family of programs that share a common set of features. A feature implements a stakeholder's requirement, represents a design decision and configuration option and, when added to a program, involves the introduction of new structures, such as classes and methods, and the refinement of existing ones, such as extending methods. With feature-oriented decomposition, programs can be generated, solely on the basis of a user's selection of features, by the composition of the corresponding feature code. A key challenge of feature-oriented product line engineering is how to guarantee the correctness of an entire feature-oriented product line, i.e., of all of the member programs generated from different combinations of features. As the number of valid feature combinations grows progressively with the number of features, it is not feasible to check all individual programs. The only feasible approach is to have a type system check the entire code base of the feature-oriented product line. We have developed such a type system on the basis of a formal model of a feature-oriented Java-like language. We demonstrate that the type system ensures that every valid program of a feature-oriented product line is well-typed and that the type system is complete.
Similarly, related work on formalizing the key concepts underlying feature-oriented programming has not disassociated the concept of a feature from the level of code. In particular, calculi for mixins @cite_52 @cite_30 @cite_64 @cite_40 , traits @cite_14 , family polymorphism and virtual classes @cite_41 @cite_6 @cite_46 @cite_60 , path-dependent types @cite_11 @cite_16 , open classes @cite_57 , dependent classes @cite_23 , and nested inheritance @cite_34 either support only the refinement of single classes or expect the classes that form a semantically coherent unit (i.e., that belong to a feature) to be located in a physical module that is defined in the host programming language. For example, a virtual class is by definition an inner class of the enclosing object, and a classbox is a package that aggregates a set of related classes. Thus, our calculus differs from previous approaches in that it relies on contextual information that has been collected by the compiler, e.g., the features' composition order or the mapping of code to features.
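A minimal sketch of the kind of compiler-collected contextual information meant here (entirely our own illustration; the class and field names are invented, and the real calculus's bookkeeping is richer): a feature composition order plus a map recording which feature introduced which declaration, against which references can be checked.

import java.util.List;
import java.util.Map;

// Hypothetical compiler-side context: the program text contains no physical module
// construct; the tooling only records which feature introduced which declaration.
class FeatureContext {
    final List<String> compositionOrder;    // e.g., ["Base", "Logging", "Sync"]
    final Map<String, String> introducedBy; // e.g., "Account.update" -> "Base"

    FeatureContext(List<String> compositionOrder, Map<String, String> introducedBy) {
        this.compositionOrder = compositionOrder;
        this.introducedBy = introducedBy;
    }

    // In this toy rule, a reference made inside fromFeature to a declaration is allowed
    // only if the declaring feature occurs no later than fromFeature in the composition order.
    boolean referenceAllowed(String fromFeature, String declaration) {
        String declaringFeature = introducedBy.get(declaration);
        return declaringFeature != null
            && compositionOrder.indexOf(declaringFeature) <= compositionOrder.indexOf(fromFeature);
    }
}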
{ "cite_N": [ "@cite_30", "@cite_14", "@cite_64", "@cite_60", "@cite_41", "@cite_52", "@cite_6", "@cite_34", "@cite_57", "@cite_40", "@cite_23", "@cite_46", "@cite_16", "@cite_11" ], "mid": [ "1591849443", "2114980545", "2086957250", "2108556267", "2140602484", "2080648611", "2057779758", "2172232818", "2098884586", "1542995193", "", "1986108927", "1577182889", "2150101804" ], "abstract": [ "We develop an imperative calculus that provides a formal model for both single and mixin inheritance. By introducing classes and mixins as the basic object-oriented constructs in a λ-calculus with records and references, we obtain a system with an intuitive operational semantics. New classes are produced by applying mixins to superclasses. Objects are represented by records and produced by instantiating classes. The type system for objects uses only functional, record, and reference types, and there is a clean separation between subtyping and inheritance.", "In the context of statically typed, class-based languages, we investigate classes that can be extended with trait composition. A trait is a collection of methods without state; it can be viewed as an incomplete stateless class. Traits can be composed in any order, but only make sense when imported by a class that provides state variables and additional methods to disambiguate conflicting names arising between the imported traits. We introduce FeatherTrait Java (FTJ), a conservative extension of the simple lightweight class-based calculus Featherweight Java (FJ) with statically typed traits. In FTJ, classes can be built using traits as basic behavioral bricks; method conflicts between imported traits must be resolved explicitly by the user either by (i) aliasing or excluding method names in traits, or by (ii) overriding explicitly the conflicting methods in the class or in the trait itself. We present an operational semantics with a lookup algorithm, and a sound type system that guarantees that evaluating a well-typed expression never yields a message-not-understood run-time error nor gets the interpreter stuck. We give examples of the increased expressive power of the trait-based inheritance model. The resulting calculus appears to be a good starting point for a rigorous mathematical analysis of typed class-based languages featuring trait-based inheritance.", "In this paper we present Jam, an extension of the Java language supporting mixins, that is, parametric heir classes. A mixin declaration in Jam is similar to a Java heir class declaration, except that it does not extend a fixed parent class, but simply specifies the set of fields and methods a generic parent should provide. In this way, the same mixin can be instantiated on many parent classes, producing different heirs, thus avoiding code duplication and largely improving modularity and reuse. Moreover, as happens for classes and interfaces, mixin names are reference types, and all the classes obtained by instantiating the same mixin are considered subtypes of the corresponding type, and hence can be handled in a uniform way through the common interface. This possibility allows a programming style where different ingredients are \"mixed\" together in defining a class; this paradigm is somewhat similar to that based on multiple inheritance, but avoids its complication.The language has been designed with the main objective in mind to obtain, rather than a new theoretical language, a working and smooth extension of Java. 
That means, on the design side, that we have faced the challenging problem of integrating the Java overall principles and complex type system with this new notion; on the implementation side, it means that we have developed a Jam-to-Java translator which makes Jam sources executable on every Java Virtual Machine.", "Beginning with BETA, a range of programming language mechanisms such as virtual classes (class-valued attributes of objects) have been developed to allow inheritance in the presence of mutually dependent classes. This paper presents Tribe, a type system which generalises and simplifies other formalisms of such mechanisms, by treating issues which are inessential for soundness, such as the precise details of dispatch and field initialisation, as orthogonal to the core formalism. Tribe can support path types dependent simultaneously on both classes and objects, which is useful for writing library code, and ubiquitous access to an object's family, which offers family polymorphism without the need to drag around family arguments. Languages based on Tribe will be both simpler and more expressive than existing designs, while having a simpler type system, serving as a useful basis for future language designs.", "Family polymorphism has been proposed for object-oriented languages as a solution to supporting reusable yet type-safe mutually recursive classes. A key idea of family polymorphism is the notion of families, which are used to group mutually recursive classes. In the original proposal, due to the design decision that families are represented by objects, dependent types had to be introduced, resulting in a rather complex type system. In this article, we propose a simpler solution of lightweight family polymorphism, based on the idea that families are represented by classes rather than by objects. This change makes the type system significantly simpler without losing much expressive power of the language. Moreover, “family-polymorphic” methods now take a form of parametric methods; thus, it is easy to apply method type argument inference as in Java 5.0. To rigorously show that our approach is safe, we formalize the set of language features on top of Featherweight Java and prove that the type system is sound. An algorithm for type inference for family-polymorphic method invocations is also formalized and proved to be correct. Finally, a formal translation by erasure to Featherweight Java is presented; it is proved to preserve typing and execution results, showing that our new language features can be implemented in Java by simply extending the compiler.", "While class-based object-oriented programming languages provide a flexible mechanism for re-using and managing related pieces of code, they typically lack linguistic facilities for specifying a uniform extension of many classes with one set of fields and methods. As a result, programmers are unable to express certain abstractions over classes.In this paper we develop a model of class-to-class functions that we refer to as mixins. A mixin function maps a class to an extended class by adding or overriding fields and methods. Programming with mixins is similar to programming with single inheritance classes, but mixins more directly encourage programming to interfaces.The paper develops these ideas within the context of Java. The results are 1. an intuitive model of an essential Java subset; 2. an extension that explains and models mixins; and 3. 
type soundness theorems for these languages.", "Virtual classes are class-valued attributes of objects. Like virtual methods, virtual classes are defined in an object's class and may be redefined within subclasses. They resemble inner classes, which are also defined within a class, but virtual classes are accessed through object instances, not as static components of a class. When used as types, virtual classes depend upon object identity -- each object instance introduces a new family of virtual class types. Virtual classes support large-scale program composition techniques, including higher-order hierarchies and family polymorphism. The original definition of virtual classes in BETA left open the question of static type safety, since some type errors were not caught until runtime. Later the languages Caesar and gbeta have used a more strict static analysis in order to ensure static type safety. However, the existence of a sound, statically typed model for virtual classes has been a long-standing open question. This paper presents a virtual class calculus, VC, that captures the essence of virtual classes in these full-fledged programming languages. The key contributions of the paper are a formalization of the dynamic and static semantics of VC and a proof of the soundness of VC.", "Inheritance is a useful mechanism for factoring and reusing code. However, it has limitations for building extensible systems. We describe nested inheritance, a mechanism that addresses some of the limitations of ordinary inheritance and other code reuse mechanisms. Using our experience with an extensible compiler framework, we show how nested inheritance can be used to construct highly extensible software frameworks. The essential aspects of nested inheritance are formalized in a simple object-oriented language with an operational semantics and type system. The type system of this language is sound, so no run-time type checking is required to implement it and no run-time type errors can occur. We describe our implementation of nested inheritance as an unobtrusive extension of the Java language, called Jx. Our prototype implementation translates Jx code to ordinary Java code, without duplicating inherited code.", "MultiJava is a conservative extension of the Java programming language that adds symmetric multiple dispatch and open classes. Among other benefits, multiple dispatch provides a solution to the binary method problem. Open classes provide a solution to the extensibility problem of object-oriented programming languages, allowing the modular addition of both new types and new operations to an existing type hierarchy. This article illustrates and motivates the design of MultiJava and describes its modular static typechecking and modular compilation strategies. Although MultiJava extends Java, the key ideas of the language design are applicable to other object-oriented languages, such as C# and Cpp, and even, with some modifications, to functional languages such as ML.This article also discusses the variety of application domains in which MultiJava has been successfully used by others, including pervasive computing, graphical user interfaces, and compilers. MultiJava allows users to express desired programming idioms in a way that is declarative and supports static typechecking, in contrast to the tedious and type-unsafe workarounds required in Java. 
MultiJava also provides opportunities for new kinds of extensibility that are not easily available in Java.", "A programming construct mixin was invented to implement uniform extensions and modifications to classes. Although mixin-based programming has been extensively studied both on the methodological and theoretical point of views, relatively few attempts have been made on designing real programming languages that support mixins. In this paper, we address the issue of how to introduce a feature of declaring a mixin that may also be used as a type to nominally typed object-oriented languages like Java. We propose a programming language McJava, an extension of Java with mixin-types. To study type-soundness of McJava, we have formulated the core of McJava with typing and reduction rules, and proved its type-soundness. We also describe a compilation strategy of McJava that translates McJava programs to Java programs thus eventually making it runnable on standard Java virtual machines.", "", "In mainstream OO languages, inheritance can be used to add new methods, or to override existing methods. Virtual classes and feature oriented programming are techniques which extend the mechanism of inheritance so that it is possible to refine nested classes as well. These techniques are attractive for programming in the large, because inheritance becomes a tool for manipulating whole class hierarchies rather than individual classes. Nevertheless, it has proved difficult to design static type systems for virtual classes, because virtual classes introduce dependent types. The compile-time type of an expression may depend on the run-time values of objects in that expression.We present a formal object calculus which implements virtual classes in a type-safe manner. Our type system uses a novel technique based on prototypes, which blur the distinction between compile-time and run-time. At run-time, prototypes act as objects, and they can be used in ordinary computations. At compile-time, they act as types. Prototypes are similar in power to dependent types, and subtyping is shown to be a form of partial evaluation. We prove that prototypes are type-safe but undecidable, and briefly outline a decidable semi-algorithm for dealing with them.", "We design and study vObj, a calculus and dependent type system for objects and classes which can have types as members. Type members can be aliases, abstract types, or new types. The type system can model the essential concepts of JAVA’s inner classes as well as virtual types and family polymorphism found in BETA or GBETA. It can also model most concepts of SML-style module systems, including sharing constraints and higher-order functors, but excluding applicative functors. The type system can thus be used as a basis for unifying concepts that so far existed in parallel in advanced object systems and in module systems. The paper presents results on confluence of the calculus, soundness of the type system, and undecidability of type checking.", "We identify three programming language abstractions for the construction of reusable components: abstract type members, explicit selftypes, and modular mixin composition. Together, these abstractions enable us to transform an arbitrary assembly of static program parts with hard references between them into a system of reusable components. The transformation maintains the structure of the original system. We demonstrate this approach in two case studies, a subject observer framework and a compiler front-end." ] }
1001.3604
1614620876
A feature-oriented product line is a family of programs that share a common set of features. A feature implements a stakeholder's requirement, represents a design decision and configuration option and, when added to a program, involves the introduction of new structures, such as classes and methods, and the refinement of existing ones, such as extending methods. With feature-oriented decomposition, programs can be generated, solely on the basis of a user's selection of features, by the composition of the corresponding feature code. A key challenge of feature-oriented product line engineering is how to guarantee the correctness of an entire feature-oriented product line, i.e., of all of the member programs generated from different combinations of features. As the number of valid feature combinations grows progressively with the number of features, it is not feasible to check all individual programs. The only feasible approach is to have a type system check the entire code base of the feature-oriented product line. We have developed such a type system on the basis of a formal model of a feature-oriented Java-like language. We demonstrate that the type system ensures that every valid program of a feature-oriented product line is well-typed and that the type system is complete.
A different line of research aims at language-independent reasoning about features @cite_39 @cite_51 @cite_42 @cite_31 . The calculus gDeep is most closely related to ours, since it provides a type system for feature-oriented languages that is language-independent @cite_58 . The idea is that the recursive process of merging software artifacts, when composing hierarchically structured features, is very similar for different host languages, e.g., Java, C#, and XML. The calculus describes formally how feature composition is performed and what type constraints have to be satisfied. In contrast, our calculus does not aspire to be language-independent, although its key concepts can certainly be used with different languages. The advantage of our calculus is that its type system can be used to check whether terms of the host language (Java or FJ) violate the principles of feature orientation, e.g., whether methods refer to classes that have been added by other features. Due to its language independence, gDeep does not have enough information to perform such checks.
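To make the recursive-merging idea concrete, here is a small toy model of superimposing feature structure trees in plain Java (our own sketch, not the gDeep or FeatureHouse implementation; a real tool applies language-specific rules at the leaves instead of the last-writer-wins rule used here):

import java.util.LinkedHashMap;
import java.util.Map;

// Toy feature structure tree: inner nodes (packages, classes) are merged by name,
// leaves (method bodies, text fragments) carry content.
class FstNode {
    final String name;
    final String content; // non-null only for leaves
    final Map<String, FstNode> children = new LinkedHashMap<>();

    FstNode(String name, String content) {
        this.name = name;
        this.content = content;
    }

    // Superimpose `other` onto this node: children with matching names are merged
    // recursively, unmatched children are copied over, and for leaves the refining
    // feature's content simply replaces the original in this toy version.
    FstNode superimpose(FstNode other) {
        FstNode merged = new FstNode(name, other.content != null ? other.content : content);
        merged.children.putAll(children);
        for (FstNode child : other.children.values()) {
            FstNode existing = merged.children.get(child.name);
            merged.children.put(child.name, existing == null ? child : existing.superimpose(child));
        }
        return merged;
    }
}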
{ "cite_N": [ "@cite_42", "@cite_39", "@cite_31", "@cite_58", "@cite_51" ], "mid": [ "2164024695", "", "1625440377", "198251016", "2000551127" ], "abstract": [ "Superimposition is a composition technique that has been applied successfully in many areas of software development. Although superimposition is a general-purpose concept, it has been (re)invented and implemented individually for various kinds of software artifacts. We unify languages and tools that rely on superimposition by using the language-independent model of feature structure trees (FSTs). On the basis of the FST model, we propose a general approach to the composition of software artifacts written in different languages, Furthermore, we offer a supporting framework and tool chain, called FEATUREHOUSE. We use attribute grammars to automate the integration of additional languages, in particular, we have integrated Java, C#, C, Haskell, JavaCC, and XML. Several case studies demonstrate the practicality and scalability of our approach and reveal insights into the properties a language must have in order to be ready for superimposition.", "", "A software product line (SPL) is a family of related program variants in a well-defined domain, generated from a set of features. A fundamental difference from classical application development is that engineers develop not a single program but a whole family with hundreds to millions of variants. This makes it infeasible to separately check every distinct variant for errors. Still engineers want guarantees on the entire SPL. A further challenge is that an SPL may contain artifacts in different languages (code, documentation, models, etc.) that should be checked. In this paper, we present CIDE, an SPL development tool that guarantees syntactic correctness for all variants of an SPL. We show how CIDE’s underlying mechanism abstracts from textual representation and we generalize it to arbitrary languages. Furthermore, we automate the generation of plug-ins for additional languages from annotated grammars. To demonstrate the language-independent capabilities, we applied CIDE to a series of case studies with artifacts written in Java, C++, C, Haskell, ANTLR, HTML, and XML.", "The goal of Feature-oriented Programming (FOP) is to modularize software systems in terms of features. A feature is an increment in functionality and refines the content of other features. A software system typically consists of a collection of different kinds of software artifacts, e.g. source code, build scripts, documentation, design documents, and performance profiles. We and others have noticed a principle of uniformity, which dictates that when composing features, all software artifacts can actually be refined in a uniform way, regardless of what they represent. Previous work did not take advantage of this uniformity; each kind of software artifact used a separate tool for composition, developed from scratch. We present gDEEP, a core calculus for features and feature composition which is language-independent; it can be used to compose features containing any kinds of artifact. This calculus allows us to define general algorithms for feature refinement, composition, and validation. We provide the formal syntax, operational semantics, and type system of gDEEP and explain how different kinds of software artifacts, including Java, Bali, and XML files, can be represented. 
A prototype tool and three case studies demonstrate the practicality of our approach.", "Aspect-oriented programming is a promising paradigm that challenges traditional notions of program modularity. Despite its increasing acceptance, aspects have been documented to suffer limited reuse, hard to predict behavior, and difficult modular reasoning. We develop an algebraic model that relates aspects to program transformations and uncovers aspect composition as a significant source of the problems mentioned. We propose an alternative model of composition that eliminates these problems, preserves the power of aspects, and lays an algebraic foundation on which to build and understand AOP tools." ] }
1001.3604
1614620876
A feature-oriented product line is a family of programs that share a common set of features. A feature implements a stakeholder's requirement, represents a design decision and configuration option and, when added to a program, involves the introduction of new structures, such as classes and methods, and the refinement of existing ones, such as extending methods. With feature-oriented decomposition, programs can be generated, solely on the basis of a user's selection of features, by the composition of the corresponding feature code. A key challenge of feature-oriented product line engineering is how to guarantee the correctness of an entire feature-oriented product line, i.e., of all of the member programs generated from different combinations of features. As the number of valid feature combinations grows progressively with the number of features, it is not feasible to check all individual programs. The only feasible approach is to have a type system check the entire code base of the feature-oriented product line. We have developed such a type system on the basis of a formal model of a feature-oriented Java-like language. We demonstrate that the type system ensures that every valid program of a feature-oriented product line is well-typed and that the type system is complete.
Our work on type checking feature-oriented product lines was motivated by the work of @cite_3 . They suggested the development of a type system for feature-oriented product lines that does not check all individual programs but the individual feature implementations. They have implemented an (incomplete) type system and, in a number of case studies on real product lines, they found numerous hidden errors using their type rules. Nevertheless, the implementation of their type system is ad-hoc in the sense that it is described only informally, and they do not provide a correctness and completeness proof. Our type system has been inspired by their work and we were able to provide a formalization and a proof of type safety.
{ "cite_N": [ "@cite_3" ], "mid": [ "1972612110" ], "abstract": [ "Programs of a software product line can be synthesized by composing modules that implement features. Besides high-level domain constraints that govern the compatibility of features, there are also low-level implementation constraints: a feature module can reference elements that are defined in other feature modules. Safe composition is the guarantee that all programs in a product line are type safe: i.e., absent of references to undefined elements (such as classes, methods, and variables). We show how safe composition properties can be verified for AHEAD product lines using feature models and SAT solvers." ] }
1001.3604
1614620876
A feature-oriented product line is a family of programs that share a common set of features. A feature implements a stakeholder's requirement, represents a design decision and configuration option and, when added to a program, involves the introduction of new structures, such as classes and methods, and the refinement of existing ones, such as extending methods. With feature-oriented decomposition, programs can be generated, solely on the basis of a user's selection of features, by the composition of the corresponding feature code. A key challenge of feature-oriented product line engineering is how to guarantee the correctness of an entire feature-oriented product line, i.e., of all of the member programs generated from different combinations of features. As the number of valid feature combinations grows progressively with the number of features, it is not feasible to check all individual programs. The only feasible approach is to have a type system check the entire code base of the feature-oriented product line. We have developed such a type system on the basis of a formal model of a feature-oriented Java-like language. We demonstrate that the type system ensures that every valid program of a feature-oriented product line is well-typed and that the type system is complete.
It has been shown that a flattening semantics and a direct semantics are equivalent @cite_18 . An advantage of a "direct" semantics is that it allows type checking and error reporting at a finer grain. In LFJ, all feature modules are composed and a single propositional formula is generated and tested for satisfiability; if the formula is not satisfiable, it is difficult to identify precisely the point of failure. In our calculus, the individual type rules consult the feature model and can point directly to the point of failure.
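To make the satisfiability-based check concrete, the following sketch reduces a toy safe-composition question to SAT in the spirit of @cite_3 . The feature names, the constraints, and the use of the Z3 Python bindings (z3-solver) are illustrative assumptions; this is not the encoding generated by LFJ or by the AHEAD tool suite.

```python
from z3 import Bool, Implies, And, Not, Solver, sat

Base, Trans, Log = Bool('Base'), Bool('Trans'), Bool('Log')

# Feature model: Base is mandatory; Trans and Log require Base.
feature_model = And(Base, Implies(Trans, Base), Implies(Log, Base))

# Implementation constraint: code in feature Log references a method that is
# introduced only by feature Trans, so every product with Log needs Trans.
reference_ok = Implies(Log, Trans)

# Safe composition fails iff some valid product violates the constraint.
s = Solver()
s.add(feature_model, Not(reference_ok))
if s.check() == sat:
    print("ill-typed product possible:", s.model())   # e.g. Log without Trans
else:
    print("every valid product satisfies the reference constraint")
```

If the solver reports sat, the model is a valid feature selection whose product contains a dangling reference; a single whole-line formula of this kind yields a yes/no verdict, which is why pinpointing the offending reference is harder than with type rules that consult the feature model individually.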
{ "cite_N": [ "@cite_18" ], "mid": [ "2106052980" ], "abstract": [ "We present FJig, a simple calculus where basic building blocks are classes in the style of Featherweight Java, declaring fields, methods and one constructor. However, inheritance has been generalized to the much more flexible notion originally proposed in Bracha's Jigsaw framework. That is, classes play also the role of modules, that can be composed by a rich set of operators, all of which can be expressed by a minimal core. We keep the nominal approach of Java-like languages, that is, types are class names. However, a class is not necessarily a structural subtype of any class used in its defining expression. The calculus allows the encoding of a large variety of different mechanisms for software composition in class-based languages, including standard inheritance, mixin classes, traits and hiding. Hence, FJig can be used as a unifying framework for analyzing existing mechanisms and proposing new extensions. We provide two different semantics of an FJig program: flattening and direct semantics. The difference is analogous to that between two intuitive models to understand inheritance: the former where inherited methods are copied into heir classes, and the latter where member lookup is performed by ascending the inheritance chain. Here we address equivalence of these two views for a more sophisticated composition mechanism." ] }
1001.3604
1614620876
A feature-oriented product line is a family of programs that share a common set of features. A feature implements a stakeholder's requirement, represents a design decision and configuration option and, when added to a program, involves the introduction of new structures, such as classes and methods, and the refinement of existing ones, such as extending methods. With feature-oriented decomposition, programs can be generated, solely on the basis of a user's selection of features, by the composition of the corresponding feature code. A key challenge of feature-oriented product line engineering is how to guarantee the correctness of an entire feature-oriented product line, i.e., of all of the member programs generated from different combinations of features. As the number of valid feature combinations grows progressively with the number of features, it is not feasible to check all individual programs. The only feasible approach is to have a type system check the entire code base of the feature-oriented product line. We have developed such a type system on the basis of a formal model of a feature-oriented Java-like language. We demonstrate that the type system ensures that every valid program of a feature-oriented product line is well-typed and that the type system is complete.
Even prior to this line of work, an automatic verification procedure was presented for ensuring that no ill-structured UML model template instance will be generated from a valid feature selection @cite_53 . That is, the authors type check product lines that consist not of Java programs but of UML models. They use OCL (Object Constraint Language) constraints to express and implement a type system for model composition. In this sense, their aim is very similar to that of the safe-composition work discussed above, but limited to model artifacts, although they have proposed to generalize their approach to programming languages.
{ "cite_N": [ "@cite_53" ], "mid": [ "2112000202" ], "abstract": [ "Feature-based model templates have been recently proposed as a approach for modeling software product lines. Unfortunately, templates are notoriously prone to errors that may go unnoticed for long time. This is because such an error is usually exhibited for some configurations only, and testing all configurations is typically not feasible in practice. In this paper, we present an automated verification procedure for ensuring that no ill-structured template instance will be generated from a correct configuration. We present the formal underpinnings of our proposed approach, analyze its complexity, and demonstrate its practical feasibility through a prototype implementation." ] }
1001.3604
1614620876
A feature-oriented product line is a family of programs that share a common set of features. A feature implements a stakeholder's requirement, represents a design decision and configuration option and, when added to a program, involves the introduction of new structures, such as classes and methods, and the refinement of existing ones, such as extending methods. With feature-oriented decomposition, programs can be generated, solely on the basis of a user's selection of features, by the composition of the corresponding feature code. A key challenge of feature-oriented product line engineering is how to guarantee the correctness of an entire feature-oriented product line, i.e., of all of the member programs generated from different combinations of features. As the number of valid feature combinations grows progressively with the number of features, it is not feasible to check all individual programs. The only feasible approach is to have a type system check the entire code base of the feature-oriented product line. We have developed such a type system on the basis of a formal model of a feature-oriented Java-like language. We demonstrate that the type system ensures that every valid program of a feature-oriented product line is well-typed and that the type system is complete.
Kästner et al. have implemented a tool, called CIDE, that allows a developer to decompose a software system into features via annotations @cite_37 . In contrast to other feature-oriented languages and tools, the link between code and features is established via annotations. If a user selects a set of features, all code that is annotated (using background colors) with features that are not present in the selection is removed. Kästner et al. have developed a formal calculus and a set of type rules that ensure that only well-typed programs can be generated from a valid feature selection @cite_63 . For example, if a method declaration is removed, the remaining code must not contain calls to this method. CIDE's type rules are related to ours but, so far, mutually exclusive features are not supported in CIDE. In some sense, our approach and CIDE represent two sides of the same coin: the former aims at the composition of feature modules, the latter at the annotation of feature-related code.
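A minimal sketch of the annotative view follows: every program element carries a presence condition, and a reference is safe if no valid feature selection keeps the referencing element while removing the referenced one. The example program elements and the use of the z3-solver package are assumptions for illustration only; this is not CIDE's implementation.

```python
from z3 import Bools, Implies, And, Not, Solver, sat

Base, Undo = Bools('Base Undo')
feature_model = And(Base, Implies(Undo, Base))

# Presence condition of each annotated element (hypothetical example program).
pc = {
    'Buffer.setText': Base,   # part of every product
    'Buffer.backup':  Undo,   # removed whenever Undo is deselected
}

# References between elements: (referencing element, referenced element).
calls = [('Buffer.setText', 'Buffer.backup')]

s = Solver()
for caller, callee in calls:
    s.push()
    # Dangling reference: some valid product keeps the caller but not the callee.
    s.add(feature_model, pc[caller], Not(pc[callee]))
    if s.check() == sat:
        print(caller, "may call the removed", callee, "in product", s.model())
    s.pop()
```

Checking one implication per reference mirrors the finer-grained error reporting discussed above: an unsatisfiable query certifies the reference, and a satisfying model exhibits a concrete problematic product.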
{ "cite_N": [ "@cite_37", "@cite_63" ], "mid": [ "2171002355", "2117352154" ], "abstract": [ "Building software product lines (SPLs) with features is a challenging task. Many SPL implementations support features with coarse granularity - e.g., the ability to add and wrap entire methods. However, fine-grained extensions, like adding a statement in the middle of a method, either require intricate workarounds or obfuscate the base code with annotations. Though many SPLs can and have been implemented with the coarse granularity of existing approaches, fine-grained extensions are essential when extracting features from legacy applications. Furthermore, also some existing SPLs could benefit from fine-grained extensions to reduce code replication or improve readability. In this paper, we analyze the effects of feature granularity in SPLs and present a tool, called Colored IDE (CIDE), that allows features to implement coarse-grained and fine-grained extensions in a concise way. In two case studies, we show how CIDE simplifies SPL development compared to traditional approaches.", "A software product line (SPL) is an efficient means to generate a family of program variants for a domain from a single code base. However, because of the potentially high number of possible program variants, it is difficult to test all variants and ensure properties like type-safety for the entire SPL. While first steps to type-check an entire SPL have been taken, they are informal and incomplete. In this paper, we extend the Featherweight Java (FJ) calculus with feature annotations to be used for SPLs. By extending FJ's type system, we guarantee that - given a well-typed SPL - all possible program variants are well- typed as well. We show how results from this formalization reflect and help implementing our own language-independent SPL tool CIDE." ] }
1001.2100
2126807233
We present a first-order theory of (finite) sequences with integer elements, Presburger arithmetic, and regularity constraints, which can model significant properties of data structures such as lists and queues. We give a decision procedure for the quantifier-free fragment, based on an encoding into the first-order theory of concatenation; the procedure has PSPACE complexity. The quantifier-free fragment of the theory of sequences can express properties such as sortedness and injectivity, as well as Boolean combinations of periodic and arithmetic facts relating the elements of the sequence and their positions (e.g., "for all even i's, the element at position i has value i + 3 or 2i"). The resulting expressive power is orthogonal to that of the most expressive decidable logics for arrays. Some examples demonstrate that the fragment is also suitable to reason about sequence-manipulating programs within the standard framework of axiomatic semantics.
Pioneering efforts on automated program verification focused on very simple data types --- in most cases just scalar variables --- as the inherent difficulties were already formidable. As verification techniques progressed and matured, more complex data types were considered, such as lists (usually Lisp), arrays, maps, and pointers, up to complex dynamic data structures. Arrays in particular received a lot of attention, both for historical reasons (programming languages have offered them natively for decades) and because they often serve as the basis for implementing more complex data structures. More generally, a renewed interest in developing decision procedures for new theories and in integrating existing ones has blossomed over the last few years. A review of this staggering amount of work is beyond the scope of this paper; for a partial account and further references we refer the reader to, e.g., @cite_6 @cite_17 (and @cite_32 @cite_10 for applications). In this section, we review approaches that are most similar to ours, in particular those that yield decidable logics that can be compared directly to our theory of sequences (see Section ). This is the case for several of the works on the theory of arrays and extensions thereof.
{ "cite_N": [ "@cite_10", "@cite_32", "@cite_6", "@cite_17" ], "mid": [ "2110318050", "2039468209", "2147650421", "1497884533" ], "abstract": [ "Many automatic testing, analysis, and verification techniques for programs can be effectively reduced to a constraint generation phase followed by a constraint-solving phase. This separation of concerns often leads to more effective and maintainable tools. The increasing efficiency of off-the-shelf constraint solvers makes this approach even more compelling. However, there are few effective and sufficiently expressive off-the-shelf solvers for string constraints generated by analysis techniques for string-manipulating programs. We designed and implemented H ampi , a solver for string constraints over fixed-size string variables. H ampi constraints express membership in regular languages and fixed-size context-free languages. H ampi constraints may contain context-free-language definitions, regular language definitions and operations, and the membership predicate. Given a set of constraints, H ampi outputs a string that satisfies all the constraints, or reports that the constraints are unsatisfiable. H ampi is expressive and efficient, and can be successfully applied to testing and analysis of real programs. Our experiments use H ampi in: static and dynamic analyses for finding SQL injection vulnerabilities in Web applications; automated bug finding in C programs using systematic testing; and compare H ampi with another string solver. H ampi's source code, documentation, and the experimental data are available at http: people.csail.mit.edu akiezun hampi.", "Reasoning about string variables, in particular program inputs, is an important aspect of many program analyses and testing frameworks. Program inputs invariably arrive as strings, and are often manipulated using high-level string operations such as equality checks, regular expression matching, and string concatenation. It is difficult to reason about these operations because they are not well-integrated into current constraint solvers. We present a decision procedure that solves systems of equations over regular language variables. Given such a system of constraints, our algorithm finds satisfying assignments for the variables in the system. We define this problem formally and render a mechanized correctness proof of the core of the algorithm. We evaluate its scalability and practical utility by applying it to the problem of automatically finding inputs that cause SQL injection vulnerabilities.", "We present the first verification of full functional correctness for a range of linked data structure implementations, including mutable lists, trees, graphs, and hash tables. Specifically, we present the use of the Jahob verification system to verify formal specifications, written in classical higher-order logic, that completely capture the desired behavior of the Java data structure implementations (with the exception of properties involving execution time and or memory consumption). Given that the desired correctness properties include intractable constructs such as quantifiers, transitive closure, and lambda abstraction, it is a challenge to successfully prove the generated verification conditions. 
Our Jahob verification system uses integrated reasoning to split each verification condition into a conjunction of simpler subformulas, then apply a diverse collection of specialized decision procedures, first-order theorem provers, and, in the worst case, interactive theorem provers to prove each subformula. Techniques such as replacing complex subformulas with stronger but simpler alternatives, exploiting structure inherently present in the verification conditions, and, when necessary, inserting verified lemmas and proof hints into the imperative source code make it possible to seamlessly integrate all of the specialized decision procedures and theorem provers into a single powerful integrated reasoning system. By appropriately applying multiple proof techniques to discharge different subformulas, this reasoning system can effectively prove the complex and challenging verification conditions that arise in this context.", "Techniques such as verification condition generation, predicate abstraction, and expressive type systems reduce software verification to proving formulas in expressive logics. Programs and their specifications often make use of data structures such as sets, multisets, algebraic data types, or graphs. Consequently, formulas generated from verification also involve such data structures. To automate the proofs of such formulas we propose a logic (a “calculus”) of such data structures. We build the calculus by starting from decidable logics of individual data structures, and connecting them through functions and sets, in ways that go beyond the frameworks such as Nelson-Oppen. The result are new decidable logics that can simultaneously specify properties of different kinds of data structures and overcome the limitations of the individual logics. Several of our decidable logics include abstraction functions that map a data structure into its more abstract view (a tree into a multiset, a multiset into a set), into a numerical quantity (the size or the height), or into the truth value of a candidate data structure invariant (sortedness, or the heap property). For algebraic data types, we identify an asymptotic many-to-one condition on the abstraction function that guarantees the existence of a decision procedure. In addition to the combination based on abstraction functions, we can combine multiple data structure theories if they all reduce to the same data structure logic. As an instance of this approach, we describe a decidable logic whose formulas are propositional combinations of formulas in: weak monadic second-order logic of two successors, two-variable logic with counting, multiset algebra with Presburger arithmetic, the Bernays-Schonfinkel-Ramsey class of first-order logic, and the logic of algebraic data types with the set content function. The subformulas in this combination can share common variables that refer to sets of objects along with the common set algebra operations. Such sound and complete combination is possible because the relations on sets definable in the component logics are all expressible in Boolean Algebra with Presburger Arithmetic. Presburger arithmetic and its new extensions play an important role in our decidability results. In several cases, when we combine logics that belong to NP, we can prove the satisfiability for the combined logic is still in NP." ] }
1001.2100
2126807233
We present a first-order theory of (finite) sequences with integer elements, Presburger arithmetic, and regularity constraints, which can model significant properties of data structures such as lists and queues. We give a decision procedure for the quantifier-free fragment, based on an encoding into the first-order theory of concatenation; the procedure has PSPACE complexity. The quantifier-free fragment of the theory of sequences can express properties such as sortedness and injectivity, as well as Boolean combinations of periodic and arithmetic facts relating the elements of the sequence and their positions (e.g., "for all even i's, the element at position i has value i + 3 or 2i"). The resulting expressive power is orthogonal to that of the most expressive decidable logics for arrays. Some examples demonstrate that the fragment is also suitable to reason about sequence-manipulating programs within the standard framework of axiomatic semantics.
McCarthy initiated the research on formal reasoning about arrays @cite_15 . His theory of arrays axiomatizes the basic access operations of reading and writing for quantifier-free formulas, without arithmetic or extensionality (i.e., the property that if all elements of two arrays are equal then the arrays themselves are equal). McCarthy's work has been the kernel of virtually every subsequent theory of arrays: most works on (automated) reasoning about arrays either extend McCarthy's theory with more complex (decidable) properties or efficiently automate reasoning within an existing theory.
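The two read-over-write axioms can be stated and machine-checked in a few lines; the sketch below uses the Z3 Python bindings purely as a convenient vehicle (an assumption of this illustration, obviously not part of McCarthy's original development).

```python
from z3 import Array, IntSort, Ints, Select, Store, Implies, prove

a = Array('a', IntSort(), IntSort())
i, j, v = Ints('i j v')

# read-over-write, same index:      select(store(a, i, v), i) = v
prove(Select(Store(a, i, v), i) == v)

# read-over-write, distinct index:  i != j -> select(store(a, i, v), j) = select(a, j)
prove(Implies(i != j, Select(Store(a, i, v), j) == Select(a, j)))
```

Both calls print "proved", since Z3's array theory builds in exactly these axioms.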
{ "cite_N": [ "@cite_15" ], "mid": [ "2124141583" ], "abstract": [ "In this paper I shall discuss the prospects for a mathematical science of computation. In a mathematical science, it is possible to deduce from the basic assumptions, the important properties of the entities treated by the science. Thus, from Newton’s law of gravitation and his laws of motion, one can deduce that the planetary orbits obey Kepler’s laws." ] }
1001.2100
2126807233
We present a first-order theory of (finite) sequences with integer elements, Presburger arithmetic, and regularity constraints, which can model significant properties of data structures such as lists and queues. We give a decision procedure for the quantifier-free fragment, based on an encoding into the first-order theory of concatenation; the procedure has PSPACE complexity. The quantifier-free fragment of the theory of sequences can express properties such as sortedness and injectivity, as well as Boolean combinations of periodic and arithmetic facts relating the elements of the sequence and their positions (e.g., "for all even i's, the element at position i has value i + 3 or 2i"). The resulting expressive power is orthogonal to that of the most expressive decidable logics for arrays. Some examples demonstrate that the fragment is also suitable to reason about sequence-manipulating programs within the standard framework of axiomatic semantics.
Rapid advances in automated theorem proving over the last few years have paved the way for efficient implementations of the theory of arrays (usually with extensionality). These implementations use a variety of techniques, such as SMT solving @cite_11 @cite_3 @cite_34 @cite_21 @cite_5 , saturation theorem proving @cite_20 @cite_30 , and abstraction @cite_39 @cite_41 @cite_28 . Automated invariant inference is an important application of these decision procedures, which has originated a specialized line of work @cite_22 @cite_38 @cite_29 .
{ "cite_N": [ "@cite_30", "@cite_38", "@cite_22", "@cite_41", "@cite_28", "@cite_29", "@cite_21", "@cite_3", "@cite_39", "@cite_5", "@cite_34", "@cite_20", "@cite_11" ], "mid": [ "2084417024", "1517192598", "1711276981", "171295454", "2080841971", "1503677488", "1511903507", "2129487583", "2158602337", "1984364182", "", "", "2169235184" ], "abstract": [ "Program analysis and verification require decision procedures to reason on theories of data structures. Many problems can be reduced to the satisfiability of sets of ground literals in theory T. If a sound and complete inference system for first-order logic is guaranteed to terminate on T-satisfiability problems, any theorem-proving strategy with that system and a fair search plan is a T-satisfiability procedure. We prove termination of a rewrite-based first-order engine on the theories of records, integer offsets, integer offsets modulo and lists. We give a modularity theorem stating sufficient conditions for termination on a combination of theories, given termination on each. The above theories, as well as others, satisfy these conditions. We introduce several sets of benchmarks on these theories and their combinations, including both parametric synthetic benchmarks to test scalability, and real-world problems to test performances on huge sets of literals. We compare the rewrite-based theorem prover E with the validity checkers CVC and CVC Lite. Contrary to the folklore that a general-purpose prover cannot compete with reasoners with built-in theories, the experiments are overall favorable to the theorem prover, showing that not only the rewriting approach is elegant and conceptually simple, but has important practical implications.", "Interpolating provers have a variety of applications in verification, including invariant generation and abstraction refinement. Here, we extended these methods to produce universally quantified interpolants and invariants, allowing the verification of programs manipulating arrays and heap data structures. We show how a paramodulation-based saturation prover, such as SPASS, can be modified in a simple way to produce a first-order interpolating prover that is complete for universally quantified interpolants. Using a partial axiomatization of the theory of arrays with transitive closure, we show that the method can verify properties of simple programs manipulating arrays and linked lists.", "We present a constraint-based algorithm for the synthesis of invariants expressed in the combined theory of linear arithmetic and uninterpreted function symbols. Given a set of programmer-specified invariant templates, our algorithm reduces the invariant synthesis problem to a sequence of arithmetic constraint satisfaction queries. Since the combination of linear arithmetic and uninterpreted functions is a widely applied predicate domain for program verification, our algorithm provides a powerful tool to statically and automatically reason about program correctness. The algorithm can also be used for the synthesis of invariants over arrays and set data structures, because satisfiability questions for the theories of sets and arrays can be reduced to the theory of linear arithmetic with uninterpreted functions. We have implemented our algorithm and used it to find invariants for a low-level memory allocator written in C.", "We present a technique for using infeasible program paths to automatically infer Range Predicates that describe properties of unbounded array segments. 
First, we build proofs showing the infeasibility of the paths, using axioms that precisely encode the high-level (but informal) rules with which programmers reason about arrays. Next, we mine the proofs for Craig Interpolants which correspond to predicates that refute the particular counterexample path. By embedding the predicate inference technique within a Counterexample-Guided Abstraction-Refinement (CEGAR) loop, we obtain a method for verifying data-sensitive safety properties whose precision is tailored in a program- and property-sensitive manner. Though the axioms used are simple, we show that the method suffices to prove a variety of array-manipulating programs that were previously beyond automatic model checkers.", "Interpolation based automatic abstraction is a powerful and robust technique for the automated analysis of hardware and software systems. Its use has however been limited to control-dominated applications because of a lack of algorithms for computing interpolants for data structures used in software programs. We present efficient procedures to construct interpolants for the theories of arrays, sets, and multisets using the reduction approach for obtaining decision procedures for complex data structures. The approach taken is that of reducing the theories of such data structures to the theories of equality and linear arithmetic for which efficient interpolating decision procedures exist. This enables interpolation based techniques to be applied to proving properties of programs that manipulate these data structures.", "We present a new method for automatic generation of loop invariants for programs containing arrays. Unlike all previously known methods, our method allows one to generate first-order invariants containing alternations of quantifiers. The method is based on the automatic analysis of the so-called update predicates of loops. An update predicate for an array A expresses updates made to A . We observe that many properties of update predicates can be extracted automatically from the loop description and loop properties obtained by other methods such as a simple analysis of counters occurring in the loop, recurrence solving and quantifier elimination over loop variables. We run the theorem prover Vampire on some examples and show that non-trivial loop invariants can be generated.", "We introduce the notion of array-based system as a suitable abstraction of infinite state systems such as broadcast protocols or sorting programs. By using a class of quantified-first order formulae to symbolically represent array-based systems, we propose methods to check safety (invariance) and liveness (recurrence) properties on top of Satisfiability Modulo Theories solvers. We find hypotheses under which the verification procedures for such properties can be fully mechanized.", "STP is a decision procedure for the satisfiability of quantifier-free formulas in the theory of bit-vectors and arrays that has been optimized for large problems encountered in software analysis applications. The basic architecture of the procedure consists of word-level pre-processing algorithms followed by translation to SAT. The primary bottlenecks in software verification and bug finding applications are large arrays and linear bit-vector arithmetic. New algorithms based on the abstraction-refinement paradigm are presented for reasoning about large arrays. 
A solver for bit-vector linear arithmetic is presented that eliminates variables and parts of variables to enable other transformations, and reduce the size of the problem that is eventually received by the SAT solver. These and other algorithms have been implemented in STP, which has been heavily tested over thousands of examples obtained from several real-world applications. Experimental results indicate that the above mix of algorithms along with the overall architecture is far more effective, for a variety of applications, than a direct translation of the original formula to SAT or other comparable decision procedures.", "Deciding satisfiability in the theory of arrays, particularly in combination with bit-vectors, is essential for software and hardware verification. We precisely describe how the lemmas on demand approach can be applied to this decision problem. In particular, we show how our new propagation based algorithm can be generalized to the extensional theory of arrays. Our implementation achieves competitive performance.", "How to efficiently reason about arrays in an automated solver based on decision procedures? The most efficient SMT solvers of the day implement \"lazy axiom instantiation\": treat the array operations read and write as uninterpreted, but supply at appropriate times appropriately many---not too many, not too few---instances of array axioms as additional clauses. We give a precise account of this approach, specifying \"how many\" is enough for correctness, and showing how to be frugal and correct.", "", "", "We show how a well-known superposition-based inference system for first-order equational logic can be used almost directly as a decision procedure for various theories including lists, arrays, extensional arrays and combinations of them. We also give a superposition-based decision procedure for homomorphism." ] }
1001.2100
2126807233
We present a first-order theory of (finite) sequences with integer elements, Presburger arithmetic, and regularity constraints, which can model significant properties of data structures such as lists and queues. We give a decision procedure for the quantifier-free fragment, based on an encoding into the first-order theory of concatenation; the procedure has PSPACE complexity. The quantifier-free fragment of the theory of sequences can express properties such as sortedness and injectivity, as well as Boolean combinations of periodic and arithmetic facts relating the elements of the sequence and their positions (e.g., "for all even i's, the element at position i has value i + 3 or 2i"). The resulting expressive power is orthogonal to that of the most expressive decidable logics for arrays. Some examples demonstrate that the fragment is also suitable to reason about sequence-manipulating programs within the standard framework of axiomatic semantics.
@cite_36 develop the array property fragment, a decidable subset of the @math fragment of the theory of arrays. An array property is a formula of the form @math , where the universal quantification is restricted to index variables, @math is a guard on index variables with arithmetic (restricted to existentially quantified variables), and @math is a constraint on array values without arithmetic or nested reads, and where no universally quantified index variable is used to select an element that is written to. The array property fragment is decidable, with a decision procedure that eliminates universal quantification over index variables by reducing it to conjunctions over a suitable finite set of index values. Extensions of the array property fragment that relax any of the restrictions on the form of array properties are undecidable. The authors also show how to adapt their theory of arrays to reason about maps.
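As a concrete instance, sortedness of a prefix is an array property: the index guard is 0 <= i <= j < n and the value constraint is a[i] <= a[j]. The sketch below merely writes such a property down and checks one consequence with the Z3 Python bindings; Z3's general quantifier engine is used here only for illustration and is not the decision procedure of @cite_36 , which instead instantiates the universal index variables over a finite, syntactically determined set of index terms.

```python
from z3 import Array, IntSort, Ints, ForAll, Implies, And, Select, Solver

a = Array('a', IntSort(), IntSort())
i, j, n = Ints('i j n')

# Array property: the first n elements of a are sorted.
sorted_prefix = ForAll([i, j],
                       Implies(And(0 <= i, i <= j, j < n),
                               Select(a, i) <= Select(a, j)))

# Sanity check: a sorted prefix of length at least 2 cannot have a[1] < a[0].
s = Solver()
s.add(sorted_prefix, n >= 2, Select(a, 1) < Select(a, 0))
print(s.check())   # expected: unsat
```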
{ "cite_N": [ "@cite_36" ], "mid": [ "1511737804" ], "abstract": [ "Motivated by applications to program verification, we study a decision procedure for satisfiability in an expressive fragment of a theory of arrays, which is parameterized by the theories of the array elements. The decision procedure reduces satisfiability of a formula of the fragment to satisfiability of an equisatisfiable quantifier-free formula in the combined theory of equality with uninterpreted functions (EUF), Presburger arithmetic, and the element theories. This fragment allows a constrained use of universal quantification, so that one quantifier alternation is allowed, with some syntactic restrictions. It allows expressing, for example, that an assertion holds for all elements in a given index range, that two arrays are equal in a given range, or that an array is sorted. We demonstrate its expressiveness through applications to verification of sorting algorithms and parameterized systems. We also prove that satisfiability is undecidable for several natural extensions to the fragment. Finally, we describe our implementation in the πVC verifying compiler." ] }
1001.2100
2126807233
We present a first-order theory of (finite) sequences with integer elements, Presburger arithmetic, and regularity constraints, which can model significant properties of data structures such as lists and queues. We give a decision procedure for the quantifier-free fragment, based on an encoding into the first-order theory of concatenation; the procedure has PSPACE complexity. The quantifier-free fragment of the theory of sequences can express properties such as sortedness and injectivity, as well as Boolean combinations of periodic and arithmetic facts relating the elements of the sequence and their positions (e.g., "for all even i's, the element at position i has value i + 3 or 2i"). The resulting expressive power is orthogonal to that of the most expressive decidable logics for arrays. Some examples demonstrate that the fragment is also suitable to reason about sequence-manipulating programs within the standard framework of axiomatic semantics.
@cite_25 develop "semantic" techniques to integrate decision procedures into a decidable extension of the theory of arrays. Their @math theory merges the quantifier-free extensional theory of arrays with dimension and Presburger arithmetic over indices into a decidable logic. Two extensions of the @math theory are still decidable: one with a unary predicate that determines whether an array is injective (i.e., it has no repeated elements), and one with a function that returns the domain of an array (i.e., the set of indices that correspond to definite values). The authors suggest that these extensions might be the basis for automated reasoning on Separation Logic models. The framework of @cite_25 also supports other decidable extensions, such as further predicates on arrays, as well as a combinator also discussed in @cite_23 .
{ "cite_N": [ "@cite_25", "@cite_23" ], "mid": [ "2154032363", "2115134174" ], "abstract": [ "The theory of arrays, introduced by McCarthy in his seminal paper \"Towards a mathematical science of computation,\" is central to Computer Science. Unfortunately, the theory alone is not sufficient for many important verification applications such as program analysis. Motivated by this observation, we study extensions of the theory of arrays whose satisfiability problem (i.e., checking the satisfiability of conjunctions of ground literals) is decidable. In particular, we consider extensions where the indexes of arrays have the algebraic structure of Presburger arithmetic and the theory of arrays is augmented with axioms characterizing additional symbols such as dimension, sortedness, or the domain of definition of arrays. We provide methods for integrating available decision procedures for the theory of arrays and Presburger arithmetic with automatic instantiation strategies which allow us to reduce the satisfiability problem for the extension of the theory of arrays to that of the theories decided by the available procedures. Our approach aims to re-use as much as possible existing techniques so as to ease the implementation of the proposed methods. To this end, we show how to use model-theoretic, rewriting-based theorem proving (i.e., superposition), and techniques developed in the Satisfiability Modulo Theories communities to implement the decision procedures for the various extensions.", "The theory of arrays is ubiquitous in the context of software and hardware verification and symbolic analysis. The basic array theory was introduced by McCarthy and allows to symbolically representing array updates. In this paper we present combinatory array logic, CAL, using a small, but powerful core of combinators, and reduce it to the theory of uninterpreted functions. CAL allows expressing properties that go well beyond the basic array theory. We provide a new efficient decision procedure for the base theory as well as CAL. The efficient procedure serves a critical role in the performance of the state-of-the-art SMT solver Z3 on array formulas from applications." ] }
1001.2100
2126807233
We present a first-order theory of (finite) sequences with integer elements, Presburger arithmetic, and regularity constraints, which can model significant properties of data structures such as lists and queues. We give a decision procedure for the quantifier-free fragment, based on an encoding into the first-order theory of concatenation; the procedure has PSPACE complexity. The quantifier-free fragment of the theory of sequences can express properties such as sortedness and injectivity, as well as Boolean combinations of periodic and arithmetic facts relating the elements of the sequence and their positions (e.g., "for all even i's, the element at position i has value i + 3 or 2i"). The resulting expressive power is orthogonal to that of the most expressive decidable logics for arrays. Some examples demonstrate that the fragment is also suitable to reason about sequence-manipulating programs within the standard framework of axiomatic semantics.
De Moura and Bjørner @cite_23 introduce combinatory array logic, a decidable extension of the quantifier-free extensional theory of arrays with constant and map combinators (i.e., array functors). The constant combinator defines an array with all values equal to a constant; the map combinator applies a @math -ary function to the elements at position @math in @math arrays @math . De Moura and Bjørner define a decision procedure for their combinatory array logic, which is implemented in the Z3 SMT solver.
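Both combinators happen to be exposed by the Z3 Python bindings (as K for constant arrays and Map for pointwise application), which makes a short illustration possible; the properties proved below are simple consequences of the combinators' semantics and are not taken from @cite_23 .

```python
from z3 import Array, IntSort, Int, K, Map, Function, Select, prove

i = Int('i')
a = Array('a', IntSort(), IntSort())

# Constant-array combinator: an array mapping every integer index to 0.
zeros = K(IntSort(), 0)
prove(Select(zeros, i) == 0)

# Map combinator: pointwise application of an uninterpreted function f.
f = Function('f', IntSort(), IntSort())
prove(Select(Map(f, a), i) == f(Select(a, i)))
```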
{ "cite_N": [ "@cite_23" ], "mid": [ "2115134174" ], "abstract": [ "The theory of arrays is ubiquitous in the context of software and hardware verification and symbolic analysis. The basic array theory was introduced by McCarthy and allows to symbolically representing array updates. In this paper we present combinatory array logic, CAL, using a small, but powerful core of combinators, and reduce it to the theory of uninterpreted functions. CAL allows expressing properties that go well beyond the basic array theory. We provide a new efficient decision procedure for the base theory as well as CAL. The efficient procedure serves a critical role in the performance of the state-of-the-art SMT solver Z3 on array formulas from applications." ] }
1001.2100
2126807233
We present a first-order theory of (finite) sequences with integer elements, Presburger arithmetic, and regularity constraints, which can model significant properties of data structures such as lists and queues. We give a decision procedure for the quantifier-free fragment, based on an encoding into the first-order theory of concatenation; the procedure has PSPACE complexity. The quantifier-free fragment of the theory of sequences can express properties such as sortedness and injectivity, as well as Boolean combinations of periodic and arithmetic facts relating the elements of the sequence and their positions (e.g., "for all even i's, the element at position i has value i + 3 or 2i"). The resulting expressive power is orthogonal to that of the most expressive decidable logics for arrays. Some examples demonstrate that the fragment is also suitable to reason about sequence-manipulating programs within the standard framework of axiomatic semantics.
Static analysis and abstract interpretation techniques have also been successfully applied to the analysis of array operations, especially with the goal of inferring invariants automatically (e.g., @cite_24 @cite_12 @cite_1 ).
{ "cite_N": [ "@cite_24", "@cite_1", "@cite_12" ], "mid": [ "2060697066", "2015362443", "2132251441" ], "abstract": [ "Automatic discovery of relationships among values of array elements is a challenging problem due to the unbounded nature of arrays. We present a framework for analyzing array operations that is capable of capturing numeric properties of array elements.In particular, the analysis is able to establish that all array elements are initialized by an array-initialization loop, as well as to discover numeric constraints on the values of initialized elements.The analysis is based on the combination of canonical abstraction and summarizing numeric domains. We describe a prototype implementation of the analysis and discuss our experience with applying the prototype to several examples, including the verification of correctness of an insertion-sort procedure.", "Array bound checking and array dependency analysis (for parallelization) have been widely studied. However, there are much less results about analyzing properties of array contents. In this paper, we propose a way of using abstract interpretation for discovering properties about array contents in some restricted cases: one-dimensional arrays, traversed by simple \"for\" loops. The basic idea, borrowed from [GRS05], consists in partitioning arrays into symbolic intervals (e.g., [1,i -- 1], [i,i], [i + 1,n]), and in associating with each such interval I and each array A an abstract variable AI; the new idea is to consider relational abstract properties ψ(AI, BI, ...) about these abstract variables, and to interpret such a property pointwise on the interval I: ∀l ∈ I, ψ(A[l], B[l],...). The abstract semantics of our simple programs according to these abstract properties has been defined and implemented in a prototype tool. The method is able, for instance, to discover that the result of an insertion sort is a sorted array, or that, in an array traversal guarded by a \"sentinel\", the index stays within the bounds.", "We describe a general technique for building abstract interpreters over powerful universally quantified abstract domains that leverage existing quantifier-free domains. Our quantified abstract domain can represent universally quantified facts like ∀i(0 ≤ i" ] }
1001.2391
2950522828
Sampling-based motion planners are an effective means for generating collision-free motion paths. However, the quality of these motion paths (with respect to quality measures such as path length, clearance, smoothness or energy) is often notoriously low, especially in high-dimensional configuration spaces. We introduce a simple algorithm for merging an arbitrary number of input motion paths into a hybrid output path of superior quality, for a broad and general formulation of path quality. Our approach is based on the observation that the quality of certain sub-paths within each solution may be higher than the quality of the entire path. A dynamic-programming algorithm, which we recently developed for comparing and clustering multiple motion paths, reduces the running time of the merging algorithm significantly. We tested our algorithm in motion-planning problems with up to 12 degrees of freedom. We show that our algorithm is able to merge a handful of input paths produced by several different motion planners to produce output paths of much higher quality.
Wilmarth @cite_1 improved the local clearance of sampled configurations by sampling closer to the medial axis. Nieuwenhuisen @cite_0 reduced path lengths in probabilistic roadmaps by closing cycles only when they significantly reduce the (graph) path length between configurations, and Geraerts @cite_36 combined both approaches. In contrast to the above techniques, the approach we present below is not tailored to any specific criterion of path quality and is designed to allow general formulations of path quality.
{ "cite_N": [ "@cite_0", "@cite_1", "@cite_36" ], "mid": [ "", "2056579540", "2144481803" ], "abstract": [ "", "Several motion planning methods using networks of randomly generated nodes in the free space have been shown to perform well in a number of cases, however their performance degrades when paths are required to pass through narrow passages in the free space. In previous work we proposed MAPRM, a method of sampling the configuration space in which randomly generated configurations, free or not, are retracted onto the medial axis of the free space without having to first compute the medial axis; this was shown to increase sampling in narrow passages. In this paper we give details of the MAPRM algorithm for the case of a rigid body moving in three dimensions, and show that the retraction may be carried out without explicitly computing the C-obstacles or the medial axis. We give theoretical and experimental results to show this improves performance on problems involving narrow corridors and compare the performance to uniform random sampling from the free space.", "Our goal is to create road maps that are particularly suited for motion planning in virtual environments. We use our reachability roadmap method to compute an initial, resolution complete roadmap. This roadmap is small which keeps query times and memory consumption low. However, for use in virtual environments, there are additional criteria that must be satisfied. In particular, we require that the roadmap contains useful cycles. These provide short paths and alternative routes which allow for variation in the routes a moving object can take. We will show how to incorporate such cycles. In addition, we provide high-clearance paths by retracting the edges of the roadmap to the medial axis. Since all operations are performed in a preprocessing phase, high-quality paths can be extracted in real-time as is required in interactive applications" ] }
1001.2391
2950522828
Sampling-based motion planners are an effective means for generating collision-free motion paths. However, the quality of these motion paths (with respect to quality measures such as path length, clearance, smoothness or energy) is often notoriously low, especially in high-dimensional configuration spaces. We introduce a simple algorithm for merging an arbitrary number of input motion paths into a hybrid output path of superior quality, for a broad and general formulation of path quality. Our approach is based on the observation that the quality of certain sub-paths within each solution may be higher than the quality of the entire path. A dynamic-programming algorithm, which we recently developed for comparing and clustering multiple motion paths, reduces the running time of the merging algorithm significantly. We tested our algorithm in motion-planning problems with up to 12 degrees of freedom. We show that our algorithm is able to merge a handful of input paths produced by several different motion planners to produce output paths of much higher quality.
Two paths are said to be homotopy equivalent if one path can be continuously deformed into the other without introducing any collisions along the way. Often the output path of a roadmap is homotopy equivalent to another, higher-quality path. In this case, post-processing procedures ignore the roadmap that originally created the path and focus on small perturbations that improve the path within its homotopy class. Path pruning and shortcut heuristics are common post-processing techniques for creating shorter and smoother paths, with little chance of switching between homotopy classes. Geraerts @cite_23 locally improved path clearance using a retraction scheme that resembles the approach taken by Wilmarth @cite_1 , and, more recently, the authors of @cite_20 improved both path length and path clearance simultaneously (but not other criteria of path quality). Geraerts @cite_27 locally improved path quality within a corridor (an inflated path) by applying a force field to the moving body within that corridor. In this case, the output path is restricted by construction to the selected corridor. In the Appendix, we discuss some more related work that deals with the very formulation of path-quality measures.
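The shortcut heuristic mentioned above is simple enough to sketch. The configuration representation and the collision_free predicate are assumptions supplied by the planning environment; the routine below is a generic illustration and not the post-processing step of any of the cited planners.

```python
import random

def shortcut(path, collision_free, iterations=200):
    """Shorten a piecewise-linear path (a list of configurations) by random
    shortcuts. `collision_free(p, q)` is assumed to report whether the straight
    segment between configurations p and q is valid; it is supplied by the
    planning environment and not defined here."""
    path = list(path)
    for _ in range(iterations):
        if len(path) < 3:
            break
        i, j = sorted(random.sample(range(len(path)), 2))
        if j - i < 2:
            continue                        # nothing to remove between i and j
        if collision_free(path[i], path[j]):
            path = path[:i + 1] + path[j:]  # drop the intermediate waypoints
    return path
```

Because each accepted shortcut only removes waypoints along a newly validated straight segment, the result tends to stay close to the input path, which matches the observation above that such local post-processing rarely switches between homotopy classes.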
{ "cite_N": [ "@cite_27", "@cite_20", "@cite_1", "@cite_23" ], "mid": [ "1984493088", "2167348159", "2056579540", "2119159374" ], "abstract": [ "In many virtual environment applications, paths have to be planned for characters to traverse from a start to a goal position in the virtual world while avoiding obstacles. Contemporary applications require a path planner that is fast (to ensure real-time interaction with the environment) and flexible (to avoid local hazards such as small and dynamic obstacles). In addition, paths need to be smooth and short to ensure natural looking motions. Current path planning techniques do not obey these criteria simultaneously. For example, A* approaches generate unnatural looking paths, potential field-based methods are too slow, and sampling-based path planning techniques are inflexible. We propose a new technique, the Corridor Map Method (CMM), which satisfies all criteria. In an off-line construction phase, the CMM creates a system of collision-free corridors for the static obstacles in an environment. In the query phase, paths can be planned inside the corridors for different types of characters while avoiding dynamic obstacles. Experiments show that high-quality paths for single characters or groups of characters can be obtained in real-time.", "Many algorithms have been proposed that create a path for a robot in an environment with obstacles. Most methods are aimed at finding a solution. However, for many applications, the path must be of a good quality as well. That is, a path should be short and should keep some amount of minimum clearance to the obstacles. Traveling along such a path reduces the chances of collisions due to the difficulty of measuring and controlling the precise position of the robot. This paper reports a new technique, called Partial shortcut, which decreases the path length. While current methods have difficulties in removing all redundant motions, the technique efficiently removes these motions by interpolating one degree of freedom at a time. Two algorithms are also studied that increase the clearance along paths. The first one is fast but can only deal with rigid, translating bodies. The second algorithm is slower but can handle a broader range of robots, including three-dimensional free-flying and articulated robots, which may reside in arbitrary high-dimensional configuration spaces. A big advantage of these algorithms is that clearance along paths can now be increased efficiently without using complex data structures and algorithms. Finally, we combine the two criteria and show that high-quality paths can be obtained for a broad range of robots.", "Several motion planning methods using networks of randomly generated nodes in the free space have been shown to perform well in a number of cases, however their performance degrades when paths are required to pass through narrow passages in the free space. In previous work we proposed MAPRM, a method of sampling the configuration space in which randomly generated configurations, free or not, are retracted onto the medial axis of the free space without having to first compute the medial axis; this was shown to increase sampling in narrow passages. In this paper we give details of the MAPRM algorithm for the case of a rigid body moving in three dimensions, and show that the retraction may be carried out without explicitly computing the C-obstacles or the medial axis. 
We give theoretical and experimental results to show this improves performance on problems involving narrow corridors and compare the performance to uniform random sampling from the free space.", "Many motion planning techniques, like the probabilistic roadmap method (PRM), generate low quality paths. In this paper, we study a number of different quality criteria on paths in particular length and clearance. We describe a number of techniques to improve the quality of paths. These are based on a new approach to increase the path clearance. Experiments showed that the heuristics were able to generate paths of a much higher quality than previous approaches." ] }
1001.1414
1634866831
In pay-per click sponsored search auctions which are currently extensively used by search engines, the auction for a keyword involves a certain number of advertisers (say k) competing for available slots (say m) to display their ads. This auction is typically conducted for a number of rounds (say T). There are click probabilities mu_ij associated with agent-slot pairs. The search engine's goal is to maximize social welfare, for example, the sum of values of the advertisers. The search engine does not know the true value of an advertiser for a click to her ad and also does not know the click probabilities mu_ij s. A key problem for the search engine therefore is to learn these during the T rounds of the auction and also to ensure that the auction mechanism is truthful. Mechanisms for addressing such learning and incentives issues have recently been introduced and would be referred to as multi-armed-bandit (MAB) mechanisms. When m = 1,characterizations for truthful MAB mechanisms are available in the literature and it has been shown that the regret for such mechanisms will be O(T^ 2 3 ). In this paper, we seek to derive a characterization in the realistic but nontrivial general case when m > 1 and obtain several interesting results.
Problems in which the decision maker must simultaneously exploit the information gained so far to maximize total reward and explore to gain knowledge about the available rewards are referred to as Multi-Armed Bandit (MAB) problems. The MAB problem was first studied by Robbins @cite_0 in 1952. Since his seminal work, MAB problems have been studied extensively with respect to regret analysis and convergence rates. Readers are referred to @cite_4 for regret analysis of finite-time MAB problems. However, when a mechanism designer has to take the strategic behavior of the agents into account, these bounds on regret no longer apply. Recently, Babaioff, Sharma, and Slivkins @cite_6 derived a characterization of truthful MAB mechanisms in the context of pay-per-click sponsored search auctions when there is only a single slot for each keyword. They showed that any truthful MAB mechanism must incur at least @math worst-case regret and also proposed a mechanism that achieves this regret. Here @math denotes the number of rounds for which the auction is conducted for a given keyword, with the same set of agents involved.
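For the non-strategic setting surveyed here, the UCB1 policy analyzed in @cite_4 is a standard way to balance exploration and exploitation; a compact sketch follows (the Bernoulli reward simulation is an illustrative assumption).

```python
import math
import random

def ucb1(pull, n_arms, horizon):
    """UCB1: play each arm once, then always play the arm with the highest
    empirical mean plus confidence bonus. `pull(arm)` is assumed to return a
    reward in [0, 1]."""
    counts = [0] * n_arms
    sums = [0.0] * n_arms
    for t in range(1, horizon + 1):
        if t <= n_arms:
            arm = t - 1                     # initial round-robin over the arms
        else:
            arm = max(range(n_arms),
                      key=lambda a: sums[a] / counts[a]
                      + math.sqrt(2.0 * math.log(t) / counts[a]))
        reward = pull(arm)
        counts[arm] += 1
        sums[arm] += reward
    return counts, sums

# Toy usage: Bernoulli arms with unknown click probabilities.
mus = [0.30, 0.50, 0.45]
counts, _ = ucb1(lambda a: 1.0 if random.random() < mus[a] else 0.0,
                 n_arms=len(mus), horizon=10000)
print(counts)   # the arm with probability 0.50 should dominate
```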
{ "cite_N": [ "@cite_0", "@cite_4", "@cite_6" ], "mid": [ "1998498767", "2168405694", "2131101582" ], "abstract": [ "Until recently, statistical theory has been restricted to the design and analysis of sampling experiments in which the size and composition of the samples are completely determined before the experimentation begins. The reasons for this are partly historical, dating back to the time when the statistician was consulted, if at all, only after the experiment was over, and partly intrinsic in the mathematical difficulty of working with anything but a fixed number of independent random variables. A major advance now appears to be in the making with the creation of a theory of the sequential design of experiments, in which the size and composition of the samples are not fixed in advance but are functions of the observations themselves.", "Reinforcement learning policies face the exploration versus exploitation dilemma, i.e. the search for a balance between exploring the environment to find profitable actions while taking the empirically best action as often as possible. A popular measure of a policy's success in addressing this dilemma is the regret, that is the loss due to the fact that the globally optimal policy is not followed all the times. One of the simplest examples of the exploration exploitation dilemma is the multi-armed bandit problem. Lai and Robbins were the first ones to show that the regret for this problem has to grow at least logarithmically in the number of plays. Since then, policies which asymptotically achieve this regret have been devised by Lai and Robbins and many others. In this work we show that the optimal logarithmic regret is also achievable uniformly over time, with simple and efficient policies, and for all reward distributions with bounded support.", "We consider a multi-round auction setting motivated by pay-per-click auctions for Internet advertising. In each round the auctioneer selects an advertiser and shows her ad, which is then either clicked or not. An advertiser derives value from clicks; the value of a click is her private information. Initially, neither the auctioneer nor the advertisers have any information about the likelihood of clicks on the advertisements. The auctioneer's goal is to design a (dominant strategies) truthful mechanism that (approximately) maximizes the social welfare. If the advertisers bid their true private values, our problem is equivalent to the multi-armed bandit problem, and thus can be viewed as a strategic version of the latter. In particular, for both problems the quality of an algorithm can be characterized by regret, the difference in social welfare between the algorithm and the benchmark which always selects the same \"best\" advertisement. We investigate how the design of multi-armed bandit algorithms is affected by the restriction that the resulting mechanism must be truthful. We find that truthful mechanisms have certain strong structural properties -- essentially, they must separate exploration from exploitation -- and they incur much higher regret than the optimal multi-armed bandit algorithms. Moreover, we provide a truthful mechanism which (essentially) matches our lower bound on regret." ] }
1001.1414
1634866831
In pay-per-click sponsored search auctions, which are currently extensively used by search engines, the auction for a keyword involves a certain number of advertisers (say k) competing for available slots (say m) to display their ads. This auction is typically conducted for a number of rounds (say T). There are click probabilities mu_ij associated with agent-slot pairs. The search engine's goal is to maximize social welfare, for example, the sum of values of the advertisers. The search engine does not know the true value of an advertiser for a click to her ad and also does not know the click probabilities mu_ij. A key problem for the search engine therefore is to learn these during the T rounds of the auction and also to ensure that the auction mechanism is truthful. Mechanisms for addressing such learning and incentives issues have recently been introduced and will be referred to as multi-armed-bandit (MAB) mechanisms. When m = 1, characterizations for truthful MAB mechanisms are available in the literature and it has been shown that the regret for such mechanisms will be O(T^{2/3}). In this paper, we seek to derive a characterization in the realistic but nontrivial general case when m > 1 and obtain several interesting results.
Devanur and Kakade @cite_5 have also addressed the problem of designing truthful MAB mechanisms for pay-per-click auctions with a single sponsored slot. Though they do not explicitly attempt a characterization of truthful MAB mechanisms, they derive results on payments similar to those in @cite_6 . They also obtain a bound of @math on the regret of a MAB mechanism. Note that the regret in @cite_5 is regret in the revenue to the search engine, whereas the regret analysis in @cite_6 is for the social welfare of the advertisers. In this paper, unless explicitly stated otherwise, when we refer to regret, we mean the loss in social welfare as compared to the social welfare that could have been obtained with known CTRs.
{ "cite_N": [ "@cite_5", "@cite_6" ], "mid": [ "2138043622", "2131101582" ], "abstract": [ "We analyze the problem of designing a truthful pay-per-click auction where the click-through-rates (CTR) of the bidders are unknown to the auction. Such an auction faces the classic explore exploit dilemma: while gathering information about the click through rates of advertisers, the mechanism may loose revenue; however, this gleaned information may prove valuable in the future for a more profitable allocation. In this sense, such mechanisms are prime candidates to be designed using multi-armed bandit techniques. However, a naive application of multi-armed bandit algorithms would not take into account the strategic considerations of the players -- players might manipulate their bids (which determine the auction's revenue) in a way as to maximize their own utility. Hence, we consider the natural restriction that the auction be truthful. The revenue that we could hope to achieve is the expected revenue of a Vickrey auction that knows the true CTRs, and we define the truthful regret to be the difference between the expected revenue of the auction and this Vickrey revenue. This work sharply characterizes what regret is achievable, under a truthful restriction. We show that this truthful restriction imposes statistical limits on the achievable regret -- the achievable regret is Θ(T2 3), while for traditional bandit algorithms (without the truthful restriction) the achievable regret is Θ(T1 2) (where T is the number of rounds). We term the extra T1 6 factor, the price of truthfulness'.", "We consider a multi-round auction setting motivated by pay-per-click auctions for Internet advertising. In each round the auctioneer selects an advertiser and shows her ad, which is then either clicked or not. An advertiser derives value from clicks; the value of a click is her private information. Initially, neither the auctioneer nor the advertisers have any information about the likelihood of clicks on the advertisements. The auctioneer's goal is to design a (dominant strategies) truthful mechanism that (approximately) maximizes the social welfare. If the advertisers bid their true private values, our problem is equivalent to the multi-armed bandit problem, and thus can be viewed as a strategic version of the latter. In particular, for both problems the quality of an algorithm can be characterized by regret, the difference in social welfare between the algorithm and the benchmark which always selects the same \"best\" advertisement. We investigate how the design of multi-armed bandit algorithms is affected by the restriction that the resulting mechanism must be truthful. We find that truthful mechanisms have certain strong structural properties -- essentially, they must separate exploration from exploitation -- and they incur much higher regret than the optimal multi-armed bandit algorithms. Moreover, we provide a truthful mechanism which (essentially) matches our lower bound on regret." ] }
1001.1414
1634866831
In pay-per-click sponsored search auctions, which are currently extensively used by search engines, the auction for a keyword involves a certain number of advertisers (say k) competing for available slots (say m) to display their ads. This auction is typically conducted for a number of rounds (say T). There are click probabilities mu_ij associated with agent-slot pairs. The search engine's goal is to maximize social welfare, for example, the sum of values of the advertisers. The search engine does not know the true value of an advertiser for a click to her ad and also does not know the click probabilities mu_ij. A key problem for the search engine therefore is to learn these during the T rounds of the auction and also to ensure that the auction mechanism is truthful. Mechanisms for addressing such learning and incentives issues have recently been introduced and will be referred to as multi-armed-bandit (MAB) mechanisms. When m = 1, characterizations for truthful MAB mechanisms are available in the literature and it has been shown that the regret for such mechanisms will be O(T^{2/3}). In this paper, we seek to derive a characterization in the realistic but nontrivial general case when m > 1 and obtain several interesting results.
Prior to the above two papers, Gonen and Pavlov @cite_1 had addressed the issue of unknown CTRs in multi-slot sponsored search auctions and proposed a specific mechanism. Their claim that their mechanism is truthful in expectation has been contested in @cite_6 @cite_5 . Moreover, Gonen and Pavlov do not provide any characterization of truthful multi-slot MAB mechanisms.
{ "cite_N": [ "@cite_5", "@cite_1", "@cite_6" ], "mid": [ "2138043622", "", "2131101582" ], "abstract": [ "We analyze the problem of designing a truthful pay-per-click auction where the click-through-rates (CTR) of the bidders are unknown to the auction. Such an auction faces the classic explore exploit dilemma: while gathering information about the click through rates of advertisers, the mechanism may loose revenue; however, this gleaned information may prove valuable in the future for a more profitable allocation. In this sense, such mechanisms are prime candidates to be designed using multi-armed bandit techniques. However, a naive application of multi-armed bandit algorithms would not take into account the strategic considerations of the players -- players might manipulate their bids (which determine the auction's revenue) in a way as to maximize their own utility. Hence, we consider the natural restriction that the auction be truthful. The revenue that we could hope to achieve is the expected revenue of a Vickrey auction that knows the true CTRs, and we define the truthful regret to be the difference between the expected revenue of the auction and this Vickrey revenue. This work sharply characterizes what regret is achievable, under a truthful restriction. We show that this truthful restriction imposes statistical limits on the achievable regret -- the achievable regret is Θ(T2 3), while for traditional bandit algorithms (without the truthful restriction) the achievable regret is Θ(T1 2) (where T is the number of rounds). We term the extra T1 6 factor, the price of truthfulness'.", "", "We consider a multi-round auction setting motivated by pay-per-click auctions for Internet advertising. In each round the auctioneer selects an advertiser and shows her ad, which is then either clicked or not. An advertiser derives value from clicks; the value of a click is her private information. Initially, neither the auctioneer nor the advertisers have any information about the likelihood of clicks on the advertisements. The auctioneer's goal is to design a (dominant strategies) truthful mechanism that (approximately) maximizes the social welfare. If the advertisers bid their true private values, our problem is equivalent to the multi-armed bandit problem, and thus can be viewed as a strategic version of the latter. In particular, for both problems the quality of an algorithm can be characterized by regret, the difference in social welfare between the algorithm and the benchmark which always selects the same \"best\" advertisement. We investigate how the design of multi-armed bandit algorithms is affected by the restriction that the resulting mechanism must be truthful. We find that truthful mechanisms have certain strong structural properties -- essentially, they must separate exploration from exploitation -- and they incur much higher regret than the optimal multi-armed bandit algorithms. Moreover, we provide a truthful mechanism which (essentially) matches our lower bound on regret." ] }
1001.0056
2056852781
Let G denote a complex, semisimple, simply-connected group. We identify the equivariant quantum differential equation for the cotangent bundle to the flag variety of G with the affine Knizhnik-Zamolodchikov connection of Cherednik and Matsuo. This recovers Kim's description of quantum cohomology of the flag variety itself as a limiting case. A parallel result is proven for resolutions of the Slodowy slices. Extension to arbitrary symplectic resolutions is discussed.
From our viewpoint, the operators of quantum multiplication form a family (parametrized by @math ) of maximal abelian subalgebras in a certain geometrically constructed Yangian. Long before geometric representation theory, Yangians appeared in mathematical physics as symmetries of integrable models of quantum mechanics and quantum field theory. The maximal abelian subalgebra there is the algebra of quantum integrals of motion, that is, of the operators commuting with the Hamiltonian. A profound correspondence between quantum integrable systems and supersymmetric gauge theories was discovered by Nekrasov and Shatashvili, see @cite_41 . It connects the two appearances of Yangians and, for example, correctly predicts that the eigenvalues of the operators of quantum multiplication are given by solutions of certain Bethe equations (well-known in quantum integrable systems, but probably quite mysterious to geometers). The integrable and geometric viewpoints are complementary in many ways and, no doubt, will lead to new insights on both sides of the correspondence.
{ "cite_N": [ "@cite_41" ], "mid": [ "1976841071" ], "abstract": [ "We study four dimensional N=2 supersymmetric gauge theory in the Omega-background with the two dimensional N=2 super-Poincare invariance. We explain how this gauge theory provides the quantization of the classical integrable system underlying the moduli space of vacua of the ordinary four dimensional N=2 theory. The epsilon-parameter of the Omega-background is identified with the Planck constant, the twisted chiral ring maps to quantum Hamiltonians, the supersymmetric vacua are identified with Bethe states of quantum integrable systems. This four dimensional gauge theory in its low energy description has two dimensional twisted superpotential which becomes the Yang-Yang function of the integrable system. We present the thermodynamic-Bethe-ansatz like formulae for these functions and for the spectra of commuting Hamiltonians following the direct computation in gauge theory. The general construction is illustrated at the examples of the many-body systems, such as the periodic Toda chain, the elliptic Calogero-Moser system, and their relativistic versions, for which we present a complete characterization of the L^2-spectrum. We very briefly discuss the quantization of Hitchin system." ] }
1001.0056
2056852781
Let G denote a complex, semisimple, simply-connected group. We identify the equivariant quantum differential equation for the cotangent bundle to the flag variety of G with the affine Knizhnik-Zamolodchikov connection of Cherednik and Matsuo. This recovers Kim's description of quantum cohomology of the flag variety itself as a limiting case. A parallel result is proven for resolutions of the Slodowy slices. Extension to arbitrary symplectic resolutions is discussed.
It is a well-known phenomenon that a particular solution of the quantum differential equation (the so-called @math -function) may often be computed as the generating function of integrals of certain cohomology classes over different compactifications of the moduli space of maps @math . For example, for maps to flag varieties of @math , one can try to use the so-called Laumon moduli space of flags of sheaves on @math . The corresponding generating function was identified by A. Negut with the eigenfunctions of the quantum Calogero-Moser system @cite_12 .
{ "cite_N": [ "@cite_12" ], "mid": [ "2004255580" ], "abstract": [ "This paper contains a proof of a conjecture of Braverman concerning Laumon quasiflag spaces. We consider the generating function Z(m), whose coefficients are the integrals of the equivariant Chern polynomial (with variable m) of the tangent bundles of the Laumon spaces. We prove Braverman’s conjecture, which states that Z(m) coincides with the eigenfunction of the Calogero-Sutherland hamiltonian, up to a simple factor which we specify. This conjecture was inspired by the work of Nekrasov in the affine ( sl _ n ) setting, where a similar conjecture is still open." ] }
1001.1009
1625256916
Knowing the largest rate at which data can be sent on an end-to-end path such that the egress rate is equal to the ingress rate with high probability can be very practical when choosing transmission rates in video streaming or selecting peers in peer-to-peer applications. We introduce probabilistic available bandwidth, which is defined in terms of ingress rates and egress rates of traffic on a path, rather than in terms of capacity and utilization of the constituent links of the path like the standard available bandwidth metric. In this paper, we describe a distributed algorithm, based on a probabilistic graphical model and Bayesian active learning, for simultaneously estimating the probabilistic available bandwidth of multiple paths through a network. Our procedure exploits the fact that each packet train provides information not only about the path it traverses, but also about any path that shares a link with the monitored path. Simulations and PlanetLab experiments indicate that this process can dramatically reduce the number of probes required to generate accurate estimates.
The authors of @cite_0 study in more depth the transition point and the relation between the input-output ratio and the input rate. They derive a stochastic response curve that is tightly lower-bounded by its fluid counterpart (used by the previously referenced techniques) and conclude that this transition point is not exact. The idea of stochastic service curves, which express the service given to a flow by the network in terms of a probabilistic bound, was also used in @cite_3 to provide bounds on end-to-end delay. In the context of available bandwidth estimation, the authors of @cite_12 propose an elegant theoretical framework based on min-plus algebra, but only for worst-case deterministic estimation.
{ "cite_N": [ "@cite_0", "@cite_12", "@cite_3" ], "mid": [ "2108261513", "2129770080", "2082841997" ], "abstract": [ "This paper analyzes the asymptotic behavior of packet-train probing over a multi-hop network path P carrying arbitrarily routed bursty cross-traffic flows. We examine the statistical mean of the packet-train output dispersions and its relationship to the input dispersion. We call this relationship the response curve of path P. We show that the real response curve Z is tightly lower-bounded by its multi-hop fluid counterpart F, obtained when every cross-traffic flow on P is hypothetically replaced with a constant-rate fluid flow of the same average intensity and routing pattern. The real curve Z asymptotically approaches its fluid counterpart F as probing packet size or packet train length increases. Most existing measurement techniques are based upon the single-hop fluid curve S associated with the bottleneck link in P. We note that the curve S coincides with F in a certain large-dispersion input range, but falls below F in the remaining small-dispersion input ranges. As an implication of these findings, we show that bursty cross-traffic in multi-hop paths causes negative bias (asymptotic underestimation) to most existing techniques. This bias can be mitigated by reducing the deviation of Z from S using large packet size or long packet-trains. However, the bias is not completely removable for the techniques that use the portion of S that falls below F.", "Significant research has been dedicated to methods that estimate the available bandwidth in a network from traffic measurements. While estimation methods abound, less progress has been made on achieving a foundational understanding of the bandwidth estimation problem. In this paper, we develop a min-plus system theoretic formulation of bandwidth estimation. We show that the problem as well as previously proposed solutions can be concisely described and derived using min-plus system theory, thus establishing the existence of a strong link between network calculus and network probing methods. We relate difficulties in network probing to potential non-linearities of the underlying systems, and provide a justification for the distinctive treatment of FIFO scheduling in network probing.", "The stochastic network calculus is an evolving new methodology for backlog and delay analysis of networks that can account for statistical multiplexing gain. This paper advances the stochastic network calculus by deriving a network service curve, which expresses the service given to a flow by the network as a whole in terms of a probabilistic bound. The presented network service curve permits the calculation of statistical end-to-end delay and backlog bounds for broad classes of arrival and service distributions. The benefits of the derived service curve are illustrated for the exponentially bounded burstiness (EBB) traffic model. It is shown that end-to-end performance measures computed with a network service curve are bounded by O(Hlog H), where H is the number of nodes traversed by a flow. Using currently available techniques that compute end-to-end bounds by adding single node results, the corresponding performance measures are bounded by O(H3)." ] }
1001.1009
1625256916
Knowing the largest rate at which data can be sent on an end-to-end path such that the egress rate is equal to the ingress rate with high probability can be very practical when choosing transmission rates in video streaming or selecting peers in peer-to-peer applications. We introduce probabilistic available bandwidth, which is defined in terms of ingress rates and egress rates of traffic on a path, rather than in terms of capacity and utilization of the constituent links of the path like the standard available bandwidth metric. In this paper, we describe a distributed algorithm, based on a probabilistic graphical model and Bayesian active learning, for simultaneously estimating the probabilistic available bandwidth of multiple paths through a network. Our procedure exploits the fact that each packet train provides information not only about the path it traverses, but also about any path that shares a link with the monitored path. Simulations and PlanetLab experiments indicate that this process can dramatically reduce the number of probes required to generate accurate estimates.
Other approaches have been proposed previously to perform network-wide estimation without overloading the network or consuming a large amount of resources. Song and Yalagandula proposed @math to measure real-time end-to-end network properties such as latency and loss rate @cite_5 . They measure only a subset of the network, choosing paths according to the observed load on the links and at the end nodes, to infer statistics about the entire network. However, their work does not address the problem of estimating available bandwidth. In @cite_16 , Hu and Steenkiste introduce BRoute, a scalable available bandwidth estimation system based on route sharing. They assume that most Internet bottlenecks are on path edges and use the fact that links near end nodes are often shared by many paths. Although some of the intuition behind BRoute and our work is similar, the core implementation differs; it is not based on a probabilistic framework.
{ "cite_N": [ "@cite_5", "@cite_16" ], "mid": [ "1970778715", "2054149522" ], "abstract": [ "Measuring real-time end-to-end network path performance metrics is important for several distributed applications such as media streaming systems (e.g., for switching to paths with higher bandwidth and lower jitter) and content distribution systems (e.g., for selecting servers with lower latency). However, it is challenging to perform such end-to-end pairwise measurements in large distributed systems while achieving high accuracy and avoid interfering with existing traffic. On the end hosts, the measurements can overload the machine by causing interference among themselves and other processes. On the network, the measurement packets from different hosts can interfere among themselves and with other flows on bottleneck links. In this paper, we propose a system to monitor end-host and network resources and adapt the number of measurements according to the observed load. Our scheme avoids interference by measuring only a small subset of network paths and reconstructing the entire network path properties from the partial, indirect measurements. Our simulation experiments and real testbed experiments on PlanetLab show that our path selection algorithm working with resource constraints does not adversely affect the accuracy of inference and our system can effectively adapt to the changing resource usage at the end hosts.", "Recent progress in active measurement techniques has made it possible to estimate end-to-end path available bandwidth. However, how to efficiently obtain available bandwidth information for the N2 paths in a large N-node system remains an open problem. While researchers have developed coordinate-based models that allow any node to quickly and accurately estimate latency in a scalable fashion, no such models exist for available bandwidth. In this paper we introduce BRoute--a scalable available bandwidth estimation system that is based on a route sharing model. The characteristics of BRoute are that its overhead is linear with the number of end nodes in the system, and that it requires only limited cooperation among end nodes. BRoute leverages the fact that most Internet bottlenecks are on path edges, and that edges are shared by many different paths. It uses AS-level source and sink trees to characterize and infer path-edge sharing in a scalable fashion. In this paper, we describe the BRoute architecture and evaluate the performance of its components. Initial experiments show that BRoute can infer path edges with an accuracy of over 80 . In a small case study on Planetlab, 80 of the available bandwidth estimates obtained from BRoute are accurate within 50 ." ] }
0912.4740
1802771192
In this chapter a general mathematical framework for probabilistic theories of operationally understood circuits is laid out. Circuits are comprised of operations and wires. An operation is one use of an apparatus and a wire is a diagrammatic device for showing how apertures on the apparatuses are placed next to each other. Mathematical objects are defined in terms of the circuit understood graphically. In particular, we do not think of the circuit as sitting in a background time. Circuits can be foliated by hypersurfaces comprised of sets of wires. Systems are defined to be associated with wires. A closable set of operations is defined to be one for which the probability associated with any circuit built from this set is independent both of choices on other circuits and of extra circuitry that may be added to outputs from this circuit. States can be associated with circuit fragments corresponding to preparations. These states evolve on passing through circuit fragments corresponding to transformations. The composition of transformations is treated. A number of theorems are proven including one which rules out quaternionic quantum theory. The case of locally tomographic theories (where local measurements on a systems components suffice to determine the global state) is considered. For such theories the probability can be calculated for a circuit from matrices pertaining the operations that comprise that circuit. Classical probability theory and quantum theory are exhibited as examples in this framework.
Barrett elaborated on the @math - @math framework in @cite_12 . He makes two assumptions: that local operations commute, and that local tomography is possible (whereby the state of a composite system can be determined by local measurements). In this work we make neither assumption. The first assumption would, in any case, have no content here, since we are interested in the graphical information in a circuit diagram and interchanging the relative height of operations does not change the graph. Under these assumptions, Barrett showed that composite systems can be associated with a tensor product structure. We recover this here for the special case in which we have local tomography, but the more general case is also studied. In his paper Barrett shows that some properties which are thought to be specific to quantum theory are actually properties of any non-classical probability theory.
{ "cite_N": [ "@cite_12" ], "mid": [ "2040870895" ], "abstract": [ "I introduce a framework in which a variety of probabilistic theories can be defined, including classical and quantum theories, and many others. From two simple assumptions, a tensor product rule for combining separate systems can be derived. Certain features, usually thought of as specifically quantum, turn out to be generic in this framework, meaning that they are present in all except classical theories. These include the nonunique decomposition of a mixed state into pure states, a theorem involving disturbance of a system on measurement (suggesting that the possibility of secure key distribution is generic), and a no-cloning theorem. Two particular theories are then investigated in detail, for the sake of comparison with the classical and quantum cases. One of these includes states that can give rise to arbitrary nonsignaling correlations, including the superquantum correlations that have become known in the literature as nonlocal machines or Popescu-Rohrlich boxes. By investigating these correlations in the context of a theory with well-defined dynamics, I hope to make further progress with a question raised by Popescu and Rohrlich, which is why does quantum theory not allow these strongly nonlocal correlations? The existence of such correlations forces much of the dynamics in this theory to be, in a certain sense, classical, with consequences for teleportation, cryptography, and computation. I also investigate another theory in which all states are local. Finally, I raise the question of what further axiom(s) could be added to the framework in order to identify quantum theory uniquely, and hypothesize that quantum theory is optimal for computation." ] }
0912.4740
1802771192
In this chapter a general mathematical framework for probabilistic theories of operationally understood circuits is laid out. Circuits are comprised of operations and wires. An operation is one use of an apparatus and a wire is a diagrammatic device for showing how apertures on the apparatuses are placed next to each other. Mathematical objects are defined in terms of the circuit understood graphically. In particular, we do not think of the circuit as sitting in a background time. Circuits can be foliated by hypersurfaces comprised of sets of wires. Systems are defined to be associated with wires. A closable set of operations is defined to be one for which the probability associated with any circuit built from this set is independent both of choices on other circuits and of extra circuitry that may be added to outputs from this circuit. States can be associated with circuit fragments corresponding to preparations. These states evolve on passing through circuit fragments corresponding to transformations. The composition of transformations is treated. A number of theorems are proven including one which rules out quaternionic quantum theory. The case of locally tomographic theories (where local measurements on a systems components suffice to determine the global state) is considered. For such theories the probability can be calculated for a circuit from matrices pertaining the operations that comprise that circuit. Classical probability theory and quantum theory are exhibited as examples in this framework.
The assumption of local tomography is equivalent to the assumption that @math , where @math is the number of probabilities needed to specify the state of the composite system @math and @math ( @math ) is the number needed to describe system @math ( @math ) alone (this is the content of Theorem 5 below). Theories having this property were investigated in 1990 in a paper by Wootters @cite_41 (see also @cite_3 ), who showed that they are consistent with the relation @math , where @math is the number of states that can be distinguished in a single-shot measurement (this was used in @cite_33 as part of the axiomatic structure).
{ "cite_N": [ "@cite_41", "@cite_33", "@cite_3" ], "mid": [ "57812058", "2148098913", "2025218323" ], "abstract": [ "", "The usual formulation of quantum theory is based on rather obscure axioms (employing complex Hilbert spaces, Hermitean operators, and the trace formula for calculating probabilities). In this paper it is shown that quantum theory can be derived from five very reasonable axioms. The first four of these axioms are obviously consistent with both quantum theory and classical probability theory. Axiom 5 (which requires that there exist continuous reversible transformations between pure states) rules out classical probability theory. If Axiom 5 (or even just the word \"continuous\" from Axiom 5) is dropped then we obtain classical probability theory instead. This work provides some insight into the reasons why quantum theory is the way it is. For example, it explains the need for complex numbers and where the trace formula comes from. We also gain insight into the relationship between quantum theory and classical probability theory.", "First steps are taken toward a formulation of quantum mechanics which avoids the use of probability amplitudes and is expressed entirely in terms of observable probabilities. Quantum states are represented not by state vectors or density matrices but by “probability tables,” which contain only the probabilities of the outcomes of certain special measurements. The rule for computing transition probabilities, normally given by the squared modulus of the inner product of two state vectors, is re-expressed in terms of probability tables. The new version of the rule is surprisingly simple, especially when one considers that the notion of complex phases, so crucial in the evaluation of inner products, is entirely absent from the representation of states used here." ] }
0912.4740
1802771192
In this chapter a general mathematical framework for probabilistic theories of operationally understood circuits is laid out. Circuits are comprised of operations and wires. An operation is one use of an apparatus and a wire is a diagrammatic device for showing how apertures on the apparatuses are placed next to each other. Mathematical objects are defined in terms of the circuit understood graphically. In particular, we do not think of the circuit as sitting in a background time. Circuits can be foliated by hypersurfaces comprised of sets of wires. Systems are defined to be associated with wires. A closable set of operations is defined to be one for which the probability associated with any circuit built from this set is independent both of choices on other circuits and of extra circuitry that may be added to outputs from this circuit. States can be associated with circuit fragments corresponding to preparations. These states evolve on passing through circuit fragments corresponding to transformations. The composition of transformations is treated. A number of theorems are proven including one which rules out quaternionic quantum theory. The case of locally tomographic theories (where local measurements on a systems components suffice to determine the global state) is considered. For such theories the probability can be calculated for a circuit from matrices pertaining the operations that comprise that circuit. Classical probability theory and quantum theory are exhibited as examples in this framework.
Another line of work in this type of framework was initiated by D'Ariano in @cite_16 , who proposes a set of axioms from which he obtains quantum theory. In a very recent paper, Chiribella, D'Ariano, and Perinotti @cite_31 set up a general probabilistic framework also having the local tomography property. Like Abramsky, Coecke and co-workers, Chiribella et al. develop a diagrammatic notation with which calculations can be performed. They show that theories having the property that every mixed state has a purification have many properties in common with quantum theory.
{ "cite_N": [ "@cite_31", "@cite_16" ], "mid": [ "2113235349", "2949432135" ], "abstract": [ "We investigate general probabilistic theories in which every mixed state has a purification, unique up to reversible channels on the purifying system. We show that the purification principle is equivalent to the existence of a reversible realization of every physical process, that is, to the fact that every physical process can be regarded as arising from a reversible interaction of the system with an environment, which is eventually discarded. From the purification principle we also construct an isomorphism between transformations and bipartite states that possesses all structural properties of the Choi-Jamiolkowski isomorphism in quantum theory. Such an isomorphism allows one to prove most of the basic features of quantum theory, like, e.g., existence of pure bipartite states giving perfect correlations in independent experiments, no information without disturbance, no joint discrimination of all pure states, no cloning, teleportation, no programming, no bit commitment, complementarity between correctable channels and deletion channels, characterization of entanglement-breaking channels as measure-and-prepare channels, and others, without resorting to the mathematical framework of Hilbert spaces.", "In the present paper I show how it is possible to derive the Hilbert space formulation of Quantum Mechanics from a comprehensive definition of \"physical experiment\" and assuming \"experimental accessibility and simplicity\" as specified by five simple Postulates. This accomplishes the program presented in form of conjectures in the previous paper quant-ph 0506034. Pivotal roles are played by the \"local observability principle\", which reconciles the holism of nonlocality with the reductionism of local observation, and by the postulated existence of \"informationally complete observables\" and of a \"symmetric faithful state\". This last notion allows one to introduce an operational definition for the real version of the \"adjoint\"--i. e. the transposition--from which one can derive a real Hilbert-space structure via either the Mackey-Kakutani or the Gelfand-Naimark-Segal constructions. Here I analyze in detail only the Gelfand-Naimark-Segal construction, which leads to a real Hilbert space structure analogous to that of (classes of generally unbounded) selfadjoint operators in Quantum Mechanics. For finite dimensions, general dimensionality theorems that can be derived from a local observability principle, allow us to represent the elements of the real Hilbert space as operators over an underlying complex Hilbert space (see, however, a still open problem at the end of the paper). The route for the present operational axiomatization was suggested by novel ideas originated from Quantum Tomography." ] }
0912.4740
1802771192
In this chapter a general mathematical framework for probabilistic theories of operationally understood circuits is laid out. Circuits are comprised of operations and wires. An operation is one use of an apparatus and a wire is a diagrammatic device for showing how apertures on the apparatuses are placed next to each other. Mathematical objects are defined in terms of the circuit understood graphically. In particular, we do not think of the circuit as sitting in a background time. Circuits can be foliated by hypersurfaces comprised of sets of wires. Systems are defined to be associated with wires. A closable set of operations is defined to be one for which the probability associated with any circuit built from this set is independent both of choices on other circuits and of extra circuitry that may be added to outputs from this circuit. States can be associated with circuit fragments corresponding to preparations. These states evolve on passing through circuit fragments corresponding to transformations. The composition of transformations is treated. A number of theorems are proven including one which rules out quaternionic quantum theory. The case of locally tomographic theories (where local measurements on a systems components suffice to determine the global state) is considered. For such theories the probability can be calculated for a circuit from matrices pertaining the operations that comprise that circuit. Classical probability theory and quantum theory are exhibited as examples in this framework.
There have been many attempts at reconstructing quantum theory, not all of them in the probabilistic framework of the sort considered in the above works. A recent conference on the general problem of reconstructing quantum theory can be seen at @cite_11 .
{ "cite_N": [ "@cite_11" ], "mid": [ "1675985504" ], "abstract": [ "We review some of our recent results (with collaborators) on information processing in an ordered linear spaces framework for probabilistic theories. These include demonstrations that many \"inherently quantum\" phenomena are in reality quite general characteristics of non-classical theories, quantum or otherwise. As an example, a set of states in such a theory is broadcastable if, and only if, it is contained in a simplex whose vertices are cloneable, and therefore distinguishable by a single measurement. As another example, information that can be obtained about a system in this framework without causing disturbance to the system state, must be inherently classical. We also review results on teleportation protocols in the framework, and the fact that any non-classical theory without entanglement allows exponentially secure bit commitment in this framework. Finally, we sketch some ways of formulating our framework in terms of categories, and in this light consider the relation of our work to that of Abramsky, Coecke, Selinger, Baez and others on information processing and other aspects of theories formulated categorically." ] }
0912.5176
2950191682
The deletion channel is the simplest point-to-point communication channel that models lack of synchronization. Despite significant effort, little is known about its capacity, and even less about optimal coding schemes. In this paper we initiate a new systematic approach to this problem, by demonstrating that capacity can be computed in a series expansion for small deletion probability. We compute two leading terms of this expansion, and show that capacity is achieved, up to this order, by i.i.d. uniform random distribution of the input. We think that this strategy can be useful in a number of capacity calculations.
When this paper was nearing submission, a preprint by Kalai, Mitzenmacher and Sudan @cite_5 was posted online, proving a statement analogous to Theorem . The result of @cite_5 is, however, not the same as in Theorem : only the @math term of the series is proved in @cite_5 . Further, the two proofs are based on very different approaches.
{ "cite_N": [ "@cite_5" ], "mid": [ "2121189333" ], "abstract": [ "In this paper, we consider the capacity C of the binary deletion channel for the limiting case where the deletion probability p goes to 0. It is known that for any p < 1 2, the capacity satisfies C ≥ 1−H(p), where H is the standard binary entropy. We show that this lower bound is essentially tight in the limit, by providing an upper bound C ≤ 1−(1−o(1))H(p), where the o(1) term is understood to be vanishing as p goes to 0. Our proof utilizes a natural counting argument that should prove helpful in analyzing related channels." ] }
0912.3848
2953251974
We propose a novel method for constructing wavelet transforms of functions defined on the vertices of an arbitrary finite weighted graph. Our approach is based on defining scaling using the graph analogue of the Fourier domain, namely the spectral decomposition of the discrete graph Laplacian @math . Given a wavelet generating kernel @math and a scale parameter @math , we define the scaled wavelet operator @math . The spectral graph wavelets are then formed by localizing this operator by applying it to an indicator function. Subject to an admissibility condition on @math , this procedure defines an invertible transform. We explore the localization properties of the wavelets in the limit of fine scales. Additionally, we present a fast Chebyshev polynomial approximation algorithm for computing the transform that avoids the need for diagonalizing @math . We highlight potential applications of the transform through examples of wavelets on graphs corresponding to a variety of different problem domains.
Since the original introduction of wavelet theory for square-integrable functions defined on the real line, numerous authors have introduced extensions and related transforms for signals on the plane and higher-dimensional spaces. By taking separable products of one-dimensional wavelets, one can construct orthogonal families of wavelets in any dimension @cite_6 . However, this yields wavelets with an often undesirable bias toward the coordinate axis directions. A large family of alternative multiscale transforms has been developed and used extensively for image processing, including Laplacian pyramids @cite_13 , steerable wavelets @cite_40 , complex dual-tree wavelets @cite_43 , curvelets @cite_38 , and bandlets @cite_30 . Wavelet transforms have also been defined for certain non-Euclidean manifolds, most notably the sphere @cite_51 @cite_8 and other conic sections @cite_7 .
{ "cite_N": [ "@cite_30", "@cite_38", "@cite_7", "@cite_8", "@cite_6", "@cite_43", "@cite_40", "@cite_51", "@cite_13" ], "mid": [ "2064702003", "2069912449", "2097221133", "2136463719", "2115755118", "2129276048", "", "", "2103504761" ], "abstract": [ "This paper introduces orthogonal bandelet bases to approximate images having some geometrical regularity. These bandelet bases are computed by applying parametrized Alpert transform operators over an orthogonal wavelet basis. These bandeletization operators depend upon a multiscale geometric flow that is adapted to the image at each wavelet scale. This bandelet construction has a hierarchical structure over wavelet coefficients taking advantage of existing regularity among these coefficients. It is proved that C˛ -images having singularities along Calpha-curves are approximated in a best orthogonal bandelet basis with an optimal asymptotic error decay. Fast algorithms and compression applications are described.", "This paper introduces new tight frames of curvelets to address the problem of finding optimally sparse representations of objects with discontinuities along piecewise C 2 edges. Conceptually, the curvelet transform is a multiscale pyramid with many directions and positions at each length scale, and needle-shaped elements at fine scales. These elements have many useful geometric multiscale features that set them apart from classical multiscale representations such as wavelets. For instance, curvelets obey a parabolic scaling relation which says that at scale 2 -j , each element has an envelope that is aligned along a ridge of length 2 -j 2 and width 2 -j . We prove that curvelets provide an essentially optimal representation of typical objects f that are C 2 except for discontinuities along piecewise C 2 curves. Such representations are nearly as sparse as if f were not singular and turn out to be far more sparse than the wavelet decomposition of the object. For instance, the n-term partial reconstruction f C n obtained by selecting the n largest terms in the curvelet series obeys ∥f - f C n ∥ 2 L2 ≤ C . n -2 . (log n) 3 , n → ∞. This rate of convergence holds uniformly over a class of functions that are C 2 except for discontinuities along piecewise C 2 curves and is essentially optimal. In comparison, the squared error of n-term wavelet approximations only converges as n -1 as n → ∞, which is considerably worse than the optimal behavior.", "We review the coherent state ( or group-theoretical) construction of the continuous wavelet transform (CWT) on the two-sphere. Next, we describe the construction of a CWT on the upper sheet of a two-sheeted hyperboloid, emphasizing the similarities between the two cases. Finally, we give some indications on the CWT on a paraboloid and we introduce a unified approach to the CWT on conic sections.", "A new formalism is derived for the analysis and exact reconstruction of band-limited signals on the sphere with directional wavelets. It represents an evolution of a previously developed wavelet formalism developed by Antoine & Vandergheynst and The translations of the wavelets at any point on the sphere and their proper rotations are still defined through the continuous three-dimensional rotations. The dilations of the wavelets are directly defined in harmonic space through a new kernel dilation, which is a modification of an existing harmonic dilation. A family of factorized steerable functions with compact harmonic support which are suitable for this kernel dilation are first identified. 
A scale-discretized wavelet formalism is then derived, relying on this dilation. The discrete nature of the analysis scales allows the exact reconstruction of band-limited signals. A corresponding exact multi-resolution algorithm is finally described and an implementation is tested. The formalism is of interest notably for the denoising or the deconvolution of signals on the sphere with a sparse expansion in wavelets. In astrophysics, it finds a particular application for the identification of localized directional features in the cosmic microwave background data, such as the imprint of topological defects, in particular, cosmic strings, and for their reconstruction after separation from the other signal components.", "Introduction to a Transient World. Fourier Kingdom. Discrete Revolution. Time Meets Frequency. Frames. Wavelet Zoom. Wavelet Bases. Wavelet Packet and Local Cosine Bases. An Approximation Tour. Estimations are Approximations. Transform Coding. Appendix A: Mathematical Complements. Appendix B: Software Toolboxes.", "Abstract This paper describes a form of discrete wavelet transform, which generates complex coefficients by using a dual tree of wavelet filters to obtain their real and imaginary parts. This introduces limited redundancy (2m:1 for m-dimensional signals) and allows the transform to provide approximate shift invariance and directionally selective filters (properties lacking in the traditional wavelet transform) while preserving the usual properties of perfect reconstruction and computational efficiency with good well-balanced frequency responses. Here we analyze why the new transform can be designed to be shift invariant and describe how to estimate the accuracy of this approximation and design suitable filters to achieve this. We discuss two different variants of the new transform, based on odd even and quarter-sample shift (Q-shift) filters, respectively. We then describe briefly how the dual tree may be extended for images and other multi-dimensional signals, and finally summarize a range of applications of the transform that take advantage of its unique properties.", "", "", "We describe a technique for image encoding in which local operators of many scales but identical shape serve as the basis functions. The representation differs from established techniques in that the code elements are localized in spatial frequency as well as in space. Pixel-to-pixel correlations are first removed by subtracting a lowpass filtered copy of the image from the image itself. The result is a net data compression since the difference, or error, image has low variance and entropy, and the low-pass filtered image may represented at reduced sample density. Further data compression is achieved by quantizing the difference image. These steps are then repeated to compress the low-pass image. Iteration of the process at appropriately expanded scales generates a pyramid data structure. The encoding process is equivalent to sampling the image with Laplacian operators of many scales. Thus, the code tends to enhance salient image features. A further advantage of the present code is that it is well suited for many image analysis tasks as well as for image compression. Fast algorithms are described for coding and decoding." ] }
0912.3848
2953251974
We propose a novel method for constructing wavelet transforms of functions defined on the vertices of an arbitrary finite weighted graph. Our approach is based on defining scaling using the graph analogue of the Fourier domain, namely the spectral decomposition of the discrete graph Laplacian @math . Given a wavelet generating kernel @math and a scale parameter @math , we define the scaled wavelet operator @math . The spectral graph wavelets are then formed by localizing this operator by applying it to an indicator function. Subject to an admissibility condition on @math , this procedure defines an invertible transform. We explore the localization properties of the wavelets in the limit of fine scales. Additionally, we present a fast Chebyshev polynomial approximation algorithm for computing the transform that avoids the need for diagonalizing @math . We highlight potential applications of the transform through examples of wavelets on graphs corresponding to a variety of different problem domains.
Geller and Mayeli @cite_48 studied a construction for wavelets on compact differentiable manifolds that is formally similar to our approach on weighted graphs. In particular, they define scaling using a pseudodifferential operator @math , where @math is the manifold Laplace-Beltrami operator and @math is a scale parameter, and obtain wavelets by applying this operator to a delta impulse. They also study the localization of the resulting wavelets; however, the methods and theoretical results in their paper are different, as they work in the setting of smooth manifolds.
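As a concrete, deliberately naive illustration of the construction described above (applying a kernel of the Laplacian at a given scale to a delta impulse), the following Python/NumPy sketch forms a spectral graph wavelet on a small toy graph via a full eigendecomposition rather than the fast Chebyshev approximation mentioned in the abstract. The graph, the kernel g, and the scale are arbitrary choices made for this example, not values taken from the cited papers.

```python
import numpy as np

def graph_laplacian(W):
    """Unnormalized graph Laplacian L = D - W for a symmetric weight matrix W."""
    return np.diag(W.sum(axis=1)) - W

def spectral_wavelet(W, g, t, n):
    """Wavelet at scale t centered on vertex n, i.e. g(t L) applied to delta_n.

    Computed naively from the full eigendecomposition of L:
        psi(m) = sum_l g(t * lambda_l) * chi_l(n) * chi_l(m).
    """
    L = graph_laplacian(W)
    lam, chi = np.linalg.eigh(L)              # eigenvalues, orthonormal eigenvectors
    delta = np.zeros(W.shape[0])
    delta[n] = 1.0
    return chi @ (g(t * lam) * (chi.T @ delta))

if __name__ == "__main__":
    # Toy example: an unweighted path graph on 6 vertices.
    W = np.zeros((6, 6))
    for i in range(5):
        W[i, i + 1] = W[i + 1, i] = 1.0
    g = lambda x: x * np.exp(-x)              # a band-pass kernel with g(0) = 0
    psi = spectral_wavelet(W, g, t=2.0, n=2)  # wavelet centered on vertex 2
    print(np.round(psi, 3))
```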
{ "cite_N": [ "@cite_48" ], "mid": [ "2003334210" ], "abstract": [ "Let M be a smooth compact oriented Riemannian manifold, and let ΔM be the Laplace–Beltrami operator on M. Say ( 0 f S ( R ^+) ) , and that f (0) = 0. For t > 0, let Kt(x, y) denote the kernel of f (t2 ΔM). We show that Kt is well-localized near the diagonal, in the sense that it satisfies estimates akin to those satisfied by the kernel of the convolution operator f (t2Δ) on ( R ^n ) . We define continuous ( S )-wavelets on M, in such a manner that Kt(x, y) satisfies this definition, because of its localization near the diagonal. Continuous ( S )-wavelets on M are analogous to continuous wavelets on ( R ^n ) in ( S ) ( ( R ^n )). In particular, we are able to characterize the Holder continuous functions on M by the size of their continuous ( S )-wavelet transforms, for Holder exponents strictly between 0 and 1. If M is the torus ( T^2 ) or the sphere S2, and f (s) = se−s (the “Mexican hat” situation), we obtain two explicit approximate formulas for Kt, one to be used when t is large, and one to be used when t is small." ] }
0912.3852
2949240576
Scheduling policies for real-time systems exhibit threshold behavior that is related to the utilization of the task set they schedule, and in some cases this threshold is sharp. For the rate monotonic scheduling policy, we show that periodic workload with utilization less than a threshold @math can be scheduled almost surely and that all workload with utilization greater than @math is almost surely not schedulable. We study such sharp threshold behavior in the context of processor scheduling using static task priorities, not only for periodic real-time tasks but for aperiodic real-time tasks as well. The notion of a utilization threshold provides a simple schedulability test for most real-time applications. These results improve our understanding of scheduling policies and provide an interesting characterization of the typical behavior of policies. The threshold is sharp (small deviations around the threshold cause schedulability, as a property, to appear or disappear) for most policies; this is a happy consequence that can be used to address the limitations of existing utilization-based tests for schedulability. We demonstrate the use of such an approach for balancing power consumption with the need to meet deadlines in web servers.
In our work, we explore some interesting aspects surrounding task set utilization and schedulability for real-time systems. There has been extensive work on deriving utilization bounds for periodic task systems, starting with the work of Liu and Layland @cite_31 . Kuo and Mok @cite_19 made significant improvements on Liu and Layland's bound for rate monotonic scheduling by showing that schedulability is a function not of the number of individual tasks but of the number of harmonic chains. Bini, Buttazzo and Buttazzo @cite_20 have shown, using the hyperbolic bound, that the feasible region for schedulability under the rate monotonic scheduling policy can be larger if the product of the individual task utilizations (rather than their sum) is bounded. Wu, Liu and Zhao used techniques inspired by network calculus to derive schedulability bounds @cite_16 for static-priority scheduling. Their contribution is an alternative framework for deriving utilization bounds.
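For reference, the two closed-form sufficient tests mentioned above are easy to state in code. The short Python sketch below checks a set of task utilizations against the classical Liu and Layland bound, sum of U_i <= n(2^{1/n} - 1), and against the hyperbolic bound of Bini, Buttazzo and Buttazzo, which in its standard form accepts a task set when the product of (U_i + 1) is at most 2. The example utilization values are made up, and both tests are sufficient but not necessary conditions for rate monotonic schedulability.

```python
from math import prod

def liu_layland_ok(utilizations):
    """Sufficient test: sum(U_i) <= n * (2**(1/n) - 1)."""
    n = len(utilizations)
    return sum(utilizations) <= n * (2 ** (1.0 / n) - 1)

def hyperbolic_ok(utilizations):
    """Sufficient test (hyperbolic bound): prod(U_i + 1) <= 2."""
    return prod(u + 1 for u in utilizations) <= 2

if __name__ == "__main__":
    # Example task set given by utilizations U_i = C_i / T_i (made-up values).
    U = [0.25, 0.30, 0.18]
    print("Liu-Layland bound satisfied:", liu_layland_ok(U))
    print("Hyperbolic bound satisfied: ", hyperbolic_ok(U))
```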
{ "cite_N": [ "@cite_19", "@cite_31", "@cite_16", "@cite_20" ], "mid": [ "2168413716", "2109488193", "1629126420", "2167754005" ], "abstract": [ "A framework is given for discussing how to adjust load in order to handle periodic processes whose timing parameters vary with time. The schedulability of adjustable periodic processes by a preemptive fixed priority scheduler is formulated in terms of a configuration selection problem. Specifically, two process transformations are introduced for the purpose of deriving a bound for the achievable utilization factor of processes whose periods are related by harmonics. This result is then generalized so that the bound is applicable to any process set and an efficient algorithm to calculate the bound is provided. When the list of allowable configurations is implicitly given by a set of scalable periodic processes, the corresponding period assignment problem is shown to be NP-complete. The authors present an approximation algorithm for the period assignment problem for which some encouraging experimental results are included. >", "The problem of multiprogram scheduling on a single processor is studied from the viewpoint of the characteristics peculiar to the program functions that need guaranteed service. It is shown that an optimum fixed priority scheduler possesses an upper bound to processor utilization which may be as low as 70 percent for large task sets. It is also shown that full processor utilization can be achieved by dynamically assigning priorities on the basis of their current deadlines. A combination of these two scheduling techniques is also discussed.", "While utilization bound based schedulability test is simple and effective, it is often difficult to derive the bound itself. For its analytical complexity, utilization bound results are usually obtained on a case-by-case basis. In this paper, we develop a general framework that allows one to effectively derive schedulability bounds for a wide range of real-time systems with different workload patterns and schedulers. Our analytical model is capable of describing a wide range of tasks and schedulers' behaviors. We propose a new definition of utilization, called workload rate. While similar to utilization, workload rate enables flexible representation of different scheduling and workload scenarios and leads to uniform derivation of schedulability bounds. We derive a parameterized schedulability bound for static priority schedulers with arbitrary priority assignment. Existing utilization bounds for different priority assignments and task releasing patterns can be derived from our closed-form formula by simple assignments of proper parameters.", "We propose a novel schedulability analysis for verifying the feasibility of large periodic task sets under the rate monotonic algorithm when the exact test cannot be applied on line due to prohibitively long execution times. The proposed test has the same complexity as the original Liu and Layland (1973) bound, but it is less pessimistic, thus allowing it to accept task sets that would be rejected using the original approach. The performance of the proposed approach is evaluated with respect to the classical Liu and Layland method and theoretical bounds are derived as a function of n (the number of tasks) and for the limit case of n tending to infinity. The analysis is also extended to include aperiodic servers and blocking times due to concurrency control protocols. 
Extensive simulations on synthetic tasks sets are presented to compare the effectiveness of the proposed test with respect to the Liu and Layland method and the exact response time analysis." ] }
0912.3852
2949240576
Scheduling policies for real-time systems exhibit threshold behavior that is related to the utilization of the task set they schedule, and in some cases this threshold is sharp. For the rate monotonic scheduling policy, we show that periodic workload with utilization less than a threshold @math can be scheduled almost surely and that all workload with utilization greater than @math is almost surely not schedulable. We study such sharp threshold behavior in the context of processor scheduling using static task priorities, not only for periodic real-time tasks but for aperiodic real-time tasks as well. The notion of a utilization threshold provides a simple schedulability test for most real-time applications. These results improve our understanding of scheduling policies and provide an interesting characterization of the typical behavior of policies. The threshold is sharp (small deviations around the threshold cause schedulability, as a property, to appear or disappear) for most policies; this is a happy consequence that can be used to address the limitations of existing utilization-based tests for schedulability. We demonstrate the use of such an approach for balancing power consumption with the need to meet deadlines in web servers.
Our work presents a fresh perspective on scheduling for real-time systems. Only Lehoczky, Sha and Ding @cite_1 have attempted to obtain average-case results. For rate monotonic scheduling, they characterized the breakdown utilization of the policy for the Liu and Layland model of real-time tasks as @math . Breakdown utilization, however, is not the same as a utilization threshold, and the connection between the two needs to be examined more closely. The methodology we employ in obtaining our results is new and extremely general. The more traditional analysis techniques of time demand and resource supply did not allow reasoning, at this level of abstraction, about the average-case behavior of scheduling policies. Furthermore, our abstraction allows for reasoning about multi-stage and multiprocessor systems. Dutertre @cite_30 identified phase transitions in a non-preemptive recurring task scheduling problem. While Dutertre's work emphasized the empirical evidence for sharp thresholds, we have provided the mathematical basis for the existence of sharp thresholds.
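As an illustration of how breakdown utilization can be estimated empirically, the sketch below (our own hedged illustration, not the procedure of @cite_1 ; the period and share distributions are arbitrary choices) scales randomly generated task sets until the exact response-time test for rate monotonic scheduling just fails, and averages the resulting utilizations.

```python
import math, random

def rm_schedulable(tasks):
    """Exact response-time test for implicit-deadline fixed-priority (RM) scheduling.
    tasks: list of (C, T) sorted by period, highest priority first."""
    for i, (c, t) in enumerate(tasks):
        r = c
        while True:
            r_new = c + sum(math.ceil(r / tj) * cj for cj, tj in tasks[:i])
            if r_new > t:
                return False
            if r_new == r:
                break
            r = r_new
    return True

def breakdown_utilization(n, trials=200):
    total = 0.0
    for _ in range(trials):
        periods = sorted(random.uniform(1, 100) for _ in range(n))
        shares = [random.random() for _ in range(n)]
        s_sum = sum(shares)
        shares = [s / s_sum for s in shares]
        lo, hi = 0.0, 1.0                       # binary search on total utilization
        while hi - lo > 1e-4:
            u = (lo + hi) / 2
            tasks = [(u * s * t, t) for s, t in zip(shares, periods)]
            if rm_schedulable(tasks):
                lo = u
            else:
                hi = u
        total += lo
    return total / trials

# Often close to the ~0.88 figure reported for uniformly distributed periods.
print(breakdown_utilization(n=8))
```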
{ "cite_N": [ "@cite_30", "@cite_1" ], "mid": [ "2112780035", "2166440675" ], "abstract": [ "We present an approach to computing cyclic schedules online and in real time, while attempting to maximize a quality-of-service metric. The motivation is the detection of RF emitters using a schedule that controls the scanning of disjoint frequency bands. The problem is NP-hard, but it exhibits a so-called phase transition that can be exploited to rapidly find a \"good enough\" schedule. Our approach relies on a graph-based schedule-construction algorithm. Selecting the input to this algorithm in the phase-transition region ensures, with high probability, that a schedule will be found quickly, and gives a lower bound on the quality of service this schedule will achieve.", "An exact characterization of the ability of the rate monotonic scheduling algorithm to meet the deadlines of a periodic task set is represented. In addition, a stochastic analysis which gives the probability distribution of the breakdown utilization of randomly generated task sets is presented. It is shown that as the task set size increases, the task computation times become of little importance, and the breakdown utilization converges to a constant determined by the task periods. For uniformly distributed tasks, a breakdown utilization of 88 is a reasonable characterization. A case is shown in which the average-case breakdown utilization reaches the worst-case lower bound of C.L. Liu and J.W. Layland (1973). >" ] }
0912.3852
2949240576
Scheduling policies for real-time systems exhibit threshold behavior that is related to the utilization of the task set they schedule, and in some cases this threshold is sharp. For the rate monotonic scheduling policy, we show that periodic workload with utilization less than a threshold @math can be scheduled almost surely and that all workload with utilization greater than @math is almost surely not schedulable. We study such sharp threshold behavior in the context of processor scheduling using static task priorities, not only for periodic real-time tasks but for aperiodic real-time tasks as well. The notion of a utilization threshold provides a simple schedulability test for most real-time applications. These results improve our understanding of scheduling policies and provide an interesting characterization of the typical behavior of policies. The threshold is sharp (small deviations around the threshold cause schedulability, as a property, to appear or disappear) for most policies; this is a happy consequence that can be used to address the limitations of existing utilization-based tests for schedulability. We demonstrate the use of such an approach for balancing power consumption with the need to meet deadlines in web servers.
In the realm of aperiodic task sets, great progress has been made recently with the identification of aperiodic schedulability bounds for static priority scheduling @cite_8 . The initial result obtained by Abdelzaher and Lu @cite_7 was a constant-time utilization-based test for a set of aperiodic tasks. The original analysis has been extended to deal with end-to-end schedulability for multi-stage resource pipelines @cite_0 . It has also been shown that such analysis can be used to obtain non-utilization bounds for schedulability with static priority policies @cite_23 . In this article, we have studied single-node thresholds for the aperiodic task model. In future work, we will extend the ideas described in this article to resource pipelines and non-utilization metrics.
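The flavor of these aperiodic bounds can be seen in the admission-control sketch below. It assumes a simplified notion of synthetic utilization (the sum of C_i/D_i over admitted requests whose deadlines have not yet expired); the precise definition is the one in @cite_8 , and the class and parameter names here are ours.

```python
from math import sqrt

BOUND = 1.0 / (1.0 + sqrt(0.5))           # aperiodic bound, ~0.586

class AdmissionController:
    def __init__(self):
        self.live = []                    # (absolute deadline, C/D) of admitted requests

    def admit(self, now, exec_time, rel_deadline):
        # contributions of requests whose deadlines have passed are dropped
        self.live = [(d, u) for d, u in self.live if d > now]
        u_new = exec_time / rel_deadline
        if sum(u for _, u in self.live) + u_new <= BOUND:
            self.live.append((now + rel_deadline, u_new))
            return True
        return False

ac = AdmissionController()
print(ac.admit(now=0.0, exec_time=2.0, rel_deadline=10.0))   # True  (0.2 <= 0.586)
print(ac.admit(now=1.0, exec_time=5.0, rel_deadline=10.0))   # False (0.2 + 0.5 > 0.586)
print(ac.admit(now=12.0, exec_time=5.0, rel_deadline=10.0))  # True  (old request expired)
```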
{ "cite_N": [ "@cite_0", "@cite_23", "@cite_7", "@cite_8" ], "mid": [ "2111213036", "", "2096989632", "2109118153" ], "abstract": [ "This paper generalizes the notion of utilization bounds for schedulability of aperiodic tasks to the case of distributed resource systems. In the basic model, aperiodically arriving tasks are processed by multiple stages of a resource pipeline within end-to-end deadlines. The authors consider a multidimensional space in which each dimension represents the instantaneous utilization of a single stage. A feasible region is derived in this space such that all tasks meet their deadlines as long as pipeline resource consumption remains within the feasible region. The feasible region is a multidimensional extension of the single-resource utilization bound giving rise to a bounding surface in the utilization space rather than a scalar bound. Extensions of the analysis are provided to nonindependent tasks and arbitrary task graphs. We evaluate the performance of admission control using simulation, as well as demonstrate the applicability of these results to task schedulability analysis in the total ship computing environment envisioned by the US navy.", "", "The proliferation of high-volume time-critical Web services such as online trading calls for a scalable server design that allows meeting individual response-time guarantees of real time transactions. A main challenge is to honor these guarantees despite unpredictability in incoming server load. The extremely high volume of real-time service requests mandates constant-time scheduling and schedulability analysis algorithms (as opposed to polynomial or logarithmic ones in the number of current requests). The paper makes two major contributions towards developing an architecture and theoretical foundations for scalable real-time servers operating in dynamic environments. First, we derive a tight utilization bound for schedulability of aperiodic tasks (requests) that allows implementing a constant time schedulability test on the server. We demonstrate that Liu and Layland's schedulable utilization bound of ln 2 does not apply to aperiodic tasks, and prove that an optimal arrival-time independent scheduling policy will meet all aperiodic task deadlines if utilization is maintained below 1 1+ spl radic (1 2). Second, we show that aperiodic deadline-monotonic scheduling is the optimal arrival-time-independent scheduling policy for aperiodic tasks. This result is used to optimally prioritize server requests. Evaluation of a utilization control loop that maintains server utilization below the bound shows that the approach is effective in meeting all individual deadlines in a high performance real-time server.", "Real-time scheduling theory offers constant-time schedulability tests for periodic and sporadic tasks based on utilization bounds. Unfortunately, the periodicity or the minimal interarrival-time assumptions underlying these bounds make them inapplicable to a vast range of aperiodic workloads such as those seen by network routers, Web servers, and event-driven systems. This paper makes several important contributions toward real-time scheduling theory and schedulability analysis. We derive the first known bound for schedulability of aperiodic tasks. The bound is based on a utilization-like metric we call synthetic utilization, which allows implementing constant-time schedulability tests at admission control time. 
We prove that the synthetic utilization bound for deadline-monotonic scheduling of aperiodic tasks is 1 1+ spl radic 1 2. We also show that no other time-independent scheduling policy can have a higher schedulability bound. Similarly, we show that EDF has a bound of 1 and that no dynamic-priority policy has a higher bound. We assess the performance of the derived bound and conclude that it is very efficient in hit-ratio maximization." ] }
0912.3852
2949240576
Scheduling policies for real-time systems exhibit threshold behavior that is related to the utilization of the task set they schedule, and in some cases this threshold is sharp. For the rate monotonic scheduling policy, we show that periodic workload with utilization less than a threshold @math can be scheduled almost surely and that all workload with utilization greater than @math is almost surely not schedulable. We study such sharp threshold behavior in the context of processor scheduling using static task priorities, not only for periodic real-time tasks but for aperiodic real-time tasks as well. The notion of a utilization threshold provides a simple schedulability test for most real-time applications. These results improve our understanding of scheduling policies and provide an interesting characterization of the typical behavior of policies. The threshold is sharp (small deviations around the threshold cause schedulability, as a property, to appear or disappear) for most policies; this is a happy consequence that can be used to address the limitations of existing utilization-based tests for schedulability. We demonstrate the use of such an approach for balancing power consumption with the need to meet deadlines in web servers.
Sharp thresholds are indicators of phase transitions. Phase transitions are common in physical systems: freezing of water and superconductivity are phenomena that have temperature as the critical parameter. Phase transitions have been identified in many combinatorial optimization problems, especially constraint satisfaction problems @cite_12 @cite_28 @cite_4 . Phase transitions provide very interesting insight into the behavior of combinatorial optimization problems, of which scheduling is an instance, and may hold the key to faster, near-optimal solutions. Sharp thresholds for properties of random graphs were identified initially by Erdős and Rényi @cite_22 , and these results have been generalized by many mathematicians, including Friedgut and Kalai @cite_13 @cite_27 .
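For readers unfamiliar with sharp thresholds, the small Monte Carlo experiment below (illustrative only, unrelated to the cited proofs) shows the classical example of connectivity of the random graph G(n, p) jumping from unlikely to near-certain in a window around p = ln(n)/n; the window narrows as n grows.

```python
import math, random

def is_connected_gnp(n, p):
    """Sample G(n, p) and check connectivity with a depth-first search."""
    adj = [[] for _ in range(n)]
    for u in range(n):
        for v in range(u + 1, n):
            if random.random() < p:
                adj[u].append(v)
                adj[v].append(u)
    seen, stack = {0}, [0]
    while stack:
        u = stack.pop()
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                stack.append(v)
    return len(seen) == n

n = 200
pc = math.log(n) / n
for factor in (0.6, 0.8, 1.0, 1.2, 1.4):
    p = factor * pc
    hits = sum(is_connected_gnp(n, p) for _ in range(100))
    print(f"p = {factor:.1f} * ln(n)/n : P(connected) ~ {hits / 100:.2f}")
```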
{ "cite_N": [ "@cite_4", "@cite_22", "@cite_28", "@cite_27", "@cite_13", "@cite_12" ], "mid": [ "1968198053", "2908457301", "4983232", "", "1529556207", "1620410031" ], "abstract": [ "Determining the satisfiability of randomly generated Boolean expressions with k variables per clause is a popular test for the performance of search algorithms in artificial intelligence and computer science. It is known that for k = 2, formulas are almost always satisfiable when the ratio of clauses to variables is less than 1; for ratios larger than 1, the formulas are almost never satisfiable. Similar sharp threshold behavior is observed for higher values of k. Finite-size scaling, a method from statistical physics, can be used to characterize size-dependent effects near the threshold. A relationship can be drawn between thresholds and computational complexity.", "", "We report results from large-scale experiments in satisfiability testing. As has been observed by others, testing the satisfiability of random formulas often appears surprisingly easy. Here we show that by using the right distribution of instances, and appropriate parameter values, it is possible to generate random formulas that are hard, that is, for which satisfiability testing is quite difficult. Our results provide a benchmark for the evaluation of satisfiability-testing procedures.", "", "In their seminal work which initiated random graph theory Erdos and Renyi discovered that many graph properties have sharp thresholds as the number of vertices tends to infinity. We prove a conjecture of Linial that every monotone graph property has a sharp threshold. This follows from the following theorem. Let Vn(p) = 0, 1 n denote the Hamming space endowed with the probability measure μp defined by μp( 1, 2, . . . , n) = pk · (1 − p)n−k, where k = 1 + 2 + · · · + n. Let A be a monotone subset of Vn. We say that A is symmetric if there is a transitive permutation group Γ on 1, 2, . . . , n such that A is invariant under Γ. Theorem. For every symmetric monotone A, if μp(A) > then μq(A) > 1− for q = p+ c1 log(1 2 ) logn. (c1 is an absolute constant.)", "It is well known that for many NP-complete problems, such as K-Sat, etc., typical cases are easy to solve; so that computationally hard cases must be rare (assuming P = NP). This paper shows that NP-complete problems can be summarized by at least one \"order parameter\", and that the hard problems occur at a critical value of such a parameter. This critical value separates two regions of characteristically different properties. For example, for K-colorability, the critical value separates overconstrained from underconstrained random graphs, and it marks the value at which the probability of a solution changes abruptly from near 0 to near 1. It is the high density of well-separated almost solutions (local minima) at this boundary that cause search algorithms to \"thrash\". This boundary is a type of phase transition and we show that it is preserved under mappings between problems. We show that for some P problems either there is no phase transition or it occurs for bounded N (and so bounds the cost). These results suggest a way of deciding if a problem is in P or NP and why they are different." ] }
0912.3856
1911361111
The rapid growth of peer-to-peer (P2P) networks in the past few years has brought with it increases in transit cost to Internet Service Providers (ISPs), as peers exchange large amounts of traffic across ISP boundaries. This ISP oblivious behavior has resulted in misalignment of incentives between P2P networks--that seek to maximize user quality--and ISPs--that would seek to minimize costs. Can we design a P2P overlay that accounts for both ISP costs as well as quality of service, and attains a desired tradeoff between the two? We design a system, which we call MultiTrack, that consists of an overlay of multiple whose purpose is to align these goals. mTrackers split demand from users among different ISP domains while trying to minimize their individual costs (delay plus transit cost) in their ISP domain. We design the signals in this overlay of mTrackers in such a way that potentially competitive individual optimization goals are aligned across the mTrackers. The mTrackers are also capable of doing admission control in order to ensure that users who are from different ISP domains have a fair chance of being admitted into the system, while keeping costs in check. We prove analytically that our system is stable and achieves maximum utility with minimum cost. Our design decisions and control algorithms are validated by Matlab and ns-2 simulations.
There has been much recent work on P2P systems and traffic management, and we provide a discussion of the work that is most closely related to our problem. Fluid models of P2P systems, and their multi-phase (transient/steady-state) behavior, have been developed in @cite_22 @cite_15 . The results show how the supply of a file correlates with its demand, and that it is essentially the transient delays that dominate.
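The sketch below integrates a simplified fluid model in the spirit of @cite_15 , with x(t) downloaders and y(t) seeds; the parameter names, values, and the exact form of the equations are our own illustrative choices rather than a restatement of the cited model.

```python
# lam: arrival rate, c: download bandwidth, mu: upload bandwidth, eta: sharing
# effectiveness of downloaders, theta: downloader abort rate, gamma: seed
# departure rate. Download throughput is capped by the smaller of the total
# download capacity and the aggregate upload capacity.
def simulate(lam=1.0, c=2.0, mu=1.0, eta=0.8, theta=0.01, gamma=0.5,
             x0=0.0, y0=1.0, dt=0.01, t_end=50.0):
    x, y, t, trace = x0, y0, 0.0, []
    while t < t_end:
        service = min(c * x, mu * (eta * x + y))   # completed downloads per unit time
        dx = lam - theta * x - service
        dy = service - gamma * y
        x, y, t = max(x + dx * dt, 0.0), max(y + dy * dt, 0.0), t + dt
        trace.append((t, x, y))
    return trace

trace = simulate()
print(trace[-1])   # (time, downloaders, seeds) near steady state
```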
{ "cite_N": [ "@cite_15", "@cite_22" ], "mid": [ "2166245380", "2099376907" ], "abstract": [ "In this paper, we develop simple models to study the performance of BitTorrent, a second generation peer-to-peer (P2P) application. We first present a simple fluid model and study the scalability, performance and efficiency of such a file-sharing mechanism. We then consider the built-in incentive mechanism of BitTorrent and study its effect on network performance. We also provide numerical results based on both simulations and real traces obtained from the Internet.", "In this paper we model and study the performance of peer-to-peer (P2P) file sharing systems in terms of their 'service capacity'. We identify two regimes of interest: the transient and stationary regimes. We show that in both regimes, the performance of P2P systems exhibits a favorable scaling with the offered load. P2P systems achieve this by efficiently leveraging the service capacity of other peers, who possibly are concurrently downloading the same file. Therefore to improve the performance, it is important to design mechanisms to give peers incentives for sharing cooperation. One approach is to introduce mechanisms for resource allocation that are 'fair', such that a peer's performance improves with his contributions. We find that some intuitive 'fairness' notions may unexpectedly lead to 'unfair' allocations, which do not provide the right incentives for peers. Thus, implementation of P2P systems may want to compromise the degree of 'fairness' in favor of maintaining system robustness and reducing overheads." ] }
0912.3856
1911361111
The rapid growth of peer-to-peer (P2P) networks in the past few years has brought with it increases in transit cost to Internet Service Providers (ISPs), as peers exchange large amounts of traffic across ISP boundaries. This ISP oblivious behavior has resulted in misalignment of incentives between P2P networks--that seek to maximize user quality--and ISPs--that would seek to minimize costs. Can we design a P2P overlay that accounts for both ISP costs as well as quality of service, and attains a desired tradeoff between the two? We design a system, which we call MultiTrack, that consists of an overlay of multiple whose purpose is to align these goals. mTrackers split demand from users among different ISP domains while trying to minimize their individual costs (delay plus transit cost) in their ISP domain. We design the signals in this overlay of mTrackers in such a way that potentially competitive individual optimization goals are aligned across the mTrackers. The mTrackers are also capable of doing admission control in order to ensure that users who are from different ISP domains have a fair chance of being admitted into the system, while keeping costs in check. We prove analytically that our system is stable and achieves maximum utility with minimum cost. Our design decisions and control algorithms are validated by Matlab and ns-2 simulations.
Traffic management and load balancing have become important as P2P networks grow in size. There has been work on traffic management for streaming traffic @cite_7 @cite_21 @cite_6 . In particular, @cite_7 focuses on server-assisted streaming, while @cite_21 @cite_6 aim at fair resource allocation to peers using optimization decomposition. Closest to our setting is work such as @cite_12 @cite_1 @cite_14 , which studies the need to localize traffic within ISP domains. In @cite_12 , the focus is on allowing only local communications and optimizing performance through careful peer selection, while @cite_1 develops an optimization framework to balance load across ISPs using cost information. A different approach is taken in @cite_14 , wherein peers are selected based on nearness information provided by CDNs (if a CDN directs two peers to the same cache, they are probably nearby). Pricing and market mechanisms for P2P systems are of significant interest, and work such as @cite_19 uses ideas of currency exchange between peers to facilitate file transfers. The system we design uses prices between mTrackers that map to real-world costs of traffic exchange, but does not involve currency exchange between peers, which still use BitTorrent-style bilateral barter.
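To illustrate the kind of delay-versus-transit-cost tradeoff that motivates our mTracker design, the toy calculation below splits a fixed demand between two ISP domains so as to minimize an M/M/1-style delay term plus a per-unit transit price; the model and numbers are illustrative and are not the optimization actually used by MultiTrack.

```python
def total_cost(x, lam, cap, price):
    """Delay plus transit cost when x units stay in domain 1 and lam - x go to domain 2."""
    x2 = lam - x
    if x < 0 or x2 < 0 or x >= cap[0] or x2 >= cap[1]:
        return float("inf")
    delay = x / (cap[0] - x) + x2 / (cap[1] - x2)
    transit = price[0] * x + price[1] * x2
    return delay + transit

lam, cap, price = 8.0, (10.0, 6.0), (0.0, 0.3)   # domain 1 is local (free), domain 2 costs 0.3/unit
best = min((total_cost(x * 0.01, lam, cap, price), x * 0.01) for x in range(0, 801))
print(best)   # (minimum cost, amount of traffic kept in the local domain)
```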
{ "cite_N": [ "@cite_14", "@cite_7", "@cite_21", "@cite_1", "@cite_6", "@cite_19", "@cite_12" ], "mid": [ "2110980583", "2026716484", "2159016220", "2125890410", "2133365683", "2169646216", "2145116869" ], "abstract": [ "Peer-to-peer (P2P) systems, which provide a variety of popular services, such as file sharing, video streaming and voice-over-IP, contribute a significant portion of today's Internet traffic. By building overlay networks that are oblivious to the underlying Internet topology and routing, these systems have become one of the greatest traffic-engineering challenges for Internet Service Providers (ISPs) and the source of costly data traffic flows. In an attempt to reduce these operational costs, ISPs have tried to shape, block or otherwise limit P2P traffic, much to the chagrin of their subscribers, who consistently finds ways to eschew these controls or simply switch providers. In this paper, we present the design, deployment and evaluation of an approach to reducing this costly cross-ISP traffic without sacrificing system performance. Our approach recycles network views gathered at low cost from content distribution networks to drive biased neighbor selection without any path monitoring or probing. Using results collected from a deployment in BitTorrent with over 120,000 users in nearly 3,000 networks, we show that our lightweight approach significantly reduces cross-ISP traffic and, over 33 of the time, it selects peers along paths that are within a single autonomous system (AS). Further, we find that our system locates peers along paths that have two orders of magnitude lower latency and 30 lower loss rates than those picked at random, and that these high-quality paths can lead to significant improvements in transfer rates. In challenged settings where peers are overloaded in terms of available bandwidth, our approach provides 31 average download-rate improvement; in environments with large available bandwidth, it increases download rates by 207 on average (and improves median rates by 883", "Peer-to-peer streaming is a novel, low-cost, paradigm for large-scale video multicast. Viewers contribute their resources to an overlay network to act as relays for a real-time media stream. Early implementations fall short of the requirements of major content owners in terms of quality, reliability, and latency. In this work we show how adding a limited number of servers to a peer-to-peer streaming network can be used to enhance performance while preserving most of the benefits in terms of bandwidth cost savings. We present a theoretical model which is useful to estimate the number of servers needed to ensure fast connection times and improved error resilience. Experimental results show the proposed approach achieves 10times to 100times bandwidth cost savings compared to a content delivery network, and similar performance in terms of quality and startup latency.", "Peer-assisted streaming is a promising way for service providers to offer high-quality IPTV to consumers at reasonable cost. In peer-assisted streaming, the peers exchange video chunks with one another, and receive additional data from the central server as needed. In this paper, we analyze how to provision resources for the streaming system, in terms of the server capacity, the video quality, and the depth of the distribution trees that deliver the content. We derive the performance bounds for minimum server load, maximum streaming rate, and minimum tree depth under different peer selection constraints. 
Furthermore, we show that our performance bounds are actually tight, by presenting algorithms for constructing trees that achieve our bounds.", "As peer-to-peer (P2P) emerges as a major paradigm for scalable network application design, it also exposes significant new challenges in achieving efficient and fair utilization of Internet network resources. Being largely network-oblivious, many P2P applications may lead to inefficient network resource usage and or low application performance. In this paper, we propose a simple architecture called P4P to allow for more effective cooperative traffic control between applications and network providers. We conducted extensive simulations and real-life experiments on the Internet to demonstrate the feasibility and effectiveness of P4P. Our experiments demonstrated that P4P either improves or maintains the same level of application performance of native P2P applications, while, at the same time, it substantially reduces network provider cost compared with either native or latency-based localized P2P applications.", "In this paper, we study the problem of utility maximization in P2P systems, in which aggregate application-specific utilities are maximized by running distributed algorithms on P2P nodes, which are constrained by their uplink capacities. This may be understood as extending Kelly's seminal framework from single-path unicast over general topology to multi-path multicast over P2P topology, with network coding allowed. For certain classes of popular P2P topologies, we show that routing along a linear number of trees per source can achieve the largest rate region that can be possibly obtained by (multi-source) network coding. This simplification result allows us to develop a new multi-tree routing formulation for the problem. Despite of the negative results in literature on applying Primal-dual algorithms to maximize utility under multi-path settings, we have been able to develop a Primal-dual distributed algorithm to maximize the aggregate utility under the multi-path routing environments. Utilizing our proposed sufficient condition, we show global exponential convergence of the Primal-dual algorithm to the optimal solution under different P2P communication scenarios we study. The algorithm can be implemented by utilizing only end-to-end delay measurements between P2P nodes; hence, it can be readily deployed on today's Internet. To support this claim, we have implemented the Primal-dual algorithm for use in a peer-assisted multi-party conferencing system and evaluated its performance through actual experiments on a LAN testbed and the Internet.", "Peer-assisted content distribution matches user demand for content with available supply at other peers in the network. Inspired by this supply-and-demand interpretation of the nature of content sharing, we employ price theory to study peer-assisted content distribution. The market-clearing prices are those which align supply and demand, and the system is studied through the characterization of price equilibria. We discuss the efficiency and robustness gains of price-based multilateral exchange, and show that simply maintaining a single price per peer (even across multiple files) suffices to achieve these benefits. Our main contribution is a system design---PACE (Price-Assisted Content Exchange)---that effectively and practically realizes multilateral exchange. Its centerpiece is a market-based mechanism for exchanging currency for desired content, with a single, decentralized price per peer. 
Honest users are completely shielded from any notion of prices, budgeting, allocation, or other market issues, yet strategic or malicious clients cannot unduly damage the system's efficient operation. Our design encourages sharing of desirable content and network-friendly resource utilization. Bilateral barter-based systems such as BitTorrent have been attractive in large part because of their simplicity. Our research takes a significant step in understanding the efficiency and robustness gains possible with multilateral exchange.", "Peer-to-peer (P2P) systems, which are realized as overlays on top of the underlying Internet routing architecture, contribute a significant portion of today's Internet traffic. While the P2P users are a good source of revenue for the Internet Service Providers (ISPs), the immense P2P traffic also poses a significant traffic engineering challenge to the ISPs. This is because P2P systems either implement their own routing in the overlay topology or may use a P2P routing underlay [1], both of which are largely independent of the Internet routing, and thus impedes the ISP's traffic engineering capabilities. On the other hand, P2P users are primarily interested in finding their desired content quickly, with good performance. But as the P2P system has no access to the underlying network, it either has to measure the path performance itself or build its overlay topology agnostic of the underlay. This situation is disadvantageous for both the ISPs and the P2P users. To overcome this, we propose and evaluate the feasibility of a solution where the ISP offers an \"oracle\" to the P2P users. When the P2P user supplies the oracle with a list of possible P2P neighbors, the oracle ranks them according to certain criteria, like their proximity to the user or higher bandwidth links. This can be used by the P2P user to choose appropriate neighbors, and therefore improve its performance. The ISP can use this mechanism to better manage the immense P2P traffic, e.g., to keep it inside its network, or to direct it along a desired path. The improved network utilization will also enable the ISP to provide better service to its customers." ] }
0912.4248
2090869503
Usage of scanning coordinate-measuring machines for inspection of screw threads has become a common practice nowadays. Compared to touch trigger probing, scanning capabilities allow to speed up measuring process while still maintaining high accuracy. However, in some cases accuracy drastically depends on the scanning speed. In this paper a compensation method is proposed allowing to reduce the influence of some dynamic effects while scanning screw threads on coordinate-measuring machines.
Due to the inertial forces that arise during high-speed scanning of freeform workpieces with small radii of curvature on coordinate-measuring machines, it is difficult for the control software to maintain stable contact between the probe and the surface. This is one of the reasons preventing accurate scanning at high speeds @cite_2 . Dynamic effects have been studied by many researchers. The application of signal analysis and processing theory to dimensional metrology was studied in @cite_4 . Pereira and Hocken @cite_2 proposed classification and compensation methods for the dynamic errors of scanning coordinate-measuring machines; they used Taylor series and Fourier analysis to compensate measurements of circular features. ISO 10360 @cite_9 defines acceptance tests for scanning coordinate-measuring machines. Farooqui and Morse @cite_5 proposed reference artifacts and tests to compare the scanning performance of different coordinate-measuring machines. The authors of @cite_12 conducted experimental research and concluded that, at high speeds, the reduction of scanning time for freeform surfaces is limited by the acceleration and deceleration of the probing system.
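The numerical sketch below conveys the idea behind harmonic compensation of circular features on synthetic data; it is our own illustration and not the procedure of Pereira and Hocken. Radial deviations of a scanned circle are decomposed into a few Fourier harmonics over the angle, and the fitted low-order systematic component, of the kind associated with speed-dependent dynamic errors, is subtracted.

```python
import numpy as np

theta = np.linspace(0, 2 * np.pi, 720, endpoint=False)
r_true = 25.0                                       # nominal radius, mm
dynamic_error = 0.004 * np.cos(2 * theta + 0.7)     # synthetic 2nd-harmonic error
noise = np.random.normal(0, 0.0005, theta.size)
r_meas = r_true + dynamic_error + noise

# Least-squares fit of harmonics 1..3 of the radial deviation
dev = r_meas - r_meas.mean()
A = np.column_stack([f(k * theta) for k in (1, 2, 3) for f in (np.cos, np.sin)])
coeffs, *_ = np.linalg.lstsq(A, dev, rcond=None)
r_comp = r_meas - A @ coeffs                        # compensated radial profile

print("std before:", dev.std(), "std after:", (r_comp - r_comp.mean()).std())
```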
{ "cite_N": [ "@cite_4", "@cite_9", "@cite_2", "@cite_5", "@cite_12" ], "mid": [ "2016762411", "", "2088650260", "1983915599", "2100487443" ], "abstract": [ "An approach to dynamic evaluation of measurement systems is presented. On the one hand, it separates physical experiments, analysis and signal processing methods into successive steps of evaluation. On the other hand, the structure allows for resolution of an entire measurement system into its components for dedicated analyses. There is no limitation to particular applications except that it should be possible to model the response of the measurement system with differential equations with constant coefficients. Proposed tools such as estimation of error and uncertainty and direct mapping methods for synthesis of signal restoration filters are new or recently published, while others like system identification are well known but not previously systematically used in the context of calibration.", "", "The necessity for reducing production cycle times while achieving better quality compels metrologists to look for new and improved ways to perform the inspection of parts manufactured. The advent of coordinate measuring machines led to a significant boost in accuracy, flexibility and reliability for measurement tasks. However, these machines are in some instances lagging behind machine tools and need improvement. One major limitation is the execution of measurements with low uncertainty at a reasonably fast rate to make it possible to measure more parts. This would ensure more reliability to the end product and better information to control the manufacturing process. Coordinate measuring machines with scanning capabilities offer the option to output high data density for parts at high speed. However, they are still considerably less accurate at faster measurement speeds and need to be improved. In this work a scanning measuring machine was extensively tested and a compensation model that accounts for part of its dynamic errors affecting measurement of circular features is proposed.", "Coordinate measuring machines (CMMs) with continuous-contact scanning capabilities are experiencing more and more use in a wide variety of discrete-part manufacturing industries. Many users of these CMMs, if asked, will say that their metrology requirements include high density scans at high speeds with high accuracy. These requirements are in conflict, as there will be some point at which the accuracy of the scanned data begins to decrease with an increase in the scanning speed. This paper addresses the effects of scanning speed on the performance of different CMMs using contact sensors. The use of a family of artifacts is proposed in order to evaluate the relative scanning performance as a function of scanning speed and the direction of the scan within the CMM volume. The artifacts that are developed for these tests have a sinusoidal waveform superimposed on a flat surface. This paper describes a series of experiments that utilize these artifacts and assess the ability of these tests to capture how a scanning CMM will perform when measuring actual parts.", "Common usage of Coordinate Measuring Machines in industry means that they are applied not only for the measurement of the details, but also for the digitalization of the geometrical elements. Nowadays, the development of scanning methods enables the easy measurement of free surfaces. 
Digitalization of such surfaces with scanning probe heads is much faster than touch trigger probing, while the uncertainty level remains almost the same. However, the accuracy of points localization is dependent upon the scanning speed." ] }
0912.4506
2129050115
New algorithms and optimization techniques are needed to balance the accelerating trend towards bandwidth-starved multicore chips. It is well known that the performance of stencil codes can be improved by temporal blocking, lessening the pressure on the memory interface. We introduce a new pipelined approach that makes explicit use of shared caches in multicore environments and minimizes synchronization and boundary overhead. For clusters of shared-memory nodes we demonstrate how temporal blocking can be employed successfully in a hybrid shared distributed-memory environment.
Improving the performance of stencil codes by temporal blocking is not a new idea, and many publications have studied different methods in depth @cite_1 @cite_2 @cite_9 @cite_8 @cite_7 . However, the explicit use of the shared caches provided by modern multicore CPUs has not yet been investigated in great detail. Ref. @cite_3 describes a "wavefront" method similar to the one introduced here. However, that work was motivated mainly by the architectural peculiarities of multicore CPUs, and does not elaborate on specific optimizations like avoiding boundary copies and optimizing thread synchronization. Our investigation is more general and explores a much larger parameter space. Finally, there is, to our knowledge, as yet no systematic analysis of the prospects of temporal blocking on hybrid architectures, notably clusters of shared-memory compute nodes.
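For concreteness, the sketch below shows the index bookkeeping behind temporal blocking for a 1D three-point Jacobi stencil: nt time steps for a spatial block are computed from an input halo of nt cells on each open side, so each block needs to be loaded once per nt sweeps rather than once per sweep. This is a generic Python illustration of the idea (correctness only; the memory-traffic benefit requires a compiled implementation), not the wavefront scheme of this paper or of @cite_3 .

```python
import numpy as np

def jacobi_naive(u, nt):
    """nt sweeps of a 3-point Jacobi stencil with fixed (Dirichlet) end points."""
    u = u.copy()
    for _ in range(nt):
        u[1:-1] = 0.25 * u[:-2] + 0.5 * u[1:-1] + 0.25 * u[2:]
    return u

def jacobi_blocked(u, nt, block=64):
    """Same result as jacobi_naive, but each spatial block is swept nt times
    on a private copy that carries a halo of nt extra cells per open side."""
    n = len(u)
    out = u.copy()                                 # keeps the fixed end points
    for start in range(1, n - 1, block):
        end = min(start + block, n - 1)            # this pass produces out[start:end]
        lo, hi = max(start - nt, 0), min(end + nt, n)
        tile = u[lo:hi].copy()
        for _ in range(nt):
            tile[1:-1] = 0.25 * tile[:-2] + 0.5 * tile[1:-1] + 0.25 * tile[2:]
        out[start:end] = tile[start - lo:end - lo]
    return out

rng = np.random.default_rng(0)
u0 = rng.random(1000)
assert np.allclose(jacobi_naive(u0, nt=8), jacobi_blocked(u0, nt=8))
```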
{ "cite_N": [ "@cite_7", "@cite_8", "@cite_9", "@cite_1", "@cite_3", "@cite_2" ], "mid": [ "2129471558", "1968013322", "", "2160106616", "2150319905", "2108730477" ], "abstract": [ "Stencil-based kernels constitute the core of many scientific applications on block-structured grids. Unfortunately, these codes achieve a low fraction of peak performance, due primarily to the disparity between processor and main memory speeds. We examine several optimizations on both the conventional cache-based memory systems of the Itanium 2, Opteron, and Power5, as well as the heterogeneous multicore design of the Cell processor. The optimizations target cache reuse across stencil sweeps, including both an implicit cache oblivious approach and a cache-aware algorithm blocked to match the cache structure. Finally, we consider stencil computations on a machine with an explicitly-managed memory hierarchy, the Cell processor. Overall, results show that a cache-aware approach is significantly faster than a cache oblivious approach and that the explicitly managed memory on Cell is more efficient: Relative to the Power5, it has almost 2x more memory bandwidth and is 3.7x faster.", "We present a cache oblivious algorithm for stencil computations, which arise for example in finite-difference methods. Our algorithm applies to arbitrary stencils in n-dimensional spaces. On an \"ideal cache\" of size Z, our algorithm saves a factor of Θ(Z1 n) cache misses compared to a naive algorithm, and it exploits temporal locality optimally throughout the entire memory hierarchy.", "", "Time skewing is a compile-time optimization that can provide arbitrarily high cache hit rates for a class of iterative calculations, given a sufficient number of time steps and sufficient cache memory. Thus, it can eliminate processor idle time caused by inadequate main memory bandwidth. In this article, we give a generalization of time skewing for multiprocessor architectures, and discuss time skewing for multilevel caches. Our generalization for multiprocessors lets us eliminate processor idle time caused by any combination of inadequate main memory bandwidth, limited network bandwidth, and high network latency, given a sufficiently large problem and sufficient cache. As in the uniprocessor case, the cache requirement grows with the machine balance rather than the problem size. Our techniques for using multilevel caches reduce the LI cache requirement, which would otherwise be unacceptably high for some architectures when using arrays of high dimension.", "We present a pipelined wavefront parallelization approach for stencil-based computations. Within a fixed spatial domain successive wavefronts are executed by threads scheduled to a multicore processor chip with a shared outer level cache. By re-using data from cache in the successive wavefronts this multicore-aware parallelization strategy employs temporal blocking in a simple and efficient way. We use the Jacobi algorithm in three dimensions as a prototype or stencil-based computations and prove the efficiency of our approach on the latest generations of Intel's x86 quad- and hexa-core processors.", "We present a strategy, called recursive prismatic time skewing, that increase temporal reuse at all memory hierarchy levels, thus improving the performance of scientific codes that use iterative methods. Prismatic time skewing partitions iteration space of multiple loops into skewed prisms with both spatial and temporal (or convergence) dimensions. 
Novel aspects of this work include: multi-dimensional loop skewing; handling carried data dependences in the skewed loops without additional storage; bi-directional skewing to accommodate periodic boundary conditions; and an analysis and transformation strategy that works inter-procedurally. We combine prismatic skewing with a recursive blocking strategy to boost reuse at all levels in a memory hierarchy. A preliminary evaluation of these techniques shows significant performance improvements compared both to original codes and to methods described previously in the literature. With an inter-procedural application of our techniques, we were able to reduce total primary cache misses of a large application code by 27 and secondary cache misses by 119 ." ] }
0912.4529
1565922019
In this paper, we first introduce the concept of an adaptive MRA (AMRA) structure which is a variant of the classical MRA structure suited to the main goal of a fast flexible decomposition strategy adapted to the data at each decomposition level. We then study this novel methodology for the general case of affine-like systems, and derive a Unitary Extension Principle (UEP) for filter design. Finally, we apply our results to the directional representation system of shearlets. This leads to a comprehensive theory for fast decomposition algorithms associated with shearlet systems which encompasses tight shearlet frames with spatially compactly supported generators within such an AMRA structure. Also shearlet-like systems associated with parabolic scaling and unimodular matrices optimally close to rotation as well as 3D shearlet systems are studied within this framework.
Several research teams have previously designed MRA decomposition algorithms based on shearlets: we mention the affine-system-based approach @cite_7 , the subdivision-based approach @cite_26 , and the approach based on separability @cite_40 . However, none of these approaches satisfies all items of our list of desiderata (see Subsection ). Further non-MRA-based approaches were undertaken, for instance, in @cite_4 . In our opinion, these pioneering efforts demonstrate real progress in directional representation, but further progress is needed to arrive at a comprehensive and in all respects satisfactory study of a fast spatial-domain shearlet transform within an appropriate MRA framework, with careful attention to mathematical exactness, faithfulness to the continuum transform, and computational feasibility, ideally fulfilling all our desiderata.
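For orientation, we recall the standard continuous shearlet system underlying all of the approaches above (this is textbook background, not the construction introduced in this paper): shearlets arise from a single generator via parabolic scaling and shearing by a unimodular matrix,

```latex
% Standard continuous shearlet system -- background, not this paper's new construction.
\psi_{a,s,t}(x) \;=\; a^{-3/4}\,\psi\!\left(A_a^{-1} S_s^{-1}(x - t)\right),
\qquad
A_a = \begin{pmatrix} a & 0 \\ 0 & \sqrt{a} \end{pmatrix}
\ \text{(parabolic scaling)},
\qquad
S_s = \begin{pmatrix} 1 & s \\ 0 & 1 \end{pmatrix}
\ \text{(shear)} .
```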
{ "cite_N": [ "@cite_40", "@cite_26", "@cite_4", "@cite_7" ], "mid": [ "1553997149", "2962880471", "2097061348", "2154507364" ], "abstract": [ "It is now widely acknowledged that analyzing the intrinsic geometrical features of an underlying image is essentially needed in image processing. In order to achieve this, several directional image representation schemes have been proposed. In this report, we develop the discrete shearlet transform (DST) which provides efficient multiscale directional representation. We also show that the implementation of the transform is built in the discrete framework based on a multiresolution analysis. We further assess the performance of the DST in image denoising and approximation applications. In image approximation, our adaptive approximation scheme using the DST significantly outperforms the wavelet transform (up to 3.0dB) and other competing transforms. Also, in image denoising, the DST compares favorably with other existing methods in the literature.", "In this paper, we propose a solution for a fundamental problem in computational harmonic analysis, namely, the construction of a multiresolution analysis with directional components. We will do so by constructing subdivision schemes which provide a means to incorporate directionality into the data and thus the limit function. We develop a new type of nonstationary bivariate subdivision scheme, which allows us to adapt the subdivision process depending on directionality constraints during its performance, and we derive a complete characterization of those masks for which these adaptive directional subdivision schemes converge. In addition, we present several numerical examples to illustrate how this scheme works. Secondly, we describe a fast decomposition associated with a sparse directional representation system for two-dimensional data, where we focus on the recently introduced sparse directional representation system of shearlets. In fact, we show that the introduced adaptive directional subdivision sch...", "Abstract In spite of their remarkable success in signal processing applications, it is now widely acknowledged that traditional wavelets are not very effective in dealing multidimensional signals containing distributed discontinuities such as edges. To overcome this limitation, one has to use basis elements with much higher directional sensitivity and of various shapes, to be able to capture the intrinsic geometrical features of multidimensional phenomena. This paper introduces a new discrete multiscale directional representation called the discrete shearlet transform. This approach, which is based on the shearlet transform, combines the power of multiscale methods with a unique ability to capture the geometry of multidimensional data and is optimally efficient in representing images containing edges. We describe two different methods of implementing the shearlet transform. The numerical experiments presented in this paper demonstrate that the discrete shearlet transform is very competitive in denoising applications both in terms of performance and computational efficiency.", "Abstract Affine systems are reproducing systems of the form A C = D c T k ψ l : 1 ⩽ l ⩽ L , k ∈ Z n , c ∈ C , which arise by applying lattice translation operators T k to one or more generators ψ l in L 2 ( R n ) , followed by the application of dilation operators D c , associated with a countable set C of invertible matrices. 
In the wavelet literature, C is usually taken to be the group consisting of all integer powers of a fixed expanding matrix. In this paper, we develop the properties of much more general systems, for which C = c = a b : a ∈ A , b ∈ B where A and B are not necessarily commuting matrix sets. C need not contain a single expanding matrix. Nonetheless, for many choices of A and B, there are wavelet systems with multiresolution properties very similar to those of classical dyadic wavelets. Typically, A expands or contracts only in certain directions, while B acts by volume-preserving maps in transverse directions. Then the resulting wavelets exhibit the geometric properties, e.g., directionality, elongated shapes, scales, oscillations, recently advocated by many authors for multidimensional signal and image processing applications. Our method is a systematic approach to the theory of affine-like systems yielding these and more general features." ] }
0912.4529
1565922019
In this paper, we first introduce the concept of an adaptive MRA (AMRA) structure which is a variant of the classical MRA structure suited to the main goal of a fast flexible decomposition strategy adapted to the data at each decomposition level. We then study this novel methodology for the general case of affine-like systems, and derive a Unitary Extension Principle (UEP) for filter design. Finally, we apply our results to the directional representation system of shearlets. This leads to a comprehensive theory for fast decomposition algorithms associated with shearlet systems which encompasses tight shearlet frames with spatially compactly supported generators within such an AMRA structure. Also shearlet-like systems associated with parabolic scaling and unimodular matrices optimally close to rotation as well as 3D shearlet systems are studied within this framework.
Particular credit is due to the work in @cite_26 , in which the adaptivity ideas were already present. The main difference from the present paper is the additional freedom provided by the AMRA structure.
{ "cite_N": [ "@cite_26" ], "mid": [ "2962880471" ], "abstract": [ "In this paper, we propose a solution for a fundamental problem in computational harmonic analysis, namely, the construction of a multiresolution analysis with directional components. We will do so by constructing subdivision schemes which provide a means to incorporate directionality into the data and thus the limit function. We develop a new type of nonstationary bivariate subdivision scheme, which allows us to adapt the subdivision process depending on directionality constraints during its performance, and we derive a complete characterization of those masks for which these adaptive directional subdivision schemes converge. In addition, we present several numerical examples to illustrate how this scheme works. Secondly, we describe a fast decomposition associated with a sparse directional representation system for two-dimensional data, where we focus on the recently introduced sparse directional representation system of shearlets. In fact, we show that the introduced adaptive directional subdivision sch..." ] }
0912.2404
1484147144
In this paper, we identify a fundamental algorithmic problem that we term succinct dynamic covering (SDC), arising in many modern-day web applications, including ad-serving and online recommendation systems in eBay and Netflix. Roughly speaking, SDC applies two restrictions to the well-studied Max-Coverage problem: Given an integer k, X= 1,2,...,n and I= S_1, ..., S_m , S_i a subset of X, find a subset J of I, such that |J| <= k and the union of S in J is as large as possible. The two restrictions applied by SDC are: (1) Dynamic: At query-time, we are given a query Q, a subset of X, and our goal is to find J such that the intersection of Q with the union of S in J is as large as possible; (2) Space-constrained: We don't have enough space to store (and process) the entire input; specifically, we have o(mn), and maybe as little as O((m+n)polylog(mn)) space. The goal of SDC is to maintain a small data structure so as to answer most dynamic queries with high accuracy. We call such a scheme a Coverage Oracle. We present algorithms and complexity results for coverage oracles. We present deterministic and probabilistic near-tight upper and lower bounds on the approximation ratio of SDC as a function of the amount of space available to the oracle. Our lower bound results show that to obtain constant-factor approximations we need Omega(mn) space. Fortunately, our upper bounds present an explicit tradeoff between space and approximation ratio, allowing us to determine the amount of space needed to guarantee certain accuracy.
Our study of the tradeoff between space and approximation ratio is in the spirit of the work of Thorup and Zwick @cite_5 on distance oracles. They considered the problem of compressing a graph @math into a small data structure, in such a way that the data structure can be used to approximately answer queries for the distance between pairs of nodes in @math . Similar to our results, they showed matching upper and lower bounds on the space needed for compressing the graph subject to preserving a certain approximation ratio. Moreover, similarly to our upper bounds for SDC, their distance oracles benefit from a speedup at query time as approximation ratio is sacrificed for space.
{ "cite_N": [ "@cite_5" ], "mid": [ "2045446569" ], "abstract": [ "Let G = (V,E) be an undirected weighted graph with vVv = n and vEv = m. Let k ≥ 1 be an integer. We show that G = (V,E) can be preprocessed in O(kmn1 k) expected time, constructing a data structure of size O(kn1p1 k), such that any subsequent distance query can be answered, approximately, in O(k) time. The approximate distance returned is of stretch at most 2k−1, that is, the quotient obtained by dividing the estimated distance by the actual distance lies between 1 and 2k−1. A 1963 girth conjecture of Erdos, implies that Ω(n1p1 k) space is needed in the worst case for any real stretch strictly smaller than 2kp1. The space requirement of our algorithm is, therefore, essentially optimal. The most impressive feature of our data structure is its constant query time, hence the name \"oracle\". Previously, data structures that used only O(n1p1 k) space had a query time of Ω(n1 k).Our algorithms are extremely simple and easy to implement efficiently. They also provide faster constructions of sparse spanners of weighted graphs, and improved tree covers and distance labelings of weighted or unweighted graphs." ] }
0912.2404
1484147144
In this paper, we identify a fundamental algorithmic problem that we term succinct dynamic covering (SDC), arising in many modern-day web applications, including ad-serving and online recommendation systems in eBay and Netflix. Roughly speaking, SDC applies two restrictions to the well-studied Max-Coverage problem: Given an integer k, X= 1,2,...,n and I= S_1, ..., S_m , S_i a subset of X, find a subset J of I, such that |J| <= k and the union of S in J is as large as possible. The two restrictions applied by SDC are: (1) Dynamic: At query-time, we are given a query Q, a subset of X, and our goal is to find J such that the intersection of Q with the union of S in J is as large as possible; (2) Space-constrained: We don't have enough space to store (and process) the entire input; specifically, we have o(mn), and maybe as little as O((m+n)polylog(mn)) space. The goal of SDC is to maintain a small data structure so as to answer most dynamic queries with high accuracy. We call such a scheme a Coverage Oracle. We present algorithms and complexity results for coverage oracles. We present deterministic and probabilistic near-tight upper and lower bounds on the approximation ratio of SDC as a function of the amount of space available to the oracle. Our lower bound results show that to obtain constant-factor approximations we need Omega(mn) space. Fortunately, our upper bounds present an explicit tradeoff between space and approximation ratio, allowing us to determine the amount of space needed to guarantee certain accuracy.
Previous work has studied the set cover problem under streaming models. One model studied in @cite_7 @cite_13 assumes that the sets are known in advance, that only elements arrive online, and that the algorithms do not know in advance which subset of elements will arrive. An alternative model assumes that the elements are known in advance and sets arrive in a streaming fashion @cite_9 . Our work differs from these works in that SDC operates under a storage budget, so not all sets can be stored; moreover, SDC needs to provide a good cover for every possible dynamic query input.
{ "cite_N": [ "@cite_9", "@cite_13", "@cite_7" ], "mid": [ "112063162", "1860570137", "2021185572" ], "abstract": [ "We generalize the graph streaming model to hypergraphs. In this streaming model, hyperedges are arriving online and any computation has to be done on-the-fly using a small amount of space. Each hyperedge can be viewed as a set of elements (nodes), so we refer to our proposed model as the “set-streaming” model of computation. We consider the problem of “maximum coverage”, in which k sets have to be selected that maximize the total weight of the covered elements. In the set-streaming model of computation, we show that our algorithm for maximumcoverage achieves an approximation factor of 14 . When multiple passes are allowed, we also provide a Θ(log n) approximation algorithm for the set-cover. We next consider a multi-topic blog-watch application, an extension of blogalert like applications for handling simultaneous multipletopic requests. We show how the problems of maximumcoverage and set-cover in the set-streaming model can be utilized to give efficient online solutions to this problem. We verify the effectiveness of our methods both on synthetic and real weblog data.", "We study a wide range of online covering and packing optimization problems. In an online covering problem a linear cost function is known in advance, but the linear constraints that define the feasible solution space are given one by one in an online fashion. In an online packing problem the profit function as well as the exact packing constraints are not fully known in advance. In each round additional information about the profit function and the constraints is revealed. We provide general deterministic schemes for online fractional covering and packing problems. We also provide deterministic algorithms for a couple of integral covering and packing problems.", "Let X=[1,2,•••,n] be a ground set of n elements, and let S be a family of subsets of X, |S|=m, with a positive cost cS associated with each S ∈ S.Consider the following online version of the set cover problem, described as a game between an algorithm and an adversary. An adversary gives elements to the algorithm from X one-by-one. Once a new element is given, the algorithm has to cover it by some set of S containing it. We assume that the elements of X and the members of S are known in advance to the algorithm, however, the set X' ⊆ X of elements given by the adversary is not known in advance to the algorithm. (In general, X' may be a strict subset of X.) The objective is to minimize the total cost of the sets chosen by the algorithm. Let C denote the family of sets in S that the algorithm chooses. At the end of the game the adversary also produces (off-line) a family of sets COPT that covers X'. The performance of the algorithm is the ratio between the cost of C and the cost of COPT. The maximum ratio, taken over all input sequences, is the competitive ratio of the algorithm.We present an O(log m log n) competitive deterministic algorithm for the problem, and establish a nearly matching Ω(log n log m log log m + log log n) lower bound for all interesting values of m and n. The techniques used are motivated by similar techniques developed in computational learning theory for online prediction (e.g., the WINNOW algorithm) together with a novel way of converting the fractional solution they supply into a deterministic online algorithm." ] }
0912.2404
1484147144
In this paper, we identify a fundamental algorithmic problem that we term succinct dynamic covering (SDC), arising in many modern-day web applications, including ad-serving and online recommendation systems in eBay and Netflix. Roughly speaking, SDC applies two restrictions to the well-studied Max-Coverage problem: Given an integer k, X= 1,2,...,n and I= S_1, ..., S_m , S_i a subset of X, find a subset J of I, such that |J| <= k and the union of S in J is as large as possible. The two restrictions applied by SDC are: (1) Dynamic: At query-time, we are given a query Q, a subset of X, and our goal is to find J such that the intersection of Q with the union of S in J is as large as possible; (2) Space-constrained: We don't have enough space to store (and process) the entire input; specifically, we have o(mn), and maybe as little as O((m+n)polylog(mn)) space. The goal of SDC is to maintain a small data structure so as to answer most dynamic queries with high accuracy. We call such a scheme a Coverage Oracle. We present algorithms and complexity results for coverage oracles. We present deterministic and probabilistic near-tight upper and lower bounds on the approximation ratio of SDC as a function of the amount of space available to the oracle. Our lower bound results show that to obtain constant-factor approximations we need Omega(mn) space. Fortunately, our upper bounds present an explicit tradeoff between space and approximation ratio, allowing us to determine the amount of space needed to guarantee certain accuracy.
Another related area is that of nearest neighbor search. It is easy to see that the problem with @math corresponds to nearest neighbor search under the dot-product similarity measure, i.e., @math . However, by a result of Charikar @cite_1 , there exists no locality-sensitive hash function family for the dot-product similarity function. Thus, there is no hope that signature schemes (like minhashing for the Jaccard distance) can be used for this problem.
{ "cite_N": [ "@cite_1" ], "mid": [ "2012833704" ], "abstract": [ "(MATH) A locality sensitive hashing scheme is a distribution on a family @math of hash functions operating on a collection of objects, such that for two objects x,y, PrheF[h(x) = h(y)] = sim(x,y), where sim(x,y) e [0,1] is some similarity function defined on the collection of objects. Such a scheme leads to a compact representation of objects so that similarity of objects can be estimated from their compact sketches, and also leads to efficient algorithms for approximate nearest neighbor search and clustering. Min-wise independent permutations provide an elegant construction of such a locality sensitive hashing scheme for a collection of subsets with the set similarity measure sim(A,B) = |A P B| |A P Ehe [d(h(P),h(Q))] x O(log n log log n). EMD(P, Q). ." ] }
0912.2846
1544066577
Action description languages, such as A and B, are expressive instruments introduced for formalizing planning domains and planning problem instances. The paper starts by proposing a methodology to encode an action language (with conditional effects and static causal laws), a slight variation of B, using Constraint Logic Programming over Finite Domains. The approach is then generalized to raise the use of constraints to the level of the action language itself. A prototype implementation has been developed, and the preliminary results are presented and discussed. To appear in Theory and Practice of Logic Programming (TPLP)
Logic programming, and more specifically Prolog, has also been used to implement the first prototype of GOLOG (as discussed in @cite_21 ). GOLOG is a programming language for describing agents and their capabilities of changing the state of the world. The language builds on the foundations of the situation calculus. It provides high-level constructs for the definition of complex actions and for the introduction of control knowledge in the agent specification. Prolog is employed to create an interpreter, which makes it possible, for example, to answer projection queries (i.e., to determine the properties that hold in the situation reached after executing a sequence of actions). The goals of GOLOG and the use of logic programming in that work are radically different from the focus of our work.
{ "cite_N": [ "@cite_21" ], "mid": [ "2045474169" ], "abstract": [ "Abstract This paper proposes a new logic programming language called GOLOG whose interpreter automatically maintains an explicit representation of the dynamic world being modeled, on the basis of user supplied axioms about the preconditions and effects of actions and the initial state of the world. This allows programs to reason about the state of the world and consider the effects of various possible courses of action before committing to a particular behavior. The net effect is that programs may be written at a much higher level of abstraction than is usually possible. The language appears well suited for applications in high level control of robots and industrial processes, intelligent software agents, discrete event simulation, etc. It is based on a formal theory of action specified in an extended version of the situation calculus. A prototype implementation in Prolog has been developed." ] }
0912.2846
1544066577
Action description languages, such as A and B, are expressive instruments introduced for formalizing planning domains and planning problem instances. The paper starts by proposing a methodology to encode an action language (with conditional effects and static causal laws), a slight variation of B, using Constraint Logic Programming over Finite Domains. The approach is then generalized to raise the use of constraints to the level of the action language itself. A prototype implementation has been developed, and the preliminary results are presented and discussed. To appear in Theory and Practice of Logic Programming (TPLP)
A strong piece of work on the use of constraint programming in planning is @cite_22 . The authors use constraint programming, based on the CLAIRE language @cite_2 , to encode temporal planning problems and to search for minimal plans. They also employ a series of interesting heuristics for solving that problem. This line of research is more refined than ours from the implementation point of view; their heuristic strategies could be implemented in our system, and it would be interesting to exploit them during the labeling phase. On the other hand, the proposal by Vidal and Geffner deals only with Boolean fluents and does not support explicitly defined static causal laws.
{ "cite_N": [ "@cite_22", "@cite_2" ], "mid": [ "2115181241", "2327696470" ], "abstract": [ "A key feature of modern optimal planners such as graphplan and blackbox is their ability to prune large parts of the search space. Previous Partial Order Causal Link (POCL) planners provide an alternative branching scheme but lacking comparable pruning mechanisms do not perform as well. In this paper, a domain-independent formulation of temporal planning based on Constraint Programming is introduced that successfully combines a POCL branching scheme with powerful and sound pruning rules. The key novelty in the formulation is the ability to reason about supports, precedences, and causal links involving actions that are not in the plan. Experiments over a wide range of benchmarks show that the resulting optimal temporal planner is much faster than current ones and is competitive with the best parallel planners in the special case in which actions have all the same duration.", "This paper presents a programming language which includes paradigms that are usually associated with declarative languages, such as sets, rules and search, into an imperative (functional) language. Although these paradigms are separately well known and are available under various programming environments, the originality of the CLAIRE language comes from the tight integration, which yields interesting run-time performances, and from the richness of this combination, which yields new ways in which to express complex algorithmic patterns with few elegant lines. To achieve the opposite goals of a high abstraction level (conciseness and readability) and run-time performance (CLAIRE is used as a C++ preprocessor), we have developed two kinds of compiler: first, a pattern pre-processor handles iterations over both concrete and abstract sets (data types and program fragments), in a completely user-extensible manner; secondly, an inference compiler transforms a set of logical rules into a set of functions (demons that are used through procedural attachment)." ] }
0912.2846
1544066577
Action description languages, such as A and B, are expressive instruments introduced for formalizing planning domains and planning problem instances. The paper starts by proposing a methodology to encode an action language (with conditional effects and static causal laws), a slight variation of B, using Constraint Logic Programming over Finite Domains. The approach is then generalized to raise the use of constraints to the level of the action language itself. A prototype implementation has been developed, and the preliminary results are presented and discussed. To appear in Theory and Practice of Logic Programming (TPLP)
Similar considerations apply to the proposal by Lopez and Bacchus @cite_24 . The authors start from Graphplan and exploit constraints to encode @math -plan problems. In their case fluents are only Boolean (not multi-valued) and the process is deterministic once an action is chosen, whereas we also deal with non-determinism, e.g., when we have consequences such as @math . Moreover, the proposal of Lopez and Bacchus does not address the encoding of static causal laws.
{ "cite_N": [ "@cite_24" ], "mid": [ "121593559" ], "abstract": [ "We examine the approach of encoding planning problems as CSPs more closely. First we present a simple CSP encoding for planning problems and then a set of transformations that can be used to eliminate variables and add new constraints to the encoding. We show that our transformations uncover additional structure in the planning problem, structure that subsumes the structure uncovered by GRAPHPLAN planning graphs. We solve the CSP encoded planning problem by using standard CSP algorithms. Empirical evidence is presented to validate the effectiveness of this approach to solving planning problems, and to show that even a prototype implementation is more effective than standard GRAPHPLAN. Our prototype is even competitive with far more optimized planning graph based implementations. We also demonstrate that this approach can be more easily lifted to more complex types of planning than can planning graphs. In particular, we show that the approach can be easily extended to planning with resources." ] }
0912.1155
1652603594
Despite the conventional wisdom that proactive security is superior to reactive security, we show that reactive security can be competitive with proactive security as long as the reactive defender learns from past attacks instead of myopically overreacting to the last attack. Our game-theoretic model follows common practice in the security literature by making worst-case assumptions about the attacker: we grant the attacker complete knowledge of the defender’s strategy and do not require the attacker to act rationally. In this model, we bound the competitive ratio between a reactive defense algorithm (which is inspired by online learning theory) and the best fixed proactive defense. Additionally, we show that, unlike proactive defenses, this reactive strategy is robust to a lack of information about the attacker’s incentives and knowledge.
Anderson @cite_14 and Varian @cite_17 informally discuss (via anecdotes) how the design of information security must take incentives into account. August and Tunca @cite_10 compare various ways to incentivize users to patch their systems in a setting where the users are more susceptible to attacks if their neighbors do not patch.
{ "cite_N": [ "@cite_14", "@cite_10", "@cite_17" ], "mid": [ "1483280370", "2125639892", "143246584" ], "abstract": [ "According to one common view, information security comes down to technical measures. Given better access control policy models, formal proofs of cryptographic protocols, approved firewalls, better ways of detecting intrusions and malicious code, and better tools for system evaluation and assurance, the problems can be solved. The author puts forward a contrary view: information insecurity is at least as much due to perverse incentives. Many of the problems can be explained more clearly and convincingly using the language of microeconomics: network externalities, asymmetric information, moral hazard, adverse selection, liability dumping and the tragedy of the commons.", "We study the effect of user incentives on software security in a network of individual users under costly patching and negative network security externalities. For proprietary software or freeware, we compare four alternative policies to manage network security: (i) consumer self-patching (where no external incentives are provided for patching or purchasing); (ii) mandatory patching; (iii) patching rebate; and (iv) usage tax. We show that for proprietary software, when the software security risk and the patching costs are high, for both a welfare-maximizing social planner and a profit-maximizing vendor, a patching rebate dominates the other policies. However, when the patching cost or the security risk is low, self-patching is best. We also show that when a rebate is effective, the profit-maximizing rebate is decreasing in the security risk and increasing in patching costs. The welfare-maximizing rebates are also increasing in patching costs, but can be increasing in the effective security risk when patching costs are high. For freeware, a usage tax is the most effective policy except when both patching costs, and security risk are low, in which case a patching rebate prevails. Optimal patching rebates and taxes tend to increase with increased security risk and patching costs, but can decrease in the security risk for high-risk levels. Our results suggest that both the value generated from software and vendor profits can be significantly improved by mechanisms that target user incentives to maintain software security.", "" ] }
0912.1155
1652603594
Despite the conventional wisdom that proactive security is superior to reactive security, we show that reactive security can be competitive with proactive security as long as the reactive defender learns from past attacks instead of myopically overreacting to the last attack. Our game-theoretic model follows common practice in the security literature by making worst-case assumptions about the attacker: we grant the attacker complete knowledge of the defender’s strategy and do not require the attacker to act rationally. In this model, we bound the competitive ratio between a reactive defense algorithm (which is inspired by online learning theory) and the best fixed proactive defense. Additionally, we show that, unlike proactive defenses, this reactive strategy is robust to a lack of information about the attacker’s incentives and knowledge.
Gordon and Loeb @cite_21 and Hausken @cite_28 analyze the costs and benefits of security in an economic model (with non-strategic attackers) where the probability of a successful exploit is a function of the defense investment. They use this model to compute the optimal level of investment. Varian @cite_29 studies various (single-shot) security games and identifies how much agents invest in security at equilibrium. Grossklags @cite_22 extends this model by letting agents self-insure.
{ "cite_N": [ "@cite_28", "@cite_29", "@cite_21", "@cite_22" ], "mid": [ "2062460331", "77047371", "2056075452", "2110522107" ], "abstract": [ "Four kinds of marginal returns to security investment to protect an information set are decrease, first increase and then decrease (logistic function), increase, and constancy. Gordon, L. A. and Loeb, M. (ACM Trans. Inf. Syst. Secur., 5:438---457, 2002). find for decreasing marginal returns that a firm invests maximum 37 (1? ?e) of the expected loss from a security breach, and that protecting moderately rather than extremely vulnerable information sets may be optimal. This article presents classes of all four kinds where the optimal investment is no longer capped at 1? ?e. First, investment in information security activities for the logistic function is zero for low vulnerabilities, jumps in a limited \"bang-bang\" manner to a positive level for intermediate vulnerabilities, and thereafter increases concavely in absolute terms. Second, we present an alternative class with decreasing marginal returns where the investment increases convexly in the vulnerability until a bound is reached, investing most heavily to protect the extremely vulnerable information sets. For the third and fourth kinds the optimal investment is of an all-out \"bang-bang\" nature, that is, zero for low vulnerabilities, and jumping to maximum investment for intermediate vulnerabilities.", "System reliability often depends on the effort of many individuals, making reliability a public good. It is well-known that purely voluntary provision of public goods may result in a free rider problem: individuals may tend to shirk, resulting in an inefficient level of the public good. How much effort each individual exerts will depend on his own benefits and costs, the efforts exerted by the other individuals, and the technology that relates individual effort to outcomes. In the context of system reliability, we can distinguish three prototype cases.", "This article presents an economic model that determines the optimal amount to invest to protect a given set of information. The model takes into account the vulnerability of the information to a security breach and the potential loss should such a breach occur. It is shown that for a given potential loss, a firm should not necessarily focus its investments on information sets with the highest vulnerability. Since extremely vulnerable information sets may be inordinately expensive to protect, a firm may be better off concentrating its efforts on information sets with midrange vulnerabilities. The analysis further suggests that to maximize the expected benefit from investment to protect information, a firm should spend only a small fraction of the expected loss due to a security breach.", "Despite general awareness of the importance of keeping one's system secure, and widespread availability of consumer security technologies, actual investment in security remains highly variable across the Internet population, allowing attacks such as distributed denial-of-service (DDoS) and spam distribution to continue unabated. 
By modeling security investment decision-making in established (e.g., weakest-link, best-shot) and novel games (e.g., weakest-target), and allowing expenditures in self-protection versus self-insurance technologies, we can examine how incentives may shift between investment in a public good (protection) and a private good (insurance), subject to factors such as network size, type of attack, loss probability, loss magnitude, and cost of technology. We can also characterize Nash equilibria and social optima for different classes of attacks and defenses. In the weakest-target game, an interesting result is that, for almost all parameter settings, more effort is exerted at Nash equilibrium than at the social optimum. We may attribute this to the \"strategic uncertainty\" of players seeking to self-protect at just slightly above the lowest protection level." ] }
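The cost-benefit calculation described above (expected loss reduction from investing in security versus the cost of the investment itself) can be illustrated with a small grid search. The breach-probability form s(z) = v / (alpha*z + 1)**beta used below is only one commonly analyzed class, and all parameter values are made up; this is a numerical illustration, not the cited papers' analysis.

def expected_net_benefit(z, v, loss, alpha=1.0, beta=1.0):
    """Expected net benefit of investing z: reduction in expected loss minus cost.
    Breach-probability model s(z) = v / (alpha*z + 1)**beta (an assumed form)."""
    s = v / (alpha * z + 1) ** beta
    return (v - s) * loss - z

def optimal_investment(v, loss, grid_max=100.0, steps=10_000):
    """Grid search for the investment level maximizing expected net benefit."""
    best_z, best_val = 0.0, expected_net_benefit(0.0, v, loss)
    for i in range(1, steps + 1):
        z = grid_max * i / steps
        val = expected_net_benefit(z, v, loss)
        if val > best_val:
            best_z, best_val = z, val
    return best_z, best_val

# Illustrative numbers: vulnerability 0.6, potential loss 100.
print(optimal_investment(v=0.6, loss=100.0))   # about z = 6.75 for this model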
0912.1155
1652603594
Despite the conventional wisdom that proactive security is superior to reactive security, we show that reactive security can be competitive with proactive security as long as the reactive defender learns from past attacks instead of myopically overreacting to the last attack. Our game-theoretic model follows common practice in the security literature by making worst-case assumptions about the attacker: we grant the attacker complete knowledge of the defender’s strategy and do not require the attacker to act rationally. In this model, we bound the competitive ratio between a reactive defense algorithm (which is inspired by online learning theory) and the best fixed proactive defense. Additionally, we show that, unlike proactive defenses, this reactive strategy is robust to a lack of information about the attacker’s incentives and knowledge.
Miura et al. @cite_23 study externalities that arise when users reuse the same password across various websites and discuss Pareto-improving security investments. Miura and Bambos @cite_4 rank vulnerabilities according to a random-attacker model. Skybox and RedSeal offer practical systems that help enterprises prioritize vulnerabilities based on a random-attacker model. Kumar et al. @cite_6 investigate optimal security architectures for a multi-division enterprise, taking into account losses due to lack of availability and confidentiality. None of the above papers explicitly model a truly adversarial attacker.
{ "cite_N": [ "@cite_4", "@cite_6", "@cite_23" ], "mid": [ "2154510959", "1561943197", "2144238966" ], "abstract": [ "In this paper, we introduce a new scheme called SecureRank for prioritizing vulnerabilities to patch in computing systems networks. This has become a key issue for IT infrastructures, as large numbers of vulnerabilities are continuously announced and IT administrators devote increasingly more resources to managing them. SecureRank prioritizes vulnerabilities and network nodes to patch based on the percentage of time a random attacker would spend trying to exploit them. Going beyond state-of-the-art approaches, SecureRank takes into account the network topology and potential node interactions in calculating their relative risk and priority. We define two metrics for the security of a network and use them to show how SecureRank outperforms key industry benchmarks in certain natural operational settings. We believe that these findings can be used as a starting point in exploring what defense strategies make sense given topology and attack strategy.", "Information security is growing to be an IT priority for many firms, but several critical dimensions of enterprise security like type of loss or strategic effects of countermeasures have received little attention in the economics-based literature. We develop a model of a contagious threat that can attack multiple divisions of a firm's enterprise network and cause both availability and confidentiality losses. Firms commonly deploy countermeasures to mitigate the harmful effects of threats. Such deployment is complicated by the CIO's lack of information on the information systems of the divisions and due to the differing goals of division managers. In this setting, we model the business process and interconnectivity requirements of the enterprise and demonstrate how to optimally design the security architecture, which consists of protection, recovery and cryptographic measures. We evaluate commonly suggested mechanisms like subsidies and liability and find that they are inadequate as well as informationally demanding. To remedy these problems which directly impact practitioners, we derive mechanisms that have no ex-post informational requirements and are easily implementable for both availability and confidentiality losses. Some of our results are counterintuitive, notably that countermeasure can be overdeployed by division managers and that having a single platform for all divisions can decrease unexpected confidentiality losses.", "In various settings, such as when customers use the same passwords at several independent web sites, security decisions by one organization may have a significant impact on the security of another. We develop a model for security decision-making in such settings, using a variation of linear influence networks. The linear influence model uses a matrix to represent linear dependence between security investment at one organization and resulting security at another, and utility functions to measure the overall benefit to each organization. A simple matrix condition implies the existence and uniqueness of Nash equilibria, which can be reached by a natural iterative algorithm. A free-riding index, expressible using quantities computed in this model, measures the degree to which one organization can potentially reduce its security investment and benefit from investments of others. 
We apply this framework to investigate three examples: web site security with shared passwords, customer education against phishing and identity theft, and anti-spam email filters. While we do not have sufficient quantitative data to draw quantitative conclusions about any of these situations, the model provides qualitative information about each example." ] }
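One natural way to compute a ranking "based on the percentage of time a random attacker would spend" at each node, as in the SecureRank abstract above, is the stationary distribution of a damped random walk on an attack graph. The sketch below is a plausible reading rather than the cited paper's exact model; the graph, damping factor, and iteration count are illustrative assumptions.

def stationary_distribution(adj, damping=0.85, iters=200):
    """Power iteration for the stationary distribution of a damped random walk.

    adj: adjacency lists, node -> list of neighbor nodes (the 'attack graph').
    Returns node -> probability, a proxy for attacker dwell time."""
    nodes = list(adj)
    n = len(nodes)
    rank = {u: 1.0 / n for u in nodes}
    for _ in range(iters):
        new = {u: (1.0 - damping) / n for u in nodes}
        for u in nodes:
            out = adj[u] or nodes               # dangling node: jump anywhere
            share = damping * rank[u] / len(out)
            for v in out:
                new[v] += share
        rank = new
    return rank

# Illustrative attack graph: node "db" is reachable from everything.
adj = {"web": ["app", "db"], "app": ["db"], "db": [], "vpn": ["app"]}
print(sorted(stationary_distribution(adj).items(), key=lambda kv: -kv[1]))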
0912.1155
1652603594
Despite the conventional wisdom that proactive security is superior to reactive security, we show that reactive security can be competitive with proactive security as long as the reactive defender learns from past attacks instead of myopically overreacting to the last attack. Our game-theoretic model follows common practice in the security literature by making worst-case assumptions about the attacker: we grant the attacker complete knowledge of the defender’s strategy and do not require the attacker to act rationally. In this model, we bound the competitive ratio between a reactive defense algorithm (which is inspired by online learning theory) and the best fixed proactive defense. Additionally, we show that, unlike proactive defenses, this reactive strategy is robust to a lack of information about the attacker’s incentives and knowledge.
Fultz @cite_15 generalizes @cite_22 by modeling attackers explicitly. Cavusoglu et al. @cite_20 highlight the importance of using a game-theoretic model over a decision-theoretic model due to the presence of adversarial attackers. However, these models look at idealized settings that are not generically applicable. Lye and Wing @cite_26 study the Nash equilibrium of a single-shot game between an attacker and a defender that models a particular enterprise security scenario. Arguably, this model is the most similar to ours in terms of abstraction level. However, calculating the Nash equilibrium requires detailed knowledge of the adversary's incentives, which, as discussed in the introduction, might not be readily available to the defender. Moreover, their game contains multiple equilibria, weakening their prescriptions.
{ "cite_N": [ "@cite_15", "@cite_26", "@cite_22", "@cite_20" ], "mid": [ "2130887290", "125853156", "2110522107", "2045006695" ], "abstract": [ "We develop a two-sided multiplayer model of security in which attackers aim to deny service and defenders strategize to secure their assets. Attackers benefit from the successful compromise of target systems, however, may suffer penalties for increased attack activities. Defenders weigh the force of an attack against the cost of security. We consider security decision-making in tightly and loosely coupled networks and allow defense expenditures in protection and self-insurance technologies.", "", "Despite general awareness of the importance of keeping one's system secure, and widespread availability of consumer security technologies, actual investment in security remains highly variable across the Internet population, allowing attacks such as distributed denial-of-service (DDoS) and spam distribution to continue unabated. By modeling security investment decision-making in established (e.g., weakest-link, best-shot) and novel games (e.g., weakest-target), and allowing expenditures in self-protection versus self-insurance technologies, we can examine how incentives may shift between investment in a public good (protection) and a private good (insurance), subject to factors such as network size, type of attack, loss probability, loss magnitude, and cost of technology. We can also characterize Nash equilibria and social optima for different classes of attacks and defenses. In the weakest-target game, an interesting result is that, for almost all parameter settings, more effort is exerted at Nash equilibrium than at the social optimum. We may attribute this to the \"strategic uncertainty\" of players seeking to self-protect at just slightly above the lowest protection level.", "Firms have been increasing their information technology (IT) security budgets significantly to deal with increased security threats. An examination of current practices reveals that managers view security investment as any other and use traditional decision-theoretic risk management techniques to determine security investments. We argue in this paper that this method is incomplete because of the problem's strategic nature-hackers alter their hacking strategies in response to a firm's investment strategies. We propose game theory for determining IT security investment levels and compare game theory and decision theory approaches on several dimensions such as the investment levels, vulnerability, and payoff from investments. We show that the sequential game results in the maximum payoff to the firm, but requires that the firm move first before the hacker. Even if a simultaneous game is played, the firm enjoys a higher payoff than that in the decision theory approach, except when the firm's estimate of the hacker effort in the decision theory approach is sufficiently close to the actual hacker effort. We also show that if the firm learns from prior observations of hacker effort and uses these to estimate future hacker effort in the decision theory approach, then the gap between the results of decision theory and game theory approaches diminishes over time. The rate of convergence and the extent of loss the firm suffers before convergence depend on the learning model employed by the firm to estimate hacker effort." ] }
0912.1580
1776654555
In this paper, we present algorithms for computing approximate hulls and centerpoints for collections of matrices in positive definite space. There are many applications where the data under consideration, rather than being points in a Euclidean space, are positive definite (p.d.) matrices. These applications include diffusion tensor imaging in the brain, elasticity analysis in mechanical engineering, and the theory of kernel maps in machine learning. Our work centers around the notion of a horoball: the limit of a ball fixed at one point whose radius goes to infinity. Horoballs possess many (though not all) of the properties of halfspaces; in particular, they lack a strong separation theorem where two horoballs can completely partition the space. In spite of this, we show that we can compute an approximate "horoball hull" that strictly contains the actual convex hull. This approximate hull also preserves geodesic extents, which is a result of independent value: an immediate corollary is that we can approximately solve problems like the diameter and width in positive definite space. We also use horoballs to show existence of and compute approximate robust centerpoints in positive definite space, via the horoball-equivalent of the notion of depth.
The mathematics of Riemannian manifolds, Cartan-Hadamard manifolds, and @math is well understood: the book by Bridson and Haefliger @cite_2 is an invaluable reference on metric spaces of nonpositive curvature, and Bhatia @cite_17 provides a detailed study of @math in particular. However, there are far fewer algorithmic results for problems in these spaces. To the best of our knowledge, the only prior works on algorithms for positive definite space are the work by Moakher @cite_3 on mean shapes in positive definite space, the papers by Fletcher and Joshi @cite_22 on principal geodesic analysis in symmetric spaces, and the robust median algorithms of Fletcher @cite_4 for general manifolds (including @math and @math ).
{ "cite_N": [ "@cite_4", "@cite_22", "@cite_3", "@cite_2", "@cite_17" ], "mid": [ "2125391899", "1947363732", "92310073", "2134086356", "585133165" ], "abstract": [ "One of the primary goals of computational anatomy is the statistical analysis of anatomical variability in large populations of images. The study of anatomical shape is inherently related to the construction of transformations of the underlying coordinate space, which map one anatomy to another. It is now well established that representing the geometry of shapes or images in Euclidian spaces undermines our ability to represent natural variability in populations. In our previous work we have extended classical statistical analysis techniques, such as averaging, principal components analysis, and regression, to Riemannian manifolds, which are more appropriate representations for describing anatomical variability. In this paper we extend the notion of robust estimation, a well established and powerful tool in traditional statistical analysis of Euclidian data, to manifold-valued representations of anatomical variability. In particular, we extend the geometric median, a classic robust estimator of centrality for data in Euclidean spaces. We formulate the geometric median of data on a Riemannian manifold as the minimizer of the sum of geodesic distances to the data points. We prove existence and uniqueness of the geometric median on manifolds with non-positive sectional curvature and give sufficient conditions for uniqueness on positively curved manifolds. Generalizing the Weiszfeld procedure for finding the geometric median of Euclidean data, we present an algorithm for computing the geometric median on an arbitrary manifold. We show that this algorithm converges to the unique solution when it exists. In this paper we exemplify the robustness of the estimation technique by applying the procedure to various manifolds commonly used in the analysis of medical images. Using this approach, we also present a robust brain atlas estimation technique based on the geometric median in the space of deformable images.", "Diffusion tensor magnetic resonance imaging (DT-MRI) is emerging as an important tool in medical image analysis of the brain. However, relatively little work has been done on producing statistics of diffusion tensors. A main difficulty is that the space of diffusion tensors, i.e., the space of symmetric, positive-definite matrices, does not form a vector space. Therefore, standard linear statistical techniques do not apply. We show that the space of diffusion tensors is a type of curved manifold known as a Riemannian symmetric space. We then develop methods for producing statistics, namely averages and modes of variation, in this space. In our previous work we introduced principal geodesic analysis, a generalization of principal component analysis, to compute the modes of variation of data in Lie groups. In this work we expand the method of principal geodesic analysis to symmetric spaces and apply it to the computation of the variability of diffusion tensor data. We expect that these methods will be useful in the registration of diffusion tensor images, the production of statistical atlases from diffusion tensor data, and the quantification of the anatomical variability caused by disease.", "In many engineering applications that use tensor analysis, such as tensor imaging, the underlying tensors have the characteristic of being positive definite. 
It might therefore be more appropriate to use techniques specially adapted to such tensors. We will describe the geometry and calculus on the Riemannian symmetric space of positive-definite tensors. First, we will explain why the geometry, constructed by Emile Cartan, is a natural geometry on that space. Then, we will use this framework to present formulas for means and interpolations specific to positive-definite tensors.", "This book describes the global properties of simply-connected spaces that are non-positively curved in the sense of A. D. Alexandrov, and the structure of groups which act on such spaces by isometries. The theory of these objects is developed in a manner accessible to anyone familiar with the rudiments of topology and group theory: non-trivial theorems are proved by concatenating elementary geometric arguments, and many examples are given. Part I is an introduction to the geometry of geodesic spaces. In Part II the basic theory of spaces with upper curvature bounds is developed. More specialized topics, such as complexes of groups, are covered in Part III. The book is divided into three parts, each part is divided into chapters and the chapters have various subheadings. The chapters in Part III are longer and for ease of reference are divided into numbered sections.", "This book represents the first synthesis of the considerable body of new research into positive definite matrices. These matrices play the same role in noncommutative analysis as positive real numbers do in classical analysis. They have theoretical and computational uses across a broad spectrum of disciplines, including calculus, electrical engineering, statistics, physics, numerical analysis, quantum information theory, and geometry. Through detailed explanations and an authoritative and inspiring writing style, Rajendra Bhatia carefully develops general techniques that have wide applications in the study of such matrices. Bhatia introduces several key topics in functional analysis, operator theory, harmonic analysis, and differential geometry--all built around the central theme of positive definite matrices. He discusses positive and completely positive linear maps, and presents major theorems with simple and direct proofs. He examines matrix means and their applications, and shows how to use positive definite functions to derive operator inequalities that he and others proved in recent years. He guides the reader through the differential geometry of the manifold of positive definite matrices, and explains recent work on the geometric mean of several matrices. Positive Definite Matrices is an informative and useful reference book for mathematicians and other researchers and practitioners. The numerous exercises and notes at the end of each chapter also make it the ideal textbook for graduate-level courses." ] }
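Since the records above revolve around the affine-invariant geometry of symmetric positive definite (SPD) matrices, a short numerical illustration may help: the geodesic distance d(A,B) = ||log(A^{-1/2} B A^{-1/2})||_F and a plain fixed-step iteration toward the Karcher (Frechet) mean. These are standard formulas sketched generically in Python, not code from the cited papers; a production implementation would add a convergence test and step-size control.

import numpy as np

def _spd_fun(A, fun):
    """Apply a scalar function to a symmetric matrix via its eigendecomposition."""
    w, V = np.linalg.eigh(A)
    return (V * fun(w)) @ V.T

def spd_distance(A, B):
    """Affine-invariant geodesic distance d(A,B) = ||log(A^{-1/2} B A^{-1/2})||_F."""
    s_inv = _spd_fun(A, lambda w: 1.0 / np.sqrt(w))
    return np.linalg.norm(_spd_fun(s_inv @ B @ s_inv, np.log), "fro")

def karcher_mean(mats, iters=30):
    """Fixed-point iteration toward the Karcher (Frechet) mean of SPD matrices."""
    M = sum(mats) / len(mats)                      # start at the arithmetic mean
    for _ in range(iters):
        s = _spd_fun(M, np.sqrt)
        s_inv = _spd_fun(M, lambda w: 1.0 / np.sqrt(w))
        # Average the log-maps of the data at M, then map back with exp.
        T = sum(_spd_fun(s_inv @ X @ s_inv, np.log) for X in mats) / len(mats)
        M = s @ _spd_fun(T, np.exp) @ s
    return M

A = np.array([[2.0, 0.3], [0.3, 1.0]])
B = np.array([[1.0, -0.2], [-0.2, 3.0]])
print(spd_distance(A, B))
print(karcher_mean([A, B]))   # for two matrices this is their geodesic midpoint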
0912.1580
1776654555
In this paper, we present algorithms for computing approximate hulls and centerpoints for collections of matrices in positive definite space. There are many applications where the data under consideration, rather than being points in a Euclidean space, are positive definite (p.d.) matrices. These applications include diffusion tensor imaging in the brain, elasticity analysis in mechanical engineering, and the theory of kernel maps in machine learning. Our work centers around the notion of a horoball: the limit of a ball fixed at one point whose radius goes to infinity. Horoballs possess many (though not all) of the properties of halfspaces; in particular, they lack a strong separation theorem where two horoballs can completely partition the space. In spite of this, we show that we can compute an approximate "horoball hull" that strictly contains the actual convex hull. This approximate hull also preserves geodesic extents, which is a result of independent value: an immediate corollary is that we can approximately solve problems like the diameter and width in positive definite space. We also use horoballs to show existence of and compute approximate robust centerpoints in positive definite space, via the horoball-equivalent of the notion of depth.
Geometric algorithms in hyperbolic space are much more tractable. The Poincaré and Klein models of hyperbolic space preserve different properties of Euclidean space, and many algorithms carry over directly with no modification. Leibon and Letscher @cite_18 were the first to study basic geometric primitives in general Riemannian manifolds, constructing Voronoi diagrams and Delaunay triangulations for sufficiently dense point sets in these spaces. Eppstein @cite_19 described hierarchical clustering algorithms in hyperbolic space. Krauthgamer and Lee @cite_24 studied the nearest neighbor problem for points in @math -hyperbolic space; these spaces are a combinatorial generalization of negatively curved spaces and are characterized by global, rather than local, definitions of curvature. Chepoi @cite_0 @cite_12 advanced this line of research, providing algorithms for computing the diameter and minimum enclosing ball of collections of points in @math -hyperbolic space.
{ "cite_N": [ "@cite_18", "@cite_24", "@cite_19", "@cite_0", "@cite_12" ], "mid": [ "2128878012", "2017848317", "2949153768", "1965650021", "1487479819" ], "abstract": [ "For a sufficiently dense set of points in any closed Riemannian manifold, we prove that a unique Delannay triangulation exists. This triangulation has the same properties as in Euclidean space. Algorithms for constructing these triangulations will also be described.", "We initiate the study of approximate algorithms on negatively curved spaces. These spaces have recently become of interest in various domains of computer science including networking and vision. The classical example of such a space is the real-hyperbolic space H ^d for d 2, but our approach applies to a more general family of spaces characterized by Gromov's (combinatorial) hyperbolic condition. We give efficient algorithms and data structures for problems like approximate nearest-neighbor search and compact, low-stretch routing on subsets of negatively curved spaces of fixed dimension (including H ^d as a special case). In a different direction, we show that there is a PTAS for the Traveling Salesman Problem when the set of cities lie, for example, in H ^d. This generalizes Arora's results for R ^d. Most of our algorithms use the intrinsic distance geometry of the data set, and only need the existence of an embedding into some negatively curved space in order to function properly. In other words, our algorithms regard the interpoint distance function as a black box, and are independent of the representation of the input points.", "We provide efficient constant-factor approximation algorithms for the problems of finding a hierarchical clustering of a point set in any metric space, minimizing the sum of minimimum spanning tree lengths within each cluster, and in the hyperbolic or Euclidean planes, minimizing the sum of cluster perimeters. Our algorithms for the hyperbolic and Euclidean planes can also be used to provide a pants decomposition, that is, a set of disjoint simple closed curves partitioning the plane minus the input points into subsets with exactly three boundary components, with approximately minimum total length. In the Euclidean case, these curves are squares; in the hyperbolic case, they combine our Euclidean square pants decomposition with our tree clustering method for general metric spaces.", "δ-Hyperbolic metric spaces have been defined by M. Gromov via a simple 4-point condition: for any four points u,v,w,x, the two larger of the sums d(u,v)+d(w,x), d(u,w)+d(v,x), d(u,x)+d(v,w) differ by at most 2δ. Given a finite set S of points of a δ-hyperbolic space, we present simple and fast methods for approximating the diameter of S with an additive error 2δ and computing an approximate radius and center of a smallest enclosing ball for S with an additive error 3δ. These algorithms run in linear time for classical hyperbolic spaces and for δ-hyperbolic graphs and networks. Furthermore, we show that for δ-hyperbolic graphs G=(V,E) with uniformly bounded degrees of vertices, the exact center of S can be computed in linear time O(|E|). We also provide a simple construction of distance approximating trees of δ-hyperbolic graphs G on n vertices with an additive error O(δlog2 n). This construction has an additive error comparable with that given by Gromov for n-point δ-hyperbolic spaces, but can be implemented in O(|E|) time (instead of O(n2)). 
Finally, we establish that several geometrical classes of graphs have bounded hyperbolicity.", "We consider the problem of covering and packing subsets ofΔ-hyperbolic metric spaces and graphs by balls.These spaces, defined via a combinatorial Gromov condition, haverecently become of interest in several domains of computer science.Specifically, given a subset Sof aΔ-hyperbolic graph Gand a positive numberR, let Δ(S,R) be theminimum number of balls of radius Rcovering S.It is known that computing Δ(S,R)or approximating this number within a constant factor is hard evenfor 2-hyperbolic graphs. In this paper, using a primal-dualapproach, we show how to construct in polynomial time a covering ofSwith at most Δ(S,R)balls of (slightly larger) radius R+ Δ.This result is established in the general framework ofΔ-hyperbolic geodesic metric spaces and is extendedto some other set families derived from balls. The coveringalgorithm is used to design better approximation algorithms for theaugmentation problem with diameter constraints and for thek-center problem in Δ-hyperbolicgraphs." ] }
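The four-point condition quoted in the abstracts above (the two larger of the three pairwise sums differ by at most 2*delta) is easy to evaluate directly on a finite metric. The brute-force O(n^4) Python sketch below is purely illustrative and is not the near-linear-time machinery of the cited papers; the point set and metric are made up.

from itertools import combinations

def gromov_delta(points, dist):
    """Smallest delta such that, for every quadruple u,v,w,x, the two largest of
    d(u,v)+d(w,x), d(u,w)+d(v,x), d(u,x)+d(v,w) differ by at most 2*delta."""
    delta = 0.0
    for u, v, w, x in combinations(points, 4):
        sums = sorted([dist(u, v) + dist(w, x),
                       dist(u, w) + dist(v, x),
                       dist(u, x) + dist(v, w)])
        delta = max(delta, (sums[2] - sums[1]) / 2.0)
    return delta

# Sanity check: points on the real line (a tree metric) give delta = 0.
pts = [0.0, 1.0, 3.0, 7.0, 12.0]
print(gromov_delta(pts, lambda a, b: abs(a - b)))   # 0.0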
0912.2199
2952627638
Mobile Ad Hoc networks, due to the unattended nature of the network itself and the dispersed location of nodes, are subject to several unique security issues. One of the most vexed security threat is node capture. A few solutions have already been proposed to address this problem; however, those solutions are either centralized or focused on theoretical mobility models alone. In the former case the solution does not fit well the distributed nature of the network while, in the latter case, the quality of the solutions obtained for realistic mobility models severely differs from the results obtained for theoretical models. The rationale of this paper is inspired by the observation that re-encounters of mobile nodes do elicit a form of social ties. Leveraging these ties, it is possible to design efficient and distributed algorithms that, with a moderated degree of node cooperation, enforce the emergent property of node capture detection. In particular, in this paper we provide a proof of concept proposing a set of algorithms that leverage, to different extent, node mobility and node cooperation--that is, identifying social ties--to thwart node capture attack. In particular, we test these algorithms on a realistic mobility scenario. Extensive simulations show the quality of the proposed solutions and, more important, the viability of the proposed approach.
Mobility as a means to enforce security in mobile networks has been considered in @cite_16 . In @cite_33 , the authors identified social and situational factors which impact group formation for wireless group key establishment. Further, mobility has been considered in the context of routing @cite_35 and of network property optimization @cite_22 . In particular, @cite_35 leverages node mobility in order to disseminate information about destination location without incurring any communication overhead. In @cite_22 the sink mobility is used to optimize the energy consumption of the whole network. A mobility-based solution for detecting the sybil attack has been recently presented in @cite_32 . Finally, note that a few solutions exist for node failure detection in ad hoc networks @cite_7 @cite_37 @cite_19 @cite_2 . However, such solutions assume a static network, missing a fundamental component of our scenario, as shown in the following.
{ "cite_N": [ "@cite_35", "@cite_37", "@cite_22", "@cite_33", "@cite_7", "@cite_32", "@cite_19", "@cite_2", "@cite_16" ], "mid": [ "2140531391", "", "2124804629", "2122798511", "2040341835", "1977197733", "", "", "2058722498" ], "abstract": [ "Routing in large-scale mobile ad hoc networks is challenging because all the nodes are potentially moving. Geographic routing can partially alleviate this problem, as nodes can make local routing decisions based solely on the destinations' geographic coordinates. However, geographic routing still requires an efficient location service, i.e., a distributed database recording the location of every destination node. Devising efficient, scalable, and robust location services has received considerable attention in recent years. The main purpose of this paper is to show that node mobility can be exploited to disseminate destination location information without incurring any communication overhead. We achieve this by letting each node maintain a local database of the time and location of its last encounter with every other node in the network. This database is consulted by packets to obtain estimates of their destination's current location. As a packet travels towards its destination, it is able to successively refine an estimate of the destination's precise location, because node mobility has \"diffused\" estimates of that location. We define and analyze a very simple algorithm called EASE (exponential age search) and show that in a model where N nodes perform independent random walks on a square lattice, the length of the routes computed by EASE are on the same order as the distance between the source and destination even for very large N. Therefore, without exchanging any explicit location information, the length of EASE routes are within a constant factor of routes obtained with perfect information. We discuss refinements of the EASE algorithm and evaluate it through extensive simulations. We discuss general conditions such that the mobility diffusion effect leads to efficient routes without an explicit location service. In practical settings, where these conditions may not always be met, we believe that the mobility diffusion effect can complement existing location services and enhance their robustness and scalability.", "", "Although many energy efficient conserving routing protocols have been proposed for wireless sensor networks, the concentration of data traffic towards a small number of base stations remains a major threat to the network lifetime. The main reason is that the sensor nodes located near a base station have to relay data for a large part of the network and thus deplete their batteries very quickly. The solution we propose in this paper suggests that the base station be mobile; in this way, the nodes located close to it change over time. Data collection protocols can then be optimized by taking both base station mobility and multi-hop routing into account. We first study the former, and conclude that the best mobility strategy consists in following the periphery of the network (we assume that the sensors are deployed within a circle). We then consider jointly mobility and routing algorithms in this case, and show that a better routing strategy uses a combination of round routes and short paths. We provide a detailed analytical model for each of our statements, and corroborate it with simulation results. 
We show that the obtained improvement in terms of network lifetime is in the order of 500 .", "Group communication is inherently a social activity. However, existing protocols for group key establishment often fail to consider important social dynamics. This paper examines the human requirements for wireless group key establishment. We identify seven social and situational factors which impact group formation. Using these factors, we examine the requirements of four common classes of group communications. Each scenario imposes a unique set of requirements on wireless group key establishment.", "This paper presents an efficient distributed self-monitoring mechanism for a class of wireless sensor networks used for monitoring and surveillance. In these applications, it is important to monitor the health of the network of sensors itself for security reasons. This mechanism employs a novel two-phase timer scheme that exploits local coordination and active probing. Simulation results show that this method can achieve low false alarm probability without increasing the response delay. Under a stable environment analytical estimates are provided as a guideline in designing optimal parameter values. Under a changing, noisy environment a self-parameter tuning functionality is provided and examined.", "Mobility is often a problem for providing security services in ad hoc networks. In this paper, we show that mobility can be used to enhance security. Specifically, we show that nodes that passively monitor traffic in the network can detect a Sybil attacker that uses a number of network identities simultaneously. We show through simulation that this detection can be done by a single node, or that multiple trusted nodes can join to improve the accuracy of detection. We then show that although the detection mechanism will falsely identify groups of nodes traveling together as a Sybil attacker, we can extend the protocol to monitor collisions at the MAC level to differentiate between a single attacker spoofing many addresses and a group of nodes traveling in close proximity.", "", "", "Contrary to the common belief that mobility makes security more difficult to achieve, we show that node mobility can, in fact, be useful to provide security in ad hoc networks. We propose a technique in which security associations between nodes are established, when they are in the vicinity of each other, by exchanging appropriate cryptographic material. We show that this technique is generic, by explaining its application to fully self-organized ad hoc networks and to ad hoc networks placed under an (off-line) authority. We also propose an extension of this basic mechanism, in which a security association can be established with the help of a \"friend\". We show that our mechanism can work in any network configuration and that the time necessary to set up the security associations is strongly influenced by several factors, including the size of the deployment area, the mobility patterns, and the number of friends; we provide a detailed investigation of this influence." ] }
0912.2199
2952627638
Mobile Ad Hoc networks, due to the unattended nature of the network itself and the dispersed location of nodes, are subject to several unique security issues. One of the most vexed security threat is node capture. A few solutions have already been proposed to address this problem; however, those solutions are either centralized or focused on theoretical mobility models alone. In the former case the solution does not fit well the distributed nature of the network while, in the latter case, the quality of the solutions obtained for realistic mobility models severely differs from the results obtained for theoretical models. The rationale of this paper is inspired by the observation that re-encounters of mobile nodes do elicit a form of social ties. Leveraging these ties, it is possible to design efficient and distributed algorithms that, with a moderated degree of node cooperation, enforce the emergent property of node capture detection. In particular, in this paper we provide a proof of concept proposing a set of algorithms that leverage, to different extent, node mobility and node cooperation--that is, identifying social ties--to thwart node capture attack. In particular, we test these algorithms on a realistic mobility scenario. Extensive simulations show the quality of the proposed solutions and, more important, the viability of the proposed approach.
In this work we do not consider mobility traces synthesized using the RWM model. Instead, we consider only real traces. Among the publicly available traces for mobile nodes (e.g., from @cite_9 ), we consider the traces collected at the INFOCOM 2005 conference @cite_28 , already used in previous research work @cite_4 @cite_6 @cite_0 @cite_25 . In particular, these traces were gathered using Bluetooth devices distributed to 41 people attending the conference.
{ "cite_N": [ "@cite_4", "@cite_28", "@cite_9", "@cite_6", "@cite_0", "@cite_25" ], "mid": [ "2112701160", "", "2168518742", "2164988386", "2009687783", "2164641477" ], "abstract": [ "Pocket Switched Networks (PSN) make use of both human mobility and local global connectivity in order to transfer data between mobile users' devices. This falls under the Delay Tolerant Networking (DTN) space, focusing on the use of opportunistic networking. One key problem in PSN is in designing forwarding algorithms which cope with human mobility patterns. We present an experiment measuring forty-one humans' mobility at the Infocom 2005 conference. The results of this experiment are similar to our previous experiments in corporate and academic working environments, in exhibiting a power-law distrbution for the time between node contacts. We then discuss the implications of these results on the design of forwarding algorithms for PSN.", "", "Wireless network researchers are seriously starved for data about how real users, applications, and devices use real networks under real network conditions. CRAWDAD (Community Resource for Archiving Wireless Data at Dartmouth) is a new National Science Foundation-funded project to build a wireless-network data archive for the research community. It will host wireless data and provide tools and documents to make collecting and using the data easy. This resource should help researchers identify and evaluate real and interesting problems in mobile and pervasive computing. To learn more about CRAWDAD and discuss its direction, about 30 interested people gathered at a workshop held in conjunction with MobiCom 2005.", "Portable devices have more data storage and increasing communication capabilities everyday. In addition to classic infrastructure based communication, these devices can exploit human mobility and opportunistic contacts to communicate. We analyze the characteristics of such opportunistic forwarding paths. We establish that opportunistic mobile networks in general are characterized by a small diameter, a destination device is reachable using only a small number of relays under tight delay constraint. This property is first demonstrated analytically on a family of mobile networks which follow a random graph process. We then establish a similar result empirically with four data sets capturing human mobility, using a new methodology to efficiently compute all the paths that impact the diameter of an opportunistic mobile networks. We complete our analysis of network diameter by studying the impact of intensity of contact rate and contact duration. This work is, to our knowledge, the first validation that the so called \"small world\" phenomenon applies very generally to opportunistic networking between mobile nodes.", "Mobile devices carried by people form dynamic networks. Understanding the social structures within the human mobility traces captured from the mobile devices help us to design more efficient message dissemination schemes. People who are in multiple communities are good message carriers. Thus, the ability to identify the different communities efficiently from the various communication traces e.g. contact traces from users' mobile devices is important. In this paper, using some human mobility traces from the real world, we first identify nodes that can play key roles using some social network metrics. Then, we investigate the usefulness of utilizing the keyrole nodes information in the design of multicast delivery schemes in human contact-based networks. 
Our results indicate that using such information can achieve similar delivery performance as the multi-copy epidemic scheme but at a much smaller communication cost.", "The analysis of social and technological networks has attracted a lot of attention as social networking applications and mobile sensing devices have given us a wealth of real data. Classic studies looked at analysing static or aggregated networks, i.e., networks that do not change over time or built as the results of aggregation of information over a certain period of time. Given the soaring collections of measurements related to very large, real network traces, researchers are quickly starting to realise that connections are inherently varying over time and exhibit more dimensionality than static analysis can capture. In this paper we propose new temporal distance metrics to quantify and compare the speed (delay) of information diffusion processes taking into account the evolution of a network from a local and global view. We show how these metrics are able to capture the temporal characteristics of time-varying graphs, such as delay, duration and time order of contacts (interactions), compared to the metrics used in the past on static graphs. As a proof of concept we apply these techniques to two classes of time-varying networks, namely connectivity of mobile devices and e-mail exchanges." ] }
0912.0071
2950943617
Privacy-preserving machine learning algorithms are crucial for the increasingly common setting in which personal data, such as medical or financial records, are analyzed. We provide general techniques to produce privacy-preserving approximations of classifiers learned via (regularized) empirical risk minimization (ERM). These algorithms are private under the @math -differential privacy definition due to (2006). First we apply the output perturbation ideas of (2006), to ERM classification. Then we propose a new method, objective perturbation, for privacy-preserving machine learning algorithm design. This method entails perturbing the objective function before optimizing over classifiers. If the loss and regularizer satisfy certain convexity and differentiability criteria, we prove theoretical results showing that our algorithms preserve privacy, and provide generalization bounds for linear and nonlinear kernels. We further present a privacy-preserving technique for tuning the parameters in general machine learning algorithms, thereby providing end-to-end privacy guarantees for the training process. We apply these results to produce privacy-preserving analogues of regularized logistic regression and support vector machines. We obtain encouraging results from evaluating their performance on real demographic and benchmark data sets. Our results show that both theoretically and empirically, objective perturbation is superior to the previous state-of-the-art, output perturbation, in managing the inherent tradeoff between privacy and learning performance.
An alternative line of privacy work is in the Secure Multiparty Computation setting due to @cite_20 , where the sensitive data is split across multiple hostile databases, and the goal is to compute a function on the union of these databases. @cite_19 and @cite_14 consider computing privacy-preserving SVMs in this setting, and their goal is to design a distributed protocol to learn a classifier. This is in contrast with our work, which deals with a setting where the algorithm has access to the entire dataset.
{ "cite_N": [ "@cite_19", "@cite_14", "@cite_20" ], "mid": [ "", "2136926597", "2159024459" ], "abstract": [ "", "We propose private protocols implementing the Kernel Adatron and Kernel Perceptron learning algorithms, give private classification protocols and private polynomial kernel computation protocols. The new protocols return their outputs - either the kernel value, the classifier or the classifications - in encrypted form so that they can be decrypted only by a common agreement by the protocol participants. We show how to use the encrypted classifications to privately estimate many properties of the data and the classifier. The new SVM classifiers are the first to be proven private according to the standard cryptographic definitions.", "Consider a data holder, such as a hospital or a bank, that has a privately held collection of person-specific, field structured data. Suppose the data holder wants to share a version of the data with researchers. How can a data holder release a version of its private data with scientific guarantees that the individuals who are the subjects of the data cannot be re-identified while the data remain practically useful? The solution provided in this paper includes a formal protection model named k-anonymity and a set of accompanying policies for deployment. A release provides k-anonymity protection if the information for each person contained in the release cannot be distinguished from at least k-1 individuals whose information also appears in the release. This paper also examines re-identification attacks that can be realized on releases that adhere to k- anonymity unless accompanying policies are respected. The k-anonymity protection model is important because it forms the basis on which the real-world systems known as Datafly, µ-Argus and k-Similar provide guarantees of privacy protection." ] }
0912.0071
2950943617
Privacy-preserving machine learning algorithms are crucial for the increasingly common setting in which personal data, such as medical or financial records, are analyzed. We provide general techniques to produce privacy-preserving approximations of classifiers learned via (regularized) empirical risk minimization (ERM). These algorithms are private under the @math -differential privacy definition due to (2006). First we apply the output perturbation ideas of (2006), to ERM classification. Then we propose a new method, objective perturbation, for privacy-preserving machine learning algorithm design. This method entails perturbing the objective function before optimizing over classifiers. If the loss and regularizer satisfy certain convexity and differentiability criteria, we prove theoretical results showing that our algorithms preserve privacy, and provide generalization bounds for linear and nonlinear kernels. We further present a privacy-preserving technique for tuning the parameters in general machine learning algorithms, thereby providing end-to-end privacy guarantees for the training process. We apply these results to produce privacy-preserving analogues of regularized logistic regression and support vector machines. We obtain encouraging results from evaluating their performance on real demographic and benchmark data sets. Our results show that both theoretically and empirically, objective perturbation is superior to the previous state-of-the-art, output perturbation, in managing the inherent tradeoff between privacy and learning performance.
Differential privacy, the formal privacy definition used in our paper, was proposed by the seminal work of @cite_28 , and has been used since in numerous works on privacy . Unlike many other privacy definitions, such as those mentioned above, differential privacy has been shown to be resistant to composition attacks (attacks involving side-information) . Some follow-up work on differential privacy includes work on differentially-private combinatorial optimization, due to @cite_13 , and differentially-private contingency tables, due to @cite_10 and @cite_21 . @cite_23 provide a more statistical view of differential privacy, and @cite_26 provide a technique of generating synthetic data using compression via random linear or affine transformations.
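For reference, the privacy notion that all of these follow-ups build on can be stated in one line. The display below is a standard phrasing of epsilon-differential privacy; the symbols (a randomized mechanism M, the privacy parameter epsilon, and datasets D, D' differing in a single record) are the conventional ones and are not quoted from the paper itself.
\[
  \Pr[\, M(D) \in S \,] \;\le\; e^{\varepsilon} \, \Pr[\, M(D') \in S \,]
  \qquad \text{for all measurable output sets } S .
\]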
{ "cite_N": [ "@cite_13", "@cite_26", "@cite_28", "@cite_21", "@cite_23", "@cite_10" ], "mid": [ "", "2080044359", "2101771965", "2034053794", "", "2123733729" ], "abstract": [ "", "In this paper, we propose the first formal privacy analysis of a data anonymization process known as the synthetic data generation, a technique becoming popular in the statistics community. The target application for this work is a mapping program that shows the commuting patterns of the population of the United States. The source data for this application were collected by the U.S. Census Bureau, but due to privacy constraints, they cannot be used directly by the mapping program. Instead, we generate synthetic data that statistically mimic the original data while providing privacy guarantees. We use these synthetic data as a surrogate for the original data. We find that while some existing definitions of privacy are inapplicable to our target application, others are too conservative and render the synthetic data useless since they guard against privacy breaches that are very unlikely. Moreover, the data in our target application is sparse, and none of the existing solutions are tailored to anonymize sparse data. In this paper, we propose solutions to address the above issues.", "We introduce a new, generic framework for private data analysis.The goal of private data analysis is to release aggregate information about a data set while protecting the privacy of the individuals whose information the data set contains.Our framework allows one to release functions f of the data withinstance-based additive noise. That is, the noise magnitude is determined not only by the function we want to release, but also bythe database itself. One of the challenges is to ensure that the noise magnitude does not leak information about the database. To address that, we calibrate the noise magnitude to the smoothsensitivity of f on the database x --- a measure of variabilityof f in the neighborhood of the instance x. The new frameworkgreatly expands the applicability of output perturbation, a technique for protecting individuals' privacy by adding a smallamount of random noise to the released statistics. To our knowledge, this is the first formal analysis of the effect of instance-basednoise in the context of data privacy. Our framework raises many interesting algorithmic questions. Namely,to apply the framework one must compute or approximate the smoothsensitivity of f on x. We show how to do this efficiently for several different functions, including the median and the cost ofthe minimum spanning tree. We also give a generic procedure based on sampling that allows one to release f(x) accurately on manydatabases x. This procedure is applicable even when no efficient algorithm for approximating smooth sensitivity of f is known orwhen f is given as a black box. We illustrate the procedure by applying it to k-SED (k-means) clustering and learning mixtures of Gaussians.", "We introduce the smoothed analysis of algorithms, which continuously interpolates between the worst-case and average-case analyses of algorithms. In smoothed analysis, we measure the maximum over inputs of the expected performance of an algorithm under small random perturbations of that input. We measure this performance in terms of both the input size and the magnitude of the perturbations. 
We show that the simplex algorithm has smoothed complexity polynomial in the input size and the standard deviation of Gaussian perturbations.", "", "The contingency table is a work horse of official statistics, the format of reported data for the US Census, Bureau of Labor Statistics, and the Internal Revenue Service. In many settings such as these privacy is not only ethically mandated, but frequently legally as well. Consequently there is an extensive and diverse literature dedicated to the problems of statistical disclosure control in contingency table release. However, all current techniques for reporting contingency tables fall short on at leas one of privacy, accuracy, and consistency (among multiple released tables). We propose a solution that provides strong guarantees for all three desiderata simultaneously. Our approach can be viewed as a special case of a more general approach for producing synthetic data: Any privacy-preserving mechanism for contingency table release begins with raw data and produces a (possibly inconsistent) privacy-preserving set of marginals. From these tables alone-and hence without weakening privacy--we will find and output the \"nearest\" consistent set of marginals. Interestingly, this set is no farther than the tables of the raw data, and consequently the additional error introduced by the imposition of consistency is no more than the error introduced by the privacy mechanism itself. The privacy mechanism of [20] gives the strongest known privacy guarantees, with very little error. Combined with the techniques of the current paper, we therefore obtain excellent privacy, accuracy, and consistency among the tables. Moreover, our techniques are surprisingly efficient. Our techniques apply equally well to the logical cousin of the contingency table, the OLAP cube." ] }
0912.0071
2950943617
Privacy-preserving machine learning algorithms are crucial for the increasingly common setting in which personal data, such as medical or financial records, are analyzed. We provide general techniques to produce privacy-preserving approximations of classifiers learned via (regularized) empirical risk minimization (ERM). These algorithms are private under the @math -differential privacy definition due to (2006). First we apply the output perturbation ideas of (2006), to ERM classification. Then we propose a new method, objective perturbation, for privacy-preserving machine learning algorithm design. This method entails perturbing the objective function before optimizing over classifiers. If the loss and regularizer satisfy certain convexity and differentiability criteria, we prove theoretical results showing that our algorithms preserve privacy, and provide generalization bounds for linear and nonlinear kernels. We further present a privacy-preserving technique for tuning the parameters in general machine learning algorithms, thereby providing end-to-end privacy guarantees for the training process. We apply these results to produce privacy-preserving analogues of regularized logistic regression and support vector machines. We obtain encouraging results from evaluating their performance on real demographic and benchmark data sets. Our results show that both theoretically and empirically, objective perturbation is superior to the previous state-of-the-art, output perturbation, in managing the inherent tradeoff between privacy and learning performance.
Previous literature has also considered learning with differential privacy. One of the first such works is @cite_4 , which presents a general, although computationally inefficient, method for PAC-learning finite concept classes. @cite_24 presents a method for releasing a database in a differentially-private manner, so that certain fixed classes of queries can be answered accurately, provided the class of queries has a bounded VC-dimension. Their methods can also be used to learn classifiers with a fixed VC-dimension -- see @cite_4 ; however, the resulting algorithm is also computationally inefficient. Some sample complexity lower bounds in this setting have been provided by @cite_9 . In addition, @cite_15 explore a connection between differential privacy and robust statistics, and provide an algorithm for privacy-preserving regression using ideas from robust statistics. However, their algorithm also requires a running time which is exponential in the data dimension, and is hence computationally inefficient.
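Among the approaches surveyed here and in the abstract above, output perturbation is the easiest to make concrete. The Python sketch below is only an illustration under stated assumptions: it assumes a convex, 1-Lipschitz loss with L2 regularization parameter lam, for which the L2-sensitivity of the exact minimizer is commonly bounded by 2/(n*lam); the helper train_erm and its interface are hypothetical placeholders rather than code from any of the cited works.

import numpy as np

def output_perturbation(train_erm, X, y, lam, eps, rng=None):
    """eps-differentially private ERM via output perturbation (sketch).

    Assumes train_erm(X, y, lam) returns the exact minimizer (a d-vector)
    of an L2-regularized ERM objective with a convex, 1-Lipschitz loss,
    so that the minimizer's L2-sensitivity is at most 2 / (n * lam).
    Adding noise with density proportional to exp(-beta * ||b||),
    beta = n * lam * eps / 2, then yields eps-differential privacy.
    """
    rng = np.random.default_rng() if rng is None else rng
    n, d = X.shape
    w = train_erm(X, y, lam)                # non-private minimizer (hypothetical helper)
    beta = n * lam * eps / 2.0              # noise scale from the assumed sensitivity bound
    direction = rng.normal(size=d)
    direction /= np.linalg.norm(direction)  # uniform direction on the unit sphere
    radius = rng.gamma(shape=d, scale=1.0 / beta)  # ||b|| follows a Gamma(d, 1/beta) law
    return w + radius * direction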
{ "cite_N": [ "@cite_24", "@cite_9", "@cite_4", "@cite_15" ], "mid": [ "2169570643", "", "2163263459", "2138865266" ], "abstract": [ "We demonstrate that, ignoring computational constraints, it is possible to release privacy-preserving databases that are useful for all queries over a discretized domain from any given concept class with polynomial VC-dimension. We show a new lower bound for releasing databases that are useful for halfspace queries over a continuous domain. Despite this, we give a privacy-preserving polynomial time algorithm that releases information useful for all halfspace queries, for a slightly relaxed definition of usefulness. Inspired by learning theory, we introduce a new notion of data privacy, which we call distributional privacy, and show that it is strictly stronger than the prevailing privacy notion, differential privacy.", "", "In a social network, nodes correspond topeople or other social entities, and edges correspond to social links between them. In an effort to preserve privacy, the practice of anonymization replaces names with meaningless unique identifiers. We describe a family of attacks such that even from a single anonymized copy of a social network, it is possible for an adversary to learn whether edges exist or not between specific targeted pairs of nodes.", "We show by means of several examples that robust statistical estimators present an excellent starting point for differentially private estimators. Our algorithms use a new paradigm for differentially private mechanisms, which we call Propose-Test-Release (PTR), and for which we give a formal definition and general composition theorems." ] }
0912.1045
2953297433
The general problem of robust optimization is this: one of several possible scenarios will appear tomorrow, but things are more expensive tomorrow than they are today. What should you anticipatorily buy today, so that the worst-case cost (summed over both days) is minimized? and considered the k-robust model where the possible outcomes tomorrow are given by all demand-subsets of size k, and gave algorithms for the set cover problem, and the Steiner tree and facility location problems in this model, respectively. In this paper, we give the following simple and intuitive template for k-robust problems: "having built some anticipatory solution, if there exists a single demand whose augmentation cost is larger than some threshold, augment the anticipatory solution to cover this demand as well, and repeat". In this paper we show that this template gives us improved approximation algorithms for k-robust Steiner tree and set cover, and the first approximation algorithms for k-robust Steiner forest, minimum-cut and multicut. All our approximation ratios (except for multicut) are almost best possible. As a by-product of our techniques, we also get algorithms for max-min problems of the form: "given a covering problem instance, which k of the elements are costliest to cover?".
The study of approximation algorithms for robust optimization was initiated by @cite_31 : they studied the case where the scenarios are explicitly listed, and gave constant-factor approximations for Steiner tree and facility location, and logarithmic approximations to mincut and multicut problems. @cite_13 improved the mincut result to a constant-factor approximation, and also gave an @math -approximation for robust shortest-paths. The algorithms in @cite_13 were also ``thresholded algorithms'' and the algorithms in this paper can be seen as natural extensions of that idea to more complex uncertainty sets and a larger class of problems (the uncertainty set in @cite_13 only contained singleton demands).
{ "cite_N": [ "@cite_31", "@cite_13" ], "mid": [ "2134396776", "1579892319" ], "abstract": [ "Robust optimization has traditionally focused on uncertainty in data and costs in optimization problems to formulate models whose solutions will be optimal in the worst-case among the various uncertain scenarios in the model. While these approaches may be thought of defining data- or cost-robust problems, we formulate a new \"demand-robust\" model motivated by recent work on two-stage stochastic optimization problems. We propose this in the framework of general covering problems and prove a general structural lemma about special types of first-stage solutions for such problems: there exists a first-stage solution that is a minimal feasible solution for the union of the demands for some subset of the scenarios and its objective function value is no more than twice the optimal. We then provide approximation algorithms for a variety of standard discrete covering problems in this setting, including minimum cut, minimum multi-cut, shortest paths, Steiner trees, vertex cover and un-capacitated facility location. While many of our results draw from rounding approaches recently developed for stochastic programming problems, we also show new applications of old metric rounding techniques for cut problems in this demand-robust setting.", "Demand-robust versions of common optimization problems were recently introduced by [4] motivated by the worst-case considerations of two-stage stochastic optimization models. We study the demand robust min-cut and shortest path problems, and exploit the nature of the robust objective to give improved approximation factors. Specifically, we give a @math approximation for robust min-cut and a 7.1 approximation for robust shortest path. Previously, the best approximation factors were O(log n) for robust min-cut and 16 for robust shortest paths, both due to [4]. Our main technique can be summarized as follows: We investigate each of the second stage scenarios individually, checking if it can be independently serviced in the second stage within an acceptable cost (namely, a guess of the optimal second stage costs). For the costly scenarios that cannot be serviced in this way (“rainy days”), we show that they can be fully taken care of in a near-optimal first stage solution (i.e., by ”paying today”). We also consider “hitting-set” extensions of the robust min-cut and shortest path problems and show that our techniques can be combined with algorithms for Steiner multicut and group Steiner tree problems to give similar approximation guarantees for the hitting-set versions of robust min-cut and shortest path problems respectively." ] }
0912.1045
2953297433
The general problem of robust optimization is this: one of several possible scenarios will appear tomorrow, but things are more expensive tomorrow than they are today. What should you anticipatorily buy today, so that the worst-case cost (summed over both days) is minimized? and considered the k-robust model where the possible outcomes tomorrow are given by all demand-subsets of size k, and gave algorithms for the set cover problem, and the Steiner tree and facility location problems in this model, respectively. In this paper, we give the following simple and intuitive template for k-robust problems: "having built some anticipatory solution, if there exists a single demand whose augmentation cost is larger than some threshold, augment the anticipatory solution to cover this demand as well, and repeat". In this paper we show that this template gives us improved approximation algorithms for k-robust Steiner tree and set cover, and the first approximation algorithms for k-robust Steiner forest, minimum-cut and multicut. All our approximation ratios (except for multicut) are almost best possible. As a by-product of our techniques, we also get algorithms for max-min problems of the form: "given a covering problem instance, which k of the elements are costliest to cover?".
The @math -robust model was introduced in @cite_12 , where they gave an @math -approximation for set cover; here @math and @math are the number of sets and elements in the set system. To get such an algorithm, @cite_12 first gave an @math -approximation algorithm for the @math -max-min set-cover problem using the online algorithm for set cover @cite_36 . They then used the @math -max-min problem as a separation oracle in an LP-rounding-based algorithm (à la @cite_23 ) to get the same approximation guarantee for the @math -robust problem. They also showed an @math hardness of approximation for @math -max-min and @math -robust set cover. @cite_7 noted that the LP-based techniques of @cite_12 did not give good results for Steiner tree, and developed new combinatorial constant-factor approximations for @math -robust versions of Steiner tree, Steiner forest on trees and facility location. Using our framework, the algorithm we get for Steiner tree can be viewed as a rephrasing of their algorithm---our proof is arguably more transparent and results in a better bound. Our approach can also be used to get a slightly better ratio than @cite_7 for the Steiner forest problem on trees.
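The template quoted in the abstract above (``having built some anticipatory solution, if there exists a single demand whose augmentation cost is larger than some threshold, augment the anticipatory solution to cover this demand as well, and repeat'') can be written as a generic loop. The sketch below is only that loop: the callbacks augmentation_cost and augment are hypothetical placeholders for the problem-specific subroutines (the cheapest way to additionally cover one demand), and choosing the threshold, which is where all the analysis lies, is left to the caller.

def k_robust_first_stage(demands, threshold, augmentation_cost, augment,
                         initial_solution=frozenset()):
    """Generic thresholded template for k-robust covering problems (sketch).

    augmentation_cost(solution, d): cheapest cost of extending `solution`
        so that the single demand d is covered (problem-specific, hypothetical).
    augment(solution, d): an extended solution that covers d (hypothetical).
    Termination relies on augment() making d cheap to serve afterwards.
    """
    solution = initial_solution
    changed = True
    while changed:
        changed = False
        for d in demands:
            if augmentation_cost(solution, d) > threshold:
                solution = augment(solution, d)   # buy coverage for this costly demand today
                changed = True
                break                             # rescan all demands after augmenting
    return solution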
{ "cite_N": [ "@cite_36", "@cite_23", "@cite_7", "@cite_12" ], "mid": [ "", "2117399775", "2167240010", "1604332079" ], "abstract": [ "", "Stochastic optimization problems attempt to model uncertainty in the data by assuming that (part of) the input is specified in terms of a probability distribution. We consider the well-studied paradigm of 2-stage models with recourse: first, given only distributional information about (some of) the data one commits on initial actions, and then once the actual data is realized (according to the distribution), further (recourse) actions can be taken. We give the first approximation algorithms for 2-stage discrete stochastic optimization problems with recourse for which the underlying random data is given by a \"black box\" and no restrictions are placed on the costs in the two stages, based on an FPRAS for the LP relaxation of the stochastic problem (which has exponentially many variables and constraints). Among the range of applications we consider are stochastic versions of the set cover, vertex cover, facility location, multicut (on trees), and multicommodity flow problems.", "We study two-stage robustvariants of combinatorial optimization problems like Steiner tree, Steiner forest, and uncapacitated facility location. The robust optimization problems, previously studied by [1], [6], and [4], are two-stage planning problems in which the requirements are revealed after some decisions are taken in stage one. One has to then complete the solution, at a higher cost, to meet the given requirements. In the robust Steiner tree problem, for example, one buys some edges in stage one after which some terminals are revealed. In the second stage, one has to buy more edges, at a higher cost, to complete the stage one solution to build a Steiner tree on these terminals. The objective is to minimize the total cost under the worst-case scenario. In this paper, we focus on the case of exponentially manyscenarios given implicitly. A scenario consists of any subset of kterminals (for Steiner tree), or any subset of kterminal-pairs (for Steiner forest), or any subset of kclients (for facility location). We present the first constant-factor approximation algorithms for the robust Steiner tree and robust uncapacitated facility location problems. For the robust Steiner forest problem with uniform inflation, we present an O(logn)-approximation and show that the problem with two inflation factors is impossible to approximate within O(log1 2 i¾? i¾?n) factor, for any constant i¾?> 0, unless NP has randomized quasi-polynomial time algorithms. Finally, we show APX-hardness of the robust min-cut problem (even with singleton-set scenarios), resolving an open question by [1] and [6].", "Following the well-studied two-stage optimization framework for stochastic optimization [15,8], we study approximation algorithms for robust two-stage optimization problems with an exponential number of scenarios. Prior to this work, [8] introduced approximation algorithms for two-stage robust optimization problems with explicitly given scenarios. In this paper, we assume the set of possible scenarios is given implicitly, for example by an upper bound on the number of active clients. In two-stage robust optimization, we need to pre-purchase some resources in the first stage before the adversary's action. In the second stage, after the adversary chooses the clients that need to be covered, we need to complement our solution by purchasing additional resources at an inflated price. 
The goal is to minimize the cost in the worst-case scenario. We give a general approach for solving such problems using LP rounding. Our approach uncovers an interesting connection between robust optimization and online competitive algorithms. We use this approach, together with known online algorithms, to develop approximation algorithms for several robust covering problems, such as set cover, vertex cover, and edge cover. We also study a simple buy-at-oncealgorithm that either covers all items in the first stage or does nothing in the first stage and waits to build the complete solution in the second stage. We show that this algorithm gives tight approximation factors for unweighted variants of these covering problems, but performs poorly for general weighted problems." ] }
0912.1045
2953297433
The general problem of robust optimization is this: one of several possible scenarios will appear tomorrow, but things are more expensive tomorrow than they are today. What should you anticipatorily buy today, so that the worst-case cost (summed over both days) is minimized? and considered the k-robust model where the possible outcomes tomorrow are given by all demand-subsets of size k, and gave algorithms for the set cover problem, and the Steiner tree and facility location problems in this model, respectively. In this paper, we give the following simple and intuitive template for k-robust problems: "having built some anticipatory solution, if there exists a single demand whose augmentation cost is larger than some threshold, augment the anticipatory solution to cover this demand as well, and repeat". In this paper we show that this template gives us improved approximation algorithms for k-robust Steiner tree and set cover, and the first approximation algorithms for k-robust Steiner forest, minimum-cut and multicut. All our approximation ratios (except for multicut) are almost best possible. As a by-product of our techniques, we also get algorithms for max-min problems of the form: "given a covering problem instance, which k of the elements are costliest to cover?".
To the best of our knowledge, none of the @math -max-min problems other than min-cut and set cover @cite_12 have been studied earlier. The @math -min-min versions of covering problems (i.e., ``which @math demands are the cheapest to cover?'') have been extensively studied for set cover @cite_11 @cite_19 , Steiner tree @cite_32 , Steiner forest @cite_9 , min-cut and multicut @cite_33 @cite_34 . However, these problems seem to be related to the @math -max-min versions only in spirit.
{ "cite_N": [ "@cite_33", "@cite_9", "@cite_32", "@cite_19", "@cite_34", "@cite_12", "@cite_11" ], "mid": [ "2013355418", "1500932665", "2038190710", "1993119087", "2169528477", "1604332079", "2155148197" ], "abstract": [ "We study the k-multicut problem: Given an edge-weighted undirected graph, a set of l pairs of vertices, and a target k ≤ l, find the minimum cost set of edges whose removal disconnects at least k pairs. This generalizes the well known multicut problem, where k = l. We show that the k-multicut problem on trees can be approximated within a factor of 8 3 + e, for any fixed e > 0, and within O(log2 n log log n) on general graphs, where n is the number of vertices in the graph.For any fixed e > 0, we also obtain a polynomial time algorithm for k-multicut on trees which returns a solution of cost at most (2 + e) · OPT, that separates at least (1 - e) · k pairs, where OPT is the cost of the optimal solution separating k pairs.Our techniques also give a simple 2-approximation algorithm for the multicut problem on trees using total unimodularity, matching the best known algorithm [8].", "The k-forest problem is a common generalization of both the k-MST and the dense-k-subgraph problems. Formally, given a metric space on n vertices V, with m demand pairs ⊆ V × V and a \"target\" k ≤ m, the goal is to find a minimum cost subgraph that connects at least k demand pairs. In this paper, we give an O(min √n,√k )- approximation algorithm for k-forest, improving on the previous best ratio of O(min n2 3,√m log n) by Segev and Segev [20]. We then apply our algorithm for k-forest to obtain approximation algorithms for several Dial-a-Ride problems. The basic Dial-a-Ride problem is the following: given an n point metric space with m objects each with its own source and destination, and a vehicle capable of carrying at most k objects at any time, find the minimum length tour that uses this vehicle to move each object from its source to destination. We prove that an a-approximation algorithm for the k-forest problem implies an O(αċlog2 n)-approximation algorithm for Dial-a-Ride. Using our results for k-forest, we get an O(min √n,√k ċlog2 n)-approximation algorithm for Dial-a-Ride. The only previous result known for Dial-a-Ride was an O(√k log n)-approximation by Charikar and Raghavachari [5]; our results give a different proof of a similar approximation guarantee-- in fact, when the vehicle capacity k is large, we give a slight improvement on their results. The reduction from Dial-a-Ride to the k-forest problem is fairly robust, and allows us to obtain approximation algorithms (with the same guarantee) for the following generalizations: (i) Non-uniform Dial-a-Ride, where the cost of traversing each edge is an arbitrary nondecreasing function of the number of objects in the vehicle; and (ii) Weighted Diala-Ride, where demands are allowed to have different weights. The reduction is essential, as it is unclear how to extend the techniques of Charikar and Raghavachari to these Dial-a-Ride generalizations.", "We present a polynomial time 2-approximation algorithm for the problem of finding the minimum tree that spans at least k vertices. 
Our result also leads to a 2-approximation algorithm for finding the minimum tour that visits k vertices and to a 3-approximation algorithm for the problem of finding the maximum number of vertices that can be spanned by a tree of length at most a given bound.", "Several important NP-hard combinatorial optimization problems can be posed as packing covering integer programs; the randomized rounding technique of Raghavan and Thompson is a powerful tool with which to approximate them well. We present one elementary unifying property of all these integer linear programs and use the FKG correlation inequality to derive an improved analysis of randomized rounding on them. This yields a pessimistic estimator, thus presenting deterministic polynomial-time algorithms for them with approximation guarantees that are significantly better than those known.", "Hierarchical graph decompositions play an important role in the design of approximation and online algorithms for graph problems. This is mainly due to the fact that the results concerning the approximation of metric spaces by tree metrics (e.g. [10,11,14,16]) depend on hierarchical graph decompositions. In this line of work a probability distribution over tree graphs is constructed from a given input graph, in such a way that the tree distances closely resemble the distances in the original graph. This allows it, to solve many problems with a distance-based cost function on trees, and then transfer the tree solution to general undirected graphs with only a logarithmic loss in the performance guarantee. The results about oblivious routing [30,22] in general undirected graphs are based on hierarchical decompositions of a different type in the sense that they are aiming to approximate the bottlenecks in the network (instead of the point-to-point distances). We call such decompositions cut-based decompositions. It has been shown that they also can be used to design approximation and online algorithms for a wide variety of different problems, but at the current state of the art the performance guarantee goes down by an O(log2n log log n)-factor when making the transition from tree networks to general graphs. In this paper we show how to construct cut-based decompositions that only result in a logarithmic loss in performance, which is asymptotically optimal. Remarkably, one major ingredient of our proof is a distance-based decomposition scheme due to Fakcharoenphol, Rao and Talwar [16]. This shows an interesting relationship between these seemingly different decomposition techniques. The main applications of the new decomposition are an optimal O(log n)-competitive algorithm for oblivious routing in general undirected graphs, and an O(log n)-approximation for Minimum Bisection, which improves the O(log1.5n) approximation by Feige and Krauthgamer [17].", "Following the well-studied two-stage optimization framework for stochastic optimization [15,8], we study approximation algorithms for robust two-stage optimization problems with an exponential number of scenarios. Prior to this work, [8] introduced approximation algorithms for two-stage robust optimization problems with explicitly given scenarios. In this paper, we assume the set of possible scenarios is given implicitly, for example by an upper bound on the number of active clients. In two-stage robust optimization, we need to pre-purchase some resources in the first stage before the adversary's action. 
In the second stage, after the adversary chooses the clients that need to be covered, we need to complement our solution by purchasing additional resources at an inflated price. The goal is to minimize the cost in the worst-case scenario. We give a general approach for solving such problems using LP rounding. Our approach uncovers an interesting connection between robust optimization and online competitive algorithms. We use this approach, together with known online algorithms, to develop approximation algorithms for several robust covering problems, such as set cover, vertex cover, and edge cover. We also study a simple buy-at-oncealgorithm that either covers all items in the first stage or does nothing in the first stage and waits to build the complete solution in the second stage. We show that this algorithm gives tight approximation factors for unweighted variants of these covering problems, but performs poorly for general weighted problems.", "We prove that the classical bounds on the performance of the greedy algorithm for approximating MINIMUM COVER with costs are valid for PARTIAL COVER as well, thus lowering, by more than a factor of two, the previously known estimate. In order to do so, we introduce a new simple technique that might be useful for attacking other similar problems." ] }
0911.5610
2088228768
We investigate three superconducting flux qubits coupled in a loop. In this setup, tripartite entanglement can be created in a natural, controllable, and stable way. Both generic kinds of tripartite entanglement—the W type as well as the GHZ type entanglement—can be identified among the eigenstates. We also discuss the violation of Bell inequalities in this system and show the impact of a limited measurement fidelity on the detection of entanglement and quantum nonlocality.
Related work has shown similar properties for a ring of exchange-coupled qubits @cite_5 even in the ground state. Open linear coupling topologies, albeit easier to prepare experimentally, require more complex pulse sequences @cite_18 @cite_32 @cite_22 because the eigenstates do not have tripartite entanglement; they become more efficient in connected networks @cite_6 . Also, tripartite entanglement between two superconducting cavities and one qubit has been proposed @cite_20 . Beyond tripartite entanglement, a circuit QED setup has been suggested for the fast preparation of an @math -qubit GHZ state in superconducting flux or charge qubits @cite_38 .
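For concreteness, the two inequivalent classes of genuine tripartite entanglement referred to above are usually represented by the following states, written in the computational basis (a standard convention, not a quotation from the paper):
\[
  |\mathrm{GHZ}\rangle = \tfrac{1}{\sqrt{2}} \bigl( |000\rangle + |111\rangle \bigr),
  \qquad
  |W\rangle = \tfrac{1}{\sqrt{3}} \bigl( |001\rangle + |010\rangle + |100\rangle \bigr).
\]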
{ "cite_N": [ "@cite_38", "@cite_18", "@cite_22", "@cite_32", "@cite_6", "@cite_5", "@cite_20" ], "mid": [ "2963869838", "1969852938", "2015591316", "1970167444", "2095175143", "2031027102", "2946078054" ], "abstract": [ "We propose a one-step scheme to generate GHZ states for superconducting flux qubits or charge qubits in a circuit QED setup. The GHZ state can be produced within the coherence time of the multi-qubit system. Our scheme is independent of the initial state of the transmission line resonator and works in the presence of higher harmonic modes. Our analysis also shows that the scheme is robust to various operation errors and environmental noise.", "We introduce a suit of simple entangling protocols for generating tripartite GHZ and W states in systems with anisotropic exchange interaction g(XX+YY)+g'ZZ. An interesting example is provided by macroscopic entanglement in Josephson phase qubits with capacitive (g'=0) and inductive (0<|g' g|<0.1) couplings.", "We consider the possibility of generating macroscopic entangled states in capacitively coupled phase qubits. First we discuss the operation of phase qubits and the implementation of the basic gate operations in them. We then analyze two possible procedures that can be used to generate n -qubit entangled states, such as the Greenberger–Horne–Zeilinger state for the case n =3. The procedures we propose are constructed under the experimentally motivated constraint of trying to minimize the number of control lines used to manipulate the qubits.", "Going beyond the entanglement of microscopic objects (such as photons, spins, and ions), here we propose an efficient approach to produce and control the quantum entanglement of three macroscopic coupled superconducting qubits. By conditionally rotating, one by one, selected Josephson-charge qubits, we show that their Greenberger-Horne-Zeilinger (GHZ) entangled states can be deterministically generated. The existence of GHZ correlations between these qubits could be experimentally demonstrated by effective single-qubit operations followed by high-fidelity single-shot readouts. The possibility of using the prepared GHZ correlations to test the macroscopic conflict between the noncommutativity of quantum mechanics and the commutativity of classical physics is also discussed.", "We generalize the recently proposed Greenberger-Horne-Zeilinger tripartite protocol [A. Galiautdinov and J. M. Martinis, Phys. Rev. A 78, 010305(R) (2008)] to fully connected networks of weakly coupled qubits interacting by way of anisotropic Heisenberg exchange g(XX+YY)+g-tildeZZ. Our model differs from the more familiar Ising-Heisenberg chain in that here every qubit interacts with every other qubit in the circuit. The assumption of identical couplings on all qubit pairs allows an elegant proof of the protocol for arbitrary N. In order to further make contact with experiment, we study fidelity degradation due to coupling imperfections by numerically simulating the N=3 and 4 cases. Our simulations indicate that the best fidelity at unequal couplings is achieved when (a) the system is initially prepared in the uniform superposition state (similarly to how it is done in the ideal case) and (b) the entangling time and the final rotations on each of the qubits are appropriately adjusted.", "We investigate the creation of highly entangled ground states in a system of three exchange-coupled qubits arranged in a ring geometry. 
Suitable magnetic field configurations yielding approximate Greenberger-Horne-Zeilinger and exact W ground states are identified. The entanglement in the system is studied at finite temperature in terms of the mixed-state tangle tau. By generalizing a conjugate gradient optimization algorithm originally developed to evaluate the entanglement of formation, we demonstrate that tau can be calculated efficiently and with high precision. We identify the parameter regime for which the equilibrium entanglement of the tripartite system reaches its maximum.", "" ] }
0911.4219
2085984866
In a recent paper, the authors proposed a new class of low-complexity iterative thresholding algorithms for reconstructing sparse signals from a small set of linear measurements DMM . The new algorithms are broadly referred to as AMP, for approximate message passing. This is the first of two conference papers describing the derivation of these algorithms, connection with the related literature, extensions of the original framework, and new empirical evidence. In particular, the present paper outlines the derivation of AMP from standard sum-product belief propagation, and its extension in several directions. We also discuss relations with formal calculations based on statistical mechanics methods.
According to the standard prescription, messages used in the sum-product algorithm should be probability measures over the real line @math , cf. Eqs. ), ). This is impractical from a computational point of view. (A low-complexity message-passing algorithm for a related problem was used in @cite_11 .)
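The low-dimensional alternative alluded to here tracks each message through a few scalars rather than a full measure over the real line. As a reminder of where this leads, a commonly quoted form of the resulting first-order (AMP) recursion is reproduced below; the notation (measurement matrix A with adjoint A^*, scalar threshold functions \eta_t applied componentwise, undersampling ratio \delta, and angular brackets for an empirical average) is an assumption of this sketch and may differ from the paper's own conventions.
\[
  x^{t+1} = \eta_t \bigl( A^{*} z^{t} + x^{t} \bigr),
  \qquad
  z^{t} = y - A x^{t} + \tfrac{1}{\delta}\, z^{t-1}
          \bigl\langle \eta'_{t-1} \bigl( A^{*} z^{t-1} + x^{t-1} \bigr) \bigr\rangle .
\]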
{ "cite_N": [ "@cite_11" ], "mid": [ "2059739152" ], "abstract": [ "Fine-grained network measurement requires routers and switches to update large arrays of counters at very high link speed (e.g. 40 Gbps). A naive algorithm needs an infeasible amount of SRAM to store both the counters and a flow-to-counter association rule, so that arriving packets can update corresponding counters at link speed. This has made accurate per-flow measurement complex and expensive, and motivated approximate methods that detect and measure only the large flows. This paper revisits the problem of accurate per-flow measurement. We present a counter architecture, called Counter Braids, inspired by sparse random graph codes. In a nutshell, Counter Braids \"compresses while counting\". It solves the central problems (counter space and flow-to-counter association) of per-flow measurement by \"braiding\" a hierarchy of counters with random graphs. Braiding results in drastic space reduction by sharing counters among flows; and using random graphs generated on-the-fly with hash functions avoids the storage of flow-to-counter association. The Counter Braids architecture is optimal (albeit with a complex decoder) as it achieves the maximum compression rate asymptotically. For implementation, we present a low-complexity message passing decoding algorithm, which can recover flow sizes with essentially zero error. Evaluation on Internet traces demonstrates that almost all flow sizes are recovered exactly with only a few bits of counter space per flow." ] }
0911.4366
2951797984
We consider the following NP-hard problem: in a weighted graph, find a minimum cost set of vertices whose removal leaves a graph in which no two cycles share an edge. We obtain a constant-factor approximation algorithm, based on the primal-dual method. Moreover, we show that the integrality gap of the natural LP relaxation of the problem is ( n), where n denotes the number of vertices in the graph.
Although there exists a simple @math -approximation algorithm for the vertex cover problem, there is strong evidence that approximating the problem with a factor of @math might be hard, for every @math @cite_10 . It should be noted that the feedback vertex set and diamond hitting set problems are at least as hard to approximate as the vertex cover problem, in the sense that the existence of a @math -approximation algorithm for one of these two problems implies the existence of a @math -approximation algorithm for the vertex cover problem, where @math is a constant.
{ "cite_N": [ "@cite_10" ], "mid": [ "2137118456" ], "abstract": [ "Based on a conjecture regarding the power of unique 2-prover-1-round games presented in [S. Khot, On the power of unique 2-Prover 1-Round games, in: Proc. 34th ACM Symp. on Theory of Computing, STOC, May 2002, pp. 767-775], we show that vertex cover is hard to approximate within any constant factor better than 2. We actually show a stronger result, namely, based on the same conjecture, vertex cover on k-uniform hypergraphs is hard to approximate within any constant factor better than k." ] }
0911.4366
2951797984
We consider the following NP-hard problem: in a weighted graph, find a minimum cost set of vertices whose removal leaves a graph in which no two cycles share an edge. We obtain a constant-factor approximation algorithm, based on the primal-dual method. Moreover, we show that the integrality gap of the natural LP relaxation of the problem is ( n), where n denotes the number of vertices in the graph.
Concerning the feedback vertex set problem, the first approximation algorithm is due to Bar-Yehuda, Geiger, Naor, and Roth @cite_1 , and its approximation factor is @math . Later, @math -approximation algorithms were proposed by Bafna, Berman, and Fujito @cite_6 , and Becker and Geiger @cite_11 . Chudak, Goemans, Hochbaum and Williamson @cite_5 showed that these algorithms can be seen as deriving from the primal-dual method (see for instance @cite_7 @cite_2 ). Starting with an integer programming formulation of the problem, these algorithms simultaneously construct a feasible integral solution and a feasible dual solution of the linear programming relaxation, such that the values of these two solutions are within a constant factor of each other.
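The reason this simultaneous construction yields an approximation guarantee is the usual weak-duality chain: if the algorithm returns a feedback vertex set F together with a feasible dual solution y whose value is within a factor \alpha of the cost of F, then (writing OPT for the optimum integral value, in generic notation rather than the papers' own)
\[
  c(F) \;\le\; \alpha \cdot \mathrm{val}(y) \;\le\; \alpha \cdot \mathrm{OPT}_{\mathrm{LP}} \;\le\; \alpha \cdot \mathrm{OPT}.
\]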
{ "cite_N": [ "@cite_7", "@cite_1", "@cite_6", "@cite_2", "@cite_5", "@cite_11" ], "mid": [ "2053913299", "2081880478", "2024517697", "", "1990800802", "2015905456" ], "abstract": [ "This clearly written , mathematically rigorous text includes a novel algorithmic exposition of the simplex method and also discusses the Soviet ellipsoid algorithm for linear programming; efficient algorithms for network flow, matching, spanning trees, and matroids; the theory of NP-complete problems; approximation algorithms, local search heuristics for NPcomplete problems, more. All chapters are supplemented by thoughtprovoking problems. A useful work for graduate-level students with backgrounds in computer science, operations research, and electrical engineering. Mathematicians wishing a self-contained introduction need look no further.—American Mathematical Monthly. 1982 ed.", "A feedback vertex set of an undirected graph is a subset of vertices that intersects with the vertex set of each cycle in the graph. Given an undirected graph G with n vertices and weights on its vertices, polynomial-time algorithms are provided for approximating the problem of finding a feedback vertex set of G with smallest weight. When the weights of all vertices in G are equal, the performance ratio attained by these algorithms is 4-(2 n). This improves a previous algorithm which achieved an approximation factor of @math for this case. For general vertex weights, the performance ratio becomes @math where @math denotes the maximum degree in G. For the special case of planar graphs this ratio is reduced to 10. An interesting special case of weighted graphs where a performance ratio of 4-(2 n) is achieved is the one where a prescribed subset of the vertices, so-called blackout vertices, is not allowed to participate in any feedback vertex set. It is shown how these algorithms can improve the search performance for constraint satisfaction problems. An application in the area of Bayesian inference of graphs with blackout vertices is also presented.", "A feedback vertex set of a graph is a subset of vertices that contains at least one vertex from every cycle in the graph. The problem considered is that of finding a minimum feedback vertex set given a weighted and undirected graph. We present a simple and efficient approximation algorithm with performance ratio of at most 2, improving previous best bounds for either weighted or unweighted cases of the problem. Any further improvement on this bound, matching the best constant factor known for the vertex cover problem, is deemed challenging. The approximation principle, underlying the algorithm, is based on a generalized form of the classical local ratio theorem, originally developed for approximation of the vertex cover problem, and a more flexible style of its application.", "", "Recently, Becker and Geiger and Bafna, Berman and Fujito gave 2-approximation algorithms for the feedback vertex set problem in undirected graphs. We show how their algorithms can be explained in terms of the primal-dual method for approximation algorithms, which has been used to derive approximation algorithms for network design problems. In the process, we give a new integer programming formulation for the feedback vertex set problem whose integrality gap is at worst a factor of two; the well-known cycle formulation has an integrality gap of @Q(logn), as shown by Even, Naor, Schieber and Zosin. 
We also give a new 2-approximation algorithm for the problem which is a simplification of the algorithm.", "Abstract We show how to find a small loop cutset in a Bayesian network. Finding such a loop cutset is the first step in the method of conditioning for inference. Our algorithm for finding a loop cutset, called MGA, finds a loop cutset which is guaranteed in the worst case to contain less than twice the number of variables contained in a minimum loop cutset. The algorithm is based on a reduction to the weighted vertex feedback set problem and a 2-approximation of the latter problem. The complexity of MGA is O( m + n log n ) where m and n are the number of edges and vertices respectively. A greedy algorithm, called GA, for the weighted vertex feedback set problem is also analyzed and bounds on its performance are given. We test MGA on randomly generated graphs and find that the average ratio between the number of instances associated with the algorithm's output and the number of instances associated with an optimum solution is far better than the worst-case bound." ] }
0911.4366
2951797984
We consider the following NP-hard problem: in a weighted graph, find a minimum cost set of vertices whose removal leaves a graph in which no two cycles share an edge. We obtain a constant-factor approximation algorithm, based on the primal-dual method. Moreover, we show that the integrality gap of the natural LP relaxation of the problem is ( n), where n denotes the number of vertices in the graph.
These algorithms also lead to a characterization of the integrality gap (the integrality gap of an integer programming formulation is the worst-case ratio between the optimum value of the integer program and the optimum value of its linear relaxation) of two different integer programming formulations of the problem, as we now explain. Let @math denote the collection of all the cycles @math of @math . A natural integer programming formulation for the feedback vertex set problem is as follows: (Throughout, @math denotes the (non-negative) cost of vertex @math .) The algorithm of Bar-Yehuda et al. @cite_1 implies that the integrality gap of this integer program is @math . Later, Even, Naor, Schieber, and Zosin @cite_0 proved that its integrality gap is also @math .
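The displayed program referred to by ``as follows'' has not survived in this excerpt; the standard reconstruction is given below, with c_v the (non-negative) cost of vertex v, binary variables x_v indicating which vertices are picked (a conventional choice of variable names), and the sum in each constraint ranging over the vertices of the cycle C.
\[
  \min \; \sum_{v \in V} c_v\, x_v
  \quad \text{s.t.} \quad
  \sum_{v \in C} x_v \;\ge\; 1 \quad \forall\, C \in \mathcal{C},
  \qquad
  x_v \in \{0,1\} \quad \forall\, v \in V .
\]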
{ "cite_N": [ "@cite_0", "@cite_1" ], "mid": [ "2072261513", "2081880478" ], "abstract": [ "Let G=(V,E) be a weighted undirected graph where all weights are at least one. We consider the following generalization of feedback set problems. Let @math be a subset of the vertices. A cycle is called interesting if it intersects the set S. A subset feedback edge (vertex) set is a subset of the edges (vertices) that intersects all interesting cycles. In minimum subset feedback problems the goal is to find such sets of minimum weight. This problem has a variety of applications, among them genetic linkage analysis and circuit testing. The case in which S consists of a single vertex is equivalent to the multiway cut problem, in which the goal is to separate a given set of terminals. Hence, the subset feedback problem is NP-complete and also generalizes the multiway cut problem. We provide a polynomial time algorithm for approximating the subset feedback edge set problem that achieves an approximation factor of two. This implies a @math -approximation algorithm for the subset feedback vertex set problem, where @math is the maximum degree in G. We also consider the multicut problem and show how to achieve an @math approximation factor for this problem, where @math is the value of the optimal fractional solution. To achieve the @math factor we employ a bootstrapping technique.", "A feedback vertex set of an undirected graph is a subset of vertices that intersects with the vertex set of each cycle in the graph. Given an undirected graph G with n vertices and weights on its vertices, polynomial-time algorithms are provided for approximating the problem of finding a feedback vertex set of G with smallest weight. When the weights of all vertices in G are equal, the performance ratio attained by these algorithms is 4-(2 n). This improves a previous algorithm which achieved an approximation factor of @math for this case. For general vertex weights, the performance ratio becomes @math where @math denotes the maximum degree in G. For the special case of planar graphs this ratio is reduced to 10. An interesting special case of weighted graphs where a performance ratio of 4-(2 n) is achieved is the one where a prescribed subset of the vertices, so-called blackout vertices, is not allowed to participate in any feedback vertex set. It is shown how these algorithms can improve the search performance for constraint satisfaction problems. An application in the area of Bayesian inference of graphs with blackout vertices is also presented." ] }
0911.4366
2951797984
We consider the following NP-hard problem: in a weighted graph, find a minimum cost set of vertices whose removal leaves a graph in which no two cycles share an edge. We obtain a constant-factor approximation algorithm, based on the primal-dual method. Moreover, we show that the integrality gap of the natural LP relaxation of the problem is Θ(log n), where n denotes the number of vertices in the graph.
A better formulation has been introduced by @cite_5 . For @math , denote by @math the set of the edges of @math having both ends in @math , by @math the subgraph of @math induced by @math , and by @math the degree of @math in @math . With this notation, the stronger formulation for the feedback vertex set problem can be stated as sketched below. @cite_5 showed that the integrality gap of this integer program asymptotically equals @math . Its constraints derive from the simple observation that the removal of a feedback vertex set @math from @math generates a forest having at most @math edges. Notice that the covering inequalities of the previous formulation are implied by these constraints.
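The display for this stronger formulation is likewise missing from the extracted text. A plausible reconstruction (an assumption based on the notation just introduced, with E(S) the set of edges with both ends in S and d_S(v) the degree of v in the induced subgraph), not a verbatim recovery, is:
\[
\min \sum_{v \in V} c_v x_v
\quad \text{subject to} \quad
\sum_{v \in S} \big( d_S(v) - 1 \big)\, x_v \ \ge\ |E(S)| - |S| + 1 \ \ \forall S \subseteq V,
\qquad x_v \in \{0,1\} \ \ \forall v \in V.
\]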
{ "cite_N": [ "@cite_5" ], "mid": [ "1990800802" ], "abstract": [ "Recently, Becker and Geiger and Bafna, Berman and Fujito gave 2-approximation algorithms for the feedback vertex set problem in undirected graphs. We show how their algorithms can be explained in terms of the primal-dual method for approximation algorithms, which has been used to derive approximation algorithms for network design problems. In the process, we give a new integer programming formulation for the feedback vertex set problem whose integrality gap is at worst a factor of two; the well-known cycle formulation has an integrality gap of @Q(logn), as shown by Even, Naor, Schieber and Zosin. We also give a new 2-approximation algorithm for the problem which is a simplification of the algorithm." ] }
0911.3786
1740147330
We tackle the problem of graph transformation with a particular focus on node cloning. We propose a new approach to graph rewriting where nodes can be cloned zero, one or more times. A node can be cloned together with all its incident edges, with only its outgoing edges, with only its incoming edges or with none of its incident edges. We thus subsume previous works such as the sesqui-pushout, the heterogeneous pushout and the adaptive star grammars approaches. A rewrite rule is defined as a span where the right-hand and left-hand sides are graphs while the interface is a polarized graph. A polarized graph is a graph endowed with some annotations on nodes. The way a node is cloned is indicated by its polarization annotation. We use these annotations for designing graph transformation with polarized cloning. We show how a clone of a node can be built according to the different possible polarizations and define a rewrite step as a final pullback complement followed by a pushout. This is called the polarized sesqui-pushout approach. We also provide an algorithmic presentation of the proposed graph transformation with polarized cloning.
Cloning is also one of the features of the sesqui-pushout approach to graph transformation @cite_0 . In this approach, a rule is a span @math of multigraphs, and the application of a rule to a graph @math can be illustrated by the same figure as for a DPO step (as in the introduction), where the right-hand side is a pushout as in the DPO approach but the left-hand side is a pullback and, moreover, a final pullback complement. The sesqui-pushout approach and ours mainly differ in the way cloning is handled. In @cite_0 , the cloning of a node is performed by copying all incident edges (incoming and outgoing) of the cloned node. This is a particular case of our way of cloning nodes: the use of polarized multigraphs allows us to specify, for every clone, the way its incident edges are copied. Therefore, a sesqui-pushout rewriting step can be simulated by a rewriting step with polarized rules, but the converse does not hold in general.
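As a concrete reading of the cloning policies being compared, the following minimal Python sketch (our own illustration, not code from any cited paper; self-loops are ignored for simplicity) clones a node of a directed multigraph and copies its incident edges according to a polarization flag, the mode "both" corresponding to the sesqui-pushout behaviour described above:

def clone_node(edges, v, v_clone, mode="both"):
    # edges: list of (source, target) pairs of a directed multigraph.
    # mode: "both" copies all incident edges (sesqui-pushout style),
    #       "out" only outgoing edges, "in" only incoming edges, "none" no incident edges.
    new_edges = []
    for (s, t) in edges:
        if s == v and mode in ("both", "out"):
            new_edges.append((v_clone, t))
        if t == v and mode in ("both", "in"):
            new_edges.append((s, v_clone))
    return edges + new_edges

# Cloning node 1 of the graph 0 -> 1 -> 2 while copying only its outgoing edges:
print(clone_node([(0, 1), (1, 2)], v=1, v_clone=3, mode="out"))   # [(0, 1), (1, 2), (3, 2)]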
{ "cite_N": [ "@cite_0" ], "mid": [ "2092445638" ], "abstract": [ "Abstract The purpose of the present paper is twofold: Firstly, show that it is possible to rewrite graphs in a way equivalent to, and in fact slightly more powerful than that of Ehrig, Pfender and Schneider (1973), which has, since then, been developed mainly by the Berlin school. Our method consists in using a single push-out of partial morphisms and is described in Section 3. Section 1 is devoted to the elementary definitions concerning graphs and related terms. Section 2 contains the set-theoretic prerequisites for the sequel but the proofs have been moved into an appendix, for easier reading. Secondly, we indicate in Section 4 why this method is not really fit for rewriting graphs that represent collapsed terms (i.e., sharing common subterms) and we introduce pushouts of total functions, which are not morphisms everywhere on their domain. This method is connected to the classical rewriting of the corresponding terms. The adequacy of these new rewriting rules is then tested to prove a local confluence criterion a la Knuth-Bendix (1970) in Section 5, the proof of which turns out to be very short." ] }
0911.3786
1740147330
We tackle the problem of graph transformation with a particular focus on node cloning. We propose a new approach to graph rewriting where nodes can be cloned zero, one or more times. A node can be cloned together with all its incident edges, with only its outgoing edges, with only its incoming edges or with none of its incident edges. We thus subsume previous works such as the sesqui-pushout, the heterogeneous pushout and the adaptive star grammars approaches. A rewrite rule is defined as a span where the right-hand and left-hand sides are graphs while the interface is a polarized graph. A polarized graph is a graph endowed with some annotations on nodes. The way a node is cloned is indicated by its polarization annotation. We use these annotations for designing graph transformation with polarized cloning. We show how a clone of a node can be built according to the different possible polarizations and define a rewrite step as a final pullback complement followed by a pushout. This is called the polarized sesqui-pushout approach. We also provide an algorithmic presentation of the proposed graph transformation with polarized cloning.
In @cite_0 , the sesqui-pushout approach has been compared to the classical double pushout and single pushout approaches. The authors showed that the sesqui-pushout and the DPO approaches coincide under some conditions (see Proposition 12 of @cite_0 ). They also showed how the sesqui-pushout approach can be simulated by the SPO approach and gave conditions under which an SPO derivation can be simulated by a sesqui-pushout one (see Proposition 13 of @cite_0 ). So, according to the proposition of this paper showing how to simulate a sesqui-pushout step in our setting, we can infer the same comparisons with respect to DPO and SPO for our graph rewriting definition.
{ "cite_N": [ "@cite_0" ], "mid": [ "2092445638" ], "abstract": [ "Abstract The purpose of the present paper is twofold: Firstly, show that it is possible to rewrite graphs in a way equivalent to, and in fact slightly more powerful than that of Ehrig, Pfender and Schneider (1973), which has, since then, been developed mainly by the Berlin school. Our method consists in using a single push-out of partial morphisms and is described in Section 3. Section 1 is devoted to the elementary definitions concerning graphs and related terms. Section 2 contains the set-theoretic prerequisites for the sequel but the proofs have been moved into an appendix, for easier reading. Secondly, we indicate in Section 4 why this method is not really fit for rewriting graphs that represent collapsed terms (i.e., sharing common subterms) and we introduce pushouts of total functions, which are not morphisms everywhere on their domain. This method is connected to the classical rewriting of the corresponding terms. The adequacy of these new rewriting rules is then tested to prove a local confluence criterion a la Knuth-Bendix (1970) in Section 5, the proof of which turns out to be very short." ] }
0911.3786
1740147330
We tackle the problem of graph transformation with a particular focus on node cloning. We propose a new approach to graph rewriting where nodes can be cloned zero, one or more times. A node can be cloned together with all its incident edges, with only its outgoing edges, with only its incoming edges or with none of its incident edges. We thus subsume previous works such as the sesqui-pushout, the heterogeneous pushout and the adaptive star grammars approaches. A rewrite rule is defined as a span where the right-hand and left-hand sides are graphs while the interface is a polarized graph. A polarized graph is a graph endowed with some annotations on nodes. The way a node is cloned is indicated by its polarization annotation. We use these annotations for designing graph transformation with polarized cloning. We show how a clone of a node can be built according to the different possible polarizations and define a rewrite step as a final pullback complement followed by a pushout. This is called the polarized sesqui-pushout approach. We also provide an algorithmic presentation of the proposed graph transformation with polarized cloning.
Cloning has also been a subject of interest in @cite_8 . The authors considered rewrite rules of the form @math where @math is a star, i.e., @math is a (nonterminal) node surrounded by its adjacent nodes together with the edges that connect them. Rewrite rules which perform the cloning of a node are given in Definition 6 of @cite_8 . These rules show how a star can be removed, kept identical to itself, or copied (cloned) more than once. Here again, unlike our framework, the cloning does not take the arity of the nodes into account and, as in the case of the sesqui-pushout approach, a node is copied together with all its incoming and outgoing edges.
{ "cite_N": [ "@cite_8" ], "mid": [ "2165138661" ], "abstract": [ "We propose an extension of node and hyperedge replacement grammars, called adaptive star grammars, and study their basic properties. A rule in an adaptive star grammar is actually a rule schema which, via the so-called cloning operation, yields a potentially infinite number of concrete rules. Adaptive star grammars are motivated by application areas such as modeling and refactoring object-oriented programs. We prove that cloning can be applied lazily. Unrestricted adaptive star grammars are shown to be capable of generating every type-0 string language. However, we identify a reasonably large subclass for which the membership problem is decidable." ] }
0911.4108
1950741127
Randomized matrix sparsification has proven to be a fruitful technique for producing faster algorithms in applications ranging from graph partitioning to semidefinite programming. In the decade or so of research into this technique, the focus has been—with few exceptions—on ensuring the quality of approximation in the spectral and Frobenius norms. For certain graph algorithms, however, the ∞→1 norm may be a more natural measure of performance. This paper addresses the problem of approximating a real matrix A by a sparse random matrix X with respect to several norms. It provides the first results on approximation error in the ∞→1 and ∞→2 norms, and it uses a result of Latała to study approximation error in the spectral norm. These bounds hold for a reasonable family of random sparsification schemes, those which ensure that the entries of X are independent and average to the corresponding entries of A. Optimality of the ∞→1 and ∞→2 error estimates is established. Concentration results for the three norms hold when the entries of X are uniformly bounded. The spectral error bound is used to predict the performance of several sparsification and quantization schemes that have appeared in the literature; the results are competitive with the performance guarantees given by earlier scheme-specific analyses.
In @cite_20 , Arora, Hazan, and Kale describe a random sparsification algorithm which partially quantizes its inputs and requires only one pass through the matrix. They use an epsilon-net argument and Chernoff bounds to establish that with high probability the resulting approximant has small error and high sparsity.
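The flavour of such schemes can be illustrated with a minimal numpy sketch (our own illustration of unbiased entrywise sparsification, not the specific algorithm of @cite_20): every entry is kept independently with probability p and rescaled so that the sparse matrix is correct in expectation.

import numpy as np

rng = np.random.default_rng(0)

def sparsify(A, p):
    # Keep each entry independently with probability p and rescale by 1/p,
    # so that E[X] = A entrywise while only about a p fraction of entries survive.
    mask = rng.random(A.shape) < p
    return np.where(mask, A / p, 0.0)

A = rng.standard_normal((200, 200))
X = sparsify(A, p=0.1)
print(np.mean(X != 0))                                   # roughly 0.1
print(np.linalg.norm(A - X, 2) / np.linalg.norm(A, 2))   # relative spectral-norm error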
{ "cite_N": [ "@cite_20" ], "mid": [ "1581656968" ], "abstract": [ "We describe a simple random-sampling based procedure for producing sparse matrix approximations. Our procedure and analysis are extremely simple: the analysis uses nothing more than the Chernoff-Hoeffding bounds. Despite the simplicity, the approximation is comparable and sometimes better than previous work. Our algorithm computes the sparse matrix approximation in a single pass over the data. Further, most of the entries in the output matrix are quantized, and can be succinctly represented by a bit vector, thus leading to much savings in space." ] }
0911.4108
1950741127
Randomized matrix sparsification has proven to be a fruitful technique for producing faster algorithms in applications ranging from graph partitioning to semidefinite programming. In the decade or so of research into this technique, the focus has been—with few exceptions—on ensuring the quality of approximation in the spectral and Frobenius norms. For certain graph algorithms, however, the ∞→1 norm may be a more natural measure of performance. This paper addresses the problem of approximating a real matrix A by a sparse random matrix X with respect to several norms. It provides the first results on approximation error in the ∞→1 and ∞→2 norms, and it uses a result of Latała to study approximation error in the spectral norm. These bounds hold for a reasonable family of random sparsification schemes, those which ensure that the entries of X are independent and average to the corresponding entries of A. Optimality of the ∞→1 and ∞→2 error estimates is established. Concentration results for the three norms hold when the entries of X are uniformly bounded. The spectral error bound is used to predict the performance of several sparsification and quantization schemes that have appeared in the literature; the results are competitive with the performance guarantees given by earlier scheme-specific analyses.
In @cite_10 , Rudelson and Vershynin take a different approach to the Monte Carlo methodology for low-rank approximation. They consider @math as a linear operator between finite-dimensional Banach spaces and apply techniques of probability in Banach spaces: decoupling, symmetrization, Slepian's lemma for Rademacher random variables, and a law of large numbers for operator-valued random variables. They show that, if @math can be approximated by any rank- @math matrix, then it is possible to obtain an accurate rank- @math approximation to @math by sampling @math rows of @math . Additionally, they quantify the behavior of the @math and @math norms of random submatrices.
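A simplified numpy sketch in the spirit of this row-sampling approach (our own illustration, not the algorithm or analysis of @cite_10): rows are sampled with probability proportional to their squared norms and rescaled, so that the small sketch matches A^T A in expectation.

import numpy as np

rng = np.random.default_rng(1)

def sample_rows(A, r):
    # Sample r rows with probability proportional to their squared norms and
    # rescale each sampled row by 1/sqrt(r * p_i), so that E[S.T @ S] = A.T @ A.
    p = np.sum(A**2, axis=1)
    p = p / p.sum()
    idx = rng.choice(A.shape[0], size=r, replace=True, p=p)
    return A[idx] / np.sqrt(r * p[idx])[:, None]

A = rng.standard_normal((1000, 30)) @ rng.standard_normal((30, 50))   # low numerical rank
S = sample_rows(A, r=200)
print(np.linalg.norm(A.T @ A - S.T @ S, 2) / np.linalg.norm(A.T @ A, 2))   # small relative error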
{ "cite_N": [ "@cite_10" ], "mid": [ "1998058722" ], "abstract": [ "We study random submatrices of a large matrix A. We show how to approximately compute A from its random submatrix of the smallest possible size O(rlog r) with a small error in the spectral norm, where r e VAV2F VAV22 is the numerical rank of A. The numerical rank is always bounded by, and is a stable relaxation of, the rank of A. This yields an asymptotically optimal guarantee in an algorithm for computing low-rank approximations of A. We also prove asymptotically optimal estimates on the spectral norm and the cut-norm of random submatrices of A. The result for the cut-norm yields a slight improvement on the best-known sample complexity for an approximation algorithm for MAX-2CSP problems. We use methods of Probability in Banach spaces, in particular the law of large numbers for operator-valued random variables." ] }
0911.4108
1950741127
Randomized matrix sparsification has proven to be a fruitful technique for producing faster algorithms in applications ranging from graph partitioning to semidefinite programming. In the decade or so of research into this technique, the focus has been—with few exceptions—on ensuring the quality of approximation in the spectral and Frobenius norms. For certain graph algorithms, however, the ∞→1 norm may be a more natural measure of performance. This paper addresses the problem of approximating a real matrix A by a sparse random matrix X with respect to several norms. It provides the first results on approximation error in the ∞→1 and ∞→2 norms, and it uses a result of Latała to study approximation error in the spectral norm. These bounds hold for a reasonable family of random sparsification schemes, those which ensure that the entries of X are independent and average to the corresponding entries of A. Optimality of the ∞→1 and ∞→2 error estimates is established. Concentration results for the three norms hold when the entries of X are uniformly bounded. The spectral error bound is used to predict the performance of several sparsification and quantization schemes that have appeared in the literature; the results are competitive with the performance guarantees given by earlier scheme-specific analyses.
Our methods are similar to those of Rudelson and Vershynin in @cite_10 in that we consider @math as a linear operator between finite-dimensional Banach spaces and use some of the same tools of probability in Banach spaces. Whereas Rudelson and Vershynin consider the behavior of the norms of random submatrices of @math , we consider the behavior of the norms of matrices formed by randomly sparsifying (or quantizing) the entries of @math . This yields error bounds applicable to schemes that sparsify or quantize matrices entrywise. Since some graph algorithms depend more on the number of edges in the graph than the number of vertices, such schemes may be useful in developing algorithms for handling large graphs.
{ "cite_N": [ "@cite_10" ], "mid": [ "1998058722" ], "abstract": [ "We study random submatrices of a large matrix A. We show how to approximately compute A from its random submatrix of the smallest possible size O(rlog r) with a small error in the spectral norm, where r e VAV2F VAV22 is the numerical rank of A. The numerical rank is always bounded by, and is a stable relaxation of, the rank of A. This yields an asymptotically optimal guarantee in an algorithm for computing low-rank approximations of A. We also prove asymptotically optimal estimates on the spectral norm and the cut-norm of random submatrices of A. The result for the cut-norm yields a slight improvement on the best-known sample complexity for an approximation algorithm for MAX-2CSP problems. We use methods of Probability in Banach spaces, in particular the law of large numbers for operator-valued random variables." ] }
0911.4108
1950741127
Randomized matrix sparsification has proven to be a fruitful technique for producing faster algorithms in applications ranging from graph partitioning to semidefinite programming. In the decade or so of research into this technique, the focus has been—with few exceptions—on ensuring the quality of approximation in the spectral and Frobenius norms. For certain graph algorithms, however, the ∞→1 norm may be a more natural measure of performance. This paper addresses the problem of approximating a real matrix A by a sparse random matrix X with respect to several norms. It provides the first results on approximation error in the ∞→1 and ∞→2 norms, and it uses a result of Latała to study approximation error in the spectral norm. These bounds hold for a reasonable family of random sparsification schemes, those which ensure that the entries of X are independent and average to the corresponding entries of A. Optimality of the ∞→1 and ∞→2 error estimates is established. Concentration results for the three norms hold when the entries of X are uniformly bounded. The spectral error bound is used to predict the performance of several sparsification and quantization schemes that have appeared in the literature; the results are competitive with the performance guarantees given by earlier scheme-specific analyses.
Consider a weighted simple graph @math with adjacency matrix @math whose entry a_{jk} equals the weight of the edge (j,k) when this edge is present and zero otherwise. A cut is a partition of the vertices into two blocks: @math . The cost of a cut is the sum of the weights of all edges in @math which have one vertex in @math and one vertex in @math . Several problems relating to cuts are of considerable practical interest. In particular, the maximum-cut problem, to determine the cut of maximum cost in a graph, is common in computer science applications. The cuts of maximum cost are exactly those which realize the cut-norm of the adjacency matrix, which is defined as \[ \|A\|_C = \max_{S \subseteq V} \Big| \sum_{(j,k) \in E} a_{jk} \, (\mathbf{1}_S)_j \, (\mathbf{1}_{\bar S})_k \Big| , \] where @math is the indicator vector for @math . Finding the cut-norm of a general matrix is NP-hard, but in @cite_8 , the authors offer a randomized polynomial-time algorithm which finds a submatrix @math of @math such that @math . One crucial point in the derivation of the algorithm is the fact that the @math norm is strongly equivalent with the cut-norm: \[ \|A\|_C \le \|A\|_{\infty \to 1} \le 4\,\|A\|_C . \]
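For concreteness, the matrix form of the cut-norm recalled in the abstract of @cite_8 (the maximum absolute sum of entries over all row and column subsets) and the ∞→1 norm can be compared by brute force on a tiny matrix; the following numpy sketch (our own illustration, exponential-time and feasible only for very small matrices) checks the constant-factor equivalence numerically.

import numpy as np
from itertools import product

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 4))

def cut_norm(A):
    # max over 0/1 vectors x, y of |x^T A y| (row/column subset selections)
    m, n = A.shape
    return max(abs(np.array(x) @ A @ np.array(y))
               for x in product([0, 1], repeat=m) for y in product([0, 1], repeat=n))

def inf_to_one_norm(A):
    # max over +/-1 vectors x, y of x^T A y
    m, n = A.shape
    return max(np.array(x) @ A @ np.array(y)
               for x in product([-1, 1], repeat=m) for y in product([-1, 1], repeat=n))

c, q = cut_norm(A), inf_to_one_norm(A)
print(c <= q <= 4 * c)   # True: the two norms agree up to a factor of 4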
{ "cite_N": [ "@cite_8" ], "mid": [ "1990811538" ], "abstract": [ "The cut-norm ||A|| C of a real matrix A=(a ij ) i∈ R,j∈S is the maximum, over all I ⊂ R, J ⊂ S of the quantity | Σ i ∈ I, j ∈ J a ij |. This concept plays a major role in the design of efficient approximation algorithms for dense graph and matrix problems. Here we show that the problem of approximating the cut-norm of a given real matrix is MAX SNP hard, and provide an efficient approximation algorithm. This algorithm finds, for a given matrix A=(a ij ) i ∈ R, j ∈ S , two subsets I ⊂ R and J ⊂ S, such that | Σ i ∈ I, j ∈ J a ij | ≥ ρ ||A|| C , where ρ > 0 is an absolute constant satisfying $ρ > 0. 56. The algorithm combines semidefinite programming with a rounding technique based on Grothendieck's Inequality. We present three known proofs of Grothendieck's inequality, with the necessary modifications which emphasize their algorithmic aspects. These proofs contain rounding techniques which go beyond the random hyperplane rounding of Goemans and Williamson [12], allowing us to transfer various algorithms for dense graph and matrix problems to the sparse case." ] }
0911.4108
1950741127
Randomized matrix sparsification has proven to be a fruitful technique for producing faster algorithms in applications ranging from graph partitioning to semidefinite programming. In the decade or so of research into this technique, the focus has been—with few exceptions—on ensuring the quality of approximation in the spectral and Frobenius norms. For certain graph algorithms, however, the ∞→1 norm may be a more natural measure of performance. This paper addresses the problem of approximating a real matrix A by a sparse random matrix X with respect to several norms. It provides the first results on approximation error in the ∞→1 and ∞→2 norms, and it uses a result of Latała to study approximation error in the spectral norm. These bounds hold for a reasonable family of random sparsification schemes, those which ensure that the entries of X are independent and average to the corresponding entries of A. Optimality of the ∞→1 and ∞→2 error estimates is established. Concentration results for the three norms hold when the entries of X are uniformly bounded. The spectral error bound is used to predict the performance of several sparsification and quantization schemes that have appeared in the literature; the results are competitive with the performance guarantees given by earlier scheme-specific analyses.
In @cite_5 , Spielman and Srivastava improve upon this sampling scheme, instead keeping an edge with probability proportional to its effective resistance, a measure of how likely the edge is to appear in a random spanning tree of the graph. They provide an algorithm which produces a sparsifier with @math edges, where @math is the number of vertices in the graph. They obtain this result by reducing the problem to the behavior of projection matrices @math and @math associated with the original graph and the sparsifier, and appealing to a spectral norm concentration result.
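The quantity driving the sampling can be computed directly from the graph Laplacian; the following numpy sketch (our own illustration of effective resistances, not the nearly-linear-time algorithm of @cite_5) evaluates R_uv = (e_u - e_v)^T L^+ (e_u - e_v) for each edge, after which edges would be sampled with probability proportional to w_e R_e.

import numpy as np

def effective_resistances(n, edges, weights):
    # Build the weighted Laplacian and use its Moore-Penrose pseudoinverse:
    # R_uv = (e_u - e_v)^T L^+ (e_u - e_v) = L^+[u,u] + L^+[v,v] - 2 L^+[u,v].
    L = np.zeros((n, n))
    for (u, v), w in zip(edges, weights):
        L[u, u] += w; L[v, v] += w
        L[u, v] -= w; L[v, u] -= w
    Lp = np.linalg.pinv(L)
    return np.array([Lp[u, u] + Lp[v, v] - 2 * Lp[u, v] for (u, v) in edges])

edges = [(0, 1), (1, 2), (2, 0), (2, 3)]        # a triangle with one pendant edge
weights = np.ones(len(edges))
R = effective_resistances(4, edges, weights)
print(R)                                        # triangle edges: 2/3, bridge edge: 1
probs = weights * R / np.sum(weights * R)       # sampling probabilities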
{ "cite_N": [ "@cite_5" ], "mid": [ "2125664420" ], "abstract": [ "We present a nearly-linear time algorithm that produces high-quality sparsifiers of weighted graphs. Given as input a weighted graph G=(V,E,w) and a parameter e>0, we produce a weighted subgraph H=(V, E, w) of G such that | E|=O(n log n e2) and for all vectors x in RV. (1-e) ∑uv ∈ E (x(u)-x(v))2wuv≤ ∑uv in E(x(u)-x(v))2 wuv ≤ (1+e)∑uv ∈ E(x(u)-x(v))2wuv. This improves upon the sparsifiers constructed by Spielman and Teng, which had O(n logc n) edges for some large constant c, and upon those of Benczur and Karger, which only satisfied (1) for x in 0,1 V. We conjecture the existence of sparsifiers with O(n) edges, noting that these would generalize the notion of expander graphs, which are constant-degree sparsifiers for the complete graph. A key ingredient in our algorithm is a subroutine of independent interest: a nearly-linear time algorithm that builds a data structure from which we can query the approximate effective resistance between any two vertices in a graph in O(log n) time." ] }
0911.3473
1542217987
We study the computational power of polynomial threshold functions, that is, threshold functions of real polynomials over the boolean cube. We provide two new results bounding the computational power of this model. Our first result shows that low-degree polynomial threshold functions cannot approximate any function with many influential variables. We provide a couple of examples where this technique yields tight approximation bounds. Our second result relates to constructing pseudorandom generators fooling low-degree polynomial threshold functions. This problem has received attention recently, where it was proved that @math -wise independence suffices to fool linear threshold functions. We prove that any low-degree polynomial threshold function that can be represented as a function of a small number of linear threshold functions can also be fooled by @math -wise independence. We view this as an important step towards fooling general polynomial threshold functions, and we discuss a plausible approach achieving this goal based on our techniques. Our results combine tools from real approximation theory, hyper-contractive inequalities and probabilistic methods. In particular, we develop several new tools in approximation theory which may be of independent interest.
Bruck @cite_0 studied polynomial threshold functions and proved that such functions can be computed by depth- @math polynomial-size circuits with unbounded fan-in linear threshold gates. @cite_19 studied the approximation of boolean functions by polynomial threshold functions: namely, they studied the best possible approximation of the parity function and other symmetric functions by low-degree PTFs, and proved that for every degree- @math PTF @math , we have @math , and this bound is tight. However, their bounds for other functions are not fully explicit and are not tight.
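The parity lower bound quoted from @cite_19 can be appreciated already in two dimensions; the following small numpy experiment (our own illustration, not taken from the cited papers) never finds a linear threshold function agreeing with the two-bit parity on all four inputs, while the degree-2 PTF sign(x1 x2) computes it exactly.

import numpy as np
from itertools import product

rng = np.random.default_rng(3)
cube = np.array(list(product([-1, 1], repeat=2)))   # the four points of {-1,1}^2
parity = cube[:, 0] * cube[:, 1]                    # two-bit parity as a +/-1 valued function

# A degree-2 PTF computes parity exactly: sign(x1 * x2) = parity(x1, x2).
assert np.all(np.sign(cube[:, 0] * cube[:, 1]) == parity)

# But no degree-1 PTF (linear threshold function) does: random search over weights
# and thresholds never agrees on more than 3 of the 4 points (Minsky-Papert).
best = 0
for _ in range(100000):
    w = rng.standard_normal(3)
    ltf = np.sign(cube @ w[:2] + w[2])
    best = max(best, int(np.sum(ltf == parity)))
print(best)   # 3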
{ "cite_N": [ "@cite_0", "@cite_19" ], "mid": [ "2097575812", "1990538552" ], "abstract": [ "The analysis of linear threshold Boolean functions has recently attracted the attention of those interested in circuit complexity as well as of those interested in neural networks. Here a generalization of linear threshold functions is defined, namely, polynomial threshold functions, and its relation to the class of linear threshold functions is investigated.A Boolean function is polynomial threshold if it can be represented as a sign function of a polynomial that consists of a polynomial (in the number of variables) number of terms. The main result of this paper is showing that the class of polynomial threshold functions (which is called @math ) is strictly contained in the class of Boolean functions that can be computed by a depth 2, unbounded fan-in polynomial size circuit of linear threshold gates (which is called @math ).Harmonic analysis of Boolean functions is used to derive a necessary and sufficient condition for a function to be an S-threshold function for a given set S of monomials. This cond...", "We consider the problem of approximating a Boolean functionf∶ 0,1 n → 0,1 by the sign of an integer polynomialp of degreek. For us, a polynomialp(x) predicts the value off(x) if, wheneverp(x)≥0,f(x)=1, and wheneverp(x)<0,f(x)=0. A low-degree polynomialp is a good approximator forf if it predictsf at almost all points. Given a positive integerk, and a Boolean functionf, we ask, “how good is the best degreek approximation tof?” We introduce a new lower bound technique which applies to any Boolean function. We show that the lower bound technique yields tight bounds in the casef is parity. Minsky and Papert [10] proved that a perceptron cannot compute parity; our bounds indicate exactly how well a perceptron canapproximate it. As a consequence, we are able to give the first correct proof that, for a random oracleA, PP A is properly contained in PSPACE A . We are also able to prove the old AC0 exponential-size lower bounds in a new way. This allows us to prove the new result that an AC0 circuit with one majority gate cannot approximate parity. Our proof depends only on basic properties of integer polynomials." ] }
0911.3473
1542217987
We study the computational power of polynomial threshold functions, that is, threshold functions of real polynomials over the boolean cube. We provide two new results bounding the computational power of this model. Our first result shows that low-degree polynomial threshold functions cannot approximate any function with many influential variables. We provide a couple of examples where this technique yields tight approximation bounds. Our second result relates to constructing pseudorandom generators fooling low-degree polynomial threshold functions. This problem has received attention recently, where it was proved that @math -wise independence suffices to fool linear threshold functions. We prove that any low-degree polynomial threshold function that can be represented as a function of a small number of linear threshold functions can also be fooled by @math -wise independence. We view this as an important step towards fooling general polynomial threshold functions, and we discuss a plausible approach achieving this goal based on our techniques. Our results combine tools from real approximation theory, hyper-contractive inequalities and probabilistic methods. In particular, we develop several new tools in approximation theory which may be of independent interest.
A subsequent work of @cite_6 shows that @math -wise independence fools quadratic threshold functions, as well as intersections of such functions.
{ "cite_N": [ "@cite_6" ], "mid": [ "2952038943" ], "abstract": [ "Let x be a random vector coming from any k-wise independent distribution over -1,1 ^n. For an n-variate degree-2 polynomial p, we prove that E[sgn(p(x))] is determined up to an additive epsilon for k = poly(1 epsilon). This answers an open question of (FOCS 2009). Using standard constructions of k-wise independent distributions, we obtain a broad class of explicit generators that epsilon-fool the class of degree-2 threshold functions with seed length log(n)*poly(1 epsilon). Our approach is quite robust: it easily extends to yield that the intersection of any constant number of degree-2 threshold functions is epsilon-fooled by poly(1 epsilon)-wise independence. Our results also hold if the entries of x are k-wise independent standard normals, implying for example that bounded independence derandomizes the Goemans-Williamson hyperplane rounding scheme. To achieve our results, we introduce a technique we dub multivariate FT-mollification, a generalization of the univariate form introduced by (SODA 2010) in the context of streaming algorithms. Along the way we prove a generalized hypercontractive inequality for quadratic forms which takes the operator norm of the associated matrix into account. These techniques may be of independent interest." ] }
0911.2942
2950720334
We examine Euclidean distance-preserving data perturbation as a tool for privacy-preserving data mining. Such perturbations allow many important data mining algorithms (e.g. hierarchical and k-means clustering), with only minor modification, to be applied to the perturbed data and produce exactly the same results as if applied to the original data. However, the issue of how well the privacy of the original data is preserved needs careful study. We engage in this study by assuming the role of an attacker armed with a small set of known original data tuples (inputs). Little work has been done examining this kind of attack when the number of known original tuples is less than the number of data dimensions. We focus on this important case, and develop and rigorously analyze an attack that utilizes any number of known original tuples. The approach allows the attacker to estimate the original data tuple associated with each perturbed tuple and calculate the probability that the estimation results in a privacy breach. On a real 16-dimensional dataset, we show that the attacker, with 4 known original tuples, can estimate an original unknown tuple with less than 7% error with probability exceeding 0.8.
Adding i.i.d. white noise to protect data privacy is one common approach for statistical disclosure control @cite_5 . The perturbed data allows the retrieval of aggregate statistics of the original data (e.g. sample mean and variance) without disclosing values of individual records. Moreover, additive white noise perturbation has received attention in the data mining literature from the perspective described at the beginning of the section. Clearly, additive noise does not preserve Euclidean distance perfectly. However, it can be shown that additive noise preserves the squared Euclidean distance between data tuples in expectation, but the associated variance is large. To our knowledge, such observations have not been made before. We defer the details of this analysis to future work and do not consider additive noise further in this paper.
{ "cite_N": [ "@cite_5" ], "mid": [ "2113427031" ], "abstract": [ "This paper considers the problem of providing security to statistical databases against disclosure of confidential information. Security-control methods suggested in the literature are classified into four general approaches: conceptual, query restriction, data perturbation, and output perturbation. Criteria for evaluating the performance of the various security-control methods are identified. Security-control methods that are based on each of the four approaches are discussed, together with their performance with respect to the identified evaluation criteria. A detailed comparative analysis of the most promising methods for protecting dynamic-online statistical databases is also presented. To date no single security-control method prevents both exact and partial disclosures. There are, however, a few perturbation-based methods that prevent exact disclosure and enable the database administrator to exercise \"statistical disclosure control.\" Some of these methods, however introduce bias into query responses or suffer from the 0 1 query-set-size problem (i.e., partial disclosure is possible in case of null query set or a query set of size 1). We recommend directing future research efforts toward developing new methods that prevent exact disclosure and provide statistical-disclosure control, while at the same time do not suffer from the bias problem and the 0 1 query-set-size problem. Furthermore, efforts directed toward developing a bias-correction mechanism and solving the general problem of small query-set-size would help salvage a few of the current perturbation-based methods." ] }
0911.2942
2950720334
We examine Euclidean distance-preserving data perturbation as a tool for privacy-preserving data mining. Such perturbations allow many important data mining algorithms (e.g. hierarchical and k-means clustering), with only minor modification, to be applied to the perturbed data and produce exactly the same results as if applied to the original data. However, the issue of how well the privacy of the original data is preserved needs careful study. We engage in this study by assuming the role of an attacker armed with a small set of known original data tuples (inputs). Little work has been done examining this kind of attack when the number of known original tuples is less than the number of data dimensions. We focus on this important case, and develop and rigorously analyze an attack that utilizes any number of known original tuples. The approach allows the attacker to estimate the original data tuple associated with each perturbed tuple and calculate the probability that the estimation results in a privacy breach. On a real 16-dimensional dataset, we show that the attacker, with 4 known original tuples, can estimate an original unknown tuple with less than 7% error with probability exceeding 0.8.
To assess the security of traditional multiplicative perturbation together with additive perturbation, Trottini @cite_7 proposed a Bayesian intruder model that considers both prior and posterior knowledge of the data. Their overall strategy of attacking the privacy of perturbed data using prior knowledge is the same as ours. However, they particularly focused on linkage privacy breaches, where an intruder tries to identify the identity (of a person) linked to a specific record; while we are primarily interested in data record recovery. Moreover, they did not consider Euclidean distance preserving perturbation as we do.
{ "cite_N": [ "@cite_7" ], "mid": [ "1925467533" ], "abstract": [ "This paper focuses on a combination of two disclosure limitation techniques, additive noise and multiplicative bias, and studies their efficacy in protecting confidentiality of continuous microdata. A Bayesian intruder model is extensively simulated in order to assess the performance of these disclosure limitation techniques as a function of key parameters like the variability amongst profiles in the original data, the amount of users prior information, the amount of bias and noise introduced in the data. The results of the simulation offer insight into the degree of vulnerability of data on continuous random variables and suggests some guidelines for effective protection measures." ] }
0911.2942
2950720334
We examine Euclidean distance-preserving data perturbation as a tool for privacy-preserving data mining. Such perturbations allow many important data mining algorithms (e.g. hierarchical and k-means clustering), with only minor modification, to be applied to the perturbed data and produce exactly the same results as if applied to the original data. However, the issue of how well the privacy of the original data is preserved needs careful study. We engage in this study by assuming the role of an attacker armed with a small set of known original data tuples (inputs). Little work has been done examining this kind of attack when the number of known original tuples is less than the number of data dimensions. We focus on this important case, and develop and rigorously analyze an attack that utilizes any number of known original tuples. The approach allows the attacker to estimate the original data tuple associated with each perturbed tuple and calculate the probability that the estimation results in a privacy breach. On a real 16-dimensional dataset, we show that the attacker, with 4 known original tuples, can estimate an original unknown tuple with less than 7% error with probability exceeding 0.8.
Samarati and Sweeney @cite_18 @cite_34 developed the k-anonymity framework wherein the original data is perturbed so that the information for any individual cannot be distinguished from at least k-1 others. Values from the original data are generalized (replaced by a less specific value) to produce the anonymized data. This framework has drawn lots of attention because of its simple privacy definition. A variety of refinements have been proposed, see discussions on k-anonymity in various chapters in @cite_24 . None of these approaches consider Euclidean distance preserving perturbation as we do.
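As a concrete reading of the definition, the following small Python sketch (our own illustration; the attribute names and generalized values are made up) checks whether a table is k-anonymous with respect to a set of quasi-identifier attributes by verifying that every combination of their values occurs at least k times.

from collections import Counter

def is_k_anonymous(rows, quasi_identifiers, k):
    # rows: list of dicts. The table is k-anonymous w.r.t. the quasi-identifiers
    # if every combination of their (generalized) values is shared by >= k records.
    groups = Counter(tuple(row[a] for a in quasi_identifiers) for row in rows)
    return min(groups.values()) >= k

table = [
    {"zip": "537**", "age": "[20-30)", "diagnosis": "flu"},
    {"zip": "537**", "age": "[20-30)", "diagnosis": "cold"},
    {"zip": "537**", "age": "[30-40)", "diagnosis": "flu"},
    {"zip": "537**", "age": "[30-40)", "diagnosis": "ulcer"},
]
print(is_k_anonymous(table, ["zip", "age"], k=2))   # True: generalization yields groups of size 2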
{ "cite_N": [ "@cite_24", "@cite_18", "@cite_34" ], "mid": [ "50268958", "2119067110", "" ], "abstract": [ "We focus primarily on the use of additive and matrix multiplicative data perturbation techniques in privacy preserving data mining (PPDM). We survey a recent body of research aimed at better understanding the vulnerabilities of these techniques. These researchers assumed the role of an attacker and developed methods for estimating the original data from the perturbed data and any available prior knowledge. Finally, we briefly discuss research aimed at attacking k-anonymization, another data perturbation technique in PPDM.", "Today's globally networked society places great demands on the dissemination and sharing of information. While in the past released information was mostly in tabular and statistical form, many situations call for the release of specific data (microdata). In order to protect the anonymity of the entities (called respondents) to which information refers, data holders often remove or encrypt explicit identifiers such as names, addresses, and phone numbers. Deidentifying data, however, provides no guarantee of anonymity. Released information often contains other data, such as race, birth date, sex, and ZIP code, that can be linked to publicly available information to reidentify respondents and inferring information that was not intended for disclosure. In this paper we address the problem of releasing microdata while safeguarding the anonymity of respondents to which the data refer. The approach is based on the definition of k-anonymity. A table provides k-anonymity if attempts to link explicitly identifying information to its content map the information to at least k entities. We illustrate how k-anonymity can be provided without compromising the integrity (or truthfulness) of the information released by using generalization and suppression techniques. We introduce the concept of minimal generalization that captures the property of the release process not distorting the data more than needed to achieve k-anonymity, and present an algorithm for the computation of such a generalization. We also discuss possible preference policies to choose among different minimal generalizations.", "" ] }
0911.2942
2950720334
We examine Euclidean distance-preserving data perturbation as a tool for privacy-preserving data mining. Such perturbations allow many important data mining algorithms (e.g. hierarchical and k-means clustering), with only minor modification, to be applied to the perturbed data and produce exactly the same results as if applied to the original data. However, the issue of how well the privacy of the original data is preserved needs careful study. We engage in this study by assuming the role of an attacker armed with a small set of known original data tuples (inputs). Little work has been done examining this kind of attack when the number of known original tuples is less than the number of data dimensions. We focus on this important case, and develop and rigorously analyze an attack that utilizes any number of known original tuples. The approach allows the attacker to estimate the original data tuple associated with each perturbed tuple and calculate the probability that the estimation results in a privacy breach. On a real 16-dimensional dataset, we show that the attacker, with 4 known original tuples, can estimate an original unknown tuple with less than 7% error with probability exceeding 0.8.
Evfimievski @cite_11 and Rizvi and Haritsa @cite_8 considered the use of categorical data perturbation in the context of association rule mining. Their algorithms delete real items and add bogus items to the original records. Association rules present in the original data can be estimated from the perturbed data. Along a related line, Verykios @cite_28 considered perturbation techniques which allow the discovery of some association rules while hiding others considered to be sensitive.
{ "cite_N": [ "@cite_28", "@cite_8", "@cite_11" ], "mid": [ "2095576022", "2161067131", "2130099852" ], "abstract": [ "Large repositories of data contain sensitive information that must be protected against unauthorized access. The protection of the confidentiality of this information has been a long-term goal for the database security research community and for the government statistical agencies. Recent advances in data mining and machine learning algorithms have increased the disclosure risks that one may encounter when releasing data to outside parties. A key problem, and still not sufficiently investigated, is the need to balance the confidentiality of the disclosed data with the legitimate needs of the data users. Every disclosure limitation method affects, in some way, and modifies true data values and relationships. We investigate confidentiality issues of a broad category of rules, the association rules. In particular, we present three strategies and five algorithms for hiding a group of association rules, which is characterized as sensitive. One rule is characterized as sensitive if its disclosure risk is above a certain privacy threshold. Sometimes, sensitive rules should not be disclosed to the public since, among other things, they may be used for inferring sensitive data, or they may provide business competitors with an advantage. We also perform an evaluation study of the hiding algorithms in order to analyze their time complexity and the impact that they have in the original database.", "Data mining services require accurate input data for their results to be meaningful, but privacy concerns may influence users to provide spurious information. We investigate here, with respect to mining association rules, whether users can be encouraged to provide correct information by ensuring that the mining process cannot, with any reasonable degree of certainty, violate their privacy. We present a scheme, based on probabilistic distortion of user data, that can simultaneously provide a high degree of privacy to the user and retain a high level of accuracy in the mining results. The performance of the scheme is validated against representative real and synthetic datasets.", "There has been increasing interest in the problem of building accurate data mining models over aggregate data, while protecting privacy at the level of individual records. One approach for this problem is to randomize the values in individual records, and only disclose the randomized values. The model is then built over the randomized data, after first compensating for the randomization (at the aggregate level). This approach is potentially vulnerable to privacy breaches: based on the distribution of the data, one may be able to learn with high confidence that some of the randomized records satisfy a specified property, even though privacy is preserved on average.In this paper, we present a new formulation of privacy breaches, together with a methodology, \"amplification\", for limiting them. Unlike earlier approaches, amplification makes it is possible to guarantee limits on privacy breaches without any knowledge of the distribution of the original data. We instantiate this methodology for the problem of mining association rules, and modify the algorithm from [9] to limit privacy breaches without knowledge of the data distribution. Next, we address the problem that the amount of randomization required to avoid privacy breaches (when mining association rules) results in very long transactions. 
By using pseudorandom generators and carefully choosing seeds such that the desired items from the original transaction are present in the randomized transaction, we can send just the seed instead of the transaction, resulting in a dramatic drop in communication and storage cost. Finally, we define new information measures that take privacy breaches into account when quantifying the amount of privacy preserved by randomization." ] }
0911.2942
2950720334
We examine Euclidean distance-preserving data perturbation as a tool for privacy-preserving data mining. Such perturbations allow many important data mining algorithms (e.g. hierarchical and k-means clustering), with only minor modification, to be applied to the perturbed data and produce exactly the same results as if applied to the original data. However, the issue of how well the privacy of the original data is preserved needs careful study. We engage in this study by assuming the role of an attacker armed with a small set of known original data tuples (inputs). Little work has been done examining this kind of attack when the number of known original tuples is less than the number of data dimensions. We focus on this important case, and develop and rigorously analyze an attack that utilizes any number of known original tuples. The approach allows the attacker to estimate the original data tuple associated with each perturbed tuple and calculate the probability that the estimation results in a privacy breach. On a real 16-dimensional dataset, we show that the attacker, with 4 known original tuples, can estimate an original unknown tuple with less than 7% error with probability exceeding 0.8.
Oliveira and Zaiane @cite_30 @cite_19 and Chen and Liu @cite_35 discussed the use of geometric rotation for clustering and classification. These authors observed that the distance-preserving nature of rotation makes it useful in PPDM, but did not analyze its privacy limitations, nor did they consider prior knowledge.
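The property these works rely on is easy to verify numerically; the following numpy sketch (our own illustration, not the transformation of any particular cited paper) draws a random orthogonal matrix via a QR factorization and checks that all pairwise Euclidean distances are preserved when the data tuples (stored as columns, as in this paper) are rotated.

import numpy as np

rng = np.random.default_rng(4)

def random_orthogonal(d):
    # QR factorization of a Gaussian matrix yields a random orthogonal matrix.
    Q, _ = np.linalg.qr(rng.standard_normal((d, d)))
    return Q

def pairwise_dists(X):
    # X holds one data tuple per column.
    diff = X[:, :, None] - X[:, None, :]
    return np.sqrt(np.sum(diff**2, axis=0))

X = rng.standard_normal((5, 100))          # 100 five-dimensional tuples
Y = random_orthogonal(5) @ X               # perturbed (rotated) data
print(np.allclose(pairwise_dists(X), pairwise_dists(Y)))   # True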
{ "cite_N": [ "@cite_30", "@cite_19", "@cite_35" ], "mid": [ "2111272198", "1534285372", "" ], "abstract": [ "Despite its benefit in a wide range of applications, data mining techniques also have raised a number of ethical issues. Some such issues include those of privacy, data security, intellectual property rights, and many others. In this paper, we address the privacy problem against unauthorized secondary use of information. To do so, we introduce a family of geometric data transformation methods (GDTMs) which ensure that the mining process will not violate privacy up to a certain degree of security. We focus primarily on privacy preserving data clustering, notably on partition-based and hierarchical methods. Our proposed methods distort only confidential numerical attributes to meet privacy requirements, while preserving general features for clustering analysis. Our experiments demonstrate that our methods are effective and provide acceptable values in practice for balancing privacy and accuracy. We report the main results of our performance evaluation and discuss some open research issues.", "In this paper, we address the problem of protecting the underlying attribute values when sharing data for clustering. The challenge is how to meet privacy requirements and guarantee valid clustering results as well. To achieve this dual goal, we propose a novel spatial data transformation method called Rotation-Based Transformation (RBT). The major features of our data transformation are: a) it is independent of any clustering algorithm, b) it has a sound mathematical foundation; c) it is efficient and accurate; and d) it does not rely on intractability hypotheses from algebra and does not require CPU-intensive operations. We show analytically that although the data are transformed to achieve privacy, we can also get accurate clustering results by the safeguard of the global distances between data points.", "" ] }
0911.2942
2950720334
We examine Euclidean distance-preserving data perturbation as a tool for privacy-preserving data mining. Such perturbations allow many important data mining algorithms (e.g. hierarchical and k-means clustering), with only minor modification, to be applied to the perturbed data and produce exactly the same results as if applied to the original data. However, the issue of how well the privacy of the original data is preserved needs careful study. We engage in this study by assuming the role of an attacker armed with a small set of known original data tuples (inputs). Little work has been done examining this kind of attack when the number of known original tuples is less than the number of data dimensions. We focus on this important case, and develop and rigorously analyze an attack that utilizes any number of known original tuples. The approach allows the attacker to estimate the original data tuple associated with each perturbed tuple and calculate the probability that the estimation results in a privacy breach. On a real 16-dimensional dataset, we show that the attacker, with 4 known original tuples, can estimate an original unknown tuple with less than 7% error with probability exceeding 0.8.
Chen @cite_38 also discussed a known input attack technique. Unlike ours, they considered a combination of distance-preserving data perturbation followed by additive noise. Moreover, they assumed a stronger form of known input prior knowledge: the attacker knows a subset of private data records and knows to which perturbed tuples they correspond. Finally, they assumed that the number of linearly independent known input data records is no smaller than @math (the dimensionality of the records). They pointed out that linear regression can be used to re-estimate private data tuples.
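The regression idea mentioned here can be sketched in a few lines of numpy (our own simplified illustration, with no additive noise and with at least d linearly independent known input/output pairs, not the algorithm of @cite_38): the attacker estimates the perturbation matrix by least squares from the known pairs and then inverts it on the remaining perturbed tuples.

import numpy as np

rng = np.random.default_rng(5)
d, n = 4, 50
M = np.linalg.qr(rng.standard_normal((d, d)))[0]   # secret distance-preserving perturbation
X = rng.standard_normal((d, n))                    # private tuples as columns
Y = M @ X                                          # released, perturbed data

known = list(range(6))                             # >= d known originals with known correspondence
M_hat = Y[:, known] @ np.linalg.pinv(X[:, known])  # least-squares estimate of M
X_hat = np.linalg.pinv(M_hat) @ Y                  # re-estimate of all private tuples
print(np.max(np.abs(X_hat - X)))                   # essentially zero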
{ "cite_N": [ "@cite_38" ], "mid": [ "1569223999" ], "abstract": [ "Data perturbation is a popular technique for privacypreserving data mining. The major challenge of data perturbation is balancing privacy protection and data quality, which are normally considered as a pair of contradictive factors. We propose that selectively preserving only the task model specific information in perturbation would improve the balance. Geometric data perturbation, consisting of random rotation perturbation, random translation perturbation, and noise addition, aims at preserving the important geometric properties of a multidimensional dataset, while providing better privacy guarantee for data classification modeling. The preliminary study has shown that random geometric perturbation can well preserve model accuracy for several popular classification models, including kernel methods, linear classifiers, and SVM classifiers, while it also revealed some security concerns to random geometric perturbation. In this paper, we address some potential attacks to random geometric perturbation and design several methods to reduce the threat of these attacks. Experimental study shows that the enhanced geometric perturbation can provide satisfactory privacy guarantee while still well preserving model accuracy for the discussed data classification models." ] }
0911.2942
2950720334
We examine Euclidean distance-preserving data perturbation as a tool for privacy-preserving data mining. Such perturbations allow many important data mining algorithms (e.g. hierarchical and k-means clustering), with only minor modification, to be applied to the perturbed data and produce exactly the same results as if applied to the original data. However, the issue of how well the privacy of the original data is preserved needs careful study. We engage in this study by assuming the role of an attacker armed with a small set of known original data tuples (inputs). Little work has been done examining this kind of attack when the number of known original tuples is less than the number of data dimensions. We focus on this important case, and develop and rigorously analyze an attack that utilizes any number of known original tuples. The approach allows the attacker to estimate the original data tuple associated with each perturbed tuple and calculate the probability that the estimation results in a privacy breach. On a real 16-dimensional dataset, we show that the attacker, with 4 known original tuples, can estimate an original unknown tuple with less than 7% error with probability exceeding 0.8.
Mukherjee @cite_15 considered the use of the discrete Fourier transform (DFT) and the discrete cosine transform (DCT) to perturb the data. Only the high-energy DFT/DCT coefficients are used, and the transformed data in the new domain approximately preserves Euclidean distance. The DFT/DCT coefficients were further permuted to enhance the privacy protection level. Note that the DFT and DCT are (complex) orthogonal transforms. Hence their perturbation technique can be expressed as a left multiplication by a (complex) orthogonal matrix (corresponding to the DFT/DCT followed by a permutation of the resulting coefficients), then a left multiplication by an identity matrix with some zeros on the diagonal (corresponding to dropping all but the high-energy coefficients). They did not consider attacks based on prior knowledge. As future work, it would be interesting to do so.
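A minimal numpy sketch of this style of transform-domain perturbation (our own illustration with an orthonormal DCT-II built by hand; the coefficient permutation step is omitted): each tuple is transformed, only a fixed set of high-energy coefficients is released, and pairwise Euclidean distances are approximately preserved.

import numpy as np

rng = np.random.default_rng(6)

def dct_matrix(d):
    # Orthonormal DCT-II matrix (C @ C.T = I), built explicitly.
    k, m = np.meshgrid(np.arange(d), np.arange(d), indexing="ij")
    C = np.sqrt(2.0 / d) * np.cos(np.pi * (m + 0.5) * k / d)
    C[0, :] = np.sqrt(1.0 / d)
    return C

def pairwise_dists(X):
    diff = X[:, :, None] - X[:, None, :]
    return np.sqrt(np.sum(diff**2, axis=0))

d, n = 16, 200
X = np.cumsum(rng.standard_normal((d, n)), axis=0)   # smooth-ish tuples: energy compacts
Z = dct_matrix(d) @ X                                # exact isometry, distances preserved
keep = np.argsort(np.sum(Z**2, axis=1))[-6:]         # indices of the 6 highest-energy coefficients
Zkept = Z[keep, :]                                   # released data
err = np.abs(pairwise_dists(Zkept) - pairwise_dists(X))
print(np.median(err) / np.median(pairwise_dists(X))) # small relative distortion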
{ "cite_N": [ "@cite_15" ], "mid": [ "2161470918" ], "abstract": [ "Privacy preserving data mining has become increasingly popular because it allows sharing of privacy-sensitive data for analysis purposes. However, existing techniques such as random perturbation do not fare well for simple yet widely used and efficient Euclidean distance-based mining algorithms. Although original data distributions can be pretty accurately reconstructed from the perturbed data, distances between individual data points are not preserved, leading to poor accuracy for the distance-based mining methods. Besides, they do not generally focus on data reduction. Other studies on secure multi-party computation often concentrate on techniques useful to very specific mining algorithms and scenarios such that they require modification of the mining algorithms and are often difficult to generalize to other mining algorithms or scenarios. This paper proposes a novel generalized approach using the well-known energy compaction power of Fourier-related transforms to hide sensitive data values and to approximately preserve Euclidean distances in centralized and distributed scenarios to a great degree of accuracy. Three algorithms to select the most important transform coefficients are presented, one for a centralized database case, the second one for a horizontally partitioned, and the third one for a vertically partitioned database case. Experimental results demonstrate the effectiveness of the proposed approach." ] }
0911.2942
2950720334
We examine Euclidean distance-preserving data perturbation as a tool for privacy-preserving data mining. Such perturbations allow many important data mining algorithms (e.g. hierarchical and k-means clustering), with only minor modification, to be applied to the perturbed data and produce exactly the same results as if applied to the original data. However, the issue of how well the privacy of the original data is preserved needs careful study. We engage in this study by assuming the role of an attacker armed with a small set of known original data tuples (inputs). Little work has been done examining this kind of attack when the number of known original tuples is less than the number of data dimensions. We focus on this important case, develop and rigorously analyze an attack that utilizes any number of known original tuples. The approach allows the attacker to estimate the original data tuple associated with each perturbed tuple and calculate the probability that the estimation results in a privacy breach. On a real 16-dimensional dataset, we show that the attacker, with 4 known original tuples, can estimate an original unknown tuple with less than 7% error with probability exceeding 0.8.
Ting @cite_10 considered left-multiplication by a randomly generated orthogonal matrix. However, they assume the original data tuples are the rows of the data matrix rather than the columns, as in our setting. As a result, Euclidean distance between original data tuples is not preserved, but the sample mean and covariance are. If the original data arose as independent samples from a multivariate Gaussian distribution, then the perturbed data allows inferences to be drawn about this underlying distribution just as well as the original data. For all but small or very high-dimensional datasets, their approach is more resistant to prior-knowledge attacks than Euclidean distance-preserving perturbations. Their perturbation matrix is @math ( @math being the number of original data tuples), much bigger than Euclidean distance-preserving perturbation matrices, which are @math ( @math being the number of entries in each original data tuple).
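To illustrate why the sample mean and covariance survive this kind of perturbation, here is a small sketch (our own simplified construction, not necessarily the exact one of @cite_10 ): the random @math orthogonal matrix is constrained to fix the all-ones vector, so left-multiplying the data matrix (rows are tuples) leaves the column means and the sample covariance unchanged while scrambling the individual tuples and their pairwise distances.

```python
import numpy as np

rng = np.random.default_rng(2)
m, n = 50, 4                          # number of tuples, dimension
X = rng.normal(size=(m, n))           # rows are data tuples, as in their setting

# Orthonormal basis whose first column is parallel to the all-ones vector.
H, _ = np.linalg.qr(np.column_stack([np.ones(m), rng.normal(size=(m, m - 1))]))

# Random orthogonal block acting on the orthogonal complement of the ones vector.
Q, _ = np.linalg.qr(rng.normal(size=(m - 1, m - 1)))
D = np.block([[np.ones((1, 1)), np.zeros((1, m - 1))],
              [np.zeros((m - 1, 1)), Q]])
P = H @ D @ H.T                       # m x m orthogonal matrix with P @ ones = ones

Y = P @ X                             # perturbed data; individual tuples and their distances change

print(np.allclose(X.mean(axis=0), Y.mean(axis=0)))                    # sample mean preserved
print(np.allclose(np.cov(X, rowvar=False), np.cov(Y, rowvar=False)))  # sample covariance preserved
```

Because distances between rows of Y and X differ, the distance-based known-input attacks studied in this paper do not carry over directly to this perturbation.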
{ "cite_N": [ "@cite_10" ], "mid": [ "1975118429" ], "abstract": [ "Statistically defensible methods for disclosure limitation allow users to make inferences about parameters in a model similar to those that would be possible using the original unreleased data. We present a new perturbation method for protecting confidential continuous microdata Random Orthogonal Matrix Masking (ROMM) which preserves the sufficient statistics for multivariate normal distributions, and thus is statistically defensible. ROMM encompasses all methods that preserve these statistics and can be restricted to provide 'small' perturbations. We contrast ROMM with other microdata perturbation methods and we discuss methods for evaluating it from the perspective of the tradeoff between disclosure risk and data utility." ] }
0911.2942
2950720334
We examine Euclidean distance-preserving data perturbation as a tool for privacy-preserving data mining. Such perturbations allow many important data mining algorithms (e.g. hierarchical and k-means clustering), with only minor modification, to be applied to the perturbed data and produce exactly the same results as if applied to the original data. However, the issue of how well the privacy of the original data is preserved needs careful study. We engage in this study by assuming the role of an attacker armed with a small set of known original data tuples (inputs). Little work has been done examining this kind of attack when the number of known original tuples is less than the number of data dimensions. We focus on this important case, develop and rigorously analyze an attack that utilizes any number of known original tuples. The approach allows the attacker to estimate the original data tuple associated with each perturbed tuple and calculate the probability that the estimation results in a privacy breach. On a real 16-dimensional dataset, we show that the attacker, with 4 known original tuples, can estimate an original unknown tuple with less than 7% error with probability exceeding 0.8.
Before describing two other attacks based on independent component analysis (ICA) @cite_20 , we first give a brief overview of ICA.
{ "cite_N": [ "@cite_20" ], "mid": [ "2123649031" ], "abstract": [ "A fundamental problem in neural network research, as well as in many other disciplines, is finding a suitable representation of multivariate data, i.e. random vectors. For reasons of computational and conceptual simplicity, the representation is often sought as a linear transformation of the original data. In other words, each component of the representation is a linear combination of the original variables. Well-known linear transformation methods include principal component analysis, factor analysis, and projection pursuit. Independent component analysis (ICA) is a recently developed method in which the goal is to find a linear representation of non-Gaussian data so that the components are statistically independent, or as independent as possible. Such a representation seems to capture the essential structure of the data in many applications, including feature extraction and signal separation. In this paper, we present the basic theory and applications of ICA, and our recent work on the subject." ] }
0911.3347
2950192596
We address the problem of finding optimal strategies for computing Boolean symmetric functions. We consider a collocated network, where each node's transmissions can be heard by every other node. Each node has a Boolean measurement and we wish to compute a given Boolean function of these measurements with zero error. We allow for block computation to enhance data fusion efficiency, and determine the minimum worst-case total bits to be communicated to perform the desired computation. We restrict attention to the class of symmetric Boolean functions, which only depend on the number of 1s among the n measurements. We define three classes of functions, namely threshold functions, delta functions and interval functions. We provide exactly optimal strategies for the first two classes, and an order-optimal strategy with optimal preconstant for interval functions. Using these results, we can characterize the complexity of computing percentile type functions, which is of great interest. In our analysis, we use lower bounds from communication complexity theory, and provide an achievable scheme using information theoretic tools.
The problem of worst-case block function computation was formulated in @cite_9 . The authors identify two classes of symmetric functions, namely type-sensitive functions, exemplified by Mean and Median, and type-threshold functions, exemplified by Maximum and Minimum. The maximum rates for computation of type-sensitive and type-threshold functions in random planar networks are shown to be @math and @math respectively, for a network of @math nodes. A communication complexity approach was used to establish upper bounds on the rate of computation in collocated networks.
{ "cite_N": [ "@cite_9" ], "mid": [ "2123976939" ], "abstract": [ "In wireless sensor networks, one is not interested in downloading all the data from all the sensors. Rather, one is only interested in collecting from a sink node a relevant function of the sensor measurements. This paper studies the maximum rate at which functions of sensor measurements can be computed and communicated to the sink node. It focuses on symmetric functions, where only the data from a sensor is important, not its identity. The results include the following. The maximum rate of downloading the frequency histogram in a random planar multihop network with n nodes is O(1 logn) A subclass of functions, called type-sensitive functions, is maximally difficult to compute. In a collocated network, they can be computed at rate O(1 n), and in a random planar multihop network at rate O(1 logn). This class includes the mean, mode, median, etc. Another subclass of functions, called type-threshold functions, is exponentially easier to compute. In a collocated network they can be computed at rate O(1 logn), and in a random planar multihop network at rate O(1 loglogn). This class includes the max, min, range, etc. The results also show the architecture for processing information across sensor networks." ] }
0911.3347
2950192596
We address the problem of finding optimal strategies for computing Boolean symmetric functions. We consider a collocated network, where each node's transmissions can be heard by every other node. Each node has a Boolean measurement and we wish to compute a given Boolean function of these measurements with zero error. We allow for block computation to enhance data fusion efficiency, and determine the minimum worst-case total bits to be communicated to perform the desired computation. We restrict attention to the class of symmetric Boolean functions, which only depend on the number of 1s among the n measurements. We define three classes of functions, namely threshold functions, delta functions and interval functions. We provide exactly optimal strategies for the first two classes, and an order-optimal strategy with optimal preconstant for interval functions. Using these results, we can characterize the complexity of computing percentile type functions, which is of great interest. In our analysis, we use lower bounds from communication complexity theory, and provide an achievable scheme using information theoretic tools.
While we have considered worst case computation in this paper, one could also impose a probability distribution on the measurements. In @cite_6 , the average complexity of computing a type-threshold function was shown to be @math , in contrast with the worst case complexity of @math . Thus, we can obtain constant rate computation on the average.
{ "cite_N": [ "@cite_6" ], "mid": [ "2152546104" ], "abstract": [ "We consider the problem of data harvesting in wireless sensor networks. A designated collector node seeks to compute a function of the sensor measurements. For a directed graph G = (V ,ℰ) on the sensor nodes, we wish to determine the optimal encoders on each edge which achieve zero-error block computation of the function at the collector node. Our goal is to characterize the rate region in R∣ℰ∣." ] }
0911.3347
2950192596
We address the problem of finding optimal strategies for computing Boolean symmetric functions. We consider a collocated network, where each node's transmissions can be heard by every other node. Each node has a Boolean measurement and we wish to compute a given Boolean function of these measurements with zero error. We allow for block computation to enhance data fusion efficiency, and determine the minimum worst-case total bits to be communicated to perform the desired computation. We restrict attention to the class of symmetric Boolean functions, which only depend on the number of 1s among the n measurements. We define three classes of functions, namely threshold functions, delta functions and interval functions. We provide exactly optimal strategies for the first two classes, and an order-optimal strategy with optimal preconstant for interval functions. Using these results, we can characterize the complexity of computing percentile type functions, which is of great interest. In our analysis, we use lower bounds from communication complexity theory, and provide an achievable scheme using information theoretic tools.
As argued in @cite_9 , an information-theoretic formulation of this problem combines the complexity of source coding with rate distortion, the manifold collaborative possibilities in wireless networks, and the complications introduced by the function structure. There is little or no work that addresses this most general framework. One special case, a source coding problem for function computation with side information, has been studied in @cite_3 . Recently, the rate region for multi-round interactive function computation has been characterized for two nodes @cite_4 , and for collocated networks @cite_7 .
{ "cite_N": [ "@cite_9", "@cite_4", "@cite_7", "@cite_3" ], "mid": [ "2123976939", "2950300807", "2158565632", "2164651000" ], "abstract": [ "In wireless sensor networks, one is not interested in downloading all the data from all the sensors. Rather, one is only interested in collecting from a sink node a relevant function of the sensor measurements. This paper studies the maximum rate at which functions of sensor measurements can be computed and communicated to the sink node. It focuses on symmetric functions, where only the data from a sensor is important, not its identity. The results include the following. The maximum rate of downloading the frequency histogram in a random planar multihop network with n nodes is O(1 logn) A subclass of functions, called type-sensitive functions, is maximally difficult to compute. In a collocated network, they can be computed at rate O(1 n), and in a random planar multihop network at rate O(1 logn). This class includes the mean, mode, median, etc. Another subclass of functions, called type-threshold functions, is exponentially easier to compute. In a collocated network they can be computed at rate O(1 logn), and in a random planar multihop network at rate O(1 loglogn). This class includes the max, min, range, etc. The results also show the architecture for processing information across sensor networks.", "", "We study the limits of communication efficiency for function computation in collocated networks within the framework of multi-terminal block source coding theory. With the goal of computing a desired function of sources at a sink, nodes interact with each other through a sequence of error-free, network-wide broadcasts of finite-rate messages. For any function of independent sources, we derive a computable characterization of the set of all feasible message coding rates - the rate region -in terms of single-letter information measures. We show that when computing symmetric functions of binary sources, the sink will inevitably learn certain additional information which is not demanded in computing the function. This conceptual understanding leads to new improved bounds for the minimum sum-rate. The new bounds are shown to be orderwise better than those based on cut-sets as the network scales. The scaling law of the minimum sum-rate is explored for different classes of symmetric functions and source parameters.", "A sender communicates with a receiver who wishes to reliably evaluate a function of their combined data. We show that if only the sender can transmit, the number of bits required is a conditional entropy of a naturally defined graph. We also determine the number of bits needed when the communicators exchange two messages. Reference is made to the results of rate distortion in evaluating the function of two random variables." ] }
0911.3708
207670768
For many voting rules, it is NP-hard to compute a successful manipulation. However, NP-hardness only bounds the worst-case complexity. Recent theoretical results suggest that manipulation may often be easy in practice. We study empirically the cost of manipulating the single transferable vote (STV) rule. This was one of the first rules shown to be NP-hard to manipulate. It also appears to be one of the harder rules to manipulate since it involves multiple rounds and since, unlike many other rules, it is NP-hard for a single agent to manipulate without weights on the votes or uncertainty about how the other agents have voted. In almost every election in our experiments, it was easy to compute how a single agent could manipulate the election or to prove that manipulation by a single agent was impossible. It remains an interesting open question if manipulation by a coalition of agents is hard to compute in practice.
Coleman and Teague proposed algorithms to compute a manipulation for the STV rule @cite_7 . They also conducted an empirical study which demonstrates that only relatively small coalitions are needed to change the elimination order of the STV rule. They observed that most uniform and random elections are not trivially manipulable using a simple greedy heuristic. On the other hand, our results suggest that, for manipulation by a single agent, a limited amount of backtracking is needed to find a manipulation or prove that none exists.
{ "cite_N": [ "@cite_7" ], "mid": [ "1618180659" ], "abstract": [ "We study the manipulation of voting schemes, where a voter lies about their preferences in the hope of improving the election's outcome. All voting schemes are potentially manipulable. However, some, such as the Single Transferable Vote (STV) scheme used in Australian elections, are resistant to manipulation because it is NP-hard to compute the manipulating vote(s). We concentrate on STV and some natural generalisations of it called Scoring Elimination Protocols. We show that the hardness result for STV is true only if both the number of voters and the number of candidates are unbounded---we provide algorithms for a manipulation if either of these is fixed. This means that manipulation would not be hard in practice when either number is small. Next we show that the weighted version of the manipulation problem is NP-hard for all Scoring Elimination Protocols except one, which we provide an algorithm for manipulating. Finally we experimentally test a heuristic for solving the manipulation problem and conclude that it would not usually be effective." ] }
0911.3708
207670768
For many voting rules, it is NP-hard to compute a successful manipulation. However, NP-hardness only bounds the worst-case complexity. Recent theoretical results suggest that manipulation may often be easy in practice. We study empirically the cost of manipulating the single transferable vote (STV) rule. This was one of the first rules shown to be NP-hard to manipulate. It also appears to be one of the harder rules to manipulate since it involves multiple rounds and since, unlike many other rules, it is NP-hard for a single agent to manipulate without weights on the votes or uncertainty about how the other agents have voted. In almost every election in our experiments, it was easy to compute how a single agent could manipulate the election or to prove that manipulation by a single agent was impossible. It remains an interesting open question if manipulation by a coalition of agents is hard to compute in practice.
Walsh empirically studied the cost of manipulating the veto rule by a coalition of agents whose votes were weighted @cite_18 . He showed that there was a smooth transition in the probability that a coalition can elect a desired candidate as the size of the manipulating coalition increases. He also showed that it was easy to find manipulations of the veto rule or prove that none exist for many independent and identically distributed votes even when the coalition of manipulators was critical in size. He was able to identify a situation in which manipulation was computationally hard. This is when votes are highly correlated and the election is "hung".
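To make the setting concrete, here is a toy greedy heuristic of our own (not Walsh's actual experimental procedure) for weighted coalitional manipulation of the veto rule: each manipulator vetoes exactly one rival, heaviest manipulators first, always targeting the rival that currently has the fewest vetoes and is therefore the biggest threat to the desired candidate. Being greedy and incomplete, it can miss manipulations that a complete search would find; all names and numbers below are illustrative.

```python
def greedy_veto_manipulation(base_vetoes, desired, weights):
    """Try to make `desired` win the weighted veto election with a greedy coalition strategy.

    base_vetoes[c] is the veto weight candidate c already receives from the non-manipulators.
    Each manipulator vetoes exactly one rival; ties are assumed to be broken in favour of
    the desired candidate. Returns (success_flag, final veto totals).
    """
    vetoes = dict(base_vetoes)
    target = vetoes[desired]                 # the manipulators never veto the desired candidate
    rivals = [c for c in vetoes if c != desired]
    for w in sorted(weights, reverse=True):  # heaviest manipulators first
        threat = min(rivals, key=lambda c: vetoes[c])   # rival with the fewest vetoes so far
        vetoes[threat] += w
    return all(vetoes[c] >= target for c in rivals), vetoes

# Toy instance: three rivals and a coalition with weights 3, 2 and 1.
base = {"p": 7, "a": 4, "b": 5, "c": 6}
ok, totals = greedy_veto_manipulation(base, "p", weights=[3, 2, 1])
print(ok, totals)   # True: every rival reaches at least 7 vetoes, so "p" wins
```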
{ "cite_N": [ "@cite_18" ], "mid": [ "2952439423" ], "abstract": [ "Voting is a simple mechanism to aggregate the preferences of agents. Many voting rules have been shown to be NP-hard to manipulate. However, a number of recent theoretical results suggest that this complexity may only be in the worst-case since manipulation is often easy in practice. In this paper, we show that empirical studies are useful in improving our understanding of this issue. We demonstrate that there is a smooth transition in the probability that a coalition can elect a desired candidate using the veto rule as the size of the manipulating coalition increases. We show that a rescaled probability curve displays a simple and universal form independent of the size of the problem. We argue that manipulation of the veto rule is asymptotically easy for many independent and identically distributed votes even when the coalition of manipulators is critical in size. Based on this argument, we identify a situation in which manipulation is computationally hard. This is when votes are highly correlated and the election is \"hung\". We show, however, that even a single uncorrelated voter is enough to make manipulation easy again." ] }
0911.3708
207670768
For many voting rules, it is NP-hard to compute a successful manipulation. However, NP-hardness only bounds the worst-case complexity. Recent theoretical results suggest that manipulation may often be easy in practice. We study empirically the cost of manipulating the single transferable vote (STV) rule. This was one of the first rules shown to be NP-hard to manipulate. It also appears to be one of the harder rules to manipulate since it involves multiple rounds and since, unlike many other rules, it is NP-hard for a single agent to manipulate without weights on the votes or uncertainty about how the other agents have voted. In almost every election in our experiments, it was easy to compute how a single agent could manipulate the election or to prove that manipulation by a single agent was impossible. It remains an interesting open question if manipulation by a coalition of agents is hard to compute in practice.
As indicated, there have been several theoretical results recently that suggest elections are easy to manipulate in practice despite NP-hardness results. For instance, Xia and Conitzer have shown that for a large class of voting rules including STV, as the number of agents grows, either the probability that a coalition can manipulate the result is very small (as the coalition is too small), or the probability that they can easily manipulate the result to make any alternative win is very large @cite_4 . They left open only a small interval of coalition sizes for which the coalition is large enough to manipulate the result but not obviously large enough to do so easily.
{ "cite_N": [ "@cite_4" ], "mid": [ "2100161145" ], "abstract": [ "We introduce a class of voting rules called generalized scoring rules. Under such a rule, each vote generates a vector of k scores, and the outcome of the voting rule is based only on the sum of these vectors---more specifically, only on the order (in terms of score) of the sum's components. This class is extremely general: we do not know of any commonly studied rule that is not a generalized scoring rule. We then study the coalitional manipulation problem for generalized scoring rules. We prove that under certain natural assumptions, if the number of manipulators is O(np) (for any p 1 2) and o(n), then the probability that a random profile is manipulable (to any possible winner under the voting rule) is 1--O(e--Ω(n2p--1)). We also show that common voting rules satisfy these conditions (for the uniform distribution). These results generalize earlier results by Procaccia and Rosenschein as well as even earlier results on the probability of an election being tied." ] }
0911.3708
207670768
For many voting rules, it is NP-hard to compute a successful manipulation. However, NP-hardness only bounds the worst-case complexity. Recent theoretical results suggest that manipulation may often be easy in practice. We study empirically the cost of manipulating the single transferable vote (STV) rule. This was one of the first rules shown to be NP-hard to manipulate. It also appears to be one of the harder rules to manipulate since it involves multiple rounds and since, unlike many other rules, it is NP-hard for a single agent to manipulate without weights on the votes or uncertainty about how the other agents have voted. In almost every election in our experiments, it was easy to compute how a single agent could manipulate the election or to prove that manipulation by a single agent was impossible. It remains an interesting open question if manipulation by a coalition of agents is hard to compute in practice.
As a second example, Procaccia and Rosenschein proved that for most scoring rules and a wide variety of distributions over votes, when the size of the coalition is @math , the probability that they can change the result tends to 0, and when it is @math , the probability that they can manipulate the result tends to 1 @cite_16 . They also gave a simple greedy procedure that will find a manipulation of a scoring rule in polynomial time with a probability of failure that is an inverse polynomial in @math @cite_6 . Friedgut, Kalai and Nisan proved that if the voting rule is neutral and far from dictatorial and there are 3 candidates then there exists an agent for whom a random manipulation succeeds with probability @math @cite_3 . Starting from different assumptions, Xia and Conitzer showed that a random manipulation would succeed with probability @math for 3 or more candidates for STV, for 4 or more candidates for any scoring rule and for 5 or more candidates for Copeland @cite_28 .
{ "cite_N": [ "@cite_28", "@cite_16", "@cite_3", "@cite_6" ], "mid": [ "", "2133067733", "2070986136", "1493942848" ], "abstract": [ "", "Recent results have established that a variety of voting rules are computationally hard to manipulate in the worst-case; this arguably provides some guarantee of resistance to manipulation when the voters have bounded computational power. Nevertheless, it has become apparent that a truly dependable obstacle to manipulation can only be provided by voting rules that are average-case hard to manipulate. In this paper, we analytically demonstrate that, with respect to a wide range of distributions over votes, the coalitional manipulation problem can be decided with overwhelming probability of success by simply considering the ratio between the number of truthful and untruthful voters. Our results can be employed to significantly focus the search for that elusive average-case-hard-to-manipulate voting rule, but at the same time these results also strengthen the case against the existence of such a rule.", "The Gibbard-Satterthwaite theorem states that every non-trivial voting method among at least 3 alternatives can be strategically manipulated. We prove a quantitative version of the Gibbard-Satterthwaite theorem: a random manipulation by a single random voter will succeed with non-negligible probability for every neutral voting method among 3 alternatives that is far from being a dictatorship.", "Encouraging voters to truthfully reveal their preferences in an election has long been an important issue. Recently, computational complexity has been suggested as a means of precluding strategic behavior. Previous studies have shown that some voting protocols are hard to manipulate, but used NP-hardness as the complexity measure. Such a worst-case analysis may be an insufficient guarantee of resistance to manipulation. Indeed, we demonstrate that NP-hard manipulations may be tractable in the average-case. For this purpose, we augment the existing theory of average-case complexity with some new concepts. In particular, we consider elections distributed with respect to junta distributions, which concentrate on hard instances. We use our techniques to prove that scoring protocols are susceptible to manipulation by coalitions, when the number of candidates is constant." ] }
0911.1619
2097319163
If you recommend a product to me and I buy it, how much should you be paid by the seller? And if your sole interest is to maximize the amount paid to you by the seller for a sequence of recommendations, how should you recommend optimally if I become more inclined to ignore you with each irrelevant recommendation you make? Finding an answer to these questions is a key challenge in all forms of marketing that rely on and explore social ties; ranging from personal recommendations to viral marketing. In the first part of this paper, we show that there can be no pricing mechanism that is “truthful” with respect to the seller, and we use solution concepts from coalitional game theory, namely the Core, the Shapley Value, and the Nash Bargaining Solution, to derive provably “fair” prices for settings with one or multiple recommenders. We then investigate pricing mechanisms for the setting where recommenders have different “purchase arguments”. Here we show that it might be beneficial for the recommenders to withhold some of their arguments, unless anonymity-proof solution concepts, such as the anonymity-proof Shapley value, are used. In the second part of this paper, we analyze the setting where the recommendee loses trust in the recommender for each irrelevant recommendation. Here we prove that even if the recommendee regains her initial trust on each successful recommendation, the expected total profit the recommender can make over an infinite period is bounded. This can only be overcome when the recommendee also incrementally regains trust during periods without any recommendation. Here, we see an interesting connection to “banner blindness”, suggesting that showing fewer ads can lead to a higher long-term profit.
The work that is most closely related to our paper is @cite_24 . There the authors study the sales price of an object as part of a viral marketing campaign. They assume that all "converted" nodes will try to convert all of their neighbors and that the conversion probability depends both on the number of neighbors converted and on the sales price. They do not consider the problem of how the recommendation itself should be rewarded. In fact, they mention the problem of finding optimal "cashbacks" in settings where the nodes behave strategically as an open problem.
{ "cite_N": [ "@cite_24" ], "mid": [ "2115890216" ], "abstract": [ "We study the use of viral marketing strategies on social networks that seek to maximize revenue from the sale of a single product. We propose a model in which the decision of a buyer to buy the product is influenced by friends that own the product and the price at which the product is offered. The influence model we analyze is quite general, naturally extending both the Linear Threshold model and the Independent Cascade model, while also incorporating price information. We consider sales proceeding in a cascading manner through the network, i.e. a buyer is offered the product via recommendations from its neighbors who own the product. In this setting, the seller influences events by offering a cashback to recommenders and by setting prices (via coupons or discounts) for each buyer in the social network. This choice of prices for the buyers is termed as the seller's strategy. Finding a seller strategy which maximizes the expected revenue in this setting turns out to be NP-hard. However, we propose a seller strategy that generates revenue guaranteed to be within a constant factor of the optimal strategy in a wide variety of models. The strategy is based on an influence-and-exploit idea, and it consists of finding the right trade-off at each time step between: generating revenue from the current user versus offering the product for free and using the influence generated from this sale later in the process." ] }
0911.1619
2097319163
If you recommend a product to me and I buy it, how much should you be paid by the seller? And if your sole interest is to maximize the amount paid to you by the seller for a sequence of recommendations, how should you recommend optimally if I become more inclined to ignore you with each irrelevant recommendation you make? Finding an answer to these questions is a key challenge in all forms of marketing that rely on and explore social ties; ranging from personal recommendations to viral marketing. In the first part of this paper, we show that there can be no pricing mechanism that is “truthful” with respect to the seller, and we use solution concepts from coalitional game theory, namely the Core, the Shapley Value, and the Nash Bargaining Solution, to derive provably “fair” prices for settings with one or multiple recommenders. We then investigate pricing mechanisms for the setting where recommenders have different “purchase arguments”. Here we show that it might be beneficial for the recommenders to withhold some of their arguments, unless anonymity-proof solution concepts, such as the anonymity-proof Shapley value, are used. In the second part of this paper, we analyze the setting where the recommendee loses trust in the recommender for each irrelevant recommendation. Here we prove that even if the recommendee regains her initial trust on each successful recommendation, the expected total profit the recommender can make over an infinite period is bounded. This can only be overcome when the recommendee also incrementally regains trust during periods without any recommendation. Here, we see an interesting connection to “banner blindness”, suggesting that showing fewer ads can lead to a higher long-term profit.
The problem of optimal pricing with non-social recommender systems, where the recommendations come directly from the potential seller, was studied in @cite_31 . Here by "non-social" we mean "computer-generated", and a typical example would be Amazon's "Customers who bought X also bought Y" (http://www.amazon.com). The somewhat surprising argument is that customers are willing to pay for relevant recommendations as they "create value by reducing product uncertainty for the customers". In this paper, we consider the case where the recommendations are social and do not come from the seller directly. Though it is imaginable that the recommendee pays the recommender for a good recommendation, we do not investigate the pricing of this possible payment.
{ "cite_N": [ "@cite_31" ], "mid": [ "2120057884" ], "abstract": [ "We study optimal pricing in the presence of recommender systems. A recommender system affects the market in two ways: (i) it creates value by reducing product uncertainty for the customers and hence (ii) its recommendations can be offered as add-ons which generate informational externalities. The quality of the recommendation add-on is endogenously determined by sales. We investigate the impact of these factors on the optimal pricing by a seller with a recommender system against a competitive fringe without such a system. If the recommender system is sufficiently effective in reducing uncertainty, then the seller prices otherwise symmetric products differently to have some products experienced more aggressively. Moreover, the seller segments the market so that customers with more in.exible tastes pay higher prices to get better recommendation." ] }
0911.1619
2097319163
If you recommend a product to me and I buy it, how much should you be paid by the seller? And if your sole interest is to maximize the amount paid to you by the seller for a sequence of recommendations, how should you recommend optimally if I become more inclined to ignore you with each irrelevant recommendation you make? Finding an answer to these questions is a key challenge in all forms of marketing that rely on and explore social ties; ranging from personal recommendations to viral marketing. In the first part of this paper, we show that there can be no pricing mechanism that is “truthful” with respect to the seller, and we use solution concepts from coalitional game theory, namely the Core, the Shapley Value, and the Nash Bargaining Solution, to derive provably “fair” prices for settings with one or multiple recommenders. We then investigate pricing mechanisms for the setting where recommenders have different “purchase arguments”. Here we show that it might be beneficial for the recommenders to withhold some of their arguments, unless anonymity-proof solution concepts, such as the anonymity-proof Shapley value, are used. In the second part of this paper, we analyze the setting where the recommendee loses trust in the recommender for each irrelevant recommendation. Here we prove that even if the recommendee regains her initial trust on each successful recommendation, the expected total profit the recommender can make over an infinite period is bounded. This can only be overcome when the recommendee also incrementally regains trust during periods without any recommendation. Here, we see an interesting connection to “banner blindness”, suggesting that showing fewer ads can lead to a higher long-term profit.
It should be clear that we are not addressing the problem of what to recommend, a problem typically encountered by stores such as Amazon and usually solved using "collaborative filtering" techniques @cite_19 @cite_21 . In the first part of this paper (), we assume that the recommender recommends an item because she believes this item to be of interest to the recommendee, and the algorithm used by her to determine potential interest is irrelevant. In the second part (), the recommender is profit maximizing and now only cares about the reward offered to her by the seller and the probability @math that the recommendee will buy the item. In this model the "what" is absorbed into @math and the recommender simply decides on when to recommend.
{ "cite_N": [ "@cite_19", "@cite_21" ], "mid": [ "2124591829", "1971040550" ], "abstract": [ "This paper describes a technique for making personalized recommendations from any type of database to a user based on similarities between the interest profile of that user and those of other users. In particular, we discuss the implementation of a networked system called Ringo, which makes personalized recommendations for music albums and artists. Ringo's database of users and artists grows dynamically as more people use the system and enter more information. Four different algorithms for making recommendations by using social information filtering were tested and compared. We present quantitative and qualitative results obtained from the use of Ringo by more than 2000 people.", "Recommender systems have been evaluated in many, often incomparable, ways. In this article, we review the key decisions in evaluating collaborative filtering recommender systems: the user tasks being evaluated, the types of analysis and datasets being used, the ways in which prediction quality is measured, the evaluation of prediction attributes other than quality, and the user-based evaluation of the system as a whole. In addition to reviewing the evaluation strategies used by prior researchers, we present empirical results from the analysis of various accuracy metrics on one content domain where all the tested metrics collapsed roughly into three equivalence classes. Metrics within each equivalency class were strongly correlated, while metrics from different equivalency classes were uncorrelated." ] }